In machine learning and natural language processing, the pachinko allocation model (PAM) is a topic model. Topic models are a suite of algorithms to uncover the hidden thematic structure of a collection of documents.[1] The algorithm improves upon earlier topic models such as latent Dirichlet allocation (LDA) by modeling correlations between topics in addition to the word correlations which constitute topics. PAM provides more flexibility and greater expressive power than latent Dirichlet allocation.[2] While first described and implemented in the context of natural language processing, the algorithm may have applications in other fields such as bioinformatics. The model is named for pachinko machines—a game popular in Japan, in which metal balls bounce down around a complex collection of pins until they land in various bins at the bottom.[3]
Pachinko allocation was first described by Wei Li and Andrew McCallum in 2006.[3] The idea was extended with hierarchical Pachinko allocation by Li, McCallum, and David Mimno in 2007.[4] In 2007, McCallum and his colleagues proposed a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP).[2] The algorithm has been implemented in the MALLET software package published by McCallum's group at the University of Massachusetts Amherst.
PAM connects the words in a vocabulary V and the topics in a set T with an arbitrary directed acyclic graph (DAG), where topic nodes occupy the interior levels and the leaves are words.
The probability of generating a whole corpus is the product of the probabilities for every document:[3]
{\displaystyle P(\mathbf {D} |\alpha )=\prod _{d}P(d|\alpha )}
|
https://en.wikipedia.org/wiki/Pachinko_allocation
|
In information retrieval, tf–idf (also TF*IDF, TFIDF, TF–IDF, or Tf–idf), short for term frequency–inverse document frequency, is a measure of importance of a word to a document in a collection or corpus, adjusted for the fact that some words appear more frequently in general.[1] Like the bag-of-words model, it models a document as a multiset of words, without word order. It is a refinement over the simple bag-of-words model, by allowing the weight of words to depend on the rest of the corpus.
It was often used as a weighting factor in searches of information retrieval, text mining, and user modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.[2] Variations of the tf–idf weighting scheme were often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.
One of the simplest ranking functions is computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model.
Karen Spärck Jones (1972) conceived a statistical interpretation of term specificity called inverse document frequency (idf), which became a cornerstone of term weighting:[3]
The specificity of a term can be quantified as an inverse function of the number of documents in which it occurs.
For example, the df (document frequency) and idf for some words in Shakespeare's 37 plays are as follows:[4]
We see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing these words, one could get a good idea as to which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.
Term frequency, tf(t,d), is the relative frequency of term t within document d,
{\displaystyle \mathrm {tf} (t,d)={\frac {f_{t,d}}{\sum _{t'\in d}{f_{t',d}}}},}
where ft,d is the raw count of a term in a document, i.e., the number of times that term t occurs in document d. Note the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other ways to define term frequency:[5]: 128
The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient):
{\displaystyle \mathrm {idf} (t,D)=\log {\frac {N}{|\{d\in D:t\in d\}|}}}
with N the total number of documents in the corpus and |{d ∈ D : t ∈ d}| the number of documents in which the term t appears.
Then tf–idf is calculated as
{\displaystyle \mathrm {tfidf} (t,d,D)=\mathrm {tf} (t,d)\cdot \mathrm {idf} (t,D)}
A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0.
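To make these definitions concrete, the following is a minimal sketch in Python; the toy corpus, whitespace tokenization, and the choice of the natural logarithm are assumptions made only for this illustration, not part of any particular tf–idf implementation.

```python
import math

def tf(term, doc_tokens):
    # relative frequency of the term within the document
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term, corpus):
    # log of (total number of documents / number of documents containing the term)
    n_containing = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / n_containing) if n_containing else 0.0

def tfidf(term, doc_tokens, corpus):
    return tf(term, doc_tokens) * idf(term, corpus)

# hypothetical two-document corpus
corpus = [["the", "cat", "sat"], ["the", "dog", "barked", "loudly"]]
print(tfidf("cat", corpus[0], corpus))  # positive: "cat" is rare in the corpus
print(tfidf("the", corpus[0], corpus))  # zero: "the" occurs in every document
```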
Idf was introduced as "term specificity" by Karen Spärck Jones in a 1972 paper. Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information-theoretic justifications for it.[7]
Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law.[7] Attempts have been made to put idf on a probabilistic footing,[8] by estimating the probability that a given document d contains a term t as the relative document frequency,
{\displaystyle P(t|D)={\frac {|\{d\in D:t\in d\}|}{N}},}
so that we can define idf as
{\displaystyle {\begin{aligned}\mathrm {idf} &=-\log P(t|D)\\&=\log {\frac {1}{P(t|D)}}\\&=\log {\frac {N}{|\{d\in D:t\in d\}|}}\end{aligned}}}
Namely, the inverse document frequency is the logarithm of the "inverse" relative document frequency.
This probabilistic interpretation in turn takes the same form as that of self-information. However, applying such information-theoretic notions to problems in information retrieval leads to problems when trying to define the appropriate event spaces for the required probability distributions: not only documents need to be taken into account, but also queries and terms.[7]
Both term frequency and inverse document frequency can be formulated in terms of information theory; it helps to understand why their product has a meaning in terms of joint informational content of a document. A characteristic assumption about the distribution p(d,t) is that the conditional probability of a document given a term is uniform over the documents containing that term:
{\displaystyle p(d|t)={\frac {1}{|\{d'\in D:t\in d'\}|}}\quad {\text{if }}t\in d{\text{, and }}0{\text{ otherwise}}}
This assumption and its implications, according to Aizawa: "represent the heuristic that tf–idf employs."[9]
The conditional entropy of a "randomly chosen" document in the corpus D, conditional on the fact that it contains a specific term t (and assuming that all documents have equal probability to be chosen), is:
In terms of notation, {\displaystyle {\cal {D}}} and {\displaystyle {\cal {T}}} are "random variables" corresponding respectively to drawing a document or a term. The mutual information can be expressed as
The last step is to expand p_t, the unconditional probability to draw a term, with respect to the (random) choice of a document, to obtain:
This expression shows that summing the tf–idf of all possible terms and documents recovers the mutual information between documents and terms, taking into account all the specificities of their joint distribution.[9] Each tf–idf hence carries the "bit of information" attached to a term–document pair.
Suppose that we have term count tables of a corpus consisting of only two documents, as listed on the right.
The calculation of tf–idf for the term "this" is performed as follows:
In its raw frequency form, tf is just the count of "this" in each document. The word "this" appears once in each document; but as document 2 has more words, its relative frequency there is smaller.
An idf is constant per corpus, and accounts for the ratio of documents that include the word "this". In this case, we have a corpus of two documents and all of them include the word "this".
So tf–idf is zero for the word "this", which implies that the word is not very informative as it appears in all documents.
The word "example" is more interesting - it occurs three times, but only in the second document:
Finally,
(using the base 10 logarithm).
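A short sketch of the same calculation, assuming hypothetical documents consistent with the description above ("this" appearing once in each document, "example" appearing three times and only in the second) and the base-10 logarithm:

```python
import math

# hypothetical documents; only the counts relevant to the example matter
doc1 = ["this", "is", "a", "sample", "text"]
doc2 = ["this", "is", "another", "longer", "example", "example", "example"]
corpus = [doc1, doc2]

def tfidf(term, doc, corpus):
    tf = doc.count(term) / len(doc)                 # relative term frequency
    n_containing = sum(1 for d in corpus if term in d)
    idf = math.log10(len(corpus) / n_containing)    # base-10 logarithm
    return tf * idf

print(tfidf("this", doc1, corpus))     # 0.0, since idf("this") = log10(2/2) = 0
print(tfidf("example", doc2, corpus))  # about 0.13, since "example" is specific to doc 2
```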
The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.[10] The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents". In addition, tf–idf was applied to "visual words" with the purpose of conducting object matching in videos,[11] and to entire sentences.[12] However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.[13]
A number of term-weighting schemes have been derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).[14] TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference of how often a term occurs in different domains. Another derivative is TF–IDuF.[15] In TF–IDuF, idf is not calculated based on the document corpus that is to be searched or recommended. Instead, idf is calculated on users' personal document collections. The authors report that TF–IDuF was equally effective as tf–idf but could also be applied in situations when, e.g., a user modeling system has no access to a global document corpus.
|
https://en.wikipedia.org/wiki/Tf-idf
|
Infer.NET is a free and open source .NET software library for machine learning.[2] It supports running Bayesian inference in graphical models and can also be used for probabilistic programming.[3]
Infer.NET follows a model-based approach and is used to solve different kinds of machine learning problems, including standard problems like classification, recommendation or clustering, customized solutions, and domain-specific problems. The framework is used in various domains such as bioinformatics, epidemiology, computer vision, and information retrieval.[4][5]
Development of the framework was started by a team at Microsoft's research centre in Cambridge, UK in 2004. It was first released for academic use in 2008 and later open sourced in 2018.[5] In 2013, Microsoft was awarded the USPTO's Patents for Humanity Award in the Information Technology category for Infer.NET and the work in advanced machine learning techniques.[6][7]
Infer.NET is used internally at Microsoft as the machine learning engine in some of their products such as Office, Azure, and Xbox.[8]
The source code is licensed under the MIT License and available on GitHub.[9] It is also available as a NuGet package.[10]
|
https://en.wikipedia.org/wiki/Infer.NET
|
Coh-Metrix is a computational tool that produces indices of the linguistic and discourse representations of a text. Developed by Arthur C. Graesser and Danielle S. McNamara, Coh-Metrix analyzes texts on many different features.
Coh-Metrix can be used in many different ways to investigate the cohesion of the explicit text and the coherence of the mental representation of the text. "Our definition of cohesion consists of characteristics of the explicit text that play some role in helping the reader mentally connect ideas in the text" (Graesser, McNamara, & Louwerse, 2003). The definition of coherence is the subject of much debate. Theoretically, the coherence of a text is defined by the interaction between linguistic representations and knowledge representations. While coherence can be defined as characteristics of the text (i.e., aspects of cohesion) that are likely to contribute to the coherence of the mental representation, Coh-Metrix measurements provide indices of these cohesion characteristics.[1]
According to an empirical study, the Coh-Metrix L2 Reading Index performs significantly better than traditional readability formulas.[2]
|
https://en.wikipedia.org/wiki/Coh-Metrix
|
In natural language processing and information retrieval, explicit semantic analysis (ESA) is a vectoral representation of text (individual words or entire documents) that uses a document corpus as a knowledge base. Specifically, in ESA, a word is represented as a column vector in the tf–idf matrix of the text corpus and a document (string of words) is represented as the centroid of the vectors representing its words. Typically, the text corpus is English Wikipedia, though other corpora including the Open Directory Project have been used.[1]
ESA was designed by Evgeniy Gabrilovich and Shaul Markovitch as a means of improving text categorization[2] and has been used by this pair of researchers to compute what they refer to as "semantic relatedness" by means of cosine similarity between the aforementioned vectors, collectively interpreted as a space of "concepts explicitly defined and described by humans", where Wikipedia articles (or ODP entries, or otherwise titles of documents in the knowledge base corpus) are equated with concepts. The name "explicit semantic analysis" contrasts with latent semantic analysis (LSA), because the use of a knowledge base makes it possible to assign human-readable labels to the concepts that make up the vector space.[1][3]
To perform the basic variant of ESA, one starts with a collection of texts, say, all Wikipedia articles; let the number of documents in the collection be N. These are all turned into "bags of words", i.e., term frequency histograms, stored in an inverted index. Using this inverted index, one can find for any word the set of Wikipedia articles containing this word; in the vocabulary of Egozi, Markovitch and Gabrilovitch, "each word appearing in the Wikipedia corpus can be seen as triggering each of the concepts it points to in the inverted index."[1]
The output of the inverted index for a single-word query is a list of indexed documents (Wikipedia articles), each given a score depending on how often the word in question occurred in them (weighted by the total number of words in the document). Mathematically, this list is an N-dimensional vector of word–document scores, where a document not containing the query word has score zero. To compute the relatedness of two words, one compares the vectors (say u and v) by computing the cosine similarity,
{\displaystyle \mathrm {sim} (\mathbf {u} ,\mathbf {v} )={\frac {\mathbf {u} \cdot \mathbf {v} }{\|\mathbf {u} \|\,\|\mathbf {v} \|}},}
and this gives a numeric estimate of the semantic relatedness of the words. The scheme is extended from single words to multi-word texts by simply summing the vectors of all words in the text.[3]
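A minimal sketch of this scheme, assuming a tiny hypothetical set of "concept" documents in place of Wikipedia and plain length-normalized term counts as scores; the names and weighting are illustrative only.

```python
import numpy as np

# hypothetical "concept" documents standing in for Wikipedia articles
concepts = {
    "Feline":   "cat cat kitten pet",
    "Canine":   "dog puppy pet",
    "Computer": "computer keyboard mouse",
}
docs = [text.split() for text in concepts.values()]

def concept_vector(word):
    # score of the word against each concept: count weighted by document length
    return np.array([d.count(word) / len(d) for d in docs])

def text_vector(text):
    # a multi-word text is represented by summing its words' concept vectors
    return sum(concept_vector(w) for w in text.split())

def relatedness(a, b):
    u, v = text_vector(a), text_vector(b)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(relatedness("kitten pet", "puppy pet"))  # high: both trigger pet-related concepts
print(relatedness("kitten", "keyboard"))       # zero: no concepts in common
```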
ESA, as originally posited by Gabrilovich and Markovitch, operates under the assumption that the knowledge base contains topically orthogonal concepts. However, it was later shown by Anderka and Stein that ESA also improves the performance of information retrieval systems when it is based not on Wikipedia, but on the Reuters corpus of newswire articles, which does not satisfy the orthogonality property; in their experiments, Anderka and Stein used newswire stories as "concepts".[4] To explain this observation, links have been shown between ESA and the generalized vector space model.[5] Gabrilovich and Markovitch replied to Anderka and Stein by pointing out that their experimental result was achieved using "a single application of ESA (text similarity)" and "just a single, extremely small and homogenous test collection of 50 news documents".[1]
ESA is considered by its authors a measure of semantic relatedness (as opposed to semantic similarity). On datasets used to benchmark relatedness of words, ESA outperforms other algorithms, including WordNet semantic similarity measures and the skip-gram neural network language model (Word2vec).[6]
ESA is used in commercial software packages for computing relatedness of documents.[7] Domain-specific restrictions on the ESA model are sometimes used to provide more robust document matching.[8]
Cross-language explicit semantic analysis (CL-ESA) is a multilingual generalization of ESA.[9]CL-ESA exploits a document-aligned multilingual reference collection (e.g., again, Wikipedia) to represent a document as a language-independent concept vector. The relatedness of two documents in different languages is assessed by the cosine similarity between the corresponding vector representations.
|
https://en.wikipedia.org/wiki/Explicit_semantic_analysis
|
Latent semantic mapping (LSM) is a data-driven framework to model globally meaningful relationships implicit in large volumes of (often textual) data. It is a generalization of latent semantic analysis. In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents.
LSM was derived from earlier work on latent semantic analysis, which has three main characteristics: discrete entities, usually words and documents, are mapped onto continuous vectors; the mapping involves a form of global correlation pattern; and dimensionality reduction is an important aspect of the analysis process. These are generic properties that have been identified as potentially useful in a variety of different contexts, and this usefulness has encouraged great interest in LSM. The intended product of latent semantic mapping is a data-driven framework for modeling relationships in large volumes of data.
Mac OS X v10.5 and later includes a framework implementing latent semantic mapping.[1]
|
https://en.wikipedia.org/wiki/Latent_semantic_mapping
|
Latent semantic structure indexing (LaSSI) is a technique for calculating chemical similarity derived from latent semantic analysis (LSA).
LaSSI was developed at Merck & Co. and patented in 2007[1] by Richard Hull, Eugene Fluder, Suresh Singh, Robert Sheridan, Robert Nachbar and Simon Kearsley.
LaSSI is similar to LSA in that it involves the construction of an occurrence matrix from a corpus of items and the application of singular value decomposition to that matrix to derive latent features. What differs is that the occurrence matrix represents the frequency of two- and three-dimensional chemical descriptors (rather than natural language terms) found within a chemical database of chemical structures. This process derives latent chemical structure concepts that can be used to calculate chemical similarities and structure–activity relationships for drug discovery.
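As an illustration of the general mechanism (an occurrence matrix factorized by singular value decomposition, with similarity computed between the resulting latent vectors), here is a rough sketch with made-up descriptor counts; it is not Merck's implementation and the numbers carry no chemical meaning.

```python
import numpy as np

# hypothetical occurrence matrix: rows = compounds, columns = chemical descriptors,
# entries = how often each descriptor occurs in each structure
occurrence = np.array([
    [3.0, 0.0, 1.0, 0.0],
    [2.0, 1.0, 1.0, 0.0],
    [0.0, 4.0, 0.0, 2.0],
])

# latent features via truncated SVD, as in latent semantic analysis
U, s, Vt = np.linalg.svd(occurrence, full_matrices=False)
k = 2
latent = U[:, :k] * s[:k]       # each compound as a k-dimensional latent vector

def similarity(i, j):
    u, v = latent[i], latent[j]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(similarity(0, 1))  # compounds with overlapping descriptor profiles score high
print(similarity(0, 2))  # dissimilar descriptor profiles score low
```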
|
https://en.wikipedia.org/wiki/Latent_semantic_structure_indexing
|
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.
The data is linearly transformed onto a new coordinate system such that the directions (principal components) capturing the largest variation in the data can be easily identified.
The principal components of a collection of points in a real coordinate space are a sequence of p unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i − 1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions (i.e., principal components) constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.[1]
Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science.[2]
When performing PCA, the first principal component of a set of p variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through p iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set.
The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i − 1 principal components that maximizes the variance of the projected data.
For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.[3][4][5][6] Robust and L1-norm-based variants of standard PCA have also been proposed.[7][8][9][6]
PCA was invented in 1901 by Karl Pearson,[10] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[11] Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (invented in the last quarter of the 19th century[12]), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's Principal Component Analysis),[13] the Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.
PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.
To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.
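A numerical sketch of this procedure in NumPy (centering, covariance matrix, eigendecomposition, and explained-variance proportions); the random data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.3, 0.0],
                                          [0.0, 1.0, 0.2],
                                          [0.0, 0.0, 0.5]])  # correlated toy data

X_centered = X - X.mean(axis=0)            # center each variable on 0
cov = np.cov(X_centered, rowvar=False)     # covariance matrix of the data
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenpairs (eigenvectors already unit length)
order = np.argsort(eigvals)[::-1]          # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()        # proportion of variance along each axis
scores = X_centered @ eigvecs              # data expressed in the principal-component basis
print(explained)
```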
Biplots and scree plots (degree of explained variance) are used to interpret findings of the PCA.
PCA is defined as an orthogonal linear transformation on a real inner product space that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[13]
Consider an n × p data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor).
Mathematically, the transformation is defined by a set of size l of p-dimensional vectors of weights or coefficients {\displaystyle \mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}} that map each row vector {\displaystyle \mathbf {x} _{(i)}=(x_{1},\dots ,x_{p})_{(i)}} of X to a new vector of principal component scores {\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}, given by
{\displaystyle {t_{k}}_{(i)}=\mathbf {x} _{(i)}\cdot \mathbf {w} _{(k)},\qquad {\text{for}}\quad i=1,\dots ,n,\quad k=1,\dots ,l,}
in such a way that the individual variables t1, …, tl of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where l is usually selected to be strictly less than p to reduce dimensionality).
The above may equivalently be written in matrix form as
{\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W} ,}
where {\displaystyle {\mathbf {T} }_{ik}={t_{k}}_{(i)}}, {\displaystyle {\mathbf {X} }_{ij}={x_{j}}_{(i)}}, and {\displaystyle {\mathbf {W} }_{jk}={w_{j}}_{(k)}}.
In order to maximize variance, the first weight vector w(1) thus has to satisfy
{\displaystyle \mathbf {w} _{(1)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\left\{\sum _{i}\left(t_{1}\right)_{(i)}^{2}\right\}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\left\{\sum _{i}\left(\mathbf {x} _{(i)}\cdot \mathbf {w} \right)^{2}\right\}.}
Equivalently, writing this in matrix form gives
{\displaystyle \mathbf {w} _{(1)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\left\{\Vert \mathbf {Xw} \Vert ^{2}\right\}=\arg \max _{\Vert \mathbf {w} \Vert =1}\,\left\{\mathbf {w} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {Xw} \right\}.}
Since w(1) has been defined to be a unit vector, it equivalently also satisfies
{\displaystyle \mathbf {w} _{(1)}=\arg \max \,\left\{{\frac {\mathbf {w} ^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {Xw} }{\mathbf {w} ^{\mathsf {T}}\mathbf {w} }}\right\}.}
The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector.
With w(1) found, the first principal component of a data vector x(i) can then be given as a score t1(i) = x(i) ⋅ w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i) ⋅ w(1)} w(1).
The k-th component can be found by subtracting the first k − 1 principal components from X:
{\displaystyle \mathbf {\hat {X}} _{k}=\mathbf {X} -\sum _{s=1}^{k-1}\mathbf {X} \mathbf {w} _{(s)}\mathbf {w} _{(s)}^{\mathsf {T}}}
and then finding the weight vector which extracts the maximum variance from this new data matrix
{\displaystyle \mathbf {w} _{(k)}=\arg \max _{\Vert \mathbf {w} \Vert =1}\left\{\Vert \mathbf {\hat {X}} _{k}\mathbf {w} \Vert ^{2}\right\}=\arg \max \left\{{\tfrac {\mathbf {w} ^{\mathsf {T}}\mathbf {\hat {X}} _{k}^{\mathsf {T}}\mathbf {\hat {X}} _{k}\mathbf {w} }{\mathbf {w} ^{\mathsf {T}}\mathbf {w} }}\right\}.}
It turns out that this gives the remaining eigenvectors of XTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XTX.
The k-th principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i) ⋅ w(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i) ⋅ w(k)} w(k), where w(k) is the k-th eigenvector of XTX.
The full principal components decomposition of X can therefore be given as
{\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W} ,}
where W is a p-by-p matrix of weights whose columns are the eigenvectors of XTX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in factor analysis.
XTX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset XT.[13]: 30–31
The sample covariance Q between two of the different principal components over the dataset is given by:
{\displaystyle {\begin{aligned}Q(\mathrm {PC} _{(j)},\mathrm {PC} _{(k)})&\propto (\mathbf {X} \mathbf {w} _{(j)})^{\mathsf {T}}(\mathbf {X} \mathbf {w} _{(k)})\\&=\mathbf {w} _{(j)}^{\mathsf {T}}\mathbf {X} ^{\mathsf {T}}\mathbf {X} \mathbf {w} _{(k)}\\&=\mathbf {w} _{(j)}^{\mathsf {T}}\lambda _{(k)}\mathbf {w} _{(k)}\\&=\lambda _{(k)}\mathbf {w} _{(j)}^{\mathsf {T}}\mathbf {w} _{(k)},\end{aligned}}}
where the eigenvalue property of w(k) has been used to move from line 2 to line 3. However, eigenvectors w(j) and w(k) corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.
In matrix form, the empirical covariance matrix for the original variables can be written
{\displaystyle \mathbf {Q} \propto \mathbf {X} ^{\mathsf {T}}\mathbf {X} =\mathbf {W} \mathbf {\Lambda } \mathbf {W} ^{\mathsf {T}}.}
The empirical covariance matrix between the principal components becomes
{\displaystyle \mathbf {W} ^{\mathsf {T}}\mathbf {Q} \mathbf {W} \propto \mathbf {W} ^{\mathsf {T}}\mathbf {W} \,\mathbf {\Lambda } \,\mathbf {W} ^{\mathsf {T}}\mathbf {W} =\mathbf {\Lambda } ,}
where Λ is the diagonal matrix of eigenvalues λ(k) of XTX. λ(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ(k) = Σi tk2(i) = Σi (x(i) ⋅ w(k))2.
The transformation T = XW maps a data vector x(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset.
To non-dimensionalize the centered data, let Xc represent the characteristic values of data vectors Xi, given by:
for a dataset of size n. These norms are used to transform the original space of variables x, y to a new space of uncorrelated variables p, q (given Yc with same meaning), such that {\displaystyle p_{i}={\frac {X_{i}}{X_{c}}},\quad q_{i}={\frac {Y_{i}}{Y_{c}}}};
and the new variables are linearly related as: {\displaystyle q=\alpha p}.
To find the optimal linear relationship, we minimize the total squared reconstruction error {\displaystyle E(\alpha )={\frac {1}{1-\alpha ^{2}}}\sum _{i=1}^{n}(\alpha p_{i}-q_{i})^{2}}; setting the derivative of the error function to zero ({\displaystyle E'(\alpha )=0}) yields {\displaystyle \alpha ={\frac {1}{2}}\left(-\lambda \pm {\sqrt {\lambda ^{2}+4}}\right)}, where {\displaystyle \lambda ={\frac {p\cdot p-q\cdot q}{p\cdot q}}}.[14]
Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.
Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.
Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.[15]
The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X,
{\displaystyle \mathbf {X} =\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}.}
Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p and called the right singular vectors of X.
In terms of this factorization, the matrix XTX can be written
{\displaystyle {\begin{aligned}\mathbf {X} ^{\mathsf {T}}\mathbf {X} &=\mathbf {W} \mathbf {\Sigma } ^{\mathsf {T}}\mathbf {U} ^{\mathsf {T}}\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\\&=\mathbf {W} \mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\\&=\mathbf {W} \mathbf {\hat {\Sigma }} ^{2}\mathbf {W} ^{\mathsf {T}},\end{aligned}}}
where {\displaystyle \mathbf {\hat {\Sigma }} } is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies {\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } }. Comparison with the eigenvector factorization of XTX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XTX, while the singular values σ(k) of X are equal to the square-root of the eigenvalues λ(k) of XTX.
Using the singular value decomposition the score matrix T can be written
{\displaystyle {\begin{aligned}\mathbf {T} &=\mathbf {X} \mathbf {W} \\&=\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{\mathsf {T}}\mathbf {W} \\&=\mathbf {U} \mathbf {\Sigma } ,\end{aligned}}}
so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T.
Efficient algorithms exist to calculate the SVD of X without having to form the matrix XTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix,[16] unless only a handful of components are required.
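A sketch of this SVD route in NumPy, checking numerically that the scores T = UΣ coincide with the projection XW; the data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X -= X.mean(axis=0)                  # column-wise zero empirical mean

U, sigma, Wt = np.linalg.svd(X, full_matrices=False)
T = U * sigma                        # scores: left singular vectors scaled by singular values
assert np.allclose(T, X @ Wt.T)      # identical to projecting X onto the right singular vectors
eigenvalues = sigma**2               # eigenvalues of X^T X, in decreasing order
```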
As with the eigen-decomposition, a truncated n × L score matrix TL can be obtained by considering only the first L largest singular values and their singular vectors:
{\displaystyle \mathbf {T} _{L}=\mathbf {U} _{L}\mathbf {\Sigma } _{L}=\mathbf {X} \mathbf {W} _{L}.}
The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank L to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem (1936).
Theorem (optimal k-dimensional fit). Let P be an n × m data matrix whose columns have been mean-centered and scaled, and let {\displaystyle P=U\,\Sigma \,V^{T}} be its singular value decomposition. Then the best rank-k approximation to P in the least-squares (Frobenius-norm) sense is {\displaystyle P_{k}=U_{k}\,\Sigma _{k}\,V_{k}^{T}},
where Vk consists of the first k columns of V. Moreover, the relative residual variance is {\displaystyle R(k)={\frac {\sum _{j=k+1}^{m}\sigma _{j}^{2}}{\sum _{j=1}^{m}\sigma _{j}^{2}}}}.[14]
The singular values (in Σ) are the square roots of the eigenvalues of the matrix XTX. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements when compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA.
PCA is sensitive to the scaling of the variables. Mathematically, this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.[14]
Let Xc be the centered data matrix (n rows, p columns) and define the covariance {\displaystyle \Sigma ={\frac {1}{n}}\,\mathbf {X} _{\text{c}}^{\mathsf {T}}\mathbf {X} _{\text{c}}.} If the j-th variable is multiplied by a factor αj we obtain {\displaystyle \mathbf {X} _{\text{c}}^{(\alpha )}=\mathbf {X} _{\text{c}}D,\qquad D=\operatorname {diag} (\alpha _{1},\ldots ,\alpha _{p}).} Hence the new covariance is {\displaystyle \Sigma ^{(\alpha )}=D^{\mathsf {T}}\,\Sigma \,D.}
Because rescaling changes the covariance matrix in this way, its eigenvalues and eigenvectors change with D, and the principal axes rotate toward any column whose variance has been inflated, exactly as the 2-D example below illustrates.
If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
Classical PCA assumes the cloud of points has already been translated so its centroid is at the origin.[14]
Write each observation as {\displaystyle \mathbf {q} _{i}={\boldsymbol {\mu }}+\mathbf {z} _{i},\qquad {\boldsymbol {\mu }}={\tfrac {1}{n}}\sum _{i=1}^{n}\mathbf {q} _{i}.}
Without subtracting μ we are in effect diagonalising
{\displaystyle \Sigma _{\text{unc}}\;=\;n\,{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}\;+\;{\tfrac {1}{n}}\,\mathbf {Z} ^{\mathsf {T}}\mathbf {Z} ,}
where Z is the centered matrix.
The rank-one term {\displaystyle n\,{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}} often dominates, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centred part Z.
After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.
Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998) on "Mean-centering in Moderated Regression: Much Ado About Nothing". Since correlations are the covariances of standardized variables (Z- or standard scores), a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X.
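This equivalence is easy to check numerically; in the sketch below (with arbitrary random data on very different scales), the correlation matrix of X coincides with the covariance matrix of the standardized data Z, so the two PCAs are identical.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) * np.array([1.0, 10.0, 100.0])  # very different scales

Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardized (Z-score) variables
corr_X = np.corrcoef(X, rowvar=False)        # correlation matrix of X
cov_Z = np.cov(Z, rowvar=False, ddof=0)      # covariance matrix of Z

print(np.allclose(corr_X, cov_Z))            # True: same matrix, hence the same eigenvectors
```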
PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability.[17] It has nevertheless been used to quantify the distance between two or more classes, by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between the centers of mass of two or more classes.[18] Linear discriminant analysis is an alternative which is optimized for class separability.
Some properties of PCA include:[13][page needed]
The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of x, and they may also be useful in regression, in selecting a subset of variables from x, and in outlier detection.
Before we look at its usage, we first look at diagonal elements,
Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of x into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions {\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'} from each PC. Although not strictly decreasing, the elements of {\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'} will tend to become smaller as k increases, as {\displaystyle \lambda _{k}} is nonincreasing for increasing k, whereas the elements of {\displaystyle \alpha _{k}} tend to stay about the same size because of the normalization constraints: {\displaystyle \alpha _{k}'\alpha _{k}=1,k=1,\dots ,p}.
As noted above, the results of PCA depend on the scaling of the variables. This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unit variance.[19]
The applicability of PCA as described above is limited by certain (tacit) assumptions[20] made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA).
Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,[21] and forward modeling has to be performed to recover the true magnitude of the signals.[22] As an alternative method, non-negative matrix factorization, focusing only on the non-negative elements in the matrices, is well-suited for astrophysical observations.[23][24][25] See more at the relation between PCA and non-negative matrix factorization.
PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms the original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.[26]
PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress.[27][page needed] Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted".[28] The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".[28]
Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models.
Under the assumption that
{\displaystyle \mathbf {x} =\mathbf {s} +\mathbf {n} ,}
that is, that the data vector x is the sum of the desired information-bearing signal s and a noise signal n, one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point of view.
In particular, Linsker showed that if s is Gaussian and n is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes the mutual information {\displaystyle I(\mathbf {y} ;\mathbf {s} )} between the desired information s and the dimensionality-reduced output {\displaystyle \mathbf {y} =\mathbf {W} _{L}^{T}\mathbf {x} }.[29]
If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vector n are iid), but the information-bearing signal s is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on the information loss, which is defined as[30][31]
{\displaystyle I(\mathbf {x} ;\mathbf {s} )-I(\mathbf {y} ;\mathbf {s} ).}
The optimality of PCA is also preserved if the noise n is iid and at least more Gaussian (in terms of the Kullback–Leibler divergence) than the information-bearing signal s.[32] In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noise n becomes dependent.
The following is a detailed description of PCA using the covariance method[33] as opposed to the correlation method.[34]
The goal is to transform a given data set X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X:
Y=KLT{X}{\displaystyle \mathbf {Y} =\mathbb {KLT} \{\mathbf {X} \}}
Suppose you have data comprising a set of observations of p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further that the data are arranged as a set of n data vectors {\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}} with each {\displaystyle \mathbf {x} _{i}} representing a single grouped observation of the p variables.
Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[35] Hence we proceed by centering the data, subtracting the empirical mean of each column from every row of the data matrix to obtain the centered matrix B.
In some applications, each variable (column of B) may also be scaled to have a variance equal to 1 (see Z-score).[36] This step affects the calculated principal components, but makes them independent of the units used to measure the different variables.
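Sketched in NumPy, and assuming the observations are arranged one per row, these two preprocessing steps are simply:

```python
import numpy as np

X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2]])              # hypothetical observations (rows) of p = 2 variables

B = X - X.mean(axis=0)                  # mean subtraction (centering)
B_scaled = B / X.std(axis=0, ddof=1)    # optional: scale each column to unit variance
```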
Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean.
We want to find (∗) a d × d orthonormal transformation matrix P so that PX has a diagonal covariance matrix (that is, PX is a random vector with all its distinct components pairwise uncorrelated).
A quick computation assuming P were unitary yields:
{\displaystyle {\begin{aligned}\operatorname {cov} (PX)&=\operatorname {E} [PX~(PX)^{\dagger }]\\&=\operatorname {E} [PX~X^{\dagger }P^{\dagger }]\\&=P\operatorname {E} [XX^{\dagger }]P^{\dagger }\\&=P\operatorname {cov} (X)P^{-1}.\end{aligned}}}
Hence (∗) holds if and only if cov(X) were diagonalisable by P.
This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix.
In practical implementations, especially with high-dimensional data (large p), the naive covariance method is rarely used because it is not efficient due to the high computational and memory costs of explicitly determining the covariance matrix. The covariance-free approach avoids the np2 operations of explicitly calculating and storing the covariance matrix XTX, instead utilizing one of the matrix-free methods, for example, based on the function evaluating the product XT(X r) at the cost of 2np operations.
One way to compute the first principal component efficiently[41] is with the following power-iteration scheme, for a data matrix X with zero mean, without ever computing its covariance matrix.
This power iteration algorithm simply calculates the vector XT(X r), normalizes, and places the result back in r. The eigenvalue is approximated by rT(XTX) r, which is the Rayleigh quotient on the unit vector r for the covariance matrix XTX. If the largest singular value is well separated from the next largest one, the vector r gets close to the first principal component of X within the number of iterations c, which is small relative to p, at the total cost 2cnp. The power iteration convergence can be accelerated without noticeably sacrificing the small cost per iteration using more advanced matrix-free methods, such as the Lanczos algorithm or the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.
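The reference's pseudo-code is not reproduced here; the following NumPy sketch is a non-authoritative reconstruction from the description above (iterate r ← XT(X r), normalize, and estimate the eigenvalue with the Rayleigh quotient), with the iteration count and tolerance chosen arbitrarily.

```python
import numpy as np

def first_principal_component(X, n_iter=100, tol=1e-10, seed=0):
    """Covariance-free power iteration for a zero-mean data matrix X (n x p)."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=X.shape[1])
    r /= np.linalg.norm(r)
    for _ in range(n_iter):
        s = X.T @ (X @ r)                 # evaluate X^T (X r) without forming X^T X
        s /= np.linalg.norm(s)
        if np.linalg.norm(s - r) < tol:   # converged when the direction stops changing
            r = s
            break
        r = s
    eigenvalue = r @ (X.T @ (X @ r))      # Rayleigh quotient on the unit vector r
    return r, eigenvalue
```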
Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single vectors r and s with block vectors, matrices R and S. Every column of R approximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the product XT(X R). Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique.
Non-linear iterative partial least squares (NIPALS) is a variant of the classical power iteration with matrix deflation by subtraction, implemented for computing the first few components in a principal component or partial least squares analysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example, genomics, metabolomics), it is usually only necessary to compute the first few PCs. The NIPALS algorithm updates iterative approximations to the leading scores and loadings t1 and r1T by the power iteration, multiplying on every iteration by X on the left and on the right; that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations to XTX, based on the function evaluating the product XT(X r) = ((X r)TX)T.
The matrix deflation by subtraction is performed by subtracting the outer product t1r1T from X, leaving the deflated residual matrix used to calculate the subsequent leading PCs.[42] For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine-precision round-off errors accumulated in each iteration and matrix deflation by subtraction.[43] A Gram–Schmidt re-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.[44] NIPALS' reliance on single-vector multiplications cannot take advantage of high-level BLAS and results in slow convergence for clustered leading singular values—both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.
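A rough single-vector sketch of a NIPALS-style computation, combining the score/loading power-iteration update with the rank-one deflation just described; the initialization and stopping rule are arbitrary choices for illustration, not the published algorithm.

```python
import numpy as np

def nipals_pca(X, n_components=2, n_iter=500, tol=1e-12):
    """Leading principal components via iterative score/loading updates with deflation."""
    X = X - X.mean(axis=0)                 # work on the centered matrix
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()                 # initialize the score vector from a column
        for _ in range(n_iter):
            r = X.T @ t / (t @ t)          # loading estimate from the current scores
            r /= np.linalg.norm(r)
            t_new = X @ r                  # score estimate from the current loadings
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, r)             # deflation: subtract the rank-one term t r^T
        scores.append(t)
        loadings.append(r)
    return np.column_stack(scores), np.column_stack(loadings)
```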
In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.[45]
In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs. These data were subjected to PCA for quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variable species.
For this, the following results are produced.
These results are what is called introducing a qualitative variable as supplementary element. This procedure is detailed in Husson, Lê, & Pagès (2009) and Pagès (2013).
Few software packages offer this option in an "automatic" way. This is the case of SPAD which, historically, following the work of Ludovic Lebart, was the first to propose this option, and of the R package FactoMineR.
The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction, etc., and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as the Intelligence Quotient (IQ). The pioneering statistical psychologist Spearman actually developed factor analysis in 1904 for his two-factor theory of intelligence, adding a formal technique to the science of psychometrics. In 1924 Thurstone looked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.[46]
In 1949, Shevky and Williams introduced the theory of factorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s.[47] Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'. Cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms.
One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.[48]
About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.[49]
PCA can be used as a formal method for the development of indexes. As an alternative, confirmatory composite analysis has been proposed to develop and assess indexes.[50]
The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city.
The country-level Human Development Index (HDI) from UNDP, which has been published since 1990 and is very extensively used in development studies,[51] has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA.
In 1978 Cavalli-Sforza and others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events.
Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.[52]
PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measure of standard error in PCA is also an impediment to more consistent usage. In August 2022, the molecular biologistEran Elhaikpublished a theoretical paper inScientific Reportsanalyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking andcircular reasoning.[53]
Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.[54]
PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.[55]
Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.[56]
Inquantitative finance, PCA is used[57]infinancial risk management, and has been applied toother problemssuch asportfolio optimization.
PCA is commonly used in problems involvingfixed incomesecurities andportfolios, andinterest rate derivatives.
Valuations here depend on the entireyield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements,[58]thereby facilitating the modelling.
One common risk management application is to calculatevalue at risk(VaR), applying PCA to theMonte Carlo simulation.[59]Here, for each simulation-sample, the components are stressed, and rates, andin turn option values, are then reconstructed, with VaR calculated, finally, over the entire run.
PCA is also used inhedgingexposure tointerest rate risk, givenpartial durationsand other sensitivities.[58]In both applications, the first three principal components of the system are typically of interest (representing "shift", "twist", and "curvature").
These principal components are derived from an eigen-decomposition of thecovariance matrixofyieldsat predefined maturities;[60]thevarianceof each component is itseigenvalue(and as the components areorthogonal, no correlation need be incorporated in subsequent modelling).
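As a rough illustration of this procedure, the following sketch derives the first three factors from a matrix of historical yield changes with NumPy; the array names, maturities and placeholder data are purely hypothetical, and a production risk system would use observed rates at its own chosen maturities.

import numpy as np

# Hypothetical matrix of daily yield changes, shape (observations, maturities),
# e.g. maturities of 1y, 2y, 5y, 10y and 30y (placeholder random data here)
rng = np.random.default_rng(0)
yield_changes = rng.normal(size=(500, 5))

# Eigen-decomposition of the covariance matrix of yield changes
cov = np.cov(yield_changes, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)      # returned in ascending order
order = np.argsort(eigenvalues)[::-1]                # re-sort in descending order
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Variance explained by each component; the first three loading patterns are
# conventionally interpreted as the "shift", "twist" and "curvature" factors
explained = eigenvalues / eigenvalues.sum()
shift, twist, curvature = eigenvectors[:, 0], eigenvectors[:, 1], eigenvectors[:, 2]

# Project observed rate moves onto the retained factors for subsequent modelling
factor_scores = yield_changes @ eigenvectors[:, :3]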
Forequity, an optimal portfolio is one where theexpected returnis maximized for a given level of risk, or alternatively, where risk is minimized for a given return; seeMarkowitz modelfor discussion.
Thus, one approach is to reduce portfolio risk, whereallocation strategiesare applied to the "principal portfolios" instead of the underlyingstocks.
A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential.[61][62]PCA has also been used to understand relationships[57]between internationalequity markets, and within markets between groups of companies in industries orsectors.
PCA may also be applied tostress testing,[63]essentially an analysis of a bank's ability to endurea hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several]macroeconomic variablesinto a more manageable data set, which can then [be used] for analysis."[63]Here, the resulting factors are linked to e.g. interest rates – based on the largest elements of the factor'seigenvector– and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks.
A variant of principal components analysis is used inneuroscienceto identify the specific properties of a stimulus that increases aneuron's probability of generating anaction potential.[64][65]This technique is known asspike-triggered covariance analysis. In a typical application an experimenter presents awhite noiseprocess as a stimulus (usually either as a sensory input to a test subject, or as acurrentinjected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates thecovariance matrixof thespike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. Theeigenvectorsof the difference between the spike-triggered covariance matrix and the covariance matrix of theprior stimulus ensemble(the set of all stimuli, defined over the same length time window) then indicate the directions in thespaceof stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought after relevant stimulus features.
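A minimal sketch of spike-triggered covariance analysis, assuming a hypothetical one-dimensional white-noise stimulus and a list of spike times, might look as follows; the array names, spiking probability and window length are illustrative only.

import numpy as np

# Hypothetical data: a white-noise stimulus and spike indices; 100-sample window
rng = np.random.default_rng(1)
stimulus = rng.normal(size=10_000)
spike_indices = np.flatnonzero(rng.random(10_000) < 0.02)
window = 100

# Spike-triggered ensemble: stimulus segments immediately preceding each spike
ste = np.array([stimulus[t - window:t] for t in spike_indices if t >= window])

# Prior stimulus ensemble: all windows of the same length
prior = np.lib.stride_tricks.sliding_window_view(stimulus, window)

# Eigenvectors of the difference between the two covariance matrices point along
# the stimulus directions where the spike-triggered variance changed the most
diff = np.cov(ste, rowvar=False) - np.cov(prior, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(diff)
top_features = eigenvectors[:, np.argsort(eigenvalues)[::-1][:2]]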
In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential.Spike sortingis an important procedure becauseextracellularrecording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performsclustering analysisto associate specific action potentials with individual neurons.
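A simplified sketch of this two-step procedure, assuming a hypothetical array of recorded waveforms and using scikit-learn's PCA and k-means implementations, is given below; real spike-sorting pipelines add filtering, alignment and cluster validation on top of this.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical spike waveforms, shape (n_spikes, samples_per_waveform)
rng = np.random.default_rng(2)
waveforms = rng.normal(size=(1_000, 48))

# Step 1: reduce each waveform to a few principal-component scores
scores = PCA(n_components=3).fit_transform(waveforms)

# Step 2: cluster the scores; each cluster is assigned to a putative neuron
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)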
PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is,order parameters, duringphase transitionsin the brain.[66]
Correspondence analysis(CA) was developed byJean-Paul Benzécri[67]and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied tocontingency tables. CA decomposes thechi-squared statisticassociated with this table into orthogonal factors.[68]Because CA is a descriptive technique, it can be applied to tables whether or not the chi-squared statistic is appropriate.
Several variants of CA are available includingdetrended correspondence analysisandcanonical correspondence analysis. One special extension ismultiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[69]
Principal component analysis creates variables that are linear combinations of the original variables. The new variables are all mutually orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors.
Factor analysisis similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance".[70]In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations.[13]: 158Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) orcausal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.[71]
It has been asserted that the relaxed solution ofk-means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace.[72][73]However, that PCA is a useful relaxation ofk-means clustering was not a new result,[74]and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[75]
Non-negative matrix factorization(NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy,[23][24][25]in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore form a non-orthogonal basis.
In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data.[21]For NMF, its components are ranked based only on the empirical FRV curves.[25]The residual fractional eigenvalue plots, that is,1−∑i=1kλi/∑j=1nλj{\displaystyle 1-\sum _{i=1}^{k}\lambda _{i}{\Big /}\sum _{j=1}^{n}\lambda _{j}}as a function of component numberk{\displaystyle k}given a total ofn{\displaystyle n}components, have for PCA a flat plateau, where no data is captured to remove the quasi-static noise, before dropping quickly as an indication of over-fitting with respect to random noise.[21]The FRV curves for NMF decrease continuously[25]when the NMF components are constructedsequentially,[24]indicating the continuous capturing of quasi-static noise; they then converge to higher levels than PCA,[25]indicating that NMF is less prone to over-fitting.
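A small sketch of how such an FRV curve can be computed for PCA from the eigenvalue spectrum of placeholder data follows; it simply evaluates the expression above for every k.

import numpy as np

# Hypothetical data matrix (observations x variables)
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 20))

# Eigenvalues of the covariance matrix, sorted in decreasing order
eigenvalues = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

# Fractional residual variance after keeping the first k components:
# FRV(k) = 1 - (lambda_1 + ... + lambda_k) / (lambda_1 + ... + lambda_n)
frv = 1.0 - np.cumsum(eigenvalues) / eigenvalues.sum()
for k, value in enumerate(frv, start=1):
    print(f"k={k:2d}  FRV={value:.3f}")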
It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane.
Theiconography of correlations, by contrast, is not a projection onto a system of axes and does not have these drawbacks; all the variables can therefore be kept.
The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation).
A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable".
A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables.Sparse PCAovercomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding sparsity constraint on the input variables.
Several approaches have been proposed for computing sparse principal components.
The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.[82]
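As an illustration of the idea, scikit-learn provides a SparsePCA estimator whose alpha parameter controls the degree of sparsity; the sketch below, on placeholder data, contrasts its loadings with ordinary PCA loadings.

import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Hypothetical data (observations x variables)
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))

dense = PCA(n_components=3).fit(X)
sparse = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

# Ordinary components usually load on every variable, while sparse components
# contain exact zeros, which makes them easier to interpret
print(np.round(dense.components_, 2))
print(np.round(sparse.components_, 2))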
Most of the modern methods fornonlinear dimensionality reductionfind their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points.Trevor Hastieexpanded on this concept by proposingPrincipalcurves[86]as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for dataapproximationfollowed byprojectingthe points onto it. See also theelastic mapalgorithm andprincipal geodesic analysis.[87]Another popular generalization iskernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel.
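For instance, kernel PCA with a radial basis function kernel can separate data that linear PCA cannot; the sketch below uses scikit-learn's KernelPCA on a synthetic two-circles data set, with parameter values chosen purely for illustration.

from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA, PCA

# Two concentric circles: not linearly separable in the original coordinates
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA merely rotates the data, while an RBF-kernel PCA unfolds the circles
X_pca = PCA(n_components=2).fit_transform(X)
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)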
Inmultilinear subspace learning,[88][89][90]PCA is generalized tomultilinear PCA(MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA.
N-way principal component analysis may be performed with models such asTucker decomposition,PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS.
While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive tooutliersin the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify.[91]For example, indata miningalgorithms likecorrelation clustering, the assignment of points to clusters and outliers is not known beforehand.
A recently proposed generalization of PCA[92]based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy.
Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).[7][5]
Robust principal component analysis(RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.[93][94][95]
Independent component analysis(ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations.
A related matrix factorization, given a matrixE{\displaystyle E}, tries to decompose it into two matrices such thatE=AP{\displaystyle E=AP}. A key difference from techniques such as PCA and ICA is that some of the entries ofA{\displaystyle A}are constrained to be 0. HereP{\displaystyle P}is termed the regulatory layer. While in general such a decomposition can have multiple solutions, the authors prove that if certain conditions are satisfied,
then the decomposition is unique up to multiplication by a scalar.[96]
Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components: variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups.[97]In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA).
A DAPC can be realized in R using the packageadegenet. (more info:adegenet on the web)
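The adegenet implementation itself is in R, but the underlying two-step idea can be sketched in Python with scikit-learn; the genotype matrix, group labels and number of retained components below are entirely hypothetical and not part of the adegenet workflow.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical allele-count matrix (individuals x markers) and group labels
rng = np.random.default_rng(5)
genotypes = rng.integers(0, 3, size=(120, 50)).astype(float)
groups = rng.integers(0, 3, size=120)

# Step 1: transform the genotype data with PCA to remove collinearity
pc_scores = PCA(n_components=20).fit_transform(genotypes)

# Step 2: discriminant analysis on the retained components maximises
# between-group variation relative to within-group variation
lda = LinearDiscriminantAnalysis(n_components=2).fit(pc_scores, groups)
discriminant_scores = lda.transform(pc_scores)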
Directional component analysis(DCA) is a method used in the atmospheric sciences for analysing multivariate datasets.[98]Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets.
Also like PCA, it is based on a covariance matrix derived from the input dataset.
The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact.
Whereas PCA maximises explained variance, DCA maximises probability density given impact.
The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact).
DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles,[99]and the most likely and most impactful changes in rainfall due to climate change.[100]
|
https://en.wikipedia.org/wiki/Principal_components_analysis
|
Probabilistic latent semantic analysis(PLSA), also known asprobabilistic latent semantic indexing(PLSI, especially in information retrieval circles) is astatistical techniquefor the analysis of two-mode and co-occurrence data. In effect, one can derive a low-dimensional representation of the observed variables in terms of their affinity to certain hidden variables, just as inlatent semantic analysis, from which PLSA evolved.
Compared to standardlatent semantic analysiswhich stems fromlinear algebraand downsizes the occurrence tables (usually via asingular value decomposition), probabilistic latent semantic analysis is based on a mixture decomposition derived from alatent class model.
Considering observations in the form of co-occurrences(w,d){\displaystyle (w,d)}of words and documents, PLSA models the probability of each co-occurrence as a mixture of conditionally independentmultinomial distributions:
P(w,d)=∑cP(c)P(d|c)P(w|c)=P(d)∑cP(c|d)P(w|c){\displaystyle P(w,d)=\sum _{c}P(c)P(d|c)P(w|c)=P(d)\sum _{c}P(c|d)P(w|c)}
withc{\displaystyle c}being the words' topic. Note that the number of topics is a hyperparameter that must be chosen in advance and is not estimated from the data. The first formulation is thesymmetricformulation, wherew{\displaystyle w}andd{\displaystyle d}are both generated from the latent classc{\displaystyle c}in similar ways (using the conditional probabilitiesP(d|c){\displaystyle P(d|c)}andP(w|c){\displaystyle P(w|c)}), whereas the second formulation is theasymmetricformulation, where, for each documentd{\displaystyle d}, a latent class is chosen conditionally to the document according toP(c|d){\displaystyle P(c|d)}, and a word is then generated from that class according toP(w|c){\displaystyle P(w|c)}. Although we have used words and documents in this example, the co-occurrence of any couple of discrete variables may be modelled in exactly the same way.
So, the number of parameters is equal tocd+wc{\displaystyle cd+wc}, wherec{\displaystyle c},d{\displaystyle d}andw{\displaystyle w}denote the numbers of topics, documents and distinct words, respectively. The number of parameters grows linearly with the number of documents. In addition, although PLSA is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents.
The model parameters are learned using theEM algorithm.
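A compact sketch of such an EM fit for the asymmetric formulation is given below; the function name, the random initialisation and the fixed number of iterations are illustrative choices rather than part of any standard implementation.

import numpy as np

def plsa_em(counts, n_topics, n_iter=50, seed=0):
    """Minimal EM for the asymmetric PLSA formulation.

    counts is an (n_docs, n_words) matrix of word counts n(d, w); the function
    returns P(w|c) with shape (n_topics, n_words) and P(c|d) with shape
    (n_docs, n_topics)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_c = rng.random((n_topics, n_words))
    p_w_c /= p_w_c.sum(axis=1, keepdims=True)
    p_c_d = rng.random((n_docs, n_topics))
    p_c_d /= p_c_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(c|d,w) proportional to P(c|d) P(w|c)
        resp = p_c_d[:, :, None] * p_w_c[None, :, :]          # shape (docs, topics, words)
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both conditional distributions from n(d,w) P(c|d,w)
        weighted = counts[:, None, :] * resp
        p_w_c = weighted.sum(axis=0)
        p_w_c /= p_w_c.sum(axis=1, keepdims=True) + 1e-12
        p_c_d = weighted.sum(axis=2)
        p_c_d /= p_c_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_c, p_c_d

# Toy usage on a random count matrix of 8 documents over a 30-word vocabulary
counts = np.random.default_rng(1).integers(0, 5, size=(8, 30))
topics_words, docs_topics = plsa_em(counts, n_topics=2)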
PLSA may be used in a discriminative setting, viaFisher kernels.[1]
PLSA has applications ininformation retrievalandfiltering,natural language processing,machine learningfrom text,bioinformatics,[2]and related areas.
It is reported that theaspect modelused in probabilistic latent semantic analysis has severeoverfittingproblems.[3]
This is an example of alatent class model(see references therein), and it is related[6][7]tonon-negative matrix factorization. The present terminology was coined in 1999 by Thomas Hofmann.[8]
|
https://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis
|
Spamdexing(also known assearch engine spam,search engine poisoning,black-hatsearch engine optimization,search spamorweb spam)[1]is the deliberate manipulation ofsearch engineindexes. It involves a number of methods, such aslink buildingand repeating related and/or unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system.[2][3]
Spamdexing could be considered to be a part ofsearch engine optimization,[4]although there are many SEO methods that improve the quality and appearance of the content of web sites and serve content useful to many users.[5]
Search engines use a variety ofalgorithmsto determine relevancyranking. Some of these include determining whether the search term appears in thebody textorURLof aweb page. Many search engines check for instances of spamdexing and will remove suspect pages from their indexes. Also, search-engine operators can quickly block the results listing from entire websites that use spamdexing, perhaps in response to user complaints of false matches. The rise of spamdexing in the mid-1990s made the leading search engines of the time less useful. Using unethical methods to make websites rank higher in search engine results than they otherwise would is commonly referred to in the SEO (search engine optimization) industry as "black-hat SEO".[6]These methods are more focused on breaking the search-engine-promotion rules and guidelines. In addition to this, the perpetrators run the risk of their websites being severely penalized by theGoogle PandaandGoogle Penguinsearch-results ranking algorithms.[7]
Common spamdexing techniques can be classified into two broad classes:content spam[5](term spam) andlink spam.[3]
The earliest known reference[2]to the termspamdexingis by Eric Convey in his article "Porn sneaks way back on Web",The Boston Herald, May 22, 1996, where he said:
The problem arises when site operators load their Web pages with hundreds of extraneous terms so search engines will list them among legitimate addresses.
The process is called "spamdexing," a combination ofspamming—the Internet term for sending users unsolicited information—and "indexing."[2]
Keyword stuffing had been used in the past to obtain top search engine rankings and visibility for particular phrases. This method is outdated and adds no value to rankings today. In particular,Googleno longer gives good rankings to pages employing this technique.
Hiding text from the visitor is done in many different ways. Text colored to blend with the background,CSSz-indexpositioning to place text underneath an image — and therefore out of view of the visitor — andCSSabsolute positioning to have the text positioned far from the page center are all common techniques. By 2005, many invisible text techniques were easily detected by major search engines.
"Noscript" tags are another way to place hidden content within a page. While they are a valid optimization method for displaying an alternative representation of scripted content, they may be abused, since search engines may index content that is invisible to most visitors.
Sometimes inserted text includes words that are frequently searched (such as "sex"), even if those terms bear little connection to the content of a page, in order to attract traffic to advert-driven pages.
In the past, keyword stuffing was considered to be either awhite hator ablack hattactic, depending on the context of the technique, and the opinion of the person judging it. While a great deal of keyword stuffing was employed to aid inspamdexing, which is of little benefit to the user, keyword stuffing in certain circumstances was not intended to skew results in a deceptive manner. Whether the term carries apejorativeor neutralconnotationis dependent on whether the practice is used to pollute the results with pages of little relevance, or to direct traffic to a page of relevance that would have otherwise been de-emphasized due to the search engine's inability to interpret and understand related ideas. This is no longer the case. Search engines now employ themed, related keyword techniques to interpret the intent of the content on a page.
These techniques involve altering the logical view that a search engine has over the page's contents. They all aim at variants of thevector space modelfor information retrieval on text collections.
Keyword stuffingis asearch engine optimization(SEO) technique in which keywords are loaded into a web page'smeta tags, visible content, orbacklinkanchor textin an attempt to gain an unfair rank advantage insearch engines. Keyword stuffing may lead to awebsitebeing temporarily or permanently banned or penalized on major search engines.[8]The repetition ofwordsinmeta tagsmay explain why manysearch enginesno longer use these tags. Nowadays, search engines focus more on content that is unique, comprehensive, relevant, and helpful, which improves overall quality and makes keyword stuffing useless; it is nonetheless still practiced by many webmasters.[citation needed]
Many major search engines have implemented algorithms that recognize keyword stuffing, and reduce or eliminate any unfair search advantage that the tactic may have been intended to gain, and oftentimes they will also penalize, demote or remove websites from their indexes that implement keyword stuffing.
Changes and algorithms specifically intended to penalize or ban sites using keyword stuffing include the Google Florida update (November 2003),Google Panda(February 2011),[9]Google Hummingbird(August 2013),[10]andBing's September 2014 update.[11]
Headlines in online news sites are increasingly packed with just the search-friendly keywords that identify the story. Traditional reporters and editors frown on the practice, but it is effective in optimizing news stories for search.[12]
Unrelatedhidden textis disguised by making it the same color as the background, using a tiny font size, or hiding it withinHTMLcode such as "no frame" sections,alt attributes, zero-sizedDIVs, and "no script" sections. People manually screening red-flagged websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some of its pages. However, hidden text is not always spamdexing: it can also be used to enhanceaccessibility.[13]
This involves repeating keywords in themeta tags, and using meta keywords that are unrelated to the site's content. This tactic has been ineffective. Google declared that it doesn't use the keywords meta tag in its online search ranking in September 2009.[14]
"Gateway" ordoorway pagesare low-quality web pages created with very little content, which are instead stuffed with very similar keywords and phrases. They are designed to rank highly within the search results, but serve no purpose to visitors looking for information. A doorway page will generally have "click here to enter" on the page; autoforwarding can also be used for this purpose. In 2006, Google ousted vehicle manufacturerBMWfor using "doorway pages" to the company's German site, BMW.de.[15]
Scraper sitesare created using various programs designed to "scrape" search-engine results pages or other sources of content and create "content" for a website.[citation needed]The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission. Such websites are generally full of advertising (such aspay-per-clickads), or they redirect the user to other sites. It is even feasible for scraper sites to outrank original websites for their own information and organization names.
Article spinninginvolves rewriting existing articles, as opposed to merely scraping content from other sites, to avoid penalties imposed by search engines forduplicate content. This process is undertaken by hired writers[citation needed]or automated using athesaurusdatabase or anartificial neural network.
Similarly toarticle spinning, some sites usemachine translationto render their content in several languages, with no human editing, resulting in unintelligible texts that nonetheless continue to be indexed by search engines, thereby attracting traffic.
Link spam is defined as links between pages that are present for reasons other than merit.[16]Link spam takes advantage of link-based ranking algorithms, which givewebsiteshigher rankings the more other highly ranked websites link to them. These techniques also aim at influencing other link-based ranking techniques such as theHITS algorithm.[citation needed]
Link farms are tightly-knit networks of websites that link to each other for the sole purpose of exploiting the search engine ranking algorithms. These are also known facetiously asmutual admiration societies.[17]Use of link farms has greatly declined since the launch of Google's first Panda Update in February 2011, which introduced significant improvements in its spam-detection algorithm.
Private blog networks(PBNs) are groups of authoritative websites used as a source of contextual links that point to the owner's main website to achieve a higher search engine ranking. Owners of PBN websites use expired domains orauction domainsthat havebacklinksfrom high-authority websites. Google has targeted and penalized PBN users on several occasions with several massive deindexing campaigns since 2014.[18]
Puttinghyperlinkswhere visitors will not see them is used to increaselink popularity. Highlighted link text can help rank a webpage higher for matching that phrase.
ASybil attackis the forging of multiple identities for malicious intent, named after the famousdissociative identity disorderpatient and the book about her that shares her name, "Sybil".[19][20]A spammer may create multiple web sites at differentdomain namesthat all link to each other, such as fake blogs (known asspam blogs).
Spam blogs are blogs created solely for commercial promotion and the passage of link authority to target sites. Often these "splogs" are designed in a misleading manner that will give the effect of a legitimate website but upon close inspection will often be written using spinning software or be very poorly written with barely readable content. They are similar in nature to link farms.[21][22]
Guest blog spam is the process of placing guest blogs on websites for the sole purpose of gaining a link to another website or websites. Unfortunately, these are often confused with legitimate forms of guest blogging with other motives than placing links. This technique was made famous byMatt Cutts, who publicly declared "war" against this form of link spam.[23]
Some link spammers utilize expired domain crawler software or monitor DNS records for domains that will expire soon, then buy them when they expire and replace the pages with links to their pages. However, it is possible but not confirmed that Google resets the link data on expired domains.[citation needed]To maintain all previous Google ranking data for the domain, it is advisable that a buyer grab the domain before it is "dropped".
Some of these techniques may be applied for creating aGoogle bomb—that is, to cooperate with other users to boost the ranking of a particular page for a particular query.
Web sites that can be edited by users can be used by spamdexers to insert links to spam sites if the appropriate anti-spam measures are not taken.
Automatedspambotscan rapidly make the user-editable portion of a site unusable.
Programmers have developed a variety of automatedspam prevention techniquesto block or at least slow down spambots.
Spam in blogs is the placing or solicitation of links randomly on other sites, placing a desired keyword into the hyperlinked text of the inbound link. Guest books, forums, blogs, and any site that accepts visitors' comments are particular targets and are often victims of drive-by spamming where automated software creates nonsense posts with links that are usually irrelevant and unwanted.
Comment spam is a form of link spam that has arisen in web pages that allow dynamic user editing such aswikis,blogs, andguestbooks. It can be problematic becauseagentscan be written that automatically randomly select a user edited web page, such as a Wikipedia article, and add spamming links.[24]
Wiki spam is when a spammer uses the open editability ofwikisystems to place links from the wiki site to the spam site.
Referrer spamtakes place when a spam perpetrator or facilitator accesses aweb page(thereferee), by following a link from another web page (thereferrer), so that the referee is given the address of the referrer by the person's web browser. Somewebsiteshave a referrer log which shows which pages link to that site. By having arobotrandomly access many sites enough times, with a message or specific address given as the referrer, that message or Internet address then appears in the referrer log of those sites that have referrer logs. Since someWeb search enginesbase the importance of sites on the number of different sites linking to them, referrer-log spam may increase the search engine rankings of the spammer's sites. Also, site administrators who notice the referrer log entries in their logs may follow the link back to the spammer's referrer page.
Because of the large amount of spam posted to user-editable webpages, Google proposed a "nofollow" tag that could be embedded with links. A link-based search engine, such as Google'sPageRanksystem, will not use the link to increase the score of the linked website if the link carries a nofollow tag. This ensures that spamming links to user-editable websites will not raise the site's ranking with search engines. Nofollow is used by several websites, such asWordpress,BloggerandWikipedia.[citation needed]
Amirror siteis the hosting of multiple websites with conceptually similar content but using differentURLs. Some search engines give a higher rank to results where the keyword searched for appears in the URL.
URL redirectiontakes the user to another page without his or her intervention,e.g., usingMETA refreshtags,Flash,JavaScript,JavaorServer side redirects. However, a301 Redirect, or permanent redirect, is not considered malicious behavior.
Cloakingrefers to any of several means to serve a page to the search-enginespiderthat is different from that seen by human users. It can be an attempt to mislead search engines regarding the content on a particular web site. Cloaking, however, can also be used to ethically increase accessibility of a site to users with disabilities or provide human users with content that search engines aren't able to process or parse. It is also used to deliver content based on a user's location; Google itself usesIP delivery, a form of cloaking, to deliver results. Another form of cloaking iscode swapping,i.e., optimizing a page for top ranking and then swapping another page in its place once a top ranking is achieved. Google refers to these types of redirects asSneaky Redirects.[25]
Spamdexed pages are sometimes eliminated from search results by the search engine.
Users can employ search operators for filtering. For Google, a keyword preceded by "-" (minus) will omit sites that contain the keyword in their pages or in the URL of the pages from the search results. As an example, the search "-<unwanted site>" will eliminate sites that contain the word "<unwanted site>" in their pages and the pages whose URL contains "<unwanted site>".
Users could also use theGoogle Chromeextension "Personal Blocklist (by Google)", launched by Google in 2011 as part of countermeasures againstcontent farming.[26]Via the extension, users could block a specific page, or set of pages from appearing in their search results. As of 2021, the original extension appears to be removed, although similar-functioning extensions may be used.
Possible solutions to overcome search-redirection poisoning redirecting to illegal internet pharmacies include notification of operators of vulnerable legitimate domains. Further, manual evaluation of SERPs, previously published link-based and content-based algorithms as well as tailor-made automatic detection and classification engines can be used as benchmarks in the effective identification of pharma scam campaigns.[27]
|
https://en.wikipedia.org/wiki/Spamdexing
|
Innatural language processing, aword embeddingis a representation of a word. Theembeddingis used intext analysis. Typically, the representation is areal-valuedvector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning.[1]Word embeddings can be obtained usinglanguage modelingandfeature learningtechniques, where words or phrases from the vocabulary are mapped tovectorsofreal numbers.
Methods to generate this mapping includeneural networks,[2]dimensionality reductionon the wordco-occurrence matrix,[3][4][5]probabilistic models,[6]explainable knowledge base method,[7]and explicit representation in terms of the context in which words appear.[8]
Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such assyntactic parsing[9]andsentiment analysis.[10]
Indistributional semantics, a quantitative methodological approach for understanding meaning in observed language, word embeddings or semanticfeature spacemodels have been used as a knowledge representation for some time.[11]Such models aim to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that "a word is characterized by the company it keeps" was proposed in a 1957 article byJohn Rupert Firth,[12]but also has roots in the contemporaneous work on search systems[13]and in cognitive psychology.[14]
The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is thevector space modelfor information retrieval.[15][16][17]Such vector space models for words and their distributional data implemented in their simplest form result in a very sparse vector space of high dimensionality (cf.curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such assingular value decompositionthen led to the introduction oflatent semantic analysisin the late 1980s and therandom indexingapproach for collecting word co-occurrence contexts.[18][19][20][21]In 2000,Bengioet al. provided, in a series of papers titled "Neural probabilistic language models", a framework to reduce the high dimensionality of word representations in contexts by "learning a distributed representation for words".[22][23][24]
A study published inNeurIPS(NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example ofself-supervised learningof word embeddings.[25]
Word embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004.[26]Roweis and Saul published inSciencehow to use "locally linear embedding" (LLE) to discover representations of high dimensional data structures.[27]Most new word embedding techniques after about 2005 rely on aneural networkarchitecture instead of more probabilistic and algebraic models, after foundational work done by Yoshua Bengio[28][circular reference]and colleagues.[29][30]
The approach has been adopted by many research groups after theoretical advances in 2010 had been made on the quality of vectors and the training speed of the model, as well as after hardware advances allowed for a broaderparameter spaceto be explored profitably. In 2013, a team atGoogleled byTomas Mikolovcreatedword2vec, a word embedding toolkit that can train vector space models faster than previous approaches. The word2vec approach has been widely used in experimentation and was instrumental in raising interest for word embeddings as a technology, moving the research strand out of specialised research into broader experimentation and eventually paving the way for practical application.[31]
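As an illustration of how such a toolkit is typically used, the sketch below trains a tiny skip-gram model with the Gensim library (parameter names follow Gensim 4.x); the toy corpus is obviously far too small to yield meaningful vectors.

from gensim.models import Word2Vec

# Toy corpus: a list of tokenised sentences (a real corpus would be far larger)
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "cat", "sat", "on", "the", "mat"],
]

# Train a skip-gram model (sg=1) with 50-dimensional vectors
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200, seed=0)

vector = model.wv["queen"]                       # the embedding of a single word
similar = model.wv.most_similar("king", topn=3)  # nearest neighbours in vector space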
Historically, one of the main limitations of static word embeddings or wordvector space modelsis that words with multiple meanings are conflated into a single representation (a single vector in the semantic space). In other words,polysemyandhomonymyare not handled properly. For example, in the sentence "The club I tried yesterday was great!", it is not clear if the termclubis related to the word sense of aclub sandwich,clubhouse,golf club, or any other sense thatclubmight have. The necessity to accommodate multiple meanings per word in different vectors (multi-sense embeddings) is the motivation for several contributions in NLP to split single-sense embeddings into multi-sense ones.[32][33]
Most approaches that produce multi-sense embeddings can be divided into two main categories for their word sense representation, i.e., unsupervised and knowledge-based.[34]Based onword2vecskip-gram, Multi-Sense Skip-Gram (MSSG)[35]performs word-sense discrimination and embedding simultaneously, improving its training time, while assuming a specific number of senses for each word. In the Non-Parametric Multi-Sense Skip-Gram (NP-MSSG) this number can vary depending on each word. Combining the prior knowledge of lexical databases (e.g.,WordNet,ConceptNet,BabelNet), word embeddings andword sense disambiguation, Most Suitable Sense Annotation (MSSA)[36]labels word-senses through an unsupervised and knowledge-based approach, considering a word's context in a pre-defined sliding window. Once the words are disambiguated, they can be used in a standard word embeddings technique, so multi-sense embeddings are produced. MSSA architecture allows the disambiguation and annotation process to be performed recurrently in a self-improving manner.[37]
The use of multi-sense embeddings is known to improve performance in several NLP tasks, such aspart-of-speech tagging, semantic relation identification,semantic relatedness,named entity recognitionand sentiment analysis.[38][39]
As of the late 2010s, contextually-meaningful embeddings such asELMoandBERThave been developed.[40]Unlike static word embeddings, these embeddings are at the token-level, in that each occurrence of a word has its own embedding. These embeddings better reflect the multi-sense nature of words, because occurrences of a word in similar contexts are situated in similar regions of BERT’s embedding space.[41][42]
Word embeddings forn-grams in biological sequences (e.g. DNA, RNA, and Proteins) forbioinformaticsapplications have been proposed by Asgari and Mofrad.[43]Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning inproteomicsandgenomics. The results presented by Asgari and Mofrad[43]suggest that BioVectors can characterize biological sequences in terms of biochemical and biophysical interpretations of the underlying patterns.
Word embeddings with applications ingame designhave been proposed by Rabii and Cook[44]as a way to discoveremergent gameplayusing logs of gameplay data. The process requires transcribing actions that occur during a game within aformal languageand then using the resulting text to create word embeddings. The results presented by Rabii and Cook[44]suggest that the resulting vectors can capture expert knowledge about games likechessthat are not explicitly stated in the game's rules.
The idea has been extended to embeddings of entire sentences or even documents, e.g. in the form of thethought vectorsconcept. In 2015, some researchers suggested "skip-thought vectors" as a means to improve the quality ofmachine translation.[45]A more recent and popular approach for representing sentences is Sentence-BERT, or SentenceTransformers, which modifies pre-trainedBERTwith the use of siamese and triplet network structures.[46]
Software for training and using word embeddings includesTomáš Mikolov'sWord2vec, Stanford University'sGloVe,[47]GN-GloVe,[48]Flair embeddings,[38]AllenNLP'sELMo,[49]BERT,[50]fastText,Gensim,[51]Indra,[52]andDeeplearning4j.Principal Component Analysis(PCA) andT-Distributed Stochastic Neighbour Embedding(t-SNE) are both used to reduce the dimensionality of word vector spaces and visualize word embeddings andclusters.[53]
For instance, fastText is also used to calculate word embeddings fortext corporainSketch Enginethat are available online.[54]
Word embeddings may contain the biases and stereotypes present in the training dataset. Bolukbasi et al. point out in the 2016 paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings" that a publicly available (and popular) word2vec embedding trained on Google News texts (a commonly used data corpus), which consists of text written by professional journalists, still shows disproportionate word associations reflecting gender and racial biases when extracting word analogies.[55]For example, one of the analogies generated using the aforementioned word embedding is "man is to computer programmer as woman is to homemaker".[56][57]
Research done by Jieyu Zhou et al. shows that the applications of these trained word embeddings without careful oversight likely perpetuate existing bias in society, which is introduced through unaltered training data. Furthermore, word embeddings can even amplify these biases.[58][59]
|
https://en.wikipedia.org/wiki/Word_vector
|
TheBird–Meertens formalism(BMF) is acalculusforderiving programsfromprogram specifications(in afunctional programmingsetting) by a process of equational reasoning. It was devised byRichard BirdandLambert Meertensas part of their work withinIFIP Working Group 2.1.
It is sometimes referred to in publications as BMF, as a nod toBackus–Naur form. Facetiously it is also referred to asSquiggol, as a nod toALGOL, which was also in the remit of WG 2.1, and because of the "squiggly" symbols it uses. A less-used variant name, but actually the first one suggested, isSQUIGOL.
Martin and Nipkow provided automated support for Squiggol development proofs, using theLarch Prover.[1]
Mapis a well-known second-order function that applies a given function to every element of a list; in BMF, it is written∗{\displaystyle *}:
f∗[x1,x2,…,xn]=[fx1,fx2,…,fxn]{\displaystyle f*[x_{1},x_{2},\dots ,x_{n}]=[f\,x_{1},f\,x_{2},\dots ,f\,x_{n}]}
Likewise,reduceis a function that collapses a list into a single value byrepeated application of a binary operator. It is written / in BMF.
Taking⊕{\displaystyle \oplus }as a suitable binary operator with neutral elemente, we have
⊕/[x1,x2,…,xn]=x1⊕x2⊕⋯⊕xn{\displaystyle \oplus /[x_{1},x_{2},\dots ,x_{n}]=x_{1}\oplus x_{2}\oplus \dots \oplus x_{n}}and⊕/[]=e{\displaystyle \oplus /[\,]=e}.
Using those two operators and the primitives+{\displaystyle +}(as the usual addition), and++{\displaystyle +\!\!\!+}(for list concatenation), we can easily express the sum of all elements of a list, and theflattenfunction, assum=+/{\displaystyle {\rm {sum}}=+/}andflatten=++/{\displaystyle {\rm {flatten}}=+\!\!\!+/}, inpoint-free style. For example,sum[1,2,3]=+/[1,2,3]=1+2+3=6{\displaystyle {\rm {sum}}\,[1,2,3]=+/\,[1,2,3]=1+2+3=6}.
Similarly, writing⋅{\displaystyle \cdot }forfunctional compositionand∧{\displaystyle \land }forconjunction, it is easy to write a function testing that all elements of a list satisfy a predicatep, simply asallp=(∧/)⋅(p∗){\displaystyle {\rm {all}}\ p=(\land /)\cdot (p*)}.
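These point-free definitions translate almost literally into an ordinary functional language; the following sketch, assuming nothing beyond Python's functools and operator modules, mirrors sum, flatten and all.

from functools import reduce
import operator

def sum_(xs):
    return reduce(operator.add, xs, 0)                # +/ with neutral element 0

def flatten(xss):
    return reduce(operator.concat, xss, [])           # ++/ with neutral element []

def all_(p, xs):
    return reduce(operator.and_, map(p, xs), True)    # (∧/) · (p∗)

assert sum_([1, 2, 3]) == 6
assert flatten([[1], [2, 3], []]) == [1, 2, 3]
assert all_(lambda x: x > 0, [1, 2, 3])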
Bird (1989) transforms inefficient easy-to-understand expressions ("specifications") into efficient involved expressions ("programs") by algebraic manipulation. For example, the specification "max⋅mapsum⋅segs{\displaystyle \mathrm {max} \cdot \mathrm {map} \;\mathrm {sum} \cdot \mathrm {segs} }" is an almost literal translation of themaximum segment sum problem,[6]but running that functional program on a list of sizen{\displaystyle n}will take timeO(n3){\displaystyle {\mathcal {O}}(n^{3})}in general. From this, Bird computes an equivalent functional program that runs in timeO(n){\displaystyle {\mathcal {O}}(n)}, and is in fact a functional version ofKadane's algorithm.
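To make the efficiency gap concrete, the sketch below implements both the cubic specification and the derived linear-time program in Python and checks that they agree on a small example; the data values are arbitrary.

def mss_spec(xs):
    """Literal specification: maximum of the sums of all contiguous segments; O(n^3)."""
    n = len(xs)
    segments = [xs[i:j] for i in range(n) for j in range(i, n + 1)]
    return max(sum(segment) for segment in segments)

def mss_fast(xs):
    """Derived linear-time program, a functional form of Kadane's algorithm."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)   # best sum of a segment ending here
        best = max(best, ending_here)
    return best

data = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
assert mss_spec(data) == mss_fast(data) == 187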
The derivation proceeds by stepwise algebraic manipulation, tracking the computational complexity[7]of each intermediate expression and the law applied at each step. The notation in Bird's paper differs from that used above:map{\displaystyle \mathrm {map} },concat{\displaystyle \mathrm {concat} }, andfoldl{\displaystyle \mathrm {foldl} }correspond to∗{\displaystyle *},flatten{\displaystyle \mathrm {flatten} }, and a generalized version of/{\displaystyle /}above, respectively, whileinits{\displaystyle \mathrm {inits} }andtails{\displaystyle \mathrm {tails} }compute a list of allprefixesandsuffixesof their argument, respectively. As above, function composition is denoted by "⋅{\displaystyle \cdot }", which has the lowestbinding precedence.
A functionhon lists is called a listhomomorphismif there exists an associative binary operator⊕{\displaystyle \oplus }and neutral elemente{\displaystyle e}such that the following holds:
h[]=e{\displaystyle h\,[\,]=e}
h(x++y)=hx⊕hy{\displaystyle h\,(x{+\!\!\!+}y)=h\,x\oplus h\,y}
Thehomomorphism lemmastates thathis a homomorphism if and only if there exists an operator⊕{\displaystyle \oplus }and a functionfsuch thath=(⊕/)⋅(f∗){\displaystyle h=(\oplus /)\cdot (f*)}.
A point of great interest for this lemma is its application to the derivation of highlyparallelimplementations of computations. Indeed, it is trivial to see thatf∗{\displaystyle f*}has a highly parallel implementation, and so does⊕/{\displaystyle \oplus /}— most obviously as a binary tree. Thus for any list homomorphismh, there exists a parallel implementation. That implementation cuts the list into chunks, which are assigned to different computers; each computes the result on its own chunk. It is those results that transit on the network and are finally combined into one. In any application where the list is enormous and the result is a very simple type – say an integer – the benefits of parallelisation are considerable. This is the basis of themap-reduceapproach.
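The chunk-and-combine pattern can be sketched as follows; the helper name and the chunk size are arbitrary, and in a real deployment each chunk would be reduced on a separate worker before the partial results are combined.

from functools import reduce

def homomorphism(f, combine, neutral, chunks):
    """Evaluate h = (⊕/) · (f∗) chunk-wise: each chunk is reduced independently
    (potentially on a different machine) and the partial results are combined."""
    partials = [reduce(combine, map(f, chunk), neutral) for chunk in chunks]
    return reduce(combine, partials, neutral)

# Example: summing the squares of a list split into chunks of 100 elements
data = list(range(1_000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
result = homomorphism(lambda x: x * x, lambda a, b: a + b, 0, chunks)
assert result == sum(x * x for x in data)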
|
https://en.wikipedia.org/wiki/Bird%E2%80%93Meertens_formalism
|
The termcitizen science(synonymous with terms likecommunity science,crowd science,crowd-sourced science,civic science,participatory monitoring, orvolunteer monitoring) refers toresearchconducted with participation from the general public, oramateur/nonprofessional researchers or participants ofscience,social scienceand many other disciplines.[1][2]There are variations in the exact definition of citizen science, with different individuals and organizations having their own specific interpretations of what citizen science encompasses.[1]Citizen science is used in a wide range of areas of study includingecology,biologyandconservation,healthandmedical research,astronomy,media and communicationsandinformation science.[1][3]
There are different applications and functions of "citizen science" in research projects.[1][3]Citizen science can be used as a methodology where public volunteers help in collecting and classifyingdata, improving the scientific community's capacity.[3][4]Citizen science can also involve more direct involvement from the public, with communities initiating projects researching environment and health hazards in their own communities.[3]Participation in citizen science projects also educates the public about the scientific process and increases awareness about different topics.[3][5][4]Some schools have students participate in citizen science projects for this purpose as a part of the teaching curriculums.[5][4][6]
The first use of the term "citizen science" can be found in a January 1989 issue ofMIT Technology Review, which featured three community-based labs studying environmental issues.[1][7]In the 21st century, the number of citizen science projects, publications, and funding opportunities has increased.[1][3]Citizen science has been used more over time, a trend helped by technological advancements.[1][3][8]Digital citizen science platforms, such asZooniverseandiNaturalist, store large amounts of data for many projects and are a place where volunteers can learn how to contribute to projects.[9][1]For some projects, participants are instructed to collect and enter data, such as what species they observed, into large digital global databases.[3][10]For other projects, participants help classify data on digital platforms.[3]Citizen science data is also being used to developmachine learningalgorithms.[8][1]An example is using volunteer-classified images to train machine learning algorithms to identify species.[8][1]While global participation and global databases are found on online platforms,[10][1]not all locations always have the same amount of data from contributors.[8][11]Concerns over potential data quality issues, such asmeasurement errorsandbiases, in citizen science projects are recognized in the scientific community and there are statistical solutions and best practices available which can help.[10][12]
The term "citizen science" has multiple origins, as well as differing concepts.[13]"Citizen" is used in the general sense, as meaning in "citizen of the world", or the general public, rather than the legal termcitizenof sovereign countries. It was first defined independently in the mid-1990s byRick Bonneyin the United States andAlan Irwinin the United Kingdom.[13][14][15]Alan Irwin, a British sociologist, defines citizen science as "developing concepts of scientific citizenship which foregrounds the necessity of opening up science and science policy processes to the public".[13]Irwin sought to reclaim two dimensions of the relationship between citizens and science: 1) that science should be responsive to citizens' concerns and needs; and 2) that citizens themselves could produce reliable scientific knowledge.[16]The AmericanornithologistRick Bonney, unaware of Irwin's work, defined citizen science as projects in which nonscientists, such as amateur birdwatchers, voluntarily contributed scientific data. This describes a more limited role for citizens in scientific research than Irwin's conception of the term.[16]
The termscitizen scienceandcitizen scientistsentered theOxford English Dictionary(OED) in June 2014.[17][18]"Citizen science" is defined as "scientific work undertaken by members of the general public, often in collaboration with or under the direction of professional scientists and scientific institutions".[18]"Citizen scientist" is defined as: (a) "a scientist whose work is characterized by a sense of responsibility to serve the best interests of the wider community (now rare)"; or (b) "a member of the general public who engages in scientific work, often in collaboration with or under the direction of professional scientists and scientific institutions; an amateur scientist".[18]The first use of the term "citizen scientist" can be found in the magazineNew Scientistin an article aboutufologyfrom October 1979.[19]
Muki Haklaycites, from a policy report for theWilson Centerentitled "Citizen Science and Policy: A European Perspective", an alternate first use of the term "citizen science" by R. Kerson in the magazineMIT Technology Reviewfrom January 1989.[20][7]Quoting from the Wilson Center report: "The new form of engagement in science received the name 'citizen science'. The first recorded example of the use of the term is from 1989, describing how 225 volunteers across the US collected rain samples to assist theAudubon Societyin an acid-rain awareness raising campaign."[20][7]
A Green Paper on Citizen Science was published in 2013 by the European Commission's Digital Science Unit and Socientize.eu, which included a definition for citizen science, referring to "the general public engagement in scientific research activities when citizens actively contribute to science either with their intellectual effort or surrounding knowledge or with their tools and resources. Participants provide experimental data and facilities for researchers, raise new questions and co-create a new scientific culture."[21][22]
Citizen science may be performed by individuals, teams, or networks of volunteers. Citizen scientists often partner with professional scientists to achieve common goals. Large volunteer networks often allow scientists to accomplish tasks that would be too expensive or time-consuming to accomplish through other means.[23]
Many citizen-science projects serve education and outreach goals.[24][25][26]These projects may be designed for a formal classroom environment or an informal education environment such as museums.
Citizen science has evolved over the past four decades. Recent projects place more emphasis on scientifically sound practices and measurable goals for public education.[27]Modern citizen science differs from its historical forms primarily in the access for, and subsequent scale of, public participation; technology is credited as one of the main drivers of the recent explosion of citizen science activity.[23]
In March 2015, theOffice of Science and Technology Policypublished a factsheet entitled "Empowering Students and Others through Citizen Science and Crowdsourcing".[28]Quoting: "Citizen science and crowdsourcing projects are powerful tools for providing students with skills needed to excel in science, technology, engineering, and math (STEM). Volunteers in citizen science, for example, gain hands-on experience doing real science, and in many cases take that learning outside of the traditional classroom setting".[28]The National Academies of Science citesSciStarteras a platform offering access to more than 2,700 citizen science projects and events, as well as helping interested parties access tools that facilitate project participation.[29]
In May 2016, a newopen-access journalwas started by theCitizen Science Associationalong withUbiquity PresscalledCitizen Science: Theory and Practice(CS:T&P).[30][31]Quoting from the editorial article titled "The Theory and Practice of Citizen Science: Launching a New Journal", "CS:T&Pprovides the space to enhance the quality and impact of citizen science efforts by deeply exploring the citizen science concept in all its forms and across disciplines. By examining, critiquing, and sharing findings across a variety of citizen science endeavors, we can dig into the underpinnings and assumptions of citizen science and critically analyze its practice and outcomes."[31]
In February 2020, Timber Press, an imprint of Workman Publishing Company, published The Field Guide to Citizen Science as a practical guide for anyone interested in getting started with citizen science.[32]
Other definitions for citizen science have also been proposed. For example, Bruce Lewenstein of Cornell University's Communication and S&TS departments describes three possible definitions:[33]
Scientists and scholars who have used other definitions include Frank N. von Hippel, Stephen Schneider, Neal Lane and Jon Beckwith.[34][35][36] Other alternative terminologies proposed are "civic science" and "civic scientist".[37]
Further, Muki Haklay offers an overview of the typologies of the level of citizen participation in citizen science, which range from "crowdsourcing" (level 1), where the citizen acts as a sensor; to "distributed intelligence" (level 2), where the citizen acts as a basic interpreter; to "participatory science" (level 3), where citizens contribute to problem definition and data collection; to "extreme citizen science" (level 4), which involves collaboration between citizens and scientists in problem definition, data collection and analysis.[38]
A 2014 Mashable article defines a citizen scientist as: "Anybody who voluntarily contributes his or her time and resources toward scientific research in partnership with professional scientists."[39]
In 2016, the Australian Citizen Science Association released their definition, which states "Citizen science involves public participation and collaboration in scientific research with the aim to increase scientific knowledge."[40][41]
In 2020, a group of birders in the Pacific Northwest of North America, eBird Northwest, has sought to rename "citizen science" to the use of "community science", "largely to avoid using the word 'citizen' when we want to be inclusive and welcoming to any birder or person who wants to learn more about bird watching, regardless of their citizen status."[42]
In a smart city era, citizen science relies on various web-based tools, such as WebGIS, and becomes cyber citizen science.[43] Some projects, such as SETI@home, use the Internet to take advantage of distributed computing. These projects are generally passive: computation tasks are performed by volunteers' computers and require little involvement beyond initial setup. There is disagreement as to whether such projects should be classified as citizen science.
The astrophysicist and Galaxy Zoo co-founder Kevin Schawinski stated: "We prefer to call this [Galaxy Zoo] citizen science because it's a better description of what you're doing; you're a regular citizen but you're doing science. Crowd sourcing sounds a bit like, well, you're just a member of the crowd and you're not; you're our collaborator. You're pro-actively involved in the process of science by participating."[44]
Compared to SETI@home, "Galaxy Zoo volunteers do real work. They're not just passively running something on their computer and hoping that they'll be the first person to find aliens. They have a stake in science that comes out of it, which means that they are now interested in what we do with it, and what we find."[44]
Citizen policy may be another result of citizen science initiatives. Bethany Brookshire (pen name SciCurious) writes: "If citizens are going to live with the benefits or potential consequences of science (as the vast majority of them will), it's incredibly important to make sure that they are not only well informed about changes and advances in science and technology, but that they also ... are able to ... influence the science policy decisions that could impact their lives."[45]In "The Rightful Place of Science: Citizen Science", editorsDarlene Cavalierand Eric Kennedy highlight emerging connections between citizen science, civic science, and participatory technology assessment.[46]
The general public's involvement in scientific projects has become a means of encouraging curiosity and greater understanding of science while providing an unprecedented engagement between professional scientists and the general public.[5]In a research report published by the U.S.National Park Servicein 2008, Brett Amy Thelen and Rachel K. Thiet mention the following concerns, previously reported in the literature, about the validity of volunteer-generated data:[47][48]
The question of data accuracy, in particular, remains open.[49]John Losey, who created theLost Ladybugcitizen science project, has argued that the cost-effectiveness of citizen science data can outweigh data quality issues, if properly managed.[50]
In December 2016, authors M. Kosmala, A. Wiggins, A. Swanson and B. Simmons published a study in the journal Frontiers in Ecology and the Environment called "Assessing Data Quality in Citizen Science".[51] The abstract describes how ecological and environmental citizen science projects have enormous potential to advance science. Citizen science projects can influence policy and guide resource management by producing datasets that are otherwise not feasible to generate.[51] In the section "In a Nutshell" (p. 3), four condensed conclusions are stated. They are:[51]
They conclude that as citizen science continues to grow and mature, a key metric of project success they expect to see will be a growing awareness of data quality. They also conclude that citizen science will emerge as a general tool helping "to collect otherwise unobtainable high-quality data in support of policy and resource management, conservation monitoring, and basic science."[51]
A study of Canadianlepidopteradatasets published in 2018 compared the use of a professionally curated dataset of butterfly specimen records with four years of data from a citizen science program,eButterfly.[52][53]The eButterfly dataset was used as it was determined to be of high quality because of the expert vetting process used on site, and there already existed a dataset covering the same geographic area consisting of specimen data, much of it institutional. The authors note that, in this case, citizen science data provides both novel and complementary information to the specimen data. Five new species were reported from the citizen science data, and geographic distribution information was improved for over 80% of species in the combined dataset when citizen science data was included.
Several recent studies have begun to explore the accuracy of citizen science projects and how to predict accuracy based on variables such as the expertise of practitioners. One example is a 2021 study by Edgar Santos-Fernandez and Kerrie Mengersen of the British Ecological Society, who used a case study, built with the R and Stan statistical software, to rate the accuracy of species identifications performed by citizen scientists in Serengeti National Park, Tanzania. This provided insight into possible problems with such processes, including "discriminatory power and guessing behaviour". The researchers determined that methods for rating the citizen scientists themselves based on skill level and expertise might make the studies they conduct easier to analyze.[54]
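The study itself fitted statistical models in R and Stan; the snippet below is only a minimal, illustrative sketch (not the authors' model) of the underlying idea of scoring volunteers by their agreement with expert-verified labels. The volunteer IDs, image IDs, species names and the smoothing prior are all invented for illustration.

```python
# Minimal sketch (not the study's actual R/Stan model): estimate each
# volunteer's identification accuracy by comparing their classifications
# against expert-verified ("gold") labels for the same images.
# All data below are made up for illustration.

from collections import defaultdict

# (volunteer_id, image_id, species_reported)
classifications = [
    ("v1", "img1", "zebra"), ("v1", "img2", "gazelle"), ("v1", "img3", "lion"),
    ("v2", "img1", "zebra"), ("v2", "img2", "impala"),  ("v2", "img3", "lion"),
    ("v3", "img1", "wildebeest"), ("v3", "img2", "gazelle"), ("v3", "img3", "lion"),
]

# Expert-verified labels for a subset of images.
gold = {"img1": "zebra", "img2": "gazelle", "img3": "lion"}

def volunteer_accuracy(classifications, gold, prior_correct=1, prior_total=2):
    """Smoothed accuracy estimate per volunteer.

    The pseudo-counts act like a Beta(1, 1) prior, keeping estimates
    sensible for volunteers with very few gold-standard comparisons.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for vol, img, species in classifications:
        if img in gold:
            total[vol] += 1
            correct[vol] += int(species == gold[img])
    return {
        vol: (correct[vol] + prior_correct) / (total[vol] + prior_total)
        for vol in total
    }

if __name__ == "__main__":
    for vol, acc in sorted(volunteer_accuracy(classifications, gold).items()):
        print(f"{vol}: estimated accuracy {acc:.2f}")
```

In a production analysis, these per-volunteer scores would typically feed a hierarchical model rather than being used directly, but the sketch conveys how classifications can be weighted by demonstrated skill.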
Studies that are simple in execution are where citizen science excels, particularly in conservation biology and ecology. For example, in 2019, Sumner et al. compared data on vespid wasp distributions collected by citizen scientists with the four-decade, long-term dataset established by BWARS.[55] They set up the Big Wasp Survey from 26 August to 10 September 2017, inviting citizen scientists to trap wasps and send them for identification by experts, where the data were recorded. Over 2,000 citizen scientists took part in data collection, and over 6,600 wasps were identified. This study provides strong evidence that citizen science can generate high-quality data comparable to expert data collection, within a shorter time frame. Although the experiment was originally designed to test the strength of citizen science, the team also learned more about Vespidae biology and species distribution in the United Kingdom. The simple procedure allowed the citizen science to be carried out successfully. A study by J. Cohn notes that volunteers can be trained to use equipment and process data, especially considering that a large proportion of citizen scientists are individuals who are already well versed in the field of science.[56]
The demographics of participants in citizen science projects are overwhelmingly White adults with above-average incomes and university degrees.[57] Other groups of volunteers include conservationists, outdoor enthusiasts, and amateur scientists. As such, citizen scientists are generally individuals with a prior understanding of the scientific method and of how to conduct sensible and just scientific analysis.
Various studies have been published that explore the ethics of citizen science, including issues such as intellectual property and project design (e.g.[13][12][58][59][60]). The Citizen Science Association (CSA), based at the Cornell Lab of Ornithology, and the European Citizen Science Association (ECSA), based in the Museum für Naturkunde in Berlin, have working groups on ethics and principles.[61][62]
In September 2015, ECSA published itsTen Principles of Citizen Science, which have been developed by the "Sharing best practice and building capacity" working group of ECSA, led by theNatural History Museum, Londonwith input from many members of the association.[63][64]
The medical ethics of internet crowdsourcing has been questioned by Graber & Graber in theJournal of Medical Ethics.[65]In particular, they analyse the effect of games and the crowdsourcing projectFoldit. They conclude: "games can have possible adverse effects, and that they manipulate the user into participation".
In March 2019, the online journalCitizen Science: Theory and Practicelaunched a collection of articles on the theme of Ethical Issues in Citizen Science.[66]The articles are introduced with (quoting): "Citizen science can challenge existing ethical norms because it falls outside of customary methods of ensuring that research is conducted ethically. What ethical issues arise when engaging the public in research? How have these issues been addressed, and how should they be addressed in the future?"[66]
In June 2019,East Asian Science, Technology and Society: An International Journal(EASTS) published an issue titled "Citizen Science: Practices and Problems" which contains 15 articles/studies on citizen science, including many relevant subjects of which ethics is one.[67]Quoting from the introduction "Citizen, Science, and Citizen Science": "The term citizen science has become very popular among scholars as well as the general public, and, given its growing presence in East Asia, it is perhaps not a moment too soon to have a special issue of EASTS on the topic."[68]
The use of citizen science volunteers as de facto unpaid laborers by some commercial ventures has been criticized as exploitative.[69]
Ethics in citizen science in the health and welfare field has been discussed in terms of protection versus participation. Public involvement researcher Kristin Liabo writes that health researchers might, in light of their ethics training, be inclined to exclude vulnerable individuals from participation in order to protect them from harm. However, she argues that these groups are already likely to be excluded from participation in other arenas, and that participation can be empowering and an opportunity to gain life skills that these individuals need. Whether or not to become involved should be a decision made together with these individuals, not by the researcher alone.[70]
In the research paper "Can citizen science enhance public understanding of science?" by Bonney et al. 2016,[71]statistics which analyse the economic worth of citizen science are used, drawn from two papers:i)Sauermann and Franzoni 2015,[72]andii)Theobald et al. 2015.[73]In "Crowd science user contribution patterns and their implications" by Sauermann and Franzoni (2015), seven projects from the Zooniverse web portal are used to estimate the monetary value of the citizen science that had taken place. The seven projects are: Solar Stormwatch, Galaxy Zoo Supernovae, Galaxy Zoo Hubble, Moon Zoo, Old Weather, The Milky Way Project and Planet Hunters.[72]Using data from 180 days in 2010, they find a total of 100,386 users participated, contributing 129,540 hours of unpaid work.[72]Estimating at a rate of $12 an hour (an undergraduate research assistant's basic wage), the total contributions amount to $1,554,474, an average of $222,068 per project.[72]The range over the seven projects was from $22,717 to $654,130.[72]
In "Global change and local solutions: Tapping the unrealized potential of citizen science for biodiversity research" by Theobald et al. 2015, the authors surveyed 388 unique biodiversity-based projects.[73]Quoting: "We estimate that between 1.36 million and 2.28 million people volunteer annually in the 388 projects we surveyed, though variation is great" and that "the range of in-kind contribution of the volunteerism in our 388 citizen science projects as between $667 million to $2.5 billion annually."[73]
Worldwide participation in citizen science continues to grow. A list of the top five citizen science communities compiled by Marc Kuchner and Kristen Erickson in July 2018 shows a total of 3.75 million participants, although there is likely substantial overlap between the communities.
There have been studies published which examine the place of citizen science within education.(e.g.[5][74][75]) Teaching aids can include books and activity or lesson plans.(e.g.[76][77][78][79]). Some examples of studies are:
From theSecond International Handbook of Science Education, a chapter entitled: "Citizen Science, Ecojustice, and Science Education: Rethinking an Education from Nowhere", by Mueller and Tippins (2011), acknowledges in the abstract that: "There is an emerging emphasis in science education on engaging youth in citizen science." The authors also ask: "whether citizen science goes further with respect to citizen development."[80]The abstract ends by stating that the "chapter takes account of the ways educators will collaborate with members of the community to effectively guide decisions, which offers promise for sharing a responsibility for democratizing science with others."[80]
From the journal Democracy and Education, an article entitled "Lessons Learned from Citizen Science in the Classroom" by authors Gray, Nicosia and Jordan (GNJ; 2012) gives a response to a study by Mueller, Tippins and Bryan (MTB) called "The Future of Citizen Science".[81][82] GNJ begin by stating in the abstract that "The Future of Citizen Science" "provides an important theoretical perspective about the future of democratized science and K–12 education." But GNJ state: "However, the authors (MTB) fail to adequately address the existing barriers and constraints to moving community-based science into the classroom." They end the abstract by arguing "that the resource constraints of scientists, teachers, and students likely pose problems to moving true democratized science into the classroom."[81]
In 2014, a study was published called "Citizen Science and Lifelong Learning" by R. Edwards in the journalStudies in the Education of Adults.[83]Edwards begins by writing in the abstract that citizen science projects have expanded over recent years and engaged citizen scientists and professionals in diverse ways. He continues: "Yet there has been little educational exploration of such projects to date."[83]He describes that "there has been limited exploration of the educational backgrounds of adult contributors to citizen science". Edwards explains that citizen science contributors are referred to as volunteers, citizens or as amateurs. He ends the abstract: "The article will explore the nature and significance of these different characterisations and also suggest possibilities for further research."[83]
In the journalMicrobiology and Biology Educationa study was published by Shah and Martinez (2015) called "Current Approaches in Implementing Citizen Science in the Classroom".[84]They begin by writing in the abstract that citizen science is a partnership between inexperienced amateurs and trained scientists. The authors continue: "With recent studies showing a weakening in scientific competency of American students, incorporating citizen science initiatives in the curriculum provides a means to address deficiencies".[84]They argue that combining traditional and innovative methods can help provide a practical experience of science. The abstract ends: "Citizen science can be used to emphasize the recognition and use of systematic approaches to solve problems affecting the community."[84]
In November 2017, authors Mitchell, Triska and Liberatore published a study inPLOS Onetitled "Benefits and Challenges of Incorporating Citizen Science into University Education".[85]The authors begin by stating in the abstract that citizen scientists contribute data with the expectation that it will be used. It reports that citizen science has been used for first year university students as a means to experience research. They continue: "Surveys of more than 1500 students showed that their environmental engagement increased significantly after participating in data collection and data analysis."[85]However, only a third of students agreed that data collected by citizen scientists was reliable. A positive outcome of this was that the students were more careful of their own research. The abstract ends: "If true for citizen scientists in general, enabling participants as well as scientists to analyse data could enhance data quality, and so address a key constraint of broad-scale citizen science programs."[85]
Citizen science has also been described as challenging the "traditional hierarchies and structures of knowledge creation".[69]
While citizen science developed at the end of the 20th century, characteristics of citizen science are not new.[86][1] Prior to the 20th century, science was often the pursuit of gentleman scientists, amateur or self-funded researchers such as Sir Isaac Newton, Benjamin Franklin, and Charles Darwin.[23] Women citizen scientists from before the 20th century include Florence Nightingale, who "perhaps better embodies the radical spirit of citizen science".[87] Before the professionalization of science at the end of the 19th century, most people pursued scientific projects as an activity rather than as a profession, an example being amateur naturalists in the 18th and 19th centuries.[86]
During the British colonization of North America, American Colonists recorded the weather, offering much of the information now used to estimate climate data and climate change during this time period. These people includedJohn Campanius Holm, who recorded storms in the mid-1600s, as well asGeorge Washington,Thomas Jefferson, andBenjamin Franklinwho tracked weather patterns during America's founding. Their work focused on identifying patterns by amassing their data and that of their peers and predecessors, rather than specific professional knowledge in scientific fields.[88]Some consider these individuals to be the first citizen scientists, some consider figures such asLeonardo da VinciandCharles Darwinto be citizen scientists, while others feel that citizen science is a distinct movement that developed later on, building on the preceding history of science.[1][86]
By the mid-20th century, however, science was dominated by researchers employed by universities and government research laboratories. By the 1970s, this transformation was being called into question. Philosopher Paul Feyerabend called for a "democratization of science".[89] Biochemist Erwin Chargaff advocated a return to science by nature-loving amateurs in the tradition of Descartes, Newton, Leibniz, Buffon, and Darwin—science dominated by "amateurship instead of money-biased technical bureaucrats".[90]
A study from 2016 indicates that the largest impact of citizen science is in research on biology, conservation and ecology, and is utilized mainly as a methodology of collecting and classifying data.[3]
Astronomy has long been a field to which amateurs have contributed, and they continue to do so to the present day.[91]
Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes withequipment that they build themselves. Common targets of amateur astronomers include the Moon, planets, stars, comets, meteor showers, and a variety ofdeep-sky objectssuch as star clusters, galaxies, and nebulae. Observations of comets and stars are also used to measure the local level ofartificial skyglow.[92][93]One branch of amateur astronomy, amateurastrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events that interest them.[94][95]
The American Association of Variable Star Observers has gathered data on variable stars for educational and professional analysis since 1911 and promotes participation beyond its membership on its Citizen Sky website.[96]
Project PoSSUM is a relatively new organization, started in March 2012, which trains citizen scientists of many ages to go on polar suborbital missions. On these missions, they studynoctilucent cloudswithremote sensing, which reveals interesting clues about changes in the upper atmosphere and the ozone due to climate change. This is a form of citizen science which trains younger generations to be ambitious, participating in intriguing astronomy and climate change science projects even without a professional degree.[97]
Butterfly counts have a long tradition of involving individuals in the study of butterflies' range and their relative abundance. Two long-running programs are the UK Butterfly Monitoring Scheme (started in 1976) and theNorth American Butterfly Association's Butterfly Count Program (started in 1975).[98][99]There are various protocols for monitoring butterflies and different organizations support one or more of transects, counts and/or opportunistic sightings.[100]eButterflyis an example of a program designed to capture any of the three types of counts for observers in North America. Species-specific programs also exist, with monarchs the prominent example.[101]Two examples of this involve the counting of monarch butterflies during the fallmigrationto overwintering sites in Mexico: (1) Monarch Watch is a continent-wide project, while (2) the Cape May Monarch Monitoring Project is an example of a local project.[102][103]
Citizen science projects have become increasingly focused on providing benefits to scientific research.[104][105][106]TheNorth American Bird Phenology Program(historically called the Bird Migration and Distribution records) may have been the earliest collective effort of citizens collecting ornithological information in the U.S.[107]The program, dating back to 1883, was started by Wells Woodbridge Cooke. Cooke established a network of observers around North America to collect bird migration records. TheAudubon Society'sChristmas Bird Count, which began in 1900, is another example of a long-standing tradition of citizen science which has persisted to the present day,[108][109]now containing a collection of six million handwritten migration observer cards that date back to the 19th century. Participants input this data into an online database for analysis. Citizen scientists help gather data that will be analyzed by professional researchers, and can be used to produce bird population and biodiversity indicators.
Raptor migration research relies on the data collected by thehawkwatchingcommunity. This mostly volunteer group counts migrating accipiters, buteos, falcons, harriers, kites, eagles, osprey, vultures and other raptors at hawk sites throughout North America during the spring and fall seasons.[110]The daily data is uploaded to hawkcount.org where it can be viewed by professional scientists and the public.
Other programs in North America include Project FeederWatch, which is affiliated with the Cornell Lab of Ornithology.[111]
Such indices can be useful tools to inform management, resource allocation, policy and planning.[112] For example, European breeding bird survey data provide input for the Farmland Bird Index, adopted by the European Union as a structural indicator of sustainable development.[113] This provides a cost-effective alternative to government monitoring.
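As a hedged illustration of how a composite indicator of this kind is often constructed (not necessarily the exact Farmland Bird Index methodology), the sketch below converts each species' annual survey counts into an index relative to a base year and takes the geometric mean across species; the counts and species are invented for the example.

```python
# Illustrative sketch of a multi-species bird indicator: each species' annual
# count becomes an index relative to the base year, and the composite
# indicator is the geometric mean of the species indices. This is a
# simplified illustration, not the exact EU Farmland Bird Index method;
# the counts below are invented.

from math import prod

# Annual survey counts per species (invented data), base year first.
counts = {
    "skylark":      [120, 110, 100, 95],
    "yellowhammer": [80, 78, 70, 66],
    "corn bunting": [30, 31, 28, 25],
}

def composite_index(counts):
    """Geometric mean of per-species indices, with the base year set to 100."""
    n_years = len(next(iter(counts.values())))
    indicator = []
    for year in range(n_years):
        species_indices = [c[year] / c[0] for c in counts.values()]
        indicator.append(100 * prod(species_indices) ** (1 / len(species_indices)))
    return indicator

print([round(v, 1) for v in composite_index(counts)])
# e.g. [100.0, 97.4, 88.0, 81.6] (a decline relative to the base year)
```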
Similarly, data collected by citizen scientists as part of BirdLife Australia's surveys has been analysed to produce the first-ever Australian Terrestrial Bird Indices.[114]
In the UK, the Royal Society for the Protection of Birds collaborated with a children’s TV show to create a national birdwatching day in 1979; the campaign has continued for over 40 years and in 2024, over 600,000 people counted almost 10 million birds during the Big Garden Birdwatch weekend.[115]
More recently, more programs have sprung up worldwide, including NestWatch, a bird species monitoring program which tracks data on reproduction. This might include studies on when and how often nesting occurs, counting eggs laid and how many hatch successfully, and what proportion of hatchlings survive infancy. The program is easy for members of the general public to join: using the NestWatch app, which is available on almost all devices, anyone can begin to observe their local species, recording results every 3 to 4 days within the app. This forms a continually growing database which researchers can view and use to understand trends within specific bird populations.[116]
The concept of citizen science has been extended to the ocean environment for characterizing ocean dynamics and tracking marine debris. For example, the mobile app Marine Debris Tracker is a joint partnership of the National Oceanic and Atmospheric Administration and the University of Georgia.[117] Long-term sampling efforts such as the continuous plankton recorder have been fitted on ships of opportunity since 1931. Plankton collection by sailors and subsequent genetic analysis was pioneered in 2013 by Indigo V Expeditions as a way to better understand marine microbial structure and function.[118]
Citizen science incoral reefstudies developed in the 21st century.
Underwater photography has become more popular since the development of moderately priced digital cameras with waterproof housings in the early 2000s, resulting in millions of pictures posted every year on various websites and social media. This mass of documentation has great scientific potential, as millions of tourists collectively provide far greater coverage than professional scientists, who cannot spend as much time in the field.
As a consequence, several participatory science programs have been developed, supported by geotagging and identification websites such as iNaturalist. The Monitoring through many eyes project collates thousands of underwater images of the Great Barrier Reef and provides an interface for elicitation of reef health indicators.[119]
The National Oceanic and Atmospheric Administration (NOAA) also offers opportunities for volunteer participation. By taking measurements in the United States' national marine sanctuaries, citizens contribute data to marine biology projects. In 2016, NOAA benefited from 137,000 hours of volunteer research.[120]
There also exist protocols for auto-organization and self-teaching aimed at biodiversity-interested snorkelers, in order for them to turn their observations into sound scientific data, available for research. This kind of approach has been successfully used inRéunion island, allowing for tens of new records and even new species.[121]
Aquarium hobbyists and their respective organizations are very passionate about fish conservation and often more knowledgeable about specific fish species and groups than scientific researchers.[122]They have played an important role in the conservation of freshwater fishes by discovering new species, maintaining extensive databases with ecological information on thousands of species (such as for catfish,[123]Mexican freshwater fishes,[124]killifishes,[125]cichlids[126]), and successfully keeping and providing endangered and extinct-in-the-wild species for conservation projects.[127][128]The CARES (Conservation, Awareness, Recognition, Encouragement, and Support) preservation program[129]is the largest hobbyist organization containing over 30 aquarium societies and international organizations, and encourages serious aquarium hobbyists to devote tank space to the most threatened or extinct-in-the-wild species to ensure their survival for future generations.
Citizen scientists also work to monitor and conserve amphibian populations. One recent project isFrogWatch USA, organized by theAssociation of Zoos and Aquariums. Participants are invited to educate themselves on their local wetlands and help to save amphibian populations by reporting the data on the calls of local frogs and toads. The project already has over 150,000 observations from more than 5000 contributors. Participants are trained by program coordinators to identify calls and utilize this training to report data they find between February and August of each "monitoring season". Data is used to monitor diversity, invasion, and long-term shifts in population health within these frog and toad communities.[130]
Reef Life Survey is a marine life monitoring programme based in Hobart, Tasmania.[131] The project uses recreational divers who have been trained to make fish and invertebrate counts, using an approximately 50 m constant-depth transect of tropical and temperate reefs, which might include coral reefs.[131] Reef Life Survey is international in its scope, but the data collectors are predominantly from Australia. The database is available to marine ecology researchers, and is used by several marine protected area managements in Australia, New Zealand, American Samoa and the eastern Pacific.[132][133] Its results have also been included in the Australian Ocean Data Network.[134]
Farmer participation in experiments has a long tradition in agricultural science.[135] There are many opportunities for citizen engagement in different parts of food systems.[136] Citizen science is actively used for crop variety selection for climate adaptation, involving thousands of farmers.[137] Citizen science has also played a role in furthering sustainable agriculture.
Citizen science has a long tradition in natural science. Today, citizen science projects can also be found in various fields of science like art history. For example, the Zooniverse project AnnoTate is a transcription tool developed to enable volunteers to read and transcribe the personal papers of British-born and émigré artists.[138] The papers are drawn from the Tate Archive. Another example of citizen science in art history is ARTigo.[139] ARTigo collects semantic data on artworks from the footprints left by players of games featuring artwork images. From these footprints, ARTigo automatically builds a semantic search engine for artworks.
Citizen science has made significant contributions to the analysis of biodiversity across the world. A majority of data collected has been focused primarily on species occurrence, abundance and phenology, with birds being the most popular group observed.[140] There are growing efforts to expand the use of citizen science to other fields. Past data on biodiversity have been limited in quantity, making it difficult to draw meaningful broad connections to losses in biodiversity. Recruiting citizens already out in the field opens up a tremendous amount of new data. For example, thousands of farmers reporting the changes in biodiversity on their farms over many years has provided a large amount of relevant data concerning the effect of different farming methods on biodiversity.[141] Another example is WomSAT,[142] a citizen science project that collects data on wombat roadkill[143] and sarcoptic mange incidence and distribution,[144] to support conservation efforts for the species.
Citizen science can be used to great effect in addition to the usual scientific methods in biodiversity monitoring. The typical active method of species detection can collect data on the broad biodiversity of areas, while citizen science approaches have been shown to be more effective at identifying invasive species.[145] In combination, this provides an effective strategy for monitoring changes in the biodiversity of ecosystems.
In the research fields of health and welfare, citizen science is often discussed in other terms, such as "public involvement", "user engagement", or "community member involvement". However, the meaning is similar to citizen science, with the exception that citizens are not often involved in collecting data but are more often involved in the prioritisation of research ideas and in improving methodology, e.g. survey questions. In recent decades, researchers and funders have gained awareness of the benefits of involving citizens in research work, but involving citizens in a meaningful way is not yet common practice.[146] There is an ongoing discussion on how to evaluate citizen science in health and welfare research.[147]
One aspect to consider in citizen science in health and welfare, which stands out compared to other academic fields, is who to involve. When research concerns human experiences, representation of a group becomes important. While it is commonly acknowledged that the people involved need to have lived experience of the concerned topic,[148] representation is still an issue, and researchers are debating whether this is a useful concept in citizen science.
Newer technologies have increased the options for citizen science.[149] Citizen scientists can build and operate their own instruments to gather data for their own experiments or as part of a larger project. Examples include amateur radio, amateur astronomy, Six Sigma projects, and Maker activities. Scientist Joshua Pearce has advocated the creation of open-source-hardware-based scientific equipment that both citizen scientists and professional scientists can use, and which can be replicated by digital manufacturing techniques such as 3D printing.[150] Multiple studies have shown this approach radically reduces the cost of scientific equipment.[151][152] Examples of this approach include water testing, nitrate and other environmental testing, basic biology and optics.[152][153][154][155] Groups such as Public Lab, a community where citizen scientists can learn how to investigate environmental concerns using inexpensive DIY techniques, embody this approach.[153]
Video technology is much used in scientific research. The Citizen Science Center in the Nature Research Center wing of the North Carolina Museum of Natural Sciences has exhibits on how to get involved in scientific research and become a citizen scientist. For example, visitors can observe birdfeeders at the Prairie Ridge Ecostation satellite facility via live video feed and record which species they see.
Since 2005, theGenographic Projecthas used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage and involve the public in the research effort has helped to create a new breed of "citizen scientist". Geno 2.0 expands the scope for citizen science, harnessing the power of thecrowdto discover new details of human population history.[156]This includes supporting, organization and dissemination ofpersonal DNA testing. Likeamateur astronomy, citizen scientists encouraged by volunteer organizations like theInternational Society of Genetic Genealogyhave provided valuable information and research to the professional scientific community.[157][158]
Unmanned aerial vehicles enable further citizen science. One example is ESA's AstroDrone smartphone app for gathering robotic data with the Parrot AR.Drone.[159]
Citizens in Space (CIS), a project of the United States Rocket Academy, seeks to combine citizen science with citizen space exploration.[160]CIS is training citizen astronauts to fly as payload operators on suborbital reusable spacecraft that are now in development. CIS will also be developing, and encouraging others to develop, citizen-science payloads to fly on suborbital vehicles. CIS has already acquired a contract for 10 flights on the Lynx suborbital vehicle, being developed byXCOR Aerospace, and plans to acquire additional flights onXCOR Lynxand other suborbital vehicles in the future.[160]
CIS believes that "The development of low-cost reusable suborbital spacecraft will be the next great enabler, allowing citizens to participate in space exploration and space science."[161]
The website CitizenScience.gov was started by the U.S. government to "accelerate the use of crowdsourcing and citizen science" in the United States. Following the rapid increase in internet-based citizen science projects, the site has become one of the most prominent resource banks for citizen scientists and government supporters alike. It features three sections: a catalog of existing, federally supported citizen science projects; a toolkit to help federal officials develop and maintain their own projects; and several other resources and projects. It was created as the result of a mandate within the Crowdsourcing and Citizen Science Act of 2016 (15 USC 3724).[162]
The Internet has been a boon to citizen science, particularly through gamification.[149] One of the first Internet-based citizen science experiments was NASA's Clickworkers, which enabled the general public to assist in the classification of images, greatly reducing the time to analyze large data sets. Another was the Citizen Science Toolbox, launched in 2003, of the Australian Coastal Collaborative Research Centre.[163] Mozak is a game in which players create 3D reconstructions from images of actual human and mouse neurons, helping to advance understanding of the brain. One of the largest citizen science games is Eyewire, a brain-mapping puzzle game developed at the Massachusetts Institute of Technology that now has over 200,000 players.[164] Another example is Quantum Moves, a game developed by the Center for Community Driven Research at Aarhus University, which uses online community efforts to solve quantum physics problems.[165][166] The solutions found by players can then be used in the lab to feed computational algorithms used in building a scalable quantum computer.
More generally, Amazon'sMechanical Turkis frequently used in the creation, collection, and processing of data by paid citizens.[167][168]There is controversy as to whether or not the data collected through such services is reliable, as it is subject to participants' desire for compensation.[169]However, use of Mechanical Turk tends to quickly produce more diverse participant backgrounds, as well as comparably accurate data when compared to traditional collection methods.[170]
The internet has also enabled citizen scientists to gather data to be analyzed by professional researchers. Citizen science networks are often involved in the observation of cyclic events of nature (phenology), such as effects of global warming on plant and animal life in different geographic areas,[171] and in monitoring programs for natural-resource management.[172][173][174] On BugGuide.Net, an online community of naturalists who share observations of arthropods, amateurs and professional researchers contribute to the analysis. As of October 2022, BugGuide had over 1,886,513 images submitted by 47,732 contributors.[175]
Not counting iNaturalist and eBird,[176] the Zooniverse is home to the internet's largest, most popular and most successful citizen science projects.[177][178] The Zooniverse and the suite of projects it contains are produced, maintained and developed by the Citizen Science Alliance (CSA).[179] The member institutions of the CSA work with many academic and other partners around the world to produce projects that use the efforts and ability of volunteers to help scientists and researchers deal with the flood of data that confronts them. On 29 June 2015, the Zooniverse released a new software version with a project-building tool allowing any registered user to create a project.[180] Project owners may optionally complete an approval process to have their projects listed on the Zooniverse site and promoted to the Zooniverse community.[181] One example of a Zooniverse project is The Milky Way Project.
The websiteCosmoQuesthas as its goal "To create a community of people bent on together advancing our understanding of the universe; a community of people who are participating in doing science, who can explain why what they do matters, and what questions they are helping to answer."[182]
CrowdCrafting enables its participants to create and run projects where volunteers help with image classification, transcription, geocoding and more.[183]The platform is powered by PyBossa software, a free and open-source framework for crowdsourcing.[184]
Project Soothe is a citizen science research project based at the University of Edinburgh. The aim of this research is to create a bank of soothing images, submitted by members of the public, which can be used to help others through psychotherapy and research in the future. Since 2015, Project Soothe has received over 600 soothing photographs from people in 23 countries. Anyone aged 12 years or over is eligible to participate in this research in two ways: (1) By submitting soothing photos that they have taken with a description of why the images make them feel soothed (2) By rating the photos that have been submitted by people worldwide for their soothability.[185]
The internet has allowed many individuals to share and upload massive amounts of data. Using the internet, citizen observatories have been designed as platforms to both increase citizen participation and improve citizens' knowledge of their surrounding environment by collecting whatever data the program focuses on.[186] The idea is to make it easier and more exciting for citizens to get and stay involved in local data collection.
The rise of social media has helped the public provide large amounts of information for citizen science programs. In a case study by Andrea Liberatore, Erin Bowkett, Catriona J. MacLeod, Eric Spurr, and Nancy Longnecker, the New Zealand Garden Bird Survey is examined as one such project conducted with the aid of social media. It examines the influence of using a Facebook group to collect data from citizen scientists as the researchers worked on the project over the span of a year. The authors claim that this use of social media greatly helped the efficiency of the study and made the atmosphere feel more communal.[187]
The bandwidth and ubiquity afforded by smartphones has vastly expanded the opportunities for citizen science. Examples include iNaturalist, Chronolog, the San Francisco project, the WildLab, Project Noah,[188][189][190] and Aurorasurus. The ubiquity of Twitter, Facebook, and smartphones has also been useful for citizen scientists: for example, it enabled them to discover and propagate a new type of aurora dubbed "STEVE" in 2016.[191]
There are alsoappsfor monitoring birds, marine wildlife and other organisms, and the "Loss of the Night".[192][193]Chronolog, another citizen science initiative, uses smartphone photography to crowdsource environmental monitoring through timelapses.[194]By positioning their cameras at designated photo stations and submitting images, participants contribute to long-term ecological records at parks and conservation sites across 48 U.S. states and 10 countries.[195][196]Restoration professionals and other land stewards use this data to measure ecosystem health and understand the effectiveness of conservation interventions likehabitat restoration,controlled burns, removal of invasive species, planting ofnative species, and efforts to improve water quality.[194][195][197][198][199]
"The Crowd and the Cloud" is a four-part series broadcast during April 2017, which examines citizen science.[200]It shows how smartphones, computers and mobile technology enable regular citizens to become part of a 21st-century way of doing science.[200]The programs also demonstrate how citizen scientists help professional scientists to advance knowledge, which helps speed up new discoveries and innovations. The Crowd & The Cloud is based upon work supported by the U.S.National Science Foundation.[200]
Since 1975, in order to improve earthquake detection and collect useful information, the European-Mediterranean Seismological Centre has monitored the visits of earthquake eyewitnesses to its website and relied on Facebook and Twitter.[201] More recently, it developed the LastQuake[202] mobile application, which notifies users about earthquakes occurring around the world, alerts people when earthquakes hit near them, and gathers citizen seismologists' testimonies to estimate the felt ground shaking and possible damage.
Citizen science has been used to provide valuable data inhydrology(catchment science), notably flood risk,water quality, andwater resource management.[203][204][205]A growth in internet use and smartphone ownership has allowed users to collect and share real-time flood-risk information using, for example, social media and web-based forms. Although traditional data collection methods are well-established, citizen science is being used to fill the data gaps on a local level, and is therefore meaningful to individual communities. Data collected from citizen science can also compare well to professionally collected data.[206]It has been demonstrated that citizen science is particularly advantageous during aflash floodbecause the public are more likely to witness these rarer hydrological events than scientists.[207]
Citizen science includes projects that help monitorplasticsand their associatedpollution.[208][209][210][211]These includeThe Ocean Cleanup, #OneLess, The Big Microplastic Survey, EXXpedition andAlliance to End Plastic Waste.[212][213][214][215]Ellipsis seeks to map the distribution of litter using aerial data mapping byunmanned aerial vehiclesandmachine learningsoftware.[216]A Zooniverse project called The Plastic Tide (now finished) helped train analgorithmused by Ellipsis.[217]
Examples of relevant articles (by date):
Examples of relevant scientific studies or books include (by date):
Citizen sensing can be a form of citizen science: (quote) "The work of citizen sensing, as a form of citizen science, then further transformsStengers's notion of the work of science by moving the experimental facts and collectives where scientific work is undertaken out of the laboratory of experts and into the world of citizens."[234]Similar sensing activities includeCrowdsensingandparticipatory monitoring. While the idea of using mobile technology to aid this sensing is not new, creating devices and systems that can be used to aid regulation has not been straightforward.[234]Some examples of projects that include citizen sensing are:
A group of citizen scientists in a community-led project targeting toxic smoke from wood burners in Bristol has recorded 11 breaches of World Health Organization daily guidelines for ultra-fine particulate pollution over a period of six months.[242][243]
In a £7M programme funded by water regulatorOfwat, citizen scientists are being trained to test for pollution andover-abstractionin 10 river catchment areas in the UK.[244]Sensors will be used and the information gathered will be available in a central visualisation platform.[244]The project is led byThe Rivers TrustandUnited Utilitiesand includes volunteers such as anglers testing the rivers they use.[245]TheAngling Trustprovides the pollution sensors, with Kristian Kent from the Trust saying: "Citizen science is a reality of the world in the future, so they’re not going to be able to just sweep it under the carpet."[245]
River water quality in the UK has been tested by a combined total of over 7,000 volunteers in so-called "blitzes" run over two weekends in 2024.[246] The research by the NGO Earthwatch Europe gathered data from 4,000 freshwater sites and used standardised testing equipment provided by the NGO and Imperial College. The second blitz, in October 2024, included testing for chemical pollutants such as antibiotics, agricultural chemicals and pesticides. Results from 4,531 volunteers showed that over 61% of the freshwater sites "were in a poor state because of high levels of the nutrients phosphate and nitrate, the main source of which is sewage effluent and agricultural runoff". The data gathered through robust volunteer testing are analysed and put into a report, helping provide the Environment Agency with information it does not have.[246]
Resources for computer science and scientificcrowdsourcingprojects concerning COVID-19 can be found on the internet or as apps.[247][248]Some such projects are listed below:
For coronavirus studies and information that can help enable citizen science, many online resources are available throughopen accessandopen sciencewebsites, including anintensive care medicinee-book chapter hosted byEMCrit[281]and portals run by theCambridge University Press,[282]the Europe branch of theScholarly Publishing and Academic Resources Coalition,[283]The Lancet,[284]John Wiley and Sons,[285]andSpringer Nature.[286]
There have been suggestions that the pandemic and subsequent lockdown has boosted the public’s awareness and interest in citizen science, with more people around the world having the motivation and the time to become involved in helping to investigate the illness and potentially move on to other areas of research.[287][288][289][290]
The Citizen Science Global Partnership was created in 2022;[291]the partnership brings together networks from Australia, Africa, Asia, Europe, South America and the USA.
The CitSci Africa Association held its International Conference in February 2024 in Nairobi.[300][301]
As technology and public interest grew, theCitizenScience.Asiagroup was set up in 2022; it grew from an initial hackathon in Hong Kong which worked on the 2016 Zika scare.[316]The network is part of Citizen Science Global Partnership.[317]
The English naturalist Charles Darwin (1809–1882) is widely regarded as one of the earliest citizen science contributors in Europe (see § History). A century later, adolescents in Italy took part in citizen science during the 1980s, working on urban energy use and air pollution.[318]
In his book "Citizen Science", Alan Irwin considers the role that scientific expertise can play in bringing the public and science together and building a more scientifically active citizenry, empowering individuals to contribute to scientific development.[14]Since then a citizen science green paper was published in 2013, and European Commission policy directives have included citizen science as one of five strategic areas with funding allocated to support initiatives through the 'Science With and For Society (SwafS)', a strand of the Horizon 2020 programme.[21][22]This includes significant awards such as the EU Citizen Science Project, which is creating a hub for knowledge sharing, coordination, and action.[319]TheEuropean Citizen Science Association(ECSA) was set up in 2014 to encourage the growth of citizen science across Europe, to increase public participation in scientific processes, mainly by initiating and supporting citizen science projects as well as conducting research. ECSA has a membership of over 250 individual and organisational members from over 30 countries across the European Union and beyond.
Examples of citizen science organisations and associations based in Europe include Biosphere Expeditions (Ireland),[320] Bürger schaffen Wissen (Germany),[321] the Citizen Science Lab at Leiden University (Netherlands),[322] Ibercivis (Spain), and Österreich forscht (Austria).[323] Other organisations can be found at EU Citizen Science.[324]
The European Citizen Science Association was created in 2014.[325]
In 2023, the European Union Prize for Citizen Science was established.[326]Bestowed throughArs Electronica, the prize was designed to honor, present and support "outstanding projects whose social and political impact advances the further development of a pluralistic, inclusive and sustainable society in Europe".[326]
The first Conference on Public Participation in Scientific Research was held in Portland, Oregon, in August 2012.[350]Citizen science is now often a theme at large conferences, such as the annual meeting of theAmerican Geophysical Union.[351]
In 2010, 2012 and 2014 there were three Citizen Cyberscience summits, organised by the Citizen Cyberscience Centre in Geneva and University College London.[352] The 2014 summit was hosted in London and attracted over 300 participants.[352]
In November 2015, ETH Zürich and the University of Zürich hosted an international meeting on the "Challenges and Opportunities in Citizen Science".[353]
The first citizen science conference hosted by the Citizen Science Association was in San Jose, California, in February 2015, in partnership with the AAAS conference.[354] The Citizen Science Association conference, CitSci 2017, was held in Saint Paul, Minnesota, United States, between 17 and 20 May 2017. The conference had more than 600 attendees.[355][356] The next CitSci was held in March 2019 in Raleigh, North Carolina.[355]
The platform "Österreich forscht" hosts the annualAustriancitizen science conference since 2015.[357]
Barbara Kingsolver's 2012 novel Flight Behaviour looks at the effects of citizen science on a housewife in Appalachia, when her interest in butterflies brings her into contact with scientists and academics.[358]
|
https://en.wikipedia.org/wiki/Citizen_science
|
ClickWorkers was a small NASA experimental project that used public volunteers (nicknamed "clickworkers" on the site) for scientific tasks. Clickworkers could work whenever, and for however long, they chose, doing routine analysis that would normally require months of work by scientists or graduate students. The web site and database were created and maintained by one engineer, Bob Kanefsky, and advised by two scientists, Nadine Barlow and Virginia Gulick.[1]The pilot study was sponsored by the NASA Ames Director's Discretionary Fund.
As of March 31, 2020, the Clickworkers volunteer program appears to be defunct. None of the links to the program are functional, as of that date.
The original phase ran from November 2000 to September 2001, identifying and classifying the ages of craters in Mars images from the Viking Orbiter that had already been analyzed by NASA. The goal was to answer two meta-science questions.
In February 2001, clickworkers started processing new images from Mars Global Surveyor, surveying small craters never before cataloged. Clickworkers also searched Mars images for "honeycomb" terrain, although no further images were found and it is suspected that this feature type is illusory. Their analysis could be useful to scientists, although there were no specific plans for using it.
As of 2007, new beta tasks were up on the Clickworkers site, asking workers to help catalog Mars landforms in one of two ways. In the first task, high-resolution images from the HiRISE camera on the Mars Reconnaissance Orbiter were displayed and volunteers stamped areas of the image with appropriate landform types. The second task took a different approach and displayed wider field views from the older MOC camera on Mars Global Surveyor. The landforms in these wider views were then marked, and interesting features could be tagged for possible future high-resolution imaging with HiRISE.
In November 2009 it was announced that NASA had developed a new website to allow volunteers to help with Martian mapping. The site, "Be a Martian", went live on November 17, 2009, and allowed users to either map features or count craters on Mars.[2]As of March 2020, the "Be a Martian" website also appears to be defunct.
|
https://en.wikipedia.org/wiki/Clickworkers
|
Acollaborative innovation network(CoIN) is a collaborative innovation practice that uses internet platforms to promote communication and innovation within self-organizing virtual teams.
CoINs work across hierarchies and boundaries, letting members exchange ideas and information directly and openly. This collaborative and transparent environment fosters innovation. Peter Gloor describes the phenomenon as "swarm creativity", saying, "CoINs are the best engines to drive innovation."[1]
CoINs existed well before the advent of modern communication technology, but the Internet and instant communication have improved their productivity and given them global reach. Today they rely on the Internet, e-mail, and other communication vehicles for information sharing.[1]
According to Gloor, CoINs have five main characteristics.[1]
There are also five essential elements of collaborative innovation networks, which Gloor calls their "genetic code".[1]
CoINs have produced many disruptive innovations such as the Internet, Linux, the Web and Wikipedia. These were created in universities or labs by students with little or no budget, who were motivated not by money but by a sense of accomplishment.[1]
Faced with creations like the Internet, large companies such asIBMandIntelhave learned to use the principles of open innovation to enhance their research learning curve. They increased or established collaborations with universities, agencies, and small companies to accelerate their processes and launch new services faster.[1]
Asheim and Isaksen (2002)[2]conclude that innovation networks contribute to the optimal allocation of resources and promote knowledge transfer. However, four factors of collaborative innovation networks affect their performance differently.[3]
Collaborative innovation still needs to be empowered: a more collaborative approach involving stakeholders such as governments, corporations, entrepreneurs, and scholars is seen as critical to tackling today's main challenges.
|
https://en.wikipedia.org/wiki/Collaborative_innovation_network
|
Collaborative mapping, also known as citizen mapping,[1]is the aggregation ofWeb mappinganduser-generated content,[2]from a group of individuals or entities, and can take several distinct forms. With the growth of technology for storing and sharing maps, collaborative maps have become competitors to commercial services, in the case ofOpenStreetMap, or components of them, as inGoogle Map Maker,WazeandYandex Map Editor.
Volunteers collect geographic information, and the citizens/individuals can be regarded as sensors within a geographical environment who create, assemble, and disseminate geographic data voluntarily.[2][3]Collaborative mapping is a special case of the larger phenomenon known as crowdsourcing, which allows citizens to take part in a collaborative approach to accomplish a goal. The goals in collaborative mapping have a geographical aspect, e.g. playing a more active role in urban planning. Especially when data, information, or knowledge is distributed across a population and no aggregation is available, collaborative mapping can benefit citizens or community activities through an e-planning platform.[4]Extensions of critical and participatory approaches to geographic information systems combine software tools with joint activities to accomplish a community goal.[5]Additionally, the aggregated data can be used for a location-based service, such as showing the public transport options available at the geolocation where a mobile device is currently used (via its GPS sensor). The relevance of such information for a user at a specific geolocation generally cannot be captured by a simple Boolean value (relevant = true/false); it can instead be represented with fuzzy logic or a fuzzy architectural spatial analysis.[6]
Collaborative mapping applications vary depending on where the collaborative editing takes place: on the map itself (a shared surface) or on overlays to the map. A very simple collaborative mapping application might just plot users' locations (social mapping or geosocial networking) or the locations of Wikipedia articles (Placeopedia). "Collaborative" implies the possibility of editing by several distinct individuals, so the term tends to exclude applications where the maps are not meant for the general user to modify.
In this kind of application, the map itself is created collaboratively by sharing a common surface. For example, both OpenStreetMap and WikiMapia allow the creation of single 'points of interest' as well as linear features and areas. Collaborative mapping, and surface sharing in particular, faces the same problems as revision control, namely concurrent-access issues and versioning. In addition, collaborative maps must deal with the difficult issue of cluttering, due to the geometric constraints inherent in the medium. One approach to this problem is the use of overlays, which allows suitable use in consumer services.[7]Despite these issues, collaborative mapping platforms such as OpenStreetMap can be considered as trustworthy as professionally produced maps.[8]
Overlays group together items on a map, allowing the user to toggle the overlay's visibility and thus all items it contains. The application uses map tiles from a third party (for example one of the mapping APIs) and adds its own collaboratively edited overlays to them, sometimes in a wiki fashion. If each user's revisions are contained in an overlay, the issues of revision control and cluttering can be mitigated. One example of this is the accessibility platform Accessadvisr, which uses collaborative mapping to inform people of accessibility issues[9]and is perceived to be as reliable and trustworthy as professional information.[10]
Other overlay-based collaborative mapping tools follow a different approach and focus on user-centered content creation and experience. Users enrich maps with their own points of interest and build a kind of travel book for themselves, while also being able to explore other users' overlays as a collaborative extension.
Humanitarian OpenStreetMap Team,[11][12][13]based onOpenStreetMap,[14]provides collaborative mapping support for humanitarian objectives, e.g. collaborative transportation map,[15]epidemiological mapping for Malaria,[16]earthquake response,[17]or typhoon response.[18]
Inrobot navigation, 3-dimensional maps can be reconstructed collaboratively usingsimultaneous localization and mapping.[19][20]
Some mapping companies offer an online mapping tool that allows private collaboration between users when mapping sensitive data on digital maps.
If citizens or a community collect data and information (as on Wikipedia or Wikiversity), concerns arise about data quality, and specifically about its credibility. The same aspects of quality assurance are relevant for collaborative mapping,[25]as is the possibility of vandalism.[26]
Collaborative mapping is not restricted to mobile devices, but if data is captured with a mobile device, satellite navigation (such as GPS) helps assign the current geolocation to the collected data. Open-source tools like ODK are used to collect mapping data (e.g. about health care facilities or humanitarian operations) with surveys that can automatically insert the geolocation into the survey data, which may include visual information (e.g. images, videos) and audio samples collected at that location. An image can be used, for example, as additional evidence for damage assessment after an earthquake.[27]
These sites provide general base map information and allow users to create their own content by marking locations where various events occurred or certain features exist, but aren’t already shown on the base map.
Some examples include 311-style request systems[28]and 3D spatial technology.[29]
Openness to change is available to all individuals, and the community validates changes by adding regions and locations to personal watchlists. Changes in the joint repository of the mapping process are captured by a version control system: changes can be reverted, and specific quality-assured versions of an area can be marked as the reference map for that area (like permanent links in Wikipedia). Quality assurance can be implemented at different scales.
Blockchain can be used as an integrity check on alterations,[30]or a digital signature[31]can be used to mark a certain version as "quality assured" by the institution that signed the map as a digital file or digital content.
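As a minimal sketch of how such an integrity check could work (the file name and the surrounding workflow are illustrative assumptions, not part of any particular mapping platform), a plain SHA-256 fingerprint of an exported map file can be recorded when a version is approved and re-checked later; publishing that fingerprint in a blockchain transaction or signing it with the institution's key would make the approval verifiable:

```python
import hashlib

def map_fingerprint(path: str) -> str:
    """Return a SHA-256 fingerprint of a map export (e.g. a GeoJSON or .osm file)."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical workflow: the fingerprint stored when the version was marked
# "quality assured" is later compared with a freshly computed one.
# approved = "..."  # fingerprint recorded at approval time
# assert map_fingerprint("region_v42.geojson") == approved
```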
|
https://en.wikipedia.org/wiki/Collaborative_mapping
|
Collective consciousness,collective conscience, orcollective conscious(French:conscience collective) is the set of shared beliefs, ideas, and moral attitudes which operate as a unifying force within society.[1]In general, it does not refer to the specifically moral conscience, but to a shared understanding of social norms.[2]
The modern concept of what can be considered collective consciousness includessolidarityattitudes,memes, extreme behaviors likegroup-thinkandherd behavior, and collectively shared experiences during collective rituals, dance parties,[3]and the discarnate entities which can be experienced from psychedelic use.[4]
Rather than existing as separate individuals, people come together as dynamic groups to share resources and knowledge. It has also developed as a way of describing how an entire community comes together to share similar values. This has also been termed "hive mind", "group mind", "mass mind", and "social mind".[5]
The term was introduced by the FrenchsociologistÉmile Durkheimin hisThe Division of Labour in Societyin 1893. The French wordconsciencegenerally means "conscience", "consciousness", "awareness",[6]or "perception".[7]Given the multiplicity of definitions, translators of Durkheim disagree on which is most appropriate, or whether the translation should depend on the context. Some prefer to treat the word 'conscience' as an untranslatable foreign word or technical term, without its normal English meaning.[8]As for "collective", Durkheim makes clear that he is notreifyingorhypostasizingthis concept; for him, it is "collective" simply in the sense that it is common to many individuals;[9]cf.social fact.
Scipio Sighele published 'La Foule Criminelle' one year before Durkheim, in which he describes emergent characteristics of crowds that do not appear in the individuals who form the crowd. He does not call this collective consciousness, but 'âme de la foule' (soul of the crowd).[10]The term returns in Sigmund Freud's book on mass psychology and essentially overlaps with Durkheim's concept of collective consciousness.
Durkheim used the term in his booksThe Division of Labour in Society(1893),The Rules of the Sociological Method(1895),Suicide(1897), andThe Elementary Forms of Religious Life(1912). InThe Division of Labour, Durkheim argued that in traditional/primitive societies (those based around clan, family or tribal relationships),totemicreligion played an important role in uniting members through the creation of a common consciousness. In societies of this type, the contents of an individual's consciousness are largely shared in common with all other members of their society, creating amechanical solidaritythrough mutual likeness.
The totality of beliefs and sentiments common to the average members of a society forms a determinate system with a life of its own. It can be termed the collective or common consciousness.
InSuicide, Durkheim developed the concept ofanomieto refer to the social rather than individual causes of suicide. This relates to the concept of collective consciousness, as if there is a lack of integration or solidarity in society then suicide rates will be higher.[12]
Antonio Gramscistates, “A collective consciousness, which is to say a living organism, is formed only after the unification of the multiplicity through friction on the part of the individuals; nor can one say that ‘silence’ is not a multiplicity.”[13]A form of collective consciousness can be formed from Gramsci's conception that the presence of ahegemonycan mobilize the collective consciousness of thoseoppressedby the ruling ideas ofsociety, or the ruling hegemony. Collective consciousness can refer to amultitudeof different individual forms of consciousness coalescing into a greater whole. In Gramsci's view, a unified whole is composed ofsolidarityamong its different constituent parts, and therefore, this whole cannot be uniformly the same. The unified whole can embrace different forms of consciousness (or individual experiences of social reality), which coexist to reflect the different experiences of themarginalizedpeoples in a given society. This agrees with Gramsci's theory of Marxism andclass struggleapplied to cultural contexts.Cultural Marxism(as distinguished from the right-wing use of the term) embodies the concept of collective consciousness. It incorporatessocial movementsthat are based on some sort of collective identity; these identities can include, for instance,gender,sexual orientation,race, andability, and can be incorporated by collective-based movements into a broader historical material analysis of class struggle.
According to Michelle Filippini, “The nature and workings of collective organisms – not only parties, but also trade unions, associations and intermediate bodies in general – represent a specific sphere of reflection in the Prison Notebooks, particularly in regard to the new relationship between State and society that in Gramsci's view emerged during the age of mass politics.”[14]Collective organisms can express collective consciousness. Whether this form of expression finds itself in the realm of the state or the realm of society is up to the direction that the subjects take in expressing their collective consciousness. In Gramsci'sPrison Notebooks, the ongoing conflict betweencivil society, thebureaucracy, and the state necessitates the emergence of a collective consciousness that can often act as an intermediary between these different realms. The public organizations of protest, such aslabor unionsand anti-war organizations, are vehicles that can unite multiple types of collective consciousness. Although identity-based movements are necessary for the progress ofdemocracyand can generate collective consciousness, they cannot completely do so without a unifying framework. This is whyanti-warandlabor movementsprovide an avenue that has united various social movements under the banner of a multiple collective consciousness. This is also why future social movements need to have anethosof collective consciousness if they are to succeed in the long-term.
Zukerfeld states that “The different disciplines that have studied knowledge share an understanding of it as a product of human subjects – individual, collective, etc.”[15]Knowledge in a sociological sense is derived from social conditions and social realities. Collective consciousness also reflects social realities, and sociological knowledge can be gained through the adoption of a collective consciousness. Many different disciplines, such as philosophy and literature, examine collective consciousness through different lenses. These disciplines reach a similar understanding of collective consciousness despite their different approaches to the subject. The inherent humanness in the idea of collective consciousness refers to a shared way of thinking among human beings in the pursuit of knowledge.
Collective consciousness can provide an understanding of the relationship betweenselfand society. As Zukerfeld states, “Even though it impels us, as a first customary gesture, to analyse the subjective (such as individual consciousness) or intersubjective bearers (such as the values of a given society), in other words those which Marxism and sociology examine, now we can approach them in an entirely different light.”[16]“Cognitive materialism”[15]is presented in the work by Zukerfeld as a sort of ‘third way’ between sociological knowledge and Marxism. Cognitive materialism is based on a kind of collective consciousness of themind. This consciousness can be used, with cognitive materialism as a guiding force, by human beings in order to critically analyze society and social conditions.
Society is made up of various collective groups, such as the family, community, organizations, regions and nations, which as Burns and Egdahl state "can be considered to possess agential capabilities: to think, judge, decide, act, reform; to conceptualize self and others as well as self's actions and interactions; and to reflect".[17]It is suggested that national behaviors vary according to differences in collective consciousness between nations. This illustrates that differences in collective consciousness can have practical significance.
According to one theory, the character of collective consciousness depends on the type of mnemonic encoding used within a group (Tsoukalas, 2007). The specific type of encoding used has a predictable influence on the group's behavior and collective ideology. Informal groups, which meet infrequently and spontaneously, tend to represent significant aspects of their community as episodic memories. This usually leads to strong social cohesion and solidarity, an indulgent atmosphere, an exclusive ethos and a restriction of social networks. Formal groups, which have scheduled and anonymous meetings, tend to represent significant aspects of their community as semantic memories, which usually leads to weak social cohesion and solidarity, a more moderate atmosphere, an inclusive ethos and an expansion of social networks.[18]
In a case study of a Serbian folk story, Wolfgang Ernst examines collective consciousness in terms of forms ofmedia, specifically collective oral and literary traditions. "Current discourse analysis drifts away from the 'culturalist turn' of the last two or three decades and its concern with individual and collective memory as an extended target of historical research".[19]There is still a collective consciousness present in terms of the shared appreciation offolk storiesandoral traditions. Folk stories enable the subject and the audiences to come together around a common experience and a shared heritage. In the case of the Serbian folk “gusle”,[20]the Serbian people take pride in this musical instrument of epic poetry and oral tradition and play it at social gatherings. Expressions ofartandcultureare expressions of a collective consciousness or expressions of multiple social realities.
|
https://en.wikipedia.org/wiki/Collective_consciousness
|
Collective intelligence(CI) is shared orgroupintelligence(GI) thatemergesfrom thecollaboration, collective efforts, and competition of many individuals and appears inconsensus decision making. The term appears insociobiology,political scienceand in context of masspeer reviewandcrowdsourcingapplications. It may involveconsensus,social capitalandformalismssuch asvoting systems,social mediaand other means of quantifying mass activity.[1]CollectiveIQis a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed tobacteriaand animals.[2]
It can be understood as an emergent property arising from the synergies among data, information and knowledge; software and hardware; and the expertise and insights of individuals.
Or it can be more narrowly understood as an emergent property between people and ways of processing information.[4]This notion of collective intelligence is referred to as "symbiotic intelligence" by Norman Lee Johnson.[5]The concept is used insociology,business,computer scienceand mass communications: it also appears inscience fiction.Pierre Lévydefines collective intelligence as, "It is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized orhypostatizedcommunities."[6]According to researchers Pierre Lévy andDerrick de Kerckhove, it refers to capacity of networkedICTs(Information communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions.[7][8]A broader definition was provided byGeoff Mulganin a series of lectures and reports from 2006 onwards[9]and in the book Big Mind[10]which proposed a framework for analysing any thinking system, including both human and machine intelligence, in terms of functional elements (observation, prediction, creativity, judgement etc.), learning loops and forms of organisation. The aim was to provide a way to diagnose, and improve, the collective intelligence of a city, business, NGO or parliament.
Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According toEric S. Raymondin 1998 and JC Herz in 2005,[11][12]open-source intelligencewill eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations.[13]Media theoristHenry Jenkinssees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence.[14]Both Pierre Lévy and Henry Jenkins support the claim that collective intelligence is important fordemocratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.[15][16]
Similar to the g factor (g) for general individual intelligence, a new scientific understanding of collective intelligence aims to extract a general collective intelligence factor, the c factor, indicating a group's ability to perform a wide range of tasks.[17]Definition, operationalization and statistical methods are derived from g. Just as g is highly interrelated with the concept of IQ,[18][19]this measurement of collective intelligence can be interpreted as an intelligence quotient for groups (Group-IQ), even though the score is not a quotient per se. Causes for c and its predictive validity are investigated as well.
Writers who have influenced the idea of collective intelligence includeFrancis Galton,Douglas Hofstadter(1979), Peter Russell (1983),Tom Atlee(1993),Pierre Lévy(1994),Howard Bloom(1995),Francis Heylighen(1995),Douglas Engelbart, Louis Rosenberg,Cliff Joslyn,Ron Dembo,Gottfried Mayer-Kress(2003), andGeoff Mulgan.
The concept (although not so named) originated in 1785 with the Marquis de Condorcet, whose "jury theorem" states that if each member of a voting group is more likely than not to make a correct decision, the probability that the majority decision of the group is correct increases with the number of members of the group.[20]Many theorists have interpreted Aristotle's statement in the Politics that "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision.[21][22]Recent scholarship,[23]however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.[24]
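The arithmetic behind the jury theorem is easy to verify directly. A minimal sketch, assuming independent voters who are each correct with the same probability p, is:

```python
from math import comb

def p_majority_correct(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters, each correct
    with probability p, reaches the correct decision (n odd, so no ties)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1))

# With p = 0.6, the majority is right about 75% of the time with 11 voters
# and well over 95% of the time with 101 voters.
for n in (1, 11, 101):
    print(n, round(p_majority_correct(n, 0.6), 3))
```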
A precursor of the concept is found in entomologistWilliam Morton Wheeler's observation in 1910 that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism.[25]Wheeler saw this collaborative process at work inantsthat acted like the cells of a single beast he called asuperorganism.
In 1912Émile Durkheimidentified society as the sole source of human logical thought. He argued in "The Elementary Forms of Religious Life" that society constitutes a higher intelligence because it transcends the individual over space and time.[26]Other antecedents areVladimir VernadskyandPierre Teilhard de Chardin's concept of "noosphere" andH. G. Wells's concept of "world brain".[27]Peter Russell,Elisabet Sahtouris, andBarbara Marx Hubbard(originator of the term "conscious evolution")[28]are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. In a 1962 research report,Douglas Engelbartlinked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".[29]In 1994, he coined the term 'collective IQ' as a measure of collective intelligence, to focus attention on the opportunity to significantly raise collective IQ in business and society.[30]
The idea of collective intelligence also forms the framework for contemporary democratic theories often referred to asepistemic democracy. Epistemic democratic theories refer to the capacity of the populace, either through deliberation or aggregation of knowledge, to track the truth and relies on mechanisms to synthesize and apply collective intelligence.[31]
Collective intelligence was introduced into the machine learning community in the late 20th century,[32]and matured into a broader consideration of how to design "collectives" of self-interested adaptive agents to meet a system-wide goal.[33][34]This was related to single-agent work on "reward shaping"[35]and has been taken forward by numerous researchers in the game theory and engineering communities.[36]
Howard Bloomhas discussed mass behavior –collective behaviorfrom the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts ofapoptosis,parallel distributed processing,group selection, and the superorganism to produce a theory of how collective intelligence works.[37]Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered byJohn Holland.[38]
Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life.[38]Ant societiesexhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for exampleaphidsfor "milking".[38]Leaf cutters care for fungi and carry leaves to feed the fungi.[38]
David Skrbina[39]cites the concept of a 'group mind' as being derived from Plato's concept ofpanpsychism(that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a 'group mind' as articulated byThomas HobbesinLeviathanandFechner's arguments for acollective consciousnessof mankind. He citesDurkheimas the most notable advocate of a "collective consciousness"[40]andTeilhard de Chardinas a thinker who has developed the philosophical implications of the group mind.[41]
Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individualcognitive biasin order to allow a collective to cooperate on one process – while achieving enhanced intellectual performance." George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration."[42]Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action".[43]Their approach is rooted inscientific community metaphor.[43]
The term group intelligence is sometimes used interchangeably with the term collective intelligence. Anita Woolley presents Collective intelligence as a measure of group intelligence and group creativity.[17]The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction.[44]The features of composition that lead to increased levels of collective intelligence in groups include criteria such as higher numbers of women in the group as well as increased diversity of the group.[44]
Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory andartificial intelligencehave something to offer.[43]Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts.[45]Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member.[46]Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.[43]
Robert David Steele VivasinThe New Craft of Intelligenceportrayed all citizens as "intelligence minutemen", drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.[47]
According to Don Tapscott and Anthony D. Williams, collective intelligence is mass collaboration. For this to happen, four principles need to be present.[48]
A new scientific understanding of collective intelligence defines it as a group's general ability to perform a wide range of tasks.[17]Definition, operationalization and statistical methods are similar to thepsychometric approach of general individual intelligence. Hereby, an individual's performance on a given set of cognitive tasks is used to measure general cognitive ability indicated by the general intelligencefactorgproposed by English psychologistCharles Spearmanand extracted viafactor analysis.[49]In the same vein asgserves to display between-individual performance differences on cognitive tasks, collective intelligence research aims to find a parallel intelligence factor for groups'cfactor'[17](also called 'collective intelligence factor' (CI)[50]) displaying between-group differences on task performance. The collective intelligence score then is used to predict how this same group will perform on any other similar task in the future. Yet tasks, hereby, refer to mental or intellectual tasks performed by small groups[17]even though the concept is hoped to be transferable to other performances and any groups or crowds reaching from families to companies and even whole cities.[51]Since individuals'gfactor scores are highly correlated with full-scaleIQscores, which are in turn regarded as good estimates ofg,[18][19]this measurement of collective intelligence can also be seen as an intelligence indicator or quotient respectively for a group (Group-IQ) parallel to an individual's intelligence quotient (IQ) even though the score is not a quotient per se.
Mathematically,candgare both variables summarizing positive correlations among different tasks supposing that performance on one task is comparable with performance on other similar tasks.[52]cthus is a source of variance among groups and can only be considered as a group's standing on thecfactor compared to other groups in a given relevant population.[19][53]The concept is in contrast to competing hypotheses including other correlational structures to explain group intelligence,[17]such as a composition out of several equally important but independent factors as found inindividual personality research.[54]
Besides, this scientific idea also aims to explore the causes affecting collective intelligence, such as group size, collaboration tools or group members' interpersonal skills.[55]TheMIT Center for Collective Intelligence, for instance, announced the detection ofThe Genome of Collective Intelligence[55]as one of its main goals aiming to develop a "taxonomy of organizational building blocks, or genes, that can be combined and recombined to harness the intelligence of crowds".[55]
Individual intelligence is shown to be genetically and environmentally influenced.[56][57]Analogously, collective intelligence research aims to explore reasons why certain groups perform more intelligently than other groups given thatcis just moderately correlated with the intelligence of individual group members.[17]According to Woolley et al.'s results, neither team cohesion nor motivation or satisfaction is correlated withc. However, they claim that three factors were found as significant correlates: the variance in the number of speaking turns, group members' average social sensitivity and the proportion of females. All three had similar predictive power forc, but only social sensitivity was statistically significant (b=0.33, P=0.05).[17]
The number of speaking turns indicates that "groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking".[50]Hence, giving multiple team members the chance to speak up made a group more intelligent.[17]
Group members' social sensitivity was measured via the Reading the Mind in the Eyes Test[58](RME) and correlated .26 withc.[17]Hereby, participants are asked to detect thinking or feeling expressed in other peoples' eyes presented on pictures and assessed in a multiple choice format. The test aims to measure peoples'theory of mind (ToM), also called 'mentalizing'[59][60][61][62]or 'mind reading',[63]which refers to the ability to attribute mental states, such as beliefs, desires or intents, to other people and in how far people understand that others have beliefs, desires, intentions or perspectives different from their own ones.[58]RME is a ToM test for adults[58]that shows sufficient test-retest reliability[64]and constantly differentiates control groups from individuals with functionalautismorAsperger Syndrome.[58]It is one of the most widely accepted and well-validated tests for ToM within adults.[65]ToM can be regarded as an associated subset of skills and abilities within the broader concept ofemotional intelligence.[50][66]
The proportion of females as a predictor of c was largely mediated by social sensitivity (Sobel z = 1.93, P = 0.03),[17]which is in line with previous research showing that women score higher on social sensitivity tests.[58]While a mediation, statistically speaking, clarifies the mechanism underlying the relationship between a dependent and an independent variable,[67]Woolley agreed in an interview with the Harvard Business Review that these findings say that groups of women are smarter than groups of men.[51]However, she qualified this by stating that what actually matters is the high social sensitivity of group members.[51]
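The mediation claim rests on a Sobel test, which divides the indirect effect (the product of the predictor-to-mediator and mediator-to-outcome paths) by its approximate standard error. A minimal sketch with purely illustrative coefficients, not the study's raw data, is:

```python
from math import sqrt
from statistics import NormalDist

def sobel(a: float, se_a: float, b: float, se_b: float) -> tuple[float, float]:
    """Sobel z-statistic and one-tailed p-value for an indirect effect a*b."""
    z = (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 1 - NormalDist().cdf(z)

# Illustrative path coefficients: a = proportion of women -> social sensitivity,
# b = social sensitivity -> c (controlling for the proportion of women).
print(sobel(a=0.40, se_a=0.15, b=0.33, se_b=0.12))
```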
It is theorized that the collective intelligence factorcis an emergent property resulting from bottom-up as well as top-down processes.[44]Hereby, bottom-up processes cover aggregated group-member characteristics. Top-down processes cover group structures and norms that influence a group's way of collaborating and coordinating.[44]
Top-down processes cover group interaction, such as structures, processes, and norms.[44]An example of such top-down processes is conversational turn-taking.[17]Research further suggest that collectively intelligent groups communicate more in general as well as more equally; same applies for participation and is shown for face-to-face as well as online groups communicating only via writing.[50][68]
Bottom-up processes include group composition,[44]namely the characteristics of group members which are aggregated to the team level.[44]An example of such bottom-up processes is the average social sensitivity or the average and maximum intelligence scores of group members.[17]Furthermore, collective intelligence was found to be related to a group's cognitive diversity[69]including thinking styles and perspectives.[70]Groups that are moderately diverse incognitive stylehave higher collective intelligence than those who are very similar in cognitive style or very different. Consequently, groups where members are too similar to each other lack the variety of perspectives and skills needed to perform well. On the other hand, groups whose members are too different seem to have difficulties to communicate and coordinate effectively.[69]
For most of human history, collective intelligence was confined to small tribal groups in which opinions were aggregated through real-time parallel interactions among members.[71]In modern times, mass communication, mass media, and networking technologies have enabled collective intelligence to span massive groups, distributed across continents and time zones. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time. While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, the first vote contributed to a serialized voting system was found to distort the final result by 34%.[72]
To address the problems of serialized aggregation of input among large-scale groups, recent advancements in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature.[73][74]Based on the natural processes of swarm intelligence, these artificial swarms of networked humans enable participants to work together in parallel to answer questions and make predictions as an emergent collective intelligence.[75][76]In one high-profile example, CBS Interactive challenged a human swarm to predict the Kentucky Derby; the swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.[77]
The value of parallel collective intelligence was demonstrated in medical applications by researchers atStanford University School of MedicineandUnanimous AIin a set of published studies wherein groups of human doctors were connected by real-time swarming algorithms and tasked with diagnosing chest x-rays for the presence of pneumonia.[78][79]When working together as "human swarms", the groups of experienced radiologists demonstrated a 33% reduction in diagnostic errors as compared to traditional methods.[80][81]
Woolley, Chabris, Pentland, Hashmi, & Malone (2010),[17]the originators of this scientific understanding of collective intelligence, found a single statistical factor for collective intelligence in their research across 192 groups with people randomly recruited from the public. In Woolley et al.'s two initial studies, groups worked together on different tasks from the McGrath Task Circumplex,[82]a well-established taxonomy of group tasks. Tasks were chosen from all four quadrants of the circumplex and included visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. The results on these tasks were used to conduct a factor analysis. Both studies showed support for a general collective intelligence factor c underlying differences in group performance, with an initial eigenvalue accounting for 43% (44% in study 2) of the variance, whereas the next factor accounted for only 18% (20%). That fits the range normally found in research on the general individual intelligence factor g, which typically accounts for 40% to 50% of between-individual performance differences on cognitive tests.[52]
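The "share of variance on the first factor" reported above can be reproduced in outline from the correlation matrix of group task scores. The sketch below uses simulated data, not the study's, so the exact percentage will differ:

```python
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_tasks = 192, 5

# Simulate task scores that all load on a single latent "c"-like factor.
latent_c = rng.normal(size=(n_groups, 1))
scores = 0.5 * latent_c + 0.8 * rng.normal(size=(n_groups, n_tasks))

corr = np.corrcoef(scores, rowvar=False)           # task-by-task correlations
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, largest first
print("first-factor share of variance:", eigvals[0] / eigvals.sum())
```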
Afterwards, a more complex task was solved by each group to determine whether c factor scores predict performance on tasks beyond the original test. Criterion tasks were playing checkers (draughts) against a standardized computer in the first study and a complex architectural design task in the second. In a regression analysis using both the individual intelligence of group members and c to predict performance on the criterion tasks, c had a significant effect, but average and maximum individual intelligence did not. While average (r=0.15, P=0.04) and maximum intelligence (r=0.19, P=0.008) of individual group members were moderately correlated with c, c was still a much better predictor of the criterion tasks. According to Woolley et al., this supports the existence of a collective intelligence factor c, because it demonstrates an effect over and beyond group members' individual intelligence and thus shows that c is more than just the aggregation of individual IQs or the influence of the group member with the highest IQ.[17]
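The key statistical move, entering both aggregated individual intelligence and c into the same regression of criterion-task performance, can be sketched with ordinary least squares. The data here are simulated under assumptions mimicking the reported pattern, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
c = rng.normal(size=n)                     # group scores on the collective factor
avg_iq = 0.2 * c + rng.normal(size=n)      # modestly correlated with c, as reported
criterion = 0.5 * c + 0.05 * avg_iq + rng.normal(size=n)

X = np.column_stack([np.ones(n), avg_iq, c])
beta, *_ = np.linalg.lstsq(X, criterion, rcond=None)
print("coefficients (intercept, avg_iq, c):", np.round(beta, 2))
# In data of this shape the coefficient on c stays substantial while avg_iq
# adds little, which is the pattern Woolley et al. describe.
```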
Engel et al.[50](2014) replicated Woolley et al.'s findings applying an accelerated battery of tasks, with the first factor in the factor analysis explaining 49% of the between-group variance in performance and the following factors explaining less than half of this amount. Moreover, they found a similar result for groups working together online and communicating only via text, and confirmed the role of female proportion and social sensitivity in causing collective intelligence in both cases. Similarly to Woolley et al.,[17]they also measured social sensitivity with the RME, which is actually meant to measure people's ability to detect mental states in other peoples' eyes. The online collaborating participants, however, neither knew nor saw each other at all. The authors conclude that scores on the RME must be related to a broader set of abilities of social reasoning than only drawing inferences from other people's eye expressions.[83]
A collective intelligence factorcin the sense of Woolley et al.[17]was further found in groups of MBA students working together over the course of a semester,[84]in online gaming groups[68]as well as in groups from different cultures[85]and groups in different contexts in terms of short-term versus long-term groups.[85]None of these investigations considered team members' individual intelligence scores as control variables.[68][84][85]
Note as well that the field of collective intelligence research is quite young and published empirical evidence is still relatively scarce. However, various proposals and working papers are in progress or already completed but (supposedly) still undergoing scholarly peer review.[86][87][88][89]
Beyond predicting a group's performance on more complex criterion tasks, as shown in the original experiments,[17]the collective intelligence factor c was also found to predict group performance in diverse tasks in MBA classes lasting over several months.[84]Highly collectively intelligent groups earned significantly higher scores on their group assignments, although their members did not do any better on other, individually performed assignments. Moreover, highly collectively intelligent teams improved performance over time, suggesting that more collectively intelligent teams learn better.[84]This is another potential parallel to individual intelligence, where more intelligent people are found to acquire new material more quickly.[19][90]
Individual intelligence can be used to predict plenty of life outcomes from school attainment[91]and career success[92]to health outcomes[93]and even mortality.[93]Whether collective intelligence is able to predict other outcomes besides group performance on mental tasks has still to be investigated.
Gladwell[94](2008) showed that the relationship between individual IQ and success works only to a certain point and that additional IQ points beyond an estimated IQ of 120 do not translate into real-life advantages. Whether a similar threshold exists for Group-IQ, or whether the advantages are linear and unbounded, has still to be explored. Similarly, further research is needed on possible connections between individual and collective intelligence within many other potentially transferable logics of individual intelligence, such as development over time[95]or the question of improving intelligence.[96][97]Whereas it is controversial whether human intelligence can be enhanced via training,[96][97]a group's collective intelligence potentially offers simpler opportunities for improvement by exchanging team members or implementing structures and technologies.[51]Moreover, social sensitivity was found to be, at least temporarily, improvable by reading literary fiction[98]as well as watching drama movies.[99]To what extent such training ultimately improves collective intelligence through social sensitivity remains an open question.[100]
There are further, more advanced concepts and factor models attempting to explain individual cognitive ability, including the categorization of intelligence into fluid and crystallized intelligence[101][102]and the hierarchical model of intelligence differences.[103][104]Supplementary explanations and conceptualizations of the factor structure of the 'genome' of collective intelligence beyond a general c factor, however, are still missing.[105]
Other scholars explain team performance by aggregating team members' general intelligence to the team level[106][107]instead of building an own overall collective intelligence measure. Devine and Philips[108](2001) showed in a meta-analysis that mean cognitive ability predicts team performance in laboratory settings (0.37) as well as field settings (0.14) – note that this is only a small effect. Suggesting a strong dependence on the relevant tasks, other scholars showed that tasks requiring a high degree of communication and cooperation are found to be most influenced by the team member with the lowest cognitive ability.[109]Tasks in which selecting the best team member is the most successful strategy, are shown to be most influenced by the member with the highest cognitive ability.[66]
Since Woolley et al.'s[17]results do not show any influence of group satisfaction,group cohesiveness, or motivation, they, at least implicitly, challenge these concepts regarding the importance for group performance in general and thus contrast meta-analytically proven evidence concerning the positive effects ofgroup cohesion,[110][111][112]motivation[113][114]and satisfaction[115]on group performance.
Some scholars have noted that the evidence for collective intelligence in the body of work by Woolley et al.[17]is weak and may contain errors or misunderstandings of the data.[116]For example, Woolley et al.[17]stated in their findings that the maximum individual score on the Wonderlic Personnel Test (WPT;[117]an individual intelligence test used in their research) was 39, but also that the maximum averaged team score on the same test was 39. This indicates that their sample seemingly contained a team composed entirely of people who individually got exactly the same score on the WPT and who all happened to have achieved the highest score on the WPT found in Woolley et al.[17]This was noted by scholars as particularly unlikely to occur.[116]Other anomalies found in the data indicate that results may be driven in part by low-effort responding.[17][116]For instance, Woolley et al.'s[17]data indicate that at least one team scored a 0 on a task in which they were given 10 minutes to come up with as many uses for a brick as possible. Similarly, Woolley et al.'s[17]data show that at least one team had an average score of 8 out of 50 on the WPT. Scholars have noted that the probability of this occurring with study participants who are putting forth effort is nearly zero.[116]This may explain why Woolley et al.[17]found that the groups' individual intelligence scores were not predictive of performance. In addition, low effort on tasks in human subjects research may inflate evidence for a supposed collective intelligence factor based on similarity of performance across tasks, because a team's low effort on one research task may generalize to low effort across many tasks.[116][118][119]It is notable that such a phenomenon is present merely because of the low-stakes setting of laboratory research for participants and not because it reflects how teams operate in organizations.[116][120]
It is also noteworthy that the researchers involved in the confirming findings overlap widely with each other and with the authors of the original study around Anita Woolley.[17][44][50][69][83]
On 3 May 2022, the authors of "Quantifying collective intelligence in human groups",[121]who include Riedl and Woolley from the original 2010 paper on collective intelligence,[17]issued a correction to the article after mathematically impossible findings reported in it were noted publicly by researcher Marcus Credé. Among the corrections is an admission that the average variance extracted (AVE), that is, the evidence for collective intelligence, was only 19.6% in their confirmatory factor analysis. An AVE of at least 50% is generally required to demonstrate evidence for convergent validity of a single factor, with values above 70% generally indicating good evidence for the factor.[122]Therefore, the evidence for collective intelligence referred to as "robust" in Riedl et al.[121]is in fact quite weak or nonexistent, as their primary evidence does not meet or approach even the lowest thresholds of acceptable evidence for a latent factor.[122]Curiously, despite this and several other factual inaccuracies found throughout the article, the paper has not been retracted, and these inaccuracies were apparently not originally detected by the author team, peer reviewers, or editors of the journal.[121]
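Because the average variance extracted for a single factor is just the mean of the squared standardized indicator loadings, the criticism is easy to check numerically. The loadings below are illustrative, not those reported in the paper:

```python
import numpy as np

def average_variance_extracted(standardized_loadings) -> float:
    """AVE for one latent factor: mean of the squared standardized loadings."""
    lam = np.asarray(standardized_loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Loadings around 0.44 give an AVE near 0.2, far below the conventional
# 0.5 threshold for convergent validity.
print(average_variance_extracted([0.45, 0.42, 0.46, 0.44]))
```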
In 2001, Tadeusz (Tad) Szuba from theAGH Universityin Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.[123]
In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic.[123]They are displaced quasi-randomly as a result of their interactions with their environments and their intended displacements.[123]Their interaction in abstract computational space creates a multi-thread inference process which we perceive as collective intelligence.[123]Thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as the property of social structure and seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence considered as a specific computational process provides a straightforward explanation of several social phenomena. For this model of collective intelligence, the formal definition of IQS (IQ Social) was proposed and defined as "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure".[123]While IQS seems to be computationally hard, modeling of social structure in terms of a computational process as described above gives a chance for approximation.[123]Prospective applications are the optimization of companies through the maximization of their IQS, and the analysis of drug resistance against the collective intelligence of bacterial colonies.[123]
One measure sometimes applied, especially by more artificial-intelligence-focused theorists, is a "collective intelligence quotient"[124](or "cooperation quotient"), which can be normalized from the "individual" intelligence quotient (IQ),[124]making it possible to determine the marginal intelligence added by each new individual participating in the collective action and thus to use metrics to avoid the hazards of groupthink and stupidity.[125]
There have been many recent applications of collective intelligence, including in fields such as crowd-sourcing, citizen science and prediction markets. The Nesta Centre for Collective Intelligence Design[126]was launched in 2018 and has produced many surveys of applications as well as funding experiments. In 2020 the UNDP Accelerator Labs[127]began using collective intelligence methods in their work to accelerate innovation for theSustainable Development Goals.
Here, the goal is to get an estimate (in a single value) of something. For example, estimating the weight of an object, or the release date of a product or probability of success of a project etc. as seen in prediction markets like Intrade, HSX or InklingMarkets and also in several implementations of crowdsourced estimation of a numeric outcome such as theDelphi method. Essentially, we try to get the average value of the estimates provided by the members in the crowd.
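A minimal sketch of this kind of aggregation, with hypothetical guesses of an object's weight, is just a central-tendency summary; the median is often preferred because it is robust to a few wild guesses:

```python
import statistics

guesses_kg = [540, 505, 610, 498, 720, 515, 560, 480]  # hypothetical crowd estimates
print("mean:  ", statistics.mean(guesses_kg))
print("median:", statistics.median(guesses_kg))
```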
In this situation, opinions are gathered from the crowd regarding an idea, issue or product. For example, trying to get a rating (on some scale) of a product sold online (such as Amazon's star rating system). Here, the emphasis is to collect and simply aggregate the ratings provided by customers/users.
In these problems, someone solicits ideas for projects, designs or solutions from the crowd. For example, ideas on solving adata scienceproblem (as inKaggle) or getting a good design for a T-shirt (as inThreadless) or in getting answers to simple problems that only humans can do well (as in Amazon's Mechanical Turk). The objective is to gather the ideas and devise some selection criteria to choose the best ideas.
James Surowieckidivides the advantages of disorganized decision-making into three main categories, which are cognition, cooperation and coordination.[128]
Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable.[129]Websites aggregate stock market information that is as current as possible so professional or amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion.[129]The opinions of all investors can be weighted equally so that a pivotal premise of the effective application of collective intelligence can be applied: the masses, including a broad spectrum of stock market expertise, can be utilized to more accurately predict the behavior of financial markets.[130][131]
Collective intelligence underpins the efficient-market hypothesis of Eugene Fama[132]– although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted by Michael Jensen[133]in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. But after removing the loading charge (up-front fee) only 72 underperformed, while after removing brokerage costs only 58 underperformed. On the basis of such evidence, index funds became popular investment vehicles using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.[133]
Political parties mobilize large numbers of people to form policy, select candidates and finance and run election campaigns.[134]Knowledge focusing through variousvotingmethods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process leaving only a residue of informed consensus.[134]Critics point out that often bad ideas, misunderstandings, and misconceptions are widely held, and that structuring of the decision process must favor experts who are presumably less prone to random or misinformed voting in a given context.[135]
Companies such as Affinnova (acquired by Nielsen),Google,InnoCentive,Marketocracy, andThreadless[136]have successfully employed the concept of collective intelligence in bringing about the next generation of technological changes through their research and development (R&D), customer service, and knowledge management.[136][137]An example of such application is Google's Project Aristotle in 2012, where the effect of collective intelligence on team makeup was examined in hundreds of the company's R&D teams.[138]
In 2012, theGlobal Futures Collective Intelligence System(GFIS) was created byThe Millennium Project,[139]which epitomizes collective intelligence as the synergistic intersection among data/information/knowledge, software/hardware, and expertise/insights that has a recursive learning process for better decision-making than the individual players alone.[139]
New mediaare often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows for it to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources[13]resulting in a form of collective intelligence. The use of interactive new media, particularly the internet, promotes online interaction and this distribution of knowledge between users.
Francis Heylighen,Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science andcybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of aglobal brain.
The developer of the World Wide Web, Tim Berners-Lee, aimed to promote sharing and publishing of information globally. Later his employer opened up the technology for free use. In the early 1990s the Internet's potential was still untapped, until the mid-1990s when 'critical mass', as termed by the head of the Advanced Research Projects Agency (ARPA), Dr. J.C.R. Licklider, demanded more accessibility and utility.[140]The driving force of this Internet-based collective intelligence is the digitization of information and communication. Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture.[13]He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills.[141]Collective intelligence is not merely a quantitative contribution of information from all cultures, it is also qualitative.[141]
Lévy and de Kerckhove consider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed.[13]With the development of the Internet and its widespread use, the opportunity to contribute to knowledge-building communities, such as Wikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and to retrieve knowledge through the collective access to these databases and allow them to "harness the hive".[13]Researchers at the MIT Center for Collective Intelligence research and explore the collective intelligence of groups of people and computers.[142]
In this context collective intelligence is often confused with shared knowledge. The former is the sum total of information held individually by members of a community, while the latter is information that is believed to be true and known by all members of the community.[143]Collective intelligence as represented by Web 2.0 has less user engagement than collaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles and the accounts named "Shared Galaxy" are open to be used by anyone. In this way many take part in being one.[144]Another art project using collective intelligence to produce artistic work is Curatron, where a large group of artists together decides on a smaller group that they think would make a good collaborative group. The process is based on an algorithm computing the collective preferences.[145]In creating what he calls 'CI-Art', Nova Scotia based artist Mathew Aldred follows Pierre Lévy's definition of collective intelligence.[146]Aldred's CI-Art event in March 2016 involved over four hundred people from the community of Oxford, Nova Scotia, and internationally.[147][148]Later work developed by Aldred used the UNU swarm intelligence system to create digital drawings and paintings.[149]The Oxford Riverside Gallery (Nova Scotia) held a public CI-Art event in May 2016, which connected with online participants internationally.[150]
Insocial bookmarking(also called collaborative tagging),[151]users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from thiscrowdsourcingprocess. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "Folksonomy", and the process can be captured bymodels of collaborative tagging.[151]
Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex systems (or self-organizing) dynamics.[152][153][154]Although there is no centrally controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions.[152]Once such stable distributions form, the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabulary.[155]Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.[156]
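As a rough illustration of how a simple folksonomy graph can be derived from co-occurring tags, the sketch below uses a handful of invented bookmarks; a real analysis (for example over a Delicious dump) would run over millions of tagged resources.

```python
from collections import Counter
from itertools import combinations

# Invented (resource, tags) pairs standing in for a social bookmarking dataset.
bookmarks = [
    {"python", "programming", "tutorial"},
    {"python", "data", "pandas"},
    {"programming", "tutorial", "beginner"},
    {"data", "visualization", "pandas"},
]

tag_freq = Counter(tag for tags in bookmarks for tag in tags)
cooccurrence = Counter()
for tags in bookmarks:
    for a, b in combinations(sorted(tags), 2):   # every pair of tags on one resource
        cooccurrence[(a, b)] += 1

print("most used tags:", tag_freq.most_common(3))
# Edges of the folksonomy graph: tag pairs that were assigned together.
for (a, b), n in cooccurrence.most_common(5):
    print(f"{a} -- {b}: co-tagged {n} time(s)")
```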
Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:[48]
Cultural theorist and online community developer, John Banks considered the contribution of online fan communities in the creation of theTrainzproduct. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content- extensions and additions to the game software".[157]
The increase in user created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This gives rise to fundamental legal issues, highlighted by Lessig[158]and Bray and Konsynski,[159]such asintellectual propertyand property ownership rights.
Gosney extends this issue of Collective Intelligence in videogames one step further in his discussion ofalternate reality gaming. This genre, he describes as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences"[160]as events that happen outside the game reality "reach out" into the player's lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARG. Gosney argues that the Alternate Reality genre of gaming dictates an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.[160]
Co-operation helps to solve the most important and most interesting multi-science problems. In his book, James Surowiecki mentioned that most scientists think the benefits of co-operation have much more value than its potential costs. Co-operation also works because at best it guarantees a number of different viewpoints. Thanks to the possibilities of technology, global co-operation is nowadays much easier and more productive than before. It is clear that when co-operation scales from the university level to the global level it brings significant benefits.
For example, why do scientists co-operate? Individual fields of science have become more and more isolated from one another and have expanded so much that it is impossible for one person to be aware of all developments. This is especially true in experimental research, where highly advanced equipment requires special skills. With co-operation, scientists can use information from different fields and use it effectively, instead of having to gather all of the information just by reading on their own.[128]
Military, trade unions, and corporations satisfy some definitions of CI – the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.[161]
The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence.[162][163]When connected to UNU, groups of distributed users collectively answer questions and make predictions in real-time.[164]Early testing shows that human swarms can out-predict individuals.[162]In 2016, a UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby, and successfully picked the first four horses, in order, beating 540-to-1 odds.[165][166]
Specialized information sites such as Digital Photography Review[167]or Camera Labs[168]are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing their knowledge over the world through such specialized information sites.
Inlearner-generated contexta group of users marshal resources to create an ecology that meets their needs often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context.[169][170][171]Learner-generated contexts represent anad hoccommunity that facilitates coordination of collective action in a network of trust. An example of learner-generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space". As the Internet has developed so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute and access ideas.[13]
Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations.[140]For them, collective intelligence has become a norm. In his discussion of 'interactivity' in the online games environment, the ongoing interactive dialogue between users and game developers,[172]Terry Flew refers to Pierre Lévy's concept of collective intelligence[citation needed]and argues this is active in videogames as clans or guilds in MMORPGs constantly work to achieve goals. Henry Jenkins proposes that the participatory cultures emerging between games producers, media companies, and the end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends:[173]firstly, the development of new media tools and technologies enabling the creation of content; secondly, the rise of subcultures promoting such creations; and lastly, the growth of value-adding media conglomerates, which foster image, idea and narrative flow.
Improvisational actors also experience a type of collective intelligence which they term "group mind", as theatrical improvisation relies on mutual cooperation and agreement,[174]leading to the unity of "group mind".[174][175]
Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand.[32]The full impact has yet to be felt but the anti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing.[176]The Indymedia organization does this in a more journalistic way.[177]Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance a shared goal.[177]
A further application of collective intelligence is found in the "Community Engineering for Innovations".[178]In such an integrated framework proposed by Ebner et al., idea competitions and virtual communities are combined to better realize the potential of the collective intelligence of the participants, particularly in open-source R&D.[179]In management theory the use of collective intelligence and crowdsourcing leads to innovations and very robust answers to quantitative issues.[180]Therefore, collective intelligence and crowdsourcing do not necessarily lead to the best solution to economic problems, but to a stable, good solution.
Collective actions or tasks require different amounts of coordination depending on the complexity of the task. Tasks vary from highly independent simple tasks that require very little coordination to complex interdependent tasks that are built by many individuals and require a lot of coordination. In the article written by Kittur, Lee and Kraut the writers introduce a problem in cooperation: "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". When a team grows too large, overall effectiveness may suffer even though the extra contributors increase the available resources; in the end, the costs of coordination can overwhelm the other costs.[181]
Group collective intelligence is a property that emerges through coordination from both bottom-up and top-down processes. In a bottom-up process the different characteristics of each member are involved in contributing and enhancing coordination. Top-down processes are more strict and fixed with norms, group structures and routines that in their own way enhance the group's collective work.[44]
Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions.[182][self-published source?]A single person tends to make decisions motivated by self-preservation. Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.[46]
Phillip Brown and Hugh Lauder quote Bowles and Gintis (1976) in saying that, in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism.[183]They go on to argue that intelligence is an achievement and can only be developed if allowed to be.[183]For example, historically, groups from the lower levels of society were severely restricted from aggregating and pooling their intelligence, because the elites feared that collective intelligence would convince the people to rebel. Without such capacity and relations, there is no infrastructure on which collective intelligence can be built.[184]This reflects how powerful collective intelligence can be if left to develop.[183]
Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk ofbodily harmand bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluidmass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells.[185][186]This train of thought is most obvious in theanti-globalization movementand characterized by the works ofJohn Zerzan,Carol Moore, andStarhawk, who typically shun academics.[185][186]These theorists are more likely to refer to ecological andcollective wisdomand to the role ofconsensus processin making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".[185][186]
Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as thenew tribalistsand theGaians.[187][self-published source]Whether these can be said to be collective intelligence systems is an open question. Some, e.g.Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.[188]
In contrast to these views, companies such as Amazon Mechanical Turk and CrowdFlower are using collective intelligence and crowdsourcing or consensus-based assessment to collect enormous amounts of data for machine learning algorithms.
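A minimal sketch of consensus-based label aggregation, assuming an invented set of crowd labels and using a simple majority vote; real annotation pipelines typically also model worker reliability (e.g. Dawid-Skene style methods).

```python
from collections import Counter

# Invented crowd labels: several workers label each item, and the majority
# label is kept as training data for a downstream machine learning model.
raw_labels = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["dog", "dog", "dog"],
    "img_003": ["cat", "bird", "cat"],
}

consensus = {item: Counter(votes).most_common(1)[0][0]
             for item, votes in raw_labels.items()}
print(consensus)   # {'img_001': 'cat', 'img_002': 'dog', 'img_003': 'cat'}
```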
|
https://en.wikipedia.org/wiki/Collective_intelligence
|
Problem solvingis the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles.[1]Another classification of problem-solving tasks is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for.[2]Similarly, one may distinguish formal or fact-based problems requiringpsychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such astactfulbehavior, fashion, or gift choices.[3]
Solutions require sufficient resources and knowledge to attain the goal. Professionals such as lawyers, doctors, programmers, and consultants are largely problem solvers for issues that require technical skills and knowledge beyond general competence. Many businesses have found profitable markets by recognizing a problem and creating a solution: the more widespread and inconvenient the problem, the greater the opportunity to develop ascalablesolution.
There are many specialized problem-solving techniques and methods in fields such asscience,engineering,business,medicine,mathematics,computer science,philosophy, andsocial organization. The mental techniques to identify, analyze, and solve problems are studied inpsychologyandcognitive sciences. Also widely researched are the mental obstacles that prevent people from finding solutions; problem-solving impediments includeconfirmation bias,mental set, andfunctional fixedness.
The termproblem solvinghas a slightly different meaning depending on the discipline. For instance, it is a mental process inpsychologyand a computerized process incomputer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Well-defined problems have specific end goals and clearly expected solutions, while ill-defined problems do not. Well-defined problems allow for more initial planning than ill-defined problems.[2]Solving problems sometimes involves dealing withpragmatics(the way that context contributes to meaning) andsemantics(the interpretation of the problem). The ability to understand what the end goal of the problem is, and what rules could be applied, represents the key to solving the problem. Sometimes a problem requiresabstract thinkingor coming up with a creative solution.
Problem solving has two major domains:mathematical problem solvingand personal problem solving. Each concerns some difficulty or barrier that is encountered.[4]
Problem solving in psychology refers to the process of finding solutions to problems encountered in life.[5]Solutions to these problems are usually situation- or context-specific. The process starts withproblem findingandproblem shaping, in which the problem is discovered and simplified. The next step is to generate possible solutions and evaluate them. Finally a solution is selected to be implemented and verified. Problems have anend goalto be reached; how you get there depends upon problem orientation (problem-solving coping style and skills) and systematic analysis.[6]
Mental health professionals study the human problem-solving processes using methods such asintrospection,behaviorism,simulation,computer modeling, andexperiment. Social psychologists look into the person-environment relationship aspect of the problem and independent and interdependent problem-solving methods.[7]Problem solving has been defined as a higher-ordercognitiveprocess andintellectual functionthat requires the modulation and control of more routine or fundamental skills.[8]
Empirical research shows many different strategies and factors influence everyday problem solving.[9]Rehabilitation psychologistsstudying people with frontal lobe injuries have found that deficits in emotional control and reasoning can be re-mediated with effective rehabilitation and could improve the capacity of injured persons to resolve everyday problems.[10]Interpersonal everyday problem solving is dependent upon personal motivational and contextual components. One such component is theemotional valenceof "real-world" problems, which can either impede or aid problem-solving performance. Researchers have focused on the role of emotions in problem solving,[11]demonstrating that poor emotional control can disrupt focus on the target task, impede problem resolution, and lead to negative outcomes such as fatigue, depression, and inertia.[12]In conceptualization,[clarification needed]human problem solving consists of two related processes: problem orientation, and the motivational/attitudinal/affective approach to problematic situations and problem-solving skills.[13]People's strategies cohere with their goals[14]and stem from the process of comparing oneself with others.
Among the first experimental psychologists to study problem solving were theGestaltistsinGermany, such asKarl DunckerinThe Psychology of Productive Thinking(1935).[15]Perhaps best known is the work ofAllen NewellandHerbert A. Simon.[16]
Experiments in the 1960s and early 1970s asked participants to solve relatively simple, well-defined, but not previously seen laboratory tasks.[17][18]These simple problems, such as theTower of Hanoi, admittedoptimal solutionsthat could be found quickly, allowing researchers to observe the full problem-solving process. Researchers assumed that these model problems would elicit the characteristiccognitive processesby which more complex "real world" problems are solved.
An outstanding problem-solving technique found by this research is the principle ofdecomposition.[19]
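The Tower of Hanoi mentioned above is a convenient illustration of decomposition: the n-disk problem is reduced to two (n-1)-disk subproblems wrapped around a single base move. The following sketch only illustrates the principle; it is not a reconstruction of the cited experiments.

```python
# Decomposition on the Tower of Hanoi: moving n disks is reduced to two
# smaller (n-1)-disk subproblems around a single base-case move.
def hanoi(n, source, target, spare):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)            # subproblem 1: clear the top n-1 disks
    print(f"move disk {n}: {source} -> {target}")  # base move of the largest disk
    hanoi(n - 1, spare, target, source)            # subproblem 2: restack the n-1 disks

hanoi(3, "A", "C", "B")   # the 3-disk puzzle takes 2**3 - 1 = 7 moves
```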
Much of computer science andartificial intelligenceinvolves designing automated systems to solve a specified type of problem: to accept input data and calculate a correct or adequate response, reasonably quickly.Algorithmsare recipes or instructions that direct such systems, written intocomputer programs.
Steps for designing such systems include problem determination,heuristics,root cause analysis,de-duplication, analysis, diagnosis, and repair. Analytic techniques include linear and nonlinear programming,queuing systems, and simulation.[20]A large, perennial obstacle is to find and fix errors in computer programs:debugging.
Formallogicconcerns issues like validity, truth, inference, argumentation, and proof. In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution.
The use of computers to prove mathematical theorems using formal logic emerged as the field ofautomated theorem provingin the 1950s. It included the use ofheuristicmethods designed to simulate human problem solving, as in theLogic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods such as theresolutionprinciple developed byJohn Alan Robinson.
In addition to its use for finding proofs of mathematical theorems, automated theorem-proving has also been used forprogram verificationin computer science. In 1958,John McCarthyproposed theadvice taker, to represent information in formal logic and to derive answers to questions using automated theorem-proving. An important step in this direction was made byCordell Greenin 1969, who used a resolution theorem prover for question-answering and for such other applications in artificial intelligence as robot planning.
The resolution theorem-prover used by Cordell Green bore little resemblance to human problem solving methods. In response to criticism of that approach from researchers at MIT,Robert Kowalskidevelopedlogic programmingandSLD resolution,[21]which solves problems by problem decomposition. He has advocated logic for both computer and human problem solving[22]and computational logic to improve human thinking.[23]
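As a rough illustration of problem decomposition in the spirit of SLD resolution, the toy sketch below restricts itself to propositional Horn clauses with a single rule per conclusion; real logic programming systems additionally handle variables, unification and backtracking.

```python
# Toy backward chaining over propositional Horn clauses: a goal is solved by
# decomposing it into the subgoals (body) of a matching rule.
rules = {
    "mortal(socrates)": ["man(socrates)"],   # mortal(socrates) :- man(socrates)
    "man(socrates)": [],                     # a fact: a rule with an empty body
}

def solve(goal):
    if goal not in rules:
        return False                                        # no clause matches the goal
    return all(solve(subgoal) for subgoal in rules[goal])   # decompose into subgoals

print(solve("mortal(socrates)"))   # True
```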
When products or processes fail, problem solving techniques can be used to develop corrective actions that can be taken to prevent furtherfailures. Such techniques can also be applied to a product or process prior to an actual failure event—to predict, analyze, and mitigate a potential problem in advance. Techniques such asfailure mode and effects analysiscan proactively reduce the likelihood of problems.
In either the reactive or the proactive case, it is necessary to build a causal explanation through a process of diagnosis. In deriving an explanation of effects in terms of causes,abductiongenerates new ideas or hypotheses (asking "how?");deductionevaluates and refines hypotheses based on other plausible premises (asking "why?"); andinductionjustifies a hypothesis with empirical data (asking "how much?").[24]The objective of abduction is to determine which hypothesis or proposition to test, not which one to adopt or assert.[25]In thePeirceanlogical system, the logic of abduction and deduction contribute to our conceptual understanding of a phenomenon, while the logic of induction adds quantitative details (empirical substantiation) to our conceptual knowledge.[26]
Forensic engineeringis an important technique offailure analysisthat involves tracing product defects and flaws. Corrective action can then be taken to prevent further failures.
Reverse engineering attempts to discover the original problem-solving logic used in developing a product by disassembling the product and developing a plausible pathway to creating and assembling its parts.[27]
Inmilitary science, problem solving is linked to the concept of "end-states", the conditions or situations which are the aims of the strategy.[28]: xiii, E-2Ability to solve problems is important at anymilitary rank, but is essential at thecommand and controllevel. It results from deep qualitative and quantitative understanding of possible scenarios.Effectivenessin this context is an evaluation of results: to what extent the end states were accomplished.[28]: IV-24Planningis the process of determining how to effect those end states.[28]: IV-1
Some models of problem solving involve identifying a goal and then a sequence of subgoals towards achieving this goal. Anderson, who introduced the ACT-R model of cognition, modelled this collection of goals and subgoals as a goal stack, in which the mind contains a stack of goals and subgoals to be completed, with a single task being carried out at any time.[29]: 51
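A minimal sketch of the goal-stack idea, using an invented errand as the example; it illustrates only the push/pop discipline, not ACT-R's full architecture.

```python
# The current goal sits on top of the stack; subgoals are pushed above it and
# popped once satisfied, after which the suspended goal is resumed.
goal_stack = ["post the letter"]           # top-level goal

goal_stack.append("find a stamp")          # push a subgoal; it now occupies attention
goal_stack.append("look in the desk drawer")

print("current goal:", goal_stack[-1])     # only the top goal is worked on at any time
goal_stack.pop()                           # drawer searched: subgoal satisfied
goal_stack.pop()                           # stamp found
print("back to:", goal_stack[-1])          # resume the original goal
```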
Knowledge of how to solve one problem can be applied to another problem, in a process known astransfer.[29]: 56
Problem-solving strategies are steps to overcoming the obstacles to achieving a goal. The iteration of such strategies over the course of solving a problem is the "problem-solving cycle".[30]
Common steps in this cycle include recognizing the problem, defining it, developing a strategy to fix it, organizing knowledge and resources available, monitoring progress, and evaluating the effectiveness of the solution. Once a solution is achieved, another problem usually arises, and the cycle starts again.
Insight is the suddenaha!solution to a problem, the birth of a new idea to simplify a complex situation. Solutions found through insight are often more incisive than those from step-by-step analysis. A quick solution process requires insight to select productive moves at different stages of the problem-solving cycle. Unlike Newell and Simon's formal definition of amove problem, there is no consensus definition of aninsight problem.[31]
Some problem-solving strategies include:[32]
Common barriers to problem solving include mental constructs that impede an efficient search for solutions. Five of the most common identified by researchers are:confirmation bias,mental set,functional fixedness, unnecessary constraints, and irrelevant information.
Confirmation bias is an unintentional tendency to collect and use data which favors preconceived notions. Such notions may be incidental rather than motivated by important personal beliefs: the desire to be right may be sufficient motivation.[33]
Scientific and technical professionals also experience confirmation bias. One online experiment, for example, suggested that professionals within the field of psychological research are likely to view scientific studies that agree with their preconceived notions more favorably than clashing studies.[34]According to Raymond Nickerson, one can see the consequences of confirmation bias in real-life situations, which range in severity from inefficient government policies to genocide. Nickerson argued that those who killed people accused ofwitchcraftdemonstrated confirmation bias with motivation.[citation needed]Researcher Michael Allen found evidence for confirmation bias with motivation in school children who worked to manipulate their science experiments to produce favorable results.[35]
However, confirmation bias does not necessarily require motivation. In 1960,Peter Cathcart Wasonconducted an experiment in which participants first viewed three numbers and then created a hypothesis in the form of a rule that could have been used to create that triplet of numbers. When testing their hypotheses, participants tended to only create additional triplets of numbers that would confirm their hypotheses, and tended not to create triplets that would negate or disprove their hypotheses.[36]
Mental set is the inclination to re-use a previously successful solution, rather than search for new and better solutions. It is a reliance on habit.
It was first articulated byAbraham S. Luchinsin the 1940s with his well-known water jug experiments.[37]Participants were asked to fill one jug with a specific amount of water by using other jugs with different maximum capacities. After Luchins gave a set of jug problems that could all be solved by a single technique, he then introduced a problem that could be solved by the same technique, but also by a novel and simpler method. His participants tended to use the accustomed technique, oblivious of the simpler alternative.[38]This was again demonstrated inNorman Maier's 1931 experiment, which challenged participants to solve a problem by using a familiar tool (pliers) in an unconventional manner. Participants were often unable to view the object in a way that strayed from its typical use, a type of mental set known as functional fixedness (see the following section).
Rigidly clinging to a mental set is calledfixation, which can deepen to an obsession or preoccupation with attempted strategies that are repeatedly unsuccessful.[39]In the late 1990s, researcher Jennifer Wiley found that professional expertise in a field can create a mental set, perhaps leading to fixation.[39]
Groupthink, in which each individual takes on the mindset of the rest of the group, can produce and exacerbate mental set.[40]Social pressure leads to everybody thinking the same thing and reaching the same conclusions.
Functional fixedness is the tendency to view an object as having only one function, and to be unable to conceive of any novel use, as in the Maier pliers experiment described above. Functional fixedness is a specific form of mental set, and is one of the most common forms of cognitive bias in daily life.
As an example, imagine a man wants to kill a bug in his house, but the only thing at hand is a can of air freshener. He may start searching for something to kill the bug instead of squashing it with the can, thinking only of its main function of deodorizing.
Tim German and Clark Barrett describe this barrier: "subjects become 'fixed' on the design function of the objects, and problem solving suffers relative to control conditions in which the object's function is not demonstrated."[41]Their research found that young children's limited knowledge of an object's intended function reduces this barrier.[42]Research has also discovered functional fixedness in educational contexts, as an obstacle to understanding: "functional fixedness may be found in learning concepts as well as in solving chemistry problems."[43]
There are several hypotheses in regards to how functional fixedness relates to problem solving.[44]It may waste time, delaying or entirely preventing the correct use of a tool.
Unnecessary constraints are arbitrary boundaries imposed unconsciously on the task at hand, which foreclose a productive avenue of solution. The solver may become fixated on only one type of solution, as if it were an inevitable requirement of the problem. Typically, this combines with mental set—clinging to a previously successful method.[45][page needed]
Visual problems can also produce mentally invented constraints.[46][page needed]A famous example is the dot problem: nine dots arranged in a three-by-three grid pattern must be connected by drawing four straight line segments, without lifting pen from paper or backtracking along a line. The subject typically assumes the pen must stay within the outer square of dots, but the solution requires lines continuing beyond this frame, and researchers have found a 0% solution rate within a brief allotted time.[47]
This problem has produced the expression "think outside the box".[48][page needed]Such problems are typically solved via a sudden insight which leaps over the mental barriers, often after long toil against them.[49]This can be difficult depending on how the subject has structured the problem in their mind, how they draw on past experiences, and how well they juggle this information in their working memory. In the example, envisioning the dots connected outside the framing square requires visualizing an unconventional arrangement, which is a strain on working memory.[48]
Irrelevant information is a specification or data presented in a problem that is unrelated to the solution.[45]If the solver assumes that all information presented needs to be used, this often derails the problem solving process, making relatively simple problems much harder.[50]
For example: "Fifteen percent of the people in Topeka have unlisted telephone numbers. You select 200 names at random from the Topeka phone book. How many of these people have unlisted phone numbers?"[48][page needed]The "obvious" answer is 15%, but in fact none of the unlisted people would be listed among the 200. This kind of "trick question" is often used in aptitude tests or cognitive evaluations.[51]Though not inherently difficult, they require independent thinking that is not necessarily common. Mathematicalword problemsoften include irrelevant qualitative or numerical information as an extra challenge.
The disruption caused by the above cognitive biases can depend on how the information is represented:[51]visually, verbally, or mathematically. A classic example is the Buddhist monk problem:
A Buddhist monk begins at dawn one day walking up a mountain, reaches the top at sunset, meditates at the top for several days until one dawn when he begins to walk back to the foot of the mountain, which he reaches at sunset. Making no assumptions about his starting or stopping or about his pace during the trips, prove that there is a place on the path which he occupies at the same hour of the day on the two separate journeys.
The problem cannot be addressed in a verbal context, trying to describe the monk's progress on each day. It becomes much easier when the paragraph is represented mathematically by a function: one visualizes agraphwhose horizontal axis is time of day, and whose vertical axis shows the monk's position (or altitude) on the path at each time. Superimposing the two journey curves, which traverse opposite diagonals of a rectangle, one sees they must cross each other somewhere. The visual representation by graphing has resolved the difficulty.
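The graphical argument can be made precise with the intermediate value theorem; the following formalization is a standard reading of the problem rather than a quotation from the cited sources.

```latex
% Let L be the length of the path, u(t) the monk's position while ascending and
% d(t) his position while descending, both over the same clock interval
% [t_0, t_1] from dawn to sunset.
h(t) = u(t) - d(t), \qquad h(t_0) = 0 - L < 0, \qquad h(t_1) = L - 0 > 0.
% Since u and d are continuous, so is h; hence there is some t^* with
% h(t^*) = 0, i.e. u(t^*) = d(t^*): the monk occupies the same point on the
% path at the same hour of the day on both journeys.
```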
Similar strategies can often improve problem solving on tests.[45][52]
People who are engaged in problem solving tend to overlook subtractive changes, even those that are critical elements of efficient solutions. For example, a city planner may decide that the solution to decrease traffic congestion would be to add another lane to a highway, rather than finding ways to reduce the need for the highway in the first place. This tendency to solve by first, only, or mostly creating or adding elements, rather than by subtracting elements or processes is shown to intensify with highercognitive loadssuch asinformation overload.[53]
People can also solve problems while they are asleep. There are many reports of scientists and engineers who solved problems in theirdreams. For example,Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream.[54]
The chemistAugust Kekuléwas considering how benzene arranged its six carbon and hydrogen atoms. Thinking about the problem, he dozed off, and dreamt of dancing atoms that fell into a snakelike pattern, which led him to discover the benzene ring. As Kekulé wrote in his diary,
One of the snakes seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke; and this time also I spent the rest of the night in working out the consequences of the hypothesis.[55]
There also are empirical studies of how people can think consciously about a problem before going to sleep, and then solve the problem with a dream image. Dream researcherWilliam C. Dementtold his undergraduate class of 500 students that he wanted them to think about an infinite series, whose first elements were OTTFF, to see if they could deduce the principle behind it and to say what the next elements of the series would be.[56][page needed]He asked them to think about this problem every night for 15 minutes before going to sleep and to write down any dreams that they then had. They were instructed to think about the problem again for 15 minutes when they awakened in the morning.
The sequence OTTFF is the first letters of the numbers: one, two, three, four, five. The next five elements of the series are SSENT (six, seven, eight, nine, ten). Some of the students solved the puzzle by reflecting on their dreams. One example was a student who reported the following dream:[56][page needed]
I was standing in an art gallery, looking at the paintings on the wall. As I walked down the hall, I began to count the paintings: one, two, three, four, five. As I came to the sixth and seventh, the paintings had been ripped from their frames. I stared at the empty frames with a peculiar feeling that some mystery was about to be solved. Suddenly I realized that the sixth and seventh spaces were the solution to the problem!
With more than 500 undergraduate students, 87 dreams were judged to be related to the problems students were assigned (53 directly related and 34 indirectly related). Yet of the people who had dreams that apparently solved the problem, only seven were actually able to consciously know the solution. The rest (46 out of 53) thought they did not know the solution.
Albert Einsteinbelieved that much problem solving goes on unconsciously, and the person must then figure out and formulate consciously what the mindbrain[jargon]has already solved. He believed this was his process in formulating the theory of relativity: "The creator of the problem possesses the solution."[57]Einstein said that he did his problem solving without words, mostly in images. "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined."[58]
Problem-solving processes differ across knowledge domains and across levels of expertise.[59]For this reason,cognitive sciencesfindings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside the laboratory. This has led to a research emphasis on real-world problem solving, since the 1990s. This emphasis has been expressed quite differently in North America and Europe, however. Whereas North American research has typically concentrated on studying problem solving in separate, natural knowledge domains, much of the European research has focused on novel, complex problems, and has been performed with computerized scenarios.[60]
In Europe, two main approaches have surfaced, one initiated byDonald Broadbent[61]in the United Kingdom and the other one byDietrich Dörner[62]in Germany. The two approaches share an emphasis on relatively complex, semantically rich, computerized laboratory tasks, constructed to resemble real-life problems. The approaches differ somewhat in their theoretical goals and methodology. The tradition initiated by Broadbent emphasizes the distinction between cognitive problem-solving processes that operate under awareness versus outside of awareness, and typically employs mathematically well-defined computerized systems. The tradition initiated by Dörner, on the other hand, has an interest in the interplay of the cognitive, motivational, and social components of problem solving, and utilizes very complex computerized scenarios that contain up to 2,000 highly interconnected variables.[63]
In North America, initiated by the work of Herbert A. Simon on "learning by doing" insemanticallyrich domains,[64]researchers began to investigate problem solving separately in different naturalknowledge domains—such as physics, writing, orchessplaying—rather than attempt to extract a global theory of problem solving.[65]These researchers have focused on the development of problem solving within certain domains, that is on the development ofexpertise.[66]
Areas that have attracted rather intensive attention in North America include:
Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a singular and simple obstacle. In CPS there may be multiple simultaneous obstacles. For example, a surgeon at work has far more complex problems than an individual deciding what shoes to wear. As elucidated by Dietrich Dörner, and later expanded upon by Joachim Funke, complex problems have some typical characteristics, which include:[1]
People solve problems on many different levels—from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively.Social issuesand global issues can typically only be solved collectively.
The complexity of contemporary problems exceeds the cognitive capacity of any individual and requires different but complementary varieties of expertise and collective problem solving ability.[81]
Collective intelligenceis shared or group intelligence that emerges from thecollaboration, collective efforts, and competition of many individuals.
In collaborative problem solving peoplework togetherto solve real-world problems. Members of problem-solving groups share a common concern, a similar passion, and/or a commitment to their work. Members can ask questions, wonder, and try to understand common issues. They share expertise, experiences, tools, and methods.[82]Groups may be fluid based on need, may only occur temporarily to finish an assigned task, or may be more permanent depending on the nature of the problems.
For example, in the educational context, members of a group may all have input into the decision-making process and a role in the learning process. Members may be responsible for the thinking, teaching, and monitoring of all members in the group. Group work may be coordinated among members so that each member makes an equal contribution to the whole work. Members can identify and build on their individual strengths so that everyone can make a significant contribution to the task.[83]Collaborative group work has the ability to promote critical thinking skills, problem solving skills,social skills, andself-esteem. By using collaboration and communication, members often learn from one another and construct meaningful knowledge that often leads to better learning outcomes than individual work.[84]
Collaborative groups require joint intellectual efforts between the members and involvesocial interactionsto solve problems together. Theknowledge sharedduring these interactions is acquired during communication, negotiation, and production of materials.[85]Members actively seek information from others by asking questions. The capacity to use questions to acquire new information increases understanding and the ability to solve problems.[86]
In a 1962 research report,Douglas Engelbartlinked collective intelligence to organizational effectiveness, and predicted that proactively "augmenting human intellect" would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".[87]
Henry Jenkins, a theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence andparticipatory culture.[88]He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills.[89]
Collective impactis the commitment of a group of actors from different sectors to a common agenda for solving a specific social problem, using a structured form of collaboration.
AfterWorld War IItheUN, theBretton Woods organization, and theWTOwere created. Collective problem solving on the international level crystallized around these three types of organization from the 1980s onward. As these global institutions remain state-like or state-centric it is unsurprising that they perpetuate state-like or state-centric approaches to collective problem solving rather than alternative ones.[90]
Crowdsourcingis a process of accumulating ideas, thoughts, or information from many independent participants, with aim of finding the best solution for a given challenge. Moderninformation technologiesallow for many people to be involved and facilitate managing their suggestions in ways that provide good results.[91]TheInternetallows for a new capacity of collective (including planetary-scale) problem solving.[92]
|
https://en.wikipedia.org/wiki/Collective_problem_solving
|
Commons-based peer production (CBPP) is a term coined by Harvard Law School professor Yochai Benkler.[1]It describes a model of socio-economic production in which large numbers of people work cooperatively, usually over the Internet. Commons-based projects generally have less rigid hierarchical structures than those under more traditional business models.
One of the major characteristics of the commons-based peer production is its non-profit scope.[2]: 43Often—but not always—commons-based projects are designed without a need for financial compensation for contributors. For example, sharing ofSTL (file format)design files for objects freely on the internet enables anyone with a3-D printertodigitally replicatethe object, saving theprosumersignificant money.[3]
Synonymous terms for this process include consumer co-production and collaborative media production.[2]: 63
Yochai Benklerused this term as early as 2001. Benkler first introduced the term in his 2002 paper in theYale Law Journal(published as apre-printin 2001) "Coase's Penguin, or Linux and the Nature of the Firm", whose title refers to theLinux mascotand toRonald Coase, who originated the transaction coststheory of the firmthat provides the methodological template for the paper's analysis of peer production. The paper defines the concept as "decentralized information gathering and exchange" and creditsEben Moglenas the scholar who first identified it without naming it.[4][5][6]
Yochai Benklercontrastscommons-basedpeer productionwithfirm production, in which tasks are delegated based on a centraldecision-makingprocess, andmarket-based production, in which allocating different prices to different tasks serves as an incentive to anyone interested in performing a task.
In his bookThe Wealth of Networks(2006),Yochai Benklersignificantly expands on his definition of commons-based peer production. According to Benkler, what distinguishes commons-based production is that it doesn't rely upon or propagate proprietary knowledge: "The inputs and outputs of the process are shared, freely or conditionally, in an institutional form that leaves them equally available for all to use as they choose at their individual discretion." To ensure that the knowledge generated is available for free use, commons-based projects are often shared under anopen license.
Not all commons-based production necessarily qualifies as commons-based peer production. According to Benkler, peer production is defined not only by the openness of its outputs, but also by a decentralized, participant-driven method of working.[7]
Peer production enterprises have two primary advantages over traditional hierarchical approaches to production:
InWikinomics,Don TapscottandAnthony D. Williamssuggest anincentivemechanism behind common-based peer production. "People participate in peer production communities," they write, "for a wide range of intrinsic and self-interested reasons....basically, people who participate in peer production communities love it. They feel passionate about their particular area of expertise and revel in creating something new or better."[9]
Aaron Krowne offers another definition:
Commons-based peer production refers to any coordinated, (chiefly) internet-based effort whereby volunteers contribute project components, and there exists some process to combine them to produce a unified intellectual work. CBPP covers many different types of intellectual output, from software to libraries of quantitative data tohuman-readabledocuments (manuals, books, encyclopedias, reviews, blogs, periodicals, and more).[10]
First, the potential goals of peer production must bemodular.[11]In other words, objectives must be divisible into components, or modules, each of which can be independently produced.[11]That allows participants to work asynchronously, without having to wait for each other's contributions or coordinate with each other in person.[8]
Second, thegranularityof the modules is essential. Granularity refers to the degree to which objects are broken down into smaller pieces (module size).[8]Different levels of granularity will allow people with different levels of motivation to work together by contributing small or large grained modules, consistent with their level of interest in the project and their motivation.[8]
Third, a successful peer-production enterprise must have low-costintegration—the mechanism by which the modules are integrated into a whole end product. Thus, integration must include both quality controls over the modules and a mechanism for integrating the contributions into the finished product at relatively low cost.[8]
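As a very rough illustration of these three conditions, the sketch below combines invented, independently produced modules of different granularity after a cheap quality check; it is not meant as a model of any real peer-production platform.

```python
# Independently produced modules of varying granularity pass a simple quality
# control and are then integrated into one work. All names are invented.
contributions = [
    {"author": "vol_1", "module": "introductory paragraph", "passes_review": True},
    {"author": "vol_2", "module": "typo fixes",             "passes_review": True},   # fine-grained
    {"author": "vol_3", "module": "draft chapter",          "passes_review": False},  # coarse-grained, rejected
]

# Low-cost integration: apply the quality control, then combine what remains.
integrated_work = [c["module"] for c in contributions if c["passes_review"]]
print(integrated_work)   # ['introductory paragraph', 'typo fixes']
```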
Participation in commons-based peer production is often voluntary and not necessarily associated with making a profit. Thus, the motivation behind this phenomenon goes far beyond traditional capitalistic theories, which picture individuals as self-interested and rational agents; such a portrayal is also called homo economicus.
However, it can be explained through alternative theories such as behavioral economics. The psychologist Dan Ariely, in his work Predictably Irrational, explains that social norms shape people's decisions as much as market norms do. Therefore, individuals tend to be willing to create value because of their social context, knowing that they will not be paid for it. He gives the example of a Thanksgiving dinner: offering to pay would likely offend the family member who prepared the dinner, as they were motivated by the pleasure of treating family members.[12]
Similarly, commons-based projects, as claimed byYochai Benkler, are the results of individuals acting "out of social and psychological motivations to do something interesting".[13]He goes on describing the wide range of reasons as pleasure, socially and psychologically rewarding experiences, to the economic calculation of possible monetary rewards (not necessarily from the project itself).[14]
On the other hand, the need for collaboration and interaction lies at the core of human nature and is an essential feature of survival. Enhanced by digital technologies, which allow easier and faster collaboration than was previously possible, it has resulted in a new social, cultural and economic trend named the collaborative society. This theory outlines further reasons for individuals to participate in peer production, such as collaborating with strangers, building or integrating into a community, or contributing to a general good.[2]
Examples of projects using commons-based peer production include:
Several outgrowths have been:
Concepts interrelated with commons-based peer production are the processes of peer governance and peer property. Peer governance is a new mode of governance and a bottom-up mode of participative decision-making that is being experimented with in peer projects such as Wikipedia and FLOSS; peer governance is thus the way that peer production, the process in which common value is produced, is managed.[15]Peer property refers to the innovative nature of legal forms such as the General Public License and the Creative Commons licenses. Whereas traditional forms of property are exclusionary ("if it is mine, it is not yours"), peer property forms are inclusionary: the work belongs to all of us, i.e. also to you, provided you respect the basic rules laid out in the license, such as the openness of the source code.[16]
The ease of entering and leaving an organization is a feature ofadhocracies.
The principle of commons-based peer production is similar to collective invention, a model ofopen innovationin economics coined by Robert Allen.[17]
Also related:Open-source economicsandCommercial use of copyleft works.
Some believe that the commons-based peer production (CBPP) vision, while powerful and groundbreaking, needs to be strengthened at its root because of some allegedly wrong assumptions concerning free and open-source software (FOSS).[18]
The CBPP literature regularly and explicitly quotes FOSS products as examples of artifacts "emerging" by virtue of mere cooperation, with no need for supervising leadership (without "market signals or managerial commands", in Benkler's words).
It can be argued, however, that in the development of any less than trivial piece of software, irrespective of whether it be FOSS or proprietary, a subset of the (many) participants always play—explicitly and deliberately—the role of leading system and subsystem designers, determining architecture and functionality, while most of the people work “underneath” them in a logical, functional sense.[19]
At the micro level, Bauwens and Pantazis are of the view that CBPP models should be considered a prototype, since CBPP cannot reproduce itself fully outside of the limits that capitalism has imposed on it, a consequence of the interdependence of CBPP with capitalist competition. The innovative activities of CBPP occur within capitalist competitive contexts, and capitalist firms can gain a competitive advantage over firms that rely on personal research without proprietary knowledge, because the former are able to utilize and access the knowledge commons; this is especially the case in digital commons, where participants in CBPP struggle to earn a direct livelihood for themselves. CBPP is then at risk of being subordinated.[20]
Proponents argue that commons-based peer production (CBPP) represents an alternative form of production from traditional capitalism.[21]However, CBPP supporters acknowledged in 2019 that it was still a prototype of a new way of producing, and CBPP could not yet be considered a complete form of production by itself. According to Bauwens, Kostakis, and Pazaitis, CBPP "is currently a prototype since it cannot as yet fully reproduce itself outside of mutual dependence with capitalism."[21]: 6 They claim the market and state will not disappear if CBPP triumphs over traditional capitalism (i.e., if CBPP becomes "the dominant way of allocating the necessary resources for human self-reproduction").[21]: 5 Rather, the market and state will become instruments in service to maintaining the commons and the development of entrepreneurs that contribute to the commons, as well as helping more commoners become free to earn their living through giving to the commons.[21]: 17
A socio-economic shift pursued by CBPP will not be straightforward or lead to a utopia, but it could help solve some current issues. As with any economic transition, new problems will emerge and the transition will be complicated. However, proponents of CBPP argue that moving towards a CBPP production model would be a step forward for society.[21]CBPP is still a prototype of what a new way of production and society would look like, and it cannot yet separate itself completely from capitalism; CBPP proponents believe commoners should find innovative ways to become more autonomous from capitalism.[21]They also assert that, in a society led by commons, the market would continue to exist as in capitalism, but it would shift from being mainly extractive to being predominantly generative.[21]
Both scenarios, the extractive as well as the generative, can include elements based on peer-to-peer (P2P) dynamics, or social peer-to-peer processes. Therefore, peer production should not only be discussed as an opposing alternative to current forms of market organization, but also in terms of how both manifest in the organizations of today's economy. Four scenarios can be described along two axes: profit maximization versus the commons on one side, and centralized versus decentralized control over digital production infrastructure (such as networking technologies) on the other: netarchical capitalism, distributed capitalism, global commons, and localized commons. Each of them uses P2P elements to a different extent and thus leads to different outcomes:[22]
|
https://en.wikipedia.org/wiki/Commons-based_peer_production
|
Crowd computing is a form of distributed work in which tasks that are hard for computers to do are handled by large numbers of humans distributed across the internet.
It is an overarching term encompassing tools that enable idea sharing, non-hierarchical decision making and utilization of "cognitive surplus" - the ability of the world's population to collaborate on large, sometimes global projects.[1]Crowd computing combines elements of crowdsourcing, automation, distributed computing, and machine learning.
Prof. Rob Miller of MIT further defines crowd computing as “harnessing the power of people out in the web to do tasks that are hard for individual users or computers to do alone. Like cloud computing, crowd computing offers elastic, on-demand human resources that can drive new applications and new ways of thinking about technology.”[2]
The practice predates the internet. At the end of the 18th century, the British Royal Astronomers distributed spreadsheets by mail, asking the crowd to help them create maps of the stars and the seas. In the United States during the 1930s, the government employed hundreds of "human computers" to work on the WPA and, later, the Manhattan Project.[3]
The modern day microchip made using large crowds for mechanical computation less attractive in the second half of the twentieth century. However, as the volume of data online grew, it became clear to companies like Amazon and Google that there were some things humans were simply better at doing than machines.[4]
|
https://en.wikipedia.org/wiki/Crowd_computing
|
Crowdcastingis the combination ofbroadcastingandcrowdsourcing. The process of crowdcasting uses a combination of push and pull strategies first to engage an audience and build a network of participants and then harness the network for new insights. Those insights are then used to shape broadcast programming. These insights and concepts can include new product ideas, new service ideas, new branding messages, or even scientific breakthroughs. These insights are extracted from participants' submissions.
The 'push' aspects of crowdcasting involve a public announcement of a prize for a particular innovation, invention, achievement, or accomplishment (such as the announcement of theAnsari X-Prizein 1996). This stage of crowdcasting serves to engage aspecific target audienceusing compelling offerings or incentives as a call to action.
The 'pull' aspects of crowdcasting involve building andharnessinga community of passionate participants. Crowdcasting competitions have a viral effect, as interested participants refer others to the event. Once the community is built, it can be harnessed to provide fresh perspectives, ideas, insights, prototypes, or radical breakthrough innovations.InnoCentiveis an example; its challenges tap into a community of over 100,000 scientists who might provide that unexpected innovation.
Openpitch.com,[1]an upstart, has embraced the concept of crowdcasting to form a virtual advertising agency. The fundamental concept of crowdcasting—harnessing a specific, often expert, community of participants—separates OpenPitch from user-generated content (UGC) sites. Much like InnoCentive, OpenPitch does not share or post submissions to the overall community during development. Instead, these sites keep user submissions confidential, protecting the intellectual property rights of both the posting company and the solution provider. What is lost by not following a more open crowdsourcing model is gained by a policy that, arguably, attracts a more professional, dedicated user base.
Aside from the advertising space, the merger of crowdsourcing with broadcast programming has been largely unexplored. One of the first to launch a "crowdcasting" application allowing listeners to take control of a radio station is LDR / "Listener Driven Radio".[2]"Listener Driven Radio" is a software application that allows listeners to go online, or to their mobile phone, and offer their input into what plays next on the radio station. The program constantly absorbs listener input, song votes, and comments on music and automatically adapts radio station programming in real-time. Clear Channel Communications, Cox Media Group, CBS, Cumulus, Harvard Broadcasting, and many major broadcasters in the USA, Canada, and Europe are using Listener Driven Radio's technology to give audiences the ability to influence on-air programming.
Crowdcasting is also no longer confined to traditional broadcasting platforms, thanks to recent technological advances. Internet-based platforms, for instance, offer convenient and automated capabilities for collecting, storing, and analyzing data. This is demonstrated by the platform created by Salesforce for Starbucks: the crowdsourcing solution enables the coffee chain to source ideas from its customers through suggestions for improvements in its outlets.[3]The same strategy has been employed by companies such as Amazon, Philips, LG, and Forbes when they use CrowdSpring to search for new creative ideas.[3]Startups such as Elance also integrate crowdcasting into their operations as a value-added service.
Social media is another example of an Internet-based crowdcasting platform. This is the case when an organization uses it to enable stakeholders to self-organize as a crowd so that content about the organization can be produced and disseminated.[4]Here, the 'push' and 'pull' strategies are employed by engaging a community of stakeholders and building a network of participants ('push'), which is then harnessed to gain insights ('pull').[4]
John Seely-Brown and John Hagel III discuss the transition from 'push' to 'pull' innovation this way: "Rather than treating producers as passive consumers whose needs can be anticipated and shaped by centralized decision-makers, pull models treat people as networked creators even when they actually are customers purchasing goods and services.Pull platformsharness their participants’ passion, commitment, and desire to learn, thereby creating communities that can improvise and innovate rapidly."[5]
|
https://en.wikipedia.org/wiki/Crowdcasting
|
Crowdfixingis a specific way ofcrowdsourcing, in which people gather together to fix public spaces of thelocal community. The main aim is to fight against deterioration of public places. Crowdfixing actions include (but are not limited to) cleaningflashmobs, mowing, repairing structures, and removing unsafe elements.
Placemaking, a concept that originated in the 1960s and focused on the planning, management and design of public places, was the philosophical background to the crowdfixing movement. According to placemaking, in modern times all the resources needed to create community-friendly, enjoyable public spaces and keep them in good condition are available, but decision-making processes exclude citizens' preferences.
Crowdfixing promotes the idea of public spaces as belonging to the local community, in opposition to the concept of areas merely administrated and owned by theState.
Crowdfixing also tries to create better conditions for people to interact by providing them with online tools and mechanisms that allow them to organize the different stages required to fix public spaces, and by improving communication processes.
|
https://en.wikipedia.org/wiki/Crowdfixing
|
Crowdsourcing software development or software crowdsourcing is an emerging area of software engineering. It is an open call for participation in any task of software development, including documentation, design, coding and testing. These tasks are normally conducted by either members of a software enterprise or people contracted by the enterprise. But in software crowdsourcing, all the tasks can be assigned to or are addressed by members of the general public. Individuals and teams may also participate in crowdsourcing contests.[1]
Software crowdsourcing may have multiple goals.[2][3]
Quality software: Crowdsourcing organizers need to define specific software quality goals and their evaluation criteria. Quality software often comes from competent contestants who can submit good solutions for rigorous evaluation.
Rapid acquisition: Instead of waiting for software to be developed, crowdsourcing organizers may post a competition hoping that something identical or similar has been developed already. This is to reduce software acquisition time.
Talent identification: A crowdsourcing organizer may be mainly interested in identifying talents as demonstrated by their performance in the competition.
Cost reduction: A crowdsourcing organizer may acquire software at low cost by paying only a small fraction of the development cost as prize money, since part of the award may take the form of recognition.
Solution diversity: As teams will turn in different solutions for the same problem, the diversity in these solutions will be useful for fault-tolerant computing.
Ideas creation: One goal is to get new ideas from contestants and these ideas may lead to new directions.
Broadening participation: One goal is to recruit as many participants as possible to get best solution or to spread relevant knowledge.
Participant education: Organizers are interested in teaching participants new knowledge. One example is nonamesite.com, sponsored by DARPA to teach STEM (Science, Technology, Engineering, and Mathematics).
Fund leveraging: The goal is to stimulate other organizations to sponsor similar projects to leverage funds.
Marketing: Crowdsourcing projects can be used for brand recognition among participants.
A crowdsourcing support system needs to include:
1) Software development tools: requirement tools, design tools, coding tools, compilers, debuggers, IDEs, performance analysis tools, testing tools, and maintenance tools.
2) Project management tools: ranking, reputation, and award systems for products and participants.
3) Social network tools: allowing participants to communicate with and support each other.
4) Collaboration tools: for example, a blackboard platform where participants can see a common area and suggest ideas to improve the solutions presented there.
Social networks can provide communication, documentation, blogs, twitters, wikis, comments, feedbacks, and indexing.
Any phase of software development can be crowdsourced, and that phase can be requirements (functional, user interface, performance), design (algorithm, architecture), coding (modules and components), testing (including security testing, user interface testing, user experience testing), maintenance, user experience, or any combination of these.[4]
Existing software development processes can be modified to include crowdsourcing: 1) the waterfall model; 2) agile processes; 3) the model-driven approach; 4) the open-source approach; 5) the Software-as-a-Service (SaaS) approach, where service components can be published, discovered, composed, customized, simulated, and tested; and 6) formal methods, which can themselves be crowdsourced.
Crowdsourcing can be competitive or non-competitive. In competitive crowdsourcing, only selected participants win, and in highly competitive projects many contestants compete but few win. In the non-competitive mode, single individuals participate or multiple individuals collaborate to create software. The products produced can be cross-evaluated to ensure their consistency and quality and to identify talent, and the cross-evaluation itself can be carried out by crowdsourcing.
Items developed by crowdsourcing can be evaluated by crowdsourcing to assess the work produced, and the evaluation of that evaluation can in turn be crowdsourced to determine its quality.
Notable crowdsourcing processes include AppStori andTopcoderprocesses.
Pre-selection of participants is important for quality software crowdsourcing. In competitive crowdsourcing, a low-ranked participant should not compete against a high-ranked participant.
Software crowdsourcing platforms including Apple Inc.'s App Store, Topcoder, and uTest demonstrate the advantages of crowdsourcing in terms of software ecosystem expansion and product quality improvement. Apple's App Store is an online iOS application market, where developers can deliver their creative designs and products directly to smartphone customers. These developers are motivated to contribute innovative designs for both reputation and payment through the micro-payment mechanism of the App Store. In less than four years, Apple's App Store became a huge mobile application ecosystem with 150,000 active publishers that generated over 700,000 iOS applications. Around the App Store, there are many community-based, collaborative platforms serving as incubators for smartphone applications. For example, AppStori introduces a crowdfunding approach to build an online community for developing promising ideas for new iPhone applications. IdeaScale is another platform for software crowdsourcing.[5]
Another crowdsourcing example—Topcoder—creates a software contest model where programming tasks are posted as contests and the developer of the best solution wins the top prize. Following this model, Topcoder has established an online platform to support its ecosystem and gathered a virtual global workforce with more than 1 million registered members and nearly 50,000 active participants. All these Topcoder members compete against each other in software development tasks such as requirement analysis, algorithm design, coding, and testing.
TheTopcoderSoftware Development Process consists of a number of different phases, and within each phase there can be different competition types:[citation needed]
Each step can be a crowdsourcing competition.
BugFinders testing process:[6]
Game theoryhas been used in the analysis of various software crowdsourcing projects.[2]
Information theorycan be a basis for metrics.
Economic modelscan provide incentives for participation in crowdsourcing efforts.
Crowdsourced software development may follow different software engineering methodologies using different process models, techniques, and tools. It also has specific crowdsourcing processes involving unique activities such as bidding for tasks, allocating experts, evaluating quality, and integrating software.[citation needed]To support the outsourcing process and facilitate community collaboration, a platform is usually built to provide the necessary resources and services. For example, Topcoder follows the traditional software development process with competition rules embedded, while AppStori allows flexible processes, and the crowd may be involved in almost all aspects of software development including funding, project concepts, design, coding, testing, and evaluation.
Thereference architecturehence defines umbrella activities and structure for crowd-based software development by unifying best practices and research achievements. In general, the reference architecture will address the following needs:[citation needed]
Particularly, crowdsourcing is used to develop large and complex software in a virtualized, decentralized manner.Cloud computingis a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet). Many advantages are to be found when moving crowdsourcing applications to the cloud: focus on project development rather than on the infrastructure that supports this process, foster the collaboration between geographically distributed teams, scale resources to the size of the projects, work in a virtualized, distributed, and collaborative environment.
The demands on software crowdsourcing systems are ever evolving as new development philosophies and technologies gain favor. The reference architecture presented above is designed to encompass generality in many dimensions, including different software development methodologies, incentive schemes, and competitive/collaborative approaches. There are several clear research directions that could be investigated to enhance the architecture, such as data analytics, service-based delivery, and framework generalization. As systems grow, understanding how the platform is used becomes an important consideration; data regarding users, projects, and the interaction between the two can all be explored to investigate performance. These data may also provide helpful insights when developing tasks or selecting participants. Many of the components defined in the architecture are general purpose and could be delivered as hosted services; by hosting these services, the barriers to entry would be significantly reduced. Finally, through deployments of this architecture there is potential to derive a general-purpose framework that could be used for different software development crowdsourcing projects or, more widely, for other crowdsourcing applications. The creation of such frameworks has had transformative effects in other domains, for instance the predominant use of BOINC in volunteer computing.
Crowdsourcing in general is a multifaceted research topic. The use of crowdsourcing in software development is associated with a number of key tension points, or facets, which should be considered. At the same time, research can be conducted from the perspective of the three key players in crowdsourcing: the customer, the worker, and the platform.[7]
Task decomposition:
Coordination and communication:
Planning and scheduling:
Quality assurance: A software crowdsourcing process can be described as a game, in which one party tries to minimize an objective function while the other party tries to maximize the same objective function, as though both parties were competing in the game. For example, a specification team needs to produce quality specifications for the coding team to develop the code; the specification team will minimize the software bugs in the specification, while the coding team will identify as many bugs as possible in the specification before coding.
The min-max process is important as it is a quality assurance mechanism and often a team needs to perform both. For example, the coding team needs to maximize the identification of bugs in the specification, but it also needs to minimize the number of bugs in the code it produces.
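Under simplifying assumptions invented for this illustration, with a single scalar defect count B that depends on the specification s produced by the specification team and the level of scrutiny t applied by the coding team, the min-max relationship can be written as

B^{*}=\min_{s}\max_{t}B(s,t)

that is, the specification team chooses s so that even the most thorough scrutiny t uncovers as few defects as possible, while the coding team chooses t to uncover as many defects as it can.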
Bugcrowd showed that participants will follow the prisoner's dilemma to identify bugs for security testing.[8]
Knowledge and Intellectual Property:
Motivation and Remuneration:
There are the following levels of crowdsourcing:[citation needed]
Level 1: single persons, well-defined modules, small size, limited time span (less than 2 months), quality products, current development processes such as those used by Topcoder and uTest. At this level, coders are ranked, websites contain online repositories of crowdsourcing materials, software can be ranked by participants, and sites have communication tools such as wikis, blogs, and comments, as well as software development tools such as IDEs, testing, compilers, simulation, modeling, and program analysis.
Level 2: teams of people (< 10), well-defined systems, medium size, medium time span (3 to 4 months), adaptive development processes with intelligent feedback in a blackboard architecture. At this level, a crowdsourcing website may support adaptive and even concurrent development processes with intelligent feedback via the blackboard architecture; intelligent analysis of coders, software products, and comments; multi-phase software testing and evaluation; Big Data analytics; automated wrapping of software services into SaaS (Software-as-a-Service), annotated with an ontology and cross-referenced to DBpedia and Wikipedia; automated analysis and classification of software services; and ontology annotation and reasoning, such as linking services with compatible inputs and outputs.
Level 3: teams of people (< 100 and > 10), well-defined system, large systems, long time span (< 2 years), automated cross verification and cross comparison among contributions. A crowdsourcing website at this level may contain automated matching of requirements to existing components including matching of specification, services, and tests; automated regression testing.
Level 4: multinational collaboration of large and adaptive systems. A crowdsourcing website at this level may contain domain-oriented crowdsourcing with ontology, reasoning, and annotation; automated cross verification andtest generationprocesses; automated configuration of crowdsourcing platform; and may restructure the platform as SaaS with tenant customization.
Microsoft has crowdsourced parts of Windows 8 development. In 2011, Microsoft started blogs to encourage discussion among developers and the general public.[9]In 2013, Microsoft also started crowdsourcing its mobile devices for Windows 8.[10]In June 2013, Microsoft announced crowdsourced software testing by offering $100K for innovative techniques to identify security bugs, and $50K for a solution to an identified problem.[11]
In 2011 the United States Patent and Trademark Office launched a crowdsourcing challenge under the America COMPETES Act on the Topcoder platform to develop image-processing algorithms and software to recognize figure and part labels in patent documents, with a prize pool of $50,000 USD.[12]The contest resulted in 70 teams collectively making 1,797 code submissions. The solution of the contest winner achieved high accuracy in terms of recall and precision for the recognition of figure regions and part labels.[13]
Oracle uses crowdsourcing in their CRM projects.[14]
A software crowdsourcing workshop was held atDagstuhl, Germany in September 2013.[15]
|
https://en.wikipedia.org/wiki/Crowdsourcing_software_development
|
Human-based computation (HBC), human-assisted computation,[1]ubiquitous human computing or distributed thinking (by analogy to distributed computing) is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human-computer interaction. For computationally difficult tasks such as image recognition, human-based computation plays a central role in training Deep Learning-based Artificial Intelligence systems. In this case, human-based computation has been referred to as human-aided artificial intelligence.[2]
In traditional computation, a human employs a computer[3]to solve a problem: a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret.[4]Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem,[5]then collects, interprets, and integrates their solutions. This turns hybrid networks of humans and computers into "large scale distributed computing networks"[6][7][8]where code is partially executed in human brains and on silicon-based processors.
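A minimal sketch of this role reversal, assuming a console prompt stands in for the channel to a human worker (the toy classifier, labels, and confidence threshold below are invented for illustration):

def classify_automatically(item: str):
    """Toy stand-in for a machine classifier: returns (label, confidence)."""
    return ("cat", 0.95) if "whiskers" in item else ("unknown", 0.10)

def ask_human(item: str) -> str:
    """The 'human step': in a real system this would be a microtask sent to
    a crowd worker; here it is simply a console prompt."""
    return input(f"Please label this item: {item!r} > ").strip()

def classify(item: str, confidence_needed: float = 0.9) -> str:
    label, confidence = classify_automatically(item)
    if confidence >= confidence_needed:
        return label               # the machine handles the easy case itself
    return ask_human(item)         # the hard case is outsourced to a person

if __name__ == "__main__":
    print(classify("photo with whiskers and pointy ears"))
    print(classify("blurry photo of something in the garden"))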
Human-based computation (apart from the historical meaning of "computer") research has its origins in the early work on interactive evolutionary computation (EC).[9]The idea behind interactive evolutionary algorithms has been attributed to Richard Dawkins; in the Biomorphs software accompanying his book The Blind Watchmaker (Dawkins, 1986)[10]the preference of a human experimenter is used to guide the evolution of two-dimensional sets of line segments. In essence, this program asks a human to be the fitness function of an evolutionary algorithm, so that the algorithm can use human visual perception and aesthetic judgment to do something that a normal evolutionary algorithm cannot do. However, it is difficult to get enough evaluations from a single human if one wants to evolve more complex shapes. Victor Johnston[11]and Karl Sims[12]extended this concept by harnessing the power of many people for fitness evaluation (Caldwell and Johnston, 1991; Sims, 1991). As a result, their programs could evolve beautiful faces and pieces of art appealing to the public. These programs effectively reversed the common interaction between computers and humans: the computer is no longer an agent of its user, but instead a coordinator aggregating the efforts of many human evaluators. These and other similar research efforts became the topic of research in aesthetic selection or interactive evolutionary computation (Takagi, 2001); however, the scope of this research was limited to outsourcing evaluation and, as a result, it did not explore the full potential of outsourcing.
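The idea of the human acting as the fitness function can be sketched as follows. This is not Dawkins' Biomorphs code, only a simplified illustration: candidate "designs" are bare lists of numbers, and a person rates each candidate so that the highest-rated one becomes the parent of the next generation.

import random

def mutate(genome):
    """Small random change to one gene of a candidate design."""
    child = genome[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-0.5, 0.5)
    return child

def human_fitness(genome) -> float:
    """The human plays the fitness function: the candidate is shown
    (here just printed) and the person types a score from 0 to 10."""
    print("Candidate design parameters:", [round(g, 2) for g in genome])
    return float(input("Rate this candidate (0-10): "))

def interactive_evolution(generations: int = 5, offspring: int = 4):
    parent = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(generations):
        candidates = [parent] + [mutate(parent) for _ in range(offspring)]
        scores = [human_fitness(c) for c in candidates]
        parent = candidates[scores.index(max(scores))]   # human-selected survivor
    return parent

if __name__ == "__main__":
    print("Preferred design:", interactive_evolution())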
A concept of the automaticTuring testpioneered byMoni Naor(1996)[13]is another precursor of human-based computation. In Naor's test, the machine can control the access of humans and computers to a service by challenging them with anatural language processing(NLP) orcomputer vision(CV) problem to identify humans among them. The set of problems is chosen in a way that they have no algorithmic solution that is both effective and efficient at the moment. If it existed, such an algorithm could be easily performed by a computer, thus defeating the test. In fact, Moni Naor was modest by calling this an automated Turing test. Theimitation gamedescribed byAlan Turing(1950) didn't propose using CV problems. It was only proposing a specific NLP task, while the Naor test identifies and explores a largeclassof problems, not necessarily from the domain of NLP, that could be used for the same purpose in both automated and non-automated versions of the test.
Finally,Human-based genetic algorithm(HBGA)[14]encourages human participation in multiple different roles. Humans are not limited to the role of evaluator or some other predefined role, but can choose to perform a more diverse set of tasks. In particular, they can contribute their innovative solutions into the evolutionary process, make incremental changes to existing solutions, and perform intelligent recombination.[15]In short, HBGA allows humans to participate in all operations of a typicalgenetic algorithm. As a result of this, HBGA can process solutions for which there are no computational innovation operators available, for example, natural languages. Thus, HBGA obviated the need for a fixed representational scheme that was a limiting factor of both standard and interactive EC.[16]These algorithms can also be viewed as novel forms of social organization coordinated by a computer, according to Alex Kosorukoff and David Goldberg.[17]
Human-based computation methods combine computers and humans in different roles. Kosorukoff (2000) proposed a way to describe the division of labor in computation that groups human-based methods into three classes. Using the evolutionary computation model, four classes of computation can be described, three of which rely on humans in some role. The classification is in terms of the roles (innovation or selection) performed in each case by humans and computational processes. A third dimension defines whether the organizational function is performed by humans or a computer; here it is assumed to be performed by a computer.
Classes of human-based computation in this scheme can be referred to by two-letter abbreviations: HC, CH, HH. The first letter identifies the type of agent performing innovation, and the second letter specifies the type of agent performing selection. In some implementations (a wiki is the most common example), human-based selection functionality might be limited; this can be indicated with a lowercase h.
In different human-based computation projects people are motivated by one or more of the following.
Many projects have explored various combinations of these incentives. See more information about the motivation of participants in these projects in Kosorukoff[35]and Von Hippel.[36][37]
Viewed as a form of social organization, human-based computation often surprisingly turns out to be more robust and productive than traditional organizations.[38]The latter depend on obligations to maintain their more or less fixed structure, be functional and stable. Each of them is similar to a carefully designed mechanism with humans as its parts. However, this limits the freedom of their human employees and subjects them to various kinds of stresses. Most people, unlike mechanical parts, find it difficult to adapt to some fixed roles that best fit the organization. Evolutionary human-computation projects offer a natural solution to this problem. They adapt organizational structure to human spontaneity, accommodate human mistakes and creativity, and utilize both in a constructive way. This leaves their participants free from obligations without endangering the functionality of the whole, making people happier. There are still some challenging research problems that need to be solved before we can realize the full potential of this idea.
The algorithmic outsourcing techniques used in human-based computation are much more scalable than the manual or automated techniques traditionally used to manage outsourcing. It is this scalability that allows the effort to be easily distributed among thousands (or more) of participants. It was suggested recently that this mass outsourcing is sufficiently different from traditional small-scale outsourcing to merit a new name: crowdsourcing.[39]However, others have argued that crowdsourcing ought to be distinguished from true human-based computation.[40]Crowdsourcing does indeed involve the distribution of computation tasks across a number of human agents, but Michelucci argues that this is not sufficient for it to be considered human computation. Human computation requires not just that a task be distributed across different agents, but also that the set of agents across which the task is distributed be mixed: some of them must be humans, but others must be traditional computers. It is this mixture of different types of agents in a computational system that gives human-based computation its distinctive character. Some instances of crowdsourcing do indeed meet this criterion, but not all of them do.
Human computation organizes workers through a task market with APIs, task prices, and software-as-a-service protocols that allow employers or requesters to receive the data produced by workers directly into their IT systems. As a result, many employers attempt to manage workers automatically through algorithms rather than responding to workers on a case-by-case basis or addressing their concerns. Responding to workers is difficult to scale to the employment levels enabled by human computation microwork platforms.[41]Workers in the Mechanical Turk system, for example, have reported that human computation employers can be unresponsive to their concerns and needs.[42]
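The mechanics of such a task market can be sketched in code. The following Python fragment is purely hypothetical: the base URL, endpoint paths, field names, and authentication scheme are invented for illustration and do not correspond to the API of any real platform (including Mechanical Turk).

import json
from urllib import request

BASE_URL = "https://taskmarket.example.com/api"   # hypothetical platform

def post_task(description: str, price_usd: float, api_key: str) -> str:
    """Post a microtask at a given price and return its (hypothetical) task id."""
    payload = json.dumps({"description": description, "price": price_usd}).encode()
    req = request.Request(f"{BASE_URL}/tasks", data=payload,
                          headers={"Authorization": f"Bearer {api_key}",
                                   "Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["task_id"]

def fetch_results(task_id: str, api_key: str) -> list:
    """Pull worker-produced data straight into the requester's own system."""
    req = request.Request(f"{BASE_URL}/tasks/{task_id}/results",
                          headers={"Authorization": f"Bearer {api_key}"})
    with request.urlopen(req) as resp:
        return json.load(resp)["results"]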
Human assistance can be helpful in solving anyAI-completeproblem, which by definition is a task which is infeasible for computers to do but feasible for humans. Specific practical applications include:
Human-based computation has been criticized as exploitative and deceptive with the potential to undermine collective action.[45][46]
Insocial philosophyit has been argued that human-based computation is an implicit form of online labour.[47]The philosopherRainer Mühlhoffdistinguishes five different types of "machinic capture" of human microwork in "hybrid human-computer networks": (1) gamification, (2) "trapping and tracking" (e.g. CAPTCHAs or click-tracking in Google search), (3) social exploitation (e.g. tagging faces on Facebook), (4) information mining and (5) click-work (such as onAmazon Mechanical Turk).[48][49]Mühlhoff argues that human-based computation often feeds intoDeep Learning-basedArtificial Intelligencesystems, a phenomenon he analyzes as "human-aided artificial intelligence".
|
https://en.wikipedia.org/wiki/Distributed_thinking
|
Distributed Proofreaders (commonly abbreviated as DP or PGDP) is a web-based project that supports the development of e-texts for Project Gutenberg by allowing many people to work together in proofreading drafts of e-texts for errors. As of July 2024, the site had digitized 48,000 titles.[2][3][4][5]
Distributed Proofreaders was founded by Charles Franks in 2000 as an independent site to assistProject Gutenberg.[6]Distributed Proofreaders became an official Project Gutenberg site in 2002.
On 8 November 2002, Distributed Proofreaders was slashdotted,[7][8]and more than 4,000 new members joined in one day, causing an influx of new proofreaders and software developers, which helped to increase the quantity and quality of e-text production. In July 2015, the 30,000th Distributed Proofreaders-produced e-text was posted to Project Gutenberg. DP-contributed e-texts comprised more than half of the works in Project Gutenberg as of July 2015.
On 31 July 2006, the Distributed Proofreaders Foundation was formed to provide Distributed Proofreaders with its own legal entity and not-for-profit status. IRS approval of section 501(c)(3) status was granted retroactive to 7 April 2006.
Public domain works, typically books with expired copyright, are scanned by volunteers or sourced from digitization projects, and the images are run through optical character recognition (OCR) software. Since OCR software is far from perfect, many errors often appear in the resulting text. To correct them, pages are made available to volunteers via the Internet; the original page image and the recognized text appear side by side.[9]This process thereby distributes the time-consuming error-correction process, akin to distributed computing.
Each page is proofread and formatted several times, and then a post-processor combines the pages and prepares the text for uploading to Project Gutenberg.
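A toy sketch of this round-based workflow, in Python. This is not Distributed Proofreaders' actual software; the number of rounds and the simulated corrections are invented for illustration only.

def proofread(page_text: str, round_number: int) -> str:
    """Stand-in for one volunteer correcting one page against its image.
    In the real workflow a person edits the OCR text side by side with the
    scanned page; here we only simulate a trivial cleanup per round."""
    if round_number == 1:
        return page_text.replace("Tbe", "The")   # fix a typical OCR misreading
    return page_text.strip()                     # later round: tidy whitespace

def produce_etext(ocr_pages: list, rounds: int = 2) -> str:
    pages = ocr_pages
    for r in range(1, rounds + 1):               # each page passes through several rounds
        pages = [proofread(p, r) for p in pages]
    return "\n".join(pages)                      # post-processing: combine the pages

print(produce_etext(["Tbe morning sun rose.  ", "It was a quiet morning. "]))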
Besides custom software created to support the project, DP also runs a forum and a wiki for project coordinators and participants.
In January 2004, Distributed Proofreaders Europe started, hosted by Project Rastko, Serbia.[10]This site had the ability to process text in Unicode UTF-8 encoding. Books proofread centered on European culture, with a considerable proportion of non-English texts including Hebrew, Arabic, Urdu, and many others. As of October 2013, DP Europe had produced 787 e-texts, the last of these in November 2011.
The original DP is sometimes referred to as "DP International" by members of DP Europe. However, DP servers are located in the United States, and therefore works must be cleared by Project Gutenberg as being in thepublic domainaccording to U.S.copyrightlaw before they can be proofread and eventually published at DP.
In December 2007,Distributed Proofreaders Canadalaunched to support the production of e-books forProject Gutenberg Canadaand take advantage of shorterCanadian copyrightterms. Although it was established by members of the original Distributed Proofreaders site, it is a separate entity. All its projects are posted toFaded Page, their book archive website. In addition, it supplies books to Project Gutenberg Canada (which launched onCanada Day2007) and (where copyright laws are compatible) to the original Project Gutenberg.
In addition to preservingCanadiana, DP Canada is notable because it is the first major effort to take advantage of Canada's copyright laws which may allow more works to be preserved. Unlike copyright law in some other countries, Canada has a "life plus 50" copyright term. This means that works by authors who died more than fifty years ago may be preserved in Canada, whereas in other parts of the world those works may not be distributed because they are still under copyright.
Notable authors whose works may be preserved in Canada but not in other parts of the world includeClark Ashton Smith,Dashiell Hammett,Ernest Hemingway,Carl Jung,A. A. Milne,Dorothy Sayers,Nevil Shute,Walter de la Mare,Sheila Kaye-SmithandAmy Carmichael.
On 9 March 2007, Distributed Proofreaders announced the completion of more than 10,000 titles. In celebration, a collection of fifteen titles was published:
On April 10, 2011, the 20,000th book milestone was celebrated as a group release of bilingual books:[18]
On 7 July 2015, the 30,000th book milestone was celebrated with a group of thirty texts. One was numbered 30,000:[19]
|
https://en.wikipedia.org/wiki/Distributed_Proofreaders
|
Aflash mob(orflashmob)[1]is a group of people that assembles suddenly in a public place, performs for a brief time, then quickly disperses, often for the purposes of entertainment, satire, and/or artistic expression.[2][3][4]Flash mobs may be organized viatelecommunications,social media, orviral emails.[5][6][7][8][9]
The term, coined in 2003, is generally not applied to events and performances organized for the purposes of politics (such as protests),commercial advertisement,publicity stuntsthat involvepublic relationfirms, or paid professionals.[7][10][11]In these cases of a planned purpose for the social activity in question, the termsmart mobsis often applied instead.
The term "flash rob" or "flash mob robberies", a reference to the way flash mobs assemble, has been used to describe a number of robberies and assaults perpetrated suddenly by groups of teenage youth.[12][13][14]Bill Wasik, originator of the first flash mobs, and a number of other commentators have questioned or objected to the usage of "flash mob" to describe criminal acts.[14][15]Flash mobs have also been featured in some Hollywood movie series, such asStep Up.[16]
The first flash mobs were created inManhattanin 2003, byBill Wasik, senior editor ofHarper's Magazine.[7][9][17]The first attempt was unsuccessful after the targeted retail store was tipped off about the plan for people to gather.[18]Wasik avoided such problems during the first successful flash mob, which occurred on June 17, 2003, atMacy'sdepartment store, by sending participants to preliminary staging areas—in four Manhattan bars—where they received further instructions about the ultimate event and location just before the event began.[19]
More than 130 people converged upon the ninth-floor rug department of the store, gathering around an expensive rug. Anyone approached by a sales assistant was advised to say that the gatherers lived together in a warehouse on the outskirts of New York, that they were shopping for a "love rug", and that they made all their purchase decisions as a group.[20]Subsequently, 200 people flooded the lobby and mezzanine of theHyatthotel in synchronized applause for about 15 seconds, and a shoe boutique inSoHowas invaded by participants pretending to be tourists on a bus trip.[9]
Wasik claimed that he created flash mobs as asocial experimentdesigned to poke fun athippiesand to highlight the cultural atmosphere ofconformityand of wanting to be an insider or part of "the next big thing".[9]The Vancouver Sunwrote, "It may have backfired on him ... [Wasik] may instead have ended up giving conformity a vehicle that allowed it to appear nonconforming."[21]In another interview he said "the mobs started as a kind of playful social experiment meant to encourage spontaneity and big gatherings to temporarily take over commercial and public areas simply to show that they could".[22]
In 1973, the story "Flash Crowd" byLarry Nivendescribed a concept similar to flash mobs.[23]With the invention of popular and very inexpensiveteleportation, an argument at a shopping mall—which happens to be covered by a news crew—quickly swells into a riot. In the story, broadcast coverage attracts the attention of other people, who use the widely available technology of the teleportation booth to swarm first that event—thus intensifying the riot—and then other events as they happen. Commenting on the social impact of such mobs, one character (articulating the police view) says, "We call them flash crowds, and we watch for them." In related short stories, they are named as a prime location for illegal activities (such as pickpocketing and looting) to take place.Lev Grossmansuggests that the story title is a source of the term "flash mob".[24]
Flash mobs began as a form ofperformance art.[18]While they started as an apolitical act, flash mobs may share superficial similarities to politicaldemonstrations. In the 1960s, groups such as the Yippies used street theatre to expose the public to political issues.[25]Flash mobs can be seen as a specialized form ofsmart mob,[7]a term and concept proposed by authorHoward Rheingoldin his 2002 bookSmart Mobs: The Next Social Revolution.[26]
The first documented use of the termflash mobas it is understood today was in 2003 in a blog entry posted in the aftermath of Wasik's event.[17][19][27]The term was inspired by the earlier termsmart mob.[28]
Flash mob was added to the 11th edition of the Concise Oxford English Dictionary on July 8, 2004, which noted it as an "unusual and pointless act", separating it from other forms of smart mobs such as performances, protests, and other gatherings.[3][29]Also recognized are the noun derivatives flash mobber and flash mobbing.[3]Webster's New Millennium Dictionary of English defines flash mob as "a group of people who organize on the Internet and then quickly assemble in a public place, do something bizarre, and disperse."[30]This definition is consistent with the original use of the term; however, both news media and promoters have subsequently used the term to refer to any form of smart mob, including political protests;[31]a collaborative Internet denial-of-service attack;[32]a collaborative supercomputing demonstration;[33]and promotional appearances by pop musicians.[34]The press has also used the term flash mob to refer to a practice in China where groups of shoppers arrange online to meet at a store in order to drive a collective bargain.[35]
In 19th-centuryTasmania, the termflash mobwas used to describe a subculture consisting of female prisoners, based on the termflash languagefor the jargon that these women used. The 19th-century Australian termflash mobreferred to a segment of society, not an event, and showed no other similarities to the modern termflash mobor the events it describes.[36]
The city ofBraunschweig(Brunswick), Germany, has stopped flash mobs by strictly enforcing the already existing law of requiring a permit to use any public space for an event.[37]In the United Kingdom, a number of flash mobs have been stopped over concerns for public health and safety.[38]TheBritish Transport Policehave urged flash mob organizers to "refrain from holding such events at railway stations".[39]
Referred to asflash robs,flash mob robberies, orflash robberiesby the media, crimes organized by teenage youth using social media rose to international notoriety beginning in 2011.[12][13][14][40]TheNational Retail Federationdoes not classify these crimes as "flash mobs" but rather "multiple offender crimes" that utilize "flash mob tactics".[41][42]In a report, the NRF noted, "multiple offender crimes tend to involve groups or gangs of juveniles who already know each other, which does not earn them the term 'flash mob'."[42]Mark Leary, a professor ofpsychologyandneuroscienceatDuke University, said that most "flash mob thuggery" involves crimes of violence that are otherwise ordinary, but are perpetrated suddenly by large, organized groups of people: "What social media adds is the ability to recruit such a large group of people, that individuals who would not rob a store or riot on their own feel freer to misbehave without being identified."[43]
It's hard for me to believe that these kids saw some YouTube video of people Christmas caroling in a food court, and said, 'Hey, we should do that, except as a robbery!' More likely, they stumbled on the simple realization (like I did back in 2003, but like lots of other people had before and have since) that one consequence of all this technology is that you can coordinate a ton of people to show up in the same place at the same time.
These kids are taking part in what's basically ameme. They heard about it from friends, and probably saw it on YouTube, and now they're getting their chance to participate in it themselves.
HuffPost raised the question of whether "the media was responsible for stirring things up", and added that in some cases local authorities did not confirm the use of social media, making the "use of the term flash mob questionable".[15]Amanda Walgrove wrote that criminals involved in such activities do not refer to themselves as "flash mobs", but that this use of the term is nonetheless appropriate.[44]Dr. Linda Kiltz drew similar parallels between flash robs and the Occupy Movement, stating, "As the use of social media increases, the potential for more flash mobs that are used for political protest and for criminal purposes is likely to increase."[45]
|
https://en.wikipedia.org/wiki/Flash_mob
|
Gamificationis the process of enhancing systems, services, organisations and activities through the integration ofgame designelements and principles in non-game contexts. The goal is to increaseuser engagement,motivation,competitionandparticipationthrough the use of game mechanics such aspoints, badges, leaderboards andrewards.[1][2][3]It is a component of system design, and it commonly employsgamedesign elements[4][2][5][6][3]to improve user engagement,[7][8][9]organizational productivity,[10]flow,[11][12][13]learning,[14][15]crowdsourcing,[16]knowledge retention,[17]employee recruitmentandevaluation,usability, usefulness of systems,[13][18][19]physical exercise,[20]tailored interactions and icebreaker activities indating apps,[21][22][23][24]traffic violations,[25]voter apathy,[26][27]public attitudes about alternative energy,[28]and more. A collection of research on gamification shows that a majority of studies on gamification find it has positive effects on individuals.[29][5]However, individual and contextual differences exist.[30]
Gamification can be achieved using different game mechanics and elements which can be linked to 8 core drives when using theOctalysisframework.[31]
Gamification techniques are intended to leverage people's evolved desires for socializing, learning, mastery, competition, achievement, status, self-expression,altruism, or closure, or simply their response to theframingof a situation as game or play.[32]Early gamification strategies userewardsfor players who accomplish desired tasks orcompetitionto engage players. Types of rewards include points,[33]achievement badges or levels,[34]the filling of a progress bar,[35]or providing the user with virtual currency.[34]Making the rewards for accomplishing tasks visible to other players or providing leader boards are ways of encouraging players to compete.[36]
Another approach to gamification is to make existing tasks feel more like games.[37]Some techniques used in this approach include adding meaningful choice, onboarding with a tutorial, increasing challenge,[38]and adding narrative.[37]
Game elements are the basic building blocks of gamification applications.[39][40]Typical game design elements include points, badges, leaderboards, performance graphs, meaningful stories, avatars, and teammates.[41]According to Chou, the Octalysis framework shows that experience points (XP), badges, and progress indicators can significantly enhance user engagement and productivity in business learning programs.[42]
Points are basic elements of a multitude of games and gamified applications.[43]They are typically rewarded for the successful accomplishment of specified activities within the gamified environment[44]and they serve to numerically represent a player's progress.[45]Various kinds of points can be differentiated between, e.g. experience points, redeemable points, or reputation points, as can the different purposes that points serve.[10]One of the most important purposes of points is to provide feedback. Points allow the players' in-game behavior to be measured, and they serve as continuous and immediate feedback and as a reward.[46]
Badges are defined as visual representations ofachievements[44]and can be earned and collected within the gamification environment. They confirm the players' achievements, symbolize their merits,[47]and visibly show their accomplishment of levels or goals.[48]Earning a badge can be dependent on a specific number of points or on particular activities within the game.[44]Badges have many functions, serving as goals, if the prerequisites for winning them are known to the player, or as virtual status symbols.[44]In the same way as points, badges also provide feedback, in that they indicate how the players have performed.[49]Badges can influence players' behavior, leading them to select certain routes and challenges in order to earn badges that are associated with them.[50]Additionally, as badges symbolize one's membership in a group of those who own this particular badge, they also can exert social influences on players and co-players,[47]particularly if they are rare or hard to earn.
Leaderboards rank players according to their relative success, measuring them against a certain success criterion.[51]As such, leaderboards can help determine who performs best in a certain activity[52]and are thus competitive indicators of progress that relate the player's own performance to the performance of others. However, the motivational potential of leaderboards is mixed. Werbach and Hunter[44]regard them as effective motivators if there are only a few points left to the next level or position, but as demotivators, if players find themselves at the bottom end of the leaderboard. Competition caused by leaderboards can create social pressure to increase the player's level of engagement and can consequently have a constructive effect on participation and learning.[53]However, these positive effects of competition are more likely if the respective competitors are approximately at the same performance level.[54][55]
Performance graphs, which are often used in simulation or strategy games, provide information about the players' performance compared to their preceding performance during a game.[41]Thus, in contrast to leaderboards, performance graphs do not compare the player's performance to other players, but instead, evaluate the player's own performance over time. Unlike the social reference standard of leaderboards, performance graphs are based on an individual reference standard. By graphically displaying the player's performance over a fixed period, they focus on improvements. Motivation theory postulates that this fosters mastery orientation, which is particularly beneficial to learning.[41]
Meaningful stories are game design elements that do not relate to the player's performance. The narrative context in which a gamified application can be embedded contextualizes activities and characters in the game and gives them meaning beyond the mere quest for points and achievements.[56]A story can be communicated by a game's title (e.g.,Space Invaders) or by complex storylines typical of contemporary role-playing video games (e.g.,The Elder Scrolls Series).[56]Narrative contexts can be oriented towards real, non-game contexts or act as analogies of real-world settings. The latter can enrich boring, barely stimulating contexts, and, consequently, inspire and motivate players particularly if the story is in line with their personal interests.[57]As such, stories are also an important part in gamification applications, as they can alter the meaning of real-world activities by adding a narrative 'overlay', e.g. being hunted by zombies while going for a run.
Avatars are visual representations of players within the game or gamification environment.[44] Usually, they are chosen or even created by the player.[56] Avatars can be designed quite simply as a mere pictogram, or they can be complexly animated, three-dimensional representations. Their main formal requirement is that they unmistakably identify the players and set them apart from other human or computer-controlled avatars.[44] Avatars allow the players to adopt or create another identity and, in cooperative games, to become part of a community.[58]
Teammates, whether they are other real players or virtual non-player characters, can induce conflict, competition or cooperation.[56]The latter can be fostered particularly by introducing teams, i.e. by creating defined groups of players that work together towards a shared objective.[44]Meta-analytic evidence supports that the combination of competition and collaboration in games is likely to be effective for learning.[59]
The described game elements fit within a broader framework, which involves three types of elements:dynamics,mechanics, andcomponents. These elements constitute the hierarchy of game elements.[60]
Dynamics are the highest in the hierarchy. They are the big picture aspects of the gamified system that should be considered and managed; however, they never directly enter into the game. Dynamics elements provide motivation through features such as narrative or social interaction.
Mechanics are the basic processes that drive the action forward and generate player engagement and involvement. Examples are chance, turns, and rewards.
Components are the specific instantiations of mechanics and dynamics; elements like points, quests, and virtual goods.[44]
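For illustration, the hierarchy can be sketched as a small data model, with concrete components such as points, badges, and a leaderboard implementing reward and competition mechanics that in turn support higher-level dynamics like progression and social comparison. This is a minimal sketch, not drawn from the cited framework's own materials; all names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    points: int = 0
    badges: set = field(default_factory=set)

    def award_points(self, amount: int) -> None:
        # Component: points, implementing a reward mechanic.
        self.points += amount

    def award_badge(self, badge: str) -> None:
        # Component: badge, a visible marker of achievement.
        self.badges.add(badge)

def leaderboard(players):
    """Component: leaderboard, ranking players by points (a competition dynamic)."""
    return sorted(players, key=lambda p: p.points, reverse=True)

# Usage: award points and a badge, then rank the players.
alice, bob = Player("alice"), Player("bob")
alice.award_points(120)
bob.award_points(80)
alice.award_badge("first-quest")
print([(p.name, p.points) for p in leaderboard([alice, bob])])
```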
Gamification has been applied to almost every aspect of life. Examples of gamification in a business context include the U.S. Army, which uses the military simulator America's Army as a recruitment tool, and M&M's "Eye Spy" pretzel game, launched in 2013 to amplify the company's pretzel marketing campaign by creating a fun way to "boost user engagement." Another example can be seen in the American education system. Students are ranked in their class based on their earned grade-point average (GPA), which is comparable to earning a high score in video games.[61] Students may also receive incentives, such as an honorable mention on the dean's list, the honor roll, and scholarships, which are equivalent to leveling up a video game character or earning virtual currency or tools that augment game success.
Job application processes sometimes use gamification as a way to hire employees by assessing their suitability through questionnaires and mini games that simulate the actual work environment of that company.
Gamification has been widely applied in marketing. Over 70% of Forbes Global 2000 companies surveyed in 2013 said they planned to use gamification for the purposes of marketing and customer retention.[62] For example, in November 2011, the Australian broadcast and online media partnership Yahoo!7 launched its Fango mobile app/SAP, which TV viewers use to interact with shows via techniques like check-ins and badges. Gamification has also been used in customer loyalty programs. In 2010, Starbucks gave custom Foursquare badges to people who checked in at multiple locations, and offered discounts to people who checked in most frequently at an individual store.[63] As a general rule, gamification marketing (or game marketing) falls under four primary categories:
1. Brandification (in-game advertising): Messages, images, or videos promoting a brand, product, or service within a game's visual components. According to NBC News, game creator Electronic Arts used "Madden 09" and "Burnout Paradise" to display in-game billboards encouraging players to vote.[64]
2. Transmedia: The result of taking a media property and extending it into a different medium for both promotional and monetisation purposes. Nintendo's "007: GoldenEye" is a classic example: a video game created to advertise the film of the same name. In the end, the promotional game brought in more money than the film itself.
3. Through-the-line (TTL) & Below-the-line (BTL): Advertising images or text placed above, beside, or below the main game screen (also known as an iFrame). An example of this would be "I love Bees".
4. Advergames: Games usually based on popular mobile game templates, such as 'Candy Crush' or 'Temple Run'. These games are recreated via platforms like WIX with software from the likes of Gamify in order to promote brands, products, and services, usually to encourage engagement, loyalty, and product education. They often involve social leaderboards and rewards that are advertised via social media platforms such as Facebook's Top 10 games.[65]
Gamification also has been used as a tool forcustomer engagement,[66]and for encouraging desirable website usage behaviour.[35]Additionally, gamification is applicable to increasing engagement on sites built onsocial network services. For example, in August, 2010, the website builder DevHub announced an increase in the number of users who completed their online tasks from 10% to 80% after adding gamification elements.[67]On the programmingquestion-and-answer siteStack Overflowusers receive points and/or badges for performing a variety of actions, including spreading links to questions and answers viaFacebookandTwitter. A large number of different badges are available, and when a user'sreputation pointsexceed various thresholds, the user gains additional privileges, eventually including moderator privileges.
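A minimal sketch of this kind of threshold-based privilege scheme follows; the threshold values and privilege names are illustrative assumptions, not Stack Overflow's actual figures.

```python
# Hypothetical reputation thresholds mapped to privileges (illustrative values only).
PRIVILEGE_THRESHOLDS = [
    (15, "vote up"),
    (125, "vote down"),
    (2000, "edit posts"),
    (10000, "moderation tools"),
]

def privileges_for(reputation: int) -> list:
    """Return every privilege whose threshold the user's reputation has reached."""
    return [name for threshold, name in PRIVILEGE_THRESHOLDS if reputation >= threshold]

print(privileges_for(2500))   # ['vote up', 'vote down', 'edit posts']
```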
Gamification can be used forideation(structuredbrainstormingto produce new ideas). A study atMIT Sloanfound that ideation games helped participants generate more and better ideas, and compared it to gauging the influence of academic papers by the numbers of citations received in subsequent research.[68]Incorporating game mechanics such as leaderboards and rewards in these platforms can further encourage participation and foster collaboration, ultimately enhancing the ideation process.
Applications likeFitocracyandQUENTIQ(Dacadoo) use gamification to encourage their users to exercise more effectively and improve their overall health. Users are awarded varying numbers of points for activities they perform in their workouts, and gain levels based on points collected. Users can also complete quests (sets of related activities) and gain achievement badges for fitness milestones.[69]Health Month adds aspects ofsocial gamingby allowing successful users to restore points to users who have failed to meet certain goals. Public health researchers have studied the use of gamification in self-management of chronic diseases and common mental disorders,[70][71]STD prevention,[72][73]and infection prevention and control.[74]
In a review of health apps in the 2014 AppleApp Store, more than 100 apps showed a positive correlation between gamification elements used and high user ratings.MyFitnessPalwas named as the app that used the most gamification elements.[75]
Reviewers of the popularlocation-based gamePokémon Gopraised the game for promoting physical exercise. Terri Schwartz (IGN) said it was "secretly the best exercise app out there," and that it changed her daily walking routine.[76]Patrick Allen (Lifehacker) wrote an article with tips about how to work out usingPokémon Go.[77]Julia Belluz(Vox) said it could be the "greatest unintentional health fad ever," writing that one of the results of the game that the developers may not have imagined was that "it seems to be getting people moving."[78]One study showed users took an extra 194 steps per day once they started using the app, approximately 26% more than usual.[79]Ingressis a similar game that also requires a player to be physically active.Zombies, Run!,a game in which the player is trying to survive azombie apocalypsethrough a series of missions, requires the player to (physically) run, collect items to help the town survive, and listen to various audio narrations to uncover mysteries. Mobile, context-sensitiveserious gamesfor sports and health have been calledexergames.[80]
Gamification has been used in an attempt to improve employee productivity in healthcare, financial services, transportation, government,[81][82]and others.[83]In general, enterprise gamification refers to work situations where "game thinking and game-based tools are used in a strategic manner to integrate with existing business processes or information systems. And these techniques are used to help drive positive employee and organizational outcomes."[9]Gamification can enhance employee engagement, motivation, and skill development by incorporating elements such as challenges, progress tracking, and rewards. However, gamification can also build resentment and drive unsafe personal behavior in the workplace, such as workers skipping bathroom breaks.[84]
Crowdsourcinghas been gamified in games likeFoldit, a game designed by theUniversity of Washington, in which players compete to manipulate proteins into more efficient structures. A 2010 paper in science journalNaturecredited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions.[85]TheESP Gameis a game that is used to generate image metadata. Google Image Labeler is a version of the ESP Game thatGooglehas licensed to generate its own image metadata.[86]Research from theUniversity of Bonnused gamification to increase wiki contributions by 62%.[87]
In the context of online crowdsourcing, gamification is also employed to improve the psychological and behavioral outcomes for solvers.[88] According to numerous studies, adding gamification components to a crowdsourcing platform can be seen as a design choice that shifts participants' focus from task completion to involvement motivated by intrinsic factors.[89][90] Since the success of crowdsourcing competitions depends on a large number of participating solvers, crowdsourcing platforms provide motivating factors to increase participation by drawing on game concepts.[91]
Gamification in the context of education and training is of particular interest because it offers a variety of benefits associated with learning outcomes and retention.[92][93][94][95] Using video game-inspired elements like leaderboards and badges has been shown to be effective in engaging large groups and providing objectives for students to achieve outside of traditional norms like grades or verbal feedback. Online learning platforms such as Khan Academy and even physical schools like the New York City Department of Education's Quest to Learn use gamification to motivate students to complete mission-based units and master concepts.[96][97] There is also an increasing interest in the use of gamification in health sciences and education as an engaging information delivery tool and in order to add variety to revision.[98][99][100] A 2016 study found that gamification can help students learn more effectively, especially when they are motivated by curiosity or enjoyment of the learning itself. One study found that students who were more intrinsically motivated tended to benefit more from gamified learning, while those focused mainly on external rewards did not respond as strongly.[101]
With increased access toone-to-onestudent devices, and accelerated by pressure from theCOVID-19 pandemic, many teachers from primary to post-secondary settings have introduced live, onlinequiz-show style gamesinto their lessons.[102]
Gamification has also been used to promote learning outside of schools. In August 2009,Gbangalaunched a game for theZurich Zoowhere participants learned about endangered species by collecting animals inmixed reality. Companies seeking to train their customers to use their product effectively can showcase features of their products with interactive games like Microsoft'sRibbon Hero 2.[103][104]
A wide range of employers, including the United States Armed Forces, Unilever, and SAP, currently use gamified training modules to educate their employees and motivate them to apply what they learned in training to their jobs.[82][105][106] According to a study conducted by Badgeville, 78% of workers are utilizing games-based motivation at work and nearly 91% say these systems improve their work experience by increasing engagement, awareness, and productivity.[107] In the form of occupational safety training, technology can provide realistic and effective simulations of real-life experiences, making safety training less passive, more engaging, more flexible in terms of time management, and a cost-effective alternative to practice.[108][109][110][111][112] The combined use of virtual reality and gamification can provide more effective solutions in terms of knowledge acquisition and retention when compared with traditional training methods.[113][114]
Alix Levine, an Americansecurityconsultant, reports that some techniques that a number of extremist websites such asStormfrontand various terrorism-related sites used to build loyalty and participation can be described as gamification. As an example, Levine mentioned reputation scores.[115][116]
The Chinese government has announced that it will begin using gamification to rate its citizens in 2020, implementing aSocial Credit Systemin which citizens will earn points representing trustworthiness. Details of this project are still vague, but it has been reported that citizens will receive points for good behavior, such as making payments on time and educational attainments.[117]
Bellingcatcontributor Robert Evans has written about the "gamification of terror" in the wake of theEl Paso shooting, in an analysis of the role8Chanand similarboardsplayed in inspiring the massacre, as well as other acts ofterrorismand mass shootings.[118]According to Evans, "[w]hat we see here is evidence of the only real innovation 8chan has brought to global terrorism: the gamification of mass violence. We see this not just in the references to "high scores", but in the very way theChristchurch shootingwas carried out. Brenton Tarrantlivestreamedhis massacre from a helmet cam in a way that made the shooting look almost exactly like aFirst Person Shootervideo game. This was a conscious choice, as was his decision to pick asound-trackfor the spree that would entertain and inspire his viewers."[118]
Traditionally, researchers thought of motivations to use computer systems to be primarily driven by extrinsic purposes; however, many modern systems have their use driven primarily by intrinsic motivations.[119]Examples of such systems used primarily to fulfill users' intrinsic motivations, include online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, and so on. Such systems are excellent candidates for further 'gamification' in their design. Moreover, even traditional management information systems (e.g., ERP, CRM) are being 'gamified' such that both extrinsic and intrinsic motivations must increasingly be considered.
As illustration, Microsoft has announced plans to use gamification techniques for itsWindows Phone 7operating system design.[120]While businesses face the challenges of creating motivating gameplay strategies, what makes for effective gamification[121]is a key question.
One important type of technological design in gamification is player-centered design. Based on the design methodology of user-centered design, its main goal is to promote greater connectivity and positive behavior change among technology consumers. It has five steps that help computer users connect with other people online to accomplish goals and other tasks they need to complete. The five steps are: an individual or company has to know their player (their target audience), identify their mission (their goal), understand human motivation (the personality, desires, and triggers of the target audience), apply mechanics (points, badges, leaderboards, etc.), and manage, monitor, and measure the way the mechanics are being used to ensure they are helping achieve the desired outcome and that the goal is specific and realistic.[122]
Gamification has also been applied toauthentication. Games have been proposed as a way for users to learn new and more complicated passwords.[123]Gamification has also been proposed as a way to select and manage archives.[124]
The merging of gambling and gamification referred to as "gamblification" has been used to some extent by online casinos.[125]Some brands use an incremental reward system to extend the typical player lifecycle and to encourage repeat visits and cash deposits at the casino in return for rewards such as free spins and cash match bonuses on subsequent deposits.[126]
The term "gamification" first appeared online in the context of computer software in 2008.[127][a]Gamification did not gain popularity until 2010.[131][132]Even prior to the term coming into use, other fields borrowing elements fromvideogameswas common; for example, some work inlearning disabilities[133]andscientific visualizationadapted elements from videogames.[134]
The term "gamification" first gained widespread usage in 2010, in a more specific sense referring to incorporation of social/reward aspects of games into software.[135]The technique captured the attention of venture capitalists, one of whom said he considered gamification the most promising area in gaming.[136]Another observed that half of all companies seeking funding for consumer software applications mentioned game design in their presentations.[35]
Several researchers consider gamification closely related to earlier work on adapting game-design elements and techniques to non-game contexts. Deterdinget al.[2]survey research inhuman–computer interactionthat uses game-derived elements for motivation and interface design, and Nelson[137]argues for a connection to both theSovietconcept ofsocialist competition, and the American management trend of "fun at work". Fuchs[138]points out that gamification might be driven by new forms ofludic interfaces. Gamification conferences have also retroactively incorporated simulation; e.g.Will Wright, designer of the 1989video gameSimCity, was the keynote speaker at the gamification conference Gsummit 2013.[139]
In October 2007,Bunchballwas the first company to provide game mechanics as a service, onDunder Mifflin Infinity, the community site for the NBC TV showThe Office.[140][141]Badgeville, which offered gamification services, launched in late 2010, and raised $15 million in venture-capital funding in its first year of operation.[142]
Gabe Zichermanncoined "funware" as an alternative term for gamification.[143][144][145]
Gamification as an educational andbehavior modificationtool reached thepublic sectorby 2012, when theUnited States Department of Energyco-funded multipleresearchtrials,[146]including consumer behavior studies,[147]adapting the format ofProgrammed learninginto mobilemicrolearningto experiment with the impacts of gamification in reducing energy use.[148]
Gamification 2013, an event exploring the future of gamification, was held at theUniversity of Waterloo Stratford Campusin October 2013.[149]
Through gamification's growing adoption and its nature as a data aggregator, multiple legal restrictions may apply to gamification. Some refer to the use of virtual currencies and virtual assets, data privacy laws and data protection, or labor laws.[150]
The use of virtual currencies, in contrast to traditional payment systems, is not regulated. The legal uncertainty surrounding the virtual currency schemes might constitute a challenge for public authorities, as these schemes can be used by criminals, fraudsters and money launderers to perform their illegal activities.[151]
A March 2022 consultation paper by the Board of the International Organization of Securities Commissions (IOSCO) questions whether some gamification tactics should be banned.[152]
University of HamburgresearcherSebastian Deterdinghas characterized the initial popular strategies for gamification as not being fun and creating an artificial sense of achievement. He also says that gamification can encourage unintended behaviours.[153]
Poorly designed gamification in the workplace has been compared toTaylorism, and is considered a form ofmicromanagement.[154]
In a review of 132 of the top health and fitness apps in the Apple app store, in 2014, using gamification as a method to modify behavior, the authors concluded that "Despite the inclusion of at least some components of gamification, the mean scores of integration of gamification components were still below 50 percent. This was also true for the inclusion of game elements and the use ofhealth behavior theory constructs, thus showing a lack of following any clear industry standard of effective gaming, gamification, or behavioral theory in health and fitness apps."[75]
Concern was also expressed in a 2016 study analyzing outcome data from 1,298 users who competed in gamified and incentivized exercise challenges while wearing wearable devices. In that study the authors conjectured that data may be highly skewed by cohorts of already healthy users, rather than the intended audiences of participants requiring behavioral intervention.[155]
Game designers likeJon Radoffand Margaret Robertson have also criticized gamification as excluding elements like storytelling and experiences and using simple reward systems in place of true game mechanics.[156][157]
Gamification practitioners[158][159] have pointed out that while the initial popular designs mostly relied on a simplistic reward approach, even those led to significant improvements in short-term engagement.[160] This was supported by the first comprehensive study in 2014, which concluded that an increase in gamification elements correlated with an increase in motivation score, but not with capacity or opportunity/trigger scores.[75][161]
The same study called for standardization across the app industry on gamification principles to improve the effectiveness of health apps on the health outcomes of users.[75]
MIT ProfessorKevin Slavinhas described business research into gamification as flawed and misleading for those unfamiliar with gaming.[162]Heather Chaplin, writing inSlate, describes gamification as "an allegedly populist idea that actually benefits corporate interests over those of ordinary people".[163]Jane McGonigalhas distanced her work from the label "gamification", listing rewards outside of gameplay as the central idea of gamification and distinguishing game applications where the gameplay itself is the reward under the term "gameful design".[164]
"Gamification" as a term has also been criticized.Ian Bogosthas referred to the term as a marketing fad and suggested "exploitation-ware" as a more suitable name for the games used in marketing.[165]Other opinions on the terminology criticism have made the case why the term gamification makes sense.[166]
In an article by theLA Times, the gamification of worker engagement atDisneylandwas described as an "electronic whip".[167]Workers had reported feeling controlled and overworked by the system.
|
https://en.wikipedia.org/wiki/Gamification
|
Government crowdsourcingis a form ofcrowdsourcingemployed by governments to better leverage their constituents' collective knowledge and experience.[1]It has tended to take the form of public feedback, project development, or petitions in the past, but has grown to include public drafting of bills and constitutions, among other things.[2]This form of public involvement in the governing process differs from older systems of popular action, from town halls to referendums, in that it is primarily conducted online or through a similar IT medium.[2]
Various thinkers, including but not limited to Daren Brabham,Beth Noveck, andHelene Landemore, have each presented their own definitions of what crowdsourcing as a whole, and government crowdsourcing by extension, necessarily entails, but there has been no consensus thus far.[3][4]Governments which have adopted crowdsourcing as a method of information gathering, policy guidance, and in some cases a vital part of the lawmaking process include Brazil, Finland, Iceland, Egypt, Tunisia, and the United States, among many others.[3][5][6][7][8]
Though, in the past, direct democracy employed many of the same mechanisms as government crowdsourcing, and indeed could resemble it at times, the objectives direct democracy tended to pursue and the methods it used set it apart. Crowdsourcing requires a specific goal, rewards to be gained by both the crowd and the government, and an open request for anyone to participate, and it must be conducted through the internet or some other IT medium.[9] Programs which truly resembled pre-Internet government crowdsourcing did not emerge until the 17th and 18th centuries. These were government-sponsored competitions such as the Alkali prize, which led to the development of the Leblanc Process, and the Longitude Prize.
Other early precursors to government crowdsourcing followed the same model of reaching out in search of an expert or inventor capable of solving the problem at hand, via what Brabham refers to as the "broadcast-search model". The American government also distributed large tasks among American sailors, such as mapping wind and current patterns along trade routes, delegating the discovery of existing knowledge to a crowd in a model similar to what is now referred to as "distributed human intelligence tasking".[10][11]
Finally, the FrenchCahiers de doléanceswere the most directly governmental example of pre-internet government crowdsourcing. In the lead up to the French Revolution, the three estates listed their grievances and made suggestions to improve the government.[12]These problems, complaints, and proposals were considered and debated through the sessions of the Estates general.[13]Similar efforts to hear the grievances and suggestions of subjects, and later constituents, can be seen at the heart of various political institutions around the world, such as town halls in the United States, petitions in the United Kingdom and beyond.
Today, government crowdsourcing follows similar models, albeit utilizing the internet, mobile phones, and other IT mediums. Around the world, governments continue to use competitions, delegate large tasks, and reach out to their constituents for feedback both on specific bills and for guidance on various policies. They have also begun, in countries and regions as different as Brazil, British Columbia, and Finland, to draw not only feedback, expertise, and ideas from the crowd, but also specific provisions and wordings for legislation and even constitutions.[14][7] These initiatives, which give participating constituents greater leeway to develop their ideas, deliberate, and be heard, mirror what Brabham describes as "peer-vetted creative crowdsourcing".[15]
The established models persist, but attempts at crowd law and crowdsourced constitutions have met with mixed results. In some cases, the crowdsourcing platform has been purely suggestive, as in British Columbia, and their proposal ultimately put to a vote. Generally, once those crowdsourced suggestions entered the political system, they were either voted down or buried under mountains of procedural delay.[5][16]In other rarer cases, the efforts succeeded and the crowdsourced laws were adopted, though in somewhat amended form, as happened with theBrazilian Internet Bill of Rightsand in theConstitutional Convention (Ireland). Other efforts to crowdsource government are still in process, or have only been tested on a limited scale, leaving their long-term results still uncertain.[17]
In several post-revolutionary moments, notably those of Tunisia, Iceland, and Egypt, the new governments have also used crowdsourcing as a means of securing legitimacy and addressing the issues which brought them into power in the first place. This method of constitution making has been met with a limited level of success. The Icelandic constitution, crowdsourced in the aftermath of the 2009 Icelandic financial crisis protests, utilized a multi-layered crowdsourcing structure, with an elected council at the top synthesizing suggestions into the new constitution, a popular forum for the crowd to make their voices heard, and a Constitutional committee to organize the other parts.[5] By contrast, the Egyptian and Tunisian constitutional processes involved the crowd much less, with an official constitutional assembly composed of traditional political elites who simply received feedback on their draft clauses online.
Today, there is still no definitive definition of crowdsourcing, and even less agreement as to what aspects of participatory democracy fit under that description. Most academics and writers tend to create their own definitions. The most expansive definition of crowdsourcing, drawn from a survey of thousands of papers on the topic, describes it as something for which:
(a) there is a clearly defined crowd;
(b) there exists a task with a clear goal;
(c) the recompense received by the crowd is clear;
(d) the crowd-sourcer is clearly identified;
(e) the compensation to be received by the crowd-sourcer is clearly defined;
(f) it is an online assigned process of participative type;
(g) it uses an open call of variable extent;
(h) it uses the internet[9]
Other definitions are less exhaustive and more case-specific. Helene Landemore, following a similar line of thought as Brabham, defines crowdsourcing as an online problem-solving and production model in which an undefined crowd helps to complete a task by submitting knowledge, information, or talent. In its unrefined form, there is no accountability mechanism, and the crowd, unlike contractors in an outsourcing system, is not vetted.[5]She further distinguishes it from Wikipedia-esque "commons-based peer production", noting that crowdsourcing generally takes the form of individuals commenting and making suggestions in a vacuum as opposed to deliberating, discussing, and collaborating with one another.[18][5][19]
Brabham also makes a subtle distinction, which makes his definition substantially more restrictive. For an aspect of participatory government to be crowdsourcing, it cannot be purely a government-driven sequence of responses from the crowd. Both sides must engage in direction and deliberation on the project.[20] Under this definition, certain elements of governmental participation, such as the Peer to Patent project, are not crowdsourcing at all, but rather similar processes which have been agglomerated under a new and trendy name.
Mabhoudi, on the other hand, defines it purely in terms of constitution making. His conception of crowdsourcing consists of posting a draft constitution online, utilizing both official websites and social media pages. Those platforms were used for feedback, commentary, recommendations, and possibly final approval. Left out of his definition, but noted in his description of the process of Egyptian constitution making is the use of committees to document, digest, and synthesize the volumes of online feedback into a more readable form, which served an indispensable role in the success, however temporary, of the crowdsourcing process.[21]
Noveck refers to government crowdsourcing as "collaborative democracy".[22]She defines that as a process of using technology to improve government outcomes by soliciting experience from groups of self-selected peers working together in groups of open networks.[22]Her version of government crowdsourcing more closely resembles the government competitions of the 17th and 18th centuries, or Amazon's Mechanical Turk, than the more participatory model favored by other thinkers.[23]That is, crowds gathered to work on an agenda set out by the government,[24]pooling their diverse expertise to cover every possible field of information.
Open Opportunities is a government wide program offering professional development opportunities to current federal employees and internships to students. The program facilitates collaboration and knowledge sharing across the Federal Government.[25]
Open Opportunities is for federal employees looking to gain additional experience, and students looking for internships. The program offers a wide variety of real world projects that promote experiential learning.[25]
TheVirtual Student Foreign Serviceis a crowd-work and eIntern program for college students.[26]
TheOffice of eDiplomacyis developing an internal crowdsourcing platform called CrowdWork that will facilitate collaborative work worldwide. Any office or mission will be able to post tasks online and any State Department employee with the requisite skills will be able to respond and complete the task. This creates an internal marketplace for foreign affairs work and matches State Department opportunities and requirements with untapped skills and experience. The platform is currently under development with an anticipated launch in December 2013. This crowd-work platform will be developed as part of an innovation toolkit for the U.S. Government.[27]This project is developed through a partnership between the State Department office ofeDiplomacy,[28]the White House Office of Science and Technology Policy, the General Services Administration and the State Department Office of the Director General. The crowd work element of the innovation toolkit will be developed as an open-source platform for public use.[29]
TheState Department Sounding Boardis an internal ideation tool for employees to suggest improvements to the Department.
The Humanitarian Information Unit (HIU),[30]a division within the Office of the Geographer and Global Issues at the U.S. Department of State, is working to increase the availability of spatial data in areas experiencing humanitarian emergencies. Built from a crowdsourcing model, the new "Imagery to the Crowd" process publishes high-resolution commercial satellite imagery, purchased by the United States Government, in a web-based format that can be easily mapped by volunteers.[31]The digital map data generated by the volunteers are stored in a database maintained byOpenStreetMap(OSM), a UK-registered non-profit foundation, under a license that ensures the data are freely available and open for a range of uses.
Inspired by the success of the OSM mapping effort after the2010 Haiti earthquake, the Imagery to the Crowd process harnesses the combined power of satellite imagery and the volunteer mapping community to help aid agencies provide informed and effective humanitarian assistance, and plan recovery and development activities. The HIU partners with the Humanitarian OpenStreetMap Team (HOT) on many of the Imagery to the Crowd projects. HOT provides volunteer support and access to its micro-tasking platform, the OSM Tasking Manager, which coordinates volunteer efforts by breaking down large mapping tasks into smaller areas that can be digitized in 45–60 minutes. A 5-minute Imagery to the Crowd Ignite talk is available.[32]
The Bureau of Education and Cultural Affairs partnered with RocketHub on the Alumni Engagement and Innovation Fund (AEIF) 2.0 which helps exchange program alumni crowdsource funding for their innovative projects.
The Special Advisor for Civil Society and Emerging Democracies created Diplomacy Lab as a way for the Department to crowdsource longer-term projects from the academic community.
Human Resource's Flex Connect program facilitates crowdsourcing of talent from across the Department.
eDiplomacy is also piloting a State Department GitHub Account to collaborate on open source software projects.
The Bureau of Arms Control, Verification and Compliance also ran a crowdsourced Arms Control Challenge examining how technology could aid arms control efforts.
In June 2012, USAID launched the Agency's first-ever crowdsourcing initiative to pinpoint the location of USAID Development Credit Authority (DCA) loan data and make the dataset publicly available. Crowdsourcing is a distributed problem-solving process whereby tasks are outsourced to a network of people known as "the crowd".
The engagement of the Crowd was an innovative way to process data and increase the transparency of the Agency. Visualizing where USAID enhances the capacity of the private sector can signal new areas for potential collaboration with host countries, researchers, development organizations, and the public. A case study explains the organizational, legal, and technical steps behind making these data open.[33]
CDCology[34]is a way for CDC staff to post unclassified, one-minute to one-day long (micro)tasks that can be solved by undergraduate, graduate, and post-graduate student volunteers. This expands the agency's workforce and relieves staff to focus on in-depth assignments. In return, universities can offer students short challenges directly impacting national initiatives and gain experience with government work. This allows students to bring fresh ideas to smaller projects and they can add "micro-volunteering for CDC" to their résumé.[35]
Skills Marketplace is a pilot project launched by the Administrator's office that allows micro-details to other offices and projects.[36]
Challenge.gov is an online challenge platform administered by the U.S. General Services Administration (GSA) in partnership with Challenge Post that empowers the U.S. Government and the public to bring the best ideas and top talent to bear on our nation's most pressing challenges. This platform is the latest milestone in the Administration's commitment to use prizes and challenges to promote innovation.[37]
Idea Factory empowers the Transportation Security Administration's large and dispersed workforce to submit and collaborate on innovative ideas to improve TSA and keep the nation's transportation systems secure.[38]
Digital Volunteers is a crowd-work platform for transcribing historical documents.[39]
eMammal is a crowdsourced collection of images and data on North American mammal populations.[40][41]
The National Archives uses digital volunteers to identify signatures in documents and index the archive.[42]
Remember Me is a project to post 1100 pictures of children displaced during the Jewish Holocaust to identify these children, piece together information about their wartime and postwar experiences, and facilitate renewed connections among these young survivors, their families, and other individuals who were involved in their care during and after the war.[43]
The World Memory Project aims to digitize the records of victims of the JewishHolocaust.[44]
CoCreate is a crowdsourced effort to identify and solve real life soldier challenges.
The US Navy crowdsourced solutions to Somali piracy via its Massive Multiplayer Online Wargame Leveraging the Internet.[45] It is a message-based game to encourage innovative thinking by many people, connected via the Web. It has been used to study a number of topics, such as how the Navy can prepare for the future of energy in 2021 and beyond.
TheUnited States Patent and Trademark Office (USPTO)has been working to integrate crowdsourcing into the patent process in a number of ways. The first key step was the introduction of thePeer to Patentprogram at the application review stage. Peer to Patent was designed to allow members of the public with relevant technical skills and information to submit information useful to the patent examiner when assessing the claims of pending patent applications. The first pilot Peer to Patent program, limited to software and business methods applications, was launched in 2007 and remained open through 2009.[46]After reviewing the results, a second expanded pilot was launched in 2010, extending coverage to include biotechnology, bioinformatics, telecommunications, and speech recognition.[47]
The USPTO was further encouraged to implement crowdsourcing by the Obama Administration's Executive Action on Crowdsourcing Prior Art.[48]The order was created with the goal of improving patent quality overall and fostering innovation by integrating new ways for companies, experts, and the general public to find and submit "prior art" or evidence needed to ascertain if an invention is novel.[49]
Having tested crowdsourcing during the application review process, the USPTO turned to explore applications for crowdsourcing during post-grant patent review. To do this, the USPTO consulted experts in the government and private sector to share their expertise through two roundtable discussions. The Roundtable on the Use of Crowdsourcing and Third-Party Preissuance Submissions to Identify Relevant Prior Art, took place April 10, 2014. Presenters at the roundtable included Andrea Casillas and Christopher Wong sharing their experience with Peer to Patent; Micah Siegel from Ask Patents, a patent crowdsourcing project throughstack exchange; Pedram Sameni from Patexia, a crowdsourcing platform focused on patent research and prior art searching; and Cheryl Milone of Article One Partners.[50]
The USPTO held their second roundtable on the use of crowdsourcing to identify relevant prior art on December 2, 2014. This roundtable focused on two key questions: (1) how the USPTO can utilize crowdsourcing tools to obtain relevant prior art in order to enhance the quality of examination and issued patents; and (2) ways the USPTO can leverage existing private sector solutions for the electronic receipt and hosting of crowdsourced materials as a means to provide prior art to examiners.[51]Speakers at the roundtable included Matt Levy (Computer & Communications Industry Association), Mark Nowotarski (Markets, Patents & Alliances LLC), Cheryl Milone (Article One Partners), and Pedram Sameni (Patexia Inc.).
The USPTO continues to consider additional applications for crowdsourcing following the roundtables and public comments submitted in response. As part of that goal, they are working with Christopher Wong, who was appointed to be thePresidential Innovation Fellowsupporting crowdsourcing and patent reform initiatives for the USPTO.[52]
The Patient Feedback Challenge is about getting the NHS to use feedback from patients to improve services, by spreading the best approaches already out there. Ideas are crowdsourced, and then bid on by NHS organizations.[53]
Since 2010 the FCO has made the Human Rights and Democracy Report available online and available for markup by NGOs, policy makers, academics and the general public. The comments are forwarded to the relevant policy teams for evaluation and to respond to accordingly.[54]
|
https://en.wikipedia.org/wiki/Government_crowdsourcing
|
Below is a list of projects that rely oncrowdsourcing. See alsoopen innovation.
|
https://en.wikipedia.org/wiki/List_of_crowdsourcing_projects
|
Collaborative tagging, also known as social tagging orfolksonomy, allows users to apply publictagsto online items, typically to make those items easier for themselves or others to find later. It has been argued that these tagging systems can provide navigational cues or "way-finders" for other users to explore information.[1][2]The notion is that given that social tags are labels users create to represent topics extracted from online documents, the interpretation of these tags should allow other users to predict the contents of different documents efficiently. Social tags are arguably more important inexploratory search, in which the users may engage in iterative cycles of goal refinement and exploration of new information (as opposed to simple fact-retrievals), and interpretation of information contents by others will provide useful cues for people to discover topics that are relevant.
One significant challenge that arises in social tagging systems is the rapid increase in the number and diversity of tags. As opposed to structured annotation systems, tags provide users an unstructured, open-ended mechanism to annotate and organizeweb content. As users are free to create any tag to describe any resource, it leads to what is referred to as the vocabulary problem.[3]Because users may use different words to describe the same document or extract different topics from the same document based on their own background knowledge, the lack of any top-down mediation may lead to an increase in the use of incoherent tags to represent the information resources in the system. In other words, the lack of structure inherent in social tags may hinder their potential as navigational cues for searchers because the diversities of users and their motivation may lead to diminishing tag-topic relations as the system grows. However, a number of studies have shown that structures do emerge at the semantic level – indicating that there are cohesive forces driving the emergent structures in a social tagging system.[4]
Just like anysocial phenomena,behavioral patternsin social tagging systems can be characterized by either adescriptiveorpredictive model. While descriptive models ask the question of "what", predictive models go deeper to also ask the question of "why" by attempting to provide explanations for the aggregate behavioral patterns.[5]While there may be no general agreement on what an acceptable explanation should be like, many believe that a good explanation should have a certain level of predictive accuracy.
Descriptive models typically are not concerned with explaining the actions of individuals. Instead, they focus on describing the patterns that emerge as individual behavior is aggregated in a large social information system. Predictive models, however, attempt to explain aggregate patterns by analyzing how individuals interact and link to each other in ways that bring about similar or different emergent patterns of social behavior. In particular, a mechanism-based predictive model assumes a certain set of rules governing how individuals interact with each other, and understand how these interactions could produce aggregate patterns as observed and characterized by descriptive models. Predictive models can therefore provide explanations to why different system characteristics may lead to different aggregate patterns, and can therefore potentially provide information on how systems should be designed to achieve different social purposes.
For most tagging systems, the total number of objects being tagged far exceeds the total number of tags in the collective vocabulary. If a single tag in this system is specified, many documents would match, so that using single tags cannot effectively isolate any one document. However, some documents are more popular or important than others, which is reflected in the number of bookmarks per document. Thus, the focus should be on how well the mapping of tags to documents retains information about the distribution of the documents.Information theoryprovides a framework to understand the amount of shared information between two random variables. Theconditional entropymeasures the amount of entropy remaining in one random variable when the value of a second random variable is known.
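Concretely, writing p(d, t) for the probability that a randomly chosen bookmark pairs document d with tag t, the conditional entropy of documents given tags is

```latex
H(D \mid T) \;=\; -\sum_{t \in T} \sum_{d \in D} p(d, t)\,\log_2 p(d \mid t)
            \;=\; \sum_{t \in T} p(t)\, H(D \mid T = t)
```

A low value means that knowing a document's tags largely pins down which document is meant, while a high value means the tags leave most of the uncertainty about the document unresolved.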
A 2008 paper byEd Chiand Todd Mytkowicz showed that the entropy of documents conditional on tags, H(D|T), is increasing rapidly.[6]This suggests that, even after knowing completely the value of a tag, the entropy of the set of documents is increasing over time. Conditional entropy asks the question: "Given that a set of tags is known, how much uncertainty remains regarding the document set referenced by those tags?" This curve is strictly increasing, which suggests that the specificity of any given tag is decreasing. As a navigation aid, tags are becoming harder and harder to use, and a single tag will gradually reference too many documents to be considered useful.
Another approach is through mutual information, a measure of independence between two variables. Full independence is reached when I(D;T) = 0. Chi and Mytkowicz's research shows that as a measure of usefulness of tags and their encoding, there is a worsening trend in the ability of users to specify and find tags and documents when they are engaged in simple fact retrieval.[6] This suggests that search and recommendation systems should be built to help users sift through resources in social tagging systems, especially when they are engaged in activities beyond fact retrieval, as characterized by information theory. Although the number of documents associated with any given tag is increasing, there are many ways contextual information can help users to look for relevant information. This is one of the major weaknesses of the simple information-theoretic approach in explaining the usefulness of tags – it ignores how humans can extract meanings from a set of tags assigned to a document. For example, a 2007 paper showed that while the number of tags is increasing, the general growth pattern is scale-free – the general distribution of tag-tag co-occurrences follows a power law.[7]
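A minimal sketch of how these quantities can be estimated from raw bookmarking data follows; the function name and toy data are invented for illustration, and probabilities are simple co-occurrence frequencies.

```python
from collections import Counter
from math import log2

def tag_document_information(bookmarks):
    """Estimate H(D), H(D|T), and I(D;T) from (document, tag) bookmark pairs."""
    pair_counts = Counter(bookmarks)                      # joint counts n(d, t)
    n = sum(pair_counts.values())
    doc_counts = Counter()                                # marginal counts n(d)
    tag_counts = Counter()                                # marginal counts n(t)
    for (doc, tag), c in pair_counts.items():
        doc_counts[doc] += c
        tag_counts[tag] += c

    h_d = -sum((c / n) * log2(c / n) for c in doc_counts.values())
    # H(D|T) = -sum_{d,t} p(d,t) * log2 p(d|t), with p(d|t) = n(d,t) / n(t)
    h_d_given_t = -sum(
        (c / n) * log2(c / tag_counts[tag])
        for (doc, tag), c in pair_counts.items()
    )
    mutual_info = h_d - h_d_given_t                       # I(D;T) = H(D) - H(D|T)
    return h_d, h_d_given_t, mutual_info

# Toy usage: three documents bookmarked with overlapping tags.
bookmarks = [("d1", "python"), ("d1", "code"), ("d2", "python"),
             ("d2", "web"), ("d3", "web"), ("d3", "recipes")]
print(tag_document_information(bookmarks))
```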
The same paper also found that the characteristics of this scale-free distribution depend on thesemanticsof the tag – tags that are semantically general (e.g.,blogs) tend to co-occur with many tags, while semantically narrow tags (e.g.,Ajax) tend to co-occur with few tags across a wide set of documents in a social tagging system.[7]This suggests that the assumption of the information theory approach is too simple – when taking into account the semantics of the set of tags assigned to documents, the predictive value of tags on contents of documents is relatively stable. This finding is important for development ofrecommender systems– discovering these higher-level semantic patterns is important in helping people find relevant information.
Despite this potential vocabulary problem, research has found that at the aggregate level, tagging behavior seemed relatively stable, and that the tag choice proportions seemed to be converging rather than diverging. While these observations provided evidence against the proposed vocabulary problem, they also initiated research investigating how and why tag proportions tended to converge over time.
One explanation for the stability was that there was an inherent propensity for users to "imitate" word use of others as they create tags. This propensity may act as a form of social cohesion that fosters the coherence of tag-topic relations in the system, and leads to stability in the system.[8]It was shown that thestochasticurn model created in 1923[9]was useful in explaining how simple imitation behavior at the individual level could explain the converging usage patterns of tags.[8]Specifically, the convergence of tag choices was simulated by a process in which a colored ball was randomly selected from an urn, then replaced in the urn along with an additional ball of the same color, simulating the probabilistic nature of tag reuse. This simple model, however, does not explain why certain tags would be "imitated" more often than others, and therefore cannot provide a realistic mechanism for tag choices and how social tags could be used as navigational cues during exploratory search.
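A minimal simulation sketch of this kind of urn process is shown below; parameter values and tag names are arbitrary, and drawing uniformly from the history of past assignments is equivalent to drawing a tag with probability proportional to how often it has already been used.

```python
import random
from collections import Counter

def urn_model_tags(steps, new_tag_prob=0.1, seed=0):
    """Simulate tag choices with a simple urn-style imitation process.

    With probability `new_tag_prob` a brand-new tag is invented; otherwise a
    past tag assignment is drawn uniformly from the history, so tags used more
    often are proportionally more likely to be reused (rich-get-richer).
    """
    rng = random.Random(seed)
    history = []                        # every past tag assignment ("balls in the urn")
    next_tag_id = 0
    for _ in range(steps):
        if not history or rng.random() < new_tag_prob:
            tag = f"tag{next_tag_id}"
            next_tag_id += 1
        else:
            tag = rng.choice(history)   # reuse an existing ball's colour
        history.append(tag)
    return history

# A handful of tags quickly dominate, mimicking the observed convergence.
print(Counter(urn_model_tags(1000)).most_common(5))
```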
Research based on data from the social bookmarking websiteDel.icio.ushas shown that collaborative tagging systems exhibit a form ofcomplex systems(orself-organizing) dynamics.[10]Furthermore, although there is no central, controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources has been shown to converge over time to a stable, power-law distribution.[10]Once such stable distributions form, the correlations between different tags can be used to construct simplefolksonomygraphs, which can be partitioned to obtain a form of community or shared vocabularies.[11]Such vocabularies can be seen as emerging from the decentralized actions of many users – a form ofcrowdsourcing.
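As a rough sketch of how such a folksonomy graph can be derived, the code below links tags that co-occur on enough documents and partitions the graph into connected components; the co-occurrence threshold and the use of connected components as "shared vocabularies" are simplifying assumptions, not the partitioning method used in the cited work.

```python
from collections import defaultdict
from itertools import combinations

def folksonomy_components(doc_tags, min_cooccurrence=2):
    """Build a tag co-occurrence graph and return its connected components.

    `doc_tags` maps each document to the set of tags assigned to it; two tags
    are linked if they co-occur on at least `min_cooccurrence` documents.
    """
    cooc = defaultdict(int)
    for tags in doc_tags.values():
        for a, b in combinations(sorted(tags), 2):
            cooc[(a, b)] += 1

    graph = defaultdict(set)
    for (a, b), count in cooc.items():
        if count >= min_cooccurrence:
            graph[a].add(b)
            graph[b].add(a)

    seen, components = set(), []
    for tag in graph:
        if tag in seen:
            continue
        stack, comp = [tag], set()
        while stack:                      # depth-first traversal of one component
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Toy usage: two clusters of tags emerge as separate shared vocabularies.
docs = {
    "d1": {"python", "code", "programming"},
    "d2": {"python", "code"},
    "d3": {"recipes", "cooking"},
    "d4": {"recipes", "cooking", "baking"},
}
print(folksonomy_components(docs))   # e.g. [{'code', 'python'}, {'cooking', 'recipes'}]
```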
The memory-based Yule-Simon (MBYS) model[7] attempts to explain tag choices by a stochastic process. It was found that the temporal order of tag assignment influences users' tag choices. Similar to the stochastic urn model, the MBYS model assumes that at each step, a tag would be randomly sampled: with probability p the sampled tag was new, and with probability 1 − p the sampled tag was copied from existing tags. When copying, the probability of selecting a tag was assumed to decay with time, and this decay function was found to follow a power-law distribution. Thus, tags that were more recently used had a higher probability of being reused.
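The sampling rule described above can be sketched as follows; the parameter values are arbitrary, and the exact form of the recency-weighted decay is a simplification of the fitted power law rather than the published model's estimates.

```python
import random
from collections import Counter

def mbys_tags(steps, p_new=0.1, decay=1.5, seed=0):
    """Minimal sketch of a memory-based Yule-Simon (MBYS) style process.

    With probability `p_new` a new tag is introduced; otherwise an earlier tag
    is copied, with the probability of copying the tag used k steps ago
    proportional to k ** -decay, so recently used tags are favoured.
    """
    rng = random.Random(seed)
    history = []
    next_tag_id = 0
    for t in range(steps):
        if not history or rng.random() < p_new:
            tag = f"tag{next_tag_id}"
            next_tag_id += 1
        else:
            # Weight each past position by a power-law decay in its recency.
            weights = [(t - i) ** -decay for i in range(len(history))]
            tag = rng.choices(history, weights=weights, k=1)[0]
        history.append(tag)
    return history

print(Counter(mbys_tags(1000)).most_common(5))
```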
One major finding was that semantically general tags (e.g., "blog") generally co-occurred more frequently with other tags than semantically narrower tags (e.g., "Ajax"), and this difference could be captured by the decay function of tag reuse in their model.[7]Specifically, it was found that a slower decay parameter (when the tag is reused more often) could explain the phenomenon that semantically general tags tended to co-occur with a larger set of tags. In other words, it was argued that the "semantic breadth" of a tag could be modeled by a memory decay function, which could lead to different emergent behavioral patterns in a tagging system.[7]
Descriptive models were based on analyses of word-word relations as revealed by the various statistical structures in the organization of tags (e.g., how likely one tag would co-occur with other tags or how likely each tag was reused over time). Thus, these models are descriptive models at the aggregate level, and have little to offer about predictions at the level of an individual's interface interactions and cognitive processes.
Rather than imitating other users at the word level, one possible explanation for this kind of social cohesion could be grounded on the natural tendency for people to process tags at the semantic level, and it was at this level of processing that most of the imitation occurred. This explanation was supported by research in the area ofreading comprehension, which showed that during comprehension, people tended to be influenced by meanings of words rather than the words themselves.[12]Assuming that people in the same culture tend to have shared structures – such as using similar vocabularies and their corresponding meanings to conform and communicate, users of the same social tagging system may also share similar semantic representations of words and concepts, even when the use of tags may vary across individuals at the word level. As such, part of the reason for the stability of social tagging systems can be attributed to the shared semantic representations among the users, such that users may have relatively stable and coherent interpretation of information contents and tags as they interact with the system. Based on this assumption, the semantic imitation model predicts how different semantic representations may lead to differences in individual tag choices and eventually different emergent properties at the aggregate behavioral level.[13][14]The model also predicts that the folksonomies in the system reflect the shared semantic representations of the users.
Semantic imitation has important implications to the general vocabulary problem in information retrieval andhuman–computer interaction– the creation of a large number of diverse tags to describe the same set of information resources. Semantic imitation implies that the unit of communication among users is more likely at the semantic level rather than the word level. Thus, although there may not be strong coherence in the choice of words in describing a resource, at the semantic level, there seems to be a stronger coherence force that guides the convergence of descriptive indices. This is in sharp contrast to conclusions derived based on a purely information-theoretical approach, which assumes that humans search and evaluate information at the word level. Instead, the process of semantic imitation in social tagging implies that the information-theoretic approach is at most incomplete, as it does not take into account the basic unit of human information processing. Similar to the fact that human communication occurs at the semantic level, the fact that people may use different words or syntax does not affect the effectiveness of communication, so long as the underlying "common ground" between two people is the same.[15]
In the social tagging case, as long as users share a similar understanding of the contents of the information resources, the fact that the information value of tag–document relations decreases (that humans have more words in their languages) does not imply that it will always be harder to find relevant information (similarly, the fact that there are an increasing number of words in human languages does not mean that communication becomes less effective). However, it does point to the notion that one needs to effectively present these semantic structures in the information system so that people can effectively interpret the semantics of the tagged documents. Intelligent techniques based on statistical models of language, such as latent semantic analysis and probabilistic topic models, could potentially overcome this vocabulary problem.
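A toy illustration of the latent semantic analysis idea follows, using a small tag-by-document count matrix and a rank-2 SVD; the data, tag names, and dimensionality are invented for illustration and are not drawn from any cited study.

```python
import numpy as np

# Toy tag-by-document count matrix: rows are tags, columns are documents.
# A low-rank SVD (the core of latent semantic analysis) maps tags into a small
# "semantic" space where tags used on similar documents end up close together,
# even when they never co-occur directly on the same document.
tags = ["python", "programming", "code", "recipes", "cooking"]
counts = np.array([
    [3, 2, 0, 0],   # python
    [2, 3, 1, 0],   # programming
    [1, 2, 0, 0],   # code
    [0, 0, 2, 3],   # recipes
    [0, 0, 3, 2],   # cooking
], dtype=float)

u, s, vt = np.linalg.svd(counts, full_matrices=False)
k = 2
tag_vectors = u[:, :k] * s[:k]          # k-dimensional semantic embedding of each tag

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(tag_vectors[0], tag_vectors[2]))   # python vs code: high similarity
print(cosine(tag_vectors[0], tag_vectors[3]))   # python vs recipes: low similarity
```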
|
https://en.wikipedia.org/wiki/Models_of_collaborative_tagging
|
Microcredit is the extension of very small loans (microloans) to impoverished borrowers who typically do not have access to traditional banking services due to a lack of collateral, steady employment, and a verifiable credit history.[1][2] The primary aim of microcredit is to support entrepreneurship, facilitate self-employment, and alleviate poverty, particularly in low-income communities.[1]
The United Nations declared 2005 as the International Year of Microcredit to raise awareness of microfinance as a strategy for poverty reduction and financial inclusion.[3] By the early 2010s, microcredit had expanded significantly across developing countries, with estimates suggesting that more than 200 million people were beneficiaries of microcredit services worldwide.[4] While widely adopted, the effectiveness of microcredit remains debated, with mixed evidence on its long-term impact on poverty alleviation.[5]
Despite its widespread adoption, the impact of microcredit on poverty alleviation remains contested. Some studies have indicated that while microcredit can increase business activity, it has limited effects on household income, education, and health outcomes.[6]Critics argue that microcredit may contribute to over-indebtedness and perpetuate financial instability for some borrowers.[7]A randomized evaluation led byAbhijit Banerjeeand collaborators reported mixed results, noting that microcredit did not significantly impact household consumption, educational attainment, or overall economic stability.[8]
While the term "microcredit" gained prominence in the late 20th century, the practice of offering small loans to the poor has earlier roots. In the 18th century,Jonathan Swift, the Anglo-Irish satirist and Dean of St. Patrick's Cathedral in Dublin, established a charitable loan fund in 1727 with £500 of his own money.[9][10]This fund provided small, interest-free loans to impoverished tradespeople, requiring borrowers to have two neighbors act as guarantors, thereby ensuring community accountability. Swift's initiative inspired the creation of similar loan funds acrossIreland, which, at their peak in the 19th century, provided credit to approximately 20% of Irish households.[9]These early efforts laid the groundwork for later institutional models of microfinance.
Additional early examples of small-scale lending emerged throughout the 18th and 19th centuries. In 1746, John Wesley, the founder of Methodism, created a lending stock for the poor in England. His journal entry for 17 January 1748 records:
I made a public collection toward a lending stock for the poor. Our rule is, to lend only twenty shillings at once, which is repaid weekly within three months. I began this about a year and a half ago: thirty pounds sixteen shillings were then collected; and out of this, no less than two hundred and fifty-five persons have been relieved in eighteen months.
In the mid-19th century,Lysander Spooner, an American legal theorist, argued that access to small loans could enable the poor to become self-reliant entrepreneurs.[11]Around the same time in Germany,Friedrich Wilhelm Raiffeisenfounded the first cooperative rural credit unions to provide affordable credit to farmers, laying the foundation for the global credit union movement.[12]
The institutionalization of microcredit in its contemporary form began in the 1970s, withBangladeshserving as a central hub for early development. In 1983,Muhammad Yunusestablished theGrameen Bank, which is widely regarded as the first modern microcredit institution.[13][14]Yunus began the project in Jobra, using his own funds to deliver small loans at low-interest rates to the rural poor.[13]The Grameen model introduced a group-based lending system aimed at reducing risk through peer accountability and promoting financial inclusion for low-income borrowers, particularly women.[14]
The Grameen Bank model inspired the creation of similar institutions globally, including BRAC in 1972 and ASA in 1978 in Bangladesh, and PRODEM in Bolivia, which later became the for-profit BancoSol in 1986.[15][16]In Chile, BancoEstado Microempresas became a major provider of microcredit services.[17]Though the Grameen Bank was formed initially as a non-profit organization dependent upon government subsidies, it later became a corporate entity and was renamed Grameen II in 2002.[15]Yunus was awarded theNobel Peace Prizein 2006 for his work providing microcredit services to the poor.[18]
Microcredit organizations were initially created as alternatives to the "loan sharks" known to take advantage of clients.[14] Indeed, many microlenders began as non-profit organizations and operated with government funds or private subsidies. By the 1980s, however, the "financial systems approach", influenced by neoliberalism and propagated by the Harvard Institute for International Development, became the dominant ideology among microcredit organizations. The neoliberal model of microcredit can also be referred to as the institutionist model, which promotes applying market solutions as a viable way to address social problems.[19] The commercialization of microcredit officially began in 1984 with the formation of Unit Desa (BRI-UD) within the Bank Rakyat Indonesia. Unit Desa offered 'kupedes' microloans based on market interest rates.
Yunus has sharply criticized the shift in microcredit organizations from the Grameen Bank model as a non-profit bank to for-profit institutions:[20]
I never dreamed that one day microcredit would give rise to its own breed of loan sharks... There are always people eager to take advantage of the vulnerable. But credit programs that seek to profit from the suffering of the poor should not be described as "microcredit," and investors who own such programs should not be allowed to benefit from the trust and respect that microcredit banks have rightly earned.
Many microcredit organizations now function as independent banks. This has led to their charging higher interest rates on loans and placing more emphasis on savings programs.[14] Notably, Unit Desa has charged in excess of 20 percent on small business loans.[21] The application of neoliberal economics to microcredit has generated much debate among scholars and development practitioners, with some claiming that microcredit bank directors, such as Muhammad Yunus, apply the practices of loan sharks for their personal enrichment.[15] Indeed, the academic debate foreshadowed a Wall Street-style scandal involving the Mexican microcredit organization Compartamos.[14]
Even so, the numbers indicate that ethical microlending and investor profit can go hand-in-hand. In the 1990s a rural finance minister in Indonesia showed how Unit Desa could lower its rates by about 8% while still bringing attractive returns to investors.[21]
Though early microcredit institutions such as Jobra and Grameen Bank initially focused on individual lending, group lending approaches to microcredit were present as early as the 1970s through the use of solidarity circles.[16] These groups provide one another with mutual encouragement, information, and assistance in times of need, though loans remain the responsibility of individuals.[22][23] The use of group lending was motivated by economies of scale, as the costs associated with monitoring loans and enforcing repayment are significantly lower when credit is distributed to groups rather than individuals.[16] Often, the loan to one participant in group lending depends upon the successful repayment by another member, thus transferring repayment responsibility from microcredit institutions to loan recipients.[16]
Microcredit is a tool that may help reduce the feminization of poverty in developing countries. Lending to women has become an important principle in microcredit, with banks and NGOs such as BancoSol, WWB, and Pro Mujer catering to women exclusively.[16] Pro Mujer also implemented a new strategy to combine microcredit with health-care services, since the health of its clients is crucial to the success of microcredit.[24] Though Grameen Bank initially tried to lend to both men and women at equal rates, women presently make up ninety-five percent of the bank's clients. Women continue to make up seventy-five percent of all microcredit recipients worldwide.[16] Exclusive lending to women began in the 1980s when Grameen Bank found that women have higher repayment rates and tend to accept smaller loans than men.[14]
Grameen Bank in Bangladesh is the oldest and probably best-known microfinance institution in the world.[citation needed] Grameen Bank launched its US operations in New York in April 2008.[25] Bank of America has announced plans to award more than $3.7 million in grants to nonprofits to use in backing microloan programs.[26] The Accion U.S. Network, the US subsidiary of the better-known Accion International, has provided over $450 million in microloans since 1991, with an over 90% repayment rate.[27] One research study of the Grameen model shows that poorer individuals are safer borrowers because they place more value on the relationship with the bank.[28] Even so, efforts to replicate Grameen-style solidarity lending in developed countries have generally not succeeded. For example, the Calmeadow Foundation tested an analogous peer-lending model in three locations in Canada during the 1990s. It concluded that a variety of factors—including difficulties in reaching the target market, the high risk profile of clients, their general distaste for the joint liability requirement, and high overhead costs—made solidarity lending unviable without subsidies.[29] Microcredit has also been introduced in Israel,[30] Russia, Ukraine and other nations where micro-loans help small business entrepreneurs overcome cultural barriers in the mainstream business society. The Israel Free Loan Association (IFLA) has lent more than $100 million in the past two decades to Israeli citizens of all backgrounds.[31]
InIndia, theNational Bank for Agriculture and Rural Development(NABARD) finances more than 500 banks that on-lend funds toself-help groups(SHGs). SHGs comprise twenty or fewer members, of whom the majority are women from the poorestcastesand tribes. Members save small amounts of money, as little as a few rupees a month in a group fund. Members may borrow from the group fund for a variety of purposes ranging from household emergencies to school fees. As SHGs prove capable of managing their funds well, they may borrow from a local bank to invest in small business or farm activities. Banks typically lend up to four rupees for every rupee in the group fund. In Asia borrowers generally pay interest rates that range from 30% to 70% without commission and fees.[32]Nearly 1.4 million SHGs comprising approximately 20 million women now borrow from banks, which makes the Indian SHG-Bank Linkage model the largest microfinance program in the world. Similar programs are evolving in Africa and Southeast Asia with the assistance of organizations likeIFAD,Opportunity International,Catholic Relief Services,Compassion International,CARE, APMAS,Oxfam,TearfundandWorld Vision.
Microcredit initiatives in Pakistan have developed significantly over the past several decades, evolving from early cooperative lending models to large-scale institutional frameworks.[33]The first major microcredit initiative in the region was theComilla Model, introduced in the 1950s byAkhtar Hameed Khanin East Pakistan (now Bangladesh).[14]The Comilla Model was designed to address rural poverty through group-based lending and village cooperatives, aiming to empower small farmers by providing access to credit without traditional collateral.[14]While the model initially showed promise, it faced challenges due to bureaucratic interference, mismanagement, and power imbalances within borrower groups, ultimately limiting its long-term impact.[14]
Following the separation of Bangladesh in 1971, microcredit efforts in Pakistan evolved independently, influenced by both global microfinance trends and local economic conditions. In 2001, the establishment ofAkhuwatmarked a significant shift in microcredit philosophy within Pakistan.[34]Founded byDr. Amjad Saqib, Akhuwat operates on a unique interest-free lending model funded entirely by donations and community support.[34]The organization disburses loans to low-income borrowers through a network of mosques and community centers, promoting principles of social justice and financial inclusion. Akhuwat has provided over PKR 200 billion in interest-free loans to more than 4.5 million families as of 2024, positioning itself as one of the largest microfinance institutions in the country.[34][35]
Akhuwat’s success has been attributed to its emphasis on community engagement and its rejection of interest-based lending, aligning its model with both Islamic finance principles and conventional microcredit structures.[35]Borrowers are required to repay only the principal amount, fostering a culture of mutual support and accountability.[35]Akhuwat also offers social services such as educational scholarships, housing loans, and small business training to further enhance economic stability among beneficiaries.[35]
Microcredit initiatives in Pakistan have developed significantly over the past several decades, transitioning from cooperative lending models to formalized institutional frameworks.[33]While Akhuwat is a notable example of interest-free microfinance, other organizations have also contributed to the sector.
Kashf Foundation, established in 1996, was one of the first microfinance institutions in Pakistan to focus on women’s economic empowerment through microloans. The organization has expanded its services to include microinsurance and financial literacy programs.[36]
Khushhali Microfinance Bank (KMBL), founded in 2000 as part of the Microfinance Sector Development Program, provides microloans, agricultural credit, and digital banking services. KMBL operates as a for-profit institution and focuses on small business lending.[37]
The National Rural Support Programme (NRSP), launched in 1991, is the largest rural development initiative in Pakistan. NRSP offers microloans alongside agricultural training and infrastructure development for low-income households.[38]
The Pakistan Poverty Alleviation Fund (PPAF), established in 2000, functions as an apex institution that allocates funds to partner organizations involved in poverty reduction through microcredit, asset transfers, and community-based projects.[39]
Despite the expansion of microcredit in Pakistan, challenges such as operational costs, outreach in remote areas, and regulatory constraints remain prevalent.
In the United States, microcredit has generally been defined as loans of less than $50,000 to people—mostly entrepreneurs—who cannot, for various reasons, borrow from a bank. Most nonprofit microlenders include services like financial literacy training and business plan consultations, which contribute to the expense of providing such loans but also, those groups say, to the success of their borrowers.[40]

One such organization in the United States, the Accion U.S. Network, is a nonprofit microfinance organization headquartered in New York, New York. It is the largest and only nationwide nonprofit microfinance network in the US. The Accion U.S. Network is part of Accion International, a US-based nonprofit organization operating globally, with the mission of giving people the financial tools they need to create or grow healthy businesses. The domestic Accion programs started in Brooklyn, New York, and grew from there to become the first nationwide network microlender.[41][circular reference]

US microcredit programs have helped many poor but ambitious borrowers to improve their lot. The Aspen Institute's study of 405 microentrepreneurs indicates that more than half of the loan recipients escaped poverty within five years. On average, their household assets grew by nearly $16,000 during that period; the group's reliance on public assistance dropped by more than 60%.[42] Several corporate sponsors, including Citi Foundation and Capital One, launched Grameen America in New York. Since then the financial outfit—not a bank—has been serving the poor, mainly women, throughout four of the city's five boroughs (the Bronx, Brooklyn, Manhattan, and Queens) as well as Omaha, Nebraska and Indianapolis, Indiana. In four years, Grameen America has facilitated loans to over 9,000 borrowers valued at over $35 million. It has had, as Grameen CEO Stephen Vogel notes, "a 99 percent repayment rate".[43]
The principles of microcredit have also been applied in attempting to address several non-poverty-related issues. Among these, multiple Internet-based organizations have developed platforms that facilitate a modified form ofpeer-to-peer lendingwhere a loan is not made in the form of a single, direct loan, but as the aggregation of a number of smaller loans—often at a negligible interest rate.
Examples of platforms that connect lenders to micro-entrepreneurs via Internet areKiva,Zidisha, and theMicroloan Foundation. Another internet-based microlender, United Prosperity (now defunct), uses a variation on the usual microlending model; with United Prosperity the micro-lender provides a guarantee to a local bank which then lends back double that amount to the micro-entrepreneur. United Prosperity claims this provides both greater leverage and allows the micro-entrepreneur to develop a credit history with their local bank for future loans.[44][45]In 2009, the US-based nonprofitZidishabecame the first peer-to-peer microlending platform to link lenders and borrowers directly across international borders without local intermediaries.[46]From 2008 through 2014,Vittanaallowed peer-to-peer lending forstudent loansin developing countries.[47]
The impact of microcredit is a subject of some controversy. Proponents state that it reduces poverty through higher employment and higher incomes. This is expected to lead to improved nutrition and improved education of the borrowers' children. Some argue that microcredit empowers women. In the US, UK and Canada, it is argued that microcredit helps recipients to graduate from welfare programs.[48]
Critics say that microcredit, if not carefully directed, may not increase incomes, and may drive poor households into adebt trap. They add that the money from loans may be used for durable consumer goods or consumption instead of being used for productive investments, that it may fail to empower women, and that it may not improve health or education.[49]
The available evidence indicates that in many cases microcredit has facilitated the creation and the growth of businesses. It has often generated self-employment, but it has not necessarily increased incomes after interest payments. In some cases it has driven borrowers into debt traps. Some studies suggest that microcredit has not generally empowered women. Microcredit has achieved much less than what its proponents said it would achieve, but its negative impacts have not been as drastic as some critics have argued. Microcredit is just one factor influencing the success of small businesses, whose success is influenced to a much larger extent by how much an economy or a particular market grows.[50]
Unintended consequencesof microfinance include informal intermediation: some entrepreneurial borrowers may become informal intermediaries between microfinance initiatives and poorer micro-entrepreneurs. Those who more easily qualify for microfinance may split loans into smaller credit to even poorer borrowers. Informal intermediation ranges from casual intermediaries at the good or benign end of the spectrum toloan sharksat the professional and sometimes criminal end of the spectrum.[51]
Many scholars and practitioners suggest an integrated package of services ("a credit-plus" approach) rather than just providing credits. When access to credit is combined with savings facilities, non-productive loan facilities, insurance,enterprise development(production-oriented and management training, marketing support) and welfare-related services (literacy and health services, gender and social awareness training), the adverse effects discussed above can be diminished.[52]Some argue that more experienced entrepreneurs who are getting loans should be qualified for bigger loans to ensure the success of the program.[53]
One of the principal challenges of microcredit is providing small loans at an affordable cost. The global average interest and fee rate is estimated at 37%, with rates reaching as high as 70% in some markets.[54]The reason for the high interest rates is not primarily cost of capital. Indeed, the local microfinance organizations that receive zero-interest loan capital from the online microlending platformKivacharge average interest and fee rates of 35.21%.[55]Rather, the principal reason for the high cost of microcredit loans is the high transaction cost of traditional microfinance operations relative to loan size.[56]Microcredit practitioners have long argued that such high interest rates are simply unavoidable. The result is that the traditional approach to microcredit has made only limited progress in resolving the problem it purports to address: that the world's poorest people pay the world's highest cost for small business growth capital. The high costs of traditional microcredit loans limit their effectiveness as a poverty-fighting tool. Borrowers who do not manage to earn a rate of return at least equal to the interest rate may actually end up poorer as a result of accepting the loans. According to a recent survey of microfinance borrowers in Ghana published by the Center for Financial Inclusion, more than one-third of borrowers surveyed reported struggling to repay their loans.[57]In recent years, microcredit providers have shifted their focus from the objective of increasing the volume of lending capital available, to address the challenge of providing microfinance loans more affordably. Analyst David Roodman contends that in mature markets, the average interest and fee rates charged by microfinance institutions tend to fall over time.[58]
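The role of fixed transaction costs relative to loan size can be made concrete with a back-of-the-envelope calculation. The figures below are hypothetical, chosen only for illustration, and are not taken from the studies cited above.

```python
# Hypothetical illustration: how fixed per-loan costs drive microcredit rates.
# All numbers are invented for the example, not taken from the sources cited.

def break_even_rate(loan_size, admin_cost_per_loan, cost_of_capital=0.10,
                    expected_default_rate=0.03):
    """Annual rate a lender must charge just to cover its costs on one loan."""
    fixed_cost_share = admin_cost_per_loan / loan_size
    return cost_of_capital + expected_default_rate + fixed_cost_share

# The same $25 of staff time per loan barely matters for a $10,000 loan
# but dominates the economics of a $100 microloan.
for loan in (100, 1_000, 10_000):
    rate = break_even_rate(loan, admin_cost_per_loan=25)
    print(f"loan ${loan:>6}: break-even rate is about {rate:.0%}")
```

Under these invented assumptions, the same $25 of administration pushes the break-even rate on a $100 loan to roughly 38%, while adding only a fraction of a percentage point to a $10,000 loan, which is the basic arithmetic behind the high rates discussed above.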
ProfessorDean KarlanfromYale Universityadvocates also giving the poor access to savings accounts.[59]
|
https://en.wikipedia.org/wiki/Microcredit
|
Participatory democracy,participant democracy,participative democracy, orsemi-direct democracyis aform of governmentin whichcitizensparticipate individually and directly in political decisions and policies that affect their lives, rather than throughelected representatives.[1]Elements ofdirectandrepresentative democracyare combined in this model.[2]
Participatory democracy is a type ofdemocracy, which is itself a form ofgovernment. The term "democracy" is derived from theAncient Greek:δημοκρατία,romanized:dēmokratíafrom δῆμος/dēmos'people' and κράτος/kratos'rule'.[3]It has two main subtypes,directandrepresentative democracy. In the former, the people have the authority to deliberate and decidelegislation; in the latter, they choose governingofficialsto do so. While direct democracy was the original concept, its representative version is the most widespread today.[4]
Public participation, in this context, is the inclusion of the public in the activities of apolity. It can be any process that directly engages the public indecision-makingand gives consideration to its input.[5]The extent to which political participation should be considered necessary or appropriate is under debate inpolitical philosophy.[6]
Joiningpolitical partiesallows citizens to participate in democratic systems, but is not considered participatory democracy.
Participatory democracy is primarily concerned with ensuring that citizens have the opportunity to be involved in decision-making on matters that affect their lives.[6]It is not a new concept and has existed in various forms since theAthenian democracy. Its moderntheorywas developed byJean-Jacques Rousseauin the 18th century and later promoted byJohn Stuart MillandG. D. H. Cole, who argued that political participation is indispensable for ajustsociety.[7]In the early 21st century, participatory democracy has been more widely studied and experimented with, leading to various institutional reform ideas such asparticipatory budgeting.[8]
Democratic processes have been practiced throughout history.[9]
Probably the earliest well-documented example of large-scale democracy comes from thecity-stateofAthensduringclassical antiquity.[10][11]It was first established underCleisthenesin 508–507 BC.[12]This was a direct democracy, in which ordinary citizens wererandomly selectedto fill governmentadministrativeandjudicialoffices, and there was alegislativeassembly consisting of all Athenian citizens.[13]Athenian citizens controlled the entire political process through the assembly, thebouleand thecourts, and a large proportion of citizens were involved constantly in public matters.[11]However, Athenian citizenship excluded women,slaves, foreigners (μέτοικοι/métoikoi) and youths below the age of military service.[14][15]
During the 20th century, practical implementations began to take place, mostly on a small scale, attracting considerable academic attention in the 1980s. Experiments in participatory democracy took place in various cities around the world. For example,Porto Alegre,Braziladapted a system ofparticipatory budgetingin 1989. AWorld Bankstudy found that participatory democracy in these cities seemed to result in considerable improvement in thequality of lifefor residents.[16]
In the early 21st century, experiments in participatory democracy began to spread throughoutSouthandNorth America,China, and across theEuropean Union.[17]In aUSexample, the plans to rebuildNew OrleansafterHurricane Katrinain 2005 were drafted and approved by thousands of ordinary citizens.[18]
In 2011, as a response to citizens' growing distrust in the government following thefinancial crisis of 2007–2008,Irelandauthorised a citizens' assembly called"We the Citizens". Its task was to pilot the use of a participatory democratic body and test whether it could increasepolitical legitimacy. There was an increase in bothefficacyand interest in governmental functions, as well as significant opinion shifts on contested issues liketaxation.[19]
TheFrench governmentorganised"le grand débat national"(the Great National Debate) in early 2019 as a response to theYellow vests movement. It consisted of 18 regional conventions, each with 100randomly selectedcitizens, that had to deliberate on issues they valued the most so that they could influence government action.[20]After the debate, a citizens' convention was created specifically to discussclimate change,"la Convention citoyenne pour le climat"(theCitizens Convention for Climate, CCC), designed to serve as a legislative body to decide how the country could reduce itsgreenhouse gas emissionswithsocial justicein mind.[21]It consisted of 150 citizens selected bysortitionandstratified sampling, who were sorted into five sub-groups to discuss individual topics. The members were helped by experts onsteering committees. The proceedings of the CCC garnered international attention. After nine months, the convention outlined 149 measures in a 460-page report, andPresident Macroncommitted to supporting 146 of them. A bill containing these was submitted to theparliamentin late 2020.[20]
In recent years,social mediahas led to changes in the conduct of participatory democracy. Citizens with differing points of view are able to join conversations, mainly through the use ofhashtags.[22]To promote public interest and involvement,local governmentshave started using social media to make decisions based on public feedback.[23]Users have also organised onlinecommitteesto highlight local needs and appoint budgetdelegateswho work with the citizens and city agencies.[24]
Participatory democracy was a notable feature of theOccupy movementin 2011. "Occupy camps" around the world made decisions based on the outcome ofworking groupswhere every protester had a say. These decisions were thenaggregatedby general assemblies. This process combinedequality, mass participation, anddeliberation.[25]
The most prominent argument for participatory democracy is its function of greaterdemocratization.
[T]he argument is about changes that will make our own social and political life more democratic, that will provide opportunities for individuals to participate in decision-making in their everyday lives as well as in the wider political system. It is about democratizing democracy.
With participatory democracy, individuals or groups can realistically achieve their interests, "[providing] the means to a morejustand rewarding society, not a strategy for preserving thestatus quo."[7]
Participatory democracy may also have an educational effect. Greater political participation can help increase itsefficacyand depth: "the more individuals participate the better able they become to do so",[7]an idea already promoted byRousseau,Mill, andCole.[8]Pateman emphasises this potential as it counteracts the widespread lack of faith in the capacity and capability of citizens to meaningfully participate, especially in societies with complex organisations.[8]Joel D. Wolfe asserts his confidence that such models could be implemented even in large organizations, progressively diminishingstate intervention.[7]
Criticisms of participatory democracy can overlap withcriticism of democracy.[citation needed]
Some reject the feasibility of participatory models due to disbelief in citizens' capabilities to bear the greater responsibility. Critics conclude that the citizenry is disinterested and leader-dependent, making the mechanism for participatory democracy inherently incompatible with advanced societies.[26] Jason Brennan advocates in his book Against Democracy for a less participatory system because of the irrationality of voters in a representative democracy. He proposes several mechanisms to reduce participation, presented with the assumption that a vote-based system of electoral representation is maintained.[27] Brennan proposes a system in which all citizens have equal rights to vote or otherwise participate in government, but decisions made by the elected representatives are scrutinized by an epistocratic council. This council could not make law, only "unmake" it, and would likely be composed of individuals who pass rigorous competency exams.[27]
Other concerns are whether participatory democracy can be managed and turned into effective output. David Plotke[who?]highlights that the institutional adjustments needed to make greater political participation possible would require a representative element. Consequently, both direct and participatory democracy must rely on some type of representation to sustain a stable system. He also states that achieving equal direct participation in large and heavily populated regions is hardly possible, and ultimately argues in favor of representation over participation, calling for a hybrid between participatory and representative models.[28]
Some forms of participatory democracy can violate the hard-won concept of political egalitarianism (One Man, One Vote).[29] Town meetings can have low turnout and an over-representation of seniors.[30]
In case of citizens' assemblies and sortition,Roslyn Fullercriticizes that the small chance of being randomly selected to participate results in lack of representation or participation for most citizens.[31][32]Fuller criticizes that deliberative democracy generally limits decisions to small, externally controllable groups while ignoring the plethora of e-democracy tools available which allow for unfiltered mass participation and deliberation while maintaining political representativeness.[31][33]
Scholars have recently proposed several mechanisms to increase citizen participation in democratic systems. These methods intend to increase theagenda-settinganddecision-makingpowers of the people by giving citizens more direct ways to contribute to politics.[34]
Also called mini-publics, citizens' assemblies arerepresentative samplesof a population that meet to createlegislationor advise legislative bodies. When citizens are chosen to participate bystratified sampling, the assemblies are more representative of the population than elected legislatures.[35]Assemblies chosen bysortitionprovide average citizens with the opportunity to exercise substantive agenda-setting and/or decision-making power. Over the course of the assembly, citizens are helped by experts and discussionfacilitators, and the results are either put to areferendumor sent in areportto thegovernment.
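As a rough sketch of the selection step, the following example (Python; the strata, population shares, and candidate pool are invented) draws an assembly by stratified sampling so that each demographic group receives seats in proportion to its share of the population rather than in proportion to its willingness to volunteer.

```python
# Illustrative sketch of selecting a citizens' assembly by stratified sampling.
# Population shares and the stratum definitions are invented for the example.
import random

random.seed(42)

population_shares = {          # share of the adult population in each stratum
    "urban, under 40": 0.30,
    "urban, 40+":      0.25,
    "rural, under 40": 0.20,
    "rural, 40+":      0.25,
}

def draw_assembly(candidate_pool, shares, assembly_size):
    """candidate_pool maps each stratum to the citizens eligible in it."""
    assembly = []
    for stratum, share in shares.items():
        seats = round(assembly_size * share)
        assembly += random.sample(candidate_pool[stratum], seats)
    return assembly

# A hypothetical pool of citizens who agreed to serve if selected.
pool = {s: [f"{s} citizen {i}" for i in range(1000)] for s in population_shares}

members = draw_assembly(pool, population_shares, assembly_size=100)
print(len(members), "members selected")
```

Real assemblies typically stratify on several attributes at once (for example age, gender, region and education) and combine stratification with a civic lottery over the whole population, but the proportionality principle is the same.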
In studying the perceived legitimacy of citizens' assemblies, political scientist Daan Jacobs finds that assemblies are perceived as more legitimate than systems with no participation, but not more legitimate than systems involving self-selection.[36] Regardless, the use of citizens' assemblies has grown throughout the early 21st century, and they have often been used in constitutional reforms, such as in British Columbia's Citizens' Assembly on Electoral Reform in 2004 and the Irish Constitutional Convention in 2012.[37]
TrademarkedbyStanfordprofessorJames S. Fishkin,deliberative opinion pollsallow citizens to develop informed opinions before voting throughdeliberation. Deliberative polling begins with surveying arandomrepresentative sampleof citizens to gauge their opinion.[38]The same individuals are then invited to deliberate for a weekend in the presence of political leaders, experts, and moderators. At the end, the group is surveyed again, and the final opinions are taken to be the conclusion the public would have reached if they had the opportunity to engage with the issue more deeply.[38]PhilosopherCristina Lafont, a critic of deliberative opinion polling, argues that the "filtered" (informed) opinion reached at the end of a poll is too far removed from the opinion of the citizenry, delegitimizing the actions based on them.[39]
Public consultation surveys are surveys on policy proposals or positions that have been put forward by legislators, government officials, or other policy leaders. The entirety of the deliberative process takes place within the survey. For each issue, respondents are provided relevant briefing materials and arguments for and against various proposals. Respondents then provide their final recommendation. Public consultation surveys are primarily done with large representative samples, usually several thousand nationally and several hundred in subnational jurisdictions.
Public consultation surveys have been used since the 1990s in the US. The American Talks Issue Foundation, led by Alan Kay, played a pioneering role.[40] The largest such program is the Program for Public Consultation at the University of Maryland's School of Public Policy, directed by Steven Kull, which conducts public consultation surveys at the national level as well as in states and congressional districts. It has gathered public opinion data on over 300 policy proposals put forward by Members of Congress and the Executive Branch, in a variety of areas.[41] Surveys conducted in particular congressional districts have also been used as the basis for face-to-face forums in which survey participants and their House Representative discuss the policy proposals and the results of the survey.[42]
The questionnaires used in the surveys by the Program for Public Consultation, which they call “policymaking simulations”, have also been made available for public use, as educational and advocacy tools.[43]Members of the public can take the policymaking simulations to better understand the proposal, and are given the option to send their policy recommendations to their elected officials in Congress.
E-democracyis an umbrella term describing a variety of proposals to increase participation through technology.[44]Open discussion forums provide citizens the opportunity to debatepolicyonline whilefacilitatorsguide discussion. These forums usually serve agenda-setting purposes or are sometimes used to provide legislators with additionaltestimony. Closed forums may be used to discuss more sensitive information: in theUnited Kingdom, one was used to enabledomestic violencesurvivors to testify to the All-Party Parliamentary Group on Domestic Violence and Abuse while preserving theiranonymity.
Another e-democratic mechanism isonline deliberative polling, a system in which citizens deliberate withpeersvirtually before answering a poll. The results ofdeliberative opinion pollsare more likely to reflect the considered judgments of the people and encourage increased citizen awareness of civic issues.[44]
In a hybrid betweendirectandrepresentative democracy,liquid democracypermits individuals to either vote on issues themselves or to select issue-competentdelegatesto vote on their behalf.[45]Political scientistsChristian Blum and Christina Isabel Zuber suggest that liquid democracy has the potential to improve a legislature's performance through bringing together delegates with a greater awareness on a specific issue, taking advantage of knowledge within the population. To make liquid democracy more deliberative, atrusteemodel of delegation may be implemented, in which the delegates vote after deliberation with other representatives.
Some concerns have been raised about the implementation of liquid democracy. Blum and Zuber, for example, find that it produces two classes of voters: individuals with one vote and delegates with two or more.[45]They also worry that policies produced in issue-specific legislatures will lackcohesiveness. Liquid democracy is utilized byPirate Partiesfor intra-party decision-making.
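To make the delegation mechanism concrete, here is a minimal sketch (Python; the voters, choices and handling of edge cases are invented and not tied to any real implementation) of tallying a single issue when each citizen either votes directly or delegates to another citizen, with delegation chains followed transitively.

```python
# Minimal sketch of tallying one issue under liquid democracy.
# Voters and choices are invented; cycles and missing votes are handled crudely.

direct_votes = {"ana": "yes", "bo": "no", "dee": "yes"}      # people voting themselves
delegations  = {"carl": "ana", "eve": "carl", "fay": "eve"}  # delegate -> trustee

def resolve(voter, seen=None):
    """Follow the delegation chain until reaching a direct vote (or give up)."""
    seen = seen or set()
    if voter in direct_votes:
        return direct_votes[voter]
    if voter in seen or voter not in delegations:   # cycle or dangling delegation
        return None                                 # treated as abstention
    return resolve(delegations[voter], seen | {voter})

tally = {}
for citizen in set(direct_votes) | set(delegations):
    choice = resolve(citizen)
    if choice is not None:
        tally[choice] = tally.get(choice, 0) + 1

print(tally)   # e.g. {'yes': 5, 'no': 1} under the data above
```

In this toy electorate, "ana" ends up carrying the weight of four citizens, which is exactly the asymmetry between ordinary voters and delegates that Blum and Zuber describe.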
Inbinding referendums, citizens vote onlawsand/orconstitutional amendmentsproposed by alegislative body.[46]Referendums afford citizens greaterdecision-makingpower by giving them the ultimate decision, and they may also use referendums foragenda-settingif they are allowed to draftproposalsto be put to referendums in efforts calledpopular initiatives.Compulsory votingcan further increase participation. Political theoristHélène Landemoreraises the concern that referendums may fail to be sufficiently deliberative as people are unable to engage in discussions and debates that would enhance their decision-making abilities.[35]
Switzerlandcurrently uses a rigorous system of referendums, under which all laws the legislature proposes go to referendums. Swiss citizens may also startpopular initiatives, a process in which citizens put forward a constitutional amendment or propose the removal of an existing provision. Any proposal must receive the signature of 100,000 citizens to go to aballot.[47]
In local participatory democracy,town meetingsprovide all residents with legislative power.[48]Practiced in theUnited States, particularly inNew England, since the 17th century, they assure that local policy decisions are made directly by the public. Local democracy is often seen as the first step towards a participatory system.[49]Theorist Graham Smith, however, notes the limited impact of town meetings that cannot lead to action on national issues. He also suggests that town meetings are not representative as they disproportionately represent individuals withfree time, including theelderlyand theaffluent.
Participatory budgeting allows citizens to make decisions on the allocation of a public budget.[48] Originating in Porto Alegre, Brazil, the general procedure involves the creation of a concrete financial plan that then serves as a recommendation to elected representatives. Neighbourhoods are given the authority to design budgets for the greater region, and local proposals are brought to elected regional forums. This system led to a decrease in clientelism and corruption and an increase in participation, particularly amongst marginalized and poorer residents. Theorist Graham Smith observes that participatory budgeting still has some barriers to entry for the poorest members of the population.[50]
|
https://en.wikipedia.org/wiki/Participatory_democracy
|
Participatory monitoring(also known ascollaborative monitoring,community-based monitoring,locally based monitoring, orvolunteer monitoring) is the regular collection ofmeasurementsor other kinds ofdata(monitoring), usually ofnatural resourcesandbiodiversity, undertaken by local residents of the monitored area, who rely on local natural resources and thus have more local knowledge of those resources. Those involved usually live in communities with considerable social cohesion, where they regularly cooperate on shared projects.
Participatory monitoring has emerged as an alternative or addition to professional scientist-executed monitoring.[1][2]Scientist-executed monitoring is often costly and hard to sustain, especially in those regions of the world where financial resources are limited.[3]Moreover, scientist-executed monitoring can be logistically and technically difficult and is often perceived to be irrelevant by resource managers and the local communities. Involving local people and their communities in monitoring is often part of the process of sharing the management of land and resources with the local communities. It is connected to the devolution of rights and power to the locals.[4]Aside from potentially providing high-quality information,[5][6][7]participatory monitoring can raise local awareness and build the community and local government expertise that is needed for addressing the management of natural resources.[4][8]
Participatory monitoring is sometimes included in terms such as citizen science,[9] crowd-sourcing, ‘public participation in scientific research’[10] and participatory action research.
The term ‘participatory monitoring’ embraces a broad range of approaches, from self-monitoring of harvests by local resource users themselves, to censuses by local rangers, and inventories by amateur naturalists. The term includes techniques labelled as ‘self-monitoring’,[11][12]ranger-based monitoring’,[13]‘event-monitoring’,[14]‘participatory assessment, monitoring and evaluation of biodiversity’,[15][16]‘community-based observing’,[17]and ‘community-based monitoring and information systems’.[18]
Many of these approaches are directly linked to resource management, but the entities being monitored vary widely, from individual animals and plants,[5][12][19][20][21][22][23]through habitats,[24][25][26][27][28]to ecosystem goods and services.[29][30][31]However, all of the approaches have in common that the monitoring is carried out by individuals who live in the monitored places and rely on local natural resources, and that local people or local government staff are directly involved in formulation of research questions, data collection, and (in most instances) data analysis, and implementation of management solutions based on research findings.[3][32]
Participatory monitoring is included in the term ‘participatory monitoring and management’, which has been defined as "approaches used by local and Indigenous communities, informed by traditional and local knowledge, and, increasingly, by contemporary science, to assess the status of resources and threats on their land and advance sustainable economic opportunities based on the use of natural resources".[32] The term ‘participatory monitoring and management’ is particularly used in tropical, Arctic and developing regions, where communities are most often the custodians of valuable biodiversity and extensive natural ecosystems.
Other definitions for participatory monitoring have also been proposed, including:
Likewise, the term ’community-based monitoring of natural resources’ has been defined as:
It has been suggested that participatory monitoring is unlikely to provide quantitative data on large-scale changes in habitat area, or on populations of cryptic species that are hard to identify or census reliably.[3]It has also been suggested that participatory monitoring is not suitable for monitoring resources that are so valuable they attract powerful outsiders.[38]Likewise, in areas where changes, threats, or interventions operate in complex fashions, where rural people do not depend on the use of natural resources and there are no real benefits flowing to the local people from doing monitoring work (or the costs to local people of involvement exceed the benefits[30]), or where there is a poor relationship between the authorities and the local people,[39]participatory monitoring is probably less likely to yield useful data and management solutions than conventional scientific approaches.[40]
Whereas government censuses of human populations, which date perhaps to the 16th century B.C.,[41]were likely the first formal attempts atenvironmental monitoring,[42]farmers, fishers and forest users have informally monitored resource conditions for even longer, their observations influencing survival strategies and resource use.[1]
Participatory monitoring schemes are in operation on all the inhabited continents, and the approach is beginning to appear in textbooks.[43][44]
An international symposium on participatory monitoring was hosted by the Nordic Agency for Development and Ecology and the Zoology Department at Cambridge University in Denmark in April 2004.[45] It led to a special issue of Biodiversity and Conservation in October 2005.[46]
In the Arctic, a symposium on data management and local knowledge was hosted by ELOKA and held in Boulder, USA, in November 2011.[47]It led to a special issue ofPolar Geographyin 2014.
In the Arctic, three circumpolar meetings were held in 2013-2014:
The first global conference on Participatory Monitoring and Management was hosted by theBrazilian Ministry of Environment(MMA) and theChico Mendes Institute for Biodiversity Conservation(ICMBio) and held in Manaus, Brazil in September 2014.[49][50][51]
Thematically, participatory monitoring has considerable potential in several areas, including:
A typology of monitoring schemes has been proposed, determined on the basis of the relative contributions of local stakeholders and professional researchers[87] and supported by findings from statistical analysis of published schemes.[36] The typology identifies five categories of monitoring schemes that between them span the full spectrum of natural resource monitoring protocols:
Category A.Autonomous Local Monitoring. In this category the whole monitoring process—from design, to data collection, to analysis, and finally to use of data for management decisions—is carried out autonomously by local stakeholders. There is no direct involvement of external agencies. For an example see.[69]
Category B.Collaborative Monitoring with Local Data Interpretation. In these schemes, the original initiative was taken by scientists but local stakeholders collect, process and interpret the data, although external scientists may provide advice and training. The original data collected by local people remain in the area being monitored, which helps create local ownership of the scheme and its results, but copies of the data may be sent to professional researchers for in-depth or larger-scale analysis. Examples are included in.[1][14][62]
Category C.Collaborative Monitoring with External Data Interpretation. The third most distinct group is monitoring scheme category C. These schemes were designed by scientists who also analyse the data, but the local stakeholders collect the data, take decisions on the basis of the findings and carry out the management interventions emanating from the monitoring scheme. Examples are provided in.[11][19][24]
Category D.Externally Driven Monitoring with Local Data Collectors. This category of monitoring scheme involves local stakeholders only in data collection. The design, analysis, and interpretation of the monitoring results are undertaken by professional researchers—generally far from the site. Monitoring schemes of category D are mostly long-running ‘citizen science’ projects from Europe and North America. See for example[88][89]
Category E.Externally Driven, Professionally Executed Monitoring. Monitoring schemes of category E do not involve local stakeholders. Design of the scheme, analysis of the results, and management decisions derived from these analyses are all undertaken by professional scientists funded by external agencies. An example is[90]
Traditional methods of data collection for participatory monitoring use paper and pen. This has advantages in terms of low cost of materials and training, simplicity, and reduced potential for technical hitches. However, all data must be transcribed for analysis, which takes time and can be subject to transcription errors.[91]Increasingly, participatory monitoring initiatives incorporate technology, from GPS recorders to georeference the data collected on paper,[92]to drones to survey remote areas,[93]phones to send simple reports via SMS,[94]or smartphones to collect and store data.[95]Various apps exist to create and manage data collection forms on smartphones (e.g.ODK, Sapelli[96]and others[97]).
Some initiatives find that the use of smartphones for data collection has advantages over paper-based systems.[98]The advantages include that very little equipment need be carried on a survey, a large amount and variety of data can be stored (geographical locations, photos and audio, as well as data entered onto monitoring forms) and data can be shared rapidly for analysis without transcription errors.[91]The use of smartphones can incentivise young people to get involved in monitoring, sparking an interest in conservation.[99]Some apps are especially designed to be usable by illiterate monitors.[100][101][102]If local people risk threats or violence by monitoring illegal activities, the true purpose of the phones can be denied, and the monitoring data locked away.[103]However, phones are expensive; are vulnerable to damage and technical issues; necessitate additional training - not least due to rapid technological change; phone charging can be a challenge (especially under thick forest canopies); and uploading data for analysis is difficult in areas without network connections.[104][105]
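To give a sense of what such records can look like in practice, here is a minimal sketch (Python; the field names, species and coordinates are invented, and it is not tied to ODK, Sapelli or any other particular app) of a georeferenced observation captured on a phone and exported to CSV so that it can be analysed without manual transcription.

```python
# Illustrative sketch of a georeferenced observation record for a
# community-based monitoring scheme. Field names and values are invented.
import csv
from datetime import datetime, timezone

observations = [
    {
        "recorded_at": datetime(2023, 5, 14, 7, 30, tzinfo=timezone.utc).isoformat(),
        "observer_id": "monitor-07",           # pseudonym, not a real name
        "latitude": -3.4653,                   # decimal degrees from the phone GPS
        "longitude": -62.2159,
        "category": "hunting sign",            # e.g. track, nest, harvest, threat
        "species": "Tayassu pecari",
        "count": 4,
        "notes": "fresh tracks near stream",
    },
]

# Export to CSV so the records can be shared or analysed without transcription.
with open("observations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=observations[0].keys())
    writer.writeheader()
    writer.writerows(observations)
```

Under a rights-based approach of the kind discussed below, who may open such a file, and at what level of detail, would itself be governed by the community's data-sharing agreement.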
A key challenge for participatory monitoring is to develop ways to store, manage and share data[106]and to do this in ways that respect the rights of the communities that supplied the data. A ‘rights-based approach to data sharing’ can be based on principles offree, prior and informed consent, and prioritise the protection of the rights of those who generated the data, and/or those potentially affected by data-sharing.[107]Local people can do much more than simply collect data: they can also define the ways that this data is used, and who has access to it.
Clear agreements ondata sharingare especially important for initiatives where diverse data is collected, of variable relevance to different stakeholders.[108]For example, monitoring could on the one hand, investigate sensitive social problems within a community, or contested resources at the centre of local conflicts or illegal exploitation - data that community leaders might want to keep confidential and address locally; on the other hand, the same initiative could generate data on forest biomass, of greater interest to external stakeholders.[109]
One way to establish the rules around data sharing is to set up a data sharing protocol that defines these matters explicitly.[107]
|
https://en.wikipedia.org/wiki/Participatory_monitoring
|
Open knowledge(orfree knowledge) isknowledgethat is free to use, reuse, and redistribute without legal, social, or technological restriction.[1]Open knowledge organizations and activists have proposed principles and methodologies related to the production and distribution of knowledge in an open manner.
The concept is related toopen sourceand theOpen Definition, whose first versions bore the title "Open Knowledge Definition", is derived from theOpen Source Definition.
Similarly to other "open" concepts, though the term is rather new, the concept is old: One of the earliest surviving printed texts, a copy of the BuddhistDiamond Sutraproduced in China around 868 AD, contains a dedication "for universal free distribution".[2]In the fourth volume of theEncyclopédie,Denis Diderotallowed re-use of his work in return for him having used material from other authors.[3]
In the early twentieth century, a debate aboutintellectual property rightsdeveloped within theGerman Social Democratic Party. A key contributor wasKarl Kautskywho in 1902 devoted a section of a pamphlet to "intellectual production", which he distinguished from material production:
Communism in material production, anarchy in the intellectual that is the type of a Socialist mode of production, as it will develop from the rule of theproletariat—in other words, from the Social Revolution through the logic of economic facts, whatever might be: the wishes, intentions, and theories of the proletariat.[4]: 40
This view was based on an analysis according to whichKarl Marx'slaw of valueonly affected material production, not intellectual production.
With the development of the public Internet from the early 1990s, it became far easier to copy and share information across the world. The phrase "information wants to be free" became a rallying cry for people who wanted to create an internet without the commercial barriers that they felt inhibited creative expression in traditional material production.
Wikipediawas founded in 2001 with the ethos of providing information which could be edited and modified to improve its quality. The success of Wikipedia became instrumental in making open knowledge something that millions of people interacted with and contributed to.
|
https://en.wikipedia.org/wiki/Open_knowledge
|
Asmart mobis a group whose coordination and communication abilities have been empowered bydigital communication technologies.[1]Smart mobs are particularly known for their ability to mobilize quickly.[1]
The concept was introduced byHoward Rheingoldin his 2002 bookSmart Mobs: The Next Social Revolution.[2]Rheingold defined the smart mob as follows: "Smart mobs consist of people who are able to act in concert even if they don’t know each other... because they carry devices that possess both communication and computing capabilities".[3]In December of that year, the "smart mob" concept was highlighted in theNew York Times"Year in Ideas".[4]
These technologies that empower smart mobs include theInternet,computer-mediated communicationsuch asInternet Relay Chat, andwirelessdevices likemobile phonesandpersonal digital assistants. Methodologies likepeer-to-peernetworks andubiquitous computingare also changing the ways in which people organize and share information.[citation needed]
Flash mobsare a specific form of smart mob, originally describing a group of people who assemble suddenly in a public place, do something unusual and pointless for a brief period of time, then quickly disperse. The difference between flash and smart mobs is primarily with regards to their duration: flash mobs disappear quickly, but smart mobs can have a more enduring presence.[2]The termflash mobis claimed to have been inspired by "smart mob".[5]
Smart mobs have begun to have an impact on current events, as mobile phones and text messages have empowered everyone from revolutionaries in Malaysia to individuals protesting the second Iraq War. Individuals with divergent worldviews and methods have been able to coordinate in the short term.[citation needed]
A 2009 entry in theEncyclopedia of Computer Science and Technologynoted that the term may be "fading from public use".[2]
A forerunner to the idea can be found in the work of the anarchist thinker Kropotkin, who described how "fishermen, hunters, travelling merchants, builders, or settled craftsmen came together for a common pursuit."[6]
According toCNN, the first smart mobs were teenage "thumb tribes" in Tokyo and Helsinki who usedtext messagingoncell phonesto organize imprompturavesor to stalk celebrities. For instance, in Tokyo, crowds of teenage fans would assemble seemingly spontaneously at subway stops where a rock musician was rumored to be headed.[7]
However, an even earlier example is the Dîner en blanc phenomenon, which has taken place annually in Paris, France, since 1988, for one night around the end of June. The invited guests wear only white clothes and gather at a chosen spot, which they learn of only a short time beforehand. They bring along food, drink, chairs and a table; the whole group gathers to have a meal, after which they disperse. The event has been held each year in a different place in the centre of Paris. It is not a normal cultural event because it is not advertised and only those who have received an invitation attend; information on the chosen location is transferred by text message or, more recently, Twitter. By 2011 the number of people attending had grown to over 10,000.[8] Dîner en blanc would be considered a smart mob rather than a flash mob, because the event lasts for several hours.[citation needed]
TheProfessional Contractors Grouporganised the first smart mob in the UK in 2000 when 700 contractors turned up at The House of Commons to lobby their MP following an email sent out a few days before.[9]
In the days after the U.S. presidential election of 2000, online activistZack Exleyanonymously created a website that allowed people to suggest locations for gatherings to protest for a full recount of the votes inFlorida. On the first Saturday after the election, more than 100 significant protests took place—many with thousands of participants—without any traditional organizing effort. Exley wrote in December 2000 that the self-organized protests "demonstrated that a fundamental change is taking place in our national political life. It's not the Internet per se, but the emerging potential for any individual to communicate—for free and anonymously if necessary—with any other individual."[10]
In thePhilippinesin 2001, a group of protesters organized via text messaging gathered at theEDSA Shrine, the site of the1986 revolutionthat overthrewFerdinand Marcos, to protest the corruption of PresidentJoseph Estrada. The protest grew quickly, and Estrada was soon removed from office.[11]
TheCritical Massbicycling events, dating back to 1992, are also sometimes compared to smart mobs, due to their self-organizing manner of assembly.[12][13]
Essentially, the smart mob is a practical implementation ofcollective intelligence. According to Rheingold, examples of smart mobs are the street protests organized by theanti-globalization movement. TheFree State Projecthas been described inForeign Policyas an example of potential "smartmob rule".[14]Other examples of smart mobs include:
The comic bookGlobal Frequency, written byWarren Ellis, describes a covert, non-governmental intelligence organization built around a smart mob of people that are called on to provide individual expertise in solving extraordinary crises.[citation needed]
David Brin's speculative science fiction novel,Existence(ISBN978-0-765-30361-5), similarly posits the use of on-the-fly smart mobs by credible journalists as sources of information and expertise.
|
https://en.wikipedia.org/wiki/Smart_mob
|
Social collaborationrefers to processes that help multiple people or groups interact and share information to achieve common goals. Such processes find their 'natural' environment on the Internet, where collaboration and social dissemination of information are made easier by current innovations and the proliferation of the web.
Sharing concepts in a digital collaboration environment often facilitates a "brainstorming" process, where new ideas may emerge due to the varied contributions of individuals. These individuals may hail from different walks of life, different cultures and different age groups; their diverse thought processes help add new dimensions to ideas, dimensions that may previously have been missed. A crucial concept behind social collaboration is that 'ideas are everywhere.' Individuals are able to share their ideas in an unrestricted environment, as anyone can get involved and the discussion is not limited to those who have domain knowledge.
Social collaboration is also known as enterprise social networking, and the products to support it are often branded enterprise social networks (ESNs).[1]
It is important to understand the rhythm of social collaboration. There needs to be a balance, with the ability to move easily from focused solitary work to group brainstorming for problem solving. This critical balance can be achieved by creating structures or a work environment that is neither so rigid that it prevents group brainstorming nor so loose that it results in total chaos. Social collaboration should happen at the edge of chaos.
Work practices should support social collaboration. The most effective environment is one that supports opportunistic planning. Opportunistic planning provides a general plan but leaves enough room for flexibility to change activities and tasks until the last moment. This way, people can cope with unforeseen developments rather than having to discard everything along with one grand plan.
Social collaboration is related tosocial networking, with the distinction that while social networking is individual-centric, social collaboration is entirely group-centric. Generally speaking, social networking means socializing for personal, professional or entertainment purposes, for example,LinkedInandFacebook. Social collaboration, on the other hand, means working socially to achieve a common goal, for example,GitHubandQuora.[1]Social networking services generally focus on individuals sharing messages in a more-or-less undirected way and receiving messages from many sources into a single personalized activity feed. Social collaboration services, on the other hand, focus on the identification of groups and collaboration spaces in which messages are explicitly directed at the group and the group activity feed is seen the same way by everyone.
Social collaboration may refer to time-bound collaborations with an explicit goal to be completed or perpetual collaborations in which the goal is knowledge sharing (e.g.community of practice,online community).
Social collaboration is similar to crowdsourcing as it involves individuals working together towards a common goal.[3][4] Crowdsourcing is a method for harnessing specific information from a large, diverse group of people.[5] Unlike social collaboration, which involves extensive communication and cooperation among a large group of people, crowdsourcing has individuals working towards the common goal relatively independently, so the process involves less communication.
Andrea Grover, curator of a crowdsourcing art show,[6]explained that collaboration among individuals is an appealing experience, because participation is "a low investment, with the possibility of a high return."[7]
Notable social collaboration software includes Glip messaging, Google Apps, Knowledge Plaza Electronic Document System and Social Intranet, the Microsoft Lync social collaboration tool for businesses, Slack, Weekdone for managers, and Wrike.[8]
Social collaboration is expected to be used as a tool in companies to enhance productivity. Social workers could use social collaboration tools to manage personal tasks, professional projects and social networks with other colleagues within the same organization.[citation needed]
Social collaboration will serve as a platform to get people involved and connected. This kind of platform provides a spiritual training practice for social workers.[9]
Social collaboration software could help enhance the communication between customers and employees and build trust in the organization.[10]
When real-time chat is needed, it is valuable to include every participant in a shared and archived forum that keeps a record of important information and logs, so collaborators need not worry about losing important records while working towards the common goal.[citation needed][original research?]
The interactive communication and synchronous environment promote understanding among colleagues. Collaboration helps in building strong relationships between workers, which in turn leads to faster problem solving. The close connection between workers and customers creates a scalable organization which naturally increases the trust and faith that customers have in the company. Therefore, the interactive customer relationship raises customer satisfaction in ways that traditional collaboration methods cannot.[citation needed]
Apart from its effect on the way work will be conducted in the future, social collaboration will also affect society. In the coming years social collaboration will be the driving force in societal change as more and more people work together to get their vision across to governments and governing agencies. An example of this isChange.org, anonline petitiontool where users can help bring their government's attention to pressing social issues that need to be addressed.[11]
|
https://en.wikipedia.org/wiki/Social_collaboration
|
Stone Soupis a Europeanfolk storyin which hungry strangers convince the people of a town to each share a small amount of their food in order to make a meal. In varying traditions, the stone has been replaced with other common inedible objects, and therefore the parable is also known asaxe soup,button soup,nail soup,bolt soup, andwood soup.
Some travelers come to a village, carrying nothing more than an empty cooking pot. Upon their arrival, the villagers are unwilling to share any of their food stores with the very hungry travelers. Then the travelers go to a stream and fill the pot with water, drop a largestonein it, and place it over a fire. One of the villagers becomes curious and asks what they are doing. The travelers answer that they are making "stonesoup", which tastes wonderful and which they would be delighted to share with the villager, although it still needs a little bit ofgarnish, which they are missing, to improve the flavor.
The villager, who anticipates enjoying a share of the soup, does not mind parting with a fewcarrots, so these are added to the soup. Another villager walks by, inquiring about the pot, and the travelers again mention their stone soup which has not yet reached its full potential. More and more villagers walk by, each adding another ingredient, likepotatoes,onions,cabbages,peas,celery,tomatoes,sweetcorn,meat(likechicken,porkandbeef),milk,butter,saltandpepper. Finally, the stone (being inedible) is removed from the pot, and a delicious and nourishing pot of soup is enjoyed by travelers and villagers alike. Although the travelers have thus tricked the villagers into sharing their food with them, they have successfully transformed it into a tasty meal which they share with the donors.
In theAarne–Thompson–Utherfolktale classification system, this tale and set of variants is type 1548.[4]
There are many examples of projects referencing the "Stone Soup" story's theme of making something significant by accumulating many small contributions. Examples include:
The filmFandango(1985) contains a wedding sequence towards the end which builds on the Stone Soup theme. The protagonists need to hold a wedding ceremony, but they lack the necessary funds. Therefore, they set up a foldingcard tableby the main street of a sleepy Texas town, dust it off, and invite passersby to come to the wedding. As they concoct stories of delinquent caterers and crashed champagne trucks, the friendly townspeople contribute their time and resources, the result being a magical wedding ceremony.
Gerald P. Murphy's stage adaptation of "Stone Soup" was published by Lazy Bee Scripts in 2008 and has had successful productions in the US, the UK and France.
Gerald Griffinwrote "The Collegians" (1829) which includes a version of limestone soup in chapter 30.
William Butler Yeats' playThe Pot of Broth(1904) tells a version of the story in which a clever Irish tramp uses his wits to swindle a shrewish medieval housewife out of her dinner.[10]
The story is the basis ofMarcia Brown's 1947 children's bookStone Soup: An Old Tale(1947),[11]which features soldiers tricking miserly villagers into cooking them a feast. The book was aCaldecott Honorbook in 1948[12]and was read aloud by the Captain (played byBob Keeshan) on an early episode ofCaptain Kangarooin the 1950s, as well as at least once in the 1960s or early 1970s.[13][14]
In 1965,Gordon R. Dicksonpublished a short story called "Soupstone", where a headstrong pilot is sent to solve a problem on a planet under the guise of a highly educated and competent official. He succeeds by pretending to understand everything, but actually merely making the locals apply their already present knowledge and abilities to the task.
"Stone Soup" (1968),[15]written byAnn McGovernand illustrated by Nola Langner, tells the story of a little old lady and a hungry young man at the door asking for food, and how he tricks her into making stone soup. The book was reprinted and reissued in 1986 with Winslow Pinney Pels as the illustrator.
In 1975, Walt Disney Productions published a Wonderful World of Reading book titled Button Soup. In it, Daisy Duck tricks Scrooge McDuck into sharing his food to help flavor her button soup.
Canadian children's author Aubrey Davis adapted the story to a Jewish context in his bookBone Button Borscht(1996). According to Davis, he wrote the story when he was unable to find a story that he liked for aHanukkahreading.[16]Barbara Budd's narration ofBone Button Borschttraditionally airs across Canada onCBC Radio One'sAs It Happens, on the first day ofHanukkah.
French author and illustrator Anaïs Vaugelade published a children's picture book, Une soupe au caillou, in which the tramp from the original folktale is replaced by a wandering wolf, and the old woman by a curious hen. All characters in the story are animals, who gather to help make the stone soup, each of them carrying an ingredient for the final dish.
Jon J. Muth's children's book based on the story, also calledStone Soup(2003),[17]is set in China, as isYing Chang'sThe Real Story of Stone Soup(2007).[18]
Robert Rankin's bookNostradamus Ate My Hamsterfeatures a version of the story introduced as an old Irish tale.
Shel Silverstein's song "The Wonderful Soup Stone" tells a version of this story.Bobby Bareincluded the song on his albumLullabys, Legends and Lies(1973).[19]andDr. Hook & the Medicine Showincluded the song on their albumBelly Up!(1973).
A version of the tale written byTom ChapinandJohn Forsterappears on Chapin's albumMother Earth(1990).
Stone Soup– an album released in November 2001 by the UK artist Moss (a.k.a. Bernard Moss) onPork Recordings(catalogue ref. PORK 091).
"Stone Soup" - A song featured on the album Mr. Supernatural in 2004 by the artistKing Khan and the ShrinesonHazelwood Records
US Army GeneralGeorge S. Pattonreferred to the "rock soup method" of acquiring resources for attacks in the face of official disapproval by his superiors for offensive operations. In the military context, he sent units forward, ostensibly on reconnaissance missions, where he knew resistance was to be met. "Surprised" at the enemy resistance, Patton would later request support for his scouts, and these missions eventually turned into small scale probing attacks. Then, once full combat had begun, Patton would request (or make the executive decision) to encircle or push full force against enemy resistance, under the rationale that the reinforcements were either bogged down or unable to retreat. He did this during theBattle of Sicily, in the advance onPalermo, and again in the campaign in northwest Europe, nearMetzwhen his3rd US Armywas officially halted duringOperation Market Garden.[21]
A large pool located on Karl Johan street inOslo, funded by the steel companyChristiania Spigerverk("Christiania Nail Factory"), is nicknamedSpikersuppaliterally meaning "Nail Soup" in Norwegian.[22]
|
https://en.wikipedia.org/wiki/Stone_Soup
|
Truecalleris asmartphone applicationthat has features ofcaller ID,call-blocking, flash-messaging,call-recording(onAndroidup toversion 8),chatand voice by using theInternet. It requires users to provide a standard cellularmobile numberfor registering with theservice. The app is available for Android[1]andiOS.[2]
Truecaller is developed by True Software Scandinavia AB, apublic companylisted in Sweden with a head office inStockholm,Sweden, founded by Alan Mamedi and Nami Zarringhalam in 2009,[3]but most of its employees are inIndia.[4]
It was initially launched onSymbianandWindows Mobileon 1 July 2009. It was released forAndroidandApple iPhoneon 23 September 2009, forBlackBerryon 27 February 2012, forWindows Phoneon 1 March 2012, and forNokia Series 40on 3 September 2012.
As of September 2012, Truecaller had five million users[5]performing 120 million searches of thetelephone numberdatabase every month.[6]As of 22 January 2013, Truecaller reached 10 million users.[7]As of January 2017, Truecaller had reached 250 million users worldwide.[8]As of 4 February 2020, it crossed 200 million monthly user-base globally, of which 150 million were from India.[9][10]
On 18 September 2012,TechCrunchannounced[11]that OpenOcean,[12]aventure capitalfund led by formerMySQLandNokiaexecutives (includingMichael Widenius,[13]founder of MySQL), were investingUS$1.3 million in Truecaller to push Truecaller’s global reach.[14]Truecaller said that it intended to use the new funding to expand its footprint in "key markets"—specificallyNorth America,Asiaand theMiddle East.[15]
In February 2014, Truecaller receivedUS$18.8 millionin funding fromSequoia Capital, alongside existing investor OpenOcean, Truecaller chairman Stefan Lennhammer, and an unnamed private investor. It also announced a partnership withYelpto use Yelp'sAPIdata to help identify business numbers when they call asmartphone.[16]In October of the same year, they receivedUS$60 millionfromNiklas Zennström'sAtomicoinvestment firm and fromKleiner Perkins Caufield & Byers.[17]
On 7 July 2015, Truecaller launched itsSMSapp called TrueMessenger exclusively in India. TrueMessenger enables users to identify the sender of SMS messages. This launch was aimed at increasing the company's user base in India[18]which are the bulk of its active users.[4]TrueMessenger was integrated into the Truecaller app in April 2017.[19]
In December 2019, Truecaller announced plans to go public in an IPO in 2022.[4] Truecaller also launched a Covid Hospital Directory in response to rising coronavirus infections in India; through this directory, Indian users can find the telephone numbers and addresses of Covid hospitals.[20]
In January 2025, Truecaller added real-time Caller ID and spam-blocking foriOS 18.2 users, functionality previously available to Android phone users.[21][22][23]
Truecaller gets about 75% of its revenue from India. The Indian regulatory body TRAI has a competing caller-ID service based on CNAP (Caller Name Presentation),[24] which would enable caller ID without the use of any apps.[25] Small-scale trials were run in June and July 2024.[26] If CNAP is rolled out across India, analysts expect it to substantially impact the usage of Truecaller in India.
The Indian data privacy act, which will come into force in late 2024, is also expected to negatively impact Truecaller in India. Once it is in force, Truecaller will not be able to collect and use the unconsented data that powers its caller ID database.
In a lawsuit in Nigeria, Truecaller defended[27] its security and privacy policy, stating that the users whose phone books were uploaded by Truecaller are the data controllers and that Truecaller is merely a data processor. This differs from Europe, where Truecaller does not appear to upload users' phone books, as doing so may run afoul of the GDPR. There are now two more lawsuits against Truecaller in Nigeria seeking to end Truecaller's unconsented data practices (ONWUBUARIRI vs TRUECALLER INTERNATIONAL LLP and OKAFOR vs TRUECALLER INTERNATIONAL LLP).
In August and September 2024, the Indian regulatory body TRAI started cracking down on telemarketing spammers. It instructed telecom providers to terminate all telecom resources of unregistered telemarketers (UTMs). This enforcement started in late August and is leading to a massive termination of spamming numbers[28] and the blacklisting of their companies so that they cannot obtain phone numbers from any other provider for two years. If this is even partially successful in disconnecting the top spamming companies, it may lead to a large reduction in spam calls and thus reduce the number of ads shown by Truecaller.
In February 2025, IMY (Integritetsskyddsmyndigheten), Sweden's GDPR regulator, opened a supervisory investigation into Truecaller's data practices. A decision is awaited.
On 17 July 2013, Truecaller servers were allegedly hacked into by theSyrian Electronic Army.[29]E Hacking News reported the group identified 7 sensitive databases it claimed to have exfiltrated, primarily due to an unmaintainedWordPressinstallation on theservers.[29]Claims made regarding the size of the databases were inconsistent. On 18 July 2013, Truecaller issued a statement on itsblogstating that theirwebsitewas indeedhacked, but claiming that the attack did not disclose anypasswordsorcredit cardinformation.[30]
Truecaller uploads users' stored contacts to their servers to form a database of phone numbers.[31][32]This may violate GDPR and similar regulations in multiple countries.
Truecaller also tracks phone calls made by non-users to users (and vice versa) and hence collects detailed information about those non-users, who have no way to stop this data collection.
In November 2019,India-basedsecurity researcherEhraz Ahmed discovered a security flaw that exposed user data as well as system and location information. Truecaller confirmed this information and the bug was immediately fixed.[33][34]
Multiple times in 2020 and 2021, there were reports about a massive Truecaller database leak on the internet. The reports surfaced again in 2024. The leaked database has information regarding users' (and non-users') phone numbers, names, phone carrier, tags, email addresses, etc.[citation needed]
|
https://en.wikipedia.org/wiki/Truecaller
|
Virtual collective consciousness(VCC) is a term rebooted and promoted by two behavioral scientists, Yousri Marzouki and Olivier Oullier in their 2012Huffington Postarticle titled: "Revolutionizing Revolutions: Virtual Collective Consciousness and theArab Spring",[1]after its first appearance in 1999-2000.[2]VCC is now defined as an internal knowledge catalyzed bysocial mediaplatforms and shared by a plurality of individuals driven by the spontaneity, the homogeneity, and the synchronicity of their online actions.[3]VCC occurs when a large group of persons, brought together by a social media platform think and act with one mind and share collective emotions.[4]Thus, they are able to coordinate their efforts efficiently, and could rapidly spread their word to a worldwide audience.[5]When interviewed about the concept of VCC that appeared in the book -Hyperconnectivity and the Future of Internet Communication- he edited,[6]Professor ofPervasive Computing,Adrian David Cheokmentioned the following: "The idea of a global (collective) virtual consciousness is a bottom-up process and a rather emergent property resulting from a momentum of complex interactions taking place in social networks. This kind of collective behaviour (or intelligence) results from a collision between a physical world and a virtual world and can have a real impact in our life by driving collective action."[7]
In 1999-2000, Richard Glen Boire[2] provided a cursory mention, and the only occurrence at the time, of the term[citation needed][original research?] "virtual collective consciousness", as follows:
The trend of technology is to overcome the limitations of the human body. And, the Web has been characterized as a virtual collective consciousness and unconsciousness
The recent definition of VCC evolved from the first empirical study to provide a cyberpsychological insight into the contribution of Facebook to the 2011 Tunisian revolution. In this study, the concept was originally called "collective cyberconsciousness".[8] The latter is an extension of the idea of "collective consciousness" coupled with "citizen media" usage. The authors of this study also drew a parallel between this original definition of VCC and comparable concepts such as Durkheim's collective representation, Žižek's "collective mind"[9] or Boguta's "new collective consciousness", which he used to describe the computational history of the Internet shutdown during the Egyptian revolution.[10] Since VCC is the byproduct of the network's successful actions, these actions must be timely, acute, rapid, domain-specific, and purpose-oriented to achieve their goal. Before reaching a momentum of complexity, each collective behavior starts with a spark that triggers a chain of events, leading to a crystallized stance built from a tremendous number of interactions.[11] Thus, VCC is an emergent global pattern arising from these individual actions.
In 2012, the term virtual collective consciousness resurfaced and was brought to light after extending its applications to the Egyptian case and the whole social networking major impact on the success of the so-calledArab Spring.[1][12]Moreover, the acronym VCC was suggested to identify the theoretical framework covering on-line behaviors leading to a virtual collective consciousness. Hence, online social networks have provided a new and faster way of establishing or modifying "collective consciousness" that was paramount to the 2011 uprisings in the Arab world.[13][14]
Various theoretical references ranging from sociology to computer science were mentioned in order to account for the key features that render the framework for a virtual collective consciousness. The following list is not exhaustive, but the references it contains are often highlighted:
Besides the studied effect of social networking on the Tunisian and Egyptian revolutions, the former via Facebook and the latter via Twitter, other applications were studied under the prism of the VCC framework:
|
https://en.wikipedia.org/wiki/Virtual_collective_consciousness
|
Virtual volunteeringrefers tovolunteeractivities completed, in whole or in part, using theInternetand a home, school buildings, telecenter, or work computer or other Internet-connected device, such as asmartphoneor atablet.[1]Virtual volunteering is also known asonline volunteering,remote volunteeringore-volunteering. Contributing tofree and open source softwareprojects or editingWikipediaare examples of virtual volunteering.[2]
In one study,[3]over 70 percent of online volunteers chose assignments requiring one to five hours a week and nearly half chose assignments lasting 12 weeks or less. Some organizations offer online volunteering opportunities which last from ten minutes to an hour. A unique feature of online volunteering is that it can be done from a distance. People with restricted mobility or other special needs participate in ways that might not be possible in traditional face-to-face volunteering. Likewise, online volunteering may allow people to overcome social inhibitions andsocial anxiety, particularly if they would normally experience disability-related labeling or stereotyping. This empowers people who might not otherwise volunteer. It can buildself-confidenceandself-esteemwhile enhancing skills and extending networks and social ties. Online volunteering also allows participants to adapt their program of volunteer work to their unique skills and passions.[4]
People engaged in virtual volunteering undertake a variety of activities from locations remote to the organization or people they are assisting, via a computer or other Internet-connected device, such as:
In the developing world, innovative synergies between volunteerism and technology typically focus on mobile communication technologies rather than the Internet. Around 26 per cent of people worldwide had Internet access in 2009. However, Internet penetration in low-income countries was only 18 per cent, compared to over 64 per cent in developed countries. While the costs of fixed broadband Internet are falling, access still remains unaffordable to many.[8] Despite this, online volunteering is developing rapidly. Online volunteers are "people who commit their time and skills over the Internet, freely and without financial considerations, for the benefit of society."[9][full citation needed] Online volunteering has eliminated the need for volunteerism to be tied to specific times and locations. Thus, it greatly increases the freedom and flexibility of volunteer engagement and complements the outreach and impact of volunteers serving in situ. Most online volunteers engage in operational and managerial activities such as fundraising, technological support, communications, marketing and consulting. Increasingly, they also engage in activities such as research and writing and leading e-mail discussion groups.[4]
Online micro-volunteering is also an example of virtual volunteering and crowdsourcing, where volunteers undertake assignments via their smart devices. These volunteers either are not required to undergo any screening or training by the nonprofit and make no further commitment once a micro-task is completed, or have already been screened or trained by the nonprofit and are therefore approved to take on micro-tasks as their availability and interests allow. Online micro-volunteering was originally called "byte-sized volunteering" by the Virtual Volunteering Project, and has always been a part of the more than 30-year-old practice of online volunteering.[10] An early example of both micro-volunteering and crowdsourcing is ClickWorkers, a small NASA project begun in 2001 that engaged online volunteers in scientific tasks requiring only a person's perception and common sense, not scientific training, such as identifying craters on Mars in photos the project posted online; volunteers were not trained or screened before participating. The phrase "micro-volunteering" is usually credited to a San Francisco-based nonprofit called The Extraordinaries.[11][12][13]
The practice of virtual volunteering to benefit nonprofit initiatives dates back to at least the early 1970s, whenProject Gutenbergbegan involving online volunteers to provide electronic versions of works in the public domain.[14]
In 1995, a newnonprofit organizationcalled Impact Online (now calledVolunteerMatch), based in Palo Alto, California, began promoting the idea of "virtual volunteers".[15]In 1996, Impact Online received a grant from theJames Irvine Foundationto launch an initiative to research the practice of virtual volunteering and to promote the practice to nonprofit organizations in the US. This new initiative was dubbed theVirtual Volunteering Project, and the web site was launched in early 1997.[16]After one year of operations, the Virtual Volunteering Project moved to the Charles A. Dana Center atThe University of Texas at Austin. In 2002, the Virtual Volunteering Project moved within the university to theLyndon B. Johnson School of Public Affairs. The first two years of the Virtual Volunteer Project were spent reviewing and adaptingremote workmanuals[17]and existing volunteer management guidelines with regards to virtual volunteering, as well as identifying organizations that were involving online volunteers. By April 1999, almost 100 organizations had been identified by the Virtual Volunteering Project as involving online volunteers and were listed on the web site.[18]Due to the growing numbers of nonprofit organizations, schools, government programs and other not-for-profit entities involving online volunteers, the Virtual Volunteering Project stopped listing every such organization involving online volunteers on its web site in 2000, and focused its efforts on promoting the practice, profiling organizations with large or unique online volunteering programs, and creating guidelines for the involvement of online volunteers. Until January 2001, the Virtual Volunteering Project listed all telementoring and teletutoring programs in the USA (programs where online volunteers mentor or tutor others, through a nonprofit organization or school). At that time, 40 were identified.[19]
In August 1999, theNetAid.orginitiative was launched.[20]The initiative included an online volunteering component, today known as theUN Online Volunteering service. It went live in 2000 and has been managed byUnited Nations Volunteerssince its inception. It quickly attracted a high number of people ready to support organizations working for development. In 2003, several thousand people already contributed to the UN's Online Volunteering service – volunteers with very diverse backgrounds, including university graduates, private sector employees, and retirees.[21]While the UN's Online Volunteering service became independent, NetAid continued as a joint project ofUNDPand Cisco Systems. It aimed "to utilize the unique networking capabilities of the Internet to promote development and alleviate extreme poverty across the world".[22]
Online volunteering has been adopted by thousands of nonprofit organizations and other initiatives.[14] No organization currently tracks best practices in online volunteering in the USA or worldwide, how many people are engaged in online volunteering, or how many organizations utilize online volunteers, and studies regarding volunteering, such as reports on volunteering trends in the USA, rarely include information about online volunteering (for example, a search of the term virtual volunteering on the Corporation for National Service's "Volunteering in America" yields no results).[23] IVCO's 2015 Forum Discussion Paper[24] recommends that a collective measurement tool developed as part of a global measurement framework should also capture online volunteering.
TheUN's Online Volunteering serviceconnects organizations working in or for the developing world with online volunteers. It does have statistics available regarding numbers of online volunteers and involving organizations (i.e. NGOs, other civil society organizations, a government or other public institutions, United Nations agencies or other intergovernmental institutions) that collaborate online via their platform. In 2013, all 17,370 online volunteering assignments offered by development organizations through the Online Volunteering service attracted applications from numerous qualified volunteers. About 58 percent of the 11,037 online volunteers were women, and 60 percent came from developing countries; on average, they were 30 years of age. More than 94 percent of organizations and online volunteers rated their collaboration as good or excellent in 2013.[25]Forcivil society organizationswith limited resources in particular, the impact of online volunteer engagement is significant: 41% involve UN Online Volunteers for technical expertise that is not available internally. According to the same impact evaluation carried out in 2014, in many instances, organizations without access to online volunteers would have difficulties achieving their own peace and development outcomes.[26]
In July 2016, UNV unveiled a redesigned website and launched two additional services: The 1-click query to allow organizations to reach out to half a million people to provide real-time data for their projects, and its new employee online volunteering solution for global companies. Inclusive multi-stakeholder partnerships emerged as a necessity to achieve theSustainable Development Goals(SDGs), and the first private sector partner of the Online Volunteering service is based in Brazil (Samsung ElectronicsLatin American Office).[27]
Several other matching services, such asVolunteerMatchandIdealist, also offer virtual volunteering positions with nonprofit organizations in addition to traditional, on-site volunteering opportunities. VolunteerMatch currently reports that about 5 percent of its active volunteer listings are virtual in nature. As of June 2010, its directory included more than 2,770 such listings including roles in interactive marketing, fundraising, accounting, social media, and business mentoring. The percentage of virtual listings has dropped since 2006, when it peaked at close to 8 percent of overall volunteer opportunities in the VolunteerMatch system.
Wikipediaand otherWikimedia Foundationendeavors are examples of online volunteering, in the form of crowdsourcing or micro-volunteering; the majority of Wikipedia contributing volunteers are not required to undergo any screening or training by the nonprofit for their role as editors, and do not have to make a specific time commitment to the organization in order to contribute service.
Many organizations involved in virtual volunteering might never mention the term, or the words "online volunteer," on their web sites or in organizational literature. For example, the nonprofit organization Business Council for Peace (Bpeace) recruits business professionals to donate their time mentoring entrepreneurs in conflict-affected countries, includingAfghanistanandRwanda, but the majority of these volunteers interact with Bpeace staff and entrepreneurs online rather than face-to-face; yet, the term virtual volunteering is not mentioned on the web site. Bpeace also engages in online micro-volunteering, asking for information leads from its supporters, such as where to find online communities of particular professionals in the USA, but the organization never mentions the term micro-volunteering on its web site. Another example is theElectronic Emissary, one of the first K-12 online mentoring programs, launched in 1992; the web site does not use the phrase virtual volunteering and prefers to call online volunteers onlinesubject-matter experts.
Rumie, an edtech non-profit organization, also uses subject-matter experts, as well as corporate partners and leading non-profit organizations, to create interactive learning modules centered on life skills and career development called Bytes. Rumie is an example of how virtual volunteering can offer an experience that is impactful on various levels. Rumie-Build, Rumie's microlearning authoring platform, allows volunteers to work individually or in teams to create these Bytes, with built-in guidance and prompts, real-time collaboration, and multimedia integration; volunteers often develop their own knowledge in the process. The created Bytes are used by learners around the world to build their skills.
Evolving forms of volunteerism will enhance opportunities for people to volunteer. The spread of technology connects ever more rural and isolated areas. NGOs and governments are beginning to realise the value of South-to-South international volunteerism, as well as diaspora volunteering, and are dedicating resources to these schemes. Corporations are responding to the "social marketplace" by supportingCSRinitiatives that include volunteerism. New opportunities for engaging in volunteerism are opening up with the result that more people are becoming involved and those already participating can expand their commitment.[4]A phenomenon that is still quite new, but growing rapidly, is the formal integration of online employee volunteering programmes into the infrastructure and business plan of companies.
|
https://en.wikipedia.org/wiki/Virtual_volunteering
|
"Wisdom of the crowd"or "wisdomof the majority"expresses the notion that the collective opinion of a diverse and independent group of individuals (rather than that of a single expert) yields the bestjudgement.[1]This concept, while not new to theInformation Age, has been pushed into the spotlight by social information sites such asQuora,Reddit,Stack Exchange,Wikipedia,Yahoo! Answers, and other web resources which rely on collective human knowledge.[2]An explanation for this supposition is that the idiosyncratic noise associated with each individual judgment is replaced by an average of that noise taken over a large number of responses, tempering the effect of the noise.[3]
Trial by jurycan be understood as at least partly relying on wisdom of the crowd, compared tobench trialwhich relies on one or a few experts. In politics, sometimessortitionis held as an example of what wisdom of the crowd would look like.Decision-makingwould happen by a diverse group instead of by a fairly homogenous political group or party. Research incognitive sciencehas sought to model the relationship between wisdom of the crowd effects and individual cognition.
A large group's aggregated answers to questions involving quantity estimation, general world knowledge, and spatial reasoning has generally been found to be as good as, but often superior to, the answer given by any of the individuals within the group.
Jury theoremsfromsocial choice theoryprovide formal arguments for wisdom of the crowd given a variety of more or less plausible assumptions. Both the assumptions and the conclusions remain controversial, even though the theorems themselves are not. The oldest and simplest isCondorcet's jury theorem(1785).
Aristotleis credited as the first person to write about the "wisdom of the crowd" in his workPolitics.[4][5]According to Aristotle, "it is possible that the many, though not individually good men, yet when they come together may be better, not individually but collectively, than those who are so, just as public dinners to which many contribute are better than those supplied at one man's cost".[6]
The classic wisdom-of-the-crowds finding involves point estimation of a continuous quantity. At a 1906 country fair inPlymouth, 800 people participated in a contest to estimate the weight of a slaughtered and dressed ox. StatisticianFrancis Galtonobserved that themedianguess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds.[7]This has contributed to the insight in cognitive science that a crowd's individual judgments can be modeled as aprobability distributionof responses with the median centered near the true value of the quantity to be estimated.[8]
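A minimal simulation sketch of this effect (illustrative only, not Galton's actual data) assumes each guess is the true weight plus independent, roughly symmetric noise, and compares the crowd's median with a typical individual error:

```python
import random
import statistics

# A minimal simulation of the ox-weighing setup (illustrative assumptions only):
# 800 guesses modeled as the true weight plus independent Gaussian noise.
TRUE_WEIGHT = 1198
random.seed(0)
guesses = [TRUE_WEIGHT + random.gauss(0, 80) for _ in range(800)]

median_guess = statistics.median(guesses)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"crowd median: {median_guess:.0f} (error {abs(median_guess - TRUE_WEIGHT):.1f} lb)")
print(f"average individual error: {typical_individual_error:.1f} lb")
```

Under these assumptions the median of the crowd lands within a few pounds of the truth while a typical individual guess is off by tens of pounds.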
In recent years, the "wisdom of the crowd" phenomenon has been leveraged in business strategy, advertising spaces, and also political research. Marketing firms aggregate consumer feedback and brand impressions for clients. Meanwhile, companies such as Trada invoke crowds to design advertisements based on clients' requirements.[9]Lastly, political preferences are aggregated to predict or nowcast political elections.[10][11]
Although classic wisdom-of-the-crowds findings center on point estimates of single continuous quantities, the phenomenon also scales up to higher-dimensional problems that do not lend themselves to aggregation methods such as taking the mean. More complex models have been developed for these purposes. A few examples of higher-dimensional problems that exhibit wisdom-of-the-crowds effects include:
In further exploring ways to improve the results, a new technique called "surprisingly popular" was developed by scientists at MIT's Sloan Neuroeconomics Lab in collaboration with Princeton University. For a given question, people are asked to give two responses: what they think the right answer is, and what they think popular opinion will be. The averaged difference between the two indicates the correct answer. It was found that the "surprisingly popular" algorithm reduces errors by 21.3 percent compared to simple majority votes, by 24.2 percent compared to basic confidence-weighted votes where people express how confident they are of their answers, and by 22.2 percent compared to advanced confidence-weighted votes, where one only uses the answers with the highest average.[18]
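A minimal sketch of the binary (yes/no) case, under the assumption that each respondent supplies both an own answer and a predicted share of "yes" answers, might look like the following; the published method generalizes to more than two options.

```python
from statistics import mean

def surprisingly_popular(own_answers, predicted_yes_share):
    """Return the answer that is more popular than respondents predicted.

    own_answers         -- list of bools: each respondent's own answer (True = "yes")
    predicted_yes_share -- list of floats: each respondent's prediction of the
                           fraction of the group that will answer "yes"
    """
    actual_yes = mean(1.0 if a else 0.0 for a in own_answers)
    predicted_yes = mean(predicted_yes_share)
    # "Yes" wins only if it is endorsed more often than the crowd expected.
    return "yes" if actual_yes > predicted_yes else "no"

# Hypothetical example: a question most people get wrong. The informed minority
# answers "no" while correctly predicting that most others will say "yes".
answers = [True] * 60 + [False] * 40
predictions = [0.9] * 60 + [0.8] * 40
print(surprisingly_popular(answers, predictions))  # prints "no"
```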
In the context of wisdom of the crowd, the termcrowdtakes on a broad meaning. One definition characterizes a crowd as a group of people amassed by an open call for participation.[19]
In the digital age, the potential for collective intelligence has expanded with the advent of information technologies and social media platforms such as Google, Facebook, Twitter, and others. These platforms enable the aggregation of opinions and knowledge on a massive scale, creating what some have defined as "intelligent communities."[20]However, the effectiveness of these digital crowds can be compromised by issues such as demographic biases, the influence of highly active users, and the presence of bots, which can skew the diversity and independence necessary for a crowd to be truly wise. To mitigate these issues, researchers have suggested using a multi-media approach to aggregate intelligence from various platforms or employing factor analysis to filter out biases and noise.[21]
While crowds are often leveraged in online applications, they can also be utilized in offline contexts.[19]In some cases, members of a crowd may be offered monetary incentives for participation.[22]Certain applications of "wisdom of the crowd", such as jury duty in the United States, mandate crowd participation.[23]
The insight that crowd responses to an estimation task can be modeled as a sample from a probability distribution invites comparisons with individual cognition. In particular, it is possible that individual cognition is probabilistic in the sense that individual estimates are drawn from an "internal probability distribution." If this is the case, then two or more estimates of the same quantity from the same person should average to a value closer to ground truth than either of the individual judgments, since the effect of statistical noise within each of these judgments is reduced. This of course rests on the assumption that the noise associated with each judgment is (at least somewhat) statistically independent. Thus, the crowd needs to be independent but also diversified, in order to allow a variety of answers. The answers on the ends of the spectrum cancel each other out, allowing the wisdom-of-the-crowd phenomenon to emerge. Another caveat is that individual probability judgments are often biased toward extreme values (e.g., 0 or 1). Thus any beneficial effect of multiple judgments from the same person is likely to be limited to samples from an unbiased distribution.[24]
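For example, if two estimates from the same person are modeled as X_i = \theta + \varepsilon_i with independent, zero-mean noise of variance \sigma^2, their average has noise variance \operatorname{Var}\big((X_1+X_2)/2\big) = \sigma^2/2, half that of either single estimate. This is the sense in which averaging one's own repeated guesses can help, under the (strong) assumption that the two errors are independent.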
Vul and Pashler (2008) asked participants for point estimates of continuous quantities associated with general world knowledge, such as "What percentage of the world's airports are in the United States?" Without being alerted to the procedure in advance, half of the participants were immediately asked to make a second, different guess in response to the same question, and the other half were asked to do this three weeks later. The average of a participant's two guesses was more accurate than either individual guess. Furthermore, the averages of guesses made in the three-week delay condition were more accurate than guesses made in immediate succession. One explanation of this effect is that guesses in the immediate condition were less independent of each other (ananchoringeffect) and were thus subject to (some of) the same kind of noise. In general, these results suggest that individual cognition may indeed be subject to an internal probability distribution characterized by stochastic noise, rather than consistently producing the best answer based on all the knowledge a person has.[24]These results were mostly confirmed in a high-powered pre-registered replication.[25]The only result that was not fully replicated was that a delay in the second guess generates a better estimate.
Hourihan and Benjamin (2010) tested the hypothesis that the estimate improvements observed by Vul and Pashler in the delayed responding condition were the result of increased independence of the estimates. To do this Hourihan and Benjamin capitalized on variations inmemory spanamong their participants. In support they found that averaging repeated estimates of those with lower memory spans showed greater estimate improvements than the averaging the repeated estimates of those with larger memory spans.[26]
Rauhut and Lorenz (2011) expanded on this research by again asking participants to make estimates of continuous quantities related to real world knowledge. In this case participants were informed that they would make five consecutive estimates. This approach allowed the researchers to determine, firstly, the number of times one needs to ask oneself in order to match the accuracy of asking others and then, the rate at which estimates made by oneself improve estimates compared to asking others. The authors concluded that asking oneself an infinite number of times does not surpass the accuracy of asking just one other individual. Overall, they found little support for a so-called "mental distribution" from which individuals draw their estimates; in fact, they found that in some cases asking oneself multiple times actually reduces accuracy. Ultimately, they argue that the results of Vul and Pashler (2008) overestimate the wisdom of the "crowd within" – as their results show that asking oneself more than three times actually reduces accuracy to levels below that reported by Vul and Pashler (who only asked participants to make two estimates).[27]
Müller-Trede (2011) attempted to investigate the types of questions in which utilizing the "crowd within" is most effective. He found that while accuracy gains were smaller than would be expected from averaging ones' estimates with another individual, repeated judgments lead to increases in accuracy for both year estimation questions (e.g., when was the thermometer invented?) and questions about estimated percentages (e.g., what percentage of internet users connect from China?). General numerical questions (e.g., what is the speed of sound, in kilometers per hour?) did not improve with repeated judgments, while averaging individual judgments with those of a random other did improve accuracy. This, Müller-Trede argues, is the result of the bounds implied by year and percentage questions.[28]
Van Dolder and Van den Assem (2018) studied the "crowd within" using a large database from three estimation competitions organised by Holland Casino. For each of these competitions, they find that within-person aggregation indeed improves accuracy of estimates. Furthermore, they also confirm that this method works better if there is a time delay between subsequent judgments. Even with considerable delay between estimates, between-person aggregation is more beneficial. The average of a large number of judgements from the same person is barely better than the average of two judgements from different people.[29]
Herzog and Hertwig (2009) attempted to improve on the "wisdom of many in one mind" (i.e., the "crowd within") by asking participants to use dialectical bootstrapping. Dialectical bootstrapping involves the use ofdialectic(reasoned discussion that takes place between two or more parties with opposing views, in an attempt to determine the best answer) andbootstrapping(advancing oneself without the assistance of external forces). They posited that people should be able to make greater improvements on their original estimates by basing the second estimate onantitheticalinformation. Therefore, these second estimates, based on different assumptions and knowledge than that used to generate the first estimate would also have a different error (bothsystematicandrandom) than the first estimate – increasing the accuracy of the average judgment. From an analytical perspective dialectical bootstrapping should increase accuracy so long as the dialectical estimate is not too far off and the errors of the first and dialectical estimates are different. To test this, Herzog and Hertwig asked participants to make a series of date estimations regarding historical events (e.g., when electricity was discovered), without knowledge that they would be asked to provide a second estimate. Next, half of the participants were simply asked to make a second estimate. The other half were asked to use a consider-the-opposite strategy to make dialectical estimates (using their initial estimates as a reference point). Specifically, participants were asked to imagine that their initial estimate was off, consider what information may have been wrong, what this alternative information would suggest, if that would have made their estimate an overestimate or an underestimate, and finally, based on this perspective what their new estimate would be. Results of this study revealed that while dialectical bootstrapping did not outperform the wisdom of the crowd (averaging each participants' first estimate with that of a random other participant), it did render better estimates than simply asking individuals to make two estimates.[30]
Hirt and Markman (1995) found that participants need not be limited to a consider-the-opposite strategy in order to improve judgments. Researchers asked participants to consider-an-alternative – operationalized as any plausible alternative (rather than simply focusing on the "opposite" alternative) – finding that simply considering an alternative improved judgments.[31]
Not all studies have shown support for the "crowd within" improving judgments. Ariely and colleagues asked participants to provide responses based on their answers to true-false items and their confidence in those answers. They found that while averaging judgment estimates between individuals significantly improved estimates, averaging repeated judgment estimates made by the same individuals did not significantly improve estimates.[32]
Wisdom-of-the-crowds research routinely attributes the superiority of crowd averages over individual judgments to the elimination of individual noise,[33]an explanation that assumesindependenceof the individual judgments from each other.[8][24]Thus the crowd tends to make its best decisions if it is made up of diverse opinions and ideologies.
Averaging can eliminaterandom errorsthat affect each person's answer in a different way, but notsystematic errorsthat affect the opinions of the entire crowd in the same way. For instance, a wisdom-of-the-crowd technique would not be expected to compensate forcognitive biases.[34][35]
Scott E. Pageintroduced the diversity prediction theorem: "The squared error of the collective prediction equals the average squared error minus the predictive diversity". Therefore, when the diversity in a group is large, the error of the crowd is small.[36]
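In symbols, writing s_i for the individual predictions, \bar{s} for their mean (the collective prediction), and \theta for the true value, the theorem is the algebraic identity (\bar{s}-\theta)^2 = \frac{1}{n}\sum_{i=1}^{n}(s_i-\theta)^2 - \frac{1}{n}\sum_{i=1}^{n}(s_i-\bar{s})^2, so the collective error can never exceed the average individual error, and is strictly smaller whenever the predictions disagree.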
Henning Piezunka and Oliver Schilke provide experimental evidence that people participating in the aggregation of opinions with the intent to leverage the wisdom of the crowd often do not share their sincere opinion, but engage in strategic voting.[37]
Miller and Steyvers reduced the independence of individual responses in a wisdom-of-the-crowds experiment by allowing limited communication between participants. Participants were asked to answer ordering questions for general knowledge questions such as the order of U.S. presidents. For half of the questions, each participant started with the ordering submitted by another participant (and was alerted to this fact), and for the other half, they started with a random ordering, and in both cases were asked to rearrange them (if necessary) to the correct order. Answers where participants started with another participant's ranking were on average more accurate than those from the random starting condition. Miller and Steyvers conclude that different item-level knowledge among participants is responsible for this phenomenon, and that participants integrated and augmented previous participants' knowledge with their own knowledge.[38]
Crowds tend to work best when there is a correct answer to the question being posed, such as a question about geography or mathematics.[39]When there is not a precise answer crowds can come to arbitrary conclusions.[40]Wisdom-of-the-crowd algorithms thrive when individual responses exhibit proximity and a symmetrical distribution around the correct, albeit unknown, answer. This symmetry allows errors in responses to cancel each other out during the averaging process. Conversely, these algorithms may falter when the subset of correct answers is limited, failing to counteract random biases. This challenge is particularly pronounced in online settings where individuals, often with varying levels of expertise, respond anonymously. Some "wisdom-of-the-crowd" algorithms tackle this issue using expectation–maximization voting techniques. The Wisdom-IN-the-crowd (WICRO) algorithm[35]offers a one-pass classification solution. It gauges the expertise level of individuals by assessing the relative "distance" between them. Specifically, the algorithm identifies experts by presuming that their responses will be relatively "closer" to each other when addressing questions within their field of expertise. This approach enhances the algorithm's ability to discern expertise levels in scenarios where only a small subset of participants possess proficiency in a given domain, mitigating the impact of potential biases that may arise during anonymous online interactions.[35][41]
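The published WICRO procedure is not reproduced here; the sketch below only illustrates the general idea of proximity-based weighting described above, with hypothetical data and a simple choice of distance and weighting scheme.

```python
import numpy as np

def proximity_weighted_answers(responses):
    """Illustrative sketch (not the published WICRO algorithm): respondents whose
    answer vectors sit closer to everyone else's get more weight, on the
    assumption that experts tend to cluster together.

    responses -- array of shape (n_respondents, n_questions) with numeric answers
    """
    responses = np.asarray(responses, dtype=float)
    # Mean Euclidean distance from each respondent to every other respondent.
    diffs = responses[:, None, :] - responses[None, :, :]
    mean_dist = np.linalg.norm(diffs, axis=2).sum(axis=1) / (len(responses) - 1)
    # Inverse-distance weights, normalized to sum to one.
    weights = 1.0 / (mean_dist + 1e-9)
    weights /= weights.sum()
    # Weighted average answer per question.
    return weights @ responses
```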
The wisdom of the crowd effect is easily undermined. Social influence can cause the average of the crowd answers to be inaccurate, while the geometric mean and the median are more robust.[42]This relies on knowing an individual's uncertainty and trust of their estimate. The average answer of individuals who are knowledgeable about a topic will vary from the average of individuals who know nothing of the topic. A simple average of knowledgeable and inexperienced opinions will be less accurate than one in which the weighting of the average is based on the uncertainty and trust of their answer.
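A minimal sketch of such weighting, assuming each respondent reports a standard error alongside their estimate, is inverse-variance weighting, in which confident answers count for more than uncertain ones:

```python
def inverse_variance_average(estimates, std_errors):
    """Average estimates weighted by the inverse of their reported variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

# Hypothetical example: two knowledgeable respondents and one wild guesser.
print(inverse_variance_average([1200, 1190, 900], [20, 25, 300]))  # ~1195
```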
Organizations that leverage the wisdom of the crowd may undermine the learning of their own members.[43]When organizations make decisions leveraging the wisdom of the crowds, contrarian opinions are marginalized and not acted upon. Correspondingly, contrarian voters never receive feedback as their opinion is not turned into action.
Experiments run by the Swiss Federal Institute of Technology found that when a group of people were asked to answer a question together they would attempt to come to a consensus which would frequently cause the accuracy of the answer to decrease. One suggestion to counter this effect is to ensure that the group contains a population with diverse backgrounds.[40]
Research from theGood Judgment Projectshowed that teams organized in prediction polls can avoid premature consensus and produce aggregate probability estimates that are more accurate than those produced in prediction markets.[44]
|
https://en.wikipedia.org/wiki/Wisdom_of_the_crowd
|
Wiki surveysorwikisurveysare asoftware-basedsurveymethod with similarity to howwikisevolve throughcrowdsourcing. In essence, they are surveys that allow participants to create the questions that are being asked.[1][2][3]Other names includebridging systemsand collective response systems.[4]As participants engage in the survey they can either vote on a survey question or create a survey question. A single open-ended prompt written by the creator of the survey determines the topic the questions should be on. The first known implementation of a wiki survey was in 2010,[5]and they have been used since then for a variety of purposes such as facilitatingdeliberative democracy, crowdsourcing opinions from experts and figuring out common beliefs on a given topic.[6][7][8]A notable usage of wiki surveys is inTaiwan's government system, where citizens can participate in crowdsourced lawmaking through Polis wiki surveys.[9][10][11]
Wiki surveys facilitate collective intelligence by allowing users to both contribute and respond to the survey, as well as see the results of the survey in real time. More generally, they can be seen as a tool for establishing consensus among large groups of people. Wiki surveys mainly differ from consensus-building in comment sections by using a heuristic that determines the order of questions for each participant with the aim of maximizing consensus, by not allowing replies to questions, and by providing visualization tools to better understand consensus.
All Our Ideas was the first ever wiki survey.[1]Its focus is on ranking the favorability of each 'item' that users submit to the survey. Each question asks the participant to choose the better of two items. At any point in time, participants can view a ranking of the items in order of their score. The score for an item is the estimated probability that it would be favored over another randomly chosen item. In this sense, it is considered a 'pairwise wiki survey'. The code for All Our Ideas is open source.[12]
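A naive way to approximate such pairwise scores is to compute, for every item, its average win rate against each opponent it has been compared with. All Our Ideas itself fits a statistical model rather than using raw win rates, so the following Python sketch (with hypothetical items and votes) is only illustrative:

from collections import defaultdict

def pairwise_scores(votes, items):
    # votes is a list of (preferred, rejected) pairs from individual comparisons.
    wins = defaultdict(int)
    shown = defaultdict(int)
    for winner, loser in votes:
        wins[(winner, loser)] += 1
        shown[(winner, loser)] += 1
        shown[(loser, winner)] += 1
    scores = {}
    for a in items:
        probs = []
        for b in items:
            if a == b:
                continue
            n = shown[(a, b)]
            probs.append(wins[(a, b)] / n if n else 0.5)   # no data: treat as a coin flip
        scores[a] = sum(probs) / len(probs)                # chance of beating a random opponent
    return scores

votes = [("bike lanes", "more parking"), ("bike lanes", "new stadium"),
         ("more parking", "new stadium"), ("new stadium", "more parking")]
print(pairwise_scores(votes, ["bike lanes", "more parking", "new stadium"]))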
Polis(also known as Pol.is) was developed in 2012.[2]The focus of Polis is to project participants into an 'opinion space' where they can see how their voting behavior compares to other participants. The opinion space clusters participants into groups of similar opinion and is designed in a way to avoidtyranny of the majorityby being able to include groups that have small numbers of participants. The questions participants are presented with are a simple agree/disagree/pass on a single 'comment' submitted by a participant. The code for Polis isopen source.[13]
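The opinion-space idea can be sketched with a small participant-by-comment vote matrix: center the matrix, project it onto its two leading principal directions, and group participants by where they land. Polis's actual pipeline is considerably more involved; the vote data and the one-axis grouping rule below are purely hypothetical.

import numpy as np

# Rows are participants, columns are comments: +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1, -1, -1,  1],
])

centered = votes - votes.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)   # principal directions via SVD
opinion_space = u[:, :2] * s[:2]                          # 2-D position of each participant

# Crude grouping: split participants by which side of the first axis they fall on.
groups = (opinion_space[:, 0] > 0).astype(int)
print(opinion_space.round(2))
print(groups)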
Wiki surveys have the following three defining characteristics:[1][6][2]
Wiki surveys allow participants to contribute questions, as well as answer questions created by its participants.
Wiki surveys adapt to elicit the most useful information from its participants. This is done by changing the ordering of questions based on the voting behavior of previous participants so as to maximize consensus. The heuristic determining the ordering of questions highly values showing the comments that have been voted on the least.
Although 'greedy' typically has a negative connotation, it is used in a positive sense for wiki surveys. They are 'greedy' because they make full use of whatever information participants are willing to provide: wiki surveys do not require participants to answer a fixed number of questions, so participants can answer as few or as many as they want.
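A toy illustration of the adaptive ordering described above, assuming the only signal used is how many votes each comment has received so far (real wiki-survey implementations combine several signals):

import random

def next_comment(vote_counts, seen):
    # Among comments this participant has not voted on yet, prefer the ones
    # with the fewest votes so far, breaking ties at random.
    unseen = [c for c in vote_counts if c not in seen]
    if not unseen:
        return None          # the participant has seen everything (or may simply stop)
    fewest = min(vote_counts[c] for c in unseen)
    return random.choice([c for c in unseen if vote_counts[c] == fewest])

print(next_comment({"comment1": 12, "comment2": 3, "comment3": 3}, seen={"comment3"}))   # -> "comment2"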
Questions in traditional survey methods fall into two categories: open and closed questions. Open questions ask the person taking the survey to write an open response, while closed questions give a fixed set of responses to select from.[19]Wiki surveys are a hybrid of the two, enabling insightful consensus in situations where traditional survey methods may fall short. Closed questions are easy to analyze quantitatively, but the limited options to select from for a given question may cause bias. Open questions are not as subject to bias, but are difficult to analyze quantitatively at scale. Wiki surveys allow for open responses through the users' contribution of survey questions (also called 'items'), and use machine learning techniques to automatically analyze the responses to those questions quantitatively.
The 'greediness' characteristic of wiki surveys is thought to also be advantageous, as it allows for gathering more data per participant.[1]Data from real-world usage of wiki surveys shows that a relatively small number of participants answer a much larger number of questions than most participants do. In traditional surveys, participants who want to provide more information than required are typically not allowed to do so, thus missing out on potentially useful information.
Traditional survey methods are better suited for situations where the survey creator(s) need to acquire consensus on a specific question or set of questions. Wiki surveys are more seen as a method for deliberation and gathering consensus on ideas that were not thought of by the survey creator(s). There is also a lack of research on determining potential biases that wiki surveys may cause.[citation needed]
|
https://en.wikipedia.org/wiki/Wiki_survey
|
Crowdsourceis acrowdsourcingplatform developed byGoogleintended to improve a host of Google services through the user-facing training of differentalgorithms.[2]
Crowdsource was released for theAndroidoperating system on theGoogle Playstore on August 29, 2016, and is also available on theweb. Crowdsource includes a variety of short tasks users can complete to improve many of Google's different services. Such tasks include image label verification, sentiment evaluation, and translation validation. By completing these tasks, users provide Google with data to improve services such asGoogle Maps,Google Translate, and Android.[3]As users complete tasks, they earnachievementsincluding stats, badges, and certificates, which track their progress.
Crowdsource was released quietly on the Google Play store, with no marketing from Google.[4]It received mixed reviews on release, with many reviews stating that its lack of monetary rewards is unusual, as similar platforms, such asGoogle Opinion Rewards, often reward users with Play credits.[5][4][6]
Crowdsource includes different types of tasks, and these each provide Google with different information that it can give as training data to itsmachine learning algorithms. In the app's description on Google Play, Google refers to these tasks as "microtasks" which should take "no more than 5-10 seconds" to complete.[7]
Upon launch, the Crowdsource Android application presented users with 5 different tasks: image transcription, handwriting recognition, translation, translation validation, and map translation validation.[6]The most recent version of the app includes 11 tasks: Image Label Verification, Sentiment Evaluation, Audio Validation, Smart Camera, Glide Type, Handwriting Recognition, Reading Charts, Trust in Charts, Translation, Translation Validation, and Image Capture.
Translation related tasks (translation and translation validation) are only shown to users who have selected more than one language they are fluent in.[7][5][4][6]While Maps Translation validation is no longer a task in the Crowdsource Android and web apps, users can still complete translation and translation validation tasks.[7]Translation presents the user with one of the languages they listed themselves as fluent in, and asks them to translate it into another language they are fluent in.[7]Translation validation presents users with a list of translations submitted by other users, and asks them to categorize them as correct or incorrect.[7]Both of these tasks help improve Google's translating capabilities, most notably in Google Translate, and any other Google app with translated content, including Google Maps.[3][8]
Image Label Verification allows users to select a word from a list of topics, such as "Cars", and then presents the user with a picture, asking the question "Does this image contain cars?". Users can select "Yes", "No", or "Skip" if they are unsure.[9][4]The data gained from this task is used to help read text within images for services like Google Street View.[4]
A similar task is available in theGoogle Photosmobile and web apps. However, unlike Crowdsource, the photos that are presented there are the user's own photos.[10]
Handwriting recognitionrelies on users to read handwritten words and transcribe them to text. According to Google, completing this task helps improveGboard's handwriting feature.[3]
Sentiment evaluation presents the user with various reviews and comments, and asks them to describe the statement as "positive", "neutral", or "negative".[11]Alternatively, users can skip a question if they are unsure.[3]These evaluations by the users of Crowdsource help with various recommendation-based technologies that Google uses on platforms like Google Maps, the Google Play Store, andYouTube.[3][11]
This task asks users to confirm if a specific landmark is visible in pictures shown in the app. This task is designed to help ensure businesses and landmarks are recognizable in applications such as Google Maps. This task is now complete and no longer available.[12]
Audio Validation helps improve Google'sText-to-Speechtechnology.[9]The user is presented with a short audio clip in which a computer tries to read a word out loud. The user can then specify whether this is how said word would be correctly pronounced.
This task helps improve an algorithm to create captions for online images. According to the Google Crowdsource web app, "Verifying machine generated captions will help make images more accessible to people with visual and cognitive impairment".[13]
When working on the Image Capture task, the user is presented with an interface to upload an image and add tags to it.
When using Smart Camera, the user can point the device's camera at an object, similar to how Google Lens is used. A dialogue opens when an item is detected, and the app says what it thinks it sees. The user can then confirm or deny this description. If the user confirms, the photo is uploaded and the task starts over; if the user denies, the app shows an option to select why, along with a place to add tags to the image.
This task was added on May 17, 2020, to help improve the algorithm for Gboard's glide typing feature.
These two tasks were also added on May 17, 2020, as a collaboration with King's College and the University of Vienna to measure the community's understanding of, and trust in, different graph types. The tasks were completed and closed the next day, May 18.
Beyond the tasks that users can complete, the Crowdsource app has an "Achievements" section that shows users stats and badges which they earn through completing different tasks in Crowdsource.[7][3][14][15]
When users contribute to Crowdsource by completing tasks, Crowdsource tracks their total number of contributions, as well as metrics like "upvotes" which show how many of a user's answers are "in agreement with answers from the Crowdsource community,"[14]and "accuracy" which shows what percentage of the user's answers have been accepted as correct.[14]
As users complete tasks, they also receive badges. There are badges for each type of task, which track progress along that particular task (such as translation validation), as well as badges for other milestones, like completing a task whileofflineor completing a task given through a push notification.[15][7]
An April 2018 update to Crowdsource included a new "Image Capture" task in which users can take photos, tag them, and upload them to Crowdsource.[4]Users can choose toopen-sourcetheir images, as well, to share them with researchers and developers not exclusive to Google.[16]In an interview withWired Magazine, Anurag Batra, a product manager at Google who leads the Crowdsource team, said that the data gained from users completing this image capture task could improveGoogle Images,Pixel Camera, and Google Lens.[17]The latest update adds a 'back' button to change your previous response.
On January 21, 2021, an update introduceddark themeto the mobile app.[9]This update also changed the overalluser interfacesignificantly. A new task, Audio Validation, was also added.[9]
Crowdsource is also available as a web application. It offers many of the same tasks, such as Image Label Verification, translation, and translation validation, and includes a page for users to view their achievements, much like the Android application. Unlike the Android version, the Crowdsource website includes tasks for validating image captions and evaluating facial expressions. However, it does not have the sentiment evaluation, image capture, smart camera, and audio validation tasks available.[13]As of March 18, 2021, the Crowdsource website also does not have dark theme.
On release, many reviewers found the app's lack of monetary rewards unusual, due to the fact that Google has a similar app, Google Opinion Rewards, which offers Google Play store credits after completing short surveys.[6][4][5]An August 2016 review from Android Pit noted that "this reliance on altruism [is] a little strange given that Google already has an app, Google Opinion Rewards, which has a financial incentive for user feedback. It works slightly differently, but I don't see why the same reward scheme could not be applied." The review also expressed concern with this model, citing that users would be less likely to complete these tasks without more substantial rewards, writing that Crowdsource is "banking on the kind nature of its users", and asking "How are users, who generally want everything free and withoutadverts, going to respond to this in the long-term? Why would they rate this app highly, when the results are so nebulous?"[6]A review fromWiredshared similar concerns, writing "while Google is being open about its motivations, it will be difficult for users to know what difference their contributions make."[17]
On Crowdsource's FAQ page, Google addresses this question of "Will I get paid for my answers?", answering, "No. Crowdsource is a community effort – we rely on the goodwill of community members to help improve the quality of services such as Google Maps, Google Translate, and others, so that everybody in the world can benefit".[18]In an August 2016 review,CNETnoted that Google's statement in Crowdsource's description, "Every time you use it, you know that you've made theinterneta better place for your community.", is not accurate, stating that Google does not offer free access to Google Maps and Google Translate data.[19]A review fromTechCrunchalso noted that Crowdsource is "solely focused on helping Google improve its own services," contrasting it withAmazon Mechanical Turk, which focuses on tasks from third parties.[4]
An April 2018 interview inWiredstated that Google'smachine learningalgorithms work best in theUnited StatesandWestern Europe, but are less effective in less prosperous countries. In this interview Anurag Batra, aproduct managerat Google who leads the Crowdsource team, shared Google's motivations behind the Crowdsource app, stating that Google has "very sparse training data set from parts of the world that are not the United States and Western Europe,"[17]According toWired, Google has a team that promotes the Crowdsource app inIndiaand throughoutAsiaat colleges, and will likely expand toLatin Americalater in 2018.[17]In a blog post on Local Guides Connect, Batra explains why Crowdsource is helpful to Google, detailing that the questions that Crowdsource asks users are designed to collect better samples of data to feed their machine learning algorithms.[3]
Google uses the answers provided by users of Crowdsource, and validates them by showing them anonymously to other Crowdsource users.[20]According to Google, once answers are validated, they are used to "train computer algorithms that run services such as Translate, Maps, Gboard, and others."[20]
A brief on Cio Dive stated that an "accurate data set is critical" to the success of new technologies such asvoice assistantsandautonomous vehicles.[21]The brief also notes that companies like Google andIBMare well positioned in the fields ofartificial intelligenceand machine learning due to the volume of data available to them to train and develop advanced artificial intelligence.[21]
|
https://en.wikipedia.org/wiki/Crowdsource_(app)
|
Majoritarianism is a political philosophy or ideology with an agenda asserting that a majority, whether based on a religion, language, social class, or other category of the population, is entitled to a certain degree of primacy in society, and has the right to make decisions that affect the society. This traditional view has come under growing criticism, and liberal democracies have increasingly included constraints on what the parliamentary majority can do, in order to protect citizens' fundamental rights.[1]Majoritarianism should not be confused with electoral systems that give seats to candidates with only a plurality of votes. Although such systems are sometimes called majoritarian systems, they use plurality, not majority, to set winners. Some electoral systems, such as instant-runoff voting, are most often majoritarian – winners are most often determined by holding a majority of the votes being counted – but not always. A parliament that gives lawmaking power to any group that holds a majority of seats may be called a majoritarian parliament. Such is the case in the Parliament of the United Kingdom and the Parliament of Saudi Arabia and many other chambers of power.
Under a democratic majoritarianpolitical structure, the majority would not exclude any minority from future participation in the democratic process. Majoritarianism is sometimespejorativelyreferred to by its opponents as "ochlocracy" or "tyranny of the majority". Majoritarianism is often referred to asmajority rule, which may refer to a majorityclassrulingover a minority class, while not referring to the decision process calledmajority rule. Majority rule is a belief that the majority community should be able to rule a country in whichever way it wants. However, due to active dis-empowerment of the minority or minorities, in many cases what is claimed as the majority with the right to rule is only a minority of the voters.
Advocates of majoritarianism argue that majority decision making is intrinsically democratic and that any restriction on majority decision making is intrinsically undemocratic. If democracy is restricted by aconstitutionthat cannot be changed by a simple majority decision, then yesterday's majority is being given more weight than today's. If it is restricted by some small group, such asaristocrats, judges, priests, soldiers, or philosophers, then society becomes anoligarchy. The only restriction acceptable in a majoritarian system is that a current majority has no right to prevent a different majority emerging in the future; this could happen, for example, if a minority persuades enough of the majority to change its position. In particular, a majority cannot exclude a minority from future participation in the democratic process. Majoritarianism does not prohibit a decision being made by representatives as long as this decision is made via majority rule, as it can be altered at any time by any different majority emerging in the future.
One critique of majoritarianism is that systems without supermajority requirements for changing the rules of voting can be shown to be likely unstable.[2]Another critique is that most decisions in fact take place not by majority rule but by plurality, unless the voting system purposefully channels votes for candidates or options in such a way as to guarantee a majority, as is done under contingent voting, two-round voting and instant-runoff voting.[3]According to Gibbard's theorem and Arrow's paradox, it is not possible to have a voting system with more than two options that satisfies both certain "fairness" criteria and rational decision-making criteria.[3][4]
Unchecked majoritarianism may threaten the rights of minority groups.[5]Some democracies have tried to resolve this by requiringsupermajoritysupport to enact changes to basic rights. For example, in the United States, the rights tofreedom of speechandfreedom of religionare written into theConstitution, meaning it would take more than a simple majority of the members of Congress to repeal the rights.[6]This actually empowers a minority and makes it stronger than the majority. Other democracies have sought to address threats to minority rights by adopting proportional voting systems that guarantee at least some seats in their national legislatures to minority political factions. Examples include New Zealand, wheremixed-member proportionalvoting is used, and Australia, where asingle transferable votesystem is used.[7][8]Whether these methods have succeeded in protecting minority interests, or have gone too far, remains a matter for debate.[9]
Majoritarianism, as a concept of government, branches out into several forms. The classic form includesunicameralismand aunitary state. Qualified majoritarianism is a more inclusionary form, with degrees of decentralization and federalism. Integrative majoritarianism incorporates several institutions to preserve minority groups and foster moderate political parties.[10]
There are relatively few instances of large-scale majority rule in recorded history, most notably the majoritarian system ofAthenian democracyand otherancient Greekcity-states. However, some argue that none of those Greek city-states were truly majority rule, particularly due to their exclusion of women, non-landowners, and slaves from decision-making processes. Most of the famous ancient philosophers staunchly opposed majoritarianism, because decisions based on the will of the uneducated and uninformed 'masses' are not necessarily wise or just.Platois a prime example with hisRepublic, which describes a societal model based on a tripartite class structure. Anarchist anthropologistDavid Graeberoffers a reason as to why majority democratic government is so scarce in the historical record. "Majority democracy, we might say, can only emerge when two factors coincide: 1. a feeling that people should have equal say in making group decisions, and 2. a coercive apparatus capable of enforcing those decisions." Graeber argues that those two factors almost never meet: "Where egalitarian societies exist, it is also usually considered wrong to impose systematic coercion. Where a machinery of coercion did exist, it did not even occur to those wielding it that they were enforcing any sort of popular will."[11]
Majoritarianism (as a theory), similar to democracy, has often been used as a pretext by sizable or aggressive minorities to politically oppress other smaller (or civically inactive) minorities, or even sometimes a civically inactive majority (seeRichard Nixon's reference to the "Silent Majority" that he asserted supported his policies). This agenda is most frequently encountered in the realm of religion: In essentially allWesternnations, for instance,Christmas Day—and in some countries, other important dates in theChristian yearas well—are recognized as legal holidays; plus a particular denomination may be designated as thestate religionand receive financial backing from the government (examples include theChurch of EnglandinEnglandand theLutheran Churchin theScandinaviancountries). Virtually all countries also have one or more official languages, often to the exclusion of some minority group or groups within that country who do not speak the language or languages so designated. In most cases, those decisions have not been made using a majoritarianreferendum, and even in the rare case when a referendum has been used, a new majority is not allowed to emerge at any time and repeal it.
TYRANNY OF THE MAJORITY.[12]... In America the majority raises formidable barriers around the liberty of opinion; within these barriers an author may write what he pleases, but woe to him if he goes beyond them.
In recent times—especially beginning in the 1960s—some forms of majoritarianism have been countered byliberalreformers in many countries.[clarification needed]In the 1963 caseAbington School District v. Schempp, theUnited States Supreme Courtdeclared that school-ledprayerin the nation'spublic schoolswas unconstitutional, and since then many localities have sought to limit, or even prohibit, religious displays on public property.[clarification needed]The movement toward greater consideration for the rights of minorities within a society is often referred to aspluralism.[clarification needed]This has provoked a backlash from some advocates of majoritarianism, who lament theBalkanizationof society they claim has resulted from the gains made by themulticulturalism; these concerns were articulated in a 1972 book,The Dispossessed Majority, written byWilmot Robertson. In turn, supporters of multiculturalism have accused majoritarians ofracismandxenophobia.[citation needed]
|
https://en.wikipedia.org/wiki/Majoritarianism
|
InBoolean logic, themajority function(also called themedianoperator) is theBoolean functionthat evaluates to false when half or more arguments are false and true otherwise, i.e. the value of the function equals the value of the majority of the inputs.
Amajority gateis alogical gateused incircuit complexityand other applications ofBoolean circuits. A majority gate returns true if and only if more than 50% of its inputs are true.
For instance, in afull adder, the carry output is found by applying a majority function to the three inputs, although frequently this part of the adder is broken down into several simpler logical gates.
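A small Python illustration of this relationship between the majority function and the full adder's carry output:

def majority3(a, b, c):
    # Three-input majority: true when at least two of the inputs are true.
    return (a and b) or (b and c) or (a and c)

def full_adder(a, b, carry_in):
    # The carry-out of a one-bit full adder is the majority of its three
    # inputs; the sum bit is their exclusive or.
    return a ^ b ^ carry_in, majority3(a, b, carry_in)

print(full_adder(1, 1, 0))   # (0, 1): sum bit 0, carry bit 1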
Many systems havetriple modular redundancy; they use the majority function formajority logic decodingto implementerror correction.
A major result incircuit complexityasserts that the majority function cannot be computed byAC0 circuitsof subexponential size.
For any w, x, y, and z, the ternary median operator ⟨x, y, z⟩ satisfies the following equations: ⟨x, y, y⟩ = y; ⟨x, y, z⟩ = ⟨z, x, y⟩; ⟨x, y, z⟩ = ⟨x, z, y⟩; and ⟨⟨x, w, y⟩, w, z⟩ = ⟨x, w, ⟨y, w, z⟩⟩.
An abstract system satisfying these as axioms is amedian algebra.
The ternary median operator has a number of other useful properties as well.
Most applications deliberately force an odd number of inputs so they don't have to deal with the question of what happens when exactly half the inputs are 0 and exactly half the inputs are 1. The few systems that calculate the majority function on an even number of inputs are often biased towards "0" – they produce "0" when exactly half the inputs are 0 – for example, a 4-input majority gate has a 0 output only when two or more 0's appear at its inputs.[1]In a few systems, the tie can be broken randomly.[2]
For n = 1 the median operator is just the unary identity operation x. For n = 3 the ternary median operator can be expressed using conjunction and disjunction as xy + yz + zx.
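A short Python sketch of the general majority function, checked against the two-level formula for n = 3:

def majority(*bits):
    # True when more than half of the inputs are true; an exact tie on an
    # even number of inputs comes out false (the usual bias towards 0).
    return sum(bits) * 2 > len(bits)

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert majority(x, y, z) == bool(x*y + y*z + z*x)
print("majority(x, y, z) agrees with xy + yz + zx on all 8 inputs")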
For an arbitrary n there exists a monotone formula for majority of size O(n^5.3). This is proved using the probabilistic method; the argument is therefore non-constructive.[3]
Approaches also exist for constructing an explicit formula for majority of polynomial size.
|
https://en.wikipedia.org/wiki/Majority_function
|
Insocial choice theory, themajority rule(MR) is asocial choice rulewhich says that, when comparing two options (such asbillsorcandidates), the option preferred by more than half of the voters (amajority) should win.
Inpolitical philosophy, themajority ruleis one of two major competing notions ofdemocracy. The most common alternative is given by theutilitarian rule(or otherwelfaristrules), which identify the spirit ofliberal democracywith theequal consideration of interests.[1]Although the two rules can disagree in theory,political philosophersbeginning withJames Millhave argued the two can be reconciled in practice, with majority rule being a valid approximation to the utilitarian rule whenever voters share similarly-strong preferences.[1][2]This position has found strong support in manysocial choicemodels, where thesocially-optimal winnerand themajority-preferred winneroften overlap.[3][4]
Majority rule is the most common social choice rule worldwide, being heavily used indeliberative assembliesfordichotomousdecisions, e.g. whether or not to pass a bill.[5]Mandatory referendumswhere the question is yes or no are also generally decided by majority rule.[6]It is one of the basic rules ofparliamentary procedure, as described in handbooks likeRobert's Rules of Order.[1]
A common alternative to the majority rule is theplurality-rule familyof voting rules, which includesranked choice voting (RCV),two-round plurality, andfirst-preference plurality. These rules are often used in elections with more than two candidates. Such rules elect the candidate with the most votes after applying some voting procedure, even if a majority of voters would prefer some other alternative.[5][7]
Theutilitarian rule, andcardinal social choice rulesin general, take into account not just the number of voters who support each choice but also the intensity of theirpreferences.
Philosophers critical of majority rule have often argued that majority rule does not take into account theintensity of preferencefor different voters, and as a result "two voters who are casually interested in doing something" can defeat one voter who has "dire opposition" to the proposal of the two,[8]leading to poor deliberative practice or even to "an aggressive culture and conflict";[9]however, themedian voter theoremguarantees that majority-rule will tend to elect "compromise" or "consensus" candidates in many situations, unlike plurality-rules (seecenter squeeze).
Parliamentary rules may prescribe the use of a supermajoritarian rule under certain circumstances, such as the 60% filibuster rule to close debate in the US Senate.[4]However, such a requirement means that 41 percent or more of the members could prevent debate from being closed, an example in which the will of the majority can be blocked by a minority.
Kenneth May proved that the simple majority rule is the only "fair" ordinal decision rule, in that majority rule does not let some votes count more than others or privilege an alternative by requiring fewer votes to pass. Formally, majority rule is the only binary decision rule that is anonymous (all voters are treated identically), neutral (both alternatives are treated identically), and positively responsive (if a vote that would otherwise be tied shifts in favor of one alternative, that alternative wins).[10][11]
If voter's preferences are defined over a multidimensional option space, then choosing options using pairwise majority rule is unstable. In most cases, there will be noCondorcet winnerand any option can be chosen through a sequence of votes, regardless of the original option. This means that adding more options and changing the order of votes ("agenda manipulation") can be used to arbitrarily pick the winner.[12]
In group decision-making, voting paradoxes can form. It is possible that alternatives a, b, and c exist such that a majority prefers a to b, another majority prefers b to c, and yet another majority prefers c to a. When such a cycle occurs, whichever alternative is adopted can be defeated by another alternative that some majority prefers, so majority rule can fail to settle on a stable "majority decision".
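Such a cycle is easy to exhibit with three voters. The following Python fragment tallies every pairwise contest for three hypothetical ballots and shows that a beats b, b beats c, and yet c beats a:

from itertools import permutations

# One voter ranks a > b > c, one ranks b > c > a, one ranks c > a > b.
ballots = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

def prefers(ballot, x, y):
    # True if this ballot ranks x above y.
    return ballot.index(x) < ballot.index(y)

for x, y in permutations("abc", 2):
    wins = sum(prefers(b, x, y) for b in ballots)
    if wins > len(ballots) / 2:
        print(f"a majority ({wins}-{len(ballots) - wins}) prefers {x} over {y}")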
A super-majority rule actually empowers the minority, making it stronger (at least through its veto) than the majority. McGann argued that only one of multiple minorities is protected by the super-majority rule (as is also seen in simple plurality election systems), and that the protection is for the status quo rather than for the faction that supports it.
Another possible way to prevent tyranny is to elevate certain rights asinalienable.[13]Thereafter, any decision that targets such a right might bemajoritarian, but it would not be legitimate, because it would violate the requirement forequal rights.
Somesocial choice theoristshave arguedcyclingleads to debilitating instability.[5]BuchananandTullocknote thatunanimityis the only decision rule that guaranteeseconomic efficiencyand eliminates the possibility of cycling in all cases.[5]
McGann argued that majority rule helps to protectminority rights, at least in deliberative settings. The argument is that cycling ensures that parties that lose to a majority have an interest to remain part of the group's process, because any decision can easily be overturned by another majority. Furthermore, suppose a minority wishes to overturn a decision. In that case, under majority rule it just needs to form a coalition that has more than half of the officials involved and that will give it power. Under supermajority rules, a minority needs its own supermajority to overturn a decision.[5]
To support the view that majority rule protects minority rights better than supermajority rules, McGann pointed to the cloture rule in the US Senate, which was used to prevent the extension ofcivil libertiesto racial minorities.[5]Saunders, while agreeing that majority rule may offer better protection than supermajority rules, argued that majority rule may nonetheless be of little help to the least minorities.[14]
Under some circumstances, the legal rights of one person cannot be guaranteed without unjustly imposing on someone else. McGann wrote, "one man's right to property in the antebellum South was another man's slavery."[citation needed]
Amartya Senhas noted the existence of theliberal paradox, which shows that permitting assigning a very small number of rights to individuals may make everyone worse off.[15]
Saunders argued thatdeliberative democracyflourishes under majority rule and that under majority rule, participants always have to convince more than half the group, while undersupermajoritarianrules participants might only need to persuade a minority (to prevent a change).[14]
Where large changes in seats held by a party may arise from only relatively slight change in votes cast (such as under FPTP), and a simple majority is all that is required to wield power (most legislatures in democratic countries), governments may repeatedly fall into and out of power. This may cause polarization and policy lurch, or it may encourage compromise, depending on other aspects of political culture. McGann argued that such cycling encourages participants to compromise, rather than pass resolutions that have the bare minimum required to "win" because of the likelihood that they would soon be reversed.[15]
Within this atmosphere of compromise, a minority faction may accept proposals that it dislikes in order to build a coalition for a proposal that it deems of greater moment. In that way, majority rule differentiates weak and strong preferences. McGann argued that such situations encourage minorities to participate, because majority rule does not typically create permanent losers, encouraging systemic stability. He pointed to governments that use largely unchecked majority rule, such as is seen underproportional representationin theNetherlands,Austria, andSweden, as empirical evidence of majority rule's stability.[5]
|
https://en.wikipedia.org/wiki/Majority_rule
|
Thesilent majorityis an unspecified large group of people in a country or group who do not express their opinions publicly.[1]The term was popularized by U.S. PresidentRichard Nixonin a televised address on November 3, 1969, in which he said, "And so tonight—to you, the great silent majority of my fellow Americans—I ask for your support."[2][3]In this usage it referred to those Americans who did not join in the largedemonstrations against the Vietnam Warat the time, who did not join in thecounterculture, and who did not participate inpublic discourse. Nixon, along with many others, saw this group ofMiddle Americansas being overshadowed in the media by the more vocal minority.
Preceding Nixon by half a century, it was employed in 1919 byCalvin Coolidge's campaign for the1920 presidential nomination. Before that, the phrase was used in the 19th century as a euphemism referring to all the people who have died, and others have used it before and after Nixon to refer to groups of voters in various nations of the world.
"The majority" or "the silent majority" can be traced back to the Roman writer Petronius, who wroteabiit ad plures(he is gone to the majority) to describe deceased people, since the dead outnumber the living.[4](In 2023 there were approximately 14.6 dead for every living person.[5][6]). The phrase was used for much of the 19th century to refer to the dead. Phrases such as "gone to a better world", "gone before", and "joined the silent majority" served as euphemisms for "died".[7]In 1902, Supreme Court JusticeJohn Marshall Harlanemployed this sense of the phrase, saying in a speech that "great captains on both sides of our Civil War have long ago passed over to the silent majority, leaving the memory of their splendid courage."[8]
In May 1831, the expression "silent majority" was spoken by Churchill C. Cambreleng, representative of New York state, before 400 members of the Tammany Society.[9]Cambreleng complained to his audience about a U.S. federal bill that had been rejected without full examination by the United States House of Representatives. Cambreleng's "silent majority" referred to other representatives who voted as a bloc:
Whenever majorities trample upon the rights of minorities—when men are denied even the privilege of having their causes of complaint examined into—when measures, which they deem for their relief, are rejected by the despotism of a silent majority at a second reading—when such become the rules of our legislation, the Congress of this Union will no longer justly represent a republican people.[9]
In 1883, an anonymous author calling himself "A German" wrote a memorial toLéon Gambetta, published inThe Contemporary Review, a British quarterly. Describing French Conservatives of the 1870s, the writer opined that "their mistake was, not in appealing to the country, but in appealing to it in behalf of a Monarchy which had yet to be defined, instead of a Republic which existed; for in the latter case they would have had the whole of that silent majority with them."[10]
In 1919, Madison Avenue advertising executive and Republican Party supporterBruce Bartonemployed the term to bolsterCalvin Coolidge's campaign for the 1920 Republican Presidential nomination. InCollier'smagazine, Barton portrayed Coolidge as theeverymancandidate: "It sometimes seems as if this greatsilent majorityhad no spokesman. But Coolidge belongs with that crowd: he lives like them, he works like them, and understands."[11][12]
Referring toCharles I of England, historianVeronica Wedgwoodwrote this sentence in her 1955 bookThe King's Peace, 1637–1641: "The King in his natural optimism still believed that a silent majority in Scotland were in his favour."[13]
While Nixon was serving in 1955 as vice-president toDwight D. Eisenhower,John F. Kennedyand his research assistants wrote in Kennedy's bookProfiles in Courage, "Some of them may have been representing the actual sentiments of the silent majority of their constituents in opposition to the screams of a vocal minority..."[14]In January 1956, Kennedy gave Nixon an autographed copy of the book. Nixon wrote back the next day to thank him: "My time for reading has been rather limited recently, but your book is first on my list and I am looking forward to reading it with great pleasure and interest."[15]Nixon wroteSix Crises, some say his response to Kennedy's book, after visiting Kennedy at the White House in April 1961.[16][17]
In 1967, labor leaderGeorge Meanyasserted that those labor unionists (such as himself) who supported the Vietnam War were "the vast, silent majority in the nation."[18][19]Meany's statement may have provided Nixon's speechwriters with the specific turn of phrase.[20]
Barbara Ehrenreich[21]andJay Caspian Kang[22]later argued that awareness by the media and politicians that there actually might be a silent majority opposed to the anti-war movement was heightened during the August1968 Democratic National Conventionin Chicago, especially in reaction to the widely broadcastviolence by police against protesters and mediathere. The media reacted indignantly "against the police and the mayor" after journalists and protesters were attacked and beaten by the police, but were stunned to find that a poll showed 56% of those surveyed "sympathized with the police".[22][21]"Overnight the press abandoned its protest", awaking "to the disturbing possibility that they had grown estranged from a sizable segment of the public."[21][22]
In the months leading up to Nixon's 1969 speech, his vice-presidentSpiro T. Agnewsaid on May 9, "It is time for America's silent majority to stand up for its rights, and let us remember the American majority includes every minority. America's silent majority is bewildered by irrational protest..."[8]Soon thereafter, journalistTheodore H. Whiteanalyzed the previous year's elections, writing "Never have America's leading cultural media, its university thinkers, its influence makers been more intrigued by experiment and change; but in no election have the mute masses more completely separated themselves from such leadership and thinking. Mr. Nixon's problem is to interpret what the silent people think, and govern the country against the grain of what its more important thinkers think."[8]
On October 15, 1969, the firstMoratorium to End the War in Vietnamdemonstrations were held, attracting thousands of protesters.[23]Feeling very much besieged, Nixon went on national television to deliver a rebuttal speech on November 3, 1969, where he outlined "my plan to end the war" in Vietnam.[24]In his speech Nixon stated his policy of Vietnamization would lower American losses as the South Vietnamese Army would take on the burden of fighting the war; announced his willingness to compromise provided that North Vietnam recognized South Vietnam; and finally promised he would take "strong and effective measures" against North Vietnam if the war continued.[24]Nixon also implicitly conceded to the anti-war movement that South Vietnam was really not very important as he maintained that the real issue was the global credibility of the United States, as he stated his belief that all of America's allies would lose faith in American promises if the United States were to abandon South Vietnam.[24]Nixon ended his speech by saying all of this would take time, and asked for the public to support his policy of winning "peace with honor" in Vietnam as he concluded: "And so tonight, to you, the great silent majority of my fellow Americans—I ask for your support. Let us be united for peace. Let us be united against defeat. Because let us understand: North Vietnam cannot defeat or humiliate the United States. Only Americans can do that".[24]The public reaction to the "silent majority speech" was very favorable at the time and the White House phone lines were overwhelmed with thousands of phone calls in the hours afterward as too many people called to congratulate the president for his speech.[24]
Thirty-five years later, Nixon speechwriterPat Buchananrecalled using the phrase in a memo to the president. He explained how Nixon singled out the phrase and went on to make use of it in his speech: "We [had] used 'forgotten Americans' and 'quiet Americans' and other phrases. And in one memo I mentioned twice the phrase 'silent majority', and it's double-underlined by Richard Nixon, and it would pop up in 1969 in that great speech that basically made his presidency." Buchanan noted that while he had written the memo that contained the phrase, "Nixon wrote that speech entirely by himself."[25]
Nixon's silent majority referred mainly to the older generation (thoseWorld War IIveterans in all parts of the U.S.) but it also described many young people in theMidwest,Westand in theSouth, many of whom eventually served inVietnam. The Silent Majority was mostly populated byblue collarwhite people who did not take an active part in politics: suburban,exurbanand rural middle class voters.[26]They did, in some cases, support theconservativepolicies of many politicians.[citation needed][27]According to columnist Kenneth Crawford, "Nixon's forgotten men should not be confused with Roosevelt's", adding that "Nixon's are comfortable, housed, clad and fed, who constitute the middle stratum of society. But they aspire to more and feel menaced by those who have less."[28]
In his famous speech, Nixon contrasted his international strategy ofpolitical realismwith the "idealism" of a "vocal minority." He stated that following the radical minority's demands to withdraw all troops immediately from Vietnam would bring defeat and be disastrous for world peace. Appealing to the silent majority, Nixon asked for united support "to end the war in a way that we could win the peace." The speech was one of the first to codify theNixon Doctrine, according to which, "the defense of freedom is everybody's business—not just America's business."[29]After giving the speech, Nixon's approval ratings which had been hovering around 50% shot up to 81% in the nation and 86% in theSouth.[30]
In January 1970,Timeput on their cover an abstract image of a man and a woman representing "Middle America" as a replacement for their annual "Man of the Year" award. Publisher Roy E. Larsen wrote that "the events of 1969 transcended specific individuals. In a time of dissent and 'confrontation', the most striking new factor was the emergence of the Silent Majority as a powerfully assertive force in U.S. society."[31]Larsen described how the silent majority had elected Nixon, had put a man on the moon, and how this demographic felt threatened by "attacks on traditional values".[31]
The silent majority theme has been a contentious issue amongst journalists since Nixon used the phrase. Some thought Nixon used it as part of theSouthern strategy; others claim it was Nixon's way of dismissing the obvious protests going on around the country, and Nixon's attempt to get other Americans not to listen to the protests. Whatever the rationale, Nixon won a landslide victory in1972, taking 49 of 50 states, vindicating his "silent majority". The opposition vote was split successfully, with 80% ofGeorge Wallacesupporters voting for Nixon rather thanGeorge McGovern, unlike Wallace himself.[32]
Nixon's use of the phrase was part of his strategy to divide Americans and to polarize them into two groups.[33]He used "divide and conquer" tactics to win his political battles, and in 1971 he directed Agnew to speak about "positive polarization" of the electorate.[34][35]The "silent majority" shared Nixon's anxieties and fears that normalcy was being eroded by changes in society.[26][36]The other group was composed of intellectuals, cosmopolitans, professionals and liberals, those willing to "live and let live."[26]Both groups saw themselves as the higher patriots.[26]According to Republican pollsterFrank Luntz, "silent majority" is but one of many labels which have been applied to the same group of voters. According to him, past labels used by the media include "silent majority" in the 1960s, "forgotten middle class" in the 1970s, "angry white males" in the 1980s, "soccer moms" in the 1990s, and "NASCAR dads" in the 2000s.[37]
"Silent majority" was the name of a movement (officially called Anticommunist City Committee) active inMilan, Italy, from 1971 to 1974 and headed by the former monarchist partisan Adamo Degli Occhi, that expressed the hostility of the middle class to the1968 movement. At the beginning it was of conservative tendency; later it moved more and more to the right, and in 1974 Degli Occhi was arrested because of his relationships with the terroristic movementMovimento di Azione Rivoluzionaria(MAR).
In 1975, in Portugal, then presidentAntónio de Spínolaused the term in confronting the more radical forces of post-revolutionaryPortugal.[38]
The phrase "silent majority" has also been used in the political campaigns ofRonald Reaganduring the 1970s and 1980s, theRepublican Revolutionin the 1994 elections, and the victories ofRudy GiulianiandMichael Bloomberg. The phrase was also used byQuebecPremierJean Charestduring the2012 Student Striketo refer to what he perceived as the majority of the Quebec voters supporting the tuition hikes.[39]
The term was used byBritish Prime MinisterDavid Cameronduring the2014 Scottish independence referendum; Cameron expressed his belief that most Scots opposed independence, while implicitly conceding they may not be as vocal as the people who support it.[40]
DuringDonald Trump's2016 presidential campaign, he said at a campaign rally on July 11, 2015, inPhoenix, Arizona, that "the silent majority is back, and we're going to take our country back".[41]He also referred to the silent majority in subsequent speeches and advertisement,[42]as did the press when describing those who voted for hiselection as President in 2016.[43]In the midst of theGeorge Floyd protests, he once again invoked the silent majority.[44]CNNanalystHarry Entendescribed that Trump's support fits better with the term "loud minority", based on the fact that neither did he win thepopular votein 2016 nor did he hit 50% in any live interview opinion poll throughouthis first presidency.[45]Jay Caspian Kangargues that some politicians and analysts (Jim Clyburn,Chuck Rocha) feel the unexpected increase in support for Donald Trump among blacks and Latinos in the 2020 election reflects a new silent majority (including some non-whites) reacting against calls for defunding the police and the arrogance of "wokewhite consultants".[22]
In 2019, thePrime Minister of Australia,Scott Morrison, acknowledgedthe quiet Australiansin his federal election victory speech.[46]
In the face of rising opposition, theHong Konggovernment often claims there is a silent majority that is too afraid to voice their support, and a group called "Silent Majority for Hong Kong" was set up in 2013 to counteract theOccupy Central with Love and Peacemovement. In 2019, when thedemocratic movementbecame increasingly violent, theCarrie Lam administrationand Beijing authorities appealed to the "silent majority" to dissociate themselves from the radical activists and to vote for thepro-government campin theDistrict Council elections, which were seen as ade factoreferendum on the protests.[47]However, with a record turnout of over 70%, thepro-democracy campwon 80% of overall seats and controlled 17 out of the 18 District Councils.[48]A commentator of TheNew Statesmandeduced that Hong Kong's true silent majority stood on the side of the democratic cause.[49]Foreign Policystated that Beijing had been confident of a huge pro-government victory as a result of a delusion created byits own propaganda.[50]
|
https://en.wikipedia.org/wiki/Silent_majority
|
Anelectoralorvoting systemis a set of rules used to determine the results of an election. Electoral systems are used in politics to elect governments, while non-political elections may take place in business,nonprofit organizationsand informal organisations. These rules govern all aspects of the voting process: when elections occur,who is allowed to vote,who can standas acandidate,how ballots are marked and cast, how the ballots are counted, how votes translate into the election outcome, limits oncampaign spending, and other factors that can affect the result. Political electoral systems are defined by constitutions and electoral laws, are typically conducted byelection commissions, and can use multiple types of elections for different offices.
Some electoral systems elect a single winner to a unique position, such as prime minister, president or governor, while others elect multiple winners, such as members of parliament or boards of directors. When electing alegislature, areas may be divided into constituencies with one or more representatives or the electorate may elect representatives as a single unit. Voters may vote directly for an individual candidate or for a list of candidates put forward by apolitical partyoralliance. There are many variations in electoral systems.
Themathematicalandnormativestudy of voting rules falls under the branches ofeconomicscalledsocial choiceandmechanism design, but the question has also engendered substantial contributions frompolitical scientists,analytic philosophers,computer scientists, andmathematicians. The field has produced several major results, includingArrow's impossibility theorem(showing thatranked votingcannot eliminate thespoiler effect) andGibbard's theorem(showing it is impossible to design astraightforwardvoting system, i.e. one where it is always obvious to astrategic voterwhich ballot they should cast).
The most common categorizations of electoral systems are: single-winner vs. multi-winner systems andproportional representationvs.winner-take-all systemsvs.mixed systems.
In all cases, where only a single winner is to be elected, the electoral system is winner-take-all. The same can be said for elections where only one person is elected per district. Since single-member district elections are winner-take-all, the electoral system as a whole produces disproportional results. Some systems where multiple winners are elected at once (in the same district), such as plurality block voting, are also winner-take-all.
Inparty block voting, voters can only vote for the list of candidates of a single party, with the party receiving the most votes winning all seats, even if that party receives only a minority of votes. This is also described as winner-take-all. This is used in five countries as part of mixed systems.[1]
Plurality votingis a system in which the candidate(s) with the largest number of votes wins, with no requirement to get a majority of votes. In cases where there is a single position to be filled, it is known asfirst-past-the-post. This is the second most common electoral system for national legislatures (afterproportional representation), with 58 countries using FPTP and single-member districts to elect the national legislative chamber,[1]the vast majority of which are current or former British or American colonies or territories. It is also the second most common system used for presidential elections, being used in 19 countries. Thetwo-round systemis the most common system used to elect a president.[1]
In cases where there are multiple positions to be filled, most commonly in cases of multi-member constituencies, there are several types of plurality electoral systems. Underblock voting(also known as multiple non-transferable vote or plurality-at-large), voters have as many votes as there are seats and can vote for any candidate, regardless of party, but give only one vote to each preferred candidate. The most-popular candidates are declared elected, whether they have a majority of votes or not and whether or not that result is proportional to the way votes were cast. Eight countries use this system.[1]
Cumulative votingallows a voter to cast more than one vote for the same candidate, in multi-member districts. Its effect may be proportional to the same degree thatsingle non-transferable votingorlimited votingis, thus it is often called semi-proportional.
Approval votingis a choose-all-you-like voting system that aims to increase the number of candidates that win with majority support.[2]Voters are free to pick as many candidates as they like and each choice has equal weight, independent of the number of candidates a voter supports. The candidate with the most votes wins.[3]
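A minimal Python sketch of an approval count, assuming each ballot is simply the set of candidates the voter approves of:

from collections import Counter

def approval_winner(ballots):
    # Every approval counts equally; the candidate approved on the most
    # ballots wins.
    tallies = Counter(c for ballot in ballots for c in ballot)
    return tallies.most_common(1)[0][0], dict(tallies)

ballots = [{"Ann", "Bob"}, {"Bob"}, {"Ann", "Cho"}, {"Bob", "Cho"}]
print(approval_winner(ballots))   # Bob wins with 3 approvals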
A runoff system is one in which a candidate must receive a majority of votes to be elected, either in a runoff election or in a final round of vote counting. This is sometimes described as a way to ensure that the winner has a majority of votes, although usually only a plurality is required in the last round (when three or more candidates move on to the runoff election), and sometimes even in the first round winners can avoid a second round without achieving a majority. In social choice theory, runoff systems are not called majority voting, as that term refers to Condorcet methods.
There are two main groups of runoff systems, those in one group use a single round of voting achieved by voters castingranked votesand then using vote transfers if necessary to establish a majority, and those in the other group use two or more rounds of voting, to narrow the field of candidates and to determine a winner who has a majority of the votes. Both are primarily used for single-member constituencies or election of a single position such as mayor.
If a candidate receives a majority of the vote in the first round, the result is the same as under simple first-past-the-post voting. But if no candidate wins a majority of votes in the first round, the systems respond in different ways.
Under instant-runoff voting (IRV), when no one wins a majority in the first round, a runoff is achieved through vote transfers made possible by voters having ranked the candidates in order of preference, with lower preferences used as back-up preferences. This system is used for parliamentary elections in Australia and Papua New Guinea. If no candidate receives a majority of the vote in the first round, the votes of the least-popular candidate are transferred according to the marked second preferences and added to the totals of the surviving candidates. This is repeated until a candidate achieves a majority. The count ends any time one candidate has a majority of votes, but it may continue until only two candidates remain, at which point one or other of the candidates will hold a majority of the votes still in play.
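The transfer-and-eliminate procedure just described can be written compactly. The following Python sketch is illustrative only: the function name, the ballots, and the arbitrary tie-breaking are assumptions, and real election rules add detail (formal tie-break procedures, batch eliminations, exhausted-ballot handling) that is omitted here.

from collections import Counter

def instant_runoff(ballots):
    """Minimal IRV count: each ballot is a list of candidates in preference order."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked candidate still in the race.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader = max(remaining, key=lambda c: tallies[c])
        if tallies[leader] * 2 > total or len(remaining) == 1:
            return leader  # majority of the continuing votes reached
        # Transfer step: eliminate the least-popular candidate and recount.
        remaining.discard(min(remaining, key=lambda c: tallies[c]))

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # prints C once B is eliminated and its vote transfers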
A different form of single-winner preferential voting is the contingent vote, where voters do not rank all candidates but rank just two or three. If no candidate has a majority in the first round, all candidates are excluded except the top two. If a voter gave their first preference to one of the excluded candidates, their vote is transferred to the next usable back-up preference if possible, or otherwise set aside as exhausted. The resulting vote totals are used to determine the winner by plurality. This system is used in Sri Lankan presidential elections, with voters allowed to give three preferences.[4]
The other main form of runoff system is thetwo-round system, which is the most common system used for presidential elections around the world, being used in 88 countries. It is also used, in conjunction with single-member districts, in 20 countries for electing members of the legislature.[1]If no candidate achieves a majority of votes in the first round of voting, a second round is held to determine the winner. In most cases the second round is limited to the top two candidates from the first round, although in some elections more than two candidates may choose to contest the second round; in these cases the second-round winner is not required to have a majority of votes, but may be elected by having a plurality of votes.
Some countries use a modified form of the two-round system, so that a second round is needed less often. In Ecuador a candidate in the presidential election is declared the winner if they receive more than 50% of the vote, or 40% of the vote while being at least 10 percentage points ahead of their nearest rival.[5] In Argentina, where the system is known as ballotage, a candidate is elected by winning a majority, or by taking 45% of the vote with a 10-point lead.
In some cases, where a certain level of support is required, a runoff may be held using a different system. InU.S. presidential elections, when no candidate wins a majority of theUnited States Electoral College(using seat count, not votes cast, as is used in the majoritarian systems described above), acontingent electionis held by the House of Representatives, not the voters themselves. The House contingency election sees three candidates go on to the last round and each state's Representatives vote as a single unit, not as individuals.
An exhaustive ballot involves multiple rounds of voting when no candidate has a majority in the first round. The number of rounds is not limited to two; the last-placed candidate is eliminated in each round of voting, and this is repeated until one candidate has a majority of votes. Due to the potentially large number of rounds, this system is not used in any major popular elections, but it is used to elect the Speakers of parliament in several countries and the members of the Swiss Federal Council.
In some systems, such as election of the speaker of the United States House of Representatives, there may be multiple rounds held without any candidates being eliminated until a candidate achieves a majority.
Positional systems like the Borda count are ranked voting systems that assign a certain number of points to each candidate, weighted by position. The most popular such system is first-preference plurality. In the Borda count, each candidate is given a number of points equal to their rank position, and the candidate with the fewest points wins. This system is intended to elect broadly acceptable options or candidates, rather than those preferred by a majority.[6] It is used to elect the ethnic-minority representative seats in the Slovenian parliament.[7][8]
The Dowdall system is used in Nauru for parliamentary elections and sees voters rank the candidates. First-preference votes are counted as whole numbers, second preferences are divided by two, third preferences by three, and so on down to the lowest possible ranking.[9] The totals for each candidate determine the winners.[10]
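Both the Borda variant described above and the Dowdall count are positional rules that differ only in the weight attached to each rank. The sketch below is a minimal illustration under assumed inputs: the ballots, candidate names, and the helper function are invented for this example, not a standard API.

from collections import defaultdict
from fractions import Fraction

def positional_count(ballots, weight, best="max"):
    """Generic positional count: weight(i) is the score for the candidate ranked
    at (0-based) position i on a ballot."""
    scores = defaultdict(Fraction)
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] += weight(position)
    pick = max if best == "max" else min
    return pick(scores, key=scores.get), dict(scores)

ballots = [["A", "B", "C"], ["B", "C", "A"], ["B", "A", "C"]]

# Dowdall (Nauru): 1 point for a first preference, 1/2 for a second, 1/3 for a third, ...
dowdall_winner, dowdall_scores = positional_count(ballots, lambda i: Fraction(1, i + 1), best="max")

# Borda variant as described above: points equal to the rank position, fewest points wins.
borda_winner, borda_scores = positional_count(ballots, lambda i: Fraction(i + 1), best="min")

print(dowdall_winner, dowdall_scores)  # B leads with 1 + 1 + 1/2 = 5/2 points
print(borda_winner, borda_scores)      # B also wins, with the lowest rank total of 4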
Multi-winner systems include both proportional systems and non-proportional multi-winner systems, such asparty block votingand plurality block voting.
Proportional representation is the most widely used electoral system for national legislatures, with the parliaments of over eighty countries elected by a form of the system. These systems elect multiple members in one contest, whether at-large (for example city-wide, state-wide or nation-wide, depending on the level of government) or in multi-member districts.
Party-list proportional representationis the single most common electoral system and is used by 80 countries, and involves seats being allocated to parties based on party vote share.
In closed list systems, voters have no influence over which candidates are elected to fill the party's seats, but in open list systems voters are able to vote both for the party list and for individual candidates (or only for candidates), and thus sometimes have the means to influence the order in which party candidates are assigned seats. In some countries, notably Israel and the Netherlands, elections are carried out using 'pure' proportional representation, with the votes tallied on a national level before assigning seats to parties (there are no district seats, only at-large seats). However, in most cases several multi-member constituencies are used rather than a single nationwide constituency, giving an element of geographical or local representation. This may result in the distribution of seats not reflecting the national vote totals of the parties. As a result, some countries that use districts have levelling seats that are awarded to parties whose seat proportion is lower than their proportion of the vote. Levelling seats are applied either at the regional level or at the national level. Such mixed-member proportional systems are used in New Zealand and in Scotland, and are discussed below.
List PR systems usually set an electoral threshold, the minimum percentage of the vote that a party must obtain to win levelling seats or to win any seats at all. Some systems allow exceptions to this rule; for instance, if a party takes a district seat, it may be eligible for top-up seats even if its share of the vote is below the threshold.
There are two main methods of allocating seats in proportional representation systems: highest average and largest remainder. Highest average systems involve dividing the votes received by each party by a divisor or vote average that represents an idealized seats-to-votes ratio, then rounding normally. In the largest remainder system, parties' vote shares are divided by an electoral quota. This usually leaves some seats unallocated, which are awarded to the parties with the largest numbers of "leftover" votes.
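As a concrete illustration of the two families, the following sketch allocates 10 seats to hypothetical vote totals with a D'Hondt highest-average rule and with a Hare-quota largest-remainder rule. The party names and vote counts are invented, and tie-breaking is simply left to Python's max/sort order.

def highest_average(votes, seats, divisor=lambda s: s + 1):
    """Highest-average allocation; the default divisor sequence 1, 2, 3, ... gives D'Hondt."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # Award the next seat to the party with the highest current vote average.
        winner = max(votes, key=lambda p: votes[p] / divisor(allocation[p]))
        allocation[winner] += 1
    return allocation

def largest_remainder(votes, seats):
    """Largest-remainder allocation using the Hare quota (total votes / seats)."""
    quota = sum(votes.values()) / seats
    allocation = {p: int(v // quota) for p, v in votes.items()}
    leftover = {p: v - allocation[p] * quota for p, v in votes.items()}
    for p in sorted(leftover, key=leftover.get, reverse=True)[:seats - sum(allocation.values())]:
        allocation[p] += 1
    return allocation

votes = {"Red": 43000, "Blue": 34000, "Green": 15000, "Yellow": 8000}
print(highest_average(votes, 10))    # {'Red': 5, 'Blue': 4, 'Green': 1, 'Yellow': 0}
print(largest_remainder(votes, 10))  # {'Red': 4, 'Blue': 3, 'Green': 2, 'Yellow': 1}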
Single transferable vote (STV) is another form of proportional representation. Like list PR, STV is designed to elect multiple winners; multi-member districts or multi-winner at-large contests are used. Each voter casts one vote, a ranked ballot marked for individual candidates rather than for a party list. STV is used in Malta, the Republic of Ireland and (partially) Australia. To be certain of being elected, candidates must pass a quota (the Droop quota being the most common). Candidates that achieve the quota are elected. If seats remain to be filled, the least-successful candidate is eliminated and their votes are transferred in accordance with the rankings marked by the voters. Surplus votes held by successful candidates may also be transferred. Eventually all seats are filled by candidates who have passed the quota, or there are only as many remaining candidates as there are remaining open seats.[10]
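The Droop quota mentioned above is the smallest whole number of votes that only as many candidates as there are seats to fill can all reach. A minimal check, with invented totals:

import math

def droop_quota(valid_votes, seats):
    """Droop quota: floor(votes / (seats + 1)) + 1, the smallest whole number of
    votes that only `seats` candidates can all reach."""
    return math.floor(valid_votes / (seats + 1)) + 1

print(droop_quota(100_000, 4))  # 20001: at most four candidates can each reach this total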
Undersingle non-transferable vote(SNTV), multi-member districts are used. Each voter can vote for only one candidate, with the candidates receiving the most votes declared the winners, whether any of them have a majority of votes or not. Despite its simplicity, its results are very close to those of STV and list PR - every district elects a mixed, balanced multi-party group of representatives.[citation needed]This system is used inKuwait, thePitcairn IslandsandVanuatu.[1]
In several countries,mixed systemsare used to elect the legislature. These includeparallel voting(also known as mixed-member majoritarian) andmixed-member proportional representation.
In non-compensatory, parallel voting systems, which are used in 20 countries,[1]members of a legislature are elected by two different methods; part of the membership is elected by a plurality or majoritarian election system in single-member constituencies and the other part by proportional representation. The results of the constituency contests have no effect on the outcome of the proportional vote.[10]
In compensatorymixed-member systemslevelling seats are allocated to balance nation-wide or regional disproportionality produced by the way seats are won in constituency contests. Themixed-member proportional systems, in use in eight countries, provide enough compensatory seats to ensure that many parties have a share of seats approximately proportional to their vote share.[1]Most of the MMP countries use a PR system at the district level, thus lowering the number of levelling seats that are needed to produce proportional results. Of the MMP countries, only New Zealand and Lesotho use single-winnerfirst-past-the-post votingin their districts. Scotland uses a regionalized MMP system where levelling seats are allocated in each region to balance the disproportionality produced in single-winner districts within the region. Variations of this include theAdditional Member System, andAlternative Vote Plus, in which voters cast votes for both single-member constituencies and multi-member constituencies; the allocation of seats in the multi-member constituencies is adjusted to achieve an overall seat allocation proportional to parties' vote share by taking into account the number of seats won by parties in the single-member constituencies.
Some MMP systems are insufficiently compensatory, and this may result in overhang seats, where parties win more seats in the constituency system than they would be entitled to based on their vote share. Some MMP systems have a mechanism (another form of top-up) whereby additional seats are awarded to the other parties to balance out the effect of the overhang. Germany in 2024 passed a new election law under which district overhang seats may be denied, overriding the district result in the pursuit of overall proportionality.[11]
Vote linkage mixed systems are also compensatory; however, they usually use a different mechanism than the seat-linkage (top-up) method of MMP and usually are not able to achieve proportional representation.
Some electoral systems feature amajority bonus systemto either ensure one party or coalition gains a majority in the legislature, or to give the party receiving the most votes a clear advantage in terms of the number of seats.San Marinohas a modified two-round system, which sees a second round of voting featuring the top two parties or coalitions if no party takes a majority of votes in the first round. The winner of the second round is guaranteed 35 seats in the 60-seatGrand and General Council.[12]InGreecethe party receiving the most votes was given an additional 50 seats,[13]a system which was abolished following the2019 elections.
Primary electionsare a feature of some electoral systems, either as a formal part of the electoral system or informally by choice of individual political parties as a method of selecting candidates, as is the case inItaly. Primary elections limit the possible adverse effect ofvote splittingby ensuring that a party puts forward only one party candidate. InArgentinathey are a formal part of the electoral system and take place two months before the main elections; any party receiving less than 1.5% of the vote is not permitted to contest the main elections.
In the United States, there are both partisan and non-partisan primary elections. In non-partisan primaries, the most popular candidates advance to the general election, even if they all belong to the same party.
Some elections feature anindirect electoral system, whereby there is either no popular vote, or the popular vote is only one stage of the election; in these systems the final vote is usually taken by anelectoral college. In several countries, such asMauritiusorTrinidad and Tobago, the post of President is elected by the legislature. In others likeIndia, the vote is taken by an electoral college consisting of the national legislature and state legislatures. In theUnited States, the president is indirectly elected using a two-stage process; a popular vote in each state elects members to theelectoral collegethat in turn elects the President. This can result in a situation where a candidate who receives the most votes nationwide does not win the electoral college vote, as most recently happened in2000and2016.
In addition to the current electoral systems used for political elections, there are numerous other systems that have been used in the past, are currently used only in private organizations (such as electing board members of corporations or student organizations), or have never been fully implemented.
Among the ranked systems, these include Bucklin voting, the various Condorcet methods (Copeland's, Dodgson's, Kemeny-Young, Maximal lotteries, Minimax, Nanson's, Ranked pairs, Schulze), the Coombs' method and positional voting.
Among the cardinal electoral systems, the most well known is range voting, where any number of candidates are scored from a set range of numbers. A very common example of range voting is the 5-star rating used for many customer satisfaction surveys and reviews. Other cardinal systems include satisfaction approval voting, highest median rules (including the majority judgment), and the D21 – Janeček method, where voters can cast positive and negative votes.
Historically,weighted votingsystems were used in some countries. These allocated a greater weight to the votes of some voters than others, either indirectly by allocating more seats to certain groups (such as thePrussian three-class franchise), or by weighting the results of the vote. The latter system was used in colonialRhodesiafor the1962and1965 elections. The elections featured two voter rolls (the 'A' roll being largely European and the 'B' roll largely African); the seats of the House Assembly were divided into 50 constituency seats and 15 district seats. Although all voters could vote for both types of seats, 'A' roll votes were given greater weight for the constituency seats and 'B' roll votes greater weight for the district seats. Weighted systems are still used in corporate elections, with votes weighted to reflect stock ownership.
Dual-member proportional representationis a proposed system with two candidates elected in each constituency, one with the most votes and one to ensure proportionality of the combined results.Biproportional apportionmentis a system where the total number of votes is used to calculate the number of seats each party is due, followed by a calculation of the constituencies in which the seats should be awarded in order to achieve the total due to them.
For proportional systems that useranked choice voting, there are several proposals, includingCPO-STV,Schulze STVand theWright system, which are each considered to be variants of proportional representation by means of the single transferable vote. Among the proportional voting systems that use rating areThiele's voting rulesandPhragmen's voting rule. A special case ofThiele's voting rulesisProportional Approval Voting. Some proportional systems that may be used with either ranking or rating include theMethod of Equal Sharesand theExpanding Approvals Rule.
In addition to the specific method of electing candidates, electoral systems are also characterised by their wider rules and regulations, which are usually set out in a country'sconstitutionorelectoral law. Participatory rules determinecandidate nominationandvoter registration, in addition to the location ofpolling placesand the availability ofonline voting,postal voting, andabsentee voting. Other regulations include the selection of voting devices such as paperballots,machine votingoropen ballot systems, and consequently the type ofvote counting systems, verification andauditingused.
Electoral rules place limits on suffrage and candidacy. The electorates of most countries are characterised by universal suffrage, but countries differ in the age at which people are allowed to vote, with the youngest voting age being 16 and the oldest 21. People may be disenfranchised for a range of reasons, such as being a serving prisoner, being declared bankrupt, having committed certain crimes or being a serving member of the armed forces. Similar limits are placed on candidacy (also known as passive suffrage), and in many cases the age limit for candidates is higher than the voting age. A total of 21 countries have compulsory voting, although in some there is an upper age limit on enforcement of the law.[14] Many countries also have the none of the above option on their ballot papers.
In systems that useconstituencies,apportionmentor districting defines the area covered by each constituency. Where constituency boundaries are drawn has a strong influence on the likely outcome of elections in the constituency due to the geographic distribution of voters. Political parties may seek to gain an advantage duringredistrictingby ensuring their voter base has a majority in as many constituencies as possible, a process known asgerrymandering. Historicallyrotten and pocket boroughs, constituencies with unusually small populations, were used by wealthy families to gain parliamentary representation.
Some countries have minimum turnout requirements for elections to be valid. In Serbia this rule caused multiple re-runs of presidential elections, with the 1997 election re-run once and the 2002 elections re-run three times due to insufficient turnout in the first, second and third attempts to run the election. The turnout requirement was scrapped prior to the fourth vote in 2004.[15] Similar problems in Belarus led to the 1995 parliamentary elections going to a fourth round of voting before enough parliamentarians were elected to make a quorum.[16]
Reserved seatsare used in many countries to ensure representation for ethnic minorities, women, young people or the disabled. These seats are separate from general seats, and may be elected separately (such as in Morocco where a separate ballot is used to elect the 60 seats reserved for women and 30 seats reserved for young people in the House of Representatives), or be allocated to parties based on the results of the election; inJordanthe reserved seats for women are given to the female candidates who failed to win constituency seats but with the highest number of votes, whilst inKenyathe Senate seats reserved for women, young people and the disabled are allocated to parties based on how many seats they won in the general vote. Some countries achieve minority representation by other means, including requirements for a certain proportion of candidates to be women, or by exempting minority parties from the electoral threshold, as is done inPoland,[17]RomaniaandSerbia.[18]
Inancient GreeceandItaly, the institution of suffrage already existed in a rudimentary form at the outset of the historical period. In the earlymonarchiesit was customary for the king to invite pronouncements of his people on matters in which it was prudent to secure its assent beforehand. In these assemblies the people recorded their opinion by clamouring (a method which survived inSpartaas late as the 4th century BCE), or by the clashing ofspearsonshields.[19]
Voting has been used as a feature of democracy since the 6th century BCE, when the Athenian democracy was established. However, in Athenian democracy, voting was seen as the least democratic among the methods used for selecting public officials, and was little used, because elections were believed to inherently favor the wealthy and well-known over average citizens. Viewed as more democratic were assemblies open to all citizens, selection by lot, and rotation of office.
Generally, the taking of votes was effected in the form of a poll. The practice of the Athenians, which is shown by inscriptions to have been widely followed in the other states of Greece, was to hold a show of hands, except on questions affecting the status of individuals: these latter, which included alllawsuitsand proposals ofostracism, in which voters chose the citizen they most wanted to exile for ten years, were determined by secret ballot (one of the earliest recorded elections in Athens was aplurality votethat it was undesirable to win, namely an ostracism vote). AtRomethe method which prevailed up to the 2nd century BCE was that of division (discessio). But the system became subject to intimidation and corruption. Hence a series of laws enacted between 139 and 107 BCE prescribed the use of the ballot (tabella), a slip of wood coated with wax, for all business done in the assemblies of the people.
For the purpose of carrying resolutions a simple majority of votes was deemed sufficient. As a general rule equal value was made to attach to each vote; but in the popular assemblies at Rome a system of voting by groups was in force until the middle of the 3rd century BCE by which the richer classes secured a decisive preponderance.[19]
Most elections in the earlyhistory of democracywere held using plurality voting or some variant, but as an exception, the state ofVenicein the 13th century adopted approval voting to elect their Great Council.[20]
The Venetians' method forelecting the Dogewas a particularly convoluted process, consisting of five rounds of drawing lots (sortition) and five rounds of approval voting. By drawing lots, a body of 30 electors was chosen, which was further reduced to nine electors by drawing lots again. Anelectoral collegeof nine members elected 40 people by approval voting; those 40 were reduced to form a second electoral college of 12 members by drawing lots again. The second electoral college elected 25 people by approval voting, which were reduced to form a third electoral college of nine members by drawing lots. The third electoral college elected 45 people, which were reduced to form a fourth electoral college of 11 by drawing lots. They in turn elected a final electoral body of 41 members, who ultimately elected the Doge. Despite its complexity, the method had certain desirable properties such as being hard to game and ensuring that the winner reflected the opinions of both majority and minority factions.[21]This process, with slight modifications, was central to the politics of theRepublic of Venicethroughout its remarkable lifespan of over 500 years, from 1268 to 1797.
Jean-Charles de Bordaproposed theBorda countin 1770 as a method for electing members to theFrench Academy of Sciences. His method was opposed by theMarquis de Condorcet, who proposed instead the method of pairwise comparison that he had devised. Implementations of this method are known asCondorcet methods. He also wrote about theCondorcet paradox, which he called theintransitivity of majority preferences. However, recent research has shown that the philosopherRamon Llulldevised both the Borda count and a pairwise method that satisfied the Condorcet criterion in the 13th century. The manuscripts in which he described these methods had been lost to history until they were rediscovered in 2001.[22]
Later in the 18th century,apportionment methodscame to prominence due to theUnited States Constitution, which mandated that seats in theUnited States House of Representativeshad to be allocated among the states proportionally to their population, but did not specify how to do so.[23]A variety of methods were proposed by statesmen such asAlexander Hamilton,Thomas Jefferson, andDaniel Webster. Some of the apportionment methods devised in the United States were in a sense rediscovered in Europe in the 19th century, as seat allocation methods for the newly proposed method of party-list proportional representation. The result is that many apportionment methods have two names;Jefferson's methodis equivalent to theD'Hondt method, as isWebster's methodto theSainte-Laguë method, whileHamilton's methodis identical to the Hare largest remainder method.[23]
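The equivalences noted above are easiest to see by treating the methods as highest-averages rules that differ only in their divisor sequences. The sketch below applies Jefferson/D'Hondt divisors (1, 2, 3, ...) and Webster/Sainte-Laguë divisors (1, 3, 5, ...) to the same invented figures; the claimant names and totals are illustrative only, and details such as the constitutional guarantee of at least one House seat per state are ignored.

def highest_averages(populations, seats, divisor):
    """Allocate seats one at a time to the claimant with the highest current average."""
    alloc = {name: 0 for name in populations}
    for _ in range(seats):
        winner = max(populations, key=lambda n: populations[n] / divisor(alloc[n]))
        alloc[winner] += 1
    return alloc

populations = {"A": 100, "B": 80, "C": 30, "D": 21}  # e.g. populations in thousands
jefferson = highest_averages(populations, 8, lambda k: k + 1)    # divisors 1, 2, 3, ... (D'Hondt)
webster = highest_averages(populations, 8, lambda k: 2 * k + 1)  # divisors 1, 3, 5, ... (Sainte-Laguë)
print(jefferson)  # {'A': 4, 'B': 3, 'C': 1, 'D': 0}; the 1, 2, 3, ... divisors favour larger claimants
print(webster)    # {'A': 3, 'B': 3, 'C': 1, 'D': 1}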
Thesingle transferable vote(STV) method was devised byCarl AndræinDenmarkin 1855 and in theUnited KingdombyThomas Harein 1857. STV elections were first held in Denmark in 1856, and inTasmaniain 1896 after its use was promoted byAndrew Inglis Clark. Over the course of the 20th century, STV was subsequently adopted by Ireland and Malta for their national elections, in Australia for theirSenateelections, as well as by many municipal elections around the world.[24]
Party-list proportional representation began to be used to elect European legislatures in the early 20th century, withBelgiumthe first to implement it for its1900 general elections. Since then, proportional and semi-proportional methods have come to be used in almost all democratic countries, with most exceptions being formerBritishandFrenchcolonies.[25]
Perhaps influenced by the rapid development of multiple-winner STV, theorists published new findings about single-winner methods in the late 19th century. Around 1870,William Robert Wareproposed applying STV to single-winner elections, yieldinginstant-runoff voting(IRV).[26]Soon, mathematicians began to revisit Condorcet's ideas and invent new methods for Condorcet completion;Edward J. Nansoncombined the newly described instant runoff voting with the Borda count to yield a new Condorcet method calledNanson's method. Charles Dodgson, better known asLewis Carroll, proposed the straightforward Condorcet method known asDodgson's method. He also proposed a proportional representation system based on multi-member districts, quotas as minimum requirements to take seats, and votes transferable by candidates throughproxy voting.[27]
Ranked voting electoral systems eventually gathered enough support to be adopted for use in government elections. InAustralia, IRV was first adopted in 1893 and STV in 1896 (Tasmania). IRV continues to be used along with STV today.
In the United States, during the early 20th-centuryprogressive erasome municipalities began to usesupplementary votingandBucklin voting. However, a series of court decisions ruled Bucklin to be unconstitutional, while supplementary voting was soon repealed in every city that had implemented it.[28]
The use ofgame theoryto analyze electoral systems led to discoveries about the effects of certain methods. Earlier developments such asArrow's impossibility theoremhad already shown the issues withranked votingsystems. Research ledSteven BramsandPeter Fishburnto formally define and promote the use ofapproval votingin 1977.[29]Political scientists of the 20th century published many studies on the effects that the electoral systems have on voters' choices and political parties,[30][31][32]and on political stability.[33][34]A few scholars also studied which effects caused a nation to switch to a particular electoral system.[35][36][37][38][39]
A new push forelectoral reformoccurred in the 1990s, when proposals were made to replace plurality voting in governmental elections with other methods.New Zealandadopted mixed-member proportional representation for the1996 general elections, having been approved in a1993 referendum.[40]After plurality voting was a factor in the contested results of the2000 presidential electionsin the United States, various municipalities in the United States have begun to adoptinstant-runoff voting. In 2020 a referendum adoptingapproval votinginSt. Louispassed with 70% support.[41]
In Canada, three separate referendums on thesingle transferable votehave been held but producing no reform (in2005,2009, and2018). The2020 Massachusetts Question 2, which attempted to expand instant-runoff voting intoMassachusetts, was defeated by a 10-point margin. In theUnited Kingdom, a2011 referendumon IRV saw the proposal rejected by a two-to-one margin.
Some cities that adopted instant-runoff voting subsequently returned tofirst-past-the-post. Studies have found voter satisfaction with IRV falls dramatically the first time a race produces a result different from first-past-the-post.[42]The United Kingdom used a form ofinstant-runoff votingfor local elections prior to 2022, before returning tofirst-past-the-postover concerns regarding the system's complexity.[43]Ranked-choice voting has been implemented in two states and banned in 10 others[44](in addition to other states with constitutional prohibitions on the rule).
In November 2024, voters in the U.S. decided on 10 ballot measures related to electoral systems. Nine of the ballot measures aimed to change existing electoral systems, and voters rejected each proposal. One, in Missouri, which banned ranked-choice voting (RCV), was approved. Voters rejected ballot measures to enact ranked-choice voting and other electoral system changes in Arizona, Colorado, Idaho, Nevada, and Oregon, as well as in Montana and South Dakota. In Alaska, voters rejected a ballot initiative 50.1% to 49.9% to repeal the state's top-four primaries and ranked-choice voting general elections, a system that was adopted via ballot measure in 2020.[45]
Electoral systems can be compared by different means:
Gibbard's theorem, building on the earlier Arrow's theorem and the Gibbard–Satterthwaite theorem, proves that for any single-winner deterministic voting method, at least one of the following three properties must hold: the rule is dictatorial, the rule limits the possible outcomes to two alternatives only, or the rule is susceptible to tactical voting.
According to a 2006 survey of electoral system experts, their preferred electoral systems were in order of preference:[51]
|
https://en.wikipedia.org/wiki/Voting_system
|
Theage of majorityis the threshold of legaladulthoodas recognized or declared inlaw.[1]It is the moment when a person ceases to be considered aminor, and assumes legal control over their person, actions, and decisions, thus terminating the control and legal responsibilities of their parents or guardian over them.
Most countries set the age of majority at 18, but some jurisdictions have a higher age and others lower. The wordmajorityhere refers to having greater years and being of full age as opposed tominority, the state of being a minor. The law in a given jurisdiction may not actually use the term "age of majority". The term refers to a collection of laws bestowing the status of adulthood.
The termage of majoritycan be confused with the similar concept of theage of license.[2]As a legal term, "license" means "permission", referring to a legally enforceable right or privilege. Thus, an age of license is an age at which one has legal permission from a given government to participate in certain activities or rituals. The age of majority, on the other hand, is a legal recognition that one has become an adult.
Many ages of license coincide with the age of majority to recognize the transition to legal adulthood, but they are nonetheless legally distinct concepts. One need not have attained the age of majority to have permission to exercise certain rights and responsibilities. Some ages of license may be higher, lower, or match the age of majority.
For example, to purchasealcoholic beverages, the age of license is 21 in all U.S. states. Another example is the voting age, which prior to 1971 was 21 in the US, as was the age of majority in all or most states. After the voting age was lowered from 21 to 18, the age of majority was lowered to 18 in most states. In most US states, one may obtain a driver's license, consent to sexual activity, and gain full-time employment at age 16 even though the age of majority is 18 in most states.[3]In the Republic of Ireland the age of majority is 18, but one must be 21 or over to stand for election to the Houses of theOireachtas.[4]Also, in Portugal the age of majority is 18, and citizens who have reached that age are also eligible to run for Parliament,[5]but they need to be 35 or over in order to run for President.[6]
A child who is legally emancipated by a court of competent jurisdiction automatically attains majority upon the signing of the court order. Only emancipation confers the status of majority before a person has actually reached the age of majority. In almost all places, minors who marry are automatically emancipated. Some places also do the same for minors who are in the armed forces or who have obtained a certain degree or diploma.[7]
Minors who are emancipated may be able to choose where they live, sign contracts, and have control over their financial and medical decisions and generally make decisions free from parental control but are not exempt from age requirements set forth in law for other rights. For example, a minor can emancipate at 16 in the US (or younger depending on the state) but must still wait until 18 to vote or buy a firearm, and 21 to buy alcohol or tobacco.
The Jewish Talmud says that every judgmentJosiah, the sixteenth king of Judah(c.640–609BCE),issued from his coronation until the age of eighteen was reversed and he returned the money to the parties whom he judged liable, due to concern that in his youth he may not have judged the cases correctly.[8]Other Jewish commentators have discussed whether age 13 or 18 is the age to make decisions in aJewish Court.[9]
Roman law did not have an age of majority in the modern sense, as individuals remained under the authority of thePater familiasuntil his death. Theage of adulthoodwas set at 12 for girls and 14 for boys, with boys gaining rights such as marriage, military service, and any legal capacity that depended on age only, including, until the introduction of theLex Villia, the ability to be eligible for public office.[10]
TheLex Plaetoriaallowed those under 25 to contest disadvantageous agreements in case of fraud, later extending to other circumstances, and the other party might escape repercussions only if acuratorwas involved. To enter a contract, individuals in this age group could request thepraetorfor such acurator, thus ensuring protection for both sides: this shielded the other contracting party from legal risk and allowed transactions to proceed, as no prudent person would engage without this safeguard. Unlike with atutor, the requester retained full legal capacity to act, and the role of thecuratorwas merely to prevent fraud. Later, under Marcus Aurelius, their appointment became mandatory. Someone under 25 who wanted to enter a contracthad torequest acurator, and could propose a candidate, which thepraetorcould reject. Thecurator's control over property became closer to that of atutor, but it was only applied to the properties that thepraetorassigned to him, not those acquired by the requester after his appointment.[10]
Over time, there was a gradual evolution, initially focusing on property laws (while other legal matters, such as marriage and wills, continued to have separate age thresholds), eventually arriving at the modern concept of age of majority, commonly set at 18.
Since 2015, some countries have lowered the voting age to 16.[11][12]Some countries, likeEngland and Wales, are even considering lowering the age of majority to 16,[13]similar to how it already is inCubaandScotland.[14]The main argument for lowering is that, on average, young people are much more educated (both because of better individual educational outcomes and being raised by more educated parents) than in the past (the same argument was made in the 1970s when most countries lowered the age of majority from 21 to 18, which remains the age used for most countries, including the United States).[15][16]Related to newer generations being more educated and being ready for life earlier: compared to the past, information is much moreeasily accessibleas a result of the spread of theInternet, which can be accessed through both thepersonal computerand thesmartphone.
A person reaches the age of majority atmidnightat the beginning of the day of that person's relevant birthday; under English common law this was not always the case.[17][better source needed]
In many countries minors can beemancipated: depending on jurisdiction, this may happen through acts such asmarriage, attaining economic self-sufficiency, obtaining an educationaldegreeordiploma, or participating in a form ofmilitary service. In the United States, all states have some form of emancipation of minors.[18]
The age of majority in countries (oradministrative divisions) in the order of lowest to highest:
Religions have their own rules as to theage of maturity, when a child is regarded to be an adult, at least for ritual purposes:
In some countries, reaching the age of majority carries other rights and obligations, although in other countries, these rights and obligations may be had before or after reaching the aforementioned age.
|
https://en.wikipedia.org/wiki/Age_of_majority
|
Inprobabilityandstatistics, amixture distributionis theprobability distributionof arandom variablethat is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may berandom vectors(each having the same dimension), in which case the mixture distribution is amultivariate distribution.
In cases where each of the underlying random variables iscontinuous, the outcome variable will also be continuous and itsprobability density functionis sometimes referred to as amixture density. Thecumulative distribution function(and theprobability density functionif it exists) can be expressed as aconvex combination(i.e. a weighted sum, with non-negative weights that sum to 1) of other distribution functions and density functions. The individual distributions that are combined to form the mixture distribution are called themixture components, and the probabilities (or weights) associated with each component are called themixture weights. The number of components in a mixture distribution is often restricted to being finite, although in some cases the components may becountably infinitein number. More general cases (i.e. anuncountableset of component distributions), as well as the countable case, are treated under the title ofcompound distributions.
A distinction needs to be made between arandom variablewhose distribution function or density is the sum of a set of components (i.e. a mixture distribution) and a random variable whose value is the sum of the values of two or more underlying random variables, in which case the distribution is given by theconvolutionoperator. As an example, the sum of twojointly normally distributedrandom variables, each with different means, will still have a normal distribution. On the other hand, a mixture density created as a mixture of two normal distributions with different means will have two peaks provided that the two means are far enough apart, showing that this distribution is radically different from a normal distribution.
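The contrast is easy to demonstrate numerically. The following sketch (illustrative parameter values, using NumPy) compares the sum of two independent normal variables with means −3 and 3, which is again a single normal distribution, against a 50/50 mixture of the same two components, which is bimodal.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two normal components with well-separated means.
x1 = rng.normal(-3.0, 1.0, n)
x2 = rng.normal(3.0, 1.0, n)

# Sum of the two random variables: a single normal centred at 0 with variance 2.
total = x1 + x2

# 50/50 mixture: first pick a component for each draw, then realise its value.
component = rng.integers(0, 2, n)          # mixture weights w1 = w2 = 0.5
mixture = np.where(component == 0, x1, x2)

print(total.mean(), total.std())                        # roughly 0 and sqrt(2)
print(np.histogram(mixture, bins=5, range=(-5, 5))[0])  # counts concentrate near -3 and +3, with a sparse middle bin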
Mixture distributions arise in many contexts in the literature and arise naturally where astatistical populationcontains two or moresubpopulations. They are also sometimes used as a means of representing non-normal distributions. Data analysis concerningstatistical modelsinvolving mixture distributions is discussed under the title ofmixture models, while the present article concentrates on simple probabilistic and statistical properties of mixture distributions and how these relate to properties of the underlying distributions.
Given a finite set of probability density functionsp1(x), ...,pn(x), or corresponding cumulative distribution functionsP1(x),...,Pn(x)andweightsw1, ...,wnsuch thatwi≥ 0and∑wi= 1, the mixture distribution can be represented by writing either the density,f, or the distribution function,F, as a sum (which in both cases is a convex combination):F(x)=∑i=1nwiPi(x),{\displaystyle F(x)=\sum _{i=1}^{n}\,w_{i}\,P_{i}(x),}f(x)=∑i=1nwipi(x).{\displaystyle f(x)=\sum _{i=1}^{n}\,w_{i}\,p_{i}(x).}This type of mixture, being a finite sum, is called afinite mixture,and in applications, an unqualified reference to a "mixture density" usually means a finite mixture. The case of a countably infinite set of components is covered formally by allowingn=∞{\displaystyle n=\infty \!}.
Where the set of component distributions isuncountable, the result is often called acompound probability distribution. The construction of such distributions has a formal similarity to that of mixture distributions, with either infinite summations or integrals replacing the finite summations used for finite mixtures.
Consider a probability density functionp(x;a)for a variablex, parameterized bya. That is, for each value ofain some setA,p(x;a)is a probability density function with respect tox. Given a probability density functionw(meaning thatwis nonnegative and integrates to 1), the function
f(x)=∫Aw(a)p(x;a)da{\displaystyle f(x)=\int _{A}\,w(a)\,p(x;a)\,da}
is again a probability density function forx. A similar integral can be written for the cumulative distribution function. Note that the formulae here reduce to the case of a finite or infinite mixture if the densitywis allowed to be ageneralized functionrepresenting the "derivative" of the cumulative distribution function of adiscrete distribution.
The mixture components are often not arbitrary probability distributions, but instead are members of aparametric family(such as normal distributions), with different values for a parameter or parameters. In such cases, assuming that it exists, the density can be written in the form of a sum as:f(x;a1,…,an)=∑i=1nwip(x;ai){\displaystyle f(x;a_{1},\ldots ,a_{n})=\sum _{i=1}^{n}\,w_{i}\,p(x;a_{i})}for one parameter, orf(x;a1,…,an,b1,…,bn)=∑i=1nwip(x;ai,bi){\displaystyle f(x;a_{1},\ldots ,a_{n},b_{1},\ldots ,b_{n})=\sum _{i=1}^{n}\,w_{i}\,p(x;a_{i},b_{i})}for two parameters, and so forth.
A generallinear combinationof probability density functions is not necessarily a probability density, since it may be negative or it may integrate to something other than 1. However, aconvex combinationof probability density functions preserves both of these properties (non-negativity and integrating to 1), and thus mixture densities are themselves probability density functions.
LetX1, ...,Xndenote random variables from thencomponent distributions, and letXdenote a random variable from the mixture distribution. Then, for any functionH(·)for whichE[H(Xi)]{\displaystyle \operatorname {E} [H(X_{i})]}exists, and assuming that the component densitiespi(x)exist,
E[H(X)]=∫−∞∞H(x)∑i=1nwipi(x)dx=∑i=1nwi∫−∞∞pi(x)H(x)dx=∑i=1nwiE[H(Xi)].{\displaystyle {\begin{aligned}\operatorname {E} [H(X)]&=\int _{-\infty }^{\infty }H(x)\sum _{i=1}^{n}w_{i}p_{i}(x)\,dx\\&=\sum _{i=1}^{n}w_{i}\int _{-\infty }^{\infty }p_{i}(x)H(x)\,dx=\sum _{i=1}^{n}w_{i}\operatorname {E} [H(X_{i})].\end{aligned}}}
Thejth moment about zero (i.e. choosingH(x) =xj) is simply a weighted average of thej-th moments of the components. Moments about the meanH(x) = (x − μ)jinvolve a binomial expansion:[1]
E[(X−μ)j]=∑i=1nwiE[(Xi−μi+μi−μ)j]=∑i=1nwi∑k=0j(jk)(μi−μ)j−kE[(Xi−μi)k],{\displaystyle {\begin{aligned}\operatorname {E} \left[{\left(X-\mu \right)}^{j}\right]&=\sum _{i=1}^{n}w_{i}\operatorname {E} \left[{\left(X_{i}-\mu _{i}+\mu _{i}-\mu \right)}^{j}\right]\\&=\sum _{i=1}^{n}w_{i}\sum _{k=0}^{j}{\binom {j}{k}}{\left(\mu _{i}-\mu \right)}^{j-k}\operatorname {E} \left[{\left(X_{i}-\mu _{i}\right)}^{k}\right],\end{aligned}}}
whereμidenotes the mean of thei-th component.
In the case of a mixture of one-dimensional distributions with weightswi, meansμiand variancesσi2, the total mean and variance will be:E[X]=μ=∑i=1nwiμi,{\displaystyle \operatorname {E} [X]=\mu =\sum _{i=1}^{n}w_{i}\mu _{i},}E[(X−μ)2]=σ2=E[X2]−μ2(standard variance reformulation)=(∑i=1nwiE[Xi2])−μ2=∑i=1nwi(σi2+μi2)−μ2(σi2=E[Xi2]−μi2⟹E[Xi2]=σi2+μi2){\displaystyle {\begin{aligned}\operatorname {E} \left[(X-\mu )^{2}\right]&=\sigma ^{2}\\&=\operatorname {E} [X^{2}]-\mu ^{2}&({\text{standard variance reformulation}})\\&=\left(\sum _{i=1}^{n}w_{i}\operatorname {E} \left[X_{i}^{2}\right]\right)-\mu ^{2}\\&=\sum _{i=1}^{n}w_{i}(\sigma _{i}^{2}+\mu _{i}^{2})-\mu ^{2}&(\sigma _{i}^{2}=\operatorname {E} [X_{i}^{2}]-\mu _{i}^{2}\implies \operatorname {E} [X_{i}^{2}]=\sigma _{i}^{2}+\mu _{i}^{2})\end{aligned}}}
These relations highlight the potential of mixture distributions to display non-trivial higher-order moments such asskewnessandkurtosis(fat tails) and multi-modality, even in the absence of such features within the components themselves. Marron and Wand (1992) give an illustrative account of the flexibility of this framework.[2]
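The mean and variance formulas above are straightforward to verify by simulation. The sketch below (weights, means, and standard deviations chosen arbitrarily for illustration) compares the closed-form moments of a three-component normal mixture with Monte Carlo estimates.

import numpy as np

rng = np.random.default_rng(1)

w = np.array([0.5, 0.3, 0.2])    # mixture weights (sum to 1)
mu = np.array([-2.0, 0.0, 4.0])  # component means
sd = np.array([1.0, 0.5, 2.0])   # component standard deviations

# Closed-form mixture moments from the formulas above.
mean = np.sum(w * mu)                        # -0.2
var = np.sum(w * (sd**2 + mu**2)) - mean**2  # 6.535

# Monte Carlo: pick a component for each draw, then sample from it.
k = rng.choice(3, size=1_000_000, p=w)
samples = rng.normal(mu[k], sd[k])

print(mean, var)
print(samples.mean(), samples.var())  # should be close to -0.2 and 6.535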
The question ofmultimodalityis simple for some cases, such as mixtures ofexponential distributions: all such mixtures areunimodal.[3]However, for the case of mixtures ofnormal distributions, it is a complex one. Conditions for the number of modes in a multivariate normal mixture are explored by Ray & Lindsay[4]extending earlier work on univariate[5][6]and multivariate[7]distributions.
Here the problem of evaluation of the modes of anncomponent mixture in aDdimensional space is reduced to identification of critical points (local minima, maxima andsaddle points) on amanifoldreferred to as theridgeline surface, which is the image of the ridgeline functionx∗(α)=[∑i=1nαiΣi−1]−1×[∑i=1nαiΣi−1μi],{\displaystyle x^{*}(\alpha )=\left[\sum _{i=1}^{n}\alpha _{i}\Sigma _{i}^{-1}\right]^{-1}\times \left[\sum _{i=1}^{n}\alpha _{i}\Sigma _{i}^{-1}\mu _{i}\right],}whereα{\displaystyle \alpha }belongs to the(n−1){\displaystyle (n-1)}-dimensional standardsimplex:Sn={α∈Rn:αi∈[0,1],∑i=1nαi=1}{\displaystyle {\mathcal {S}}_{n}=\left\{\alpha \in \mathbb {R} ^{n}:\alpha _{i}\in [0,1],\sum _{i=1}^{n}\alpha _{i}=1\right\}}andΣi∈RD×D,μi∈RD{\displaystyle \Sigma _{i}\in \mathbb {R} ^{D\times D},\,\mu _{i}\in \mathbb {R} ^{D}}correspond to the covariance and mean of thei-th component. Ray & Lindsay[4]consider the case in whichn−1<D{\displaystyle n-1<D}showing a one-to-one correspondence of modes of the mixture and those on theridge elevation functionh(α)=q(x∗(α)){\displaystyle h(\alpha )=q(x^{*}(\alpha ))}thus one may identify the modes by solvingdh(α)dα=0{\displaystyle {\frac {dh(\alpha )}{d\alpha }}=0}with respect toα{\displaystyle \alpha }and determining the valuex∗(α){\displaystyle x^{*}(\alpha )}.
Using graphical tools, the potential multi-modality of mixtures with number of componentsn∈{2,3}{\displaystyle n\in \{2,3\}}is demonstrated; in particular it is shown that the number of modes may exceedn{\displaystyle n}and that the modes may not be coincident with the component means. For two components they develop a graphical tool for analysis by instead solving the aforementioned differential with respect to the first mixing weightw1{\displaystyle w_{1}}(which also determines the second mixing weight throughw2=1−w1{\displaystyle w_{2}=1-w_{1}}) and expressing the solutions as a functionΠ(α),α∈[0,1]{\displaystyle \Pi (\alpha ),\,\alpha \in [0,1]}so that the number and location of modes for a given value ofw1{\displaystyle w_{1}}corresponds to the number of intersections of the graph on the lineΠ(α)=w1{\displaystyle \Pi (\alpha )=w_{1}}. This in turn can be related to the number of oscillations of the graph and therefore to solutions ofdΠ(α)dα=0{\displaystyle {\frac {d\Pi (\alpha )}{d\alpha }}=0}leading to an explicit solution for the case of a two component mixture withΣ1=Σ2=Σ{\displaystyle \Sigma _{1}=\Sigma _{2}=\Sigma }(sometimes called ahomoscedasticmixture) given by1−α(1−α)dM(μ1,μ2,Σ)2{\displaystyle 1-\alpha (1-\alpha )d_{M}(\mu _{1},\mu _{2},\Sigma )^{2}}wheredM(μ1,μ2,Σ)=(μ2−μ1)TΣ−1(μ2−μ1){\textstyle d_{M}(\mu _{1},\mu _{2},\Sigma )={\sqrt {(\mu _{2}-\mu _{1})^{\mathsf {T}}\Sigma ^{-1}(\mu _{2}-\mu _{1})}}}is theMahalanobis distancebetweenμ1{\displaystyle \mu _{1}}andμ2{\displaystyle \mu _{2}}.
Since the above is quadratic it follows that in this instance there are at most two modes irrespective of the dimension or the weights.
For normal mixtures with generaln>2{\displaystyle n>2}andD>1{\displaystyle D>1}, a lower bound for the maximum number of possible modes, and – conditionally on the assumption that the maximum number is finite – an upper bound are known. For those combinations ofn{\displaystyle n}andD{\displaystyle D}for which the maximum number is known, it matches the lower bound.[8]
Simple examples can be given by a mixture of two normal distributions. (SeeMultimodal distribution#Mixture of two normal distributionsfor more details.)
Given an equal (50/50) mixture of two normal distributions with the same standard deviation and different means (homoscedastic), the overall distribution will exhibit lowkurtosisrelative to a single normal distribution – the means of the subpopulations fall on the shoulders of the overall distribution. If sufficiently separated, namely by twice the (common) standard deviation, so|μ1−μ2|>2σ,{\displaystyle \left|\mu _{1}-\mu _{2}\right|>2\sigma ,}these form abimodal distribution, otherwise it simply has a wide peak.[9]The variation of the overall population will also be greater than the variation of the two subpopulations (due to spread from different means), and thus exhibitsoverdispersionrelative to a normal distribution with fixed variationσ, though it will not be overdispersed relative to a normal distribution with variation equal to variation of the overall population.
Alternatively, given two subpopulations with the same mean and different standard deviations, the overall population will exhibit high kurtosis, with a sharper peak and heavier tails (and correspondingly shallower shoulders) than a single distribution.
The following example is adapted from Hampel,[10]who creditsJohn Tukey.
Consider the mixture distribution defined by F(x) = (1 − 10⁻¹⁰) (standard normal) + 10⁻¹⁰ (standard Cauchy).
The mean ofi.i.d.observations fromF(x)behaves "normally" except for exorbitantly large samples, although the mean ofF(x)does not even exist.
Mixture densities are complicated densities expressible in terms of simpler densities (the mixture components), and are used both because they provide a good model for certain data sets (where different subsets of the data exhibit different characteristics and can best be modeled separately), and because they can be more mathematically tractable, because the individual mixture components can be more easily studied than the overall mixture density.
Mixture densities can be used to model astatistical populationwithsubpopulations, where the mixture components are the densities on the subpopulations, and the weights are the proportions of each subpopulation in the overall population.
Mixture densities can also be used to modelexperimental erroror contamination – one assumes that most of the samples measure the desired phenomenon, with some samples from a different, erroneous distribution.
Parametric statistics that assume no error often fail on such mixture densities – for example, statistics that assume normality often fail disastrously in the presence of even a fewoutliers– and instead one usesrobust statistics.
Inmeta-analysisof separate studies,study heterogeneitycauses distribution of results to be a mixture distribution, and leads tooverdispersionof results relative to predicted error. For example, in astatistical survey, themargin of error(determined by sample size) predicts thesampling errorand hence dispersion of results on repeated surveys. The presence of study heterogeneity (studies have differentsampling bias) increases the dispersion relative to the margin of error.
|
https://en.wikipedia.org/wiki/Mixture_distribution
|
Inprobabilityandstatistics, acompound probability distribution(also known as amixture distributionorcontagious distribution) is theprobability distributionthat results from assuming that arandom variableis distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables.
If the parameter is ascale parameter, the resulting mixture is also called ascale mixture.
The compound distribution ("unconditional distribution") is the result ofmarginalizing(integrating) over thelatentrandom variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution").
Acompound probability distributionis the probability distribution that results from assuming that a random variableX{\displaystyle X}is distributed according to some parametrized distributionF{\displaystyle F}with an unknown parameterθ{\displaystyle \theta }that is again distributed according to some other distributionG{\displaystyle G}. The resulting distributionH{\displaystyle H}is said to be the distribution that results from compoundingF{\displaystyle F}withG{\displaystyle G}. The parameter's distributionG{\displaystyle G}is also called themixing distributionorlatent distribution. Technically, theunconditionaldistributionH{\displaystyle H}results frommarginalizingoverG{\displaystyle G}, i.e., from integrating out the unknown parameter(s)θ{\displaystyle \theta }. Itsprobability density functionis given by:pH(x)=∫p(x|θ)p(θ)dθ{\displaystyle p_{H}(x)=\int p(x|\theta )\,p(\theta )\,d\theta }
The same formula applies analogously if some or all of the variables are vectors.
From the above formula, one can see that a compound distribution essentially is a special case of amarginal distribution: Thejoint distributionofx{\displaystyle x}andθ{\displaystyle \theta }is given byp(x,θ)=p(x|θ)p(θ){\displaystyle p(x,\theta )=p(x|\theta )p(\theta )}, and the compound results as its marginal distribution:p(x)=∫p(x,θ)dθ{\displaystyle {\textstyle p(x)=\int p(x,\theta )\operatorname {d} \!\theta }}.
If the domain ofθ{\displaystyle \theta }is discrete, then the distribution is again a special case of amixture distribution.
The compound distributionH{\displaystyle H}will depend on the specific expression of each distribution, as well as which parameter ofF{\displaystyle F}is distributed according to the distributionG{\displaystyle G}, and the parameters ofH{\displaystyle H}will include any parameters ofG{\displaystyle G}that are not marginalized, or integrated, out.
ThesupportofH{\displaystyle H}is the same as that ofF{\displaystyle F}, and if the latter is a two-parameter distribution parameterized with the mean and variance, some general properties exist.
The compound distribution's first twomomentsare given by thelaw of total expectationand thelaw of total variance:
EH[X]=EG[EF[X|θ]]{\displaystyle \operatorname {E} _{H}[X]=\operatorname {E} _{G}{\bigl [}\operatorname {E} _{F}[X|\theta ]{\bigr ]}}
VarH(X)=EG[VarF(X|θ)]+VarG(EF[X|θ]){\displaystyle \operatorname {Var} _{H}(X)=\operatorname {E} _{G}{\bigl [}\operatorname {Var} _{F}(X|\theta ){\bigr ]}+\operatorname {Var} _{G}{\bigl (}\operatorname {E} _{F}[X|\theta ]{\bigr )}}
If the mean ofF{\displaystyle F}is distributed asG{\displaystyle G}, which in turn has meanμ{\displaystyle \mu }and varianceσ2{\displaystyle \sigma ^{2}}, the expressions above implyEH[X]=EG[θ]=μ{\displaystyle \operatorname {E} _{H}[X]=\operatorname {E} _{G}[\theta ]=\mu }andVarH(X)=VarF(X|θ)+VarG(θ)=τ2+σ2{\displaystyle \operatorname {Var} _{H}(X)=\operatorname {Var} _{F}(X|\theta )+\operatorname {Var} _{G}(\theta )=\tau ^{2}+\sigma ^{2}}, whereτ2{\displaystyle \tau ^{2}}is the variance ofF{\displaystyle F}.
LetF{\displaystyle F}andG{\displaystyle G}be probability distributions parameterized with mean and variance asx∼F(θ,τ2)θ∼G(μ,σ2){\displaystyle {\begin{aligned}x&\sim {\mathcal {F}}(\theta ,\tau ^{2})\\\theta &\sim {\mathcal {G}}(\mu ,\sigma ^{2})\end{aligned}}}then denoting the probability density functions asf(x|θ)=pF(x|θ){\displaystyle f(x|\theta )=p_{F}(x|\theta )}andg(θ)=pG(θ){\displaystyle g(\theta )=p_{G}(\theta )}respectively, andh(x){\displaystyle h(x)}being the probability density ofH{\displaystyle H}, we haveEH[X]=∫Fxh(x)dx=∫Fx∫Gf(x|θ)g(θ)dθdx=∫G∫Fxf(x|θ)dxg(θ)dθ=∫GEF[X|θ]g(θ)dθ{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X]=\int _{F}xh(x)dx&=\int _{F}x\int _{G}f(x|\theta )g(\theta )d\theta dx\\&=\int _{G}\int _{F}xf(x|\theta )dx\ g(\theta )d\theta \\&=\int _{G}\operatorname {E} _{F}[X|\theta ]g(\theta )d\theta \end{aligned}}}and we have from the parameterizationF{\displaystyle {\mathcal {F}}}andG{\displaystyle {\mathcal {G}}}thatEF[X|θ]=∫Fxf(x|θ)dx=θEG[θ]=∫Gθg(θ)dθ=μ{\displaystyle {\begin{aligned}\operatorname {E} _{F}[X|\theta ]&=\int _{F}xf(x|\theta )dx=\theta \\\operatorname {E} _{G}[\theta ]&=\int _{G}\theta g(\theta )d\theta =\mu \end{aligned}}}and therefore the mean of the compound distributionEH[X]=μ{\displaystyle \operatorname {E} _{H}[X]=\mu }as per the expression for its first moment above.
The variance ofH{\displaystyle H}is given byEH[X2]−(EH[X])2{\displaystyle \operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}}, andEH[X2]=∫Fx2h(x)dx=∫Fx2∫Gf(x|θ)g(θ)dθdx=∫Gg(θ)∫Fx2f(x|θ)dxdθ=∫Gg(θ)(τ2+θ2)dθ=τ2∫Gg(θ)dθ+∫Gg(θ)θ2dθ=τ2+(σ2+μ2),{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X^{2}]=\int _{F}x^{2}h(x)dx&=\int _{F}x^{2}\int _{G}f(x|\theta )g(\theta )d\theta dx\\&=\int _{G}g(\theta )\int _{F}x^{2}f(x|\theta )dx\ d\theta \\&=\int _{G}g(\theta )(\tau ^{2}+\theta ^{2})d\theta \\&=\tau ^{2}\int _{G}g(\theta )d\theta +\int _{G}g(\theta )\theta ^{2}d\theta \\&=\tau ^{2}+(\sigma ^{2}+\mu ^{2}),\end{aligned}}}given the fact that∫Fx2f(x∣θ)dx=EF[X2∣θ]=VarF(X∣θ)+(EF[X∣θ])2{\displaystyle \int _{F}x^{2}f(x\mid \theta )dx=\operatorname {E} _{F}[X^{2}\mid \theta ]=\operatorname {Var} _{F}(X\mid \theta )+(\operatorname {E} _{F}[X\mid \theta ])^{2}}and∫Gθ2g(θ)dθ=EG[θ2]=VarG(θ)+(EG[θ])2{\displaystyle \int _{G}\theta ^{2}g(\theta )d\theta =\operatorname {E} _{G}[\theta ^{2}]=\operatorname {Var} _{G}(\theta )+(\operatorname {E} _{G}[\theta ])^{2}}. Finally we getVarH(X)=EH[X2]−(EH[X])2=τ2+σ2{\displaystyle {\begin{aligned}\operatorname {Var} _{H}(X)&=\operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}\\&=\tau ^{2}+\sigma ^{2}\end{aligned}}}
Distributions of commontest statisticsresult as compound distributions under their null hypothesis, for example inStudent's t-test(where the test statistic results as the ratio of anormaland achi-squaredrandom variable), or in theF-test(where the test statistic is the ratio of twochi-squaredrandom variables).
Compound distributions are useful for modeling outcomes exhibitingoverdispersion, i.e., a greater amount of variability than would be expected under a certain model. For example, count data are commonly modeled using thePoisson distribution, whose variance is equal to its mean. The distribution may be generalized by allowing for variability in itsrate parameter, implemented via agamma distribution, which results in a marginalnegative binomial distribution. This distribution is similar in its shape to the Poisson distribution, but it allows for larger variances. Similarly, abinomial distributionmay be generalized to allow for additional variability by compounding it with abeta distributionfor its success probability parameter, which results in abeta-binomial distribution.
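To make the overdispersion concrete, the following is a minimal Monte Carlo sketch of compounding a Poisson distribution with a gamma-distributed rate; the shape and scale values are arbitrary choices for illustration, not taken from any data set. The sample variance of the resulting counts exceeds the sample mean, in agreement with the gamma–Poisson (negative binomial) moments.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative gamma mixing distribution for the Poisson rate;
# the shape k and scale s are arbitrary, made-up values.
k, s = 3.0, 2.0
n = 200_000

lam = rng.gamma(shape=k, scale=s, size=n)   # draw the latent rate from the gamma distribution
x = rng.poisson(lam)                        # draw a count from the Poisson distribution, given the rate

print(x.mean(), x.var())                    # roughly k*s and k*s + k*s**2 (variance > mean)
print(k * s, k * s + k * s**2)              # moments from the law of total expectation/variance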
Besides ubiquitous marginal distributions that may be seen as special cases of compound distributions,
inBayesian inference, compound distributions arise when, in the notation above,Frepresents the distribution of future observations andGis theposterior distributionof the parameters ofF, given the information in a set of observed data. This gives aposterior predictive distribution. Correspondingly, for theprior predictive distribution,Fis the distribution of a new data point whileGis theprior distributionof the parameters.
Convolutionof probability distributions (to derive the probability distribution of sums of random variables) may also be seen as a special case of compounding; here the sum's distribution essentially results from considering one summand as a randomlocation parameterfor the other summand.[1]
Compound distributions derived fromexponential familydistributions often have a closed form.
If analytical integration is not possible, numerical methods may be necessary.
Compound distributions may relatively easily be investigated usingMonte Carlo methods, i.e., by generating random samples. It is often easy to generate random numbers from the
distributionsp(θ){\displaystyle p(\theta )}as well asp(x|θ){\displaystyle p(x|\theta )}and then utilize these to performcollapsed Gibbs samplingto generate samples fromp(x){\displaystyle p(x)}.
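As an illustration of this Monte Carlo approach, here is a small sketch (with made-up parameter values) that first draws the latent parameter from p(θ) and then the observation from p(x|θ) for a normal–normal compound; the sample mean and variance agree with the values μ and τ² + σ² derived above.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: theta ~ Normal(mu, sigma^2) and x | theta ~ Normal(theta, tau^2)
mu, sigma, tau = 5.0, 2.0, 1.5
n = 500_000

theta = rng.normal(mu, sigma, size=n)   # sample the latent parameter from p(theta)
x = rng.normal(theta, tau)              # sample the observation from p(x | theta)

print(x.mean())   # close to mu
print(x.var())    # close to tau**2 + sigma**2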
A compound distribution may usually also be approximated to a sufficient degree by amixture distributionusing a finite number of mixture components, allowing one to derive an approximate density, distribution function, etc.[1]
Parameter estimation(maximum-likelihoodormaximum-a-posterioriestimation) within a compound distribution model may sometimes be simplified by utilizing theEM-algorithm.[2]
The notion of "compound distribution" as used e.g. in the definition of aCompound Poisson distributionorCompound Poisson processis different from the definition found in this article. The meaning in this article corresponds to what is used in e.g.Bayesian hierarchical modeling.
The special case for compound probability distributions where the parametrized distributionF{\displaystyle F}is thePoisson distributionis also calledmixed Poisson distribution.
|
https://en.wikipedia.org/wiki/Compound_distribution
|
Instatistics,probability density estimationor simplydensity estimationis the construction of anestimate, based on observeddata, of an unobservable underlyingprobability density function. The unobservable density function is thought of as the density according to which a large population is distributed; the data are usually thought of as a random sample from that population.[1]
A variety of approaches to density estimation are used, includingParzen windowsand a range ofdata clusteringtechniques, includingvector quantization. The most basic form of density estimation is a rescaledhistogram.
We will consider records of the incidence ofdiabetes, taken from a publisheddata setwhose full description accompanies the data.
In this example, we construct three density estimates for "glu" (plasmaglucoseconcentration), oneconditionalon the presence of diabetes,
the second conditional on the absence of diabetes, and the third not conditional on diabetes.
The conditional density estimates are then used to construct the probability of diabetes conditional on "glu".
The "glu" data were obtained from the MASS package[4]of theR programming language. Within R,?Pima.trand?Pima.tegive a fuller account of the data.
Themeanof "glu" in the diabetes cases is 143.1 and the standard deviation is 31.26.
The mean of "glu" in the non-diabetes cases is 110.0 and the standard deviation is 24.29.
From this we see that, in this data set, diabetes cases are associated with greater levels of "glu".
This will be made clearer by plots of the estimated density functions.
The first figure shows density estimates ofp(glu | diabetes=1),p(glu | diabetes=0), andp(glu).
The density estimates are kernel density estimates using a Gaussian kernel. That is, a Gaussian density function is placed at each data point, and the sum of the density functions is computed over the range of the data.
From the density of "glu" conditional on diabetes, we can obtain the probability of diabetes conditional on "glu" viaBayes' rule. For brevity, "diabetes" is abbreviated "db." in this formula:
p(db.=1|glu)=p(glu|db.=1)p(db.=1)p(glu|db.=1)p(db.=1)+p(glu|db.=0)p(db.=0){\displaystyle p({\text{db.}}=1|{\text{glu}})={\frac {p({\text{glu}}|{\text{db.}}=1)\,p({\text{db.}}=1)}{p({\text{glu}}|{\text{db.}}=1)\,p({\text{db.}}=1)+p({\text{glu}}|{\text{db.}}=0)\,p({\text{db.}}=0)}}}
The second figure shows the estimated posterior probabilityp(diabetes=1 | glu). From these data, it appears that an increased level of "glu" is associated with diabetes.
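The construction just described can be sketched in a few lines of Python. The snippet below uses synthetic "glu" values as a stand-in for the Pima data (the real values come from the MASS package); only the means and standard deviations quoted above are reused, and the sample sizes and prior are arbitrary. It fits a Gaussian kernel density estimate to each class and combines them with Bayes' rule.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic stand-in for the "glu" measurements; means/SDs follow the summary above.
glu_db = rng.normal(143.1, 31.26, size=200)   # diabetes = 1
glu_no = rng.normal(110.0, 24.29, size=300)   # diabetes = 0

kde_db = gaussian_kde(glu_db)   # estimate of p(glu | db.=1), Gaussian kernel
kde_no = gaussian_kde(glu_no)   # estimate of p(glu | db.=0), Gaussian kernel

p_db = len(glu_db) / (len(glu_db) + len(glu_no))   # prior p(db.=1) from class proportions

def posterior_db(glu):
    # p(db.=1 | glu) via Bayes' rule from the two conditional density estimates
    num = kde_db(glu) * p_db
    return num / (num + kde_no(glu) * (1 - p_db))

print(posterior_db(np.array([90.0, 130.0, 180.0])))   # tends to increase with "glu"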
A very natural use of density estimates is in the informal investigation of the properties of a given set of data. Density estimates can give a valuable indication of such features as skewness and multimodality in the data. In some cases they will yield conclusions that may then be regarded as self-evidently true, while in others all they will do is to point the way to further analysis and/or data collection.[5]
An important aspect of statistics is often the presentation of data back to the client in order to provide explanation and illustration of conclusions that may possibly have been obtained by other means. Density estimates are ideal for this purpose, for the simple reason that they are fairly easily comprehensible to non-mathematicians.
More examples illustrating the use of density estimates for exploratory and presentational purposes, including the important case of bivariate data, are given in the literature.[7]
Density estimation is also frequently used inanomaly detectionornovelty detection:[8]if an observation lies in a very low-density region, it is likely to be an anomaly or a novelty.
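A minimal sketch of this idea, using synthetic data and an arbitrary cut-off: fit a kernel density estimate to ordinary observations and flag new points whose estimated density falls below a low quantile of the training densities.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
ordinary = rng.normal(0.0, 1.0, size=1000)       # training sample of "normal" observations

kde = gaussian_kde(ordinary)
threshold = np.quantile(kde(ordinary), 0.01)     # arbitrary cut-off: lowest 1% of training densities

new_points = np.array([0.1, 2.0, 6.5])
print(kde(new_points) < threshold)               # the far-out value 6.5 should be flagged as a novelty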
|
https://en.wikipedia.org/wiki/Density_estimation
|
Total absorption spectroscopyis a measurement technique that allows the measurement of the gamma radiation emitted in the different nuclear gamma transitions that may take place in the daughter nucleus after its unstable parent has decayed by means of the beta decay process.[1]This technique can be used forbeta decaystudies related to beta feeding measurementswithin the full decay energy windowfor nuclei far from stability.
It is implemented with a special type ofdetector, the "total absorption spectrometer" (TAS), made of ascintillatorcrystal that almost completely surrounds the activity to be measured, covering a solid angle of approximately 4π. Also, in an ideal case, it should be thick enough to have a peakefficiencyclose to 100%, in this way its total efficiency is also very close to 100% (this is one of the reasons why it is called "total" absorption spectroscopy). Finally, it should be blind to any other type of radiation. The gamma rays produced in the decay under study are collected byphotomultipliersattached to the scintillator material. This technique may solve the problem of thePandemonium effect.
There is a change in philosophy when measuring with a TAS. Instead of detecting the individual gamma rays (ashigh-resolution detectorsdo), it will detect the gamma cascades emitted in the decay. Then, the final energy spectrum will not be a collection of different energy peaks coming from the different transitions (as can be expected in the case of agermanium detector), but a collection of peaks situated at an energy that is the sum of the different energies of all the gammas of the cascade emitted from each level. This means that the energy spectrum measured with a TAS will be in reality a spectrum of the levels of the nuclei, where each peak is a level populated in the decay. Since the efficiency of these detectors is close to 100%, it is possible to see the feeding to the high excitation levels that usually can not be seen by high-resolution detectors. This makes total absorption spectroscopy the best method to measure beta feedings and provide accurate beta intensity (Iβ) distributions for complex decay schemes.
In an ideal case, the measured spectrum would be proportional to the beta feeding (Iβ). But a real TAS has limited efficiency andresolution, and also theIβhas to be extracted from the measured spectrum, which depends on the spectrometer response. The analysis of TAS data is not simple: to obtain the strength from the measured data, adeconvolutionprocess should be applied.
The complex analysis of the data measured with the TAS can be reduced to the solution of a linear problem:
d = Ri
given that it relates the measured data (d) with the feedings (i) from which the beta intensity distributionIβcan be obtained.
Ris the response matrix of the detector (meaning the probability that a decay that feeds a certain level gives a count in certain bin of the spectrum). The functionRdepends on the detector but also of the particular level scheme that is being measured. To be able to extract the value ofifrom the datadthe equation has to be inverted (this equation is also called the "inverse problem").
Unfortunately this can not be done easily because there is similar response to the feeding of adjacent levels when they are at high excitation energies where the level density is high. In other words, this is one of the so-called"ill-posed" problems, for which several sets of parameters can reproduce closely the same data set. Then, to findi, the response has to be obtained for which thebranching ratiosand a precise simulation of the geometry of the detector are needed. The higher the efficiency of the TAS used, the lower the dependence of the response on the branching ratios will be. Then it is possible to introduce the unknown branching ratios by hand from a plausible guess. A good guess can be calculated by means of theStatistical Model.
Then the procedure to find the feedings is iterative: using theexpectation-maximization algorithmto solve the inverse problem,[2][3]the feedings are extracted; if they don't reproduce the experimental data, it means that the initial guess of the branching ratios is wrong and has to be changed (of course, it is possible to play with other parameters of the analysis). Repeating this procedure iteratively in a reduced number of steps, the data is finally reproduced.
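The unfolding of d = Ri can be sketched schematically with one common form of such an expectation-maximization update (the multiplicative maximum-likelihood update for Poisson-distributed counts, often called the Richardson–Lucy iteration). The response matrix, spectrum size and level count below are made-up toy values, not a real spectrometer response; in a real analysis R would come from the simulations and branching-ratio calculations described in the following sections.

import numpy as np

rng = np.random.default_rng(4)

# Toy problem: columns of R are the detector responses to feeding each level,
# rows are spectrum bins; none of these numbers describe a real TAS.
n_bins, n_levels = 60, 8
R = np.abs(rng.normal(size=(n_bins, n_levels)))
R /= R.sum(axis=0)                           # normalize each level's response

true_i = rng.dirichlet(np.ones(n_levels))    # hidden feeding distribution
d = rng.poisson(10_000 * R @ true_i)         # measured (noisy) spectrum

i_est = np.full(n_levels, d.sum() / n_levels)    # flat starting guess for the feedings
for _ in range(500):
    pred = R @ i_est                             # forward-folded spectrum for the current feedings
    ratio = np.where(pred > 0, d / pred, 0.0)
    i_est *= (R.T @ ratio) / R.sum(axis=0)       # multiplicative expectation-maximization update

print(np.round(i_est / i_est.sum(), 3))          # estimated feedings
print(np.round(true_i, 3))                       # true feedings (recovery degrades when responses overlap)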
The best way to handle this problem is to keep a set of discrete levels at low excitation energies and a set of binned levels at high energies. The set at low energies is supposed to be known and can be taken from databases (for example, the [ENSDF] database,[4]which has information from what has been already measured with the high resolution technique). The set at high energies is unknown and does not overlap with the known part. At the end of this calculation, the whole region of levels inside theQ valuewindow (known and unknown) is binned.
At this stage of the analysis it is important to know theinternal conversioncoefficients for the transitions connecting the known levels. The internal conversion coefficient is defined as the number of de-excitations via e− emission over those via γ emission. If internal conversion takes place, theEM multipolefields of the nucleus do not result in the emission of a photon, instead, the fields interact with the atomic electrons and cause one of the electrons to be emitted from the atom. The gamma that would be emitted after the beta decay is missed, and the γ intensity decreases accordingly: IT = Iγ + Ie− = Iγ(1 + αe), so this phenomenon has to be taken into account in the calculation. Also, the x rays will be contaminated with those coming from the electron conversion process. This is important inelectron capturedecay, as it can affect the results of any x-ray gated spectra if the internal conversion is strong. Its probability is higher for lower energies and high multipolarities.
One of the ways to obtain the whole branching ratio matrix is to use the Statistical Nuclear Model. This model generates a binned branching ratio matrix from average level densities and average gamma strength functions. For the unknown part, average branching ratios can be calculated, for which several parameterizations may be chosen, while for the known part the information in the databases is used.
It is not possible to produce gamma sources that emit all the energies needed to accurately calculate the response of a TAS detector. For this reason, it is better to perform aMontecarlo simulationof the response. For this simulation to be reliable, the interactions of all the particles emitted in the decay (γ, e−/e+, Auger e, x rays, etc.) have to be modeled accurately, and the geometry and materials in the way of these particles have to be well reproduced. Also, the light production of the scintillator has to be included. The way to perform this simulation is explained in detail in the paper by D. Cano-Ott et al.[5]GEANT3andGEANT4are well suited for this kind of simulation.
If the scintillator material of the TAS detector suffers from a non proportionality in the light production,[6]the peaks produced by a cascade will be displaced further for every increment in the multiplicity and the width of these peaks will be different from the width of single peaks with the same energy. This effect can be introduced in the simulation by means of a hyperbolic scintillation efficiency.[7]
The simulation of the light production will widen the peaks of the TAS spectrum; however, this still does not reproduce the real width of the experimental peaks. During the measurement there are additional statistical processes that affect the energy collection and are not included in the Montecarlo. The effect of this is an extra widening of the TAS experimental peaks. Since the peaks reproduced with the Montecarlo do not have the correct width, a convolution with an empirical instrumental resolution distribution has to be applied to the simulated response.
Finally, if the data to be analyzed comes from electron capture events, a simulated gamma response matrix must be built using the simulated responses to individual monoenergetic γ rays of several energies. This matrix contains the information related to the dependence of the response function on the detector. To include also the dependence on the level scheme that is being measured, the above-mentioned matrix should be convoluted with the branching ratio matrix calculated previously. In this way, the final global responseRis obtained.
An important thing to keep in mind when using the TAS technique is that, if nuclei with shorthalf-livesare measured, the energy spectrum will be contaminated with the gamma cascades of thedaughter nucleiproduced in the decay chain. Normally TAS detectors allow ancillary detectors to be placed inside them to measure secondary radiation such asX-rays,electronsorpositrons. In this way it is possible to tag the other components of the decay during the analysis, allowing the contributions coming from all the different nuclei to be separated (isobaricseparation).
In 1970, a spectrometer consisting of two cylindrical NaI detectors of 15 cm diameter and 10 cm length was used atISOLDE.[8]
The TAS Measuring Station installed at theGSI[9]had a tape transport system that allowed the collection of the ions coming out of the separator (they were implanted in the tape), and the transportation of those ions from the collection position to the center of the TAS for the measurement (by means of the movement of the tape). The TAS at this facility was made of a cylindrical NaI crystal of Φ = h = 35.6 cm, with a concentric cylindrical hole in the direction of the symmetry axis. This hole was filled by a plug detector (4.7x15.0 cm) with a holder that allowed the placement of ancillary detectors and two rollers for a tape.
This measuring station, installed at the end of one of theISOLDEbeamlines, consists of a TAS, and a tape station.[10]
In this station, a beam pipe is used to hold the tape. The beam is implanted in the tape outside of the TAS, which is then transported to the center of the detector for the measurement.[10]In this station it is also possible to implant the beam directly in the center of the TAS, by changing the position of the rollers. The latter procedure allows the measurement of more exotic nuclei with very short half-lives.[citation needed]
Lucreciais the TAS at this station. It is made of one piece of NaI(Tl) material cylindrically shaped with φ = h = 38 cm (the largest ever built to our knowledge). It has a cylindrical cavity of 7.5 cm diameter that goes through perpendicularly to its symmetry axis. The purpose of this hole is to allow the beam pipe to reach the measurement position so that the tape can be positioned in the center of the detector. It also allows the placement of ancillary detectors in the opposite side to measure other types of radiation emitted by the activity implanted in the tape (x rays, e−/e+, etc.).[11]However, the presence of this hole makes this detector less efficient as compared to the GSI TAS (Lucrecia’s total efficiency is around 90% from 300 to 3000 keV).[12]Lucrecia’s light is collected by 8 photomultipliers.[13]During the measurements Lucrecia is kept measuring at a total counting rate not larger than 10 kHz to avoid second and higher order pileup contributions.[14]
Surrounding the TAS there is a shielding box 19.2 cm thick made of four layers: polyethylene, lead, copper and aluminium. The purpose of it is to absorb most of the external radiation (neutrons, cosmic rays, and the room background).[15]
|
https://en.wikipedia.org/wiki/Total_absorption_spectroscopy
|
TheMM algorithmis an iterativeoptimizationmethod which exploits theconvexityof a function in order to find its maxima or minima. The MM stands for “Majorize-Minimization” or “Minorize-Maximization”, depending on whether the desired optimization is a minimization or a maximization. Despite the name, MM itself is not an algorithm, but a description of how to construct anoptimization algorithm.
Theexpectation–maximization algorithmcan be treated as a special case of the MM algorithm.[1][2]However, in the EM algorithmconditional expectationsare usually involved, while in the MM algorithm convexity and inequalities are the main focus, and it is easier to understand and apply in most cases.[3]
The historical basis for the MM algorithm can be dated back to at least 1970, when Ortega and Rheinboldt were performing studies related toline searchmethods.[4]The same concept continued to reappear in different areas in different forms. In 2000, Hunter and Lange put forth "MM" as a general framework.[5]Recent studies[who?]have applied the method in a wide range of subject areas, such asmathematics,statistics,machine learningandengineering.[citation needed]
The MM algorithm works by finding a surrogate function that minorizes or majorizes the objective function. Optimizing the surrogate function will either improve the value of the objective function or leave it unchanged.
Taking the minorize-maximization version, letf(θ){\displaystyle f(\theta )}be the objective concave function to be maximized. At themstep of the algorithm,m=0,1...{\displaystyle m=0,1...}, the constructed functiong(θ|θm){\displaystyle g(\theta |\theta _{m})}will be called the minorized version of the objective function (the surrogate function) atθm{\displaystyle \theta _{m}}ifg(θ|θm)≤f(θ)for allθ{\displaystyle g(\theta |\theta _{m})\leq f(\theta ){\text{ for all }}\theta }andg(θm|θm)=f(θm){\displaystyle g(\theta _{m}|\theta _{m})=f(\theta _{m})}.
Then, maximizeg(θ|θm){\displaystyle g(\theta |\theta _{m})}instead off(θ){\displaystyle f(\theta )}, and letθm+1=argmaxθg(θ|θm){\displaystyle \theta _{m+1}=\arg \max _{\theta }g(\theta |\theta _{m})}.
The above iterative method will guarantee thatf(θm){\displaystyle f(\theta _{m})}will converge to a local optimum or a saddle point asmgoes to infinity.[6]By the above constructionf(θm+1)≥g(θm+1|θm)≥g(θm|θm)=f(θm){\displaystyle f(\theta _{m+1})\geq g(\theta _{m+1}|\theta _{m})\geq g(\theta _{m}|\theta _{m})=f(\theta _{m})}, so the sequence of objective valuesf(θm){\displaystyle f(\theta _{m})}is monotonically non-decreasing.
The marching ofθm{\displaystyle \theta _{m}}and the surrogate functions relative to the objective function is shown in the figure.
Majorize-Minimization is the same procedure but with a convex objective to be minimised.
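A small illustration of the majorize-minimization procedure, written as a sketch with an arbitrary sample: to minimize the convex objective f(θ) = Σ|x_i − θ|, whose minimizer is the sample median, each term |x_i − θ| is majorized at θ_m by the quadratic (x_i − θ)²/(2|x_i − θ_m|) + |x_i − θ_m|/2, which touches it at θ_m. Minimizing the quadratic surrogate then gives a closed-form weighted-mean update.

import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_cauchy(size=101)   # arbitrary sample; the minimizer of sum |x_i - theta| is its median

def mm_median(x, n_iter=50, eps=1e-12):
    theta = x.mean()                                    # any starting point
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(x - theta), eps)    # weights coming from the quadratic majorizer
        theta = np.sum(w * x) / np.sum(w)               # closed-form minimizer of the surrogate
    return theta

print(mm_median(x), np.median(x))   # the two values should agree closely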
One can use any inequality to construct the desired majorized/minorized version of the objective function. Typical choices include Jensen's inequality, the convexity inequality, theCauchy–Schwarz inequality, and theinequality of arithmetic and geometric means.
|
https://en.wikipedia.org/wiki/MM_algorithm
|
Attention inequalityis the inequality of distribution ofattentionacross users on social networks,[1]people in general,[2]and for scientific papers.[3][4]The YunFamily Foundation introduced the "Attention Inequality Coefficient" as a measure of inequality in attention and justifies it by its close interconnection withwealth inequality.[5]
Attention inequality is related toeconomic inequalitysince attention is an economically scarce good.[2][6]The same measures and concepts as in classical economics can be applied to theattention economy. The relationship also extends beyond the conceptual level: considering theAIDAprocess, attention is the prerequisite for real monetary income on the Internet.[7]Using data from 2018,[8]a significant relationship betweenlikesand comments on Facebook and donations has been shown fornon-profit organizations.
As data from 2008 show, 50% of the attention is concentrated on approximately 0.2% of allhostnames, and 80% on 5% of hostnames.[6]TheGini coefficientof the attention distribution in 2008 was over 0.921 for such commercial domain names as ac.jp and 0.985 for.org-domains.
The Gini coefficient was measured on Twitter in 2016 for the number of followers as 0.9412, for the number of mentions as 0.9133, and for the number of retweets as 0.9034. For comparison, the world's income Gini coefficient was 0.68 in 2005, and the world's wealth Gini coefficient was 0.904 in 2018. More than 96% of all followers, 93% of the retweets, and 93% of all mentions are held by 20% of Twitter users.[1]
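A Gini coefficient of the kind quoted above can be computed from any non-negative sample, for example follower counts. The snippet below is a small sketch using made-up numbers in which a few accounts hold most of the attention.

import numpy as np

def gini(values):
    # Gini coefficient of a non-negative sample: 0 = perfect equality, values near 1 = extreme inequality
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * v) / (n * np.sum(v)) - (n + 1) / n

# Made-up follower counts: most of the attention belongs to a handful of accounts
followers = [1, 2, 3, 5, 8, 12, 40, 300, 2_000, 50_000]
print(round(gini(followers), 3))   # close to 0.9, i.e. highly concentrated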
At least for scientific papers, today's consensus states that the inequality cannot be explained by variations in quality and individual talent.[9][10][11]TheMatthew effectplays a significant role in the emergence of attention inequality: those who already enjoy large amounts of attention get even more attention, and those who do not receive even less.[12][13]Ranking algorithms based on relevance to the user have been found to alleviate the inequality of the number of posts across topics.[7]
|
https://en.wikipedia.org/wiki/Attention_inequality
|
TheCheiRankis aneigenvectorwith a maximal real eigenvalue of theGoogle matrixG∗{\displaystyle G^{*}}constructed for a directed network with the inverted directions of links. It is similar to thePageRankvector, which ranks the network nodes in average proportionally to a number of incoming links being the maximal eigenvector of theGoogle matrixG{\displaystyle G}with a given initial direction of links. Due to inversion of link directions the CheiRank ranks the network nodes in average proportionally to a number of outgoing links. Since each node belongs both to CheiRank andPageRankvectors the ranking of information flow on a directed network becomestwo-dimensional.
For a given directed network the Google matrix is constructed in the way described in the articleGoogle matrix. ThePageRankvector is the eigenvector with the maximal real eigenvalueλ=1{\displaystyle \lambda =1}. It was introduced in[1]and is discussed in the articlePageRank. In a similar way the CheiRank is the eigenvector with the maximal real eigenvalue of the matrixG∗{\displaystyle G^{*}}built in the same way asG{\displaystyle G}butusing inverted direction of links in the initially givenadjacency matrix. Both matricesG{\displaystyle G}andG∗{\displaystyle G^{*}}belong to the class of Perron–Frobenius operators and according to thePerron–Frobenius theoremthe CheiRankPi∗{\displaystyle P_{i}^{*}}and PageRankPi{\displaystyle P_{i}}eigenvectors have nonnegative components which can be interpreted as probabilities.[2][3]Thus allN{\displaystyle N}nodesi{\displaystyle i}of the network can be ordered in a decreasing probability order with ranksKi∗,Ki{\displaystyle K_{i}^{*},K_{i}}for CheiRank and PageRankPi∗,Pi{\displaystyle P_{i}^{*},P_{i}}respectively. In average the PageRank probabilityPi{\displaystyle P_{i}}is proportional to the number of ingoing links withPi∝1/Kiβ{\displaystyle P_{i}\propto 1/{K_{i}}^{\beta }}.[4][5][6]For the World Wide Web (WWW) network the exponentβ=1/(ν−1)≈0.9{\displaystyle \beta =1/(\nu -1)\approx 0.9}whereν≈2.1{\displaystyle \nu \approx 2.1}is the exponent for ingoing links distribution.[4][5]In a similar way the CheiRank probability is in average proportional to the number of outgoing links withPi∗∝1/Ki∗β∗{\displaystyle P_{i}^{*}\propto 1/{K_{i}^{*}}^{\beta ^{*}}}withβ∗=1/(ν∗−1)≈0.6{\displaystyle \beta ^{*}=1/(\nu ^{*}-1)\approx 0.6}whereν∗≈2.7{\displaystyle \nu ^{*}\approx 2.7}is the exponent for outgoing links distribution of the WWW.[4][5]The CheiRank was introduced for the procedure call network of Linux Kernel software in,[7]the term itself was used in Zhirov.[8]While the PageRank highlights very well known and popular nodes, the CheiRank highlights very communicative nodes. Top PageRank and CheiRank nodes have certain analogy to authorities and hubs appearing in theHITS algorithm[9]but the HITS is query dependent while the rank probabilitiesPi{\displaystyle P_{i}}andPi∗{\displaystyle P_{i}^{*}}classify all nodes of the network. Since each node belongs both to CheiRank and PageRank we obtain a two-dimensional ranking of network nodes. There had been early studies of PageRank in networks with inverted direction of links[10][11]but the properties of two-dimensional ranking had not been analyzed in detail.
An example of nodes distribution in the plane of PageRank and CheiRank is shown in Fig.1 for the procedure call network of Linux Kernel software.[7]
The dependence ofP,P∗{\displaystyle P,P^{*}}onK,K∗{\displaystyle K,K^{*}}for the network of hyperlink network of Wikipedia English articles is shown in Fig.2 from Zhirov. The distribution of these articles in the plane of PageRank and CheiRank is shown in Fig.3 from Zhirov. The difference between PageRank and CheiRank is clearly seen from the names of Wikipedia articles (2009) with highest rank. At the top of PageRank we have 1.United States, 2.United Kingdom, 3.France while for CheiRank we find 1.Portal:Contents/Outline of knowledge/Geography and places, 2.List of state leaders by year, 3.Portal:Contents/Index/Geography and places. Clearly PageRank selects first articles on a broadly known subject with a large number of ingoing links while CheiRank selects first highly communicative articles with many outgoing links. Since the articles are distributed in 2D they can be ranked in various ways corresponding to projection of 2D set on a line. The horizontal and vertical lines correspond to PageRank and CheiRank, 2DRank combines properties of CheiRank and PageRank as it is discussed in Zhirov.[8]It gives top Wikipedia articles 1.India, 2.Singapore, 3.Pakistan.
The 2D ranking highlights the properties of Wikipedia articles in a new rich and fruitful manner. According to the PageRank, the top 100 personalities described in Wikipedia articles fall into 5 main categories of activity: 58 (politics), 10 (religion), 17 (arts), 15 (science), 0 (sport); thus the importance of politicians is strongly overestimated. The CheiRank gives respectively 15, 1, 52, 16, 16, while for 2DRank one finds 24, 5, 62, 7, 2. Such a type of 2D ranking can find useful applications for various complex directed networks, including the WWW.
CheiRank and PageRank naturally appear for the world trade network, orinternational trade, where they are linked with the export and import flows of a given country, respectively.[12]
Possibilities of development of two-dimensional search engines based on PageRank and CheiRank are considered.[13]Directed networks can be characterized by the correlator between PageRank and CheiRank vectors: in certain networks this correlator is close to zero (e.g. Linux Kernel network) while other networks have large correlator values (e.g. Wikipedia or university networks).[7][13]
A simple example of the construction of the Google matricesG{\displaystyle G}andG∗{\displaystyle G^{*}}, used for determination of the related PageRank and CheiRank vectors, is given below. The directed network example with 7 nodes is shown in Fig.4. The matrixS{\displaystyle S}, built with the rules described
in the articleGoogle matrix, is shown in Fig.5;
the related Google matrix isG=αS+(1−α)eeT/N{\displaystyle G=\alpha S+(1-\alpha )ee^{T}/N}and the PageRank vector is the right eigenvector ofG{\displaystyle G}with the unit eigenvalue (GP=P{\displaystyle GP=P}). In a similar way, to determine the CheiRank eigenvector all directions of links in Fig.4 are inverted,
then the matrixS∗{\displaystyle S^{*}}is built,
according to the same rules applied for the network with inverted link
directions, as shown in Fig.6. The related Google matrix isG∗=αS∗+(1−α)eeT/N{\displaystyle G^{*}=\alpha S^{*}+(1-\alpha )ee^{T}/N}and the CheiRank vector
is the right eigenvector ofG∗{\displaystyle G^{*}}with the unit eigenvalue (G∗P∗=P∗{\displaystyle G^{*}P^{*}=P^{*}}). Hereα≈0.85{\displaystyle \alpha \approx 0.85}is the damping factor taken at its usual value.
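The construction can also be sketched numerically. The snippet below builds G and G* for an arbitrary small directed network (an illustrative set of links, not the specific seven-node network of Fig.4, which is not reproduced here), uses the usual damping factor α = 0.85, and extracts the PageRank and CheiRank vectors by power iteration.

import numpy as np

def google_matrix(adj, alpha=0.85):
    # adj[i, j] = 1 if node j links to node i; columns are normalized, dangling columns become 1/N
    A = np.asarray(adj, dtype=float)
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    S = np.where(col_sums > 0, A / np.where(col_sums == 0, 1.0, col_sums), 1.0 / N)
    return alpha * S + (1.0 - alpha) / N

def rank_vector(G, n_iter=200):
    # power iteration towards the right eigenvector with eigenvalue 1
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(n_iter):
        p = G @ p
        p /= p.sum()
    return p

# An arbitrary illustrative 7-node directed network, given as (source, destination) links
links = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0), (4, 5), (5, 6), (6, 4)]
A = np.zeros((7, 7))
for src, dst in links:
    A[dst, src] = 1.0

pagerank = rank_vector(google_matrix(A))      # large for nodes with many incoming links
cheirank = rank_vector(google_matrix(A.T))    # inverted link directions: large for many outgoing links
print(np.round(pagerank, 3))
print(np.round(cheirank, 3))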
|
https://en.wikipedia.org/wiki/CheiRank
|
Thedomain authority(also referred to asthought leadership) of a website describes its relevance for a specific subject area or industry. Domain Authority is asearch engine rankingscore developed by Moz.[1]This relevance has a direct impact on its ranking by search engines, which try to assess domain authority through automated analytic algorithms. The relevance of domain authority for website listings in the Search Engine Results Pages (SERPs) of search engines led to the birth of a whole industry ofBlack-Hat SEOproviders trying to feign an increased level of domain authority.[2]The ranking by major search engines, e.g.,Google'sPageRank, is agnostic of specific industry or subject areas and assesses a website in the context of the totality of websites on the Internet.[3]The results on the SERP page set the PageRank in the context of a specific keyword. In a less competitive subject area, even websites with a low PageRank can achieve high visibility in search engines, as the highest-ranked sites that match specific search words occupy the first positions in the SERPs.[4]
Domain authority can be described through four dimensions:
The weight of these factors varies in function of the ranking body. When individuals judge domain authority, decisive factors can include the prestige of a website, the prestige of the contributing authors in a specific domain, the quality and relevance of the information on a website, the novelty of the content, but also the competitive situation around the discussed subject area or the quality of the outgoing links.[5]Several search engines (e.g.,Bing,Google,Yahoo) have developed automated analyses and rank algorithms for domain authority. Lacking "human reasoning" which would allow to directly judge quality, they make use of complementary parameters such as information or website prestige and centrality from agraph-theoreticalperspective, manifested in the quantity and quality of inbound links.[6]Thesoftware as a servicecompany Moz.org has developed an algorithm and weighted level metric, branded as "Domain Authority", which gives predictions on a website's performance in search engine rankings with a discriminating range from 0 to 100.[7][8]
Prestige identifies the prominent actors in a qualitative and quantitative manner on the basis ofGraph theory. A website is considered a node. Its prestige is defined by the quantity of nodes that have directed edges pointing on the website and the quality of those nodes. The nodes’ quality is also defined through their prestige. This definition assures that a prestigious website is not only pointed at by many other websites, but that those pointing websites are prestigious themselves[9]Similar to the prestige of a website, the contributing authors’ prestige is taken into consideration,[10]in those cases, where the authors are named and identified (e.g., with theirTwitterorGoogle Plusprofile). In this case, prestige is measured with the prestige of the authors who quote them or refer to them and the quantity of referrals which these authors receive.[5]Search engines use additional factors to scrutinize the websites’ prestige. To do so, Google’s PageRank looks at factors like link-diversification and link-dynamics: When too many links are coming from the same domain or webmaster, there is a risk ofblack-hat SEO. When backlinks grow rapidly, this nourishes suspicion ofspamor black-hat SEO as origin.[11]In addition, Google looks at factors like the public availability of thewhoIsinformation of the domain owner, the use of globaltop-level domains, domain age and volatility of ownership to assess their apparent prestige. Lastly, search engines look at the traffic and the amount of organic searches for a site as the amount of traffic should be congruent to the level of prestige that a website has in a certain domain.[5]
Information qualitydescribes the value which information provides to the reader. Wang and Strong categorize assessable dimensions of information into intrinsic (accuracy,objectivity,believability,reputation), contextual (relevancy,value-added/authenticity,timelessness,completeness, quantity), representational (interpretability, format, coherence, compatibility) and accessible (accessibilityand access security).[12]Humans can base their judgments on quality based on experience in judging content, style and grammatical correctness. Information systems like search engines need indirect means, allowing concluding on the quality of information. In 2015, Google’sPageRankalgorithm took approximately 200 ranking factors included in a learning algorithm to assess information quality.[13]
Prominent actors have extensive and ongoing relationships with other prominent actors. This increases their visibility and makes the content more relevant, interconnected, and useful.[9]Centrality, from a graph-theoretical perspective, describes unidirectional relationships without distinguishing between receiving and sending information. In this context, it includes the inbound links considered in the definition of 'prestige,' complemented by outgoing links. Another difference between prestige and centrality is that the measure of prestige applies to a complete website or author, whereas centrality can be considered at a more granular level, such as an individual blog post. Search engines evaluate various factors to assess the quality of outgoing links, including link centrality, which describes the quality, quantity, and relevance of outgoing links as well as the prestige of their destination. They also consider the frequency of new content publication ('freshness of information') to ensure that the website remains an active participant in the community.[5]
The domain authority that a website attains is not the only factor which defines its positioning in the SERPs of search engines. The second important factor is the competitiveness of a specific sector. Subjects likeSEOare very competitive. A website needs to outperform the prestige of competing websites to attain domain authority. This prestige, relative to other websites, can be defined as “relative domain authority.”
|
https://en.wikipedia.org/wiki/Domain_authority
|
EigenTrustalgorithmis areputation managementalgorithm forpeer-to-peernetworks, developed bySep Kamvar, Mario Schlosser, andHector Garcia-Molina.[1]The algorithm provides each peer in the network a unique global trust value based on the peer's history of uploads and thus aims to reduce the number of inauthentic files in aP2Pnetwork. It has been cited by approximately 3853 other articles according to Google Scholar.[2]
Peer-to-peersystems available today (likeGnutella) are open, often anonymous and lack accountability. Hence a user with malicious intent can introduce into the peer-to-peer network resources that may be inauthentic, corrupted or malicious (Malware). This reflects poorly on the credibility of current peer-to-peer systems. A research team fromStanfordprovides a reputation management system, where each peer in the system has a unique global trust value based on the peer's history of uploads. Any peer requesting resources will be able to access the trust value of a peer and avoid downloading files from untrusted peers.
The Eigentrust algorithm is based on the notion of transitive trust: If a peeritrusts any peerj, it would also trust the peers trusted byj. Each peericalculates the local trust valuesijfor all peers that have provided it with authentic or fake downloads based on the satisfactory or unsatisfactory transactions that it has had:
sij=sat(i,j)−unsat(i,j){\displaystyle s_{ij}=\operatorname {sat} (i,j)-\operatorname {unsat} (i,j)}
where sat (i,j) refers to the number of satisfactory responses that peerihas received from peerj,
and unsat(i,j) refers to the number of unsatisfactory responses that peerihas received from peerj.
The local value is normalized, to prevent malicious peers from assigning arbitrarily high local trust values to colluding malicious peers and arbitrarily low local trust values to good peers. The normalized local trust valuecijis then
cij=max(sij,0)∑jmax(sij,0){\displaystyle c_{ij}={\frac {\max(s_{ij},0)}{\sum _{j}\max(s_{ij},0)}}}
The local trust values are aggregated at a central location or in a distributed manner to create a trust vector for the whole network. Based on the idea of transitive trust, a peeriwould ask other peers it knows to report the trust value of a peerkand weigh responses of these peers by the trust peeriplaces in them.
If we assume that a user knew thecijvalues for the whole network in the form of amatrixC, then trust vectort¯i{\displaystyle {\bar {t}}_{i}}that defines the trust value fortik{\displaystyle t_{ik}}is given by
t¯i=(CT)xc¯i{\displaystyle {\bar {t}}_{i}=(C^{T})^{x}{\bar {c}}_{i}}
In the equation shown above, if C is assumed to be aperiodic and strongly connected, powers of the matrix C will converge to a stable value at some point.
For a large value ofx, the trust vectort¯i{\displaystyle {\bar {t}}_{i}}will converge to the same vector for every peer in the network. The vectort¯i{\displaystyle {\bar {t}}_{i}}is known as the left principaleigenvectorof the matrixC. We also note that sincet¯i{\displaystyle {\bar {t}}_{i}}is the same for all nodes in the network, it represents the global trust value.
Based on the results above a simple centralized trust value computing algorithm can be written. Note that we assume that all the local trust values for the whole network are available and present in the matrixC. We also note that, if the equation shown above converges, we can replace the initial vectorc¯i{\displaystyle {\bar {c}}_{i}}by a vectore¯{\displaystyle {\bar {e}}}that is an m-vector representing uniform probability distribution over all m peers. The basic EigenTrust algorithm is shown below:
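Since the original pseudocode is not reproduced here, the following is a minimal sketch of such a centralized computation, assuming the full matrix C of normalized local trust values is available (the values below are made up): starting from a uniform vector over all m peers, repeatedly multiply by the transpose of C until the trust vector stops changing.

import numpy as np

def eigentrust(C, tol=1e-10, max_iter=1000):
    # basic centralized EigenTrust: iterate t <- C^T t from a uniform start vector
    m = C.shape[0]
    t = np.full(m, 1.0 / m)
    for _ in range(max_iter):
        t_new = C.T @ t
        if np.abs(t_new - t).sum() < tol:
            return t_new
        t = t_new
    return t

# Made-up normalized local trust values c_ij (each row sums to 1);
# peer 3 receives little or no trust from the others.
C = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.4, 0.0, 0.6, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.3, 0.4, 0.0],
])
print(np.round(eigentrust(C), 3))   # global trust values; peer 3 ends up near zero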
|
https://en.wikipedia.org/wiki/EigenTrust
|
The termsGoogle bombingandGoogle washingrefer to the practice of causing awebsiteto rank highly inweb search engineresults for irrelevant, unrelated or off-topic search terms. In contrast,search engine optimization(SEO) is the practice of improving thesearch enginelistings of web pages forrelevantsearch terms.
Google-bombing is done for either business, political, or comedic purposes (or some combination thereof).[1]Google'ssearch-rank algorithmranks pages higher for a particular search phrase if enough other pages linked to it use similaranchor text. By January 2007, however, Google had tweaked its search algorithm to counter popular Google bombs such as "miserable failure" leading toGeorge W. BushandMichael Moore; now, search results list pages about the Google bomb itself.[2]On 21 June 2015, the first result in a Google search for "miserable failure" was this article.[3]Used both as averband anoun, "Google bombing" was introduced to theNew Oxford American Dictionaryin May 2005.[4]
Google bombing is related tospamdexing, the practice of deliberately modifyingHTMLto increase the chance of a website being placed close to the beginning of search engine results, or to influence the category to which the page is assigned in a misleading or dishonest manner.[5]
The termGooglewashingwas coined byAndrew Orlowskiin 2003 in order to describe the use ofmedia manipulationto change the perception of a term, or push out competition fromsearch engine results pages(SERPs).[6][7]
Google bombs date back as far as 1999, when a search for "moreevilthanSatanhimself" resulted in theMicrosofthomepage as the top result.[8][9]
In September 2000 the first Google bomb with a verifiable creator was created byHugedisk Men's Magazine, a now-defunct online humor magazine, when it linked the text "dumb motherfucker" to a site sellingGeorge W. Bush-related merchandise.[10]Hugedisk had also unsuccessfully attempted to Google bomb an equally derogatory term to bring up anAl Gore-related site. After a fair amount of publicity the George W. Bush-related merchandise site retained lawyers and sent acease-and-desistletter toHugedisk, thereby ending the Google bomb.[11]
Adam Mathes is credited with coining the term "Google bombing" when he mentioned it in an April 6, 2001, article in the online magazineuber.nu. In the article Mathes details his connection of the search term "talentless hack" to the website of his friend, Andy Pressman, by recruiting fellow webloggers to link to his friend's page with the desired term.[12]Some experts forecast that the practice of Google Bombing is over, as changes to Google's algorithm over the years have minimised the effect of the technique.
The Google Bomb has been used fortactical mediaas a way of performing a "hit-and-run" media attack on popular topics. Such attacks include Anthony Cox's attack in 2003. He created a parody of the "404 – page not found" browser error message in response to the war in Iraq. The page looked like the error page but was titled "These Weapons of Mass Destruction cannot be displayed". This website could be found as one of the top hits on Google after the start of the war in Iraq.[13]Also, in an attempt to detract attention from the far-right groupEnglish Defence League(EDL), a parody group has been made known as "English Disco Lovers", with the expressed purpose of Google bombing the acronym.[14]
The Google bomb is often misunderstood by those in the media and publishing industry who do not retain technical knowledge of Google's ranking factors. For example, talk radio hostAlex Joneshas often conducted what he calls "Google bombs" by dispatching instructions to his radio/Internet listeners.[15][16]In this context, the term is used to describe a rapid and massive influx of keyword searches for a particular phrase. The keyword surge gives the impression that the related content has suddenly become popular. The strategy behind this type of Google bombing is to attract attention from the larger mainstream media and influence them to publish content related to the keyword.[citation needed]
By studying what types of ranking manipulations a search engine is using, a company can provoke a search engine intoloweringthe ranking of a competitor's website. This practice, known asGoogle bowlingornegative SEO, is often done by purchasing Google bombing services (or otherSEOtechniques) not for one's own website, but rather for that of a competitor. The attacker provokes the search company into punishing the "offending" competitor by displaying their page further down in the search results.[17][18]For victims of Google bowling, it may be difficult to appeal the ranking decrease because Google avoids explaining penalties, preferring not to "educate" real offenders. If the situation is clear-cut, however, Google could lift the penalty after submitting a request for reconsideration. Furthermore, after theGoogle Penguinupdate, Google search rankings now take Google bowling into account and very rarely will a website be penalized due to low-quality "farm" backlinks.[citation needed]
Other search engines use similar techniques to rank results and are also affected by Google bombs. A search for "miserable failure" or "failure" on September 29, 2006, brought up the official George W. Bush biography number one onGoogle,Yahoo!, andMSNand number two on Ask.com. On June 2, 2005, Tooter reported that George Bush was ranked first for the keyword "miserable", "failure", and "miserable failure" in both Google and Yahoo!; Google has since addressed this and disarmed the George Bush Google bomb and many others.[citation needed]
TheBBC, reporting on Google bombs in 2002, used the headline "Google Hit By Link Bombers",[19]acknowledging to some degree the idea of "link bombing". In 2004,Search Engine Watchsuggested that the term be "link bombing" because of its application beyond Google, and continues to use thattermas it is considered more accurate.[20]
We don't condone the practice of googlebombing, or any other action that seeks to affect the integrity of our search results, but we're also reluctant to alter our results by hand in order to prevent such items from showing up. Pranks like this may be distracting to some, but they don't affect the overall quality of our search service, whose objectivity, as always, remains the core of our mission.[21]
By January 2007, Google changed its indexing structure[2]so that Google bombs such as "miserable failure" would "typically return commentary, discussions, and articles" about the tactic itself.[2]Google announced the changes on its official blog. In response to criticism for allowing the Google bombs,Matt Cutts, head of Google's Webspam team, said that Google bombs had not "been a very high priority for us".[2][22]
Over time, we’ve seen more people assume that they are Google's opinion, or that Google has hand-coded the results for these Google-bombed queries. That's not true, and it seemed like it was worth trying to correct that misperception.[23]
In May 2004, the websites Dark Blue and SearchGuild teamed up to create what they termed the "SEO Challenge" to Google bomb the phrase "nigritude ultramarine".[24]
The contest sparked controversy around the Internet, as some groups worried thatsearch engine optimization(SEO) companies would abuse the techniques used in the competition to alter queries more relevant to the average user. This fear was offset by the belief thatGooglewould alter their algorithm based on the methods used by the Google bombers.
In September 2004, anotherSEO contestwas created. This time, the objective was to get the top result for the phrase "seraphim proudleduck". A large sum of money was offered to the winner, but the competition turned out to be a hoax.[citation needed]
In March 2005's issue of.netmagazine, a contest was created among five professional web developers to make their site the number-one site for the made-up phrase "crystalline incandescence".
Some of the most famous Google bombs are also expressions of political opinions (e.g. "liar" leading toTony Blairor "miserable failure" leading to the White House's biography of George W. Bush):
Some website operators have adapted Google bombing techniques to do "spamdexing". This includes, among other techniques, posting of links to a site in anInternet forumalong with phrases the promoter hopes to associate with the site (seespam in blogs). Unlike conventional message board spam, the object is not to attract readers to the site directly, but to increase the site's ranking under those search terms. Promoters using this technique frequently target forums with low reader traffic, in hopes that it will fly under the moderators' radar.Wikisin particular are often the target of this kind of page rank vandalism, as all of the pages are freely editable. This practice was also called "money bombing" byJohn Hilercirca 2004.[65][66]
Another technique is for the owner of an Internetdomain nameto set up the domain'sDNSentry so that allsubdomainsare directed to the same server. The operator then sets up the server so that page requests generate a page full of desired Google search terms, each linking to a subdomain of the same site, with the same title as the subdomain in the requestedURL. Frequently the subdomain matches the linked phrase, with spaces replaced byunderscoresorhyphens. Since Google treats subdomains as distinct sites, the effect of many subdomains linking to each other is a boost to thePageRankof those subdomains and of any other site they link to.
On February 2, 2007, many users noticed changes in the Google algorithm. These changes largely affected (among other things) Google bombs: as of February 15, 2007, only roughly 10% of the Google bombs still worked. This change was largely due to Google refactoring its valuation of PageRank.[citation needed][67][68]
Quixtar, amulti-level marketingcompany now known asAmway North America, has been accused by its critics of using its large network of websites to move sites critical of Quixtar lower in search engine rankings. A Quixtar/Amway independent business owner (IBO) reports that a Quixtar leader advocated the practice in a meeting of Quixtar IBOs. Quixtar/Amway denied wrongdoing and states that its practices are in accordance with search engine rules.[69]
On December 26, 2011, a bomb was started againstGoDaddyto remove them from the #1 place on Google for "domain registration" in retaliation for its support forSOPA.[70]This was then disseminated throughHacker News.[71]
In Australia, one of the first examples of Google bombs was when the keyword "old rice and monkey nuts" was used to generate traffic forHerald SuncolumnistAndrew Bolt's website. The keyword phrase references the alleged $4 billion in loan deals brokered byTirath Khemlanito Australia in 1974.[72]
In May 2019,David BenioffandD. B. Weisswere targets of multiple Google bombs caused byRedditusers' dissatisfaction with the eighth season of their showGame of Thrones. Targeted phrases included "bad writers" and "Dumb and Dumber".[73]
In Indonesia, PresidentJoko Widodowas target of Googlebombing on Google Picture Search when typing "Monyet Pakai Jas Hujan" (Monkey Wearing Raincoat) the results were President Joko Widodo wearing greenraincoatwhen on an official visit.[74]
|
https://en.wikipedia.org/wiki/Google_bombing
|
Hummingbirdis the codename given to a significantalgorithmchange inGoogle Searchin 2013. Its name was derived from the speed and accuracy of thehummingbird. The change was announced on September 26, 2013, having already been in use for a month. "Hummingbird" places greater emphasis onnatural languagequeries, considering context and meaning over individualkeywords. It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage.
The upgrade marked the most significant change to Google search in years, with more "human" search interactions and a much heavier focus on conversation and meaning.[1]Thus, web developers and writers were encouraged tooptimize their siteswith natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.
Google announced "Hummingbird", a new searchalgorithm, at a September 2013 press event,[2]having already used the algorithm for approximately one month prior to announcement.[3]
The "Hummingbird" update was the first major update to Google's search algorithm since the 2010"Caffeine" search architecture upgrade, but even that was limited primarily to improving theindexingof information rather than sorting through information.[3]Amit Singhal, then-search chief at Google, toldSearch Engine Landthat "Hummingbird" was the most dramatic change of the algorithm since 2001, when he first joined Google.[3][4]Unlike previous search algorithms, which would focus on each individual word in the search query, "Hummingbird" considers the context of the different words together, with the goal that pages matching the meaning do better, rather than pages matching just a few words.[5]The name is derived from the speed and accuracy of thehummingbird.[5]
"Hummingbird" is aimed at making interactions more human, in the sense that the search engine is capable of understanding the concepts and relationships between keywords.[6]It places greater emphasis on page content, making search results more relevant, and looks at the authority of a page, and in some cases the page author, to determine the importance of a website. It uses this information to better lead users to a specific page on a website rather than the standard website homepage.[7]
Search engine optimizationchanged with the addition of "Hummingbird", with web developers and writers encouraged to usenatural languagewhen writing on their websites rather than using forced keywords. They were also advised to make effective use of technical website features, such aspage linking, on-page elements including title tags,URLaddresses andHTML tags, as well as writing high-quality, relevant content without duplication.[8]While keywords within the query still continue to be important, "Hummingbird" adds more strength to long-tailed keywords, effectively catering to the optimization of content rather than just keywords.[7]The use of synonyms has also been optimized; instead of listing results with exact phrases or keywords, Google shows more theme-related results.[9]
|
https://en.wikipedia.org/wiki/Google_Hummingbird
|
AGoogle matrixis a particularstochastic matrixthat is used byGoogle'sPageRankalgorithm. The matrix represents a graph with edges representing links between pages. The PageRank of each page can then be generated iteratively from the Google matrix using thepower method. However, in order for the power method to converge, the matrix must be stochastic,irreducibleandaperiodic.
In order to generate the Google matrixG, we must first generate anadjacency matrixAwhich represents the relations between pages or nodes.
Assuming there areNpages, we can fill outAby setting its elementaij{\displaystyle a_{ij}}to 1 if page j has a link pointing to page i, and to 0 otherwise. The matrixSis then obtained fromAby normalizing each column so that it sums to one; any column of zeros (corresponding to a dangling node with no outgoing links) is replaced by a uniform column whose elements are all equal to 1/N.
Then the final Google matrix G can be expressed viaSas: Gij=αSij+(1−α)/N{\displaystyle G_{ij}=\alpha S_{ij}+(1-\alpha )/N}, where the second term describes a random surfer jumping to any of theNpages with uniform probability.
By construction, the sum of all non-negative elements inside each matrix column is equal to unity. The numerical coefficientα{\displaystyle \alpha }is known as the damping factor.
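As a minimal illustration of this construction (a sketch only: it uses NumPy, and the tiny example network and variable names are assumptions added here, not part of the original description), S and G can be built from an adjacency matrix as follows:

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Build the Google matrix G from an adjacency matrix A,
    where A[i, j] = 1 if page j links to page i and 0 otherwise."""
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    S = np.empty((N, N))
    for j in range(N):
        if col_sums[j] == 0:
            S[:, j] = 1.0 / N                 # dangling node: jump anywhere
        else:
            S[:, j] = A[:, j] / col_sums[j]   # normalize column to sum to 1
    # G = alpha * S + (1 - alpha)/N * E, with E the all-ones matrix
    return alpha * S + (1.0 - alpha) / N * np.ones((N, N))

# Tiny example: page 0 -> 1, page 1 -> 0 and 2, page 2 has no outlinks
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
G = google_matrix(A)
print(G.sum(axis=0))   # each column sums to 1, so G is column-stochastic
```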
UsuallySis asparse matrixand for modern directed networks it has only about ten nonzero elements per row or column, thus only about 10Nmultiplications are needed to multiply a vector by the matrixG.[2][3]
An example of the matrixS{\displaystyle S}construction via Eq.(1) within a simple network is given in the articleCheiRank.
For the actual matrix, Google uses a damping factorα{\displaystyle \alpha }around 0.85.[2][3][4]The term(1−α){\displaystyle (1-\alpha )}gives a surfer probability to jump randomly on any page. The matrixG{\displaystyle G}belongs to the class ofPerron-Frobenius operatorsofMarkov chains.[2]The examples of Google matrix structure are shown in Fig.1 for Wikipedia articles hyperlink network in 2009 at small scale and in Fig.2 for University of Cambridge network in 2006 at large scale.
For0<α<1{\displaystyle 0<\alpha <1}there is only one maximal eigenvalueλ=1{\displaystyle \lambda =1}with the corresponding right eigenvector which has non-negative elementsPi{\displaystyle P_{i}}which can be viewed as stationary probability distribution.[2]These probabilities ordered by their decreasing values give the PageRank vectorPi{\displaystyle P_{i}}with the PageRankKi{\displaystyle K_{i}}used by Google search to rank webpages. Usually one has for the World Wide Web thatP∝1/Kβ{\displaystyle P\propto 1/K^{\beta }}withβ≈0.9{\displaystyle \beta \approx 0.9}. The number of nodes with a given PageRank value scales asNP∝1/Pν{\displaystyle N_{P}\propto 1/P^{\nu }}with the exponentν=1+1/β≈2.1{\displaystyle \nu =1+1/\beta \approx 2.1}.[6][7]The left eigenvector atλ=1{\displaystyle \lambda =1}has constant matrix elements. With0<α{\displaystyle 0<\alpha }all eigenvalues move asλi→αλi{\displaystyle \lambda _{i}\rightarrow \alpha \lambda _{i}}except the maximal eigenvalueλ=1{\displaystyle \lambda =1}, which remains unchanged.[2]The PageRank vector varies withα{\displaystyle \alpha }but other eigenvectors withλi<1{\displaystyle \lambda _{i}<1}remain unchanged due to their orthogonality to the constant left vector atλ=1{\displaystyle \lambda =1}. The gap betweenλ=1{\displaystyle \lambda =1}and other eigenvalue being1−α≈0.15{\displaystyle 1-\alpha \approx 0.15}gives a rapid convergence of a random initial vector to the PageRank approximately after 50 multiplications onG{\displaystyle G}matrix.
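A sketch of the power iteration described above, continuing the previous example (it reuses `G` and the NumPy import from the sketch above; the convergence tolerance and iteration cap are illustrative assumptions):

```python
def pagerank_vector(G, tol=1e-9, max_iter=200):
    """Repeatedly apply G to a uniform start vector until it converges
    to the stationary probability distribution P (the PageRank vector)."""
    N = G.shape[0]
    p = np.ones(N) / N
    for _ in range(max_iter):
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

P = pagerank_vector(G)      # stationary probabilities P_i
order = np.argsort(-P)      # nodes sorted from highest to lowest PageRank
```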
Atα=1{\displaystyle \alpha =1}the matrixG{\displaystyle G}generally has many degenerate eigenvaluesλ=1{\displaystyle \lambda =1}(see e.g. [6][8]). Examples of the eigenvalue spectrum of the Google matrix of various directed networks are shown in Fig.3 from[5]and Fig.4 from[8].
The Google matrix can also be constructed for the Ulam networks generated by the Ulam method [8] for dynamical maps. The spectral properties of such matrices are discussed in [9,10,11,12,13,15].[5][9]In a number of cases the spectrum is described by the fractal Weyl law [10,12].
The Google matrix can also be constructed for other directed networks, e.g. for the procedure call network of the Linux Kernel software introduced in [15]. In this case the spectrum ofλ{\displaystyle \lambda }is described by the fractal Weyl law with the fractal dimensiond≈1.3{\displaystyle d\approx 1.3}(see Fig.5 from[9]).Numerical analysisshows that the eigenstates of matrixG{\displaystyle G}are localized (see Fig.6 from[9]). TheArnoldi iterationmethod makes it possible to compute many eigenvalues and eigenvectors for matrices of rather large size [13].[5][9]
Other examples of theG{\displaystyle G}matrix include the Google matrix of the brain [17] and of business process management [18], see also.[1]Applications of Google matrix analysis to DNA sequences are described in [20]. Such a Google matrix approach also allows one to analyze the entanglement of cultures via the ranking of multilingual Wikipedia articles about persons [21].
The Google matrix with damping factor was described bySergey BrinandLarry Pagein 1998 [22], see also articles on PageRank history [23],[24].
|
https://en.wikipedia.org/wiki/Google_matrix
|
Google Pandais analgorithmused by theGooglesearch engine, first introduced in February 2011. The main goal of this algorithm is to improve the quality of search results by lowering the rankings of websites with "low-quality content".[1][2][3]Panda is part of Google's broader approach to combat low-quality websites that use manipulative methods to gain higher positions in search engine results.
CNETreported a surge in the rankings ofnews websitesandsocial networking sites, and a drop in rankings for sites containing large amounts of advertising.[4]This change reportedly affected the rankings of almost 12 percent of all search results.[5]Soon after the Panda rollout, many websites, including Google's webmaster forum, became filled with complaints ofscrapers/copyright infringers getting better rankings than sites with original content. At one point, Google publicly asked for data points to help detect scrapers better.[6]In 2016,Matt Cutts, Google's head of webspam at the time of the Panda update, commented that "with Panda, Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers."[2]
Google's Panda received several updates after the original rollout in February 2011, and their effect went global in April 2011. To help affected publishers, Google provided an advisory on its blog,[7]thus giving some direction for self-evaluation of a website's quality. Google has provided a list of 23 bullet points on its blog answering the question of "What counts as a high-quality site?" that is supposed to help webmasters "step into Google's mindset".[8]Since 2015, Panda has been incorporated into Google's core algorithm.[9][10]
The name "Panda" comes from the Google engineer Navneet Panda, who developed the technology that allowed Google to create and implement the algorithm.[11][5]
The Google Panda patent (patent 8,682,892), filed on September 28, 2012, and granted on March 25, 2014, states that Panda creates a ratio between a site'sinbound linksand search queries related to the site's brand. This ratio is then used to create a sitewide modification factor, which is applied to a page based on a search query. If the page does not meet a certain threshold, the modification factor is applied, and the page ranks lower in search engine results.[12]
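A highly simplified, purely illustrative reading of that description follows (the function name, the threshold, and the modification-factor value are hypothetical assumptions introduced here; they are not taken from the patent or from Google):

```python
def apply_panda_modifier(page_score, brand_queries, inbound_links,
                         threshold=0.5, modification_factor=0.7):
    """Sketch of the mechanism the patent describes: compare search queries
    for the site's brand against the site's inbound links, and down-weight
    page scores sitewide when the ratio falls below a threshold."""
    ratio = brand_queries / inbound_links if inbound_links else 0.0
    if ratio < threshold:
        return page_score * modification_factor  # page ranks lower in results
    return page_score
```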
After the rollout of the Google Panda update, there were significant shifts in search rankings. News and social networking sites saw higher rankings, while heavily-advertised sites dropped, affecting nearly 12% of search results.[1]Panda affects the ranking of an entire site or specific sections of it, rather than just individual pages.[13]
For the first two years, Google Panda's updates were rolled out about once a month, but Google stated in March 2013 that future updates would be integrated into the algorithm and would therefore be continuous and less noticeable.[14][15]
On 20 May 2014, the Panda 4.0 update was released. One of the consequences of this update was the decline in rankings of websites that Google considers "low-quality," including content aggregators, news sites (especially in the areas of rumors and gossip), and price comparison websites.[16][17]
Google released a "slow rollout" of Panda 4.2 starting on July 18, 2015.[18]
|
https://en.wikipedia.org/wiki/Google_Panda
|
Google Penguinis acodename[1]for a Google algorithm update that was first announced on April 24, 2012. The update was aimed at decreasingsearch engine rankingsof websites that violate Google's Webmaster Guidelines[2]by using what are now declaredGrey HatSEMtechniques to artificially increase the ranking of a webpage by manipulating the number of links pointing to the page. Such tactics are commonly described as link schemes.[3]According to Google's John Mueller,[1]as of 2013, Google announced all updates to the Penguin filter to the public.[4]
By Google's estimates,[5]Penguin affected approximately 3.1% ofsearch queriesinEnglish, about 3% of queries in languages likeGerman,Chinese, andArabic, and an even greater percentage of them in "highly spammed" languages. On May 25, 2012, Google unveiled another Penguin update, called Penguin 1.1. This update, according toMatt Cutts, former head of webspam at Google, was supposed to affect less than one-tenth of a percent of English searches. The guiding principle for the update was to penalize websites that were using manipulative techniques to achieve high rankings. Pre-Penguin sites commonly used negative link building techniques to rank highly and get traffic. Once Penguin was rolled out, it meant that content was key, and those with great content would be recognised and those with little or spammy content would be penalised and receive no ranking benefits.[6]The purpose according to Google was to catch excessive spammers. Allegedly, few websites lost search rankings on Google for specific keywords during thePandaand Penguin rollouts.[7]Google specifically mentions thatdoorway pages, which are only built to attract search engine traffic, are against their webmaster guidelines.
In January 2012, the so-called Page Layout Algorithm Update[8](also known as the Top Heavy Update)[9]was released, which targeted websites with too many ads, or too little contentabove the fold.
Penguin 3 was released October 5, 2012, and affected 0.3% of queries.[10]Penguin 4 (also known as Penguin 2.0) was released on May 22, 2013, and affected 2.3% of queries.[11]Penguin 5 (also known as Penguin 2.1)[12]was released on October 4, 2013, affected around 1% of queries, and has been the most recent of the Google Penguin algorithm updates.[13]
Google was reported to have released Penguin 3.0 on October 18, 2014.[14]
On October 21, 2014, Google's Pierre Farr confirmed that Penguin 3.0 was an algorithm "refresh", with no new signals added.[15]
On April 7, 2015, Google's John Mueller said in a Google+ hangout that both Penguin and Panda "currently are not updating the data regularly" and that updates must be pushed out manually. This confirms that the algorithm is not updated continuously which was believed to be the case earlier on in the year.[16]
The strategic goal that Panda, Penguin, and the page layout update share is to display higher quality websites at the top of Google'ssearch results. However, sites that were downranked as the result of these updates have different sets of characteristics. The main target of Google Penguin is the so-called "black-hat" link-building strategies, such as link buying, link farming, automated links, PBNs, and others.[17]
In a Google+ Hangout on April 15, 2016, John Mueller said "I am pretty sure when we start rolling out [Penguin] we will have a message to kind of post but at the moment I don't have anything specific to kind of announce."[18]
On September 23, 2016, Google announced that Google Penguin was now part of the core algorithm,[19]meaning that it updates in real time; hence there will no longer be announcements by Google relating to future refreshes.[20]Real-time also means that websites are evaluated, and their rankings impacted, in real time. In previous years webmasters instead had to wait for the roll-out of the next update to recover from a Penguin penalty. Also, Google Penguin 4.0 is more granular than previous updates, since it may affect a website on a URL basis rather than always affecting the whole website. Finally, Penguin 4.0[21][22]differs from previous Penguin versions in that it does not demote a website when it finds bad links. Instead it discounts the links, meaning it ignores them and they no longer count toward the website's ranking. As a result, there is less need to use the disavow file.[21]Google uses both algorithms and human reviewers to identify links that are unnatural (artificial), manipulative or deceptive, and includes these in its Manual Actions report for websites.[23]
Two days after the Penguin update was released Google prepared a feedback form,[24]designed for two categories of users: those who want to reportweb spamthat still ranks highly after thesearch algorithmchange, and those who think that their site got unfairly hit by the update. Google also has a reconsideration form through Google Webmaster Tools.
In January 2015, Google's John Mueller said that a Penguin penalty can be removed by simply building good links. The usual process is to remove bad links manually or by using Google's Disavow tool and then filing a reconsideration request.[25]Mueller elaborated on this by saying the algorithm looks at the percentage of good links versus bad links, so by building more good links it may tip the algorithm in your favor which would lead to recovery.[26]
|
https://en.wikipedia.org/wiki/Google_Penguin
|
Google Search(also known simply asGoogleorGoogle.com) is asearch engineoperated byGoogle. It allows users to search for information on theWebby entering keywords or phrases. Google Search usesalgorithmsto analyze and rankwebsitesbased on their relevance to the search query. It is the most popular search engine worldwide.
Google Search is themost-visited website in the world. As of 2025, Google Search has a 90% share of the global search engine market.[3]Approximately 24.84% of Google's monthly global traffic comes from theUnited States, 5.51% fromIndia, 4.7% fromBrazil, 3.78% from theUnited Kingdomand 5.28% fromJapanaccording to data provided bySimilarweb.[4]
The order of search results returned by Google is based, in part, on a priority rank system called "PageRank". Google Search also provides many different options for customized searches, using symbols to include, exclude, specify or require certain search behavior, and offers specialized interactive experiences, such as flight status and package tracking, weather forecasts, currency, unit, and time conversions, word definitions, and more.
The main purpose of Google Search is to search for text in publicly accessible documents offered by web servers, as opposed to other data, such asimagesordata contained in databases. It was originally developed in 1996 byLarry Page,Sergey Brin, andScott Hassan.[5][6][7]The search engine would also be set up in the garage ofSusan Wojcicki'sMenlo Parkhome.[8]In 2011, Google introduced "Google Voice Search" to search for spoken, rather than typed, words.[9]In 2012, Google introduced asemantic searchfeature namedKnowledge Graph.
Analysis of the frequency of search terms may indicate economic, social and health trends.[10]Data about the frequency of use of search terms on Google can beopenlyinquired viaGoogle Trendsandhave been shown to correlatewithfluoutbreaks and unemployment levels, and provide the information faster than traditional reporting methods and surveys. As of mid-2016, Google's search engine has begun to rely ondeep neural networks.[11]
In August 2024, a US judge in Virginia ruled that Google held anillegal monopolyover Internet search and search advertising.[12][13]The court found that Google maintained its market dominance by paying large amounts to phone-makers and browser-developers to make Google its default search engine.[13]In April 2025, a trial began to determine which remedies sought by the Department of Justice would be imposed to address Google's illegal monopoly; these could include breaking up the company and preventing it from using its data to secure dominance in the AI sector.[14]
Googleindexeshundreds ofterabytesof information fromweb pages.[15]Forwebsitesthat are currently down or otherwise not available, Google provides links tocachedversions of the site, formed by the search engine's latest indexing of that page.[16]Additionally, Google indexes some file types, being able to show usersPDFs,Word documents,Excel spreadsheets,PowerPoint presentations, certainFlash multimedia content, andplain textfiles.[17]Users can also activate "SafeSearch", a filtering technology aimed at preventing explicit and pornographic content from appearing in search results.[18]
Despite Google search's immense index, sources generally assume that Google is only indexing less than 5% of the total Internet, with the rest belonging to thedeep web, inaccessible through its search tools.[15][19][20]
In 2012, Google changed its search indexing tools to demote sites that had been accused ofpiracy.[21]In October 2016, Gary Illyes, a webmaster trends analyst with Google, announced that the search engine would be making a separate, primary web index dedicated for mobile devices, with a secondary, less up-to-date index for desktop use. The change was a response to the continued growth in mobile usage, and a push for web developers to adopt a mobile-friendly version of their websites.[22][23]In December 2017, Google began rolling out the change, having already done so for multiple websites.[24]
In August 2009, Google invited web developers to test a new search architecture, codenamed "Caffeine", and give their feedback. The new architecture provided no visual differences in the user interface, but added significant speed improvements and a new "under-the-hood" indexing infrastructure. The move was interpreted in some quarters as a response toMicrosoft's recent release of an upgraded version of its own search service, renamedBing, as well as the launch ofWolfram Alpha, a new search engine based on "computational knowledge".[25][26]Google announced completion of "Caffeine" on June 8, 2010, claiming 50% fresher results due to continuous updating of its index.[27]
With "Caffeine", Google moved its back-end indexing system away fromMapReduceand ontoBigtable, the company's distributed database platform.[28][29]
In August 2018,Danny Sullivanfrom Google announced a broad core algorithm update. According to analyses by the industry publications Search Engine Watch and Search Engine Land, the update appeared to demote medical and health-related websites that were not user friendly and did not provide a good user experience, which is why industry experts named it "Medic".[30]
Google reserves very high standards for YMYL (Your Money or Your Life) pages. This is because misinformation can affect users financially, physically, or emotionally. Therefore, the update targeted particularly those YMYL pages that have low-quality content and misinformation. This resulted in the algorithm targeting health and medical-related websites more than others. However, many other websites from other industries were also negatively affected.[31]
By 2012, it handled more than 3.5 billion searches per day.[32]In 2013 theEuropean Commissionfound that Google Search favored Google's own products, instead of the best result for consumers' needs.[33]In February 2015 Google announced a major change to its mobile searchalgorithmwhich would favor mobile-friendlywebsitesover others. Nearly 60% of Googlesearchescome from mobile phones. Google says it wants users to have access to premium-qualitywebsites. Those websites which lack a mobile-friendlyinterfacewould be ranked lower, and it was expected that this update would cause a shake-up ofranks. Businesses that fail to update theirwebsitesaccordingly could see a dip in their regular website traffic.[34]
Google's rise was largely due to a patentedalgorithmcalled PageRank which helps rank web pages that match a given search string.[35]When Google was a Stanford research project, it was nicknamedBackRubbecause the technology checksbacklinksto determine a site's importance. Other keyword-based methods to rank search results, used by many search engines that were once more popular than Google, would check how often the search terms occurred in a page, or how strongly associated the search terms were within each resulting page. The PageRank algorithm instead analyzes human-generatedlinksassuming that web pages linked from many important pages are also important. The algorithm computes arecursivescore for pages, based on the weighted sum of other pages linking to them. PageRank is thought tocorrelatewell with human concepts of importance. In addition to PageRank, Google, over the years, has added many other secret criteria for determining the ranking of resulting pages. This is reported to comprise over 250 different indicators,[36][37]the specifics of which are kept secret to avoid difficulties created by scammers and help Google maintain an edge over its competitors globally.
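In one commonly cited simplified form (variants differ in how the first term is normalized; heredis the damping factor,Nthe number of pages,Bpthe set of pages linking top, andL(q)the number of outbound links on pageq), this recursive score can be written as PR(p)=(1−d)/N+d∑q∈BpPR(q)/L(q){\displaystyle PR(p)={\frac {1-d}{N}}+d\sum _{q\in B_{p}}{\frac {PR(q)}{L(q)}}}.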
PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used forRankDex, developed byRobin Liin 1996. Larry Page's patent for PageRank filed in 1998 includes a citation to Li's earlier patent. Li later went on to create the Chinese search engineBaiduin 2000.[38][39]
In a potential hint of Google's future direction of their Search algorithm, Google's then chief executiveEric Schmidt, said in a 2007 interview with theFinancial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'".[40]Schmidt reaffirmed this during a 2010 interview withThe Wall Street Journal: "I actually think most people don't want Google to answer their questions, they want Google to tell them what they should be doing next."[41]
Because Google is the most popularsearch engine, manywebmastersattempt to influence their website's Google rankings. An industry of consultants has arisen to help websites increase their rankings on Google and other search engines. This field, called search engine optimization, attempts to discern patterns in search engine listings, and then develop a methodology for improving rankings to draw more searchers to their clients' sites. Search engine optimization encompasses both "on page" factors (like body copy, title elements, H1 heading elements and imagealt attributevalues) and Off Page Optimization factors (likeanchor textand PageRank). The general idea is to affect Google's relevance algorithm by incorporating the keywords being targeted in various places "on page", in particular the title element and the body copy (note: the higher up in the page, presumably the better its keyword prominence and thus the ranking). Too many occurrences of the keyword, however, cause the page to look suspect to Google's spam checking algorithms. Google has published guidelines for website owners who would like to raise their rankings when using legitimate optimization consultants.[42]It has been hypothesized, and, allegedly, is the opinion of the owner of one business about which there have been numerous complaints, that negative publicity, for example, numerous consumer complaints, may serve as well to elevate page rank on Google Search as favorable comments.[43]The particular problem addressed inThe New York Timesarticle, which involvedDecorMyEyes, was addressed shortly thereafter by an undisclosed fix in the Google algorithm. According to Google, it was not the frequently published consumer complaints about DecorMyEyes which resulted in the high ranking but mentions on news websites of events which affected the firm such as legal actions against it.Google Search Consolehelps to check for websites that use duplicate or copyright content.[44]
In 2013, Google significantly upgraded its search algorithm with "Hummingbird". Its name was derived from the speed and accuracy of thehummingbird.[45]The change was announced on September 26, 2013, having already been in use for a month.[46]"Hummingbird" places greater emphasis onnatural languagequeries, considering context and meaning over individual keywords.[45]It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage.[47]The upgrade marked the most significant change to Google search in years, with more "human" search interactions[48]and a much heavier focus on conversation and meaning.[45]Thus, web developers and writers were encouraged tooptimize their siteswith natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.[49]
In 2023, drawing on internal Google documents disclosed as part of theUnited States v. Google LLC (2020)antitrust case, technology reporters claimed that Google Search was "bloated and overmonetized"[50]and that the "semantic matching" of search queries put advertising profits before quality.[51]Wiredwithdrew Megan Gray's piece after Google complained about alleged inaccuracies, while the author reiterated that «As stated in court, "A goal of Project Mercury was to increase commercial queries"».[52]
In March 2024, Google announced a significant update to its core search algorithm and spam targeting, which is expected to wipe out 40 percent of all spam results.[53]On March 20th, it was confirmed that the roll out of the spam update was complete.[54]
On September 10, 2024, the European-basedEU Court of Justicefound that Google held an illegal monopoly with the way the company showed favoritism to its shopping search, and could not avoid paying €2.4 billion.[55]The EU Court of Justice referred to Google's treatment of rival shopping searches as "discriminatory" and in violation of theDigital Markets Act.[55]
At the top of the search page, the approximate result count and the response time (to two decimal places) are noted. For each search result, the page title and URL, a date, and a preview text snippet appear. Along with web search results, sections with images, news, and videos may appear.[56]The length of the previewed text snippet was experimented with in 2015 and 2017.[57][58]
"Universal search" was launched by Google on May 16, 2007, as an idea that merged the results from different kinds of search types into one. Prior to Universal search, a standard Google search would consist of links only to websites. Universal search, however, incorporates a wide variety of sources, including websites, news, pictures, maps, blogs, videos, and more, all shown on the same search results page.[59][60]Marissa Mayer, then-vice president of search products and user experience, described the goal of Universal search as "we're attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results.[61]
In June 2017, Google expanded its search results to cover available job listings. The data is aggregated from various major job boards and collected by analyzing company homepages. Initially only available in English, the feature aims to simplify finding jobs suitable for each user.[62][63]
In May 2009, Google announced that they would be parsing websitemicroformatsto populate search result pages with "Rich snippets". Such snippets include additional details about results, such as displaying reviews for restaurants and social media accounts for individuals.[64]
In May 2016, Google expanded on the "Rich snippets" format to offer "Rich cards", which, similarly to snippets, display more information about results, but shows them at the top of the mobile website in a swipeable carousel-like format.[65]Originally limited to movie and recipe websites in the United States only, the feature expanded to all countries globally in 2017.[66]
The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with information gathered from a variety of sources.[67]This information is presented to users in a box to the right of search results.[68]Knowledge Graph boxes were added to Google's search engine in May 2012,[67]starting in the United States, with international expansion by the end of the year.[69]The information covered by the Knowledge Graph grew significantly after launch, tripling its original size within seven months,[70]and being able to answer "roughly one-third" of the 100 billion monthly searches Google processed in May 2016.[71]The information is often used as a spoken answer inGoogle Assistant[72]andGoogle Homesearches.[73]The Knowledge Graph has been criticized for providing answers without source attribution.[71]
A Google Knowledge Panel[74]is a feature integrated into Google search engine result pages, designed to present a structured overview of entities such as individuals, organizations, locations, or objects directly within the search interface. This feature leverages data from Google's Knowledge Graph,[75]a database that organizes and interconnects information about entities, enhancing the retrieval and presentation of relevant content to users.
The content within a Knowledge Panel[76]is derived from various sources, includingWikipediaand other structured databases, ensuring that the information displayed is both accurate and contextually relevant. For instance, querying a well-known public figure may trigger a Knowledge Panel displaying essential details such as biographical information, birthdate, and links to social media profiles or official websites.
The primary objective of the Google Knowledge Panel is to provide users with immediate, factual answers, reducing the need for extensive navigation across multiple web pages.
In May 2017, Google enabled a new "Personal" tab in Google Search, letting users search for content in their Google accounts' various services, including email messages fromGmailand photos fromGoogle Photos.[77][78]
Google Discover, previously known as Google Feed, is a personalized stream of articles, videos, and other news-related content. The feed contains a "mix of cards" which show topics of interest based on users' interactions with Google, or topics they choose to follow directly.[79]Cards include, "links to news stories, YouTube videos, sports scores, recipes, and other content based on what [Google] determined you're most likely to be interested in at that particular moment."[79]Users can also tell Google they're not interested in certain topics to avoid seeing future updates.
Google Discover launched in December 2016[80]and received a major update in July 2017.[81]Another major update was released in September 2018, which renamed the app from Google Feed to Google Discover, updated the design, and added more features.[82]
Discover can be found on a tab in the Google app and by swiping left on the home screen of certain Android devices. As of 2019, Google will not allowpolitical campaignsworldwide to target their advertisement to people to make them vote.[83]
At the 2023Google I/Oevent in May, Google unveiled Search Generative Experience (SGE), an experimental feature in Google Search available throughGoogle Labswhich producesAI-generatedsummaries in response to search prompts.[84]This was part of Google's wider efforts to counter the unprecedented rise of generative AI technology, ushered byOpenAI's launch ofChatGPT, which sent Google executives to a panic due to its potential threat to Google Search.[85]Google added the ability to generate images in October.[86]At I/O in 2024, the feature was upgraded and renamed AI Overviews.[87]
AI Overviews was rolled out to users in the United States in May 2024.[87]The feature faced public criticism in the first weeks of its rollout after errors from the tool went viral online. These included results suggesting users add glue to pizza or eat rocks,[88]or incorrectly claimingBarack Obamais Muslim.[89]Google described these viral errors as "isolated examples", maintaining that most AI Overviews provide accurate information.[88][90]Two weeks after the rollout of AI Overviews, Google made technical changes and scaled back the feature, pausing its use for some health-related queries and limiting its reliance on social media posts.[91]Scientific Americanhas criticised the system on environmental grounds, as such a search uses 30 times more energy than a conventional one.[92]It has also been criticized for condensing information from various sources, making it less likely for people to view full articles and websites. When it was announced in May 2024, Danielle Coffey, CEO of the News/Media Alliance was quoted as saying "This will be catastrophic to our traffic, as marketed by Google to further satisfy user queries, leaving even less incentive to click through so that we can monetize our content."[93]
In August 2024, AI Overviews were rolled out in the UK, India, Japan, Indonesia, Mexico and Brazil, with local language support.[94]On October 28, 2024, AI Overviews was rolled out to 100 more countries, including Australia and New Zealand.[95]
In March 2025, Google introduced an experimental "AI Mode" within its Search platform, enabling users to input complex, multi-part queries and receive comprehensive, AI-generated responses. This feature leverages Google's advanced Gemini 2.0 model, which enhances the system's reasoning capabilities and supports multimodal inputs, including text, images, and voice.
Initially, AI Mode is available to Google One AI Premium subscribers in the United States, who can access it through the Search Labs platform. This phased rollout allows Google to gather user feedback and refine the feature before a broader release.
The introduction of AI Mode reflects Google's ongoing efforts to integrate advanced AI technologies into its services, aiming to provide users with more intuitive and efficient search experiences.[96][97]
In late June 2011, Google introduced a new look to the Google homepage in order to boost the use of the Google+ social tools.[98]
One of the major changes was replacing the classic navigation bar with a black one. Google's digital creative director Chris Wiggins explains: "We're working on a project to bring you a new and improved Google experience, and over the next few months, you'll continue to see more updates to our look and feel."[99]The new navigation bar has been negatively received by a vocal minority.[100]
In November 2013, Google started testing yellow labels for advertisements displayed in search results, to improve user experience. The new labels, highlighted in yellow color, and aligned to the left of each sponsored link help users differentiate between organic and sponsored results.[101]
On December 15, 2016, Google rolled out a new desktop search interface that mimics its modular mobile user interface. The mobile design consists of a tabular layout that highlights search features in boxes, and works by imitating the desktop Knowledge Graph real estate, which appears in the right-hand rail of the search engine result page. These featured elements frequently include Twitter carousels, People Also Search For, and Top Stories (vertical and horizontal design) modules. The Local Pack and Answer Box were two of the original features of the GoogleSERPthat were primarily showcased in this manner, but this new layout creates a previously unseen level of design consistency for Google results.[102]
Google offers a "Google Search"mobile appforAndroidandiOSdevices.[103]The mobile apps exclusively feature Google Discover and a "Collections" feature, in which the user can save for later perusal any type of search result like images, bookmarks or map locations into groups.[104]Android devices were introduced to a preview of the feed, perceived as related toGoogle Now, in December 2016,[105]while it was made official on both Android and iOS in July 2017.[106][107]
In April 2016, Google updated its Search app on Android to feature "Trends"; search queries gaining popularity appeared in the autocomplete box along with normal query autocompletion.[108]The update received significant backlash, due to encouraging search queries unrelated to users' interests or intentions, prompting the company to issue an update with an opt-out option.[109]In September 2017, the Google Search app on iOS was updated to feature the same functionality.[110]
In December 2017, Google released "Google Go", an app designed to enable use of Google Search on physically smaller and lower-spec devices in multiple languages. A Google blog post about designing "India-first" products and features explains that it is "tailor-made for the millions of people in [India and Indonesia] coming online for the first time".[111]
Google Search consists of a series oflocalized websites. The largest of those, thegoogle.com site, is the top most-visited website in the world.[112]Some of its features include a definition link for most searches including dictionary words, the number of results found for a search, links to other searches (e.g. for words that Google believes to be misspelled, it provides a link to the search results using its proposed spelling), the ability to filter results to a date range,[113]and many more.
Google search accepts queries as normal text, as well as individual keywords.[114]Itautomatically correctsapparent misspellings by default (while offering to use the original spelling as a selectable alternative), and provides the same results regardless of capitalization.[114]For more customized results, one can use a wide variety ofoperators, including, but not limited to:[115][116]
Google also offers aGoogle Advanced Searchpage with a web interface to access the advanced features without needing to remember the special operators.[117]
Unlike other search engines, when searching for exact phrases, Google Search only takes words that are on the same line into account.
Google appliesquery expansionto submitted search queries, using techniques to deliver results that it considers "smarter" than the query users actually submitted. This technique involves several steps, including:[118]
In 2008, Google started to give usersautocompletedsearch suggestionsin a list below the search bar while typing, originally with the approximate result count previewed for each listed search suggestion.[119]
Google's homepage includes a button labeled "I'm Feeling Lucky". This feature originally allowed users to type in their search query, click the button and be taken directly to the first result, bypassing the search results page. Clicking it while leaving the search box empty opens Google's archive ofDoodles.[120]With the 2010 announcement ofGoogle Instant, an automatic feature that immediately displays relevant results as users are typing in their query, the "I'm Feeling Lucky" button disappears, requiring that users opt-out of Instant results through search settings to keep using the "I'm Feeling Lucky" functionality.[121]In 2012, "I'm Feeling Lucky" was changed to serve as an advertisement for Google services; users hover their computer mouse over the button, it spins and shows an emotion ("I'm Feeling Puzzled" or "I'm Feeling Trendy", for instance), and, when clicked, takes users to a Google service related to that emotion.[122]
Tom Chavezof "Rapt", a firm helping to determine a website's advertising worth, estimated in 2007 that Google lost $110 million in revenue per year due to use of the button, which bypasses the advertisements found on the search results page.[123]
Besides the main text-based search-engine function of Google search, it also offers multiple quick, interactive features. These include, but are not limited to:[124][125][126]
During Google's developer conference,Google I/O, in May 2013, the company announced that users onGoogle ChromeandChromeOSwould be able to have the browser initiate an audio-based search by saying "OK Google", with no button presses required. After having the answer presented, users can follow up with additional, contextual questions; an example include initially asking "OK Google, will it be sunny in Santa Cruz this weekend?", hearing a spoken answer, and reply with "how far is it from here?"[127][128]An update to the Chrome browser withvoice-searchfunctionality rolled out a week later, though it required a button press on a microphone icon rather than "OK Google" voice activation.[129]Google released a browser extension for the Chrome browser, named with a "beta" tag for unfinished development, shortly thereafter.[130]In May 2014, the company officially added "OK Google" into the browser itself;[131]they removed it in October 2015, citing low usage, though the microphone icon for activation remained available.[132]In May 2016, 20% of search queries on mobile devices were done through voice.[133]
In addition to its tool for searchingweb pages, Google also provides services for searching images,Usenetnewsgroups, news websites, videos (Google Videos),searching by locality, maps, and items for sale online.Google Videosallows searching theWorld Wide Webfor video clips.[134]The service evolved fromGoogle Video, Google's discontinued video hosting service that also allowed to search the web for video clips.[134]
As of 2012, Google had indexed over 30 trillion web pages and received 100 billion queries per month.[135]It alsocachesmuch of the content that itindexes. Google operates other tools and services includingGoogle News,Google Shopping,Google Maps,Google Custom Search,Google Earth,Google Docs,Picasa(discontinued),Panoramio(discontinued),YouTube,Google Translate,Google BlogSearch andGoogle DesktopSearch (discontinued[136]).
There are also products available from Google that are not directly search-related.Gmail, for example, is awebmailapplication, but still includes search features;Google Browser Syncdoes not offer any search facilities, although it aims to organize your browsing time.
In 2009, Google claimed that a search query requires altogether about 1kJor 0.0003kW·h,[137]which is enough to raise the temperature of one liter of water by 0.24 °C. According to green search engineEcosia, the industry standard for search engines is estimated to be about 0.2 grams of CO2emission per search.[138]Google's 40,000 searches per second translate to 8 kg CO2per second or over 252 million kilos of CO2per year.[139]
On certain occasions, thelogoon Google's webpage will change to a special version, known as a "Google Doodle". This is a picture, drawing, animation, or interactive game that includes the logo. It is usually done for a special event or day although not all of them are well known.[140]Clicking on the Doodle links to a string of Google search results about the topic. The first was a reference to theBurning Man Festivalin 1998,[141][142]and others have been produced for the birthdays of notable people likeAlbert Einstein, historical events like the interlockingLegoblock's 50th anniversary and holidays likeValentine's Day.[143]Some Google Doodles have interactivity beyond a simple search, such as the famous "Google Pac-Man" version that appeared on May 21, 2010.
Google has been criticized for placing long-termcookieson users' machines to store preferences, a tactic which also enables them to track a user's search terms and retain the data for more than a year.[144]
Since 2012, Google Inc. has globally introduced encrypted connections for most of its clients, in order to bypass government blocking of its commercial and IT services.[145]
Google searches have also triggeredkeyword warrantsin which information is shared with law enforcement leading to a criminal case.[146]
In 2003,The New York Timescomplained about Google'sindexing, claiming that Google'scachingof content on its site infringed its copyright for the content.[147]In bothField v. GoogleandParker v. Google, the United States District Court ofNevadaruled in favor of Google.[148][149]
A 2019New York Timesarticle on Google Search showed that images ofchild sexual abusehad been found on Google and that the company had been reluctant at times to remove them.[150]
Google flags search results with the message "This site may harm your computer" if the site is known to install malicious software in the background or otherwise surreptitiously. For approximately 40 minutes on January 31, 2009, all search results were mistakenly classified asmalwareand could therefore not be clicked; instead a warning message was displayed and the user was required to enter the requested URL manually. The bug was caused by human error.[151][152][153][154]TheURLof "/" (which expands to all URLs) was mistakenly added to the malware patterns file.[152][153]
In 2007, a group of researchers observed a tendency for users to rely exclusively on Google Search for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. ... In fact, one only sees a small part of what one could see if one also integrates other research tools."[155]
In 2011, Google Search query results have been shown by Internet activistEli Pariserto be tailored to users, effectively isolating users in what he defined as afilter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".[156]Although contrasting views have mitigated the potential threat of "informational dystopia" and questioned the scientific nature of Pariser's claims,[157]filter bubbles have been mentioned to account for the surprising results of theU.S. presidential election in 2016alongsidefake newsandecho chambers, suggesting thatFacebookand Google have designed personalized online realities in which "we only see and hear what we like".[158]
In 2012, the USFederal Trade Commissionfined GoogleUS$22.5 million for violating their agreement not to violate the privacy of users of Apple'sSafari web browser.[159]The FTC was also continuing to investigate if Google's favoring of their own services in their search results violated antitrust regulations.[160]
In a November 2023 disclosure, during the ongoing antitrust trial against Google, an economics professor at theUniversity of Chicagorevealed that Google pays Apple 36% of all search advertising revenue generated when users access Google through the Safari browser. This revelation reportedly caused Google's lead attorney to cringe visibly.[citation needed]The revenue generated from Safari users has been kept confidential, but the 36% figure suggests that it is likely in the tens of billions of dollars.
Both Apple and Google have argued that disclosing the specific terms of their search default agreement would harm their competitive positions. However, the court ruled that the information was relevant to the antitrust case and ordered its disclosure. This revelation has raised concerns about the dominance of Google in the search engine market and the potential anticompetitive effects of its agreements with Apple.[161]
Googlesearch enginerobots are programmed to usealgorithmsthat understand and predict humanbehavior. The book,Race After Technology: Abolitionist Tools for the New Jim Code[162]byRuha Benjamintalks about humanbiasas a behavior that the Google search engine can recognize. In 2016, some users Google searched "three Black teenagers" and images of criminalmugshotsof young African American teenagers came up. Then, the users searched "three White teenagers" and were presented with photos of smiling, happy teenagers. They also searched for "three Asian teenagers", and very revealing photos of Asian girls and women appeared. Benjamin concluded that these results reflect humanprejudiceand views on differentethnic groups. A group of analysts explained the concept of aracistcomputer program: "The idea here is that computers, unlike people, can't be racist but we're increasingly learning that they do in fact take after their makers ... Some experts believe that this problem might stem from the hidden biases in the massive piles ofdatathat the algorithms process as they learn to recognize patterns ... reproducing our worst values".[162]
On August 5, 2024, Google lost alawsuit which started in 2020inD.C. Circuit Court, with JudgeAmit Mehtafinding that the company had an illegal monopoly over Internet search.[163]This monopoly was held to be in violation of Section 2 of theSherman Act.[164]Google has said it will appeal the ruling,[165]though they did propose to loosen search deals with Apple and others requiring them to set Google as the default search engine.[166]
As people talk about "googling" rather than searching, the company has taken some steps to defend its trademark, in an effort to prevent it from becoming ageneric trademark.[167][168]This has led to lawsuits, threats of lawsuits, and the use of euphemisms, such as calling Google Search afamous web search engine.[169]
Until May 2013, Google Search had offered a feature totranslate search queries into other languages. A Google spokesperson toldSearch Engine Landthat "Removing features is always tough, but we do think very hard about each decision and its implications for our users. Unfortunately, this feature never saw much pick up".[170]
Instant search was announced in September 2010 as a feature thatdisplayed suggested results while the user typed in their search query, initially only in select countries or to registered users.[171]The primary advantage of the new system was its ability to save time, withMarissa Mayer, then-vice president of search products and user experience, proclaiming that the feature would save 2–5 seconds per search, elaborating that "That may not seem like a lot at first, but it adds up. With Google Instant, we estimate that we'll save our users 11 hours with each passing second!"[172]Matt Van Wagner ofSearch Engine Landwrote that "Personally, I kind of like Google Instant and I think it represents a natural evolution in the way search works", and also praised Google's efforts inpublic relations, writing that "With just a press conference and a few well-placed interviews, Google has parlayed this relatively minor speed improvement into an attention-grabbing front-page news story".[173]The upgrade also became notable for the company switching Google Search's underlying technology fromHTMLtoAJAX.[174]
Instant Search could be disabled via Google's "preferences" menu for those who didn't want its functionality.[175]
The publication2600: The Hacker Quarterlycompiled a list of words that Google Instant did not show suggested results for, with a Google spokesperson giving the following statement toMashable:[176]
There are several reasons you may not be seeing search queries for a particular topic. Among other things, we apply a narrow set of removal policies for pornography, violence, and hate speech. It's important to note that removing queries from Autocomplete is a hard problem, and not as simple as blacklisting particular terms and phrases.
In search, we get more than one billion searches each day. Because of this, we take an algorithmic approach to removals, and just like our search algorithms, these are imperfect. We will continue to work to improve our approach to removals in Autocomplete, and are listening carefully to feedback from our users.
Our algorithms look not only at specific words, but compound queries based on those words, and across all languages. So, for example, if there's a bad word in Russian, we may remove a compound word including the transliteration of the Russian word into English. We also look at the search results themselves for given queries. So, for example, if the results for a particular query seem pornographic, our algorithms may remove that query from Autocomplete, even if the query itself wouldn't otherwise violate our policies. This system is neither perfect nor instantaneous, and we will continue to work to make it better.
PC Magazinediscussed the inconsistency in how some forms of the same topic are allowed; for instance, "lesbian" was blocked, while "gay" was not, and "cocaine" was blocked, while "crack" and "heroin" were not. The report further stated that seemingly normal words were also blocked due to pornographic innuendos, most notably "scat", likely due to having two completely separate contextual meanings, one for music and one for a sexual practice.[177]
On July 26, 2017, Google removed Instant results, due to a growing number of searches on mobile devices, where interaction with search, as well as screen sizes, differ significantly from a computer.[178][179]
"Instant previews" allowed previewing screenshots of search results' web pages without having to open them. The feature was introduced in November 2010 to the desktop website and removed in April 2013 citing low usage.[180][181]
Various search engines provide encrypted Web search facilities. In May 2010 Google rolled out SSL-encrypted web search.[182]The encrypted search was accessed atencrypted.google.com[183]However, the web search is encrypted via Transport Layer Security (TLS) by default today, thus every search request should be automatically encrypted if TLS is supported by the web browser.[184]On its support website, Google announced that the addressencrypted.google.comwould be turned off April 30, 2018, stating that all Google products and most new browsers use HTTPS connections as the reason for the discontinuation.[185]
Google Real-Time Search was a feature of Google Search in which search results also sometimes includedreal-timeinformation from sources such asTwitter,Facebook,blogs, and news websites.[186]The feature was introduced on December 7, 2009,[187]and went offline on July 2, 2011, after the deal with Twitter expired.[188]Real-Time Search includedFacebookstatus updates beginning on February 24, 2010.[189]A feature similar to Real-Time Search was already available onMicrosoft'sBing search engine, which showed results fromTwitterand Facebook.[190]The interface for the engine showed a live, descending "river" of posts in the main region (which could be paused or resumed), while abar chartmetric of the frequency of posts containing a certain search term or hashtag was located on the right hand corner of the page above a list of most frequently reposted posts and outgoing links.Hashtagsearch links were also supported, as were "promoted" tweets hosted by Twitter (located persistently on top of the river) and thumbnails of retweeted image or video links.
In January 2011, geolocation links of posts were made available alongside results in Real-Time Search. In addition, posts containing syndicated or attached shortened links were made searchable by thelink:query option. In July 2011, Real-Time Search became inaccessible, with the Real-Time link in the Google sidebar disappearing and a custom 404 error page generated by Google returned at its former URL. Google originally suggested that the interruption was temporary and related to the launch ofGoogle+;[191]they subsequently announced that it was due to the expiry of a commercial arrangement with Twitter to provide access to tweets.[192]
|
https://en.wikipedia.org/wiki/Google_Search
|
TheHilltop algorithmis analgorithmused to find documents relevant to a particular keyword topic in news search. Created byKrishna Bharatwhile he was atCompaq Systems Research CenterandGeorge A. Mihăilăof theUniversity of Toronto,[1]it was acquired byGooglefor use in its news results in February 2003.
When you enter a query or keyword into theGoogle news search engine, the Hilltop algorithm helps to find relevant keywords whose results are more informative about the query or keyword.[2]
The algorithm operates on a special index ofexpert documents. These are pages that are about a specific topic and have links to many non-affiliated pages on that topic. The original algorithm relied on independent directories with categorized links to sites. Results are ranked based on the match between the query and relevant descriptive text forhyperlinkson expert pages pointing to a given result page. Websites which havebacklinksfrom many of the best expert pages areauthoritiesand are ranked well.
Basically, it looks at the relationship between the "expert" and "authority" pages: an "expert" is a page that links to many other relevant documents; an "authority" is a page that has links pointing to it from the "expert" pages. Here they mean pages about a specific topic with links to many non-affiliated pages on that topic. If a website hasbacklinksfrom many of the best expert pages it will be an "authority".
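A toy sketch of this expert/authority idea (illustrative only; the data layout and the one-vote-per-matching-expert-link scoring are assumptions made here, not the published algorithm):

```python
def rank_by_expert_links(query_terms, expert_pages, candidate_urls):
    """Score candidate pages by how many 'expert' page links point to them
    with anchor text matching all query terms; pages endorsed by many
    non-affiliated experts are treated as authorities."""
    scores = {url: 0 for url in candidate_urls}
    for expert in expert_pages:                 # each expert page
        for target, anchor in expert["links"]:  # (target URL, anchor text)
            if target in scores and all(t in anchor.lower() for t in query_terms):
                scores[target] += 1
    return sorted(scores, key=scores.get, reverse=True)

experts = [{"links": [("https://example.org/a", "rust compilers overview"),
                      ("https://example.org/b", "garden tools")]}]
print(rank_by_expert_links(["rust", "compilers"], experts,
                           ["https://example.org/a", "https://example.org/b"]))
```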
|
https://en.wikipedia.org/wiki/Hilltop_algorithm
|
Ingraph theory, theKatz centralityoralpha centralityof a node is a measure ofcentralityin anetwork. It was introduced byLeo Katzin 1953 and is used to measure the relative degree of influence of an actor (or node) within asocial network.[1]Unlike typical centrality measures which consider only the shortest path (thegeodesic) between a pair of actors, Katz centrality measures influence by taking into account the total number ofwalksbetween a pair of actors.[2]
It is similar toGoogle'sPageRankand to theeigenvector centrality.[3]
Katz centrality computes the relative influence of a node within a network by measuring the number of the immediate neighbors (first degree nodes) and also all other nodes in the network that connect to the node under consideration through these immediate neighbors. Connections made with distant neighbors are, however, penalized by an attenuation factorα{\displaystyle \alpha }.[4]Each path or connection between a pair of nodes is assigned a weight determined byα{\displaystyle \alpha }and the distance between nodes asαd{\displaystyle \alpha ^{d}}.
For example, consider a small social network and assume that John's centrality is being measured and thatα=0.5{\displaystyle \alpha =0.5}. The weight assigned to each link that connects John with his immediate neighbors Jane and Bob will be(0.5)1=0.5{\displaystyle (0.5)^{1}=0.5}. Since Jose connects to John indirectly through Bob, the weight assigned to this connection (composed of two links) will be(0.5)2=0.25{\displaystyle (0.5)^{2}=0.25}. Similarly, the weight assigned to the connection between Agneta and John through Aziz and Jane will be(0.5)3=0.125{\displaystyle (0.5)^{3}=0.125}and the weight assigned to the connection between Agneta and John through Diego, Jose and Bob will be(0.5)4=0.0625{\displaystyle (0.5)^{4}=0.0625}.
LetAbe theadjacency matrixof a network under consideration. Elements(aij){\displaystyle (a_{ij})}ofAare variables that take a value 1 if a nodeiis connected to nodejand 0 otherwise. The powers ofAindicate the presence (or absence) of links between two nodes through intermediaries. For instance, in matrixA3{\displaystyle A^{3}}, if element(a2,12)=1{\displaystyle (a_{2,12})=1}, it indicates that node 2 and node 12 are connected through some walk of length 3. IfCKatz(i){\displaystyle C_{\mathrm {Katz} }(i)}denotes Katz centrality of a nodei, then, given a valueα∈(0,1){\displaystyle \alpha \in (0,1)}, mathematically:
CKatz(i)=∑k=1∞∑j=1nαk(Ak)ji{\displaystyle C_{\mathrm {Katz} }(i)=\sum _{k=1}^{\infty }\sum _{j=1}^{n}\alpha ^{k}(A^{k})_{ji}}
Note that the above definition uses the fact that the element at location(i,j){\displaystyle (i,j)}ofAk{\displaystyle A^{k}}reflects the total number ofk{\displaystyle k}degree connections between nodesi{\displaystyle i}andj{\displaystyle j}. The value of the attenuation factorα{\displaystyle \alpha }has to be chosen such that it is smaller than the reciprocal of the absolute value of the largesteigenvalueofA.[5]In this case the following expression can be used to calculate Katz centrality:
C→Katz=((I−αAT)−1−I)I→{\displaystyle {\overrightarrow {C}}_{\mathrm {Katz} }=((I-\alpha A^{T})^{-1}-I){\overrightarrow {I}}}
HereI{\displaystyle I}is the identity matrix,I→{\displaystyle {\overrightarrow {I}}}is a vector of sizen(nis the number of nodes) consisting of ones.AT{\displaystyle A^{T}}denotes thetransposed matrixof A and(I−αAT)−1{\displaystyle (I-\alpha A^{T})^{-1}}denotesmatrix inversionof the term(I−αAT){\displaystyle (I-\alpha A^{T})}.[5]
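As an illustration of the matrix expression above, the following is a minimal NumPy sketch; the small adjacency matrix and the choice of α as a fraction of the reciprocal spectral radius are invented for the example and are not part of the original text.

```python
import numpy as np

# Toy directed network: A[i, j] = 1 if node i is connected to node j (invented example).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

n = A.shape[0]
# The attenuation factor must be smaller than 1 / |largest eigenvalue of A|.
alpha = 0.9 / max(abs(np.linalg.eigvals(A)))

I = np.eye(n)
ones = np.ones(n)

# Matrix form described above: C_Katz = ((I - alpha * A^T)^(-1) - I) applied to the all-ones vector.
c_katz = (np.linalg.inv(I - alpha * A.T) - I) @ ones
print(np.round(c_katz, 3))
```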
An extension of this framework allows for the walks to be computed in a dynamical setting.[6][7]By taking a time dependent series of network adjacency snapshots of the transient edges, the dependency for walks to contribute towards a cumulative effect is presented. The arrow of time is preserved so that the contribution of activity is asymmetric in the direction of information propagation.
Network producing data of the form:
representing the adjacency matrix at each timetk{\displaystyle t_{k}}. Hence:
The time pointst0<t1<⋯<tM{\displaystyle t_{0}<t_{1}<\cdots <t_{M}}are ordered but not necessarily equally spaced.Q∈RN×N{\displaystyle Q\in \mathbb {R} ^{N\times N}}for which(Q)ij{\displaystyle (Q)_{ij}}is a weighted count of the number of dynamic walks of lengthw{\displaystyle w}from nodei{\displaystyle i}to nodej{\displaystyle j}. The form for the dynamic communicability between participating nodes is:
This can be normalized via:
Therefore, centrality measures that quantify how effectively noden{\displaystyle n}can 'broadcast' and 'receive' dynamic messages across the network:
Given a graph withadjacency matrixAi,j{\displaystyle A_{i,j}}, Katz centrality is defined as follows:
whereej{\displaystyle e_{j}}is the external importance given to nodej{\displaystyle j}, andα{\displaystyle \alpha }is a nonnegative attenuation factor which must be smaller than the inverse of thespectral radiusofA{\displaystyle A}. The original definition by Katz[8]used a constant vectore→{\displaystyle {\vec {e}}}. Hubbell[9]introduced the usage of a generale→{\displaystyle {\vec {e}}}.
Half a century later, Bonacich and Lloyd[10]defined alpha centrality as:
which is essentially identical to Katz centrality. More precisely, the score of a nodej{\displaystyle j}differs exactly byej{\displaystyle e_{j}}, so ife→{\displaystyle {\vec {e}}}is constant the order induced on the nodes is identical.
Katz centrality can be used to compute centrality in directed networks such as citation networks and the World Wide Web.[11]
Katz centrality is more suitable in the analysis of directed acyclic graphs where traditionally used measures likeeigenvector centralityare rendered useless.[11]
Katz centrality can also be used in estimating the relative status or influence of actors in a social network. The work presented in[12]shows the case study of applying a dynamic version of the Katz centrality to data from Twitter and focuses on particular brands which have stable discussion leaders. The application allows for a comparison of the methodology with that of human experts in the field and how the results are in agreement with a panel of social media experts.
Inneuroscience, it is found that Katz centrality correlates with the relative firing rate ofneuronsin a neural network.[13]The temporal extension of the Katz centrality is applied to fMRI data obtained from a musical learning experiment in[14]where data is collected from the subjects before and after the learning process. The results show that the changes to the network structure over the musical exposure created in each session a quantification of the cross communicability that produced clusters in line with the success of learning.
A generalized form of Katz centrality can be used as an intuitive ranking system for sports teams, such as incollege football.[15]
Alpha centrality is implemented in igraph library for network analysis and visualization.[16]
|
https://en.wikipedia.org/wiki/Katz_centrality
|
In the field ofsearch engine optimization(SEO),link buildingdescribes actions aimed at increasing the number and quality ofinbound linksto awebpagewith the goal of increasing the search engine rankings of that page orwebsite.[1]Briefly, link building is the process of establishingrelevanthyperlinks (usually called links) to a website from external sites. Link building can increase the number of high-quality links pointing to a website, in turn increasing the likelihood of the website ranking highly insearch engineresults. Link building is also a proven marketing tactic for increasingbrand awareness.[2]
Editorial links are links that are not acquired by paying money, asking, trading, or exchanging. They are attracted by a website's good content and marketing strategies, and they are given naturally by other website owners without being asked for.[3]
Resource links are a category of links, which can be either one-way or two-way, usually referenced as "Resources" or "Information" in navbars, but sometimes, especially in the early, less compartmentalized years of the Web, simply called "links". Basically, they arehyperlinksto a website or a specific web page containing content believed to be beneficial, useful and relevant to visitors of the site establishing the link.
In recent years, resource links have grown in importance because most major search engines have made it plain that—inGoogle's words—"quantity, quality, and relevance of links count towards your rating".[4]
Search engines measure a website's value and relevance by analyzing the links to the site from other websites. The resulting “link popularity” is a measure of the number and quality of links to a website. It is an integral part of a website's ranking in search engines. Search engines examine each of the links to a particular website to determine its value. Although every link to a website is a vote in its favor, not all votes are counted equally. A website with similar subject matter to the website receiving the inbound link carries more weight than an unrelated site, and a well-regarded website (such as a university) has higher link quality than an unknown or disreputable website.[5][self-published source?]
The text of links helps search engines categorize a website. The engines' insistence on resource links being relevant and beneficial developed because many artificial link building methods were employed solely tospamsearch engines, i.e. to "fool" the engines' algorithms into awarding the sites employing these unethical devices undeservedly high page ranks and/or return positions.
Google has cautioned site developers to avoid "free-for-all" links, link-popularity schemes, and the submission of a site to thousands of search engines, given that these tactics are typically useless exercises that do not affect the ranking of a site in the results of the major search engines.[6]For many years now, the major[which?]search engines have deployed technology designed to "red flag" and potentially penalize sites employing such practices.[7]
These are the links acquired by the website owner through payment or distribution. They are also known as inorganically obtained links. Such links include link advertisements, paid linking, article distribution, directory links, and comments on forums, blogs, articles, and other interactive forms of social media.[8]
A reciprocal link is a mutual link between two objects, commonly between twowebsites, to ensure mutual traffic. For example, Alice and Bob have websites. If Bob's website links to Alice's website and Alice's website links to Bob's website, the websites are reciprocally linked. Website owners often submit their sites to reciprocallink exchangedirectories in order to achieve higher rankings in thesearch engines. Reciprocal linking between websites is no longer an important part of the search engine optimization process. In 2005, with their Jagger 2 update, Google stopped giving credit to reciprocal links as it does not indicate genuine link popularity.[9]
User-generated contentsuch as blog and forum comments with links can drive valuable referral traffic if it's well-thought-out and pertains to the discussion of the post on the blog.[10]However, these links almost always contain theNofollowor the newer ugc attribute which signal that Google shouldn't take these into its ranking considerations.[11]
Website directoriesare lists of links to websites which are sorted into categories. Website owners can submit their site to many of these directories. Some directories accept payment for listing in their directory while others are free.
Social bookmarkingis a way of saving and categorizing web pages in a public location on the web. Because bookmarks have anchor text and are shared and stored publicly, they are scanned by search engine crawlers and havesearch engine optimizationvalue.
Image linkingis a way of submitting images, such as infographics, to image directories and linking them back to a specific URL.
Guest blogging, also known as guest posting, is a popular SEO technique that consists of writing a piece of content for another website with the goal of getting more visibility and possibly a link back to the author's website. According to Google, such links are considered unnatural and should generally contain the Nofollow attribute.[12]
In early incarnations, when Google's algorithm relied on incoming links as an indicator of website success, Black Hat SEOs manipulated website rankings by creating link-building schemes, such as building subsidiary websites to send links to a primary website. With an abundance of incoming links, the prime website outranked many reputable sites. However, sites that built links in this way could also be devalued by major search engines if their owners used other black hat strategies. Black hat link building refers explicitly to the process of acquiring as many links as possible with minimal effort.
The Penguin algorithm was created to eliminate this type of abuse. At the time, Google clarified its definition of a "bad" link: “Any links intended to manipulate a site’s ranking in Google search results may be considered part of a link scheme.”
With Penguin, it wasn't the quantity of links that improved a site's rankings but the quality. Since then, Google's web spam team has attempted to prevent the manipulation of their search results through link building. Major brands including J.C. Penney, BMW, Forbes, Overstock.com, and many others have received severe penalties to their search rankings for employing spammy and non-user-friendly link building tactics.[13]
On October 5, 2014, Google launched a new algorithm update, Penguin 3.0, to penalize sites that use black hat link building tactics to build unnatural links and manipulate search engines. The update affected 0.3% of English-language queries worldwide.[14]
Black hat SEO is also referred to as spamdexing, which utilizes other black hat SEO strategies and link building tactics.[15] Some black hat link building strategies include obtaining unqualified links from, and participating in, link farms, link schemes, and doorway pages.[6] Black hat SEO could also refer to "negative SEO," the practice of deliberately harming another website's performance.
White hat link building strategies are those that add value to end users, abide by Google's terms of service, and produce results that can be sustained for a long time. White hat link building strategies focus on producing high-quality as well as relevant links to the website. Although more difficult to acquire, white hat link building tactics are widely implemented by website owners because such strategies are not only beneficial to their websites' long-term development but also good for the overall online environment.
|
https://en.wikipedia.org/wiki/Link_building
|
Search engine optimization(SEO) is the process of improving the quality and quantity ofwebsite trafficto awebsiteor aweb pagefromsearch engines.[1][2]SEO targets unpaid search traffic (usually referred to as "organic" results) rather than direct traffic, referral traffic,social mediatraffic, orpaid traffic.
Unpaid search engine traffic may originate from a variety of kinds of searches, includingimage search,video search,academic search,[3]news search, and industry-specificvertical searchengines.
As anInternet marketingstrategy, SEO considers how search engines work, the computer-programmedalgorithmsthat dictate search engine results, what people search for, the actual search queries orkeywordstyped into search engines, and which search engines are preferred by a target audience. SEO is performed because a website will receive more visitors from a search engine when websites rank higher within asearch engine results page(SERP), with the aim of either converting the visitors or building brand awareness.[4]
Webmastersand content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the earlyWeb. Initially, webmasters submitted the address of a page, orURLto the various search engines, which would send aweb crawlertocrawlthat page, extract links to other pages from it, and return information found on the page to beindexed.[5]
According to a 2004 article by former industry analyst and currentGoogleemployeeDanny Sullivan, the phrase "search engine optimization" came into use in 1997. Sullivan credits SEO practitioner Bruce Clay as one of the first people to popularize the term.[6]
Early versions of searchalgorithmsrelied on webmaster-provided information such as the keywordmeta tagor index files in engines likeALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Flawed data in meta tags, such as those that were inaccurate or incomplete, created the potential for pages to be mischaracterized in irrelevant searches.[7][dubious–discuss]
Web content providers also manipulated attributes within theHTMLsource of a page in an attempt to rank well in search engines.[8]By 1997, search engine designers recognized that webmasters were making efforts to rank in search engines and that some webmasters weremanipulating their rankingsin search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such asAltavistaandInfoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[9]
By relying on factors such askeyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure theirresults pagesshowed the most relevant search results, rather than unrelated pages with numerous keywords by unscrupulous webmasters. This meant moving away from heavy reliance on term density to a more holistic process for scoring semantic signals.[10]
Search engines responded by developing more complexranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.[citation needed]
Some search engines have also reached out to the SEO industry and are frequent sponsors and guests at SEO conferences, webchats, and seminars. Major search engines provide information and guidelines to help with website optimization.[11][12]Google has aSitemapsprogram to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[13]Bing Webmaster Toolsprovides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and track the web pages index status.
In 2015, it was reported thatGooglewas developing and promoting mobile search as a key feature within future products. In response, many brands began to take a different approach to their Internet marketing strategies.[14]
In 1998, two graduate students atStanford University,Larry PageandSergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm,PageRank, is a function of the quantity and strength ofinbound links.[15]PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
Page and Brin founded Google in 1998.[16]Google attracted a loyal following among the growing number ofInternetusers, who liked its simple design.[17]Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency,meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult togame, webmasters had already developed link-building tools and schemes to influence theInktomisearch engine, and these methods proved similarly applicable to gaming PageRank. Many sites focus on exchanging, buying, and selling links, often on a massive scale. Some of these schemes involved the creation of thousands of sites for the sole purpose oflink spamming.[18]
By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation.[19]The leading search engines, Google,Bing, andYahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization and have shared their personal opinions.[20]Patents related to search engines can provide information to better understand search engines.[21]In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users.[22]
In 2007, Google announced a campaign against paid links that transfer PageRank.[23]On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of thenofollowattribute on links.Matt Cutts, a well-known software engineer at Google, announced that Google Bot would no longer treat any no follow links, in the same way, to prevent SEO service providers from using nofollow for PageRank sculpting.[24]As a result of this change, the usage of nofollow led to evaporation of PageRank. In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscatedJavaScriptand thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the usage ofiframes,Flash, and JavaScript.[25]
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[26]On June 8, 2010 a new web indexing system calledGoogle Caffeinewas announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up quicker on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[27]Google Instant, real-time-search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[28]
In February 2011, Google announced thePandaupdate, which penalizes websites containing content duplicated from other websites and sources. Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice. However, Google implemented a new system that punishes sites whose content is not unique.[29]The 2012Google Penguinattempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.[30]Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it really focuses on spammy links[31]by gauging the quality of the sites the links are coming from. The 2013Google Hummingbirdupdate featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages. Hummingbird's language processing system falls under the newly recognized term of "conversational search", where the system pays more attention to each word in the query in order to better match the pages to the meaning of the query rather than a few words.[32]With regards to the changes made to search engine optimization, for content publishers and writers, Hummingbird is intended to resolve issues by getting rid of irrelevant content and spam, allowing Google to produce high-quality content and rely on them to be 'trusted' authors.
In October 2019, Google announced they would start applyingBERTmodels for English language search queries in the US. Bidirectional Encoder Representations from Transformers (BERT) was another attempt by Google to improve their natural language processing, but this time in order to better understand the search queries of their users.[33]In terms of search engine optimization, BERT intended to connect users more easily to relevant content and increase the quality of traffic coming to websites that are ranking in theSearch Engine Results Page.
The leading search engines, such as Google, Bing,Brave Searchand Yahoo!, usecrawlersto find pages for their algorithmic search results. Pages that are linked from other search engine-indexed pages do not need to be submitted because they are found automatically. TheYahoo! DirectoryandDMOZ, two major directories which closed in 2014 and 2017 respectively, both required manual submission and human editorial review.[34]Google offersGoogle Search Console, for which an XMLSitemapfeed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links[35]in addition to their URL submission console.[36]Yahoo! formerly operated a paid submission service that guaranteed to crawl for acost per click;[37]however, this practice was discontinued in 2009.
Search enginecrawlers may look at a number of different factors whencrawlinga site. Not every page is indexed by search engines. The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[38]
Mobile devices are used for the majority of Google searches.[39]In November 2016, Google announced a major change to the way they are crawling websites and started to make their index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in their index.[40]In May 2019, Google updated the rendering engine of their crawler to be the latest version of Chromium (74 at the time of the announcement). Google indicated that they would regularly update theChromiumrendering engine to the latest version.[41]In December 2019, Google began updating the User-Agent string of their crawler to reflect the latest Chrome version used by their rendering service. The delay was to allow webmasters time to update their code that responded to particular bot User-Agent strings. Google ran evaluations and felt confident the impact would be minor.[42]
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standardrobots.txtfile in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using ameta tagspecific to robots (usually <meta name="robots" content="noindex"> ). When a search engine visits a site, the robots.txt located in theroot directoryis the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish to crawl. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[43]
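As a small illustration of how these exclusions are interpreted, here is a hedged Python sketch using the standard library's urllib.robotparser; the robots.txt rules, crawler name, and URLs are hypothetical examples, not any search engine's actual configuration.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt disallowing internal search result pages.
rules = [
    "User-agent: *",
    "Disallow: /search",
]

rp = RobotFileParser()
rp.parse(rules)      # feed the rules directly instead of fetching them over HTTP
rp.modified()        # record that the rules have just been loaded

print(rp.can_fetch("MyCrawler", "https://www.example.com/search?q=shoes"))   # False: excluded
print(rp.can_fetch("MyCrawler", "https://www.example.com/products/shoes"))   # True: crawlable
```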
In 2020, Google sunsetted the standard (and open-sourced their code) and now treats it as a hint rather than a directive. To adequately ensure that pages are not indexed, a page-level robots meta tag should be included.[44]
A variety of methods can increase the prominence of a webpage within the search results.Cross linkingbetween pages of the same website to provide more links to important pages may improve its visibility. Page design makes users trust a site and want to stay once they find it. When people bounce off a site, it counts against the site and affects its credibility.[45]
Writing content that includes frequently searched keyword phrases so as to be relevant to a wide variety of search queries will tend to increase traffic. Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including thetitle tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic.URL canonicalizationof web pages accessible via multiple URLs, using thecanonical link element[46]or via301 redirectscan help make sure links to different versions of the URL all count towards the page's link popularity score. These are known as incoming links, which point to the URL and can count towards the page link's popularity score, impacting the credibility of a website.[45]
SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat"). Search engines attempt to minimize the effect of the latter, among themspamdexing. Industry commentators have classified these methods and the practitioners who employ them as eitherwhite hatSEO orblack hatSEO.[47]White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[48]
An SEO technique is considered a white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[11][12][49]are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online "spider" algorithms, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[50]although the two are not identical.
Black hat SEOattempts to improve rankings in ways that are disapproved of by the search engines or involve deception. One black hat technique uses hidden text, either as text colored similar to the background, in an invisiblediv, or positioned off-screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known ascloaking. Another category sometimes used isgrey hat SEO. This is in between the black hat and white hat approaches, where the methods employed avoid the site being penalized but do not act in producing the best content for users. Grey hat SEO is entirely focused on improving search engine rankings.
Search engines may penalize sites they discover using black or grey hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms or by a manual site review. One example was the February 2006 Google removal of bothBMWGermany andRicohGermany for the use of deceptive practices.[51]Both companies subsequently apologized, fixed the offending pages, and were restored to Google's search engine results page.[52]
Companies that employ black hat techniques or other spammy tactics can get their client websites banned from the search results. In 2005, theWall Street Journalreported on a company,Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[53]Wiredmagazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[54]Google'sMatt Cuttslater confirmed that Google had banned Traffic Power and some of its clients.[55]
SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay-per-click(PPC)campaigns, depending on the site operator's goals.[editorializing]Search engine marketing (SEM)is the practice of designing, running, and optimizing search engine ad campaigns. Its difference from SEO is most simply depicted as the difference between paid and unpaid priority ranking in search results. SEM focuses on prominence more so than relevance; website developers should regard SEM with the utmost importance with consideration to visibility as most navigate to the primary listings of their search.[56]A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade internet users, setting upanalyticsprograms to enable site owners to measure results, and improving a site'sconversion rate.[57][58]In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public,[59]which revealed a shift in their focus towards "usefulness" andmobile local search. In recent years the mobile market has exploded, overtaking the use of desktops, as shown in byStatCounterin October 2016, where they analyzed 2.5 million websites and found that 51.3% of the pages were loaded by a mobile device.[60]Google has been one of the companies that are utilizing the popularity of mobile usage by encouraging websites to use theirGoogle Search Console, the Mobile-Friendly Test, which allows companies to measure up their website to the search engine results and determine how user-friendly their websites are. The closer the keywords are together their ranking will improve based on key terms.[45]
SEO may generate an adequatereturn on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantee and uncertainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[61]Search engines can change their algorithms, impacting a website's search engine ranking, possibly resulting in a serious loss of traffic. According to Google's CEO,Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[62]It is considered a wise business practice for website operators to liberate themselves from dependence on search engine traffic.[63]In addition to accessibility in terms of web crawlers (addressed above), userweb accessibilityhas become increasingly important for SEO.
Optimization techniques are highly tuned to the dominant search engines in the target market.
The search engines' market shares vary from market to market, as does competition.
In 2003,Danny Sullivanstated thatGooglerepresented about 75% of all searches.[64]In markets outside the United States, Google's share is often larger, and data showed Google was the dominant search engine worldwide as of 2007.[65]As of 2006, Google had an 85–90% market share in Germany.[66]While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[66]As of March 2024, Google still had a significant market share of 89.85% in Germany.[67]As of June 2008, the market share of Google in the UK was close to 90% according toHitwise.[68][obsolete source]As of March 2024, Google's market share in the UK was 93.61%.[69]
Successful search engine optimization (SEO) for international markets requires more than just translating web pages. It may also involve registering a domain name with acountry-code top-level domain(ccTLD) or a relevanttop-level domain(TLD) for the target market, choosing web hosting with a local IP address or server, and using aContent Delivery Network(CDN) to improve website speed and performance globally. It is also important to understand the local culture so that the content feels relevant to the audience. This includes conducting keyword research for each market, using hreflang tags to target the right languages, and building local backlinks. However, the core SEO principles—such as creating high-quality content, improving user experience, and building links—remain the same, regardless of language or region.[66]
Regional search engines have a strong presence in specific markets:
By the early 2000s, businesses recognized that the web and search engines could help them reach global audiences. As a result, the need for multilingual SEO emerged.[74]In the early years of international SEO development, simple translation was seen as sufficient. However, over time, it became clear that localization and transcreation—adapting content to local language, culture, and emotional resonance—were far more effective than basic translation.[75]
On October 17, 2002, SearchKing filed suit in theUnited States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted atortious interferencewith contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[76][77]
In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007, theUnited States District Court for the Northern District of California(San JoseDivision) dismissed KinderStart's complaint without leave to amend and partially granted Google's motion forRule 11sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[78][79]
|
https://en.wikipedia.org/wiki/Search_engine_optimization
|
TrustRank is an algorithm that conducts link analysis to separate useful webpages from spam and helps search engines rank pages in SERPs (Search Engine Results Pages). It is a semi-automated process, which means that it needs some human assistance in order to function properly. Search engines have many different algorithms and ranking factors that they use when measuring the quality of webpages. TrustRank is one of them.
Because manual review of the Internet is impractical and very expensive, TrustRank was introduced in order to help achieve this task much more quickly and cheaply. It was first introduced by researchers Zoltan Gyongyi and Hector Garcia-Molina ofStanford Universityand Jan Pedersen ofYahoo!in their paper "Combating Web Spam with TrustRank" in 2004.[1]Today, this algorithm is a part of major web search engines like Yahoo! and Google.[2]
One of the most important factors that help a web search engine determine the quality of a web page when returning results is backlinks. Search engines take the number and quality of backlinks into consideration when assigning a place to a certain web page in SERPs. Many web spam pages are created only with the intention of misleading search engines. These pages, chiefly created for commercial reasons, use various techniques to achieve higher-than-deserved rankings in the search engines' result pages. While human experts can easily identify spam, search engines are still being improved daily in order to do it without the help of humans.
One popular method for improving rankings is to increase the perceived importance of a document through complex linking schemes.Google'sPageRankand other search ranking algorithms have been subjected to such manipulation.
TrustRank seeks to combat spam by filtering the web based upon reliability. The method calls for selecting a small set of seed pages to be evaluated by an expert. Once the reputable seed pages are manually identified, a crawl extending outward from the seed set seeks out similarly reliable and trustworthy pages. TrustRank's reliability diminishes with increased distance between documents and the seed set.
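The propagation step can be pictured as a PageRank-style iteration whose restart (teleport) distribution is concentrated on the manually vetted seed pages, so that trust decays with distance from the seed set. The sketch below is a simplified illustration under that assumption; the link graph, seed set, damping factor, and iteration count are invented, and refinements from the original paper (such as trust dampening and splitting) are omitted.

```python
import numpy as np

# Hypothetical link graph: links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [4], 4: [3]}
n = len(links)
seeds = {0}                         # pages a human reviewer marked as trustworthy

# Restart vector concentrated on the seed set.
d = np.zeros(n)
d[list(seeds)] = 1.0 / len(seeds)

# Column-stochastic transition matrix of the link graph.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

alpha, trust = 0.85, d.copy()
for _ in range(50):                 # power iteration biased toward the seeds
    trust = alpha * M @ trust + (1 - alpha) * d

print(np.round(trust, 3))           # higher scores = closer to the trusted seeds
```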
The logic works in the opposite way as well, which is called Anti-Trust Rank. The closer a site is to spam resources, the more likely it is to be spam as well.[3]
The researchers who proposed the TrustRank methodology have continued to refine their work by evaluating related topics, such as measuringspam mass.
|
https://en.wikipedia.org/wiki/TrustRank
|
VisualRank is a system for finding and ranking images by analysing and comparing their content, rather than searching image names, Web links or other text. Google scientists made their VisualRank work public in a paper describing the application of PageRank to Google image search, presented at the International World Wide Web Conference in Beijing in 2008.[1][2]
Bothcomputer visiontechniques andlocality-sensitive hashing(LSH) are used in the VisualRankalgorithm. Consider an image search initiated by a text query. An existing search technique based on image metadata and surrounding text is used to retrieve the initial result candidates (PageRank), which along with other images in the index are clustered in agraphaccording to their similarity (which is precomputed).Centralityis then measured on the clustering, which will return the most canonical image(s) with respect to the query. The idea here is that agreement between users of the web about the image and its related concepts will result in those images being deemed more similar. VisualRank is defined iteratively byVR=S∗×VR{\displaystyle VR=S^{*}\times VR}, whereS∗{\displaystyle S^{*}}is the image similarity matrix. As matrices are used,eigenvector centralitywill be the measure applied, with repeated multiplication ofVR{\displaystyle VR}andS∗{\displaystyle S^{*}}producing theeigenvectorwe're looking for. Clearly, the image similarity measure is crucial to the performance of VisualRank since it determines the underlying graph structure.
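A minimal sketch of that iteration follows, assuming a precomputed similarity matrix for a handful of candidate images; the numbers are invented, and in practice S would come from the local-feature matching described below.

```python
import numpy as np

# Hypothetical symmetric, nonnegative image-similarity matrix for four candidate images.
S = np.array([[0.0, 0.8, 0.6, 0.1],
              [0.8, 0.0, 0.7, 0.2],
              [0.6, 0.7, 0.0, 0.1],
              [0.1, 0.2, 0.1, 0.0]])

# Column-normalize so repeated multiplication behaves like a random walk (S* in the text).
S_star = S / S.sum(axis=0, keepdims=True)

vr = np.full(S.shape[0], 1.0 / S.shape[0])    # start from a uniform ranking
for _ in range(100):                          # VR <- S* x VR, repeated until it stabilizes
    vr = S_star @ vr

print(np.round(vr, 3))   # the image with the highest score is the most "canonical"
```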
The main VisualRank system begins with local feature vectors being extracted from images usingscale-invariant feature transform(SIFT). Local feature descriptors are used instead of color histograms as they allow similarity to be considered between images with potential rotation, scale, and perspective transformations. Locality-sensitive hashing is then applied to these feature vectors using thep-stable distribution scheme. In addition to this, LSH amplification using AND/OR constructions are applied. As part of the applied scheme, aGaussian distributionis used under theℓ2{\displaystyle \ell _{2}}norm.
|
https://en.wikipedia.org/wiki/VisualRank
|
Thewebgraphdescribes the directed links between pages of theWorld Wide Web. Agraph, in general, consists of several vertices, some pairs connected by edges. In adirected graph, edges are directed lines or arcs. The webgraph is a directed graph, whose vertices correspond to the pages of the WWW, and a directed edge connects page X to page Y if there exists ahyperlinkon page X, referring to page Y.[1]
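As a toy illustration of this structure in code, here is a small Python sketch; the page URLs and links are hypothetical.

```python
# A toy webgraph as an adjacency mapping: an edge X -> Y means page X hyperlinks to page Y.
webgraph = {
    "example.com/index": {"example.com/about", "news.example.org/"},
    "example.com/about": {"example.com/index"},
    "news.example.org/": {"example.com/index"},
}

# Out-degree (links on a page) and in-degree (backlinks to a page).
out_degree = {page: len(targets) for page, targets in webgraph.items()}
in_degree = {page: sum(page in targets for targets in webgraph.values()) for page in webgraph}
print(out_degree)
print(in_degree)
```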
The webgraph is used for:
|
https://en.wikipedia.org/wiki/Webgraph
|
Sequential pattern miningis a topic ofdata miningconcerned with finding statistically relevant patterns between data examples where the values are delivered in a sequence.[1][2]It is usually presumed that the values are discrete, and thustime seriesmining is closely related, but usually considered a different activity. Sequential pattern mining is a special case ofstructured data mining.
There are several key traditional computational problems addressed within this field. These include building efficient databases and indexes for sequence information, extracting the frequently occurring patterns, comparing sequences forsimilarity, and recovering missing sequence members. In general, sequence mining problems can be classified asstring miningwhich is typically based onstring processing algorithmsanditemset miningwhich is typically based onassociation rule learning.Local process models[3]extend sequential pattern mining to more complex patterns that can include (exclusive) choices, loops, and concurrency constructs in addition to the sequential ordering construct.
String mining typically deals with a limitedalphabetfor items that appear in asequence, but the sequence itself may be typically very long. Examples of an alphabet can be those in theASCIIcharacter set used in natural language text,nucleotidebases 'A', 'G', 'C' and 'T' inDNA sequences, oramino acidsforprotein sequences. Inbiologyapplications analysis of the arrangement of the alphabet in strings can be used to examinegeneandproteinsequences to determine their properties. Knowing the sequence of letters of aDNAor aproteinis not an ultimate goal in itself. Rather, the major task is to understand the sequence, in terms of its structure andbiological function. This is typically achieved first by identifying individual regions or structural units within each sequence and then assigning a function to each structural unit. In many cases this requires comparing a given sequence with previously studied ones. The comparison between the strings becomes complicated wheninsertions,deletionsandmutationsoccur in a string.
A survey and taxonomy of the key algorithms for sequence comparison for bioinformatics is presented by Abouelhoda & Ghanem (2010), which include:[4]
Some problems in sequence mining lend themselves to discovering frequent itemsets and the order they appear, for example, one is seeking rules of the form "if a {customer buys a car}, he or she is likely to {buy insurance} within 1 week", or in the context of stock prices, "if {Nokia up and Ericsson up}, it is likely that {Motorola up and Samsung up} within 2 days". Traditionally, itemset mining is used in marketing applications for discovering regularities between frequently co-occurring items in large transactions. For example, by analysing transactions of customer shopping baskets in a supermarket, one can produce a rule which reads "if a customer buys onions and potatoes together, he or she is likely to also buy hamburger meat in the same transaction".
A survey and taxonomy of the key algorithms for item set mining is presented by Han et al. (2007).[5]
The two common techniques that are applied to sequence databases forfrequent itemsetmining are the influentialapriori algorithmand the more-recentFP-growthtechnique.
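As a rough illustration of the itemset-mining side, here is a compact, unoptimized apriori-style sketch in Python; the shopping-basket data is invented, and the code mines plain frequent itemsets rather than the ordered sequential patterns discussed above.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset that appears in at least min_support transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {item for t in transactions for item in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent, candidates = {}, [frozenset([item]) for item in items]
    while candidates:
        # Keep only the candidates that meet the support threshold.
        kept = {s: support(s) for s in candidates if support(s) >= min_support}
        frequent.update(kept)
        # Join frequent k-itemsets into (k+1)-itemset candidates for the next pass.
        candidates = {a | b for a, b in combinations(kept, 2) if len(a | b) == len(a) + 1}
    return frequent

baskets = [{"onions", "potatoes", "hamburger"},
           {"onions", "potatoes", "beer"},
           {"potatoes", "hamburger"},
           {"onions", "hamburger"}]
print(apriori(baskets, min_support=2))
```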
With a great variation of products and user buying behaviors, the shelf on which products are displayed is one of the most important resources in a retail environment. Retailers can not only increase their profit but also decrease costs through proper management of shelf space allocation and product display. To solve this problem, George and Binu (2013) proposed an approach to mine user buying patterns using the PrefixSpan algorithm and to place products on shelves based on the order of the mined purchasing patterns.[6]
Commonly used algorithms include:
|
https://en.wikipedia.org/wiki/Sequence_mining
|
Aproduction system(orproduction rule system) is acomputer programtypically used to provide some form ofartificial intelligence, which consists primarily of a set of rules about behavior, but also includes the mechanism necessary to follow those rules as the system responds to states of the world.[citation needed]Those rules, termedproductions, are a basicknowledge representationfound useful inautomated planning and scheduling,expert systems, andaction selection.
Productions consist of two parts: a sensory precondition (or "IF" statement) and an action ("THEN"). If a production's precondition matches the currentstateof the world, then the production is said to betriggered. If a production's action isexecuted, it hasfired. A production system also contains a database, sometimes calledworking memory, which maintains data about the current state or knowledge, and a rule interpreter. The rule interpreter must provide a mechanism for prioritizing productions when more than one is triggered.[citation needed]
Rule interpreters generally execute aforward chainingalgorithm for selecting productions to execute to meet current goals, which can include updating the system's data orbeliefs. The condition portion of each rule (left-hand sideor LHS) is tested against the current state of the working memory.
In idealized or data-oriented production systems, there is an assumption that any triggered conditions should be executed: the consequent actions (right-hand side or RHS) will update the agent's knowledge, removing or adding data to the working memory. The system stops processing when the user interrupts the forward chaining loop, when a given number of cycles has been performed, when a "halt" RHS is executed, or when no rules have LHSs that are true.
Real-time and expert systems, in contrast, often have to choose between mutually exclusive productions—since actions take time, only one action can be taken, or (in the case of an expert system) recommended. In such systems, the rule interpreter, orinference engine, cycles through two steps: matching production rules against the database, followed by selecting which of the matched rules to apply and executing the selected actions.
Production systems may vary on theexpressive powerof conditions in production rules. Accordingly, thepattern matchingalgorithm that collects production rules with matched conditions may range from the naive—trying all rules in sequence, stopping at the first match—to the optimized, in which rules are "compiled" into a network of inter-related conditions.
The latter is illustrated by the Rete algorithm, designed by Charles L. Forgy in 1974,[1] which is used in a series of production systems, called OPS and originally developed at Carnegie Mellon University, culminating in OPS5 in the early 1980s. OPS5 may be viewed as a full-fledged programming language for production system programming.
Production systems may also differ in the final selection of production rules to execute, orfire. The collection of rules resulting from the previous matching algorithm is called theconflict set, and the selection process is also called aconflict resolution strategy.
Here again, such strategies may vary from the simple—use the order in which production rules were written; assign weights or priorities to production rules and sort the conflict set accordingly—to the complex—sort the conflict set according to the times at which production rules were previously fired; or according to the extent of the modifications induced by their RHSs. Whichever conflict resolution strategy is implemented, the method is indeed crucial to the efficiency and correctness of the production system. Some systems simply fire all matching productions.
The use of production systems varies from simplestringrewritingrules to the modeling of human cognitive processes, from term rewriting and reduction systems toexpert systems.
This example shows a set of production rules for reversing a string from an alphabet that does not contain the symbols "$" and "*" (which are used as marker symbols).
In this example, production rules are chosen for testing according to their order in this production list. For each rule, the input string is examined from left to right with a moving window to find a match with the LHS of the production rule. When a match is found, the matched substring in the input string is replaced with the RHS of the production rule. In this production system, x and y arevariablesmatching any character of the input string alphabet. Matching resumes with P1 once the replacement has been made.
The string "ABC", for instance, undergoes the following sequence of transformations under these production rules:
In such a simple system, the ordering of the production rules is crucial. Often, the lack of control structure makes production systems difficult to design. It is, of course, possible to add control structure to the production systems model, namely in the inference engine, or in the working memory.
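Because the original rule list is not reproduced above, the following Python sketch uses one possible rule set over the markers "$" and "*" (an illustration of the mechanism, not necessarily the article's exact rules). It tries the rules in priority order, applies the first one that matches at the leftmost position, and resumes matching from the first rule after every replacement, as described above.

```python
import re

# One possible rule set for reversing a string over the alphabet A-Z,
# using "$" as a carrier marker and "*" as a boundary marker (illustrative only).
RULES = [
    (r"\$([A-Z])([A-Z])", r"\2$\1"),  # P1: $xy -> y$x   the carrier moves right past a letter
    (r"\$([A-Z])",        r"\1"),     # P2: $x  -> x     the carrier reached the end: drop the marker
    (r"([A-Z])\*",        r"*$\1"),   # P3: x*  -> *$x   pick up the letter to the left of the boundary
]

def reverse_by_productions(s: str) -> str:
    s = s + "*"                                           # the driver seeds the boundary marker
    while True:
        for pattern, replacement in RULES:                # rules are tried in priority order
            s, fired = re.subn(pattern, replacement, s, count=1)  # leftmost match only
            if fired:
                break                                     # matching resumes with P1
        else:
            return s.replace("*", "")                     # no rule matched: halt

print(reverse_by_productions("ABC"))  # -> CBA
```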
In a toy simulation world where a monkey in a room can grab different objects and climb on others, an example production rule to grab an object suspended from the ceiling would look like:
In this example, data in working memory is structured and variables appear between angle brackets. The name of the data structure, such as "goal" and "physical-object", is the first literal in conditions; the fields of a structure are prefixed with "^". The "-" indicates a negative condition.
Production rules in OPS5 apply to all instances of data structures that match conditions and conform to variable bindings. In this example, should several objects be suspended from the ceiling, each with a different ladder nearby supporting an empty-handed monkey, the conflict set would contain as many production rule instances derived from the same production "Holds::Object-Ceiling". The conflict resolution step would later select which production instances to fire.
The binding of variables resulting from the pattern matching in the LHS is used in the RHS to refer to the data to be modified. The working memory contains explicit control structure data in the form of "goal" data structure instances. In the example, once a monkey holds the suspended object, the status of the goal is set to "satisfied" and the same production rule can no longer apply as its first condition fails.
BothRussellandNorvig'sArtificial Intelligence: A Modern ApproachandJohn Sowa'sKnowledge Representation: Logical, Philosophical, and Computational Foundationscharacterize production systems as systems oflogicthat perform reasoning by means of forward chaining. However,Stewart Shapiro, reviewing Sowa's book, argues that this is a misrepresentation.[2]Similarly,Kowalskiand Sadri[3]argue that, because actions in production systems are understood as imperatives, production systems do not have a logical semantics. Their logic and computer language Logic Production System[4](LPS) combines logic programs, interpreted as an agent's beliefs, with reactive rules, interpreted as an agent's goals. They argue that reactive rules in LPS give a logical semantics to production rules, which they otherwise lack. In the following example, lines 1-3 are type declarations, 4 describes the initial state, 5 is a reactive rule, 6-7 are logic program clauses, and 8 is a causal law:
Notice in this example that the reactive rule on line 5 is triggered, just like a production rule, but this time its conclusion deal_with_fire becomes a goal to be reduced to sub-goals using the logic programs on lines 6-7. These subgoals are actions (line 2), at least one of which needs to be executed to satisfy the goal.
|
https://en.wikipedia.org/wiki/Production_system_(computer_science)
|
Learning classifier systems, orLCS, are a paradigm ofrule-based machine learningmethods that combine a discovery component (e.g. typically agenetic algorithminevolutionary computation) with a learning component (performing eithersupervised learning,reinforcement learning, orunsupervised learning).[2]Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in apiecewisemanner in order to make predictions (e.g.behavior modeling,[3]classification,[4][5]data mining,[5][6][7]regression,[8]function approximation,[9]orgame strategy). This approach allows complexsolution spacesto be broken up into smaller, simpler parts for the reinforcement learning that is inside artificial intelligence research.
The founding concepts behind learning classifier systems came from attempts to modelcomplex adaptive systems, using rule-based agents to form an artificial cognitive system (i.e.artificial intelligence).
The architecture and components of a given learning classifier system can be quite variable. It is useful to think of an LCS as a machine consisting of several interacting components. Components may be added or removed, or existing components modified/exchanged to suit the demands of a given problem domain (like algorithmic building blocks) or to make the algorithm flexible enough to function in many different problem domains. As a result, the LCS paradigm can be flexibly applied to many problem domains that call formachine learning. The major divisions among LCS implementations are as follows: (1) Michigan-style architecture vs. Pittsburgh-style architecture,[10](2)reinforcement learningvs.supervised learning, (3) incremental learning vs. batch learning, (4)online learningvs.offline learning, (5) strength-based fitness vs. accuracy-based fitness, and (6) complete action mapping vs best action mapping. These divisions are not necessarily mutually exclusive. For example, XCS,[11]the best known and best studied LCS algorithm, is Michigan-style, was designed for reinforcement learning but can also perform supervised learning, applies incremental learning that can be either online or offline, applies accuracy-based fitness, and seeks to generate a complete action mapping.
Keeping in mind that LCS is a paradigm for genetic-based machine learning rather than a specific method, the following outlines key elements of a generic, modern (i.e. post-XCS) LCS algorithm. For simplicity let us focus on Michigan-style architecture with supervised learning. See the illustrations on the right laying out the sequential steps involved in this type of generic LCS.
The environment is the source of data upon which an LCS learns. It can be an offline, finitetraining dataset(characteristic of adata mining,classification, or regression problem), or an online sequential stream of live training instances. Each training instance is assumed to include some number offeatures(also referred to asattributes, orindependent variables), and a singleendpointof interest (also referred to as theclass,action,phenotype,prediction, ordependent variable). Part of LCS learning can involvefeature selection, therefore not all of the features in the training data need to be informative. The set of feature values of an instance is commonly referred to as thestate. For simplicity let's assume an example problem domain withBoolean/binaryfeatures and aBoolean/binaryclass. For Michigan-style systems, one instance from the environment is trained on each learning cycle (i.e. incremental learning). Pittsburgh-style systems perform batch learning, where rule sets are evaluated in each iteration over much or all of the training data.
A rule is a context-dependent relationship between state values and some prediction. Rules typically take the form of an {IF:THEN} expression (e.g. {IF 'condition' THEN 'action'}, or as a more specific example, {IF 'red' AND 'octagon' THEN 'stop-sign'}). A critical concept in LCS and rule-based machine learning alike is that an individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Think of a rule as a "local-model" of the solution space.
Rules can be represented in many different ways to handle different data types (e.g. binary, discrete-valued, ordinal, continuous-valued). Given binary data LCS traditionally applies a ternary rule representation (i.e. rules can include either a 0, 1, or '#' for each feature in the data). The 'don't care' symbol (i.e. '#') serves as a wild card within a rule's condition allowing rules, and the system as a whole to generalize relationships between features and the target endpoint to be predicted. Consider the following rule (#1###0 ~ 1) (i.e. condition ~ action). This rule can be interpreted as: IF the second feature = 1 AND the sixth feature = 0 THEN the class prediction = 1. We would say that the second and sixth features were specified in this rule, while the others were generalized. This rule, and the corresponding prediction are only applicable to an instance when the condition of the rule is satisfied by the instance. This is more commonly referred to as matching. In Michigan-style LCS, each rule has its own fitness, as well as a number of other rule-parameters associated with it that can describe the number of copies of that rule that exist (i.e. thenumerosity), the age of the rule, its accuracy, or the accuracy of its reward predictions, and other descriptive or experiential statistics. A rule along with its parameters is often referred to as aclassifier. In Michigan-style systems, classifiers are contained within apopulation[P] that has a user defined maximum number of classifiers. Unlike moststochasticsearch algorithms (e.g.evolutionary algorithms), LCS populations start out empty (i.e. there is no need to randomly initialize a rule population). Classifiers will instead be initially introduced to the population with a covering mechanism.
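As an illustration of this ternary representation, wildcard matching can be sketched in a few lines of Python. This is a hypothetical helper, not code from any particular LCS implementation:

def matches(condition: str, state: str) -> bool:
    """A rule applies when every specified (non-'#') position equals the corresponding state bit."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

# The rule (#1###0 ~ 1) from the text: features 2 and 6 are specified, the rest are generalized.
assert matches("#1###0", "010110")       # second feature = 1 and sixth feature = 0, so the rule applies
assert not matches("#1###0", "000110")   # second feature = 0, so the rule does not apply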
In any LCS, the trained model is a set of rules/classifiers, rather than any single rule/classifier. In Michigan-style LCS, the entire trained (and optionally, compacted) classifier population forms the prediction model.
One of the most critical and often time-consuming elements of an LCS is the matching process. The first step in an LCS learning cycle takes a single training instance from the environment and passes it to [P] where matching takes place. In step two, every rule in [P] is now compared to the training instance to see which rules match (i.e. are contextually relevant to the current instance). In step three, any matching rules are moved to amatch set[M]. A rule matches a training instance if all feature values specified in the rule condition are equivalent to the corresponding feature value in the training instance. For example, assuming the training instance is (001001 ~ 0), these rules would match: (###0## ~ 0), (00###1 ~ 0), (#01001 ~ 1), but these rules would not (1##### ~ 0), (000##1 ~ 0), (#0#1#0 ~ 1). Notice that in matching, the endpoint/action specified by the rule is not taken into consideration. As a result, the match set may contain classifiers that propose conflicting actions. In the fourth step, since we are performing supervised learning, [M] is divided into a correct set [C] and an incorrect set [I]. A matching rule goes into the correct set if it proposes the correct action (based on the known action of the training instance), otherwise it goes into [I]. In reinforcement learning LCS, an action set [A] would be formed here instead, since the correct action is not known.
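A rough sketch of how [M], [C], and [I] might be formed for the training instance above; the rule conditions and actions are copied from the example, while the data structures themselves are hypothetical:

def matches(condition, state):
    return all(c == '#' or c == s for c, s in zip(condition, state))

population = [("###0##", "0"), ("00###1", "0"), ("#01001", "1"),
              ("1#####", "0"), ("000##1", "0"), ("#0#1#0", "1")]
state, endpoint = "001001", "0"                                     # the training instance (001001 ~ 0)

match_set     = [r for r in population if matches(r[0], state)]     # [M]: the first three rules
correct_set   = [r for r in match_set if r[1] == endpoint]          # [C]: matching rules proposing action 0
incorrect_set = [r for r in match_set if r[1] != endpoint]          # [I]: matching rules proposing action 1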
At this point in the learning cycle, if no classifiers made it into either [M] or [C] (as would be the case when the population starts off empty), the covering mechanism is applied (fifth step). Covering is a form ofonline smart population initialization. Covering randomly generates a rule that matches the current training instance (and in the case of supervised learning, that rule is also generated with the correct action). Assuming the training instance is (001001 ~ 0), covering might generate any of the following rules: (#0#0## ~ 0), (001001 ~ 0), (#010## ~ 0). Covering not only ensures that in each learning cycle there is at least one correct, matching rule in [C], but also that any rule initialized into the population will match at least one training instance. This prevents LCS from exploring the search space of rules that do not match any training instances.
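Covering can be sketched as follows; the wildcard probability is a hypothetical parameter, which real systems typically expose as a user-configurable setting:

import random

def cover(state: str, correct_action: str, p_wildcard: float = 0.5):
    """Generate a rule guaranteed to match `state`; in supervised LCS it carries the correct action."""
    condition = "".join('#' if random.random() < p_wildcard else bit for bit in state)
    return (condition, correct_action)

# For the instance (001001 ~ 0) this might produce ('#0#0##', '0'), ('001001', '0'), ('#010##', '0'), etc.
print(cover("001001", "0"))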
In the sixth step, the rule parameters of any rule in [M] are updated to reflect the new experience gained from the current training instance. Depending on the LCS algorithm, a number of updates can take place at this step. For supervised learning, we can simply update the accuracy/error of a rule. Rule accuracy/error is different than model accuracy/error, since it is not calculated over the entire training data, but only over all instances that it matched. Rule accuracy is calculated by dividing the number of times the rule was in a correct set [C] by the number of times it was in a match set [M]. Rule accuracy can be thought of as a 'local accuracy'. Rule fitness is also updated here, and is commonly calculated as a function of rule accuracy. The concept of fitness is taken directly from classicgenetic algorithms. Be aware that there are many variations on how LCS updates parameters in order to perform credit assignment and learning.
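A minimal sketch of this update, with accuracy computed as correct-set count over match-set count and fitness as a power function of accuracy; the exponent nu is a hypothetical choice, since LCS variants update fitness in many different ways:

class Classifier:
    def __init__(self, condition, action):
        self.condition, self.action = condition, action
        self.match_count = 0      # number of times this rule has been in [M]
        self.correct_count = 0    # number of times this rule has been in [C]
        self.accuracy = 0.0
        self.fitness = 0.0

def update_parameters(rule, in_correct_set, nu=5.0):
    """Supervised-learning style update of a rule's 'local' accuracy and fitness."""
    rule.match_count += 1
    if in_correct_set:
        rule.correct_count += 1
    rule.accuracy = rule.correct_count / rule.match_count
    rule.fitness = rule.accuracy ** nu   # nu is a hypothetical exponent controlling fitness pressure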
In the seventh step, asubsumptionmechanism is typically applied. Subsumption is an explicit generalization mechanism that merges classifiers that cover redundant parts of the problem space. The subsuming classifier effectively absorbs the subsumed classifier (and has its numerosity increased). This can only happen when the subsuming classifier is more general, just as accurate, and covers all of the problem space of the classifier it subsumes.
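One way to sketch the subsumption test; this is simplified, since real systems typically also require the subsuming classifier to be sufficiently experienced:

def is_more_general(general: str, specific: str) -> bool:
    """True if `general` covers every state covered by `specific` and is not identical to it."""
    return general != specific and all(g == '#' or g == s for g, s in zip(general, specific))

def try_subsume(subsumer: dict, candidate: dict) -> bool:
    """Fold `candidate` into `subsumer` when the subsumer is more general, same action, and as accurate."""
    if (subsumer["action"] == candidate["action"]
            and subsumer["accuracy"] >= candidate["accuracy"]
            and is_more_general(subsumer["condition"], candidate["condition"])):
        subsumer["numerosity"] += candidate["numerosity"]   # absorb the subsumed rule's copies
        return True                                          # caller then removes `candidate` from [P]
    return False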
In the eighth step, LCS adopts a highly elitistgenetic algorithm(GA) which will select two parent classifiers based on fitness (survival of the fittest). Parents are typically selected from [C] usingtournament selection. Some systems have appliedroulette wheel selectionor deterministic selection, and some instead select parent rules from [P] (panmictic selection) or from [M].Crossoverandmutationoperators are now applied to generate two new offspring rules. At this point, both the parent and offspring rules are returned to [P]. The LCSgenetic algorithmis highly elitist since, in each learning iteration, the vast majority of the population is preserved. Rule discovery may alternatively be performed by some other method, such as anestimation of distribution algorithm, but a GA is by far the most common approach. Evolutionary algorithms like the GA employ a stochastic search, which makes LCS a stochastic algorithm. LCS seeks to cleverly explore the search space, but does not perform an exhaustive search of rule combinations, and is not guaranteed to converge on an optimal solution.
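The selection and variation operators might be sketched like this; the tournament size, crossover style, and mutation rate are hypothetical choices rather than values fixed by the LCS paradigm:

import random

def tournament_select(correct_set, size=3):
    """Return the fittest of `size` rules sampled at random from [C]."""
    contenders = random.sample(correct_set, min(size, len(correct_set)))
    return max(contenders, key=lambda cl: cl["fitness"])

def uniform_crossover(cond_a: str, cond_b: str):
    """Each position of the two offspring conditions is taken from one parent or the other."""
    a, b = list(cond_a), list(cond_b)
    for i in range(len(a)):
        if random.random() < 0.5:
            a[i], b[i] = b[i], a[i]
    return "".join(a), "".join(b)

def mutate(condition: str, state: str, mu: float = 0.04):
    """Niche mutation: toggle between '#' and the current state bit so the offspring still matches."""
    return "".join(('#' if c != '#' else s) if random.random() < mu else c
                   for c, s in zip(condition, state))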
The last step in a generic LCS learning cycle is to maintain the maximum population size. The deletion mechanism will select classifiers for deletion (commonly using roulette wheel selection). The probability of a classifier being selected for deletion is inversely proportional to its fitness. When a classifier is selected for deletion, its numerosity parameter is reduced by one. When the numerosity of a classifier is reduced to zero, it is removed entirely from the population.
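A sketch of roulette-wheel deletion that uses the inverse of fitness, weighted by numerosity, as the deletion vote; actual implementations usually compute a more elaborate deletion vote:

import random

def enforce_population_limit(population, max_size):
    """Remove classifier copies until total numerosity no longer exceeds the user-defined maximum."""
    while sum(cl["numerosity"] for cl in population) > max_size:
        votes = [cl["numerosity"] / (cl["fitness"] + 1e-9) for cl in population]  # low fitness, high vote
        chosen = random.choices(population, weights=votes, k=1)[0]
        chosen["numerosity"] -= 1
        if chosen["numerosity"] == 0:
            population.remove(chosen)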
LCS will cycle through these steps repeatedly for some user defined number of training iterations, or until some user defined termination criteria have been met. For online learning, LCS will obtain a completely new training instance each iteration from the environment. For offline learning, LCS will iterate through a finite training dataset. Once it reaches the last instance in the dataset, it will go back to the first instance and cycle through the dataset again.
Once training is complete, the rule population will inevitably contain some poor, redundant and inexperienced rules. It is common to apply arule compaction, orcondensation, heuristic as a post-processing step. The resulting compacted rule population is ready to be applied as a prediction model (e.g. to make predictions on testing instances), and/or to be interpreted forknowledge discovery.
Whether or not rule compaction has been applied, the output of an LCS algorithm is a population of classifiers which can be applied to making predictions on previously unseen instances. The prediction mechanism is not part of the supervised LCS learning cycle itself; however, it would play an important role in a reinforcement learning LCS learning cycle. For now we consider how the prediction mechanism can be applied to make predictions on test data. When making predictions, the LCS learning components are deactivated so that the population does not continue to learn from incoming testing data. A test instance is passed to [P] where a match set [M] is formed as usual. At this point the match set is instead passed to a prediction array. Rules in the match set can predict different actions, therefore a voting scheme is applied. In a simple voting scheme, the action with the strongest supporting 'votes' from matching rules wins, and becomes the selected prediction. Not all rules get an equal vote; rather, the strength of the vote for a single rule is commonly proportional to its numerosity and fitness. This voting scheme, and the nature of how LCSs store knowledge, suggests that LCS algorithms are implicitlyensemble learners.
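A simple fitness- and numerosity-weighted voting scheme might look like this sketch:

from collections import defaultdict

def predict(match_set):
    """Return the action with the largest fitness * numerosity vote among the matching rules."""
    votes = defaultdict(float)
    for cl in match_set:
        votes[cl["action"]] += cl["fitness"] * cl["numerosity"]
    return max(votes, key=votes.get) if votes else None   # None if no rule matched the test instance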
Individual LCS rules are typically human-readable IF:THEN expressions. Rules that constitute the LCS prediction model can be ranked by different rule parameters and manually inspected. Global strategies to guide knowledge discovery using statistical and graphical methods have also been proposed.[12][13]Compared with other advanced machine learning approaches, such asartificial neural networks,random forests, orgenetic programming, learning classifier systems are particularly well suited to problems that require interpretable solutions.
John Henry Hollandwas best known for his work popularizinggenetic algorithms(GA), through his ground-breaking book "Adaptation in Natural and Artificial Systems"[14]in 1975 and his formalization ofHolland's schema theorem. In 1976, Holland conceptualized an extension of the GA concept to what he called a "cognitive system",[15]and provided the first detailed description of what would become known as the first learning classifier system in the paper "Cognitive Systems based on Adaptive Algorithms".[16]This first system, namedCognitive System One (CS-1)was conceived as a modeling tool, designed to model a real system (i.e.environment) with unknown underlying dynamics using a population of human readable rules. The goal was for a set of rules to performonline machine learningto adapt to the environment based on infrequent payoff/reward (i.e. reinforcement learning) and apply these rules to generate a behavior that matched the real system. This early, ambitious implementation was later regarded as overly complex, yielding inconsistent results.[2][17]
Beginning in 1980,Kenneth de Jongand his student Stephen Smith took a different approach to rule-based machine learning with(LS-1), where learning was viewed as an offline optimization process rather than an online adaptation process.[18][19][20]This new approach was more similar to a standard genetic algorithm but evolved independent sets of rules. Since that time LCS methods inspired by the online learning framework introduced by Holland at the University of Michigan have been referred to asMichigan-style LCS, and those inspired by Smith and De Jong at the University of Pittsburgh have been referred to asPittsburgh-style LCS.[2][17]In 1986, Holland developed what would be considered the standard Michigan-style LCS for the next decade.[21]
Other important concepts that emerged in the early days of LCS research included (1) the formalization of abucket brigade algorithm(BBA) for credit assignment/learning,[22](2) selection of parent rules from a common 'environmental niche' (i.e. thematch set[M]) rather than from the wholepopulation[P],[23](3)covering, first introduced as acreateoperator,[24](4) the formalization of anaction set[A],[24](5) a simplified algorithm architecture,[24](6)strength-based fitness,[21](7) consideration of single-step, or supervised learning problems[25]and the introduction of thecorrect set[C],[26](8)accuracy-based fitness,[27](9) the combination of fuzzy logic with LCS[28](which later spawned a lineage offuzzy LCS algorithms), (10) encouraginglong action chainsanddefault hierarchiesfor improving performance on multi-step problems,[29][30][31](11) examininglatent learning(which later inspired a new branch ofanticipatory classifier systems(ACS)[32]), and (12) the introduction of the firstQ-learning-like credit assignment technique.[33]While not all of these concepts are applied in modern LCS algorithms, each was a landmark in the development of the LCS paradigm.
Interest in learning classifier systems was reinvigorated in the mid-1990s largely due to two events: the development of theQ-Learningalgorithm[34]forreinforcement learning, and the introduction of significantly simplified Michigan-style LCS architectures by Stewart Wilson.[11][35]Wilson'sZeroth-level Classifier System (ZCS)[35]focused on increasing algorithmic understandability based on Holland's standard LCS implementation.[21]This was done, in part, by removing rule-bidding and the internal message list, essential to the original BBA credit assignment, and replacing it with a hybrid BBA/Q-Learningstrategy. ZCS demonstrated that a much simpler LCS architecture could perform as well as the original, more complex implementations. However, ZCS still suffered from performance drawbacks including the proliferation of over-general classifiers.
In 1995, Wilson published his landmark paper, "Classifier fitness based on accuracy", in which he introduced the classifier systemXCS.[11]XCS took the simplified architecture of ZCS and added an accuracy-based fitness, a niche GA (acting in the action set [A]), an explicit generalization mechanism calledsubsumption, and an adaptation of theQ-Learningcredit assignment. XCS was popularized by its ability to reach optimal performance while evolving accurate and maximally general classifiers as well as its impressive problem flexibility (able to perform bothreinforcement learningandsupervised learning). XCS later became the best known and most studied LCS algorithm and defined a new family ofaccuracy-based LCS. ZCS alternatively became synonymous withstrength-based LCS. XCS is also important because it successfully bridged the gap between LCS and the field ofreinforcement learning. Following the success of XCS, LCS were later described as reinforcement learning systems endowed with a generalization capability.[36]Reinforcement learningtypically seeks to learn a value function that maps out a complete representation of the state/action space. Similarly, the design of XCS drives it to form an all-inclusive and accurate representation of the problem space (i.e. acomplete map) rather than focusing on high payoff niches in the environment (as was the case with strength-based LCS). Conceptually, complete maps capture not only what you should do, or what is correct, but also what you should not do, or what is incorrect. By contrast, most strength-based LCSs, or exclusively supervised learning LCSs, seek a rule set of efficient generalizations in the form of abest action map(or apartial map). Comparisons between strength vs. accuracy-based fitness and complete vs. best action maps have since been examined in greater detail.[37][38]
XCS inspired the development of a whole new generation of LCS algorithms and applications. In 1995, Congdon was the first to apply LCS to real-worldepidemiologicalinvestigations of disease[39]followed closely by Holmes who developed theBOOLE++,[40]EpiCS,[41]and laterEpiXCS[42]forepidemiologicalclassification. These early works inspired later interest in applying LCS algorithms to complex and large-scaledata miningtasks epitomized bybioinformaticsapplications. In 1998, Stolzmann introducedanticipatory classifier systems (ACS)which included rules in the form of 'condition-action-effect, rather than the classic 'condition-action' representation.[32]ACS was designed to predict the perceptual consequences of an action in all possible situations in an environment. In other words, the system evolves a model that specifies not only what to do in a given situation, but also provides information of what will happen after a specific action will be executed. This family of LCS algorithms is best suited to multi-step problems, planning, speeding up learning, or disambiguating perceptual aliasing (i.e. where the same observation is obtained in distinct states but requires different actions). Butz later pursued this anticipatory family of LCS developing a number of improvements to the original method.[43]In 2002, Wilson introducedXCSF, adding a computed action in order to perform function approximation.[44]In 2003, Bernado-Mansilla introduced asUpervised Classifier System (UCS), which specialized the XCS algorithm to the task ofsupervised learning, single-step problems, and forming a best action set. UCS removed thereinforcement learningstrategy in favor of a simple, accuracy-based rule fitness as well as the explore/exploit learning phases, characteristic of many reinforcement learners. Bull introduced a simple accuracy-based LCS(YCS)[45]and a simple strength-based LCSMinimal Classifier System (MCS)[46]in order to develop a better theoretical understanding of the LCS framework. Bacardit introducedGAssist[47]andBioHEL,[48]Pittsburgh-style LCSs designed fordata miningandscalabilityto large datasets inbioinformaticsapplications. In 2008, Drugowitsch published the book titled "Design and Analysis of Learning Classifier Systems" including some theoretical examination of LCS algorithms.[49]Butz introduced the first rule online learning visualization within aGUIfor XCSF[1](see the image at the top of this page). Urbanowicz extended the UCS framework and introducedExSTraCS,explicitly designed forsupervised learningin noisy problem domains (e.g. 
epidemiology and bioinformatics).[50]ExSTraCS integrated (1) expert knowledge to drive covering and genetic algorithm towards important features in the data,[51](2) a form of long-term memory referred to as attribute tracking,[52]allowing for more efficient learning and the characterization of heterogeneous data patterns, and (3) a flexible rule representation similar to Bacardit's mixed discrete-continuous attribute list representation.[53]Both Bacardit and Urbanowicz explored statistical and visualization strategies to interpret LCS rules and perform knowledge discovery for data mining.[12][13]Browne and Iqbal explored the concept of reusing building blocks in the form of code fragments and were the first to solve the 135-bit multiplexer benchmark problem by first learning useful building blocks from simpler multiplexer problems.[54]ExSTraCS 2.0was later introduced to improve Michigan-style LCS scalability, successfully solving the 135-bit multiplexer benchmark problem for the first time directly.[5]The n-bitmultiplexerproblem is highlyepistaticandheterogeneous, making it a very challengingmachine learningtask.
Michigan-Style LCSs are characterized by a population of rules where the genetic algorithm operates at the level of individual rules and the solution is represented by the entire rule population. Michigan style systems also learn incrementally which allows them to perform both reinforcement learning and supervised learning, as well as both online and offline learning. Michigan-style systems have the advantage of being applicable to a greater number of problem domains, and the unique benefits of incremental learning.
Pittsburgh-Style LCSs are characterized by a population of variable length rule-sets where each rule-set is a potential solution. The genetic algorithm typically operates at the level of an entire rule-set. Pittsburgh-style systems can also uniquely evolve ordered rule lists, as well as employ a default rule. These systems have the natural advantage of identifying smaller rule sets, making these systems more interpretable with regards to manual rule inspection.
Systems that seek to combine key strengths of both systems have also been proposed.
The name, "Learning Classifier System (LCS)", is a bit misleading since there are manymachine learningalgorithms that 'learn to classify' (e.g.decision trees,artificial neural networks), but are not LCSs. The term 'rule-based machine learning (RBML)' is useful, as it more clearly captures the essential 'rule-based' component of these systems, but it also generalizes to methods that are not considered to be LCSs (e.g.association rule learning, orartificial immune systems). More general terms such as, 'genetics-based machine learning', and even 'genetic algorithm'[39]have also been applied to refer to what would be more characteristically defined as a learning classifier system. Due to their similarity togenetic algorithms, Pittsburgh-style learning classifier systems are sometimes generically referred to as 'genetic algorithms'. Beyond this, some LCS algorithms, or closely related methods, have been referred to as 'cognitive systems',[16]'adaptive agents', 'production systems', or generically as a 'classifier system'.[55][56]This variation in terminology contributes to some confusion in the field.
Up until the 2000s nearly all learning classifier system methods were developed with reinforcement learning problems in mind. As a result, the term ‘learning classifier system’ was commonly defined as the combination of ‘trial-and-error’ reinforcement learning with the global search of a genetic algorithm. Interest in supervised learning applications, and even unsupervised learning have since broadened the use and definition of this term.
|
https://en.wikipedia.org/wiki/Learning_classifier_system
|
Rule-based machine learning(RBML) is a term incomputer scienceintended to encompass anymachine learningmethod that identifies, learns, or evolves 'rules' to store, manipulate or apply.[1][2][3]The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system.
Rule-based machine learning approaches includelearning classifier systems,[4]association rule learning,[5]artificial immune systems,[6]and any other method that relies on a set of rules, each covering contextual knowledge.
While rule-based machine learning is conceptually a type of rule-based system, it is distinct from traditionalrule-based systems, which are often hand-crafted, and other rule-based decision makers. This is because rule-based machine learning applies some form of learning algorithm such as Rough sets theory[7]to identify and minimise the set of features and to automatically identify useful rules, rather than a human needing to apply priordomain knowledgeto manually construct rules and curate a rule set.
Rules typically take the form of an {IF:THEN} expression (e.g. {IF 'condition' THEN 'result'}, or as a more specific example, {IF 'red' AND 'octagon' THEN 'stop-sign'}). An individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Therefore, rule-based machine learning methods typically comprise a set of rules, orknowledge base, that collectively make up the prediction model, usually known as a decision algorithm. Rules can also be interpreted in various ways depending on the domain knowledge and data types (discrete or continuous), and in combination.
Repeated incremental pruning to produce error reduction(RIPPER) is a propositional rule learner proposed by William W. Cohen as an optimized version of IREP.[8]
|
https://en.wikipedia.org/wiki/Rule-based_machine_learning
|
Bootstrap aggregating, also calledbagging(frombootstrapaggregating) orbootstrapping, is amachine learning(ML)ensemblemeta-algorithmdesigned to improve thestabilityand accuracy of MLclassificationandregressionalgorithms. It also reducesvarianceandoverfitting. Although it is usually applied todecision treemethods, it can be used with any type of method. Bagging is a special case of theensemble averagingapproach.
Given a standardtraining setD{\displaystyle D}of sizen{\displaystyle n}, bagging generatesm{\displaystyle m}new training setsDi{\displaystyle D_{i}}, each of sizen′{\displaystyle n'}, bysamplingfromD{\displaystyle D}uniformlyandwith replacement. By sampling with replacement, some observations may be repeated in eachDi{\displaystyle D_{i}}. Ifn′=n{\displaystyle n'=n}, then for largen{\displaystyle n}the setDi{\displaystyle D_{i}}is expected to have the fraction (1 - 1/e) (~63.2%) of the unique samples ofD{\displaystyle D}, the rest being duplicates.[1]This kind of sample is known as abootstrapsample. Sampling with replacement ensures each bootstrap sample is independent from its peers, as it does not depend on previously chosen samples. Then,m{\displaystyle m}models are fitted using the above bootstrap samples and combined by averaging the output (for regression) or voting (for classification).
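The ~63.2% property is easy to verify numerically. The following sketch draws one bootstrap sample of the same size as a hypothetical training set and counts the unique indices:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # size of the original training set D
bootstrap_idx = rng.integers(0, n, size=n)   # n draws, uniformly and with replacement
unique_fraction = np.unique(bootstrap_idx).size / n
print(unique_fraction)                       # close to 1 - 1/e, i.e. about 0.632, for large n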
Bagging leads to "improvements for unstable procedures",[2]which include, for example,artificial neural networks,classification and regression trees, and subset selection inlinear regression.[3]Bagging was shown to improve preimage learning.[4][5]On the other hand, it can mildly degrade the performance of stable methods such ask-nearest neighbors.[2]
There are three types of datasets in bootstrap aggregating. These are theoriginal, bootstrap, and out-of-bag datasets.Each section below will explain how each dataset is made except for the original dataset. The original dataset is whatever information is given.
The bootstrap dataset is made by randomly picking objects from the original dataset. Also,it must be the same size as the original dataset.However, the difference is that the bootstrap dataset can have duplicate objects. Here is a simple example to demonstrate how it works along with the illustration below:
Suppose theoriginal datasetis agroup of 12 people.Their names areEmily, Jessie, George, Constantine, Lexi, Theodore, John, James, Rachel, Anthony, Ellie, and Jamal.
By randomly picking a group of names, let us say our bootstrap dataset had James, Ellie, Constantine, Lexi, John, Constantine, Theodore, Constantine, Anthony, Lexi, Constantine, and Theodore. In this case, the bootstrap sample contains Constantine four times, and Lexi and Theodore twice each.
The out-of-bag datasetrepresents the remaining people who were not in the bootstrap dataset.It can be calculated by taking the difference between the original and the bootstrap datasets. In this case, the remaining samples who were not selected areEmily, Jessie, George, Rachel, and Jamal.Keep in mind that since both datasets are sets, when taking the difference the duplicate names are ignored in the bootstrap dataset. The illustration below shows how the math is done:
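Using the names from this example, the out-of-bag set is simply the set difference between the original dataset and the de-duplicated bootstrap dataset:

original = {"Emily", "Jessie", "George", "Constantine", "Lexi", "Theodore",
            "John", "James", "Rachel", "Anthony", "Ellie", "Jamal"}
bootstrap = ["James", "Ellie", "Constantine", "Lexi", "John", "Constantine",
             "Theodore", "Constantine", "Anthony", "Lexi", "Constantine", "Theodore"]

out_of_bag = original - set(bootstrap)   # duplicates in the bootstrap dataset are ignored
print(sorted(out_of_bag))                # ['Emily', 'George', 'Jamal', 'Jessie', 'Rachel']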
Creating the bootstrap and out-of-bag datasets is crucial since they are used to test the accuracy ofensemble learningalgorithms likerandom forest. For example, a model that produces 50 trees using the bootstrap/out-of-bag datasets will tend to have better accuracy than one that produces only 10 trees. Since the algorithm generates multiple trees and therefore multiple datasets, the chance that an object is left out of every bootstrap dataset is low. The next few sections describe how the random forest algorithm works in more detail.
The next step of the algorithm involves the generation ofdecision treesfrom the bootstrapped dataset. To achieve this, the process examines each gene/feature and determines for how many samples the feature's presence or absence yields a positive or negative result. This information is then used to compute aconfusion matrix, which lists the true positives, false positives, true negatives, and false negatives of the feature when used as a classifier. These features are then ranked according to variousclassification metricsbased on their confusion matrices. Some common metrics include estimate of positive correctness (calculated by subtracting false positives from true positives), measure of "goodness", andinformation gain. These features are then used to partition the samples into two sets: those that possess the top feature, and those that do not.
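As a rough sketch of the per-feature bookkeeping described above, the confusion counts and the "estimate of positive correctness" metric for one binary feature could be computed like this (the helper functions are illustrative, not taken from a specific implementation):

def feature_confusion(feature_values, labels):
    """Confusion counts when a single binary feature is used directly as the classifier."""
    tp = sum(f == 1 and y == 1 for f, y in zip(feature_values, labels))
    fp = sum(f == 1 and y == 0 for f, y in zip(feature_values, labels))
    tn = sum(f == 0 and y == 0 for f, y in zip(feature_values, labels))
    fn = sum(f == 0 and y == 1 for f, y in zip(feature_values, labels))
    return tp, fp, tn, fn

def positive_correctness(tp, fp, tn, fn):
    return tp - fp   # true positives minus false positives, as mentioned above

# Features are ranked by such metrics; the top-ranked feature becomes the first split of the tree.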
The diagram below shows a decision tree of depth two being used to classify data. For example, a data point that exhibits Feature 1, but not Feature 2, will be given a "No". Another point that does not exhibit Feature 1, but does exhibit Feature 3, will be given a "Yes".
This process is repeated recursively for successive levels of the tree until the desired depth is reached. At the very bottom of the tree, samples that test positive for the final feature are generally classified as positive, while those that lack the feature are classified as negative. These trees are then used as predictors to classify new data.
The next part of the algorithm involves introducing yet another element of variability amongst the bootstrapped trees. In addition to each tree only examining a bootstrapped set of samples, only a small but consistent number of unique features are considered when ranking them as classifiers. This means that each tree only knows about the data pertaining to a small constant number of features, and a variable number of samples that is less than or equal to that of the original dataset. Consequently, the trees are more likely to return a wider array of answers, derived from more diverse knowledge. This results in arandom forest, which possesses numerous benefits over a single decision tree generated without randomness. In a random forest, each tree "votes" on whether or not to classify a sample as positive based on its features. The sample is then classified based on majority vote. An example of this is given in the diagram below, where the four trees in a random forest vote on whether or not a patient with mutations A, B, F, and G has cancer. Since three out of four trees vote yes, the patient is then classified as cancer positive.
Because of their properties, random forests are considered one of the most accurate data mining algorithms, are less likely tooverfittheir data, and run quickly and efficiently even for large datasets.[6]They are primarily useful for classification as opposed toregression, which attempts to draw observed connections between statistical variables in a dataset. This makes random forests particularly useful in such fields as banking, healthcare, the stock market, ande-commercewhere it is important to be able to predict future results based on past data.[7]One of their applications would be as a useful tool for predicting cancer based on genetic factors, as seen in the above example.
There are several important factors to consider when designing a random forest. If the trees in the random forests are too deep, overfitting can still occur due to over-specificity. If the forest is too large, the algorithm may become less efficient due to an increased runtime. Random forests also do not generally perform well when given sparse data with little variability.[7]However, they still have numerous advantages over similar data classification algorithms such asneural networks, as they are much easier to interpret and generally require less data for training.[citation needed]As an integral component of random forests, bootstrap aggregating is very important to classification algorithms, and provides a critical element of variability that allows for increased accuracy when analyzing new data, as discussed below.
While the techniques described above utilizerandom forestsandbagging(otherwise known as bootstrapping), there are certain techniques that can be used in order to improve their execution and voting time, their prediction accuracy, and their overall performance. The following are key steps in creating an efficient random forest:
For classification, use a training setD{\displaystyle D}, an inducerI{\displaystyle I}, and the number of bootstrap samplesm{\displaystyle m}as input. Generate a classifierC∗{\displaystyle C^{*}}as output.[12]
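Interpreting that description loosely, a bagging classifier built around an arbitrary inducer could be sketched as follows; X and y are assumed to be NumPy arrays, and the inducer is assumed to return a callable model (illustrative names, not a reference implementation):

import numpy as np

def bagging_classifier(X, y, inducer, m, rng=None):
    """Train m base classifiers on bootstrap samples of (X, y) and combine them by plurality vote."""
    rng = rng or np.random.default_rng()
    n = len(X)
    models = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)      # one bootstrap sample
        models.append(inducer(X[idx], y[idx]))

    def C_star(x):                            # the combined classifier C*
        votes = [model(x) for model in models]
        return max(set(votes), key=votes.count)
    return C_star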
To illustrate the basic principles of bagging, below is an analysis on the relationship betweenozoneand temperature (data fromRousseeuwand Leroy[clarification needed](1986), analysis done inR).
The relationship between temperature and ozone appears to be nonlinear in this dataset, based on the scatter plot. To mathematically describe this relationship,LOESSsmoothers (with bandwidth 0.5) are used. Rather than building a single smoother for the complete dataset, 100bootstrapsamples were drawn. Each sample is composed of a random subset of the original data and maintains a semblance of the master set's distribution and variability. For each bootstrap sample, a LOESS smoother was fit. Predictions from these 100 smoothers were then made across the range of the data. The black lines represent these initial predictions. The lines lack agreement in their predictions and tend to overfit their data points: evident by the wobbly flow of the lines.
By taking the average of 100 smoothers, each corresponding to a subset of the original dataset, we arrive at one bagged predictor (red line). The red line's flow is stable and does not overly conform to any data point(s).
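A sketch of this kind of analysis in Python (the original analysis was done in R; here statsmodels' LOWESS stands in for the LOESS smoother, and the data are synthetic stand-ins for the ozone/temperature measurements):

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
temp = rng.uniform(60, 100, 111)                         # hypothetical temperature readings
ozone = 0.05 * (temp - 60) ** 2 + rng.normal(0, 5, 111)  # hypothetical nonlinear ozone response

grid = np.linspace(temp.min(), temp.max(), 200)
fits = []
for _ in range(100):                                     # 100 bootstrap samples
    idx = rng.integers(0, temp.size, temp.size)          # sample with replacement
    smoothed = lowess(ozone[idx], temp[idx], frac=0.5)   # smoother with bandwidth 0.5 (the "black lines")
    fits.append(np.interp(grid, smoothed[:, 0], smoothed[:, 1]))  # evaluate on a common grid

bagged = np.mean(fits, axis=0)                           # the bagged predictor (the "red line")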
Advantages:
Disadvantages:
The concept of bootstrap aggregating is derived from the concept of bootstrapping which was developed by Bradley Efron.[15]Bootstrap aggregating was proposed byLeo Breimanwho also coined the abbreviated term "bagging" (bootstrapaggregating). Breiman developed the concept of bagging in 1994 to improve classification by combining classifications of randomly generated training sets. He argued, "If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy".[3]
|
https://en.wikipedia.org/wiki/Bootstrap_aggregating
|
Out-of-bag(OOB)error, also calledout-of-bag estimate, is a method of measuring theprediction errorofrandom forests,boosted decision trees, and othermachine learningmodels utilizingbootstrap aggregating(bagging). Bagging uses subsampling with replacement to create training samples for the model to learn from. OOB error is the mean prediction error on each training samplexi, using only the trees that did not havexiin their bootstrap sample.[1]
Bootstrap aggregatingallows one to define an out-of-bag estimate of the prediction performance improvement by evaluating predictions on those observations that were not used in the building of the next base learner.
Whenbootstrap aggregatingis performed, two independent sets are created. One set, the bootstrap sample, is the data chosen to be "in-the-bag" by sampling with replacement. The out-of-bag set is all data not chosen in the sampling process.
When this process is repeated, such as when building arandom forest, many bootstrap samples and OOB sets are created. The OOB sets can be aggregated into one dataset, but each sample is only considered out-of-bag for the trees that do not include it in their bootstrap sample. The picture below shows that for each bag sampled, the data is separated into two groups.
This example shows how bagging could be used in the context of diagnosing disease. A set of patients are the original dataset, but each model is trained only by the patients in its bag. The patients in each out-of-bag set can be used to test their respective models. The test would consider whether the model can accurately determine if the patient has the disease.
Since each out-of-bag set is not used to train the model, it is a good test for the performance of the model. The specific calculation of OOB error depends on the implementation of the model, but a general calculation is as follows.
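The step-by-step recipe is not reproduced here, but the idea can be sketched as follows: for each training sample, aggregate the votes of only those models whose bootstrap sample excluded it, then compare the aggregated prediction with the true label. This is illustrative code, assuming NumPy arrays and an inducer that returns a callable model:

import numpy as np

def oob_error(X, y, inducer, m=100, rng=None):
    """Out-of-bag error: each sample is judged only by models that never saw it during training."""
    rng = rng or np.random.default_rng()
    n = len(X)
    votes = [[] for _ in range(n)]
    for _ in range(m):
        idx = rng.integers(0, n, size=n)            # in-the-bag indices
        oob = np.setdiff1d(np.arange(n), idx)       # samples this model never trained on
        model = inducer(X[idx], y[idx])
        for i in oob:
            votes[i].append(model(X[i]))
    wrong = [max(set(v), key=v.count) != y[i] for i, v in enumerate(votes) if v]
    return float(np.mean(wrong))                    # mean prediction error over OOB-voted samples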
Thebaggingprocess can be customized to fit the needs of a model. To ensure an accurate model, the bootstrap training sample size should be close to that of the original set.[2]Also, the number of iterations (trees) of the model (forest) should be considered to find the true OOB error. The OOB error will stabilize over many iterations so starting with a high number of iterations is a good idea.[3]
Shown in the example to the right, the OOB error can be found using the method above once the forest is set up.
Out-of-bag error andcross-validation(CV) are different methods of measuring the error estimate of amachine learningmodel. Over many iterations, the two methods should produce a very similar error estimate. That is, once the OOB error stabilizes, it will converge to thecross-validation(specifically leave-one-out cross-validation) error.[3]The advantage of the OOB method is that it requires less computation and allows one to test the model as it is being trained.
Out-of-bag error is used frequently for error estimation withinrandom forests, but a study by Silke Janitza and Roman Hornung concluded that out-of-bag error overestimates the true prediction error in settings that include an equal number of observations from all response classes (balanced samples), small sample sizes, a large number of predictor variables, small correlation between predictors, and weak effects.[4]
|
https://en.wikipedia.org/wiki/Out-of-bag_error
|
Instatisticsandmachine learning,leakage(also known asdata leakageortarget leakage) is the use ofinformationin the model training process which would not be expected to be available atpredictiontime, causing the predictive scores (metrics) tooverestimatethe model's utility when run in a production environment.[1]
Leakage is often subtle and indirect, making it hard to detect and eliminate. Leakage can cause a statistician or modeler to select a suboptimal model, which could be outperformed by a leakage-free model.[1]
Leakage can occur in many steps in the machine learning process. The leakage causes can be sub-classified into two possible sources of leakage for a model: features and training examples.[1]
Feature or column-wise leakage is caused by the inclusion of columns which are one of the following: a duplicate label, a proxy for the label, or the label itself. These features, known asanachronisms, will not be available when the model is used for predictions, and result in leakage if included when the model is trained.[2]
For example, including a "MonthlySalary" column when predicting "YearlySalary"; or "MinutesLate" when predicting "IsLate".
Row-wise leakage is caused by improper sharing of information between rows of data. Types of row-wise leakage include:
A 2023 review found data leakage to be "a widespread failure mode in machine-learning (ML)-based science", having affected at least 294 academic publications across 17 disciplines, and causing a potentialreproducibility crisis.[5]
Data leakage in machine learning can be detected through various methods, focusing on performance analysis, feature examination, data auditing, and model behavior analysis. Performance-wise, unusually high accuracy or significant discrepancies between training and test results often indicate leakage.[6]Inconsistent cross-validation outcomes may also signal issues.
Feature examination involves scrutinizing feature importance rankings and ensuring temporal integrity in time series data. A thorough audit of the data pipeline is crucial, reviewing pre-processing steps, feature engineering, and data splitting processes.[7]Detecting duplicate entries across dataset splits is also important.
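For example, duplicate rows shared by the training and test splits can be flagged with a simple merge; the data here are hypothetical, and in practice the check is run on the real splits before training:

import pandas as pd

train = pd.DataFrame({"age": [25, 31, 40], "income": [40_000, 52_000, 67_000]})
test  = pd.DataFrame({"age": [31, 58],     "income": [52_000, 83_000]})

overlap = pd.merge(train, test, how="inner")   # rows present in both splits are potential leakage
print(overlap)                                 # a non-empty result warrants investigation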
For language models, the Min-K% method can detect the presence of data in a pretraining dataset. It presents a sentence suspected to be present in the pretraining dataset, and computes the log-likelihood of each token, then compute the average of the lowest K of these. If this exceeds a threshold, then the sentence is likely present.[8][9]This method is improved by comparing against a baseline of the mean and variance.[10]
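A sketch of the Min-K% idea using a Hugging Face causal language model; gpt2 is used only as a stand-in, and the decision threshold, which must be calibrated separately, is not shown:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def min_k_percent_score(sentence: str, model_name: str = "gpt2", k: float = 0.2) -> float:
    """Mean log-likelihood of the k% least likely tokens; higher values suggest pretraining membership."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)               # distribution over each next token
    token_ll = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)  # log-likelihood of the actual tokens
    k_count = max(1, int(k * token_ll.numel()))
    lowest = torch.topk(token_ll, k=k_count, largest=False).values      # the K% lowest token log-likelihoods
    return lowest.mean().item()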
Analyzing model behavior can reveal leakage. Models relying heavily on counter-intuitive features or showing unexpected prediction patterns warrant investigation. Performance degradation over time when tested on new data may suggest earlier inflated metrics due to leakage.
Advanced techniques include backward feature elimination, where suspicious features are temporarily removed to observe performance changes. Using a separate hold-out dataset for final validation before deployment is advisable.[7]
|
https://en.wikipedia.org/wiki/Leakage_(machine_learning)
|
Validityis the extent to which aconcept, conclusion, or measurement is well-founded and likely corresponds accurately to the real world.[1][2]The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (for example, a test in education) is the degree to which the tool measures what it claims to measure.[3]Validity is based on the strength of a collection of different types of evidence (e.g. face validity, construct validity, etc.) described in greater detail below.
Inpsychometrics, validity has a particular application known astest validity: "the degree to which evidence and theory support the interpretations of test scores" ("as entailed by proposed uses of tests").[4]
It is generally accepted that the concept of scientific validity addresses the nature of reality in terms of statistical measures and as such is anepistemologicalandphilosophicalissue as well as a question ofmeasurement. The use of the term inlogicis narrower, relating to the relationship between the premises and conclusion of an argument. In logic, validity refers to the property of an argument whereby if the premises are true then the truth of the conclusion follows by necessity. The conclusion of an argument is true if the argument is sound, which is to say if the argument is valid and its premises are true. By contrast, "scientific or statistical validity" is not a deductive claim that is necessarily truth preserving, but is an inductive claim that remains true or false in an undecided manner. This is why "scientific or statistical validity" is a claim that is qualified as being either strong or weak in its nature; it is never necessarily nor certainly true. This has the effect of making claims of "scientific or statistical validity" open to interpretation as to what, in fact, the facts of the matter mean.
Validity is important because it can help determine what types of tests to use, and help to ensure researchers are using methods that are not only ethical and cost-effective, but also those that truly measure the ideas or constructs in question.
Validity[5]of an assessment is the degree to which it measures what it is supposed to measure. This is not the same asreliability, which is the extent to which a measurement gives results that are very consistent. Within validity, the measurement does not always have to be similar, as it does in reliability. However, just because a measure is reliable, it is not necessarily valid. E.g. a scale that is 5 pounds off is reliable but not valid. A test cannot be valid unless it is reliable. Validity is also dependent on the measurement measuring what it was designed to measure, and not something else instead.[6]Validity (similar to reliability) is a relative concept; validity is not an all-or-nothing idea. There are many different types of validity.
Construct validityrefers to the extent to which operationalizations of a construct (e.g., practical tests developed from a theory) measure a construct as defined by a theory. It subsumes all other types of validity. For example, the extent to which a test measures intelligence is a question of construct validity. A measure of intelligence presumes, among other things, that the measure is associated with things it should be associated with (convergent validity), not associated with things it should not be associated with (discriminant validity).[7]
Construct validity evidence involves the empirical and theoretical support for the interpretation of the construct. Such lines of evidence include statistical analyses of the internal structure of the test including the relationships between responses to different test items. They also include relationships between the test and measures of other constructs. As currently understood, construct validity is not distinct from the support for the substantive theory of the construct that the test is designed to measure. As such, experiments designed to reveal aspects of the causal role of the construct also contribute to constructing validity evidence.[7]
Content validityis a non-statistical type of validity that involves "the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured" (Anastasi & Urbina, 1997 p. 114). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature?
Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits. A test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves a subject matter expert (SME) evaluating test items against the test specifications. Experts should pay attention to any cultural differences. For example, when a driving assessment questionnaire is adapted from England (e.g. the DBQ), the experts should take into account British driving conventions, such as right-hand-drive vehicles driving on the left. Some studies have found this to be critical for obtaining a valid questionnaire.[8]Before the final administration of questionnaires, the researcher should check the validity of items against each of the constructs or variables and modify the measurement instruments accordingly on the basis of the SMEs' opinion.
A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification which is drawn up through a thorough examination of the subject domain. Foxcroft, Paterson, le Roux & Herbst (2004, p. 49)[9]note that by using a panel of experts to review the test specifications and the selection of items the content validity of a test can be improved. The experts will be able to review the items and comment on whether the items cover a representative sample of the behavior domain.
Face validityis an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Measures may have high validity, but when the test does not appear to be measuring what it is supposed to measure, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid. Considering one may get more honest answers with lower face validity, it is sometimes important to make it appear as though there is low face validity whilst administering the measures.
Face validity is very closely related to content validity. While content validity depends on a theoretical basis for assuming whether a test assesses all domains of a certain criterion (e.g. does assessing addition skills yield a good measure of mathematical skills? To answer this you have to know what different kinds of arithmetic skills mathematical skills include), face validity relates to whether a test appears to be a good measure or not. This judgment is made on the "face" of the test, thus it can also be judged by the amateur.
Face validity is a starting point, but should never be assumed to be probably valid for any given purpose, as the "experts" have been wrong before: theMalleus Maleficarum(Hammer of Witches) had no support for its conclusions other than the self-imagined competence of two "experts" in "witchcraft detection", yet it was used as a "test" to condemn and burn at the stake tens of thousands of men and women as "witches".[10]
Criterion validityevidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).
If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data are collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence.
Concurrent validityrefers to the degree to which the operationalization correlates with other measures of the same construct that are measured at the same time. When the measure is compared to another measure of the same type, they will be related (or correlated). Returning to the selection test example, this would mean that the tests are administered to current employees and then correlated with their scores on performance reviews.
Predictive validityrefers to the degree to which the operationalization can predict (or correlate with) other measures of the same construct that are measured at some time in the future. Again, with the selection test example, this would mean that the tests are administered to applicants, all applicants are hired, their performance is reviewed at a later time, and then their scores on the two measures are correlated.
This is also when measurement predicts a relationship between what is measured and something else; predicting whether or not the other thing will happen in the future. High correlation between ex-ante predicted and ex-post actual outcomes is the strongest proof of validity.
The validity of the design of experimental research studies is a fundamental part of thescientific method,[2]and a concern ofresearch ethics. Without a valid design, valid scientific conclusions cannot be drawn.
Statistical conclusion validityis the degree to which conclusions about the relationship amongvariablesbased on the data are correct or 'reasonable'. This began as being solely about whether the statistical conclusion about the relationship of the variables was correct, but now there is a movement towards moving to 'reasonable' conclusions that use: quantitative, statistical, and qualitative data.[11]
Statistical conclusion validity involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures.[12]As this type of validity is concerned solely with the relationship that is found among variables, the relationship may be solely a correlation.
Internal validityis aninductiveestimate of the degree to which conclusions aboutcausalrelationships can be made (e.g. cause and effect), based on the measures used, the research setting, and the whole research design. Good experimental techniques, in which the effect of anindependent variableon adependent variableis studied under highly controlled conditions, usually allow for higher degrees of internal validity than, for example, single-case designs.
Eight kinds ofconfoundingvariable can interfere with internal validity (i.e. with the attempt to isolate causal relationships):
External validityconcerns the extent to which the (internally valid) results of a study can be held to be true for other cases, for example to different people, places or times. In other words, it is about whether findings can be validly generalized. If the same research study was conducted in those other cases, would it get the same results?
A major factor in this is whether the study sample (e.g. the research participants) are representative of the general population along relevant dimensions. Other factors jeopardizing external validity are:
Ecological validityis the extent to which research results can be applied to real-life situations outside of research settings. This issue is closely related to external validity but covers the question of to what degree experimental findings mirror what can be observed in the real world (ecology = the science of interaction between organism and its environment). To be ecologically valid, the methods, materials and setting of a study must approximate the real-life situation that is under investigation.
Ecological validity is partly related to the issue of experiment versus observation. Typically in science, there are two domains of research: observational (passive) and experimental (active). The purpose of experimental designs is to test causality, so that you can infer A causes B or B causes A. But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal; it is correlational. You can only conclude that A occurs together with B. Both techniques have their strengths and weaknesses.
On first glance, internal and external validity seem to contradict each other – to get an experimental design you have to control for all interfering variables. That is why you often conduct your experiment in a laboratory setting. While gaining internal validity (excluding interfering variables by keeping them constant) you lose ecological or external validity because you establish an artificial laboratory setting. On the other hand, with observational research you can not control for interfering variables (low internal validity) but you can measure in the natural (ecological) environment, at the place where behavior normally occurs. However, in doing so, you sacrifice internal validity.
The apparent contradiction of internal validity and external validity is, however, only superficial. The question of whether results from a particular study generalize to other people, places or times arises only when one follows aninductivist research strategy. If the goal of a study is todeductively testa theory, one is only concerned with factors which might undermine the rigor of the study, i.e. threats to internal validity. In other words, the relevance of external and internal validity to a research study depends on the goals of the study. Furthermore, conflating research goals with validity concerns can lead to the mutual-internal-validity problem, where theories are able to explain only phenomena in artificial laboratory settings but not the real world.[13][14]
Inpsychiatrythere is a particular issue with assessing the validity of thediagnostic categoriesthemselves. In this context:[15]
Robins and Guze proposed in 1970 what were to become influential formal criteria for establishing the validity of psychiatric diagnoses. They listed five criteria:[15]
These were incorporated into the Feighner Criteria and Research Diagnostic Criteria that have since formed the basis of the DSM and ICD classification systems.
Kendler in 1980 distinguished between:[15]
Nancy Andreasen (1995) listed several additional validators – molecular genetics and molecular biology, neurochemistry, neuroanatomy, neurophysiology, and cognitive neuroscience – that are all potentially capable of linking symptoms and diagnoses to their neural substrates.[15]
Kendell and Jablensky (2003) emphasized the importance of distinguishing between validity and utility, and argued that diagnostic categories defined by their syndromes should be regarded as valid only if they have been shown to be discrete entities with natural boundaries that separate them from other disorders.[15]
Kendler (2006) emphasized that to be useful, a validating criterion must be sensitive enough to validate most syndromes that are true disorders, while also being specific enough to invalidate most syndromes that are not true disorders. On this basis, he argues that a Robins and Guze criterion of "runs in the family" is inadequately specific because most human psychological and physical traits would qualify: for example, an arbitrary syndrome comprising a mixture of "height over 6 ft, red hair, and a large nose" will be found to "run in families" and be "hereditary", but this should not be considered evidence that it is a disorder. Kendler has further suggested that "essentialist" gene models of psychiatric disorders, and the hope that we will be able to validate categorical psychiatric diagnoses by "carving nature at its joints" solely as a result of gene discovery, are implausible.[16]
In the United States federal court system, the validity and reliability of evidence are evaluated using the Daubert standard: see Daubert v. Merrell Dow Pharmaceuticals. Perri and Lichtenwald (2010) provide a starting point for a discussion about a wide range of reliability and validity topics in their analysis of a wrongful murder conviction.[17]
|
https://en.wikipedia.org/wiki/Validity_(statistics)
|
Business process modeling (BPM) is the action of capturing and representing processes of an enterprise (i.e. modeling them), so that the current business processes may be analyzed, applied securely and consistently, improved, and automated.
BPM is typically performed by business analysts, with subject matter experts collaborating with these teams to accurately model processes. It is primarily used in business process management, software development, or systems engineering.
Alternatively, process models can be directly modeled from IT systems, such as event logs.
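As an illustration of how a process model can be derived from an event log, the following minimal sketch (in Python, with an invented example log) counts which activity directly follows which. The resulting directly-follows relation is the starting point of many process-mining techniques; real tools add filtering, frequency thresholds and more elaborate discovery algorithms on top of it.

```python
# Minimal sketch (hypothetical data): deriving a directly-follows relation
# from an event log, a common starting point for process mining.
from collections import defaultdict

# Each trace is the ordered list of activities observed for one case.
event_log = [
    ["Receive order", "Check credit", "Ship goods", "Send invoice"],
    ["Receive order", "Check credit", "Reject order"],
    ["Receive order", "Check credit", "Ship goods", "Send invoice"],
]

# Count how often activity a is directly followed by activity b.
directly_follows = defaultdict(int)
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        directly_follows[(a, b)] += 1

for (a, b), count in sorted(directly_follows.items()):
    print(f"{a} -> {b}: {count}")
```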
According to the Association of Business Process Management Professionals (ABPMP), business process modeling is one of the five key disciplines within Business Process Management (BPM).[1](Chapter 1.4 CBOK® structure) ← automatic translation from German. The five disciplines are:
However, these disciplines cannot be considered in isolation: business process modeling always requires a business process analysis for modeling the as-is processes (see section Analysis of business activities) or specifications from process design for modeling the to-be processes (see sections Business process reengineering and Business process optimization).
The focus of business process modeling is on therepresentationof the flow ofactions (activities), according to Hermann J. Schmelzer and Wolfgang Sesselmann consisting "of the cross-functional identification of value-adding activities that generate specific services expected by the customer and whose results have strategic significance for the company. They can extend beyond company boundaries and involve activities of customers, suppliers, or even competitors."[2](Chapter 2.1 Differences between processes and business processes) ← automatic translation from German
Other qualities (facts) can also be modeled, such as data and business objects (as inputs/outputs), formal organizations and roles (responsible/accountable/consulted/informed persons, see RACI), resources and IT systems, as well as guidelines/instructions (work equipment), requirements, key figures, etc.
Incorporating more of these characteristics into business process modeling enhances the accuracy of abstraction but also increases model complexity. "To reduce complexity and improve the comprehensibility and transparency of the models, the use of a view concept is recommended."[3](Chapter 2.4 Views of process modeling) ← automatic translation from German. There is also a brief comparison of the view concepts of five relevant German-speaking schools of business informatics: 1) August W. Scheer, 2) Hubert Österle, 3) Otto K. Ferstl and Elmar J. Sinz, 4) Hermann Gehring and 5) Andreas Gadatsch.
The term views (August W. Scheer, Otto K. Ferstl and Elmar J. Sinz, Hermann Gehring and Andreas Gadatsch) is not used uniformly in all schools of business informatics – alternative terms are design dimensions (Hubert Österle) or perspectives (Zachman).
M. Rosemann, A. Schwegmann, and P. Delfmann also see disadvantages in the concept of views: "It is conceivable to create information models for each perspective separately and thus partially redundantly. However, redundancies always mean increased maintenance effort and jeopardize the consistency of the models."[4](Chapter 3.2.1 Relevant perspectives on process models) ← automatic translation from German
According to Andreas Gadatsch, business process modeling is understood as a part of business process management alongside process definition and process management.[3](Chapter 1.1 Process management) ← automatic translation from German
Business process modeling is also a central aspect of holistic company mapping – which also deals with the mapping of the corporate mission statement, corporate policy/corporate governance, organizational structure, process organization, application architecture, regulations and interest groups as well as the market.
According to the European Association of Business Process Management EABPM, there are three different types of end-to-end business processes:
These three process types can be identified in every company and are used in practice almost without exception as the top level for structuring business process models.[5] Instead of the term leadership processes, the term management processes is typically used; instead of the term execution processes, the term core processes has become widely accepted.[2](Chapter 6.2.1 Objectives and concept) ← automatic translation from German,[6](Chapter 1.3 The concept of process) ← automatic translation from German,[7](Chapter 4.12.2 Differentiation between core and support objectives) ← automatic translation from German,[8](Chapter 6.2.2 Identification and rough draft) ← automatic translation from German
If the core processes are then organized/decomposed at the next level in supply chain management (SCM), customer relationship management (CRM), and product lifecycle management (PLM), standard models of large organizations and industry associations such as the SCOR model can also be integrated into business process modeling.
Techniques to model business processes such as the flow chart, functional flow block diagram, control flow diagram, Gantt chart, PERT diagram, and IDEF have emerged since the beginning of the 20th century. The Gantt charts were among the first to arrive around 1899, the flow charts in the 1920s, functional flow block diagram and PERT in the 1950s, and data-flow diagrams and IDEF in the 1970s. Among the modern methods are Unified Modeling Language and Business Process Model and Notation. Still, these represent just a fraction of the methodologies used over the years to document business processes.[9]The term business process modeling was coined in the 1960s in the field of systems engineering by S. Williams in his 1967 article "Business Process Modelling Improves Administrative Control".[10]His idea was that techniques for obtaining a better understanding of physical control systems could be used in a similar way for business processes. It was not until the 1990s that the term became popular.
In the 1990s, the term process became a new productivity paradigm.[11]Companies were encouraged to think in processes instead of functions and procedures. Process thinking looks at the chain of events in the company from purchase to supply, from order retrieval to sales, etc. The traditional modeling tools were developed to illustrate time and cost, while modern tools focus on cross-functional activities. These cross-functional activities have increased significantly in number and importance, due to the growth of complexity and dependence. New methodologies include business process redesign, business process innovation, business process management, integrated business planning, among others, all "aiming at improving processes across the traditional functions that comprise a company".[11]
In the field of software engineering, the term business process modeling opposed the common software process modeling, aiming to focus more on the state of the practice during software development.[12]In that time (the early 1990s) all existing and new modeling techniques to illustrate business processes were consolidated as 'business process modeling languages'[citation needed]. In the object-oriented approach, it was considered to be an essential step in the specification of business application systems. Business process modeling became the base of new methodologies, for instance, those that supported data collection, data flow analysis, process flow diagrams, and reporting facilities. Around 1995, the first visually oriented tools for business process modeling and implementation were presented.
The objective of business process modeling is a – usually graphical – representation of end-to-end processes, whereby complex facts of reality are documented using a uniform (systematized) representation and reduced to the substantial (qualities). Regulatory requirements for the documentation of processes often also play a role here (e.g. document control, traceability, or integrity), for example from quality management, information security management or data protection.
Business process modeling typically begins with determining the environmental requirements: First, the goal of the modeling (applications of business process modeling) must be determined. Business process models are now often used in a multifunctional way (see above). Second, the model addressees must be determined, as the properties of the model to be created must meet their requirements. This is followed by the determination of the business processes to be modeled.
The qualities of the business process that are to be represented in the model are specified in accordance with the goal of the modeling. As a rule, these are not only the functions constituting the process, including the relationships between them, but also a number of other qualities, such as formal organization, input, output, resources, information, media, transactions, events, states, conditions, operations and methods.
The objectives of business process modeling may include (compare: Association of Business Process Management Professionals (ABPMP)[1](Chapter 3.1.2 Process characteristics and properties) ← automatic translation from German):
Since business process modeling in itself makes no direct contribution to the financial success of a company, there is no motivation for business process modeling from the most important goal of a company, the intention to make a profit. The motivation of a company to engage in business process modeling therefore always results from the respective purpose. Michael Rosemann, Ansgar Schwegmann and Patrick Delfmann list a number of purposes as motivation for business process modeling:
Within an extensive research program initiated in 1984 titled "Management in the 1990s" at MIT, the approach of process re-engineering emerged in the early 1990s. The research program was designed to explore the impact of information technology on the way organizations would be able to survive and thrive in the competitive environment of the 1990s and beyond. In the final report, N. Venkat Venkatraman[15] summarizes the result as follows: The greatest increases in productivity can be achieved when new processes are planned in parallel with information technologies.
This approach was taken up by Thomas H. Davenport[16](Part I: A Framework For Process Innovation, Chapter: Introduction) as well as Michael M. Hammer and James A. Champy,[17] who developed it into business process re-engineering (BPR) as we understand it today, according to which business processes are fundamentally restructured in order to achieve an improvement in measurable performance indicators such as costs, quality, service and time.
Business process re-engineering has been criticized in part for starting from a "green field" and therefore not being directly implementable for established companies. Hermann J. Schmelzer and Wolfgang Sesselmann assess this as follows: "The criticism of BPR has an academic character in many respects. ... Some of the points of criticism raised are justified from a practical perspective. This includes pointing out that an overly radical approach carries the risk of failure. It is particularly problematic if the organization and employees are not adequately prepared for BPR."[2](Chapter 6.2.1 Objectives and concept) ← automatic translation from German
The high-level approach to BPR according to Thomas H. Davenport consists of:
With ISO/IEC 27001:2022, the standard requirements for management systems are now standardized for all major ISO standards and have a process character.
In the ISO/IEC 9001, ISO/IEC 14001, ISO/IEC 27001 standards, this is anchored in Chapter 4.4 in each case:
Clause 4.4 Quality management system and its processes
Clause 4.4 Environmental management system
Clause 4.4 Information security management system
Each of these standards requires the organization to establish, implement, maintain and continually improve an appropriate management system "including the processes needed and their interactions".[18],[19],[20]
In the definition of the standard requirements for theprocesses needed and their interactions, ISO/IEC 9001 is more specific in clause 4.4.1 than any other ISO standard for management systems and defines that "the organization shall determine and apply the processes needed for"[18]an appropriate management system throughout the organization and also lists detailed requirements with regard to processes:
In addition, clause 4.4.2 of the ISO/IEC 9001 lists some more detailed requirements with regard to processes:
The standard requirements fordocumented informationare also relevant for business process modelling as part of an ISO management system.
In the standards ISO/IEC 9001, ISO/IEC 14001, ISO/IEC 27001 the requirements with regard todocumented informationare anchored in clause 7.5 (detailed in the respective standard in clauses "7.5.1. General", "7.5.2. Creating and updating" and "7.5.3. Control of documented information").
The standard requirements of ISO/IEC 9001, used here as an example, include in clause "7.5.1. General"
Demand in clause "7.5.2. Creating and updating"
And require in clause "7.5.3. Control of documented information"
Based on the standard requirements,
Preparing for ISO certification of a management system is a very good opportunity to establish or promote business process modelling in the organisation.
Hermann J. Schmelzer and Wolfgang Sesselmann point out that the field of improvement of the three methods mentioned by them as examples for process optimization (control and reduction of total cycle time (TCT), Kaizen and Six Sigma) are processes: in the case of total cycle time (TCT), it is the business processes (end-to-end processes) and sub-processes, with Kaizen it is the process steps and activities, and with Six Sigma it is the sub-processes, process steps and activities.[2](Chapter 6.3.1 Total Cycle Time (TCT), KAIZEN and Six Sigma in comparison) ← automatic translation from German
For the total cycle time (TCT), Hermann J. Schmelzer and Wolfgang Sesselmann list the following key features:[2](Chapter 6.3.2 Total Cycle Time (TCT)) ← automatic translation from German
Consequently, business process modeling for TCT must support adequate documentation of barriers, barrier handling, and measurement.
When examining Kaizen tools, initially, there is no direct connection to business processes or business process modeling. However, Kaizen and business process management can mutually enhance each other. In the realm of business process management, Kaizen's objectives are directly derived from the objectives for business processes and sub-processes. This linkage ensures that Kaizen measures effectively support the overarching business objectives.[2](Chapter 6.3.3 KAIZEN) ← automatic translation from German
Six Sigma is designed to prevent errors and improve the process capability so that the process reaches a level of 6σ – or in other words, for every million process outcomes, only 3.4 errors occur. Hermann J. Schmelzer and Wolfgang Sesselmann explain: "Companies often encounter considerable resistance at a level of 4σ, which makes it necessary to redesign business processes in the sense of business process re-engineering (design for Six Sigma)."[2](Chapter 6.3.4 Six Sigma) ← automatic translation from German. For a reproducible measurement of process capability, precise knowledge of the business processes is required, and business process modeling is a suitable tool for design for Six Sigma. Six Sigma therefore uses business process modeling according to SIPOC as an essential part of the methodology, and business process modeling using SIPOC has established itself as a standard tool for Six Sigma.
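The frequently cited figure of 3.4 defects per million can be reproduced from the normal distribution. The following sketch assumes the conventional 1.5σ long-term shift used in Six Sigma practice; it is an illustration of the arithmetic only, not part of any cited methodology.

```python
# Minimal sketch: reproducing the often-quoted "3.4 defects per million"
# figure for a 6-sigma process, assuming the conventional 1.5-sigma
# long-term shift used in Six Sigma practice.
from math import erfc, sqrt

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a one-sided specification limit."""
    z = sigma_level - shift           # effective distance to the limit
    tail = 0.5 * erfc(z / sqrt(2))    # upper-tail probability of N(0, 1)
    return tail * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):,.1f} DPMO")
# 6 sigma prints roughly 3.4, matching the figure cited above.
```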
The aim of inter-company business process modeling is to include the influences of external stakeholders in the analysis or to achieve inter-company comparability of business processes, e.g. to enable benchmarking.
Martin Kugler lists the following requirements for business process modeling in this context:[21](Chapter 14.2.1 Requirements for inter-company business process modeling) ← automatic translation from German
The analysis of business activities determines and defines the framework conditions for successful business process modeling. This is where the company should start,
This strategy for the long-term success of business process modeling can be characterized by the market-oriented view and/or the resource-based view. Jörg Becker and Volker Meise explain: "Whereas in the market view, the industry and the behavior of competitors directly determine a company's strategy, the resource-oriented approach takes an internal view by analyzing the strengths and weaknesses of the company and deriving the direction of development of the strategy from this."[7](Chapter 4.6 The resource-based view) ← automatic translation from German. And further: "The alternative character initially formulated in the literature between the market-based and resource-based view has now given way to a differentiated perspective. The core competence approach is seen as an important contribution to the explanation of success potential, which is used alongside the existing, market-oriented approaches."[7](Chapter 4.7 Combination of views) ← automatic translation from German. Depending on the company's strategy, the process map will therefore balance business process models oriented towards market development and towards resource optimization.
Following the identification phase, a company's business processes are distinguished from one another through an analysis of their respective business activities (refer also to business process analysis). A business process constitutes a set of interconnected, organized actions (activities) geared towards delivering a specific service or product (to fulfill a specific goal) for a particular customer or customer group.
According to the European Association of Business Process Management (EABPM), establishing a common understanding of the current process and its alignment with the objectives serves as an initial step in process design or reengineering.[1](Chapter 4 Process analysis) ← automatic translation from German
The effort involved in analysing the as-is processes is repeatedly criticised in the literature, especially by proponents of business process re-engineering (BPR), and it is suggested that the definition of the target state should begin immediately.
Hermann J. Schmelzer and Wolfgang Sesselmann, on the other hand, discuss and evaluate the criticism levelled at the radical approach of business process re-engineering (BPR) in the literature and "recommend carrying out as-is analyses. A reorganisation must know the current weak points in order to be able to eliminate them. The results of the analyses also provide arguments as to why a process re-engineering is necessary. It is also important to know the initial situation for the transition from the current to the target state. However, the analysis effort should be kept within narrow limits. The results of the analyses should also not influence the redesign too strongly."[2](Chapter 6.2.2 Critical assessment of the BPR) ← automatic translation from German
Timo Füermann explains: "Once the business processes have been identified and named, they are now compiled in an overview. Such overviews are referred to as process maps."[22](Chapter 2.4 Creating the process map) ← automatic translation from German
Jörg Becker and Volker Meise provide the following list of activities for structuring business processes:
The structuring of business processes generally begins with a distinction between management, core, and support processes.
As the core business processes clearly make up the majority of a company's identified business processes, it has become common practice to subdivide the core processes once again. There are different approaches to this depending on the type of company and business activity. These approaches are significantly influenced by the defined application of business process modeling and the strategy for the long-term success of business process modeling.
In the case of a primarily market-based strategy, end-to-end core business processes are often defined from the customer or supplier to the retailer or customer (e.g. "from offer to order", "from order to invoice", "from order to delivery", "from idea to product", etc.). In the case of a strategy based on resources, the core business processes are often defined on the basis of the central corporate functions ("gaining orders", "procuring and providing materials", "developing products", "providing services", etc.).
In a differentiated view without a clear focus on the market view or the resource view, the core business processes are typically divided into CRM, PLM and SCM.
However, other approaches to structuring core business processes are also common, for example from the perspective of customers, products or sales channels.
The result of structuring a company's business processes is the process map (shown, for example, as a value chain diagram). Hermann J. Schmelzer and Wolfgang Sesselmann add: "There are connections and dependencies between the business processes. They are based on the transfer of services and information. It is important to know these interrelationships in order to understand, manage, and control the business processes."[2](Chapter 2.4.3 Process map) ← automatic translation from German
The definition of business processes often begins with the company's core processes because they
For the company
The scope of a business process should be selected in such a way that it contains a manageable number of sub-processes, while at the same time keeping the total number of business processes within reasonable limits. Five to eight business processes per business unit usually cover the performance range of a company.
Each business process should be independent – but the processes are interlinked.
The definition of a business process includes: What result should be achieved on completion? What activities are necessary to achieve this? Which objects should be processed (orders, raw materials, purchases, products, ...)?
Depending on the prevailing corporate culture, which may either be more inclined towards embracing change or protective of the status quo and the effectiveness of communication, defining business processes can prove to be either straightforward or challenging. This hinges on the willingness of key stakeholders within the organization, such as department heads, to lend their support to the endeavor. Within this context, effective communication plays a pivotal role.
In elucidating this point, Jörg Becker and Volker Meise explain that the communication strategy within an organizational design initiative should aim to garner support from members of the organization for the intended structural changes. It is worth noting that business process modeling typically precedes business process optimization, which entails a reconfiguration of process organization – a fact well understood by the involved parties. Therefore, the communication strategy must focus on persuading organizational members to endorse the planned structural adjustments.[7](Chapter 4.15 Influencing the design of the regulatory framework) ← automatic translation from German. In the event of considerable resistance, however, external knowledge can also be used to define the business processes.
Jörg Becker and Volker Meise mention two approaches (general process identification and individual process identification) and state the following about general process identification: "In the general process definition, it is assumed that basic, generally valid processes exist that are the same in all companies." It goes on to say: "Detailed reference models can also be used for general process identification. They describe industry- or application system-specific processes of an organization that still need to be adapted to the individual case, but are already coordinated in their structure."[7](Chapter 4.11 General process identification) ← automatic translation from German
Jörg Becker and Volker Meise state the following about individual process identification: "In individual or singular process identification, it is assumed that the processes in each company are different according to customer needs and the competitive situation and can be identified inductively based on the individual problem situation."[7](Chapter 4.12 Individual process identification) ← automatic translation from German
The result of the definition of the business processes is usually a rough structure of the business processes as a value chain diagram.
The rough structure of the business processes created so far will now be decomposed – by breaking it down into sub-processes that have their own attributes but also contribute to achieving the goal of the business process. This decomposition should be significantly influenced by the application and strategy for the long-term success of business process modeling and should be continued as long as the tailoring of the sub-processes defined this way contributes to the implementation of the purpose and strategy.
A sub-process created in this way uses a model to describe the way in which procedures are carried out in order to achieve the intended operating goals of the company. The model is an abstraction of reality (or a target state) and its concrete form depends on the intended use (application).
A further decomposition of the sub-processes can then take place during business process modeling if necessary. If the business process can be represented as a sequence of phases, separated by milestones, the decomposition into phases is common. Where possible, the transfer of milestones to the next level of decomposition contributes to general understanding.
The result of the further structuring of business processes is usually a hierarchy of sub-processes, represented in value chain diagrams. It is common that not all business processes have the same depth of decomposition. In particular, business processes that are neither safety-relevant nor cost-intensive, nor contribute substantially to the operating goal, are broken down to a much lesser depth. Similarly, as a preliminary stage of a decomposition of a process planned for (much) later, a common understanding can first be developed using simpler / less complex means than value chain diagrams – e.g. with a textual description or with a turtle diagram[22](Chapter 3.1 Defining process details) ← automatic translation from German (not to be confused with turtle graphics!).
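A hierarchy of sub-processes of this kind can be captured in a very simple data structure. The following sketch uses invented process names purely for illustration; it mirrors what a stack of value chain diagrams expresses graphically.

```python
# Minimal sketch (invented example process names): a business process and its
# decomposition into sub-processes as a simple tree.
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    sub_processes: list["Process"] = field(default_factory=list)

    def print_tree(self, indent: int = 0) -> None:
        print("  " * indent + self.name)
        for sub in self.sub_processes:
            sub.print_tree(indent + 1)

order_to_delivery = Process("From order to delivery", [
    Process("Capture order", [Process("Check customer data"),
                              Process("Confirm delivery date")]),
    Process("Fulfil order",  [Process("Pick goods"),
                              Process("Pack and ship")]),
    Process("Invoice order"),
])
order_to_delivery.print_tree()
```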
Complete, self-contained processes are summarized and handed over to a responsible person or team. The process owner is responsible for success, creates the framework conditions, and coordinates his or her approach with that of the other process owners. Furthermore, he/she is responsible for the exchange of information between the business processes. This coordination is necessary in order to achieve the overall goal orientation.
If business processes are documented using a specific IT-system and representation, e.g. graphically, this is generally referred to as modeling. The result of the documentation is the business process model.
The question of whether the business process model should be created through as-is modeling or to-be modeling is significantly influenced by the defined application and the strategy for the long-term success of business process modeling. The previous procedure with analysis of business activities, definition of business processes and further structuring of business processes is advisable in any case.
Ansgar Schwegmann and Michael Laske explain: "Determining the current status is the basis for identifying weaknesses and localizing potential for improvement. For example, weak points such as organizational breaks or insufficient IT penetration can be identified."[23](Chapter 5.1 Intention of the as-is modeling) ← automatic translation from German
The following disadvantages speak against as-is modeling:
These arguments weigh particularly heavily if Business process re-engineering (BPR) is planned anyway.
Ansgar Schwegmann and Michael Laske also list a number of advantages of as-is modeling:[23](Chapter 5.1 Intention of as-is modeling) ← automatic translation from German
Other advantages can also be found, such as
Mario Speck and Norbert Schnetgöke define the objective of to-be modeling as follows: "The target processes are based on the strategic goals of the company. This means that all sub-processes and individual activities of a company must be analyzed with regard to their target contribution. Sub-processes or activities that cannot be identified as value-adding and do not serve at least one non-monetary corporate objective must therefore be eliminated from the business processes."[8](Chapter 6.2.3 Capturing and documenting to-be models)
They also list five basic principles that have proven their worth in the creation of to-be models:
The business process model created by as-is modeling or to-be modeling consists of:
August W. Scheer is said to have said in his lectures: A process is a process is a process. This is intended to express the recursiveness of the term, because almost every process can be broken down into smaller processes (sub-processes). In this respect, terms such as business process, main process, sub-process or elementary process are only a desperate attempt to name the level of process decomposition. As there is no universally valid agreement on the granularity of a business process, main process, sub-process or elementary process, the terms are not universally defined, but can only be understood in the context of the respective business process model.
In addition, some German-speaking schools of business informatics do not clearly distinguish between the terms process (in the sense of representing the sequence of actions) and function (in the sense of a delimited corporate function/action (activity) area that is clearly assigned to a corporate function owner).
For example, in August W. Scheer's ARIS it is possible to use functions from the function view as processes in the control view and vice versa. Although this has the advantage that already defined processes or functions can be reused across the board, it also means that the proper purpose of the function view is diluted and the ARIS user is no longer able to separate processes and functions from one another.
The first image shows as a value chain diagram how the business process Edit sales pipeline has been broken down into sub-processes (in the sense of representing the sequence of actions (activities)) based on its phases.
The second image shows an excerpt of typical functions (in the sense of delimited corporate function/action (activity) areas, which are assigned to a corporate function owner), which are structured based on the areas of competence and responsibility hierarchy. The corporate functions that support the business process Edit sales pipeline are marked in the function tree.
A business process can be decomposed into sub-processes until further decomposition is no longer meaningful/possible (smallest meaningful sub-process = elementary process). Usually, all levels of decomposition of a business process are documented using the same methodology and the same process symbols. The process symbols used when modeling one level of decomposition then usually refer to the sub-processes of the next level until the level of elementary processes is reached. Value chain diagrams are often used to represent business processes, main processes, sub-processes and elementary processes.
A workflow is a representation of a sequence of tasks, declared as work of a person, of a simple or complex mechanism, of a group of persons,[24] of an organization of staff, or of machines (including IT-systems). A workflow is therefore always located at the elementary process level. The workflow may be seen as any abstraction of real work, segregated into workshare, work split, or other types of ordering. For control purposes, the workflow may be a view of real work under a chosen aspect.
The term functions is often used synonymously both for a delimited corporate function/action (activity) area, which is assigned to a corporate function owner, and for the atomic activity (task) at the level of the elementary processes. In order to avoid the double meaning of the term function, the term task can be used for the atomic activities at the level of the elementary processes in accordance with the naming in BPMN. Modern tools also offer the automatic conversion of a task into a process, so that it is possible to create a further level of process decomposition at any time, in which a task must then be upgraded to an elementary process.
The graphical elements used at the level of elementary processes then describe the (temporal-logical) sequence with the help of functions (tasks). The sequence of the functions (tasks) within the elementary processes is determined by their logical linking with each other (by logical operators or gateways), provided it is not already specified by input/output relationships or milestones. It is common to use additional graphical elements to illustrate interfaces, states (events), conditions (rules), milestones, etc. in order to better clarify the process. Depending on the modeling tool used, very different graphical representations (models) are used.
Furthermore, the functions (tasks) can be supplemented with graphical elements to describe inputs, outputs, systems, roles, etc. with the aim of improving the accuracy of the description and/or increasing the number of details. However, these additions quickly make the model confusing. To resolve the contradiction between accuracy of description and clarity, there are two main solutions: outsourcing the additional graphical elements for describing inputs, outputs, systems, roles, etc. to a function allocation diagram (FAD) or selectively showing/hiding these elements depending on the question/application.
The function allocation diagram shown in the image illustrates the addition of graphical elements for the description of inputs, outputs, systems, roles, etc. to functions (tasks) very well.
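The kind of information a function allocation diagram attaches to a single task can likewise be expressed as plain data. The following sketch uses invented task, role and system names; which attributes are recorded in practice depends on the modeling tool and the chosen view concept.

```python
# Minimal sketch (invented names): the information a function allocation
# diagram attaches to a single task, kept separate from the control flow
# so that the flow model itself stays readable.
from dataclasses import dataclass, field

@dataclass
class TaskAllocation:
    task: str
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    responsible_role: str = ""
    it_systems: list[str] = field(default_factory=list)

check_credit = TaskAllocation(
    task="Check customer credit",
    inputs=["Customer master data", "Order value"],
    outputs=["Credit decision"],
    responsible_role="Sales back office",
    it_systems=["ERP", "Credit rating service"],
)
print(check_credit)
```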
The term master data is defined neither by The Open Group (The Open Group Architecture Framework, TOGAF) nor by John A. Zachman (Zachman Framework), nor by any of the five relevant German-speaking schools of business informatics: 1) August W. Scheer, 2) Hubert Österle, 3) Otto K. Ferstl and Elmar J. Sinz, 4) Hermann Gehring and 5) Andreas Gadatsch; it is commonly used in the absence of a suitable term in the literature. It is based on the general term for data that represents basic information about operationally relevant objects and refers to basic information that is not primary information of the business process.
For August W. Scheer in ARIS, this would be the basic information of the organization view, data view, function view and performance view.[25](Chapter 1 The vision: A common language for IT and management) ← automatic translation from German
For Andreas Gadatsch in GPM (Ganzheitliche Prozessmodellierung, German for holistic process modelling), this would be the basic information of the organizational structure view, activity structure view, data structure view, and application structure view.[3](Chapter 3.2 GPM – Holistic process modelling) ← automatic translation from German
For Otto K. Ferstl and Elmar J. Sinz in SOM (Semantisches Objektmodell, semantic object model), this would be the basic information of the levels business plan and resources.
Master data can be, for example:
By adding master data to the business process modeling, the same business process model can be used for different applications, and a return on investment for the business process modeling can be achieved more quickly with the resulting synergy.
Depending on how much value is given to master data in business process modeling, the master data can either be embedded in the process model, provided this does not negatively affect the readability of the model, or be outsourced to a separate view, e.g. function allocation diagrams.
If master data is systematically added to the business process model, this is referred to as an artifact-centric business process model.
The artifact-centric business process model has emerged as a holistic approach for modeling business processes, as it provides a highly flexible solution to capture operational specifications of business processes. It particularly focuses on describing the data of business processes, known as "artifacts", by characterizing business-relevant data objects, their life-cycles, and related services. The artifact-centric process modelling approach fosters the automation of business operations and supports flexibility in workflow enactment and evolution.[26]
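To make the artifact-centric idea concrete, the following sketch models a single invented artifact (an order) whose life-cycle is a set of allowed state transitions triggered by services. States, services and transitions are illustrative assumptions, not taken from the cited literature.

```python
# Minimal sketch (invented states and transitions): an "artifact" in the
# artifact-centric sense, i.e. a business-relevant data object whose
# life-cycle is modelled as allowed state transitions triggered by services.
class OrderArtifact:
    TRANSITIONS = {
        ("created", "approve"):  "approved",
        ("approved", "ship"):    "shipped",
        ("shipped", "invoice"):  "invoiced",
        ("created", "cancel"):   "cancelled",
        ("approved", "cancel"):  "cancelled",
    }

    def __init__(self, order_id: str):
        self.order_id = order_id
        self.state = "created"

    def apply(self, service: str) -> None:
        key = (self.state, service)
        if key not in self.TRANSITIONS:
            raise ValueError(f"Service '{service}' not allowed in state '{self.state}'")
        self.state = self.TRANSITIONS[key]

order = OrderArtifact("A-4711")
order.apply("approve")
order.apply("ship")
print(order.state)  # shipped
```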
The integration of external documents and IT-systems can significantly increase the added value of a business process model.
For example, direct access to objects in a knowledge database or documents in a rule framework can significantly increase the benefits of the business process model in everyday life and thus the acceptance of business process modeling. All IT-systems involved can exploit their specific advantages and cross-fertilize each other (e.g. link to each other or standardize the filing structure):
If all relevant objects of the knowledge database and/or documents of the rule framework are connected to the processes, the end users have context-related access to this information and do not need to be familiar with the respective filing structure of the connected systems.
The direct connection of external systems can also be used to integrate current measurement results or system statuses into the processes (and, for example, to display the current operating status of the processes), to display widgets and show output from external systems, or to jump to external systems and initiate a transaction there with a preconfigured dialog.
Further connections to external systems can be used, for example, for electronic data interchange (EDI).
This is about checking whether there are any redundancies. If so, the relevant sub-processes are combined. Or sub-processes that are used more than once are outsourced to support processes. For a successful model consolidation, it may be necessary to revise the original decomposition of the sub-processes.
Ansgar Schwegmann and Michael Laske explain: "A consolidation of the models of different modeling complexes is necessary in order to obtain an integrated ... model."[23](Chapter 5.2.4 Model consolidation) ← automatic translation from German. They also list a number of aspects for which model consolidation is important:
The chaining of the sub-processes with each other and the chaining of the functions (tasks) in the sub-processes are modeled using control flow patterns.
Material details of the chaining (What does the predecessor deliver to the successor?) are specified in the process interfaces if intended.
Process interfaces are defined in order to
As a rule, this what and its structure are determined by the requirements in the subsequent process.
Process interfaces represent the exit from the current business process/sub-process and the entry into the subsequent business process/sub-process.
Process interfaces are therefore description elements for linking processes section by section. A process interface can
Process interfaces are agreed between the participants of superordinate/subordinate or neighboring business process models. They are defined and linked once and used as often as required in process models.
Interfaces can be defined by:
In real terms, the transferred inputs/outputs are often data or information, but any other business objects are also conceivable (material, products in their final or semi-finished state, documents such as a delivery bill). They are provided via suitable transport media (e.g. data storage in the case of data).
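A process interface agreed in this way can be recorded as a small, reusable description element. The following sketch uses invented process and object names; the fields shown (delivering and receiving process, transferred business objects, transport medium) follow the description above.

```python
# Minimal sketch (invented names): a process interface as an agreed handover
# between a delivering and a receiving (sub-)process, naming what is
# transferred and on which medium.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessInterface:
    name: str
    delivering_process: str
    receiving_process: str
    transferred_objects: tuple[str, ...]   # data, documents, material, ...
    transport_medium: str

order_handover = ProcessInterface(
    name="Confirmed order",
    delivering_process="Capture order",
    receiving_process="Fulfil order",
    transferred_objects=("Order record", "Delivery date"),
    transport_medium="ERP database",
)
print(order_handover)
```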
See article Business process management.
In order to put improved business processes into practice, change management programs are usually required. With advances in software design, the vision of BPM models being fully executable (enabling simulations and round-trip engineering) is getting closer to reality.
In business process management, process flows are regularly reviewed and optimized (adapted) if necessary. Regardless of whether this adaptation of process flows is triggered by continuous process improvement or by process reorganization (business process re-engineering), it entails an update of individual sub-processes or an entire business process.
In practice, combinations of informal, semiformal and formal models are common: informal textual descriptions for explanation, semiformal graphical representation for visualization, and formal language representation to support simulation and transfer into executable code.
There are various standards for notations; the most common are:
Furthermore:
In addition, representation types fromsoftware architecturecan also be used:
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
An event-driven process chain (EPC) is a type of flow chart for business process modeling. EPC can be used to configure enterprise resource planning execution, and for business process improvement. It can be used to control an autonomous workflow instance in work sharing.
A Petri net, also known as a place/transition net (PT net), is one of several mathematical modeling languages for the description of distributed systems. It is a class of discrete event dynamic system. A Petri net is a directed bipartite graph that has two types of elements: places and transitions. Place elements are depicted as white circles and transition elements are depicted as rectangles.
A place can contain any number of tokens, depicted as black circles. A transition is enabled if all places connected to it as inputs contain at least one token. Some sources[33] state that Petri nets were invented in August 1939 by Carl Adam Petri — at the age of 13 — for the purpose of describing chemical processes.
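The enabling and firing rule just described can be written down in a few lines. The following sketch implements a plain place/transition net with an invented structure and marking; it omits arc weights, capacities and other extensions.

```python
# Minimal sketch: a place/transition net with the enabling and firing rule
# described above (a transition is enabled when every input place holds at
# least one token; firing consumes one token per input place and produces
# one per output place). Net structure and marking are invented for
# illustration.
class PetriNet:
    def __init__(self, transitions: dict[str, tuple[list[str], list[str]]],
                 marking: dict[str, int]):
        self.transitions = transitions   # name -> (input places, output places)
        self.marking = dict(marking)     # place -> token count

    def enabled(self, t: str) -> bool:
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, t: str) -> None:
        if not self.enabled(t):
            raise ValueError(f"Transition '{t}' is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(
    transitions={"register": (["start"], ["registered"]),
                 "approve":  (["registered"], ["done"])},
    marking={"start": 1},
)
net.fire("register")
net.fire("approve")
print(net.marking)  # {'start': 0, 'registered': 0, 'done': 1}
```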
Like industry standards such as UML activity diagrams, Business Process Model and Notation, and event-driven process chains, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis[citation needed].
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task.
The Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering. It supports the full lifecycle: conceptual, utilization, support and retirement stages, along with the integration of all lifecycle disciplines, including program management, systems and design engineering, verification and validation, and deployment and maintenance, into one framework.[38]LML was originally designed by the LML steering committee. The specification was published October 17, 2013.
Subject-oriented business process management (S-BPM) is a communication-based view of actors (the subjects), which compose a business process orchestration or choreography.[40]The modeling paradigm uses five symbols to model any process and allows direct transformation into executable form.
Each business process consists of two or more subjects which exchange messages. Each subject has an internal behavior (encapsulation), which is defined as a control flow between the states receive message, send message and do something. For practical use and syntactic sugar there are further elements available, but they are not necessary.
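A minimal sketch of this subject-oriented style is shown below: two invented subjects exchange messages over simple in-memory inboxes, and each subject's behaviour alternates between doing something, sending and receiving. It is only meant to illustrate the communication view, not any particular S-BPM tool.

```python
# Minimal sketch (invented subjects and messages): two S-BPM-style subjects
# communicating over shared message queues; their internal behaviour
# alternates between "do", "send" and "receive" states.
from collections import deque

inbox: dict[str, deque] = {"Customer": deque(), "Supplier": deque()}

def send(sender: str, receiver: str, message: str) -> None:
    inbox[receiver].append((sender, message))

def receive(subject: str):
    return inbox[subject].popleft() if inbox[subject] else None

# Customer: do (prepare order) -> send -> later receive the confirmation.
send("Customer", "Supplier", "order: 10 units")

# Supplier: receive -> do (check stock) -> send a confirmation back.
sender, msg = receive("Supplier")
send("Supplier", sender, "confirmation: " + msg)

print(receive("Customer"))  # ('Supplier', 'confirmation: order: 10 units')
```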
Cognition enhanced Natural language Information Analysis Method (CogNIAM) is a conceptual fact-based modelling method that aims to integrate the different dimensions of knowledge: data, rules, processes and semantics. To represent these dimensions, the world standards SBVR, BPMN and DMN from the Object Management Group (OMG) are used. CogNIAM, a successor of NIAM, is based on the work of knowledge scientist Sjir Nijssen.[citation needed]
The Unified Modeling Language (UML) is a general-purpose visual modeling language that is intended to provide a standard way to visualize the design of a system.[45]
UML provides a standard notation for many types of diagrams which can be roughly divided into three main groups: behavior diagrams, interaction diagrams, and structure diagrams.
The creation of UML was originally motivated by the desire to standardize the disparate notational systems and approaches to software design. It was developed atRational Softwarein 1994–1995, with further development led by them through 1996.[46]
In 1997, UML was adopted as a standard by theObject Management Group(OMG) and has been managed by this organization ever since. In 2005, UML was also published by theInternational Organization for Standardization(ISO) and theInternational Electrotechnical Commission(IEC) as the ISO/IEC 19501 standard.[47]Since then the standard has been periodically revised to cover the latest revision of UML.[48]
IDEF, initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition, is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses from functional modeling to data, simulation, object-oriented analysis and design, and knowledge acquisition. These definition languages were developed under funding from the U.S. Air Force and, although still most commonly used by them and other military and United States Department of Defense (DoD) agencies, are in the public domain.
Harbarian process modeling (HPM) is a method for obtaining internal process information from an organization and then documenting that information in a visually effective, simple manner.
The HPM method involves two levels:
Business process modelling tools provide business users with the ability to model their business processes, implement and execute those models, and refine the models based on as-executed data. As a result, business process modelling tools can provide transparency into business processes, as well as the centralization of corporate business process models and execution metrics.[51]Modelling tools may also enable collaborative modelling of complex processes by users working in teams, where users can share and simulate models collaboratively.[52]Business process modelling tools should not be confused with business process automation systems – both practices share modeling the process as the initial step, but process automation produces an 'executable diagram', which is drastically different from what traditional graphical business process modelling tools provide.[citation needed]
BPM suite software provides programming interfaces (web services, application program interfaces (APIs)) which allow enterprise applications to be built to leverage the BPM engine.[51]This component is often referenced as the engine of the BPM suite.
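How such an engine API might be called from an enterprise application is sketched below. The base URL, endpoint path and payload fields are hypothetical placeholders and do not correspond to the API of any particular BPM suite.

```python
# Minimal sketch: starting a process instance through a BPM engine's
# web-service API. The base URL, endpoint path and payload fields are
# hypothetical placeholders, not the API of any specific product.
import json
import urllib.request

def start_process_instance(base_url: str, process_key: str, variables: dict) -> dict:
    payload = json.dumps({"processKey": process_key, "variables": variables}).encode()
    request = urllib.request.Request(
        url=f"{base_url}/process-instances",      # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example call (requires a running engine exposing such an endpoint):
# instance = start_process_instance("https://bpm.example.com/api",
#                                   "order-to-delivery",
#                                   {"orderId": "A-4711"})
```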
Programming languages that are being introduced for BPM include:[53]
Some vendor-specific languages:
Other technologies related to business process modelling includemodel-driven architectureandservice-oriented architecture.
The simulation functionality of such tools allows for pre-execution "what-if" modelling (which has particular requirements for this application) and simulation. Post-execution optimization is available based on the analysis of actual as-performed metrics.[51]
A business reference model is a reference model, concentrating on the functional and organizational aspects of an enterprise, service organization, or government agency. In general, a reference model is a model of something that embodies the basic goal or idea of something and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference models can also depict the relationship between the business processes, business functions, and the business area's business reference model. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance.
The most familiar business reference model is the Business Reference Model of the US federal government. That model is a function-driven framework for describing the business operations of the federal government independent of the agencies that perform them. The Business Reference Model provides an organized, hierarchical construct for describing the day-to-day business operations of the federal government. While many models exist for describing organizations – organizational charts, location maps, etc. – this model presents the business using a functionally driven approach.[55]
A business model, which may be considered an elaboration of a business process model, typically shows business data and business organizations as well as business processes. By showing business processes and their information flows, a business model allows business stakeholders to define, understand, and validate their business enterprise. The data model part of the business model shows how business information is stored, which is useful for developing software code. See the figure on the right for an example of the interaction between business process models and data models.[56]
Usually, a business model is created after conducting an interview, which is part of the business analysis process. The interview consists of a facilitator asking a series of questions to extract information about the subject business process. The interviewer is referred to as a facilitator to emphasize that it is the participants, not the facilitator, who provide the business process information. The facilitator should have some knowledge of the subject business process, but this is not as important as mastery of a pragmatic and rigorous method for interviewing business experts. The method is important because for most enterprises a team of facilitators is needed to collect information across the enterprise, and the findings of all the interviewers must be compiled and integrated once completed.[56]
Business models are developed to define either the current state of the process, resulting in the 'as is' snapshot model, or a vision of what the process should evolve into, leading to a 'to be' model. By comparing and contrasting the 'as is' and 'to be' models, business analysts can determine if existing business processes and information systems require minor modifications or if reengineering is necessary to enhance efficiency. As a result, business process modeling and subsequent analysis can fundamentally reshape the way an enterprise conducts its operations.[56]
Business process reengineering (BPR) aims to improve the efficiency and effectiveness of the processes that exist within and across organizations. It examines business processes from a "clean slate" perspective to determine how best to construct them.
Business process re-engineering (BPR) began as a private sector technique to help organizations fundamentally rethink how they do their work. A key stimulus for re-engineering has been the development and deployment of sophisticated information systems and networks. Leading organizations use this technology to support innovative business processes, rather than refining current ways of doing work.[57]
Change management programs are typically involved to put any improved business processes into practice. With advances in software design, the vision of BPM models becoming fully executable (and capable of simulations and round-trip engineering) is coming closer to reality.
In business process management, process flows are regularly reviewed and, if necessary, optimized (adapted). Regardless of whether this adaptation of process flows is triggered bycontinual improvement processor business process re-engineering, it entails updating individual sub-processes or an entire business process.
|
https://en.wikipedia.org/wiki/Business_process_model
|
Core architecture data model (CADM) in enterprise architecture is a logical data model of information used to describe and build architectures.[2]
The CADM is essentially a common database schema, defined within the US Department of Defense Architecture Framework DoDAF. It was initially published in 1997 as a logical data model for architecture data.[3]
Core architecture data model (CADM) is designed to capture DoDAF architecture information in a standardized structure.[4]CADM was developed to support the data requirements of the DoDAF. The CADM defines the entities and relationships for DoDAF architecture data elements that enable integration within and across architecture descriptions. In this manner, the CADM supports the exchange of architecture information among mission areas, components, and federal and coalition partners, thus facilitating the data interoperability of architectures.[5]
CADM is a critical aspect of being able to integrate architectures in conformance with DoDAF. This includes the use of common data element definitions, semantics, and data structure for all architecture description entities or objects. The use of the underlying CADM faithfully relates common objects across multiple views. Adherence with the framework, which includes conformance with the currently approved version of CADM, provides both a common approach for developing architectures and a basic foundation for relating architectures. Conformance with the CADM ensures the use of common architecture data elements (or types).[5]
The CADM was initially published in 1997 as a logical data model for architecture data. It was revised in 1998 to meet all the requirements of the C4ISR Architecture Framework Version 2.0. As a logical data model, the initial CADM provided a conceptual view of how architecture information is organized. It identified and defined entities, attributes, and relations. The CADM has evolved since 1998, so that it now has a physical view providing the data types, abbreviated physical names, and domain values that are needed for a database implementation. Because the CADM is also a physical data model, it constitutes a database design and can be used to automatically generate databases.[3]
The CADM v1.01 was released with the DoD Architecture Framework v1.0 in August 2003. This DoDAF version restructured the C4ISR Framework v2.0 to offer guidance, product descriptions, and supplementary information in two volumes and a desk book. It broadened the applicability of architecture tenets and practices to all mission areas rather than just the C4ISR community. This document addressed usage, integrated architectures, DoD and Federal policies, value of architecture, architecture measures, DoD decision support processes, development techniques, analytical techniques, and the CADM v1.01, and moved towards a repository-based approach by placing emphasis on architecture data elements that comprise architecture products.[5]
The CADM v1.5 was pre-released with the DoD Architecture Framework, v1.5 in April 2007. The DoDAF v1.5 was an evolution of the DoDAF v1.0 and reflects and leverages the experience that the DoD components have gained in developing and using architecture descriptions. This transitional version provided additional guidance on how to reflect net-centric concepts within architecture descriptions, includes information on architecture data management and federating architectures through the department, and incorporates the pre-release CADM v1.5, a simplified model of previous CADM versions that includes net-centric elements. Pre-release CADM v1.5 is also backward compatible with previous CADM versions. Data sets built in accordance with the vocabulary of CADM v1.02/1.03 can be expressed faithfully and completely using the constructs of CADM v1.5.[5]
Note: For DoDAF V2.0, the DoDAF Meta-model (DM2) is intended to replace the core architecture data model (CADM), which supported previous versions of the DoDAF. DM2 is a data construct that facilitates reader understanding of the use of data within an architecture document. CADM can continue to be used in support of architectures created in previous versions of DoDAF.
The major elements of a core architecture data model are described as follows:[3]
The DoDAF incorporatesdata modeling(CADM) and visualization aspects (products and views) to support architecture analysis. The DoDAF's data model, CADM, defines architecture data entities, the relationships between them, and the data entity attributes, essentially specifying the “grammar” for the architecture community. It contains a set of “nouns,” “verbs,” and “adjectives” that, together with the “grammar,” allow one to create “sentences” about architecture artifacts that are consistent with the DoDAF. The CADM is a necessary aspect of the architecture and provides the meaning behind the architectural visual representations (products). It enables the effective comparing and sharing of architecture data across the enterprise, contributing to the overall usefulness of architectures. The CADM describes the following data model levels in further detail:[5]
Data visualization is a way of graphically or textually representing architecture data to support decision-making analysis. The DoDAF provides products as a way of representing the underlying data in a user-friendly manner. In some cases, the existing DoDAF products are sufficient for representing the required information. Regardless of how one chooses to represent the architecture description, the underlying data (CADM) remains consistent, providing a common foundation to which analysis requirements are mapped.[5]
As illustrated in the figure, boxes represent entities for which architecture data are collected (representing tables when used for a relational database); they are depicted by open boxes with square corners (independent entities) or rounded corners (dependent entities). The entity name is outside and on top of the open box. The lines of text inside the box denote the attributes of that entity (representing columns in the entity table when used for a relational database). The horizontal line in each box separates the primary key attributes (used to find unique instances of the entity) from the non-key descriptive attributes.[1]
The symbol with a circle and line underneath indicates subtyping, for which all the entities connected below are non-overlapping subsets of the entity connected at the top of the symbol. Relationships are represented by dotted (non-identifying) and solid (identifying) relationships in which the child entity (the one nearest the solid dot) has zero, one, or many instances associated to each instance of the parent entity (the other entity connected by the relationship line).[1]
An architecture data repository responsive to the architecture products of the DoDAF contains information on basic architectural elements such as the following:[3]
The depicted (conceptual) relationships shown in this diagram include the following (among many others):[3]
With these relationships, many types of architectural and related information can be represented such as networks, information flows, information requirements, interfaces, and so forth.[3]
The counterpart to CADM withinNASAis the NASA Exploration Information Ontology Model (NeXIOM), which is designed to capture and expressively describe the engineering and programmatic data that drives exploration program decisions. NeXIOM is intended to be a repository that can be accessed by various simulation tools and models that need to exchange information and data.[4]
|
https://en.wikipedia.org/wiki/Core_architecture_data_model
|
Acommon data model(CDM) can refer to any standardiseddata modelwhich allows fordataandinformation exchangebetween differentapplicationsanddata sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data",[1]and can be seen as a way to "organize data from many sources that are in different formats into a standard structure".[2]
A common data model has been described as one of the components of a "strong information system".[3]A standardised common data model has also been described as a typical component of a well designedagile applicationbesides a common communication protocol.[4]Providing a single common data model within an organisation is one of the typical tasks of adata warehouse.
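As a rough illustration of the idea, not tied to any particular product, the following sketch maps records from two hypothetical sources with different field shapes into one shared structure; all field names, keys and sample values are invented for the example.

```python
# Hypothetical illustration: normalising two differently-shaped source records
# into a single common structure so related applications can share the data.

COMMON_FIELDS = ("customer_id", "full_name", "email")

def from_crm(record: dict) -> dict:
    # A CRM export using "CustID" / "Name" / "Mail" (invented field names).
    return {"customer_id": record["CustID"],
            "full_name": record["Name"],
            "email": record["Mail"]}

def from_webshop(record: dict) -> dict:
    # A web-shop export that splits the name and uses different keys (also invented).
    return {"customer_id": record["id"],
            "full_name": f'{record["first"]} {record["last"]}',
            "email": record["email_address"]}

if __name__ == "__main__":
    crm_row = {"CustID": 17, "Name": "Ada Lovelace", "Mail": "ada@example.org"}
    shop_row = {"id": 17, "first": "Ada", "last": "Lovelace",
                "email_address": "ada@example.org"}
    # Both sources now yield the same common-model record.
    print(from_crm(crm_row) == from_webshop(shop_row))  # True
```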
X-trans.euwas a cross-border pilot project between theFree State of Bavaria(Germany) andUpper Austriawith the aim of developing a faster procedure for the application and approval of cross-border large-capacity transports. The portal was based on a common data model that contained all the information required for approval.
TheClimate Data Store Common Data Modelis a common data model set up by theCopernicus Climate Change Servicefor harmonising essentialclimate variablesfrom different sources and data providers.
Withinservice-oriented architecture, S-RAMP is a specification released byHP,IBM,Software AG,TIBCO, andRed Hat[5]which defines a common data model for SOA repositories[6]as well as an interaction protocol to facilitate the use of common tooling and sharing of data.[7]
Content Management Interoperability Services(CMIS) is an open standard for inter-operation of differentcontent management systemsover the internet, and provides a common data model for typed files and folders used withversion control.[8]
The NetCDF software libraries for array-oriented scientific data implements a common data model called theNetCDF Java common data model, which consists of three layers built on top of each other to add successively richer semantics.
Withingenomic and medical data, the Observational Medical Outcomes Partnership (OMOP) research program established under the U.S.National Institutes of Healthhas created a common data model for claims and electronic health records which can accommodate data from different sources around the world. PCORnet, which was developed by thePatient-Centered Outcomes Research Institute, is another common data model for health data including electronic health records and patient claims. The Sentinel Common Data Model was initially started as Mini-Sentinel in 2008. It is used by the Sentinel Initiative of the USA's Food and Drug Administration. The Generalized Data Model was first published in 2019.[9]It was designed to be a stand-alone data model as well as to allow for further transformation into other data models (e.g., OMOP, PCORNet, Sentinel). It has a hierarchical structure to flexibly capture relationships among data elements. TheJANUS clinical trial data repositoryalso provides a common data model which is based on theSDTMstandard to represent clinical data submitted to regulatory agencies, such as tabulation datasets, patient profiles, listings, etc.
SX000iis a specification developed jointly by theAerospace and Defence Industries Association of Europe(ASD) and the AmericanAerospace Industries Association(AIA) to provide information, guidance and instructions to ensure compatibility and the commonality. The associated SX002D specification contains a common data model.
The Microsoft Common Data Model is a collection of many standardised, extensible data schemas with entities, attributes, semantic metadata, and relationships, which represent commonly used concepts and activities in various business areas. It is maintained by Microsoft and its partners, and is published on GitHub.[10] Microsoft's Common Data Model is used, amongst others, in Microsoft Dataverse[11] and with various Microsoft Power Platform[12] and Microsoft Dynamics 365[13] services.
RailTopoModelis a common data model for therailway sector.[14]
There are many more examples of various common data models for different uses published by different sources.[15][16][17][18][19]
|
https://en.wikipedia.org/wiki/Common_data_model
|
Data collection system(DCS) is acomputer applicationthat facilitates the process ofdata collection, allowing specific, structured information to be gathered in a systematic fashion, subsequently enablingdata analysisto be performed on the information.[1][2][3]Typically a DCS displays a form that accepts data input from a user and then validates that input prior to committing the data to persistent storage such as a database.
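A minimal sketch of this validate-before-commit behaviour is given below; the field names, validation rules and in-memory store are invented for illustration and do not correspond to any particular DCS product.

```python
# Minimal sketch of a data collection form: validate user input before
# committing it to persistent storage (an in-memory list stands in for a
# database). Field names and rules are invented for the example.

FORM_FIELDS = {
    "name": lambda v: isinstance(v, str) and v.strip() != "",
    "age":  lambda v: isinstance(v, int) and 0 <= v <= 130,
}

storage = []  # stand-in for a database table

def submit(form_data: dict) -> bool:
    """Validate a submission; commit it only if every field passes."""
    errors = [f for f, ok in FORM_FIELDS.items()
              if f not in form_data or not ok(form_data[f])]
    if errors:
        print("rejected, invalid fields:", errors)
        return False
    storage.append(form_data)
    return True

submit({"name": "Ada", "age": 36})   # accepted and stored
submit({"name": "", "age": 200})     # rejected before reaching storage
```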
Many computer systems implement data entry forms, but data collection systems tend to be more complex, with possibly many related forms containing detailed user input fields, data validations, and navigation links among the forms.
DCSs can be considered a specialized form ofcontent management system(CMS), particularly when they allow the information being gathered to be published, edited, modified, deleted, and maintained. Some general-purpose CMSs include features of DCSs.[4][5]
Accurate data collection is essential to manybusiness processes,[6][7][8]to the enforcement of many governmentregulations,[9]and to maintaining the integrity of scientific research.[10]
Data collection systems are an end-product ofsoftware development. Identifying and categorizing software or a software sub-system as having aspects of, or as actually being a "Data collection system" is very important. This categorization allows encyclopedic knowledge to be gathered and applied in the design and implementation of future systems. Insoftware design, it is very important to identify generalizations andpatternsand tore-useexisting knowledge whenever possible.[11]
Generally the computer software used fordata collectionfalls into one of the following categories of practical application.[12]
There is ataxonomic schemeassociated with data collection systems, with readily-identifiable synonyms used by different industries and organizations.[23][24][25]Cataloging the most commonly used and widely accepted vocabulary improves efficiencies, helps reduce variations, and improves data quality.[26][27][28]
The vocabulary of data collection systems stems from the fact that these systems are often a software representation of what would otherwise be a paper data collectionformwith a complex internal structure of sections and sub-sections. Modeling these structures and relationships in software yields technical terms describing thehierarchyofdata containers, along with a set of industry-specific synonyms.[29][30]
Acollection(used as a noun) is the topmost container for grouping related documents,data models, anddatasets. Typical vocabulary at this level includes the terms:[29]
Each document ordatasetwithin acollectionis modeled in software. Constructing these models is part of designing or "authoring" the expected data to be collected. The terminology for thesedata modelsincludes:[29]
Data modelsare oftenhierarchical, containing sub-collections ormaster–detailstructures described with terms such as:[29]
At the lowest level of thedata modelare thedata elementsthat describe individual pieces of data. Synonyms include:[29][32]
Moving from the abstract,domain modellingfacet to that of the concrete, actual data: the lowest level here is thedata pointwithin adataset. Synonyms fordata pointinclude:[29]
Finally, the synonyms fordatasetinclude:[29]
|
https://en.wikipedia.org/wiki/Data_collection_system
|
Adata dictionary, ormetadata repository, as defined in theIBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format".[1]Oracledefines it as a collection of tables with metadata. The term can have one of several closely related meanings pertaining todatabasesanddatabase management systems(DBMS):
The termsdata dictionaryanddata repositoryindicate a more general software utility than a catalogue. Acatalogueis closely coupled with the DBMS software. It provides the information stored in it to the user and the DBA, but it is mainly accessed by the various software modules of the DBMS itself, such asDDLandDMLcompilers, the query optimiser, the transaction processor, report generators, and the constraint enforcer. On the other hand, adata dictionaryis a data structure that storesmetadata, i.e., (structured) data about information. The software package for a stand-alone data dictionary or data repository may interact with the software modules of the DBMS, but it is mainly used by the designers, users and administrators of a computer system for information resource management. These systems maintain information on system hardware and software configuration, documentation, application and users as well as other information relevant to system administration.[2]
If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS Software, it is called apassive data dictionary.Otherwise, it is called anactive data dictionaryordata dictionary.When a passive data dictionary is updated, it is done so manually and independently from any changes to a DBMS (database) structure. With an active data dictionary, the dictionary is updated first and changes occur in the DBMS automatically as a result.
Databaseusersandapplicationdevelopers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases.[3]This typically includes the names and descriptions of varioustables(recordsorentities) and their contents (fields) plus additional details, like thetypeand length of eachdata element. Another important piece of information that a data dictionary can provide is the relationship between tables. This is sometimes referred to inentity-relationshipdiagrams (ERDs), or if using set descriptors, identifying which sets database tables participate in.
In an active data dictionary constraints may be placed upon the underlying data. For instance, a range may be imposed on the value of numeric data in a data element (field), or a record in a table may be forced to participate in a set relationship with another record-type. Additionally, a distributed DBMS may have certain location specifics described within its active data dictionary (e.g. where tables are physically located).
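As a rough illustration of how such dictionary-defined constraints might be checked in application code, the sketch below stores per-field metadata, including a numeric range, and validates a record against it; the entry structure and field names are invented and do not reflect any DBMS's actual catalogue format.

```python
# Illustrative sketch: a tiny "data dictionary" describing the fields of a
# table, including a range constraint, plus a check that enforces it.
# Structure and names are invented for the example.

data_dictionary = {
    "employee": {
        "emp_id": {"type": int,   "description": "primary key"},
        "name":   {"type": str,   "description": "full name"},
        "salary": {"type": float, "description": "annual salary",
                   "range": (0.0, 1_000_000.0)},
    }
}

def check_record(table: str, record: dict) -> list[str]:
    """Return a list of constraint violations for a record."""
    problems = []
    for field, meta in data_dictionary[table].items():
        value = record.get(field)
        if not isinstance(value, meta["type"]):
            problems.append(f"{field}: expected {meta['type'].__name__}")
        elif "range" in meta and not (meta["range"][0] <= value <= meta["range"][1]):
            problems.append(f"{field}: out of range {meta['range']}")
    return problems

print(check_record("employee", {"emp_id": 1, "name": "Ada", "salary": -5.0}))
# -> ['salary: out of range (0.0, 1000000.0)']
```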
The data dictionary consists of record types (tables) created in the database by systems generated command files, tailored for each supported back-end DBMS. Oracle has a list of specific views for the "sys" user. This allows users to look up the exact information that is needed. Command files contain SQL Statements forCREATE TABLE,CREATE UNIQUE INDEX,ALTER TABLE(for referential integrity), etc., using the specific statement required by that type of database.
There is no universal standard as to the level of detail in such a document.
In the construction of database applications, it can be useful to introduce an additional layer of data dictionary software, i.e.middleware, which communicates with the underlying DBMS data dictionary. Such a "high-level" data dictionary may offer additional features and a degree of flexibility that goes beyond the limitations of the native "low-level" data dictionary, whose primary purpose is to support the basic functions of the DBMS, not the requirements of a typical application. For example, a high-level data dictionary can provide alternativeentity-relationship modelstailored to suit different applications that share a common database.[4]Extensions to the data dictionary also can assist inquery optimizationagainstdistributed databases.[5]Additionally, DBA functions are often automated using restructuring tools that are tightly coupled to an active data dictionary.
Software frameworksaimed atrapid application developmentsometimes include high-level data dictionary facilities, which can substantially reduce the amount of programming required to buildmenus,forms, reports, and other components of a database application, including the database itself. For example, PHPLens includes aPHPclass libraryto automate the creation of tables, indexes, andforeign keyconstraintsportablyfor multiple databases.[6]Another PHP-based data dictionary, part of the RADICORE toolkit, automatically generates programobjects,scripts, and SQL code for menus and forms withdata validationand complexjoins.[7]For theASP.NETenvironment,Base One'sdata dictionary provides cross-DBMS facilities for automated database creation, data validation, performance enhancement (cachingand index utilization),application security, and extendeddata types.[8]Visual DataFlexfeatures[9]provides the ability to use DataDictionaries as class files to form middle layer between the user interface and the underlying database. The intent is to create standardized rules to maintain data integrity and enforce business rules throughout one or more related applications.
Some industries use generalized data dictionaries as technical standards to ensure interoperability between systems. The real estate industry, for example, abides by RESO's Data Dictionary, with which the National Association of REALTORS mandates[10] its MLSs comply through its policy handbook.[11] This intermediate mapping layer for MLSs' native databases is supported by software companies which provide API services to MLS organizations.
Developers use adata description specification(DDS) to describe data attributes in file descriptions that are external to the application program that processes the data, in the context of anIBM i.[12]Thesys.ts$table in Oracle stores information about every table in the database. It is part of the data dictionary that is created when theOracle Databaseis created.[13]Developers may also use DDS context fromfree and open-source software(FOSS) for structured and transactional queries in open environments.
Here is a non-exhaustive list of typical items found in a data dictionary for columns or fields:
|
https://en.wikipedia.org/wiki/Data_dictionary
|
Data Format Description Language(DFDL, often pronounceddaff-o-dil) is a modeling language for describing general text andbinary datain a standard way. It was published as anOpen Grid ForumRecommendation[1]in February 2021, and in April 2024 was published as anISOstandard.[2]
A DFDL model or schema allows any text or binary data to be read (or "parsed") from its native format and to be presented as an instance of aninformation set. (An information set is a logical representation of the data contents, independent of the physical format. For example, two records could be in different formats, because one has fixed-length fields and the other uses delimiters, but they could contain exactly the same data, and would both be represented by the same information set). The same DFDL schema also allows data to be taken from an instance of an information set and written out (or "serialized") to its native format.
DFDL isdescriptiveand notprescriptive. DFDL is not a data format, nor does it impose the use of any particular data format. Instead it provides a standard way of describing many different kinds of data formats. This approach has several advantages.[3]It allows an application author to design an appropriate data representation according to their requirements while describing it in a standard way which can be shared, enabling multiple programs to directly interchange the data.
DFDL achieves this by building upon the facilities ofW3C XML Schema 1.0. A subset of XML Schema is used, enough to enable the modeling of non-XML data. The motivations for this approach are to avoid inventing a completely new schema language, and to make it easy to convert general text and binary data, via a DFDL information set, into a corresponding XML document.
Educational material is available in the form of DFDL Tutorials, videos and several hands-on DFDL labs.
DFDL was created in response to a need for grid APIs to be able to understand data regardless of source. A language was needed capable of modeling a wide variety of existing text and binary data formats. Aworking groupwas established at the Global Grid Forum (which later became theOpen Grid Forum) in 2003 to create a specification for such a language.
A decision was made early on to base the language on a subset ofW3C XML Schema, using <xs:appinfo> annotations to carry the extra information necessary to describe non-XML physical representations. This is an established approach that was already being used by 2003 in commercial systems. DFDL takes this approach and evolves it into an open standard capable of describing many text or binary data formats.
Work continued on the language, resulting in the publication of a DFDL 1.0 specification as OGF Proposed Recommendation GFD.174 in January 2011.
The official OGF Recommendation is nowGFD.240published in February 2021 which obsoletes all prior versions and incorporates all issues noted to date (also available ashtml). Asummaryof DFDL and its features is available at the OGF. Any issues with the specification are being tracked using GitHubissue trackers.
In April 2024, DFDL was published asISO/IEC 23415:2024by way of theISO Publicly Available Standards (PAS)process. The standard is available from ISO but will remain publicly available from the Open Grid Forum as well.
Implementations of DFDL processors that can parse and serialize data using DFDL schemas are available.
A public repository for DFDL schemas that describe commercial and scientific data formats has been established onGitHub. DFDL schemas for formats like UN/EDIFACT, NACHA, MIL-STD-2045, NITF, and ISO8583 are available for free download.
Take as an example the following text data stream which gives the name, age and location of a person:
The logical model for this data can be described by the following fragment of an XML Schema document. The order, names, types and cardinality of the fields are expressed by the XML schema model.
To additionally model the physical representation of the data stream, DFDL augments the XML schema fragment with annotations on the xs:element and xs:sequence objects, as follows:
The property attributes on these DFDL annotations express that the data are represented in an ASCII text format with fields being of variable length and delimited by commas
An alternative, more compact syntax is also provided, where DFDL properties are carried as non-native attributes on the XML Schema objects themselves.
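DFDL itself is declarative (the format is described by annotated XML Schema, not by code), but the parse/serialize round trip it enables can be sketched informally. The record layout and values below are invented stand-ins for the example described above, not DFDL syntax.

```python
# Informal illustration of the parse/unparse idea behind DFDL (this is NOT
# DFDL syntax): the same logical "information set" can be read from and
# written back to a delimited text representation. Names are invented.

FIELDS = ("name", "age", "location")   # logical model: ordered named fields

def parse(record: str) -> dict:
    """Text representation -> information set (a plain dict here)."""
    values = record.split(",")
    return {"name": values[0], "age": int(values[1]), "location": values[2]}

def unparse(infoset: dict) -> str:
    """Information set -> the same comma-delimited text representation."""
    return f'{infoset["name"]},{infoset["age"]},{infoset["location"]}'

infoset = parse("Smith,28,Dublin")
assert unparse(infoset) == "Smith,28,Dublin"   # round trip preserves the data
```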
The goal of DFDL is to provide a rich modeling language capable of representing any text or binary data format. The 1.0 release is a major step towards this goal. The capability includes support for:
|
https://en.wikipedia.org/wiki/Data_Format_Description_Language
|
Adistributional–relational database, orword-vector database, is adatabase management system(DBMS) that uses distributionalword-vectorrepresentations to enrich the semantics ofstructured data.
As distributional word-vectors can be built automatically from large-scalecorpora,[1]this enrichment supports the construction of databases which can embed large-scale commonsense background knowledge into their operations. Distributional-Relational models can be applied to the construction ofschema-agnostic databases(databases in which users can query the data without being aware of itsschema),semantic search, schema-integration andinductiveandabductive reasoningas well as different applications in which a semantically flexible knowledge representation model is needed. The main advantage of distributional–relational models over purely logical /semantic webmodels is the fact that the core semantic associations can be automatically captured from corpora, in contrast to the definition of manually curatedontologiesand rule knowledge bases.[2]
Distributional–relational models were first formalized,[3][4]as a mechanism to cope with the vocabulary/semantic gap between users and the schema behind the data. In this scenario,distributional semanticrelatedness measures, combined with semantic pivotingheuristicscan support the approximation between user queries (expressed in their own vocabulary), anddata(expressed in the vocabulary of the designer).
In this model, the database symbols (entities and relations) are embedded into a distributionalsemantic spaceand have ageometricinterpretation under a latent or explicit semantic space. The geometric aspect supports the semantic approximation between entities from different databases, or between a query term and a database entity. The distributional relational model then becomes a double layered model where the semantics of the structured data provides the fine-grained semantics intended by thedatabase designer, which is extended by the distributional semantic model which contains the semantic associations expressed at a broader use.
These models support the generalization from a closed communication scenario (in which database designers and users live in the same context, e.g. the same organization) to an open communication scenario (e.g. different organizations, the Web), creating an abstraction layer between users and the specific representation of the conceptual model.
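A minimal sketch of the geometric approximation described above is shown below, assuming word vectors for schema symbols and query terms are already available; the toy 3-dimensional vectors and schema names are invented purely for illustration, whereas real systems would use vectors learned from large corpora.

```python
# Toy sketch: approximate a user's query term to database schema elements by
# cosine similarity in a word-vector space. Vectors and names are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

schema_vectors = {            # embeddings of schema symbols (entities/relations)
    "employee":   [0.9, 0.1, 0.0],
    "department": [0.2, 0.8, 0.1],
    "salary":     [0.1, 0.2, 0.9],
}

query_term_vector = [0.85, 0.15, 0.05]   # e.g. a vector for the user's word "staff"

best = max(schema_vectors, key=lambda s: cosine(query_term_vector, schema_vectors[s]))
print(best)   # -> "employee": the schema element closest to the query term
```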
|
https://en.wikipedia.org/wiki/Distributional%E2%80%93relational_database
|
JC3IEDM, orJoint Consultation, Command and Control Information Exchange Data Modelis a model that, when implemented, aims to enable the interoperability of systems and projects required to shareCommand and Control (C2)information. JC3IEDM is an evolution of theC2IEDMstandard that includes joint operational concepts, just as the Land Command and Control Information Exchange Data Model (LC2IEDM) was extended to become C2IEDM. The program is managed by theMultilateral Interoperability Programme(MIP).
JC3IEDM is produced by the MIP-NATO Management Board (MNMB) and ratified under NATO STANAG 5525.[1] JC3IEDM is a fully documented standard for an information exchange data model for the sharing of C2 information.
The overall aim of JC3IEDM is to enable "international interoperability of C2 information systems at all levels from corps to battalion (or lowest appropriate level) in order to support multinational (including NATO), combined and joint operations and the advancement of digitisation in the international arena."[2]
According to JC3IEDM's documentation,[3] this aim is to be achieved by "specifying the minimum set of data that needs to be exchanged in coalition or multinational operations. Each nation, agency or community of interest is free to expand its own data dictionary to accommodate its additional information exchange requirements with the understanding that the added specifications will be valid only for the participating nation, agency or community of interest. Any addition that is deemed to be of general interest may be submitted as a change proposal within the configuration control process to be considered for inclusion in the next version of the specification."
"JC3IEDM is intended to represent the core of the data identified for exchange across multiple functional areas and multiple views of the requirements. Toward that end, it lays down a common approach to describing the information to be exchanged in a command and control (C2) environment.
JC3IEDM has been developed from the initial Generic Hub (GH) Data Model, which changed its name to Land C2 Information Exchange Data Model (LC2IEDM) in 1999. Development of the model continued in a Joint context and in November 2003 the C2 Information Exchange Data Model (C2IEDM) Edition 6.1 was released. Additional development to this model, incorporating the NATO Corporate Reference model, resulted in the model changing its name again to JC3IEDM with JC3IEDM Ed 0.5 being issued in December 2004.
Subsequent releases have seen some areas of the model developed in greater depth than others, and there is variation in the number of sub-types and attributes for each type in the current version. An example is HARBOUR within the FACILITY type, which has 43 attributes, compared to a VESSEL-TYPE with 12 attributes or a WEAPON-TYPE with 4 attributes. The attributes of one type also cannot easily be exploited together with those of other types. For example, VESSEL-TYPE does not support the length or width of a vessel in its attributes, but HARBOUR has both maximum vessel length and width attributes.
The UK Ministry of Defence has mandated JC3IEDM as the C2 Information Exchange Model, in Joint Service Publication (JSP) 602:1007, for use on all systems and/or projects exchanging C2 information within and interoperating with the Land Environment at a Strategic and Operational Level. It is strongly recommended for other environments and mandated for all environments at the Tactical level.[4]JSP 602:1005 for Collaborative Services has also mandated JC3IEDM in the tactical domain for all systems/projects providing data sharing collaborative services.[5]
|
https://en.wikipedia.org/wiki/JC3IEDM
|
The termprocess modelis used in various contexts. For example, inbusiness process modelingthe enterprise process model is often referred to as thebusiness process model.
Process models areprocessesof the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development.[2]
The goals of a process model are to be:
From a theoretical point of view, themeta-process modelingexplains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process modeling is aimed at providing guidance for method engineers and application developers.[1]
The activity of modeling a business process usually predicates a need to change processes or identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are typically required to put the processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include Unified Modeling Language (UML), model-driven architecture, and service-oriented architecture.
Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise's systems, data, organizational structure, strategies, etc. create greater capabilities for analyzing and planning a change. One real-world example is corporate mergers and acquisitions: understanding the processes of both companies in detail allows management to identify redundancies, resulting in a smoother merger.
Process modeling has always been a key aspect ofbusiness process reengineering, and continuous improvement approaches seen inSix Sigma.
There are five types of coverage where the term process model has been defined differently:[3]
Processes can be of different kinds.[2]These definitions "correspond to the various ways in which a process can be modelled".
Granularityrefers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand.[2]
Project manager, customer representatives, the general, top-level, or middle management require rather coarse-grained process description as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people.
While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver).[2][7]
It was found that while process models were prescriptive, in actual practice departures from the prescription can occur.[6]Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situationalmethod engineering.
Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'.[8]
Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end there is modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs.[9]
As the quality of process models is being discussed here, there is a need to elaborate on the quality of modeling techniques as an important contributor to the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. The discussion below therefore treats the quality of process modeling techniques and the quality of process models separately, to clearly differentiate the two.
Various frameworks have been developed to help understand the quality of process modeling techniques; one example is the quality-based modeling evaluation framework, known as the Q-ME framework, which is argued to provide a set of well-defined quality properties and procedures that make an objective assessment of these properties possible.[10] This framework also has the advantage of providing a uniform and formal description of model elements within one or more model types using a single modeling technique.[10] In short, it allows both the product quality and the process quality of modeling techniques to be assessed with regard to a set of previously defined properties.
Quality properties that relate tobusiness process modelingtechniques discussed in[10]are:
To assess the quality of the Q-ME framework, it has been used to illustrate the quality of the dynamic essentials modeling of the organisation (DEMO) business modeling technique.
It is stated that the evaluation of the Q-ME framework against the DEMO modeling technique has revealed shortcomings of Q-ME. One particular shortcoming is that it does not include quantifiable metrics to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating.
There is also a systematic approach to measuring the quality of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Metamodelling techniques are used as a basis for computing these complexity metrics. In comparison to the quality framework proposed by Krogstie, this quality measurement focuses more on the technical level than on the individual model level.[11]
The authors Cardoso, Mendling, Neuman and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research done by Mendling et al., who argued that, without quality metrics to help question the quality properties of a model, even a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance costs and perhaps inefficient execution of the process in question.[12]
The quality of the modeling technique is thus important in creating models that are themselves of high quality and that contribute to the correctness and usefulness of the models.
The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints and so on.[13]
A large amount of research has been done on the quality of models, but less focus has been directed towards the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main guidelines and frameworks in practice for doing so. These are: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines.[14]
Hommes quotes Wang et al. (1994)[11] to the effect that all the main characteristics of model quality can be grouped into two categories, namely correctness and usefulness. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling language, and it is independent of the purpose for which the model is used.
Usefulness, in contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity).
A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are the ones most often applied.
A broader approach is based on semiotics rather than linguistics, as was done by Krogstie using the top-down quality framework known as SEQUAL.[15][16] It defines several quality aspects based on relationships between a model, knowledge externalisation, the domain, a modeling language, and the activities of learning, taking action, and modeling.
The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests.[17] According to previous research done by Moody et al.[18] with the use of the conceptual model quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality[19] were identified:
From the research it was noticed that the quality framework was found to be both easy to use and useful in evaluating the quality of process models; however, it had limitations with regard to reliability and to identifying defects. These limitations led to a refinement of the framework through subsequent research done by Krogstie. The refined framework is the SEQUAL framework of Krogstie et al. 1995 (refined further by Krogstie & Jørgensen, 2002), which included three more quality aspects.
The dimensions of the conceptual quality framework are as follows.[20] The modeling domain is the set of all statements that are relevant and correct for describing a problem domain; language extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used; and model externalization is the conceptual representation of the problem domain.
It is defined as the set of statements about the problem domain that are actually made. Social Actor Interpretation and Technical Actor Interpretation are the sets of statements that actors both human model users and the tools that interact with the model, respectively 'think' the conceptual representation of the problem domain contains.
Finally, Participant Knowledge is the set of statements that human actors, who are involved in the modeling process, believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with physical and social aspects of the model.
In later work, Krogstie et al.[15] stated that while the extension of the SEQUAL framework has fixed some of the limitations of the initial framework, other limitations remain.
In particular, the framework is too static in its view upon semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain.
Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters.
The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. Their work discusses the quality framework in relation to active process models and suggests a revised framework on that basis.
Further work by Krogstie et al. (2006) revised the SEQUAL framework to make it more appropriate for active process models by redefining physical quality with a narrower interpretation than previous research.[15]
The other framework in use is the Guidelines of Modeling (GoM),[21] based on general accounting principles, which includes six principles: correctness, clarity, relevance, comparability, economic efficiency and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems.
Comprehensibility relates to the graphical arrangement of the information objects and, therefore, supports the understandability of a model.
Relevance relates to the model and the situation being presented. Comparability involves the ability to compare models, that is, semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed use, through cost cuts and revenue increases.
Since the purpose of organizations in most cases is the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires that there be an accepted differentiation between diverse views within modeling.
Correctness, relevance and economic efficiency are prerequisites for the quality of models and must be fulfilled, while the remaining guidelines are optional.
The two frameworks SEQUAL and GoM have the limitation that they cannot readily be used by people who are not competent in modeling. They provide major quality metrics but are not easily applicable by non-experts.
The use of bottom-up metrics related to quality aspects of process models attempts to bridge this gap and make quality assessment accessible to non-experts in modeling, but the approach is mostly theoretical and no empirical tests have been carried out to support its use.
Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models;[22] Cardoso validates the correlation between control flow complexity and perceived complexity; and Mendling et al. use metrics to predict control flow errors such as deadlocks in process models.[12][23]
The results reveal that an increase in size of a model appears to reduce its quality and comprehensibility.
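As an informal illustration of such count metrics, the sketch below computes the number of nodes, arcs and splits of a toy process model; the example graph and the thresholds implied are invented and are not taken from the cited studies.

```python
# Informal sketch of simple count metrics over a toy process model, in the
# spirit of the size/complexity measures discussed above. The graph is invented.

# A process model as an adjacency list: node -> list of successor nodes.
process = {
    "start":       ["check order"],
    "check order": ["approve", "reject"],   # a split with two outgoing arcs
    "approve":     ["ship"],
    "reject":      ["end"],
    "ship":        ["end"],
    "end":         [],
}

num_nodes  = len(process)                                   # model size
num_arcs   = sum(len(succ) for succ in process.values())    # connections
num_splits = sum(1 for succ in process.values() if len(succ) > 1)

print(f"nodes={num_nodes}, arcs={num_arcs}, splits={num_splits}")
# Larger values of such metrics tend to go along with lower comprehensibility.
```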
Further work by Mendling et al. investigates the connection between metrics and understanding.[24][25] While some metrics are confirmed regarding their effect, personal factors of the modeler – such as competence – are also revealed as important for understanding the models.
Several empirical surveys carried out still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is necessary to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice.
Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically.
According to this research,[26] the value of process models depends not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that this labelling rule results in models that are better understood than those using alternative labelling styles.
From the earlier research and the ways of evaluating process model quality, it has been seen that a process model's size, structure, modularity and the expertise of the modeler affect its overall comprehensibility.[24][27] Based on these findings, a set of guidelines, the Seven Process Modeling Guidelines (7PMG), was presented.[28] These guidelines recommend the verb–object labelling style and give guidance on the number of elements in a model, the application of structured modeling, and the decomposition of a process model.
7PMG still has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented.
It does suggest ways of organizing the structure of a process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out.
The second limitation relates to the prioritization of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers.
This could be seen, on the one hand, as a need for wider involvement of process modelers' experience; on the other hand, it raises the question of what alternative approaches might be available for arriving at a prioritization of the guidelines.[28]
|
https://en.wikipedia.org/wiki/Process_model
|
Pattern recognition is the task of assigning a class to an observation based on patterns extracted from data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM), which may possess PR capabilities but whose primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power.
Pattern recognition systems are commonly trained from labeled "training" data. When nolabeled dataare available, other algorithms can be used to discover previously unknown patterns.KDDand data mining have a larger focus on unsupervised methods and stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition andsignal processinginto consideration. It originated inengineering, and the term is popular in the context ofcomputer vision: a leading computer vision conference is namedConference on Computer Vision and Pattern Recognition.
Inmachine learning, pattern recognition is the assignment of a label to a given input value. In statistics,discriminant analysiswas introduced for this same purpose in 1936. An example of pattern recognition isclassification, which attempts to assign each input value to one of a given set ofclasses(for example, determine whether a given email is "spam"). Pattern recognition is a more general problem that encompasses other types of output as well. Other examples areregression, which assigns areal-valuedoutput to each input;[1]sequence labeling, which assigns a class to each member of a sequence of values[2](for example,part of speech tagging, which assigns apart of speechto each word in an input sentence); andparsing, which assigns aparse treeto an input sentence, describing thesyntactic structureof the sentence.[3]
Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is opposed topattern matchingalgorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm isregular expressionmatching, which looks for patterns of a given sort in textual data and is included in the search capabilities of manytext editorsandword processors.
A modern definition of pattern recognition is:
The field of pattern recognition is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.[4]
Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value.Supervised learningassumes that a set of training data (thetraining set) has been provided, consisting of a set of instances that have been properly labeled by hand with the correct output. A learning procedure then generates a model that attempts to meet two sometimes conflicting objectives: Perform as well as possible on the training data, and generalize as well as possible to new data (usually, this means being as simple as possible, for some technical definition of "simple", in accordance withOccam's Razor, discussed below).Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances.[5]A combination of the two that has been explored issemi-supervised learning, which uses a combination of labeled and unlabeled data (typically a small set of labeled data combined with a large amount of unlabeled data). In cases of unsupervised learning, there may be no training data at all.
Sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output. The unsupervised equivalent of classification is normally known asclustering, based on the common perception of the task as involving no training data to speak of, and of grouping the input data into clusters based on some inherentsimilarity measure(e.g. thedistancebetween instances, considered as vectors in a multi-dimensionalvector space), rather than assigning each input instance into one of a set of pre-defined classes. In some fields, the terminology is different. Incommunity ecology, the termclassificationis used to refer to what is commonly known as "clustering".
The piece of input data for which an output value is generated is formally termed aninstance. The instance is formally described by avectorof features, which together constitute a description of all known characteristics of the instance. These feature vectors can be seen as defining points in an appropriatemultidimensional space, and methods for manipulating vectors invector spacescan be correspondingly applied to them, such as computing thedot productor the angle between two vectors. Features typically are eithercategorical(also known asnominal, i.e., consisting of one of a set of unordered items, such as a gender of "male" or "female", or a blood type of "A", "B", "AB" or "O"),ordinal(consisting of one of a set of ordered items, e.g., "large", "medium" or "small"),integer-valued(e.g., a count of the number of occurrences of a particular word in an email) orreal-valued(e.g., a measurement of blood pressure). Often, categorical and ordinal data are grouped together, and this is also the case for integer-valued and real-valued data. Many algorithms work only in terms of categorical data and require that real-valued or integer-valued data bediscretizedinto groups (e.g., less than 5, between 5 and 10, or greater than 10).
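A small sketch of turning one instance's raw attributes into a feature vector, including one-hot encoding of a categorical feature and discretizing an integer-valued count into groups, is given below; the attribute names, categories and cut-points are illustrative only.

```python
# Illustrative sketch: building a feature vector for one instance, with a
# categorical feature one-hot encoded and a count feature discretized.
# Attribute names, categories and cut-points are invented for the example.

BLOOD_TYPES = ["A", "B", "AB", "O"]

def discretize(count: float) -> int:
    """Map a count into one of three groups: <5, 5-10, >10."""
    if count < 5:
        return 0
    return 1 if count <= 10 else 2

def to_feature_vector(instance: dict) -> list[float]:
    one_hot = [1.0 if instance["blood_type"] == t else 0.0 for t in BLOOD_TYPES]
    return one_hot + [float(discretize(instance["word_count"])),
                      float(instance["blood_pressure"])]

print(to_feature_vector({"blood_type": "AB", "word_count": 7, "blood_pressure": 118.0}))
# -> [0.0, 0.0, 1.0, 0.0, 1.0, 118.0]
```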
Many common pattern recognition algorithms areprobabilisticin nature, in that they usestatistical inferenceto find the best label for a given instance. Unlike other algorithms, which simply output a "best" label, often probabilistic algorithms also output aprobabilityof the instance being described by the given label. In addition, many probabilistic algorithms output a list of theN-best labels with associated probabilities, for some value ofN, instead of simply a single best label. When the number of possible labels is fairly small (e.g., in the case ofclassification),Nmay be set so that the probability of all possible labels is output. Probabilistic algorithms have many advantages over non-probabilistic algorithms:
Feature selection algorithms attempt to directly prune out redundant or irrelevant features. A general introduction to feature selection, which summarizes approaches and challenges, has been given.[6]The complexity of feature selection is, because of its non-monotonous character, an optimization problem where, given a total ofn{\displaystyle n}features, the powerset consisting of all2n−1{\displaystyle 2^{n}-1}subsets of features needs to be explored. The branch-and-bound algorithm[7]does reduce this complexity but is intractable for medium to large values of the number of available featuresn{\displaystyle n}.
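The exponential character of this search space can be seen in a brute-force sketch such as the one below; the scoring function is a placeholder (a real one would, for example, evaluate classifier accuracy on each candidate subset), and the feature names are invented.

```python
# Brute-force illustration of the feature-selection search space: all
# 2**n - 1 non-empty subsets of n features are enumerated and scored.
# The scoring function is a stand-in for illustration only.
from itertools import combinations

features = ["f1", "f2", "f3", "f4"]          # n = 4 -> 15 non-empty subsets

def score(subset):
    # Placeholder: a real evaluation would train/validate a classifier on
    # the candidate subset; here we simply prefer small subsets containing f2.
    return ("f2" in subset) - 0.1 * len(subset)

subsets = [c for r in range(1, len(features) + 1)
           for c in combinations(features, r)]
assert len(subsets) == 2 ** len(features) - 1  # exponential growth in n

best = max(subsets, key=score)
print(best)   # -> ('f2',)
```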
Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm.Feature extractionalgorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such asprincipal components analysis(PCA). The distinction betweenfeature selectionandfeature extractionis that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features.
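A compact sketch of dimensionality reduction with principal components analysis using NumPy follows; the data are random toy values, and a real pipeline would apply the same projection to actual feature vectors.

```python
# Sketch of feature extraction with PCA: project high-dimensional feature
# vectors onto the top-k principal components. Toy random data for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # 100 instances, 10 original features
k = 3                                   # target dimensionality

X_centered = X - X.mean(axis=0)
# Rows of Vt are principal directions, ordered by explained variance.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_reduced = X_centered @ Vt[:k].T       # new, lower-dimensional feature vectors

print(X_reduced.shape)                  # (100, 3)
```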
The problem of pattern recognition can be stated as follows: Given an unknown functiong:X→Y{\displaystyle g:{\mathcal {X}}\rightarrow {\mathcal {Y}}}(theground truth) that maps input instancesx∈X{\displaystyle {\boldsymbol {x}}\in {\mathcal {X}}}to output labelsy∈Y{\displaystyle y\in {\mathcal {Y}}}, along with training dataD={(x1,y1),…,(xn,yn)}{\displaystyle \mathbf {D} =\{({\boldsymbol {x}}_{1},y_{1}),\dots ,({\boldsymbol {x}}_{n},y_{n})\}}assumed to represent accurate examples of the mapping, produce a functionh:X→Y{\displaystyle h:{\mathcal {X}}\rightarrow {\mathcal {Y}}}that approximates as closely as possible the correct mappingg{\displaystyle g}. (For example, if the problem is filtering spam, thenxi{\displaystyle {\boldsymbol {x}}_{i}}is some representation of an email andy{\displaystyle y}is either "spam" or "non-spam"). In order for this to be a well-defined problem, "approximates as closely as possible" needs to be defined rigorously. Indecision theory, this is defined by specifying aloss functionor cost function that assigns a specific value to "loss" resulting from producing an incorrect label. The goal then is to minimize theexpectedloss, with the expectation taken over theprobability distributionofX{\displaystyle {\mathcal {X}}}. In practice, neither the distribution ofX{\displaystyle {\mathcal {X}}}nor the ground truth functiong:X→Y{\displaystyle g:{\mathcal {X}}\rightarrow {\mathcal {Y}}}are known exactly, but can be computed only empirically by collecting a large number of samples ofX{\displaystyle {\mathcal {X}}}and hand-labeling them using the correct value ofY{\displaystyle {\mathcal {Y}}}(a time-consuming process, which is typically the limiting factor in the amount of data of this sort that can be collected). The particular loss function depends on the type of label being predicted. For example, in the case ofclassification, the simplezero-one loss functionis often sufficient. This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes theerror rateon independent test data (i.e. counting up the fraction of instances that the learned functionh:X→Y{\displaystyle h:{\mathcal {X}}\rightarrow {\mathcal {Y}}}labels wrongly, which is equivalent to maximizing the number of correctly classified instances). The goal of the learning procedure is then to minimize the error rate (maximize thecorrectness) on a "typical" test set.
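For instance, the empirical zero-one loss on a test set reduces to the familiar error rate, as in this small sketch; the label values are invented.

```python
# Sketch: empirical zero-one loss (error rate) of a learned function h on a
# test set -- the fraction of instances labelled incorrectly. Toy labels only.

def zero_one_loss(y_true, y_pred):
    assert len(y_true) == len(y_pred)
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return wrong / len(y_true)

y_true = ["spam", "spam", "non-spam", "non-spam", "spam"]
y_pred = ["spam", "non-spam", "non-spam", "non-spam", "spam"]
print(zero_one_loss(y_true, y_pred))   # 0.2 -> 80% of instances classified correctly
```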
For a probabilistic pattern recognizer, the problem is instead to estimate the probability of each possible output label given a particular input instance, i.e., to estimate a function of the form
where thefeature vectorinput isx{\displaystyle {\boldsymbol {x}}}, and the functionfis typically parameterized by some parametersθ{\displaystyle {\boldsymbol {\theta }}}.[8]In adiscriminativeapproach to the problem,fis estimated directly. In agenerativeapproach, however, the inverse probabilityp(x|label){\displaystyle p({{\boldsymbol {x}}|{\rm {label}}})}is instead estimated and combined with theprior probabilityp(label|θ){\displaystyle p({\rm {label}}|{\boldsymbol {\theta }})}usingBayes' rule, as follows:
When the labels arecontinuously distributed(e.g., inregression analysis), the denominator involvesintegrationrather than summation:
The value of θ{\displaystyle {\boldsymbol {\theta }}} is typically learned using maximum a posteriori (MAP) estimation. This finds the best value that simultaneously meets two conflicting objectives: to perform as well as possible on the training data (smallest error-rate) and to find the simplest possible model. Essentially, this combines maximum likelihood estimation with a regularization procedure that favors simpler models over more complex models. In a Bayesian context, the regularization procedure can be viewed as placing a prior probability p(θ){\displaystyle p({\boldsymbol {\theta }})} on different values of θ{\displaystyle {\boldsymbol {\theta }}}. Mathematically:
whereθ∗{\displaystyle {\boldsymbol {\theta }}^{*}}is the value used forθ{\displaystyle {\boldsymbol {\theta }}}in the subsequent evaluation procedure, andp(θ|D){\displaystyle p({\boldsymbol {\theta }}|\mathbf {D} )}, theposterior probabilityofθ{\displaystyle {\boldsymbol {\theta }}}, is given by
In theBayesianapproach to this problem, instead of choosing a single parameter vectorθ∗{\displaystyle {\boldsymbol {\theta }}^{*}}, the probability of a given label for a new instancex{\displaystyle {\boldsymbol {x}}}is computed by integrating over all possible values ofθ{\displaystyle {\boldsymbol {\theta }}}, weighted according to the posterior probability:
The first pattern classifier – the linear discriminant presented byFisher– was developed in thefrequentisttradition. The frequentist approach entails that the model parameters are considered unknown, but objective. The parameters are then computed (estimated) from the collected data. For the linear discriminant, these parameters are precisely the mean vectors and thecovariance matrix. Also the probability of each classp(label|θ){\displaystyle p({\rm {label}}|{\boldsymbol {\theta }})}is estimated from the collected dataset. Note that the usage of 'Bayes rule' in a pattern classifier does not make the classification approach Bayesian.
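A minimal numeric sketch of the generative route described above, combining an invented class-conditional likelihood p(x|label) with class priors via Bayes' rule; as the text notes, using the rule this way does not by itself make the classifier Bayesian.

```python
# Sketch of the generative route: class-conditional likelihoods p(x|label) and
# priors p(label) are combined with Bayes' rule to get p(label|x).
# All probabilities below are invented for illustration.
likelihood = {"spam": 0.020, "non-spam": 0.002}   # p(x | label) for one email x
prior      = {"spam": 0.30,  "non-spam": 0.70}    # p(label)

evidence = sum(likelihood[c] * prior[c] for c in prior)            # p(x)
posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}
print(posterior)   # roughly {'spam': 0.81, 'non-spam': 0.19} -> predict "spam"
```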
Bayesian statisticshas its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. LaterKantdefined his distinction between what is a priori known – before observation – and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilitiesp(label|θ){\displaystyle p({\rm {label}}|{\boldsymbol {\theta }})}can be chosen by the user, which are then a priori. Moreover, experience quantified as a priori parameter values can be weighted with empirical observations – using e.g., theBeta-(conjugate prior) andDirichlet-distributions. The Bayesian approach facilitates a seamless intermixing between expert knowledge in the form of subjective probabilities, and objective observations.
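A minimal sketch of how a priori parameter values expressed through a Beta conjugate prior are weighted against empirical observations, yielding a MAP estimate that is pulled away from the raw maximum-likelihood value; the counts and prior parameters are invented for illustration.

```python
# Sketch: MAP estimation as maximum likelihood plus a prior that favors
# less extreme parameter values. A Bernoulli parameter with a Beta(a, b)
# conjugate prior is used purely as an illustration.
heads, tails = 3, 0          # tiny training sample: three positives, no negatives
a, b = 2.0, 2.0              # Beta prior pulling the estimate toward 0.5

theta_mle = heads / (heads + tails)                        # 1.0 (overconfident)
theta_map = (heads + a - 1) / (heads + tails + a + b - 2)  # 0.8 (regularized)
print(theta_mle, theta_map)
```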
Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian approach.
Within medical science, pattern recognition is the basis forcomputer-aided diagnosis(CAD) systems. CAD describes a procedure that supports the doctor's interpretations and findings. Other typical applications of pattern recognition techniques are automaticspeech recognition,speaker identification,classification of text into several categories(e.g., spam or non-spam email messages), theautomatic recognition of handwritingon postal envelopes, automaticrecognition of imagesof human faces, or handwriting image extraction from medical forms.[9][10]The last two examples form the subtopicimage analysisof pattern recognition that deals with digital images as input to pattern recognition systems.[11][12]
Optical character recognition is an example of the application of a pattern classifier. The method of signing one's name was captured with stylus and overlay starting in 1990.[citation needed] The strokes, speed, relative minima and maxima, acceleration, and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers.[citation needed]
Pattern recognition has many real-world applications in image processing. Some examples include:
In psychology,pattern recognitionis used to make sense of and identify objects, and is closely related to perception. This explains how the sensory inputs humans receive are made meaningful. Pattern recognition can be thought of in two different ways. The first concerns template matching and the second concerns feature detection. A template is a pattern used to produce items of the same proportions. The template-matching hypothesis suggests that incoming stimuli are compared with templates in the long-term memory. If there is a match, the stimulus is identified. Feature detection models, such as the Pandemonium system for classifying letters (Selfridge, 1959), suggest that the stimuli are broken down into their component parts for identification. One observation is a capital E having three horizontal lines and one vertical line.[22]
Algorithms for pattern recognition depend on the type of label output, on whether learning is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in nature. Statistical algorithms can further be categorized asgenerativeordiscriminative.
Parametric:[23]
Nonparametric:[24]
Unsupervised:
|
https://en.wikipedia.org/wiki/Pattern_recognition
|
Text miningcomputer programs are available from manycommercialandopen sourcecompanies and sources.
|
https://en.wikipedia.org/wiki/List_of_text_mining_software
|
Semi-structured data[1]is a form ofstructured datathat does not obey the tabular structure of data models associated withrelational databasesor other forms ofdata tables, but nonetheless containstagsor other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known asself-describingstructure.
In semi-structured data, the entities belonging to the same class may have differentattributeseven though they are grouped together, and the attributes' order is not important.
Semi-structured data have become increasingly common since the advent of the Internet, where full-text documents and databases are no longer the only forms of data and different applications need a medium for exchanging information. In object-oriented databases, one often finds semi-structured data.
XML,[2]other markup languages,email, andEDIare all forms of semi-structured data.OEM(Object Exchange Model)[3]was created prior to XML as a means of self-describing a data structure. XML has been popularized by web services that are developed utilizingSOAPprinciples.
Some types of data described here as "semi-structured", especially XML, suffer from the impression that they are incapable of structural rigor at the same functional level as Relational Tables and Rows. Indeed, the view of XML as inherently semi-structured (previously, it was referred to as "unstructured") has handicapped its use for a widening range of data-centric applications. Even documents, normally thought of as the epitome of semi-structure, can be designed with virtually the same rigor asdatabase schema, enforced by theXML schemaand processed by both commercial and custom software programs without reducing their usability by human readers.
In view of this fact, XML might be referred to as having "flexible structure" capable of human-centric flow and hierarchy as well as highly rigorous element structure and data typing.
The concept of XML as "human-readable", however, can only be taken so far. Some implementations/dialects of XML, such as the XML representation of the contents of a Microsoft Word document, as implemented in Office 2007 and later versions, utilize dozens or even hundreds of different kinds of tags that reflect a particular problem domain - in Word's case, formatting at the character and paragraph and document level, definitions of styles, inclusion of citations, etc. - which are nested within each other in complex ways. Understanding even a portion of such an XML document by reading it, let alone catching errors in its structure, is impossible without a very deep prior understanding of the specific XML implementation, along with assistance by software that understands the XML schema that has been employed. Such text is not "human-understandable" any more than a book written in Swahili (which uses the Latin alphabet) would be to an American or Western European who does not know a word of that language: the tags are symbols that are meaningless to a person unfamiliar with the domain.
JSONor JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects. JSON has been popularized by web services developed utilizingRESTprinciples.
Databases such asMongoDBandCouchbasestore data natively in JSON format, leveraging the pros of semi-structured data architecture.
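A minimal sketch of what such records look like: two entities of the same class with different attributes and no fixed field order. The field names and values are invented and shown as JSON built from Python; the document-database specifics are not modeled.

```python
# Sketch: two records of the same class ("person") with different attributes
# and different field order, as typically stored in a document database.
# Field names and values are invented for illustration.
import json

people = [
    {"name": "Alice", "email": "alice@example.com", "age": 34},
    {"phone": "+1-555-0100", "name": "Bob"},   # fewer attributes, different order
]
print(json.dumps(people, indent=2))
```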
Thesemi-structured modelis adatabase modelwhere there is no separation between thedataand theschema, and the amount of structure used depends on the purpose.
The advantages of this model are the following:
The primary trade-off being made in using a semi-structureddatabase modelis that queries cannot be made as efficiently as in a more constrained structure, such as in therelational model. Typically the records in a semi-structured database are stored with unique IDs that are referenced with pointers to their location on disk. This makes navigational or path-based queries quite efficient, but for doing searches over many records (as is typical inSQL), it is not as efficient because it has to seek around the disk following pointers.
TheObject Exchange Model(OEM) is one standard to express semi-structured data, another way isXML.
|
https://en.wikipedia.org/wiki/Semi-structured_data
|
Inlinguistics,coreference, sometimes writtenco-reference, occurs when two or more expressions refer to the same person or thing; they have the samereferent. For example, inBill said Alice would arrive soon, and she did, the wordsAliceandsherefer to the same person.[1]
Co-reference is often non-trivial to determine. For example, in Bill said he would come, the word he may or may not refer to Bill. Determining which expressions are coreferent is an important part of analyzing or understanding the meaning, and often requires information from the context as well as real-world knowledge, such as tendencies of some names to be associated with particular species ("Rover"), kinds of artifacts ("Titanic"), grammatical genders, or other properties.
Linguists commonly use indices to notate coreference, as inBillisaid heiwould come. Such expressions are said to becoindexed, indicating that they should be interpreted as coreferential.
When expressions are coreferential, the first to occur is often a full or descriptive form (for example, an entire personal name, perhaps with a title and role), while later occurrences use shorter forms (for example, just a given name, surname, or pronoun). The earlier occurrence is known as theantecedentand the other is called aproform, anaphor, or reference. However, pronouns can sometimes refer forward, as in "When she arrived home, Alice went to sleep." In such cases, the coreference is calledcataphoricrather than anaphoric.
Coreference is important forbindingphenomena in the field of syntax. The theory of binding explores the syntactic relationship that exists between coreferential expressions in sentences and texts.
When exploring coreference, numerous distinctions can be made, e.g.anaphora,cataphora, split antecedents, coreferring noun phrases, etc.[2]Several of these more specific phenomena are illustrated here:
Semanticists and logicians sometimes draw a distinction between coreference and what is known as abound variable.[3]Bound variables occur when the antecedent to the proform is an indefinite quantified expression, e.g.[4][clarification needed]
Quantified expressionssuch asevery studentandno studentare not considered referential. These expressions are grammatically singular but do not pick out single referents in the discourse or real world. Thus, the antecedents tohisin these examples are not properly referential, and neither ishis. Instead, it is considered avariablethat isboundby its antecedent. Its reference varies based upon which of the students in the discourse world is thought of. The existence of bound variables is perhaps more apparent with the following example:
This sentence is ambiguous. It can mean that Jack likes his grade but everyone else dislikes Jack's grade; or that no one likes theirowngrade except Jack. In the first meaning,hisis coreferential; in the second, it is a bound variable because its reference varies over the set of all students.
Coindex notation is commonly used for both cases. That is, when two or more expressions are coindexed, it does not signal whether one is dealing with coreference or a bound variable (or as in the last example, whether it depends on interpretation).
Incomputational linguistics, coreference resolution is a well-studied problem indiscourse. To derive the correct interpretation of a text, or even to estimate the relative importance of various mentioned subjects, pronouns and otherreferring expressionsmust be connected to the right individuals. Algorithms intended to resolve coreferences commonly look first for the nearest preceding individual that is compatible with the referring expression. For example,shemight attach to a preceding expression such asthe womanorAnne, but not as probably toBill. Pronouns such ashimselfhave much stricter constraints. As with many linguistic tasks, there is a tradeoff betweenprecision and recall.Cluster-quality metrics commonly used to evaluate coreference resolution algorithms include theRand index, theadjusted Rand index, and differentmutual information-based methods.
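A minimal sketch of the "nearest compatible preceding mention" heuristic described above; the mention list and the gender/animacy features are invented for illustration and stand in for a real feature extractor.

```python
# Sketch of the heuristic described above: attach a pronoun to the nearest
# preceding mention whose features (gender, animacy) are compatible.
# The mention list and feature values are invented for illustration.
mentions = [  # (surface form, gender, animate), in textual order
    ("Bill",      "male",   True),
    ("the woman", "female", True),
    ("Anne",      "female", True),
]
pronoun_features = {"she": ("female", True), "he": ("male", True), "it": (None, False)}

def resolve(pronoun, preceding_mentions):
    gender, animate = pronoun_features[pronoun]
    for form, g, a in reversed(preceding_mentions):   # nearest candidate first
        if a == animate and (gender is None or g == gender):
            return form
    return None

print(resolve("she", mentions))   # 'Anne' (nearest compatible antecedent)
```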
A particular problem for coreference resolution in English is the pronounit, which has many uses.Itcan refer much likeheandshe, except that it generally refers to inanimate objects (the rules are actually more complex: animals may be any ofit,he, orshe; ships are traditionallyshe; hurricanes are usuallyitdespite having gendered names).Itcan also refer to abstractions rather than beings, e.g.He was paid minimum wage, but didn't seem to mind it.Finally,italso haspleonasticuses, which do not refer to anything specific:
Pleonastic uses are not considered referential, and so are not part of coreference.[5]
Approaches to coreference resolution can broadly be separated into mention-pair, mention-ranking or entity-based algorithms. Mention-pair algorithms make binary decisions about whether a pair of given mentions belongs to the same entity. Entity-wide constraints like gender are not considered, which leads to error propagation. For example, the pronouns he or she can each have a high probability of coreference with the teacher, but cannot be coreferent with each other. Mention-ranking algorithms expand on this idea but instead stipulate that one mention can only be coreferent with one (previous) mention. As a result, each previous mention must be given a score and the highest scoring mention (or no mention) is linked. Finally, in entity-based methods mentions are linked based on information about the whole coreference chain instead of individual mentions. The representation of a variable-width chain is more complex and computationally expensive than mention-based methods, which leads to these algorithms being mostly based on neural network architectures.
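A minimal sketch of the mention-ranking idea: for each mention, every previous mention (plus a dummy "no antecedent" option) is scored and only the argmax is linked. The scoring function here is a placeholder, not a learned model, and the mentions are invented.

```python
# Sketch of mention-ranking: score every previous mention plus a dummy
# "no antecedent" candidate, and link only to the highest-scoring one.
def score(antecedent, mention):
    if antecedent is None:
        return 0.0                      # score of starting a new entity
    return 1.0 if antecedent["gender"] == mention["gender"] else -1.0

mentions = [
    {"text": "the teacher", "gender": "female"},
    {"text": "he",          "gender": "male"},
    {"text": "she",         "gender": "female"},
]

links = []
for i, m in enumerate(mentions):
    candidates = [None] + mentions[:i]
    best = max(candidates, key=lambda c: score(c, m))
    links.append((m["text"], best["text"] if best else None))
print(links)   # [('the teacher', None), ('he', None), ('she', 'the teacher')]
```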
|
https://en.wikipedia.org/wiki/Coreference#Coreference_resolution
|
In natural language processing, Entity Linking, also referred to as named-entity disambiguation (NED), named-entity recognition and disambiguation (NERD), named-entity normalization (NEN),[1] or Concept Recognition, is the task of assigning a unique identity to entities (such as famous individuals, locations, or companies) mentioned in text.[2] For example, given the sentence "Paris is the capital of France", the main idea is to first identify "Paris" and "France" as named entities, and then to determine that "Paris" refers to the city of Paris and not to Paris Hilton or any other entity that could be referred to as "Paris", and that "France" refers to the country of France.
The Entity Linking task is composed of 3 subtasks.
In entity linking, words of interest (names of persons, locations and companies) are mapped from an input text to corresponding unique entities in a targetknowledge base. Words of interest are callednamed entities(NEs), mentions, or surface forms. The target knowledge base depends on the intended application, but for entity linking systems intended to work on open-domain text it is common to use knowledge-bases derived fromWikipedia(such asWikidataorDBpedia).[1][3]In this case, each individual Wikipedia page is regarded as a separate entity. Entity linking techniques that map named entities to Wikipedia entities are also calledwikification.[4]
Considering again the example sentence"Paris is the capital of France", the expected output of an entity linking system will beParisandFrance. Theseuniform resource locators(URLs) can be used as uniqueuniform resource identifiers(URIs) for the entities in the knowledge base. Using a different knowledge base will return different URIs, but for knowledge bases built starting from Wikipedia there exist one-to-one URI mappings.[5]
In most cases, knowledge bases are manually built,[6]but in applications where largetext corporaare available, the knowledge base can be inferred automatically from theavailable text.[7]
Entity linking is a critical step to bridge web data with knowledge bases, which is beneficial for annotating the huge amount of raw and often noisy data on the Web and contributes to the vision of theSemantic Web.[8]In addition to entity linking, there are other critical steps including but not limited to event extraction,[9]and event linking[10]etc.
Entity linking is beneficial in fields that need to extract abstract representations from text, as it happens in text analysis,recommender systems, semantic search and chatbots. In all these fields, concepts relevant to the application are separated from text and other non-meaningful data.[11][12]
For example, a common task performed bysearch enginesis to find documents that are similar to one given as input, or to find additional information about the persons that are mentioned in it.
Consider a sentence that contains the expression"the capital of France": without entity linking, the search engine that looks at the content of documents would not be able to directly retrieve documents containing the word"Paris", leading to so-calledfalse negatives(FN). Even worse, the search engine might produce
spurious matches (orfalse positives(FP)), such as retrieving documents referring to"France"as a country.
Many approaches orthogonal to entity linking exist to retrieve documents similar to an input document. For example,latent semantic analysis(LSA) or comparing document embeddings obtained withdoc2vec. However, these techniques do not allow the same fine-grained control that is offered by entity linking, as they will return other
documents instead of creating high-level representations of the original one. For example, obtaining schematic information about"Paris", as presented by Wikipediainfoboxeswould be much less straightforward, or sometimes even unfeasible, depending on the query complexity.[13]
Moreover, entity linking has been used to improve the performance ofinformation retrievalsystems[1]and to improve search performance on digital libraries.[14]Entity linking is also a key input forsemantic search.[15][16]
There are various difficulties in performing entity linking. Some of these are intrinsic to the task,[17]such as text ambiguity. Others are relevant in real-world use, such as scalability and execution time.
Entity linking is related to other concepts. Definitions are often blurry and vary slightly between authors.
Paris is the capital of France.
[Paris]Cityis the capital of [France]Country.
Paris is the capital of France. It is also the largest city in France.
Entity linking has been a hot topic in industry and academia for the last decade. Manychallengesare unsolved, but many entity linking systems have been proposed, with widely different strengths and weaknesses.[25]
Broadly speaking, modern entity linking systems can be divided into two categories:
Often entity linking systems use both knowledge graphs and textual features extracted from, for example, the text corpora used to build the knowledge graphs themselves.[22][23]
The seminal work by Cucerzan in 2007 published one of the first entity linking systems. Specifically, it tackled the task of wikification, that is, linking textual mentions to Wikipedia pages.[26] This system categorizes pages into entity, disambiguation, or list pages. The set of entities present in each entity page is used to build the entity's context. The final step is a collective disambiguation performed by comparing binary vectors of hand-crafted features of each entity's context. Cucerzan's system is still used as a baseline for recent work.[28]
Rao et al.[17]proposed a two-step algorithm to link named entities to entities in a target knowledge base. First, candidate entities are chosen using string matching, acronyms, and known aliases. Then, the best link among the candidates is chosen with a rankingsupport vector machine(SVM) that uses linguistic features.
Recent systems, such as by Tsai et al.,[24]useword embeddingsobtained with askip-grammodel as language features, and can be applied to any language for which a large corpus to build word embeddings is available. Like most entity linking systems, it has two steps: an initial candidate selection, and ranking using linear SVM.
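A minimal sketch of the two-step pipeline described in the last two paragraphs: (1) select candidate entities by alias or string matching, (2) rank the candidates with a scoring function. The tiny alias table and the context-overlap scorer are invented stand-ins for a real knowledge base and a trained ranker such as an SVM.

```python
# Sketch of a two-step entity linker: candidate selection by alias matching,
# then ranking. The knowledge base and scorer below are invented placeholders.
knowledge_base = {
    "Paris (city)": {"aliases": {"paris"},                "context": {"capital", "france", "seine"}},
    "Paris Hilton": {"aliases": {"paris", "paris hilton"}, "context": {"actress", "heiress"}},
}

def link(mention, sentence_tokens):
    # Step 1: candidate selection via exact alias match
    candidates = [e for e, v in knowledge_base.items() if mention.lower() in v["aliases"]]
    # Step 2: rank by overlap between the sentence and each entity's context words
    return max(candidates,
               key=lambda e: len(knowledge_base[e]["context"] & sentence_tokens),
               default=None)

tokens = {"paris", "is", "the", "capital", "of", "france"}
print(link("Paris", tokens))   # 'Paris (city)'
```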
Various approaches have been tried to tackle the problem of entity ambiguity. The seminal approach of Milne and Witten usessupervised learningusing theanchor textsof Wikipedia entities as training data.[29]Other approaches also collected training data based on unambiguous synonyms.[30]
Modern entity linking systems also use largeknowledge graphscreated from knowledge bases such as Wikipedia, besides textual features generated from input documents or text corpora. Moreover, multilingual entity linking based onnatural language processing(NLP) is difficult, because it requires either large text corpora, which are absent for many languages, or hand-crafted grammar rules, which are widely different between languages. Graph-based entity linking uses features of the graph topology or multi-hop connections between entities, which are hidden to simple text analysis.
Hanet al.propose the creation of a disambiguation graph (a subgraph of the knowledge base which contains candidate entities).[3]This graph is used for collective ranking to select the best candidate entity for each textual mention.
Another famous approach is AIDA,[31]which uses a series of complex graph algorithms and a greedy algorithm that identifies coherent mentions on a dense subgraph by also considering context similarities and vertex importance features to perform collective disambiguation.[27]
Alhelbawy et al. presented an entity linking system that uses PageRank to perform collective entity linking on a disambiguation graph, and to understand which entities are more strongly related to each other and would therefore represent a better linking.[21] Graph ranking (or vertex ranking) algorithms such as PageRank (PR) and Hyperlink-Induced Topic Search (HITS) aim to score nodes according to their relative importance in the graph.
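A hedged sketch of PageRank-based candidate scoring on a small invented disambiguation graph; the networkx library is used as an example implementation and the edges stand for semantic relatedness between candidate entities, none of which is prescribed by the text.

```python
# Sketch: score candidate entities in a disambiguation graph with PageRank and,
# for each mention, keep its highest-ranked candidate. The graph is invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Paris (city)", "France"),
    ("Paris (city)", "Seine"),
    ("France", "Seine"),
    ("Paris (city)", "Eiffel Tower"),
    ("France", "Eiffel Tower"),
    ("Paris Hilton", "Hilton Hotels"),
])
scores = nx.pagerank(G)   # well-connected candidates receive higher scores

candidates = {"Paris": ["Paris (city)", "Paris Hilton"], "France": ["France"]}
linked = {m: max(cands, key=scores.get) for m, cands in candidates.items()}
print(linked)   # {'Paris': 'Paris (city)', 'France': 'France'}
```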
Mathematical expressions (symbols and formulae) can be linked to semantic entities (e.g.,Wikipediaarticles[32]orWikidataitems[33]) labeled with their natural language meaning. This is essential for disambiguation, since symbols may have different meanings (e.g., "E" can be "energy" or "expectation value", etc.).[34][33]The math entity linking process can be facilitated and accelerated through annotation recommendation, e.g., using the "AnnoMathTeX" system that is hosted by Wikimedia.[35][36][37]
To facilitate the reproducibility of Mathematical Entity Linking (MathEL) experiments, the benchmark MathMLben was created.[38][39] It contains formulae from Wikipedia, the arXiv and the NIST Digital Library of Mathematical Functions (DLMF). Formulae entries in the benchmark are labeled and augmented by Wikidata markup.[33] Furthermore, distributions of mathematical notation were examined for two large corpora from the arXiv[40] and zbMATH[41] repositories. Mathematical Objects of Interest (MOI) are identified as potential candidates for MathEL.[42]
Besides linking to Wikipedia, Schubotz[39] and Scharpf et al.[33] describe linking mathematical formula content to Wikidata, both in MathML and LaTeX markup. To extend classical citations with mathematical ones, they call for a Formula Concept Discovery (FCD) and Formula Concept Recognition (FCR) challenge to elaborate automated MathEL. Their FCD approach yields a recall of 68% for retrieving equivalent representations of frequent formulae, and 72% for extracting the formula name from the surrounding text on the NTCIR[43] arXiv dataset.[37]
|
https://en.wikipedia.org/wiki/Entity_linking
|
Knowledge extractionis the creation ofknowledgefrom structured (relational databases,XML) and unstructured (text, documents,images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and mustrepresent knowledgein a manner that facilitates inferencing. Although it is methodically similar toinformation extraction(NLP) andETL(data warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into arelational schema. It requires either the reuse of existingformal knowledge(reusing identifiers orontologies) or the generation of a schema based on the source data.
The RDB2RDF W3C group[1]is currently standardizing a language for extraction ofresource description frameworks(RDF) fromrelational databases. Another popular example for knowledge extraction is the transformation of Wikipedia intostructured dataand also the mapping to existingknowledge(seeDBpediaandFreebase).
After the standardization of knowledge representation languages such as RDF and OWL, much research has been conducted in the area, especially regarding transforming relational databases into RDF, identity resolution, knowledge discovery and ontology learning. The general process uses traditional methods from information extraction and extract, transform, and load (ETL), which transform the data from the sources into structured formats; it is therefore useful to understand how these approaches interact and build on each other.
The following criteria can be used to categorize approaches in this topic (some of them only account for extraction from relational databases):[2]
President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package, arguing that the policy provides more generous assistance.
When building a RDB representation of a problem domain, the starting point is frequently an entity-relationship diagram (ERD). Typically, each entity is represented as a database table, each attribute of the entity becomes a column in that table, and relationships between entities are indicated by foreign keys. Each table typically defines a particular class of entity, each column one of its attributes. Each row in the table describes an entity
instance, uniquely identified by a primary key. The table rows collectively describe an entity set. In an equivalent RDF representation of the same entity set:
So, to render an equivalent view based on RDF semantics, the basic mapping algorithm would be as follows:
Early mentioning of this basic or direct mapping can be found inTim Berners-Lee's comparison of theER modelto the RDF model.[4]
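A minimal sketch of that direct (1:1) mapping: the table becomes an RDF class, the primary key forms the subject IRI, each column becomes a predicate, and each cell value becomes the object of a triple. The table name, columns, and base IRI are invented for illustration.

```python
# Minimal sketch of the direct (1:1) RDB-to-RDF mapping:
#   table -> class, primary key -> subject IRI, column -> predicate, cell -> object.
# Table name, columns, and base IRI are invented for illustration.
BASE = "http://example.org/"
table = "employee"
rows = [
    {"id": 1, "name": "Alice", "dept_id": 10},   # dept_id is a foreign key
    {"id": 2, "name": "Bob",   "dept_id": 20},
]

triples = []
for row in rows:
    subject = f"<{BASE}{table}/{row['id']}>"
    triples.append((subject, "rdf:type", f"<{BASE}{table}>"))
    for column, value in row.items():
        if column == "id":
            continue
        # foreign keys become links to other resources, plain values become literals
        obj = f"<{BASE}department/{value}>" if column == "dept_id" else f'"{value}"'
        triples.append((subject, f"<{BASE}{table}#{column}>", obj))

for t in triples:
    print(" ".join(t), ".")
```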
The 1:1 mapping mentioned above exposes the legacy data as RDF in a straightforward way; additional refinements can be employed to improve the usefulness of the RDF output with respect to the given use cases. Normally, information is lost during the transformation of an entity-relationship diagram (ERD) to relational tables (details can be found in object-relational impedance mismatch) and has to be reverse engineered. From a conceptual view, approaches for extraction can come from two directions. The first direction tries to extract or learn an OWL schema from the given database schema. Early approaches used a fixed amount of manually created mapping rules to refine the 1:1 mapping.[5][6][7] More elaborate methods employ heuristics or learning algorithms to induce schematic information (methods overlap with ontology learning). While some approaches try to extract the information from the structure inherent in the SQL schema[8] (analysing e.g. foreign keys), others analyse the content and the values in the tables to create conceptual hierarchies[9] (e.g. columns with few values are candidates for becoming categories). The second direction tries to map the schema and its contents to a pre-existing domain ontology (see also: ontology alignment). Often, however, a suitable domain ontology does not exist and has to be created first.
As XML is structured as a tree, any data can be easily represented in RDF, which is structured as a graph. XML2RDF is one example of an approach that uses RDF blank nodes and transforms XML elements and attributes to RDF properties. The topic however is more complex than in the case of relational databases. In a relational table the primary key is an ideal candidate for becoming the subject of the extracted triples. An XML element, however, can be transformed - depending on the context - as a subject, a predicate or object of a triple. XSLT can be used as a standard transformation language to manually convert XML to RDF.
The largest portion of information contained in business documents (about 80%[10]) is encoded in natural language and therefore unstructured. Because unstructured data is rather a challenge for knowledge extraction, more sophisticated methods are required, which generally tend to supply worse results compared to structured data. The potential for a massive acquisition of extracted knowledge, however, should compensate for the increased complexity and decreased quality of extraction. In the following, natural language sources are understood as sources of information, where the data is given in an unstructured fashion as plain text. If the given text is additionally embedded in a markup document (e.g. an HTML document), the mentioned systems normally remove the markup elements automatically.
As a preprocessing step to knowledge extraction, it can be necessary to perform linguistic annotation by one or multipleNLPtools. Individual modules in an NLP workflow normally build on tool-specific formats for input and output, but in the context of knowledge extraction, structured formats for representing linguistic annotations have been applied.
Typical NLP tasks relevant to knowledge extraction include:
In NLP, such data is typically represented in TSV formats (CSV formats with TAB as separators), often referred to as CoNLL formats. For knowledge extraction workflows, RDF views on such data have been created in accordance with the following community standards:
Other, platform-specific formats include
Traditional information extraction[20] is a technology of natural language processing, which extracts information from typically natural language texts and structures it in a suitable manner. The kinds of information to be identified must be specified in a model before beginning the process, which is why the whole process of traditional Information Extraction is domain dependent. IE is split into the following five subtasks.
The task ofnamed entity recognitionis to recognize and to categorize all named entities contained in a text (assignment of a named entity to a predefined category). This works by application of grammar based methods or statistical models.
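A hedged illustration of statistical NER follows, reusing the example sentence given later in this article; the spaCy library and the en_core_web_sm model are example choices, not prescribed by the text.

```python
# Illustration of statistical NER with spaCy; library and model name are
# example choices. Requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("President Obama called Wednesday on Congress to extend a tax break.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Obama PERSON, Wednesday DATE, Congress ORG
```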
Coreference resolution identifies equivalent entities, which were recognized by NER, within a text. There are two relevant kinds of equivalence relationship. The first one relates to the relationship between two differently represented entities (e.g. IBM Europe and IBM) and the second one to the relationship between an entity and its anaphoric references (e.g. it and IBM). Both kinds can be recognized by coreference resolution.
During template element construction the IE system identifies descriptive properties of entities, recognized by NER and CO. These properties correspond to ordinary qualities like red or big.
Template relation construction identifies relations, which exist between the template elements. These relations can be of several kinds, such as works-for or located-in, with the restriction, that both domain and range correspond to entities.
During template scenario production, events described in the text are identified and structured with respect to the entities recognized by NER and CO and the relations identified by TR.
Ontology-based information extraction[10]is a subfield of information extraction, with which at least oneontologyis used to guide the process of information extraction from natural language text. The OBIE system uses methods of traditional information extraction to identifyconcepts, instances and relations of the used ontologies in the text, which will be structured to an ontology after the process. Thus, the input ontologies constitute the model of information to be extracted.[21]
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time consuming, there is great motivation to automate the process.
During semantic annotation,[22] natural language text is augmented with metadata (often represented in RDFa), which should make the semantics of contained terms machine-understandable. In this generally semi-automatic process, knowledge is extracted in the sense that a link between lexical terms and, for example, concepts from ontologies is established. Thus, knowledge is gained about which meaning of a term was intended in the processed context, and the meaning of the text is therefore grounded in machine-readable data with the ability to draw inferences. Semantic annotation is typically split into the following two subtasks.
At the terminology extraction level, lexical terms are extracted from the text. For this purpose, a tokenizer first determines the word boundaries and resolves abbreviations. Afterwards, terms from the text which correspond to a concept are extracted with the help of a domain-specific lexicon, so that they can be linked during entity linking.
In entity linking[23]a link between the extracted lexical terms from the source text and the concepts from an ontology or knowledge base such asDBpediais established. For this, candidate-concepts are detected appropriately to the several meanings of a term with the help of a lexicon. Finally, the context of the terms is analyzed to determine the most appropriate disambiguation and to assign the term to the correct concept.
Note that "semantic annotation" in the context of knowledge extraction is not to be confused withsemantic parsingas understood in natural language processing (also referred to as "semantic annotation"): Semantic parsing aims a complete, machine-readable representation of natural language, whereas semantic annotation in the sense of knowledge extraction tackles only a very elementary aspect of that.
The following criteria can be used to categorize tools, which extract knowledge from natural language text.
The following table characterizes some tools for Knowledge Extraction from natural language sources.
Knowledge discovery describes the process of automatically searching large volumes ofdatafor patterns that can be consideredknowledgeaboutthe data.[44]It is often described asderivingknowledge from the input data. Knowledge discovery developed out of thedata miningdomain, and is closely related to it both in terms of methodology and terminology.[45]
The most well-known branch of data mining is knowledge discovery, also known as knowledge discovery in databases (KDD). Just as many other forms of knowledge discovery, it creates abstractions of the input data. The knowledge obtained through the process may become additional data that can be used for further usage and discovery. Often the outcomes from knowledge discovery are not actionable; techniques like domain driven data mining[46] aim to discover and deliver actionable knowledge and insights.
Another promising application of knowledge discovery is in the area ofsoftware modernization, weakness discovery and compliance which involves understanding existing software artifacts. This process is related to a concept ofreverse engineering. Usually the knowledge obtained from existing software is presented in the form of models to which specific queries can be made when necessary. Anentity relationshipis a frequent format of representing knowledge obtained from existing software.Object Management Group(OMG) developed the specificationKnowledge Discovery Metamodel(KDM) which defines an ontology for the software assets and their relationships for the purpose of performing knowledge discovery in existing code. Knowledge discovery from existing software systems, also known assoftware miningis closely related todata mining, since existing software artifacts contain enormous value for risk management andbusiness value, key for the evaluation and evolution of software systems. Instead of mining individualdata sets,software miningfocuses onmetadata, such as process flows (e.g. data flows, control flows, & call maps), architecture, database schemas, and business rules/terms/process.
|
https://en.wikipedia.org/wiki/Knowledge_extraction
|
Onomastics(oronomatologyin older texts) is the study ofproper names, including theiretymology, history, and use.
Analethonym('true name') or anorthonym('real name') is the proper name of the object in question, the object of onomastic study. Scholars studying onomastics are calledonomasticians.
Onomastics has applications indata mining, with applications such asnamed-entity recognition, or recognition of the origin of names.[1][2]It is a popular approach in historical research, where it can be used to identifyethnic minoritieswithin populations[3][4]and for the purpose ofprosopography.
Onomasticsoriginates from theGreekonomastikós(ὀνομαστικός, 'of or belonging to naming'),[5][6]itself derived fromónoma(ὄνομα, 'name').[7]
Thisonomastics-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Onomastics
|
Record linkage(also known asdata matching,data linkage,entity resolution, and many other terms) is the task of findingrecordsin a data set that refer to the sameentityacross different data sources (e.g., data files, books, websites, and databases). Record linkage is necessary whenjoiningdifferent data sets based on entities that may or may not share a common identifier (e.g.,database key,URI,National identification number), which may be due to differences in record shape, storage location, or curator style or preference. A data set that has undergone RL-oriented reconciliation may be referred to as beingcross-linked.
"Record linkage" is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. However, many other terms are used for this process. Unfortunately, this profusion of terminology has led to few cross-references between these research communities.[1][2]
Computer scientistsoften refer to it as "data matching" or as the "object identity problem". Commercial mail and database applications refer to it as "merge/purge processing" or "list washing". Other names used to describe the same concept include: "coreference/entity/identity/name/record resolution", "entity disambiguation/linking", "fuzzy matching", "duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration" and "conflation".[3]
While they share similar names, record linkage andlinked dataare two separate approaches to processing and structuring data. Although both involve identifying matching entities across different data sets, record linkage standardly equates "entities" with human individuals; by contrast, Linked Data is based on the possibility of interlinking anyweb resourceacross data sets, using a correspondingly broader concept of identifier, namely aURI.
The initial idea of record linkage goes back toHalbert L. Dunnin his 1946 article titled "Record Linkage" published in theAmerican Journal of Public Health.[4]
Howard Borden Newcombethen laid the probabilistic foundations of modern record linkage theory in a 1959 article inScience.[5]These were formalized in 1969 byIvan Fellegiand Alan Sunter, in their pioneering work "A Theory For Record Linkage", where they proved that the probabilistic decision rule they described was optimal when the comparison attributes were conditionally independent.[6]In their work they recognized the growing interest in applying advances in computing and automation to large collections ofadministrative data, and theFellegi-Sunter theoryremains the mathematical foundation for many record linkage applications.
Since the late 1990s, variousmachine learningtechniques have been developed that can, under favorable conditions, be used to estimate the conditional probabilities required by the Fellegi-Sunter theory. Several researchers have reported that the conditional independence assumption of the Fellegi-Sunter algorithm is often violated in practice; however, published efforts to explicitly model the conditional dependencies among the comparison attributes have not resulted in an improvement in record linkage quality.[citation needed]On the other hand, machine learning or neural network algorithms that do not rely on these assumptions often provide far higher accuracy, when sufficient labeled training data is available.[7]
Record linkage can be done entirely without the aid of a computer, but the primary reasons computers are often used to complete record linkages are to reduce or eliminate manual review and to make results more easily reproducible. Computer matching has the advantages of allowing central supervision of processing, better quality control, speed, consistency, and better reproducibility of results.[8]
Record linkage is highly sensitive to the quality of the data being linked, so all data sets under consideration (particularly their key identifier fields) should ideally undergo adata quality assessmentprior to record linkage. Many key identifiers for the same entity can be presented quite differently between (and even within) data sets, which can greatly complicate record linkage unless understood ahead of time. For example, key identifiers for a man named William J. Smith might appear in three different data sets as so:
In this example, the different formatting styles lead to records that look different but in fact all refer to the same entity with the same logical identifier values. Most, if not all, record linkage strategies would result in more accurate linkage if these values were firstnormalizedorstandardizedinto a consistent format (e.g., all names are "Surname, Given name", and all dates are "YYYY/MM/DD"). Standardization can be accomplished through simple rule-baseddata transformationsor more complex procedures such as lexicon-basedtokenizationand probabilistic hidden Markov models.[9]Several of the packages listed in theSoftware Implementationssection provide some of these features to simplify the process of data standardization.
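A minimal sketch of such rule-based standardization: names are put into a fixed "Surname, Given name" form, common nicknames are expanded, and dates are normalized to YYYY/MM/DD. The nickname table, input formats, and sample values are illustrative assumptions only.

```python
# Sketch of rule-based standardization prior to linkage. The nickname table
# and the input/output formats are illustrative choices, not a standard.
from datetime import datetime

NICKNAMES = {"bill": "william", "bob": "robert"}

def standardize_name(raw):
    if "," in raw:                      # already "Surname, Given"
        surname, given = [p.strip() for p in raw.split(",", 1)]
    else:                               # "Given [Middle] Surname"
        parts = raw.split()
        given, surname = parts[0], parts[-1]
    given = NICKNAMES.get(given.lower(), given.lower())
    return f"{surname.title()}, {given.title()}"

def standardize_date(raw, in_format):
    return datetime.strptime(raw, in_format).strftime("%Y/%m/%d")

print(standardize_name("William J. Smith"))    # Smith, William
print(standardize_name("Smith, W. J."))        # Smith, W. J.
print(standardize_name("Bill Smith"))          # Smith, William (nickname expanded)
print(standardize_date("1/2/73", "%m/%d/%y"))  # 1973/01/02
```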
Entity resolutionis an operationalintelligenceprocess, typically powered by an entity resolution engine ormiddleware, whereby organizations can connect disparate data sources with aviewto understanding possible entity matches and non-obvious relationships across multipledata silos. It analyzes all of theinformationrelating to individuals and/or entities from multiple sources of data, and then applies likelihood and probability scoring to determine which identities are a match and what, if any, non-obvious relationships exist between those identities.
Entity resolution engines are typically used to uncoverrisk,fraud, and conflicts of interest, but are also useful tools for use withincustomer data integration(CDI) andmaster data management(MDM) requirements. Typical uses for entity resolution engines include terrorist screening, insurance fraud detection,USA Patriot Actcompliance,organized retail crimering detection and applicant screening.
For example: Across different data silos – employee records, vendor data, watch lists, etc. – an organization may have several variations of an entity named ABC, which may or may not be the same individual. These entries may, in fact, appear as ABC1, ABC2, or ABC3 within those data sources. By comparing similarities between underlying attributes such asaddress,date of birth, orsocial security number, the user can eliminate some possible matches and confirm others as very likely matches.
Entity resolution engines then apply rules, based on common sense logic, to identify hidden relationships across the data. In the example above, perhaps ABC1 and ABC2 are not the same individual, but rather two distinct people who share common attributes such as address or phone number.
While entity resolution solutions include data matching technology, many data matching offerings do not fit the definition of entity resolution. Here are four factors that distinguish entity resolution from data matching, according to John Talburt, director of theUALRCenter for Advanced Research in Entity Resolution and Information Quality:
In contrast to data quality products, more powerful identity resolution engines also include arules engineand workflow process, which apply business intelligence to the resolved identities and their relationships. These advanced technologies makeautomated decisionsand impact business processes in real time, limiting the need for human intervention.
The simplest kind of record linkage, calleddeterministicorrules-based record linkage, generates links based on the number of individual identifiers that match among the available data sets.[10]Two records are said to match via a deterministic record linkage procedure if all or some identifiers (above a certain threshold) are identical. Deterministic record linkage is a good option when the entities in the data sets are identified by a common identifier, or when there are several representative identifiers (e.g., name, date of birth, and sex when identifying a person) whose quality of data is relatively high.
As an example, consider two standardized data sets, Set A and Set B, that contain different bits of information about patients in a hospital system. The two data sets identify patients using a variety of identifiers:Social Security Number(SSN), name, date of birth (DOB), sex, andZIP code(ZIP). The records in two data sets (identified by the "#" column) are shown below:
The most simple deterministic record linkage strategy would be to pick a single identifier that is assumed to be uniquely identifying, say SSN, and declare that records sharing the same value identify the same person while records not sharing the same value identify different people. In this example, deterministic linkage based on SSN would create entities based on A1 and A2; A3 and B1; and A4. While A1, A2, and B2 appear to represent the same entity, B2 would not be included into the match because it is missing a value for SSN.
Handling exceptions such as missing identifiers involves the creation of additional record linkage rules. One such rule in the case of missing SSN might be to compare name, date of birth, sex, and ZIP code with other records in hopes of finding a match. In the above example, this rule would still not match A1/A2 with B2 because the names are still slightly different: standardization put the names into the proper (Surname, Given name) format but could not discern "Bill" as a nickname for "William". Running names through aphonetic algorithmsuch asSoundex,NYSIIS, ormetaphone, can help to resolve these types of problems. However, they may still stumble over surname changes as the result of marriage or divorce, but then B2 would be matched only with A1 since the ZIP code in A2 is different. Thus, another rule would need to be created to determine whether differences in particular identifiers are acceptable (such as ZIP code) and which are not (such as date of birth).
As this example demonstrates, even a small decrease in data quality or small increase in the complexity of the data can result in a very large increase in the number of rules necessary to link records properly. Eventually, these linkage rules will become too numerous and interrelated to build without the aid of specialized software tools. In addition, linkage rules are often specific to the nature of the data sets they are designed to link together. One study was able to link the Social SecurityDeath Master Filewith two hospital registries from theMidwestern United Statesusing SSN, NYSIIS-encoded first name, birth month, and sex, but these rules may not work as well with data sets from other geographic regions or with data collected on younger populations.[11]Thus, continuous maintenance testing of these rules is necessary to ensure they continue to function as expected as new data enter the system and need to be linked. New data that exhibit different characteristics than was initially expected could require a complete rebuilding of the record linkage rule set, which could be a very time-consuming and expensive endeavor.
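A minimal sketch of a deterministic strategy with one fallback rule: records match if their SSNs agree, and if an SSN is missing the comparison falls back to name, date of birth, sex and ZIP. The records below are invented and only mimic the example discussed above.

```python
# Sketch of deterministic linkage with a fallback rule for missing SSNs.
# The records below are invented for illustration.
def matches(r1, r2):
    if r1["ssn"] and r2["ssn"]:
        return r1["ssn"] == r2["ssn"]
    fallback_keys = ("name", "dob", "sex", "zip")
    return all(r1[k] == r2[k] for k in fallback_keys)

a1 = {"ssn": "000956723", "name": "Smith, William", "dob": "1973/01/02", "sex": "M", "zip": "94701"}
b2 = {"ssn": "",          "name": "Smith, William", "dob": "1973/01/02", "sex": "M", "zip": "94701"}
print(matches(a1, b2))   # True: SSN missing in one record, so the fallback rule applies
```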
Probabilistic record linkage, sometimes calledfuzzy matching(alsoprobabilistic mergingorfuzzy mergingin the context of merging of databases), takes a different approach to the record linkage problem by taking into account a wider range of potential identifiers, computing weights for each identifier based on its estimated ability to correctly identify a match or a non-match, and using these weights to calculate the probability that two given records refer to the same entity. Record pairs with probabilities above a certain threshold are considered to be matches, while pairs with probabilities below another threshold are considered to be non-matches; pairs that fall between these two thresholds are considered to be "possible matches" and can be dealt with accordingly (e.g., human reviewed, linked, or not linked, depending on the requirements). Whereas deterministic record linkage requires a series of potentially complex rules to be programmed ahead of time, probabilistic record linkage methods can be "trained" to perform well with much less human intervention.
Many probabilistic record linkage algorithms assign match/non-match weights to identifiers by means of two probabilities calledu{\displaystyle u}andm{\displaystyle m}. Theu{\displaystyle u}probability is the probability that an identifier in twonon-matchingrecords will agree purely by chance. For example, theu{\displaystyle u}probability for birth month (where there are twelve values that are approximately uniformly distributed) is1/12≈0.083{\displaystyle 1/12\approx 0.083}; identifiers with values that are not uniformly distributed will have differentu{\displaystyle u}probabilities for different values (possibly including missing values). Them{\displaystyle m}probability is the probability that an identifier inmatchingpairs will agree (or be sufficiently similar, such as strings with lowJaro-WinklerorLevenshteindistance). This value would be1.0{\displaystyle 1.0}in the case of perfect data, but given that this is rarely (if ever) true, it can instead be estimated. This estimation may be done based on prior knowledge of the data sets, by manually identifying a large number of matching and non-matching pairs to "train" the probabilistic record linkage algorithm, or by iteratively running the algorithm to obtain closer estimations of them{\displaystyle m}probability. If a value of0.95{\displaystyle 0.95}were to be estimated for them{\displaystyle m}probability, then the match/non-match weights for the birth month identifier would be:
The same calculations would be done for all other identifiers under consideration to find their match/non-match weights. Then, every identifier of one record would be compared with the corresponding identifier of another record to compute the total weight of the pair: thematchweight is added to the running total whenever a pair of identifiers agree, while thenon-matchweight is added (i.e. the running total decreases) whenever the pair of identifiers disagrees. The resulting total weight is then compared to the aforementioned thresholds to determine whether the pair should be linked, non-linked, or set aside for special consideration (e.g. manual validation).[12]
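A minimal sketch of this weight calculation, using the birth-month figures from the text (u = 1/12, m = 0.95). Base-2 logarithms are one common convention and are assumed here; the choice of base is not fixed by the text.

```python
# Sketch of Fellegi-Sunter-style field weights from m and u probabilities.
import math

def weights(m, u):
    match_w    = math.log2(m / u)              # added when the field agrees
    nonmatch_w = math.log2((1 - m) / (1 - u))  # added (negative) when it disagrees
    return match_w, nonmatch_w

m, u = 0.95, 1 / 12
agree, disagree = weights(m, u)
print(round(agree, 2), round(disagree, 2))     # roughly 3.51 and -4.20

# Total weight of a candidate pair: sum the per-field contributions.
# (For simplicity, all three fields reuse the same weights here.)
fields_agree = [True, True, False]
total = sum(agree if a else disagree for a in fields_agree)
```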
Determining where to set the match/non-match thresholds is a balancing act between obtaining an acceptablesensitivity(orrecall, the proportion of truly matching records that are linked by the algorithm) andpositive predictive value(orprecision, the proportion of records linked by the algorithm that truly do match). Various manual and automated methods are available to predict the best thresholds, and some record linkage software packages have built-in tools to help the user find the most acceptable values. Because this can be a very computationally demanding task, particularly for large data sets, a technique known asblockingis often used to improve efficiency. Blocking attempts to restrict comparisons to just those records for which one or more particularly discriminating identifiers agree, which has the effect of increasing the positive predictive value (precision) at the expense of sensitivity (recall).[12]For example, blocking based on a phonetically coded surname and ZIP code would reduce the total number of comparisons required and would improve the chances that linked records would be correct (since two identifiers already agree), but would potentially miss records referring to the same person whose surname or ZIP code was different (due to marriage or relocation, for instance). Blocking based on birth month, a more stable identifier that would be expected to change only in the case of data error, would provide a more modest gain in positive predictive value and loss in sensitivity, but would create only twelve distinct groups which, for extremely large data sets, may not provide much net improvement in computation speed. Thus, robust record linkage systems often use multiple blocking passes to group data in various ways in order to come up with groups of records that should be compared to each other.
In recent years, a variety of machine learning techniques have been used in record linkage. It has been recognized[7]that the classic Fellegi-Sunter algorithm for probabilistic record linkage outlined above is equivalent to theNaive Bayesalgorithm in the field of machine learning,[13]and suffers from the same assumption of the independence of its features (an assumption that is typically not true).[14][15]Higher accuracy can often be achieved by using various other machine learning techniques, including a single-layerperceptron,[7]random forest, andSVM.[16]In conjunction with distributed technologies,[17]accuracy and scale for record linkage can be improved further.
High quality record linkage often requires a human–machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic big data.[18][19] Recognizing that linkage errors propagate into the linked data and its analysis, interactive record linkage systems have been proposed. Interactive record linkage is defined as people iteratively fine tuning the results from the automated methods and managing the uncertainty and its propagation to subsequent analyses.[20] The main objective of interactive record linkage systems is to manually resolve uncertain linkages and validate the results until they are at acceptable levels for the given application. Variations of interactive record linkage that enhance privacy during the human interaction steps have also been proposed.[21][22]
Record linkage is increasingly required across databases held by different organisations, where the complementary data held by these organisations can, for example, help to identify patients who are susceptible to certain adverse drug reactions (by linking hospital, doctor, and pharmacy databases). In many such applications, however, the databases to be linked contain sensitive information about people which cannot be shared between the organisations.[23]
Privacy-preserving record linkage (PPRL) methods have been developed with the aim of linking databases without sharing the original sensitive values between the organisations that participate in the linkage.[24][25] In PPRL, the attribute values of the records to be compared are generally encoded or encrypted in some form. A popular encoding technique is the Bloom filter,[26] which allows approximate similarities to be calculated between encoded values without sharing the corresponding sensitive plain-text values. At the end of the PPRL process only limited information about the record pairs classified as matches is revealed to the organisations that participate in the linkage. The techniques used in PPRL[24] must guarantee that no participating organisation, nor any external adversary, can compromise the privacy of the entities represented by records in the databases being linked.[27]
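A minimal sketch of this style of encoding follows; the filter size, number of hash functions, and use of SHA-256 are toy choices for illustration (production PPRL uses keyed hashes and carefully tuned parameters). Each party encodes its values locally, and only the encodings are compared, here with a Dice coefficient:

import hashlib

FILTER_SIZE = 100   # bits; illustrative only
NUM_HASHES = 2      # number of hash functions per q-gram

def bigrams(text):
    text = text.lower()
    return {text[i:i + 2] for i in range(len(text) - 1)}

def bloom_encode(text):
    """Hash each character bigram into a fixed-size set of bit positions (the Bloom filter)."""
    bits = set()
    for gram in bigrams(text):
        for seed in range(NUM_HASHES):
            h = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(h, 16) % FILTER_SIZE)
    return bits

def dice(bits_a, bits_b):
    """Dice coefficient on the set bit positions: 2|A∩B| / (|A|+|B|)."""
    if not bits_a and not bits_b:
        return 1.0
    return 2 * len(bits_a & bits_b) / (len(bits_a) + len(bits_b))

# Each party encodes its own values; only the encodings are exchanged and compared.
print(round(dice(bloom_encode("christine"), bloom_encode("christina")), 2))  # high similarity
print(round(dice(bloom_encode("christine"), bloom_encode("robert")), 2))     # low similarity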
In an application with two files, A and B, denote the rows (records) by α(a){\displaystyle \alpha (a)} in file A and β(b){\displaystyle \beta (b)} in file B. Assign K{\displaystyle K} characteristics to each record. The set of records that represent identical entities is defined by
M={(a,b);a=b;a∈A;b∈B}{\displaystyle M=\left\{(a,b);a=b;a\in A;b\in B\right\}}
and the complement of set M{\displaystyle M}, namely set U{\displaystyle U} representing different entities, is defined as
U={(a,b);a≠b;a∈A;b∈B}{\displaystyle U=\{(a,b);a\neq b;a\in A;b\in B\}}.
A vector γ{\displaystyle \gamma } is defined that contains the coded agreements and disagreements on each characteristic:
γ[α(a),β(b)]={γ1[α(a),β(b)],...,γK[α(a),β(b)]}{\displaystyle \gamma \left[\alpha (a),\beta (b)\right]=\{\gamma ^{1}\left[\alpha (a),\beta (b)\right],...,\gamma ^{K}\left[\alpha (a),\beta (b)\right]\}}
where the superscript runs over the K{\displaystyle K} characteristics (sex, age, marital status, etc.) in the files. The conditional probabilities of observing a specific vector γ{\displaystyle \gamma } given (a,b)∈M{\displaystyle (a,b)\in M} and (a,b)∈U{\displaystyle (a,b)\in U} are defined as
m(γ)=P{γ[α(a),β(b)]|(a,b)∈M}=∑(a,b)∈MP{γ[α(a),β(b)]}⋅P[(a,b)|M]{\displaystyle m(\gamma )=P\left\{\gamma \left[\alpha (a),\beta (b)\right]|(a,b)\in M\right\}=\sum _{(a,b)\in M}P\left\{\gamma \left[\alpha (a),\beta (b)\right]\right\}\cdot P\left[(a,b)|M\right]}
and
u(γ)=P{γ[α(a),β(b)]|(a,b)∈U}=∑(a,b)∈UP{γ[α(a),β(b)]}⋅P[(a,b)|U],{\displaystyle u(\gamma )=P\left\{\gamma \left[\alpha (a),\beta (b)\right]|(a,b)\in U\right\}=\sum _{(a,b)\in U}P\left\{\gamma \left[\alpha (a),\beta (b)\right]\right\}\cdot P\left[(a,b)|U\right],}respectively.[6]
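Under the additional assumption, noted above in connection with the Naive Bayes view, that the K characteristics are conditionally independent given match status, m(γ) and u(γ) factor into per-characteristic terms, and the logarithm of the ratio m(γ)/u(γ) equals the sum of the per-identifier agreement/disagreement weights used in the scoring procedure described earlier. A short sketch with hypothetical probabilities:

import math

# Hypothetical per-characteristic probabilities:
# m_k = P(field k agrees | pair in M), u_k = P(field k agrees | pair in U)
m = [0.95, 0.98, 0.90]
u = [0.01, 0.05, 0.10]

def likelihood_ratio(gamma):
    """m(gamma)/u(gamma) for an agreement vector gamma (1 = agree, 0 = disagree),
    assuming conditional independence of the characteristics."""
    ratio = 1.0
    for g, mk, uk in zip(gamma, m, u):
        ratio *= (mk / uk) if g else ((1 - mk) / (1 - uk))
    return ratio

gamma = [1, 1, 0]   # agree on the first two characteristics, disagree on the third
r = likelihood_ratio(gamma)
print(round(math.log2(r), 2))   # equals the summed field weights, about 7.69 here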
Most Master data management (MDM) products use a record linkage process to identify records from different sources that represent the same real-world entity. This linkage is used to create a "golden master record" containing the cleaned, reconciled data about the entity. The techniques used in MDM are the same as for record linkage generally. MDM expands this matching not only to create a "golden master record" but also to infer relationships (e.g., if two persons share the same or a similar surname and the same or a similar address, this might imply that they share a household).
Record linkage plays a key role in data warehousing and business intelligence. Data warehouses serve to combine data from many different operational source systems into one logical data model, which can then be subsequently fed into a business intelligence system for reporting and analytics. Each operational source system may have its own method of identifying the same entities used in the logical data model, so record linkage between the different sources becomes necessary to ensure that the information about a particular entity in one source system can be seamlessly compared with information about the same entity from another source system. Data standardization and subsequent record linkage often occur in the "transform" portion of the extract, transform, load (ETL) process.
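The kind of standardization that typically precedes linkage in the transform step can be sketched as follows; the cleaning rules and abbreviation table are illustrative, not any particular standard:

import re

# Illustrative abbreviation table for address standardization.
ABBREVIATIONS = {"street": "st", "road": "rd", "avenue": "ave"}

def standardize(value):
    """Case folding, punctuation removal, whitespace collapsing, and abbreviation
    mapping so that equivalent source values compare equal."""
    value = value.lower().strip()
    value = re.sub(r"[^\w\s]", "", value)   # drop punctuation
    value = re.sub(r"\s+", " ", value)      # collapse whitespace
    words = [ABBREVIATIONS.get(w, w) for w in value.split()]
    return " ".join(words)

print(standardize("42  Oak Street, Apt. 5"))   # -> "42 oak st apt 5"
print(standardize("42 OAK ST APT 5"))          # -> "42 oak st apt 5"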
Record linkage is important to social history research, since most data sets, such as census records and parish registers, were recorded long before the invention of national identification numbers. When old sources are digitized, linking of data sets is a prerequisite for longitudinal study. This process is often further complicated by the lack of standard spelling of names, family names that change according to place of dwelling, changing administrative boundaries, and problems of checking the data against other sources. Record linkage was among the most prominent themes in the History and computing field in the 1980s, but has since received less attention in research.[citation needed]
Record linkage is an important tool in creating data required for examining the health of the public and of the health care system itself. It can be used to improve data holdings, data collection, quality assessment, and the dissemination of information. Data sources can be examined to eliminate duplicate records, to identify under-reporting and missing cases (e.g., census population counts), to create person-oriented health statistics, and to generate disease registries and health surveillance systems. Some cancer registries link various data sources (e.g., hospital admissions, pathology and clinical reports, and death registrations) to generate their registries. Record linkage is also used to create health indicators. For example, fetal and infant mortality is a general indicator of a country's socioeconomic development, public health, and maternal and child services. If infant death records are matched to birth records, it is possible to use birth variables, such as birth weight and gestational age, along with mortality data, such as cause of death, in analyzing the data. Linkages can help in follow-up studies of cohorts or other groups to determine factors such as vital status, residential status, or health outcomes. Tracing is often needed for follow-up of industrial cohorts, clinical trials, and longitudinal surveys to obtain the cause of death and/or cancer. An example of a successful and long-standing record linkage system allowing for population-based medical research is the Rochester Epidemiology Project based in Rochester, Minnesota.[28]
The main reasons cited are:[citation needed]
|
https://en.wikipedia.org/wiki/Record_linkage
|
Smart tags are an early selection-based search feature, found in later versions of Microsoft Word and beta versions of the Internet Explorer 6 web browser, by which the application recognizes certain words or types of data and converts them to hyperlinks. The feature is also included in other Microsoft Office programs as well as in Visual Web Developer.[1] Selection-based search allows a user to invoke an online service from any other page using only the mouse. Microsoft had initially intended the technology to be built into its Windows XP operating system but changed its plans due to public criticism.[2]
Smart tags are integrated in instances where a user might benefit from added formatting assistance, and they are part of Microsoft's control technology.[1] They are presented as a special shortcut menu, listing options such as paste, AutoCorrect, date, Person Name, and addresses, among others that flag entered information accordingly.[3] Smart tags work through actions and recognizers: the recognizer checks whether the information entered by the user is included in the list of smart tag terms, and the action associated with it is then executed.[4] They can be accessed through a dedicated smart tag button.[5]
With smart tags enabled, Microsoft Word attempts to recognize certain types of data in a document (for example, dates or names) and automatically makes such text a smart tag, visually indicated by a purple dotted underline. Clicking a smart tag is the selection-based search command: it brings up a list of possible actions for that data type.
As an example, in Microsoft Word the words "John Smith" would be recognized as a personal name and smart tagged. The list of actions available when clicked might be Open Contact, Schedule a Meeting, Add to Contacts, or Insert Address.
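The recognizer/action split behind this behaviour can be illustrated with a small sketch; this is a conceptual illustration only, not Microsoft's actual COM-based smart tag interfaces, and the patterns and action lists are hypothetical:

import re

# Recognizers map a pattern in the text to a tag type.
RECOGNIZERS = {
    "date":        re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "person_name": re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),
}

# Actions offered for each recognized tag type.
ACTIONS = {
    "date":        ["Schedule a Meeting", "Show Calendar"],
    "person_name": ["Open Contact", "Add to Contacts", "Insert Address"],
}

def recognize(text):
    """Return (tag type, matched text, offered actions) for every recognized span."""
    tags = []
    for tag_type, pattern in RECOGNIZERS.items():
        for match in pattern.finditer(text):
            tags.append((tag_type, match.group(), ACTIONS[tag_type]))
    return tags

for tag in recognize("John Smith visited on 2007-05-14."):
    print(tag)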
As of Word version 2010, the smart tag auto-recognition[6] and Person Name smart tag features are no longer available.
Within a web browser, smart tag technology works its way through a web page, underlines the words it has been pre-programmed to react to, and inserts its own hyperlinks. Selecting a smart tag, like many selection-based search commands, involved a hover followed by a mouse click; no keyboard commands are required to invoke the search. The click takes the user to a destination specified by the smart tag developer, without the knowledge or permission of the web site proprietor (in early tests almost all the links offered were to sites or products of Microsoft or its affiliates).
Smart tags can also be generated by third parties; for example, a company might contract a technology firm to develop a set of smart tags and actions for their specific products or services, so that product names are automatically recognized and linked to actions such as "check quantity in stock" or "check price."
Some security vendors feared that smart tags could be used for propagating viruses,[7][8] user tracking, or other data collection purposes that might violate users' privacy.[9] Another concern was that they could be used in negative or harmful ways, such as linking a political candidate's name on his own website to negative advertising on other sites.[citation needed]
In response to the criticism, Microsoft removed the technology from its Windows XP operating system and made it a feature that could be turned on or off in IE and in Office XP.[10]
However, Microsoft revisited the concept of smart tags in later versions of Internet Explorer; Internet Explorer 8 implemented a selection-based search feature called Accelerators. Unlike smart tags, which automatically parsed a page looking for text of interest, Accelerators relied upon the user to select the text to which the Accelerator should be applied. The Accelerators interface was open to developers, and the accelerators included by default, which use Microsoft's or its affiliates' products, could be replaced within each category with another provider if desired.
Smart tags have also been included in email and SMS text messages on Windows Phone. For example, dates in email messages are automatically recognized and, when tapped, open a window that allows an appointment to be created with the date field already filled in.
|
https://en.wikipedia.org/wiki/Smart_tag_(Microsoft)
|
TriG is a serialization format for RDF (Resource Description Framework) graphs. It is a plain-text format for serializing named graphs and RDF datasets which offers a compact and readable alternative to the XML-based TriX syntax.
This example encodes three interlinked named graphs:
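A minimal sketch of such a document, written here with hypothetical example.org URIs, could read:

# Three named graphs; the third one makes statements about the other two.
@prefix ex: <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:bobGraph {
    ex:bob foaf:name "Bob" ;
           foaf:knows ex:alice .
}

ex:aliceGraph {
    ex:alice foaf:name "Alice" .
}

ex:provenance {
    ex:bobGraph ex:assertedBy ex:bob .
    ex:aliceGraph ex:assertedBy ex:alice .
}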
|
https://en.wikipedia.org/wiki/TriG_(syntax)
|
TriX (Triples in XML) is a serialization format for RDF (Resource Description Framework) graphs. It is an XML format for serializing named graphs and RDF datasets which offers a compact and readable alternative to the XML-based RDF/XML syntax.[1][2] It was jointly created by HP Labs and Nokia.[3]
It has been suggested that digital artifacts that depend on a serialization format need a means of verifying immutability; otherwise such artifacts, including datasets, code, texts, and images, are neither verifiable nor permanent. Embedding cryptographic hash values in the URIs used has been suggested for structured data files such as nanopublications.[4]
|
https://en.wikipedia.org/wiki/TriX_(syntax)
|