In bioinformatics, BLAST (basic local alignment search tool)[3] is an algorithm and program for comparing primary biological sequence information, such as the amino-acid sequences of proteins or the nucleotides of DNA and/or RNA sequences. A BLAST search enables a researcher to compare a subject protein or nucleotide sequence (called a query) with a library or database of sequences, and identify database sequences that resemble the query sequence above a certain threshold. For example, following the discovery of a previously unknown gene in the mouse, a scientist will typically perform a BLAST search of the human genome to see if humans carry a similar gene; BLAST will identify sequences in the human genome that resemble the mouse gene based on similarity of sequence.
BLAST is one of the most widely used bioinformatics programs for sequence searching.[4]It addresses a fundamental problem in bioinformatics research. Theheuristicalgorithm it uses is much faster than other approaches, such as calculating an optimal alignment. This emphasis on speed is vital to making the algorithm practical on the huge genome databases currently available, although subsequent algorithms can be even faster.
The BLAST program was designed by Eugene Myers, Stephen Altschul, Warren Gish, David J. Lipman and Webb Miller at the NIH. BLAST extended the alignment work of a previously developed program for protein and DNA sequence similarity searches, FASTA, by adding a novel stochastic model developed by Samuel Karlin and Stephen Altschul.[5] They proposed "a method for estimating similarities between the known DNA sequence of one organism with that of another",[3] and their work has been described as "the statistical foundation for BLAST."[6] Subsequently, Altschul, Gish, Miller, Myers, and Lipman designed and implemented the BLAST program, which was published in the Journal of Molecular Biology in 1990 and has been cited over 100,000 times since.[7]
While BLAST is faster than any Smith-Waterman implementation for most cases, it cannot "guarantee the optimal alignments of the query and database sequences" as the Smith-Waterman algorithm does. The Smith-Waterman algorithm was an extension of a previous optimal method, the Needleman–Wunsch algorithm, which was the first sequence alignment algorithm guaranteed to find the best possible alignment. However, the time and space requirements of these optimal algorithms far exceed those of BLAST.
BLAST is more time-efficient than FASTA because it searches only for the more significant patterns in the sequences, yet with comparable sensitivity. The reasons for this become clear from the description of the BLAST algorithm below.
Examples of other questions that researchers use BLAST to answer are:
BLAST is also often used as part of other algorithms that requireapproximate sequence matching.
BLAST is available on the web on the NCBI website. Different types of BLASTs are available according to the query sequences and the target databases. Alternative implementations include AB-BLAST (formerly known as WU-BLAST), FSA-BLAST (last updated in 2006), and ScalaBLAST.[8][9]
The original paper by Altschul et al.[7] was the most highly cited paper published in the 1990s.[10]
Input sequences (in FASTA or Genbank format), the database to search, and other optional parameters such as the scoring matrix.[clarification needed][11]
BLAST output can be delivered in a variety of formats, including HTML, plain text, and XML. On NCBI's webpage the default output format is HTML: results are given in a graphical format showing the hits found, a table of sequence identifiers for the hits with scoring-related data, and alignments between the sequence of interest and each hit with the corresponding BLAST scores. The easiest to read and most informative of these is probably the table.
If one is attempting to search for a proprietary sequence or simply one that is unavailable in databases available to the general public through sources such as NCBI, there is a BLAST program available for download to any computer, at no cost. This can be found at BLAST+ executables. There are also commercial programs available for purchase. Databases can be found on the NCBI site, as well as on the Index of BLAST databases (FTP).
Using a heuristic method, BLAST finds similar sequences by locating short matches between the two sequences; this process of finding initial matches is called seeding. It is after this first match that BLAST begins to make local alignments. While attempting to find similarity in sequences, sets of common letters, known as words, are very important. For example, suppose that the sequence contains the following stretch of letters: GLKFA. If a BLAST search were being conducted under normal conditions, the word size would be 3 letters. In this case, using the given stretch of letters, the searched words would be GLK, LKF, and KFA. The heuristic algorithm of BLAST locates all common three-letter words between the sequence of interest and the hit sequence or sequences from the database. This result will then be used to build an alignment. After making words for the sequence of interest, neighborhood words are also assembled. These words must satisfy a requirement of having a score of at least the threshold T when compared by using a scoring matrix.
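The word-splitting step described above is easy to sketch in code. This is a toy illustration of the idea, not the real BLAST implementation; it reproduces the GLKFA example from the text with a word size of 3.

```python
# Sketch of BLAST-style word extraction (default word size 3 for
# protein searches), using the example stretch "GLKFA" from the text.
def make_words(seq, w=3):
    """Return the overlapping words of length w in seq."""
    return [seq[i:i + w] for i in range(len(seq) - w + 1)]

print(make_words("GLKFA"))  # ['GLK', 'LKF', 'KFA']
```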
One commonly used scoring matrix for BLAST searches is BLOSUM62,[12] although the optimal scoring matrix depends on sequence similarity. Once both words and neighborhood words are assembled and compiled, they are compared to the sequences in the database in order to find matches. The threshold score T determines whether or not a particular word will be included in the alignment. Once seeding has been conducted, the alignment, which is initially only 3 residues long, is extended in both directions by the algorithm used by BLAST. Each extension either increases or decreases the score of the alignment. If this score is higher than a pre-determined T, the alignment will be included in the results given by BLAST. However, if this score is lower than this pre-determined T, the alignment will cease to extend, preventing areas of poor alignment from being included in the BLAST results. Note that increasing the T score limits the amount of space available to search, decreasing the number of neighborhood words, while at the same time speeding up the process of BLAST.
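The neighborhood-word idea can be illustrated with a toy sketch: enumerate all candidate words over an alphabet and keep those scoring at least T against a query word. The four-letter alphabet and the match/mismatch `score()` function here are invented simplifications standing in for a real substitution matrix such as BLOSUM62.

```python
from itertools import product

# Toy neighborhood-word generation. The 4-letter alphabet and the
# +5/-1 scoring scheme are illustrative, not BLOSUM62.
ALPHABET = "GLKF"

def score(word, query, match=5, mismatch=-1):
    # Stand-in for a substitution-matrix lookup.
    return sum(match if a == b else mismatch for a, b in zip(word, query))

def neighborhood(query, T):
    """All words of the same length scoring >= T against the query word."""
    return [
        "".join(w) for w in product(ALPHABET, repeat=len(query))
        if score("".join(w), query) >= T
    ]

# A high threshold keeps only the exact word; lowering T admits
# near-matches, enlarging the neighborhood.
print("GLK" in neighborhood("GLK", T=11))
```

Raising T shrinks the neighborhood (faster, less sensitive); lowering it grows the neighborhood (slower, more sensitive), matching the trade-off described above.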
To run the software, BLAST requires a query sequence to search for, and a sequence to search against (also called the target sequence) or a sequence database containing multiple such sequences. BLAST will find sub-sequences in the database which are similar to subsequences in the query. In typical usage, the query sequence is much smaller than the database, e.g., the query may be one thousand nucleotides while the database is several billion nucleotides.
The main idea of BLAST is that there are often high-scoring segment pairs (HSPs) contained in a statistically significant alignment. BLAST searches for high-scoring sequence alignments between the query sequence and the existing sequences in the database using a heuristic approach that approximates the Smith-Waterman algorithm. However, the exhaustive Smith-Waterman approach is too slow for searching large genomic databases such as GenBank. Therefore, the BLAST algorithm uses a heuristic approach that is less accurate than the Smith-Waterman algorithm but over 50 times faster.[13] The speed and relatively good accuracy of BLAST are among the key technical innovations of the BLAST programs.
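The HSP-finding step rests on seed extension with a drop-off cutoff: extend from a seed, remember the best score seen, and stop once the running score falls too far below it. The sketch below shows rightward extension only, with an invented +1/-2 scoring scheme and drop-off value; real BLAST extends in both directions with matrix-based scores.

```python
# Minimal sketch of BLAST-style ungapped seed extension with an
# X-drop cutoff. Scoring (+1 match / -2 mismatch) and x_drop=4 are
# illustrative values, not BLAST defaults.
def extend_right(query, subject, qpos, spos, x_drop=4):
    """Extend an ungapped alignment rightward from (qpos, spos).

    Returns (best_score, best_length): the highest-scoring prefix of
    the extension before the score dropped more than x_drop below it.
    """
    best = running = 0
    best_len = 0
    i = 0
    while qpos + i < len(query) and spos + i < len(subject):
        running += 1 if query[qpos + i] == subject[spos + i] else -2
        if running > best:
            best, best_len = running, i + 1
        if best - running > x_drop:
            break  # score has fallen too far: stop extending
        i += 1
    return best, best_len

print(extend_right("ACGTACGT", "ACGTTTTT", 0, 0))  # (4, 4)
```

The cutoff is what keeps the heuristic fast: extension stops as soon as it is clear the local alignment has degraded, instead of exploring the full dynamic-programming matrix as Smith-Waterman does.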
An overview of the BLAST algorithm (a protein to protein search) is as follows:[13]
BLASTn compares one or more nucleotide sequences to a database or to another sequence. This is useful when trying to identify evolutionary relationships between organisms.[15]
tBLASTn is used to search for proteins in sequences that haven't been translated into proteins yet. It takes a protein sequence and compares it to all possible translations of a DNA sequence. This is useful when looking for similar protein-coding regions in DNA sequences that haven't been fully annotated, like ESTs (short, single-read cDNA sequences) and HTGs (draft genome sequences). Since these sequences don't have known protein translations, we can only search for them using tBLASTn.[16]
BLASTx compares a nucleotide query sequence, which can be translated into six different protein sequences, against a database of known protein sequences. This tool is useful when the reading frame of the DNA sequence is uncertain or contains errors that might cause mistakes in protein-coding. BLASTx provides combined statistics for hits across all frames, making it helpful for the initial analysis of new DNA sequences.[17]
BLASTp, or Protein BLAST, is used to compare protein sequences. You can input one or more protein sequences that you want to compare against a single protein sequence or a database of protein sequences. This is useful when you're trying to identify a protein by finding similar sequences in existing protein databases.[18]
Parallel BLAST versions of split databases are implemented using MPI and Pthreads, and have been ported to various platforms including Windows, Linux, Solaris, Mac OS X, and AIX. Popular approaches to parallelizing BLAST include query distribution, hash table segmentation, computation parallelization, and database segmentation (partition). Databases are split into equal-sized pieces and stored locally on each node. Each query is run on all nodes in parallel, and the resultant BLAST output files from all nodes are merged to yield the final output. Specific implementations include MPIblast, ScalaBLAST, and DCBLAST.[19]
MPIblast makes use of a database segmentation technique to parallelize the computation process.[20]This allows for significant performance improvements when conducting BLAST searches across a set of nodes in a cluster. In some scenarios a superlinear speedup is achievable. This makes MPIblast suitable for the extensive genomic datasets that are typically used in bioinformatics.
BLAST generally runs at a speed of O(n), where n is the size of the database:[21] the time to complete the search increases linearly as the size of the database increases. MPIblast utilizes parallel processing to speed up the search. The ideal speed for any parallel computation is a complexity of O(n/p), with n being the size of the database and p being the number of processors; this would indicate that the job is evenly distributed among the p processors. This is visualized in the included graph. The superlinear speedup that can sometimes occur with MPIblast can have a complexity better than O(n/p). This occurs because the cache memory can be used to decrease the run time.[22]
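The O(n) versus O(n/p) relationship can be made concrete with a small numeric sketch; the database size and processor counts below are arbitrary illustrative units, not benchmark figures.

```python
# Illustrative comparison of serial O(n) search cost with the ideal
# parallel O(n/p) cost. Units are arbitrary, not measured times.
def ideal_parallel_time(n, p):
    """Ideal runtime when work n is split evenly over p processors."""
    return n / p

n = 1_000_000  # toy database size
for p in (1, 4, 16):
    print(f"p={p}: ideal time {ideal_parallel_time(n, p):,.0f}")
```

A superlinear result, as sometimes observed with MPIblast, means the measured time is below this ideal line, typically because each node's database fragment fits in cache.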
The predecessor to BLAST, FASTA, can also be used for protein and DNA similarity searching. FASTA provides a similar set of programs for comparing proteins to protein and DNA databases, DNA to DNA and protein databases, and includes additional programs for working with unordered short peptides and DNA sequences. In addition, the FASTA package provides SSEARCH, a vectorized implementation of the rigorous Smith-Waterman algorithm. FASTA is slower than BLAST, but provides a much wider range of scoring matrices, making it easier to tailor a search to a specific evolutionary distance.
An extremely fast but considerably less sensitive alternative to BLAST is BLAT (BLAST-Like Alignment Tool). While BLAST does a linear search, BLAT relies on k-mer indexing of the database, and can thus often find seeds faster.[23] Another software alternative similar to BLAT is PatternHunter.
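The k-mer indexing idea behind BLAT can be sketched in a few lines: pre-index every k-mer position in the database sequence so that seed lookup is a hash probe rather than a scan. The sequence and k value below are toy examples.

```python
from collections import defaultdict

# Sketch of k-mer indexing: map each k-mer to the list of positions
# where it occurs, so seed lookup is O(1) per query word.
def build_kmer_index(seq, k=3):
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

idx = build_kmer_index("ACGTACGT", k=3)
print(idx["ACG"])  # [0, 4] — both positions of the seed ACG
```

Building the index is a one-time linear pass over the database; every subsequent query word then finds its seed positions without rescanning, which is why BLAT can locate seeds faster than BLAST's linear search.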
Advances in sequencing technology in the late 2000s made searching for very similar nucleotide matches an important problem. New alignment programs tailored for this use typically employ BWT-indexing of the target database (typically a genome). Input sequences can then be mapped very quickly, and output is typically in the form of a BAM file. Example alignment programs are BWA, SOAP, and Bowtie.
For protein identification, searching for known domains (for instance from Pfam) by matching with hidden Markov models is a popular alternative, such as HMMER.
An alternative to BLAST for comparing two banks of sequences is PLAST. PLAST provides a high-performance, general-purpose bank-to-bank sequence similarity search tool relying on the PLAST[24] and ORIS[25] algorithms. Results of PLAST are very similar to those of BLAST, but PLAST is significantly faster and capable of comparing large sets of sequences with a small memory (i.e. RAM) footprint.
For applications in metagenomics, where the task is to compare billions of short DNA reads against tens of millions of protein references, DIAMOND[26]runs at up to 20,000 times as fast as BLASTX, while maintaining a high level of sensitivity.
The open-source software MMseqs is an alternative to BLAST/PSI-BLAST, which improves on current search tools over the full range of speed-sensitivity trade-off, achieving sensitivities better than PSI-BLAST at more than 400 times its speed.[27]
Optical computingapproaches have been suggested as promising alternatives to the current electrical implementations. OptCAM is an example of such approaches and is shown to be faster than BLAST.[28]
While both Smith-Waterman and BLAST are used to find homologous sequences by searching and comparing a query sequence with those in the databases, they do have their differences.
Because BLAST is based on a heuristic algorithm, the results it returns will not include all the possible hits within the database; BLAST misses hard-to-find matches.
An alternative for finding all possible hits would be to use the Smith-Waterman algorithm. This method differs from BLAST in two areas: accuracy and speed. The Smith-Waterman option provides better accuracy, in that it finds matches that BLAST cannot because it does not exclude any information; it is therefore necessary for remote homology detection. However, compared to BLAST it is more time-consuming and requires large amounts of computing power and memory. Advances have nevertheless been made to speed up the Smith-Waterman search process dramatically, including FPGA chips and SIMD technology.
For more complete results from BLAST, the settings can be changed from their default settings. The optimal settings for a given sequence, however, may vary. The settings one can change are E-Value, gap costs, filters, word size, and substitution matrix.
Note that the algorithm used for BLAST was developed from the algorithm used for Smith-Waterman. BLAST employs an alignment which finds "local alignments between sequences by finding short matches and from these initial matches (local) alignments are created".[29]
To help users interpret BLAST results, different software is available. Depending on installation and use, analysis features, and technology, here are some available tools:[30]
Example visualizations of BLAST results are shown in Figures 4 and 5.
BLAST can be used for several purposes. These include identifying species, locating domains, establishing phylogeny, DNA mapping, and comparison.
CS-BLAST[1][2][3] (Context-Specific BLAST) is a protein sequence search tool that extends BLAST (Basic Local Alignment Search Tool)[4] with context-specific mutation probabilities. More specifically, CS-BLAST derives context-specific amino-acid similarities on each query sequence from short windows on the query sequences. Using CS-BLAST doubles sensitivity and significantly improves alignment quality without a loss of speed in comparison to BLAST. CSI-BLAST (Context-Specific Iterated BLAST) is the context-specific analog of PSI-BLAST[5] (Position-Specific Iterated BLAST), which computes the mutation profile with substitution probabilities and mixes it with the query profile. Both of these programs are available as web servers and as free downloads.
Homology is the relationship between biological structures or sequences derived from a common ancestor. Homologous proteins (proteins that have common ancestry) are inferred from their sequence similarity. Inferring homologous relationships involves calculating scores of aligned pairs minus penalties for gaps. Aligning pairs of proteins identifies regions of similarity indicating a relationship between the two, or more, proteins. In order to infer a homologous relationship, the sum of scores over all the aligned pairs of amino acids or nucleotides must be sufficiently high [2]. Standard methods of sequence comparison use a substitution matrix to accomplish this [4]. Similarities between amino acids or nucleotides are quantified in these substitution matrices. The substitution score $S$ of amino acids $a$ and $b$ can be written as follows:
$$S(a,b) = \mathrm{const} \times \log\left(\frac{P(a|b)}{P(a)}\right)$$
where $P(a|b)$ denotes the probability of amino acid $a$ mutating into amino acid $b$ [2]. In a large set of sequence alignments, counting the number of amino acids as well as the number of aligned pairs $(a,b)$ allows one to derive the probabilities $P(a|b)$ and $P(a)$.
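The log-odds score above is easy to evaluate numerically. The probabilities in this sketch are made-up values for illustration, not counts from a real alignment set, and the constant is arbitrary.

```python
import math

# Numeric sketch of S(a, b) = const × log(P(a|b) / P(a)).
# The probabilities and constant here are invented for illustration.
def substitution_score(p_a_given_b, p_a, const=2.0):
    return const * math.log(p_a_given_b / p_a)

# A substitution more likely than chance scores positively,
# one less likely than chance scores negatively:
print(substitution_score(0.2, 0.05) > 0)
print(substitution_score(0.01, 0.05) < 0)
```

This sign convention is exactly what makes substitution matrices useful for alignment: favorable pairings add to the alignment score, unfavorable ones subtract from it.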
Since protein sequences need to maintain a stable structure, a residue’s substitution probabilities are largely determined by the structural context of where it is found. As a result, substitution matrices are trained for structural contexts. Since context information is encoded in transition probabilities between states, mixing mutation probabilities from substitution matrices weighted for corresponding states achieves improved alignment qualities when compared to standard substitution matrices. CS-BLAST improves further upon this concept. The figure illustrates the sequence to sequence and profile to sequence equivalence with the alignment matrix. The query profile results from the artificial mutations in which the bar heights are proportional to the corresponding amino acid probabilities.
(A figure belongs here; this is its caption.) "Sequence search/alignment algorithms find the path that maximizes the sum of similarity scores (color-coded blue to red). Substitution matrix scores are equivalent to profile scores if the sequence profile (colored histogram) is generated from the query sequence by adding artificial mutations with the substitution matrix pseudocount scheme. Histogram bar heights represent the fraction of amino acids in profile columns".
CS-BLAST greatly improves alignment quality over the entire range of sequence identities and especially for difficult alignments in comparison to regular BLAST and PSI-BLAST. PSI-BLAST (Position-Specific Iterated BLAST) runs at about the same speed per iteration as regular BLAST, but is able to detect weaker sequence similarities that are still biologically relevant. Alignment quality is based on alignment sensitivity and alignment precision.
Alignment sensitivity is measured by comparing the number of correctly predicted residue-pair alignments to the total number of alignable pairs. This is calculated with the fraction: (pairs correctly aligned)/(pairs structurally alignable)
Alignment precision is measured by the correctness of aligned residue pairs. This is calculated with the fraction: (pairs correctly aligned)/(pairs aligned)
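The two fractions above translate directly into code; the counts used here are invented for illustration.

```python
# The two alignment-quality fractions defined above, with toy counts.
def alignment_sensitivity(correct, alignable):
    """(pairs correctly aligned) / (pairs structurally alignable)"""
    return correct / alignable

def alignment_precision(correct, aligned):
    """(pairs correctly aligned) / (pairs aligned)"""
    return correct / aligned

print(alignment_sensitivity(80, 100))  # 0.8
print(alignment_precision(80, 90))
```

Note the different denominators: sensitivity is penalized by alignable pairs the method missed, while precision is penalized by pairs it aligned incorrectly.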
The graph is the benchmark Biegert and Söding used to evaluate homology detection. The benchmark compares CS-BLAST to BLAST using true positives from the same superfamily versus false positives from pairs in different folds. (A graph belongs here.)
The other graph plots true positives (on a different scale than the previous graph) against false positives for PSI-BLAST and CSI-BLAST and compares the two for one to five iterations. (A different graph belongs here.)
CS-BLAST offers improved sensitivity and alignment quality in sequence comparison. Sequence searches with CS-BLAST are more than twice as sensitive as BLAST. It produces higher quality alignments and generates reliable E-values without a loss of speed. CS-BLAST detects 139% more homologous proteins at a cumulative error rate of 20%. At a 10% error rate, 138% more homologs are detected, and for the easiest cases at a 1% error rate, CS-BLAST was still 96% more effective than BLAST. Additionally, CS-BLAST in 2 iterations is more sensitive than 5 iterations of PSI-BLAST. About 15% more homologs were detected in comparison.
The CS-BLAST method derives context-specific amino-acid similarities from 13-residue sequence windows centered on each residue of the query. CS-BLAST works by generating a sequence profile for a query sequence using context-specific mutations and then jumpstarting a profile-to-sequence search method.
CS-BLAST starts by predicting the expected mutation probabilities for each position. For a given residue, a sequence window of ten total surrounding residues is selected, as seen in the image. Biegert and Söding then compared the sequence window to a library with thousands of context profiles. The library is generated by clustering a representative set of sequence profile windows. The actual prediction of mutation probabilities is achieved by weighted mixing of the central columns of the most similar context profiles. This aligns short profiles that are nonhomologous and ungapped, giving higher weight to better-matching profiles and making them easier to detect. A sequence profile represents a multiple alignment of homologous sequences and describes which amino acids are likely to occur at each position in related sequences. With this method substitution matrices are unnecessary. In addition, there is no need for transition probabilities, because context information is encoded within the context profiles. This makes computation simpler and allows runtime to scale linearly instead of quadratically.
The context specific mutation probability, the probability of observing a specific amino acid in a homologous sequence given a context, is calculated by a weighted mixing of the amino acids in the central columns of the most similar context profiles. The image illustrates the calculation of expected mutation probabilities for a specific residue at a certain position. As seen in the image, the library of context profiles all contribute based on similarity to the context specific sequence profile for the query sequence.
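The weighted mixing described above can be sketched as follows. Each context profile contributes its central column (a distribution over amino acids) in proportion to a similarity weight; the profiles, alphabet, and weights below are toy values, not real CS-BLAST library data.

```python
# Sketch of weighted mixing of context-profile central columns.
# Profiles are dicts mapping amino acid -> probability; weights
# reflect similarity to the query window. All values are toy data.
def mix_profiles(central_columns, weights):
    total = sum(weights)
    mixed = {}
    for col, w in zip(central_columns, weights):
        for aa, p in col.items():
            mixed[aa] = mixed.get(aa, 0.0) + (w / total) * p
    return mixed

# Two toy central columns over a 2-letter alphabet; the first
# profile matches the query context 3x better than the second.
cols = [{"A": 0.7, "G": 0.3}, {"A": 0.2, "G": 0.8}]
probs = mix_profiles(cols, weights=[3.0, 1.0])
print(round(probs["A"], 3))  # 0.75*0.7 + 0.25*0.2 = 0.575
```

Because the inputs are probability distributions and the weights are normalized, the mixed output is itself a valid distribution over amino acids.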
Predicting substitution probabilities using only the amino acid's local sequence context has the advantage of not requiring knowledge of the query protein's structure, while still allowing the detection of more homologous proteins than standard substitution matrices [4]. Biegert and Söding's approach to predicting substitution probabilities was based on a generative model. In a later paper in collaboration with Angermüller, they developed a discriminative machine learning method that improves prediction accuracy [2].
Given an observed variable $x$ and a target variable $y$, a generative model defines the probabilities $P(x|y)$ and $P(y)$ separately. To predict the unobserved target variable $y$, Bayes' theorem is used:

$$P(y|x) = \frac{P(x|y)\,P(y)}{\sum_{y'} P(x|y')\,P(y')}$$

A generative model, as the name suggests, allows one to generate new data points $(x,y)$. The joint distribution is $P(x,y) = P(x|y)\,P(y)$. To train a generative model, the parameters are chosen to maximize the joint probability of the training data,

$$\prod_{n} P(x_n, y_n),$$

taken over the training examples $(x_n, y_n)$.
The discriminative model is a logistic regression maximum entropy classifier. With the discriminative model, the goal is to predict a context-specific substitution probability given a query sequence. The discriminative approach for modeling substitution probabilities, $P(a|C_l)$, where $C_l$ denotes the sequence of amino acids around position $l$ of a sequence, is based on $K$ context states. Context states are characterized by emission weights ($v_k(a)$), a bias weight ($\pi_k$), and context weights ($\lambda_k(j,a)$) [2]. Emission probabilities from a context state are given by the emission weights as follows, for $a = 1$ to $20$:

$$P(a|k) = \frac{\exp(v_k(a))}{\sum_{a'=1}^{20} \exp(v_k(a'))}$$
where $P(a|k)$ is the emission probability and $k$ is the context state. In the discriminative approach, the probability of a context state $k$ given context $C_l$ is modeled directly as the exponential of an affine function of the context count profile, where $C_l(j,a)$ is the context count profile and the normalization constant $Z(C_l)$ normalizes the probability to 1. The equation is as follows, with the first summation running over $j = -d$ to $d$ and the second over $a = 1$ to $20$:

$$P(k|C_l) = \frac{1}{Z(C_l)}\,\exp\left(\pi_k + \sum_{j=-d}^{d}\sum_{a=1}^{20} \lambda_k(j,a)\,C_l(j,a)\right)$$
As with the generative model, target distribution is obtained by mixing the emission probabilities of each context state weighted by the similarity.
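Both formulas above are normalized exponentials (softmax functions): the emission probabilities over amino acids, and the context-state probabilities over affine scores of the count profile. This numeric sketch uses a 3-letter toy alphabet and invented weights, not trained CS-BLAST parameters.

```python
import math

# Softmax: the normalized-exponential form shared by the emission
# probabilities P(a|k) and the state probabilities P(k|C_l) above.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)  # plays the role of the normalization constant Z
    return [e / z for e in exps]

# Emission probabilities for one context state over a toy 3-letter
# alphabet (weights v_k(a) are invented):
emission = softmax([1.0, 0.0, -1.0])
print(round(sum(emission), 6))  # normalization makes them sum to 1

# Context-state probabilities from toy affine scores
# pi_k + sum_j sum_a lambda_k(j,a) * C_l(j,a), one score per state k:
posterior = softmax([0.5, 1.5, -0.5])
print(posterior.index(max(posterior)))  # the best-scoring state dominates
```

The same normalization trick guarantees both distributions sum to 1, which is what lets the weighted mixing in the following step treat them as proper probabilities.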
The MPI Bioinformatics Toolkit is an interactive website and service that allows anyone to do comprehensive and collaborative protein analysis with a variety of different tools, including CS-BLAST as well as PSI-BLAST [1]. The toolkit accepts a protein sequence as input and lets users select options to customize the analysis. It can also forward the output to other tools.
[1] Alva, Vikram, Seung-Zin Nam, Johannes Söding, and Andrei N. Lupas. "The MPI Bioinformatics Toolkit as an Integrative Platform for Advanced Protein Sequence and Structure Analysis." Nucleic Acids Research 44.Web Server Issue (2016): W410–415. NCBI. Web. 2 Nov. 2016.
[2] Angermüller, Christof, Andreas Biegert, and Johannes Söding. "Discriminative Modelling of Context-specific Amino Acid Substitution Probabilities." Bioinformatics 28.24 (2012): 3240–3247. Oxford Journals. Web. 2 Nov. 2016.
[3] Altschul, Stephen F., et al. "Gapped BLAST and PSI-BLAST: A New Generation of Protein Database Search Programs." Nucleic Acids Research 25.17 (1997): 3389–3402. Oxford University Press. Print.
[4] Biegert, A., and J. Söding. "Sequence Context-specific Profiles for Homology Searching." Proceedings of the National Academy of Sciences 106.10 (2009): 3770–3775. PNAS. Web. 23 Oct. 2016.
Pfam is a database of protein families that includes their annotations and multiple sequence alignments generated using hidden Markov models.[1][2][3] The latest version of Pfam, 37.0, was released in June 2024 and contains 21,979 families.[4] It is currently provided through the InterPro website.
The general purpose of the Pfam database is to provide a complete and accurate classification of protein families and domains.[5] Originally, the rationale behind creating the database was to have a semi-automated method of curating information on known protein families to improve the efficiency of annotating genomes.[6] The Pfam classification of protein families has been widely adopted by biologists because of its wide coverage of proteins and sensible naming conventions.[7]
It is used by experimental biologists researching specific proteins, by structural biologists to identify new targets for structure determination, by computational biologists to organise sequences and by evolutionary biologists tracing the origins of proteins.[8] Early genome projects, such as those for human and fly, used Pfam extensively for functional annotation of genomic data.[9][10][11]
The InterPro website allows users to submit protein or DNA sequences to search for matches to families in the Pfam database. If DNA is submitted, a six-frame translation is performed, then each frame is searched.[12] Rather than performing a typical BLAST search, Pfam uses profile hidden Markov models, which give greater weight to matches at conserved sites, allowing better remote homology detection, making them more suitable for annotating genomes of organisms with no well-annotated close relatives.[13]
Pfam has also been used in the creation of other resources such as iPfam, which catalogs domain-domain interactions within and between proteins, based on information in structure databases and mapping of Pfam domains onto these structures.[14]
For each family in Pfam one can:
Entries can be of several types: family, domain, repeat or motif. Family is the default class, which simply indicates that members are related. Domains are defined as an autonomous structural unit or reusable sequence unit that can be found in multiple protein contexts. Repeats are not usually stable in isolation, but rather are usually required to form tandem repeats in order to form a domain or extended structure. Motifs are usually shorter sequence units found outside of globular domains.[9]
The descriptions of Pfam families are managed by the general public using Wikipedia (see Community curation).
As of release 29.0, 76.1% of protein sequences in UniProtKB matched at least one Pfam domain.[15]
New families come from a range of sources, primarily the PDB and analysis of complete proteomes to find genes with no Pfam hit.[16]
For each family, a representative subset of sequences is aligned into a high-quality seed alignment. Sequences for the seed alignment are taken primarily from pfamseq (a non-redundant database of reference proteomes) with some supplementation from UniProtKB.[15] This seed alignment is then used to build a profile hidden Markov model using HMMER. This HMM is then searched against sequence databases, and all hits that reach a curated gathering threshold are classified as members of the protein family. The resulting collection of members is then aligned to the profile HMM to generate a full alignment.
For each family, a manually curated gathering threshold is assigned that maximises the number of true matches to the family while excluding any false positive matches. False positives are estimated by observing overlaps between Pfam family hits that are not from the same clan. This threshold is used to assess whether a match to a family HMM should be included in the protein family. Upon each update of Pfam, gathering thresholds are reassessed to prevent overlaps between new and existing families.[16]
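The membership decision described above reduces to a simple score cutoff: hits at or above the family's gathering threshold are classified as members. The sequence identifiers, bit scores, and threshold value in this sketch are invented for illustration.

```python
# Sketch of how a curated gathering threshold separates family
# members from non-members. Scores and the threshold are toy values,
# not real Pfam data.
def apply_gathering_threshold(hits, ga):
    """hits: {sequence_id: bit_score}; returns member ids at/above ga."""
    return sorted(s for s, score in hits.items() if score >= ga)

hits = {"seq1": 52.3, "seq2": 17.0, "seq3": 28.9}
print(apply_gathering_threshold(hits, ga=25.0))  # ['seq1', 'seq3']
```

Because the threshold is curated per family (rather than being a global E-value cutoff), it can be tuned to admit all known true members while excluding hits that overlap other, unrelated families.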
Domains of unknown function(DUFs) represent a growing fraction of the Pfam database. The families are so named because they have been found to be conserved across species, but perform an unknown role. Each newly added DUF is named in order of addition. Names of these entries are updated as their functions are identified. Normally when the function of at least one protein belonging to a DUF has been determined, the function of the entire DUF is updated and the family is renamed. Some named families are still domains of unknown function, that are named after a representative protein, e.g. YbbR. Numbers of DUFs are expected to continue increasing as conserved sequences of unknown function continue to be identified in sequence data. It is expected that DUFs will eventually outnumber families of known function.[16]
Over time both sequence and residue coverage have increased, and as families have grown, more evolutionary relationships have been discovered, allowing the grouping of families into clans.[8] Clans were first introduced to the Pfam database in 2005. They are groupings of related families that share a single evolutionary origin, as confirmed by structural, functional, sequence and HMM comparisons.[5] As of release 29.0, approximately one third of protein families belonged to a clan.[15] This portion had grown to around three-fourths by 2019 (version 32.0).[17]
To identify possible clan relationships, Pfam curators use the Simple Comparison Of Outputs Program (SCOOP) as well as information from the ECOD database.[17] ECOD is a semi-automated hierarchical database of protein families with known structures, with families that map readily to Pfam entries and homology levels that usually map to Pfam clans.[18]
Pfam was founded in 1995 by Erik Sonnhammer, Sean Eddy and Richard Durbin as a collection of commonly occurring protein domains that could be used to annotate the protein coding genes of multicellular animals.[6] One of its major aims at inception was to aid in the annotation of the C. elegans genome.[6] The project was partly driven by the assertion in ‘One thousand families for the molecular biologist’ by Cyrus Chothia that there were around 1500 different families of proteins and that the majority of proteins fell into just 1000 of these.[5][19] Counter to this assertion, the Pfam database currently contains 16,306 entries corresponding to unique protein domains and families. However, many of these families contain structural and functional similarities indicating a shared evolutionary origin (see Clans).[5]
A major point of difference between Pfam and other databases at the time of its inception was the use of two alignment types for entries: a smaller, manually checked seed alignment, as well as a full alignment built by aligning sequences to a profile hidden Markov model built from the seed alignment. This smaller seed alignment was easier to update as new releases of sequence databases came out, and thus represented a promising solution to the dilemma of how to keep the database up to date as genome sequencing became more efficient and more data needed to be processed over time. A further improvement to the speed at which the database could be updated came in version 24.0, with the introduction of HMMER3, which is ~100 times faster than HMMER2 and more sensitive.[8]
Because the entries in Pfam-A do not cover all known proteins, an automatically generated supplement was provided called Pfam-B. Pfam-B contained a large number of small families derived from clusters produced by an algorithm called ADDA.[20]Although of lower quality, Pfam-B families could be useful when no Pfam-A families were found. Pfam-B was discontinued as of release 28.0,[21]then reintroduced in release 33.1 using a new clustering algorithm, MMSeqs2.[22]
Pfam was originally hosted on three mirror sites around the world to preserve redundancy. However, between 2012 and 2014, the Pfam resource was moved to EMBL-EBI, which allowed for hosting of the website from one domain (xfam.org), using duplicate independent data centres. This allowed for better centralisation of updates, and grouping with other Xfam projects such as Rfam, TreeFam, iPfam and others, whilst retaining critical resilience provided by hosting from multiple centres.[23]
From circa 2014 to 2016, Pfam underwent a substantial reorganisation to further reduce manual effort involved in curation and allow for more frequent updates.[15] Circa 2022, Pfam was integrated into InterPro at the European Bioinformatics Institute.[24]
Curation of such a large database presented issues in terms of keeping up with the volume of new families and updated information that needed to be added. To speed up releases of the database, the developers started a number of initiatives to allow greater community involvement in managing the database.
A critical step in improving the pace of updating and improving entries was to open up the functional annotation of Pfam domains to the Wikipedia community in release 26.0.[16] For entries that already had a Wikipedia entry, this was linked into the Pfam page, and for those that did not, the community were invited to create one and inform the curators, in order for it to be linked in. It is anticipated that while community involvement will greatly improve the level of annotation of these families, some will remain insufficiently notable for inclusion in Wikipedia, in which case they will retain their original Pfam description. Some Wikipedia articles cover multiple families, such as the Zinc finger article. An automated procedure for generating articles based on InterPro and Pfam data has also been implemented, which populates a page with information and links to databases as well as available images, then once an article has been reviewed by a curator it is moved from the Sandbox to Wikipedia proper. In order to guard against vandalism of articles, each Wikipedia revision is reviewed by curators before it is displayed on the Pfam website. Almost all cases of vandalism have been corrected by the community before they reach curators, however.[16]
Pfam is run by an international consortium of three groups. In the earlier releases of Pfam, family entries could only be modified at the Cambridge, UK site, limiting the ability of consortium members to contribute to site curation. In release 26.0, developers moved to a new system that allowed registered users anywhere in the world to add or modify Pfam families.[16] | https://en.wikipedia.org/wiki/Pfam |
UGENE is computer software for bioinformatics.[1][2] It helps biologists to analyze various biological genetics data, such as sequences, annotations, multiple alignments, phylogenetic trees, NGS assemblies, and others. UGENE integrates dozens of well-known biological tools, algorithms, and original tools in the context of genomics, evolutionary biology, virology, and other branches of life science.
UGENE works on personal computer operating systems such as Windows, macOS, or Linux. It is released as free and open-source software, under a GNU General Public License (GPL) version 2. The data can be stored both locally and on shared/networked storage. The graphical user interface (GUI) provides access to pre-built tools so users with no computer programming experience can access those tools easily. UGENE also has a command-line interface to execute workflows.
Using UGENE Workflow Designer, it is possible to streamline a multi-step analysis. The workflow consists of blocks such as data readers, blocks executing embedded tools and algorithms, and data writers. Blocks can be created with command line tools or a script. A set of sample workflows is available in the Workflow Designer, to annotate sequences, convert data formats, analyze NGS data, etc.
To improve performance, UGENE uses multi-core processors (CPUs) and graphics processing units (GPUs) to accelerate some of its algorithms.[3][4]
The software supports the following features:
The Sequence View is used to visualize, analyze and modify nucleic acid or protein sequences. Depending on the sequence type and the options selected, the following views can be present in the Sequence View window:
The Alignment Editor allows working with multiple nucleic acid or protein sequences: aligning them, editing the alignment, analyzing it, storing the consensus sequence, building a phylogenetic tree, and so on.
The Phylogenetic Tree Viewer helps to visualize and edit phylogenetic trees. It is possible to synchronize a tree with the corresponding multiple alignment used to build the tree.
The Assembly Browser project was started in 2010 as an entry for Illumina iDEA Challenge 2011.[19] The browser allows users to visualize and browse large (up to hundreds of millions of short reads) next generation sequence assemblies. It supports SAM,[20] BAM (the binary version of SAM), and ACE formats. Before browsing assembly data in UGENE, an input file is converted to a UGENE database file automatically. This approach has its pros and cons. The pros are that this allows viewing the whole assembly, navigating in it, and going to well-covered regions rapidly. The cons are that a conversion may take time for a large file, and needs enough disk space to store the database.
UGENE Workflow Designer allows creating and running complex computational workflow schemas.[21]
The distinguishing feature of Workflow Designer, relative to other bioinformatics workflow management systems, is that workflows are executed on a local computer. This avoids the data transfer issues that arise when tools rely on remote file storage and internet connectivity.
The elements that a workflow consists of correspond to the bulk of algorithms integrated into UGENE. Using Workflow Designer also allows creating custom workflow elements. The elements can be based on a command-line tool or a script.
Workflows are stored in a special text format, which allows them to be reused and transferred between users.
A workflow can be run using the graphical interface or launched from the command line. The graphical interface also allows controlling the workflow execution, storing the parameters, and so on.
There is an embedded library of workflow samples to convert, filter, and annotate data, with several pipelines to analyze NGS data developed in collaboration with NIH NIAID.[22]A wizard is available for each workflow sample.
UGENE is primarily developed by Unipro LLC[23] with headquarters in Akademgorodok of Novosibirsk, Russia. Each iteration lasts about 1–2 months, followed by a new release. Development snapshots may also be downloaded.
The features to include in each release are mostly initiated by users. | https://en.wikipedia.org/wiki/UGENE |
Markov renewal processes are a class of random processes in probability and statistics that generalize the class of Markov jump processes. Other classes of random processes, such as Markov chains and Poisson processes, can be derived as special cases among the class of Markov renewal processes, while Markov renewal processes are special cases among the more general class of renewal processes.
In the context of a jump process that takes states in a state space $\mathrm{S}$, consider the set of random variables $(X_n, T_n)$, where $T_n$ represents the jump times and $X_n$ represents the associated states in the sequence of states (see Figure). Let $\tau_n = T_n - T_{n-1}$ denote the sequence of inter-arrival times. In order for the sequence $(X_n, T_n)$ to be considered a Markov renewal process the following condition should hold:
$$\begin{aligned}&\Pr(\tau_{n+1}\le t,\, X_{n+1}=j \mid (X_0,T_0),(X_1,T_1),\ldots,(X_n=i,T_n))\\ ={}&\Pr(\tau_{n+1}\le t,\, X_{n+1}=j \mid X_n=i) \qquad \forall\, n\ge 1,\ t\ge 0,\ i,j\in \mathrm{S}\end{aligned}$$ | https://en.wikipedia.org/wiki/Markov_renewal_process |
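The defining property above (the next state and inter-arrival time depend only on the current state) is straightforward to simulate. Below is an illustrative Python sketch of a two-state Markov renewal process; the transition probabilities and the exponential holding-time rates are invented, and other holding-time distributions could be substituted.

```python
import random

# Sketch of simulating a Markov renewal process (X_n, T_n).  The two-state
# transition probabilities and the exponential holding-time rates are
# invented for illustration.
P = {"a": (("a", "b"), (0.3, 0.7)),      # next-state distribution from each state
     "b": (("a", "b"), (0.6, 0.4))}
RATE = {("a", "a"): 2.0, ("a", "b"): 1.0,
        ("b", "a"): 0.5, ("b", "b"): 1.5}

def simulate(x0, n_steps, seed=0):
    rng = random.Random(seed)
    x, t = x0, 0.0
    path = [(x, t)]
    for _ in range(n_steps):
        states, weights = P[x]
        nxt = rng.choices(states, weights)[0]   # depends only on the current state
        t += rng.expovariate(RATE[(x, nxt)])    # inter-arrival time tau_{n+1}
        x = nxt
        path.append((x, t))
    return path

for state, jump_time in simulate("a", 5):
    print(f"X at t={jump_time:.2f}: {state}")
```

If the holding-time rates were all equal and independent of the state pair, this would reduce to a Markov jump process; with a single state it reduces to a renewal process.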
Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain.
At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM constantly learns (in an unsupervised process) time-based patterns in unlabeled data. HTM is robust to noise, and has high capacity (it can learn multiple patterns simultaneously). When applied to computers, HTM is well suited for prediction,[1] anomaly detection,[2] classification, and ultimately sensorimotor applications.[3]
HTM has been tested and implemented in software through example applications from Numenta and a few commercial applications from Numenta's partners[clarification needed].
A typical HTM network is a tree-shaped hierarchy of levels (not to be confused with the "layers" of the neocortex, as described below). These levels are composed of smaller elements called regions (or nodes). A single level in the hierarchy possibly contains several regions. Higher hierarchy levels often have fewer regions. Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns.
Each HTM region has the same basic function. In learning and inference modes, sensory data (e.g. data from the eyes) comes into bottom-level regions. In generation mode, the bottom level regions output the generated pattern of a given category. The top level usually has a single region that stores the most general and most permanent categories (concepts); these determine, or are determined by, smaller concepts at lower levels—concepts that are more restricted in time and space[clarification needed]. When set in inference mode, a region (in each level) interprets information coming up from its "child" regions as probabilities of the categories it has in memory.
Each HTM region learns by identifying and memorizing spatial patterns—combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another.
HTM is the algorithmic component to Jeff Hawkins’ Thousand Brains Theory of Intelligence. As new findings on the neocortex emerge, they are progressively incorporated into the HTM model, which changes over time in response. The new findings do not necessarily invalidate the previous parts of the model, so ideas from one generation are not necessarily excluded in its successor. Because of the evolving nature of the theory, there have been several generations of HTM algorithms,[4] which are briefly described below.
The first generation of HTM algorithms is sometimes referred to as zeta 1.
During training, a node (or region) receives a temporal sequence of spatial patterns as its input. The learning process consists of two stages:
The concepts of spatial pooling and temporal pooling are still quite important in the current HTM algorithms. Temporal pooling is not yet well understood, and its meaning has changed over time (as the HTM algorithms evolved).
During inference, the node calculates the set of probabilities that a pattern belongs to each known coincidence. Then it calculates the probabilities that the input represents each temporal group. The set of probabilities assigned to the groups is called a node's "belief" about the input pattern. (In a simplified implementation, a node's belief consists of only one winning group.) This belief is the result of the inference that is passed to one or more "parent" nodes in the next higher level of the hierarchy.
"Unexpected" patterns to the node do not have a dominant probability of belonging to any one temporal group but have nearly equal probabilities of belonging to several of the groups. If sequences of patterns are similar to the training sequences, then the assigned probabilities to the groups will not change as often as patterns are received. The output of the node will not change as much, and a resolution in time[clarification needed] is lost.
In a more general scheme, the node's belief can be sent to the input of any node(s) at any level(s), but the connections between the nodes are still fixed. The higher-level node combines this output with the output from other child nodes thus forming its own input pattern.
Since resolution in space and time is lost in each node as described above, beliefs formed by higher-level nodes represent an even larger range of space and time. This is meant to reflect the organisation of the physical world as it is perceived by the human brain. Larger concepts (e.g. causes, actions, and objects) are perceived to change more slowly and consist of smaller concepts that change more quickly. Jeff Hawkins postulates that brains evolved this type of hierarchy to match, predict, and affect the organisation of the external world.
More details about the functioning of Zeta 1 HTM can be found in Numenta's old documentation.[5]
The second generation of HTM learning algorithms, often referred to as cortical learning algorithms (CLA), was drastically different from zeta 1. It relies on a data structure called sparse distributed representations (that is, a data structure whose elements are binary, 1 or 0, and whose number of 1 bits is small compared to the number of 0 bits) to represent the brain activity and a more biologically-realistic neuron model (often also referred to as cell, in the context of HTM).[6] There are two core components in this HTM generation: a spatial pooling algorithm,[7] which outputs sparse distributed representations (SDR), and a sequence memory algorithm,[8] which learns to represent and predict complex sequences.
In this new generation, the layers and minicolumns of the cerebral cortex are addressed and partially modeled. Each HTM layer (not to be confused with an HTM level of an HTM hierarchy, as described above) consists of a number of highly interconnected minicolumns. An HTM layer creates a sparse distributed representation from its input, so that a fixed percentage of minicolumns are active at any one time[clarification needed]. A minicolumn is understood as a group of cells that have the same receptive field. Each minicolumn has a number of cells that are able to remember several previous states. A cell can be in one of three states: active, inactive, and predictive.
The receptive field of each minicolumn is a fixed number of inputs that are randomly selected from a much larger number of node inputs. Based on the (specific) input pattern, some minicolumns will be more or less associated with the active input values. Spatial pooling selects a relatively constant number of the most active minicolumns and inactivates (inhibits) other minicolumns in the vicinity of the active ones. Similar input patterns tend to activate a stable set of minicolumns. The amount of memory used by each layer can be increased to learn more complex spatial patterns or decreased to learn simpler patterns.
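As a toy illustration of spatial pooling, the sketch below gives each minicolumn a random receptive field over the input bits and activates the K columns with the greatest overlap with the active inputs, inhibiting the rest. All sizes and parameters here are invented; this is not Numenta's implementation, which also includes learning of the receptive fields.

```python
import random

# Toy sketch of spatial pooling: each minicolumn samples a random receptive
# field over the input bits, and the K columns with the largest overlap with
# the active inputs win (the rest are inhibited).  All sizes are invented.
random.seed(1)
N_INPUT, N_COLUMNS, FIELD_SIZE, K = 32, 16, 8, 3

receptive_fields = [random.sample(range(N_INPUT), FIELD_SIZE)
                    for _ in range(N_COLUMNS)]

def spatial_pool(input_bits):
    # overlap = how many of the column's sampled inputs are currently active
    overlaps = [sum(input_bits[i] for i in field) for field in receptive_fields]
    winners = sorted(range(N_COLUMNS), key=lambda c: overlaps[c], reverse=True)[:K]
    return sorted(winners)

x = [1 if i % 4 == 0 else 0 for i in range(N_INPUT)]
print(spatial_pool(x))  # indices of the K active minicolumns
```

Because the winner set depends only on overlap with the input, similar inputs tend to activate a stable set of columns, as described above.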
As mentioned above, a cell (or a neuron) of a minicolumn, at any point in time, can be in an active, inactive or predictive state. Initially, cells are inactive.
If one or more cells in the active minicolumn are in the predictive state (see below), they will be the only cells to become active in the current time step. If none of the cells in the active minicolumn are in the predictive state (which happens during the initial time step or when the activation of this minicolumn was not expected), all cells are made active.
When a cell becomes active, it gradually forms connections to nearby cells that tend to be active during several previous time steps. Thus a cell learns to recognize a known sequence by checking whether the connected cells are active. If a large number of connected cells are active, this cell switches to the predictive state in anticipation of one of the few next inputs of the sequence.
The output of a layer includes minicolumns in both active and predictive states. Thus minicolumns are active over long periods of time, which leads to greater temporal stability seen by the parent layer.
Cortical learning algorithms are able to learn continuously from each new input pattern, therefore no separate inference mode is necessary. During inference, HTM tries to match the stream of inputs to fragments of previously learned sequences. This allows each HTM layer to be constantly predicting the likely continuation of the recognized sequences. The index of the predicted sequence is the output of the layer. Since predictions tend to change less frequently than the input patterns, this leads to increasing temporal stability of the output in higher hierarchy levels. Prediction also helps to fill in missing patterns in the sequence and to interpret ambiguous data by biasing the system to infer what it predicted.
Cortical learning algorithms are currently being offered as commercial SaaS by Numenta (such as Grok[9]).
The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: "How do you know if the changes you are making to the model are good or not?" To which Jeff's response was "There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested. If our theories explain a vast array of neuroscience observations then it tells us that we’re on the right track. In the machine learning world, they don’t care about that, only how well it works on practical problems. In our case that remains to be seen. To the extent you can solve a problem that no one was able to solve before, people will take notice."[10]
The third generation builds on the second generation and adds in a theory of sensorimotor inference in the neocortex.[11][12] This theory proposes that cortical columns at every level of the hierarchy can learn complete models of objects over time and that features are learned at specific locations on the objects. The theory was expanded in 2018 and referred to as the Thousand Brains Theory.[13]
HTM attempts to implement the functionality that is characteristic of a hierarchically related group of cortical regions in the neocortex. A region of the neocortex corresponds to one or more levels in the HTM hierarchy, while the hippocampus is remotely similar to the highest HTM level. A single HTM node may represent a group of cortical columns within a certain region.
Although it is primarily a functional model, several attempts have been made to relate the algorithms of the HTM with the structure of neuronal connections in the layers of neocortex.[14][15]The neocortex is organized in vertical columns of 6 horizontal layers. The 6 layers of cells in the neocortex should not be confused with levels in an HTM hierarchy.
HTM nodes attempt to model a portion of cortical columns (80 to 100 neurons) with approximately 20 HTM "cells" per column. HTMs model only layers 2 and 3 to detect spatial and temporal features of the input, with 1 cell per column in layer 2 for spatial "pooling", and 1 to 2 dozen per column in layer 3 for temporal pooling. A key property of HTMs and of the cortex is their ability to deal with noise and variation in the input, which results from using a "sparse distributed representation" where only about 2% of the columns are active at any given time.
An HTM attempts to model a portion of the cortex's learning and plasticity as described above. Differences between HTMs and neurons include:[16]
Integrating a memory component with neural networks has a long history dating back to early research in distributed representations[17][18] and self-organizing maps. For example, in sparse distributed memory (SDM), the patterns encoded by neural networks are used as memory addresses for content-addressable memory, with "neurons" essentially serving as address encoders and decoders.[19][20]
Computers store information in dense representations such as a 32-bit word, where all combinations of 1s and 0s are possible. By contrast, brains use sparse distributed representations (SDRs).[21] The human neocortex has roughly 16 billion neurons, but at any given time only a small percent are active. The activities of neurons are like bits in a computer, and so the representation is sparse. Similar to SDM developed by NASA in the 1980s[19] and vector space models used in latent semantic analysis, HTM uses sparse distributed representations.[22]
The SDRs used in HTM are binary representations of data consisting of many bits with a small percentage of the bits active (1s); a typical implementation might have 2048 columns and 64K artificial neurons where as few as 40 might be active at once. Although it may seem less efficient for the majority of bits to go "unused" in any given representation, SDRs have two major advantages over traditional dense representations. First, SDRs are tolerant of corruption and ambiguity due to the meaning of the representation being shared (distributed) across a small percentage (sparse) of active bits. In a dense representation, flipping a single bit completely changes the meaning, while in an SDR a single bit may not affect the overall meaning much. This leads to the second advantage of SDRs: because the meaning of a representation is distributed across all active bits, the similarity between two representations can be used as a measure of semantic similarity in the objects they represent. That is, if two vectors in an SDR have 1s in the same position, then they are semantically similar in that attribute. The bits in SDRs have semantic meaning, and that meaning is distributed across the bits.[22]
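The overlap-as-similarity idea can be illustrated directly: the number of shared active bits between two SDRs serves as a crude semantic similarity score. The bit positions below are invented for illustration.

```python
# Shared active bits between two SDRs as a crude semantic-similarity score.
# Each SDR is modelled as the set of its active-bit positions; the specific
# positions are invented for illustration.
def overlap(sdr_a, sdr_b):
    return len(set(sdr_a) & set(sdr_b))

cat = {3, 100, 512, 777, 1490}
dog = {3, 100, 512, 801, 1203}
car = {45, 333, 912, 1010, 2040}
print(overlap(cat, dog))  # 3 -> cat and dog share semantic attributes
print(overlap(cat, car))  # 0 -> no shared attributes in these encodings
```

With thousands of bits and only a few dozen active, a large overlap between two random SDRs is vanishingly unlikely, which is why overlap is a meaningful similarity signal.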
The semantic folding theory[23] builds on these SDR properties to propose a new model for language semantics, where words are encoded into word-SDRs and the similarity between terms, sentences, and texts can be calculated with simple distance measures.
Likened to a Bayesian network, an HTM comprises a collection of nodes that are arranged in a tree-shaped hierarchy. Each node in the hierarchy discovers an array of causes in the input patterns and temporal sequences it receives. A Bayesian belief revision algorithm is used to propagate feed-forward and feedback beliefs from child to parent nodes and vice versa. However, the analogy to Bayesian networks is limited, because HTMs can be self-trained (such that each node has an unambiguous family relationship), cope with time-sensitive data, and grant mechanisms for covert attention.
A theory of hierarchical cortical computation based on Bayesian belief propagation was proposed earlier by Tai Sing Lee and David Mumford.[24] While HTM is mostly consistent with these ideas, it adds details about handling invariant representations in the visual cortex.[25]
Like any system that models details of the neocortex, HTM can be viewed as an artificial neural network. The tree-shaped hierarchy commonly used in HTMs resembles the usual topology of traditional neural networks. HTMs attempt to model cortical columns (80 to 100 neurons) and their interactions with fewer HTM "neurons". The goal of current HTMs is to capture as much of the functions of neurons and the network (as they are currently understood) within the capability of typical computers and in areas that can be made readily useful such as image processing. For example, feedback from higher levels and motor control is not attempted because it is not yet understood how to incorporate them and binary instead of variable synapses are used because they were determined to be sufficient in the current HTM capabilities.
LAMINART and similar neural networks researched by Stephen Grossberg attempt to model both the infrastructure of the cortex and the behavior of neurons in a temporal framework to explain neurophysiological and psychophysical data. However, these networks are, at present, too complex for realistic application.[26]
HTM is also related to work by Tomaso Poggio, including an approach for modeling the ventral stream of the visual cortex known as HMAX. Similarities of HTM to various AI ideas are described in the December 2005 issue of the Artificial Intelligence journal.[27]
Neocognitron, a hierarchical multilayered neural network proposed by Professor Kunihiko Fukushima in 1987, is one of the first deep learning neural network models.[28] | https://en.wikipedia.org/wiki/Hierarchical_temporal_memory |
In mathematics, the concept of graph dynamical systems (GDSs) can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of GDSs is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
The work on GDSs considers finite graphs and finite state spaces. As such, the research typically involves techniques from, e.g., graph theory, combinatorics, algebra, and dynamical systems rather than differential geometry. In principle, one could define and study GDSs over an infinite graph (e.g. cellular automata or probabilistic cellular automata over $\mathbb{Z}^k$, or interacting particle systems when some randomness is included), as well as GDSs with infinite state space (e.g. $\mathbb{R}$ as in coupled map lattices); see, for example, Wu.[1] In the following, everything is implicitly assumed to be finite unless stated otherwise.
A graph dynamical system is constructed from the following components:
The phase space associated to a dynamical system with map $F : K^n \to K^n$ is the finite directed graph with vertex set $K^n$ and directed edges $(x, F(x))$. The structure of the phase space is governed by the properties of the graph $Y$, the vertex functions $(f_i)_i$, and the update scheme. The research in this area seeks to infer phase space properties based on the structure of the system constituents. The analysis has a local-to-global character.
If, for example, the update scheme consists of applying the vertex functions synchronously, one obtains the class of generalized cellular automata (CA). In this case, the global map $F : K^n \to K^n$ is given by
$$F(x)_v = f_v(x[v])\,.$$
This class is referred to as generalized cellular automata since the classical or standard cellular automata are typically defined and studied over regular graphs or grids, and the vertex functions are typically assumed to be identical.
Example: Let $Y$ be the circle graph on vertices {1,2,3,4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ4. Let $K = \{0,1\}$ be the state space for each vertex and use the function $\mathrm{nor}_3 : K^3 \to K$ defined by $\mathrm{nor}_3(x,y,z) = (1+x)(1+y)(1+z)$ with arithmetic modulo 2 for all vertex functions. Then for example the system state (0,1,0,0) is mapped to (0, 0, 0, 1) using a synchronous update. All the transitions are shown in the phase space below.
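The example above can be checked with a short Python sketch that applies nor3 synchronously at every vertex of Circ4 (vertices are 0-indexed in the code, standing for 1 to 4 in the text):

```python
# The Circ4 / nor3 example, with nor3 applied synchronously at every
# vertex: each vertex reads itself and its two circle neighbours, and all
# vertices update from the *old* state at once.  Arithmetic is modulo 2.
def nor3(x, y, z):
    return (1 + x) * (1 + y) * (1 + z) % 2

def step(state):
    n = len(state)
    return tuple(nor3(state[(v - 1) % n], state[v], state[(v + 1) % n])
                 for v in range(n))

print(step((0, 1, 0, 0)))  # (0, 0, 0, 1), matching the text
```

Iterating `step` over all 16 states of $K^4$ would enumerate the full phase space referred to above.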
If the vertex functions are applied asynchronously in the sequence specified by a word $w = (w_1, w_2, \ldots, w_m)$ or permutation $\pi = (\pi_1, \pi_2, \ldots, \pi_n)$ of v[Y], one obtains the class of sequential dynamical systems (SDS).[2] In this case it is convenient to introduce the $Y$-local maps $F_i$ constructed from the vertex functions by

$$F_i(x) = (x_1, x_2, \ldots, x_{i-1}, f_i(x[i]), x_{i+1}, \ldots, x_n)\,.$$
The SDS map $F = [F_Y, w] : K^n \to K^n$ is the function composition

$$F = F_{w_m} \circ F_{w_{m-1}} \circ \cdots \circ F_{w_1}\,.$$
If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point.
Example: Let $Y$ be the circle graph on vertices {1,2,3,4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ4. Let $K = \{0,1\}$ be the state space for each vertex and use the function $\mathrm{nor}_3 : K^3 \to K$ defined by $\mathrm{nor}_3(x,y,z) = (1+x)(1+y)(1+z)$ with arithmetic modulo 2 for all vertex functions. Using the update sequence (1,2,3,4), the system state (0, 1, 0, 0) is mapped to (0, 0, 1, 0). All the system state transitions for this sequential dynamical system are shown in the phase space below.
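The sequential variant can be checked the same way; here each vertex update in the sequence sees the states already rewritten by earlier updates:

```python
# The same Circ4 / nor3 system, now updated sequentially in the order
# (1, 2, 3, 4): vertex i's update sees the states already rewritten by the
# vertices earlier in the sequence.
def nor3(x, y, z):
    return (1 + x) * (1 + y) * (1 + z) % 2

def sds_step(state, order):
    s, n = list(state), len(state)
    for v in order:              # vertices numbered from 1, as in the text
        i = v - 1
        s[i] = nor3(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return tuple(s)

print(sds_step((0, 1, 0, 0), (1, 2, 3, 4)))  # (0, 0, 1, 0), matching the text
```

Comparing this result, (0, 0, 1, 0), with the synchronous result for the same start state, (0, 0, 0, 1), shows how the update scheme alone changes the dynamics.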
From, e.g., the point of view of applications it is interesting to consider the case where one or more of the components of a GDS contains stochastic elements. Motivating applications could include processes that are not fully understood (e.g. dynamics within a cell) and where certain aspects for all practical purposes seem to behave according to some probability distribution. There are also applications governed by deterministic principles whose description is so complex or unwieldy that it makes sense to consider probabilistic approximations.
Every element of a graph dynamical system can be made stochastic in several ways. For example, in a sequential dynamical system the update sequence can be made stochastic. At each iteration step one may choose the update sequence w at random from a given distribution of update sequences with corresponding probabilities. The matching probability space of update sequences induces a probability space of SDS maps. A natural object to study in this regard is the Markov chain on state space induced by this collection of SDS maps. This case is referred to as update sequence stochastic GDS and is motivated by, e.g., processes where "events" occur at random according to certain rates (e.g. chemical reactions), synchronization in parallel computation/discrete event simulations, and in computational paradigms described later.
This specific example with a stochastic update sequence illustrates two general facts for such systems: when passing to a stochastic graph dynamical system one is generally led to (1) a study of Markov chains (with specific structure governed by the constituents of the GDS), and (2) the resulting Markov chains tend to be large, having an exponential number of states. A central goal in the study of stochastic GDS is to be able to derive reduced models.
One may also consider the case where the vertex functions are stochastic, i.e., function stochastic GDS. For example, random Boolean networks are examples of function stochastic GDS using a synchronous update scheme and where the state space is K = {0, 1}. Finite probabilistic cellular automata (PCA) are another example of function stochastic GDS. In principle the class of interacting particle systems (IPS) covers finite and infinite PCA, but in practice the work on IPS is largely concerned with the infinite case, since this allows one to introduce more interesting topologies on state space.
Graph dynamical systems constitute a natural framework for capturing distributed systems such as biological networks and epidemics over social networks, many of which are frequently referred to as complex systems. | https://en.wikipedia.org/wiki/Graph_dynamical_system |
A Boolean network consists of a discrete set of Boolean variables, each of which has a Boolean function (possibly different for each variable) assigned to it that takes inputs from a subset of those variables and whose output determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t. This may be done synchronously or asynchronously.[1]
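A minimal sketch of such a network, with three hypothetical variables and invented Boolean functions, updated synchronously:

```python
# Minimal Boolean network sketch: three hypothetical variables, each with
# its own Boolean function of the others. Synchronous update evaluates all
# functions on the state at time t to produce the state at time t+1.
functions = {
    "a": lambda s: s["b"] and s["c"],   # a_{t+1} = b_t AND c_t
    "b": lambda s: not s["a"],          # b_{t+1} = NOT a_t
    "c": lambda s: s["a"] or s["b"],    # c_{t+1} = a_t OR b_t
}

def step_sync(state):
    # Every function reads the *old* state -- the synchronous scheme.
    return {v: bool(f(state)) for v, f in functions.items()}

state = {"a": True, "b": True, "c": False}
for t in range(4):
    state = step_sync(state)
    print(t + 1, state)
```

An asynchronous scheme would instead update one variable at a time, writing each result back into the state before evaluating the next function.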
Boolean networks have been used in biology to model regulatory networks. Although Boolean networks are a crude simplification of genetic reality, where genes are not simple binary switches, there are several cases where they correctly convey the pattern of expressed and suppressed genes.[2][3] The seemingly mathematically easy (synchronous) model was only fully understood in the mid-2000s.[4]
A Boolean network is a particular kind of sequential dynamical system, where time and states are discrete, i.e. both the set of variables and the set of states in the time series each have a bijection onto an integer series.
A random Boolean network (RBN) is one that is randomly selected from the set of all possible Boolean networks of a particular size, N. One can then study statistically how the expected properties of such networks depend on various statistical properties of the ensemble of all possible networks. For example, one may study how the RBN behavior changes as the average connectivity is changed.
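Sampling from such an ensemble can be sketched as follows (a hypothetical N = 8, K = 2 network; the truth-table construction and synchronous update rule shown are one common convention, not the only one):

```python
import random

def random_boolean_network(n, k, rng=random):
    """Sample an N-K random Boolean network: each node gets K distinct
    inputs chosen uniformly, and a truth table of 2**K random bits."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    # Synchronous update: each node looks up its truth-table row given
    # the current values of its K inputs (packed into an integer index).
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state))
    )

inputs, tables = random_boolean_network(n=8, k=2)
state = tuple(random.randint(0, 1) for _ in range(8))
print(step(state, inputs, tables))
```

Repeating the sampling many times and averaging a quantity of interest over the sampled networks gives the ensemble statistics described above.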
The first Boolean networks were proposed by Stuart A. Kauffman in 1969, as random models of genetic regulatory networks,[5] but their mathematical understanding only started in the 2000s.[6][7]
Since a Boolean network has only 2^N possible states, a trajectory will sooner or later reach a previously visited state, and thus, since the dynamics are deterministic, the trajectory will fall into a steady state or cycle called an attractor (though in the broader field of dynamical systems a cycle is only an attractor if perturbations from it lead back to it). If the attractor has only a single state it is called a point attractor, and if the attractor consists of more than one state it is called a cycle attractor. The set of states that lead to an attractor is called the basin of the attractor. States which occur only at the beginning of trajectories (no trajectories lead to them) are called garden-of-Eden states,[8] and the dynamics of the network flow from these states towards attractors. The time it takes to reach an attractor is called transient time.[4]
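Because the update map is deterministic on a finite state space, the attractor and transient time reachable from a given start state can be found by straightforward iteration (a sketch; the two-node network at the bottom is invented for illustration):

```python
def find_attractor(step, start):
    """Iterate a deterministic update map until a state repeats.
    Returns (transient_time, attractor_states)."""
    seen = {}          # state -> time of first visit
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    transient = seen[state]        # time at which the cycle was first entered
    cycle_len = t - seen[state]
    # Reconstruct the cycle by walking around it once.
    cycle, s = [], state
    for _ in range(cycle_len):
        cycle.append(s)
        s = step(s)
    return transient, cycle

# Toy 2-node network: x' = y, y' = x OR y (it has a point attractor at (1, 1)).
step2 = lambda s: (s[1], s[0] | s[1])
print(find_attractor(step2, (1, 0)))
```

For an exhaustive picture one would run this from all 2^N start states and group them by attractor, which yields the basins; this brute-force approach is of course only feasible for small N.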
With growing computer power and increasing understanding of the seemingly simple model, different authors have given different estimates for the mean number and length of the attractors; what follows is a brief summary of key publications.[9]
In dynamical systems theory, the structure and length of the attractors of a network correspond to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value of the average number of connections of nodes (K_c), and can be characterized by the Hamming distance as distance measure. In the unstable regime, the distance between two initially close states on average grows exponentially in time, while in the stable regime it decreases exponentially. Here, "initially close states" means that the Hamming distance is small compared with the number of nodes (N) in the network.
For the N-K model,[15] the network is stable if K < K_c, critical if K = K_c, and unstable if K > K_c.
The state of a given node n_i is updated according to its truth table, whose outputs are randomly populated. p_i denotes the probability of assigning an off output to a given series of input signals.
If p_i = p = const. for every node, the transition between the stable and chaotic range depends on p. According to Bernard Derrida and Yves Pomeau,[16] the critical value of the average number of connections is K_c = 1/[2p(1−p)].
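The critical connectivity formula is easy to evaluate directly:

```python
def critical_connectivity(p):
    """Derrida-Pomeau critical average connectivity K_c = 1 / (2 p (1 - p))."""
    return 1.0 / (2.0 * p * (1.0 - p))

# Unbiased functions (p = 1/2) give the classic K_c = 2; biasing the outputs
# toward one value pushes the critical connectivity higher.
for p in (0.5, 0.75, 0.9):
    print(p, critical_connectivity(p))
```

The formula is symmetric in p and 1−p, so a strong bias toward either output stabilizes the network equally.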
If K is not constant, and there is no correlation between the in-degrees and out-degrees, the condition of stability is determined by ⟨K^in⟩.[17][18][19] The network is stable if ⟨K^in⟩ < K_c, critical if ⟨K^in⟩ = K_c, and unstable if ⟨K^in⟩ > K_c.
The conditions of stability are the same in the case of networks with scale-free topology where the in- and out-degree distribution is a power-law distribution, P(K) ∝ K^−γ, and ⟨K^in⟩ = ⟨K^out⟩, since every out-link from a node is an in-link to another.[20]
Sensitivity shows the probability that the output of the Boolean function of a given node changes if its input changes. For random Boolean networks, q_i = 2p_i(1−p_i). In the general case, stability of the network is governed by the largest eigenvalue λ_Q of the matrix Q, where Q_ij = q_i A_ij, and A is the adjacency matrix of the network.[21] The network is stable if λ_Q < 1, critical if λ_Q = 1, and unstable if λ_Q > 1.
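A sketch of the eigenvalue criterion, using plain power iteration on a small invented network; with unbiased functions, q_i = 0.5, and the fully connected three-node graph sits exactly at the critical point λ_Q = 1:

```python
def largest_eigenvalue(Q, iters=200):
    """Estimate the largest eigenvalue of a non-negative matrix Q by
    power iteration (sufficient for the q_i * A_ij matrices used here)."""
    n = len(Q)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(Q[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    return lam

# Hypothetical 3-node network: adjacency A, per-node sensitivity q_i = 2 p_i (1 - p_i).
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
p = [0.5, 0.5, 0.5]
q = [2 * pi * (1 - pi) for pi in p]
Q = [[q[i] * A[i][j] for j in range(3)] for i in range(3)]
lam = largest_eigenvalue(Q)
print(lam, "unstable" if lam > 1 else "critical" if lam == 1 else "stable")
```

Scaling any q_i up or down moves λ_Q across 1, switching the predicted regime.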
One theme is to study different underlying graph topologies.
Classical Boolean networks (sometimes called CRBN, i.e. classic random Boolean network) are synchronously updated. Motivated by the fact that genes don't usually change their state simultaneously,[24] different alternatives have been introduced. A common classification[25] is the following: | https://en.wikipedia.org/wiki/Boolean_network
A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRNs also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo).
The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins though serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory.[1]
In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol.[2] This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects.
In multicellular animals the same principle has been put in the service of gene cascades that control body shape.[3] Each time a cell divides, two cells result which, although they contain the same genome in full, can differ in which genes are turned on and making proteins. Sometimes a 'self-sustaining feedback loop' ensures that a cell maintains its identity and passes it on. Less understood is the mechanism of epigenetics by which chromatin modification may provide cellular memory by blocking or allowing transcription. A major feature of multicellular animals is the use of morphogen gradients, which in effect provide a positioning system that tells a cell where in the body it is, and hence what sort of cell to become. A gene that is turned on in one cell may make a product that leaves the cell and diffuses through adjacent cells, entering them and turning on genes only when it is present above a certain threshold level. These cells are thus induced into a new fate, and may even generate other morphogens that signal back to the original cell. Over longer distances morphogens may use the active process of signal transduction. Such signalling controls embryogenesis, the building of a body plan from scratch through a series of sequential steps. It also controls and maintains adult bodies through feedback processes, and the loss of such feedback because of a mutation can be responsible for the cell proliferation that is seen in cancer. In parallel with this process of building structure, the gene cascade turns on genes that make structural proteins that give each cell the physical properties it needs.
At one level, biological cells can be thought of as "partially mixed bags" of biological chemicals – in the discussion of gene regulatory networks, these chemicals are mostly the messenger RNAs (mRNAs) and proteins that arise from gene expression. These mRNAs and proteins interact with each other with various degrees of specificity. Some diffuse around the cell. Others are bound to cell membranes, interacting with molecules in the environment. Still others pass through cell membranes and mediate long-range signals to other cells in a multi-cellular organism. These molecules and their interactions comprise a gene regulatory network. A typical gene regulatory network looks something like this:
The nodes of this network can represent genes, proteins, mRNAs, protein/protein complexes or cellular processes. Nodes that are depicted as lying along vertical lines are associated with the cell/environment interfaces, while the others are free-floating and can diffuse. Edges between nodes represent interactions between the nodes, which can correspond to individual molecular reactions between DNA, mRNA, miRNA, proteins or molecular processes through which the products of one gene affect those of another, though the lack of experimentally obtained information often implies that some reactions are not modeled at such a fine level of detail. These interactions can be inductive (usually represented by arrowheads or the + sign), with an increase in the concentration of one leading to an increase in the other; inhibitory (represented with filled circles, blunt arrows or the minus sign), with an increase in one leading to a decrease in the other; or dual, when depending on the circumstances the regulator can activate or inhibit the target node. The nodes can regulate themselves directly or indirectly, creating feedback loops, which form cyclic chains of dependencies in the topological network. The network structure is an abstraction of the system's molecular or chemical dynamics, describing the manifold ways in which one substance affects all the others to which it is connected. In practice, such GRNs are inferred from the biological literature on a given system and represent a distillation of the collective knowledge about a set of related biochemical reactions. To speed up the manual curation of GRNs, some recent efforts try to use text mining, curated databases, network inference from massive data, model checking and other information extraction technologies for this purpose.[4]
Genes can be viewed as nodes in the network, with input being proteins such as transcription factors, and outputs being the level of gene expression. The value of the node depends on a function which depends on the value of its regulators in previous time steps (in the Boolean network described below these are Boolean functions, typically AND, OR, and NOT). These functions have been interpreted as performing a kind of information processing within the cell, which determines cellular behavior. The basic drivers within cells are concentrations of some proteins, which determine both spatial (location within the cell or tissue) and temporal (cell cycle or developmental stage) coordinates of the cell, as a kind of "cellular memory". The gene networks are only beginning to be understood, and it is a next step for biology to attempt to deduce the functions for each gene "node", to help understand the behavior of the system in increasing levels of complexity, from gene to signaling pathway, cell or tissue level.[5]
Mathematical models of GRNs have been developed to capture the behavior of the system being modeled, and in some cases generate predictions corresponding with experimental observations. In some other cases, models have proven to make accurate novel predictions, which can be tested experimentally, thus suggesting new approaches to explore in an experiment that sometimes wouldn't be considered in the design of the protocol of an experimental laboratory. Modeling techniques include differential equations (ODEs), Boolean networks, Petri nets, Bayesian networks, graphical Gaussian network models, stochastic models, and process calculi.[6] Conversely, techniques have been proposed for generating models of GRNs that best explain a set of time series observations. Recently it has been shown that the ChIP-seq signal of histone modifications is more correlated with transcription factor motifs at promoters than the RNA level is.[7] Hence it is proposed that time-series histone-modification ChIP-seq data could provide more reliable inference of gene regulatory networks than methods based on expression levels.
Gene regulatory networks are generally thought to be made up of a few highly connected nodes (hubs) and many poorly connected nodes nested within a hierarchical regulatory regime. Thus gene regulatory networks approximate a hierarchical scale-free network topology.[8] This is consistent with the view that most genes have limited pleiotropy and operate within regulatory modules.[9] This structure is thought to evolve due to the preferential attachment of duplicated genes to more highly connected genes.[8] Recent work has also shown that natural selection tends to favor networks with sparse connectivity.[10]
There are primarily two ways that networks can evolve, both of which can occur simultaneously. The first is that network topology can be changed by the addition or subtraction of nodes (genes), or parts of the network (modules) may be expressed in different contexts. The Drosophila Hippo signaling pathway provides a good example. The Hippo signaling pathway controls both mitotic growth and post-mitotic cellular differentiation.[11] Recently it was found that the network in which the Hippo signaling pathway operates differs between these two functions, which in turn changes the behavior of the Hippo signaling pathway. This suggests that the Hippo signaling pathway operates as a conserved regulatory module that can be used for multiple functions depending on context.[11] Thus, changing network topology can allow a conserved module to serve multiple functions and alter the final output of the network. The second way networks can evolve is by changing the strength of interactions between nodes, such as how strongly a transcription factor may bind to a cis-regulatory element. Such variation in strength of network edges has been shown to underlie between-species variation in vulva cell fate patterning of Caenorhabditis worms.[12]
Another widely cited characteristic of gene regulatory networks is their abundance of certain repetitive sub-networks known as network motifs. Network motifs can be regarded as repetitive topological patterns when dividing a big network into small blocks. Previous analysis found several types of motifs that appeared more often in gene regulatory networks than in randomly generated networks.[13][14][15] As an example, one such motif is the feed-forward loop, which consists of three nodes. This motif is the most abundant among all possible motifs made up of three nodes, as is shown in the gene regulatory networks of fly, nematode, and human.[15]
The enriched motifs have been proposed to follow convergent evolution, suggesting they are "optimal designs" for certain regulatory purposes.[16] For example, modeling shows that feed-forward loops are able to coordinate the change in node A (in terms of concentration and activity) and the expression dynamics of node C, creating different input-output behaviors.[17][18] The galactose utilization system of E. coli contains a feed-forward loop which accelerates the activation of the galactose utilization operon galETK, potentially facilitating the metabolic transition to galactose when glucose is depleted.[19] The feed-forward loop in the arabinose utilization system of E. coli delays the activation of the arabinose catabolism operon and transporters, potentially avoiding unnecessary metabolic transitions due to temporary fluctuations in upstream signaling pathways.[20] Similarly, in the Wnt signaling pathway of Xenopus, the feed-forward loop acts as a fold-change detector that responds to the fold change, rather than the absolute change, in the level of β-catenin, potentially increasing the resistance to fluctuations in β-catenin levels.[21] Following the convergent evolution hypothesis, the enrichment of feed-forward loops would be an adaptation for fast response and noise resistance. A recent study found that yeast grown in an environment of constant glucose developed mutations in glucose signaling pathways and the growth regulation pathway, suggesting that regulatory components responding to environmental changes are dispensable under a constant environment.[22]
On the other hand, some researchers hypothesize that the enrichment of network motifs is non-adaptive.[23] In other words, gene regulatory networks can evolve to a similar structure without the specific selection on the proposed input-output behavior. Support for this hypothesis often comes from computational simulations. For example, fluctuations in the abundance of feed-forward loops in a model that simulates the evolution of gene regulatory networks by randomly rewiring nodes may suggest that the enrichment of feed-forward loops is a side-effect of evolution.[24] In another model of gene regulatory network evolution, the ratio of the frequencies of gene duplication and gene deletion shows great influence on network topology: certain ratios lead to the enrichment of feed-forward loops and create networks that show features of hierarchical scale-free networks. De novo evolution of coherent type-1 feed-forward loops has been demonstrated computationally in response to selection for their hypothesized function of filtering out a short spurious signal, supporting adaptive evolution; but for non-idealized noise, a dynamics-based system of feed-forward regulation with different topology was instead favored.[25]
Regulatory networks allow bacteria to adapt to almost every environmental niche on earth.[26][27] A network of interactions among diverse types of molecules, including DNA, RNA, proteins and metabolites, is utilised by the bacteria to achieve regulation of gene expression. In bacteria, the principal function of regulatory networks is to control the response to environmental changes, for example nutritional status and environmental stress.[28] A complex organization of networks permits the microorganism to coordinate and integrate multiple environmental signals.[26]
One example of stress is when the environment suddenly becomes nutrient-poor. This triggers a complex adaptation process in bacteria such as E. coli. After this environmental change, thousands of genes change expression level. However, these changes are predictable from the topology and logic of the gene network[29] that is reported in RegulonDB. Specifically, on average, the response strength of a gene was predictable from the difference between the numbers of activating and repressing input transcription factors of that gene.[29]
It is common to model such a network with a set of coupled ordinary differential equations (ODEs) or SDEs, describing the reaction kinetics of the constituent parts. Suppose that our regulatory network has N nodes, and let S_1(t), S_2(t), …, S_N(t) represent the concentrations of the N corresponding substances at time t. Then the temporal evolution of the system can be described approximately by

dS_j/dt = f_j(S_1, S_2, …, S_N),
where the functions f_j express the dependence of S_j on the concentrations of other substances present in the cell. The functions f_j are ultimately derived from basic principles of chemical kinetics or simple expressions derived from these, e.g. Michaelis–Menten enzymatic kinetics. Hence, the functional forms of the f_j are usually chosen as low-order polynomials or Hill functions that serve as an ansatz for the real molecular dynamics. Such models are then studied using the mathematics of nonlinear dynamics. System-specific information, like reaction rate constants and sensitivities, is encoded as constant parameters.[30]
By solving for the fixed point of the system, dS_j/dt = 0 for all j, one obtains (possibly several) concentration profiles of proteins and mRNAs that are theoretically sustainable (though not necessarily stable). Steady states of kinetic equations thus correspond to potential cell types, and oscillatory solutions to the above equation to naturally cyclic cell types. Mathematical stability of these attractors can usually be characterized by the sign of higher derivatives at critical points, and then corresponds to biochemical stability of the concentration profile. Critical points and bifurcations in the equations correspond to critical cell states in which small state or parameter perturbations could switch the system between one of several stable differentiation fates. Trajectories correspond to the unfolding of biological pathways, and transients of the equations to short-term biological events. For a more mathematical discussion, see the articles on nonlinearity, dynamical systems, bifurcation theory, and chaos theory.
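A minimal sketch of this picture: two genes whose products mutually repress each other through Hill functions, with linear decay. Euler integration relaxes the system to one of two stable fixed points depending on the initial condition, the bistability underlying a genetic toggle switch. All rate constants below are illustrative:

```python
def hill_repression(x, beta=4.0, K=1.0, n=2):
    """Hill-type repression term: production rate beta / (1 + (x / K)^n)."""
    return beta / (1.0 + (x / K) ** n)

def simulate_toggle(s1, s2, dt=0.01, steps=20000):
    """Euler-integrate dS1/dt = f1(S2) - S1 and dS2/dt = f2(S1) - S2:
    two genes whose products repress each other, each with linear decay."""
    for _ in range(steps):
        d1 = hill_repression(s2) - s1
        d2 = hill_repression(s1) - s2
        s1, s2 = s1 + dt * d1, s2 + dt * d2
    return s1, s2

# Two different starting conditions relax to two different fixed points.
print(simulate_toggle(3.0, 0.1))
print(simulate_toggle(0.1, 3.0))
```

The symmetric fixed point S1 = S2 also solves dS/dt = 0 but is unstable for these parameters, so it is not reached from generic initial conditions; it is exactly the kind of critical state at which small perturbations decide the differentiation fate.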
The following example illustrates how a Boolean network can model a GRN together with its gene products (the outputs) and the substances from the environment that affect it (the inputs). Stuart Kauffman was amongst the first biologists to use the metaphor of Boolean networks to model genetic regulatory networks.[31][32]
The validity of the model can be tested by comparing simulation results with time series observations. A partial validation of a Boolean network model can also come from testing the predicted existence of a yet unknown regulatory connection between two particular transcription factors that each are nodes of the model.[33]
Continuous network models of GRNs are an extension of the Boolean networks described above. Nodes still represent genes, and connections between them regulatory influences on gene expression. Genes in biological systems display a continuous range of activity levels, and it has been argued that using a continuous representation captures several properties of gene regulatory networks not present in the Boolean model.[34] Formally, most of these approaches are similar to an artificial neural network, as inputs to a node are summed up and the result serves as input to a sigmoid function, e.g.,[35] but proteins do often control gene expression in a synergistic, i.e. non-linear, way.[36] However, there is now a continuous network model[37] that allows grouping of inputs to a node, thus realizing another level of regulation. This model is formally closer to a higher-order recurrent neural network. The same model has also been used to mimic the evolution of cellular differentiation[38] and even multicellular morphogenesis.[39]
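The ANN-like update can be sketched as follows; the weight matrix and bias values are purely illustrative (positive entries for activation, negative for repression):

```python
import math

# Continuous-network sketch in the spirit described above: expression levels
# in [0, 1], each relaxing toward a sigmoid of a weighted sum of its
# regulators' levels. The weights below are invented for illustration.
W = [[0.0, 2.5, -1.0],
     [-2.0, 0.0, 1.5],
     [1.0, -1.5, 0.0]]
bias = [-0.5, 0.2, 0.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def step(x, rate=0.5):
    """Move each expression level part of the way toward the sigmoid
    of its net regulatory input."""
    target = [sigmoid(sum(W[i][j] * x[j] for j in range(3)) + bias[i])
              for i in range(3)]
    return [xi + rate * (ti - xi) for xi, ti in zip(x, target)]

x = [0.2, 0.8, 0.5]
for _ in range(50):
    x = step(x)
print([round(v, 3) for v in x])
```

Because the sigmoid is bounded, expression levels stay in [0, 1]; the relaxation rate plays the role of a combined production/decay timescale.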
Experimental results[40][41] have demonstrated that gene expression is a stochastic process. Thus, many authors are now using the stochastic formalism, after the work by Arkin et al.[42] Works on single gene expression[43] and small synthetic genetic networks,[44][45] such as the genetic toggle switch of Tim Gardner and Jim Collins, provided additional experimental data on the phenotypic variability and the stochastic nature of gene expression. The first versions of stochastic models of gene expression involved only instantaneous reactions and were driven by the Gillespie algorithm.[46]
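A minimal Gillespie SSA for a single gene, with constant-rate transcription and first-order mRNA degradation, can be sketched as follows (rate constants are illustrative):

```python
import random

def gillespie_mrna(k_tx=2.0, k_deg=0.1, t_end=100.0, rng=random):
    """Minimal Gillespie SSA for one gene: transcription at constant rate
    k_tx, and first-order mRNA degradation with propensity k_deg * m."""
    t, m = 0.0, 0
    while True:
        a_tx, a_deg = k_tx, k_deg * m
        a_total = a_tx + a_deg
        # The waiting time to the next reaction is exponentially distributed.
        t += rng.expovariate(a_total)
        if t > t_end:
            return m
        # Pick which reaction fired, proportionally to its propensity.
        if rng.random() * a_total < a_tx:
            m += 1
        else:
            m -= 1

random.seed(0)
samples = [gillespie_mrna() for _ in range(200)]
print(sum(samples) / len(samples))  # fluctuates around k_tx / k_deg = 20
```

Repeated runs give the copy-number distribution rather than a single deterministic trajectory, which is precisely the phenotypic variability the stochastic formalism is meant to capture.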
Since some processes, such as gene transcription, involve many reactions and could not be correctly modeled as an instantaneous reaction in a single step, it was proposed to model these reactions as single-step, multiple-delayed reactions in order to account for the time it takes for the entire process to be complete.[47]
From here, a set of reactions were proposed[48] that allow generating GRNs. These are then simulated using a modified version of the Gillespie algorithm that can simulate multiple time-delayed reactions (chemical reactions where each of the products is provided a time delay that determines when it will be released in the system as a "finished product").
For example, basic transcription of a gene can be represented by the following single-step reaction (RNAP is the RNA polymerase, RBS is the RNA ribosome binding site, and Pro_i is the promoter region of gene i):
Furthermore, there seems to be a trade-off between the noise in gene expression, the speed with which genes can switch, and the metabolic cost associated with their functioning. More specifically, for any given level of metabolic cost, there is an optimal trade-off between noise and processing speed, and increasing the metabolic cost leads to better speed-noise trade-offs.[49][50][51]
A recent work proposed a simulator (SGNSim, Stochastic Gene Networks Simulator)[52] that can model GRNs where transcription and translation are modeled as multiple time-delayed events, and whose dynamics are driven by a stochastic simulation algorithm (SSA) able to deal with multiple time-delayed events.
The time delays can be drawn from several distributions, and the reaction rates from complex functions or from physical parameters. SGNSim can generate ensembles of GRNs within a set of user-defined parameters, such as topology. It can also be used to model specific GRNs and systems of chemical reactions. Genetic perturbations such as gene deletions, gene over-expression, insertions, and frame-shift mutations can also be modeled.
The GRN is created from a graph with the desired topology, imposing in-degree and out-degree distributions. Gene promoter activities are affected by other genes' expression products that act as inputs, in the form of monomers or combined into multimers, and set as direct or indirect. Next, each direct input is assigned to an operator site, and different transcription factors can be allowed, or not, to compete for the same operator site, while indirect inputs are given a target. Finally, a function is assigned to each gene, defining the gene's response to a combination of transcription factors (promoter state). The transfer functions (that is, how genes respond to a combination of inputs) can be assigned to each combination of promoter states as desired.
In other recent work, multiscale models of gene regulatory networks have been developed that focus on synthetic biology applications. Simulations have been used that model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks, guiding the design of synthetic systems.[53]
Other work has focused on predicting the gene expression levels in a gene regulatory network. The approaches used to model gene regulatory networks have been constrained to be interpretable and, as a result, are generally simplified versions of the network. For example, Boolean networks have been used due to their simplicity and ability to handle noisy data, but lose information by having a binary representation of the genes. Also, artificial neural networks omit the hidden layer so that they can be interpreted, losing the ability to model higher-order correlations in the data. Using a model that is not constrained to be interpretable, a more accurate model can be produced. Being able to predict gene expression more accurately provides a way to explore how drugs affect a system of genes as well as to find which genes are interrelated in a process. This has been encouraged by the DREAM competition,[54] which promotes a contest for the best prediction algorithms.[55] Some other recent work has used artificial neural networks with a hidden layer.[56]
There are three classes of multiple sclerosis: relapsing-remitting (RRMS), primary progressive (PPMS) and secondary progressive (SPMS). Gene regulatory networks (GRNs) play a vital role in understanding the disease mechanism across these three different multiple sclerosis classes.[57] | https://en.wikipedia.org/wiki/Gene_regulatory_network
A dynamic Bayesian network (DBN) is a Bayesian network (BN) which relates variables to each other over adjacent time steps.
A dynamic Bayesian network (DBN) is often called a "two-timeslice" BN (2TBN) because it says that at any point in time T, the value of a variable can be calculated from the internal regressors and the immediate prior value (time T−1). DBNs were developed by Paul Dagum in the early 1990s at Stanford University's Section on Medical Informatics.[1][2] Dagum developed DBNs to unify and extend traditional linear state-space models such as Kalman filters, linear and normal forecasting models such as ARMA, and simple dependency models such as hidden Markov models into a general probabilistic representation and inference mechanism for arbitrary nonlinear and non-normal time-dependent domains.[3][4]
Today, DBNs are common in robotics, and have shown potential for a wide range of data mining applications. For example, they have been used in speech recognition, digital forensics, protein sequencing, and bioinformatics. The DBN is a generalization of hidden Markov models and Kalman filters.[5]
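The HMM special case makes the two-timeslice idea concrete: one forward filtering step predicts through the transition model (the inter-slice arcs) and then conditions on the current evidence (the intra-slice arcs). All probabilities below are invented for illustration:

```python
# Sketch of filtering in a simple two-timeslice DBN with one binary hidden
# variable X and one binary observation E -- the HMM special case.
transition = {0: {0: 0.7, 1: 0.3},   # P(X_t | X_{t-1})
              1: {0: 0.2, 1: 0.8}}
emission   = {0: {0: 0.9, 1: 0.1},   # P(E_t | X_t)
              1: {0: 0.4, 1: 0.6}}

def filter_step(belief, evidence):
    """One forward step: push the belief through the transition model,
    then weight by the evidence likelihood and renormalize."""
    predicted = {x: sum(belief[x0] * transition[x0][x] for x0 in (0, 1))
                 for x in (0, 1)}
    unnorm = {x: predicted[x] * emission[x][evidence] for x in (0, 1)}
    z = unnorm[0] + unnorm[1]
    return {x: unnorm[x] / z for x in (0, 1)}

belief = {0: 0.5, 1: 0.5}
for e in [1, 1, 0]:
    belief = filter_step(belief, e)
    print(belief)
```

Replacing the binary tables with linear-Gaussian dependencies turns the same two-timeslice recursion into a Kalman filter, which is the sense in which the DBN generalizes both models.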
DBNs are conceptually related to probabilistic Boolean networks[6]and can, similarly, be used to model dynamical systems at steady-state.
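As an illustrative sketch (not from the source, with assumed parameter values), the two-timeslice idea can be written as a linear-Gaussian transition — the kind of state-space model a DBN generalizes — where each slice depends only on the previous one:

```python
import numpy as np

# Minimal linear-Gaussian two-timeslice model (the DBN view of a
# Kalman-filter state space): x_t depends only on x_{t-1}.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])         # transition matrix between adjacent slices
Q = 0.01 * np.eye(2)               # process-noise covariance

T_steps = 100
x = np.zeros((T_steps, 2))
x[0] = [1.0, 1.0]
for t in range(1, T_steps):
    # Each slice is sampled from P(x_t | x_{t-1}): the 2TBN factorization.
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
```

Because the transition matrix is stable (eigenvalues below 1), the sampled trajectory decays toward a noise-driven stationary regime.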
| https://en.wikipedia.org/wiki/Dynamic_Bayesian_network
In the natural sciences, especially in atmospheric and Earth sciences involving applied statistics, an anomaly is a persisting deviation in a physical quantity from its expected value, e.g., the systematic difference between a measurement and a trend or a model prediction.[1] Similarly, a standardized anomaly equals an anomaly divided by a standard deviation.[1] A group of anomalies can be analyzed spatially, as a map, or temporally, as a time series.
An anomaly should not be confused with an isolated outlier.
Examples arise in the atmospheric sciences and in geophysics.
The location and scale measures used in forming an anomaly time series may either be constant or may themselves be a time series or a map. For example, if the original time series consisted of daily mean temperatures, the effect of seasonal cycles might be removed using a deseasonalization filter.
Robust statistics, resistant to the effects of outliers, are sometimes used as the basis of the transformation.[1]
In the atmospheric sciences, the climatological annual cycle is often used as the expected value. Famous atmospheric anomalies include the Southern Oscillation index (SOI) and the North Atlantic oscillation (NAO) index. The SOI is the atmospheric component of El Niño, while the NAO plays an important role for European weather by modifying the exit of the Atlantic storm track.
A climate normal can also be used to derive a climate anomaly.[2] | https://en.wikipedia.org/wiki/Anomaly_time_series
A chirp is a signal in which the frequency increases (up-chirp) or decreases (down-chirp) with time. In some sources, the term chirp is used interchangeably with sweep signal.[1] It is commonly applied to sonar, radar, and laser systems, and to other applications, such as spread-spectrum communications (see chirp spread spectrum). This signal type is biologically inspired and occurs as a phenomenon due to dispersion (a non-linear dependence between frequency and the propagation speed of the wave components). It is usually compensated for by using a matched filter, which can be part of the propagation channel. Depending on the specific performance measure, however, there are better techniques both for radar and for communication. Because of its use in radar and space applications, it has also been adopted in communication standards. For automotive radar applications, it is usually called a linear frequency modulated waveform (LFMW).[2]
In spread-spectrum usage, surface acoustic wave (SAW) devices are often used to generate and demodulate the chirped signals. In optics, ultrashort laser pulses also exhibit chirp, which, in optical transmission systems, interacts with the dispersion properties of the materials, increasing or decreasing total pulse dispersion as the signal propagates. The name is a reference to the chirping sound made by birds; see bird vocalization.
The basic definitions below parallel the familiar physical quantities of position (phase), speed (angular frequency), and acceleration (chirpyness).
If a waveform is defined as
$$x(t)=\sin\left(\phi(t)\right),$$
then the instantaneous angular frequency, $\omega$, is defined as the phase rate, given by the first derivative of the phase, with the instantaneous ordinary frequency, $f$, being its normalized version:
$$\omega(t)=\frac{d\phi(t)}{dt},\qquad f(t)=\frac{\omega(t)}{2\pi}.$$
Finally, the instantaneous angular chirpyness (symbol $\gamma$) is defined as the second derivative of the instantaneous phase, or the first derivative of the instantaneous angular frequency:
$$\gamma(t)=\frac{d^{2}\phi(t)}{dt^{2}}=\frac{d\omega(t)}{dt}.$$
Angular chirpyness has units of radians per square second (rad/s²); thus, it is analogous to angular acceleration.
The instantaneous ordinary chirpyness (symbol $c$) is a normalized version, defined as the rate of change of the instantaneous frequency:[3]
$$c(t)=\frac{\gamma(t)}{2\pi}=\frac{df(t)}{dt}.$$
Ordinary chirpyness has units of reciprocal square seconds (s⁻²); thus, it is analogous to rotational acceleration.
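These definitions can be checked numerically. The following sketch (with assumed, illustrative parameter values) recovers the instantaneous frequency and chirpyness from a sampled quadratic phase:

```python
import numpy as np

# Sketch (assumes uniform sampling): recover instantaneous frequency and
# chirpyness numerically from a sampled phase phi(t) = 2*pi*(0.5*c*t^2 + f0*t).
fs = 10_000.0                      # sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
f0, c = 100.0, 50.0                # start frequency (Hz) and chirp rate (Hz/s)
phi = 2 * np.pi * (0.5 * c * t**2 + f0 * t)

f_inst = np.gradient(phi, t) / (2 * np.pi)   # f(t) = phi'(t) / (2*pi)
chirpyness = np.gradient(f_inst, t)          # c(t) = df/dt

# For this quadratic phase, f(t) = c*t + f0 and c(t) = c everywhere.
```

Away from the endpoints the numerical derivatives reproduce $f(t)=ct+f_0$ and a constant chirpyness $c$.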
In a linear-frequency chirp or simply linear chirp, the instantaneous frequency $f(t)$ varies exactly linearly with time:
$$f(t)=ct+f_{0},$$
where $f_{0}$ is the starting frequency (at time $t=0$) and $c$ is the chirp rate, assumed constant:
$$c=\frac{f_{1}-f_{0}}{T}=\frac{\Delta f}{\Delta t}.$$
Here, $f_{1}$ is the final frequency and $T$ is the time it takes to sweep from $f_{0}$ to $f_{1}$.
The corresponding time-domain function for the phase of any oscillating signal is the integral of the frequency function, as one expects the phase to grow like $\phi(t+\Delta t)\simeq\phi(t)+2\pi f(t)\,\Delta t$, i.e., the derivative of the phase is the angular frequency $\phi'(t)=2\pi f(t)$.
For the linear chirp, this results in:
$$\phi(t)=\phi_{0}+2\pi\int_{0}^{t}f(\tau)\,d\tau=\phi_{0}+2\pi\int_{0}^{t}\left(c\tau+f_{0}\right)d\tau=\phi_{0}+2\pi\left(\frac{c}{2}t^{2}+f_{0}t\right),$$
where $\phi_{0}$ is the initial phase (at time $t=0$). Thus this is also called a quadratic-phase signal.[4]
The corresponding time-domain function for a sinusoidal linear chirp is the sine of the phase in radians:
$$x(t)=\sin\left[\phi_{0}+2\pi\left(\frac{c}{2}t^{2}+f_{0}t\right)\right].$$
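For example, a sinusoidal linear chirp can be synthesized directly from this phase expression (all parameter values here are illustrative assumptions):

```python
import numpy as np

# Sketch: synthesize x(t) = sin(2*pi*(c/2*t^2 + f0*t)) with phi0 = 0.
fs = 8000.0                        # sample rate (Hz), illustrative
T = 2.0                            # sweep duration (s)
f0, f1 = 100.0, 400.0              # start and end frequencies (Hz)
c = (f1 - f0) / T                  # constant chirp rate (Hz/s)

t = np.arange(0.0, T, 1.0 / fs)
phase = 2 * np.pi * (0.5 * c * t**2 + f0 * t)
x = np.sin(phase)

# The instantaneous frequency recovered from the phase sweeps f0 -> f1 linearly.
f_inst = np.gradient(phase, t) / (2 * np.pi)
```

The recovered instantaneous frequency starts at $f_0$ and ends (one sample short of $T$) essentially at $f_1$.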
In a geometric chirp, also called an exponential chirp, the frequency of the signal varies with a geometric relationship over time. In other words, if two points in the waveform are chosen, $t_{1}$ and $t_{2}$, and the time interval between them, $T=t_{2}-t_{1}$, is kept constant, the frequency ratio $f(t_{2})/f(t_{1})$ will also be constant.[5][6]
In an exponential chirp, the frequency of the signal varies exponentially as a function of time:
$$f(t)=f_{0}k^{t/T},$$
where $f_{0}$ is the starting frequency (at $t=0$) and $k$ is the rate of exponential change in frequency:
$$k=\frac{f_{1}}{f_{0}},$$
where $f_{1}$ is the ending frequency of the chirp (at $t=T$).
Unlike the linear chirp, which has a constant chirpyness, an exponential chirp has an exponentially increasing frequency rate.
The corresponding time-domain function for the phase of an exponential chirp is the integral of the frequency (note the evaluation of the definite integral at the lower limit $\tau=0$):
$$\phi(t)=\phi_{0}+2\pi\int_{0}^{t}f(\tau)\,d\tau=\phi_{0}+2\pi f_{0}\int_{0}^{t}k^{\tau/T}\,d\tau=\phi_{0}+2\pi f_{0}\left(\frac{T\left(k^{t/T}-1\right)}{\ln(k)}\right),$$
where $\phi_{0}$ is the initial phase (at $t=0$).
The corresponding time-domain function for a sinusoidal exponential chirp is the sine of the phase in radians:
$$x(t)=\sin\left[\phi_{0}+2\pi f_{0}\left(\frac{T\left(k^{t/T}-1\right)}{\ln(k)}\right)\right].$$
As was the case for the linear chirp, the instantaneous frequency of the exponential chirp consists of the fundamental frequency $f(t)=f_{0}k^{t/T}$ accompanied by additional harmonics.[citation needed]
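Similarly, a sinusoidal exponential chirp can be synthesized by integrating $f(t)=f_0 k^{t/T}$ directly; the definite integral gives a phase term proportional to $k^{t/T}-1$. Parameter values here are illustrative assumptions:

```python
import numpy as np

# Sketch: exponential (geometric) chirp with f(t) = f0 * k**(t/T) and
# phase phi(t) = 2*pi*f0*T*(k**(t/T) - 1)/ln(k), taking phi0 = 0.
fs = 8000.0                        # sample rate (Hz), illustrative
T = 2.0                            # sweep duration (s)
f0, f1 = 100.0, 800.0              # start and end frequencies (Hz)
k = f1 / f0                        # frequency ratio over the sweep

t = np.arange(0.0, T, 1.0 / fs)
phase = 2 * np.pi * f0 * T * (k ** (t / T) - 1.0) / np.log(k)
x = np.sin(phase)

f_inst = np.gradient(phase, t) / (2 * np.pi)   # should equal f0 * k**(t/T)
```

The recovered instantaneous frequency rises geometrically from $f_0$ toward $f_1$.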
Hyperbolic chirps are used in radar applications, as they show maximum matched filter response after being distorted by the Doppler effect.[7]
In a hyperbolic chirp, the frequency of the signal varies hyperbolically as a function of time:
$$f(t)=\frac{f_{0}f_{1}T}{(f_{0}-f_{1})t+f_{1}T}.$$
The corresponding time-domain function for the phase of a hyperbolic chirp is the integral of the frequency:
$$\phi(t)=\phi_{0}+2\pi\int_{0}^{t}f(\tau)\,d\tau=\phi_{0}+2\pi\frac{-f_{0}f_{1}T}{f_{1}-f_{0}}\ln\left(1-\frac{f_{1}-f_{0}}{f_{1}T}t\right),$$
where $\phi_{0}$ is the initial phase (at $t=0$).
The corresponding time-domain function for a sinusoidal hyperbolic chirp is the sine of the phase in radians:
$$x(t)=\sin\left[\phi_{0}+2\pi\frac{-f_{0}f_{1}T}{f_{1}-f_{0}}\ln\left(1-\frac{f_{1}-f_{0}}{f_{1}T}t\right)\right].$$
A chirp signal can be generated with analog circuitry via a voltage-controlled oscillator (VCO) and a linearly or exponentially ramping control voltage.[citation needed] It can also be generated digitally by a digital signal processor (DSP) and digital-to-analog converter (DAC), using a direct digital synthesizer (DDS) and varying the step in the numerically controlled oscillator.[8] It can also be generated by a YIG oscillator.[clarification needed]
A chirp signal shares the same spectral content with an impulse signal. However, unlike the impulse signal, the spectral components of the chirp signal have different phases,[9][10][11][12] i.e., their power spectra are alike but their phase spectra are distinct. Dispersion of a signal propagation medium may result in unintentional conversion of impulse signals into chirps (whistlers). On the other hand, many practical applications, such as chirped pulse amplifiers or echolocation systems,[11] use chirp signals instead of impulses because of their inherently lower peak-to-average power ratio (PAPR).[12]
Chirp modulation, or linear frequency modulation for digital communication, was patented by Sidney Darlington in 1954, with significant later work performed by Winkler[who?] in 1962. This type of modulation employs sinusoidal waveforms whose instantaneous frequency increases or decreases linearly over time. These waveforms are commonly referred to as linear chirps or simply chirps.
Hence the rate at which their frequency changes is called the chirp rate. In binary chirp modulation, binary data is transmitted by mapping the bits into chirps of opposite chirp rates. For instance, over one bit period "1" is assigned a chirp with positive rate a and "0" a chirp with negative rate −a. Chirps have been heavily used in radar applications, and as a result advanced sources for transmission and matched filters for reception of linear chirps are available.
Another kind of chirp is the projective chirp, of the form
$$g=f\left[\frac{a\cdot x+b}{c\cdot x+1}\right],$$
having the three parameters a (scale), b (translation), and c (chirpiness). The projective chirp is ideally suited to image processing, and forms the basis for the projective chirplet transform.[3]
A change in frequency of Morse code from the desired frequency, due to poor stability in the RF oscillator, is known as chirp,[13] and in the R-S-T system is given an appended letter 'C'. | https://en.wikipedia.org/wiki/Chirp
The decomposition of time series is a statistical task that deconstructs a time series into several components, each representing one of the underlying categories of patterns.[1] There are two principal types of decomposition, which are outlined below.
This is an important technique for all types of time series analysis, especially for seasonal adjustment.[2] It seeks to construct, from an observed time series, a number of component series (that could be used to reconstruct the original by additions or multiplications) where each of these has a certain characteristic or type of behavior. For example, time series are usually decomposed into a trend component $T_t$, a cyclical component $C_t$, a seasonal component $S_t$, and an irregular (noise) component $I_t$.
Hence a time series $y_t$ using an additive model can be thought of as
$$y_t = T_t + C_t + S_t + I_t,$$
whereas a multiplicative model would be
$$y_t = T_t \times C_t \times S_t \times I_t.$$
An additive model would be used when the variations around the trend do not vary with the level of the time series whereas a multiplicative model would be appropriate if the trend is proportional to the level of the time series.[3]
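As a sketch of how such a decomposition proceeds in practice — an assumed classical additive decomposition with a centered moving average, all parameter values illustrative — one can estimate the trend and seasonal components of a synthetic series:

```python
import numpy as np

# Classical additive decomposition sketch (assumed monthly data, period 12):
# trend via a centered 2x12 moving average, seasonality via averaged
# detrended values at each seasonal position.
period = 12
rng = np.random.default_rng(0)
n = 10 * period
t = np.arange(n)
trend = 0.05 * t
seasonal = 2.0 * np.sin(2 * np.pi * t / period)
y = trend + seasonal + rng.normal(0.0, 0.1, n)    # additive model: T + S + e

# Centered 2x12 moving average for the trend-cycle component.
kernel = np.ones(period + 1) / period
kernel[0] = kernel[-1] = 0.5 / period
trend_hat = np.convolve(y, kernel, mode="valid")   # length n - period

# Detrend, then average each seasonal position to estimate S.
detrended = y[period // 2 : n - period // 2] - trend_hat
seas_hat = np.array([detrended[i::period].mean() for i in range(period)])
seas_hat -= seas_hat.mean()                        # seasonal effects sum to zero
```

The 2x12 moving average passes a linear trend exactly and annihilates any pure period-12 component, so the averaged detrended values recover the seasonal pattern up to noise.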
Sometimes the trend and cyclical components are grouped into one, called the trend-cycle component. The trend-cycle component can simply be referred to as the "trend" component, even though it may contain cyclical behavior.[3] For example, seasonal-trend decomposition of time series by loess (STL)[4] decomposes a time series into seasonal, trend and irregular components using loess and plots the components separately, whereby the cyclical component (if present in the data) is included in the "trend" component plot.
The theory of time series analysis makes use of the idea of decomposing a time series into deterministic and non-deterministic components (or predictable and unpredictable components).[2] See Wold's theorem and Wold decomposition.
Kendall shows an example of a decomposition into smooth, seasonal and irregular factors for a set of data containing values of the monthly aircraft miles flown by UK airlines.[6]
In policy analysis, forecasting future production of biofuels is key data for making better decisions, and statistical time series models have recently been developed to forecast renewable energy sources. In particular, a multiplicative decomposition method was designed to forecast future production of biohydrogen; the optimum length of the moving average (seasonal length), and the start point at which the averages are placed, were chosen to give the best agreement between the forecast and actual values.[5]
An example of statistical software for this type of decomposition is the program BV4.1, which is based on the Berlin procedure. The R statistical software also includes many packages for time series decomposition, such as seasonal,[7] stl, stlplus,[8] and bfast. Bayesian methods are also available; one example is the BEAST method in the package Rbeast,[9] available for R, MATLAB, and Python. | https://en.wikipedia.org/wiki/Decomposition_of_time_series
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory processes (diverging correlation time, e.g. power-law decaying autocorrelation function) or 1/f noise.
The obtained exponent is similar to the Hurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics are non-stationary (changing with time). It is related to measures based upon spectral techniques such as autocorrelation and the Fourier transform.
Peng et al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022[1]; it represents an extension of the (ordinary) fluctuation analysis (FA), which is affected by non-stationarities.
Systematic studies of the advantages and limitations of the DFA method were performed by P. Ch. Ivanov et al. in a series of papers focusing on the effects of different types of nonstationarities in real-world signals: (1) types of trends;[2] (2) random outliers/spikes, noisy segments, and signals composed of parts with different correlations;[3] (3) nonlinear filters;[4] (4) missing data;[5] (5) signal coarse-graining procedures,[6] and comparing DFA performance with moving-average techniques[7] (cumulative citations > 4,000). Datasets generated to test DFA are available on PhysioNet.[8]
Given: a time series $x_1, x_2, \dots, x_N$.
Compute its average value $\langle x\rangle=\frac{1}{N}\sum_{t=1}^{N}x_t$.
Sum it into a process $X_t=\sum_{i=1}^{t}(x_i-\langle x\rangle)$. This is the cumulative sum, or profile, of the original time series. For example, the profile of an i.i.d. white noise is a standard random walk.
Select a set $T=\{n_1,\dots,n_k\}$ of integers, such that $n_1<n_2<\cdots<n_k$, the smallest $n_1\approx 4$, the largest $n_k\approx N$, and the sequence is roughly evenly distributed in log-scale: $\log(n_2)-\log(n_1)\approx\log(n_3)-\log(n_2)\approx\cdots$. In other words, it is approximately a geometric progression.[9]
For each $n\in T$, divide the sequence $X_t$ into consecutive segments of length $n$. Within each segment, compute the least-squares straight-line fit (the local trend). Let $Y_{1,n}, Y_{2,n},\dots,Y_{N,n}$ be the resulting piecewise-linear fit.
Compute the root-mean-square deviation from the local trend (the local fluctuation):
$$F(n,i)=\sqrt{\frac{1}{n}\sum_{t=in+1}^{in+n}\left(X_t-Y_{t,n}\right)^{2}}.$$
Their root-mean-square is the total fluctuation:
$$F(n)=\left(\frac{1}{N/n}\sum_{i=1}^{N/n}F(n,i)^{2}\right)^{1/2}.$$
(If $N$ is not divisible by $n$, then one can either discard the remainder of the sequence, or repeat the procedure on the reversed sequence and take the root-mean-square of the two results.[10])
Make the log-log plot of $\log n$ versus $\log F(n)$.[11][12]
A straight line of slope $\alpha$ on the log-log plot indicates a statistical self-affinity of the form $F(n)\propto n^{\alpha}$. Since $F(n)$ increases monotonically with $n$, we always have $\alpha>0$.
The scaling exponent $\alpha$ is a generalization of the Hurst exponent, with the precise value giving information about the series' self-correlations.
Because the expected displacement in an uncorrelated random walk of length $N$ grows like $\sqrt{N}$, an exponent of $\tfrac{1}{2}$ corresponds to uncorrelated white noise. When the exponent is between 0 and 1, the result is fractional Gaussian noise.
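The procedure above can be sketched in a few lines; for uncorrelated white noise the fitted slope should indeed come out near 1/2. The window sizes and input series here are illustrative assumptions:

```python
import numpy as np

# Minimal DFA1 sketch following the steps above.
def dfa(x, windows):
    X = np.cumsum(x - np.mean(x))            # profile (cumulative sum)
    F = []
    for n in windows:
        m = len(X) // n                      # number of full segments
        segs = X[: m * n].reshape(m, n)
        tt = np.arange(n)
        fluct = []
        for seg in segs:
            a, b = np.polyfit(tt, seg, 1)    # least-squares linear trend
            fluct.append(np.mean((seg - (a * tt + b)) ** 2))
        F.append(np.sqrt(np.mean(fluct)))    # total fluctuation F(n)
    return np.array(F)

rng = np.random.default_rng(42)
x = rng.normal(size=2 ** 14)                              # white noise
windows = np.unique(np.geomspace(8, 1024, 12).astype(int))  # ~geometric progression
F = dfa(x, windows)
alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]      # scaling exponent
```

For this white-noise input the fitted exponent is close to 0.5, as the text predicts.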
Though the DFA algorithm always produces a positive number $\alpha$ for any time series, it does not necessarily imply that the time series is self-similar. Self-similarity requires the log-log graph to be sufficiently linear over a wide range of $n$. Furthermore, a combination of techniques including maximum likelihood estimation (MLE), rather than least squares, has been shown to better approximate the scaling, or power-law, exponent.[13]
Also, there are many scaling-exponent-like quantities that can be measured for a self-similar time series, including the divider dimension and the Hurst exponent. Therefore, the DFA scaling exponent $\alpha$ is not a fractal dimension, and does not have certain desirable properties that the Hausdorff dimension has, though in certain special cases it is related to the box-counting dimension for the graph of a time series.
The standard DFA algorithm given above removes a linear trend in each segment. If instead a degree-n polynomial trend is removed in each segment, the method is called DFAn, or higher-order DFA.[14]
Since $X_t$ is a cumulative sum of $x_t-\langle x\rangle$, a linear trend in $X_t$ corresponds to a constant trend in $x_t-\langle x\rangle$, i.e. a constant trend in $x_t$ (visible as short sections of "flat plateaus"). In this regard, DFA1 removes the mean from segments of the time series $x_t$ before quantifying the fluctuation.
Similarly, a degree-n trend in $X_t$ is a degree-(n−1) trend in $x_t$. For example, DFA2 removes linear trends from segments of the time series $x_t$ before quantifying the fluctuation, DFA3 removes parabolic trends from $x_t$, and so on.
The Hurst R/S analysis removes constant trends in the original sequence and thus, in its detrending, is equivalent to DFA1.
DFA can be generalized by computing
$$F_q(n)=\left(\frac{1}{N/n}\sum_{i=1}^{N/n}F(n,i)^{q}\right)^{1/q},$$
then making the log-log plot of $\log n$ versus $\log F_q(n)$. If there is a strong linearity in that plot, its slope is $\alpha(q)$.[15] DFA is the special case where $q=2$.
Multifractal systems scale as a function $F_q(n)\propto n^{\alpha(q)}$. Essentially, the scaling exponents need not be independent of the scale of the system. In particular, DFA measures the scaling behavior of second-moment fluctuations.
Kantelhardt et al. intended this scaling exponent as a generalization of the classical Hurst exponent. The classical Hurst exponent corresponds to $H=\alpha(2)$ for stationary cases, and $H=\alpha(2)-1$ for nonstationary cases.[15][16][17]
The DFA method has been applied to many systems, e.g. DNA sequences;[18][19] heartbeat dynamics in sleep and wake,[20] sleep stages,[21][22] rest and exercise,[23] and across circadian phases;[24][25] locomotor gait and wrist dynamics;[26][27][28][29] neuronal oscillations;[17] speech pathology detection;[30] and animal behavior pattern analysis.[31][32]
In the case of power-law decaying autocorrelations, the correlation function decays with an exponent $\gamma$: $C(L)\sim L^{-\gamma}$.
In addition, the power spectrum decays as $P(f)\sim f^{-\beta}$.
The three exponents are related by:[18]
$$\gamma=2-2\alpha,\qquad \beta=2\alpha-1.$$
The relations can be derived using the Wiener–Khinchin theorem. The relation of DFA to the power spectrum method has been well studied.[33]
Thus, $\alpha$ is tied to the slope of the power spectrum $\beta$ and is used to describe the color of noise through the relationship $\alpha=(\beta+1)/2$.
For fractional Gaussian noise (FGN), we have $\beta\in[-1,1]$, and thus $\alpha\in[0,1]$, and $\beta=2H-1$, where $H$ is the Hurst exponent. $\alpha$ for FGN is equal to $H$.[34]
For fractional Brownian motion (FBM), we have $\beta\in[1,3]$, and thus $\alpha\in[1,2]$, and $\beta=2H+1$, where $H$ is the Hurst exponent. $\alpha$ for FBM is equal to $H+1$.[16] In this context, FBM is the cumulative sum or the integral of FGN; thus, the exponents of their power spectra differ by 2. | https://en.wikipedia.org/wiki/Detrended_fluctuation_analysis
In statistics and econometrics, a distributed lag model is a model for time series data in which a regression equation is used to predict current values of a dependent variable based on both the current values of an explanatory variable and the lagged (past-period) values of this explanatory variable.[1][2]
The starting point for a distributed lag model is an assumed structure of the form
$$y_t = a + w_0 x_t + w_1 x_{t-1} + w_2 x_{t-2} + \cdots + \text{error term},$$
or the form
$$y_t = a + w_0 x_t + w_1 x_{t-1} + \cdots + w_n x_{t-n} + \text{error term},$$
where $y_t$ is the value at time period $t$ of the dependent variable $y$, $a$ is the intercept term to be estimated, and $w_i$ is called the lag weight (also to be estimated) placed on the value of the explanatory variable $x$ from $i$ periods previously. In the first equation, the dependent variable is assumed to be affected by values of the independent variable arbitrarily far in the past, so the number of lag weights is infinite and the model is called an infinite distributed lag model. In the second equation, there are only a finite number of lag weights, indicating an assumption that there is a maximum lag beyond which values of the independent variable do not affect the dependent variable; a model based on this assumption is called a finite distributed lag model.
In an infinite distributed lag model, an infinite number of lag weights need to be estimated; clearly this can be done only if some structure is assumed for the relation between the various lag weights, with the entire infinitude of them expressible in terms of a finite number of assumed underlying parameters. In a finite distributed lag model, the parameters could be directly estimated by ordinary least squares (assuming the number of data points sufficiently exceeds the number of lag weights); nevertheless, such estimation may give very imprecise results due to extreme multicollinearity among the various lagged values of the independent variable, so again it may be necessary to assume some structure for the relation between the various lag weights.
The concept of distributed lag models easily generalizes to the context of more than one right-side explanatory variable.
The simplest way to estimate parameters associated with distributed lags is by ordinary least squares, assuming a fixed maximum lag $p$, assuming independently and identically distributed errors, and imposing no structure on the relationship of the coefficients of the lagged explanators with each other. However, multicollinearity among the lagged explanators often arises, leading to high variance of the coefficient estimates.
Structured distributed lag models come in two types: finite and infinite. Infinite distributed lags allow the value of the independent variable at a particular time to influence the dependent variable infinitely far into the future, or, to put it another way, they allow the current value of the dependent variable to be influenced by values of the independent variable that occurred infinitely long ago; but beyond some lag length the effects taper off toward zero. Finite distributed lags allow the independent variable at a particular time to influence the dependent variable for only a finite number of periods.
The most important structured finite distributed lag model is the Almon lag model.[3] This model allows the data to determine the shape of the lag structure, but the researcher must specify the maximum lag length; an incorrectly specified maximum lag length can distort the shape of the estimated lag structure as well as the cumulative effect of the independent variable. The Almon lag assumes that the k + 1 lag weights are related to n + 1 linearly estimable underlying parameters (n < k) $a_j$ according to
$$w_i=\sum_{j=0}^{n}a_j i^j,$$
for $i=0,\dots,k$.
The most common type of structured infinite distributed lag model is the geometric lag, also known as the Koyck lag. In this lag structure, the weights (magnitudes of influence) of the lagged independent variable values decline exponentially with the length of the lag; while the shape of the lag structure is thus fully imposed by the choice of this technique, the rate of decline as well as the overall magnitude of effect are determined by the data. Specification of the regression equation is very straightforward: one includes as explanators (right-hand-side variables in the regression) the one-period-lagged value of the dependent variable and the current value of the independent variable:
$$y_t = a + \lambda y_{t-1} + b x_t + \text{error term},$$
where $0\leq\lambda<1$. In this model, the short-run (same-period) effect of a unit change in the independent variable is the value of $b$, while the long-run (cumulative) effect of a sustained unit change in the independent variable can be shown to be
$$\frac{b}{1-\lambda}.$$
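As a sketch (with assumed, illustrative parameter values), one can simulate a geometric-lag process and recover both the short-run and long-run effects by OLS:

```python
import numpy as np

# Simulate y_t = a + lam * y_{t-1} + b * x_t + e_t and estimate by OLS.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
a_true, lam_true, b_true = 1.0, 0.6, 2.0
n = 5000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true + lam_true * y[t - 1] + b_true * x[t] + rng.normal(0.0, 0.1)

# OLS with regressors [1, y_{t-1}, x_t].
Z = np.column_stack([np.ones(n - 1), y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
a_hat, lam_hat, b_hat = coef

short_run = b_hat                      # same-period effect of a unit change in x
long_run = b_hat / (1.0 - lam_hat)     # cumulative effect of a sustained change
```

With the true values above, the long-run multiplier is $b/(1-\lambda)=2/0.4=5$, which the regression recovers closely.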
Other infinite distributed lag models have been proposed to allow the data to determine the shape of the lag structure. The polynomial inverse lag[4][5] assumes that the lag weights are related to underlying, linearly estimable parameters $a_j$ according to
for $i=0,\dots,\infty$.
The geometric combination lag[6] assumes that the lag weights are related to underlying, linearly estimable parameters $a_j$ according to either
for $i=0,\dots,\infty$, or
for $i=0,\dots,\infty$.
The gamma lag[7] and the rational lag[8] are other infinite distributed lag structures.
Distributed lag models were introduced into health-related studies in 2000 by Schwartz[9] and in 2002 by Zanobetti and Schwartz.[10] A Bayesian version of the model was suggested by Welty in 2007.[11] Gasparrini introduced more flexible statistical models in 2010[12] that are capable of describing additional time dimensions of the exposure-response relationship, and developed a family of distributed lag non-linear models (DLNM), a modeling framework that can simultaneously represent non-linear exposure-response dependencies and delayed effects.[13]
The distributed lag model concept was first applied to longitudinal cohort research by Hsu in 2015,[14] studying the relationship between PM2.5 and child asthma; more complex distributed lag methods aimed at accommodating longitudinal cohort analysis, such as the Bayesian distributed lag interaction model[15] by Wilson, have subsequently been developed to answer similar research questions. | https://en.wikipedia.org/wiki/Distributed_lag
In signal processing, the power spectrum $S_{xx}(f)$ of a continuous-time signal $x(t)$ describes the distribution of power into frequency components $f$ composing that signal.[1] According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise) as analyzed in terms of its frequency content is called its spectrum.
When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (PSD, or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating $x^{2}(t)$ over the time domain, as dictated by Parseval's theorem.[1]
The spectrum of a physical process $x(t)$ often contains essential information about the nature of $x$. For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field $E(t)$ as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.
However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.[1]
Inphysics, the signal might be a wave, such as anelectromagnetic wave, anacoustic wave, or the vibration of a mechanism. Thepower spectral density(PSD) of the signal describes thepowerpresent in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed inSI unitsofwattsperhertz(abbreviated as W/Hz).[2]
When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of V² Hz⁻¹ for the PSD. Energy spectral density (ESD) would have units of V² s Hz⁻¹, since energy has units of power multiplied by time (e.g., watt-hour).[3]
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m²/Hz.
In the analysis of random vibrations, units of g² Hz⁻¹ are frequently used for the PSD of acceleration, where g denotes the g-force.[4]
Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of $x(t)$ will remain unspecified, but the independent variable will be assumed to be that of time.
A PSD can be either a one-sided function of only positive frequencies or a two-sided function of both positive and negative frequencies but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics.[5]
In signal processing, the energy of a signal $x(t)$ is given by
$$E \triangleq \int_{-\infty}^{\infty} \left|x(t)\right|^2\, dt.$$
Assuming the total energy is finite (i.e. $x(t)$ is a square-integrable function) allows applying Parseval's theorem (or Plancherel's theorem).[6] That is,
$$\int_{-\infty}^{\infty} |x(t)|^2\, dt = \int_{-\infty}^{\infty} \left|\hat{x}(f)\right|^2\, df,$$
where
$$\hat{x}(f) = \int_{-\infty}^{\infty} e^{-i2\pi ft} x(t)\, dt$$
is the Fourier transform of $x(t)$ at frequency $f$ (in Hz).[7] The theorem also holds in the discrete-time case. Since the integral on the left-hand side is the energy of the signal, the value of $\left|\hat{x}(f)\right|^2 df$ can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal in the frequency interval from $f$ to $f + df$.
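The continuous-time statement above can be checked numerically on a sampled pulse. A minimal sketch (the Gaussian pulse, sample rate, and time grid are arbitrary choices for illustration), approximating the Fourier transform by a scaled DFT:

```python
import numpy as np

# Numerical check of Parseval's theorem for a sampled, finite-energy pulse.
fs = 1000.0                       # sampling rate in Hz (arbitrary choice)
t = np.arange(-1.0, 1.0, 1 / fs)  # time grid covering the pulse
x = np.exp(-50 * t**2)            # a Gaussian pulse (square-integrable)

# Time-domain energy: integral of |x(t)|^2 approximated by a Riemann sum
energy_time = np.sum(np.abs(x)**2) / fs

# Frequency domain: X(f) ~ DFT * dt; energy = integral of |X(f)|^2 df
X = np.fft.fft(x) / fs                            # approximate Fourier transform
energy_freq = np.sum(np.abs(X)**2) * fs / len(x)  # df = fs / N

print(energy_time, energy_freq)   # the two energies agree closely
```

The agreement is exact up to floating-point error, because the scaled DFT inherits the discrete form of Parseval's theorem.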
Therefore, the energy spectral density of $x(t)$ is defined as:[8]
$$\bar{S}_{xx}(f) \triangleq \left|\hat{x}(f)\right|^2.$$
The function $\bar{S}_{xx}(f)$ and the autocorrelation of $x(t)$ form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram).
As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^2/Z$, so the total energy is found by integrating $V(t)^2/Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density $\bar{S}_{xx}(f)$ at frequency $f$, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the frequency of interest and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to be $E(f)/\Delta f$. In this example, since the power $V(t)^2/Z$ has units of V² Ω⁻¹, the energy $E(f)$ has units of V² s Ω⁻¹ = J, and hence the estimate $E(f)/\Delta f$ of the energy spectral density has units of J Hz⁻¹, as required. In many situations, it is common to omit the step of dividing by $Z$, so that the energy spectral density instead has units of V² Hz⁻¹.
This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values $x_n$, such as a signal sampled at discrete times $t_n = t_0 + n\,\Delta t$:
$$\bar{S}_{xx}(f) = \lim_{N\to\infty} (\Delta t)^2 \underbrace{\left|\sum_{n=-N}^{N} x_n e^{-i2\pi fn\,\Delta t}\right|^2}_{\left|\hat{x}_d(f)\right|^2},$$
where $\hat{x}_d(f)$ is the discrete-time Fourier transform of $x_n$. The sampling interval $\Delta t$ is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit $\Delta t \to 0$. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (See also normalized frequency.)
The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD), which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power or, more often, for convenience with abstract signals, simply the squared value of the signal. For example, statisticians study the variance of a function over time $x(t)$ (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when no physical power is involved. If one were to create a physical voltage source which followed $x(t)$ and applied it to the terminals of a one-ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by $x^2(t)$ watts.
The average power $P$ of a signal $x(t)$ over all time is therefore given by the following time average, where the period $T$ is centered about some arbitrary time $t = t_0$:
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{t_0 - T/2}^{t_0 + T/2} \left|x(t)\right|^2\, dt.$$
Whenever it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral, the average power can also be written as
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left|x_T(t)\right|^2\, dt,$$
where $x_T(t) = x(t)\, w_T(t)$ and $w_T(t)$ is unity within the arbitrary period and zero elsewhere.
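As a quick numerical illustration of this time average: for $x(t) = A\sin(2\pi f_0 t)$ the average power is $A^2/2$. A sketch with arbitrary amplitude, frequency, and sample rate:

```python
import numpy as np

# Average power of a sinusoid via the time average above.
# For x(t) = A*sin(2*pi*f0*t) the exact average power is A**2 / 2.
A, f0, fs = 3.0, 5.0, 1000.0
t = np.arange(0.0, 10.0, 1 / fs)   # a window long compared to 1/f0
x = A * np.sin(2 * np.pi * f0 * t)
P = np.mean(np.abs(x)**2)          # discrete form of (1/T) * integral of |x|^2 dt
print(P)                           # close to A**2 / 2 = 4.5
```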
When $P$ is non-zero, the integral must grow to infinity at least as fast as $T$ does. That is why the energy of the signal, which is that diverging integral, cannot be used in its place.
In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform $\hat{x}(f)$; however, for many signals of interest the ordinary Fourier transform does not formally exist.[nb 1] Nevertheless, under suitable conditions, certain generalizations of the Fourier transform (e.g. the Fourier–Stieltjes transform) still adhere to Parseval's theorem. As such,
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |\hat{x}_T(f)|^2\, df,$$
where the integrand defines the power spectral density:[9][10]
$$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T} \left|\hat{x}_T(f)\right|^2.$$
The convolution theorem then allows regarding $|\hat{x}_T(f)|^2$ as the Fourier transform of the time convolution of $x_T^*(-t)$ and $x_T(t)$, where $*$ denotes the complex conjugate.
In order to deduce Eq. 2, we first find a useful expression for $[\hat{x}_T(f)]^*$; in fact, we will demonstrate that $[\hat{x}_T(f)]^* = \mathcal{F}\left\{x_T^*(-t)\right\}$. Let's start by noting that
$$\mathcal{F}\left\{x_T^*(-t)\right\} = \int_{-\infty}^{\infty} x_T^*(-t)\, e^{-i2\pi ft}\, dt,$$
and let $z = -t$, so that $z \to -\infty$ when $t \to \infty$ and vice versa. So
$$\int_{-\infty}^{\infty} x_T^*(-t)\, e^{-i2\pi ft}\, dt = \int_{\infty}^{-\infty} x_T^*(z)\, e^{i2\pi fz} \left(-dz\right) = \int_{-\infty}^{\infty} x_T^*(z)\, e^{i2\pi fz}\, dz = \int_{-\infty}^{\infty} x_T^*(t)\, e^{i2\pi ft}\, dt,$$
where, in the last step, we have made use of the fact that $z$ and $t$ are dummy variables.
So, we have
$$\begin{aligned} \mathcal{F}\left\{x_T^*(-t)\right\} &= \int_{-\infty}^{\infty} x_T^*(-t)\, e^{-i2\pi ft}\, dt = \int_{-\infty}^{\infty} x_T^*(t)\, e^{i2\pi ft}\, dt \\ &= \int_{-\infty}^{\infty} x_T^*(t) \left[e^{-i2\pi ft}\right]^* dt = \left[\int_{-\infty}^{\infty} x_T(t)\, e^{-i2\pi ft}\, dt\right]^* \\ &= \left[\mathcal{F}\left\{x_T(t)\right\}\right]^* = \left[\hat{x}_T(f)\right]^*, \end{aligned}$$
as was to be shown.
Now, let's demonstrate Eq. 2 using the identity just shown, together with the substitution $u(t) = x_T^*(-t)$. In this way, we have:
$$\begin{aligned} \left|\hat{x}_T(f)\right|^2 &= [\hat{x}_T(f)]^* \cdot \hat{x}_T(f) = \mathcal{F}\left\{x_T^*(-t)\right\} \cdot \mathcal{F}\left\{x_T(t)\right\} = \mathcal{F}\left\{u(t)\right\} \cdot \mathcal{F}\left\{x_T(t)\right\} \\ &= \mathcal{F}\left\{u(t) * x_T(t)\right\} = \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} u(\tau - t)\, x_T(t)\, dt\right] e^{-i2\pi f\tau}\, d\tau \\ &= \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} x_T^*(t - \tau)\, x_T(t)\, dt\right] e^{-i2\pi f\tau}\, d\tau, \end{aligned}$$
where the convolution theorem has been used when passing from the third to the fourth line.
Now, if we divide the time convolution above by the period $T$ and take the limit as $T \to \infty$, it becomes the autocorrelation function of the non-windowed signal $x(t)$, which is denoted as $R_{xx}(\tau)$, provided that $x(t)$ is ergodic, which is true in most, but not all, practical cases:[nb 2]
$$\lim_{T\to\infty} \frac{1}{T} \left|\hat{x}_T(f)\right|^2 = \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} x_T^*(t - \tau)\, x_T(t)\, dt\right] e^{-i2\pi f\tau}\, d\tau = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-i2\pi f\tau}\, d\tau.$$
Assuming the ergodicity of $x(t)$, the power spectral density can be found once more as the Fourier transform of the autocorrelation function (the Wiener–Khinchin theorem).[11]
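A discrete-time sanity check of this Fourier-pair relation (the random record and its length are arbitrary choices): the periodogram of a finite record equals the DFT of its circular autocorrelation estimate.

```python
import numpy as np

# Wiener–Khinchin in discrete form: the periodogram |X(f)|^2 / N equals the
# DFT of the circular (biased) autocorrelation estimate of the record.
rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)

X = np.fft.fft(x)
periodogram = np.abs(X)**2 / N

# Circular autocorrelation r[m] = (1/N) * sum_n x[n] * x[(n + m) mod N]
r = np.fft.ifft(np.abs(X)**2).real / N
psd_from_autocorr = np.fft.fft(r).real

print(np.max(np.abs(periodogram - psd_from_autocorr)))  # essentially zero
```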
Many authors use this equality to actually define the power spectral density.[12]
The power of the signal in a given frequency band $[f_1, f_2]$, where $0 < f_1 < f_2$, can be calculated by integrating over frequency. Since $S_{xx}(-f) = S_{xx}(f)$, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):
$$P_{\textsf{bandlimited}} = 2 \int_{f_1}^{f_2} S_{xx}(f)\, df.$$
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval $T$ is finite rather than approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than $1/T$ are not sampled, and results at frequencies which are not an integer multiple of $1/T$ are not independent. Using just a single such time series, the estimated power spectrum will be very "noisy"; however, this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of $x(t)$ evaluated over the specified time window.
Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete-time variables $x_n$. As before, we can consider a window of $-N \le n \le N$ with the signal sampled at discrete times $t_n = t_0 + n\,\Delta t$ for a total measurement period $T = (2N+1)\,\Delta t$:
$$S_{xx}(f) = \lim_{N\to\infty} \frac{(\Delta t)^2}{T} \left|\sum_{n=-N}^{N} x_n e^{-i2\pi fn\,\Delta t}\right|^2.$$
Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when $N$ (and thus $T$) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. The periodogram converges to the true PSD as the number of estimates as well as the averaging time interval $T$ approach infinity.[13]
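A minimal numerical sketch of such a single-record (periodogram) estimate, using a common causal-window variant of the scaling above (a record of $N$ samples rather than the symmetric sum; all parameters are arbitrary), checked against the time-domain average power:

```python
import numpy as np

# Single-record PSD estimate with (Δt)^2 / T scaling, checked against the
# time-domain average power via Parseval's theorem.
fs = 500.0
dt = 1 / fs
N = 2048
rng = np.random.default_rng(1)
x = rng.standard_normal(N)        # one white-noise record (arbitrary signal)

X = np.fft.fft(x)
T = N * dt
Sxx = (dt**2 / T) * np.abs(X)**2  # two-sided PSD estimate, units of x^2 / Hz

df = fs / N
power_freq = np.sum(Sxx) * df     # integrate the PSD over frequency
power_time = np.mean(np.abs(x)**2)  # average power in the time domain
print(power_freq, power_time)     # the two agree
```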
If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.
Some properties of the PSD include:[14]
Given two signals $x(t)$ and $y(t)$, which possess power spectral densities $S_{xx}(f)$ and $S_{yy}(f)$ respectively, it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal:
$$\begin{aligned} P &= \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[x_T(t) + y_T(t)\right]^* \left[x_T(t) + y_T(t)\right] dt \\ &= \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |x_T(t)|^2 + x_T^*(t)\, y_T(t) + y_T^*(t)\, x_T(t) + |y_T(t)|^2\, dt. \end{aligned}$$
Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain
$$S_{xy}(f) = \lim_{T\to\infty} \frac{1}{T} \left[\hat{x}_T^*(f)\, \hat{y}_T(f)\right], \qquad S_{yx}(f) = \lim_{T\to\infty} \frac{1}{T} \left[\hat{y}_T^*(f)\, \hat{x}_T(f)\right],$$
where, again, the contributions of $S_{xx}(f)$ and $S_{yy}(f)$ are already understood. Note that $S_{xy}^*(f) = S_{yx}(f)$, so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit $T \to \infty$ becomes the Fourier transform of a cross-correlation function:[16]
$$\begin{aligned} S_{xy}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} x_T^*(t - \tau)\, y_T(t)\, dt\right] e^{-i2\pi f\tau}\, d\tau = \int_{-\infty}^{\infty} R_{xy}(\tau)\, e^{-i2\pi f\tau}\, d\tau, \\ S_{yx}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} y_T^*(t - \tau)\, x_T(t)\, dt\right] e^{-i2\pi f\tau}\, d\tau = \int_{-\infty}^{\infty} R_{yx}(\tau)\, e^{-i2\pi f\tau}\, d\tau, \end{aligned}$$
where $R_{xy}(\tau)$ is the cross-correlation of $x(t)$ with $y(t)$ and $R_{yx}(\tau)$ is the cross-correlation of $y(t)$ with $x(t)$. In light of this, the PSD is seen to be a special case of the CSD for $x(t) = y(t)$.
If $x(t)$ and $y(t)$ are real signals (e.g. voltage or current), their Fourier transforms $\hat{x}(f)$ and $\hat{y}(f)$ are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two:
$$\operatorname{CPSD}_{\text{Full}} = 2 S_{xy}(f) = 2 S_{yx}(f).$$
For discrete signals $x_n$ and $y_n$, the relationship between the cross-spectral density and the cross-covariance is
$$S_{xy}(f) = \sum_{n=-\infty}^{\infty} R_{xy}(\tau_n)\, e^{-i2\pi f\tau_n}\, \Delta\tau.$$
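A small numerical sketch of the cross-spectral density of two discrete records (the signals and the scaling convention are arbitrary choices), verifying the conjugate symmetry $S_{xy}^* = S_{yx}$ noted above:

```python
import numpy as np

# Cross-spectral density estimate via DFTs: S_xy ∝ conj(X) * Y,
# with the check that S_xy and S_yx are complex conjugates of each other.
rng = np.random.default_rng(2)
fs = 100.0
N = 1024
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(N)
y = np.sin(2 * np.pi * 10 * t + 0.5) + 0.1 * rng.standard_normal(N)  # phase-shifted

X, Y = np.fft.fft(x), np.fft.fft(y)
T = N / fs
Sxy = (1 / fs**2 / T) * np.conj(X) * Y
Syx = (1 / fs**2 / T) * np.conj(Y) * X

print(np.max(np.abs(np.conj(Sxy) - Syx)))   # essentially zero
```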
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.
The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
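As a sketch of such Fourier-based estimation, here is a minimal Welch-style estimator (segment length, overlap, window, and test signal are all arbitrary choices; library routines such as `scipy.signal.welch` implement the same idea with many more options):

```python
import numpy as np

# Minimal Welch-style PSD estimate: split the record into overlapping
# Hann-windowed segments and average their periodograms.
def welch_psd(x, fs, nperseg=256):
    step = nperseg // 2                       # 50% overlap
    window = np.hanning(nperseg)
    scale = 1.0 / (fs * np.sum(window**2))    # density normalization
    segments = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [scale * np.abs(np.fft.rfft(window * s))**2 for s in segments]
    psd = np.mean(psds, axis=0)
    psd[1:-1] *= 2                            # fold into a one-sided PSD
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    return freqs, psd

rng = np.random.default_rng(3)
fs = 1000.0
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + rng.standard_normal(t.size)  # tone + noise
freqs, psd = welch_psd(x, fs)
print(freqs[np.argmax(psd)])    # peak near the 100 Hz tone
```

Averaging over segments trades frequency resolution for a much less noisy estimate, which is exactly the ensemble-averaging idea discussed above.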
Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength), and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. Additionally, there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced, corresponding to resonances, or frequency intervals containing almost zero power, as would be produced by a notch filter.
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
The Hurst exponent is used as a measure of long-term memory of time series. It relates to the autocorrelations of the time series, and the rate at which these decrease as the lag between pairs of values increases. Studies involving the Hurst exponent were originally developed in hydrology for the practical matter of determining optimum dam sizing for the Nile river's volatile rain and drought conditions that had been observed over a long period of time.[1][2] The name "Hurst exponent", or "Hurst coefficient", derives from Harold Edwin Hurst (1880–1978), who was the lead researcher in these studies; the use of the standard notation $H$ for the coefficient also relates to his name.
In fractal geometry, the generalized Hurst exponent has been denoted by $H$ or $H_q$ in honor of both Harold Edwin Hurst and Ludwig Otto Hölder (1859–1937) by Benoît Mandelbrot (1924–2010).[3] $H$ is directly related to the fractal dimension, $D$, and is a measure of a data series' "mild" or "wild" randomness.[4]
The Hurst exponent is referred to as the "index of dependence" or "index of long-range dependence". It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction.[5] A value of $H$ in the range 0.5–1 indicates a time series with long-term positive autocorrelation, meaning that the decay in autocorrelation is slower than exponential, following a power law; for the series it means that a high value tends to be followed by another high value and that future excursions to more high values do occur. A value in the range 0–0.5 indicates a time series with long-term switching between high and low values in adjacent pairs, meaning that a single high value will probably be followed by a low value and that the value after that will tend to be high, with this tendency to switch between high and low values lasting a long time into the future, also following a power law. A value of $H = 0.5$ indicates short memory, with (absolute) autocorrelations decaying exponentially quickly to zero.
The Hurst exponent, $H$, is defined in terms of the asymptotic behaviour of the rescaled range as a function of the time span of a time series, as follows:[6][7]
$$\mathbb{E}\left[\frac{R(n)}{S(n)}\right] = C n^H \text{ as } n \to \infty,$$
where
For self-similar time series, $H$ is directly related to the fractal dimension, $D$, where $1 < D < 2$, such that $D = 2 - H$. The values of the Hurst exponent vary between 0 and 1, with higher values indicating a smoother trend, less volatility, and less roughness.[8]
For more general time series or multi-dimensional processes, the Hurst exponent and fractal dimension can be chosen independently, as the Hurst exponent represents structure over asymptotically longer periods, while the fractal dimension represents structure over asymptotically shorter periods.[9]
A number of estimators of long-range dependence have been proposed in the literature. The oldest and best-known is the so-called rescaled range (R/S) analysis popularized by Mandelbrot and Wallis[3][10] and based on previous hydrological findings of Hurst.[1] Alternatives include DFA, periodogram regression,[11] aggregated variances,[12] the local Whittle estimator,[13] and wavelet analysis,[14][15] both in the time domain and frequency domain.
To estimate the Hurst exponent, one must first estimate the dependence of the rescaled range on the time span $n$ of observation.[7] A time series of full length $N$ is divided into a number of nonoverlapping shorter time series of length $n$, where $n$ takes values $N$, $N/2$, $N/4$, ... (in the convenient case that $N$ is a power of 2). The average rescaled range is then calculated for each value of $n$.
For each such time series of length $n$, $X = X_1, X_2, \dots, X_n$, the rescaled range is calculated as follows:[6][7]
The Hurst exponent is estimated by fitting the power law $\mathbb{E}[R(n)/S(n)] = C n^H$ to the data. This can be done by plotting $\log[R(n)/S(n)]$ as a function of $\log n$ and fitting a straight line; the slope of the line gives $H$ (such a graph is called a pox plot). A more principled approach is to fit the power law in a maximum-likelihood fashion.[16] However, the straight-line fit is known to produce biased estimates of the power-law exponent: for small $n$ there is a significant deviation from the 0.5 slope. Anis and Lloyd[17] estimated the theoretical (i.e., for white noise) values of the R/S statistic to be:
$$\mathbb{E}[R(n)/S(n)] = \begin{cases} \dfrac{\Gamma\left(\frac{n-1}{2}\right)}{\sqrt{\pi}\,\Gamma\left(\frac{n}{2}\right)} \displaystyle\sum_{i=1}^{n-1} \sqrt{\dfrac{n-i}{i}}, & \text{for } n \le 340, \\[2ex] \dfrac{1}{\sqrt{n\frac{\pi}{2}}} \displaystyle\sum_{i=1}^{n-1} \sqrt{\dfrac{n-i}{i}}, & \text{for } n > 340, \end{cases}$$
where $\Gamma$ is the Euler gamma function. The Anis–Lloyd corrected R/S Hurst exponent is calculated as 0.5 plus the slope of $R(n)/S(n) - \mathbb{E}[R(n)/S(n)]$.
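A bare-bones R/S estimate along the lines of the procedure above (window sizes and test data are arbitrary choices; no Anis–Lloyd correction is applied, so a small upward bias is expected for white noise):

```python
import numpy as np

def rs(series):
    """Rescaled-range (R/S) statistic of one window."""
    dev = series - series.mean()
    z = np.cumsum(dev)              # cumulative deviate series
    r = z.max() - z.min()           # range of the cumulative deviates
    s = series.std(ddof=0)          # standard deviation of the window
    return r / s

def hurst_rs(x, window_sizes):
    """Slope of log E[R/S] vs log n, i.e. the R/S Hurst estimate."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean([rs(c) for c in chunks])))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)
    return slope

rng = np.random.default_rng(4)
x = rng.standard_normal(2**14)      # white-noise increments: true H is 0.5
H = hurst_rs(x, [2**k for k in range(6, 12)])
print(H)                            # near 0.5, biased slightly upward
```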
No asymptotic distribution theory has been derived for most of the Hurst exponent estimators so far. However, Weron[18] used bootstrapping to obtain approximate functional forms for confidence intervals of the two most popular methods, i.e., for the Anis–Lloyd[17] corrected R/S analysis:
and for DFA:
Here $M = \log_2 N$ and $N$ is the series length. In both cases, only subseries of length $n > 50$ were considered for estimating the Hurst exponent; subseries of smaller length lead to a high variance of the R/S estimates.
The basic Hurst exponent can be related to the expected size of changes, as a function of the lag between observations, as measured by $\mathbb{E}\left[|X_{t+\tau} - X_t|^2\right]$. For the generalized form of the coefficient, the exponent here is replaced by a more general term, denoted by $q$.
There are a variety of techniques for estimating $H$; however, assessing the accuracy of the estimation can be a complicated issue. Mathematically, in one technique, the generalized Hurst exponent $H_q = H(q)$ for a time series $g(t)$, $t = 1, 2, \dots$, may be defined by the scaling properties of its structure functions $S_q(\tau)$:[19][20]
$$S_q = \left\langle \left|g(t+\tau) - g(t)\right|^q \right\rangle_t \sim \tau^{qH(q)},$$
where $q > 0$, $\tau$ is the time lag, and averaging is over the time window $t \gg \tau$, usually the largest time scale of the system.
Practically, in nature, there is no limit to time, and thus $H$ is non-deterministic, as it may only be estimated based on the observed data; e.g., the most dramatic daily move upwards ever seen in a stock market index can always be exceeded during some subsequent day.[21]
In the above mathematical estimation technique, the function $H(q)$ contains information about averaged generalized volatilities at scale $\tau$ (only $q = 1, 2$ are used to define the volatility). In particular, the $H_1$ exponent indicates persistent ($H_1 > \tfrac{1}{2}$) or antipersistent ($H_1 < \tfrac{1}{2}$) behavior of the trend.
For BRW (brown noise, $1/f^2$) one gets $H_q = \tfrac{1}{2}$, and for pink noise ($1/f$), $H_q = 0$.
The Hurst exponent for white noise is dimension-dependent,[22] and for 1D and 2D it is
$$H_q^{\mathrm{1D}} = -\tfrac{1}{2}, \qquad H_q^{\mathrm{2D}} = -1.$$
For the popular Lévy stable processes and truncated Lévy processes with parameter $\alpha$ it has been found that
$$H_q = q/\alpha \ \text{ for } q < \alpha, \qquad H_q = 1 \ \text{ for } q \ge \alpha.$$
Multifractal detrended fluctuation analysis[23] is one method to estimate $H(q)$ from non-stationary time series.
When $H(q)$ is a non-linear function of $q$, the time series is a multifractal system.
In the above definition, two separate requirements are mixed together as if they were one.[24] Here are the two independent requirements: (i) stationarity of the increments, $x(t+T) - x(t) = x(T) - x(0)$ in distribution; this is the condition that yields long-time autocorrelations. (ii) Self-similarity of the stochastic process then yields variance scaling, but is not needed for long-time memory. E.g., both Markov processes (i.e., memory-free processes) and fractional Brownian motion scale at the level of 1-point densities (simple averages), but neither scales at the level of pair correlations or, correspondingly, the 2-point probability density.
An efficient market requires a martingale condition, and unless the variance is linear in time this produces nonstationary increments, $x(t+T) - x(t) \ne x(T) - x(0)$. Martingales are Markovian at the level of pair correlations, meaning that pair correlations cannot be used to beat a martingale market. Stationary increments with nonlinear variance, on the other hand, induce the long-time pair memory of fractional Brownian motion that would make the market beatable at the level of pair correlations. Such a market would necessarily be far from "efficient".
An analysis of economic time series by means of the Hurst exponent, using rescaled range and detrended fluctuation analysis, was conducted by econophysicist A. F. Bariviera.[25] This paper studies the time-varying character of long-range dependency and, thus, of informational efficiency.
The Hurst exponent has also been applied to the investigation of long-range dependency in DNA[26] and photonic band gap materials.[27]
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
Monte Carlo methods vary, but tend to follow a particular pattern: define a domain of possible inputs; generate inputs randomly from a probability distribution over the domain; perform a deterministic computation on the inputs; and aggregate the results.
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using the Monte Carlo method:[1]
In this procedure, the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square, then perform a computation on each input to test whether it falls within the quadrant. Aggregating the results yields the final result, the approximation of π.
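The procedure above can be sketched in a few lines of Python (a minimal illustration under the assumptions just described, not an optimized implementation):

```python
import random

def estimate_pi(num_samples: int) -> float:
    """Estimate pi by scattering random points over the unit square
    and counting the fraction that falls inside the quarter circle."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:   # point lies within the quadrant
            inside += 1
    # Fraction inside approximates pi/4, so scale by 4.
    return 4.0 * inside / num_samples

print(estimate_pi(1_000_000))  # approaches 3.14159... as samples grow
```

Both considerations below apply directly here: the points must be uniform over the square, and the estimate tightens only slowly as the sample count grows.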
There are two important considerations: if the points are not uniformly distributed over the square, the approximation will be poor; and many points are needed, since the approximation is generally poor if only a few points are randomly placed.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefited greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had previously been employed.
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes:[2] optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns, and schedule overruns are routinely better than human intuition or alternative "soft" methods.[3]
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler.[4][5][6] The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution.[7][8] By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation).[9][10] In other instances, a flow of probability distributions with an increasing level of sampling complexity arises (path-space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain.[10][11] A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate m for μ by running n simulations and averaging the simulations' results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, it will be the case that, for any ϵ > 0, |μ − m| ≤ ϵ.[12]
Typically, the algorithm to obtain m is: run n simulations, sum the results, and divide the sum by n.
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
If n is large enough, m will be within ϵ of μ for any ϵ > 0.
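The dice example above can be sketched as simple Monte Carlo in Python; the target T = 100 and the number of simulations are arbitrary illustrative choices:

```python
import random

def throws_until_total(target: int) -> int:
    """One simulation: throw three eight-sided dice repeatedly,
    accumulating the totals, until the running total reaches target."""
    total, throws = 0, 0
    while total < target:
        total += sum(random.randint(1, 8) for _ in range(3))
        throws += 1
    return throws

def simple_monte_carlo(target: int, n: int) -> float:
    """Estimate the expected number of throws m by averaging n simulations."""
    return sum(throws_until_total(target) for _ in range(n)) / n

# For large n, the average m approaches the true expected value mu.
m = simple_monte_carlo(100, 20_000)
print(m)
```

Each throw of three dice averages 13.5, so the estimate for T = 100 comes out near 8 throws.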
Let ϵ = |μ − m| > 0. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ϵ of μ. Let z be the z-score corresponding to that confidence level.
Let s² be the estimated variance, sometimes called the "sample" variance; it is the variance of the results obtained from a relatively small number k of "sample" simulations. Choose a k; Driels and Shin observe that "even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable."[13]
The following algorithm computes s² in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:[12]
Note that, when the algorithm completes, m_k is the mean of the k results.
The value n is sufficiently large when n ≥ z²s²/ϵ².
If n ≤ k, then m_k = m; sufficient sample simulations were done to ensure that m_k is within ϵ of μ. If n > k, then n simulations can be run "from scratch," or, since k simulations have already been done, one can just run n − k more simulations and add their results into those from the sample simulations: m = (k·m_k + r_{k+1} + ⋯ + r_n)/n.
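The pilot phase above — a one-pass running mean and variance (Welford's algorithm) followed by the sample-size formula — can be sketched as follows; the pilot simulation used at the end is a hypothetical stand-in that just draws uniform values:

```python
import math
import random

def required_samples(simulate, k, z, eps):
    """Run k sample simulations, computing the running mean and variance
    in a single pass (limiting accumulated numerical error), then
    estimate how many simulations n are needed so that |mu - m| <= eps
    at the confidence level whose z-score is z, via n >= z^2 s^2 / eps^2."""
    mean, m2 = 0.0, 0.0
    for i in range(1, k + 1):
        x = simulate()
        delta = x - mean
        mean += delta / i            # running mean m_i after i results
        m2 += delta * (x - mean)     # running sum of squared deviations
    s2 = m2 / (k - 1)                # sample variance s^2
    n = math.ceil(z * z * s2 / (eps * eps))
    return n, mean, s2

# Hypothetical pilot: each "simulation" draws a uniform value on [0, 1).
n, mean, s2 = required_samples(random.random, k=1000, z=1.96, eps=0.01)
```

For this uniform stand-in (true variance 1/12), the formula calls for roughly 3,200 simulations to pin the mean down to ±0.01 at 95% confidence.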
An alternative formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ϵ that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r₁, r₂, …, rᵢ, …, rₙ be such that a ≤ rᵢ ≤ b for finite a and b. To have confidence of at least δ that |μ − m| < ϵ/2, use a value for n such that n ≥ 2(b − a)²ln(2/(1 − δ/100))/ϵ².
For example, if δ = 99%, then n ≥ 2(b − a)²ln(2/0.01)/ϵ² ≈ 10.6(b − a)²/ϵ².[12]
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high.[14] Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.[15][16][17][18]
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.[19]
In the late 1940s, Stanisław Ulam invented the modern version of the Markov chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon.[19] Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:
The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations.[20]
Being secret, the work of von Neumann and Ulam required a code name.[21] A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco, where Ulam's uncle would borrow money from relatives to gamble.[19] Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools of the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948.[22] In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and the methods began to find wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics.[23][24] An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies.[25] Mean-field genetic-type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristics) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic-type mutation-selection learning machines[26] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.[27][28]
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods, can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals.[29][30][31][32][33][34][35] The origins of quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer, who developed in 1948 a mean-field particle interpretation of neutron-chain reactions,[36] but the first heuristic-like and genetic-type particle algorithm (a.k.a. resampled or reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[35] In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.[37]
The use of sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993 that Gordon et al. published in their seminal work[38] the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter",[39] and the ones by Pierre Del Moral[40] and by Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut[41] on particle filters, published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems.[42][43][44][45][46][47] These sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on sequential Monte Carlo methodologies, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations, without a single proof of their consistency or a discussion of the bias of the estimates and of genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.[40][48]
Branching-type particle methodologies with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons,[49][50][51] and by Dan Crisan, Pierre Del Moral and Terry Lyons.[52] Further developments in this field were described from 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.[30][53][54]
There is no consensus on how Monte Carlo should be defined. For example, Ripley[55] defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky[56] distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are some examples: drawing one pseudo-random uniform variable from the interval [0,1] to model a single coin toss is a simulation; pouring out a box of coins on a table and computing the ratio of coins that land heads versus tails is a Monte Carlo method for determining the behavior of repeated coin tosses; and drawing a large number of pseudo-random uniform variables from [0,1], counting values below 0.5 as heads, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
Kalos and Whitlock[57] point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman–Rubin statistic.
The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. A Monte Carlo simulation is, in effect, a random experiment, used when the outcomes of the underlying process are not well known.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally.[58] Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital).[59] Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically the sequence should pass a series of statistical tests. One of the simplest and most common is testing that the numbers are uniformly distributed, or follow another desired distribution, when a large enough number of elements of the sequence are considered. Weak correlations between successive samples are also often desirable or necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:[56]
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
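One such pseudo-random number sampling algorithm is inverse transform sampling, which maps a uniform variate through the inverse of the target cumulative distribution function; a minimal sketch for the exponential distribution:

```python
import math
import random

def sample_exponential(rate: float) -> float:
    """Inverse transform sampling: map a uniform u in (0,1) through the
    inverse CDF of the exponential distribution, F^-1(u) = -ln(1-u)/rate."""
    u = random.random()
    return -math.log(1.0 - u) / rate

# The sample mean should approach the true mean 1/rate = 0.5.
samples = [sample_exponential(2.0) for _ in range(100_000)]
```

The same recipe works for any distribution whose inverse CDF is available in closed form; otherwise methods such as rejection sampling are used instead.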
Low-discrepancy sequences are often used instead of random sampling from a space, as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10^7 random numbers.[60]
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.[61]
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring.[62] For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions, shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms, as well as in modeling radiation transport for radiation dosimetry calculations.[63][64][65] In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems.[37][66] Quantum Monte Carlo methods solve the many-body problem for quantum systems.[9][10][29] In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom.[67] In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution[68] and microwave radiation transmission through a rough planetary surface.[69] Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations.
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.[73]
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins,[74] or membranes.[75] The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule, for instance to see if some chemical reaction is happening. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
The standards for Monte Carlo experiments in statistics were set by Sawilowsky.[76] In applied statistics, Monte Carlo methods may be used for at least four purposes:
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
Monte Carlo methods have been developed into a technique called Monte Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.[80]
The Monte Carlo tree search (MCTS) method has four steps:[81] selection of the most promising node, descending from the root according to the statistics stored in the tree; expansion of that node by adding a child; simulation of a random playout from the new node; and backpropagation of the playout's result to update the nodes on the path back to the root.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
Monte Carlo tree search has been used successfully to play games such as Go,[82] Tantrix,[83] Battleship,[84] Havannah,[85] and Arimaa.[86]
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer-generated films, and cinematic special effects.[87]
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables.[88] Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.[89]
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project.[90] Monte Carlo methods are also used in option pricing and default risk analysis.[91][92] Additionally, they can be used to estimate the financial impact of medical interventions.[93]
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy, thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.[94]
A Monte Carlo approach has also been used to simulate the number of book publications by book genre in Malaysia. The simulation used previously published national book publication data and book prices by genre in the local market. The Monte Carlo results were used to determine which book genres Malaysians are fond of, and to compare book publications between Malaysia and Japan.[95]
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing the fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral.[96] One hundred dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of average of the function values at these points. By the central limit theorem, this method displays 1/√N convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.[96]
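A sketch of plain Monte Carlo integration over a high-dimensional cube; the integrand and dimension here are arbitrary illustrative choices:

```python
import random

def mc_integrate(f, dim: int, n: int) -> float:
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    by averaging f at n uniformly random points (volume is 1, so the
    average of the function values is the integral estimate)."""
    total = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n

# The integral of x_1 + ... + x_100 over [0,1]^100 is exactly 50; a
# grid-based rule in 100 dimensions would be hopeless, but 20,000
# random points already give an estimate within a few hundredths.
estimate = mc_integrate(sum, dim=100, n=20_000)
```

The error shrinks as 1/√N whatever the dimension, in line with the central limit theorem argument above.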
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling,[97][98] or the VEGAS algorithm.
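A minimal importance-sampling sketch: estimating the tail probability P(X > 4) for a standard normal, an integral that plain sampling handles poorly because almost no draws land in the tail. The proposal N(4, 1) is an illustrative choice, not a canonical one:

```python
import math
import random

def normal_pdf(x: float, mu: float = 0.0) -> float:
    """Density of a normal distribution with unit variance and mean mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def tail_probability(n: int) -> float:
    """Estimate P(X > 4) for X ~ N(0,1) by sampling from a proposal
    N(4,1) centred on the region of interest, reweighting each draw
    by the density ratio p(x)/q(x)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(4.0, 1.0)      # draw from the proposal q
        if x > 4.0:                     # indicator of the rare event
            total += normal_pdf(x) / normal_pdf(x, mu=4.0)
    return total / n

p = tail_probability(100_000)  # true value is about 3.17e-5
```

Plain sampling would need hundreds of millions of draws to see even a handful of tail events; the reweighted proposal concentrates the samples where the integrand matters.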
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, the Wang and Landau algorithm, and interacting-type MCMC methodologies such as the sequential Monte Carlo samplers.[99]
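A minimal random-walk sketch of the Metropolis–Hastings idea, assuming a symmetric Gaussian proposal (which reduces the acceptance ratio to the ratio of target densities); the target here, a standard normal known only up to a constant, is an illustrative choice:

```python
import math
import random

def metropolis(log_target, x0: float, n: int, step: float = 1.0):
    """Random-walk Metropolis sampler: propose a Gaussian step and
    accept it with probability min(1, target(x')/target(x))."""
    samples, x = [], x0
    log_p = log_target(x)
    for _ in range(n):
        x_new = x + random.gauss(0.0, step)
        log_p_new = log_target(x_new)
        if math.log(random.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new   # accept the proposal
        samples.append(x)                 # on rejection, repeat old state
    return samples

# Target density proportional to exp(-x^2/2): a standard normal.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n=50_000)
```

After the chain mixes, the samples' mean and variance approach those of the target (0 and 1 here), illustrating the stationary-distribution property described earlier.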
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. Reference[100] is a comprehensive review of many issues related to simulation and optimization.
The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path are known with certainty, and the goal is to run through the possible travel choices to find the one with the lowest total distance. If the goal is instead to minimize the total time needed to reach each destination rather than the total distance traveled, this goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, determining the optimal path requires a different kind of simulation: first understand the range of potential times it could take to go from one point to another (represented by a probability distribution rather than a specific distance), and then optimize the travel decisions to identify the best path to follow taking that uncertainty into account.
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.[101][102]
A popular exposition of the Monte Carlo method was given by McCracken.[103] The method's general philosophy was discussed by Elishakoff[104] and Grüne-Yanoff and Weirich.[105]
Panel (data) analysis is a statistical method, widely used in social science, epidemiology, and econometrics, to analyze two-dimensional (typically cross-sectional and longitudinal) panel data.[1] The data are usually collected over time and over the same individuals, and then a regression is run over these two dimensions. Multidimensional analysis is an econometric method in which data are collected over more than two dimensions (typically, time, individuals, and some third dimension).[2]
A common panel data regression model looks like y_it = a + b·x_it + ε_it, where y is the dependent variable, x is the independent variable, a and b are coefficients, and i and t are indices for individuals and time. The error ε_it is very important in this analysis. Assumptions about the error term determine whether we speak of fixed effects or random effects. In a fixed effects model, ε_it is assumed to vary non-stochastically over i or t, making the fixed effects model analogous to a dummy variable model in one dimension. In a random effects model, ε_it is assumed to vary stochastically over i or t, requiring special treatment of the error variance matrix.[3]
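The distinction matters in practice: when the individual effect is correlated with the regressor, pooled OLS is biased, but the fixed-effects "within" estimator (demeaning each individual's data) is not. The following self-contained sketch (simulated data with arbitrary illustrative parameters, not any particular package's API) demonstrates this:

```python
import random

def ols_slope(y, x):
    """Simple pooled OLS slope of y on x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def within_estimator(y, x, ids):
    """Fixed-effects estimate of b in y_it = a_i + b*x_it + e_it:
    demean y and x within each individual, then pool."""
    groups = {}
    for i, yi, xi in zip(ids, y, x):
        groups.setdefault(i, []).append((yi, xi))
    num = den = 0.0
    for obs in groups.values():
        my = sum(o[0] for o in obs) / len(obs)
        mx = sum(o[1] for o in obs) / len(obs)
        for yi, xi in obs:
            num += (xi - mx) * (yi - my)
            den += (xi - mx) ** 2
    return num / den

# Simulated panel: 200 individuals, 10 periods, true slope b = 2,
# individual effect a_i deliberately correlated with the regressor.
rng = random.Random(3)
ids, x, y = [], [], []
for i in range(200):
    a_i = rng.gauss(0, 1)
    for t in range(10):
        xi = a_i + rng.gauss(0, 1)
        ids.append(i)
        x.append(xi)
        y.append(a_i + 2.0 * xi + rng.gauss(0, 0.5))

b_fe = within_estimator(y, x, ids)  # close to the true value 2
b_pooled = ols_slope(y, x)          # biased upward by the omitted a_i
```

Demeaning removes a_i entirely, which is why the within estimator recovers b while pooled OLS absorbs the correlation between a_i and x into the slope.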
Panel data analysis has three more-or-less independent approaches: independently pooled panels, fixed effects models, and random effects models.
The selection between these methods depends upon the objective of the analysis, and the problems concerning the exogeneity of the explanatory variables.
Key assumption:There are no unique attributes of individuals within the measurement set, and no universal effects across time.
Key assumption: There are unique attributes of individuals that do not vary over time. That is, the unique attributes for a given individual i are time-invariant. These attributes may or may not be correlated with the individual dependent variables y_i. To test whether fixed effects, rather than random effects, is needed, the Durbin–Wu–Hausman test can be used.
Key assumption: There are unique, time-constant attributes of individuals that are not correlated with the individual regressors. Pooled OLS can be used to derive unbiased and consistent estimates of parameters even when time-constant attributes are present, but random effects will be more efficient.
The random effects model is a feasible generalised least squares technique which is asymptotically more efficient than pooled OLS when time-constant attributes are present. Random effects adjusts for the serial correlation which is induced by unobserved time-constant attributes.
In the standard random effects (RE) and fixed effects (FE) models, independent variables are assumed to be uncorrelated with error terms. Provided valid instruments are available, RE and FE methods extend to the case where some of the explanatory variables are allowed to be endogenous. As in the exogenous setting, the RE model with instrumental variables (REIV) requires more stringent assumptions than the FE model with instrumental variables (FEIV), but it tends to be more efficient under appropriate conditions.[4]
To fix ideas, consider the following model:

y_it = x_it·β + c_i + u_it
where c_i is an unobserved unit-specific time-invariant effect (call it the unobserved effect) and x_it can be correlated with u_is for s possibly different from t. Suppose there exists a set of valid instruments z_i = (z_i1, …, z_iT).
In the REIV setting, key assumptions include that z_i is uncorrelated with c_i as well as with u_it for t = 1, …, T. In fact, for the REIV estimator to be efficient, conditions stronger than uncorrelatedness between instruments and the unobserved effect are necessary.
On the other hand, the FEIV estimator only requires that instruments be exogenous with respect to error terms after conditioning on the unobserved effect, i.e. E[u_it | z_i, c_i] = 0.[4] The FEIV condition allows for arbitrary correlation between instruments and the unobserved effect. However, this generality does not come for free: time-invariant explanatory and instrumental variables are not allowed. As in the usual FE method, the estimator uses time-demeaned variables to remove the unobserved effect. Therefore, the FEIV estimator would be of limited use if the variables of interest include time-invariant ones.
The above discussion parallels the exogenous case of RE and FE models. In the exogenous case, RE assumes uncorrelatedness between explanatory variables and the unobserved effect, and FE allows for arbitrary correlation between the two. As in the standard case, REIV tends to be more efficient than FEIV provided that appropriate assumptions hold.[4]
In contrast to the standard panel data model, a dynamic panel model also includes lagged values of the dependent variable as regressors. For example, including one lag of the dependent variable generates:

y_it = a + b·x_it + ρ·y_i,t−1 + ε_it
The assumptions of the fixed effect and random effect models are violated in this setting. Instead, practitioners use a technique like the Arellano–Bond estimator.
In statistics, scaled correlation is a form of a coefficient of correlation applicable to data that have a temporal component such as time series. It is the average short-term correlation. If the signals have multiple components (slow and fast), the scaled coefficient of correlation can be computed only for the fast components of the signals, ignoring the contributions of the slow components.[1] This filtering-like operation has the advantage of not requiring assumptions about the sinusoidal nature of the signals.
For example, in the studies of brain signals researchers are often interested in the high-frequency components (beta and gamma range; 25–80 Hz), and may not be interested in lower frequency ranges (alpha, theta, etc.). In that case scaled correlation can be computed only for frequencies higher than 25 Hz by choosing the scale of the analysis, s, to correspond to the period of that frequency (e.g., s = 40 ms for a 25 Hz oscillation).
Scaled correlation between two signals is defined as the average correlation computed across short segments of those signals. First, it is necessary to determine the number of segments K that can fit into the total length T of the signals for a given scale s:

K = T/s (rounded down to an integer)
Next, if r_k is Pearson's coefficient of correlation for segment k, the scaled correlation across the entire signals, r̄_s, is computed as

r̄_s = (1/K) · Σ_{k=1}^{K} r_k
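The definition translates directly into code. This short sketch (plain Python on hypothetical toy signals) computes the scaled correlation of two signals that share a fast component but carry opposite slow components; globally they are nearly uncorrelated, yet the scaled correlation recovers the strong fast-component coupling:

```python
import math

def pearson(a, b):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def scaled_correlation(a, b, s):
    """Average Pearson correlation over K = floor(T/s) consecutive,
    non-overlapping segments of length s."""
    K = len(a) // s
    rs = [pearson(a[k * s:(k + 1) * s], b[k * s:(k + 1) * s])
          for k in range(K)]
    return sum(rs) / K

# Toy signals: shared fast oscillation (period 20 samples) plus
# slow components (period 1000 samples) of opposite sign.
fast = [math.sin(2 * math.pi * t / 20) for t in range(1000)]
slow = [math.sin(2 * math.pi * t / 1000) for t in range(1000)]
a = [f + s for f, s in zip(fast, slow)]
b = [f - s for f, s in zip(fast, slow)]

r_global = pearson(a, b)                 # near zero: slow parts cancel it
r_scaled = scaled_correlation(a, b, 40)  # high: fast parts dominate segments
```

Within each 40-sample segment the slow component is nearly constant and is removed by the segment-wise demeaning, so each r_k reflects only the shared fast oscillation.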
In a detailed analysis, Nikolić et al.[1] showed that the degree to which the contributions of the slow components will be attenuated depends on three factors: the choice of the scale, the amplitude ratios between the slow and the fast components, and the differences in their oscillation frequencies. The larger the differences in oscillation frequencies, the more efficiently the contributions of the slow components are removed from the computed correlation coefficient. Similarly, the smaller the power of the slow components relative to the fast components, the better scaled correlation will perform.
Scaled correlation can be applied to auto- and cross-correlation in order to investigate how correlations of high-frequency components change at different temporal delays. To compute cross-scaled-correlation for every time shift properly, it is necessary to segment the signals anew after each time shift. In other words, signals are always shifted before the segmentation is applied. Scaled correlation has been subsequently used to investigate synchronization hubs in the visual cortex.[2] Scaled correlation can also be used to extract functional networks.[3]
Scaled correlation should in many cases be preferred over signal filtering based on spectral methods. The advantage of scaled correlation is that it does not make assumptions about the spectral properties of the signal (e.g., sinusoidal shapes of signals). Nikolić et al.[1] have shown that the use of the Wiener–Khinchin theorem to remove slow components is inferior to results obtained by scaled correlation. These advantages become especially obvious when the signals are non-periodic or when they consist of discrete events such as the time stamps at which neuronal action potentials have been detected.
A detailed insight into the correlation structure across different scales can be provided by visualization using multiresolution correlation analysis.[4]
Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when wanting to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. Many economic phenomena have seasonal cycles, such as agricultural production (crop yields fluctuate with the seasons) and consumer consumption (increased personal spending leading up to Christmas). It is necessary to adjust for this component in order to understand underlying trends in the economy, so official statistics are often adjusted to remove seasonal components.[1] Typically, seasonally adjusted data is reported for unemployment rates to reveal the underlying trends and cycles in labor markets.[2][3]
The investigation of many economic time series becomes problematic due to seasonal fluctuations. Time series are made up of four components: a trend component, a cyclical component, a seasonal component, and an irregular (random) component.
The difference between seasonal and cyclic patterns:
The relation between decomposition of time series components
Unlike the trend and cyclical components, seasonal components, theoretically, happen with similar magnitude during the same time period each year. The seasonal components of a series are sometimes considered to be uninteresting and to hinder the interpretation of a series. Removing the seasonal component directs focus on other components and will allow better analysis.[5]
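A minimal additive adjustment can be sketched as follows (a deliberately simple estimator on toy data; production methods such as X-13-ARIMA or STL discussed below are far more sophisticated): each season's component is estimated as its average deviation from the overall mean and subtracted out.

```python
def seasonally_adjust(series, period):
    """Additive seasonal adjustment: estimate each season's component as
    its average deviation from the overall mean, then subtract it."""
    mean = sum(series) / len(series)
    seasonal = []
    for s in range(period):
        vals = series[s::period]
        seasonal.append(sum(vals) / len(vals) - mean)
    return [x - seasonal[i % period] for i, x in enumerate(series)]

# Toy monthly series: constant level 100 plus a December spike of 12.
data = [100.0] * 36
for dec in (11, 23, 35):
    data[dec] += 12.0
adjusted = seasonally_adjust(data, 12)
```

On this toy series the December spike is fully absorbed into the seasonal component, leaving a flat adjusted series; real series additionally contain trend and irregular components that such a simple estimator would partly confound with seasonality.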
Different statistical research groups have developed different methods of seasonal adjustment, for example X-13-ARIMA and X-12-ARIMA developed by the United States Census Bureau; TRAMO/SEATS developed by the Bank of Spain;[6] MoveReg (for weekly data) developed by the United States Bureau of Labor Statistics;[7] STAMP developed by a group led by S. J. Koopman;[8] and "Seasonal and Trend decomposition using Loess" (STL) developed by Cleveland et al. (1990).[9] While X-12/13-ARIMA can only be applied to monthly or quarterly data, STL decomposition can be used on data with any type of seasonality. Furthermore, unlike X-12-ARIMA, STL allows the user to control the degree of smoothness of the trend cycle and how much the seasonal component changes over time. X-12-ARIMA can handle both additive and multiplicative decomposition, whereas STL can only be used for additive decomposition. In order to achieve a multiplicative decomposition using STL, the user can take the log of the data before decomposing, and then back-transform after the decomposition.[9]
Each group provides software supporting their methods. Some versions are also included as parts of larger products, and some are commercially available. For example, SAS includes X-12-ARIMA, while Oxmetrics includes STAMP. A recent move by public organisations to harmonise seasonal adjustment practices has resulted in the development of Demetra+ by Eurostat and the National Bank of Belgium, which currently includes both X-12-ARIMA and TRAMO/SEATS.[10] R includes STL decomposition.[11] The X-12-ARIMA method can be utilized via the R package "X12".[12] EViews supports X-12, X-13, Tramo/Seats, STL and MoveReg.
One well-known example is the rate of unemployment, which is represented by a time series. This rate depends particularly on seasonal influences, which is why it is important to free the unemployment rate of its seasonal component. Such seasonal influences can be due to school graduates or dropouts looking to enter the workforce and regular fluctuations during holiday periods. Once the seasonal influence is removed from this time series, the unemployment rate data can be meaningfully compared across different months and predictions for the future can be made.[3]
When seasonal adjustment is not performed with monthly data, year-on-year changes are utilised in an attempt to avoid contamination with seasonality.
When time series data has seasonality removed from it, it is said to be directly seasonally adjusted. If it is made up of a sum or index aggregation of time series which have been seasonally adjusted, it is said to have been indirectly seasonally adjusted. Indirect seasonal adjustment is used for large components of GDP which are made up of many industries, which may have different seasonal patterns and which are therefore analyzed and seasonally adjusted separately. Indirect seasonal adjustment also has the advantage that the aggregate series is the exact sum of the component series.[13][14][15] Seasonality can appear in an indirectly adjusted series; this is sometimes called residual seasonality.
Due to the various seasonal adjustment practices by different institutions, a group was created by Eurostat and the European Central Bank to promote standard processes. In 2009 a small group composed of experts from European Union statistical institutions and central banks produced the ESS Guidelines on Seasonal Adjustment,[16] which is being implemented in all the European Union statistical institutions. It is also being adopted voluntarily by other public statistical institutions outside the European Union.
By the Frisch–Waugh–Lovell theorem it does not matter whether dummy variables for all but one of the seasons are introduced into the regression equation, or if the independent variable is first seasonally adjusted (by the same dummy variable method), and the regression then run.
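This can be checked numerically. The sketch below (simulated monthly data with arbitrary parameters, not any statistics package's API) seasonally adjusts both variables by subtracting their month-specific means — which are exactly the fitted values from regressing on a full set of seasonal dummies — and recovers the true slope even though both series are strongly seasonal:

```python
import random

def ols_slope(y, x):
    """OLS slope of y on x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def remove_seasonal_means(v, period):
    """Residuals from regressing v on a full set of seasonal dummies,
    i.e. subtract each season's own mean."""
    out = list(v)
    for s in range(period):
        vals = v[s::period]
        m = sum(vals) / len(vals)
        for i in range(s, len(v), period):
            out[i] = v[i] - m
    return out

rng = random.Random(4)
n, period = 120, 12
# Both the regressor and the outcome carry deterministic monthly patterns.
x = [rng.gauss(0, 1) + (t % period) * 0.3 for t in range(n)]
y = [2.0 * x[t] + (t % period) * 0.5 + rng.gauss(0, 0.2) for t in range(n)]

# Seasonally adjust both variables first, then regress: the slope
# matches the one from a regression that includes the dummies directly.
b_adjusted = ols_slope(remove_seasonal_means(y, period),
                       remove_seasonal_means(x, period))
```

The demeaned regression recovers the true coefficient of 2, illustrating that "adjust first, then regress" and "regress with seasonal dummies" yield the same slope.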
Since seasonal adjustment introduces a "non-revertible" moving average (MA) component into time series data, unit root tests (such as the Phillips–Perron test) will be biased towards non-rejection of the unit root null.[17]
Use of seasonally adjusted time series data can be misleading because a seasonally adjusted series contains both the trend-cycle component and the error component. As such, what appear to be "downturns" or "upturns" may actually be randomness in the data. For this reason, if the purpose is finding turning points in a series, using the trend-cycle component is recommended rather than the seasonally adjusted data.[3]
In bioinformatics, sequence analysis is the process of subjecting a DNA, RNA or peptide sequence to any of a wide range of analytical methods to understand its features, function, structure, or evolution. It can be performed on the entire genome, transcriptome or proteome of an organism, and can also involve only selected segments or regions, like tandem repeats and transposable elements. Methodologies used include sequence alignment, searches against biological databases, and others.[1]
Since the development of methods of high-throughput production of gene and protein sequences, the rate of addition of new sequences to the databases has increased very rapidly. Such a collection of sequences does not, by itself, increase the scientist's understanding of the biology of organisms. However, comparing these new sequences to those with known functions is a key way of understanding the biology of the organism from which the new sequence comes. Thus, sequence analysis can be used to assign function to coding and non-coding regions in a biological sequence, usually by comparing sequences and studying similarities and differences. Many tools and techniques now exist to perform sequence comparisons (sequence alignment) and to analyze the alignment product to understand its biology.
Sequence analysis in molecular biology includes a very wide range of processes:
Since the very first sequences of the insulin protein were characterized by Fred Sanger in 1951, biologists have been trying to use this knowledge to understand the function of molecules.[2][3] His and his colleagues' discoveries contributed to the successful sequencing of the first DNA-based genome.[4] The method used in this study, which is called the "Sanger method" or Sanger sequencing, was a milestone in sequencing long-strand molecules such as DNA. This method was eventually used in the human genome project.[5] According to Michael Levitt, sequence analysis was born in the period from 1969 to 1977.[6] In 1969 the analysis of sequences of transfer RNAs was used to infer residue interactions from correlated changes in the nucleotide sequences, giving rise to a model of the tRNA secondary structure.[7] In 1970, Saul B. Needleman and Christian D. Wunsch published the first computer algorithm for aligning two sequences.[8] Over this time, developments in obtaining nucleotide sequences improved greatly, leading to the publication of the first complete genome of a bacteriophage in 1977.[9] Robert Holley and his team at Cornell University were believed to be the first to sequence an RNA molecule.[10]
Nucleotide sequence analyses identify functional elements like protein binding sites, uncover genetic variations like SNPs, study gene expression patterns, and understand the genetic basis of traits. It helps to understand mechanisms that contribute to processes like replication and transcription. Some of the tasks involved are outlined below.
Quality control assesses the quality of sequencing reads obtained from the sequencing technology (e.g. Illumina). It is the first step in sequence analysis to limit wrong conclusions due to poor quality data. The tools used at this stage depend on the sequencing platform. For instance, FastQC checks the quality of short reads (including RNA sequences), NanoPlot or PycoQC are used for long read sequences (e.g. Nanopore sequence reads), and MultiQC aggregates the results of FastQC in a webpage format.[11][12][13]
Quality control provides information such as read lengths, GC content, presence of adapter sequences (for short reads), and a quality score, which is often expressed on a PHRED scale.[14] If adapters or other artifacts from PCR amplification are present in the reads (particularly short reads), they are removed using software such as Trimmomatic[15] or Cutadapt.[16]
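Phred quality scores map to base-calling error probabilities via p = 10^(−Q/10); in FASTQ files each score is stored as a single ASCII character with an offset (33 in the common Sanger/Illumina 1.8+ encoding). A small decoding sketch:

```python
def phred_to_error_prob(qual_string, offset=33):
    """Convert a FASTQ quality string (Phred+33 ASCII encoding) to
    per-base error probabilities: p = 10 ** (-Q / 10)."""
    return [10 ** (-(ord(c) - offset) / 10) for c in qual_string]

# 'I' encodes Q=40 (p=0.0001), '5' encodes Q=20 (p=0.01),
# '!' encodes Q=0 (p=1.0, i.e. no confidence in the call).
probs = phred_to_error_prob("I5!")
```

Quality trimmers like those named above effectively apply such a mapping when deciding which read ends to clip.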
At this step, sequencing reads whose quality has been improved are mapped to a reference genome using alignment tools like BWA[17] for short DNA sequence reads, minimap[18] for long read DNA sequences, and STAR[19] for RNA sequence reads. The purpose of mapping is to find the origin of any given read based on the reference sequence. It is also important for detecting variations or phylogenetic studies.
The output from this step, that is, the aligned reads, is stored in a compatible file format known as SAM, which contains information about the reference genome as well as individual reads. Alternatively, BAM file formats are preferred as they use much less disk or storage space.[14]
Note: This is different from sequence alignment which compares two or more whole sequences (or sequence regions) to quantify similarity or differences or to identify an unknown sequence (as discussed below).
The following analyses steps are peculiar to DNA sequences:
Identifying variants is a popular aspect of sequence analysis as variants often contain information of biological significance, such as explaining the mechanism of drug resistance in an infectious disease. These variants could be single nucleotide variants (SNVs), small insertions/deletions (indels), and large structural variants. The read alignments are sorted using SAMtools, after which variant callers such as GATK[20] are used to identify differences compared to the reference sequence.
The choice of variant calling tool depends heavily on the sequencing technology used, so GATK is often used when working with short reads, while long read sequences require tools like DeepVariant[21] and Sniffles.[22] Tools may also differ based on organism (prokaryotes or eukaryotes), source of sequence data (cancer vs metagenomic), and variant type of interest (SNVs or structural variants). The output of variant calling is typically in VCF format, and can be filtered using allele frequencies, quality scores, or other factors based on the research question at hand.[14]
This step adds context to the variant data using curated information from peer-reviewed papers and publicly available databases like gnomAD and Ensembl. Variants can be annotated with information about genomic features, functional consequences, regulatory elements, and population frequencies using tools like ANNOVAR or SnpEff,[23] or custom scripts and pipelines. The output from this step is an annotation file in BED or text format.[14]
Genomic data, such as read alignments, coverage plots, and variant calls, can be visualized using genome browsers like IGV (Integrative Genomics Viewer) or the UCSC Genome Browser. Interpretation of the results is done in the context of the biological question or hypothesis under investigation. The output can be a graphical representation of data in the form of Circos plots, volcano plots, etc., or other forms of report describing the observations.[14]
DNA sequence analysis could also involve statistical modeling to infer relationships and epigenetic analysis, like identifying differential methylation regions using a tool like DSS.
The following steps are peculiar to RNA sequences:
Mapped RNA sequences are analyzed to estimate gene expression levels using quantification tools such as HTSeq,[24] and to identify differentially expressed genes (DEGs) between experimental conditions using statistical methods like DESeq2.[25] This is carried out to compare the expression levels of genes or isoforms between or across different samples, and infer biological relevance.[14] The output of gene expression analysis is typically a table with values representing the expression levels of gene IDs or names in rows and samples in the columns, as well as standard errors and p-values. The results in the table can be further visualized using volcano plots and heatmaps, where colors represent the estimated expression level. Packages like ggplot2 in R and Matplotlib in Python are often used to create the visuals. The table can also be annotated using a reference annotation file, usually in GTF or GFF format, to provide more context about the genes, such as the chromosome name, strand, and start and end positions, and aid result interpretation.[14][12][13][26]
Functional enrichment analysis identifies biological processes, pathways, and functional impacts associated with differentially expressed genes obtained from the previous step. It uses tools like GOSeq[27] and Pathview.[28] This creates a table with information about what pathways and molecular processes are associated with the differentially expressed genes, which genes are down- or upregulated, and which gene ontology terms are recurrent or over-represented.[14][12][13][26]
RNA sequence analysis explores gene expression dynamics and regulatory mechanisms underlying biological processes and diseases. Interpretation of images and tables is carried out within the context of the hypotheses being investigated.
See also: Transcriptomic technologies.
Proteome sequence analysis studies the complete set of proteins expressed by an organism or a cell under specific conditions. It describes protein structure, function, post-translational modifications, and interactions within biological systems. It often starts with raw mass spectrometry (MS) data from proteomics experiments, typically in mzML, mzXML, or RAW file formats.[14]
Beyond preprocessing raw MS data to remove noise, normalize intensities, and detect peaks, and converting proprietary file formats (e.g., RAW) to open-source formats (mzML, mzXML) for compatibility with downstream analysis tools, other analytical steps include peptide identification, peptide quantification, protein inference and quantification, generating quality control reports, and normalization, imputation and significance testing. The choice and order of analytical steps depend on the MS method used, which can be either data-dependent acquisition (DDA) or data-independent acquisition (DIA).[14][29]
Genome browsers offer a non-code, user-friendly interface to visualize genomes and genomic segments, identify genomic features, and analyze the relationship between numerous genomic elements. The three primary genome browsers—Ensembl genome browser, UCSC genome browser, and the National Centre for Biotechnology Information (NCBI)—support different sequence analysis procedures, including genome assembly, genome annotation, and comparative genomics like exploring differential expression patterns and identifying conserved regions. All browsers support multiple data formats for upload and download and provide links to external tools and resources for sequence analyses, which contributes to their versatility.[30][31]
There are millions of protein and nucleotide sequences known. These sequences fall into many groups of related sequences known as protein families or gene families. Relationships between these sequences are usually discovered by aligning them together and assigning this alignment a score. There are two main types of sequence alignment: pair-wise sequence alignment compares only two sequences at a time, and multiple sequence alignment compares many sequences. Two important algorithms for aligning pairs of sequences are the Needleman–Wunsch algorithm and the Smith–Waterman algorithm. Popular tools for sequence alignment include:
A common use for pairwise sequence alignment is to take a sequence of interest and compare it to all known sequences in a database to identify homologous sequences. In general, the matches in the database are ordered to show the most closely related sequences first, followed by sequences with diminishing similarity. These matches are usually reported with a measure of statistical significance such as an expectation value.
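The Needleman–Wunsch algorithm mentioned above fills a dynamic-programming matrix in which each cell holds the best score for aligning the corresponding sequence prefixes. A compact score-only sketch (the +1/−1 match/mismatch and −1 gap scores are textbook example values; real tools use substitution matrices such as BLOSUM and affine gap penalties):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of sequences a and b via
    Needleman-Wunsch dynamic programming (score only, no traceback)."""
    rows, cols = len(a) + 1, len(b) + 1
    F = [[0] * cols for _ in range(rows)]
    # Aligning a prefix against nothing costs one gap per residue.
    for i in range(1, rows):
        F[i][0] = i * gap
    for j in range(1, cols):
        F[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            F[i][j] = max(diag, F[i - 1][j] + gap, F[i][j - 1] + gap)
    return F[-1][-1]

score = needleman_wunsch("GATTACA", "GCATGCU")  # classic textbook pair
```

Smith–Waterman differs mainly by clamping cell values at zero and taking the matrix maximum, which yields the best local rather than global alignment.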
In 1987, Michael Gribskov, Andrew McLachlan, and David Eisenberg introduced the method of profile comparison for identifying distant similarities between proteins.[32] Rather than using a single sequence, profile methods use a multiple sequence alignment to encode a profile which contains information about the conservation level of each residue. These profiles can then be used to search collections of sequences to find sequences that are related. Profiles are also known as position-specific scoring matrices (PSSMs). In 1993, a probabilistic interpretation of profiles was introduced by Anders Krogh and colleagues using hidden Markov models.[33][34] These models have become known as profile-HMMs.
In recent years, methods have been developed that allow the comparison of profiles directly to each other. These are known as profile-profile comparison methods.[35]
Sequence assembly refers to the reconstruction of a DNA sequence by aligning and merging small DNA fragments. It is an integral part of modern DNA sequencing. Since presently available DNA sequencing technologies are ill-suited for reading long sequences, large pieces of DNA (such as genomes) are often sequenced by (1) cutting the DNA into small pieces, (2) reading the small fragments, and (3) reconstituting the original DNA by merging the information on various fragments.
Sequencing multiple species at one time has recently become a top research objective. Metagenomics is the study of microbial communities obtained directly from the environment. Unlike microorganisms cultured in the lab, a wild sample usually contains dozens, sometimes even thousands, of types of microorganisms from their original habitats.[36] Recovering the original genomes can prove to be very challenging.
Gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include the prediction of other functional elements such as regulatory regions. Gene prediction is one of the first and most important steps in understanding the genome of a species once it has been sequenced. In general, the prediction of bacterial genes is significantly simpler and more accurate than the prediction of genes in eukaryotic species, which usually have complex intron/exon patterns. Identifying genes in long sequences remains a problem, especially when the number of genes is unknown. Hidden Markov models can be part of the solution.[37] Machine learning has played a significant role in predicting the sequences of transcription factors.[38] Traditional sequencing analysis focused on the statistical parameters of the nucleotide sequence itself (the most common programs used are listed in Table 4.1). Another method is to identify homologous sequences based on other known gene sequences (for tools, see Table 4.3).[39] The two methods described here are focused on the sequence. However, the shape features of molecules such as DNA and protein have also been studied and proposed to have an equivalent, if not higher, influence on the behaviors of these molecules.[40]
The 3D structures of molecules are of major importance to their functions in nature. Since structural prediction of large molecules at an atomic level is a largely intractable problem, some biologists introduced ways to predict 3D structure at the primary sequence level. This includes the biochemical or statistical analysis of amino acid residues in local regions and structural inference from homologs (or other potentially related proteins) with known 3D structures.
There have been a large number of diverse approaches to solving the structure prediction problem. In order to determine which methods were most effective, a structure prediction competition called CASP (Critical Assessment of Structure Prediction) was founded.[41]
Sequence analysis tasks are often non-trivial and require relatively complex approaches, many of which form the backbone of existing sequence analysis tools. Of the many methods used in practice, the most popular include the following: | https://en.wikipedia.org/wiki/Sequence_analysis
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements.[1]Signal processing techniques are used to optimize transmissions, improve digital storage efficiency and subjective video quality, correct distorted signals, and detect or pinpoint components of interest in a measured signal.[2]
According toAlan V. OppenheimandRonald W. Schafer, the principles of signal processing can be found in the classicalnumerical analysistechniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digitalcontrol systemsof the 1940s and 1950s.[3]
In 1948,Claude Shannonwrote the influential paper "A Mathematical Theory of Communication" which was published in theBell System Technical Journal.[4]The paper laid the groundwork for later development of information communication systems and the processing of signals for transmission.[5]
Signal processing matured and flourished in the 1960s and 1970s, anddigital signal processingbecame widely used with specializeddigital signal processorchips in the 1980s.[5]
A signal is afunctionx(t){\displaystyle x(t)}, where this function is either[6]
Analog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops.
Continuous-time signal processing is for signals that vary over a continuous domain (without considering some individual interrupted points).
The methods of signal processing include the time domain, frequency domain, and complex frequency domain. This technology mainly concerns the modeling of a linear time-invariant continuous system, the integral of the system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. For example, in the time domain, a continuous-time signal x(t){\displaystyle x(t)} passing through a linear time-invariant filter/system denoted as h(t){\displaystyle h(t)} can be expressed at the output as
y(t)=∫−∞∞h(τ)x(t−τ)dτ{\displaystyle y(t)=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )\,d\tau }
In some contexts,h(t){\displaystyle h(t)}is referred to as the impulse response of the system. The aboveconvolutionoperation is conducted between the input and the system.
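The discrete-time analogue of this convolution, y[n] = Σₖ h[k]·x[n−k], can be computed directly. Below is a minimal pure-Python sketch; the unit-step input and the 3-tap moving-average filter are illustrative choices, not from the text.

```python
def convolve(x, h):
    """Discrete convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# A 3-tap moving-average filter applied to a unit step:
# the output ramps up toward 1.0, then back down.
x = [1.0, 1.0, 1.0, 1.0]
h = [1 / 3, 1 / 3, 1 / 3]
print(convolve(x, h))
```

Convolving with a unit impulse recovers the filter's impulse response, which is why h(t) is called the impulse response of the system.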
Discrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such is quantized in time, but not in magnitude.
Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals.[7]
The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without takingquantization errorinto consideration.
Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters.
Nonlinear signal processing involves the analysis and processing of signals produced fromnonlinear systemsand can be in the time,frequency, or spatiotemporal domains.[8][9]Nonlinear systems can produce highly complex behaviors includingbifurcations,chaos,harmonics, andsubharmonicswhich cannot be produced or analyzed using linear methods.
Polynomial signal processing is a type of non-linear signal processing, wherepolynomialsystems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case.[10]
Statistical signal processingis an approach which treats signals asstochastic processes, utilizing theirstatisticalproperties to perform signal processing tasks.[11]Statistical techniques are widely used in signal processing applications. For example, one can model theprobability distributionof noise incurred when photographing an image, and construct techniques based on this model toreduce the noisein the resulting image.
Graph signal processing generalizes signal processing tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph.[12]Graph signal processing presents several key points such as signal sampling techniques,[13]recovery techniques[14]and time-varying techniques.[15]Graph signal processing has been applied with success in the fields of image processing, computer vision[16][17][18]and sound anomaly detection.[19]
In communication systems, signal processing may occur at:[citation needed] | https://en.wikipedia.org/wiki/Signal_processing |
Atime series databaseis a software system that is optimized for storing and servingtime seriesthrough associated pairs of time(s) and value(s).[1]In some fields,time seriesmay be called profiles, curves, traces or trends.[2]Several early time series databases are associated with industrial applications which could efficiently store measured values from sensory equipment (also referred to asdata historians), but now are used in support of a much wider range of applications.
In many cases, the repositories of time-series data will utilizecompression algorithmsto manage the data efficiently.[3][4]Although it is possible to store time-series data in many different database types, the design of these systems with time as a key index is distinctly different fromrelational databaseswhich reduce discrete relationships through referential models.[5]
Time series datasets are relatively large and uniform compared to other datasets―usually being composed of atimestampand associated data.[6]Time series datasets can also have fewer relationships between data entries in different tables and don't require indefinite storage of entries.[6]The unique properties of time series datasets mean that time series databases can provide significant improvements in storage space and performance over general purpose databases.[6]For instance, due to the uniformity of time series data, specialized compression algorithms can provide improvements over regular compression algorithms designed to work on less uniform data.[6]Time series databases can also be configured to regularly delete (or downsample) old data, unlike regular databases which are designed to store data indefinitely.[6]Specialdatabase indicescan also provide boosts in query performance.[6]
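A minimal sketch of the downsampling idea described above, assuming hypothetical 1-second sensor readings and a simple bucket-mean retention policy; production time series databases use far more sophisticated compression and retention schemes.

```python
from statistics import mean

# Hypothetical (timestamp, value) sensor samples at 1-second resolution.
points = [(t, 20.0 + 0.1 * t) for t in range(60)]

def downsample(points, bucket_seconds):
    """Aggregate raw points into coarser buckets, keeping each bucket's mean."""
    buckets = {}
    for t, v in points:
        start = t // bucket_seconds * bucket_seconds  # bucket start time
        buckets.setdefault(start, []).append(v)
    return [(t, mean(vs)) for t, vs in sorted(buckets.items())]

coarse = downsample(points, 10)
print(coarse[0])  # bucket starting at t=0, mean close to 20.45
```

Sixty raw points collapse to six, which is the storage/precision trade-off downsampling makes.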
The following database systems have functionality optimized for handlingtime seriesdata. | https://en.wikipedia.org/wiki/Time_series_database |
In statistics, signal processing, and econometrics, an unevenly (or unequally or irregularly) spaced time series is a sequence of observation time and value pairs (tn, Xn) in which the spacing of observation times is not constant.
Unevenly spaced time series naturally occur in many industrial and scientific domains: natural disasters such as earthquakes, floods, or volcanic eruptions typically occur at irregular time intervals. In observational astronomy, measurements such as spectra of celestial objects are taken at times determined by weather conditions, availability of observation time slots, and suitable planetary configurations. In clinical trials (or more generally, longitudinal studies), a patient's state of health may be observed only at irregular time intervals, and different patients are usually observed at different points in time. Wireless sensors in the Internet of things often transmit information only when a state changes, to conserve battery life. There are many more examples in climatology, ecology, high-frequency finance, geology, and signal processing.
A common approach to analyzing unevenly spaced time series is to transform the data into equally spaced observations using some form of interpolation (most often linear) and then to apply existing methods for equally spaced data. However, transforming data in such a way can introduce a number of significant and hard-to-quantify biases,[1][2][3][4][5]especially if the spacing of observations is highly irregular.
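For illustration, the linear-interpolation transform can be sketched as follows; the observation times and values are made up, and, per the caveat above, such resampling can bias subsequent analysis.

```python
def interp_linear(times, values, t):
    """Linearly interpolate an unevenly spaced series at time t."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            w = (t - times[i - 1]) / (times[i] - times[i - 1])
            return values[i - 1] + w * (values[i] - values[i - 1])

# Irregular observation times...
times = [0.0, 0.7, 2.5, 3.1]
values = [1.0, 2.0, 0.5, 1.5]
# ...resampled onto an equally spaced grid.
grid = [interp_linear(times, values, t) for t in [0.0, 1.0, 2.0, 3.0]]
print(grid)
```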
Ideally, unevenly spaced time series are analyzed in their unaltered form. However, most of the basic theory fortime series analysiswas developed at a time when limitations in computing resources favored an analysis of equally spaced data, since in this case efficientlinear algebraroutines can be used and many problems have anexplicit solution. As a result, fewer methods currently exist specifically for analyzing unevenly spaced time series data.[5][6][7][8][9][10][11]
Theleast-squares spectral analysismethods are commonly used for computing afrequency spectrumfrom such time series without any data alterations. | https://en.wikipedia.org/wiki/Unevenly_spaced_time_series |
Stochastic chains with memory of variable length are a family of stochastic chains of finite order in a finite alphabet in which, at each instant, only a finite suffix of the past, called the context, is necessary to predict the next symbol. These models were introduced in the information theory literature by Jorma Rissanen in 1983[1]as a universal tool for data compression, but have recently been used to model data in areas such as biology,[2]linguistics[3]and music.[4]
A stochastic chain with memory of variable length is a stochastic chain(Xn)n∈Z{\displaystyle (X_{n})_{n\in Z}}, taking values in a finite alphabetA{\displaystyle A}, and characterized by a probabilistic context tree(τ,p){\displaystyle (\tau ,p)}, so that
The class of stochastic chains with memory of variable length was introduced by Jorma Rissanen in the article A universal data compression system.[1]This class of stochastic chains was popularized in the statistical and probabilistic community by P. Bühlmann and A. J. Wyner in 1999, in the article Variable Length Markov Chains. Named by Bühlmann and Wyner as "variable length Markov chains" (VLMC), these chains are also known as "variable-order Markov models" (VOM), "probabilistic suffix trees"[2]and "context tree models".[5]The name "stochastic chains with memory of variable length" seems to have been introduced by Galves and Löcherbach, in 2008, in the article of the same name.[6]
Consider a system consisting of a lamp, an observer, and a door between them. The lamp has two possible states: on, represented by 1, or off, represented by 0. When the lamp is on, the observer may see the light through the door, depending on the state of the door at that time: open, 1, or closed, 0. These states are independent of the state of the lamp.
Let (Xn)n≥0{\displaystyle (X_{n})_{n\geq 0}} be a Markov chain representing the state of the lamp, with values in A={0,1}{\displaystyle A=\{0,1\}}, and let p{\displaystyle p} be its probability transition matrix. Also, let (ξn)n≥0{\displaystyle (\xi _{n})_{n\geq 0}} be a sequence of independent random variables representing the door's states, also taking values in A{\displaystyle A}, independent of the chain (Xn)n≥0{\displaystyle (X_{n})_{n\geq 0}} and such that
where0<ϵ<1{\displaystyle 0<\epsilon <1}. Define a new sequence(Zn)n≥0{\displaystyle (Z_{n})_{n\geq 0}}such that
The goal is to determine the last instant at which the observer could see the lamp on, i.e., to identify the largest instant k{\displaystyle k}, with k<n{\displaystyle k<n}, at which Zk=1{\displaystyle Z_{k}=1}.
Using a context tree, it is possible to represent the past states of the sequence, showing which of them are relevant to identifying the next state.
The stochastic chain(Zn)n∈Z{\displaystyle (Z_{n})_{n\in \mathbb {Z} }}is, then, a chain with memory of variable length, taking values inA{\displaystyle A}and compatible with the probabilistic context tree(τ,p){\displaystyle (\tau ,p)}, where
Given a sample Xl,…,Xn{\displaystyle X_{l},\ldots ,X_{n}}, one can find the appropriate context tree using the following algorithms.
In the articleA Universal Data Compression System,[1]Rissanen introduced a consistent algorithm to estimate the probabilistic context tree that generates the data. This algorithm's function can be summarized in two steps:
Let X0,…,Xn−1{\displaystyle X_{0},\ldots ,X_{n-1}} be a sample from a finite probabilistic tree (τ,p){\displaystyle (\tau ,p)}. For any sequence x−j−1{\displaystyle x_{-j}^{-1}} with j≤n{\displaystyle j\leq n}, denote by Nn(x−j−1){\displaystyle N_{n}(x_{-j}^{-1})} the number of occurrences of the sequence in the sample, i.e.,
Rissanen first built a maximum context candidate, given by Xn−K(n)n−1{\displaystyle X_{n-K(n)}^{n-1}}, where K(n)=Clogn{\displaystyle K(n)=C\log {n}} and C{\displaystyle C} is an arbitrary positive constant. The intuitive reason for the choice of Clogn{\displaystyle C\log {n}} comes from the impossibility of estimating the probabilities of sequences with lengths greater than logn{\displaystyle \log {n}} based on a sample of size n{\displaystyle n}.
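The occurrence counts Nₙ(·) on which these estimators are built amount to a sliding-window count over the sample; a minimal sketch (the binary sample below is illustrative):

```python
def count_occurrences(sample, context):
    """N_n(context): number of positions where `context` occurs in the sample."""
    k = len(context)
    return sum(1 for i in range(len(sample) - k + 1)
               if tuple(sample[i:i + k]) == tuple(context))

# Illustrative binary sample over the alphabet A = {0, 1}.
sample = [0, 1, 0, 0, 1, 0, 1, 1, 0]
print(count_occurrences(sample, [0, 1]))  # occurrences of the context "01"
```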
From there, Rissanen shortens the maximum candidate by successively pruning its branches according to a sequence of tests based on a statistical likelihood ratio. More formally, if ∑b∈ANn(x−k−1b)>0{\displaystyle \sum _{b\in A}N_{n}(x_{-k}^{-1}b)>0}, define the estimator of the transition probability p{\displaystyle p} by
wherex−j−1a=(x−j,…,x−1,a){\displaystyle x_{-j}^{-1}a=(x_{-j},\ldots ,x_{-1},a)}. If∑b∈ANn(x−k−1b)=0{\displaystyle \sum _{b\in A}N_{n}(x_{-k}^{-1}b)\,=\,0}, definep^n(a∣x−k−1)=1/|A|{\displaystyle {\hat {p}}_{n}(a\mid x_{-k}^{-1})\,=\,1/|A|}.
For i≥1{\displaystyle i\geq 1}, define
whereyx−i−1=(y,x−i,…,x−1){\displaystyle yx_{-i}^{-1}=(y,x_{-i},\ldots ,x_{-1})}and
Note that Λn(x−i−1){\displaystyle \Lambda _{n}(x_{-i}^{-1})} is the log-likelihood ratio for testing the consistency of the sample with the probabilistic context tree (τ,p){\displaystyle (\tau ,p)} against the alternative that it is consistent with (τ′,p′){\displaystyle (\tau ',p')}, where τ{\displaystyle \tau } and τ′{\displaystyle \tau '} differ only by a set of sibling nodes.
The length of the current estimated context is defined by
where C{\displaystyle C} is any positive constant. Finally, Rissanen[1]proved the following result: given a sample X0,…,Xn−1{\displaystyle X_{0},\ldots ,X_{n-1}} of a finite probabilistic context tree (τ,p){\displaystyle (\tau ,p)}, then
whenn→∞{\displaystyle n\rightarrow \infty }.
The estimator of the context tree by BIC with a penalty constantc>0{\displaystyle c>0}is defined as
The smallest maximizer criterion[3]is calculated by selecting the smallest tree τ of a set of champion trees C such that | https://en.wikipedia.org/wiki/Stochastic_chains_with_memory_of_variable_length
This article contains examples ofMarkov chainsand Markov processes in action.
All examples are in the countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space.
A game of snakes and ladders or any other game whose moves are determined entirely by dice is a Markov chain, indeed, an absorbing Markov chain. This is in contrast to card games such as blackjack, where the cards represent a 'memory' of the past moves. To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state and the next roll of the dice. It does not depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown (and hence which cards are no longer in the deck), so the next state (or hand) of the game is not independent of the past states.
Consider a random walk on the number line where, at each step, the position (call it x) may change by +1 (to the right) or −1 (to the left) with probabilities:
(wherecis a constant greater than 0)
For example, if the constant, c, equals 1, the probabilities of a move to the left at positions x = −2,−1,0,1,2 are given by 16,14,12,34,56{\displaystyle {\dfrac {1}{6}},{\dfrac {1}{4}},{\dfrac {1}{2}},{\dfrac {3}{4}},{\dfrac {5}{6}}} respectively. The random walk has a centering effect that weakens as c increases.
Since the probabilities depend only on the current position (value ofx) and not on any prior positions, this biased random walk satisfies the definition of a Markov chain.
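The quoted values are consistent with a move-left probability of 1/2 + x/(2(c + |x|)) — an inference from the example, since the formula itself is not reproduced above. Under that assumption, the table of probabilities can be recovered exactly with rational arithmetic:

```python
from fractions import Fraction

def p_left(x, c):
    """Probability of a step to the left at position x (centering walk)."""
    return Fraction(1, 2) + Fraction(x, 2 * (c + abs(x)))

c = 1
probs = [p_left(x, c) for x in (-2, -1, 0, 1, 2)]
print(probs)  # [1/6, 1/4, 1/2, 3/4, 5/6]
```

Far to the right (large positive x) the walk is pushed left, and vice versa, which is the centering effect; increasing c flattens all probabilities toward 1/2.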
Suppose that one starts with $10, and one wagers $1 on an unending, fair coin toss indefinitely, or until all of the money is lost. If Xn{\displaystyle X_{n}} represents the number of dollars one has after n tosses, with X0=10{\displaystyle X_{0}=10}, then the sequence {Xn:n∈N}{\displaystyle \{X_{n}:n\in \mathbb {N} \}} is a Markov process. If one knows that one has $12 now, then it would be expected that, with even odds, one will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that one started with $10, then went up to $11, down to $10, up to $11, and then to $12. The fact that the guess is not improved by the knowledge of earlier tosses showcases the Markov property, the memoryless property of a stochastic process.[1]
This example comes from Markov himself.[2]Markov chose 20,000 letters from Pushkin's Eugene Onegin, classified them into vowels and consonants, and counted the transition probabilities. vowelconsonantvowel.128.872consonant.663.337{\displaystyle {\begin{array}{lll}&{\text{vowel}}&{\text{consonant}}\\{\text{vowel}}&.128&.872\\{\text{consonant}}&.663&.337\end{array}}}The stationary distribution is 43.2 percent vowels and 56.8 percent consonants, which is close to the actual count in the book.[3]
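The stationary distribution of this two-state chain can be recovered by power iteration, pushing an arbitrary starting distribution through the transition probabilities until it settles:

```python
# Markov's vowel/consonant transition probabilities from Eugene Onegin.
P = {("v", "v"): 0.128, ("v", "c"): 0.872,
     ("c", "v"): 0.663, ("c", "c"): 0.337}

# Power iteration: repeatedly push a distribution through P until it settles.
dist = {"v": 0.5, "c": 0.5}
for _ in range(100):
    dist = {s: sum(dist[r] * P[(r, s)] for r in "vc") for s in "vc"}

print(round(dist["v"], 3), round(dist["c"], 3))  # 0.432 0.568
```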
The probabilities of weather conditions (modeled as either rainy or sunny), given the weather on the preceding day,
can be represented by atransition matrix:
The matrixPrepresents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. The columns can be labelled "sunny" and "rainy", and the rows can be labelled in the same order.
(P)ij is the probability that, if a given day is of type i, it will be followed by a day of type j.
Notice that the rows of P sum to 1: this is because P is a stochastic matrix.
The weather on day 0 (today) is known to be sunny. This is represented by an initial state vector in which the "sunny" entry is 100%, and the "rainy" entry is 0%:
The weather on day 1 (tomorrow) can be predicted by multiplying the state vector from day 0 by the transition matrix:
Thus, there is a 90% chance that day 1 will also be sunny.
The weather on day 2 (the day after tomorrow) can be predicted in the same way, from the state vector we computed for day 1:
or
General rules for daynare:
In this example, predictions for the weather on more distant days change less and less on each subsequent day and tend towards asteady state vector. This vector represents the probabilities of sunny and rainy weather on all days, and is independent of the initial weather.
The steady state vector is defined as:
but it converges to a strictly positive vector only if P is a regular transition matrix (that is, there is at least one Pn with all non-zero entries).
Since q is independent of initial conditions, it must be unchanged when transformed by P.[4]This makes it an eigenvector (with eigenvalue 1), and means it can be derived from P.
In layman's terms, the steady-state vector is the vector that is unchanged when multiplied by P.[5]For the weather example, we can use this to set up a matrix equation:
and since q is a probability vector we know that
Solving this pair of simultaneous equations gives the steady state vector:
In conclusion, in the long term about 83.3% of days are sunny. Not all Markov processes have a steady state vector. In particular, the transition matrix must be regular. Otherwise, the state vectors will oscillate over time without converging.
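For this two-state weather chain, the steady state can be solved in closed form from q = qP together with the normalization constraint; a short sketch:

```python
# Transition probabilities: sunny stays sunny 90%, rainy stays rainy 50%.
p_ss, p_rr = 0.9, 0.5
p_sr, p_rs = 1 - p_ss, 1 - p_rr

# Steady state solves q_s = q_s*p_ss + q_r*p_rs with q_s + q_r = 1,
# which rearranges to q_s = p_rs / (p_sr + p_rs).
q_s = p_rs / (p_sr + p_rs)
q_r = 1 - q_s
print(q_s)  # about 0.8333 -> roughly 83.3% of days are sunny
```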
A state diagram for a simple example is shown in the figure on the right, using a directed graph to picture the state transitions. The states represent whether a hypothetical stock market is exhibiting a bull market, bear market, or stagnant market trend during a given week. According to the figure, a bull week is followed by another bull week 90% of the time, a bear week 7.5% of the time, and a stagnant week the other 2.5% of the time. Labeling the state space {1 = bull, 2 = bear, 3 = stagnant}, the transition matrix for this example is
The distribution over states can be written as a stochastic row vector x with the relation x(n+1) = x(n)P. So if at time n the system is in state x(n), then three time periods later, at time n+3, the distribution is
In particular, if at time n the system is in state 2 (bear), then at time n+3 the distribution is
Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:
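The quoted steady-state probabilities can be checked numerically. In the matrix below, the first row matches the bull-week percentages given above; the bear and stagnant rows are the values commonly used for this example and are an assumption here, chosen to be consistent with the quoted steady state.

```python
# Bull/bear/stagnant transition matrix (rows 2 and 3 assumed, see lead-in).
P = [[0.90, 0.075, 0.025],
     [0.15, 0.80,  0.05],
     [0.25, 0.25,  0.50]]
pi = [0.625, 0.3125, 0.0625]  # claimed steady state: 62.5%, 31.25%, 6.25%

# Stationarity check: pi * P should reproduce pi entry by entry.
pi_next = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print(pi_next)  # approximately [0.625, 0.3125, 0.0625]
```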
A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005.[6]
A finite-state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n+1 depends only on the current state.
If one pops one hundred kernels of popcorn in an oven, each kernel popping at an independent exponentially distributed time, then this would be a continuous-time Markov process. If Xt{\displaystyle X_{t}} denotes the number of kernels which have popped up to time t, the problem can be defined as finding the number of kernels that will pop in some later time. The only thing one needs to know is the number of kernels that have popped prior to time t. It is not necessary to know when they popped, so knowing Xt{\displaystyle X_{t}} for previous times t is not relevant.
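The Markov property here rests on the memorylessness of the exponential distribution, P(T > s + t | T > s) = P(T > t), which can be verified directly from the survival function; the popping rate below is a made-up number.

```python
import math

rate = 2.0  # hypothetical popping rate per second
def surv(t):
    """P(a kernel is still unpopped at time t) for an Exponential(rate) time."""
    return math.exp(-rate * t)

# Memorylessness: having survived to time s tells us nothing extra.
s, t = 1.5, 0.75
conditional = surv(s + t) / surv(s)   # P(T > s+t | T > s)
print(abs(conditional - surv(t)) < 1e-12)  # True
```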
The process described here is an approximation of a Poisson point process – Poisson processes are also Markov processes. | https://en.wikipedia.org/wiki/Examples_of_Markov_chains
Variable-order Bayesian network (VOBN)models provide an important extension of both theBayesian networkmodels and thevariable-order Markov models. VOBN models are used inmachine learningin general and have shown great potential inbioinformaticsapplications.[1][2]These models extend the widely usedposition weight matrix(PWM) models,Markov models, and Bayesian network (BN) models.
In contrast to the BN models, where each random variable depends on a fixed subset of random variables, in VOBN models these subsets may vary based on the specific realization of observed variables. The observed realizations are often called the context and, hence, VOBN models are also known as context-specific Bayesian networks.[3]The flexibility in the definition of conditioning subsets of variables turns out to be a real advantage in classification and analysis applications, as the statistical dependencies between random variables in a sequence of variables (not necessarily adjacent) may be taken into account efficiently, and in a position-specific and context-specific manner. | https://en.wikipedia.org/wiki/Variable_order_Bayesian_network |
Markov renewal processes are a class of random processes in probability and statistics that generalize the class of Markov jump processes. Other classes of random processes, such as Markov chains and Poisson processes, can be derived as special cases among the class of Markov renewal processes, while Markov renewal processes are special cases among the more general class of renewal processes.
In the context of a jump process that takes states in a state space S{\displaystyle \mathrm {S} }, consider the set of random variables (Xn,Tn){\displaystyle (X_{n},T_{n})}, where Tn{\displaystyle T_{n}} represents the jump times and Xn{\displaystyle X_{n}} represents the associated states in the sequence of states (see Figure). Let the sequence of inter-arrival times be τn=Tn−Tn−1{\displaystyle \tau _{n}=T_{n}-T_{n-1}}. In order for the sequence (Xn,Tn){\displaystyle (X_{n},T_{n})} to be considered a Markov renewal process, the following condition should hold:
Pr(τn+1≤t,Xn+1=j∣(X0,T0),(X1,T1),…,(Xn=i,Tn))=Pr(τn+1≤t,Xn+1=j∣Xn=i)∀n≥1,t≥0,i,j∈S{\displaystyle {\begin{aligned}&\Pr(\tau _{n+1}\leq t,X_{n+1}=j\mid (X_{0},T_{0}),(X_{1},T_{1}),\ldots ,(X_{n}=i,T_{n}))\\[5pt]={}&\Pr(\tau _{n+1}\leq t,X_{n+1}=j\mid X_{n}=i)\,\forall n\geq 1,t\geq 0,i,j\in \mathrm {S} \end{aligned}}} | https://en.wikipedia.org/wiki/Semi-Markov_process |
Artificial intelligence(AI) refers to the capability ofcomputational systemsto perform tasks typically associated withhuman intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is afield of researchincomputer sciencethat develops and studies methods andsoftwarethat enable machines toperceive their environmentand uselearningandintelligenceto take actions that maximize their chances of achieving defined goals.[1]Such machines may be called AIs.
High-profileapplications of AIinclude advancedweb search engines(e.g.,Google Search);recommendation systems(used byYouTube,Amazon, andNetflix);virtual assistants(e.g.,Google Assistant,Siri, andAlexa);autonomous vehicles(e.g.,Waymo);generativeandcreativetools (e.g.,ChatGPTandAI art); andsuperhumanplay and analysis instrategy games(e.g.,chessandGo). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it'snot labeled AI anymore."[2][3]
Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include learning,reasoning,knowledge representation,planning,natural language processing,perception, and support forrobotics.[a]To reach these goals, AI researchers have adapted and integrated a wide range of techniques, includingsearchandmathematical optimization,formal logic,artificial neural networks, and methods based onstatistics,operations research, andeconomics.[b]AI also draws uponpsychology,linguistics,philosophy,neuroscience, and other fields.[4]Some AI companies, such asOpenAI,Google DeepMindandMeta, aim to createartificial general intelligence(AGI)—AI that can complete virtually any cognitive task at least as well as humans.[5]
Artificial intelligence was founded as an academic discipline in 1956,[6]and the field went through multiple cycles of optimism throughoutits history,[7][8]followed by periods of disappointment and loss of funding, known asAI winters.[9][10]Funding and interest vastly increased after 2012 whengraphics processing unitsstarted being used to accelerate neural networks, anddeep learningoutperformed previous AI techniques.[11]This growth accelerated further after 2017 with thetransformer architecture.[12]In the 2020s, the period of rapidprogressmarked by advanced generative AI became known as theAI boom. Generative AI and its ability to create and modify content exposed several unintended consequences and harms in the present and raisedethical concernsaboutAI's long-term effectsand potentialexistential risks, prompting discussions aboutregulatory policiesto ensure thesafetyand benefits of the technology.
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logicaldeductions.[13]By the late 1980s and 1990s, methods were developed for dealing withuncertainor incomplete information, employing concepts fromprobabilityandeconomics.[14]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow.[15]Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16]Accurate and efficient reasoning is an unsolved problem.
Knowledge representationandknowledge engineering[17]allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18]scene interpretation,[19]clinical decision support,[20]knowledge discovery (mining "interesting" and actionable inferences from largedatabases),[21]and other areas.[22]
Aknowledge baseis a body of knowledge represented in a form that can be used by a program. Anontologyis the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23]Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24]situations, events, states, and time;[25]causes and effects;[26]knowledge about knowledge (what we know about what other people know);[27]default reasoning(things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28]and many other aspects and domains of knowledge.
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29]and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[16]There is also the difficulty ofknowledge acquisition, the problem of obtaining knowledge for AI applications.[c]
An "agent" is anything that perceives and takes actions in the world. Arational agenthas goals or preferences and takes actions to make them happen.[d][32]Inautomated planning, the agent has a specific goal.[33]Inautomated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": theutilityof all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[34]
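The expected-utility calculation described above can be sketched in a few lines; the actions, probabilities, and utilities below are entirely made up.

```python
# Each action maps to a list of (probability, utility-of-outcome) pairs.
actions = {
    "stay": [(1.0, 5.0)],                   # safe: guaranteed modest utility
    "move": [(0.8, 10.0), (0.2, -20.0)],    # risky: usually good, sometimes bad
}

def expected_utility(outcomes):
    """Utility of each outcome, weighted by its probability."""
    return sum(p * u for p, u in outcomes)

# The rational agent picks the action with maximum expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # stay 5.0
```

Here the risky action averages 0.8·10 + 0.2·(−20) = 4, so the safe action (utility 5) wins despite the risky action's higher best case.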
In classical planning, the agent knows exactly what the effect of any action will be.[35] In most real-world problems, however, the agent may not be certain about the situation it is in (the situation is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (the action is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[36]
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences.[37] Information value theory can be used to weigh the value of exploratory or experimental actions.[38] The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.[39]
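As a rough illustration, value iteration can compute such a policy for a toy two-state Markov decision process; the states, transition probabilities, rewards, and discount factor below are all invented:

```python
# Value iteration on a toy two-state Markov decision process; the
# states, transitions, rewards, and discount factor are all invented.

states = ["A", "B"]
actions = ["stay", "move"]
# transition[s][a] = list of (probability, next_state) pairs
transition = {
    "A": {"stay": [(1.0, "A")], "move": [(0.8, "B"), (0.2, "A")]},
    "B": {"stay": [(1.0, "B")], "move": [(1.0, "A")]},
}
reward = {"A": {"stay": 0.0, "move": 1.0}, "B": {"stay": 2.0, "move": 0.0}}
gamma = 0.9  # discount factor

def q(s, a, V):
    """Expected return of taking action a in state s, given value estimates V."""
    return reward[s][a] + gamma * sum(p * V[s2] for p, s2 in transition[s][a])

V = {s: 0.0 for s in states}
for _ in range(200):  # repeat the Bellman update until (nearly) converged
    V = {s: max(q(s, a, V) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
```

In this toy problem the computed policy is to "move" in state A (toward the rewarding state B) and "stay" in B.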
Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.[40]
Machine learning is the study of programs that can improve their performance on a given task automatically.[41] It has been a part of AI from the beginning.[e]
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.[44] Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).[45]
In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[46] Transfer learning is when the knowledge gained from one problem is applied to a new problem.[47] Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning.[48]
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[49]
Natural language processing (NLP)[50] allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.[51]
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation[f] unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[29]). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure.
Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning),[52] transformers (a deep learning architecture using an attention mechanism),[53] and others.[54] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[55][56] and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.[57]
Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.[58]
The field includes speech recognition,[59] image classification,[60] facial recognition, object recognition,[61] object tracking,[62] and robotic perception.[63]
Affective computing is a field that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood.[65] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise helps facilitate human–computer interaction.
However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents.[66] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.[67]
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.[68]
AI research uses a wide variety of techniques to accomplish the goals above.[b]
AI can solve many problems by intelligently searching through many possible solutions.[69] There are two very different kinds of search used in AI: state space search and local search.
State space search searches through a tree of possible states to try to find a goal state.[70] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[71]
Simple exhaustive searches[72] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes.[15] "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.[73]
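The role of a heuristic can be illustrated with a minimal A* search, which expands nodes in order of path cost so far plus a heuristic estimate of the remaining distance; the graph and heuristic values below are invented:

```python
import heapq

# Minimal A* search sketch; the graph, edge costs, and heuristic
# estimates h are invented for illustration.
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("C", 2)],
    "B": [("C", 1)],
    "C": [("G", 3)],
    "G": [],
}
h = {"S": 5, "A": 4, "B": 3, "C": 2, "G": 0}  # heuristic: estimated cost to goal

def a_star(start, goal):
    # frontier entries: (f = g + h, g = cost so far, node, path)
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None, float("inf")
```

The heuristic steers the search toward promising branches, so the cheapest path from S to G is found without exhaustively enumerating every route.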
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.[74]
Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally.[75]
Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks,[76] through the backpropagation algorithm.
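A bare-bones sketch of gradient descent on a one-parameter loss function f(w) = (w − 3)², whose gradient is 2(w − 3); the starting point, learning rate, and step count are arbitrary choices:

```python
# Gradient descent on the one-parameter loss f(w) = (w - 3)**2,
# whose gradient is 2 * (w - 3); the minimum is at w = 3.

def grad(w):
    return 2 * (w - 3)

w = 0.0    # initial guess
lr = 0.1   # learning rate (step size)
for _ in range(200):
    w -= lr * grad(w)   # step opposite the gradient to decrease the loss
```

Each step moves `w` a fraction of the way toward the minimizer, so after 200 iterations `w` is essentially 3.0; training a neural network applies the same idea to millions of parameters at once.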
Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation.[77]
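A toy sketch of this loop, evolving a bit string toward all ones; the fitness function, population size, mutation rate, and random seed are arbitrary choices for illustration:

```python
import random

# Evolutionary-computation sketch: evolve a bit string toward all ones
# by mutation, recombination, and selection. Fitness is the number of
# ones; the population size, rates, and seed are arbitrary choices.

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(ind):
    return sum(ind)

def mutate(ind, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)   # single-point recombination
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]          # select the fittest half
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
```

Because the fittest half survives unchanged each generation, the best fitness never decreases, and after a few dozen generations the population converges on (nearly) all-ones strings.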
Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[78]
Formal logic is used for reasoning and knowledge representation.[79] Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies")[80] and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").[81]
Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises).[82] Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem.[83] In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[84]
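Forward chaining over Horn clauses can be sketched as a fixed-point loop that keeps applying rules whose premises are already known; the rules and facts below are invented:

```python
# Forward chaining over Horn clauses: each rule is (premises, conclusion);
# the set of known facts is repeatedly extended until no rule adds a new
# fact. The rules and facts are invented for illustration.

rules = [
    ({"mammal", "lays_eggs"}, "monotreme"),
    ({"monotreme", "has_bill"}, "platypus"),
]
facts = {"mammal", "lays_eggs", "has_bill"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:  # all premises known
            facts.add(conclusion)
            changed = True
```

The first pass derives "monotreme", which then licenses the second rule on the next pass, so the loop chains conclusions forwards until nothing new can be inferred.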
Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages.[85]
Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.[86]
Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning.[28] Other specialized versions of logic have been developed to describe many complex domains.
Many problems in AI (including reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.[87] Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[88] and information value theory.[89] These tools include models such as Markov decision processes,[90] dynamic decision networks,[91] game theory and mechanism design.[92]
Bayesian networks[93] are a tool that can be used for reasoning (using the Bayesian inference algorithm),[g][95] learning (using the expectation–maximization algorithm),[h][97] planning (using decision networks)[98] and perception (using dynamic Bayesian networks).[91]
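The underlying idea of Bayesian inference can be shown with a single application of Bayes' rule; the prior, sensitivity, and false-positive rate below are invented numbers:

```python
# One application of Bayes' rule: the probability of a condition given a
# positive test. The prior, sensitivity, and false-positive rate are invented.

p_disease = 0.01            # prior probability of the condition
p_pos_given_disease = 0.9   # sensitivity of the test
p_pos_given_healthy = 0.05  # false-positive rate

# total probability of a positive test (law of total probability)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
# posterior via Bayes' rule
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
```

Even with a fairly accurate test, the posterior here is only about 15%, because the condition is rare; a Bayesian network generalizes this kind of calculation to many interacting variables.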
Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[91]
The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers[99] are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[45]
There are many kinds of classifiers in use.[100] The decision tree is the simplest and most widely used symbolic machine learning algorithm.[101] The K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[102] The naive Bayes classifier is reportedly the "most widely used learner"[103] at Google, due in part to its scalability.[104] Neural networks are also used as classifiers.[105]
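A k-nearest-neighbor classifier of the kind described above fits in a few lines; the 2-D points and labels below are invented:

```python
from collections import Counter

# k-nearest-neighbor sketch: classify a point by majority vote among the
# k closest labeled examples. The 2-D points and labels are invented.

data = [
    ((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
    ((4.0, 4.0), "blue"), ((4.2, 3.9), "blue"), ((3.8, 4.1), "blue"),
]

def classify(point, k=3):
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(data, key=lambda item: sq_dist(item[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

A new observation is simply assigned the majority label of its three nearest labeled neighbors, so no explicit model is ever built from the data set.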
An artificial neural network is based on a collection of nodes, also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function, and once its weighted input crosses a specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least two hidden layers.[105]
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm.[106] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[107]
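The idea can be illustrated with the smallest possible case: a single sigmoid neuron trained by gradient descent (a one-layer special case of backpropagation) to learn the AND function. The learning rate and epoch count are arbitrary choices:

```python
import math

# A single sigmoid neuron trained by gradient descent to learn AND;
# the learning rate and epoch count are arbitrary choices.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Truth table for AND: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0   # weights and bias, initialised to zero
lr = 0.5            # learning rate

for _ in range(10000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # gradient of the squared error w.r.t. the pre-activation (chain rule)
        g = (out - target) * out * (1 - out)
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
```

Full backpropagation applies the same chain-rule gradient computation layer by layer through a deep network rather than to a single neuron.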
In feedforward neural networks the signal passes in only one direction.[108] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks.[109] Perceptrons[110] use only a single layer of neurons; deep learning[111] uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other; this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.[112]
Deep learning[111] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[113]
Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification,[114] and others. The reason that deep learning performs so well in so many applications was, as of 2021, not known.[115] The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i] but because of two factors: the enormous increase in computer power (including the hundred-fold increase in speed from switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.[j]
Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pre-trained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations". These can be reduced with RLHF and quality data, but the problem has been getting worse for reasoning systems.[123] Such systems are used in chatbots, which allow people to ask a question or request a task in simple text.[124][125]
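Next-token prediction can be illustrated, in an extremely simplified form, with a bigram model that counts which token follows which in a tiny corpus and predicts the most frequent successor; real GPT models instead use deep transformer networks trained on vast corpora, but the prediction objective is the same in spirit:

```python
from collections import Counter, defaultdict

# A bigram "language model": count which token follows which in a tiny,
# invented corpus, then predict the most frequent successor of a token.
# Real GPT models use deep transformers over vast corpora instead.

corpus = "the cat sat on the mat the cat ran".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token):
    return successors[token].most_common(1)[0][0]
```

Since "cat" follows "the" more often than "mat" does in this corpus, `predict_next("the")` returns "cat"; generating text is then just repeating this prediction step token by token.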
Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA.[126] Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.[127]
In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing units (CPUs) as the dominant means for training large-scale (commercial and academic) machine learning models.[128] Specialized programming languages such as Prolog were used in early AI research,[129] but general-purpose programming languages like Python have become predominant.[130]
The transistor density in integrated circuits has been observed to roughly double every 18 months, a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster,[131] a trend sometimes called Huang's law,[132] named after Nvidia co-founder and CEO Jensen Huang.
AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's Photos and TikTok). The deployment of AI may be overseen by a chief automation officer (CAO).
The application of AI in medicine and medical research has the potential to increase patient care and quality of life.[133] Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.[134][135]
For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development, which use microscopy imaging as a key technique in fabrication.[136] It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[136][137] New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.[138] In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[139] In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.[140][141]
Game-playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques.[142] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[143] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[144] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world.[145] Other programs handle imperfect-information games, such as the poker-playing program Pluribus.[146] DeepMind developed increasingly generalistic reinforcement learning models, such as MuZero, which could be trained to play chess, Go, or Atari games.[147] In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.[148] In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.[149] In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.[150]
Large language models, such as GPT-4, Gemini, Claude, LLaMa or Mistral, are increasingly used in mathematics. These probabilistic models are versatile, but can also produce wrong answers in the form of hallucinations. They sometimes need a large database of mathematical problems to learn from, but also methods such as supervised fine-tuning[151] or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections.[152] A February 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data.[153] One technique to improve their performance involves training the models to produce correct reasoning steps, rather than just the correct result.[154] The Alibaba Group developed a version of its Qwen models called Qwen2-Math, which achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems.[155] In January 2025, Microsoft proposed the technique rStar-Math that leverages Monte Carlo tree search and step-by-step reasoning, enabling a relatively small language model like Qwen-7B to solve 53% of the AIME 2024 and 90% of the MATH benchmark problems.[156]
Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome, including proof of theorems, have been developed, such as AlphaTensor, AlphaGeometry and AlphaProof, all from Google DeepMind,[157] Llemma from EleutherAI[158] or Julius.[159]
When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such as Lean to define mathematical tasks.
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[160]
Topological deep learning integrates various topological approaches.
Finance is one of the fastest-growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.[161]
According to Nicolas Firzli, director of the World Pensions & Investments Forum, it may be too early to see the emergence of highly innovative AI-informed financial products and services. He argues that "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."[162]
Various countries are deploying AI military applications.[163] The main applications enhance command and control, communications, sensors, integration and interoperability.[164] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.[163] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles, both human-operated and autonomous.[164]
AI has been used in military operations in Iraq, Syria, Israel and Ukraine.[163][165][166][167]
Generative artificial intelligence (Generative AI, GenAI,[168] or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.[169][170][171] These models learn the underlying patterns and structures of their training data and use them to produce new data[172][173] based on the input, which often comes in the form of natural language prompts.[174][175]
Generative AI tools have become more common since an "AI boom" in the 2020s. This boom was made possible by improvements in transformer-based deep neural networks, particularly large language models (LLMs). Major tools include chatbots such as ChatGPT, DeepSeek, Copilot, Gemini, Llama, and Grok; text-to-image artificial intelligence image generation systems such as Stable Diffusion, Midjourney, and DALL-E; and text-to-video AI generators such as Sora.[176][177][178][179] Technology companies developing generative AI include OpenAI, Anthropic, Microsoft, Google, DeepSeek, and Baidu.[180][181][182]
Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[186][187][188]
Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer predictions,[189] AI-integrated sex toys (e.g., teledildonics),[190] AI-generated sexual education content,[191] and AI agents that simulate sexual and romantic partners (e.g., Replika).[192] AI is also used for the production of non-consensual deepfake pornography, raising significant ethical and legal concerns.[193]
AI technologies have also been used to attempt to identify online gender-based violence and online sexual grooming of minors.[194][195]
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.[196] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.
AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.[197][198][199]
In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increased yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.[200]
AI has potential benefits and potential risks.[201] AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".[202] However, as the use of AI has become widespread, several unintended consequences and risks have been identified.[203] In-production systems can sometimes not factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.[204]
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.
Sensitive user data collected may include online activity records, geolocation data, video, or audio.[205] For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.[206] Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.[207]
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.[208] Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."[209]
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[210][211] Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file.[212] In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.[213][214] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[215]
The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft.[216][217][218] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[219][220]
In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use.[221] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to the electricity used by the whole of Japan.[222]
Prodigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[223]
A 2024Goldman SachsResearch Paper,AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[224]Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[225]
In 2024, theWall Street Journalreported that big AI companies have begun negotiations with US nuclear power providers to supply electricity to their data centers. In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for $650 million (US).[226]NvidiaCEOJen-Hsun Huangsaid nuclear power is a good option for the data centers.[227]
In September 2024,Microsoftannounced an agreement withConstellation Energyto re-open theThree Mile Islandnuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the USNuclear Regulatory Commission. If approved (this would be the first-ever US re-commissioning of a nuclear plant), the plant will produce over 835 megawatts of power, enough for 800,000 homes. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 USInflation Reduction Act.[228]The US government and the state of Michigan are investing almost $2 billion (US) to reopen thePalisades Nuclearreactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO ofExelonwho was responsible for the Exelon spinoff of Constellation.[229]
After the last approval in September 2023,Taiwansuspended the approval of data centers north ofTaoyuanwith a capacity of more than 5 MW in 2024, due to power supply shortages.[230]Taiwan aims tophase out nuclear powerby 2025.[230]On the other hand,Singaporeimposed a ban on the opening of new data centers in 2019 due to electric power constraints, but lifted this ban in 2022.[230]
Although most nuclear plants in Japan have been shut down since the 2011Fukushima nuclear accident, according to an October 2024Bloombergarticle in Japanese, the cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new generative-AI data center.[231]Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheapest and most stable source of power for AI.[231]
On 1 November 2024, theFederal Energy Regulatory Commission(FERC) rejected an application submitted byTalen Energyfor approval to supply some electricity from the nuclear power stationSusquehannato Amazon's data center.[232]According to the Commission ChairmanWillie L. Phillips, it is a burden on the electricity grid as well as a significant cost shifting concern to households and other business sectors.[232]
In 2025, a report prepared by the International Energy Agency estimated thegreenhouse gas emissionsfrom the energy consumption of AI at 180 million tonnes. By 2035, these emissions could rise to 300–500 million tonnes, depending on what measures are taken. This is below 1.5% of the energy sector's emissions. The emissions reduction potential of AI was estimated at 5% of the energy sector's emissions, butrebound effects(for example, if people switch from public transport to autonomous cars) could reduce it.[233]
YouTube,Facebookand others userecommender systemsto guide users to more content. These AI programs were given the goal ofmaximizinguser engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choosemisinformation,conspiracy theories, and extremepartisancontent, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people intofilter bubbleswhere they received multiple versions of the same misinformation.[234]This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[235]The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.[236]
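The mechanism described above can be illustrated with a toy simulation; the content categories and click-through rates below are invented for illustration, not measured data. An epsilon-greedy recommender that is rewarded only for clicks ends up concentrating its recommendations on whichever category draws the most engagement:

```python
import random

random.seed(0)

# Hypothetical click-through rates: extreme content engages more.
click_rate = {"balanced": 0.05, "extreme": 0.12}
counts = {arm: 0 for arm in click_rate}
clicks = {arm: 0 for arm in click_rate}

def estimate(arm):
    # Empirical click rate; optimistic for arms never tried.
    return clicks[arm] / counts[arm] if counts[arm] else float("inf")

for _ in range(5000):
    if random.random() < 0.1:              # explore occasionally
        arm = random.choice(list(click_rate))
    else:                                  # otherwise exploit the best estimate
        arm = max(click_rate, key=estimate)
    counts[arm] += 1
    if random.random() < click_rate[arm]:  # did the simulated user click?
        clicks[arm] += 1

# The system, told only to maximize engagement, now recommends
# the "extreme" category the vast majority of the time.
print(counts)
```

The simulation makes no judgment about content quality: the recommender drifts toward extreme material purely because it is the click-maximizing choice, which is the failure mode described above.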
In 2022,generative AIbegan to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[237]One such potential malicious use is deepfakes forcomputational propaganda.[238]AI pioneerGeoffrey Hintonexpressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[239]
AI researchers atMicrosoft,OpenAI, universities and other organisations have suggested using "personhood credentials" as a way to overcome online deception enabled by AI models.[240]
Machine learning applications will bebiased[k]if they learn from biased data.[242]The developers may not be aware that the bias exists.[243]Bias can be introduced by the waytraining datais selected and by the way a model is deployed.[244][242]If a biased algorithm is used to make decisions that can seriouslyharmpeople (as it can inmedicine,finance,recruitment,housingorpolicing) then the algorithm may causediscrimination.[245]The field offairnessstudies how to prevent harms from algorithmic biases.
On June 28, 2015,Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people,[246]a problem called "sample size disparity".[247]Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[248]
COMPASis a commercial program widely used byU.S. courtsto assess the likelihood of adefendantbecoming arecidivist. In 2016,Julia AngwinatProPublicadiscovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different: the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend.[249]In 2017, several researchers[l]showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.[251]
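The impossibility result can be checked with a few lines of arithmetic on invented confusion matrices (these numbers are illustrative, not COMPAS data): when two groups have different base rates of re-offense, a score can be equally calibrated for both groups and still produce different false-positive rates.

```python
def rates(tp, fp, fn, tn):
    ppv = tp / (tp + fp)                    # calibration of the "high risk" label
    fpr = fp / (fp + tn)                    # non-re-offenders wrongly flagged
    base = (tp + fn) / (tp + fp + fn + tn)  # group's true re-offense rate
    return ppv, fpr, base

# Group A: 600 of 1000 re-offend; Group B: 300 of 1000 re-offend.
ppv_a, fpr_a, base_a = rates(tp=480, fp=120, fn=120, tn=280)
ppv_b, fpr_b, base_b = rates(tp=240, fp=60, fn=60, tn=640)

assert ppv_a == ppv_b == 0.8  # identically calibrated for both groups...
print(fpr_a, fpr_b)           # ...yet false-positive rates differ: 0.3 vs ~0.086
```

With different base rates (0.6 vs 0.3 here), equal calibration forces the higher-base-rate group to absorb a higher false-positive rate, which is the shape of the disparity ProPublica reported.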
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[252]Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[253]
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions asrecommendations, some of these "recommendations" will likely be racist.[254]Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will bebetterthan the past. It is descriptive rather than prescriptive.[m]
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.[247]
There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category isdistributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negativestereotypesor render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict withanti-discrimination laws.[241]
At its 2022Conference on Fairness, Accountability, and Transparency(ACM FAccT 2022), theAssociation for Computing Machinery, in Seoul, South Korea, presented and published findings that recommend that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[256]
Many AI systems are so complex that their designers cannot explain how they reach their decisions.[257]This is particularly true ofdeep neural networks, which involve a large number of non-linear relationships between inputs and outputs. Nevertheless, some popular explainability techniques exist.[258]
It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with aruleras "cancerous", because pictures of malignancies typically include a ruler to show the scale.[259]Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[260]
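This failure mode, sometimes called shortcut learning, is easy to reproduce with made-up data. In the toy dataset below (entirely invented, with a ruler standing in for the artifact), the single most accurate training-time rule keys on the artifact rather than the medically relevant feature, and then fails on new photos taken without rulers:

```python
# Invented training photos: (has_ruler, irregular_border, malignant).
# In this toy set, malignant images always contain a ruler for scale.
train = [
    (1, 1, 1), (1, 1, 1), (1, 0, 1), (1, 1, 1),
    (0, 0, 0), (0, 1, 0), (0, 0, 0), (0, 0, 0),
]
# New clinic photos, taken without rulers.
clinic = [(0, 1, 1), (0, 1, 1), (0, 0, 0), (0, 1, 1)]

def accuracy(data, feature):
    # Accuracy of the one-feature rule "predict the feature's value".
    return sum(x[feature] == y for *x, y in data) / len(data)

# Pick the single feature that best fits the training data.
best_feature = max((0, 1), key=lambda f: accuracy(train, f))

print("chosen feature:", ["has_ruler", "irregular_border"][best_feature])
print("training accuracy:", accuracy(train, best_feature))  # perfect in training
print("clinic accuracy:", accuracy(clinic, best_feature))   # collapses in the clinic
```

The ruler rule is perfectly accurate on the training set, so any accuracy-maximizing learner will prefer it; the correlation is real in the data but misleading, exactly as in the skin-cancer example above.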
People who have been harmed by an algorithm's decision have a right to an explanation.[261]Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union'sGeneral Data Protection Regulationin 2016 included an explicit statement that this right exists.[n]Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[262]
DARPAestablished theXAI("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.[263]
Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[264]LIME can locally approximate a model's outputs with a simpler, interpretable model.[265]Multitask learningprovides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.[266]Deconvolution,DeepDreamand othergenerativemethods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[267]Forgenerative pre-trained transformers,Anthropicdeveloped a technique based ondictionary learningthat associates patterns of neuron activations with human-understandable concepts.[268]
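As a sketch of the attribution idea behind methods like SHAP (a brute-force illustration of Shapley values, not the shap library's API), each feature's attribution is its average marginal contribution over all orders in which features are revealed, and the attributions sum to the difference between the model's output and a baseline output:

```python
from itertools import permutations

def model(x):
    # Toy model whose output we want to attribute to its three inputs.
    return 3 * x[0] + 2 * x[1] - x[2]

def shapley_values(f, x, baseline):
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)            # start from the baseline input
        for i in order:               # reveal features one at a time
            before = f(z)
            z[i] = x[i]
            phi[i] += f(z) - before   # marginal contribution of feature i
    return [v / len(perms) for v in phi]

phi = shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0])
print(phi)  # → [3.0, 2.0, -1.0]

# Attributions account exactly for the change from baseline to actual output.
assert abs(sum(phi) - (model([1, 1, 1]) - model([0, 0, 0]))) < 1e-9
```

This exhaustive computation is only feasible for a handful of features; practical tools approximate it by sampling.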
Artificial intelligence provides a number of tools that are useful tobad actors, such asauthoritarian governments,terrorists,criminalsorrogue states.
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o]Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentiallyweapons of mass destruction.[270]Even when used in conventional warfare, they currently cannot reliably choose targets and could potentiallykill an innocent person.[270]In 2014, 30 nations (including China) supported a ban on autonomous weapons under theUnited Nations'Convention on Certain Conventional Weapons; however, theUnited Statesand others disagreed.[271]By 2015, over fifty countries were reported to be researching battlefield robots.[272]
AI tools make it easier forauthoritarian governmentsto efficiently control their citizens in several ways.Faceandvoice recognitionallow widespreadsurveillance.Machine learning, operating on this data, canclassifypotential enemies of the state and prevent them from hiding.Recommendation systemscan precisely targetpropagandaandmisinformationfor maximum effect.Deepfakesandgenerative AIaid in producing misinformation. Advanced AI can make authoritariancentralized decision makingmore competitive than liberal and decentralized systems such asmarkets. It lowers the cost and difficulty ofdigital warfareandadvanced spyware.[273]All these technologies have been available since 2020 or earlier—AIfacial recognition systemsare already being used formass surveillancein China.[274][275]
There are many other ways that AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[276]
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[277]
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[278]A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-termunemployment, but they generally agree that it could be a net benefit ifproductivitygains areredistributed.[279]Risk estimates vary; for example, in the 2010s, Michael Osborne andCarl Benedikt Freyestimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[p][281]The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[277]In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[282][283]
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence;The Economiststated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[284]Jobs at extreme risk range fromparalegalsto fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[285]
From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward byJoseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.[286]
It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicistStephen Hawkingstated, "spell the end of the human race".[287]This scenario has been common in science fiction, in which a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q]These sci-fi scenarios are misleading in several ways.
First, AI does not require human-likesentienceto be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. PhilosopherNick Bostromargued that if one givesalmost anygoal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of apaperclip factory manager).[289]Stuart Russellgives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[290]In order to be safe for humanity, asuperintelligencewould have to be genuinelyalignedwith humanity's morality and values so that it is "fundamentally on our side".[291]
Second,Yuval Noah Harariargues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things likeideologies,law,government,moneyand theeconomyare built onlanguage; they exist because there are stories that billions of people believe. The current prevalence ofmisinformationsuggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[292]
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[293]Personalities such asStephen Hawking,Bill Gates, andElon Musk,[294]as well as AI pioneers such asYoshua Bengio,Stuart Russell,Demis Hassabis, andSam Altman, have expressed concerns about existential risk from AI.
In May 2023,Geoffrey Hintonannounced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".[295]He notably mentioned risks of anAI takeover,[296]and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[297]
In 2023, many leading AI experts endorsedthe joint statementthat "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[298]
Some other researchers were more optimistic. AI pioneerJürgen Schmidhuberdid not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[299]While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[300][301]Andrew Ngalso argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[302]Yann LeCun"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[303]In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine.[304]However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[305]
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans.Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[306]
Machines with intelligence have the potential to use that intelligence to make ethical decisions. The field of machine ethics, also called computational morality,[307]provides machines with ethical principles and procedures for resolving ethical dilemmas;[307]it was founded at anAAAIsymposium in 2005.[308]
Other approaches includeWendell Wallach's "artificial moral agents"[309]andStuart J. Russell'sthree principlesfor developing provably beneficial machines.[310]
Active organizations in the AI open-source community includeHugging Face,[311]Google,[312]EleutherAIandMeta.[313]Various AI models, such asLlama 2,MistralorStable Diffusion, have been made open-weight,[314][315]meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freelyfine-tuned, which allows companies to specialize them with their own data and for their own use-case.[316]Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitatebioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[317]
Artificial Intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by theAlan Turing Instituteand based on the SUM values, outlines four main ethical dimensions: respect the dignity of individual people; connect with other people sincerely, openly, and inclusively; care for the wellbeing of everyone; and protect social values, justice, and the public interest.[318][319]
Other developments in ethical frameworks include those decided upon during theAsilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[320]however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.[321]
Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[322]
In 2024, theUK AI Safety Institutereleased a testing toolset called 'Inspect' for AI safety evaluations, available under an MIT open-source licence; it is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[323]
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[324]The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[325]According to AI Index atStanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[326][327]Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[328]Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[328]TheGlobal Partnership on Artificial Intelligencewas launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[328]Henry Kissinger,Eric Schmidt, andDaniel Huttenlocherpublished a joint statement in November 2021 calling for a government commission to regulate AI.[329]In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[330]In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.[331]In 2024, theCouncil of Europecreated the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[332]
In a 2022Ipsossurvey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[326]A 2023Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[333]In a 2023Fox Newspoll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[334][335]
In November 2023, the first globalAI Safety Summitwas held inBletchley Parkin the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[336]28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[337][338]In May 2024 at theAI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[339][340]
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly toAlan Turing'stheory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning.[342][343]This, along with concurrent discoveries incybernetics,information theoryandneurobiology, led researchers to consider the possibility of building an "electronic brain".[r]They developed several areas of research that would become part of AI,[345]such asMcCullochandPitts's design for "artificial neurons" in 1943,[116]and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced theTuring testand showed that "machine intelligence" was plausible.[346][343]
The field of AI research was founded ata workshopatDartmouth Collegein 1956.[s][6]The attendees became the leaders of AI research in the 1960s.[t]They and their students produced programs that the press described as "astonishing":[u]computers were learningcheckersstrategies, solving word problems in algebra, provinglogical theoremsand speaking English.[v][7]Artificial intelligence laboratories were set up at a number of British and U.S. universities in the late 1950s and early 1960s.[343]
Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine withgeneral intelligenceand considered this the goal of their field.[350]In 1965Herbert Simonpredicted, "machines will be capable, within twenty years, of doing any work a man can do".[351]In 1967Marvin Minskyagreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[352]They had, however, underestimated the difficulty of the problem.[w]In 1974, both the U.S. and British governments cut off exploratory research in response to thecriticismofSir James Lighthill[354]and ongoing pressure from the U.S. Congress tofund more productive projects.[355]Minsky's andPapert's bookPerceptronswas understood as proving thatartificial neural networkswould never be useful for solving real-world tasks, thus discrediting the approach altogether.[356]The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[9]
In the early 1980s, AI research was revived by the commercial success ofexpert systems,[357]a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan'sfifth generation computerproject inspired the U.S. and British governments to restore funding foracademic research.[8]However, beginning with the collapse of theLisp Machinemarket in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[10]
Up to this point, most of AI's funding had gone to projects that used high-levelsymbolsto representmental objectslike plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especiallyperception,robotics,learningandpattern recognition,[358]and began to look into "sub-symbolic" approaches.[359]Rodney Brooksrejected "representation" in general and focussed directly on engineering machines that move and survive.[x]Judea Pearl,Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[87][364]But the most important development was the revival of "connectionism", including neural network research, byGeoffrey Hintonand others.[365]In 1990,Yann LeCunsuccessfully showed thatconvolutional neural networkscan recognize handwritten digits, the first of many successful applications of neural networks.[366]
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such asstatistics,economicsandmathematics).[367]By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as theAI effect).[368]However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield ofartificial general intelligence(or "AGI"), which had several well-funded institutions by the 2010s.[68]
Deep learningbegan to dominate industry benchmarks in 2012 and was adopted throughout the field.[11]For many specific tasks, other methods were abandoned.[y]Deep learning's success was based on both hardware improvements (faster computers,[370]graphics processing units,cloud computing[371]) and access tolarge amounts of data[372](including curated datasets,[371]such asImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z]The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[328]
In 2016, issues offairnessand the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. Thealignment problembecame a serious field of academic study.[305]
In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat Lee Sedol, a world champion Go player; its successor, AlphaGo Zero, was taught only the game's rules and developed a strategy entirely by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[373] ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months.[374] It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness.[375] These programs, and others, inspired an aggressive AI boom, in which large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone, and about 20% of new U.S. computer science PhD graduates have specialized in "AI".[376] About 800,000 "AI"-related U.S. job openings existed in 2022.[377] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.[378]
Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines.[379] Another major focus has been whether machines can be conscious, and the associated ethical implications.[380] Many other topics in philosophy are relevant to AI, such as epistemology and free will.[381] Rapid advancements have intensified public discussions on the philosophy and ethics of AI.[380]
Alan Turing wrote in 1950, "I propose to consider the question, 'Can machines think?'"[382] He advised changing the question from whether a machine "thinks" to "whether or not it is possible for machinery to show intelligent behaviour".[382] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[346] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we can not determine these things about other people, but "it is usual to have a polite convention that everyone thinks."[383]
Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"[385] AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[386]
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[387] Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems".[388] The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine; no other philosophical discussion is required, and may not even be possible.
Another definition has been adopted by Google,[389] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.
Some authors have suggested that, in practice, the definition of AI is vague and difficult to pin down, with contention as to whether classical algorithms should be categorised as AI,[390] and with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did "not actually use AI in a material way".[391]
No established unifying theory or paradigm has guided AI research for most of its history.[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.
Symbolic AI (or "GOFAI")[393] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[394]
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.[395] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[396] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence,[398][399] in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
"Neats" hope that intelligent behavior can be described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor; scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[400] but eventually was seen as irrelevant. Modern AI has elements of both.
Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks.
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[401][402] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.
It is an open question in the philosophy of mind whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[403] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[404] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels, or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.[405]
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.[406]
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[ac] Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.[410]
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[411] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[412][413] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[412] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.[414]
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[415] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.[416][417]
Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[413][412]
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[402] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".[418]
However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[419]
Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger.[420]
Edward Fredkin argues that "artificial intelligence is the next step in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.[421]
Dan McQuillan (Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022) has raised arguments for "decomputing", meaning opposition to the sweeping application and expansion of artificial intelligence. Similar to degrowth, the approach criticizes AI as an outgrowth of the systemic issues and capitalist world we live in. It argues that a different future is possible, in which distance between people is reduced rather than increased through AI intermediaries.[422]
Thought-capable artificial beings have appeared as storytelling devices since antiquity,[423] and have been a persistent theme in science fiction.[424]
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[425]
Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[426] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[427]
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[428]
The two most widely used textbooks in 2023 (see the Open Syllabus):
The four most widely used AI textbooks in 2008:
Other textbooks:
The Art of Grammar (Greek: Τέχνη Γραμματική, romanized: Téchnē Grammatikḗ) is a treatise on Greek grammar, attributed to Dionysius Thrax, who wrote in the 2nd century BC.
It is the first work on grammar in Greek, and also the first concerning a Western language. It sought mainly to help speakers of Koine Greek understand the language of Homer and other great poets of the past.[1] It became a source for how ancient texts should be recited, based on the experience of commonly read ancient authors.[2] It identifies six parts of grammar, including trained reading with attention to the dialect of particular poetic figures.[3] It presents a nine-part word classification system, which departed from the earlier eight-part classification system.[4] It describes morphological structure as containing no middle diathesis.[5] There is no morphological analysis, and the text uses the Word and Paradigm model.[6][7]
It was translated into Syriac by Joseph Huzaya of the school of Nisibis in the 6th century.[8] It was also translated into Armenian.[9]
Linguistic prescription[a] is the establishment of rules defining publicly preferred usage of language,[1][2] including rules of spelling, pronunciation, vocabulary, grammar, etc. Linguistic prescriptivism may aim to establish a standard language, teach what a particular society or sector of a society perceives as a correct or proper form, or advise on effective and stylistically apt communication. If usage preferences are conservative, prescription might appear resistant to language change; if radical, it may produce neologisms.[3] Such prescriptions may be motivated by consistency (making a language simpler or more logical); rhetorical effectiveness; tradition; aesthetics or personal preferences; linguistic purism or nationalism (i.e. removing foreign influences);[4] or the desire to avoid causing offense (etiquette or political correctness).[5]
Prescriptive approaches to language are often contrasted with the descriptive approach of academic linguistics, which observes and records how language is actually used (while avoiding passing judgment).[6][7] The basis of linguistic research is text (corpus) analysis and field study, both of which are descriptive activities. Description may also include researchers' observations of their own language usage. In the Eastern European linguistic tradition, the discipline dealing with standard language cultivation and prescription is known as "language culture" or "speech culture".[8][9]
Despite being apparent opposites, prescriptive and descriptive approaches have a certain degree of conceptual overlap,[10] as comprehensive descriptive accounts must take into account and record existing speaker preferences, and a prior understanding of how language is actually used is necessary for prescription to be effective. Since the mid-20th century some dictionaries and style guides, which are prescriptive works by nature, have increasingly integrated descriptive material and approaches. Examples of guides updated to add more descriptive material include Webster's Third New International Dictionary (1961) and the third edition of Garner's Modern English Usage (2009) in English, or the Nouveau Petit Robert (1993)[11] in French. A partially descriptive approach can be especially useful when approaching topics of ongoing conflict between authorities, or in different dialects, disciplines, styles, or registers. Other guides, such as The Chicago Manual of Style, are designed to impose a single style and thus remain primarily prescriptive (as of 2017).
Some authors define "prescriptivism" as the concept where a certain language variety is promoted as linguistically superior to others, thus recognizing the standard language ideology as a constitutive element of prescriptivism or even identifying prescriptivism with this system of views.[12][13] Others, however, use this term in relation to any attempts to recommend or mandate a particular way of language usage (in a specific context or register), without, however, implying that these practices must involve propagating the standard language ideology.[14][15] According to another understanding, the prescriptive attitude is an approach to norm-formulating and codification that involves imposing arbitrary rulings upon a speech community,[16] as opposed to more liberal approaches that draw heavily from descriptive surveys;[17][18] in a wider sense, however, the latter also constitute a form of prescriptivism.[8]
Mate Kapović makes a distinction between "prescription" and "prescriptivism", defining the former as "a process of codification of a certain variety of language for some sort of official use", and the latter as "an unscientific tendency to mystify linguistic prescription".[19]
Linguistic prescription is a part of a language standardization process.[20] The chief aim of linguistic prescription is to specify socially preferred language forms (either generally, as in Standard English, or in style and register) in a way that is easily taught and learned.[21] Prescription may apply to most aspects of language, including spelling, pronunciation, vocabulary, morphology, syntax, and semantics.
Prescription is useful for facilitating inter-regional communication, allowing speakers of divergent dialects to understand a standardized idiom used in broadcasting, for example, more readily than each other's dialects. While such a lingua franca may evolve by itself, the tendency to formally codify and normalize it is widespread in most parts of the world. Foreign language instruction is also considered a form of prescription, since it involves instructing learners how to speak, based on usage documentation laid down by others.[22]
Linguistic prescription may also be used to advance a social or political ideology. Throughout history, prescription has been built around high-class language, and therefore it delegitimizes lower-class language. This has led to many justifications of classism, as the lower class can easily be portrayed as incoherent and improper if they do not speak the standard language. This also corresponds to the use of prescription for racism, as dialects spoken by what is seen as the superior race are usually standardized in countries with prominent racism. A good example of this is the demeaning of AAVE in the United States, as the idea that the "lower race" speaks improperly is propagated by people with an opposing ideology.[23] Later, during the second half of the 20th century, efforts driven by various advocacy groups had considerable influence on language use under the broad banner of "political correctness", to promote special rules for anti-sexist, anti-racist, or generically anti-discriminatory language (e.g. "people-first language" as advocated by disability rights organizations).
Prescription presupposes authorities whose judgments may come to be followed by many other speakers and writers. For English, these authorities tend to be books. H. W. Fowler's Modern English Usage was widely taken as an authority for British English for much of the 20th century;[24] Strunk and White's The Elements of Style has done similarly for American English. The Duden grammar (first edition 1880) has a similar status for German.
Although lexicographers often see their work as purely descriptive, dictionaries are widely regarded as prescriptive authorities.[25] Books such as Lynne Truss's Eats, Shoots & Leaves (2003), which argues for stricter adherence to prescriptive punctuation rules, also seek to exert an influence.
Linguistic prescription is imposed by regulation in some places. The French Academy in Paris is the national body in France whose recommendations about the French language are often followed in the French-speaking world (francophonie), though they are not legally enforceable. In Germany and the Netherlands, recent spelling and punctuation reforms, such as the German orthographic reform of 1996, were devised by teams of linguists commissioned by the respective governments and then implemented by statute; some were met with widespread dissent.
Examples of national prescriptive bodies and initiatives are:
Other kinds of authorities exist in specific settings, most commonly in the form of style guidebooks (also called style guides, manuals of style, style books, or style sheets). Style guides vary in form, and may be alphabetical usage dictionaries, comprehensive manuals divided into numerous subsections by facet of language, or very compact works insistent upon only a few matters of particular importance to the publisher. Some aim to be comprehensive only for a specific field, deferring to more general-audience guides on matters that are not particular to the discipline in question. There are different types of style guides, by purpose and audience. Because the genres of writing and the audiences of each manual are different, style manuals often conflict with each other, even within the same vernacular of English.
Many publishers have established an internal house style specifying preferred spellings and grammatical forms, such as serial commas, how to write acronyms, and various awkward expressions to avoid. Most of these are internal documentation for the publisher's staff, though various newspapers, universities, and other organizations have made theirs available for public inspection, and sometimes even sell them as books, e.g. The New York Times Manual of Style and Usage and The Economist Style Guide.
In a few cases, an entire publishing sector complies with a publication that originated as a house style manual, such as The Chicago Manual of Style and New Hart's Rules in non-fiction book publishing in the United States and the United Kingdom, respectively, and The Associated Press Stylebook in American news style. Others are by self-appointed advocates whose rules are propagated in the popular press, as in "proper Cantonese pronunciation". The aforementioned Fowler, and Strunk & White, were among the self-appointed, as are some modern authors of style works, like Bryan A. Garner and his Modern English Usage (formerly Modern American Usage).
Various style guides are used for academic papers and professional journals and have become de facto standards in particular fields, though the bulk of their material pertains to the formatting of source citations (in mutually conflicting ways). Some examples are those issued by the American Medical Association, the Modern Language Association, and the Modern Humanities Research Association; there are many others. Scientific Style and Format, by the Council of Science Editors, seeks to normalize style in scientific journal publishing, based where possible on standards issued by bodies like the International Organization for Standardization.
None of these works has any sort of legal or regulatory authority (though some governments produce their own house style books for internal use). They still have authority in the sense that a student may be marked down for failure to follow a specified style manual; a professional publisher may enforce compliance; and a publication may require its employees to use house style as a matter of on-the-job competence. A well-respected style guide, usually one intended for a general audience, may also have the kind of authority that a dictionary does: readers consult it as a reference work to satisfy personal curiosity or to settle an argument.
Historically, linguistic prescriptivism originates in a standard language when a society establishes social stratification and a socio-economic hierarchy. The spoken and written language usages of the authorities (state, military, church) are preserved as the standard language. Departures from this standard language may jeopardize social success (see social class). Sometimes, archaisms and honorific stylizations may be deliberately introduced or preserved to distinguish the prestige form of the language from contemporary colloquial language. Likewise, the style of language used in ritual also differs from everyday speech.[32] Special ceremonial languages known only to a select few spiritual leaders are found throughout the world; Liturgical Latin has served a similar function for centuries.
When a culture develops a writing system, orthographic rules for the consistent transcription of culturally important transactions (laws, scriptures, contracts, poetry, etc.) allow a large number of discussants to understand written conversations easily, and across multiple generations.
Early historical trends in literacy and alphabetization were closely tied to the influence of various religious institutions. Western Christianity propagated the Latin alphabet. Eastern Orthodoxy spread the Greek and Cyrillic alphabets. Judaism used the Hebrew alphabet, and Islam the Arabic script. Hinduism used the Devanagari script.[33] In certain traditions, strict adherence to prescribed spellings and pronunciations was and remains of great spiritual importance. Islamic naming conventions and greetings are notable examples of linguistic prescription being a prerequisite to spiritual righteousness. Another commonly cited example of prescriptive language usage closely associated with social propriety is the system of Japanese honorific speech.
Most, if not all, widely spoken languages demonstrate some degree of social codification in how they conform to prescriptive rules. Linguistic prestige is a central research topic within sociolinguistics. Notions of linguistic prestige apply to different dialects of the same language and also to separate, distinct languages in multilingual regions. Prestige level disparity often leads to diglossia: speakers in certain social contexts consciously choose a prestige language or dialect over a less prestigious one, even if it is their native tongue.
Government bureaucracy tends toward prescriptivism as a means of enforcing functional continuity. Such prescriptivism dates from ancient Egypt, where bureaucrats preserved the spelling of the Middle Kingdom of Egypt into the Ptolemaic period through the standard usage of Egyptian hieroglyphics.[34]
From the earliest attempts at prescription in classical times, grammarians have based their norms on observed prestige use of language. Modern prescriptivist textbooks draw heavily on descriptive linguistic analysis.
Prescription may privilege some existing forms over others for the sake of maximizing clarity and precision in language use. Other prescriptions reflect subjective judgments of what constitutes good taste. Some reflect the promotion of one class or region within a language community over another, which can become politically controversial.
Prescription can also reflect ethical considerations, as in prohibiting swear words. Words referring to elements of sexuality or toilet hygiene may be regarded as obscene. Blasphemies against religion may be forbidden. In the 21st century, political correctness objects to the use of words perceived as offensive.[35]
Some elements of prescription in English are sometimes thought to have been based on the norms of Latin grammar. Robert Lowth is frequently cited as having done so, but he specifically objected to "forcing the English under the rules of a foreign Language".[36]
Prescriptivism is often subject to criticism. Many linguists, such as Geoffrey Pullum and other posters to Language Log, are highly skeptical of the quality of advice given in many usage guides, including highly regarded books like Strunk and White's The Elements of Style.[37] In particular, linguists point out that popular books on English usage written by journalists or novelists (e.g. Simon Heffer's Strictly English: The Correct Way to Write ... and Why It Matters) often make basic errors in linguistic analysis.[38][39]
A frequent criticism is that prescription has a tendency to favor the language of one particular area or social class over others, and thus militates against linguistic diversity.[40] Frequently, a standard dialect is associated with the upper class, for example the United Kingdom's Received Pronunciation (RP). RP has now lost much of its status as the Anglophone standard, and other standards are now alternative systems for English as a foreign language. Although these have a more democratic base, they still exclude the vast majority of the English-speaking world: speakers of Scottish English, Hiberno-English, Appalachian English, Australian English, Indian English, Nigerian English or African-American English may feel the standard is arbitrarily selected or slanted against them.[41][42] Therefore, prescription has political consequences; indeed, it can be—and has been—used consciously as a political tool.
A second issue with prescriptivism is that it tends to explicitly devalue non-standard dialects. It has been argued that prescription, apart from formulating standard language norms, often attempts to influence speakers to apply the proposed linguistic devices invariably, without considering the existence of different varieties and registers of language. While some linguists approve of the practical role of language standardization in modern nation states,[13][43] certain models of prescriptive codification have been criticized for going far beyond mere norm-setting, i.e. by promoting the sanctioned language variety as the only legitimate means of communication and presenting it as the only valid baseline of correctness, while stigmatizing non-standard usages as "mistakes".[44][45][13] Such practices have been said to contribute to perpetuating the belief that non-codified forms of language are innately inferior, creating social stigma and discrimination toward their speakers.[46][47] In contrast, modern linguists would generally hold that all forms of language, including both vernacular dialects and different realizations of a standardized variety, are scientifically equal as instruments of communication, even if deemed socially inappropriate for certain situational contexts.[48][49] Rooted in the standard language ideology, normative practices might also give rise to the conviction that explicit formal instruction is an essential prerequisite for acquiring proper command of one's native language, thus creating a massive feeling of linguistic insecurity.[50] Propagating such language attitudes is characteristic of prescriptivists in Eastern Europe, where normativist ideas of correctness can be found even among professional linguists.[50][51][52]
Another serious issue with prescription is that prescriptive rules quickly become entrenched and it is difficult to change them when the language changes. Thus, there is a tendency for prescription to lag behind the vernacular language. In 1834, an anonymous writer advised against the split infinitive, reasoning that the construction was not a frequent feature of English as he knew it. Today the construction is in everyday use and generally considered standard usage, yet the old prohibition can still be heard.[53]
A further problem is the challenge of specifying intelligible criteria. Although prescribing authorities may have clear ideas about why they make a particular choice, and their choices are seldom entirely arbitrary, there exists no linguistically sustainable metric for ascertaining which forms of language should be considered standard or otherwise preferable. Judgments that seek to resolve ambiguity or increase the ability of the language to make subtle distinctions are easier to defend. Judgments based on the subjective associations of a word are more problematic.[citation needed]
Finally, there is the problem of inappropriate dogmatism. Although competent authorities tend to make careful statements, popular pronouncements on language are apt to condemn. Thus, wise prescriptive advice identifying a form as colloquial or non-standard and suggesting that it be used with caution in some contexts may – when taken up in the classroom – become converted into a ruling that the dispreferred form is automatically unacceptable in all circumstances, a view academic linguists reject.[54][55] (Linguists may accept that a construction is ungrammatical or incorrect in relation to a certain lect if it does not conform to its inherent rules, but they would not consider it absolutely wrong simply because it diverges from the norms of a prestige variety.)[43] A classic example from 18th-century England is Robert Lowth's tentative suggestion that preposition stranding in relative clauses sounds colloquial. This blossomed into a grammatical rule that a sentence should never end with a preposition.[citation needed]
For these reasons, some writers argue that linguistic prescription is foolish or futile. Samuel Johnson commented on the tendency of some prescription to resist language change:
When we see men grow old and die at a certain time one after another, from century to century, we laugh at the elixir that promises to prolong life to a thousand years; and with equal justice may the lexicographer be derided, who, being able to produce no example of a nation that has preserved their words and phrases from mutability, shall imagine that his dictionary can embalm his language, and secure it from corruption and decay, that it is in his power to change sublunary nature, and clear the world at once from folly, vanity, and affectation.
With this hope, however, academies have been instituted, to guard the avenues of their languages, to retain fugitives, and repulse intruders; but their vigilance and activity have hitherto been vain; sounds remain too volatile and subtle for legal restraints; to enchain syllables, and to lash the wind, are equally the undertakings of pride, unwilling to measure its desires by its strength. The French language has visibly changed under the inspection of the academy; the stile of Amelot's translation of Father Paul is witnessed, by Pierre François le Courayer, to be un peu passé; and no Italian will maintain that the diction of any modern writer is not perceptibly different from that of Boccace, Machiavel, or Caro.

Source: https://en.wikipedia.org/wiki/Linguistic_prescription
A pedagogical grammar is a modern approach in linguistics intended to aid in teaching an additional language.
This method of teaching is divided into the descriptive (grammatical analysis) and the prescriptive (the articulation of a set of rules). Following an analysis of the context in which it is to be used, one grammatical form or arrangement of words will be determined to be the most appropriate, which helps in learning the grammar of foreign languages. Pedagogical grammars typically require rules that are definite, coherent, non-technical, cumulative and heuristic.[1] As the rules themselves accumulate, an axiomatic system is formed between the two languages that should then enable a native speaker of the first to learn the second.[2]
Source: https://en.wikipedia.org/wiki/Pedagogical_grammar
A regular verb is any verb whose conjugation follows the typical pattern, or one of the typical patterns, of the language to which it belongs. A verb whose conjugation follows a different pattern is called an irregular verb. This is one instance of the distinction between regular and irregular inflection, which can also apply to other word classes, such as nouns and adjectives.
In English, for example, verbs such as play, enter, and like are regular since they form their inflected parts by adding the typical endings -s, -ing and -ed to give forms such as plays, entering, and liked. On the other hand, verbs such as drink, hit and have are irregular since some of their parts are not made according to the typical pattern: drank and drunk (not "drinked"); hit (as past tense and past participle, not "hitted") and has and had (not "haves" and "haved").
The classification of verbs as regular or irregular is to some extent a subjective matter. If some conjugational paradigm in a language is followed by a limited number of verbs, or if it requires the specification of more than one principal part (as with the German strong verbs), views may differ as to whether the verbs in question should be considered irregular. Most inflectional irregularities arise as a result of a series of fairly uniform historical changes, so forms that appear to be irregular from a synchronic (contemporary) point of view may be seen as following more regular patterns when the verbs are analyzed from a diachronic (historical linguistic) viewpoint.
When a language develops some type of inflection, such as verb conjugation, it normally produces certain typical (regular) patterns by which words in the given class come to make their inflected forms. The language may develop a number of different regular patterns, either as a result of conditional sound changes which cause differentiation within a single pattern, or through patterns with different derivations coming to be used for the same purpose. An example of the latter is provided by the strong and weak verbs of the Germanic languages; the strong verbs inherited their method of making past forms (vowel ablaut) from Proto-Indo-European, while for the weak verbs a different method (addition of dental suffixes) developed.
Irregularities in verb conjugation (and other inflectional irregularities) may arise in various ways. Sometimes the result of multiple conditional and selective historical sound changes is to leave certain words following a practically unpredictable pattern. This has happened with the strong verbs (and some groups of weak verbs) in English; patterns such as sing–sang–sung and stand–stood–stood, although they derive from what were more or less regular patterns in older languages, are now peculiar to a single verb or small group of verbs in each case, and are viewed as irregular.
Irregularities may also arise from suppletion – forms of one verb may be taken over and used as forms of another. This has happened in the case of the English word went, which was originally the past tense of wend, but has come to be used instead as the past tense of go. The verb be also has a number of suppletive forms (be, is, was, etc., with various different origins) – this is common for copular verbs in Indo-European languages.
The regularity and irregularity of verbs is affected by changes taking place by way of analogy – there is often a tendency for verbs to switch to a different, usually more regular, pattern under the influence of other verbs. This is less likely when the existing forms are very familiar through common use – hence among the most common verbs in a language (like be, have, go, etc.) there is often a greater incidence of irregularity. (Analogy can occasionally work the other way, too – some irregular English verb forms such as shown, caught and spat have arisen through the influence of existing strong or irregular verbs.)[citation needed]
The most straightforward type of regular verb conjugation pattern involves a single class of verbs, a single principal part (the root or one particular conjugated form), and a set of exact rules which produce, from that principal part, each of the remaining forms in the verb's paradigm. This is generally considered to be the situation with regular English verbs – from the one principal part, namely the plain form of a regular verb (the bare infinitive, such as play, happen, skim, interchange, etc.), all the other inflected forms (which in English are not numerous; they consist of the third person singular present tense, the past tense and past participle, and the present participle/gerund form) can be derived by way of consistent rules. These rules involve the addition of inflectional endings (-s, -[e]d, -ing), together with certain morphophonological rules about how those endings are pronounced, and certain rules of spelling (such as the doubling of certain consonants). Verbs which in any way deviate from these rules (there are around 200 such verbs in the language) are classed as irregular.
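The idea of deriving every form from one principal part can be sketched in code. The following Python sketch is my own simplified illustration: the function names are invented, and the spelling heuristics cover only the common cases (final -e dropping, y → ie, consonant doubling for short CVC verbs), not the full range of English orthography.

```python
# Derive the -s, -ing and -ed forms of a regular English verb from its
# single principal part (the plain form). Simplified spelling rules only.

VOWELS = set("aeiou")

def third_person_s(verb: str) -> str:
    if verb.endswith(("s", "x", "z", "ch", "sh")):
        return verb + "es"          # pass -> passes, watch -> watches
    if verb.endswith("y") and verb[-2] not in VOWELS:
        return verb[:-1] + "ies"    # try -> tries, but play -> plays
    return verb + "s"

def doubles_final_consonant(verb: str) -> bool:
    # Crude CVC test for one-syllable verbs like "skim" -> "skimming".
    # Multi-syllable verbs with an unstressed final syllable, like
    # "enter" -> "entering", would need stress information.
    return (len(verb) >= 3
            and verb[-1] not in VOWELS and verb[-1] not in "wxy"
            and verb[-2] in VOWELS
            and verb[-3] not in VOWELS)

def present_participle(verb: str) -> str:
    if verb.endswith("e") and not verb.endswith("ee"):
        return verb[:-1] + "ing"    # like -> liking
    if doubles_final_consonant(verb):
        return verb + verb[-1] + "ing"
    return verb + "ing"

def past_form(verb: str) -> str:
    if verb.endswith("e"):
        return verb + "d"           # like -> liked
    if verb.endswith("y") and verb[-2] not in VOWELS:
        return verb[:-1] + "ied"    # try -> tried
    if doubles_final_consonant(verb):
        return verb + verb[-1] + "ed"
    return verb + "ed"

for v in ("play", "like", "skim", "try"):
    print(v, third_person_s(v), present_participle(v), past_form(v))
```

An irregular verb is then, by definition, any verb for which such rules produce the wrong output (e.g. past_form("drink") would yield "drinked" rather than drank).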
A language may have more than one regular conjugation pattern. French verbs, for example, follow different patterns depending on whether their infinitive ends in -er, -ir or -re (complicated slightly by certain rules of spelling). A verb which does not follow the expected pattern based on the form of its infinitive is considered irregular.
In some languages, however, verbs may be considered regular even if the specification of one of their forms is not sufficient to predict all of the rest; they have more than one principal part. In Latin, for example, verbs are considered to have four principal parts (see Latin conjugation for details). Specification of all of these four forms for a given verb is sufficient to predict all of the other forms of that verb – except in a few cases, when the verb is irregular.
To some extent it may be a matter of convention or subjective preference to state whether a verb is regular or irregular. In English, for example, if a verb is allowed to have three principal parts specified (the bare infinitive, past tense and past participle), then the number of irregular verbs will be drastically reduced (this is not the conventional approach, however). The situation is similar with the strong verbs in German (these may or may not be described as irregular). In French, what are traditionally called the "regular -re verbs" (those that conjugate like vendre) are not in fact particularly numerous, and may alternatively be considered to be just another group of similarly behaving irregular verbs. The most unambiguously irregular verbs are often very commonly used verbs such as the copular verb be in English and its equivalents in other languages, which frequently have a variety of suppletive forms and thus follow an exceptionally unpredictable pattern of conjugation.
It is possible for a verb to be regular in pronunciation, but irregular in spelling. Examples of this are the English verbs lay and pay. In terms of pronunciation, these make their past forms in the regular way, by adding the /d/ sound. However their spelling deviates from the regular pattern: they are not spelt (spelled) "layed" and "payed" (although the latter form is used in some e.g. nautical contexts, as in "the sailor payed out the anchor chain"), but laid and paid. This contrasts with fully regular verbs such as sway and stay, which have the regularly spelt past forms swayed and stayed. The English present participle is never irregular in pronunciation, with the exception that singeing irregularly retains the e to distinguish it from singing.
In linguistic analysis, the concept of regular and irregular verbs (and other types of regular and irregular inflection) commonly arises in psycholinguistics, and in particular in work related to language acquisition. In studies of first language acquisition (where the aim is to establish how the human brain processes its native language), one debate among 20th-century linguists revolved around whether small children learn all verb forms as separate pieces of vocabulary or whether they deduce forms by the application of rules.[1] Since a child can hear a regular verb for the first time and immediately reuse it correctly in a different conjugated form which he or she has never heard, it is clear that the brain does work with rules; but irregular verbs must be processed differently. A common error for small children is to conjugate irregular verbs as though they were regular, which is taken as evidence that we learn and process our native language partly by the application of rules, rather than, as some earlier scholarship had postulated, solely by learning the forms. In fact, children often use the most common irregular verbs correctly in their earliest utterances but then switch to incorrect regular forms for a time when they begin to operate systematically. That allows a fairly precise analysis of the phases of this aspect of first language acquisition.
Regular and irregular verbs are also of significance in second language acquisition, and in particular in language teaching and formal learning, where rules such as verb paradigms are defined, and exceptions (such as irregular verbs) need to be listed and learned explicitly. The importance of irregular verbs is enhanced by the fact that they often include the most commonly used verbs in the language (including verbs such as be and have in English, their equivalents être and avoir in French, sein and haben in German, etc.).
In historical linguistics the concept of irregular verbs is not so commonly referenced. Since most irregularities can be explained by processes of historical language development, these verbs are only irregular when viewed synchronically; they often appear regular when seen in their historical context. In the study of Germanic verbs, for example, historical linguists generally distinguish between strong and weak verbs, rather than irregular and regular (although occasional irregularities still arise even in this approach).
When languages are being compared informally, one of the few quantitative statistics which are sometimes cited is the number of irregular verbs. These counts are not particularly accurate for a wide variety of reasons, and academic linguists are reluctant to cite them. But it does seem that some languages have a greater tolerance for paradigm irregularity than others.
With the exception of the highly irregular verb be, an English verb can have up to five forms: its plain form (or bare infinitive), a third person singular present tense, a past tense (or preterite), a past participle, and the -ing form that serves as both a present participle and gerund.
The rules for the formation of the inflected parts of regular verbs are given in detail in the article on English verbs. In summary: the third person singular present adds -s (or -es), the past tense and past participle add -ed, and the present participle adds -ing, subject to certain spelling adjustments.
The irregular verbs of English are described and listed in the article English irregular verbs (for a more extensive list, see List of English irregular verbs).
Some examples of common irregular verbs in English, other than modals, are be, have, go, drink, sing, stand and hit.[3]
For regular and irregular verbs in other languages, see the articles on the grammars of those languages.
Some grammatical information relating to specific verbs in various languages can also be found in Wiktionary.
Most natural languages, to different extents, have a number of irregular verbs. Artificial auxiliary languages usually have a single regular pattern for all verbs (as well as other parts of speech) as a matter of design, because inflectional irregularities are considered to increase the difficulty of learning and using a language. Other constructed languages, however, need not show such regularity, especially if they are designed to look similar to natural ones.
The auxiliary language Interlingua has some irregular verbs, principally esser "to be", which has an irregular present tense form es "is" (instead of expected esse), an optional plural son "are", an optional irregular past tense era "was/were" (alongside regular esseva), and a unique subjunctive form sia (which can also function as an imperative). Other common verbs also have irregular present tense forms, namely vader "to go" — va, ir "to go" — va (also shared by the present tense of vader), and haber "to have" — ha.

Source: https://en.wikipedia.org/wiki/Regular_and_irregular_verbs
A sentence diagram is a pictorial representation of the grammatical structure of a sentence. The term "sentence diagram" is used more when teaching written language, where sentences are diagrammed. The model shows the relations between words and the nature of sentence structure and can be used as a tool to help recognize which potential sentences are actual sentences.
The Reed–Kellogg system was developed by Alonzo Reed and Brainerd Kellogg for teaching grammar to students through visualization.[1] It lost some support in the 1970s in the US, but has spread to Europe.[2] It is considered "traditional" in comparison to the parse trees of academic linguists.[3]
Simple sentences in the Reed–Kellogg system are diagrammed according to a set of standard forms, described below.
The diagram of a simple sentence begins with a horizontal line called the base. The subject is written on the left, the predicate on the right, separated by a vertical bar that extends through the base. The predicate must contain a verb, and the verb either requires other sentence elements to complete the predicate, permits them to do so, or precludes them from doing so. The verb and its object, when present, are separated by a line that ends at the baseline. If the object is a direct object, the line is vertical. If the object is a predicate noun or adjective, the line looks like a backslash, \, sloping toward the subject.
Modifiers of the subject, predicate, or object are placed below the baseline:
Modifiers, such as adjectives (including articles) and adverbs, are placed on slanted lines below the word they modify. Prepositional phrases are also placed beneath the word they modify; the preposition goes on a slanted line and the slanted line leads to a horizontal line on which the object of the preposition is placed.
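These conventions can be approximated with a toy ASCII rendering. The sketch below is my own illustration, not a standard diagramming tool: the slanted modifier line is approximated by a backslash, and the distinction between the bar that crosses the base (subject/predicate) and the one that stops at it (object) is not drawn.

```python
# Toy Reed–Kellogg-style diagram: subject | verb | object on a base
# line, with each modifier hung below the word it modifies.

def reed_kellogg(subject, verb, obj=None, modifiers=None):
    parts = [f" {subject} ", "|", f" {verb} "]
    if obj is not None:
        parts += ["|", f" {obj} "]
    base = "".join(parts)
    lines = [base, "-" * len(base)]
    for word, mod in (modifiers or {}).items():
        col = base.index(word)            # column of the modified word
        lines.append(" " * (col + 1) + "\\")   # the "slanted" line
        lines.append(" " * (col + 2) + mod)
    return "\n".join(lines)

print(reed_kellogg("dog", "chased", "cat",
                   modifiers={"dog": "The", "cat": "a"}))
```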
These basic diagramming conventions are augmented for other types of sentence structures, e.g. for coordination and subordinate clauses.
Reed–Kellogg diagrams reflect, to some degree, concepts underlying modern parse trees. Those concepts are the constituency relation of phrase structure grammars and the dependency relation of dependency grammars. These two relations can be illustrated adjacent to each other for comparison, where D means Determiner, N means Noun, NP means Noun Phrase, S means Sentence, V means Verb, VP means Verb Phrase and IP means Inflectional Phrase.
Constituency is a one-to-one-or-more relation; every word in the sentence corresponds to one or more nodes in the tree diagram. Dependency, in contrast, is a one-to-one relation; every word in the sentence corresponds to exactly one node in the tree diagram. Both parse trees employ the convention where the category acronyms (e.g. N, NP, V, VP) are used as the labels on the nodes in the tree. The one-to-one-or-more constituency relation is capable of increasing the amount of sentence structure to the upper limits of what is possible. The result can be very "tall" trees, such as those associated with X-bar theory. Both constituency-based and dependency-based theories of grammar have established traditions.[4][5]
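The two relations can be made concrete with simple data structures. This Python sketch is my own illustration (the sentence and labels are chosen for the example, not taken from the article); the point is the node count: constituency is one-to-one-or-more, dependency is strictly one-to-one.

```python
# "The dog ate a bone" encoded under both relations.

# Constituency tree: (label, children...); leaves are bare words.
constituency = (
    "S",
    ("NP", ("D", "The"), ("N", "dog")),
    ("VP", ("V", "ate"),
           ("NP", ("D", "a"), ("N", "bone"))),
)

# Dependency tree: each word maps to its head word (None for the root).
dependency = {
    "ate":  None,      # the root verb
    "dog":  "ate",     # subject depends on the verb
    "The":  "dog",
    "bone": "ate",     # object depends on the verb
    "a":    "bone",
}

def leaves(tree):
    """Words at the leaves of a constituency tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    _label, *children = tree
    return [word for child in children for word in leaves(child)]

def count_nodes(tree):
    """Labelled nodes in a constituency tree (word leaves excluded)."""
    if isinstance(tree, str):
        return 0
    _label, *children = tree
    return 1 + sum(count_nodes(child) for child in children)

print(leaves(constituency))      # ['The', 'dog', 'ate', 'a', 'bone']
print(count_nodes(constituency)) # 9 labelled nodes over 5 words
print(len(dependency))           # exactly 5 nodes, one per word
```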
Reed–Kellogg diagrams employ both of these modern tree generating relations. The constituency relation is present in the Reed–Kellogg diagrams insofar as subject, verb, object, and/or predicate are placed equi-level on the horizontal base line of the sentence and divided by a vertical or slanted line. In a Reed–Kellogg diagram, the vertical dividing line that crosses the base line corresponds to the binary division in the constituency-based tree (S → NP + VP), and the second vertical dividing line that does not cross the baseline (between verb and object) corresponds to the binary division of VP into verb and direct object (VP → V + NP). Thus the vertical and slanting lines that cross or rest on the baseline correspond to the constituency relation. The dependency relation, in contrast, is present insofar as modifiers dangle off of or appear below the words that they modify.
A sentence may also be broken down by functional parts: subject, object, adverbial, verb (predicator).[6] The subject is the owner of an action, the verb represents the action, the object represents the recipient of the action, and the adverbial qualifies the action. The various parts can be phrases rather than individual words.

Source: https://en.wikipedia.org/wiki/Sentence_diagram
Modern standard English has various verb forms.
They can be used to express tense (time reference), aspect, mood, modality and voice, in various configurations.
For details of how inflected forms of verbs are produced in English, see English verbs. For the grammatical structure of clauses, including word order, see English clause syntax. For non-standard or archaic forms, see individual dialect articles and thou.
A typical English verb may have five different inflected forms: the plain form (bare infinitive), the third person singular present tense, the past tense (preterite), the past participle, and the -ing form.
The verb be has a larger number of different forms (am, is, are, was, were, etc.), while the modal verbs have a more limited number of forms. Some forms of be and of certain other auxiliary verbs also have contracted forms ('s, 're, 've, etc.).
For full details of how these inflected forms of verbs are produced, see English verbs.
In English, verbs frequently appear in combinations containing one or more auxiliary verbs and a nonfinite form (infinitive or participle) of a main (lexical) verb, for example she has finished, they will be going, and (with the words separated) Have you been waiting?
The first verb in such a combination is the finite verb, the remainder are nonfinite (although constructions in which even the leading verb is nonfinite are also possible – see § Perfect and progressive nonfinite constructions below). Such combinations are sometimes called verb catenae. As the last example shows, the words making up these combinations do not always remain consecutive.
For details of the formation of such constructions, see English clause syntax. The uses of the various types of combination are described in the detailed sections of the present article. (For another type of combination involving verbs – items such as go on, slip away and break off – see Phrasal verb.)
As in many other languages, the means English uses for expressing the three categories of tense (time reference), aspect and mood are somewhat conflated (see tense–aspect–mood). In contrast to languages like Latin, though, English has only limited means for expressing these categories through verb conjugation, and tends mostly to express them periphrastically, using the verb combinations mentioned in the previous section. The tenses, aspects and moods that may be identified in English are described below (although the terminology used differs significantly between authors). In common usage, particularly in English language teaching, particular tense–aspect–mood combinations such as "present progressive" and "conditional perfect" are often referred to simply as "tenses".
Verb tenses are inflectional forms which can be used to express that something occurs in the past, present, or future.[1] In English, the only tenses are past and non-past, though the term "future" is sometimes applied to periphrastic constructions involving modals such as will and go.
Present tense is used, in principle, to refer to circumstances that exist at the present time (or over a period that includes the present time) and general truths (see gnomic aspect). However the same forms are quite often also used to refer to future circumstances, as in "He's coming tomorrow" (hence this tense is sometimes referred to as present-future or non-past). For certain grammatical contexts where the present tense is the standard way to refer to the future, see conditional sentences and dependent clauses below. It is also possible for the present tense to be used when referring to no particular real time (as when telling a story), or when recounting past events (the historical present, particularly common in headline language). The present perfect intrinsically refers to past events, although it can be considered to denote primarily the resulting present situation rather than the events themselves.
The present tense has two moods, indicative and subjunctive; when no mood is specified, it is often the indicative that is meant. In a present indicative construction, the finite verb appears in its base form, or in its -s form if its subject is third-person singular. (The verb be has the forms am, is, are, while the modal verbs do not add -s for third-person singular.) For the present subjunctive, see English subjunctive. (The present subjunctive has no particular relationship with present time, and is sometimes simply called the subjunctive, without specifying the tense.)
For specific uses of present tense constructions, see the sections below on the simple present, present progressive, present perfect, and present perfect progressive.
Past tense forms express circumstances existing at some time in the past, although they also have certain uses in referring to hypothetical situations (as in some conditional sentences, dependent clauses and expressions of wish). They are formed using the finite verb in its preterite (simple past) form.[2]
Certain uses of the past tense may be referred to as subjunctives; however the only distinction in verb conjugation between the past indicative and past subjunctive is the possible use of were in the subjunctive in place of was. For details see English subjunctive.
For specific uses of past tense constructions, see the sections below on the simple past, past progressive, past perfect, and past perfect progressive. In certain contexts, past events are reported using the present perfect (or even other present tense forms—see above).
English lacks a morphological future tense, since there is no verb inflection which expresses that an event will occur at a future time.[2] However, the term "future tense" is sometimes applied to periphrastic constructions involving modals such as will, shall, and to be going to. For specific uses of future constructions formed with will/shall, see the sections below on the simple future, future progressive, future perfect, and future perfect progressive.
The morphological present tense can be used to refer to future times, particularly in conditional sentences and dependent clauses.
The morphologically past variants of future modals can be used to create a periphrastic future-in-the-past construction,[3][4] as in He was sure that he would win. Here the sentence as a whole refers to some particular past time, but would win refers to a time in the future relative to that past time. See Future tense § Expressions of relative tense.
"Simple" forms of verbs are those appearing in constructions not marked for either progressive or perfect aspect (I go, I don't go, I went, I will go, etc., but not I'm going or I have gone).
Simple constructions normally denote a single action (perfective aspect), as in Brutus killed Caesar, a repeated action (habitual aspect), as in I go to school, or a relatively permanent state, as in We live in Dallas. They may also denote a temporary state (imperfective aspect), in the case of stative verbs that do not use progressive forms (see below).
For uses of specific simple constructions, see the sections below on the simple present, simple past, simple future, and simple conditional.
The progressive or continuous aspect is used to denote a temporary action or state that began at a previous time and continues into the present time (or other time of reference). It is expressed using the auxiliary verb to be together with the present participle (-ing form) of the main verb: I am reading; Were you shouting?; He will be sitting over there.
Certain stative verbs make limited use of progressive aspect. Their non-progressive forms (simple or non-progressive perfect constructions) are used in many situations even when expressing a temporary state. The main types are described below.
For specific uses of progressive (continuous) constructions, see the sections below on the present progressive, past progressive, future progressive, and conditional progressive. For progressive infinitives, see § Perfect and progressive nonfinite constructions. For the combination of progressive aspect with the perfect (he has been reading) see perfect progressive.
The perfect aspect is used to denote the circumstance of an action's being complete at a certain time. It is expressed using a form of the auxiliary verb have (appropriately conjugated for tense etc.) together with the past participle of the main verb: She has eaten it; We had left; When will you have finished?
Perfect forms can also be used to refer to states or habitual actions, even if not complete, if the focus is on the time period before the point of reference (We had lived there for five years). If such a circumstance is temporary, the perfect is often combined with progressive aspect (see the following section).
The implications of the present perfect (that something occurred prior to the present moment) are similar to those of the simple past, although the two forms are generally not used interchangeably—the simple past is used when the time frame of reference is in the past, while the present perfect is used when it extends to the present. For details, see the relevant sections below. For all uses of specific perfect constructions, see the sections below on the present perfect, past perfect, future perfect, and conditional perfect.
By using non-finite forms of the auxiliary have, perfect aspect can also be marked on infinitives (as in should have left and expect to have finished working), and on participles and gerunds (as in having seen the doctor). For the usage of such forms, see the section below on perfect and progressive non-finite constructions.
Although all of the constructions referred to here are commonly referred to as perfect (based on their grammatical form), some of them, particularly non-present and non-finite instances, might not be considered truly expressive of the perfect aspect.[5] This applies particularly when the perfect infinitive is used together with modal verbs: for example, he could not have been a genius might be considered (based on its meaning) to be a past tense of he cannot/could not be a genius;[6] such forms are considered true perfect forms by some linguists but not others.[7] For the meanings of such constructions with the various modals, see English modal verbs.
The perfect and progressive (continuous) aspects can be combined, usually in referring to the completed portion of a continuing action or temporary state: I have been working for eight hours. Here a form of the verb have (denoting the perfect) is used together with been (the past participle of be, denoting the progressive) and the present participle of the main verb.
In the case of the stative verbs, which do not use progressive aspect (see the section above about the progressive), the plain perfect form is normally used in place of the perfect progressive: I've been here for half an hour (not *I've been being here...).
For uses of specific perfect progressive (perfect continuous) constructions, see the sections below on the present perfect progressive, past perfect progressive, future perfect progressive, and conditional perfect progressive. For perfect progressive infinitives, participles and gerunds, see § Perfect and progressive nonfinite constructions.
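The periphrastic combinations described in the aspect sections above chain in a fixed order: will (future), then have (perfect), then be (progressive), then the main verb, with each auxiliary dictating the form of the verb that follows it. The Python sketch below is my own illustration of that pattern; it assumes a third-person singular subject, and the verb's forms are supplied explicitly rather than computed.

```python
# Build a tense/aspect verb phrase from the auxiliary chain:
#   will -> base form; have -> past participle; be -> present participle.

def verb_phrase(tense, perfect, progressive, verb):
    chain = []  # auxiliaries first, outermost to innermost
    if tense == "future":
        chain.append({"base": "will"})  # modal: has only a base form
    if perfect:
        chain.append({"base": "have", "pres3": "has", "past": "had",
                      "pastpart": "had", "prespart": "having"})
    if progressive:
        chain.append({"base": "be", "pres3": "is", "past": "was",
                      "pastpart": "been", "prespart": "being"})
    chain.append(verb)

    words, next_form = [], "base"
    for i, v in enumerate(chain):
        if i == 0:
            # the finite (first) verb carries the tense
            form = {"present": "pres3", "past": "past"}.get(tense, "base")
        else:
            form = next_form
        words.append(v[form])
        if v["base"] == "will":
            next_form = "base"       # modals take the bare infinitive
        elif v["base"] == "have":
            next_form = "pastpart"   # perfect HAVE takes a past participle
        else:
            next_form = "prespart"   # progressive BE takes the -ing form
    return " ".join(words)

WORK = {"base": "work", "pres3": "works", "past": "worked",
        "pastpart": "worked", "prespart": "working"}

print(verb_phrase("present", True, True, WORK))   # has been working
print(verb_phrase("future", True, False, WORK))   # will have worked
print(verb_phrase("past", False, True, WORK))     # was working
```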
Indicative mood, in English, refers to finite verb forms that are not marked as subjunctive and are neither imperatives nor conditionals. They are the verbs typically found in the main clauses of declarative sentences and questions formed from them, as well as in most dependent clauses (except for those that use the subjunctive). The information that a form is indicative is often omitted when referring to it: the simple present indicative is usually referred to as just the simple present, etc. (unless some contrast of moods, such as between indicative and subjunctive, is pertinent to the topic).
Certain types of clause, mostly dependent clauses, use a verb form identified with the subjunctive mood. The present subjunctive takes a form identical to the bare infinitive, as in "It is necessary that he be restrained." There is also a past subjunctive, distinct from the indicative only in the possible use of were in place of was in certain situations: "If I were you, ..."
For details of the formation and usage of subjunctive forms in English, see English subjunctive.
An independent clause in the imperative mood uses the base form of the verb, usually with no subject (although the subject you can be added for emphasis). Negation uses do-support (i.e. do not or don't). For example:
Sentences of this type are used to give an instruction or order. When they are used to make requests, the word please (or some other linguistic device) is often added for politeness:
First-person imperatives (cohortatives) can be formed with let us (usually contracted to let's), as in "Let's go". Third-person imperatives (jussives) are sometimes formed similarly, with let, as in "Let him be released".
More detail can be found in the Imperative mood article.
The status of the conditional mood in English is similar to that of the future tense: it may be considered to exist provided the category of mood is not required to be marked morphologically. The English conditional is expressed periphrastically with verb forms governed by the auxiliary verb would (or sometimes should with a first-person singular subject; see shall and will). The modal verb could is also sometimes used as a conditional (of can).
In certain uses, the conditional construction with would/should may also be described as "future-in-the-past".
For uses of specific conditional constructions, see the sections below on the simple conditional, conditional progressive, conditional perfect, and conditional perfect progressive, as well as the section on conditional sentences (and the main article on English conditional sentences).
The active voice (where the verb's subject is understood to denote the doer, or agent, of the denoted action) is the unmarked voice in English. To form the passive voice (where the subject denotes the undergoer, or patient, of the action), a periphrastic construction is used. In the canonical form of the passive, a form of the auxiliary verb be (or sometimes get) is used, together with the past participle of the lexical verb.
Passive voice can be expressed in combination with tenses, aspects and moods, by means of appropriate marking of the auxiliary (which for this purpose is not a stative verb, i.e. it has progressive forms available). For example:
The uses of these various passive forms are analogous to those of the corresponding tense-aspect-mood combinations in the active voice.
The passive forms of certain of the combinations involving the progressive aspect are quite rare; these include the present perfect progressive ("it has been being written"), past perfect progressive ("it had been being written"), future progressive ("it will be being written"), future perfect progressive ("it will have been being written"), conditional progressive ("it would be being written") and conditional perfect progressive ("it would have been being written"). Because of the awkwardness of these constructions, they may be paraphrased, for example using the expression in the process of ("it has been in the process of being written", "it will be in the process of being written", and similar).
For further details of passive constructions, see English passive voice.
Negation of verbs usually takes place with the addition of the particle not (or its shortened form n't) to an auxiliary or copular verb, with do-support being used if there is otherwise no auxiliary. However, if a sentence already contains a negative word (never, nothing, etc.), then there is not usually any additional not.
Questions (interrogative constructions) are generally formed using subject–auxiliary inversion, again using do-support if there is otherwise no auxiliary. In negative questions, it is possible to invert with just the auxiliary ("should we not help?") or with the contracted negation ("shouldn't we help?").
For full details on negation and question formation, see do-support, English auxiliaries and contractions, and the Negation and Questions sections of the English grammar article.
English has the modal verbs can, could, may, might, must, shall, should, will, would, and also (depending on the classification adopted) ought (to), dare, need, had (better), used (to). These do not add -s for the third-person singular, and they do not form infinitives or participles; the only inflection they undergo is that, to a certain extent, could, might, should and would (and sometimes dared) function as preterites (past tenses) of can, may, shall and will (and dare) respectively.
A modal verb can serve as the finite verb introducing a verb catena, as in "he might have been injured then". These generally express some form of modality (possibility, obligation, etc.), although will and would (and sometimes shall and should) can serve—among their other uses—to express future time reference and conditional mood, as described elsewhere on this page.
For details of the uses of modal verbs, see English modal verbs.
The simple past or past simple, sometimes also called the preterite, consists of the bare past tense of the verb, ending in -ed for regular verbs and formed in various ways for irregular ones. The spelling rules for regular verbs are as follows: verbs ending in -e add only -d (e.g. live – lived, not *liveed); verbs ending in a consonant + -y change the -y to -ied (e.g. study – studied); and verbs ending in a consonant + vowel + consonant sequence double the final consonant (e.g. stop – stopped) — see English verbs for details. In most questions (and other situations requiring inversion), when negated, and in certain emphatic statements, a periphrastic construction consisting of did and the bare infinitive of the main verb is generally used instead — see do-support.
The simple past is used for a single event in the past, for past habitual action, or for a past state:
However, for action that was ongoing at the time referred to, the past progressive is generally used instead. For stative verbs that do or do not use the progressive aspect when expressing a temporary state, see § Progressive aspect. For the use of could see in place of saw, etc., see have got and can see below.
The simple past is often close in meaning to the present perfect. The simple past is used when the event is conceived as occurring at a particular time in the past, or during a period that ended in the past (i.e. it does not last up until the present time). This time frame may be explicitly stated, or implicit in the context (for example, the past tense is often used when describing a sequence of past events).
For further discussion and examples, see § Present perfect below.
Various compound constructions exist for denoting past habitual action. The sentence "When I was young, I played football every Saturday" might alternatively be phrased using used to ("... I used to play ...") or would ("... I would play ...").
In exceptional cases, the simple present can be used in place of the simple past as a stylistic device, both in literary expression and in everyday speech. Typical examples include telling jokes (as in "Three men walk into a bar"), emotional storytelling (as in "So I come home and I see this giant box in front of my door") and referring to historical events (as in "King Henry wins his last victory in 1422").
The past simple is also used without past reference in some instances: in condition clauses and some other dependent clauses referring to hypothetical circumstances (see § Conditional sentences and § Dependent clauses below), and after certain expressions of wish. For the past subjunctive (were in place of was), see English subjunctive. For the use of the past tense in indirect speech and similar contexts, see § Indirect speech below.
The -ed ending of regular verbs is pronounced /ɪd/ after /t/ or /d/ (waited, needed), /t/ after other voiceless consonants (hoped, passed), and /d/ after vowels and other voiced consonants (played, begged).
The past progressive or past continuous construction combines progressive aspect with past tense, and is formed using the past tense of be (was or were) with the present participle of the main verb. It indicates an action that was ongoing at the past time being considered:
For stative verbs that do not use the progressive aspect, the simple past is used instead ("At three o'clock yesterday we were in the garden").
The past progressive is often used to denote an action that was interrupted by an event,[8][9] or for two actions taking place in parallel:
(Interrupted actions in the past can also sometimes be denoted using the past perfect progressive, as described below.)
The past progressive can also be used to refer to past action that occurred over a range of time and is viewed as an ongoing situation:
That could also be expressed using the simple past, as "I worked...", which implies that the action is viewed as a unitary event (although the effective meaning is not very different).
The past progressive shares certain special uses with other past tense constructions; see § Conditional sentences, § Dependent clauses, § Expressions of wish, and § Indirect speech.
The past perfect, sometimes called the pluperfect, combines past tense with perfect aspect; it is formed by combining had (the past tense of the auxiliary have) with the past participle of the main verb. It is used when referring to an event that took place prior to the time frame being considered.[10] This time frame may be stated explicitly, as a stated time or the time of another past action:
The time frame may also be understood implicitly from the previous or later context:
Compare "He had left when we arrived" (where his leaving preceded our arrival) with the form with the simple past, "He left when we arrived" (where his leaving was concurrent with, or shortly after, our arrival). Unlike the present perfect, the past perfect can readily be used with an adverb specifying a past time frame for the occurrence. For example, while it is incorrect to say *"I have done it last Friday" (the use of last Friday, specifying the past time, would require the simple past rather than the present perfect), there is no such objection to a sentence like "I had done it the previous Friday".[11] The past perfect can also be used for states or repeated occurrences continuing over a period up to a time in the past, particularly in stating "for how long" or "since when". However, if the state is temporary and the verb can be used in the progressive aspect, the past perfect progressive would normally be used instead. Some examples with the plain past perfect:
For other specific uses of the past perfect, see § Conditional sentences, § Dependent clauses, § Expressions of wish, and § Indirect speech.
The past perfect progressive or past perfect continuous (also known as the pluperfect progressive or pluperfect continuous) combines perfect progressive aspect with past tense. It is formed by combining had (the past tense of the auxiliary have), been (the past participle of be), and the present participle of the main verb.
Uses of the past perfect progressive are analogous to those of the present perfect progressive, except that the point of reference is in the past. For example:
This form is sometimes used for actions in the past that were interrupted by some event[12] (compare the use of the past progressive as given above). For example:
This implies that I stopped working when she came in (or had already stopped a short time before); the plain past progressive ("I was working...") would not necessarily carry this implication.
If the verb in question does not use the progressive aspect, then the plain past perfect is used instead (see examples in the previous section).
The past perfect progressive may also have additional specific uses similar to those of the plain past perfect; see § Conditional sentences, § Dependent clauses, § Expressions of wish, and § Indirect speech.
The simple present or present simple is a form that combines present tense with "simple" (neither perfect nor progressive) aspect. In the indicative mood it consists of the base form of the verb, or the -s form when the subject is third-person singular (the verb be uses the forms am, is, are). However, with non-auxiliary verbs it also has a periphrastic form consisting of do (or third-person singular does) with the bare infinitive of the main verb — this form is used in questions (and other clauses requiring inversion) and negations, and sometimes for emphasis. For details of this, see do-support.
The principal uses of the simple present are given below. More examples can be found in the article Simple present.
In colloquial English it is common to use can see, can hear for the present tense of see, hear, etc., and have got for the present tense of have (denoting possession). See have got and can see below.
For the present subjunctive, see English subjunctive. For uses of modal verbs (which may be regarded as instances of the simple present), see English modal verbs.
The present progressive or present continuous form combines present tense with progressive aspect. It thus refers to an action or event conceived of as having limited duration, taking place at the present time. It consists of a form of the simple present of be together with the present participle of the main verb (the -ing form).
This often contrasts with the simple present, which expresses repeated or habitual action ("We cook dinner every day"). However, sometimes the present continuous is used with always, generally to express annoyance about a habitual action:
Certain stative verbs do not use the progressive aspect, so the present simple is used instead in those cases (see § Progressive aspect above).
The present progressive can be used to refer to a planned future event:
It also appears with future reference in many condition and time clauses and other dependent clauses (see § Dependent clauses below):
It can also refer to something taking place not necessarily at the time of speaking, but at the time currently under consideration, in the case of a story or narrative being told in the present tense (as mentioned above under present simple):
For the possibility of a present subjunctive progressive, see English subjunctive.
The present perfect (traditionally called simply the perfect) combines present tense with perfect aspect, denoting the present state of an action's being completed, that is, that the action took place before the present time. (It is thus often close in meaning to the simple past tense, although the two are not usually interchangeable.) It is formed with the present tense of the auxiliary have (namely have or has) and the past participle of the main verb.
The choice of present perfect or past tense depends on the frame of reference (period or point in time) in which the event is conceived as occurring. If the frame of reference extends to the present time, the present perfect is used. For example:
If the frame of reference is a time in the past, or a period that ended in the past, the past tense is used instead. For example: "I wrote a letter this morning" (it is now afternoon); "He produced ten plays" (he is now dead or his career is considered over, or a particular past time period is being referred to); "They never traveled abroad" (similarly). See under Simple past for more examples. The simple past is generally used when the occurrence has a specific past time frame — either explicitly stated ("I wrote a book in 1995"; "the water boiled a minute ago") or implied by the context (for example, in the narration of a sequence of events). It is therefore normally incorrect to write a sentence like *"I have written a novel yesterday"; the present perfect cannot be used with an expression of past time such as yesterday.[15]
With already or yet, traditional usage calls for the present perfect: "Have you eaten yet?" "Yes, I've already eaten." Current informal American speech allows the simple past: "Did you eat yet?" "Yes, I ate already", although the present perfect is still fully idiomatic here and may be preferred depending on region, personal preference, or the wish to avoid possible ambiguity.
Use of the present perfect often draws attention to the present consequences of the past action or event, as opposed to its actual occurrence.[13] The sentence "she has come" probably means she is here now, while the simple past "she came" does not.[16] The sentence "Have you been to the fair?" suggests that the fair is still going on, while "Did you go to the fair?" could mean that the fair is over.[17] (See also been and gone below.) Some more examples:
It may also refer to an ongoing state or habitual action, particularly in saying for how long, or since when, something is the case. For example:
This implies that I still live in Paris, that he still holds the record and that we still eat together every morning (although the first sentence may also refer to some unspecified past period of five years). When the circumstance is temporary, the present perfect progressive is often appropriate in such sentences (see below); however, if the verb is one that does not use the progressive aspect, the basic present perfect is used in that case too:
The present perfect may refer to a habitual circumstance, or a circumstance forming part of a theoretical or story narrative being given in the present tense (provided the circumstance is of an event's having taken place previously):
The present perfect may also be used with future reference, instead of the future perfect, in those dependent clauses where future occurrence is denoted by the present tense (see § Dependent clauses below). For example:
For the possibility of a present perfect subjunctive, see English subjunctive. For the special use of the present perfect of get to express possession or obligation, see have got below. For the use of have been in place of have gone, see been and gone below.
The present perfect continuous (or present perfect progressive) construction combines perfect progressive aspect with present tense. It is formed with the present tense of have (have or has), the past participle of be (been), and the present participle of the main verb (the -ing form).
This construction is used for ongoing action in the past that continues right up to the present or has recently finished:
It is frequently used when stating for how long, or since when, something is the case:
In these sentences the actions are still continuing, but it is the past portion of them that is being considered, and so the perfect aspect is used. (A sentence without perfect aspect, such as "I am sitting here for three hours", implies an intention to perform the action for that length of time.) With stative verbs that are not used in the progressive, and for situations that are considered permanent, the present perfect (non-progressive) is used instead; for examples of this see § Present perfect above.
The term simple future, future simple or future indefinite, as applied to English, generally refers to the combination of the modal auxiliary verb will with the bare infinitive of the main verb. Sometimes (particularly in more formal or old-fashioned English) shall is preferred to will when the subject is first person (I or we); see shall and will for details. The auxiliary is often contracted to 'll; see English auxiliaries and contractions.
This construction can be used to indicate what the speaker views as facts about the future, including confident predictions:
It may be used to describe future circumstances that are subject to some condition (see also § Conditional sentences):
However, English also has other ways of referring to future circumstances. For planned or scheduled actions, the present progressive or simple present may be used (see those sections for examples). There is also a going-to future, common in colloquial English, which is often used to express intentions or predictions ("I am going to write a book some day"; "I think that it is going to rain"). Use of the will/shall construction when expressing intention often indicates a spontaneous decision:
Compare "I'm going to use...", which implies that the intention to do so has existed for some time.
Use of present tense rather than future constructions in condition clauses and certain other dependent clauses is described below under § Conditional sentences and § Dependent clauses.
The modal verbs will and shall also have other uses besides indicating future time reference. For example:
For more examples see will and shall in the article on modal verbs, and the article shall and will.
The future progressive or future continuous combines progressive aspect with future time reference; it is formed with the auxiliary will (or shall in the first person; see shall and will), the bare infinitive be, and the present participle of the main verb. It is used mainly to indicate that an event will be in progress at a particular point in the future:
The usual restrictions apply, on the use both of the future and of the progressive: simple rather than progressive aspect is used with some stative verbs (see § Progressive aspect), and present rather than future constructions are used in many dependent clauses (see § Conditional sentences and § Dependent clauses below).
The same construction may occur when will or shall is given one of its other uses (as described under § Future simple), for example:
The future perfect combines perfect aspect with future time reference. It consists of the auxiliary will (or sometimes shall in the first person, as above), the bare infinitive have, and the past participle of the main verb. It indicates an action that is to be completed sometime prior to a future time of perspective, or an ongoing action continuing up to a future time of perspective (compare uses of the present perfect above).
For the use of the present tense rather than future constructions in certain dependent clauses, see § Conditional sentences and § Dependent clauses below.
The same construction may occur when will or shall is given one of its other meanings (see under § Simple future); for example:
The future perfect progressive or future perfect continuous combines perfect progressive aspect with future time reference. It is formed by combining the auxiliary will (or sometimes shall, as above), the bare infinitive have, the past participle been, and the present participle of the main verb.
Uses of the future perfect progressive are analogous to those of the present perfect progressive, except that the point of reference is in the future. For example:
For the use of present tense in place of future constructions in certain dependent clauses, see § Conditional sentences and § Dependent clauses below.
The same construction may occur when the auxiliary (usually will) has one of its other meanings, particularly expressing a confident assumption about the present:
The simple conditional or conditional simple, also called the conditional present, and in some meanings the future-in-the-past simple, is formed by combining the modal auxiliary would with the bare infinitive of the main verb. Sometimes (particularly in formal or old-fashioned English) should is used in place of would when the subject is first person (I or we), in the same way that shall may replace will in such instances; see shall and will. The auxiliary is often shortened to 'd; see English auxiliaries and contractions.
The simple conditional is used principally in a main clause accompanied by an implicit or explicit condition (if-clause). (This is described in more detail in the article on English conditional sentences; see also § Conditional sentences below.) The time referred to may be (hypothetical) present or future. For example:
In some varieties of English, would (or 'd) is also regularly used in the if-clauses themselves ("If you'd leave now, you'd be on time"), but this is often considered nonstandard (standard: "If you left now, you'd be on time"). This is widespread especially in spoken American English in all registers, though not usually in more formal writing.[18] There are also situations where would is used in if-clauses in British English too, but these can usually be interpreted as a modal use of would (e.g. "If you would listen to me once in a while, you might learn something").[19] For more details, see English conditional sentences § Use of will and would in condition clauses.
For the use of would after the verb wish and the expression if only, see § Expressions of wish.
The auxiliary verbs could and might can also be used to indicate the conditional mood, as in the following:
Forms with would may also have "future-in-the-past" meaning:
See also § Indirect speech and § Dependent clauses. For other possible meanings of would and should (as well as could and might), see the relevant sections of English modal verbs.
The conditional (present) progressive or conditional continuous combines conditional mood with progressive aspect. It consists of would (or the contraction 'd, or sometimes should in the first person, as above) with the bare infinitive be and the present participle of the main verb. It has uses similar to those of the simple conditional (above), but is used for ongoing actions or situations (usually hypothetical):
It can also have future-in-the-past meanings:
For the use of would in condition clauses, see § Simple conditional above (see also § Conditional sentences and § Dependent clauses below). For use in indirect speech constructions, see § Indirect speech. For other uses of constructions with would and should, see English modal verbs. For general information on conditionals in English, see English conditional sentences (and also § Conditional sentences below).
The conditional perfect construction combines conditional mood with perfect aspect, and consists of would (or the contraction 'd, or sometimes should in the first person, as above), the bare infinitive have, and the past participle of the main verb. It is used to denote conditional situations attributed to past time, usually those that are or may be contrary to fact.
For the possibility of the use of would in the condition clauses themselves, see § Simple conditional (see also § Dependent clauses below). For more information on conditional constructions, see § Conditional sentences below, and the article English conditional sentences.
The same construction may have "future-in-the-past" meanings (see Indirect speech). For other meanings of would have and should have, see English modal verbs.
The conditional perfect progressive or conditional perfect continuous construction combines conditional mood with perfect progressive aspect. It consists of would (or sometimes should in the first person, as above) with the bare infinitive have, the past participle been and the present participle of the main verb. It generally refers to a conditional ongoing situation in hypothetical (usually counterfactual) past time:
Similar considerations and alternative forms and meanings apply as noted in the sections above about other conditional constructions.
In colloquial English, particularly British English, the present perfect of the verb get, namely have got or has got, is frequently used in place of the simple present indicative of have (i.e. have or has) when denoting possession, broadly defined. For example:
In American English, the form got is used in this idiom, even though the standard past participle of get is gotten.
The same applies in the expression of present obligation: "I've got to go now" may be used in place of "I have to (must) go now".
In very informal registers, the contracted form of have or has may be omitted altogether: "I got three brothers."[20]
Another common idiom is the use of the modal verb can (or could for the past tense or conditional) together with verbs of perception such as see, hear, etc., rather than the plain verb. For example:
Aspectual distinctions can be made, particularly in the past tense:
In perfect constructions apparently requiring the verb go, the normal past participle gone is often replaced by the past participle of the copula verb be, namely been. This gives rise to sentences of contrasting meaning.
When been is used, the implication is that, at the time of reference, the act of going took place previously, but the subject is no longer at the place in question (unless a specific time frame including the present moment is specified). When gone is used, the implication is again that the act of going took place previously, but that the subject is still at (or possibly has not yet reached) that place (unless repetition is specified lexically). For example:
Been is used in such sentences in combination with to as if it were a verb of motion (being followed by adverbial phrases of motion), which is different from its normal uses as part of the copula verb be. Compare:
The sentences above with the present perfect can be further compared with alternatives using the simple past, such as:
As usual, this tense would be used if a specific past time frame is stated ("in 1995", "last week") or is implied by the context (e.g. the event is part of a past narrative, or my father is no longer alive or capable of traveling). Use of this form does not in itself determine whether or not the subject is still there.
A conditional sentence usually contains two clauses: an if-clause or similar expressing the condition (the protasis), and a main clause expressing the conditional circumstance (the apodosis). In English language teaching, conditional sentences are classified according to type as first, second or third conditional; there also exist "zero conditional" and mixed conditional sentences.
A "first conditional" sentence expresses a future circumstance conditional on some other future circumstance. It uses the present tense (with future reference) in the condition clause, and the future with will (or some other expression of the future) in the main clause:
A "second conditional" sentence expresses a hypothetical circumstance conditional on some other circumstance, referring to non-past time. It uses the past tense (with the past subjunctive were optionally replacing was) in the condition clause, and the conditional formed with would in the main clause:
A "third conditional" sentence expresses a hypothetical (usually counterfactual) circumstance in the past. It uses the past perfect in the condition clause, and the conditional perfect in the main clause:
A "mixed conditional" mixes the second and third patterns (for a past circumstance conditional on a not specifically past circumstance, or vice versa):
The "zero conditional" is a pattern independent of tense, simply expressing the dependence of the truth of one proposition on the truth of another:
See also the following sections on expressions of wish and dependent clauses.
Particular rules apply to the tenses and verb forms used after the verb wish and certain other expressions with similar meaning.
When the verb wish governs a finite clause, the past tense (simple past or past progressive as appropriate) is used when the desire expressed concerns a present state, the past perfect (or past perfect progressive) when it concerns a (usually counterfactual) past state or event, and the simple conditional with would when it concerns a desired present action or change of state. For example:
The same forms are generally used independently of the tense or form of the verb wish:
The same rules apply after the expression if only:
In finite clauses after would rather, imagine and it's (high) time, the past tense is used:
After would rather the present subjunctive is also sometimes possible: "I'd rather you/he come with me."
After all of the expressions above (though not normally it's (high) time), the past subjunctive were may be used instead of was:
Other syntactic patterns are possible with most of these expressions. The verb wish can be used with a to-infinitive or as an ordinary transitive verb ("I wish to talk"; "I wish you good health"). The expressions would rather and it's time can also be followed by a to-infinitive. After the verb hope the rules above do not apply; instead the logically expected tense is used, except that often the present tense is used with future meaning:
Verbs often undergo tense changes inindirect speech. This commonly occurs incontent clauses(typicallythat-clauses andindirect questions), when governed by a predicate of saying (thinking, knowing, etc.) which is in thepasttense orconditionalmood.
In this situation the following tense and aspect changes occur relative to the original words:
Verb forms not covered by any of the rules above (verbs already in the past perfect, or formed with would or other modals not having a preterite equivalent) do not change. Application of the rules above is not compulsory; sometimes the original verb tense is retained, particularly when the statement (with the original tense) remains equally valid at the moment of reporting:
The tense changes above do not apply when the verb of saying (etc.) is not past or conditional in form; in particular there are no such changes when that verb is in the present perfect: He has said that he likes apples. For further details, and information about other grammatical and lexical changes that take place in indirect speech, see indirect speech and sequence of tenses. For related passive constructions (of the type it is said that and she is said to), see English passive voice § Passive constructions without an exactly corresponding active.
Apart from the special cases referred to in the sections above, many other dependent clauses use a tense that might not logically be expected – in particular the present tense is used when the reference is to future time, and the past tense is used when the reference is to a hypothetical situation (in other words, the form with will is replaced by the present tense, and the form with would by the past tense). This occurs in condition clauses (as mentioned above), in clauses of time and place and in many relative clauses:
In the examples above, the simple present is used instead of the simple future, even though the reference is to future time. Examples of similar uses with other tense–aspect combinations are given below:
The past tense can be used for hypothetical situations in some noun clauses too:
The use of present and past tenses without reference to present and past time does not apply to all dependent clauses, however; if the future time or hypothetical reference is expressed in the dependent clause independently of the main clause, then a form with will or would in a dependent clause is possible:
The main uses of the various nonfinite verb forms (infinitives, participles and gerunds) are described in the following sections. For how these forms are made, see § Inflected forms of verbs above. For more information on distinguishing between the various uses that use the form in -ing, see -ing: Uses.
A bare infinitive (the base form of the verb, without the particle to), or an infinitive phrase introduced by such a verb, may be used as follows:
The form of the bare infinitive is also commonly taken as the dictionary form or citation form (lemma) of an English verb. For perfect and progressive (continuous) infinitive constructions, see § Perfect and progressive nonfinite constructions below.
The to-infinitive consists of the bare infinitive introduced by the particle to.[21] Outside dictionary headwords, it is commonly used as a citation form of the English verb ("How do we conjugate the verb to go?") It is also commonly given as a translation of foreign infinitives ("The French word boire means 'to drink'.")
Other modifiers may be placed between to and the verb (as in to boldly go; to slowly drift away), but this is sometimes regarded as a grammatical or stylistic error – see split infinitive for details.
The main uses of to-infinitives, or infinitive phrases introduced by them, are as follows:
In many of the uses above, the implied subject of the infinitive can be marked using a prepositional phrase with for: "This game is easy for a child to play", etc. However, this does not normally apply when the infinitive is the complement of a verb (other than the copula, and certain verbs that allow a construction with for, such as wait: "They waited for us to arrive"). It also does not apply in elliptical questions, or in fixed expressions such as so as to, am to, etc. (although it does apply in in order to).
When the verb is implied, the to-infinitive may be reduced to simply to: "Do I have to?" See verb phrase ellipsis.
For perfect and progressive infinitives, such as (to) have written and (to) be writing, see § Perfect and progressive nonfinite constructions below.
The present participle is one of the uses of the -ing form of a verb. This usage is adjectival or adverbial. The main uses of this participle, or of participial phrases introduced by it, are as follows. (Uses of gerunds and verbal nouns, which take the same -ing form, appear in sections below.)
For present participle constructions with perfect aspect (e.g. having written), see § Perfect and progressive nonfinite constructions below.
Present participles may come to be used as pure adjectives (see Types of participle). Examples of participles that do this frequently are interesting, exciting, and enduring. Such words may then take various adjectival prefixes and suffixes, as in uninteresting and interestingly.
English past participles have both active and passive uses. In a passive use, an object or preposition complement becomes zero, the gap being understood to be filled by the noun phrase the participle modifies (compare similar uses of the to-infinitive above). Uses of past participles and participial phrases introduced by them are as follows:
The last type of phrase can be preceded with the preposition with: With these words spoken, he turned and left.
As with present participles, past participles may function as simple adjectives: "the burnt logs"; "we were very excited". These normally represent the passive meaning of the participle, although some participles formed from intransitive verbs can be used in an active sense: "the fallen leaves"; "our fallen comrades".
An English irregular verb's simple past tense form is typically distinct from its past participle (with which the auxiliary to have constructs the perfect), as in went vs. have gone (of to go), although for regular verbs the two are identical, as in demanded vs. have demanded (of to demand). However, not all irregular verbs distinguish all three of these forms – past, participle, and the unmarked form (with which the particle to constructs the full infinitive, as in to go): the participle may use the simple past form, as in to say, said, have said, or the unmarked form, as in to come, came, have come. For verbs with three distinct such forms in standardized Englishes (go/went/gone), many speakers nonetheless use the same form for the past tense and past participle. Most often the standardized past tense form serves as the participle, as in "I should have went" vs. "I should have gone" and "this song could've came out today" vs. "this song could've come out today". With a few verbs (such as to see, to do, to ring and to be), the standardized past participle form is instead used for the simple past, as in "I seen it yesterday" vs. "I saw it yesterday", "I done it" vs. "I did it" and "I been there" vs. "I was there". This pattern is found in multiple otherwise not closely related varieties.
The gerund takes the same form (ending in -ing) as the present participle, but is used as a noun (or rather the verb phrase introduced by the gerund is used as a noun phrase).[23] Many uses of gerunds are thus similar to noun uses of the infinitive. Uses of gerunds and gerund phrases are illustrated below:
It is considered grammatically correct to express the agent (logical subject) of a gerund using a possessive form (they object to my helping them), although in informal English a simple noun or pronoun is often used instead (they object to me helping them). For details see fused participle.
For gerund constructions with perfect aspect (e.g. (my) having written), see § Perfect and progressive nonfinite constructions below.
There are also nonfinite constructions that are marked for perfect, progressive or perfect progressive aspect, using the infinitives, participles or gerunds of the appropriate auxiliaries. The meanings are as would be expected for the respective aspects: perfect for prior occurrence, progressive for ongoing occurrence at a particular time. (Passive voice can also be marked in nonfinite constructions – with infinitives, gerunds and present participles – in the expected way: (to) be eaten, being eaten, having been eaten, etc.)
Examples of nonfinite constructions marked for the various aspects are given below.
Bare infinitive:
To-infinitive:
Present participle:
Past participle:
Gerund:
Other aspectual, temporal and modal information can be marked on nonfinite verbs using periphrastic constructions. For example, a "future infinitive" can be constructed using forms such as (to) be going to eat or (to) be about to eat.
Certain words are formed from verbs, but are used as common nouns or adjectives, without any of the grammatical behavior of verbs. These are sometimes called verbal nouns or adjectives, but they are also called deverbal nouns and deverbal adjectives, to distinguish them from the truly "verbal" forms such as gerunds and participles.[24]
Besides its nonfinite verbal uses as a gerund or present participle, the -ing form of a verb is also used as a deverbal noun, denoting an activity or occurrence in general, or a specific action or event (or sometimes a more distant meaning, such as building or piping denoting an object or system of objects). One can compare the construction and meaning of noun phrases formed using the -ing form as a gerund, and of those formed using the same -ing form as a deverbal noun. Some points are noted below:
Some -ing forms, particularly those such as boring, exciting, interesting, can also serve as deverbal adjectives (distinguished from the present participle in much the same way as the deverbal noun is distinguished from the gerund). There are also many other nouns and adjectives derived from particular verbs, such as competition and competitive from the verb compete (as well as other types such as agent nouns). For more information see verbal noun, deverbal noun and deverbal adjective. For more on the distinction between the various uses of the -ing form of verbs, see -ing.
In sociolinguistics, hypercorrection is the nonstandard use of language that results from the overapplication of a perceived rule of language-usage prescription. A speaker or writer who produces a hypercorrection generally believes, through a misunderstanding of such rules, that the form or phrase they use is more "correct", standard, or otherwise preferable, often combined with a desire to appear formal or educated.[1][2]
Linguistic hypercorrection occurs when a real or imagined grammatical rule is applied in an inappropriate context, so that an attempt to be "correct" leads to an incorrect result. It does not occur when a speaker follows "a natural speech instinct", according to Otto Jespersen and Robert J. Menner.[3]
Hypercorrection can be found among speakers of less prestigious language varieties who attempt to produce forms associated with high-prestige varieties, even in situations where speakers of those varieties would not. Some commentators call such production hyperurbanism.[4]
Hypercorrection can occur in many languages and wherever multiple languages or language varieties are in contact.
Studies in sociolinguistics and applied linguistics have noted the overapplication of rules of phonology, syntax, or morphology, resulting either from different rules in varieties of the same language or second-language learning. An example of a common hypercorrection based on application of the rules of a second (i.e., new, foreign) language is the use of octopi for the plural of octopus in English; this is based on the faulty assumption that octopus is a second-declension word of Latin origin when in fact it is third declension and comes from Greek.[5][better source needed]
Sociolinguists often note hypercorrection in terms of pronunciation (phonology). For example, William Labov noted that all of the English speakers he studied in New York City in the 1960s tended to pronounce words such as hard as rhotic (pronouncing the "R" as /hɑːrd/ rather than /hɑːd/) more often when speaking carefully. Furthermore, middle-class speakers had more rhotic pronunciation than working-class speakers did.
However, lower-middle class speakers had more rhotic pronunciation than upper-middle class speakers. Labov suggested that these lower-middle class speakers were attempting to emulate the pronunciation of upper-middle class speakers, but were actually over-producing the very noticeable R-sound.[6]
A common source of hypercorrection in English speakers' use of the language's morphology and syntax happens in the use of pronouns (see § Personal pronouns).[4]
Hypercorrection can also occur when learners of a new-to-them (second, foreign) language try to avoid applying grammatical rules from their native language to the new language (a situation known as language transfer). The effect can occur, for example, when a student of a new language has learned that certain sounds of their original language must usually be replaced by another in the studied language, but has not learned when not to replace them.[7]
In addition, the special case of a pseudo-hypercorrection has been identified where standard usage is at issue, but accidentally, i.e., where a speaker luckily produces a correct result.[8]
English has no authoritative body or language academy codifying norms for standard usage, unlike some other languages. Nonetheless, within groups of users of English, certain usages are considered unduly elaborate adherences to formal rules. Such speech or writing is sometimes called hyperurbanism, defined by Kingsley Amis as an "indulged desire to be posher than posh".[citation needed]
In 2004, Jack Lynch, assistant professor of English at Rutgers University, said on Voice of America that the correction of the subject-positioned "you and me" to "you and I" leads people to "internalize the rule that 'you and I' is somehow more proper, and they end up using it in places where they should not – such as 'he gave it to you and I' when it should be 'he gave it to you and me.'"[9]
However, the linguists Rodney Huddleston and Geoffrey K. Pullum write that utterances such as "They invited Sandy and I" are "heard constantly in the conversation of people whose status as speakers of Standard English is clear" and that "[t]hose who condemn it simply assume that the case of a pronoun in a coordination must be the same as when it stands alone. Actual usage is in conflict with this assumption."[10]
Some British accents, such as Cockney, drop the initial h from words; e.g., have becomes 'ave. A hypercorrection associated with this is H-adding, adding an initial h to a word which would not normally have one. An example of this can be found in the speech of the character Parker in the marionette TV series Thunderbirds, e.g., "We'll 'ave the haristocrats 'ere soon" (from the episode "Vault of Death"). Parker's speech was based on a real person the creators encountered at a restaurant in Cookham.[11]
The same, for the same reason, is often heard when a person of Italian origin speaks English: "I'm hangry hat Francesco", "I'd like to heat something". This need not be consistent with the h-dropping common in the Italian accent, so the same person may say "an edge-og" instead of "a hedgehog", or just say it correctly.[12]
Hyperforeignism arises from speakers misidentifying the distribution of a pattern found in loanwords and extending it to other environments. The result of this process does not reflect the rules of either language.[13] For example, habanero is sometimes pronounced as though it were spelled "habañero", in imitation of other Spanish words like jalapeño and piñata.[14] Machismo is sometimes pronounced "makizmo", apparently as if it were Italian, rather than with the phonetic English pronunciation which resembles the original Spanish word, /mɑːˈtʃiz.mo/. Similarly, the z in chorizo is sometimes pronounced as /ts/ (as if it were Italian), whereas the original Spanish pronunciation has /θ/ or /s/.
Some English–Spanish cognates primarily differ by beginning with s instead of es, such as the English word spectacular and the Spanish word espectacular. A native Spanish speaker may conscientiously hypercorrect for the word escape by writing or saying scape, or for the word establish by writing or saying stablish, which is archaic, or an informal pronunciation in some dialects.[15]
As the locative case is rarely found in vernacular usage in the southern and eastern dialects of Serbia, and the accusative is used instead, speakers tend to overcorrect when trying to deploy the standard variety of the language on more formal occasions, thus using the locative even when the accusative should be used (typically, when indicating direction rather than location): "Izlazim na kolovozu" instead of "izlazim na kolovoz".[18]
Ghil'ad Zuckermann argues that the following hypercorrect pronunciations in Israeli Hebrew are "snobbatives" (from snob + -ative, modelled upon comparatives and superlatives):[19]
The last two hypercorrection examples derive from a confusion related to the Qamatz Gadol Hebrew vowel, which in the accepted Sephardi Hebrew pronunciation is rendered as /aː/ but which is pronounced /ɔ/ in Ashkenazi Hebrew, and in Hebrew words that also occur in Yiddish. However, the Qamatz Qaṭan vowel, which is visually indistinguishable from the Qamatz Gadol vowel, is rendered as /o/ in both pronunciations. This leads to hypercorrections in both directions.
Other hypercorrections occur when speakers of Israeli Hebrew (which is based on Sephardic) attempt to pronounce Ashkenazi Hebrew, for example for religious purposes. The month of Shevat (שבט) is mistakenly pronounced Shvas, as if it were spelled *שְׁבַת. In an attempt to imitate Polish and Lithuanian dialects, qamatz (both gadol and qatan), which would normally be pronounced [ɔ], is hypercorrected to the pronunciation of holam, [ɔj], rendering גדול ('large') as goydl and ברוך ('blessed') as boyrukh.
In some Spanish dialects, the final intervocalic /d/ ([ð]) is dropped, such as in pescado (fish), which would typically be pronounced [pesˈkaðo] but can be manifested as [pesˈkao] dialectally. Speakers sensitive to this variation may insert a /d/ intervocalically into a word without such a consonant, such as in the case of bacalao (cod), correctly pronounced [bakaˈlao] but occasionally hypercorrected to [bakaˈlaðo].[20]
Outside Spain and in Andalusia, the phonemes /θ/ and /s/ have merged, mostly into the realization [s], but ceceo, i.e. the pronunciation of both as [s̟], is found in some areas as well, primarily parts of Andalusia. Speakers of varieties that have [s] in all cases will frequently produce [θ] even in places where peninsular Spanish has [s] when trying to imitate a peninsular accent. As Spanish orthography distinguishes the two phonemes in all varieties, but the pronunciation is not differentiated in Latin American varieties, some speakers also get mixed up with the spelling.
Many Spanish dialects tend to aspirate syllable-final /s/, and some even elide it often. Since this phenomenon is somewhat stigmatized, some speakers in the Caribbean and especially the Dominican Republic may attempt to correct for it by pronouncing an /s/ where it does not belong. For example, catorce años '14 years' may be pronounced as catorces año.[21]
The East Franconian dialects are notable for lenition of the stops /p/, /t/, /k/ to [b], [d], [g]. Thus, a common hypercorrection is the fortition of properly lenis stops, sometimes including aspiration, as evidenced by the speech of Günther Beckstein.
The digraph ⟨ig⟩ in word-final position is pronounced [ɪç] per the Bühnendeutsch standard, but this pronunciation is frequently perceived as nonstandard and instead realized as [ɪɡ̊] or [ɪk] (final obstruent devoicing), even by speakers from dialect areas that pronounce the digraph [ɪç] or [ɪʃ].
Palatinate German speakers are among those who pronounce both the digraph ⟨ch⟩ and the trigraph ⟨sch⟩ as [ʃ]. A common hypercorrection is to produce [ç] even where standard German has [ʃ], such as in Helmut Kohl's hypercorrect rendering of "Geschichte", the German word for "history", with [ç] both for the ⟨sch⟩ (standard German [ʃ]) and the ⟨ch⟩.
Proper names and German loanwords into other languages that have been reborrowed, particularly when they have gone through or are perceived to have gone through the English language, are often pronounced "hyperforeign". Examples include "Hamburger" or the names of German-Americans and the companies named after them, even if they were or are first-generation immigrants.
Some German speakers pronounce the metal umlaut as if it were a "normal" German umlaut. For example, when Mötley Crüe visited Germany, singer Vince Neil said the band could not figure out why "the crowds were chanting, 'Mutley Cruh! Mutley Cruh!'"[22]
In Swedish, the word att is sometimes pronounced /ɔ/ when used as an infinitive marker (its conjunction homograph is never pronounced that way, however). The conjunction och is also sometimes pronounced the same way. Both pronunciations can informally be spelt å ("Jag älskar å fiska å jag tycker också om å baka."). When spelt more formally, the infinitive marker /ɔ/ is sometimes misspelt och. (*"Få mig och hitta tillbaka.")
The third-person plural pronoun, pronounced dom in many dialects, is formally spelt de in the subjective case and dem in the objective case. Informally it can be spelled dom ("Dom tycker om mig."), yet dom is only acceptable in spoken language.[23] When spelt more formally, the two are often confused with each other ("De tycker om mig." is the correct form here, while *"Dem tycker om mig." is incorrect). As an object form, dem would be correct in the sentence "Jag ger dem en present." ("I give them a gift.")
In logic and formal semantics, term logic, also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to formal logic that began with Aristotle and was developed further in ancient history mostly by his followers, the Peripatetics. It was revived after the third century CE by Porphyry's Isagoge.
Term logic revived in medieval times, first in Islamic logic by Alpharabius in the tenth century, and later in Christian Europe in the twelfth century with the advent of new logic, remaining dominant until the advent of predicate logic in the late nineteenth century.
However, even if eclipsed by newer logical systems, term logic still plays a significant role in the study of logic. Rather than radically breaking with term logic, modern logics typically expand it.
Aristotle's logical work is collected in the six texts that are collectively known as the Organon. Two of these texts in particular, namely the Prior Analytics and On Interpretation, contain the heart of Aristotle's treatment of judgements and formal inference, and it is principally this part of Aristotle's works that is about term logic. Modern work on Aristotle's logic builds on the tradition started in 1951 with the establishment by Jan Łukasiewicz of a revolutionary paradigm.[1] Łukasiewicz's approach was reinvigorated in the early 1970s by John Corcoran and Timothy Smiley, which informs the modern translations of the Prior Analytics by Robin Smith in 1989 and Gisela Striker in 2009.[2]
The Prior Analytics represents the first formal study of logic, where logic is understood as the study of arguments. An argument is a series of true or false statements which lead to a true or false conclusion.[3] In the Prior Analytics, Aristotle identifies valid and invalid forms of arguments called syllogisms. A syllogism is an argument that consists of at least three sentences: at least two premises and a conclusion. Although Aristotle does not call them "categorical sentences", tradition does; he deals with them briefly in the Analytics and more extensively in On Interpretation.[4] Each proposition (a statement that is a thought of the kind expressible by a declarative sentence)[5] of a syllogism is a categorical sentence which has a subject and a predicate connected by a verb. The usual way of connecting the subject and predicate of a categorical sentence, as Aristotle does in On Interpretation, is by using a linking verb, e.g. P is S. However, in the Prior Analytics Aristotle rejects the usual form in favour of three of his inventions:
Aristotle does not explain why he introduces these innovative expressions, but scholars conjecture that the reason may have been that it facilitates the use of letters instead of terms, avoiding the ambiguity that results in Greek when letters are used with the linking verb.[6] In his formulation of syllogistic propositions, instead of the copula ("All/some... are/are not..."), Aristotle uses the expression "... belongs to/does not belong to all/some..." or "... is said/is not said of all/some..."[7] There are four different types of categorical sentences: universal affirmative (A), universal negative (E), particular affirmative (I) and particular negative (O).
A method of symbolization that originated and was used in the Middle Ages greatly simplifies the study of the Prior Analytics.
Following this tradition then, let:
Categorical sentences may then be abbreviated as follows:
From the viewpoint of modern logic, only a few types of sentences can be represented in this way.[8]
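One modern, set-theoretic gloss of the four abbreviated categorical forms can be sketched in a few lines of Python. This is an illustration, not Aristotle's own semantics, and the function names a, e, i, o are simply the traditional vowel labels; note that on this reading the universal forms are vacuously true of an empty subject term, which is where the question of existential import (discussed below) arises.

```python
# A modern set-theoretic reading of the four categorical sentence types,
# with AaB = "A belongs to all B" (every B is A), AeB = "A belongs to no B",
# AiB = "A belongs to some B", AoB = "A does not belong to some B".
def a(A, B):
    return B <= A           # universal affirmative: every B is A

def e(A, B):
    return not (A & B)      # universal negative: no B is A

def i(A, B):
    return bool(A & B)      # particular affirmative: some B is A

def o(A, B):
    return bool(B - A)      # particular negative: some B is not A

# On this reading A and O (and likewise E and I) come out as contradictories,
# as in the traditional square of opposition.
```

For example, with animal = {"sparrow", "cat"} and bird = {"sparrow"}, the universal affirmative a(animal, bird) holds while the particular negative o(animal, bird) fails.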
The fundamental assumption behind the theory is that the formal model of a proposition is composed of two logical symbols called terms – hence the name "two-term theory" or "term logic" – and that the reasoning process is in turn built from propositions:
A proposition may be universal or particular, and it may be affirmative or negative. Traditionally, the four kinds of propositions are:
This was called the fourfold scheme of propositions (see types of syllogism for an explanation of the letters A, I, E, and O in the traditional square). Aristotle's original square of opposition, however, does not lack existential import.
A term (Greek ὅρος horos) is the basic component of the proposition. The original meaning of horos (and also of the Latin terminus) is "extreme" or "boundary". The two terms lie on the outside of the proposition, joined by the act of affirmation or denial.
For early modern logicians like Arnauld (whose Port-Royal Logic was the best-known text of his day), it is a psychological entity like an "idea" or "concept". Mill considers it a word. To assert "all Greeks are men" is not to say that the concept of Greeks is the concept of men, or that the word "Greeks" is the word "men". A proposition cannot be built from real things or ideas, but it is not just meaningless words either.
In term logic, a "proposition" is simply a form of language: a particular kind of sentence, in which the subject and predicate are combined, so as to assert something true or false. It is not a thought, nor an abstract entity. The word "propositio" is from the Latin, meaning the first premise of a syllogism. Aristotle uses the word premise (protasis) as a sentence affirming or denying one thing or another (Posterior Analytics 1. 1 24a 16), so a premise is also a form of words.
However, as in modern philosophical logic, it means that which is asserted by the sentence. Writers before Frege and Russell, such as Bradley, sometimes spoke of the "judgment" as something distinct from a sentence, but this is not quite the same. As a further confusion, the word "sentence" derives from the Latin, meaning an opinion or judgment, and so is equivalent to "proposition".
The logical quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus every philosopher is mortal is affirmative, since the mortality of philosophers is affirmed universally, whereas no philosopher is mortal is negative by denying such mortality in particular.
The quantity of a proposition is whether it is universal (the predicate is affirmed or denied of all subjects or of "the whole") or particular (the predicate is affirmed or denied of some subject or a "part" thereof). In cases where existential import is assumed, quantification implies the existence of at least one subject, unless disclaimed.
For Aristotle, the distinction between singular[citation needed] and universal is a fundamental metaphysical one, and not merely grammatical. A singular term for Aristotle is primary substance, which can only be predicated of itself: (this) "Callias" or (this) "Socrates" are not predicable of any other thing, thus one does not say every Socrates as one says every human (De Int. 7; Meta. D9, 1018a4). It may feature as a grammatical predicate, as in the sentence "the person coming this way is Callias". But it is still a logical subject.
He contrasts universal (katholou)[9] secondary substance, genera, with primary substance, particular (kath' hekaston)[9][10] specimens. The formal nature of universals, in so far as they can be generalized "always, or for the most part", is the subject matter of both scientific study and formal logic.[11]
The essential feature of the syllogism is that, of the four terms in the two premises, one must occur twice. Thus
The subject of one premise must be the predicate of the other, and so it is necessary to eliminate from the logic any terms which cannot function both as subject and predicate, namely singular terms.
However, in a popular 17th-century version of the syllogism, the Port-Royal Logic, singular terms were treated as universals:[12]
This is clearly awkward, a weakness exploited by Frege in his devastating attack on the system.
The famous syllogism "Socrates is a man ..." is frequently quoted as though from Aristotle,[13] but in fact it is nowhere in the Organon. Sextus Empiricus in his Hyp. Pyrrh. (Outlines of Pyrrhonism) ii. 164 first mentions the related syllogism "Socrates is a human being, Every human being is an animal, Therefore, Socrates is an animal."
Depending on the position of the middle term, Aristotle divides the syllogism into three kinds: syllogism in the first, second, and third figure.[14]If the Middle Term is subject of one premise and predicate of the other, the premises are in the First Figure. If the Middle Term is predicate of both premises, the premises are in the Second Figure. If the Middle Term is subject of both premises, the premises are in the Third Figure.[15]
Symbolically, the Three Figures may be represented as follows:[16]
In Aristotelian syllogistic (Prior Analytics, Bk I Caps 4–7), syllogisms are divided into three figures according to the position of the middle term in the two premises. The fourth figure, in which the middle term is the predicate in the major premise and the subject in the minor, was added by Aristotle's pupil Theophrastus and does not occur in Aristotle's work, although there is evidence that Aristotle knew of fourth-figure syllogisms.[17]
In the Prior Analytics translated by A. J. Jenkins, as it appears in volume 8 of the Great Books of the Western World, Aristotle says of the First Figure: "... If A is predicated of all B, and B of all C, A must be predicated of all C."[18] In the Prior Analytics translated by Robin Smith, Aristotle says of the first figure: "... For if A is predicated of every B and B of every C, it is necessary for A to be predicated of every C."[19]
Taking a = is predicated of all = is predicated of every, and using the symbolical method used in the Middle Ages, the first figure is simplified to:[20]
Or what amounts to the same thing:
When the four syllogistic propositions, a, e, i, o are placed in the first figure, Aristotle comes up with the following valid forms of deduction for the first figure:
In the Middle Ages, for mnemonic reasons, they were called "Barbara", "Celarent", "Darii" and "Ferio" respectively.[21]
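Under the same modern set-theoretic reading sketched earlier, the validity of these four first-figure moods can be confirmed by brute force over all subsets of a small universe. This is only an illustrative check (the helper names subsets and first_figure_valid are invented), not a proof within Aristotle's own system:

```python
from itertools import combinations

def subsets(universe):
    """All subsets of the given universe."""
    return [set(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

# Set reading of the categorical forms, as before:
def a(A, B): return B <= A          # AaB: every B is A
def e(A, B): return not (A & B)     # AeB: no B is A
def i(A, B): return bool(A & B)     # AiB: some B is A
def o(A, B): return bool(B - A)     # AoB: some B is not A

def first_figure_valid(major, minor, conclusion):
    """First figure: P-M major premise, M-S minor premise, P-S conclusion.
    Check every assignment of subsets of a three-element universe."""
    U = {1, 2, 3}
    return all(conclusion(P, S)
               for P in subsets(U) for M in subsets(U) for S in subsets(U)
               if major(P, M) and minor(M, S))
```

On this reading Barbara (a, a, a), Celarent (e, a, e), Darii (a, i, i) and Ferio (e, i, o) all come out valid, while, for instance, (a, a, i) fails: an empty S satisfies both universal premises vacuously but defeats the particular conclusion, the existential-import caveat noted above.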
The difference between the first figure and the other two figures is that the syllogism of the first figure is complete while that of the second and third is not. This is important in Aristotle's theory of the syllogism, for the first figure is axiomatic while the second and third require proof. The proof of the second and third figure always leads back to the first figure.[22]
In Robin Smith's English translation, Aristotle says: "... If M belongs to every N but to no X, then neither will N belong to any X. For if M belongs to no X, neither does X belong to any M; but M belonged to every N; therefore, X will belong to no N (for the first figure has again come about)."[23]
The above statement can be simplified by using the symbolical method used in the Middle Ages:
When the four syllogistic propositions, a, e, i, o are placed in the second figure, Aristotle comes up with the following valid forms of deduction for the second figure:
In the Middle Ages, for mnemonic reasons they were called respectively "Camestres", "Cesare", "Festino" and "Baroco".[24]
Aristotle says in the Prior Analytics, "... If one term belongs to all and another to none of the same thing, or if they both belong to all or none of it, I call such figure the third." Referring to universal terms, "... then when both P and R belongs to every S, it results of necessity that P will belong to some R."[25]
Simplifying:
When the four syllogistic propositions, a, e, i, o are placed in the third figure, Aristotle develops six more valid forms of deduction:
In the Middle Ages, for mnemonic reasons, these six forms were called respectively: "Darapti", "Felapton", "Disamis", "Datisi", "Bocardo" and "Ferison".[26]
Term logic began to decline in Europe during the Renaissance, when logicians like Rodolphus Agricola Phrisius (1444–1485) and Ramus (1515–1572) began to promote place logics. The logical tradition called Port-Royal Logic, or sometimes "traditional logic", saw propositions as combinations of ideas rather than of terms, but otherwise followed many of the conventions of term logic. It remained influential, especially in England, until the 19th century. Leibniz created a distinctive logical calculus, but nearly all of his work on logic remained unpublished and unremarked until Louis Couturat went through the Leibniz Nachlass around 1900, publishing his pioneering studies in logic.
19th-century attempts to algebraize logic, such as the work of Boole (1815–1864) and Venn (1834–1923), typically yielded systems highly influenced by the term-logic tradition. The first predicate logic was that of Frege's landmark Begriffsschrift (1879), little read before 1950, in part because of its eccentric notation. Modern predicate logic as we know it began in the 1880s with the writings of Charles Sanders Peirce, who influenced Peano (1858–1932) and, even more, Ernst Schröder (1841–1902). It reached fruition in the hands of Bertrand Russell and A. N. Whitehead, whose Principia Mathematica (1910–13) made use of a variant of Peano's predicate logic.
Term logic also survived to some extent in traditional Roman Catholic education, especially in seminaries. Medieval Catholic theology, especially the writings of Thomas Aquinas, had a powerfully Aristotelian cast, and thus term logic became a part of Catholic theological reasoning. For example, Joyce's Principles of Logic (1908; 3rd edition 1949), written for use in Catholic seminaries, made no mention of Frege or of Bertrand Russell.[28][page needed][need quotation to verify]
Some philosophers have complained that predicate logic:
Even academic philosophers entirely in the mainstream, such as Gareth Evans, have written as follows:
George Boole's unwavering acceptance of Aristotle's logic is emphasized by the historian of logic John Corcoran in an accessible introduction to Laws of Thought.[29] Corcoran also wrote a point-by-point comparison of Prior Analytics and Laws of Thought.[30] According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were "to go under, over, and beyond" Aristotle's logic by:
More specifically, Boole agreed with what Aristotle said; Boole's 'disagreements', if they might be called that, concern what Aristotle did not say. First, in the realm of foundations, Boole reduced the four propositional forms of Aristotle's logic to formulas in the form of equations, by itself a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic, another revolutionary idea, involved Boole's doctrine that Aristotle's rules of inference (the "perfect syllogisms") must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle".
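Boole's equational reading of the four categorical forms is commonly rendered as A: x(1−y)=0, E: xy=0, I: xy≠0, O: x(1−y)≠0. The sketch below illustrates this reading with 0/1 indicator values over a small finite universe; the set encoding and function names are illustrative assumptions, not Boole's own notation:

```python
# A sketch of Boole's equational reading of Aristotle's four forms, using
# 0/1 indicator values over a small finite universe. The set encoding and
# function names are illustrative assumptions, not Boole's own notation.
def a_form(X, Y, U):  # "All X are Y":     x(1 - y) = 0 at every element
    return all((e in X) * (1 - (e in Y)) == 0 for e in U)

def e_form(X, Y, U):  # "No X is Y":       xy = 0 at every element
    return all((e in X) * (e in Y) == 0 for e in U)

def i_form(X, Y, U):  # "Some X is Y":     xy != 0 at some element
    return any((e in X) * (e in Y) != 0 for e in U)

def o_form(X, Y, U):  # "Some X is not Y": x(1 - y) != 0 at some element
    return any((e in X) * (1 - (e in Y)) != 0 for e in U)

U = {1, 2, 3, 4}
squares, rectangles = {1, 2}, {1, 2, 3}
print(a_form(squares, rectangles, U))  # True: all squares are rectangles
print(e_form(squares, rectangles, U))  # False: some square is a rectangle
```

Treating the forms as equations over class symbols is exactly what lets equation-solving techniques operate on them, which is the second of Boole's innovations described above.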
The grammar of American Sign Language (ASL) has rules just like any other sign language or spoken language. ASL grammar studies date back to William Stokoe in the 1960s.[1][2] The language is built on parameters that determine many other grammar rules. Typical word structure in ASL conforms to the SVO/OSV and topic-comment forms, supplemented by a noun-adjective order and time-sequenced ordering of clauses. ASL has large CP and DP syntax systems, and lacks many of the conjunctions found in other languages.
ASL morphology consists of two different processes: derivational morphology and inflectional morphology.[3]
Derivational morphology in ASL occurs when movement in a sign changes the meaning, often between a noun and a verb.[4] For example, for the sign CHAIR, a noun, a person would tap their dominant pointer and middle fingers against their non-dominant pointer and middle fingers twice or more. For the sign SIT, a verb, a person would tap these fingers together only once and with more force.
Inflectional morphology adds units of language to other words.[3] For example, this would be changing 'watch' to 'watches' or 'watching.' In ASL, the sign may remain unchanged as WATCH, or the meaning may change based on NMMs (non-manual markers).
ASL morphology demonstrates reduplication and indexicality as well.
Compounding is used to derive new words in ASL, which often differ in meaning from their constituent signs.[5] For example, the signs FACE and STRONG compound to create a new sign FACE^STRONG, meaning 'to resemble'.[5] Compounds undergo the phonetic process of "hold deletion", whereby the holds at the end of the first constituent and the beginning of the second are elided:[5]
Many ASL nouns are derived from verbs.[6] This may be done either by reduplicating the movement of the verb if the verb has a single movement, or by restraining (making smaller and faster) the movement of the verb if it already has repeated movement.[7] For example, the noun CHAIR is derived from the verb SIT through reduplication.[7] Another productive method is available for deriving nouns from non-stative verbs.[8] This form of derivation modifies the verb's movement, reduplicating it in a "trilled" manner ("small, quick, stiff movements").[8] For example, this method is used to derive the noun ACTING from the verb ACT.[8]
Characteristic adjectives, which refer to inherent states, may be derived from adjectives which refer to "incidental or temporary states".[9] Characteristic adjectives always use both hands, even if the source adjective only uses one, and they always have repeated, circular movement.[9] Additionally, if the source adjective was one-handed, the derived adjective has alternating movement.[9] "Trilling" may also be used productively to derive adjectives with an "ish" meaning, e.g. BLUE becomes BLUISH.[10]
ASL occasionally uses suffixation in derivation, but less often than in English.[10] Agent nouns may be derived from verbs by adding the suffix AGENT and deleting the final hold of the verb, e.g. TEACH+AGENT 'teacher'.[10] Superlatives are also formed by suffixation, e.g. SMART+MOST 'smartest'.[11]
Certain types of signs, for example those relating to time and age, may incorporate numbers by assimilating their handshape.[11] For example, the word WEEK has handshape /B/ with the weak hand and /1/ with the active hand; the active hand's handshape may be changed to the handshape of any number up to 9 to indicate that many weeks.[11]
There are about 20 non-manual modifiers in ASL, which are either adjectival or adverbial.[12] For example, the adverb 'th', realized as the tongue being placed between the teeth, means 'carelessly / lazily' when combined with a verb:[13]
JOHN WRITE LETTER
'John writes a letter.'

JOHN WRITE(th) LETTER
'John writes a letter carelessly.' (the 'th' marker is articulated during WRITE)
Mouthing is when an individual appears to be making speech sounds, and this is very important for fluent signing. It also has specific morphological uses. For example, one may sign 'man tall' to indicate 'the man is tall', but by mouthing the syllable cha while signing 'tall', the phrase becomes 'that man is enormous!'
There are other ways of modifying a verb or adjective to make it more intense. These are all more or less equivalent to adding the word "very" in English; which morphology is used depends on the word being modified. Certain words which are short in English, such as 'sad' and 'mad', are sometimes fingerspelled rather than signed to mean 'very sad' and 'very mad'. However, the concept of 'very sad' or 'very mad' can be portrayed with the use of exaggerated body movements and facial expressions. Reduplication of the signs may also occur to emphasize the degree of the statement. Some signs are produced with an exaggeratedly large motion, so that they take up more sign space than normal. This may involve a back-and-forth scissoring motion of the arms to indicate that the sign ought to be yet larger, but that one is physically incapable of making it big enough. Many other signs are given a slow, tense production. The fact that this modulation is morphological rather than merely mimetic can be seen in the sign for 'fast': both 'very slow' and 'very fast' are signed by making the motion either unusually slow or unusually fast compared with the citation forms of 'slow' and 'fast', not exclusively by making it slower for 'very slow' and faster for 'very fast'.
Reduplication is morphological repetition, and it is extremely common in ASL. Generally the motion of the sign is shortened as well as repeated. Nouns may be derived from verbs through reduplication. For example, the noun 'clothes' is formed from the verb 'to wear' (signed by brushing open 5 hands down the chest once) by repeating it with a reduced degree of motion. Similar relationships exist between 'acquisition' and 'to get', 'airplane' and 'to fly (on an airplane)', and 'window' and 'to open/close a window'. Reduplication is commonly used to express intensity as well as several verbal aspects (see below). It is also used to derive signs such as 'every two weeks' from 'two weeks', and is used for verbal number (see below), where the reduplication is iconic for the repetitive meaning of the sign.
Many ASL words are historically compounds. However, the two elements of these signs have fused, with features being lost from one or both, to create what might be better called a blend than a compound. Typically only the final hold (see above) remains from the first element, and any reduplication is lost from the second.
An example is the verb AGREE, which derives from the two signs THINK and ALIKE. The verb THINK is signed by bringing a 1 hand inward and touching the forehead (a move and a hold). ALIKE is signed by holding two 1 hands parallel, pointing outward, and bringing them together two or three times. The compound/blend AGREE starts as THINK ends: with the index finger touching the forehead (the final hold of that sign). In addition, the weak hand is already in place, in anticipation of the next part of the sign. Then the hand at the forehead is brought down parallel to the weak hand; it approaches but does not make actual contact, and there is no repetition.
ASL, like other mature signed languages, makes extensive use of morphology.[14] Many of ASL's affixes are combined simultaneously rather than sequentially. For example, Ted Supalla's seminal work on ASL verbs of motion revealed that these signs consist of many different affixes, articulated simultaneously according to complex grammatical constraints.[15] This differs from the concatenative morphology of many spoken languages, which, except for suprasegmental features such as tone, are tightly constrained by the sequential nature of voice sounds.
ASL does have a limited number of concatenative affixes. For example, the agentive suffix (similar to the English '-er') is made by placing two B or 5 hands in front of the torso, palms facing each other, and lowering them. On its own this sign means 'person'; in a compound sign following a verb, it is a suffix for the performer of the action, as in 'drive-er' and 'teach-er'. However, it cannot generally be used to translate English '-er', as it is used with a much more limited set of verbs. It is very similar to the '-ulo' suffix in Esperanto, meaning 'person' by itself and '-related person' when combined with other words.
An ASL prefix (touching the chin) is used with number signs to indicate 'years old'. The prefix completely assimilates with the initial handshape of the number. For instance, 'fourteen' is signed with a B hand that bends several times at the knuckles. The chin-touch prefix in 'fourteen years old' is thus also made with a B hand. For 'three years old', however, the prefix is made with a 3 hand.
Rather than relying on sequential affixes, ASL makes heavy use of simultaneous modification of signs. One example of this is found in the aspectual system (see below); another is numeral incorporation: there are several families of two-handed signs which require one of the hands to take the handshape of a numeral. Many of these deal with time. For example, drawing the dominant hand lengthwise across the palm and fingers of a flat B hand indicates a number of weeks; the dominant hand takes the form of a numeral from one to nine to specify how many weeks. There are analogous signs for 'weeks ago' and 'weeks from now', etc., though in practice several of these signs are only found with the lower numerals.
ASL also has a system of classifiers which may be incorporated into signs.[16] A fist may represent an inactive object such as a rock (this is the default or neutral classifier), a horizontal ILY hand may represent an aircraft, a horizontal 3 hand (thumb pointing up and slightly forward) a motor vehicle, an upright G hand a person on foot, an upright V hand a pair of people on foot, and so on through higher numbers of people. These classifiers are moved through sign space to iconically represent the actions of their referents. For example, an ILY hand may 'lift off' or 'land on' a horizontal B hand to sign an aircraft taking off or landing; a 3 hand may be brought down on a B hand to sign parking a car; and a G hand may be brought toward a V hand to represent one person approaching two.
The frequency of classifier use depends greatly on genre, occurring at a rate of 17.7% in narratives but only 1.1% in casual speech and 0.9% in formal speech.[17]
Frames are a morphological device that may be unique to sign languages (Liddell 2004). They are incomplete sets of the features which make up signs, and they combine with existing signs, absorbing features from them to form a derived sign. It is the frame which specifies the number and nature of segments in the resulting sign, while the basic signs it combines with lose all but one or two of their original features.
One, the WEEKLY frame, consists of a simple downward movement. It combines with the signs for the days of the week, which then lose their inherent movement. For example, 'Monday' consists of an M/O hand made with a circling movement. 'Monday-WEEKLY' (that is, 'on Mondays') is therefore signed as an M/O hand that drops downward, but without the circling movement. A similar ALL-DAY frame (a sideward pan) combines with times of the day, such as 'morning' and 'afternoon', which likewise keep their handshape and location but lose their original movement. Numeral incorporation (see above) also uses frames. However, in ASL frames are most productively utilized for verbal aspect.
While there is no grammatical tense in ASL, there are numerous verbal aspects. These are produced by modulating the verb: through reduplication, by placing the verb in an aspectual frame (see above), or with a combination of these means.
An example of an aspectual frame is the unrealized inceptive aspect ('just about to X'), illustrated here with the verb 'to tell'. 'To tell' is an indexical (directional) verb, where the index finger (a G hand) begins with a touch to the chin and then moves outward to point out the recipient of the telling. 'To be just about to tell' retains just the locus and the initial chin touch, which now becomes the final hold of the sign; all other features from the basic verb (in this case, the outward motion and pointing) are dropped and replaced by features from the frame (which are shared with the unrealized inceptive aspects of other verbs such as 'look at', 'wash the dishes', 'yell', 'flirt', etc.). These frame features are: eye gaze toward the locus (which is no longer pointed at with the hand), an open jaw, and a hand (or hands, in the case of two-hand verbs) in front of the trunk which moves in an arc to the onset location of the basic verb (in this case, touching the chin), while the trunk rotates and the signer inhales, catching her breath during the final hold. The hand shape throughout the sign is whichever is required by the final hold, in this case a G hand.
The variety of aspects in ASL can be illustrated by the verb 'to be sick', which involves the middle finger of the Y/8 hand touching the forehead, and which can be modified by a large number of frames. Several of these involve reduplication, which may but need not be analyzed as part of the frame. (The appropriate non-manual features are not described here.)
These modulations readily combine with each other to create yet finer distinctions. Not all verbs take all aspects, and the forms they do take will not necessarily be completely analogous to the verb illustrated here. Conversely, not all aspects are possible with this one verb.
Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for 'the dog chewed on a bone', or 'she look-at me' for 'she looked at me', you cannot do the same in the durative to mean 'the dog gnawed on the bone' or 'she stared at me'. Instead, you must use other strategies, such as a topic construction (see below), to avoid having an object for the verb.
Reduplication is also used for expressing verbal number. Verbal number indicates that the action of the verb is repeated; in the case of ASL it is apparently limited to transitive verbs, where the motion of the verb is either extended or repeated to cover multiple object or recipient loci. (Simple plurality of action can also be conveyed with reduplication, but without indexing any object loci; in fact, such aspectual forms do not allow objects, as noted above.) There are specific dual forms (and for some signers trial forms), as well as plurals. With dual objects, the motion of the verb may be made twice with one hand, or simultaneously with both; while with plurals the object loci may be taken as a group by using a single sweep of the signing hand while the verbal motion is being performed, or individuated by iterating the move across the sweep. For example, 'to ask someone a question' is signed by flexing the index finger of an upright G hand in the direction of that person; the dual involves flexing it at both object loci (sequentially with one hand or simultaneously with both), the simple plural involves a single flexing which spans the object group while the hand arcs across it, and the individuated plural involves multiple rapid flexings while the hand arcs. If the singular verb uses reduplication, that is lost in the dual and plural forms.
There are three types of personal name signs in ASL: fingerspelled, arbitrary, and descriptive. Fingerspelled names are simply spelled out letter by letter. Arbitrary name signs only refer to a person's name, while descriptive name signs refer to a person's personality or physical characteristics.[18] Once given, name signs are for life, apart from changing from one of the latter types to an arbitrary sign in childhood.[citation needed][19] Name signs are usually assigned by another member of the Deaf community, and signal inclusion in that community. Name signs are not used to address people, as names are in English, but are used only for third-person reference, and usually only when the person is absent.[20]
The majority of people, probably well in excess of 90%, have arbitrary name signs. These are initialized signs: the hand shape is the initial of one of the English names of the person, usually the first.[21] The sign may occur in neutral space, with a tremble; with a double-tap (as a noun) at one of a limited number of specific locations, such as the side of the chin, the temple, or the elbow;[22] or moving across a location or between two locations, with a single tap at each.[23] Single-location signs are simpler in connotation, like English "Vee"; double-location signs are fancier, like English "Veronica". Sam Supalla (1992) collected 525 simple arbitrary name signs like these.
There are two constraints on arbitrary name signs. First, the sign should not mean anything; that is, it should not duplicate an existing ASL word.[24] Second, there should not be more than one person with the name sign in the local community. If a person moves to a new community where someone already has their name sign, then the newcomer is obligated to modify theirs[dubious–discuss]. This is usually accomplished by compounding the hand shape, so that the first tap of the sign takes the initial of the person's first English name, and the second tap takes the initial of their last name. There are potentially thousands of such compound-initial signs.
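The uniqueness constraint and its compounding repair can be sketched as a small allocation procedure. Everything here is an illustrative assumption: the (initial, location) representation, the function name, and the example names are invented for the sketch, and the first constraint (not duplicating an existing ASL word) is not modelled:

```python
# Toy sketch of the second constraint on arbitrary name signs: no two
# people in a community share a sign, with initial-compounding on a clash.
# The (initial, location) encoding and names are illustrative assumptions.
def make_name_sign(first_initial, last_initial, location, taken):
    """Return a unique name sign, compounding initials on a clash."""
    sign = (first_initial, location)  # simple single-initial sign
    if sign in taken:
        # Someone already has it: compound first and last initials.
        sign = (first_initial + last_initial, location)
    taken.add(sign)
    return sign

community = set()
print(make_name_sign("V", "K", "chin", community))  # ('V', 'chin')
print(make_name_sign("V", "R", "chin", community))  # clash: ('VR', 'chin')
```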
Descriptive name signs are not initialized, but rather use non-core ASL signs; they refer to a person's appearance or personality.[citation needed] They tend to be assigned and used by children, rather like "Blinky" in English. Parents do not give such names to their children, but most Deaf people do not have deaf parents and are assigned their name sign by classmates in their first school for the deaf. At most 10% of Deaf people retain such name signs into adulthood.[citation needed] Arbitrary name signs became established very early in the history of ASL.
The two systems, arbitrary and descriptive, are sometimes combined, usually for humorous purposes. Hearing people learning ASL are also often assigned combined name signs. This is not traditional for Deaf people. Sometimes people with very short English names, such as "Ann" or "Lee", or ones that flow easily, such as "Larry", may never acquire a name sign, but may instead be referred to with finger-spelling.
There are five official parameters in American Sign Language that dictate meaning and grammar.[25] There is also a sixth, honorary parameter known as proximalization.[26]
Signs can share several of the same parameters. The difference in at least one parameter accounts for the difference in meaning or grammar.[27]
Handshapes in ASL consist of the fingerspelling alphabet (A–Z) as well as other variations.[25] For BROWN and TAN, the location, movement, and palm orientation are the same, but the handshape differs: B for BROWN and T for TAN.
The ASL handshape parameter contains over 55 handshapes, more than double the number of letters in the Latin-script alphabet.[28] Some of the differences between these handshapes are small. These handshapes play into morphology, with meaning changing based on minuscule details.
Depending on the handshape, a different grammatical meaning can be portrayed.
The palm orientation in ASL refers to which direction the hand's center faces during a sign. There are several documented possibilities for palm orientation:[25]
These differences result in different semantic and structural meanings. The signs MAYBE and BALANCE have all of the same parameters except for palm orientation, resulting in different meanings.[28]
The movement parameter determines how and where the hand moves for a particular sign.[28] The hand can move up and down, forward and backward, in a circular motion, in a tapping rhythm, or in many other ways. Like all other parameters, hand movement determines ASL grammar and meaning.
This parameter is also where derivational morphology in ASL is most noticeable.[4] Words change between nouns and verbs depending on the movement of the hand. Some examples of these differences are below:
The location parameter is the space in which the hands reside for a certain sign. This space is measured relative to the signer's body and lies within the signing space.
A sign may move from one location to another from beginning to end.[25] Signs in ASL are fluid and are not always static in one location.
Some common locations for signing are:
This list is non-exhaustive but a good indicator of where many signs reside.
Location changes word and sentence meaning, just like all other parameters.[29] APPLE and ONION have the same handshape, but different locations along the side of the face.
Non-manual markers (NMM), or non-manual signals (NMS), are communication methods not found in the hands. They typically consist of facial expressions and body language. NMMs can change the grammar of a sentence. Not every sign uses non-manual markers, but for many others, these markers determine what sign is being produced.[25]
Some examples of non-manual markers would be the shifting of shoulders, the lowering or raising of eyebrows, a head nod or shake, scrunching of the nose, pursing of the lips, or an open mouth.[30]
NMMs are important for indicating whether a question is being asked. For WH-questions, the eyebrows are lowered, and for YES/NO questions, the eyebrows are raised.[31] It is hard to indicate that a question is being asked without these facial indicators.
Another example is the difference between I UNDERSTAND and I DON'T UNDERSTAND. The manual sign is the same: the hand is held up near the forehead with the palm facing the self, and the pointer finger is then extended. If the person is nodding with their eyebrows raised, it means I UNDERSTAND. If they are shaking their head with eyebrows lowered, it means I DON'T UNDERSTAND.
This parameter is a linguistic feature only found in infants with ASL as a primary language. Babies learn and acquire more motor skills in the arms, shoulders, knuckles, and fingers thanks to the early acquisition of signed language.[27]
Due to the exclusivity of this characteristic, as well as being a physical attribute rather than a performed one, this parameter is not widely talked about. It is mostly shared between members of the Deaf community.[32]
Understanding ASL grammar requires understanding the difference between a signer's dominant and non-dominant hand. If a person is right-handed, then their right hand is their dominant hand, and their left hand is their non-dominant hand. Almost all signs are completed with the more active, dominant hand, while the non-dominant hand serves as a base. For signs requiring two hands, the dominant hand performs more of the active component.[30]
ASL is a subject-verb-object (SVO) language.[33] However, OSV is also an accepted sentence form.[27]
The default SVO word order is sometimes altered by processes including topicalization and null elements.[34] This is marked either with non-manual signals like eyebrow or body position, or with prosodic marking such as pausing.[33] These non-manual grammatical markings (such as eyebrow movement or head-shaking) may optionally spread over the c-command domain of the node which it is attached to.[35] However, ASL is a pro-drop language, and when the manual sign that a non-manual grammatical marking is attached to is omitted, the non-manual marking obligatorily spreads over the c-command domain.[36]
The full sentence structure in ASL is [topic] [subject] verb [object] [subject-pronoun-tag]. Topics and tags are both indicated with non-manual features, and both give a great deal of flexibility to ASL word order.[37] Within a noun phrase, the word order is noun-number and noun-adjective.
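The clause template above can be sketched as a small gloss-assembly function. The function itself and its bracket notation are illustrative assumptions, chosen to match the gloss style used in this article's examples:

```python
# A minimal sketch of the ASL clause template
# [topic] [subject] verb [object] [subject-pronoun-tag].
# The function and gloss strings are illustrative assumptions.
def asl_clause(verb, topic=None, subject=None, obj=None, tag=None):
    parts = []
    if topic:
        parts.append(f"[{topic}]TOPIC")  # topic marked non-manually
    if subject:
        parts.append(subject)
    parts.append(verb)
    if obj:
        parts.append(obj)
    if tag:
        parts.append(tag)  # optional subject-pronoun tag
    return " ".join(parts)

print(asl_clause("CHASE", subject="DOG", obj="MY CAT"))
# prints: DOG CHASE MY CAT
print(asl_clause("CHASE", topic="MY CAT", subject="DOG"))
# prints: [MY CAT]TOPIC DOG CHASE
```

Because topic, object, and tag slots are all optional, the same template yields both the plain SVO order and the topicalized orders discussed later in the article.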
ASL does not have a copula (linking 'to be' verb).[38] For example:
MY HAIR WET
'my hair is wet'

[name my]TOPIC P-E-T-E
'my name is Pete'
In addition to its basic topic–comment structure, ASL typically places an adjective after a noun, though it may occur before the noun for stylistic purposes. Numerals also occur after the noun, a very rare pattern among oral languages.
DOG BROWN I HAVE
'I have a brown dog.'
Adverbs, however, occur before the verbs. Most of the time adverbs are simply the same sign as an adjective, distinguished by the context of the sentence.
HOUSE I QUIET ENTER
'I enter the house quietly.'
When the scope of the adverb is the entire clause, as in the case of time, it comes before the topic. This is the only thing which can appear before the topic in ASL: time–topic–comment.
9-HOUR MORNING STORE I GO
'I'm going to the store at 9:00AM.'
Modal verbs come after the main verb of the clause:
FOR YOU, STORE I GO CAN
'I can go to the store for you.'
ASL makes heavy use of time-sequenced ordering, meaning that events are signed in the order in which they occur. For example, for 'I was late to class last night because my boss handed me a huge stack of work after lunch yesterday', one would sign 'YESTERDAY LUNCH FINISH, BOSS GIVE-me WORK BIG-STACK, NIGHT CLASS LATE-me'. In stories, however, ordering is malleable, since one can choose to sequence the events either in the order in which they occurred or in the order in which one found out about them.
It has been claimed that tense in ASL is marked adverbially, and that ASL lacks a separate category of tense markers.[39] However, Aarons et al. (1992, 1995) argue that "Tense" (T) is indeed a distinct category of syntactic head, and that the T node can be occupied either by a modal (e.g. SHOULD) or a lexical tense marker (e.g. FUTURE-TENSE).[39] They support this claim by noting that only one such item can occupy the T slot:[40]
REUBEN CAN RENT VIDEO-TAPE
'Reuben can rent a video tape.'

REUBEN WILL RENT VIDEO-TAPE
'Reuben will rent a video tape.'

* REUBEN CAN WILL RENT VIDEO-TAPE
* 'Reuben can will rent a video tape.'
Aspect may be marked either by verbal inflection or by separate lexical items.[41]
These are ordered: Tense – Negation – Aspect – Verb:[42]
As noted above, in ASL aspectually marked verbs cannot take objects. To deal with this, the object must be known from context so that it does not need to be further specified. This is accomplished in two ways:
Of these two strategies, the first is the more common. For 'my friend was typing her term paper all night' to be expressed with a durative aspect, this would result in
my friend type T-E-R-M paper. type[DURATIVE] all-night
The less colloquial topic construction may come out as,
[my friend]TOPIC, [T-E-R-M paper]TOPIC, type[DURATIVE] all-night
A topic sets off background information that will be discussed in the following main clause. Topic constructions are not often used in standard English, but they are common in some dialects, as in,
That dog, I never could hunt him.
Topicalization is used productively in ASL and often results in surface forms that do not follow the basic SVO word order.[43] To mark topics non-manually, the eyebrows are raised and the head is tilted back during the production of a topic. The head is often lowered toward the end of the sign, and sometimes the sign is followed by rapid nodding of the head. A slight pause follows the topic, setting it off from the rest of the sentence:[44]
[MEAT]tm, I LIKE LAMB
'As for meat, I prefer lamb.'
Another way topics may be signed is by shifting the body. The signer may use the space on one side of his/her body to sign the topic, and then shifts to the other side for the rest of the sentence.[44]
ASL utterances do not require topics, but their use is extremely common. They are used for purposes of information flow, to set up referent loci (see above), and to supply objects for verbs which are grammatically prevented from taking objects themselves (see below).
Without a topic, 'the dog chased my cat' is signed:
DOG CHASE MY CAT
'The dog chased my cat.'
However, people tend to want to set up the object of their concern first and then discuss what happened to it. English does this with passive clauses: "my cat was chased by the dog." In ASL, topics are used with similar effect:
[MY CAT]tm DOG CHASE
lit. 'my cat, the dog chased it.'
If the word order of the main clause is changed, the meaning of the utterance also changes:
[MY CAT]tm CHASE DOG
'my cat chased the dog,' lit. 'my cat, it chased the dog.'
There are three types of non-manual topic markers, all of which involve raised eyebrows.[45] The three types are used with different types of topics and in different contexts, and the topic markings cannot spread over other elements in the utterance. Topics can be moved out of the main clause of an utterance (leaving a null position), or they can be base-generated and either be co-referential with the subject or object of the main clause or be related to the subject or object by a semantic property.[46]
The first type of non-manual marking, topic marking 1 (tm1), is only used with a moved topic.[47]Tm1 is characterized by raised eyebrows, widened eyes, and head tilted backwards. At the end of the sign the head moves down and there is a pause, often with an eye blink, before the sentence is continued.[48]The following is an example of a context in which the tm1 marking is used:
[MARY]tm1 JOHN LOVE
'Mary, John loves,' or 'John loves Mary'[49]
Topic marking 2 (tm2) and topic marking 3 (tm3) are both used with base-generated topics. Tm2 is characterized by raised eyebrows, widened eyes, and the head tilted backwards and to the side. Toward the end of the sign the head moves forward and to the opposite side, and there is a pause and often an eye blink before continuing.[50]For tm3 the eyebrows are raised and the eyes are opened wide, the head starts tilted down and jerks up and down, the lips are opened and raised, and the head is nodded rapidly a few times before pausing and continuing the sentence. Although both tm2 and tm3 accompany base-generated topics, they are used in different contexts. Tm2 is used to introduce new information and change the topic of a conversation to something that the signer is going to subsequently characterize, while tm3 is used to introduce new information that the signer believes is already known by his/her interlocutor.[51]Tm2 may be used with any base-generated topic, whereas only topics that are co-referential with an argument in the sentence may be marked with tm3.[52]
An example of a tm2 marking used with a topic related to the object of the main clause is:
[VEGETABLE]tm2, JOHN LIKE CORN
'As for vegetables, John likes corn.'[50]
An example of a tm3 marking used with a co-referential topic is:
[FRESH VEGETABLE]tm3, JOHN LIKE IX-3rd
'As for fresh vegetables, John likes them.'[53]
IX-3rd represents a 3rd person index.
Another example of a tm2 marking with a co-referential topic is:
[JOHNi]tm2, IX-3rdi LOVE MARY
'as for John, he loves Mary'[53]
An example of a tm3 topic marking is:
[JOHNi]tm3, IX-3rdi LOVE MARY
'(you know) John, he loves Mary'[54]
ASL sentences may have up to two marked topics.[45] Possible combinations of topic types are two tm2 topics, two tm3 topics, tm2 preceding tm1, tm3 preceding tm1, and tm2 preceding tm3. Sentences with these topic combinations in the opposite orders or with two tm1 topics are considered ungrammatical by native signers.[55]
Relative clauses are signaled by tilting back the head and raising the eyebrows and upper lip. This is done during the performance of the entire clause. There is no change in word order. For example:
[recently dog chase cat]RELATIVE come home
'The dog which recently chased the cat came home.'
where the brackets here indicate the duration of the non-manual features. If the sign 'recently' were made without these features, it would lie outside the relative clause, and the meaning would change to "the dog which chased the cat recently came home".
Negated clauses may be signaled by shaking the head during the entire clause. A topic, however, cannot be so negated; the headshake can only be produced during the production of the main clause. (A second type of negation starts with the verb and continues to the end of the clause.)
In addition, in many communities, negation is put at the end of the clause, unless there is a wh-question word. For example, the sentence "I thought the movie was not good" could be signed as "BEFORE MOVIE ME SEE, THINK WHAT? IT GOOD NOT."
There are two manual signs that negate a sentence, NOT and NONE, which are accompanied by a shake of the head. NONE is typically used when talking about possession:
DOG I HAVE NONE
'I don't have any dogs.'
NOT negates a verb:
TENNIS I LIKE PLAY NOT
'I don't like to play tennis.'
There are three types of questions with different constructions in ASL: wh- questions, yes/no questions, and rhetorical questions.[56]
Non-manual grammatical markings are grammatical and semantic features that do not include the use of the hands. They can include mouth shape, eye gaze, facial expression, body shifting, head tilting, and eyebrow raising. Non-manual grammatical markings can also help identify sentence type, which is especially relevant to distinguishing the different types of interrogatives.[57]
Wh-questions can be formed in a variety of ways in ASL. The wh-word can appear solely at the end of the sentence, solely at the beginning of the sentence, at both the beginning and end of the sentence (see section 4.4.2.1 on 'double-occurring wh-words'), or in situ (i.e. in the position the wh-word occupies before movement occurs).[58] Manual wh-signs are also accompanied by a non-manual grammatical marking (see section 4.4.1), which can include a variety of features.[59] This non-manual grammatical marking can optionally spread over the entire wh-phrase or just a small part of it.
Some languages have very few wh-words, with context and discourse sufficing to elicit the needed information. ASL has many different wh-words, certain of which have multiple variations. A list of the wh-words of ASL can be found below: WHAT, WHAT-DO, WHAT-FOR, WHAT-PU, WHAT-FS, WHEN, WHERE, WHICH, WHO (several variations), WHY, HOW, HOW-MANY[60]
As mentioned above, ASL has wh-questions with word-initial placement, word-final placement, and in-situ structure, but the most distinctive pattern of wh-word occurrence in ASL is one in which the wh-word occurs twice, copied in final position.[61] This doubling can be seen in the table below.
This doubling provides a useful template for evaluating two competing analyses of whether wh-words move rightward or leftward in ASL. Some researchers, such as Aarons and Neidle, argue for rightward movement in wh-questions,[36] while others, including Petronio and Lillo-Martin, argue that ASL has leftward movement and that wh-words appearing to the right of the clause arrive there by other processes.[58] Both analyses agree that wh-movement is present in these interrogative phrases; the controversy concerns the direction of that movement. Under either analysis, it is crucial that the wh-element moves to the position of Spec-CP.[58]
Summary of the leftward wh-movement analysis in American Sign Language:
The leftward movement analysis is congruent with cross-linguistic data suggesting that wh-movement is always leftward, and can be seen as the less controversial of the two proposals. Its main arguments are that Spec-CP is on the left, that wh-movement is leftward, and that the final wh-word in a sentence is a base-generated double. This is illustrated in the syntax tree located to the right of this paragraph.[62] Arguments for leftward movement rest on the fact that if wh-movement in ASL were rightward, ASL would be an exception to the cross-linguistic generalization that wh-movement is leftward.[58]
It has also been hypothesized that wh-elements cannot be topicalized, as topicalized elements must be presupposed and interrogatives are not.[58] This would be detrimental to the rightward analysis, which treats the doubled wh-word as a base-generated topic.
Summary of the rightward wh-movement analysis in American Sign Language:
The rightward movement analysis is a newer, more abstract account of how wh-movement occurs in ASL. Its main arguments are that Spec-CP is on the right, that wh-movement is rightward, and that the initial wh-word is a base-generated topic.[58] This can be seen in the syntax tree on the right.
One of the rightward movement analysis' main arguments concerns the non-manual grammatical markings and their optional spreading over the sentence. In ASL the use of non-manual grammatical markings is optional depending on the type of wh-question being asked. The rightward analysis can account for both partial and full spreading of non-manual grammatical markers through the association of the +WH feature with its c-command domain.[63] The leftward analysis cannot account for partial or full spreading in the same way: it requires wh-marking to extend over the entirety of the question, which is not what is attested in ASL.
In spoken languages, yes/no questions often differ in word order from the corresponding statement. For example, in English:
English Statement:
HE WILL BUY THE SHIRT.
English Yes/no Q:
WILL HE BUY THE SHIRT?[64]
In ASL, yes/no questions are marked by non-manual grammatical markings (as discussed in section 4.4.1). An eyebrow raise, a slight tilt of the head, and a lean forward indicate that a yes/no question is being asked, without any change in word order from the statement form. Some linguists speculate that the non-manual grammatical markings that indicate a yes/no question are similar to question intonation in spoken languages.[65]
Yes/no questions differ from wh-questions in that they retain the word order of the original statement, whereas wh-questions do not. In addition, in yes/no questions the non-manual marking must be used over the whole utterance in order for it to be judged a question rather than a statement.[66] A yes/no question thus has the same word order as the statement form of the sentence, with the addition of non-manual grammatical markings. This can be seen in the examples below.
ASL Statement:
JUAN WILL BUY SHOES TODAY
"Juan will buy shoes today"
ASL Yes/no Question:
_____________________brow raise
JUAN WILL BUY SHOES TODAY
"Will Juan buy shoes today?"[67]
Non-manual grammatical markings are also used for rhetorical questions, which are questions that do not intend to elicit an answer. To distinguish the non-manual marking for rhetorical questions from that of yes/no questions, the body is in a neutral position as opposed to tilted forward, and the head is tilted in a different way than in yes/no questions.[68] Rhetorical questions are much more common in ASL than in English. For example, in ASL:
[I LIKE]NEGATIVE [WHAT?]RHETORICAL, GARLIC.
"I don't like garlic"
This strategy is commonly used instead of signing the word 'because' for clarity or emphasis. For instance:
PASTA I EAT ENJOY TRUE [WHY?]RHETORICAL, ITALIAN I.
"I love to eat pasta because I am Italian"
Information may also be added after the main clause as a kind of 'afterthought'. In ASL this is commonly seen with subject pronouns. These are accompanied by a nod of the head, and make a statement more emphatic:
boy fall
"The boy fell down."
versus
boy fall [he]TAG
"The boy fell down, he did."
The subject need not be mentioned, as in
fall
"He fell down."
versus
fall [he]TAG
"He fell down, he did."
In ASL signers set up regions of space (loci) for specific referents (see above); these can then be referred to indexically by pointing at those locations with pronouns and indexical verbs.
Personal pronouns in ASL are indexic. That is, they point to their referent, or to a locus representing their referent. When the referent is physically present, pronouns involve simply pointing at the referent, with different handshapes for different pronominal uses: a 'G' handshape is a personal pronoun, an extended 'B' handshape with an outward palm orientation is a possessive pronoun, and an extended-thumb 'A' handshape is a reflexive pronoun; these may be combined with numeral signs to sign 'you two', 'us three', 'all of them', etc.
If the referent is not physically present, the speaker identifies the referent and then points to a location (the locus) in the sign space near their body. This locus can then be pointed at to refer to the referent. Theoretically, any number of loci may be set up, as long as the signer and recipient remember them all, but in practice no more than eight loci are used.
Meier 1990 demonstrates that only two grammatical persons are distinguished in ASL: first person and non-first person, as in Damin. Both persons come in several numbers as well as with signs such as 'my' and 'by myself'.
Meier provides several arguments for believing that ASL does not formally distinguish second from third person. For example, when pointing to a person that is physically present, a pronoun is equivalent to either 'you' or '(s)he' depending on the discourse. There is nothing in the sign itself, nor in the direction of eye gaze or body posture, that can be relied on to make this distinction. That is, the same formal sign can refer to any of several second or third persons, which the indexic nature of the pronoun makes clear. In English, indexic uses also occur, as in 'I need you to go to the store and you to stay here', but not so ubiquitously. In contrast, several first-person ASL pronouns, such as the plural possessive ('our'), look different from their non-first-person equivalents, and a couple of pronouns do not occur in the first person at all, so first and non-first persons are formally distinct.
Personal pronouns have separate forms for singular ('I' and 'you/(s)he') and plural ('we' and 'you/they'). These have possessive counterparts: 'my', 'our', 'your/his/her', 'your/their'. In addition, there are pronoun forms which incorporate numerals from two to five ('the three of us', 'the four of you/them', etc.), though the dual pronouns are slightly idiosyncratic in form (i.e., they have a K rather than a 2 handshape, and the wrist nods rather than circles). These numeral-incorporated pronouns have no possessive equivalents.
Also among the personal pronouns are the 'self' forms ('by myself', 'by your/themselves', etc.). These only occur in the singular and plural (there is no numeral incorporation), and are only found as subjects. They have derived emphatic and 'characterizing' forms, with modifications used for derivation rather like those for verbal aspect. The 'characterizing' pronoun is used when describing someone who has just been mentioned. It only occurs as a non-first-person singular form.
Finally, there are formal pronouns used for honored guests. These occur as singular and plural in the non-first person, but only as singular in the first person.
ASL is a pro-drop language, which means that pronouns are not used when the referent is obvious from context and is not being emphasized.
Within ASL there is a class of indexical (often called 'directional') verbs. These include the signs for 'see', 'pay', 'give', 'show', 'invite', 'help', 'send', 'bite', etc. These verbs include an element of motion that indexes one or more referents, either physically present or set up through the referent locus system. If there are two loci, the first indicates the subject and the second the object, direct or indirect depending on the verb, reflecting the basic word order of ASL. For example, 'give' is a bi-indexical verb based on a flattened M/O handshape. For 'I give you', the hand moves from myself toward you; for 'you give me', it moves from you to me. 'See' is indicated with a V handshape. Two loci for a dog and a cat can be set up, with the sign moving between them to indicate 'the dog sees the cat' (if it starts at the locus for dog and moves toward the locus for cat) or 'the cat sees the dog' (with the motion in the opposite direction), or the V hand can circulate between both loci and myself to mean 'we (the dog, the cat, and myself) see each other'. The verb 'to be in pain' (index fingers pointed at each other and alternately approaching and separating) is signed at the location of the pain (head for headache, cheek for toothache, abdomen for stomachache, etc.). This is normally done in relation to the signer's own body, regardless of the person feeling the pain, but it may also use the locus system, especially for body parts which are not normally part of the sign space, such as the leg. There are also spatial verbs such as put-up and put-below, which allow signers to specify where things are or how they moved them around.
There is no separate sign in ASL for the conjunction and. Instead, multiple sentences or phrases are combined with a short pause between them. Often, lists are specified with a listing and ordering technique, a simple version of which is to show the length of the list first with the nondominant hand, then to describe each element after pointing to the nondominant finger that represents it.
There is a manual sign for the conjunction or, but the concept is usually signed nonmanually with a slight shoulder twist.
The manual sign for the conjunction but is similar to the sign for different. It is more likely to be used in Pidgin Signed English than in ASL. Instead, shoulder shifts can be used, similar to "or", with appropriate facial expression.
Southern Athabascan (also Apachean, Southern Athabaskan) is a subfamily of Athabaskan languages spoken in the North American Southwest. Refer to Southern Athabascan languages for the main article.
Typologically, Southern Athabaskan languages are mostly fusional, polysynthetic, nominative–accusative head-marking languages. These languages are argued to be non-configurational languages. The canonical word order is SOV, as can be seen in the Lipan example below:
Southern Athabaskan words are modified primarily by prefixes, which is uncommon for SOV languages (suffixes are expected).
The Southern Athabaskan languages are "verb-heavy": they have a great preponderance of verbs but relatively few nouns. In addition to verbs and nouns, these languages have other elements such as pronouns, clitics of various functions, demonstratives, numerals, adverbs, and conjunctions, among others. Harry Hoijer grouped most of the above into a word class which he called particle, based on the type of inflection that occurs on the word class. This categorization provides three main lexical categories (i.e. parts of speech):
There is nothing that corresponds to what are called adjectives in English. Adjectival notions are provided by verbs; however, these adjectival verb stems do form a distinct sub-class of verb stems which co-occur with adjectival prefixes.
SA nouns are essentially of the following types (with various subtypes):
The simple nouns can consist of only a noun stem (which are usually only a single syllable long), such as
Other nouns may consist of a noun plus one or more prefixes, such as
or of a noun plus an enclitic or suffix, such as
The added prefixes may be lexical or they may be inflectional prefixes (e.g. personal prefixes indicating possession). SA languages do not have many simple nouns, but these nouns are the most ancient part of the lexicon and thus are essential in making comparisons between Athabascan languages.
Another noun type is a noun compound consisting of more than one noun stem, such as
Other kinds of noun compounds are the following:
Many other various combinations of elements are possible.
The most common type of noun is the deverbal noun (i.e., a noun derived from a verb). Most of these nouns are formed by adding a nominalizing enclitic, such as Mescalero -ń or -í, Western Apache -í, and Navajo -í, to the end of the verb phrase. For example, in Mescalero the verb ’ént’į́į́ "he/she bewitches him/her" may become a noun by adding either the enclitic -ń (for people) or -í (for things):
Thus, the word ’ént’į́į́ń "witch" literally means "the one who bewitches him or her". Another example is from Navajo:
Many of these nouns may be quite complex, as in Navajo
Other deverbal nouns do not appear with a nominalizing enclitic, as in Navajo
For a comparison with nouns in a Northern Athabascan language, see Carrier: Nouns.
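The enclitic-based nominalization described above can be sketched programmatically. This is only an illustrative sketch: the function name and the 'person'/'thing' labels are invented here, and real deverbal nouns may also involve further morphological changes.

```python
def nominalize(verb: str, referent: str) -> str:
    # Per the Mescalero description above: -ń nominalizes expressions
    # referring to people, -í those referring to things.
    # 'person'/'thing' labels are hypothetical, chosen for illustration.
    enclitic = "ń" if referent == "person" else "í"
    return verb + enclitic

# ’ént’į́į́ 'he/she bewitches him/her' plus -ń gives ’ént’į́į́ń 'witch'
print(nominalize("’ént’į́į́", "person"))
```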
Most nouns can be inflected to show possession. Simple nouns, compound nouns, and some deverbal nouns are inflected by adding a pronominal prefix to the noun base, as in the following Chiricahua possessed-noun paradigm (i.e. noun declension):
As seen above, Chiricahua nouns are inflected for number (singular and dual) and person (first, second, third, fourth, and indefinite). In the third and indefinite persons there is only one pronominal prefix each, bi- and ’i- (that is, Chiricahua does not have two different prefixes for the third person singular and the third person dual). Additionally, although there is a first person singular shi- and a second person singular ni-, in the plural Chiricahua has only one prefix, nahi-, for both the first and second persons (that is, nahi- means both first and second person plural). A distributive plural prefix daa- may also be added to possessed nouns in front of the pronominal prefixes:
The prefix table below shows these relationships:
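The same relationships can be sketched in code. This is a simplified, hypothetical rendering of the forms named above (the person/number keys and function name are invented for illustration; non-singular forms are collapsed under "pl"):

```python
# Chiricahua possessive prefixes as described in the text. Note the
# syncretism: nahi- covers both 1st and 2nd non-singular, and bi-
# covers both 3rd singular and 3rd non-singular.
POSSESSIVE_PREFIX = {
    ("1", "sg"): "shi-",
    ("2", "sg"): "ni-",
    ("1", "pl"): "nahi-",
    ("2", "pl"): "nahi-",
    ("3", "sg"): "bi-",
    ("3", "pl"): "bi-",
    ("indef", "sg"): "’i-",
}

def possessed(noun: str, person: str, number: str,
              distributive: bool = False) -> str:
    prefix = POSSESSIVE_PREFIX[(person, number)]
    # the distributive plural daa- precedes the pronominal prefix
    return ("daa-" if distributive else "") + prefix + noun

print(possessed("X", "1", "sg"))                      # shi-X
print(possessed("X", "3", "pl", distributive=True))   # daa-bi-X
```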
A Navajo pronominal prefix paradigm may be compared with the Chiricahua above:
Two other pronominal prefixes are the reciprocal prefix, as in Mescalero ’ił- and Navajo ał- "each other's", and the reflexive prefix, as in Mescalero ’ádi- and Navajo ádi- "one's own".
Larger possessive phrases can be formed like the following Navajo phrases:
As seen above, the possessor occurs before the possessed noun(s). Thus, in order to say "John's bread", the 3rd person prefix bi- is added to the possessed noun bááh "bread", and the possessor noun John is placed before bibááh "his bread". Usually, in the first and second persons only a pronominal prefix (shi-, ni-, and nihi-) is added to possessed nouns. However, if focus on the possessor (i.e. a type of emphasis) is needed, an independent personal pronoun may be added to the possessive phrase. Thus, we have the following:
By observing these Navajo possessive phrases, it is evident that Southern Athabascan languages are head-marking in that the possessive prefix is added to the possessed noun, which is the head of the noun phrase (this is unlike the dependent-marking languages of Europe, where possessive affixes are added to the possessor).
The key element in Southern Athabaskan languages is the verb, and it is notoriously complex. Verbs are composed of a stem to which inflectional and/or derivational prefixes are added. Every verb must have at least one prefix. The prefixes are affixed to the verb in a specified order.
The Southern Athabaskan verb can be sectioned into different morphological components. The verb stem is composed of an abstract root and an often fused suffix. The stem together with a classifier prefix (and sometimes other thematic prefixes) makes up the verb theme. The theme is then combined with derivational prefixes, which in turn make up the verb base. Finally, inflectional prefixes (which Young & Morgan call "paradigmatic prefixes") are affixed to the base, producing a complete verb. This is represented schematically in the table below:
The prefixes that occur on a verb are added in a specified order according to prefix type. This type of morphology is called a position class template (or slot-and-filler template). Below is a table of one proposal of the Navajo verb template (Young & Morgan 1987). Edward Sapir and Harry Hoijer were the first to propose an analysis of this type. A given verb will not have a prefix for every position; in fact, most Navajo verbs are not as complex as the template would seem to suggest.
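The slot-and-filler idea can be sketched with a few lines of code. This is a deliberately reduced, hypothetical template (the real Navajo analysis uses 11 positions with subdivisions); the slot names and example morphemes here are illustrative only.

```python
# A minimal position-class ("slot-and-filler") template. Prefixes go
# into fixed slots; the surface verb is read off in slot order, and
# unfilled slots are simply skipped.
SLOT_ORDER = ["adverbial", "iterative", "object", "subject",
              "classifier", "stem"]

def assemble_verb(morphemes: dict) -> str:
    if "stem" not in morphemes:
        raise ValueError("every verb requires a stem")
    return "".join(morphemes[slot] for slot in SLOT_ORDER
                   if slot in morphemes)

# A schematic surface string, not a real Navajo form:
print(assemble_verb({"adverbial": "di-", "subject": "sh-",
                     "stem": "bąąs"}))
```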
The Navajo verb has 3 main parts:
These parts can be subdivided into 11 positions with some of the positions having even further subdivisions:
Although prefixes are generally found in a specific position, some prefixes change order by the process of metathesis.
For example, in Navajo the prefix ’a- (3i object pronoun) usually occurs before di-, as in
However, when ’a- occurs with the prefixes di- and ni-, the ’a- metathesizes with di-, leading to an order of di- + ’a- + ni-, as in
instead of the expected *adinisbąąs (’a-di-ni-sh-ł-bąąs) (’a- is reduced to ’-). Metathesis is conditioned by the phonological environment (Young & Morgan 1987:39).
Verb stems have different forms that alternate according to aspect and tense. The alternation (ablaut) mostly involves vowels (change in vowel, vowel length, or nasality) and tone, but sometimes includes the suffixation of a final consonant. The Chiricahua verb stems below have five different forms that correspond to mode:
Each mode can also occur with different aspects, such as momentaneous, continuative, repetitive, semelfactive, etc. For example, a stem can be momentaneous imperfective, momentaneous perfective, momentaneous optative, etc. The (partial) Navajo verb stem conjugation below illustrates the verb stem -’aah/-’ą́ "to handle a solid roundish object" with the same mode in different aspects:
This same verb stem -’aah/-’ą́ "to handle a solid roundish object" has a total of 26 combinations of 5 modes and 6 aspects:
Although there are 26 combinations for this verb, there is a high degree of homophony, in that there are only 7 different stem forms (-’aah, -’ááh, -’aał, -’ááł, -’a’, -á, -’ą́). To complicate matters, different verbs have different patterns of homophony: some verbs have only 1 stem form that occurs in all mode-aspect combinations, others have 5 forms, etc., and not all stems occur in the same mode-aspect combinations. Additionally, the different stem forms of different verbs are formed in different ways.
Southern Athabaskan languages have verb stems that classify a particular object by its shape or other physical characteristics, in addition to describing the movement or state of the object. These are known in Athabaskan linguistics as classificatory verb stems. These are usually identified by an acronym label. There are 11 primary classificatory "handling" verb stems in Navajo, which are listed below (given in the perfective mode). Other Southern Athabaskan languages have a slightly different set of stems.
To compare with English, Navajo has no single verb that corresponds to the English word give. In order to say the equivalent of Give me some hay! the Navajo verb níłjool (NCM) must be used, while for Give me a cigarette! the verb nítįįh (SSO) must be used. The English verb give is expressed by 11 verbs in Navajo, depending on the characteristics of the given object.
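The 'give' examples above amount to a lookup from object category to verb stem, which can be sketched as follows. The category acronyms are the standard Athabaskanist labels, but this mapping is a reduced, illustrative subset, and the isolated stem forms are extracted here for illustration rather than quoted from a reference grammar.

```python
# Classificatory "handling" stem selection: 'give' has no single stem;
# the stem is chosen by the physical category of the thing handled.
HANDLING_STEMS = {
    "SRO": "-’ą́",    # solid roundish object
    "NCM": "-jool",  # non-compact matter, e.g. hay
    "SSO": "-tįįh",  # slender stiff object, e.g. a cigarette
}

def give_stem(object_category: str) -> str:
    return HANDLING_STEMS[object_category]

print(give_stem("NCM"))  # stem used for 'Give me some hay!'
print(give_stem("SSO"))  # stem used for 'Give me a cigarette!'
```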
In addition to defining the physical properties of the object, primary classificatory verb stems also can distinguish between the manner of movement of the object. The stems can then be grouped into three categories:
Handling includes actions such as carrying, lowering, and taking. Propelling includes tossing, dropping, and throwing. Free flight includes falling and flying through space.
Using an example for the SRO category, Navajo has
In addition, Southern Athabaskan languages also have other somewhat similar verb stems that Young & Morgan (1987) call secondary classificatory verbs.
(The term classifier is used in Athabaskan linguistics to refer to a prefix that indicates transitivity or acts as a thematic prefix, and as such is somewhat of a misnomer. These transitivity classifiers are not involved in the classificatory verb stems' classification of nouns and are not related in any way to the noun classifiers found in Chinese or Thai.)
Like most Athabaskan languages, Southern Athabaskan languages show various levels of animacy in their grammar, with certain nouns taking specific verb forms according to their rank in this animacy hierarchy. For instance, Navajo nouns can be ranked by animacy on a continuum from most animate (a human or lightning) to least animate (an abstraction) (Young & Morgan 1987: 65–66):
humans/lightning → infants/big animals → mid-size animals → small animals → insects → natural forces → inanimate objects/plants → abstractions
Generally, the most animate noun in a sentence must occur first, while the noun of lesser animacy occurs second. If both nouns are equal in animacy, either noun can occur in the first position. So, both example sentences (1) and (2) are correct. The yi- prefix on the verb indicates that the first noun is the subject, and bi- indicates that the second noun is the subject.
But example sentence (3) sounds wrong to most Navajo speakers because the less animate noun occurs before the more animate noun:
In order to express this idea, the more animate noun must occur first, as in sentence (4):
Although sentence (4) is translated into English with a passive verb, in Navajo it is not passive. Passive verbs are formed by certain classifier prefixes (i.e. transitivity prefixes) that occur directly before the verb stem in position 9.
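The word-order constraint described above can be sketched as a simple comparison on the animacy continuum. The rank labels below are a hypothetical encoding of the hierarchy; they are not Navajo grammatical categories themselves.

```python
# Navajo animacy hierarchy from the text, most to least animate.
# Categories on the same tier share a rank.
ANIMACY_RANK = {
    "human": 7, "lightning": 7,
    "infant": 6, "big_animal": 6,
    "mid_animal": 5,
    "small_animal": 4,
    "insect": 3,
    "natural_force": 2,
    "inanimate": 1, "plant": 1,
    "abstraction": 0,
}

def acceptable_order(first: str, second: str) -> bool:
    """A sentence is acceptable when the first noun is at least as
    animate as the second; equal ranks allow either order."""
    return ANIMACY_RANK[first] >= ANIMACY_RANK[second]

print(acceptable_order("human", "big_animal"))  # more animate first: True
print(acceptable_order("insect", "human"))      # less animate first: False
```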
See the Southern Athabaskan languages bibliography for references.
In linguistics, a noun class is a particular category of nouns. A noun may belong to a given class because of the characteristic features of its referent, such as gender, animacy, or shape, but such designations are often clearly conventional. Some authors use the term "grammatical gender" as a synonym of "noun class", but others consider these different concepts. Noun classes should not be confused with noun classifiers.
There are three main ways by which natural languages categorize nouns into noun classes:
Usually, a combination of the three types of criteria is used, though one is more prevalent.
Noun classes form a system of grammatical agreement. A noun in a given class may require:
Modern English expresses noun classes through the third person singular personal pronouns he (male person), she (female person), and it (object, abstraction, or animal), and their other inflected forms. Countable and uncountable nouns are distinguished by the choice of many/much. The choice between the relative pronouns who (persons) and which (non-persons) may also be considered a form of agreement with a semantic noun class. A few nouns also exhibit vestigial noun classes, such as stewardess, where the suffix -ess added to steward denotes a female person. This type of noun affixation is not very frequent in English, but quite common in languages which have true grammatical gender, including most of the Indo-European family, to which English belongs.
In languages without inflectional noun classes, nouns may still be extensively categorized by independent particles called noun classifiers.
Common criteria that define noun classes include:
The Ojibwe language and other members of the Algonquian languages distinguish between animate and inanimate classes. Some sources argue that the distinction is between things which are powerful and things which are not. Living things, as well as sacred things and things connected to the Earth, are considered powerful and belong to the animate class. Still, the assignment is somewhat arbitrary, as "raspberry" is animate but "strawberry" is inanimate.
In Navajo (Southern Athabaskan), nouns are classified according to their animacy, shape, and consistency. Morphologically, however, the distinctions are not expressed on the nouns themselves, but on the verbs of which the nouns are the subject or direct object. For example, in the sentence Shi’éé’ tsásk’eh bikáa’gi dah siłtsooz "My shirt is lying on the bed", the verb siłtsooz "lies" is used because the subject shi’éé’ "my shirt" is a flat, flexible object. In the sentence Siziiz tsásk’eh bikáa’gi dah silá "My belt is lying on the bed", the verb silá "lies" is used because the subject siziiz "my belt" is a slender, flexible object.
Koyukon (Northern Athabaskan) has a more intricate system of classification. Like Navajo, it has classificatory verb stems that classify nouns according to animacy, shape, and consistency. In addition to these verb stems, however, Koyukon verbs have what are called "gender prefixes" that further classify nouns. That is, Koyukon has two different systems that classify nouns: (a) a classificatory verb system and (b) a gender system. To illustrate, the verb stem -tonh is used for enclosed objects. When -tonh is combined with different gender prefixes, it can yield daaltonh, which refers to objects enclosed in boxes, or etltonh, which refers to objects enclosed in bags.
The Dyirbal language is well known for its system of four noun classes, which tend to be divided along the following semantic lines:[2]
The class usually labeled "feminine", for instance, includes the word for fire and nouns relating to fire, as well as all dangerous creatures and phenomena. (This inspired the title of the George Lakoff book Women, Fire, and Dangerous Things.)
The Ngangikurrunggurr language has noun classes reserved for canines and hunting weapons. The Anindilyakwa language has a noun class for things that reflect light. The Diyari language distinguishes only between female and other objects. Perhaps the most noun classes in any Australian language are found in Yanyuwa, which has 16 noun classes, including nouns associated with food, trees, and abstractions, in addition to separate classes for men and masculine things, and for women and feminine things. In the men's dialect, the classes for men and for masculine things have simplified to a single class, marked the same way as the women's-dialect marker reserved exclusively for men.[3]
Basque has two classes, animate and inanimate; however, the only difference is in the declension of locative cases (inessive, ablative, allative, terminal allative, and directional allative). For inanimate nouns, the locative case endings are attached directly if the noun is singular, and plural and indefinite number are marked by the suffixes -eta- and -(e)ta-, respectively, before the case ending (this is in contrast to the non-locative cases, which follow a different system of number marking, where the indefinite form of the ending is the most basic). For example, the noun etxe "house" has the singular ablative form etxetik "from the house", the plural ablative form etxeetatik "from the houses", and the indefinite ablative form etxetatik (the indefinite form is mainly used with determiners that precede the noun: zenbat etxetatik "from how many houses"). For animate nouns, on the other hand, the locative case endings are attached (with some phonetic adjustments) to the suffix -gan-, which is itself attached to the singular, plural, or indefinite genitive case ending. Alternatively, -gan- may attach to the absolutive case form of the word if it ends in a vowel. For example, the noun ume "child" has the singular ablative form umearengandik or umeagandik "from the child", the plural ablative form umeengandik "from the children", and the indefinite ablative form umerengandik or umegandik (cf. the genitive forms umearen, umeen, and umeren and the absolutive forms umea, umeak, and ume). In the inessive case, the case suffix is replaced entirely by -gan for animate nouns (compare etxean "in/at the house" and umearengan/umeagan "in/at the child").
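As a rough illustration, the inanimate locative pattern described above can be written as a small rule function. This is a toy sketch covering only the ablative; the function name and the simple vowel test are assumptions, and real Basque morphophonology involves further adjustments not modeled here.

```python
# Toy sketch of Basque inanimate ablative formation, based only on the rules
# described above: sg = stem + ending; pl = stem + -eta- + ending;
# indef = stem + -(e)ta- + ending (the e surfaces after consonant-final stems).
ABLATIVE = "tik"

def inanimate_ablative(stem: str, number: str) -> str:
    """number is 'sg', 'pl', or 'indef'."""
    if number == "sg":
        return stem + ABLATIVE                       # etxe -> etxetik
    if number == "pl":
        return stem + "eta" + ABLATIVE               # etxe -> etxeetatik
    if number == "indef":
        infix = "ta" if stem[-1] in "aeiou" else "eta"
        return stem + infix + ABLATIVE               # etxe -> etxetatik
    raise ValueError(f"unknown number: {number}")
```

Applied to etxe "house", this reproduces the three forms cited above: etxetik, etxeetatik, and etxetatik.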
Some members of the Northwest Caucasian family, and almost all of the Northeast Caucasian languages, manifest noun class. In the Northeast Caucasian family, only Lezgian, Udi, and Aghul do not have noun classes. Some languages have only two classes, whereas Bats has eight. The most widespread system, however, has four classes: male, female, animate beings and certain objects, and finally a class for the remaining nouns. The Andi language has a noun class reserved for insects.
Among Northwest Caucasian languages, only Abkhaz and Abaza have noun class, making use of a human male/human female/non-human distinction.
In all Caucasian languages that manifest class, it is not marked on the noun itself but on the dependent verbs, adjectives, pronouns and postpositions or prepositions.
Atlantic–Congo languages can have ten or more noun classes, defined according to non-sexual criteria. Certain nominal classes are reserved for humans. The Fula language has about 26 noun classes (the exact number varies slightly by dialect).
According to Carl Meinhof, the Bantu languages have a total of 22 noun classes, called nominal classes (this notion was introduced by W. H. I. Bleek). While no single language is known to express all of them, most of them have at least 10 noun classes. For example, by Meinhof's numbering, Shona has 20 classes, Swahili has 15, Sotho has 18, and Ganda has 17.
Additionally, there are polyplural noun classes. A polyplural noun class is a plural class for more than one singular class.[4] For example, Proto-Bantu class 10 contains plurals of class 9 nouns and class 11 nouns, while class 6 contains plurals of class 5 nouns and class 15 nouns. Classes 6 and 10 are inherited as polyplural classes by most surviving Bantu languages, but many languages have developed new polyplural classes that are not widely shared by other languages.
Specialists in Bantu emphasize that there is a clear difference between genders (such as known from Afro-Asiatic and Indo-European) and nominal classes (such as known from Niger–Congo). Languages with nominal classes divide nouns formally on the basis of hyperonymic meanings. The category of nominal class replaces not only the category of gender, but also the categories of number and case.
Critics of Meinhof's approach note that his numbering system of nominal classes counts singular and plural forms of the same noun as belonging to separate classes. This seems to them to be inconsistent with the way other languages are traditionally analyzed, where number is orthogonal to gender (according to the critics, a Meinhof-style analysis would give Ancient Greek nine genders). If one follows the broader linguistic tradition and counts singular and plural as belonging to the same class, then Swahili has 8 or 9 noun classes, Sotho has 11, and Ganda has 10.
The Meinhof numbering tends to be used in scientific works dealing with comparisons of different Bantu languages. For instance, in Swahili the word rafiki 'friend' belongs to class 9, and its "plural form" marafiki belongs to class 6, even though most nouns of class 9 have their plural in class 10. For this reason, noun classes are often referred to by combining their singular and plural forms: rafiki, for example, would be classified as "9/6", indicating that it takes class 9 in the singular and class 6 in the plural.
However, not all Bantu languages have these exceptions. In Ganda, each singular class has a corresponding plural class (apart from one class which has no singular–plural distinction; also, some plural classes correspond to more than one singular class), and there are no exceptions as there are in Swahili. For this reason, Ganda linguists use the orthogonal numbering system when discussing Ganda grammar (other than in the context of Bantu comparative linguistics), giving the 10 traditional noun classes of that language.
The distinction between genders and nominal classes is blurred still further by Indo-European languages that have nouns which behave like Swahili's rafiki. Italian, for example, has a group of nouns deriving from Latin neuter nouns that act as masculine in the singular but feminine in the plural: il braccio/le braccia; l'uovo/le uova. (These nouns are still placed in a neuter gender of their own by some grammarians.)
"Ø-" means no prefix. Some classes are homonymous (esp. 9 and 10). The Proto-Bantu class 12 disappeared in Swahili, class 13 merged with 7, and 14 with 11.
Class prefixes appear also on adjectives and verbs, e.g.:
Kitabu kikubwa kinaanguka.
CL7-book CL7-big CL7-PRS-fall
'The big book falls.'
The class markers which appear on adjectives and verbs may differ from the noun prefixes:
Mtoto wangu alinunua kitabu.
CL1-child CL1-my CL1-PST-CL7-buy CL7-book
'My child bought a book.'
In this example, the verbal prefix a- and the pronominal prefix wa- are in concordance with the noun prefix m-: they all express class 1 despite their different forms.
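The concord pattern in the two glossed sentences can be pictured as a small lookup table. The prefix values below are taken only from the examples above; the function name and table layout are illustrative assumptions, not a full description of Swahili concord.

```python
# Toy illustration of Swahili concord: the agreement prefix depends both on the
# noun's class and on the agreement target (noun, adjective, possessive, verbal
# subject). Values come from the two glossed examples above.
CONCORD = {
    1: {"noun": "m", "possessive": "wa", "subject": "a"},
    7: {"noun": "ki", "adjective": "ki", "subject": "ki"},
}

def agree(noun_class: int, target: str, stem: str) -> str:
    """Attach the class/target-appropriate prefix to a stem."""
    return CONCORD[noun_class][target] + stem
```

For example, agree(7, "noun", "tabu") yields kitabu, while agree(1, "subject", "linunua") yields alinunua, showing how class 1 uses different prefix shapes (m-, wa-, a-) for the same class.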
The Zande language distinguishes four noun classes:[5]
There are about 80 inanimate nouns which are in the animate class, including nouns denoting heavenly objects (moon, rainbow), metal objects (hammer, ring), edible plants (sweet potato, pea), and non-metallic objects (whistle, ball). Many of the exceptions have a round shape, and some can be explained by the role they play in Zande mythology.
The term "gender", as used by some linguists, refers to a noun-class system composed of two, three, or four classes, particularly if the classification is semantically based on a distinction between masculine and feminine. Genders are then considered a sub-class of noun classes. Not all linguists recognize a distinction between noun classes and genders, however, and instead use either term for both.
Sometimes the distinction can drift over time. For instance, in Danish, the main dialects merged the three original genders down to a total of two. Some other dialects merged all three genders into a single gender, similar to English, but kept the neuter adjective form for uncountable nouns (which are all neuter in Danish). This effectively created a noun-class system of countable and uncountable nouns reflected in adjectives.[6]
Some languages, such as Japanese, Chinese, and the Tai languages, have elaborate systems of particles that go with nouns based on shape and function, but these are free morphemes rather than affixes. Because the classes defined by these classifying words are not generally distinguished in other contexts, many linguists take the view that they do not create noun classes.
An analytic language is a type of natural language in which a series of root/stem words is accompanied by prepositions, postpositions, particles, and modifiers, using affixes very rarely. This is opposed to synthetic languages, which synthesize many concepts into a single word, using affixes regularly.
Syntactic roles are assigned to words primarily by word order. For example, changing the individual words in the Latin phrase "fēl-is pisc-em cēpit" ("the cat caught the fish") to "fēl-em pisc-is cēpit" ("the fish caught the cat") makes the fish the subject and the cat the object. This transformation is not possible in an analytic language without altering the word order. Typically, analytic languages have a low morpheme-per-word ratio, especially with respect to inflectional morphemes.
No natural language, however, is purely analytic or purely synthetic.
The term analytic is commonly used in a relative rather than an absolute sense. The most prominent and widely used Indo-European analytic language is Modern English, which has lost much of the inflectional morphology that it inherited from Proto-Indo-European, Proto-Germanic, and Old English over the centuries and has not gained any new inflectional morphemes in the meantime, which makes it more analytic than most other Indo-European languages.
For example, Proto-Indo-European had much more complex grammatical conjugation, grammatical genders, dual number, and inflections for eight or nine cases in its nouns, pronouns, adjectives, numerals, participles, postpositions, and determiners. Standard English has lost nearly all of them (except for three modified cases for pronouns) along with genders and dual number, and has simplified its conjugation.
Latin, German, Greek, Russian, and a majority of the Slavic languages, characterized by free word order, are synthetic languages. Nouns in Russian inflect for at least six cases, most of which descended from Proto-Indo-European cases, whose functions English translates by instead using other strategies like prepositions, verbal voice, word order, and the possessive 's.
Modern Hebrew is more analytic than Classical Hebrew, mostly with respect to nouns.[1] Classical Hebrew relies heavily on inflectional morphology to convey grammatical relationships, while in Modern Hebrew there has been a significant reduction in the use of inflectional morphology.
A related concept is that of isolating languages, which are those with a low morpheme-per-word ratio (taking into account derivational morphemes as well). Purely isolating languages are by definition analytic and lack inflectional morphemes. However, the reverse is not necessarily true: a language can have derivational morphemes but lack inflectional morphemes. For example, Mandarin Chinese has many compound words,[2] which gives it a moderately high ratio of morphemes per word, but since it has almost no inflectional affixes at all to convey grammatical relationships, it is a very analytic language.
English is not totally analytic in its nouns, since it uses inflections for number (e.g., "one day, three days; one boy, four boys") and possession ("the boy's ball" vis-à-vis "the boy has a ball"). Mandarin Chinese, by contrast, has no inflections on its nouns: compare 一天 yī tiān 'one day', 三天 sān tiān 'three days' (literally 'three day'); 一個男孩 yī ge nánhái 'one boy' (lit. 'one [entity of] male child'), 四個男孩 sì ge nánhái 'four boys' (lit. 'four [entity of] male child'). However, English is considered to be weakly inflected and comparatively more analytic than most other Indo-European languages.
Persian could be considered an analytic language: generally, there are no inflections as such. Instead, a system of prefixes and suffixes connects words to express possession or attribute a quality; in writing they can be integrated into the word while keeping their function. For example, the suffix ها hâ makes words plural, like English -s: دختر ها آمدند dokhtar hâ âmadand 'The girls came'. Persian has none of the agreement in number or gender between nouns and adjectives found in many other languages, because it is an inherently genderless language. In practice, there are no inflections for number either, keeping with the examples above: یک روز yek rooz 'one day', سه روز se rooz 'three days' (literally 'three day'), یک پسر yek pesar 'one boy', چهار پسر čahâr pesar 'four boys' (lit. 'four boy'). Similarly, there are no inflections for possession. A short '-e' sound, the diacritical mark ـِ, is added after a word ending in a consonant to show that it is possessed by (or belongs to) the next word, so 'the boy's ball' is توپِ پسر toop -e pesar. The diacritical mark is written under the last letter of the first word in beginners' material and in written literature and everyday publications; it is otherwise usually omitted but pronounced in reading. For words ending in a long vowel, the letter ی with the short '-e' sound, written یِ, is added as a suffix. Thus, 'the boy's foot' is پا یِ پسر pa -ye pesar. Again, in literature and daily writing the letter is often omitted, although it is pronounced in reading. The same system is used to connect adjectives to nouns.
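The two Persian patterns just described, the plural particle hâ and the possessive linker -e/-ye, follow simple surface rules that can be sketched in Latin transliteration. This is an illustrative toy only; the vowel list, spacing, and function names are assumptions, and real Persian orthography and phonology are richer.

```python
# Toy sketch of the Persian plural particle and possessive linker (ezafe),
# in Latin transliteration, following the rules stated above:
# -ye after a vowel-final word, -e after a consonant-final word.
VOWELS = "aâeiou"  # simplified vowel inventory for this sketch

def plural(noun: str) -> str:
    """Plural with the particle hâ: dokhtar -> 'dokhtar hâ' ('girls')."""
    return f"{noun} hâ"

def ezafe(possessed: str, possessor: str) -> str:
    """Link possessed to possessor: 'toop -e pesar' ('the boy's ball')."""
    linker = "-ye" if possessed[-1] in VOWELS else "-e"
    return f"{possessed} {linker} {possessor}"
```

This reproduces the contrast cited above: consonant-final toop "ball" takes -e (toop -e pesar), while vowel-final pa "foot" takes -ye (pa -ye pesar).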
Determiner, also called determinative (abbreviated DET), is a term used in some models of grammatical description for a word or affix belonging to a class of noun modifiers. A determiner combines with a noun to express its reference.[1][2] Examples in English include articles (the and a/an), demonstratives (this, that), possessive determiners (my, their), and quantifiers (many, both). Not all languages have determiners, and not all systems of grammatical description recognize them as a distinct category.
The linguistics term "determiner" was coined by Leonard Bloomfield in 1933. Bloomfield observed that in English, nouns often require a qualifying word such as an article or adjective. He proposed that such words belong to a distinct class which he called "determiners".[3]
If a language is said to have determiners, any articles are normally included in the class. Other types of words often regarded as belonging to the determiner class include demonstratives and possessives. Some linguists extend the term to include other words in the noun phrase, such as adjectives and pronouns, or even modifiers in other parts of the sentence.[2]
Qualifying a lexical item as a determiner may depend on a given language's rules of syntax. In English, for example, the words my, your, etc. are used without articles and so can be regarded as possessive determiners, whereas their Italian equivalents, mio etc., are used together with articles and so may be better classed as adjectives.[4] Not all languages can be said to have a lexically distinct class of determiners.
In some languages, the role of certain determiners can be played by affixes (prefixes or suffixes) attached to a noun, or by other types of inflection. For example, definite articles are represented by suffixes in Romanian, Bulgarian, Macedonian, and Swedish. In Swedish, bok ("book"), when definite, becomes boken ("the book"), while the Romanian caiet ("notebook") similarly becomes caietul ("the notebook"). Some languages, such as Finnish, have possessive affixes which play the role of possessive determiners like my and his.
Determiners may be predeterminers, central determiners, or postdeterminers, based on the order in which they can occur.[citation needed] For example, "all my many very young children" uses one of each. "My all many very young children" is not grammatically correct because a central determiner cannot precede a predeterminer.
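The ordering constraint just described, predeterminer before central determiner before postdeterminer, amounts to a monotonicity check over determiner slots. The sketch below is a toy; the category assignments and names are illustrative assumptions.

```python
# Toy check of the determiner ordering constraint: slot ranks must be
# non-decreasing left to right. Category assignments are illustrative.
ORDER = {"pre": 0, "central": 1, "post": 2}
CATEGORY = {"all": "pre", "both": "pre", "my": "central", "the": "central", "many": "post"}

def well_ordered(determiners: list[str]) -> bool:
    """True if the determiner sequence respects pre < central < post."""
    ranks = [ORDER[CATEGORY[d]] for d in determiners]
    return ranks == sorted(ranks)

# well_ordered(["all", "my", "many"]) -> True   ("all my many ... children")
# well_ordered(["my", "all", "many"]) -> False  ("my all many ..." is ill-formed)
```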
Determiners are distinguished from pronouns by the presence of nouns.[5]
Plural personal pronouns can act as determiners in certain constructions.[6]
Some theoreticians unify determiners and pronouns into a single class. For further information, see Pronoun § Linguistics.
Some theoretical approaches regard determiners as heads of their own phrases, which are described as determiner phrases. In such approaches, noun phrases containing only a noun without a determiner present are called "bare noun phrases" and are considered to be dominated by determiner phrases with null heads.[7] For more detail on theoretical approaches to the status of determiners, see Noun phrase § With and without determiners.
Some theoreticians analyze pronouns as determiners or determiner phrases; see Pronoun: Theoretical considerations. This is consistent with the determiner-phrase viewpoint, whereby a determiner, rather than the noun that follows it, is taken to be the head of the phrase.
Articles are words used (as a standalone word or a prefix or suffix) to specify the grammatical definiteness of a noun and, in some languages, volume or numerical scope. Articles often include definite articles (such as English the) and indefinite articles (such as English a and an).
Demonstratives are deictic words, such as this and that, used to indicate which entities are being referred to and to distinguish those entities from others. They can indicate how close the things being referenced are to the speaker, listener, or another group of people. In English, demonstratives express proximity of things with respect to the speaker.
Possessive determiners such as my, their, Jane's, and the King of England's modify a noun by attributing possession (or another sense of belonging) to someone or something. They are also known as possessive adjectives.
Quantifiers indicate quantity. Some examples of quantifiers include all, some, many, little, few, and no. Quantifiers only indicate a general quantity of objects, not a precise number such as twelve, first, single, or once (which are considered numerals).[8]
Distributive determiners, also called distributive adjectives, consider members of a group separately, rather than collectively. Words such as each and every are examples of distributive determiners.
Interrogative determiners such as which, what, and how are used to ask questions.
Many functionalist linguists dispute that the determiner is a universally valid linguistic category. They argue that the concept is Anglocentric, since it was developed on the basis of the grammar of English and similar languages of north-western Europe. The linguist Thomas Payne comments that the term determiner "is not very viable as a universal natural class", because few languages consistently place all the categories described as determiners in the same place in the noun phrase.[9]
The category "determiner" was developed because in languages like English, traditional categories like articles, demonstratives, and possessives do not occur together. But in many languages these categories freely co-occur, as Matthew Dryer observes.[10] For instance, Engenni, a Niger-Congo language of Nigeria, allows a possessive word, a demonstrative, and an article all to occur as noun modifiers in the same noun phrase:[10]
ani wò âka nà
wife 2SG.POSS that the
'that wife of yours'
There are also languages in which demonstratives and articles do not normally occur together, but must be placed on opposite sides of the noun.[10] For instance, in Urak Lawoi, a language of Thailand, the demonstrative follows the noun:
rumah besal itu
house big that
'that big house'
However, the definite article precedes the noun:
koq nanaq
the children
'the children'
As Dryer observes, there is little justification for a category of determiner in such languages.[10]
Kenneth Jon Barwise (/ˈbɑːrwaɪz/; June 29, 1942 – March 5, 2000)[1] was an American mathematician, philosopher, and logician who proposed some fundamental revisions to the way that logic is understood and used.
He was born in Independence, Missouri, to Kenneth T. and Evelyn Barwise.
A pupil of Solomon Feferman at Stanford University, Barwise started his research in infinitary logic. After positions as assistant professor at Yale University and the University of Wisconsin, during which time his interests turned to natural language, he returned to Stanford in 1983 to direct the Center for the Study of Language and Information (CSLI). He began teaching at Indiana University in 1990. He was elected a Fellow of the American Academy of Arts and Sciences in 1999.[2]
In his last year, Barwise was invited to give the 2000 Gödel Lecture; he died prior to the lecture.[3]
Barwise contended that, by being explicit about the context in which a proposition is made, the situation, many problems in the application of logic can be eliminated. He sought "to understand meaning and inference within a general theory of information, one that takes us outside the realm of sentences and relations between sentences of any language, natural or formal." In particular, he claimed that such an approach resolved the liar paradox. He made use of Peter Aczel's non-well-founded set theory in understanding "vicious circles" of reasoning.
Barwise, along with his former Stanford colleague John Etchemendy, was the author of the popular logic textbook Language, Proof and Logic. Unlike the Handbook of Mathematical Logic, a survey of the state of the art of mathematical logic circa 1975 of which he was the editor, this work targeted elementary logic. The text is notable for including computer-aided homework problems, some of which provide visual representations of logical problems. During his time at Stanford, he was also the first Director of the Symbolic Systems Program, an interdepartmental degree program focusing on the relationships between cognition, language, logic, and computation. The K. Jon Barwise Award for Distinguished Contributions to the Symbolic Systems Program has been given periodically since 2001.[4]
Universal grammar (UG), in modern linguistics, is the theory of the innate biological component of the language faculty, usually credited to Noam Chomsky. The basic postulate of UG is that there are innate constraints on what the grammar of a possible human language could be. When linguistic stimuli are received in the course of language acquisition, children then adopt specific syntactic rules that conform to UG.[1] The advocates of this theory emphasize and partially rely on the poverty of the stimulus (POS) argument and the existence of some universal properties of natural human languages. However, the latter has not been firmly established.
Other linguists have opposed that notion, arguing that languages are so diverse that the postulated universality is rare.[2]The theory of universal grammar remains a subject of debate among linguists.[3]
The term "universal grammar" is a placeholder for whichever domain-specific features of linguistic competence turn out to be innate. Within generative grammar, it is generally accepted that there must be some such features, and one of the goals of generative research is to formulate and test hypotheses about which aspects those are.[4][5] In day-to-day generative research, the notion that universal grammar exists motivates analyses in terms of general principles. As much as possible, facts about particular languages are derived from these general principles rather than from language-specific stipulations.[4]
The idea that at least some aspects are innate is motivated by poverty of the stimulus arguments.[6][7] For example, one famous poverty-of-the-stimulus argument concerns the acquisition of yes–no questions in English. This argument starts from the observation that children only make mistakes compatible with rules targeting hierarchical structure, even though the examples which they encounter could have been generated by a simpler rule that targets linear order. In other words, children seem to ignore the possibility that the question rule is as simple as "switch the order of the first two words" and immediately jump to alternatives that rearrange constituents in tree structures. This is taken as evidence that children are born knowing that grammatical rules involve hierarchical structure, even though they have to figure out what those rules are.[6][7][8]
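The contrast between the two candidate rules can be made concrete with a toy sketch over a classic example sentence of this literature. The auxiliary set and the hand-supplied parse index are illustrative assumptions standing in for a real parser.

```python
# Toy contrast between a linear and a hierarchical question-formation rule.
# The linear rule fronts the first auxiliary in the string; the hierarchical
# rule fronts the auxiliary of the main clause (whose position a real learner
# would get from a parse, supplied by hand here).

def linear_rule(words, aux=("is", "are", "was", "were")):
    """Front the first auxiliary encountered left-to-right."""
    i = next(k for k, w in enumerate(words) if w in aux)
    return [words[i]] + words[:i] + words[i + 1:]

def hierarchical_rule(words, main_aux_index):
    """Front the main-clause auxiliary at the given (parsed) index."""
    return [words[main_aux_index]] + words[:main_aux_index] + words[main_aux_index + 1:]

s = "the man who is tall is happy".split()
# linear_rule(s)          -> "is the man who tall is happy"  (ungrammatical)
# hierarchical_rule(s, 5) -> "is the man who is tall happy"  (correct)
```

Children's productions pattern with the second rule, which is the observation the poverty-of-the-stimulus argument builds on.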
Between 1100 and 1400, theoretical work on matters of language expanded significantly in western Europe, its typical social context being the teaching of grammar, logic, or theology. It produced a vast literature on aspects of linguistic theory, such as a 13th-century theory of grammar known in modern times as modism, although the texts made no assertions about "a theory of language" as such. While not much work was done on the evolution of languages, Dante and Roger Bacon offered perceptive observations.[9] Bacon had a complex notion of grammar, which ranged from the teaching of elementary Latin through what he termed "rational grammar" to research on the so-called languages of sacred wisdom, i.e. Latin and Greek.[10] Professor of Latin literature Raf Van Rooy quotes Bacon's "notorious" dictum on grammar used to denote regional linguistic variation and notes Bacon's contention that Latin and Greek, although "one in substance," were each characterized by many idioms (idiomata: proprietates). Van Rooy speculates that Bacon's references to grammar concerned a "quasi-universal nature of grammatical categories," whereas his assertions on Greek and Latin were applications of his lingua/idioma distinction rather than generalizing statements on the nature of grammar.[11]: 28–44 Linguistics professor Margaret Thomas acknowledges that "intellectual commerce between ideas about Universal Grammar and [second language] acquisition is not a late-20th century invention,"[12] but rejects as "convenient" the interpretation of Bacon's dictum[n 1] by generative grammarians as an assertion by the English polymath of the existence of universal grammar.[13]
The concept of a generalized grammar was at the core of the 17th-century projects for philosophical languages. An influential work of that time was Grammaire générale by Claude Lancelot and Antoine Arnauld. They describe a general grammar for languages, coming to the conclusion that grammar has to be universal.[14] There is also a Scottish school of universal grammarians from the 18th century, distinct from the philosophical-language project, that included James Beattie, Hugh Blair, James Burnett, James Harris, and Adam Smith.
The article on grammar in the first edition of the Encyclopædia Britannica (1771) contains an extensive section titled "Of Universal Grammar."[15]
In the late 19th and early 20th century, Wilhelm Wundt and Otto Jespersen claimed that these earlier arguments were overly influenced by Latin and ignored the breadth of worldwide language data. Jespersen did not entirely reject the notion of a "real grammar", but reduced it to universal syntactic categories or super-categories, such as number, tense, etc.[16]
Behaviorists, after the rise of their eponymous theory, advanced the idea that language acquisition, like any other kind of learning, could be explained by a succession of trials, errors, and rewards for success.[17] In other words, children, according to behaviorists, learn their mother tongue by simple imitation, i.e. by listening and repeating what adults say. For example, when a child says "milk" and the mother smiles and gives milk to her child, the child will find this outcome rewarding, which enhances the child's language development.[18]
Within generative grammar, there are a variety of theories about what universal grammar consists of. One notable hypothesis, proposed by Hagit Borer, holds that the fundamental syntactic operations are universal and that all variation arises from different feature-specifications in the lexicon.[5][19] On the other hand, a strong hypothesis adopted in some variants of Optimality Theory holds that humans are born with a universal set of constraints, and that all variation arises from differences in how these constraints are ranked.[5][20] In a 2002 paper, Noam Chomsky, Marc Hauser, and W. Tecumseh Fitch proposed that universal grammar consists solely of the capacity for hierarchical phrase structure.[21]
In an article entitled "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?",[22] Hauser, Chomsky, and Fitch present the three leading hypotheses for how language evolved and brought humans to the point where they have a universal grammar.
The first hypothesis states that the faculty of language in the broad sense (FLb) is strictly homologous to animal communication. This means that homologous aspects of the faculty of language exist in non-human animals.
The second hypothesis states that the FLb is a derived and uniquely human adaptation for language. This hypothesis holds that individual traits were subject to natural selection and came to be specialized for humans.
The third hypothesis states that only the faculty of language in the narrow sense (FLn) is unique to humans. It holds that while mechanisms of the FLb are present in both human and non-human animals, the computational mechanism of recursion has evolved recently, and solely in humans.[23]
The presence of creole languages is sometimes cited as further support for this theory, especially by Bickerton's language bioprogram theory. Creole languages develop and form when disparate societies with no common language come together and are forced to devise a new system of communication. The system used by the original speakers is typically an inconsistent mix of vocabulary items, known as a pidgin. As these speakers' children begin to acquire their first language, they use the pidgin input to effectively create their own original language, known as a creole language. Unlike pidgins, creole languages have native speakers (those with language acquisition from early childhood) and make use of a full, systematic grammar.
Bickerton claims that the fact that certain features are shared by virtually all creole languages supports the notion of a universal grammar. For example, their default point of reference in time (expressed by bare verb stems) is not the present moment but the past. Using pre-verbal auxiliaries, they uniformly express tense, aspect, and mood. Negative concord occurs, but it affects the verbal subject (as opposed to the object, as it does in languages like Spanish). Another similarity among creole languages is that questions are created simply by changing the intonation of a declarative sentence, not its word order or content.
Opposing this notion, the work of Carla Hudson-Kam and Elissa Newport suggests that creole languages may not support a universal grammar at all. In a series of experiments, Hudson-Kam and Newport looked at how children and adults learn artificial grammars. They found that children tend to ignore minor variations in the input when those variations are infrequent, and reproduce only the most frequent forms. In doing so, the children tend to standardize the language they hear around them. Hudson-Kam and Newport hypothesize that in a pidgin-development situation (and in the real-life situation of a deaf child whose parents are or were disfluent signers), children systematize the language they hear based on the probability and frequency of forms, rather than on the basis of a universal grammar.[24][25] Further, they argue, it seems to follow that creole languages would share features with the languages from which they are derived, and thus look similar in terms of grammar.
Many researchers of universal grammar argue against the concept of relexification, i.e. the idea that a language replaces its lexicon almost entirely with that of another. Relexification, they argue, runs counter to the universalist notion of an innate grammar.[citation needed]
Recent research has used recurrent neural network (RNN) architectures. McCoy et al. (2018) focused on a strong version of the poverty-of-the-stimulus argument, which claims that language learners require a hierarchical constraint. They note that a milder version, which only asserts that a hierarchical bias is necessary, is difficult to assess using RNNs, because RNNs must possess some biases and the nature of these biases remains "currently poorly understood." They found that while all the architectures they used had a bias toward linear order, the GRU-with-attention architecture was the only one that overcame this linear bias sufficiently to generalize hierarchically, and they acknowledge that "humans certainly could have such an innate constraint."[26]
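The contrast McCoy et al. probed, linear versus hierarchical generalization, can be made concrete with English yes/no question formation. The sketch below is our own toy illustration, not the authors' actual RNN experiment; it assumes pre-tagged auxiliaries and a crude treatment of relative clauses, just enough for the demo sentences.

```python
# Toy illustration (not McCoy et al.'s RNN setup) of the linear vs.
# hierarchical hypotheses for English yes/no question formation.
# Assumptions: auxiliaries are pre-tagged, and "that" introduces a relative
# clause containing exactly one auxiliary.

AUX = {"can", "will", "does"}

def linear_rule(words):
    """Front the linearly first auxiliary -- the 'linear bias'."""
    i = next(k for k, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

def hierarchical_rule(words):
    """Front the main-clause auxiliary, skipping any auxiliary
    inside a relative clause introduced by 'that'."""
    depth = 0
    for k, w in enumerate(words):
        if w == "that":
            depth += 1                 # entering an embedded clause
        elif depth > 0 and w in AUX:
            depth -= 1                 # crude: the clause 'closes' at its auxiliary
        elif depth == 0 and w in AUX:
            return [w] + words[:k] + words[k + 1:]
    return words

simple = ["the", "dog", "can", "run"]
embedded = ["the", "dog", "that", "can", "bark", "can", "run"]

# Both rules agree on simple input, so a learner's evidence is ambiguous:
assert linear_rule(simple) == hierarchical_rule(simple)

# They diverge on embedded clauses, which children rarely hear:
print(linear_rule(embedded))        # ['can', 'the', 'dog', 'that', 'bark', 'can', 'run']  (ungrammatical)
print(hierarchical_rule(embedded))  # ['can', 'the', 'dog', 'that', 'can', 'bark', 'run']  (grammatical)
```

A learner exposed only to simple sentences cannot tell the two rules apart; which one it generalizes to reveals its inductive bias.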
The empirical basis of poverty-of-the-stimulus arguments has been challenged by Geoffrey Pullum and others, leading to a persistent back-and-forth debate in the language acquisition literature.[27][28]
Language acquisition researcher Michael Ramscar has suggested that when children erroneously expect an ungrammatical form that then never occurs, the repeated failure of expectation serves as a form of implicit negative feedback that allows them to correct their errors over time; this is, for example, how children correct grammatical generalizations like goed to went through repeated failure.[29][30]
In addition, it has been suggested that people learn about probabilistic patterns of word distribution in their language, rather than hard and fast rules (see Distributional hypothesis).[31] For example, in English, children overgeneralize the past tense marker "-ed" and conjugate irregular verbs as if they were regular, producing forms like goed and eated, and then correct these deviations over time.[29] It has also been hypothesized that the poverty of the stimulus problem can be largely avoided if it is assumed that children employ similarity-based generalization strategies in language learning, i.e. generalizing about the usage of new words from similar words they already know how to use.[32]
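The distributional idea can be sketched in a few lines: words are compared by the contexts (neighbouring words) they occur in, and usage generalizes from one word to distributionally similar ones. The corpus below is invented for illustration; this is a minimal sketch, not a model of actual acquisition.

```python
from collections import Counter
from math import sqrt

# Toy corpus (invented) for a minimal distributional-similarity sketch.
corpus = "the cat sat on the mat the dog sat on the rug a cat ate a dog ate".split()

def context_vector(word, window=1):
    """Count the words appearing within `window` positions of `word`."""
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    ctx[corpus[j]] += 1
    return ctx

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b[k] for k, v in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" share contexts (sat, ate, the/a), so a learner could extend
# what it knows about one to the other; "cat" and "sat" share none.
assert cosine(context_vector("cat"), context_vector("dog")) > \
       cosine(context_vector("cat"), context_vector("sat"))
```

Similarity-based generalization then amounts to using a new word the way its highest-similarity neighbours are used.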
Neurogeneticists Simon Fisher and Sonja Vernes observe that, with human language skills being evidently unmatched elsewhere in the world's fauna, there have been several theories positing a single mutation event occurring some time in the past in our nonspeaking ancestors, as argued e.g. by Chomsky (2011): some "lone spark that was sufficient to trigger the sudden appearance of language and culture." They characterize that notion as "romantic" and "inconsistent with the messy mappings between genetics and cognitive processes." According to Fisher and Vernes, the link between genes and grammar has not been consistently mapped by scientists; what research has established, they claim, relates primarily to speech pathologies. The resulting lack of certainty, they conclude, has provided an audience for "unconstrained speculations" that have fed the "myth" of "so-called grammar genes".[33]
Professor of Natural Language Computing Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore pseudoscientific. He argues that the grammatical "rules" linguists posit are simply post-hoc observations about existing languages, rather than predictions about what is possible in a language.[34][35][36] Sampson claims that every one of the "poor" arguments used to justify the language-instinct claim is wrong. He writes that "either the logic is fallacious, or the factual data are incorrect (or, sometimes, both)," and that the "evidence points the other way": children are good at learning languages because people are good at learning anything that life throws at them, not because they have fixed structures of knowledge built in.[34] Similarly, professor of cognitive science Jeffrey Elman argues that the unlearnability of languages assumed by universal grammar is based on a too-strict, "worst-case" model of grammar, which is not in keeping with any actual grammar.
Linguist James Hurford, in his article "Nativist and Functional Explanations in Language Acquisition,"[37] sets out the major differences between the glossogenetic and the phylogenetic mechanisms. He states: "Deep aspects of the form of language are not likely to be readily identifiable with obvious specific uses, and one cannot suppose that it will be possible to attribute them directly to the recurring short-term needs of successive generations in a community. Here, nativist explanations for aspects of the form of language, appealing to an innate LAD, seem appropriate. But use or function can also be appealed to on the evolutionary timescale, to attempt to explain the structure of the LAD itself." For Hurford, biological mutations plus functional considerations constitute the explanans, while the LAD itself constitutes the explanandum. The LAD is part of the species' heredity, the result of mutations over a long period, he states. But while he agrees with Chomsky that the mechanism of grammaticisation is located in "the Chomskyan LAD" and that Chomsky is "entirely right in emphasising that a language (E-language) is an artifact resulting from the interplay of many factors," he holds that this artifact deserves great interest and systematic study, and can affect grammatical competence, i.e. "I-language."[37]
Morten H. Christiansen, professor of psychology, and Nick Chater, professor of psychology and language sciences, have argued that "a biologically determined UG is not evolutionarily viable." Because the processes of language change are much more rapid than processes of genetic change, they state, language constitutes a "moving target" both over time and across different human populations, and hence cannot provide a stable environment to which language genes could have adapted. Following Darwin, they view language as a complex and interdependent "organism" which evolves under selectional pressures from human learning and processing mechanisms, so that "apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics".[38] Professor of linguistics Norbert Hornstein countered polemically that Christiansen and Chater appear "to have no idea what generative grammar [theory] is," "especially, but not uniquely, about the Chomsky program." Hornstein points out that all "grammatically informed psycho-linguistic works done today or before" understand that generative/universal grammar capacities are but one factor among others needed to explain real-time acquisition. Christiansen and Chater's observation that "language use involves multiple interacting variables" [italics in the original] is, essentially, a truism. It is nothing new, he argues, to state that "much more than a competence theory will be required" to figure out how language is deployed, acquired, produced, or parsed. The position, he concludes, that universal grammar properties are just "probabilistic generalizations over available linguistic inputs" belongs to the "traditional" and "debunked" view held by associationists and structuralists many decades in the past.[39]
In the same vein, professor of linguistics Nicholas Evans and professor of psycholinguistics Stephen C. Levinson observe[40] that Chomsky's notion of a universal grammar has been mistaken for a set of substantial research findings about what all languages have in common, while it is, "in fact," the programmatic label for "whatever it turns out to be that all children bring to learning a language." For substantial findings about universals across languages, they argue, one must turn to the field of linguistic typology, which reveals a "bewildering range of diverse languages" from which "generalizations are really quite hard to extract." Chomsky's actual views, combining, as they claim, philosophical and mathematical approaches to structure with claims about the innate endowment for language, have been "hugely influential in the cognitive sciences."[40]: 430
Wolfram Hinzen, in his work The philosophical significance of Universal Grammar,[41] seeks to re-establish the epistemological significance of grammar and addresses the three main current objections to Cartesian universal grammar: that it has no coherent formulation, that it cannot have evolved by standard, accepted neo-Darwinian evolutionary principles, and that it goes against the variation extant at all levels of linguistic organization, which lies at the heart of the human faculty of language.
In the domain of field research, Daniel Everett has claimed that the Pirahã language is a counterexample to the basic tenets of universal grammar because it lacks clausal embedding. According to Everett, this trait results from Pirahã culture emphasizing present-moment concrete matters.[42] Nevins et al. (2007) have responded that Pirahã does, in fact, have clausal embedding, and that, even if it did not, this would be irrelevant to current theories of universal grammar. They addressed each of Everett's claims and, using Everett's own "rich material" data, claim to have found no evidence of a causal relation between culture and grammatical structure. Pirahã grammar, they concluded, presents no unusual challenge, much less the "severe" one claimed by Everett, to the notion of a universal grammar.[43]
In 2017, Chomsky and Berwick co-wrote their book titled Why Only Us, where they defined both the minimalist program and the strong minimalist thesis and its implications, to update their approach to UG theory. According to Berwick and Chomsky, "the optimal situation would be that UG reduces to the simplest computational principles which operate in accord with conditions of computational efficiency. This conjecture is ... called the Strong Minimalist Thesis (SMT)."[44]: 94
The significance of SMT is to shift the previous emphasis on a universal grammar to the concept that Chomsky and Berwick now call "merge". "Merge" is defined there as follows:
Every computational system has embedded within it somewhere an operation that applies to two objects X and Y already formed, and constructs from them a new object Z. Call this operation Merge.
SMT dictates that "Merge will be as simple as possible: it will not modify X or Y or impose any arrangement on them; in particular, it will leave them unordered; an important fact. Merge is therefore just set formation: Merge of X and Y yields the set {X, Y}."[44]: 98
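The quoted definition is simple enough to be stated directly in code. The following is a minimal sketch of Merge as bare set formation; the example words and the use of `frozenset` (so that merged objects can themselves be merged) are our own illustrative choices.

```python
# Minimal sketch of Merge as bare set formation, per the quoted definition:
# Merge(X, Y) = {X, Y}, imposing no order and leaving X and Y unmodified.
# frozenset is hashable, so merged objects can themselves be merged again.

def merge(x, y):
    return frozenset([x, y])

# Build "ate the apples" as nested, unordered sets:
dp = merge("the", "apples")   # {the, apples}
vp = merge("ate", dp)         # {ate, {the, apples}} -- hierarchy without linear order

assert merge("the", "apples") == merge("apples", "the")  # Merge imposes no order
assert dp in vp                                          # but it does build hierarchy
```

Linear order, on this view, is imposed later (e.g. at externalization), not by Merge itself.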
In linguistics, grammar is the set of rules for how a natural language is structured, as demonstrated by its speakers or writers. Grammar rules may concern the use of clauses, phrases, and words. The term may also refer to the study of such rules, a subject that includes phonology, morphology, and syntax, together with phonetics, semantics, and pragmatics. There are, broadly speaking, two different ways to study grammar: traditional grammar and theoretical grammar.
Fluency in a particular language variety involves a speaker internalizing these rules, many or most of which are acquired by observing other speakers, as opposed to intentional study or instruction. Much of this internalization occurs during early childhood; learning a language later in life usually involves more direct instruction.[1] The term grammar can also describe the linguistic behaviour of groups of speakers and writers rather than individuals. Differences in scale are important to this meaning: for example, English grammar could describe those rules followed by every one of the language's speakers.[2] At smaller scales, it may refer to rules shared by smaller groups of speakers.
A description, study, or analysis of such rules may also be known as a grammar, or as a grammar book. A reference work describing the grammar of a language is called a reference grammar or simply a grammar. A fully explicit grammar, which describes the grammatical constructions of a particular speech variety in great detail, is called a descriptive grammar. This kind of linguistic description contrasts with linguistic prescription, an attempt to marginalize some constructions while codifying others, either absolutely or in the framework of a standard language. The word grammar often has divergent meanings when used in contexts outside linguistics. It may be used more broadly to include orthographic conventions of written language, such as spelling and punctuation, which are not typically considered part of grammar by linguists; that is, the conventions used for writing a language. It may also be used more narrowly to refer to a set of prescriptive norms only, excluding those aspects of a language's grammar that are not subject to variation or debate about their acceptability.
The word grammar is derived from Greek γραμματικὴ τέχνη (grammatikḕ téchnē), which means "art of letters", from γράμμα (grámma), "letter", itself from γράφειν (gráphein), "to draw, to write".[3] The same Greek root also appears in the words graphics, grapheme, and photograph.
The first systematic grammar of Sanskrit originated in Iron Age India, with Yaska (6th century BC), Pāṇini (6th–5th century BC[4]) and his commentators Pingala (c. 200 BC), Katyayana, and Patanjali (2nd century BC). Tolkāppiyam, the earliest Tamil grammar, is mostly dated to before the 5th century AD. The Babylonians also made some early attempts at language description.[5]
Grammar appeared as a discipline in Hellenism from the 3rd century BC forward with authors such as Rhianus and Aristarchus of Samothrace. The oldest known grammar handbook is the Art of Grammar (Τέχνη Γραμματική), a succinct guide to speaking and writing clearly and effectively, written by the ancient Greek scholar Dionysius Thrax (c. 170 – c. 90 BC), a student of Aristarchus of Samothrace who founded a school on the Greek island of Rhodes. Dionysius Thrax's grammar book remained the primary grammar textbook for Greek schoolboys until as late as the twelfth century AD. The Romans based their grammatical writings on it, and its basic format remains the basis for grammar guides in many languages even today.[6] Latin grammar developed by following Greek models from the 1st century BC, due to the work of authors such as Orbilius Pupillus, Remmius Palaemon, Marcus Valerius Probus, Verrius Flaccus, and Aemilius Asper.
The grammar of Irish originated in the 7th century with Auraicept na n-Éces. Arabic grammar emerged with Abu al-Aswad al-Du'ali in the 7th century. The first treatises on Hebrew grammar appeared in the High Middle Ages, in the context of Midrash (exegesis of the Hebrew Bible). The Karaite tradition originated in Abbasid Baghdad. The Diqduq (10th century) is one of the earliest grammatical commentaries on the Hebrew Bible.[7] In the 12th century, Ibn Barun compared the Hebrew language with Arabic in the Islamic grammatical tradition.[8]
Belonging to the trivium of the seven liberal arts, grammar was taught as a core discipline throughout the Middle Ages, following the influence of authors from Late Antiquity, such as Priscian. Treatment of vernaculars began gradually during the High Middle Ages, with isolated works such as the First Grammatical Treatise, but became influential only in the Renaissance and Baroque periods. In 1486, Antonio de Nebrija published Las introduciones Latinas contrapuesto el romance al Latin, and the first Spanish grammar, Gramática de la lengua castellana, in 1492. During the 16th-century Italian Renaissance, the Questione della lingua was the discussion on the status and ideal form of the Italian language, initiated by Dante's de vulgari eloquentia (Pietro Bembo, Prose della volgar lingua, Venice 1525). The first grammar of Slovene was written in 1583 by Adam Bohorič, and Grammatica Germanicae Linguae, the first grammar of German, was published in 1578.
Grammars of some languages began to be compiled for the purposes of evangelism and Bible translation from the 16th century onward, such as Grammatica o Arte de la Lengua General de Los Indios de Los Reynos del Perú (1560), a Quechua grammar by Fray Domingo de Santo Tomás.
From the latter part of the 18th century, grammar came to be understood as a subfield of the emerging discipline of modern linguistics. The Deutsche Grammatik of Jacob Grimm was first published in the 1810s. The Comparative Grammar of Franz Bopp, the starting point of modern comparative linguistics, came out in 1833.
Frameworks of grammar which seek to give a precise scientific theory of the syntactic rules of grammar and their function have been developed in theoretical linguistics.
Other frameworks are based on an innate "universal grammar", an idea developed by Noam Chomsky. In such models, the object is placed into the verb phrase. The most prominent biologically oriented theories are:
Parse trees are commonly used by such frameworks to depict their rules. There are various alternative schemes for some grammars:
Grammars evolve through usage. Historically, with the advent of written representations, formal rules about language usage tend to appear as well, although such rules tend to describe writing conventions more accurately than conventions of speech.[11] Formal grammars are codifications of usage which are developed by repeated documentation and observation over time. As rules are established and developed, the prescriptive concept of grammatical correctness can arise. This often produces a discrepancy between contemporary usage and that which has been accepted, over time, as being standard or "correct". Linguists tend to view prescriptive grammars as having little justification beyond their authors' aesthetic tastes, although style guides may give useful advice about standard language employment based on descriptions of usage in contemporary writings of the same language. Linguistic prescriptions also form part of the explanation for variation in speech, particularly variation in the speech of an individual speaker (for example, why some speakers say "I didn't do nothing", some say "I didn't do anything", and some say one or the other depending on social context).
The formal study of grammar is an important part of children's schooling from a young age through advanced learning, though the rules taught in schools are not a "grammar" in the sense that most linguists use the term, particularly as they are prescriptive in intent rather than descriptive.
Constructed languages (also called planned languages or conlangs) are more common in the modern day, although still extremely uncommon compared to natural languages. Many have been designed to aid human communication (for example, naturalistic Interlingua, schematic Esperanto, and the highly logical Lojban). Each of these languages has its own grammar.
Syntax refers to linguistic structure above the word level (for example, how sentences are formed), though without taking into account intonation, which is the domain of phonology. Morphology, by contrast, refers to structure at and below the word level (for example, how compound words are formed), but above the level of individual sounds, which, like intonation, are in the domain of phonology.[12] However, no clear line can be drawn between syntax and morphology. Analytic languages use syntax to convey information that is encoded by inflection in synthetic languages. In other words, word order is not significant and morphology is highly significant in a purely synthetic language, whereas morphology is not significant and syntax is highly significant in an analytic language. For example, Chinese and Afrikaans are highly analytic, thus meaning is very context-dependent. (Both have some inflections, and both have had more in the past; thus, they are becoming even less synthetic and more "purely" analytic over time.) Latin, which is highly synthetic, uses affixes and inflections to convey the same information that Chinese does with syntax. Because Latin words are quite (though not totally) self-contained, an intelligible Latin sentence can be made from elements that are arranged almost arbitrarily. Latin has complex affixation and simple syntax, whereas Chinese has the opposite.
Prescriptive grammar is taught in primary and secondary school. The term "grammar school" historically referred to a school (attached to a cathedral or monastery) that taught Latin grammar to future priests and monks. It originally referred to a school that taught students how to read, scan, interpret, and declaim Greek and Latin poets (including Homer, Virgil, Euripides, and others). These should not be mistaken for the related, albeit distinct, modern British grammar schools.
A standard language is a dialect that is promoted above other dialects in writing, education, and, broadly speaking, in the public sphere; it contrasts with vernacular dialects, which may be the objects of study in academic, descriptive linguistics but which are rarely taught prescriptively. The standardized "first language" taught in primary education may be subject to political controversy because it may sometimes establish a standard defining nationality or ethnicity.
Recently, efforts have begun to update grammar instruction in primary and secondary education. The main focus has been to prevent the use of outdated prescriptive rules in favor of setting norms based on earlier descriptive research and to change perceptions about the relative "correctness" of prescribed standard forms in comparison to non-standard dialects. A series of metastudies have found that the explicit teaching of grammatical parts of speech and syntax has little or no effect on the improvement of student writing quality in elementary school, middle school, or high school; other methods of writing instruction had far greater positive effects, including strategy instruction, collaborative writing, summary writing, process instruction, sentence combining, and inquiry projects.[13][14][15]
The preeminence of Parisian French has reigned largely unchallenged throughout the history of modern French literature. Standard Italian is based on the speech of Florence rather than the capital because of its influence on early literature. Likewise, standard Spanish is not based on the speech of Madrid but on that of educated speakers from more northern areas such as Castile and León (see Gramática de la lengua castellana). In Argentina and Uruguay the Spanish standard is based on the local dialects of Buenos Aires and Montevideo (Rioplatense Spanish). Portuguese has, for now, two official standards, Brazilian Portuguese and European Portuguese.
The Serbian variant of Serbo-Croatian is likewise divided; Serbia and the Republika Srpska of Bosnia and Herzegovina use their own distinct normative subvarieties, with differences in yat reflexes. The existence and codification of a distinct Montenegrin standard is a matter of controversy: some treat Montenegrin as a separate standard lect, and some think it should be considered another form of Serbian.
Norwegian has two standards, Bokmål and Nynorsk, the choice between which is subject to controversy: each Norwegian municipality can either declare one as its official language or remain "language neutral". Nynorsk is backed by 27 percent of municipalities. The main language used in primary schools, chosen by referendum within the local school district, normally follows the official language of its municipality. Standard German emerged from the standardized chancellery use of High German in the 16th and 17th centuries. Until about 1800, it was almost exclusively a written language, but now it is so widely spoken that most of the former German dialects are nearly extinct.
Standard Chinese has official status as the standard spoken form of the Chinese language in the People's Republic of China (PRC), the Republic of China (ROC), and the Republic of Singapore. Pronunciation of Standard Chinese is based on the local accent of Mandarin Chinese from Luanping, Chengde, in Hebei Province near Beijing, while grammar and syntax are based on modern vernacular written Chinese.
Modern Standard Arabic is directly based on Classical Arabic, the language of the Qur'an. The Hindustani language has two standards, Hindi and Urdu.
In the United States, the Society for the Promotion of Good Grammar designated 4 March as National Grammar Day in 2008.[16]
In linguistics, a determiner phrase (DP) is a type of phrase headed by a determiner such as many.[1] Controversially, many approaches take a phrase like not very many apples to be a DP, headed, in this case, by the determiner many. This is called the DP analysis or the DP hypothesis. Others reject this analysis in favor of the more traditional NP (noun phrase or nominal phrase) analysis, where apples would be the head of the phrase, in which the DP not very many is merely a dependent. Thus, there are competing analyses concerning heads and dependents in nominal groups.[2] The DP analysis developed in the late 1970s and early 1980s,[3] and it is the majority view in generative grammar today.[4]
In the example determiner phrases below, the determiners are in boldface:
Although the DP analysis is the dominant view in generative grammar, most other theories of grammar reject it. For instance, representational phrase structure grammars follow the NP analysis, e.g. Head-driven Phrase Structure Grammar, and most dependency grammars such as Meaning-Text Theory, Functional Generative Description, and Lexicase Grammar also assume the traditional NP analysis of noun phrases, Word Grammar being the one exception.[1] Construction Grammar and Role and Reference Grammar also assume NP instead of DP. Noam Chomsky, on whose framework most generative grammar has been built, said in a 2020 lecture,
I’m going to assume here that nominal phrases are actually NPs. The DP hypothesis, which is widely accepted, was very fruitful, leading to a lot of interesting work; but I’ve never really been convinced by it. I think these structures are fundamentally nominal phrases. [. . . ] As far as determiners are concerned, like say that, I suspect that they are adjuncts. So I’ll be assuming that the core system is basically nominal.[5]
The point at issue concerns the hierarchical status of determiners. Various types of determiners in English are summarized in the following table.
Should the determiner in phrases such as the car and those ideas be construed as the head of the phrase or as a dependent in it? The following trees illustrate the competing analyses, DP vs. NP. The two possibilities are illustrated first using dependency-based structures (of dependency grammars):
The a-examples show the determiners dominating the nouns, and the b-examples reverse the relationship, since the nouns dominate the determiners. The same distinction is illustrated next using constituency-based trees (of phrase structure grammars), which are equivalent to the above:
The convention used here employs the words themselves as the labels on the nodes in the structure. Whether a dependency-based or constituency-based approach to syntax is employed, the issue is which word is the head over the other.
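The competing analyses can be made explicit with a tiny data structure. The sketch below encodes each dependency tree as an ad-hoc head/dependents dictionary (our own encoding, not a standard treebank format); the only point is which word ends up as the root of the phrase.

```python
# The two competing dependency analyses of "the car", encoded as simple
# head/dependents dictionaries (an ad-hoc encoding for illustration only).

def node(head, deps=()):
    return {"head": head, "deps": [node(d) if isinstance(d, str) else d for d in deps]}

dp_analysis = node("the", ["car"])   # DP analysis: the determiner governs the noun
np_analysis = node("car", ["the"])   # NP analysis: the noun governs the determiner

def root_word(tree):
    """The word that heads the whole phrase under a given analysis."""
    return tree["head"]

assert root_word(dp_analysis) == "the"
assert root_word(np_analysis) == "car"
```

Both encodings contain the same two words; the dispute is purely about which dominates the other.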
The DP-hypothesis is held for four main reasons: 1) it facilitates viewing phrases and clauses as structurally parallel, 2) it accounts for the fact that determiners often introduce phrases and occupy a fixed position within them, 3) it accounts for possessive-s constructions, and 4) it accounts for the behaviour of definite pronouns, given their complementary distribution with determiners.
The original motivation for the DP-analysis came in the form of parallelism across phrase and clause. The DP-analysis provides a basis for viewing clauses and phrases as structurally parallel.[6] The basic insight runs along the following lines: since clauses have functional categories above lexical categories, noun phrases should do the same. The traditional NP-analysis has the drawback that it positions the determiner, which is often a pure function word, below the lexical noun, which is usually a full content word. The traditional NP-analysis is therefore unlike the analysis of clauses, which positions the functional categories as heads over the lexical categories. The point is illustrated by drawing a parallel to the analysis of auxiliary verbs. Given a combination such as will understand, one views the modal auxiliary verb will, a function word, as head over the main verb understand, a content word. Extending this type of analysis to a phrase like the car, the determiner the, a function word, should be head over car, a content word. In so doing, the NP the car becomes a DP. The point is illustrated with simple dependency-based hierarchies:
Only the DP-analysis shown in c establishes the parallelism with the verb chain. It enables one to assume that the architecture of syntactic structure is principled; functional categories (function words) consistently appear above lexical categories (content words) in phrases and clauses. This unity of the architecture of syntactic structure is perhaps the strongest argument in favor of the DP-analysis.
The fact that determiners typically introduce the phrases in which they appear is also viewed as support for the DP-analysis. One points to the fact that when more than one attributive adjective appears, their order is somewhat flexible, e.g. an old friendly dog vs. a friendly old dog. The position of the determiner, in contrast, is fixed; it has to introduce the phrase, e.g. *friendly an old dog, *old friendly a dog, etc. The fact that the determiner's position at the left-most periphery of the phrase is set is taken as an indication that it is the head of the phrase. The reasoning assumes that the architecture of phrases is robust if the position of the head is fixed. The flexibility of order for attributive adjectives is taken as evidence that they are indeed dependents of the noun.
Possessive-s constructions in English are often produced as evidence in favor of the DP-analysis.[7] The key trait of the possessive-s construction is that the -s can attach to the right periphery of a phrase. This fact means that -s is not a suffix (since suffixes attach to words, not phrases). Further, the possessive-s construction has the same distribution as determiners, which means that it has determiner status. The assumption is therefore that possessive -s heads the entire DP, e.g.
The phrasal nature of the possessive -s constructions like these is easy to accommodate on a DP-analysis. The possessive -s heads the possessive phrase; the phrase that immediately precedes the -s (in brackets) is in specifier position, and the noun that follows the -s is the complement. The claim is that the NP-analysis is challenged by this construction because it does not make a syntactic category available for the analysis of -s, that is, the NP-analysis does not have a clear means at its disposal to grant -s the status of determiner. This claim is debatable, however, since nothing prevents the NP-analysis from also granting -s the status of determiner. The NP-analysis is however forced to acknowledge that DPs do in fact exist, since possessive -s constructions have to be acknowledged as phrases headed by the determiner -s. A certain type of DP definitely exists, namely one that has -s as its head.
The fact that definite pronouns are in complementary distribution with determiners is taken as evidence in favor of DP.[8] The important observation in this area is that definite pronouns cannot appear together with a determiner like the or a in one and the same DP, e.g.
On a DP-analysis, this trait of definite pronouns is relatively easy to account for. If definite pronouns are actually determiners, then it makes sense that they should not be able to appear together with another determiner, since the two would be competing for the same syntactic position in the hierarchy of structure. On an NP-analysis, in contrast, there is no obvious reason why a combination of the two would not be possible. In other words, the NP-analysis has to resort to additional stipulations to account for the fact that combinations like *the them are impossible. A difficulty with this reasoning, however, is posed by indefinite pronouns (one, few, many), which can easily appear together with a determiner, e.g. the old one. The DP-analysis must therefore draw a distinction between definite and indefinite pronouns, whereby definite pronouns are classified as determiners, but indefinite pronouns as nouns.
While the DP-hypothesis has largely replaced the traditional NP analysis in generative grammar, it is generally not held among advocates of other frameworks, for six reasons:[9] 1) absent determiners, 2) morphological dependencies, 3) semantic and syntactic parallelism, 4) idiomatic expressions, 5) left-branch phenomena, and 6) genitives.
Many languages lack the equivalents of the English definite and indefinite articles, e.g. the Slavic languages. Thus in these languages, determiners appear much less often than in English, where the definite article the and the indefinite article a are frequent. What this means for the DP-analysis is that null determiners are a common occurrence in these languages. In other words, the DP-analysis must posit the frequent occurrence of null determiners in order to remain consistent about its analysis of DPs. DPs that lack an overt determiner actually involve a covert determiner in some sense. The problem is evident in English as well, where mass nouns can appear with or without a determiner, e.g. milk vs. the milk, water vs. the water. Plural nouns can also appear with or without a determiner, e.g. books vs. the books, ideas vs. the ideas, etc. Since nouns that lack an overt determiner have the same basic distribution as nouns with a determiner, the DP-analysis should, if it wants to be consistent, posit the existence of a null determiner every time an overt determiner is absent. The traditional NP analysis is not confronted with this necessity, since for it, the noun is the head of the noun phrase regardless of whether a determiner is or is not present. Thus the traditional NP analysis requires less of the theoretical apparatus, since it does not need all those null determiners, the existence of which is non-falsifiable. Other things being equal, less is better according to Occam's Razor.
The NP-analysis is consistent with intuition in the area of morphological dependencies. Semantic and grammatical features of the noun influence the choice and morphological form of the determiner, not vice versa. Consider grammatical gender of nouns in a language like German, e.g. Tisch 'table' is masculine (der Tisch), Haus 'house' is neuter (das Haus), Zeit 'time' is feminine (die Zeit). The grammatical gender of a noun is an inherent trait of the noun, whereas the form of the determiner varies according to this trait of the noun. In other words, the noun is influencing the choice and form of the determiner, not vice versa. In English, this state of affairs is visible in the area of grammatical number, for instance with the opposition between singular this and that and plural these and those. Since the NP-analysis positions the noun above the determiner, the influence of the noun on the choice and form of the determiner is intuitively clear: the head noun is influencing the dependent determiner. The DP-analysis, in contrast, is unintuitive because it necessitates that one view the dependent noun as influencing the choice and form of the head determiner.
Despite what was stated above about parallelism across clause and DP, the traditional NP-analysis of noun phrases actually maintains parallelism in a way that is destroyed if one assumes DPs. The semantic parallelism that can be obtained across clause and NP, e.g. He loves water vs. his love of water, is no longer present in the structure if one assumes DPs. The point is illustrated here first with dependency trees:
On the NP-analysis, his is a dependent of love in the same way that he is a dependent of loves. The result is that the NP his love of water and the clause He loves water are mostly parallel in structure, which seems correct given the semantic parallelism across the two. In contrast, the DP analysis destroys the parallelism, since his becomes head over love. The same point is true for a constituency-based analysis:
These trees again employ the convention whereby the words themselves are used as the node labels. The NP-analysis maintains the parallelism because the determiner his appears as specifier in the NP headed by love in the same way that he appears as specifier in the clause headed by loves. In contrast, the DP analysis destroys this parallelism because his no longer appears as a specifier in the NP, but rather as head over the noun.
The fixed words of many idioms in natural language include the noun of a noun phrase at the same time that they exclude the determiner.[10] This is particularly true of many idioms in English that require the presence of a possessor that is not a fixed part of the idiom, e.g. take X's time, pull X's leg, dance on X's grave, step on X's toes, etc. While the presence of the Xs in these idioms is required, the X argument itself is not fixed, e.g. pull his/her/their/John's leg. What this means is that the possessor is NOT part of the idiom; it is outside of the idiom. This fact is a problem for the DP-analysis because it means that the fixed words of the idiom are interrupted in the vertical dimension. That is, the hierarchical arrangement of the fixed words is interrupted by the possessor, which is not part of the idiom. The traditional NP-analysis is not confronted with this problem, since the possessor appears below the noun. The point is clearly visible in dependency-based structures:
The arrangement of the words in the vertical dimension is what is important. The fixed words of the idiom (in blue) are top-down continuous on the NP-analysis (they form a catena), whereas this continuity is destroyed on the DP-analysis, where the possessor (in green) intervenes. Therefore the NP-analysis allows one to construe idioms as chains of words, whereas on the DP-analysis, one cannot make this assumption. On the DP-analysis, the fixed words of many idioms really cannot be viewed as discernible units of syntax in any way.
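The catena criterion described above is mechanically checkable: a set of words forms a catena in a dependency tree exactly when it is continuous in the dominance dimension, i.e. at most one member's head lies outside the set. The following Python sketch (the function name and the toy head maps are illustrative, not from the article) tests this for the NP- and DP-analyses of "pull X's leg":

```python
def is_catena(words, head):
    """words: a set of nodes; head: dict mapping each node to its head (None = root).
    In a tree, the set is connected in the dominance dimension exactly when
    exactly one member has its head outside the set (or is the root)."""
    outside = [w for w in words if head.get(w) not in words]
    return len(outside) == 1

# NP-analysis of "pull X's leg": pull -> leg -> X's (possessor below the noun)
np_heads = {"pull": None, "leg": "pull", "X's": "leg"}
# DP-analysis: pull -> X's -> leg (the possessor heads the phrase)
dp_heads = {"pull": None, "X's": "pull", "leg": "X's"}

idiom = {"pull", "leg"}  # the fixed words of the idiom
print(is_catena(idiom, np_heads))  # True: the fixed words are top-down continuous
print(is_catena(idiom, dp_heads))  # False: the possessor interrupts the chain
```

The check exploits a property of trees: if every member but one has its head inside the set, following head links from any member must terminate at that single topmost member, so the set is connected.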
In English and many closely related languages, constituents on left branches underneath nouns cannot be separated from their nouns. Long-distance dependencies are impossible between a noun and the constituents that normally appear on left branches underneath the noun. This fact is addressed in terms of the Left Branch Condition.[11] Determiners and attributive adjectives are typical "left-branch constituents". The observation is illustrated with examples of topicalization and wh-fronting:
These examples illustrate that with respect to the long-distance dependencies of topicalization and wh-fronting, determiners behave like attributive adjectives. Neither can be separated from its head noun. The NP-analysis is consistent with this observation because it positions both attributive adjectives and determiners as left-branch dependents of nouns. On a DP-analysis, however, determiners are no longer on left branches underneath nouns. In other words, the traditional NP-analysis is consistent with the fact that determiners behave just like attributive adjectives with respect to long-distance dependencies, whereas the DP-analysis cannot appeal to left branches to account for this behavior because on the DP-analysis, the determiner is no longer on a left branch underneath the noun.
The NP-analysis is consistent with the observation that genitive case in languages like German can have the option to appear before or after the noun, whereby the meaning remains largely the same, as illustrated with the following examples:
While the b-phrases are somewhat archaic, they still occur on occasion in elevated registers. The fact that the genitive NPs meines Bruders and seines Onkels can precede or follow the noun is telling, since it suggests that the hierarchical analysis of the two variants should be similar in a way that accommodates the almost synonymous meanings. On the NP-analysis, these data are not a problem because in both cases, the genitive expression is a dependent of the noun. The DP-analysis, in contrast, is challenged because in the b-variants, it takes the genitive expression to be head over the noun. In other words, the DP-analysis has to account for the fact that the meaning remains consistent despite the quite different structures across the two variants.
This is a list of English determiners.
All cardinal numerals are also included.[1]: 385
Any genitive noun phrase such as the cat's, the cats', Geoff's, etc.
The English pronouns form a relatively small category of words in Modern English whose primary semantic function is that of a pro-form for a noun phrase.[1] Traditional grammars consider them to be a distinct part of speech, while most modern grammars see them as a subcategory of noun, contrasting with common and proper nouns.[2]: 22 Still others see them as a subcategory of determiner (see the DP hypothesis). In this article, they are treated as a subtype of the noun category.
They clearly include personal pronouns, relative pronouns, interrogative pronouns, and reciprocal pronouns.[3] Other types that are included by some grammars but excluded by others are demonstrative pronouns and indefinite pronouns. Other members are disputed (see below).
Pronouns in formal modern English.
The full set of pronouns (i.e. personal, relative, interrogative and reciprocal pronouns), along with the dummies it and there, whose status as pronouns is disputed. Nonstandard, informal, and archaic forms are in italics.
*Whom and which can be the object of a fronted preposition, but not who or an omitted (Ø) pronoun: The chair on which she sat or The chair (that) she sat on, but not *The chair on that she sat.
†Except in free or fused relative constructions, in which case what, whatever or whichever is used for a thing and whoever or whomever is used for a person: What he did was clearly impossible, Whoever you married is welcome here (see below).
Pronoun is a category of words. A pro-form is not. It is a meaning relation in which a phrase "stands in" for (expresses the same content as) another where the meaning is recoverable from the context.[4] In English, pronouns mostly function as pro-forms, but there are pronouns that are not pro-forms and pro-forms that are not pronouns.[2]: 239 Pronouns can be pro-forms for non-noun phrases. For example, in I fixed the bike, which was quite a challenge, the relative pronoun which doesn't stand in for "the bike". Instead, it stands in for the entire proposition "I fixed the bike", a clause, or arguably "fixing the bike", a verb phrase.
Most pronouns are deictic:[2]: 68 they have no inherent denotation, and their meaning is always contextual. For example, the meaning of me depends entirely on who says it, just as the meaning of you depends on who is being addressed. Pronouns are not the only deictic words, though. For example, now is deictic, but it's not a pronoun.[5] Also, dummy pronouns and interrogative pronouns are not deictic. In contrast, most noun phrases headed by common or proper nouns are not deictic. For example, a book typically has the same denotation regardless of the situation in which it is said.
English pronouns have all of the functions of other noun phrases:[2]: ch. 5
On top of this, pronouns can appear in interrogative tags (e.g., that's the one, isn't it?).[2]: 238 These tags are formed with an auxiliary verb and a pronoun. Other nouns cannot appear in this construction. This provides justification for categorizing dummy there as a pronoun.[2]: 256
Subject pronouns are typically in nominative form (e.g., She works here.), though independent genitives are also possible (e.g., Hers is better.). In non-finite clauses, however, there is more variety, an example of form-meaning mismatch. In present participial clauses, the nominative, accusative, and dependent genitive are all possible:[2]: 460, 467
In infinitival clauses, accusative case pronouns function as the subject:
Object pronouns are typically in accusative form (e.g., I saw him.) but may also be reflexive (e.g., She saw herself) or independent genitive (e.g., We got ours.).
The pronoun object of a preposition is typically in the accusative form but may also be reflexive (e.g., She sent it to herself) or independent genitive (e.g., I hadn't heard of theirs.). With but, than, and as in a very formal register, nominative is also possible (e.g., You're taller than me/I.)[2]: 461
A pronoun in predicative complement position is typically in the accusative form (e.g., It's me) but may also be reflexive (e.g., She isn't herself today) or independent genitive (e.g., It's theirs.).
Only genitive pronouns may function as determinatives.
The most common form for adjuncts is the reflexive (e.g., I did it myself). Independent genitives and accusatives are also possible (e.g., Only one matters, mine/me.).
Like proper nouns, but unlike common nouns, pronouns usually resist dependents.[2]: 425 They are not always ungrammatical, but they are quite limited in their use:
*the you[b]
*you you want to be
*new them
Personal pronouns are those that participate in the grammatical and semantic systems of person (1st, 2nd, & 3rd person).[2]: 1463 They are called "personal" pronouns for this reason, and not because they refer to persons, though some do. They typically form definite NPs.
The personal pronouns of modern standard English are presented in the table above. They are I, you, she, he, it, we, and they, and their inflected forms.
The second-person you forms are used with both singular and plural reference. In the Southern United States, y'all (from you all) is used as a plural form, and various other phrases such as you guys are used in other places. An archaic set of second-person pronouns used for singular reference is thou, thee, thyself, thy, thine, which are still used in religious services and can be seen in older works, such as Shakespeare's—in such texts, ye and the you set of pronouns are used for plural reference, or with singular reference as a formal V-form.[6] You can also be used as an indefinite pronoun, referring to a person in general (see generic you), compared to the more formal alternative, one (reflexive oneself, possessive one's).
The third-person singular forms are differentiated according to the gender of the referent. For example, she is used to refer to a woman, sometimes a female animal, and sometimes an object to which feminine characteristics are attributed, such as a ship, car or country. A man, and sometimes a male animal, is referred to using he. In other cases it can be used. (See Gender in English.)
The third-person form they is used with both plural and singular referents. Historically, singular they was restricted to quantificational constructions such as Each employee should clean their desk and referential cases where the referent's gender was unknown.[7] However, it is increasingly used when the referent's gender is irrelevant or when the referent presents as neither man nor woman.[8]
The dependent genitive pronouns, such as my, are used as determinatives together with nouns, as in my old man, some of his friends. The independent genitive forms like mine are used as full noun phrases (e.g., mine is bigger than yours; this one is mine). Note also the construction a friend of mine (meaning "someone who is my friend"). See English possessive for more details.
The interrogative pronouns are who, whom, whose, which and what (also with the suffix -ever). They are chiefly used in interrogative clauses for the speech act of asking questions.[2]: 61 What has impersonal gender, while who, whom and whose have personal gender;[2]: 904 they are used to refer to persons. Whom is the accusative form of who (though in most contexts this is replaced by who), while whose is the genitive form.[2]: 464 For more information see who.
All the interrogative pronouns can also be used as relative pronouns, though what is quite limited in its use;[9] see below for more details.
The main relative pronouns in English are who (with its derived forms whom and whose), and which.[10]
The relative pronoun which refers to things rather than persons, as in the shirt, which used to be red, is faded. For persons, who is used (the man who saw me was tall). The oblique case form of who is whom, as in the man whom I saw was tall, although in informal registers who is commonly used in place of whom.
The possessive form of who is whose (for example, the man whose car is missing); however the use of whose is not restricted to persons (one can say an idea whose time has come). This can be used without a head noun, as in This is Jen, a friend of whose you've already met.
The word that is disputed. Traditionally, it is considered a pronoun, but modern approaches disagree. See below.
The word what can be used to form a free relative clause – one that has no antecedent and that serves as a complete noun phrase in itself, as in I like what he likes. The words whatever and whichever can be used similarly, in the role of either pronouns (whatever he likes) or determiners (whatever book he likes). When referring to persons, who(ever) (and whom(ever)) can be used in a similar way (but not as determiners).
A generic pronoun is one with the interpretation of "a person in general". These pronouns cannot have a definite or specific referent, and they "cannot be used as an anaphor to another NP."[2]: 427 The generic pronouns are one (e.g., one can see oneself in the mirror) and you (e.g., In Tokugawa Japan, you couldn't leave the country), with one being more formal than you.[2]: 427
The English reciprocal pronouns are each other and one another. Although they are written with a space, they're best thought of as single words. No consistent distinction in meaning or use can be found between them. Like the reflexive pronouns, their use is limited to contexts where an antecedent precedes them. In the case of the reciprocals, they need to appear in the same clause as the antecedent.[9]
Today, the English determiners are generally seen as a separate category of words, but they were traditionally viewed as adjectives when they came before a noun (e.g., some people, no books, each book) and as pronouns when they were pro-forms (e.g., I'll have some; I had none, each of the books).[2]: 22
As pronouns, what and which have non-personal gender.[2]: 398 This means they cannot be used to refer to persons; what is that cannot mean "who is that". But there are also determiners with the same forms. The determiners are not gendered, so they can refer to persons or non-persons (e.g., what genius said that).
Relative which is usually a pronoun, but it can be a determiner in cases like It may rain, in which case we won't go. What is almost never a relative word, but when it is, it is a pronoun (e.g., I didn't see what you took.)
The demonstrative pronouns this (plural these) and that (plural those) are a sub-type of determiner in English.[2]: 373 Traditionally, they are viewed as pronouns in cases such as these are good; I like that.
The determiners starting with some-, any-, no-, and every- and ending with -one, -body, -thing, -place (e.g., someone, nothing) are often called indefinite pronouns, though others consider them to be compound determiners.[2]: 423
The generic pronoun one and the generic use of you are sometimes called indefinite. These are uncontroversial pronouns.[11] Note, however, that English has three words that share the spelling and pronunciation of one.[2]: 426–427
The word there is a dummy pronoun in some clauses, chiefly existential (There is no god) and presentational constructions (There appeared a cat on the window sill). The dummy subject takes the number (singular or plural) of the logical subject (complement), hence it takes a plural verb if the complement is plural. In informal English, however, the contraction there's is often used for both singular and plural.[12]
There can undergo inversion, Is there a test today? and Never has there been a man such as this. It can also appear without a corresponding logical subject, in short sentences and question tags: There wasn't a discussion, was there?
The word there in such sentences has sometimes been analyzed as an adverb, or as a dummy predicate, rather than as a pronoun.[13] However, its identification as a pronoun is most consistent with its behavior in inverted sentences and question tags as described above.
Because the word there can also be a deictic adverb (meaning "at that place"), a sentence like There is a river could have either of two meanings: "a river exists" (with there as a pronoun), and "a river is in that place" (with there as an adverb). In speech, the adverbial there would be given stress, while the pronoun would not – in fact, the pronoun is often pronounced as a weak form, /ðə(r)/.
These words are sometimes classified as nouns (e.g., Tomorrow should be a nice day), and sometimes as adverbs (I'll see you tomorrow).[14] But they are alternatively classified as pronouns in both of these examples.[2]: 429 In fact, these words have most of the characteristics of pronouns (see above). In particular, they are pro-forms, and they resist most dependents (e.g., *a good today).
Traditional grammars classify that as a relative pronoun.[15] Most modern grammars disagree, calling it a subordinator or a complementizer.[2]: 63
Relative that is normally found only in restrictive relative clauses (unlike which and who, which can be used in both restrictive and unrestrictive clauses). It can refer to either persons or things, and cannot follow a preposition. For example, one can say the song that [or which] I listened to yesterday, but the song to which [not to that] I listened yesterday. Relative that is usually pronounced with a reduced vowel (schwa), and hence differently from the demonstrative that (see Weak and strong forms in English). If that is not the subject of the relative clause (in the traditional view), it can be omitted (the song I listened to yesterday).
There is some confusion about the difference between a pronoun and a pro-form. For example, some sources make claims such as the following:
We can use other as a pronoun. As a pronoun, other has a plural form, others:
But other is just a common noun here. Unlike pronouns, it readily takes a determiner (many others) or a relative clause modifier (others that we know).
Hwā ("who") and hwæt ("what") follow natural gender, not grammatical gender: as in Modern English, hwā is used with people, hwæt with things. However, that distinction only matters in the nominative and accusative cases, as they are identical in other cases:
Hwelċ ("which" or "what kind of") is inflected like an adjective. The same goes for hwæðer, which also means "which" but is only used between two alternatives:
The first- and second-person pronouns are the same for all genders. They also have special dual forms, which are only used for groups of two things, as in "we both" and "you two." The dual forms are common, but the ordinary plural forms can always be used instead when the meaning is clear.
Many of the forms above bear a strong resemblance to the Modern English words they eventually became. For instance, in the genitive case, ēower became "your," ūre became "our," and mīn became "my." However, the plural third-person personal pronouns were all replaced with Old Norse forms during the Middle English period, yielding "they," "them," and "their."
Middle English personal pronouns were mostly developed from those of Old English, with the exception of the third-person plural, a borrowing from Old Norse (the original Old English form clashed with the third person singular and was eventually dropped). Also, the nominative form of the feminine third-person singular was replaced by a form of the demonstrative that developed into sche (modern she), but the alternative heyr remained in some areas for a long time.
As with nouns, there was some inflectional simplification (the distinct Old English dual forms were lost), but pronouns, unlike nouns, retained distinct nominative and accusative forms. Third-person pronouns also retained a distinction between accusative and dative forms, but that was gradually lost: the masculine hine was replaced by him south of the Thames by the early 14th century, and the neuter dative him was ousted by it in most dialects by the 15th.[17]
The following table shows some of the various Middle English pronouns. Many other variations are noted in Middle English sources because of differences in spellings and pronunciations at different times and in different dialects.[18]
In grammar, a noun adjunct, attributive noun, qualifying noun, noun (pre)modifier, or apposite noun is an optional noun that modifies another noun; functioning similarly to an adjective, it is, more specifically, a noun functioning as a pre-modifier in a noun phrase. For example, in the phrase "chicken soup" the noun adjunct "chicken" modifies the noun "soup". It is irrelevant whether the resulting compound noun is spelled in one or two parts. "Field" is a noun adjunct in both "field player" and "fieldhouse".[1]
The term adjectival noun was formerly synonymous with noun adjunct but now usually means nominalized adjective (i.e., an adjective used as a noun), a process that is the converse of the noun adjunct process, e.g. the Irish meaning "Irish people" or the poor meaning "poor people".[citation needed] Japanese adjectival nouns are a different concept.
Noun adjuncts were traditionally mostly singular (e.g. "trouser press") except when there were lexical restrictions (e.g. "arms race"), but there is a recent trend towards more use of plural ones. Many of these can also be or were originally interpreted and spelled as plural possessives (e.g. "chemicals' agency", "writers' conference", "Rangers' hockey game"),[2] but they are now often written without the apostrophe, although decisions on when to do so require editorial judgment.[3] There are morphologic restrictions on the classes of adjunct that can be plural and nonpossessive; irregular plurals are solecistic as nonpossessive adjuncts (for example, "men clothing" or "women magazine" sounds improper to fluent speakers).
Fowler's Modern English Usage states in the section "Possessive Puzzles":
Five years' imprisonment, Three weeks' holiday, etc. Years and weeks may be treated as possessives and given an apostrophe or as adjectival nouns without one. The former is perhaps better, as to conform to what is inevitable in the singular – a year's imprisonment, a fortnight's holiday.
Noun adjuncts can also be strung together in a longer sequence preceding the final noun, with each added noun modifying the noun which follows it, in effect creating a multiple-word noun adjunct which modifies the following noun (e.g. "chicken soup bowl", in which "chicken" modifies "soup" and "chicken soup" modifies "bowl"). There is no theoretical limit to the number of noun adjuncts which can be added before a noun, and very long constructions are occasionally seen, for example "Dawlish pub car park cliff plunge man rescued",[4] in which "pub", "car park", "cliff", and "plunge" are all noun adjuncts. They could each be removed successively (starting at the beginning of the sentence) without changing the grammar of the sentence. This type of construction is not uncommon in headlinese, the condensed grammar used in newspaper headlines.
It is a trait of natural language that there is often more than one way to say something. Any logically valid option will usually find some currency in natural usage. Thus "erythrocyte maturation" and "erythrocytic maturation" can both be heard, the first using a noun adjunct and the second using an adjectival inflection. In some cases one of the equivalent forms has greater idiomaticity; thus "cell cycle" is more commonly used than "cellular cycle". In some cases, each form tends to adhere to a certain sense; thus "face mask" is the normal term in hockey, and "facial mask" is heard more often in spa treatments. Although "spine cord" is not an idiomatic alternative to "spinal cord", in other cases, the options are arbitrarily interchangeable with negligible idiomatic difference; thus "spine injury" and "spinal injury" coexist and are equivalent from any practical viewpoint, as are "meniscus transplant" and "meniscal transplant". A special case in medical usage is "visual examination" versus "vision examination": the first typically means "an examination made visually", whereas the latter means "an examination of the patient's vision".
"Regulatory impact analysis of the law on business" is probably illogical or at least incomprehensible to all who are not familiar with the term "regulatory impact analysis". Such people understand the preposition "on" as belonging to the expression "law on business" (to which it grammatically belongs) or parse it as an incorrect preposition with "analysis" and do not recognize it as a feeble and grammatically incorrect attempt to refer back to the word "impact". Since the phrase "regulatory impact analysis" is standard in usage, changing it to "analysis of (the) regulatory impact" would look strange to experts even though putting the preposition "on" after it would not cause any problems: "analysis of the regulatory impact of the law on business". A possible solution that does not annoy experts or confuse non-experts is "regulatory impact analysis of the law's effects on business".
The English language is restrictive in its use of the postpositive position for adjectival units (words or phrases), making English use of postpositive adjectives—although not rare—much less common than use of the attributive/prepositive position. This restrictive tendency is even stronger regarding noun adjuncts; examples of postpositive noun adjuncts are rare in English, except in certain established uses such as names of lakes or operations, for example Lake Ontario and Operation Desert Storm. Relatedly, in English when an institution is named in honor of a person, the person's name is idiomatically in prepositive position (for example, the NICHD is the Eunice Kennedy Shriver National Institute of Child Health and Human Development), whereas various other languages tend to put it in postpositive position (sometimes in quotation marks); their pattern would translate overliterally as National Institute of Child Health and Human Development "Eunice Kennedy Shriver".
In computer science, an abstract semantic graph (ASG) or term graph is a form of abstract syntax in which an expression of a formal or programming language is represented by a graph whose vertices are the expression's subterms. An ASG is at a higher level of abstraction than an abstract syntax tree (or AST), which is used to express the syntactic structure of an expression or program.
ASGs are more complex and concise than ASTs because they may contain shared subterms (also known as "common subexpressions").[1] Abstract semantic graphs are often used as an intermediate representation by compilers to store the results of performing common subexpression elimination upon abstract syntax trees. ASTs are trees and are thus incapable of representing shared terms. ASGs are usually directed acyclic graphs (DAGs), although in some applications graphs containing cycles[clarification needed] may be permitted. For example, a graph containing a cycle might be used to represent the recursive expressions that are commonly used in functional programming languages as non-looping iteration constructs. The mutability of these types of graphs is studied in the field of graph rewriting.
The nomenclature term graph is associated with the field of term graph rewriting,[2] which involves the transformation and processing of expressions by the specification of rewriting rules,[3] whereas abstract semantic graph is used when discussing linguistics, programming languages, type systems and compilation.
Abstract syntax trees cannot share subexpression nodes because a node in a proper tree cannot have more than one parent. Although this conceptual simplicity is appealing, it may come at the cost of redundant representation and, in turn, of inefficiently duplicating the computation of identical terms. For this reason ASGs are often used as an intermediate language at a compilation stage subsequent to abstract syntax tree construction via parsing.
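The sharing of common subexpressions can be sketched with hash-consing: a table that interns each (operator, children) combination once, so repeated subterms collapse into a single shared node. This is a minimal illustrative sketch, not any particular compiler's implementation; the names `Node` and `intern` are assumptions.

```python
# Minimal hash-consing sketch: building a DAG (ASG) instead of a tree (AST)
# by interning each (op, children) combination exactly once.

class Node:
    def __init__(self, op, *children):
        self.op = op
        self.children = children

_table = {}

def intern(op, *children):
    """Return the unique node for (op, children), creating it on first use."""
    key = (op, tuple(id(c) for c in children))
    if key not in _table:
        _table[key] = Node(op, *children)
    return _table[key]

# Build (a + b) * (a + b): the subterm a + b is stored only once.
a = intern("a")
b = intern("b")
s1 = intern("+", a, b)
s2 = intern("+", a, b)
prod = intern("*", s1, s2)

assert s1 is s2                                   # shared subterm
assert prod.children[0] is prod.children[1]       # DAG, not a tree
```

An AST built naively would allocate two separate `+` nodes; here the interning table guarantees a single node, which is exactly what makes common subexpression elimination representable.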
An abstract semantic graph is typically constructed from an abstract syntax tree by a process of enrichment and abstraction. The enrichment can for example be the addition of back-pointers, edges from an identifier node (where a variable is being used) to a node representing the declaration of that variable. The abstraction can entail the removal of details which are relevant only in parsing, not for semantics.
For example, consider the case ofcode refactoring. To represent the implementation of a function that takes an input argument, the received parameter is conventionally given an arbitrary, distinctnamein the source code so that it can be referenced. The abstract representation of this conceptual entity, a "function argument" instance, will likely be mentioned in the function signature, and also one or more times within the implementation code body. Since the function as a whole is the parent of both its header or "signature" information as well as its implementation body, an AST would not be able to use the same node to co-identify the multiple uses or appearances of the argument entity. This is solved by the DAG nature of an ASG. A key advantage of having a single, distinct node identity for any given code element is that each element's properties are, by definition, uniquely stored. This simplifies refactoring operations, because there is exactly one existential nexus for any given property instantiation. If the developer decides to change a property value such as the "name" of any code element (the "function argument" in this example), the ASG inherently exposes that value in exactly one place, and it follows that any such property changes are implicitly, trivially, and immediately propagated globally. | https://en.wikipedia.org/wiki/Abstract_semantic_graph |
CmapTools is concept mapping software developed by the Florida Institute for Human and Machine Cognition (IHMC).[1] It allows users to easily create graphical nodes representing concepts, and to connect nodes using lines and linking words to form a network of interrelated propositions that represent knowledge of a topic.[2] The software has been used in classrooms and research labs,[3][4] and in corporate training.[5][6]
The various uses of concept maps are supported by CmapTools.
Multiple links can be added to each concept to form a dynamic map that opens web pages or local documents. Each added link is assigned a category chosen by the user from a provided list of types (such as URLs, documents, and images) to help with organization, and the links are stacked by category under the chosen concept (as shown in the adjacent image).
Other concept maps can also be linked to concepts, letting the user construct a powerful navigation tool.
Multiple connected maps can form a knowledge base, for example of a company's structure, a repository of standards, personal contacts, and other important general information.
| https://en.wikipedia.org/wiki/CmapTools
In computer science, a knowledge base (KB) is a set of sentences, each sentence given in a knowledge representation language, with interfaces to tell new sentences and to ask questions about what is known, where either of these interfaces might use inference.[1] It is a technology used to store complex structured data used by a computer system. The initial use of the term was in connection with expert systems, which were the first knowledge-based systems.
The original use of the term knowledge base was to describe one of the two sub-systems of an expert system. A knowledge-based system consists of a knowledge-base representing facts about the world and ways of reasoning about those facts to deduce new facts or highlight inconsistencies.[2]
The term "knowledge-base" was coined to distinguish this form of knowledge store from the more common and widely used term database. During the 1970s, virtually all large management information systems stored their data in some type of hierarchical or relational database. At this point in the history of information technology, the distinction between a database and a knowledge-base was clear and unambiguous.
A database had the following properties:
The first knowledge-based systems had data needs that were the opposite of these database requirements. An expert system requires structured data: not just tables with numbers and strings, but pointers to other objects that in turn have additional pointers. The ideal representation for a knowledge base is an object model (often called an ontology in artificial intelligence literature) with classes, subclasses and instances.
Early expert systems also had little need for multiple users or the complexity that comes with requiring transactional properties on data. The data in early expert systems was used to arrive at a specific answer, such as a medical diagnosis, the design of a molecule, or a response to an emergency.[2] Once the solution to the problem was known, there was not a critical demand to store large amounts of data back to a permanent memory store. A more precise statement would be that given the technologies available, researchers compromised and did without these capabilities because they realized they were beyond what could be expected, and they could develop useful solutions to non-trivial problems without them. Even from the beginning, the more astute researchers realized the potential benefits of being able to store, analyze, and reuse knowledge. For example, see the discussion of Corporate Memory in the earliest work of the Knowledge-Based Software Assistant program by Cordell Green et al.[3]
The volume requirements were also different for a knowledge-base compared to a conventional database. The knowledge-base needed to know facts about the world. For example, to represent the statement that "All humans are mortal", a database typically could not represent this general knowledge but instead would need to store thousands of records representing information about specific humans. Representing that all humans are mortal, and being able to reason that any given human is mortal, is the work of a knowledge-base. Representing that George, Mary, Sam, Jenna, Mike, ... and hundreds of thousands of other customers are all humans with specific ages, sex, address, etc. is the work for a database.[4][5]
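The contrast can be made concrete with a toy sketch (illustrative only, not any specific KB system): the knowledge base stores the general rule "humans are mortal" once and derives instances by inference, while a database would have to store a mortality fact for every individual.

```python
# Toy knowledge base: explicit facts plus one general rule,
# queried with naive forward chaining over the rule set.

facts = {("human", "George"), ("human", "Mary")}
rules = [("human", "mortal")]   # if X is human, then X is mortal

def ask(predicate, subject):
    """True if the fact is stored or derivable via a rule."""
    if (predicate, subject) in facts:
        return True
    return any((pre, subject) in facts
               for pre, post in rules if post == predicate)

assert ask("mortal", "George")   # derived, never stored explicitly
assert not ask("mortal", "Fido") # no supporting fact
```

The rule is stored once no matter how many humans exist; a relational table would instead grow by one row per individual.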
As expert systems moved from being prototypes to systems deployed in corporate environments, the requirements for their data storage rapidly started to overlap with the standard database requirements for multiple, distributed users with support for transactions. Initially, the demand could be seen in two different but competitive markets. From the AI and object-oriented communities, object-oriented databases such as Versant emerged. These were systems designed from the ground up to support object-oriented capabilities but also to provide standard database services. On the other hand, the large database vendors such as Oracle added capabilities to their products that provided support for knowledge-base requirements such as class-subclass relations and rules.
Like any informational hub, the knowledge base can store various content types which serve different audiences and have contrasting purposes. So, to better understand knowledge base types, let's discuss them from two different angles: purpose and content.
Internal vs. external knowledge bases: here, we can divide our informational hubs into two main purposes – external and internal.
The next evolution for the term "knowledge-base" was the Internet. With the rise of the Internet, documents, hypertext, and multimedia support were now critical for any corporate database. It was no longer enough to support large tables of data or relatively small objects that lived primarily in computer memory. Support for corporate web sites required persistence and transactions for documents. This created a whole new discipline known as Web Content Management.
The other driver for document support was the rise of knowledge management vendors such as HCL Notes (formerly Lotus Notes). Knowledge management actually predated the Internet, but with the Internet there was great synergy between the two areas. Knowledge management products adopted the term "knowledge-base" to describe their repositories, but the meaning was significantly different. In the case of previous knowledge-based systems, the knowledge was primarily for the use of an automated system, to reason about and draw conclusions about the world. With knowledge management products, the knowledge was primarily meant for humans, for example to serve as a repository of manuals, procedures, policies, best practices, reusable designs and code, etc. In both cases the distinctions between the uses and kinds of systems were ill-defined. As the technology scaled up, it was rare to find a system that could really be cleanly classified as knowledge-based in the sense of an expert system that performed automated reasoning, or knowledge-based in the sense of knowledge management that provided knowledge in the form of documents and media that could be leveraged by humans.[7] | https://en.wikipedia.org/wiki/Knowledge_base
Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications such as social network analysis, cartography, linguistics, and bioinformatics.[1]
A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph.[2] In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, usability, fabrication cost, and aesthetics.[3] The problem gets worse if the graph changes over time by adding and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental map.[4]
Graphs are frequently drawn as node–link diagrams in which the vertices are represented as disks, boxes, or textual labels and the edges are represented as line segments, polylines, or curves in the Euclidean plane.[3] Node–link diagrams can be traced back to the 14th–16th century works of Pseudo-Lull, which were published under the name of Ramon Llull, a 13th-century polymath. Pseudo-Lull drew diagrams of this type for complete graphs in order to analyze all pairwise combinations among sets of metaphysical concepts.[5]
In the case of directed graphs, arrowheads form a commonly used graphical convention to show their orientation;[2] however, user studies have shown that other conventions such as tapering provide this information more effectively.[6] Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex, making arrowheads unnecessary.[7]
Alternative conventions to node–link diagrams include adjacency representations such as circle packings, in which vertices are represented by disjoint regions in the plane and edges are represented by adjacencies between regions; intersection representations, in which vertices are represented by non-disjoint geometric objects and edges are represented by their intersections; visibility representations, in which vertices are represented by regions in the plane and edges are represented by regions that have an unobstructed line of sight to each other; confluent drawings, in which edges are represented as smooth curves within mathematical train tracks; fabrics, in which nodes are represented as horizontal lines and edges as vertical lines;[8] and visualizations of the adjacency matrix of the graph.
Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics and usability.[9]In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures.
There are many different graph layout strategies:
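One widely used family is force-directed layout: edges pull connected vertices together while all vertex pairs push apart, and the drawing settles near an equilibrium. This is a minimal pure-Python sketch with illustrative constants and node names, not a production algorithm.

```python
import math
import random

def force_layout(nodes, edges, iters=200, k=1.0, step=0.01):
    """Naive force-directed layout: repulsion between all pairs,
    spring-like attraction along edges, damped position updates."""
    random.seed(0)
    pos = {v: [random.random(), random.random()] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                       # repulsion: all pairs
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += dx / d * f
                disp[u][1] += dy / d * f
        for u, v in edges:                    # attraction: along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= dx / d * f
            disp[u][1] -= dy / d * f
            disp[v][0] += dx / d * f
            disp[v][1] += dy / d * f
        for v in nodes:                       # damped update
            pos[v][0] += step * disp[v][0]
            pos[v][1] += step * disp[v][1]
    return pos

pos = force_layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "a")])
```

Connected vertices typically settle closer together than unconnected ones; real implementations add cooling schedules and spatial indexing to scale beyond a few hundred vertices.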
Graphs and graph drawings arising in other areas of application include
In addition, the placement and routing steps of electronic design automation (EDA) are similar in many ways to graph drawing, as is the problem of greedy embedding in distributed computing, and the graph drawing literature includes several results borrowed from the EDA literature. However, these problems also differ in several important ways: for instance, in EDA, area minimization and signal length are more important than aesthetics, and the routing problem in EDA may have more than two terminals per net while the analogous problem in graph drawing generally only involves pairs of vertices for each edge.
Software, systems, and providers of systems for drawing graphs include: | https://en.wikipedia.org/wiki/Network_diagram |
The repertory grid is an interviewing technique which uses nonparametric factor analysis to determine an idiographic measure of personality.[1][2] It was devised by George Kelly in around 1955 and is based on his personal construct theory of personality.[3]
The repertory grid is a technique for identifying the ways that a person construes (interprets or gives meaning to) his or her experience.[4] It provides information from which inferences about personality can be made, but it is not a personality test in the conventional sense. It is underpinned by the personal construct theory developed by George Kelly, first published in 1955.[3]
A grid consists of four parts:
Constructs are regarded as personal to the client, who is psychologically similar to other people depending on the extent to which they would tend to use similar constructs, and similar ratings, in relating to a particular set of elements.
The client is asked to consider the elements three at a time, and to identify a way in which two of the elements might be seen as alike but distinct from, or contrasted to, the third. For example, in considering a set of people as part of a topic dealing with personal relationships, a client might say that the element "my father" and the element "my boss" are similar because they are both fairly tense individuals, whereas the element "my wife" is different because she is "relaxed". And so we identify one construct that the individual uses when thinking about people: whether they are "tense as distinct from relaxed". In practice, good grid interview technique would delve a little deeper and identify some more behaviorally explicit description of "tense versus relaxed". All the elements are rated on the construct, further triads of elements are compared and further constructs elicited, and the interview would continue until no further constructs are obtained.
After careful interviewing to identify what the individual means by the words initially proposed, a 5-point rating system could be used to characterize the way in which a group of fellow-employees are viewed on the construct "keen and committed versus energies elsewhere", a 1 indicating that the left pole of the construct applies ("keen and committed") and a 5 indicating that the right pole applies ("energies elsewhere"). On being asked to rate all of the elements, our interviewee might reply that Tom merits a 2 (fairly keen and committed), Mary a 1 (very keen and committed), and Peter a 5 (his energies are very much outside the place of employment). The remaining elements (another five people, for example) are then rated on this construct.
Typically (and depending on the topic) people have a limited number of genuinely different constructs for any one topic: 6 to 16 are common when they talk about their job or their occupation, for example. The richness of people's meaning structures comes from the many different ways in which a limited number of constructs can be applied to individual elements. A person may indicate that Tom is fairly keen, very experienced, lacks social skills, is a good technical supervisor, can be trusted to follow complex instructions accurately, has no sense of humour, will always return a favour but only sometimes help his co-workers, while Mary is very keen, fairly experienced, has good social and technical supervisory skills, needs complex instructions explained to her, appreciates a joke, always returns favours, and is very helpful to her co-workers: these are two very different and complex pictures, using just 8 constructs about a person's co-workers.
Important information can be obtained by including self-elements such as "Myself as I am now"; "Myself as I would like to be" among other elements, where the topic permits.
A single grid can be analysed for both content (eyeball inspection) and structure (cluster analysis, principal component analysis, and a variety of structural indices relating to the complexity and range of the ratings being the chief techniques used). Sets of grids are dealt with using one or other of a variety of content analysis techniques. A range of associated techniques can be used to provide precise, operationally defined expressions of an interviewee's constructs, or a detailed expression of the interviewee's personal values, and all of these techniques are used in a collaborative way. The repertory grid is emphatically not a standardized "psychological test"; it is an exercise in the mutual negotiation of a person's meanings.
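A completed grid is just a ratings matrix, which is what makes the structural analyses above possible. The following toy example (illustrative data, and a naive city-block distance standing in for the cluster or principal-component analyses a real study would use) shows how similarity between elements falls out of the ratings.

```python
# A tiny repertory grid: rows are constructs, columns are elements,
# rated on a 1-5 scale from the left pole to the right pole.

elements = ["Tom", "Mary", "Peter"]
grid = [
    [2, 1, 5],   # keen and committed (1) vs energies elsewhere (5)
    [1, 2, 4],   # very experienced (1) vs inexperienced (5)
]

def element_distance(i, j):
    """City-block distance between two elements across all constructs."""
    return sum(abs(row[i] - row[j]) for row in grid)

# Tom and Mary are construed more alike than Tom and Peter.
assert element_distance(0, 1) < element_distance(0, 2)
```

Real grid software replaces this distance with cluster analysis or principal component analysis, but the underlying object is the same matrix of ratings.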
The repertory grid has found favour among both academics and practitioners in a great variety of fields because it provides a way of describing people's construct systems (loosely, understanding people's perceptions) without prejudging the terms of reference—a kind of personalized grounded theory.[5][6][7]
Unlike a conventional rating-scale questionnaire, it is not the investigator but the interviewee who provides the constructs on which a topic is rated. Market researchers, trainers, teachers, guidance counsellors, new product developers, sports scientists, and knowledge capture specialists are among the users who find the technique (originally developed for use in clinical psychology) helpful.[8]
In the book Personal Construct Methodology, researchers Brian R. Gaines and Mildred L.G. Shaw noted that they "have also found concept mapping and semantic network tools to be complementary to repertory grid tools and generally use both in most studies" but that they "see less use of network representations in PCP [personal construct psychology] studies than is appropriate".[9] They encouraged practitioners to use semantic network techniques in addition to the repertory grid.[10] | https://en.wikipedia.org/wiki/Repertory_grid
A semantic lexicon is a digital dictionary of words labeled with semantic classes so associations can be drawn between words that have not previously been encountered.[1] Semantic lexicons are built upon semantic networks, which represent the semantic relations between words. The difference between a semantic lexicon and a semantic network is that a semantic lexicon has definitions for each word, or a "gloss".[2]
Semantic lexicons are made up of lexical entries. These entries are not orthographic, but semantic, eliminating issues of homonymy and polysemy. These lexical entries are interconnected with semantic relations, such as hyperonymy, hyponymy, meronymy, or troponymy. Synonymous entries are grouped together in what the Princeton WordNet calls "synsets".[2] Most semantic lexicons are made up of four different "sub-nets":[2] nouns, verbs, adjectives, and adverbs, though some researchers have taken steps to add an "artificial node" interconnecting the sub-nets.[3]
Nouns are ordered into a taxonomy, structured into a hierarchy where the broadest and most encompassing noun is located at the top, such as "thing", with the nouns becoming more and more specific the further they are from the top. The very top noun in a semantic lexicon is called a unique beginner.[4] The most specific nouns (those that do not have any subordinates) are terminal nodes.[3]
Semantic lexicons also distinguish between types, where a type of something has characteristics of that thing, such as a Rhodesian Ridgeback being a type of dog, and instances, where something is an example of said thing, such as Dave Grohl being an instance of a musician. Instances are always terminal nodes because they are solitary and don't have other words or ontological categories belonging to them.[2]
Semantic lexicons also address meronymy,[5] which is a "part-to-whole" relationship, such as keys being part of a laptop. The necessary attributes that define a specific entry are also necessarily present in that entry's hyponym. So, if a computer has keys, and a laptop is a type of computer, then a laptop must have keys. However, there are many instances where this distinction can become vague. A good example of this is the item chair. Most would define a chair as having legs and a seat (as in the part one sits on). However, there are some artistic or modern chairs that do not have legs at all. Beanbags also do not have legs, but few would argue that they aren't chairs. Questions like this are the core questions that drive research and work in the fields of taxonomy and ontology.
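The computer/laptop/keys reasoning above can be sketched as a toy lexicon fragment (illustrative data structures, not WordNet's actual API): hypernym links give the noun taxonomy, and meronyms are inherited down the chain.

```python
# Toy semantic-lexicon fragment: a hypernym chain plus meronym
# inheritance, so a laptop "has keys" because a computer does.

hypernym = {"laptop": "computer", "computer": "device", "device": "thing"}
meronyms = {"computer": {"keys"}}

def has_part(noun, part):
    """Walk up the hypernym chain, checking inherited meronyms."""
    while noun is not None:
        if part in meronyms.get(noun, set()):
            return True
        noun = hypernym.get(noun)
    return False

assert has_part("laptop", "keys")      # inherited from "computer"
assert not has_part("device", "keys")  # above "computer" in the chain
```

The chair/beanbag cases in the text are exactly the inputs for which this neat inheritance breaks down, which is why real taxonomies need exceptions or softer part-whole relations.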
Verb synsets are arranged much like their noun counterparts: the more general and encompassing verbs are near the top of the hierarchy while troponyms (verbs that describe a more specific way of doing something) are grouped beneath. Verb specificity moves along a vector, with the verbs becoming more and more specific in reference to a certain quality.[2] For example, the set "walk / run / sprint" becomes more specific in terms of speed, and "dislike / hate / abhor" becomes more specific in terms of the intensity of the emotion.
The ontological groupings and separations of verbs are far more arguable than their noun counterparts. It is widely accepted that a dog is a type of animal and that a stool is a type of chair, but it can be argued that abhor is on the same emotional plane as hate (that they are synonyms and not super/subordinates). It can also be argued that love and adore are synonyms, or that one is more specific than the other. Thus, the relations between verbs are not as agreed-upon as those of nouns.
Another attribute of verb synset relations is that they are also ordered into verb pairs. In these pairs, one verb necessarily entails the other, in the way that massacre entails kill, and know entails believe.[2] These verb pairs can be troponyms and their superordinates, as in the first example, or they can be in completely different ontological categories, as in the second example.
Adjective synset relations are very similar to verb synset relations. They are not quite as neatly hierarchical as the noun synset relations, and they have fewer tiers and more terminal nodes. However, there are generally fewer terminal nodes per ontological category in adjective synset relations than in verb synset relations. Adjectives in semantic lexicons are organized in word pairs as well, with the difference being that their word pairs are antonyms instead of entailments. More generic polar adjectives such as hot and cold, or happy and sad, are paired. Then other adjectives that are semantically similar are linked to each of these words. Hot is linked to warm, heated, sizzling, and sweltering, while cold is linked to cool, chilly, freezing, and nippy. These semantically similar adjectives are considered indirect antonyms[2] to the opposite polar adjective (i.e. nippy is an indirect antonym to hot). Adjectives that are derived from a verb or a noun are also directly linked to said verb or noun across sub-nets. For example, enjoyable is linked to the semantically similar adjectives agreeable and pleasant, as well as to its origin verb, enjoy.
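The hot/cold clusters above can be modeled directly: each polar adjective carries a set of semantically similar adjectives, and an indirect antonym is anything similar to the direct antonym of the other pole. A minimal illustrative sketch (toy data, not WordNet's representation):

```python
# Polar adjective pairs with clusters of semantically similar adjectives.
# An indirect antonym of `adj` is any adjective similar to adj's direct antonym.

similar_to = {
    "hot": {"warm", "heated", "sizzling", "sweltering"},
    "cold": {"cool", "chilly", "freezing", "nippy"},
}
antonym = {"hot": "cold", "cold": "hot"}

def indirect_antonyms(adj):
    """Adjectives clustered around the direct antonym of `adj`."""
    return similar_to.get(antonym.get(adj), set())

assert "nippy" in indirect_antonyms("hot")
assert "sizzling" in indirect_antonyms("cold")
```

Only the two polar adjectives are paired directly; every cluster member reaches the opposite pole through that single antonym link, matching the structure described in the paragraph above.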
There are very few adverbs accounted for in semantic lexicons. This is because most adverbs are taken directly from their adjective counterparts, in both meaning and form, and changed only morphologically (i.e. happily is derived from happy, and luckily is derived from lucky, which is derived from luck). The only adverbs that are accounted for specifically are ones without these connections, such as really, mostly, and hardly.[2]
The effects of the Princeton WordNet project extend far past English, though most research in the field revolves around the English language. Creating a semantic lexicon for other languages has proved to be very useful for natural language processing applications. One of the main focuses of research in semantic lexicons is linking lexicons of different languages to aid in machine translation. The most common approach is to attempt to create a shared ontology that serves as a "middleman" of sorts between semantic lexicons of two different languages.[6] This is an extremely challenging and as-of-yet unsolved issue in the machine translation field. One issue arises from the fact that no two languages are word-for-word translations of each other; that is, every language has some sort of structural or syntactic difference from every other. In addition, languages often have words that don't translate easily into other languages, and certainly not with an exact word-to-word match. Proposals have been made to create a set framework for wordnets. Research has shown that every known human language has some sort of concept resembling synonymy, hyponymy, meronymy, and antonymy. However, every idea so far proposed has been met with criticism for using a pattern that works best for English and less well for other languages.[6]
Another obstacle in the field is that no solid guidelines exist for semantic lexicon framework and contents. Each lexicon project in each different language has had a slightly (or not so slightly) different approach to its wordnet. There is not even an agreed-upon definition of what a "word" is. Orthographically, words are defined as strings of letters with spaces on either side, but semantically this becomes a very debated subject. For example, it is not difficult to define dog or rod as words, but what about guard dog or lightning rod? The latter two examples would be considered orthographically separate words, though semantically they make up one concept: one is a type of dog and one is a type of rod. In addition to these confusions, wordnets are also idiosyncratic, in that they do not consistently label items. They are redundant, in that they often have several words assigned to each meaning (synsets). They are also open-ended, in that they often focus on and extend into terminology and domain-specific vocabulary.[6] | https://en.wikipedia.org/wiki/Semantic_lexicon
A semantic neural network (SNN) is based on John von Neumann's neural network [von Neumann, 1966] and Nikolai Amosov's M-Network.[1][2] The von Neumann network places limitations on link topology, whereas the SNN accepts cases without these limitations. The von Neumann network can process only logical values, whereas the SNN can also process fuzzy values. All neurons in the von Neumann network are synchronized by clock ticks; to allow the use of self-synchronizing circuit techniques, the SNN permits neurons to be self-running or synchronized.
In contrast to the von Neumann network, there are no limitations on the topology of neurons in semantic networks. This makes relative addressing of neurons, as done by von Neumann, impossible; in this case absolute addressing should be used. Every neuron should have a unique identifier that provides direct access to another neuron. Of course, neurons interacting via axons and dendrites should have each other's identifiers. Absolute addressing can be modeled by using neuron specificity, as realized in biological neural networks.
The initial description of semantic networks [Dudar Z.V., Shuklin D.E., 2000] contains no account of self-reflectiveness and self-modification abilities, but in [Shuklin D.E. 2004] a conclusion was drawn about the necessity of introspection and self-modification abilities in the system. To support these abilities, a concept of pointers to neurons is provided. Pointers represent virtual connections between neurons. In this model, the bodies and signals transferring through the neuron connections represent a physical body, and the virtual connections between neurons represent an astral body. It is proposed to create models of artificial neural networks on the basis of a virtual machine supporting the possibility of paranormal effects.
SNN is generally used for natural language processing. | https://en.wikipedia.org/wiki/Semantic_neural_network |
SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems; it evolved from the Senseval word sense evaluation series. The evaluations are intended to explore the nature of meaning in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify word senses computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., semantic role labeling), relations between sentences (e.g., coreference), and the nature of what we are saying (semantic relations and sentiment analysis).
The purpose of the SemEval and Senseval exercises is to evaluate semantic analysis systems. "Semantic analysis" refers to a formal analysis of meaning, and "computational" refers to approaches that in principle support effective implementation.[1]
The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation (WSD), each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the fourth workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.[2]
Triggered by the conception of the *SEM conference, the SemEval community decided to hold the evaluation workshops yearly in association with the *SEM conference. It was also decided that not every evaluation task will be run every year; for example, none of the WSD tasks were included in the SemEval-2012 workshop.
From the earliest days, assessing the quality of word sense disambiguation algorithms had been primarily a matter of intrinsic evaluation, and "almost no attempts had been made to evaluate embedded WSD components".[3] Only relatively recently (2006) had extrinsic evaluations begun to provide some evidence for the value of WSD in end-user applications.[4] Until 1990 or so, discussions of the sense disambiguation task focused mainly on illustrative examples rather than comprehensive evaluation. The early 1990s saw the beginnings of more systematic and rigorous intrinsic evaluations, including more formal experimentation on small sets of ambiguous words.[5]
In April 1997, Martha Palmer and Marc Light organized a workshop entitled Tagging with Lexical Semantics: Why, What, and How? in conjunction with the Conference on Applied Natural Language Processing.[6] At the time, there was a clear recognition that manually annotated corpora had revolutionized other areas of NLP, such as part-of-speech tagging and parsing, and that corpus-driven approaches had the potential to revolutionize automatic semantic analysis as well.[7] Kilgarriff recalled that there was "a high degree of consensus that the field needed evaluation", and several practical proposals by Resnik and Yarowsky kicked off a discussion that led to the creation of the Senseval evaluation exercises.[8][9][10]
After SemEval-2010, many participants felt that the three-year cycle was a long wait; many other shared tasks, such as the Conference on Natural Language Learning (CoNLL) and Recognizing Textual Entailment (RTE), run annually. For this reason, the SemEval coordinators gave task organizers the opportunity to choose between a two-year and a three-year cycle.[11] Although the votes within the SemEval community favored a three-year cycle, organizers and coordinators settled on splitting the SemEval tasks into two evaluation workshops. This was triggered by the introduction of the new *SEM conference. The SemEval organizers thought it would be appropriate to associate their event with the *SEM conference and to collocate the SemEval workshop with it. The organizers received very positive responses (from the task coordinators/organizers and participants) about the association with the yearly *SEM, and eight tasks were willing to switch to 2012. Thus were born SemEval-2012 and SemEval-2013. The current plan is to switch to a yearly SemEval schedule, associated with the *SEM conference, though not every task needs to run every year.[12]
The framework of the SemEval/Senseval evaluation workshops emulates the Message Understanding Conferences (MUCs) and other evaluation workshops run by ARPA (Advanced Research Projects Agency, renamed the Defense Advanced Research Projects Agency, DARPA).
Stages of SemEval/Senseval evaluation workshops[14]
Senseval-1 and Senseval-2 focused on evaluating WSD systems for major languages for which corpora and computerized dictionaries were available. Senseval-3 looked beyond the lexemes and started to evaluate systems addressing wider areas of semantics, such as semantic roles (technically known as theta roles in formal semantics) and logic form transformation (where the semantics of phrases, clauses or sentences are commonly represented in first-order logic forms), and it explored the performance of semantic analysis in machine translation.
As the types of different computational semantic systems grew beyond the coverage of WSD, Senseval evolved into SemEval, where more aspects of computational semantic systems were evaluated.
The SemEval exercises provide a mechanism for examining issues in the semantic analysis of texts. The topics of interest fall short of the logical rigor found in formal computational semantics; the aim is instead to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues as they take on some concrete form.
The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses, but rather are characterized by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources.
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, the contribution of the types of semantic analysis constitutes the most outstanding research issue.
For example, in the word sense induction and disambiguation task, there are three separate phases:
The unsupervised evaluation for WSI considered two types of evaluation: V-Measure (Rosenberg and Hirschberg, 2007) and paired F-Score (Artiles et al., 2009). This evaluation follows the supervised evaluation of the SemEval-2007 WSI task (Agirre and Soroa, 2007).
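As an illustrative sketch of the paired F-Score mentioned above: a WSI clustering and the gold standard are each viewed as the set of instance pairs placed in the same group, and precision and recall are computed over those pair sets. The labels below are invented toy data, not SemEval annotations.

```python
# Toy sketch of the paired F-Score used for unsupervised WSI evaluation.
# Labels are invented illustrative data, not SemEval gold annotations.
from itertools import combinations

def same_group_pairs(labels):
    """All pairs of instance indices that share a label."""
    return {(i, j) for i, j in combinations(range(len(labels)), 2)
            if labels[i] == labels[j]}

def paired_f_score(gold, induced):
    g, c = same_group_pairs(gold), same_group_pairs(induced)
    precision = len(g & c) / len(c)
    recall = len(g & c) / len(g)
    return 2 * precision * recall / (precision + recall)

gold = [0, 0, 1, 1, 1, 2]     # gold sense per instance
induced = [0, 0, 1, 2, 1, 1]  # clusters from a hypothetical WSI system
print(paired_f_score(gold, induced))  # 0.5
```

V-Measure, the entropy-based companion metric, is available in scikit-learn as `sklearn.metrics.v_measure_score` and takes the same two label lists.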
The tables below reflect the workshop growth from Senseval to SemEval and give an overview of which areas of computational semantics were evaluated throughout the Senseval/SemEval workshops.
The Multilingual WSD task was introduced for the SemEval-2013 workshop.[17] The task is aimed at evaluating Word Sense Disambiguation systems in a multilingual scenario. Unlike similar tasks, such as cross-lingual WSD or the multilingual lexical substitution task, where no fixed sense inventory is specified, Multilingual WSD uses BabelNet as its sense inventory. Prior to the development of BabelNet, a bilingual lexical-sample WSD evaluation task was carried out in SemEval-2007 on Chinese-English bitexts.[18]
The Cross-lingual WSD task was introduced in the SemEval-2007 evaluation workshop and re-proposed in the SemEval-2013 workshop.[19] To facilitate the integration of WSD systems into other Natural Language Processing (NLP) applications, such as machine translation and multilingual information retrieval, the cross-lingual WSD evaluation task introduced a language-independent and knowledge-lean approach to WSD. The task is an unsupervised Word Sense Disambiguation task for English nouns by means of parallel corpora. It follows the lexical-sample variant of the classic WSD task, restricted to only 20 polysemous nouns.
It is worth noting that SemEval-2014 had only two tasks that were multilingual/cross-lingual: (i) the L2 Writing Assistant task, a cross-lingual WSD task that includes English, Spanish, German, French and Dutch, and (ii) the Multilingual Semantic Textual Similarity task, which evaluates systems on English and Spanish texts.
The major tasks in semantic evaluation include the following areas of natural language processing. This list is expected to grow as the field progresses.[20]
The following table shows the areas of study that were involved in Senseval-1 through SemEval-2014 (S refers to Senseval and SE refers to SemEval; e.g., S1 refers to Senseval-1 and SE07 refers to SemEval-2007):
SemEval tasks have created many types of semantic annotations, each type with various schemas. In SemEval-2015 the organizers decided to group tasks together into several tracks, defined by the type of semantic annotation the tasks hope to achieve.[21] The types of semantic annotation involved in the SemEval workshops are as follows:
A task's allocation to a track is flexible; a task might develop into its own track. For example, the taxonomy evaluation task in SemEval-2015 was under the Learning Semantic Relations track, while in SemEval-2016 there is a dedicated Semantic Taxonomy track with a new Semantic Taxonomy Enrichment task.[22][23] | https://en.wikipedia.org/wiki/SemEval
Semantic analysis (computational), within applied linguistics and computer science, is a composite of semantic analysis and computational components. Semantic analysis refers to a formal analysis of meaning,[1] and computational refers to approaches that in principle support effective implementation in digital computers.[2]
| https://en.wikipedia.org/wiki/Semantic_analysis_(computational)
Taxonomyis a practice and science concerned with classification or categorization. Typically, there are two parts to it: the development of an underlying scheme of classes (a taxonomy) and the allocation of things to the classes (classification).
Originally, taxonomy referred only to theclassification of organismson the basis of shared characteristics. Today it also has a more general sense. It may refer to the classification of things or concepts, as well as to the principles underlying such work. Thus a taxonomy can be used to organize species, documents, videos or anything else.
A taxonomy organizes taxonomic units known as "taxa" (singular "taxon"). Many are hierarchies.
One function of a taxonomy is to help users more easily find what they are searching for. This may be effected in ways that include a library classification system and a search engine taxonomy.
The word was coined in 1813 by the Swiss botanist A. P. de Candolle and is irregularly compounded from the Greek τάξις, taxis 'order' and νόμος, nomos 'law', connected by the French form -o-; the regular form would be taxinomy, as used in the Greek reborrowing ταξινομία.[1][2]
Wikipedia categories form a taxonomy,[3] which can be extracted by automatic means.[4] As of 2009, it has been shown that a manually constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy.[5]
In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. Taxonomies may then include a single child with multiple parents: for example, "car" might appear with both parents "vehicle" and "steel mechanisms"; to some, however, this merely means that 'car' is a part of several different taxonomies.[6] A taxonomy might also simply be an organization of kinds of things into groups, or an alphabetical list; here, however, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies since ontologies apply a larger variety of relation types.[7]
Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. It is also named a containment hierarchy. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. The progress of reasoning proceeds from the general to the more specific.
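A minimal sketch of such a containment hierarchy as a tree, with a root class covering all objects and more specific subclasses below it (the class names are illustrative, not taken from any standard taxonomy):

```python
# Sketch of a hierarchical taxonomy (containment hierarchy) as a tree.
# The class names are illustrative only.
class Taxon:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def lineage(self):
        """Names from this node up to the root: specific to general."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return path

root = Taxon("animal")                 # root applies to all objects
mammal = Taxon("mammal", parent=root)  # a subset of animal
dog = Taxon("dog", parent=mammal)      # a subset of mammal

print(dog.lineage())  # ['dog', 'mammal', 'animal']
```

Walking from a leaf to the root reverses the direction of reasoning described above: each step moves from the more specific to the more general class.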
By contrast, in the context of legal terminology, an open-ended contextual taxonomy is employed—a taxonomy holding only with respect to a specific context. In scenarios taken from the legal domain, a formal account of the open-texture of legal terms is modeled, which suggests varying notions of the "core" and "penumbra" of the meanings of a concept. The progress of reasoning proceeds from the specific to the more general.[8]
Anthropologists have observed that taxonomies are generally embedded in local cultural and social systems, and serve various social functions. Perhaps the most well-known and influential study of folk taxonomies is Émile Durkheim's The Elementary Forms of Religious Life. A more recent treatment of folk taxonomies (including the results of several decades of empirical research) and a discussion of their relation to scientific taxonomy can be found in Scott Atran's Cognitive Foundations of Natural History. Folk taxonomies of organisms have been found in large part to agree with scientific classification, at least for the larger and more obvious species, which means that it is not the case that folk taxonomies are based purely on utilitarian characteristics.[9]
In the seventeenth century, the German mathematician and philosopher Gottfried Leibniz, following the work of the thirteenth-century Majorcan philosopher Ramon Llull on his Ars generalis ultima, a system for procedurally generating concepts by combining a fixed set of ideas, sought to develop an alphabet of human thought. Leibniz intended his characteristica universalis to be an "algebra" capable of expressing all conceptual thought. The concept of creating such a "universal language" was frequently examined in the 17th century, also notably by the English philosopher John Wilkins in his work An Essay towards a Real Character and a Philosophical Language (1668), from which the classification scheme in Roget's Thesaurus ultimately derives.
Taxonomy in biology encompasses the description, identification, nomenclature, and classification of organisms. Uses of taxonomy include:
Uses of taxonomy in business and economics include:
Vegas et al.[10] make a compelling case for advancing knowledge in the field of software engineering through the use of taxonomies. Similarly, Ore et al.[11] provide a systematic methodology for approaching taxonomy building in software-engineering-related topics.
Several taxonomies have been proposed in software testing research to classify techniques, tools, concepts and artifacts. The following are some example taxonomies:
Engström et al.[14]suggest and evaluate the use of a taxonomy to bridge the communication between researchers and practitioners engaged in the area of software testing. They have also developed a web-based tool[15]to facilitate and encourage the use of the taxonomy. The tool and its source code are available for public use.[16]
Uses of taxonomy in education include:
Uses of taxonomy in safety include:
Citing inadequacies with current practices in listing authors of papers in medical research journals, Drummond Rennie and co-authors called, in a 1997 article in JAMA, the Journal of the American Medical Association, for
a radical conceptual and systematic change, to reflect the realities of multiple authorship and to buttress accountability. We propose dropping the outmoded notion of author in favor of the more useful and realistic one of contributor.[17]: 152
In 2012, several major academic and scientific publishing bodies mounted Project CRediT to develop a controlled vocabulary of contributor roles.[18] Known as CRediT (Contributor Roles Taxonomy), this is an example of a flat, non-hierarchical taxonomy; however, it does include an optional, broad classification of the degree of contribution: lead, equal or supporting. Amy Brand and co-authors summarise their intended outcome as:
Identifying specific contributions to published research will lead to appropriate credit, fewer author disputes, and fewer disincentives to collaboration and the sharing of data and code.[17]: 151
CRediT comprises 14 specific contributor roles using the following defined terms:
The taxonomy is an open standard conforming to the OpenStand principles,[19] and is published under a Creative Commons licence.[18]
Websites with a well-designed taxonomy or hierarchy are easily understood by users, because users can develop a mental model of the site structure.[20]
Guidelines for writing taxonomy for the web include:
Frederick Suppe[21]distinguished two senses of classification: a broad meaning, which he called "conceptual classification" and a narrow meaning, which he called "systematic classification".
About conceptual classification Suppe wrote:[21]: 292 "Classification is intrinsic to the use of language, hence to most if not all communication. Whenever we use nominative phrases we are classifying the designated subject as being importantly similar to other entities bearing the same designation; that is, we classify them together. Similarly the use of predicative phrases classifies actions or properties as being of a particular kind. We call this conceptual classification, since it refers to the classification involved in conceptualizing our experiences and surroundings."
About systematic classification Suppe wrote:[21]: 292 "A second, narrower sense of classification is the systematic classification involved in the design and utilization of taxonomic schemes such as the biological classification of animals and plants by genus and species."
Two of the predominant types of relationships in knowledge-representation systems are predication and the universally quantified conditional. Predication relationships express the notion that an individual entity is an example of a certain type (for example, John is a bachelor), while universally quantified conditionals express the notion that a type is a subtype of another type (for example, "A dog is a mammal", which means the same as "All dogs are mammals").[22]
The "has-a" relationship is quite different: an elephant has a trunk; a trunk is a part, not a subtype, of elephant. The study of part-whole relationships is mereology.
Taxonomies are often represented as is-a hierarchies where each level is more specific than the level above it (in mathematical language, "a subset of" the level above). For example, a basic biology taxonomy would have concepts such as mammal, which is a subset of animal, and dogs and cats, which are subsets of mammal. This kind of taxonomy is called an is-a model because the specific objects are considered instances of a concept. For example, Fido is an instance of the concept dog and Fluffy is a cat.[23]
In linguistics, is-a relations are called hyponymy. When one word describes a category but another describes some subset of that category, the larger term is called a hypernym with respect to the smaller, and the smaller is called a "hyponym" with respect to the larger. Such a hyponym, in turn, may have further subcategories for which it is a hypernym. In the simple biology example, dog is a hypernym with respect to its subcategory collie, which in turn is a hypernym with respect to Fido, one of its hyponyms. Typically, however, hypernym is used to refer to subcategories rather than single individuals.
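The distinction drawn in the preceding paragraphs, between subtype links (hyponymy) and instance links (predication), can be sketched with a toy relation table; the entries mirror the dog/collie/Fido example from the text and are otherwise invented:

```python
# Toy is-a reasoning: hypernym chains (subtype links) plus an
# instance-of table (predication). Entries mirror the text's example.
HYPERNYM = {"collie": "dog", "dog": "mammal", "mammal": "animal"}
INSTANCE_OF = {"Fido": "collie", "Fluffy": "cat"}

def is_subtype(term, category):
    """Follow the hypernym chain: collie -> dog -> mammal -> animal."""
    while term in HYPERNYM:
        term = HYPERNYM[term]
        if term == category:
            return True
    return False

def is_a(individual, category):
    """Predication plus hyponymy: Fido is a collie, hence an animal."""
    t = INSTANCE_OF.get(individual)
    return t == category or (t is not None and is_subtype(t, category))

print(is_subtype("collie", "animal"))  # True
print(is_a("Fido", "mammal"))          # True
print(is_a("Fluffy", "mammal"))        # False: no chain for cat here
```

Note that the two tables are deliberately separate: Fido is linked to collie by predication, while collie is linked upward by hyponymy, so individuals never appear in the hypernym chain itself.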
Researchers have reported that large populations consistently develop highly similar category systems. This may be relevant to lexical aspects of large communication networks and cultures, such as folksonomies and language, or human communication and sense-making in general.[24][25]
Hull (1998) suggested "The fundamental elements of any classification are its theoretical commitments, basic units and the criteria for ordering these basic units into a classification".[26]
There is a widespread opinion in knowledge organization and related fields that such classes correspond to concepts. We can, for example, classify "waterfowl" into the classes "ducks", "geese", and "swans"; we can also say, however, that the concept "waterfowl" is a generic broader term in relation to the concepts "ducks", "geese", and "swans". This example demonstrates the close relationship between classification theory and concept theory. A main opponent of concepts as units is Barry Smith.[27] Arp, Smith and Spear (2015) discuss ontologies and criticize the conceptualist understanding.[28]: 5ff The book states (p. 7): "The code assigned to France, for example, is ISO 3166 – 2:FR and the code is assigned to France itself — to the country that is otherwise referred to as Frankreich or Ranska. It is not assigned to the concept of France (whatever that might be)." Smith's alternative to concepts as units is based on a realist orientation: when scientists make successful claims about the types of entities that exist in reality, they are referring to objectively existing entities which realist philosophers call universals or natural kinds. Smith's main argument - with which many followers of the concept theory agree - seems to be that classes cannot be determined by introspective methods, but must be based on scientific and scholarly research. Whether units are called concepts or universals, the problem is to decide when a thing (say a "blackbird") should be considered a natural class. In the case of blackbirds, for example, recent DNA analyses have reconsidered the concept (or universal) "blackbird" and found that what was formerly considered one species (with subspecies) is in reality many different species, which have simply evolved similar characteristics to adapt to their ecological niches.[29]: 141
An important argument for considering concepts the basis of classification is that concepts are subject to change, and that they change when scientific revolutions occur. Our concepts of many birds, for example, have changed with recent developments in DNA analysis and the influence of the cladistic paradigm - and have demanded new classifications. Smith's example of France demands an explanation. First, France is not a general concept, but an individual concept. Next, the legal definition of France is determined by the conventions that France has made with other countries. It is still a concept, however, as Leclercq (1978) demonstrates with the corresponding concept Europe.[30]
Hull (1998) continued:[26] "Two fundamentally different sorts of classification are those that reflect structural organization and those that are systematically related to historical development." What is referred to is that, in biological classification, classification by the anatomical traits of organisms is one kind, and classification in relation to the evolution of species is another (in the section below, these two fundamental sorts of classification are expanded to four). Hull adds that in biological classification, evolution supplies the theoretical orientation.[26]
Ereshefsky (2000) presented and discussed three general philosophical schools of classification: "essentialism, cluster analysis, and historical classification. Essentialism sorts entities according to causal relations rather than their intrinsic qualitative features."[31]
These three categories may, however, be considered parts of broader philosophies. Four main approaches to classification may be distinguished: (1) logical and rationalist approaches, including "essentialism"; (2) empiricist approaches, including cluster analysis (it is important to notice that empiricism is not the same as empirical study, but a certain ideal of doing empirical studies; with the exception of the logical approaches, all of these are based on empirical studies, but base their studies on different philosophical principles); (3) historical and hermeneutical approaches, including Ereshefsky's "historical classification"; and (4) pragmatic, functionalist and teleological approaches (not covered by Ereshefsky). In addition, there are combined approaches (e.g., the so-called "evolutionary taxonomy", which mixes historical and empiricist principles).
Logical division,[32] or logical partitioning (top-down or downward classification), is an approach that divides a class into subclasses and then divides the subclasses into their subclasses, and so on, finally forming a tree of classes. The root of the tree is the original class, and the leaves of the tree are the final classes. Plato advocated a method based on dichotomy, which was rejected by Aristotle and replaced by the method of definitions based on genus, species, and specific difference.[33] The method of facet analysis (cf. faceted classification) is primarily based on logical division.[34] This approach tends to classify according to "essential" characteristics, a widely discussed and criticized concept (cf. essentialism). These methods may overall be related to the rationalist theory of knowledge. Michelle Bunn notes that logical partitioning uses categories which are established a priori; data are then collected and used to test the extent to which the classification system can be sustained.[35]
"Empiricism alone is not enough: a healthy advance in taxonomy depends on a sound theoretical foundation"[36]: 548
Phenetics, or numerical taxonomy,[37] is by contrast bottom-up classification, where the starting point is a set of items or individuals, which are classified by putting those with shared characteristics as members of a narrow class and proceeding upward. Numerical taxonomy is an approach based solely on observable, measurable similarities and differences of the things to be classified. Classification is based on overall similarity: the elements that are most alike in most attributes are classified together. But it is based on statistics, and therefore does not fulfill the criteria of logical division (e.g., producing classes that are mutually exclusive and jointly coextensive with the class they divide). Some will argue that this is not classification/taxonomy at all, but such an argument must consider the definitions of classification (see above). These methods may overall be related to the empiricist theory of knowledge.
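The bottom-up procedure just described can be sketched as simple single-linkage agglomerative clustering: start from individual items with measured characters and repeatedly merge the two most similar groups. The species names and two-number feature vectors below are invented toy data; real numerical taxonomy uses many measured characters per specimen.

```python
# Toy single-linkage agglomerative clustering, in the spirit of phenetics:
# start from individuals and merge the most similar groups bottom-up.
# Names and feature vectors are invented illustrative data.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def merge_closest(clusters):
    """Merge the two clusters with the smallest single-linkage distance."""
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(distance(a, b)
                    for a in clusters[i][1] for b in clusters[j][1])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    merged = ((clusters[i][0], clusters[j][0]),
              clusters[i][1] + clusters[j][1])
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

items = {"sparrow": [1.0, 0.9], "robin": [1.1, 1.0], "bat": [0.2, 0.1]}
clusters = [(name, [vec]) for name, vec in items.items()]
while len(clusters) > 1:
    clusters = merge_closest(clusters)

# The nested tuple records the merge order: sparrow and robin group first.
print(clusters[0][0])  # ('bat', ('sparrow', 'robin'))
```

The nesting of the resulting tuple plays the role of the hierarchy: groups merged earlier sit deeper in the tree, which is exactly the "narrow class first, then upward" direction of phenetic classification.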
Genealogical classification is the classification of items according to their common heritage. This must also be done on the basis of some empirical characteristics, but these characteristics are developed by the theory of evolution. Charles Darwin's[38] main contribution to classification theory was not just his claim that "... all true classification is genealogical ...", but that he provided operational guidance for classification.[39]: 90–92 Genealogical classification is not restricted to biology, but is also much used in, for example, the classification of languages, and may be considered a general approach to classification. These methods may overall be related to the historicist theory of knowledge. One of the main schools of historical classification is cladistics, which is today dominant in biological taxonomy but is also applied to other domains.
The historical and hermeneutical approaches are not restricted to the development of the object of classification (e.g., animal species) but are also concerned with the subject of classification (the classifiers) and their embeddedness in scientific traditions and other human cultures.
Pragmatic classification (and functional[40] and teleological classification) is the classification of items that emphasizes the goals, purposes, consequences,[41] interests, values and politics of classification: for example, classifying animals into wild animals, pests, domesticated animals and pets. Kitchenware (tools, utensils, appliances, dishes, and cookware used in food preparation or the serving of food) is another example of a classification that is not based on any of the three methods mentioned above, but clearly on pragmatic or functional criteria. Bonaccorsi et al. (2019) treat the general theory of functional classification and applications of this approach to patent classification.[40] Although the examples may suggest that pragmatic classifications are primitive compared to established scientific classifications, they must be considered in relation to the pragmatic and critical theory of knowledge, which considers all knowledge as influenced by interests.[42] Ridley (1986) wrote:[43]: 191 "teleological classification. Classification of groups by their shared purposes, or functions, in life - where purpose can be identified with adaptation. An imperfectly worked-out, occasionally suggested, theoretically possible principle of classification that differs from the two main such principles, phenetic and phylogenetic classification".
Natural classification is a concept closely related to the concept of natural kind. Carl Linnaeus is often recognized as the first scholar to have clearly differentiated "artificial" and "natural" classifications.[44][45] A natural classification is one, using Plato's metaphor, that is "carving nature at its joints".[46] Although Linnaeus considered natural classification the ideal, he recognized that his own system (at least partly) represented an artificial classification.
John Stuart Mill explained the artificial nature of the Linnaean classification and suggested the following definition of a natural classification:
"The Linnæan arrangement answers the purpose of making us think together of all those kinds of plants, which possess the same number of stamens and pistils; but to think of them in that manner is of little use, since we seldom have anything to affirm in common of the plants which have a given number of stamens and pistils."[47]: 498 "The ends of scientific classification are best answered, when the objects are formed into groups respecting which a greater number of general propositions can be made, and those propositions more important, than could be made respecting any other groups into which the same things could be distributed."[47]: 499 "A classification thus formed is properly scientific or philosophical, and is commonly called a Natural, in contradistinction to a Technical or Artificial, classification or arrangement."[47]: 499
Ridley (1986) provided the following definitions:[43]
Stamos (2004)[48]: 138 wrote: "The fact is, modern scientists classify atoms into elements based on proton number rather than anything else because it alone is the causally privileged factor [gold is atomic number 79 in the periodic table because it has 79 protons in its nucleus]. Thus nature itself has supplied the causal monistic essentialism. Scientists in their turn simply discover and follow (where "simply" ≠ "easily")."
The periodic table is the classification of the chemical elements, which is in particular associated with Dmitri Mendeleev (cf. History of the periodic table). An authoritative work on this system is Scerri (2020).[49] Hubert Feger (2001; numbered listing added) wrote about it:[50]: 1967–1968 "A well-known, still used, and expanding classification is Mendeleev's Table of Elements. It can be viewed as a prototype of all taxonomies in that it satisfies the following evaluative criteria:
Bursten (2020) wrote, however "Hepler-Smith, a historian of chemistry, and I, a philosopher whose work often draws on chemistry, found common ground in a shared frustration with our disciplines’ emphases on the chemical elements as the stereotypical example of a natural kind. The frustration we shared was that while the elements did display many hallmarks of paradigmatic kindhood, elements were not the kinds of kinds that generated interesting challenges for classification in chemistry, nor even were they the kinds of kinds that occupied much contemporary critical chemical thought. Compounds, complexes, reaction pathways, substrates, solutions – these were the kinds of the chemistry laboratory, and rarely if ever did they slot neatly into taxonomies in the orderly manner of classification suggested by the Periodic Table of Elements. A focus on the rational and historical basis of the development of the Periodic Table had made the received view of chemical classification appear far more pristine, and far less interesting, than either of us believed it to be."[51]
Linnaean taxonomy is the particular form of biological classification (taxonomy) set up by Carl Linnaeus, as set forth in his Systema Naturae (1735) and subsequent works. A major discussion in the scientific literature is whether a system that was constructed before Charles Darwin's theory of evolution can still be fruitful and reflect the development of life.[52][53]
Astronomy is a fine example of how Kuhn's (1962) theory of scientific revolutions (or paradigm shifts) influences classification.[54] For example:
Hornbostel–Sachs is a system of musical instrument classification devised by Erich Moritz von Hornbostel and Curt Sachs, and first published in 1914.[55] In the original classification, the top categories are:
A fifth top category, electrophones, was added later.
Each top category is subdivided, and Hornbostel–Sachs is a very comprehensive classification of musical instruments with wide applications. In Wikipedia, for example, all musical instruments are organized according to this classification.
In opposition to, for example, the astronomical and biological classifications presented above, the Hornbostel–Sachs classification seems very little influenced by research in musicology and organology. It is based on huge collections of musical instruments, but seems rather a system imposed upon the universe of instruments than a system with organic connections to scholarly theory. It may therefore be interpreted as a system based on logical division and rationalist philosophy.
Diagnostic and Statistical Manual of Mental Disorders (DSM) is a classification of mental disorders published by the American Psychiatric Association (APA). The first edition of the DSM was published in 1952,[56] and the newest, fifth edition was published in 2013.[57] In contrast to, for example, the periodic table and the Hornbostel–Sachs classification, its principles for classification have changed much during its history. The first edition was influenced by psychodynamic theory. The DSM-III, published in 1980,[58] adopted an atheoretical, "descriptive" approach to classification.[59] The system is very important for all people involved in psychiatry, whether as patients, researchers or therapists (in addition to insurance companies), but it is also strongly criticized and does not have the same scientific status as many other classifications.[60]
The Unified Medical Language System (UMLS) is a compendium of many controlled vocabularies in the biomedical sciences (created 1986).[1] It provides a mapping structure among these vocabularies and thus allows one to translate among the various terminology systems; it may also be viewed as a comprehensive thesaurus and ontology of biomedical concepts. UMLS further provides facilities for natural language processing. It is intended to be used mainly by developers of systems in medical informatics.
UMLS consists of Knowledge Sources (databases) and a set of software tools.
The UMLS was designed and is maintained by the US National Library of Medicine, is updated quarterly, and may be used for free. The project was initiated in 1986 by Donald A. B. Lindberg, M.D., then Director of the Library of Medicine, and directed by Betsy Humphreys.[2]
The number of biomedical resources available to researchers is enormous. Often this is a problem due to the large volume of documents retrieved when the medical literature is searched. The purpose of the UMLS is to enhance access to this literature by facilitating the development of computer systems that understand biomedical language. This is achieved by overcoming two significant barriers: "the variety of ways the same concepts are expressed in different machine-readable sources & by different people" and "the distribution of useful information among many disparate databases & systems".[citation needed]
Users of the system are required to sign a "UMLS agreement" and file brief annual usage reports. Academic users may use the UMLS free of charge for research purposes. Commercial or production use requires copyright licenses for some of the incorporated source vocabularies.
The Metathesaurus forms the base of the UMLS and comprises over 1 million biomedical concepts and 5 million concept names, all of which stem from the over 100 incorporated controlled vocabularies and classification systems. Some examples of the incorporated controlled vocabularies are CPT, ICD-10, MeSH, SNOMED CT, DSM-IV, LOINC, WHO Adverse Drug Reaction Terminology, UK Clinical Terms, RxNorm, Gene Ontology, and OMIM (see full list).
The Metathesaurus is organized by concept, and each concept has specific attributes defining its meaning and is linked to the corresponding concept names in the various source vocabularies. Numerous relationships between the concepts are represented, for instance hierarchical ones such as "isa" for subclasses and "is part of" for subunits, and associative ones such as "is caused by" or "in the literature often occurs close to" (the latter being derived from Medline).
The scope of the Metathesaurus is determined by the scope of the source vocabularies. If different vocabularies use different names for the same concept, or if they use the same name for different concepts, then this will be faithfully represented in the Metathesaurus. All hierarchical information from the source vocabularies is retained in the Metathesaurus. Metathesaurus concepts can also link to resources outside of the database, for instance gene sequence databases.
Each concept in the Metathesaurus is assigned one or more semantic types (categories), which are linked with one another through semantic relationships.[3] The semantic network is a catalog of these semantic types and relationships. This is a rather broad classification; there are 127 semantic types and 54 relationships in total.
The major semantic types are organisms, anatomical structures, biologic function, chemicals, events, physical objects, and concepts or ideas.
The links among semantic types define the structure of the network and show important relationships between the groupings and concepts. The primary link between semantic types is the "isa" link, establishing a hierarchy of types.
The network also has 5 major categories of non-hierarchical (or associative) relationships, which constitute the remaining 53 relationship types. These are "physically related to", "spatially related to", "temporally related to", "functionally related to" and "conceptually related to".[3]
The information about a semantic type includes an identifier, definition, examples, hierarchical information about the encompassing semantic type(s), and associative relationships. Associative relationships within the Semantic Network are very weak. They capture at most some-some relationships, i.e. they capture the fact that some instance of the first type may be connected by the salient relationship to some instance of the second type. Phrased differently, they capture the fact that a corresponding relational assertion is meaningful (though it need not be true in all cases).
An example of an associative relationship is "may-cause", which, applied to the terms (smoking, lung cancer), would yield: smoking "may-cause" lung cancer.
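Such some-some assertions can be modeled as a simple set of subject–relation–object triples. The sketch below is purely illustrative (the triples are examples, not actual UMLS Semantic Network data):

```python
# Minimal sketch of associative relationships as triples.
# The data here is illustrative, not drawn from the UMLS itself.
relations = {
    ("smoking", "may-cause", "lung cancer"),
    ("aspirin", "may-treat", "headache"),
}

def holds(subject, relation, obj):
    """Check whether a relational assertion is present in the network.

    A True result only means the assertion is *meaningful* (some-some),
    not that it is true in all cases.
    """
    return (subject, relation, obj) in relations

print(holds("smoking", "may-cause", "lung cancer"))  # True
```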
The SPECIALIST Lexicon contains information about common English vocabulary, biomedical terms, terms found in MEDLINE, and terms found in the UMLS Metathesaurus. Each entry contains syntactic (how words are put together to create meaning), morphological (form and structure) and orthographic (spelling) information. A set of Java programs uses the lexicon to work through the variations in biomedical texts by relating words by their parts of speech, which can be helpful in web searches or searches through an electronic medical record.
Entries may be one-word or multiple-word terms. Records contain four parts: base form (i.e. "run" for "running"); parts of speech (of which Specialist recognizes eleven); a unique identifier; and any available spelling variants.
For example, a query for "anesthetic" would return the following:[4]
The SPECIALIST lexicon is available in two formats. The "unit record" format can be seen above, and comprises slots and fillers. A slot is the element (i.e. "base=" or "spelling variant=") and the fillers are the values attributable to that slot for that entry. The "relational table" format is not yet normalized and contains a great deal of redundant data in the files.
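The slot/filler structure is straightforward to parse line by line. The sketch below is a toy parser over an invented record; the slot names and values are illustrative stand-ins, not an actual SPECIALIST entry:

```python
# Hypothetical slot/filler parser. The record text is illustrative only.
record = """base=anaesthetic
spelling_variant=anesthetic
cat=noun"""

def parse_unit_record(text):
    """Split each 'slot=filler' line into a dict; repeated slots collect into lists."""
    entry = {}
    for line in text.splitlines():
        slot, _, filler = line.partition("=")
        entry.setdefault(slot, []).append(filler)
    return entry

print(parse_unit_record(record)["base"])  # ['anaesthetic']
```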
Given the size and complexity of the UMLS and its permissive policy on integrating terms, errors are inevitable.[5]Errors include ambiguity and redundancy, hierarchical relationship cycles (a concept is both an ancestor and descendant to another), missing ancestors (semantic types of parent and child concepts are unrelated), and semantic inversion (the child/parent relationship with the semantic types is not consistent with the concepts).[6]
These errors are discovered and resolved by auditing the UMLS. Manual audits can be very time-consuming and costly. Researchers have attempted to address the issue through a number of ways. Automated tools can be used to search for these errors.
For structural inconsistencies (such as loops), a trivial solution based on the ordering of concepts would work. However, the same wouldn't apply when the inconsistency is at the term or concept level (context-specific meaning of a term).[7] This requires an informed search strategy to be used (knowledge representation).
In addition to the knowledge sources, the National Library of Medicine also provides supporting tools.
Artificial intelligence detection software aims to determine whether some content (text, image, video or audio) was generated using artificial intelligence (AI).
However, the reliability of such software is a topic of debate,[1]and there are concerns about the potential misapplication of AI detection software by educators.
Multiple AI detection tools have been demonstrated to be unreliable in terms of accurately and comprehensively detecting AI-generated text. In a study conducted by Weber-Wulff et al. and published in 2023, researchers evaluated 14 detection tools including Turnitin and GPTZero and found that "all scored below 80% of accuracy and only 5 over 70%."[2]
In AI content detection, a false positive is when human-written work is incorrectly flagged as AI-written. Many AI detection tools claim to have a minimal level of false positives, with Turnitin claiming a less than 1% false positive rate.[3] However, later research by The Washington Post produced much higher rates of 50%, though they used a smaller sample size.[4] False positives in an academic setting frequently lead to accusations of academic misconduct, which can have serious consequences for a student's academic record. Additionally, studies have shown evidence that many AI detection models are prone to give false positives to work written by those whose first language isn't English and by neurodiverse people.[5][6]
A false negative is a failure to identify documents with AI-written text. False negatives often happen as a result of a detection software's sensitivity level or because evasive techniques were used when generating the work to make it sound more human.[7] False negatives are less of a concern academically, since they aren't likely to lead to accusations and ramifications. Notably, Turnitin stated they have a 15% false negative rate.[8]
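The two error rates are simple ratios over a labeled evaluation set. The sketch below uses hypothetical sample counts (chosen so the resulting rates happen to match the 1% and 15% figures Turnitin claims); the counts themselves are invented for illustration:

```python
# Hypothetical evaluation of an AI-text detector on 500 human-written
# and 500 AI-written samples. The counts are invented for illustration.
human_written = {"flagged_ai": 5, "cleared": 495}
ai_written = {"flagged_ai": 425, "cleared": 75}

# False positive rate: fraction of human work wrongly flagged as AI.
fpr = human_written["flagged_ai"] / (human_written["flagged_ai"] + human_written["cleared"])
# False negative rate: fraction of AI work that slips through undetected.
fnr = ai_written["cleared"] / (ai_written["flagged_ai"] + ai_written["cleared"])

print(f"false positive rate: {fpr:.1%}")  # false positive rate: 1.0%
print(f"false negative rate: {fnr:.1%}")  # false negative rate: 15.0%
```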
For text, this is usually done to prevent alleged plagiarism, often by detecting repetition of words as telltale signs that a text was AI-generated (including AI hallucinations). They are often used by teachers marking their students, usually on an ad hoc basis. Following the release of ChatGPT and similar AI text generative software, many educational establishments have issued policies against the use of AI by students.[9] AI text detection software is also used by those assessing job applicants, as well as by online search engines.[10]
Current detectors may sometimes be unreliable and have incorrectly marked work by humans as originating from AI[11][12][13] while failing to detect AI-generated work in other instances.[14] MIT Technology Review said that the technology "struggled to pick up ChatGPT-generated text that had been slightly rearranged by humans and obfuscated by a paraphrasing tool".[15] AI text detection software has also been shown to discriminate against non-native speakers of English.[10]
Two students from the University of California, Davis, were referred to the university's Office of Student Success and Judicial Affairs (OSSJA) after their professors scanned their essays with positive results; the first with an AI detector called GPTZero, and the second with an AI detector integration in Turnitin. However, following media coverage[16] and a thorough investigation, the students were cleared of any wrongdoing.[17][18]
In April 2023, Cambridge University and other members of the Russell Group of universities in the United Kingdom opted out of Turnitin's AI text detection tool, after expressing concerns it was unreliable.[19] The University of Texas at Austin opted out of the system six months later.[20]
In May 2023, a professor at Texas A&M University–Commerce used ChatGPT to detect whether his students' content was written by it, which ChatGPT claimed was the case, and threatened to fail the class on that basis, despite ChatGPT not being able to detect AI-generated writing.[21] No students were prevented from graduating because of the issue, and all but one student (who admitted to using the software) were exonerated from accusations of having used ChatGPT in their content.[22]
An article by Thomas Germain, published on Gizmodo in June 2024, reported job losses among freelance writers and journalists due to AI text detection software mistakenly classifying their work as AI-generated.[23]
To improve the reliability of AI text detection, researchers have explored digital watermarking techniques. A 2023 paper titled "A Watermark for Large Language Models"[24] presents a method to embed imperceptible watermarks into text generated by large language models (LLMs). This watermarking approach allows content to be flagged as AI-generated with a high level of accuracy, even when text is slightly paraphrased or modified. The technique is designed to be subtle and hard to detect for casual readers, thereby preserving readability, while providing a detectable signal for those employing specialized tools. However, while promising, watermarking faces challenges in remaining robust under adversarial transformations and ensuring compatibility across different LLMs.
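The detection side of such a scheme can be sketched with a "green list" statistic in the spirit of Kirchenbauer et al.: a watermarking sampler biases generation toward a pseudo-randomly chosen subset of the vocabulary, and the detector computes a z-score for how over-represented that subset is. The sketch below is a simplification under stated assumptions: a fixed parity rule stands in for the hash-derived green list, and the token-ID sequences are invented:

```python
import math

# Simplified "green list" watermark detector. In the real scheme, the green
# list is derived from a hash of the preceding token; the parity rule below
# is a stand-in so the example is self-contained.
def is_green(token_id):
    return token_id % 2 == 0  # stand-in partition: half the vocabulary is "green"

def watermark_z_score(token_ids, gamma=0.5):
    """z-score of the observed green-token count against the null hypothesis
    that the text was written without knowledge of the green list."""
    t = len(token_ids)
    green = sum(is_green(tok) for tok in token_ids)
    return (green - gamma * t) / math.sqrt(t * gamma * (1 - gamma))

# A watermarking sampler boosts green tokens, so watermarked text scores high.
unwatermarked = [3, 8, 1, 7, 12, 9, 4, 11, 6, 5]   # about half green
watermarked = [2, 8, 4, 12, 6, 10, 7, 2, 16, 4]    # 9 of 10 green
print(watermark_z_score(unwatermarked), watermark_z_score(watermarked))
```

A large positive z-score flags the text as likely watermarked; paraphrasing attacks work by replacing enough tokens to pull the score back toward zero.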
There is software available designed to bypass AI text detection.[25]
A study published in August 2023 analyzed 20 abstracts from papers published in the Eye Journal, which were then paraphrased using GPT-4.0. The AI-paraphrased abstracts were examined for plagiarism using QueText and for AI-generated content using Originality.AI. The texts were then re-processed through an adversarial software called Undetectable.ai in order to reduce the AI-detection scores. The study found that the AI detection tool, Originality.AI, identified text generated by GPT-4 with a mean accuracy of 91.3%. However, after reprocessing by Undetectable.ai, the detection accuracy of Originality.ai dropped to a mean accuracy of 27.8%.[26][27]
Some experts also believe that techniques like digital watermarking are ineffective because they can be removed or added to trigger false positives.[28] The "A Watermark for Large Language Models" paper by Kirchenbauer et al.[24] also addresses potential vulnerabilities of watermarking techniques. The authors outline a range of adversarial tactics, including text insertion, deletion, and substitution attacks, that could be used to bypass watermark detection. These attacks vary in complexity, from simple paraphrasing to more sophisticated approaches involving tokenization and homoglyph alterations. The study highlights the challenge of maintaining watermark robustness against attackers who may employ automated paraphrasing tools or even specific language model replacements to alter text spans iteratively while retaining semantic similarity. Experimental results show that although such attacks can degrade watermark strength, they also come at the cost of text quality and increased computational resources.
One shortcoming of most AI content detection software is their inability to identify AI-generated text in any language. Large language models (LLMs) like ChatGPT, Claude, and Gemini can write in different languages, but traditional AI text detection tools have primarily been trained in English and a few other widely spoken languages, such as French and Spanish. Fewer AI detection solutions can detect AI-generated text in languages like Farsi, Arabic, or Hindi.[citation needed]
Several purported AI image detection tools exist, to detect AI-generated images (for example, those originating from Midjourney or DALL-E). They are not completely reliable.[29][30]
Others claim to identify video and audio deepfakes, but this technology is not fully reliable yet either.[31]
Despite debate around the efficacy of watermarking, Google DeepMind is actively developing a detection software called SynthID, which works by inserting a digital watermark that is invisible to the human eye into the pixels of an image.[32][33]
In the design of modern computers, memory geometry describes the internal structure of random-access memory. Memory geometry is of concern to consumers upgrading their computers, since older memory controllers may not be compatible with later products. Memory geometry terminology can be confusing because of the number of overlapping terms.
The geometry of a memory system can be thought of as a multi-dimensional array. Each dimension has its own characteristics and physical realization. For example, the number of data pins on a memory module is one dimension.
Memory geometry describes the logical configuration of a RAM module, but consumers will always find it easiest to grasp the physical configuration. Much of the confusion surrounding memory geometry occurs when the physical configuration obfuscates the logical configuration. The first defining feature of RAM is form factor. RAM modules can be in compact SO-DIMM form for space-constrained applications like laptops, printers, embedded computers, and small form factor computers, and in DIMM format, which is used in most desktops.[citation needed]
The other physical characteristics, determined by physical examination, are the number of memory chips, and whether both sides of the memory "stick" are populated. Modules with the number of RAM chips equal to some power of two do not support memory error detection or correction. If there are extra RAM chips (between powers of two), these are used for ECC.
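This rule of thumb can be expressed as a power-of-two check on the chip count. The sketch below is a minimal illustration of that heuristic, not a definitive test (other module designs exist):

```python
# Heuristic from the text: a chip count that is not a power of two
# suggests extra chips are present for ECC.
def looks_like_ecc(chip_count):
    """True if chip_count is not a power of two (e.g. 9 or 18 chips)."""
    return chip_count & (chip_count - 1) != 0

# 8 chips: plain module; 9 chips: 8 data chips plus 1 ECC chip.
print(looks_like_ecc(8), looks_like_ecc(9))  # False True
```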
RAM modules are 'keyed' by indentations on the sides, and along the bottom of the module. This designates the technology, and classification of the modules, for instance whether it is DDR2, or DDR3, and whether it is suitable for desktops, or for servers. Keying was designed to make it difficult to install incorrect modules in a system (but there are more requirements than are embodied in keys). It is important to make sure that the keying of the module matches the key of the slot it is intended to occupy.[citation needed]
Additional, non-memory chips on the module may be an indication that it was designed[by whom?] for high capacity memory systems for servers, and that the module may be incompatible with mass-market systems.[citation needed]
As the next section of this article covers the logical architecture (the logical structure spanning every populated slot in a system), the physical features of the slots themselves become important. By consulting the documentation of your motherboard, or reading the labels on the board itself, you can determine the underlying logical structure of the slots. When there is more than one slot, they are numbered, and when there is more than one channel, the different slots are separated in that way as well – usually color-coded.[citation needed]
In the 1990s, computers using cache-coherent non-uniform memory access were released, which allowed combining multiple computers that each had their own memory controller, such that the software running on them could use the I/O devices, memory, and CPUs of all participating systems as if they were one unit (single system image). With AMD's release of the Opteron, which integrated the memory controller into the CPU, NUMA systems that share more than one memory controller in a single system have become common in applications that require more power than the common desktop offers.[citation needed]
Channels are the highest-level structure at the local memory controller level. Modern computers can have two, three or even more channels. It is usually important that, for each module in any one channel, there is a logically identical module in the same location on each of the other populated channels.[citation needed]
Module capacity is the aggregate space in a module measured in bytes, or, more generally, in words. Module capacity is equal to the product of the number of ranks and the rank density, where the rank density is the product of rank depth and rank width.[1] The standard format for expressing this specification is (rank depth) Mbit × (rank width) × (number of ranks).[citation needed]
Ranks are sub-units of a memory module that share the same address and data buses and are selected by chip select (CS) in low-level addressing. For example, a memory module with 8 chips on each side, with each chip having an 8-bit-wide data bus, would have one rank for each side for a total of 2 ranks, if we define a rank to be 64 bits wide. Consider a module composed of Micron Technology MT47H128M16 chips with the organization 128 Mib × 16, meaning 128 Mi memory depth and a 16-bit-wide data bus per chip; if the module has 8 of these chips on each side of the board, there is a total of 16 chips × 16-bit-wide data = 256 total bits of data width. For a 64-bit-wide memory data interface, this equates to having 4 ranks, where each rank can be selected by a 2-bit chip select signal. Memory controllers such as the Intel 945 Chipset list the configurations they support: "256-Mib, 512-Mib, and 1-Gib DDR2 technologies for ×8 and ×16 devices", "four ranks for all DDR2 devices up to 512-Mibit density", "eight ranks for 1-Gibit DDR2 devices". As an example, take an i945 memory controller with four Kingston KHX6400D2/1G memory modules, where each module has a capacity of 1 GiB.[2] Kingston describes each module as composed of 16 "64M×8-bit" chips, with each chip having an 8-bit-wide data bus. 16 × 8 equals 128; therefore, each module has two ranks of 64 bits each. So, from the MCH point of view there are four 1 GiB modules. At a higher logical level, the MCH also sees two channels, each with four ranks.
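The Kingston arithmetic above can be sketched directly: multiply chip count by chip width to get total data width, divide by the bus width to get the rank count, and multiply out the chip dimensions for the capacity. A minimal sketch, using the figures from the example:

```python
# Rank and capacity arithmetic for the Kingston KHX6400D2/1G example above.
chips_per_module = 16          # 8 chips per side, 2 sides
chip_width_bits = 8            # "64M×8-bit" organization
chip_depth = 64 * 2**20        # 64 Mi addresses per chip
bus_width_bits = 64            # non-ECC DDR2 data interface

total_width = chips_per_module * chip_width_bits   # 128 bits of chip data width
ranks = total_width // bus_width_bits              # 128 / 64 = 2 ranks
capacity_bytes = chips_per_module * chip_depth * chip_width_bits // 8

print(ranks, capacity_bytes // 2**30)  # 2 1  -> two ranks, 1 GiB per module
```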
In contrast, banks, while similar from a logical perspective to ranks, are implemented quite differently in physical hardware. Banks are sub-units inside a single memory chip, while ranks are sub-units composed of a subset of the chips on a module. Similar to chip select, banks are selected by bank select bits, which are part of the memory interface.[citation needed]
Chips are the lowest form of organization covered by memory geometry, sometimes called the "memory device". These are the component ICs that make up each module of RAM. The most important measurement of a chip is its density, measured in bits. Because the memory bus width is usually larger than the number of chips, most chips are designed to have width, meaning that they are divided into equal parts internally, and when one address "depth" is called up, instead of returning just one value, more than one value is returned. In addition to the depth, a second addressing dimension has been added at the chip level: banks. Banks allow one bank to be available while another bank is unavailable because it is refreshing.[citation needed]
Some measurements of modules are size, width, speed, and latency. A memory module consists of a multiple of the memory chips needed to equal the desired module width. So a 32-bit SIMM module could be composed of four 8-bit wide (×8) chips. As noted in the memory channel part, one physical module can be made up of one or more logical ranks. If that 32-bit SIMM were composed of eight 8-bit chips, the SIMM would have two ranks.[citation needed]
A memory channel is made up of ranks. Physically a memory channel with just one memory module might present itself as having one or more logical ranks.[citation needed]
This is the highest level. A typical computer has only a single memory controller with only one or two channels. The logical features section described NUMA configurations, which can take the form of a network of memory controllers. For example, each socket of a two-socket AMD K8 can have a two-channel memory controller, giving the system a total of four memory channels.
Various methods of specifying memory geometry can be encountered, giving different types of information.
(memory depth) × (memory width)
The memory width specifies the data width of the memory module interface in bits. For example, 64 would indicate a 64-bit data width, as is found on non-ECC DIMMs common in SDR and DDR1–4 families of RAM. A memory width of 72 would indicate an ECC module, with 8 extra bits in the data width for the error-correcting code syndrome. (The ECC syndrome allows single-bit errors to be corrected.) The memory depth is the total memory capacity in bits divided by the non-parity memory width. Sometimes the memory depth is indicated in units of Meg (2^20), as in 32×64 or 64×64, indicating 32 Mi depth and 64 Mi depth respectively.
(memory density)
This is the total memory capacity of the chip.
Example: 128 Mib.
(memory depth) × (memory width)
Memory depth is the memory density divided by the memory width. Example: a memory chip with 128 Mib capacity and an 8-bit-wide data bus can be specified as 16 Meg × 8. Sometimes the "Mi" is dropped, as in 16×8.
(memory depth per bank) × (memory width) × (number of banks)
Example: a chip with the same capacity and memory width as above but constructed with 4 banks would be specified as 4 Mi × 8 × 4.
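The three specification dimensions must multiply out to the chip's density. A minimal sketch checking the 4 Mi × 8 × 4 example against the 128 Mib density stated earlier:

```python
# Check that (depth per bank) x (width) x (banks) equals the chip density.
depth_per_bank = 4 * 2**20   # 4 Mi addresses per bank
width_bits = 8               # 8-bit-wide data bus
banks = 4

density_bits = depth_per_bank * width_bits * banks
print(density_bits == 128 * 2**20)  # True: 128 Mib, matching the chip above
```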
Memory organization is an aspect of computer architecture that is concerned with the storage and transfer of data and programs.[1]
There are several ways to organise memories with respect to the way they are connected to the cache:
The memory is one word wide and connected via a one-word-wide bus to the cache.
The memory is more than one word wide (usually four words wide) and connected by an equally wide bus to the low-level cache (which is also wide). From the cache, multiple one-word-wide buses go to a MUX, which selects the correct bus to connect to the high-level cache.
There are several memory banks, each one word wide, on a one-word-wide bus. Logic in the memory selects the correct bank to use when the memory is accessed by the cache.
Memory interleaving is a way to distribute individual addresses over memory modules. Its aim is to keep as many modules as possible busy as computations proceed. With memory interleaving, the low-order k bits of the memory address generally specify the module.
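Low-order interleaving can be sketched as a simple address split: the low-order k bits pick the module and the remaining bits pick the word within it, so consecutive addresses rotate through the modules:

```python
# Low-order interleaving: address -> (module, word offset within module).
def interleave(address, k):
    module = address & ((1 << k) - 1)   # low-order k bits select the module
    offset = address >> k               # remaining bits select the word
    return module, offset

# With k = 2 (four modules), sequential addresses cycle through all modules,
# keeping each of them busy during a streaming access.
print([interleave(a, 2)[0] for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```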
A processor register is a quickly accessible location available to a computer's processor.[1] Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, e.g. DEC PDP-10, ICT 1900.[2]
Almost all computers, whether of load/store architecture or not, load items of data from a larger memory into registers where they are used for arithmetic operations, bitwise operations, and other operations, and are manipulated or tested by machine instructions. Manipulated items are then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic random-access memory (RAM) as main memory, with the latter usually accessed via one or more cache levels.
Processor registers are normally at the top of the memory hierarchy, and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of Pentium Pro, Cyrix 6x86, Nx586, and AMD K5.
When a computer program accesses the same data repeatedly, this is called locality of reference. Holding frequently used values in registers can be critical to a program's performance. Register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer.
Registers are normally measured by the number of bits they can hold, for example, an 8-bit register, 32-bit register, 64-bit register, 128-bit register, or more. In some instruction sets, the registers can operate in various modes, breaking down their storage memory into smaller parts (32-bit into four 8-bit ones, for instance) to which multiple data (a vector, or one-dimensional array of data) can be loaded and operated upon at the same time. Typically this is implemented by adding extra registers that map their memory into a larger register. Processors that have the ability to execute single instructions on multiple data are called vector processors.
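The "four 8-bit parts of a 32-bit register" idea can be illustrated with plain bit manipulation. A minimal sketch of lane splitting and lane-wise addition (the kind of operation a SIMD instruction performs in hardware, modeled here in software):

```python
# Treat a 32-bit value as four independent 8-bit lanes, lowest lane first.
def split_lanes(reg32):
    return [(reg32 >> (8 * i)) & 0xFF for i in range(4)]

def add_lanes(a, b):
    """Lane-wise addition: each 8-bit lane wraps around independently,
    with no carry propagating between lanes."""
    lanes = [(x + y) & 0xFF for x, y in zip(split_lanes(a), split_lanes(b))]
    return sum(lane << (8 * i) for i, lane in enumerate(lanes))

print(split_lanes(0x04030201))                  # [1, 2, 3, 4]
print(hex(add_lanes(0x010203FF, 0x01010101)))   # 0x2030400 (lane 0 wrapped)
```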
A processor often contains several kinds of registers, which can be classified according to the types of values they can store or the instructions that operate on them:
Hardware registers are similar, but occur outside CPUs.
In some architectures (such as SPARC and MIPS), the first or last register in the integer register file is a pseudo-register in that it is hardwired to always return zero when read (mostly to simplify indexing modes), and it cannot be overwritten. In Alpha, this is also done for the floating-point register file. As a result of this, register files are commonly quoted as having one register more than are actually usable; for example, 32 registers are quoted when only 31 of them fit within the above definition of a register.
The following table shows the number of registers in several mainstream CPU architectures. Note that in x86-compatible processors, the stack pointer (ESP) is counted as an integer register, even though there are a limited number of instructions that may be used to operate on its contents. Similar caveats apply to most architectures.
Although all of the below-listed architectures are different, almost all follow a basic arrangement known as the von Neumann architecture, first proposed by the Hungarian-American mathematician John von Neumann. It is also noteworthy that the number of registers on GPUs is much higher than that on CPUs.
The number of registers available on a processor and the operations that can be performed using those registers have a significant impact on the efficiency of code generated by optimizing compilers. The Strahler number of an expression tree gives the minimum number of registers required to evaluate that expression tree.
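The Strahler number is easy to compute recursively: a leaf needs one register, and an operator node needs one more register than its children only when both subtrees tie. A minimal sketch over binary expression trees represented as nested pairs (this is the same numbering idea used in Sethi–Ullman register allocation):

```python
# Strahler number of a binary expression tree = minimum registers needed
# to evaluate it. A node is either a leaf value or a (left, right) pair.
def strahler(node):
    if not isinstance(node, tuple):
        return 1  # a leaf loads into one register
    left, right = strahler(node[0]), strahler(node[1])
    # If the subtrees tie, one extra register must hold the first result
    # while the second subtree is evaluated; otherwise reuse suffices.
    return left + 1 if left == right else max(left, right)

# (a*b) + (c*d): each product needs 2 registers, the whole sum needs 3.
tree = (("a", "b"), ("c", "d"))
print(strahler(tree))  # 3
```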
Universal memory refers to a computer data storage device combining the cost benefits of DRAM, the speed of SRAM, and the non-volatility of flash memory, along with effectively unlimited durability and longevity. Such a device, if it ever becomes possible to develop, would have a far-reaching impact on the computer market. Some[1] doubt that such a type of memory will ever be possible.
Computers, for most of their recent history, have depended on several different data storage technologies simultaneously as part of their operation. Each one operates at a level in the memory hierarchy where another would be unsuitable. A personal computer might include a few megabytes of fast but volatile and expensive SRAM as the CPU cache, several gigabytes of slower DRAM for program memory, and hundreds of gigabytes to a few terabytes of slow but non-volatile flash memory or "spinning platter" hard disk drive for long-term storage. For example, a university[2] recommended students entering in 2015–2016 to have a PC with:
Researchers seek to replace these different memory types with one single type to reduce cost and increase performance. For a memory technology to be considered a universal memory, it would need to have the best characteristics of several existing memory technologies. It would need to:
The last criterion is likely to be satisfied last, as economies of scale in manufacturing reduce cost. Many types of memory technologies have been explored with the goal of creating a practical universal memory. These include:
Since each of these technologies has its limitations, none has yet reached the goals of universal memory. | https://en.wikipedia.org/wiki/Universal_memory
In information science, authority control is a process that organizes information, for example in library catalogs,[1][2][3] by using a single, distinct spelling of a name (heading) or an identifier (generally persistent and alphanumeric) for each topic or concept. The word authority in authority control derives from the idea that the names of people, places, things, and concepts are authorized, i.e., they are established in one particular form.[4][5][6] These one-of-a-kind headings or identifiers are applied consistently throughout catalogs which make use of the respective authority file,[7] and are applied for other methods of organizing data such as linkages and cross references.[7][8] Each controlled entry is described in an authority record in terms of its scope and usage, and this organization helps the library staff maintain the catalog and make it user-friendly for researchers.[9]
Catalogers assign each subject—such as author, topic, series, or corporation—a particular unique identifier or heading term which is then used consistently, uniquely, and unambiguously for all references to that same subject, which removes variations from different spellings, transliterations, pen names, or aliases.[10] The unique heading can guide users to all relevant information including related or collocated subjects.[10] Authority records can be combined into a database called an authority file, and maintaining and updating these files as well as "logical linkages"[11] to other files within them is the work of librarians and other information catalogers. Accordingly, authority control is an example of controlled vocabulary and of bibliographic control.
While in theory any piece of information is amenable to authority control, including personal and corporate names, uniform titles, series names, and subjects,[2][3] library catalogers typically focus on author names and titles of works. Traditionally, one of the most commonly used authority files globally is the set of subject headings from the Library of Congress. More recently, links to or titles of the articles and categories of Wikipedia have emerged to function as an authority file due to the popularity of the encyclopedia, where each article or category is a notable topic or concept similar to other authority files.[citation needed]
As time passes, information changes, prompting the need for reorganization. According to one view, authority control is not about creating a perfect seamless system but rather is an ongoing effort to keep up with these changes and to bring "structure and order" to the task of helping users find information.[9]
Sometimes within a catalog there are diverse names or spellings for a single person or subject.[10][13] This variation may cause researchers to overlook relevant information. Authority control is used by catalogers to collocate materials that logically belong together but that present themselves differently. Records are used to establish uniform titles that collocate all versions of a given work under one unique heading even when such versions are issued under different titles. With authority control, one unique preferred name represents all variations and will include different variations, spellings and misspellings, uppercase versus lowercase variants, differing dates, and so forth. For example, in Wikipedia, the first wife of Charles III is described by the article Diana, Princess of Wales as well as by numerous other descriptors, e.g. Princess Diana; but both Princess Diana and Diana, Princess of Wales describe the same person, so they all redirect to the same main article. In general, all authority records choose one title as the preferred one for consistency. In an online library catalog, various entries might look like the following:[2][3]
These terms all describe the same person. Accordingly, authority control reduces these entries to one unique entry or officially authorized heading, sometimes termed an access point: Diana, Princess of Wales, 1961–1997.[18]
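As a minimal sketch of this collocation step (the variant list and heading strings below are illustrative, not drawn from any actual authority file):

```python
# Sketch: an authority "file" as a mapping from variant catalog entries
# to the single authorized heading (access point). Entries are illustrative.
AUTHORIZED = "Diana, Princess of Wales, 1961-1997"

VARIANTS = {
    "Princess Diana": AUTHORIZED,
    "Diana, Princess of Wales": AUTHORIZED,
    "Spencer, Diana": AUTHORIZED,
}

def resolve(entry):
    """Collocate an entry under its authorized heading; entries with no
    authority record pass through unchanged."""
    return VARIANTS.get(entry, entry)

print(resolve("Princess Diana"))  # -> Diana, Princess of Wales, 1961-1997
```

A real authority file additionally records scope notes, sources, and cross references, but the core lookup is this many-to-one mapping.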
Generally, different libraries in different countries use different authority file headings and identifiers, possibly inviting confusion, but there are international approaches to lessen it. One such effort is the Virtual International Authority File, a collaborative attempt to provide a single heading for a particular subject. It is a way to standardize information from different authority files around the world, such as the Integrated Authority File (GND) maintained and used cooperatively by many libraries in German-speaking countries and the United States Library of Congress. The idea is to create a single worldwide virtual authority file. For example, the ID for Princess Diana in the GND is 118525123 (preferred name: Diana <Wales, Prinzessin>), while the United States Library of Congress uses the term Diana, Princess of Wales, 1961–1997; other authority files make other choices. The Virtual International Authority File choice for all of these variations is VIAF ID: 107032638 — that is, a common number representing all of these variations.[18]
The English Wikipedia prefers the term "Diana, Princess of Wales", but at the bottom of the article about her, there are links to various international cataloging efforts for reference purposes.
Sometimes two different authors have been published under the same name.[10] This can happen if there is a title which is identical to another title or to a collective uniform title.[10] This, too, can cause confusion. Different authors can be distinguished correctly from each other by, for example, adding a middle initial to one of the names; in addition, other information can be added to one entry to clarify the subject, such as birth year, death year, range of active years such as 1918–1965 when the person flourished, or a brief descriptive epithet. When catalogers come across different subjects with similar or identical headings, they can disambiguate them using authority control.
A customary way of enforcing authority control in a bibliographic catalog is to set up a separate index of authority records, which relates to and governs the headings used in the main catalog. This separate index is often referred to as an "authority file". It contains an indexable record of all decisions made by catalogers in a given library (or—as is increasingly the case—cataloging consortium), which catalogers consult when making, or revising, decisions about headings. As a result, the records contain documentation about sources used to establish a particular preferred heading, and may contain information discovered while researching the heading which may be useful.[17]
While authority files provide information about a particular subject, their primary function is not to provide information but to organize it.[17]They contain enough information to establish that a given author or title is unique, but that is all; irrelevant but interesting information is generally excluded. Although practices vary internationally, authority records in the English-speaking world generally contain the following information:
Since the headings function as access points, making sure that they are distinct and not in conflict with existing entries is important. For example, the English novelist William Collins (1824–89), whose works include The Moonstone and The Woman in White, is better known as Wilkie Collins. Cataloguers have to decide which name the public would most likely look under, and whether to use a see-also reference to link alternative forms of an individual's name.
For example, the Irish writer Brian O'Nolan, who lived from 1911 to 1966, wrote under many pen names such as Flann O'Brien and Myles na Gopaleen. Catalogers at the United States Library of Congress chose one form—"O'Brien, Flann, 1911–1966"—as the official heading.[20] The example contains all three elements of a valid authority record: the first heading, O'Brien, Flann, 1911–1966, is the form of the name that the Library of Congress chose as authoritative. In theory, every record in the catalog that represents a work by this author should have this form of the name as its author heading. What follows immediately below the heading, beginning with Na Gopaleen, Myles, 1911–1966, are the see references. These forms of the author's name will appear in the catalog, but only as transcriptions and not as headings. If a user queries the catalog under one of these variant forms of the author's name, he or she would receive the response: "See O'Brien, Flann, 1911–1966." There is an additional spelling variant of the Gopaleen name: "Na gCopaleen, Myles, 1911–1966" has an extra C inserted because the author also employed the non-anglicized Irish spelling of his pen name, in which the capitalized C shows the correct root word while the preceding g indicates its pronunciation in context. So if a library user comes across this spelling variant, he or she will be led to the same author regardless. See-also references, which point from one authorized heading to another authorized heading, are exceedingly rare for personal name authority records, although they often appear in name authority records for corporate bodies. The final four entries in this record, beginning with His At Swim-Two-Birds ... 1939., constitute the justification for this particular form of the name: it appeared in this form on the 1939 edition of the author's novel At Swim-Two-Birds, whereas the author's other noms de plume appeared on later publications.
The act of choosing a single authorized heading to represent all forms of a name is quite often a difficult and complex task, considering that any given individual may have legally changed their name or used a variety of legal names in the course of their lifetime, as well as a variety of nicknames, pen names, stage names or other alternative names. It may be particularly difficult to choose a single authorized heading for individuals whose various names have controversial political or social connotations, when the choice of authorized heading may be seen as endorsement of the associated political or social ideology.
An alternative to using authorized headings is the idea of access control, where various forms of a name are related without the endorsement of one particular form.[21]
Before the advent of digital online public access catalogs and the Internet, individual cataloging departments within each library generally created and maintained that library's authority files. Naturally, there was considerable difference between the authority files of different libraries. For the early part of library history, it was generally accepted that, as long as a library's catalog was internally consistent, the differences between catalogs in different libraries did not matter greatly.
As libraries became more attuned to the needs of researchers and began interacting more with other libraries, the value of standard cataloging practices came to be recognized. With the advent of automated database technologies, catalogers began to establish cooperative consortia, such as OCLC and RLIN in the United States, in which cataloging departments from libraries all over the world contributed their records to, and took their records from, a shared database. This development prompted the need for national standards for authority work.
In the United States, the primary organization for maintaining cataloging standards with respect to authority work operates under the aegis of the Library of Congress Program for Cooperative Cataloging. It is known as the Name Authority Cooperative Program, or NACO Authority.[22]
There are various standards using different acronyms.
Standards for authority metadata:
Standards for object identification, controlled by an identification-authority:
Standards for identified-object metadata (examples): vCard, Dublin Core, etc. | https://en.wikipedia.org/wiki/Authority_control
A defining vocabulary is a list of words used by lexicographers to write dictionary definitions. The underlying principle goes back to Samuel Johnson's notion that words should be defined using 'terms less abstruse than that which is to be explained',[1] and a defining vocabulary provides the lexicographer with a restricted list of high-frequency words which can be used for producing simple definitions of any word in the dictionary.
Defining vocabularies are especially common in English monolingual learner's dictionaries. The first such dictionary to use a defining vocabulary was the New Method English Dictionary by Michael West and James Endicott (published in 1935), a small dictionary written using a defining vocabulary of just 1,490 words. When the Longman Dictionary of Contemporary English was first published in 1978, its most striking feature was its use of a 2,000-word defining vocabulary based on Michael West's General Service List, and since then defining vocabularies have become a standard component of monolingual learner's dictionaries for English and for other languages.
Using a defining vocabulary is not without its problems, and some scholars have argued that it can lead to definitions which are insufficiently precise or accurate, or that words in the list are sometimes used in non-central meanings.[2]The more common view, however, is that the disadvantages are outweighed by the advantages,[3][4]and there is some empirical research which supports this position.[5]Almost all English learner's dictionaries have a defining vocabulary, and these range in size between 2000 and 3000 words, for example:
It is possible that, in electronic dictionaries at least, the need for a controlled defining vocabulary will disappear. In some online dictionaries, such as the Macmillan English Dictionary for Advanced Learners,[6] every word in every definition is hyperlinked to its own entry, so that a user who is unsure of the meaning of a word in a definition can immediately see the definition for the word that is causing problems. However, this strategy works only if all the definitions are written in reasonably accessible language, which argues for some sort of defining vocabulary to be maintained in dictionaries aimed at language learners.
Intermediate-level language learners are likely to have receptive familiarity with most words in a typical 2,000-word defining vocabulary. To accommodate beginning-level learners, the defining vocabulary can be divided into two or more layers, where words in one layer are explained using only the simpler words from the previous layers.[7] This strategy is used in the Learn These Words First multi-layer dictionary, where a 360-word beginning-level defining vocabulary is used to explain a 2,000-word intermediate-level defining vocabulary, which in turn is used to define the remaining words in the dictionary.[8][9] | https://en.wikipedia.org/wiki/Defining_vocabulary
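The layering constraint described above is easy to check mechanically. This sketch uses a toy word list (not the actual Learn These Words First vocabulary) to verify that a definition uses only words permitted at a given layer:

```python
# Toy layered defining vocabulary; real lists contain hundreds or
# thousands of words. Words in layer N may be defined using only
# words from layers 1..N-1.
LAYERS = [
    {"a", "an", "the", "is", "of", "place", "thing", "person", "big", "water"},
    {"building", "city", "many", "live"},
]

def allowed_words(layer):
    """Words usable in definitions at a given layer: all earlier layers."""
    return set().union(*LAYERS[:layer])

def check_definition(definition, layer):
    """Return the words in a definition not permitted at this layer."""
    words = definition.lower().replace(".", "").split()
    return [w for w in words if w not in allowed_words(layer)]

# Defining the layer-2 word "city" using only layer-1 words:
print(check_definition("a big place", 1))         # -> [] (all permitted)
print(check_definition("a large urban area", 1))  # -> ['large', 'urban', 'area']
```

Lexicographers run essentially this kind of check over every definition in the dictionary to enforce the defining vocabulary.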
IMS Vocabulary Definition Exchange (IMS VDEX) is a mark-up language or grammar for controlled vocabularies developed by IMS Global as an open specification, with the Final Specification being approved in February 2004.
IMS VDEX allows the exchange and expression of simple machine-readable lists of human language terms, along with information that may assist a human in understanding the meaning of the various terms, i.e. a flat list of values, a hierarchical tree of values, a thesaurus, a taxonomy, a glossary or a dictionary.
Structurally, a vocabulary has an identifier, a title and a list of terms. Each term has a unique key, titles and (optional) descriptions. A term may have nested terms, so a hierarchical structure can be created. It is possible to define relationships between terms and to add custom metadata to terms.
IMS VDEX supports multilinguality. All values intended to be read by a human, i.e. titles, can be defined in one or more languages.
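The structure described above (unique term keys, multilingual captions, nesting) can be sketched with Python's standard library. The element names used here (vdex, vocabIdentifier, term, termIdentifier, caption, langstring) follow the conventions discussed in this article, but the exact schema, namespaces and attribute names should be checked against the VDEX binding:

```python
import xml.etree.ElementTree as ET

def make_term(parent, key, captions):
    """Append a <term> with a unique key and multilingual captions."""
    term = ET.SubElement(parent, "term")
    ET.SubElement(term, "termIdentifier").text = key
    caption = ET.SubElement(term, "caption")
    for lang, text in captions.items():
        langstring = ET.SubElement(caption, "langstring", language=lang)
        langstring.text = text
    return term

# A tiny bilingual vocabulary with one nested (hierarchical) term.
vdex = ET.Element("vdex")
ET.SubElement(vdex, "vocabIdentifier").text = "http://example.org/vocab/colours"
colour = make_term(vdex, "colour", {"en": "Colour", "de": "Farbe"})
make_term(colour, "colour-red", {"en": "Red", "de": "Rot"})  # nested term

print(ET.tostring(vdex, encoding="unicode"))
```

Nesting a term element inside another term is what encodes the hierarchy; a flat vocabulary would simply list all terms directly under the root.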
VDEX was designed to supplement other IMS specifications and the IEEE LOM standard by giving additional semantic control to tool developers. IMS VDEX could be used for the following purposes. It is used in practice for other purposes as well.
The VDEX Information Model is represented in the diagram. A VDEX file describing a vocabulary comprises a number of information elements, most of which are relatively simple, such as a string representation of the default (human) language or a URI identifying the value domain (or vocabulary). Some of the elements are ‘containers’ – such as a term – that contain additional elements.
Elements may be required or optional, and in some cases, repeatable. Within a term, for example, a description and caption may be defined. Multiple language definitions can be used inside a description by using a langstring element, where the description is paired with the language to be used. Additional elements within a term include media descriptors, which are one or more media files to supplement a term’s description, and metadata, which is used to describe the vocabulary further.
The relationship container defines a relationship between terms by identifying the two terms and specifying the type of relationship, such as a term being broader or narrower than another. The term used to specify the type of relationship may conform to the ISO standards for thesauri.
Vocabulary identifiers are unique, persistent URIs, whereas term or relationship identifiers are locally unique strings. VDEX also allows for a default language and vocabulary name to be given, and for whether the ordering of terms within the vocabulary is significant (order significance) to be specified.
A profile type is specified to describe the type of vocabulary being expressed; different features of the VDEX model are permitted depending on the profile type, providing a common grammar for several classes of vocabulary. For example, it is possible, in some profile types, for terms to be contained within one another and be nested, which is suited to the expression of hierarchical vocabularies. Five profile types exist: lax, thesaurus, hierarchicalTokenTerms, glossaryOrDictionary and flatTokenTerms. The lax profile is the least restrictive and offers the full VDEX model, whereas the flatTokenTerms profile is the most restrictive and lightweight.
VDEX also offers some scope for complex vocabularies, assuming the existence of a well-defined application profile (for exchange interoperability). Some examples are:
Identifiers in VDEX data should be persistent, unique, resolvable, transportable and URI-compliant. Specifically, vocabulary identifiers should be unique URIs, whereas term and relationship identifiers should be locally unique strings. | https://en.wikipedia.org/wiki/IMS_VDEX |
Named-entity recognition (NER) (also known as (named) entity identification, entity chunking, and entity extraction) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.
Most research on NER/NEE systems has been structured as taking an unannotated block of text, such as this one:
Jim bought 300 shares of Acme Corp. in 2006.
And producing an annotated block of text that highlights the names of entities:
[Jim]Person bought 300 shares of [Acme Corp.]Organization in [2006]Time.
In this example, a person name consisting of one token, a two-token company name and a temporal expression have been detected and classified.
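A toy rule-based sketch reproduces this style of annotation on the example sentence; the regular expressions are illustrative heuristics only (real systems use the grammar-based and statistical techniques described below):

```python
import re

# Illustrative heuristics: years as Time, "<Name> Corp." as Organization,
# remaining capitalized words as Person. Not a real NER system.
PATTERNS = [
    (r"\b(?:19|20)\d{2}\b", "Time"),
    (r"\b[A-Z][a-z]+\s+Corp\.", "Organization"),
    (r"\b[A-Z][a-z]+\b", "Person"),
]

def annotate(text):
    spans = []  # accepted (start, end, label); earlier patterns take priority
    for pattern, label in PATTERNS:
        for m in re.finditer(pattern, text):
            if not any(m.start() < e and s < m.end() for s, e, _ in spans):
                spans.append((m.start(), m.end(), label))
    out, prev = [], 0
    for s, e, label in sorted(spans):
        out.append(text[prev:s] + "[" + text[s:e] + "]" + label)
        prev = e
    out.append(text[prev:])
    return "".join(out)

print(annotate("Jim bought 300 shares of Acme Corp. in 2006."))
# -> [Jim]Person bought 300 shares of [Acme Corp.]Organization in [2006]Time.
```

The overlap check implements the "contiguous spans, no nesting" convention: once "Acme Corp." is accepted as an Organization, the substrings "Acme" and "Corp" are no longer available as separate entities.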
State-of-the-art NER systems for English produce near-human performance. For example, the best system entering MUC-7 scored an F-measure of 93.39%, while human annotators scored 97.60% and 96.95%.[1][2]
Notable NER platforms include:
In the expression named entity, the word named restricts the task to those entities for which one or many strings, such as words or phrases, stand (fairly) consistently for some referent. This is closely related to rigid designators, as defined by Kripke,[5][6] although in practice NER deals with many names and referents that are not philosophically "rigid". For instance, the automotive company created by Henry Ford in 1903 can be referred to as Ford or Ford Motor Company, although "Ford" can refer to many other entities as well (see Ford). Rigid designators include proper names as well as terms for certain biological species and substances,[7] but exclude pronouns (such as "it"; see coreference resolution), descriptions that pick out a referent by its properties (see also De dicto and de re), and names for kinds of things as opposed to individuals (for example "Bank").
Full named-entity recognition is often broken down, conceptually and possibly also in implementations,[8] into two distinct problems: detection of names, and classification of the names by the type of entity they refer to (e.g. person, organization, or location).[9] The first phase is typically simplified to a segmentation problem: names are defined to be contiguous spans of tokens, with no nesting, so that "Bank of America" is a single name, disregarding the fact that inside this name, the substring "America" is itself a name. This segmentation problem is formally similar to chunking. The second phase requires choosing an ontology by which to organize categories of things.
Temporal expressions and some numerical expressions (e.g., money, percentages, etc.) may also be considered as named entities in the context of the NER task. While some instances of these types are good examples of rigid designators (e.g., the year 2001), there are also many invalid ones (e.g., I take my vacations in "June"). In the first case, the year 2001 refers to the 2001st year of the Gregorian calendar. In the second case, the month June may refer to the month of an undefined year (past June, next June, every June, etc.). It is arguable that the definition of named entity is loosened in such cases for practical reasons. The definition of the term named entity is therefore not strict and often has to be explained in the context in which it is used.[10]
Certain hierarchies of named entity types have been proposed in the literature. BBN categories, proposed in 2002, are used for question answering and consist of 29 types and 64 subtypes.[11] Sekine's extended hierarchy, proposed in 2002, is made up of 200 subtypes.[12] More recently, in 2011, Ritter used a hierarchy based on common Freebase entity types in ground-breaking experiments on NER over social media text.[13]
To evaluate the quality of an NER system's output, several measures have been defined. The usual measures are called precision, recall, and F1 score. However, several issues remain in just how to calculate those values.
These statistical measures work reasonably well for the obvious cases of finding or missing a real entity exactly; and for finding a non-entity. However, NER can fail in many other ways, many of which are arguably "partially correct", and should not be counted as complete success or failures. For example, identifying a real entity, but:
One overly simple method of measuring accuracy is merely to count what fraction of all tokens in the text were correctly or incorrectly identified as part of entity references (or as being entities of the correct type). This suffers from at least two problems: first, the vast majority of tokens in real-world text are not part of entity names, so the baseline accuracy (always predict "not an entity") is extravagantly high, typically >90%; and second, mispredicting the full span of an entity name is not properly penalized (finding only a person's first name when his last name follows might be scored as ½ accuracy).
In academic conferences such as CoNLL, a variant of the F1 score has been defined as follows:[9]
It follows from the above definition that any prediction that misses a single token, includes a spurious token, or has the wrong class, is a hard error and does not contribute positively to either precision or recall. Thus, this measure may be said to be pessimistic: it can be the case that many "errors" are close to correct, and might be adequate for a given purpose. For example, one system might always omit titles such as "Ms." or "Ph.D.", but be compared to a system or ground-truth data that expects titles to be included. In that case, every such name is treated as an error. Because of such issues, it is important actually to examine the kinds of errors, and decide how important they are given one's goals and requirements.
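Under the CoNLL convention this can be sketched as set intersection over exact (start, end, type) spans; a minimal illustration of the scoring rule, not the official evaluation script:

```python
def entity_f1(gold, pred):
    """CoNLL-style exact-match scoring: gold and pred are sets of
    (start, end, type) spans; only identical spans count as true positives."""
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 3, "PER"), (25, 35, "ORG"), (39, 43, "TIME")}
pred = {(0, 3, "PER"), (25, 30, "ORG"), (39, 43, "TIME")}  # ORG span truncated
print(entity_f1(gold, pred))  # truncated span is a hard error: P = R = F1 = 2/3
```

Note how the truncated organization span is penalized twice, once as a false positive and once as a false negative, which is exactly the pessimism discussed above.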
Evaluation models based on token-by-token matching have been proposed.[14] Such models may be given partial credit for overlapping matches (such as using the Intersection over Union criterion). They allow a finer-grained evaluation and comparison of extraction systems.
NER systems have been created that use linguistic grammar-based techniques as well as statistical models such as machine learning. Hand-crafted grammar-based systems typically obtain better precision, but at the cost of lower recall and months of work by experienced computational linguists.[15] Statistical NER systems typically require a large amount of manually annotated training data. Semisupervised approaches have been suggested to avoid part of the annotation effort.[16][17]
Many different classifier types have been used to perform machine-learned NER, with conditional random fields being a typical choice.[18]
In 2001, research indicated that even state-of-the-art NER systems were brittle, meaning that NER systems developed for one domain did not typically perform well on other domains.[19]Considerable effort is involved in tuning NER systems to perform well in a new domain; this is true for both rule-based and trainable statistical systems.
Early work in NER systems in the 1990s was aimed primarily at extraction from journalistic articles. Attention then turned to processing of military dispatches and reports. Later stages of the automatic content extraction (ACE) evaluation also included several types of informal text styles, such as weblogs and text transcripts from conversational telephone speech. Since about 1998, there has been a great deal of interest in entity identification in the molecular biology, bioinformatics, and medical natural language processing communities. The most common entity of interest in that domain has been names of genes and gene products. There has also been considerable interest in the recognition of chemical entities and drugs in the context of the CHEMDNER competition, with 27 teams participating in this task.[20]
Despite the high F1 numbers reported on the MUC-7 dataset, the problem of named-entity recognition is far from being solved. The main efforts are directed at reducing annotation labor by employing semi-supervised learning,[16][21] at robust performance across domains,[22][23] and at scaling up to fine-grained entity types.[12][24] In recent years, many projects have turned to crowdsourcing, which is a promising solution to obtain high-quality aggregate human judgments for supervised and semi-supervised machine learning approaches to NER.[25] Another challenging task is devising models to deal with linguistically complex contexts such as Twitter and search queries.[26]
Some researchers have compared NER performance obtained with different statistical models, such as HMM (hidden Markov model), ME (maximum entropy), and CRF (conditional random fields), and with different feature sets.[27] Others have recently proposed graph-based semi-supervised learning models for language-specific NER tasks.[28]
A recently emerging task of identifying "important expressions" in text and cross-linking them to Wikipedia[29][30][31] can be seen as an instance of extremely fine-grained named-entity recognition, where the types are the actual Wikipedia pages describing the (potentially ambiguous) concepts. Below is an example output of a Wikification system:
Another field that has seen progress but remains challenging is the application of NER to Twitter and other microblogs, considered "noisy" due to non-standard orthography, shortness and informality of texts.[32][33] NER challenges in English Tweets have been organized by research communities to compare performances of various approaches, such as bidirectional LSTMs, Learning-to-Search, or CRFs.[34][35][36] | https://en.wikipedia.org/wiki/Named-entity_recognition
Nomenclature (UK: /noʊˈmɛŋklətʃə, nə-/, US: /ˈnoʊmənkleɪtʃər/)[1][2] is a system of names or terms, or the rules for forming these terms in a particular field of arts or sciences.[3] (The theoretical field studying nomenclature is sometimes referred to as onymology or taxonymy.[4]) The principles of naming vary from the relatively informal conventions of everyday speech to the internationally agreed principles, rules, and recommendations that govern the formation and use of the specialist terminology used in scientific and any other disciplines.[5]
Naming "things" is a part of general human communication using words and language: it is an aspect of everyday taxonomy as people distinguish the objects of their experience, together with their similarities and differences, which observers identify, name and classify. The use of names, as the many different kinds of nouns embedded in different languages, connects nomenclature to theoretical linguistics, while the way humans mentally structure the world in relation to word meanings and experience relates to the philosophy of language.
Onomastics, the study of proper names and their origins, includes: anthroponymy (concerned with human names, including personal names, surnames and nicknames); toponymy (the study of place names); and etymology (the derivation, history and use of names) as revealed through comparative and descriptive linguistics.
The scientific need for simple, stable and internationally accepted systems for naming objects of the natural world has generated many formal nomenclatural systems.[citation needed] Probably the best known of these nomenclatural systems are the five codes of biological nomenclature that govern the Latinized scientific names of organisms.
The word nomenclature is derived from the Latin nomen ('name') and calare ('to call'). The Latin term nomenclatura refers to a list of names, as does the word nomenclator, which can also indicate a provider or announcer of names.
The study of proper names is known as onomastics, which has a wide-ranging scope that encompasses all names, languages, and geographical regions, as well as cultural areas.[6]
The distinction between onomastics and nomenclature is not readily clear: onomastics is an unfamiliar discipline to most people, and the use of nomenclature in an academic sense is also not commonly known. Although the two fields integrate, nomenclature concerns itself more with the rules and conventions that are used for the formation of names.[citation needed]
Due to social, political, religious, and cultural motivations, things that are the same may be given different names, while different things may be given the same name; closely related similar things may be considered separate, while on the other hand significantly different things might be considered the same.
For example, Hindi and Urdu are both closely related, mutually intelligible Hindustani languages (one being sanskritised and the other arabised). However, they are favored as separate languages by Hindus and Muslims respectively, as seen in the context of Hindu–Muslim conflict culminating in the violence of the 1947 Partition of India. In contrast, mutually unintelligible dialects that differ considerably in structure, such as Moroccan Arabic, Yemeni Arabic, and Lebanese Arabic, are considered to be the same language due to a pan-Islamic religious identity.[7][8][9]
Names provide us with a way of structuring and mapping the world in our minds, so, in some way, they mirror or represent the objects of our experience.
Elucidating the connections between language (especially names and nouns), meaning, and the way we perceive the world has provided a rich field of study for philosophers and linguists. Relevant areas of study include: the distinction between proper names and proper nouns;[10] as well as the relationship between names,[11] their referents,[12] meanings (semantics), and the structure of language.
Modern scientific taxonomy has been described as "basically a Renaissance codification of folk taxonomic principles."[13] Formal systems of scientific nomenclature and classification are exemplified by biological classification. All classification systems are established for a purpose. The scientific classification system anchors each organism within the nested hierarchy of internationally accepted classification categories. Maintenance of this system involves formal rules of nomenclature and periodic international meetings of review. This modern system evolved from the folk taxonomy of prehistory.[14]
Folk taxonomy can be illustrated through the Western tradition of horticulture and gardening. Unlike scientific taxonomy, folk taxonomies serve many purposes. Examples in horticulture would be the grouping of plants, and the naming of these groups, according to their properties and uses:
Folk taxonomy is generally associated with the way rural or indigenous peoples use language to make sense of and organise the objects around them. Ethnobiology frames this interpretation through either "utilitarianists" like Bronislaw Malinowski, who maintain that names and classifications reflect mainly material concerns, or "intellectualists" like Claude Lévi-Strauss, who hold that they spring from innate mental processes.[15] The literature of ethnobiological classifications was reviewed in 2006.[16] Folk classification is defined by the way in which members of a language community name and categorize plants and animals, whereas ethnotaxonomy refers to the hierarchical structure, organic content, and cultural function of biological classification that ethnobiologists find in every society around the world.[16]: 14
Ethnographic studies of the naming and classification of animals and plants in non-Western societies have revealed some general principles that suggest pre-scientific man's conceptual and linguistic method of organising the biological world in a hierarchical way.[17][18][19][20] Such studies indicate that the urge to classify is a basic human instinct.[21][22]
The levels, moving from the most to least inclusive, are: unique beginner (e.g. plant or animal), life-form, intermediate, generic, specific, and varietal.
In almost all cultures objects are named using one or two words equivalent to 'kind' (genus) and 'particular kind' (species).[13] When made up of two words (a binomial) the name usually consists of a noun (like salt, dog or star) and an adjectival second word that helps describe the first, and therefore makes the name, as a whole, more "specific": for example, lap dog, sea salt, or film star. The meaning of the noun used for a common name may have been lost or forgotten (whelk, elm, lion, shark, pig), but when the common name is extended to two or more words much more is conveyed about the organism's use, appearance or other special properties (sting ray, poison apple, giant stinking hogweed, hammerhead shark). These noun–adjective binomials are just like our own names, with a family name or surname like Simpson and an adjectival Christian name or forename that specifies which Simpson, say Homer Simpson. It seems reasonable to assume that the form of scientific names we call binomial nomenclature is derived from this simple and practical way of constructing common names—but with the use of Latin as a universal language.
In keeping with the utilitarian view, other authors maintain that ethnotaxonomies resemble more a "complex web of resemblances" than a neat hierarchy.[23] Likewise, a recent study has suggested that some folk taxonomies display more than six ethnobiological categories.[24] Others go further and even doubt the reality of such categories,[25] especially those above the generic name level.[26]
A name is a label for any noun: names can identify a class or category of things, or a single thing, either uniquely or within a given context. Names are given, for example, to humans or any other organisms, places, products—as in brand names—and even to ideas or concepts. It is names as nouns that are the building blocks of nomenclature.
The word name is possibly derived from the hypothesised Proto-Indo-European word nomn.[27] The distinction between names and nouns, if made at all, is extremely subtle,[28] although clearly noun refers to names as lexical categories and their function within the context of language,[29] rather than as "labels" for objects and properties.
Human personal names, also referred to as prosoponyms,[30] are presented, used and categorised in many ways depending on the language and culture. In most cultures (Indonesia is one exception) it is customary for individuals to be given at least two names. In Western culture, the first name is given at birth or shortly thereafter and is referred to as the given name, the forename, the baptismal name (if given then), or simply the first name. In England prior to the Norman invasion of 1066, small communities of Celts, Anglo-Saxons and Scandinavians generally used single names: each person was identified by a single name, either a personal name or a nickname. As the population increased, it gradually became necessary to identify people further—giving rise to names like John the butcher, Henry from Sutton, and Roger son of Richard... which naturally evolved into John Butcher, Henry Sutton, and Roger Richardson. We now know this additional name variously as the second name, last name, family name, surname or occasionally the byname, and this natural tendency was accelerated by the Norman tradition of using surnames that were fixed and hereditary within individual families. In combination these two names are now known as the personal name or, simply, the name. There are many exceptions to this general rule: Westerners often insert a third or more names between the given and surnames; Chinese and Hungarian names have the family name preceding the given name; females now often retain their maiden names (their family surname) or combine, using a hyphen, their maiden name and the surname of their husband; some East Slavic nations insert the patronym (a name derived from the given name of the father) between the given and the family name; in Iceland the given name is used with the patronym, or matronym (a name derived from the given name of the mother), and surnames are rarely used. Nicknames (sometimes called hypocoristic names) are informal names used mostly between friends.
The distinction between proper names and common names is that proper names denote a unique entity, e.g. London Bridge, while common names are used in a more general sense in reference to a class of objects, e.g. bridge. Many proper names are obscure in meaning, lacking any apparent sense in the way that ordinary words have meaning. Collective nouns refer to groups, even when they are inflected for the singular, e.g. "committee". Concrete nouns like "cabbage" refer to physical bodies that can be observed by at least one of the senses, while abstract nouns, like "love" and "hate", refer to abstract objects. In English, many abstract nouns are formed by adding noun-forming suffixes ('-ness', '-ity', '-tion') to adjectives or verbs, e.g. "happiness", "serenity", "concentration". Pronouns like "he", "it", "which", and "those" stand in place of nouns in noun phrases.
The capitalization of nouns varies with language and even the particular context: journals often have their own house styles for common names.
Distinctions may be made between particular kinds of names simply by using the suffix -onym, from the Greek ónoma (ὄνομα, 'name'). So we have, for example, hydronyms, which name bodies of water; synonyms, which are names with the same meaning; and so on. The entire field could be described as chrematonymy—the names of things.
Toponyms are proper names given to various geographical features (geonyms), and also to cosmic features (cosmonyms). This could include names of mountains, rivers, seas, villages, towns, cities, countries, planets, stars etc. Toponymy can be further divided into specialist branches, like: choronymy, the study of proper names of regions and countries; econymy, the study of proper names of villages, towns and cities; hodonymy, the study of proper names of streets and roads; hydronymy, the study of proper names of water bodies; oronymy, the study of proper names of mountains and hills, etc.[31][32][33]
Toponymy has popular appeal because of its socio-cultural and historical interest and its significance for cartography. However, work on the etymology of toponyms has found that while many place names are descriptive, honorific or commemorative, frequently they have no meaning, or the meaning is obscure or lost. Also, the many categories of names are frequently interrelated: for example, many place-names are derived from personal names (Victoria), many names of planets and stars are derived from the names of mythological characters (Venus, Neptune), and many personal names are derived from place-names, names of nations and the like (Wood, Bridge).[34][35]
In a strictly scientific sense, nomenclature is regarded as a part of taxonomy (though distinct from it). Moreover, the precision demanded by science in the accurate naming of objects in the natural world has resulted in a variety of codes of nomenclature (worldwide-accepted sets of rules on biological classification).
Taxonomy can be defined as the study of classification including its principles, procedures and rules,[36]: 8 while classification itself is the ordering of taxa (the objects of classification) into groups based on similarities or differences.[37][38] Doing taxonomy entails identifying, describing,[39] and naming taxa;[40] therefore, in the scientific sense, nomenclature is the branch of taxonomy concerned with the application of scientific names to taxa, based on a particular classification scheme, in accordance with agreed international rules and conventions.
Identification determines whether a particular organism matches a taxon that has already been classified and named – so classification must precede identification.[41] This procedure is sometimes referred to as determination.[36]: 5
Although Linnaeus' system of binomial nomenclature was rapidly adopted after the publication of his Species Plantarum and Systema Naturae in 1753 and 1758 respectively, it was a long time before there was international consensus concerning the more general rules governing biological nomenclature. The first botanical code was produced in 1905, the zoological code in 1889, and the cultivated plant code in 1953. Agreement on the nomenclature and symbols for genes emerged in 1979.
Over the last few hundred years, the number of identified astronomical objects has risen from hundreds to over a billion, and more are discovered every year. Astronomers need universal systematic designations to unambiguously identify all of these objects using astronomical naming conventions, while assigning names to the most interesting objects and, where relevant, naming important or interesting features of those objects.
IUPAC nomenclature is a system of naming chemical compounds and of describing the science of chemistry in general. It is maintained by the International Union of Pure and Applied Chemistry.
Similar compendia exist for biochemistry[51] (in association with the IUBMB), analytical chemistry[52] and macromolecular chemistry.[53] These books are supplemented by shorter recommendations for specific circumstances which are published from time to time in the journal Pure and Applied Chemistry. These systems can be accessed through the International Union of Pure and Applied Chemistry (IUPAC).
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.[1]
Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages.[2] For instance, the definition and ontology of economics is a primary concern in Marxist economics,[3] but also in other subfields of economics.[4] An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management).
What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science),[5] and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence).
Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Artificial intelligence has retained considerable attention regarding applied ontology in subfields like natural language processing within machine translation and knowledge representation, but ontology editors are now often used in a range of fields, including biomedical informatics[6] and industry.[7] Such efforts often use ontology editing tools such as Protégé.[8]
Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history.
The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse"; see classical compounds for this type of word formation.[9][10]
While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).[11]
The first occurrence in English of ontology, as recorded by the OED (Oxford English Dictionary, online edition, 2008), came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey.
Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems.[citation needed] AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, which was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real-world or robotic grounding,[12][13] publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for an AAAI Summer Symposium on Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings.[14] Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.[15]
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber[16] used ontology as a technical term in computer science closely related to the earlier ideas of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization:
An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.[17]
Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993):
Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms.[16]
As a refinement of Gruber's definition, Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."[19]
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
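These four building blocks can be sketched with ordinary data structures. The example below is a hand-made toy, not any published ontology: every class, individual, attribute, and relation name (Person, Employee, worksFor, and so on) is invented for illustration.

```python
# A toy ontology sketch: classes with a subclass hierarchy, individuals
# (instances), attributes on individuals, and relations between them.
# All names here are illustrative, not taken from any real ontology.

classes = {
    "Person": None,          # top-level class (no superclass)
    "Employee": "Person",    # Employee is a subclass of Person
    "Organization": None,
}

individuals = {
    "alice": "Employee",     # alice is an instance of Employee
    "acme": "Organization",
}

attributes = {
    "alice": {"name": "Alice", "employee_number": "12345"},
}

relations = [
    ("alice", "worksFor", "acme"),   # a relation between two individuals
]

def is_a(individual, cls):
    """True if the individual's class equals cls or is a subclass of it."""
    c = individuals[individual]
    while c is not None:
        if c == cls:
            return True
        c = classes[c]
    return False

print(is_a("alice", "Person"))   # prints True: Employee is under Person
```

Even this tiny sketch supports a little subsumption reasoning (walking the subclass chain), which is the kind of inference an ontology language such as OWL formalizes.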
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.)[citation needed].
At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies,[20] but this area of research is still ongoing, and only recently has the issue been sidestepped by having multiple domain ontologies share the same upper ontology, as in the OBO Foundry.
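Why a shared upper ontology makes merging cheaper can be sketched in a few lines: when each domain ontology maps its classes to a common set of upper categories, candidate matches can be found by joining on those categories instead of hand-comparing every pair of terms. All class and category names below are invented for the example.

```python
# Sketch: two independently built domain ontologies map their classes to
# categories in a shared upper ontology, so a merge can start from pairs
# that fall under the same upper category instead of hand-matching
# every pair of terms. All names are illustrative.

hr_ontology = {          # domain ontology A (human resources)
    "Employee": "Agent",
    "Payslip": "InformationObject",
}

crm_ontology = {         # domain ontology B (customer relations)
    "Customer": "Agent",
    "Invoice": "InformationObject",
}

def merge_candidates(a, b):
    """Pair classes from two domain ontologies that share an upper category."""
    pairs = []
    for class_a, cat_a in a.items():
        for class_b, cat_b in b.items():
            if cat_a == cat_b:
                pairs.append((class_a, class_b, cat_a))
    return pairs

for ca, cb, cat in merge_candidates(hr_ontology, crm_ontology):
    print(f"{ca} ~ {cb}  (both under {cat})")
```

The join only proposes candidates; deciding whether Employee and Customer should actually be merged, related, or kept distinct still needs a human or further axioms, which is why the text calls merging "largely manual".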
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use include BFO, the BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE.[21][22] WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.[23]
The Gellish ontology is an example of a combination of an upper and a domain ontology.
A survey of ontology visualization methods is presented by Katifori et al.[24] An updated survey of ontology visualization methods and tools was published by Dudás et al.[25] The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al.[26] A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL).[27]
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.[28] It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.[29][30]
Ontology engineering aims to make explicit the knowledge contained in software applications and organizational procedures for a particular domain. It offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages.
Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc.[31]
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.[32]
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.[citation needed]
An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
The W3C Linking Open Data community project coordinates attempts to converge different ontologies into a worldwide Semantic Web.
The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries.
The following are libraries of human-selected ontologies.
The following are both directories and search engines.
In general, ontologies can be used beneficially in several fields.
Terminology is a group of specialized words and respective meanings in a particular field, and also the study of such terms and their use;[1] the latter meaning is also known as terminology science. A term is a word, compound word, or multi-word expression that in specific contexts is given specific meanings—these may deviate from the meanings the same words have in other contexts and in everyday language.[2] Terminology is a discipline that studies, among other things, the development of such terms and their interrelationships within a specialized domain. Terminology differs from lexicography, as it involves the study of concepts, conceptual systems and their labels (terms), whereas lexicography studies words and their meanings.
Terminology is a discipline that systematically studies the "labelling or designating of concepts" particular to one or more subject fields or domains of human activity. It does this through the research and analysis of terms in context for the purpose of documenting and promoting consistent usage. Terminology can be limited to one or more languages (for example, "multilingual terminology" and "bilingual terminology"), or may have an interdisciplinary focus on the use of terms in different fields.
The terminology discipline consists mainly of the following aspects:
A distinction is made between two types of terminology work:
Ad hoc terminology is prevalent in the translation profession, where a translation for a specific term (or group of terms) is required quickly to solve a particular translation problem.
Nomenclature comprises types of terminology especially having to do with general ontology, applied ontology, and taxonomy (categorizations and classifications, such as taxonomy for life forms, taxonomy for search engines, and so on).
A terminologist seeks to hone categorical organization by improving the accuracy and content of a field's terminology. Technical industries and standardization institutes compile their own glossaries. This provides the consistency needed in the various areas—fields and branches, movements and specialties—to work with core terminology that can then supply material for the discipline's traditional and doctrinal literature.
Terminology is also key in boundary-crossing problems, such as in language translation and social epistemology.
Terminology helps to build bridges and to extend one area into another. Translators research the terminology of the languages they translate. Terminology is taught alongside translation in universities and translation schools. Large translation departments and translation bureaus have a terminology section.
Terminology science is a branch of linguistics studying special vocabulary.
The main objects of terminological studies are special lexical units (or special lexemes), first of all terms. They are analysed from the point of view of their origin, formal structure, their meanings and also functional features. Terms are used to denote concepts, so terminology science also concerns itself with the formation and development of concepts, as well as with the principles of exposing the existing relations between concepts and classifying concepts, with the principles of defining concepts, and with appraising existing definitions. Considering that the characteristics and functioning of a term depend heavily on its lexical surroundings, it is now common to view as the main object of terminology science not separate terms but rather the whole terminology used in a particular field of knowledge (also called a subject field).
Terminological research started seventy years ago and has been especially fruitful in the last forty years. In that time the main types of special lexical units, such as terms proper, nomens, terminoids, prototerms, preterms and quasiterms, were singled out and studied.[further explanation needed]
The main principles of terminological work were elaborated, and terminologies of the leading European languages belonging to many subject fields were described and analysed. It should be mentioned that in the former USSR terminological studies were conducted on an especially large scale: while in the 1940s only four terminological dissertations were successfully defended, in the 1950s there were 50 such dissertations, in the 1960s their number reached 231, in the 1970s – 463 and in the 1980s – 1110.
As a result of the development and specialisation of terminological studies, some branches of terminology science – such as typological terminology science, semasiological terminology science, terminological derivatology, comparative terminology science, terminography, functional terminology science, cognitive terminology science, historical terminology science and some branch terminology sciences – have gained the status of independent scientific disciplines.
Terminological theories include the general theory of terminology,[7] socioterminology,[8] the communicative theory of terminology,[9] sociocognitive terminology,[10] and frame-based terminology.[11]
The Universal Data Element Framework (UDEF) was[1] a controlled vocabulary developed by The Open Group. It provided a framework for categorizing, naming, and indexing data. It assigned to every item of data a structured alphanumeric tag plus a controlled vocabulary name that describes the meaning of the data. This allowed relating data elements to similar elements defined by other organizations.
UDEF defined a Dewey-decimal-like code for each concept. For example, an "employee number" is often used in human resource management. It has the UDEF tag a.5_12.35.8 and the controlled vocabulary description "Employee.PERSON_Employer.Assigned.IDENTIFIER".
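The shape of such a tag can be unpacked mechanically: the segment before the underscore is a dotted path through one tree and the segment after it a dotted path through another, mirroring the two halves of the vocabulary name. The sketch below only exposes that structure; it does not consult the official UDEF registry, so it cannot translate the codes into words.

```python
# Structural sketch of a UDEF-style tag such as "a.5_12.35.8": the
# underscore separates two dotted, Dewey-decimal-like paths, matching
# the two halves of the vocabulary name (object class vs. property).
# This parser knows only the structure, not the official code registry.

def parse_udef_tag(tag):
    object_part, property_part = tag.split("_", 1)
    return {
        "object_path": object_part.split("."),
        "property_path": property_part.split("."),
    }

parsed = parse_udef_tag("a.5_12.35.8")
print(parsed["object_path"])    # ['a', '5']
print(parsed["property_path"])  # ['12', '35', '8']
```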
UDEF has been superseded by the Open Data Element Framework (ODEF).[2]
In an application used by a hospital, the last name and first name of several people could include the following example concepts:
For the examples above, the following UDEF IDs are available:
In metadata, a vocabulary-based transformation (VBT) is a transformation aided by the use of semantic equivalence statements within a controlled vocabulary.
Many organizations today require communication between two or more computers. Although many standards exist to exchange data between computers, such as HTML or email, there is still much structured information that needs to be exchanged between computers that is not standardized. The process of mapping one source of data into another is often a slow and labor-intensive process.
VBT is a possible way to avoid much of the time and cost of manual data mapping using traditional extract, transform, load technologies.
The term vocabulary-based transformation was first defined by Roy Shulte of the Gartner Group around May 2003 and appeared in the annual "hype cycle" for integration.
VBT allows computer systems integrators to more automatically "look up" the definitions of data elements in a centralized data dictionary and use that definition and the equivalent mappings to transform that data element into a foreign namespace.
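A minimal sketch of this idea, in Python with hypothetical schema and field names: each local field is mapped once to a shared concept in the central data dictionary, and a source-to-target transformation is then derived automatically rather than hand-coded for every pair of systems:

```python
# Hypothetical semantic-equivalence map: (system, local field) -> shared concept.
# Each system is mapped to the dictionary once; pairwise mappings come for free.
SEMANTIC_MAP = {
    ("hospital_a", "surname"):  "Person.Last.NAME",
    ("hospital_b", "lastName"): "Person.Last.NAME",
}

def transform(record, src, dst):
    """Rename the fields of `record` from schema `src` to schema `dst`
    by routing each field through its shared dictionary concept."""
    concept_to_dst = {c: f for (sys, f), c in SEMANTIC_MAP.items() if sys == dst}
    out = {}
    for field, value in record.items():
        concept = SEMANTIC_MAP.get((src, field))
        if concept is not None and concept in concept_to_dst:
            out[concept_to_dst[concept]] = value
    return out

converted = transform({"surname": "Smith"}, "hospital_a", "hospital_b")
```

With N systems, this needs N mappings to the dictionary instead of N(N−1) pairwise mappings.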
The Web Ontology Language (OWL) also supports three semantic equivalence statements: owl:equivalentClass, owl:equivalentProperty, and owl:sameAs. | https://en.wikipedia.org/wiki/Vocabulary-based_transformation
The EXtensible Cross-Linguistic Automatic Information Machine (EXCLAIM) was an integrated tool for cross-language information retrieval (CLIR), created at the University of California, Santa Cruz in early 2006, with some support for more than a dozen languages. The lead developers were Justin Nuger and Jesse Saba Kirchner.
Early work on CLIR depended on manually constructed parallel corpora for each pair of languages. This method is labor-intensive compared to parallel corpora created automatically. A more efficient way of finding data to train a CLIR system is to use matching pages on the web which are written in different languages.[1]
EXCLAIM capitalizes on the idea of latent parallel corpora on the web by automating the alignment of such corpora in various domains. The most significant of these is Wikipedia itself, which includes articles in 250 languages. The role of EXCLAIM is to use semantics and linguistic analytic tools to align the information in these Wikipedias so that they can be treated as parallel corpora. EXCLAIM is also extensible to incorporate information from many other sources, such as the Chinese Community Health Resource Center (CCHRC).
One of the main goals of the EXCLAIM project is to provide, for minority languages and endangered languages, the kind of computational tools and CLIR tools which are often available only for powerful or prosperous majority languages.
In 2009, EXCLAIM was in a beta state, with varying degrees of functionality for different languages. Support for CLIR using the Wikipedia dataset and the most current version of EXCLAIM (v.0.5), including full UTF-8 support and Porter stemming for the English component, was available for the following twenty-three languages:
Support using the Wikipedia dataset and an earlier version of EXCLAIM (v.0.3) is available for the following languages:
Significant developments in the most recent version of EXCLAIM include support for Mandarin Chinese. By developing support for this language, EXCLAIM has added solutions to segmentation and encoding problems which will allow the system to be extended to many other languages written with non-European orthographic conventions. This support is supplied through the Trimming And Reformatting Modular System (TARMS) toolkit.
Future versions of EXCLAIM will extend the system to additional languages. Other goals include incorporation of available latent datasets in addition to the Wikipedia dataset.
The EXCLAIM development plan calls for an integrated CLIR instrument, usable for searching from English for information in any of the supported languages, or from any of the supported languages for information in English, when EXCLAIM 1.0 is released. Future versions will allow searching from any supported language into any other, and searching from and into multiple languages.
EXCLAIM has been incorporated into several projects which rely on cross-language query expansion as part of their backends. One such project is a cross-linguistic readability software generation framework, detailed in work presented at ACL 2009.[2] | https://en.wikipedia.org/wiki/EXCLAIM
The Conference and Labs of the Evaluation Forum (formerly Cross-Language Evaluation Forum), or CLEF, is an organization promoting research in multilingual information access (currently focusing on European languages). Its specific functions are to maintain an underlying framework for testing information retrieval systems and to create repositories of data for researchers to use in developing comparable standards.[1] The organization has held a conference in Europe every September since a first constituting workshop in 2000. From 1997 to 1999, TREC, the similar evaluation conference organised annually in the US, included a track for the evaluation of cross-language IR for European languages. This track was coordinated jointly by NIST and by a group of European volunteers that grew over the years. At the end of 1999, some of the participants decided to transfer the activity to Europe and set it up independently. The aim was to expand coverage to a larger number of languages and to focus on a wider range of issues, including monolingual system evaluation for languages other than English. Over the years, CLEF has been supported by a number of EU-funded projects and initiatives.[2]
CLEF 2019 marked the 20th anniversary of the conference, and it was celebrated by publishing a book[3] on the lessons learned in 20 years of evaluation activities.
Before 2010,[4] CLEF was organised as a workshop co-located with the European Conference on Digital Libraries, consisting of a number of evaluation labs or tracks, similarly to TREC. In 2010, CLEF became a self-sufficiently organised conference with evaluation labs, laboratory workshops, and a main conference track. In 2012, INEX, a workshop on retrieval and access to structured text, previously organised annually at Schloß Dagstuhl, merged with CLEF to become one of its evaluation labs.
Prior to each CLEF conference, participants in evaluation labs receive a set of challenge tasks. The tasks are designed to test various aspects of information retrieval systems and encourage their development. Groups of researchers propose and organize campaigns to satisfy those tasks, and the results are used as benchmarks for the state of the art in the specific areas.[5][6]
In the beginning, CLEF focussed mainly on fairly typical information retrieval tasks, but has moved to more specific tasks. For example, the 2005 interactive image search task worked with illustrating non-fiction texts using images from Flickr[7] and the 2010 medical retrieval task focused on retrieval of computed tomography, MRI, and radiographic images.[8] In 2017, CLEF accommodated a number of tasks, e.g. on identifying biological species from photographs or video clips, on stylistic analysis of authorship, and on health-related information access.
| https://en.wikipedia.org/wiki/CLEF
Agent mining is a research field that combines two areas of computer science: multiagent systems and data mining. It explores how intelligent computer agents can work together to discover, analyze, and learn from large amounts of data more effectively than traditional methods.[1][2]
The interaction and integration between multiagent systems and data mining have a long history.[3][4] The very early work on agent mining focused on agent-based knowledge discovery,[5] agent-based distributed data mining,[6][7] agent-based distributed machine learning,[8] and using data mining to enhance agent intelligence.[9]
The International Workshop on Agents and Data Mining Interaction[10] has been held more than 10 times, co-located with the International Conference on Autonomous Agents and Multi-Agent Systems. Several proceedings are available from Springer's Lecture Notes in Computer Science. | https://en.wikipedia.org/wiki/Agent_mining
In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well-defined notion of normal behavior.[1] Such examples may arouse suspicions of being generated by a different mechanism,[2] or appear inconsistent with the remainder of that set of data.[3]
Anomaly detection finds application in many domains including cybersecurity, medicine, machine vision, statistics, neuroscience, law enforcement and financial fraud, to name only a few. Anomalies were initially sought out for clear rejection or omission from the data to aid statistical analysis, for example to compute the mean or standard deviation. They were also removed to improve predictions from models such as linear regression, and more recently their removal aids the performance of machine learning algorithms. However, in many applications anomalies themselves are of interest and are the most valuable observations in the entire data set, which need to be identified and separated from noise or irrelevant outliers.
Three broad categories of anomaly detection techniques exist.[1] Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involve training a classifier. However, this approach is rarely used in anomaly detection due to the general unavailability of labelled data and the inherently unbalanced nature of the classes. Semi-supervised anomaly detection techniques assume that some portion of the data is labelled. This may be any combination of the normal or anomalous data, but more often than not, the techniques construct a model representing normal behavior from a given normal training data set, and then test the likelihood of a test instance being generated by the model. Unsupervised anomaly detection techniques assume the data is unlabelled and are by far the most commonly used, due to their wider applicability.
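As an illustration of the unsupervised case, the following minimal sketch (not any specific published method) flags points far from the bulk of the data without using any labels, by the classic three-sigma rule:

```python
import statistics

def three_sigma_outliers(data):
    """Unsupervised anomaly detection: flag points more than 3 population
    standard deviations from the mean. No labels are needed."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [x for x in data if abs(x - mu) > 3 * sigma]

# 30 typical readings around 10.0 plus one gross outlier.
data = [10.0] * 30 + [100.0]
outliers = three_sigma_outliers(data)
```

Real unsupervised detectors are more robust (the mean and standard deviation are themselves distorted by the outliers they are meant to find), but the structure is the same: model "normal" from the data itself, then score deviation from it.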
Many attempts have been made in the statistical and computer science communities to define an anomaly. The most prevalent ones include the following, and can be categorised into three groups: those that are ambiguous, those that are specific to a method with pre-defined thresholds usually chosen empirically, and those that are formally defined:
The concept of intrusion detection, a critical component of anomaly detection, has evolved significantly over time. Initially, it was a manual process where system administrators would monitor for unusual activities, such as a vacationing user's account being accessed or unexpected printer activity. This approach was not scalable and was soon superseded by the analysis of audit logs and system logs for signs of malicious behavior.[4]
By the late 1970s and early 1980s, the analysis of these logs was primarily used retrospectively to investigate incidents, as the volume of data made it impractical for real-time monitoring. The affordability of digital storage eventually led to audit logs being analyzed online, with specialized programs being developed to sift through the data. These programs, however, were typically run during off-peak hours due to their computational intensity.[4]
The 1990s brought the advent of real-time intrusion detection systems capable of analyzing audit data as it was generated, allowing for immediate detection of and response to attacks. This marked a significant shift towards proactive intrusion detection.[4]
As the field has continued to develop, the focus has shifted to creating solutions that can be efficiently implemented across large and complex network environments, adapting to the ever-growing variety of security threats and the dynamic nature of modern computing infrastructures.[4]
Anomaly detection is applicable in a very large number and variety of domains, and is an important subarea of unsupervised machine learning. As such it has applications in cyber-security, intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, detecting ecosystem disturbances, defect detection in images using machine vision, medical diagnosis and law enforcement.[5]
Anomaly detection was proposed for intrusion detection systems (IDS) by Dorothy Denning in 1986.[6] Anomaly detection for IDS is normally accomplished with thresholds and statistics, but can also be done with soft computing and inductive learning.[7] Types of features proposed by 1999 included profiles of users, workstations, networks, remote hosts, groups of users, and programs based on frequencies, means, variances, covariances, and standard deviations.[8] The counterpart of anomaly detection in intrusion detection is misuse detection.
Anomaly detection is vital in fintech for fraud prevention.[9][10]
Preprocessing data to remove anomalies can be an important step in data analysis, and is done for a number of reasons. Statistics such as the mean and standard deviation are more accurate after the removal of anomalies, and the visualisation of data can also be improved. In supervised learning, removing the anomalous data from the dataset often results in a statistically significant increase in accuracy.[11][12]
Anomaly detection has become increasingly vital in video surveillance to enhance security and safety.[13][14]With the advent of deep learning technologies, methods using Convolutional Neural Networks (CNNs) and Simple Recurrent Units (SRUs) have shown significant promise in identifying unusual activities or behaviors in video data.[13]These models can process and analyze extensive video feeds in real-time, recognizing patterns that deviate from the norm, which may indicate potential security threats or safety violations.[13]An important aspect for video surveillance is the development of scalable real-time frameworks.[15][16]Such pipelines are required for processing multiple video streams with low computational resources.
In IT infrastructure management, anomaly detection is crucial for ensuring the smooth operation and reliability of services.[17] These are complex systems, composed of many interactive elements and large data quantities, requiring methods to process and reduce this data into a human- and machine-interpretable format.[18] Techniques like the IT Infrastructure Library (ITIL) and monitoring frameworks are employed to track and manage system performance and user experience.[17] Detected anomalies can help identify and pre-empt potential performance degradations or system failures, thus maintaining productivity and business process effectiveness.[17]
Anomaly detection is critical for the security and efficiency of Internet of Things (IoT) systems.[19]It helps in identifying system failures and security breaches in complex networks of IoT devices.[19]The methods must manage real-time data, diverse device types, and scale effectively. Garbe et al.[20]have introduced a multi-stage anomaly detection framework that improves upon traditional methods by incorporating spatial clustering, density-based clustering, and locality-sensitive hashing. This tailored approach is designed to better handle the vast and varied nature of IoT data, thereby enhancing security and operational reliability in smart infrastructure and industrial IoT systems.[20]
Anomaly detection is crucial in the petroleum industry for monitoring critical machinery.[21] Martí et al. used a novel segmentation algorithm to analyze sensor data for real-time anomaly detection.[21] This approach helps promptly identify and address any irregularities in sensor readings, ensuring the reliability and safety of petroleum operations.[21]
In the oil and gas sector, anomaly detection is not just crucial for maintenance and safety, but also for environmental protection.[22]Aljameel et al. propose an advanced machine learning-based model for detecting minor leaks in oil and gas pipelines, a task traditional methods may miss.[22]
Many anomaly detection techniques have been proposed in the literature.[1][23] Their performance usually depends on the data set: for example, some may be suited to detecting local outliers, others global ones, and no method has a systematic advantage over the others when compared across many data sets.[24][25] Almost all algorithms also require the setting of non-intuitive parameters that are critical for performance and usually unknown before application. Some of the popular techniques are mentioned below, broken down into categories:
Also referred to as frequency-based or counting-based, the simplest non-parametric anomaly detection method is to build a histogram with the training data or a set of known normal instances, and if a test point does not fall in any of the histogram bins, mark it as anomalous, or assign an anomaly score to test data based on the height of the bin it falls in.[1] The size of the bins is key to the effectiveness of this technique but must be determined by the implementer.
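A minimal sketch of this histogram approach (hypothetical helper, pure Python; the bin count, here 10, is the implementer-chosen parameter the text mentions):

```python
def histogram_flags(train, test, n_bins=10):
    """Flag each test point as anomalous (True) if it falls outside every
    bin built on the training data, or lands in an empty bin."""
    lo, hi = min(train), max(train)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in train:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    flags = []
    for v in test:
        if v < lo or v > hi:
            flags.append(True)  # outside the histogram's range entirely
        else:
            i = min(int((v - lo) / width), n_bins - 1)
            flags.append(counts[i] == 0)  # an empty bin also counts as anomalous
    return flags

train = [1.0, 1.2, 0.9, 1.1, 1.05, 0.95]
flags = histogram_flags(train, [1.0, 5.0])
```

A scored variant would return the (inverse) bin height rather than a boolean, as in the anomaly-score formulation described above.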
A more sophisticated technique uses kernel functions to approximate the distribution of the normal data. Instances in low probability areas of the distribution are then considered anomalies.[26]
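A sketch of the kernel idea with a Gaussian kernel (a simplified stand-in for the cited methods; the bandwidth is a free smoothing parameter, assumed here to be 1.0):

```python
import math

def kde_score(x, train, bandwidth=1.0):
    """Average Gaussian kernel density estimate at x from the training points.
    Low values mean x lies in a low-probability region, i.e. looks anomalous."""
    norm = bandwidth * math.sqrt(2 * math.pi)
    return sum(
        math.exp(-0.5 * ((x - t) / bandwidth) ** 2) / norm for t in train
    ) / len(train)

normal = [0.9, 1.0, 1.1, 1.0, 0.95]
score_typical = kde_score(1.0, normal)   # near the training mass: high density
score_outlier = kde_score(10.0, normal)  # far from it: density near zero
```

Unlike the histogram, this gives a smooth density, so nearby normal points still contribute to the score of a test point that falls between bins.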
Histogram-based Outlier Score (HBOS) uses value histograms and assumes feature independence for fast predictions.[56]
Dynamic networks, such as those representing financial systems, social media interactions, and transportation infrastructure, are subject to constant change, making anomaly detection within them a complex task. Unlike static graphs, dynamic networks reflect evolving relationships and states, requiring adaptive techniques for anomaly detection.
Many of the methods discussed above only yield an anomaly score prediction, which often can be explained to users as the point being in a region of low data density (or relatively low density compared to its neighbors' densities). In explainable artificial intelligence, users demand methods with higher explainability. Some methods allow for more detailed explanations: | https://en.wikipedia.org/wiki/Anomaly_detection
Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Factor analysis searches for such joint variations in response to unobserved latent variables. The observed variables are modelled as linear combinations of the potential factors plus "error" terms, hence factor analysis can be thought of as a special case of errors-in-variables models.[1]
Simply put, the factor loading of a variable quantifies the extent to which the variable is related to a given factor.[2]
A common rationale behind factor analytic methods is that the information gained about the interdependencies between observed variables can be used later to reduce the set of variables in a dataset. Factor analysis is commonly used in psychometrics, personality psychology, biology, marketing, product management, operations research, finance, and machine learning. It may help to deal with data sets where there are large numbers of observed variables that are thought to reflect a smaller number of underlying/latent variables. It is one of the most commonly used inter-dependency techniques and is used when the relevant set of variables shows a systematic inter-dependence and the objective is to find out the latent factors that create a commonality.
The model attempts to explain a set of $p$ observations in each of $n$ individuals with a set of $k$ common factors ($f_{i,j}$), where there are fewer factors per unit than observations per unit ($k<p$). Each individual has $k$ of their own common factors, and these are related to the observations via the factor loading matrix ($L\in\mathbb{R}^{p\times k}$), for a single observation, according to

$$x_{i,m}-\mu_{i}=l_{i,1}f_{1,m}+\dots+l_{i,k}f_{k,m}+\varepsilon_{i,m}$$
where
In matrix notation

$$X-\mathrm{M}=LF+\varepsilon$$
where observation matrix $X\in\mathbb{R}^{p\times n}$, loading matrix $L\in\mathbb{R}^{p\times k}$, factor matrix $F\in\mathbb{R}^{k\times n}$, error term matrix $\varepsilon\in\mathbb{R}^{p\times n}$ and mean matrix $\mathrm{M}\in\mathbb{R}^{p\times n}$, whereby the $(i,m)$th element is simply $\mathrm{M}_{i,m}=\mu_{i}$.
Also we will impose the following assumptions on $F$:

1. $F$ and $\varepsilon$ are independent,
2. $\mathrm{E}(F)=0$ (where $\mathrm{E}$ is the expectation),
3. $\mathrm{Cov}(F)=I$ (where $I$ is the identity matrix).
Suppose $\mathrm{Cov}(X-\mathrm{M})=\Sigma$. Then

$$\Sigma=\mathrm{Cov}(X-\mathrm{M})=\mathrm{Cov}(LF+\varepsilon),$$
and therefore, from conditions 1 and 2 imposed on $F$ above, $\mathrm{E}[LF]=L\,\mathrm{E}[F]=0$ and $\mathrm{Cov}(LF+\varepsilon)=\mathrm{Cov}(LF)+\mathrm{Cov}(\varepsilon)$, giving

$$\Sigma=L\,\mathrm{Cov}(F)\,L^{T}+\mathrm{Cov}(\varepsilon),$$
or, setting $\Psi:=\mathrm{Cov}(\varepsilon)$,

$$\Sigma=LL^{T}+\Psi.$$
For any orthogonal matrix $Q$, if we set $L'=LQ$ and $F'=Q^{T}F$, the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an orthogonal transformation.
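This rotational indeterminacy is easy to check numerically. The sketch below (NumPy, with arbitrary random loadings as an assumption) verifies that the common-variance term L L^T, and hence the model covariance it determines, is unchanged when L is replaced by LQ for an orthogonal Q:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(5, 2))  # arbitrary loading matrix: p=5 variables, k=2 factors

theta = 0.7                  # any rotation angle gives an orthogonal 2x2 matrix Q
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

L_rot = L @ Q                # rotated loadings L' = LQ
# L' L'^T = L Q Q^T L^T = L L^T, since Q Q^T = I:
same = np.allclose(L @ L.T, L_rot @ L_rot.T)
```

This is why factor solutions are routinely rotated (varimax, oblimin, ...) after fitting: the fit is identical, so a rotation can be chosen purely for interpretability.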
Suppose a psychologist has the hypothesis that there are two kinds of intelligence, "verbal intelligence" and "mathematical intelligence", neither of which is directly observed.[note 1] Evidence for the hypothesis is sought in the examination scores from each of 10 different academic fields of 1000 students. If each student is chosen randomly from a large population, then each student's 10 scores are random variables. The psychologist's hypothesis may say that for each of the 10 academic fields, the score averaged over the group of all students who share some common pair of values for verbal and mathematical "intelligences" is some constant times their level of verbal intelligence plus another constant times their level of mathematical intelligence, i.e., it is a linear combination of those two "factors". The numbers for a particular subject, by which the two kinds of intelligence are multiplied to obtain the expected score, are posited by the hypothesis to be the same for all intelligence level pairs, and are called "factor loadings" for this subject. For example, the hypothesis may hold that the predicted average student's aptitude in the field of astronomy is

$$\{10\times{\text{the student's verbal intelligence}}\}+\{6\times{\text{the student's mathematical intelligence}}\}.$$
The numbers 10 and 6 are the factor loadings associated with astronomy. Other academic subjects may have different factor loadings.
Two students assumed to have identical degrees of verbal and mathematical intelligence may have different measured aptitudes in astronomy because individual aptitudes differ from average aptitudes (predicted above) and because of measurement error itself. Such differences make up what is collectively called the "error" — a statistical term that means the amount by which an individual, as measured, differs from what is average for or predicted by his or her levels of intelligence (see errors and residuals in statistics).
The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data.
In the following, matrices will be indicated by indexed variables. "Academic subject" indices will be indicated using letters $a$, $b$ and $c$, with values running from $1$ to $p$, which is equal to $10$ in the above example. "Factor" indices will be indicated using letters $p$, $q$ and $r$, with values running from $1$ to $k$, which is equal to $2$ in the above example. "Instance" or "sample" indices will be indicated using letters $i$, $j$ and $k$, with values running from $1$ to $N$. In the example above, if a sample of $N=1000$ students participated in the $p=10$ exams, the $i$th student's score for the $a$th exam is given by $x_{ai}$. The purpose of factor analysis is to characterize the correlations between the variables $x_{a}$, of which the $x_{ai}$ are a particular instance, or set of observations. In order for the variables to be on equal footing, they are normalized into standard scores $z$:

$$z_{ai}={\frac{x_{ai}-{\hat{\mu}}_{a}}{{\sqrt{N}}\,{\hat{\sigma}}_{a}}}$$
where the sample mean is:

$${\hat{\mu}}_{a}={\frac{1}{N}}\sum_{i}x_{ai}$$
and the sample variance is given by:

$${\hat{\sigma}}_{a}^{2}={\frac{1}{N}}\sum_{i}(x_{ai}-{\hat{\mu}}_{a})^{2}$$
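The standardization step can be sketched as follows (Python; this uses the plain z-score convention of zero mean and unit variance, without the additional $1/\sqrt{N}$ factor sometimes applied to make the data vectors unit length):

```python
import statistics

def standardize(scores):
    """Convert raw scores to standard (z-) scores: zero mean, unit variance.
    Uses the population (1/N) variance convention."""
    mu = statistics.fmean(scores)
    sigma = statistics.pstdev(scores)
    return [(x - mu) / sigma for x in scores]

z = standardize([55.0, 60.0, 65.0, 70.0, 75.0])
```

After this step every variable contributes on an equal footing, so correlations (rather than raw covariances) are what the factor model reproduces.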
The factor analysis model for this particular sample is then:

$$z_{ai}=\ell_{a1}F_{1i}+\ell_{a2}F_{2i}+\varepsilon_{ai}$$
or, more succinctly:

$$z_{ai}=\sum_{p}\ell_{ap}F_{pi}+\varepsilon_{ai}$$
where
In matrix notation, we have

$$Z=LF+\varepsilon$$
Observe that by doubling the scale on which "verbal intelligence" — the first component in each column of $F$ — is measured, and simultaneously halving the factor loadings for verbal intelligence, makes no difference to the model. Thus, no generality is lost by assuming that the standard deviation of the factors for verbal intelligence is $1$. Likewise for mathematical intelligence. Moreover, for similar reasons, no generality is lost by assuming the two factors are uncorrelated with each other. In other words:
where $\delta_{pq}$ is the Kronecker delta ($0$ when $p\neq q$ and $1$ when $p=q$). The errors are assumed to be independent of the factors:
Since any rotation of a solution is also a solution, this makes interpreting the factors difficult. See disadvantages below. In this particular example, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument.
The values of the loadings $L$, the averages $\mu$, and the variances of the "errors" $\varepsilon$ must be estimated given the observed data $X$ and $F$ (the assumption about the levels of the factors is fixed for a given $F$).
The "fundamental theorem" may be derived from the above conditions:
The term on the left is the $(a,b)$-term of the correlation matrix (a $p\times p$ matrix derived as the product of the $p\times N$ matrix of standardized observations with its transpose) of the observed data, and its $p$ diagonal elements will be $1$s. The second term on the right will be a diagonal matrix with terms less than unity. The first term on the right is the "reduced correlation matrix" and will be equal to the correlation matrix except for its diagonal values, which will be less than unity. These diagonal elements of the reduced correlation matrix are called "communalities" (which represent the fraction of the variance in the observed variable that is accounted for by the factors):

$$h_{a}^{2}=1-\psi_{a}=\sum_{p}\ell_{ap}^{2}$$
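For a concrete check, the sketch below (NumPy, with hypothetical loadings for a one-factor model) builds a model correlation matrix from loadings plus diagonal unique variances, and reads the communalities off the diagonal of the reduced correlation matrix:

```python
import numpy as np

# Hypothetical loadings for p=3 observed variables on k=1 factor.
L = np.array([[0.9], [0.8], [0.7]])
psi = 1.0 - (L**2).sum(axis=1)   # unique variances, so each total variance is 1

R = L @ L.T + np.diag(psi)       # model correlation matrix: loadings + uniquenesses
R_reduced = L @ L.T              # "reduced correlation matrix": unit diagonal removed

communalities = np.diag(R_reduced)   # fraction of each variable's variance
                                     # accounted for by the factor
```

Here the diagonal of R is exactly 1 (each standardized variable has unit variance), while the communalities 0.81, 0.64 and 0.49 are the squared loadings, each strictly below unity.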
The sample data $z_{ai}$ will not exactly obey the fundamental equation given above due to sampling errors, inadequacy of the model, etc. The goal of any analysis of the above model is to find the factors $F_{pi}$ and loadings $\ell_{ap}$ which give a "best fit" to the data. In factor analysis, the best fit is defined as the minimum of the mean square error in the off-diagonal residuals of the correlation matrix:[3]
This is equivalent to minimizing the off-diagonal components of the error covariance which, in the model equations, have expected values of zero. This is to be contrasted with principal component analysis, which seeks to minimize the mean square error of all residuals.[3] Before the advent of high-speed computers, considerable effort was devoted to finding approximate solutions to the problem, particularly in estimating the communalities by other means, which then simplifies the problem considerably by yielding a known reduced correlation matrix. This was then used to estimate the factors and the loadings. With the advent of high-speed computers, the minimization problem can be solved iteratively with adequate speed, and the communalities are calculated in the process, rather than being needed beforehand. The MinRes algorithm is particularly suited to this problem, but is hardly the only iterative means of finding a solution.
If the solution factors are allowed to be correlated (as in 'oblimin' rotation, for example), then the corresponding mathematical model uses skew coordinates rather than orthogonal coordinates.
The parameters and variables of factor analysis can be given a geometrical interpretation. The data ($z_{ai}$), the factors ($F_{pi}$) and the errors ($\varepsilon_{ai}$) can be viewed as vectors in an $N$-dimensional Euclidean space (sample space), represented as $\mathbf{z}_{a}$, $\mathbf{F}_{p}$ and ${\boldsymbol{\varepsilon}}_{a}$ respectively. Since the data are standardized, the data vectors are of unit length ($\|\mathbf{z}_{a}\|=1$). The factor vectors define a $k$-dimensional linear subspace (i.e. a hyperplane) in this space, upon which the data vectors are projected orthogonally. This follows from the model equation
and the independence of the factors and the errors: $\mathbf{F}_{p}\cdot{\boldsymbol{\varepsilon}}_{a}=0$. In the above example, the hyperplane is just a 2-dimensional plane defined by the two factor vectors. The projection of the data vectors onto the hyperplane is given by
and the errors are vectors from that projected point to the data point and are perpendicular to the hyperplane. The goal of factor analysis is to find a hyperplane which is a "best fit" to the data in some sense, so it doesn't matter how the factor vectors which define this hyperplane are chosen, as long as they are independent and lie in the hyperplane. We are free to specify them as both orthogonal and normal ($\mathbf{F}_{p}\cdot\mathbf{F}_{q}=\delta_{pq}$) with no loss of generality. After a suitable set of factors are found, they may also be arbitrarily rotated within the hyperplane, so that any rotation of the factor vectors will define the same hyperplane, and also be a solution. As a result, in the above example, in which the fitting hyperplane is two dimensional, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence, or whether the factors are linear combinations of both, without an outside argument.
The data vectors $\mathbf{z}_{a}$ have unit length. The entries of the correlation matrix for the data are given by $r_{ab}=\mathbf{z}_{a}\cdot\mathbf{z}_{b}$. The correlation matrix can be geometrically interpreted as the cosine of the angle between the two data vectors $\mathbf{z}_{a}$ and $\mathbf{z}_{b}$. The diagonal elements will clearly be $1$s and the off-diagonal elements will have absolute values less than or equal to unity. The "reduced correlation matrix" is defined as
The goal of factor analysis is to choose the fitting hyperplane such that the reduced correlation matrix reproduces the correlation matrix as nearly as possible, except for the diagonal elements of the correlation matrix which are known to have unit value. In other words, the goal is to reproduce as accurately as possible the cross-correlations in the data. Specifically, for the fitting hyperplane, the mean square error in the off-diagonal components
is to be minimized, and this is accomplished by minimizing it with respect to a set of orthonormal factor vectors. It can be seen that
The term on the right is just the covariance of the errors. In the model, the error covariance is stated to be a diagonal matrix, and so the above minimization problem will in fact yield a "best fit" to the model: it will yield a sample estimate of the error covariance which has its off-diagonal components minimized in the mean square sense. It can be seen that since the ${\hat{\mathbf{z}}}_{a}$ are orthogonal projections of the data vectors, their length will be less than or equal to the length of the data vector, which is unity. The squares of these lengths are just the diagonal elements of the reduced correlation matrix. These diagonal elements of the reduced correlation matrix are known as "communalities":
Large values of the communalities will indicate that the fitting hyperplane is rather accurately reproducing the correlation matrix. The mean values of the factors must also be constrained to be zero, from which it follows that the mean values of the errors will also be zero.
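With orthonormal factors, the reduced correlation matrix implied by a loading matrix L is L Lᵀ, so the communalities are its diagonal elements, i.e. the row sums of squared loadings. A small sketch with a hypothetical loading matrix (the numbers are illustrative, not from the article):

```python
import numpy as np

# Hypothetical loading matrix: 4 observed variables on 2 orthonormal factors.
L = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.7],
              [0.2, 0.8]])

# Reduced correlation matrix implied by the factor model.
reduced_R = L @ L.T

# Communalities: the squared lengths of the projected data vectors,
# equal to the row sums of squared loadings.
communalities = np.diag(reduced_R)
```

Each communality is at most 1, reflecting that a projection can be no longer than the unit-length data vector it comes from.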
Exploratory factor analysis (EFA) is used to identify complex interrelationships among items and group items that are part of unified concepts.[4] The researcher makes no a priori assumptions about relationships among factors.[4]
Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that the items are associated with specific factors.[4] CFA uses structural equation modeling to test a measurement model whereby loading on the factors allows for evaluation of relationships between observed variables and unobserved variables.[4] Structural equation modeling approaches can accommodate measurement error and are less restrictive than least-squares estimation.[4] Hypothesized models are tested against actual data, and the analysis would demonstrate loadings of observed variables on the latent variables (factors), as well as the correlation between the latent variables.[4]
Principal component analysis (PCA) is a widely used method for factor extraction, which is the first phase of EFA.[4] Factor weights are computed to extract the maximum possible variance, with successive factoring continuing until there is no further meaningful variance left.[4] The factor model must then be rotated for analysis.[4]
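A minimal sketch of this extraction phase (the correlation matrix here is hypothetical): take the leading eigenvectors of the correlation matrix, scaled by the square roots of their eigenvalues, as the unrotated loadings.

```python
import numpy as np

def pca_extract(R, n_factors):
    """Principal-components extraction sketch: the unrotated loading matrix
    is formed from the leading eigenvectors of the correlation matrix R,
    each scaled by the square root of its eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(R)      # ascending order
    order = np.argsort(eigvals)[::-1]         # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    return loadings, eigvals

# Hypothetical 3-variable correlation matrix.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.2],
              [0.3, 0.2, 1.0]])
loadings, eigvals = pca_extract(R, 2)
```

The eigenvalues sum to the trace of R (the number of variables), which is the basis of the variance-accounted-for criteria discussed below.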
Canonical factor analysis, also called Rao's canonical factoring, is a different method of computing the same model as PCA, which uses the principal axis method. Canonical factor analysis seeks factors that have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data.
Common factor analysis, also called principal factor analysis (PFA) or principal axis factoring (PAF), seeks the fewest factors which can account for the common variance (correlation) of a set of variables.
Image factoring is based on the correlation matrix of predicted variables rather than actual variables, where each variable is predicted from the others using multiple regression.
Alpha factoring is based on maximizing the reliability of factors, assuming variables are randomly sampled from a universe of variables. All other methods assume cases to be sampled and variables fixed.
The factor regression model is a combination of the factor model and the regression model; alternatively, it can be viewed as the hybrid factor model,[5] whose factors are partially known.
Researchers wish to avoid such subjective or arbitrary criteria for factor retention as "it made sense to me". A number of objective methods have been developed to solve this problem, allowing users to determine an appropriate range of solutions to investigate.[7] However, these different methods often disagree with one another as to the number of factors that ought to be retained. For instance, parallel analysis may suggest 5 factors while Velicer's MAP suggests 6, so the researcher may request both 5- and 6-factor solutions and discuss each in terms of their relation to external data and theory.
Horn's parallel analysis (PA):[8] A Monte Carlo-based simulation method that compares the observed eigenvalues with those obtained from uncorrelated normal variables. A factor or component is retained if the associated eigenvalue is bigger than the 95th percentile of the distribution of eigenvalues derived from the random data. PA is among the more commonly recommended rules for determining the number of components to retain,[7][9] but many programs fail to include this option (a notable exception being R).[10] However, Formann provided both theoretical and empirical evidence that its application might not be appropriate in many cases, since its performance is considerably influenced by sample size, item discrimination, and type of correlation coefficient.[11]
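A Monte Carlo sketch of Horn's procedure (the simulation count, seed, and example data are arbitrary illustrative choices, not prescribed values):

```python
import numpy as np

def parallel_analysis(X, n_sim=200, percentile=95, seed=0):
    """Horn's parallel analysis sketch: retain components whose observed
    eigenvalues exceed the chosen percentile of eigenvalues obtained from
    uncorrelated normal data of the same shape as X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sim = np.empty((n_sim, p))
    for i in range(n_sim):
        noise = rng.normal(size=(n, p))
        sim[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = np.percentile(sim, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Illustrative data: six variables sharing one strong common factor.
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 1))
X = f @ np.ones((1, 6)) + 0.5 * rng.normal(size=(300, 6))
k = parallel_analysis(X)
```

With a strong common factor, the first observed eigenvalue comfortably exceeds what uncorrelated data produce, so at least one component is retained.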
Velicer's (1976) MAP test,[12] as described by Courtney (2013),[13] "involves a complete principal components analysis followed by the examination of a series of matrices of partial correlations" (p. 397; note that this quotation does not appear in Velicer (1976), and the cited page number falls outside the page range of that article). The squared correlation for Step "0" (see Figure 4) is the average squared off-diagonal correlation for the unpartialed correlation matrix. On Step 1, the first principal component and its associated items are partialed out, and the average squared off-diagonal correlation of the resulting correlation matrix is computed. On Step 2, the first two principal components are partialed out and the resulting average squared off-diagonal correlation is again computed. The computations are carried out for k − 1 steps (k representing the total number of variables in the matrix). Finally, the average squared correlations for all steps are lined up, and the step number that yielded the lowest average squared partial correlation determines the number of components or factors to retain.[12] By this method, components are maintained as long as the variance in the correlation matrix represents systematic variance, as opposed to residual or error variance. Although methodologically akin to principal components analysis, the MAP technique has been shown to perform quite well in determining the number of factors to retain in multiple simulation studies.[7][14][15][16] This procedure is made available through SPSS's user interface,[13] as well as the psych package for the R programming language.[17][18]
Kaiser criterion: The Kaiser rule is to drop all components with eigenvalues under 1.0 – this being the eigenvalue equal to the information accounted for by an average single item.[19] The Kaiser criterion is the default in SPSS and most statistical software but is not recommended when used as the sole cut-off criterion for estimating the number of factors, as it tends to over-extract factors.[20] A variation of this method has been created where a researcher calculates confidence intervals for each eigenvalue and retains only factors which have the entire confidence interval greater than 1.0.[14][21]
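The Kaiser rule reduces to counting eigenvalues of the correlation matrix above 1.0, as in this sketch (the correlation matrix is a made-up example):

```python
import numpy as np

def kaiser_count(R):
    """Kaiser rule sketch: count eigenvalues of the correlation matrix
    that exceed 1.0, the information carried by an average single item."""
    return int(np.sum(np.linalg.eigvalsh(R) > 1.0))

# Hypothetical correlation matrix: variables 1 and 2 share a common factor.
R = np.array([[1.0, 0.7, 0.1],
              [0.7, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
n_factors = kaiser_count(R)   # retains 1 factor
```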
Scree plot:[22] The Cattell scree test plots the components on the X-axis and the corresponding eigenvalues on the Y-axis. As one moves to the right, toward later components, the eigenvalues drop. When the drop ceases and the curve makes an elbow toward a less steep decline, Cattell's scree test says to drop all further components after the one starting at the elbow. This rule is sometimes criticised for being amenable to researcher-controlled "fudging": since picking the "elbow" can be subjective when the curve has multiple elbows or is smooth, the researcher may be tempted to set the cut-off at the number of factors desired by their research agenda.[citation needed]
Variance explained criteria: Some researchers simply use the rule of keeping enough factors to account for 90% (sometimes 80%) of the variation. Where the researcher's goal emphasizes parsimony (explaining variance with as few factors as possible), the criterion could be as low as 50%.
By placing a prior distribution over the number of latent factors and then applying Bayes' theorem, Bayesian models can return a probability distribution over the number of latent factors. This has been modeled using the Indian buffet process,[23] but can be modeled more simply by placing any discrete prior (e.g. a negative binomial distribution) on the number of components.
The output of PCA maximizes the variance accounted for by the first factor first, then the second factor, etc. A disadvantage of this procedure is that most items load on the early factors, while very few items load on later factors. This makes interpreting the factors by reading through a list of questions and loadings difficult, as every question is strongly correlated with the first few components, while very few questions are strongly correlated with the last few components.
Rotation serves to make the output easier to interpret. By choosing a different basis for the same principal components – that is, choosing different factors to express the same correlation structure – it is possible to create variables that are more easily interpretable.
Rotations can be orthogonal or oblique; oblique rotations allow the factors to correlate.[24]This increased flexibility means that more rotations are possible, some of which may be better at achieving a specified goal. However, this can also make the factors more difficult to interpret, as some information is "double-counted" and included multiple times in different components; some factors may even appear to be near-duplicates of each other.
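As an illustration, varimax, a common orthogonal rotation criterion, can be sketched with the standard SVD-based iteration (a textbook formulation shown on a hypothetical loading matrix, not a reference implementation):

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation sketch: iteratively find the orthogonal matrix R
    maximizing the variance of squared loadings within each column of L @ R."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - (gamma / p) * Lr @ np.diag((Lr**2).sum(axis=0))))
        R = u @ vt                      # nearest orthogonal matrix
        if s.sum() < d * (1 + tol):     # stop when the criterion stalls
            break
        d = s.sum()
    return L @ R, R

# Hypothetical unrotated loadings for 4 variables on 2 factors.
L = np.array([[0.7, 0.3], [0.8, 0.2], [0.3, 0.7], [0.2, 0.8]])
rotated, R = varimax(L)
```

Because R is orthogonal, the communalities (row sums of squared loadings) are unchanged by the rotation; only the attribution of variance to factors moves.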
Two broad classes of orthogonal rotations exist: those that look for sparse rows (where each row is a case, i.e. subject), and those that look for sparse columns (where each column is a variable).
It can be difficult to interpret a factor structure when each variable is loading on multiple factors.
Small changes in the data can sometimes tip a balance in the factor rotation criterion so that a completely different factor rotation is produced. This can make it difficult to compare the results of different experiments. This problem is illustrated by a comparison of different studies of world-wide cultural differences. Each study has used different measures of cultural variables and produced a differently rotated factor analysis result. The authors of each study believed that they had discovered something new, and invented new names for the factors they found. A later comparison of the studies found that the results were rather similar when the unrotated results were compared. The common practice of factor rotation has obscured the similarity between the results of the different studies.[25]
Higher-order factor analysis is a statistical method consisting of repeated steps: factor analysis, then oblique rotation, then factor analysis of the rotated factors. Its merit is to enable the researcher to see the hierarchical structure of studied phenomena. To interpret the results, one proceeds either by post-multiplying the primary factor pattern matrix by the higher-order factor pattern matrices (Gorsuch, 1983) and perhaps applying a Varimax rotation to the result (Thompson, 1990), or by using a Schmid-Leiman solution (SLS, Schmid & Leiman, 1957, also known as the Schmid-Leiman transformation), which attributes the variation from the primary factors to the second-order factors.
Factor analysis is related to principal component analysis (PCA), but the two are not identical.[26] There has been significant controversy in the field over differences between the two techniques. PCA can be considered as a more basic version of exploratory factor analysis (EFA) that was developed in the early days prior to the advent of high-speed computers. Both PCA and factor analysis aim to reduce the dimensionality of a set of data, but the approaches taken to do so are different for the two techniques. Factor analysis is clearly designed with the objective to identify certain unobservable factors from the observed variables, whereas PCA does not directly address this objective; at best, PCA provides an approximation to the required factors.[27] From the point of view of exploratory analysis, the eigenvalues of PCA are inflated component loadings, i.e., contaminated with error variance.[28][29][30][31][32][33]
Whilst EFA and PCA are treated as synonymous techniques in some fields of statistics, this has been criticised.[34][35] Factor analysis "deals with the assumption of an underlying causal structure: [it] assumes that the covariation in the observed variables is due to the presence of one or more latent variables (factors) that exert causal influence on these observed variables".[36] In contrast, PCA neither assumes nor depends on such an underlying causal relationship. Researchers have argued that the distinctions between the two techniques may mean that there are objective benefits for preferring one over the other based on the analytic goal. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results. Factor analysis has been used successfully where adequate understanding of the system permits good initial model formulations. PCA employs a mathematical transformation to the original data with no assumptions about the form of the covariance matrix. The objective of PCA is to determine linear combinations of the original variables and select a few that can be used to summarize the data set without losing much information.[37]
Fabrigar et al. (1999)[34]address a number of reasons used to suggest that PCA is not equivalent to factor analysis:
Factor analysis takes into account the random error that is inherent in measurement, whereas PCA fails to do so. This point is exemplified by Brown (2009),[38] who indicated that, in respect to the correlation matrices involved in the calculations:
"In PCA, 1.00s are put in the diagonal meaning that all of the variance in the matrix is to be accounted for (including variance unique to each variable, variance common among variables, and error variance). That would, therefore, by definition, include all of the variance in the variables. In contrast, in EFA, the communalities are put in the diagonal meaning that only the variance shared with other variables is to be accounted for (excluding variance unique to each variable and error variance). That would, therefore, by definition, include only variance that is common among the variables."
For this reason, Brown (2009) recommends using factor analysis when theoretical ideas about relationships between variables exist, whereas PCA should be used if the goal of the researcher is to explore patterns in their data.
The differences between PCA and factor analysis (FA) are further illustrated by Suhr (2009):[35]
Charles Spearman was the first psychologist to discuss common factor analysis[39] and did so in his 1904 paper.[40] It provided few details about his methods and was concerned with single-factor models.[41] He discovered that school children's scores on a wide variety of seemingly unrelated subjects were positively correlated, which led him to postulate that a single general mental ability, or g, underlies and shapes human cognitive performance.
The initial development of common factor analysis with multiple factors was given by Louis Thurstone in two papers in the early 1930s,[42][43] summarized in his 1935 book, The Vectors of Mind.[44] Thurstone introduced several important factor analysis concepts, including communality, uniqueness, and rotation.[45] He advocated for "simple structure", and developed methods of rotation that could be used as a way to achieve such structure.[39]
In Q methodology, William Stephenson, a student of Spearman, distinguished between R factor analysis, oriented toward the study of inter-individual differences, and Q factor analysis, oriented toward subjective intra-individual differences.[46][47]
Raymond Cattell was a strong advocate of factor analysis and psychometrics and used Thurstone's multi-factor theory to explain intelligence. Cattell also developed the scree test and similarity coefficients.
Factor analysis is used to identify "factors" that explain a variety of results on different tests. For example, intelligence research found that people who get a high score on a test of verbal ability are also good on other tests that require verbal abilities. Researchers explained this by using factor analysis to isolate one factor, often called verbal intelligence, which represents the degree to which someone is able to solve problems involving verbal skills.[citation needed]
Factor analysis in psychology is most often associated with intelligence research. However, it also has been used to find factors in a broad range of domains such as personality, attitudes, beliefs, etc. It is linked to psychometrics, as it can assess the validity of an instrument by determining whether the instrument indeed measures the postulated factors.[citation needed]
Factor analysis is a frequently used technique in cross-cultural research. It serves the purpose of extracting cultural dimensions. The best known cultural dimensions models are those elaborated by Geert Hofstede, Ronald Inglehart, Christian Welzel, Shalom Schwartz and Michael Minkov. A popular visualization is Inglehart and Welzel's cultural map of the world.[25]
In an early study from 1965, political systems around the world were examined via factor analysis to construct related theoretical models and research, compare political systems, and create typological categories.[50] For these purposes, the study identified seven basic political dimensions, which are related to a wide variety of political behaviour: Access, Differentiation, Consensus, Sectionalism, Legitimation, Interest, and Leadership.
Other political scientists explore the measurement of internal political efficacy using four new questions added to the 1988 National Election Study. Factor analysis is here used to find that these items measure a single concept distinct from external efficacy and political trust, and that these four questions provided the best measure of internal political efficacy up to that point in time.[51]
The basic steps are:
The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product sample or descriptions of product concepts on a range of attributes. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is coded and input into a statistical program such as R, SPSS, SAS, Stata, STATISTICA, JMP, or SYSTAT.
The analysis will isolate the underlying factors that explain the data using a matrix of associations.[52] Factor analysis is an interdependence technique. The complete set of interdependent relationships is examined. There is no specification of dependent variables, independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be reduced down to a few important dimensions. This reduction is possible because some attributes may be related to each other. The rating given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm deconstructs the rating (called a raw score) into its various components and reconstructs the partial scores into underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a factor loading.
Factor analysis has also been widely used in physical sciences such as geochemistry, hydrochemistry,[53] astrophysics and cosmology, as well as biological sciences such as ecology, molecular biology, neuroscience and biochemistry.
In groundwater quality management, it is important to relate the spatial distribution of different chemical parameters to different possible sources, which have different chemical signatures. For example, a sulfide mine is likely to be associated with high levels of acidity, dissolved sulfates, and transition metals. These signatures can be identified as factors through R-mode factor analysis, and the location of possible sources can be suggested by contouring the factor scores.[54]
In geochemistry, different factors can correspond to different mineral associations, and thus to mineralisation.[55]
Factor analysis can be used for summarizing high-density oligonucleotide DNA microarray data at the probe level for Affymetrix GeneChips. In this case, the latent variable corresponds to the RNA concentration in a sample.[56]
Factor analysis has been implemented in several statistical analysis programs since the 1980s:
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA).[1] Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation.[2] Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles,[3] hyperparameter optimization, and causal inference.[4]
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[5]
The evolution usually starts from a population of randomly generated individuals and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.
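The generational loop described above can be sketched as a minimal bit-string GA (all parameter values are illustrative; the fitness here is OneMax, the number of 1 bits in the chromosome):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=50, generations=100,
                      cx_prob=0.9, mut_prob=0.01, seed=0):
    """Minimal generational GA sketch: tournament selection, single-point
    crossover, per-bit mutation, and elitism (keep the two best unchanged)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        new_pop = scored[:2]                          # elitism
        while len(new_pop) < pop_size:
            # tournament selection of two parents (tournament size 3)
            parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(2)]
            child = parents[0][:]
            if rng.random() < cx_prob:                # single-point crossover
                cut = rng.randrange(1, length)
                child = parents[0][:cut] + parents[1][cut:]
            # bit-flip mutation, applied independently to each bit
            child = [b ^ 1 if rng.random() < mut_prob else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # OneMax: fitness is the number of 1 bits
```

On this toy problem the population converges quickly toward the all-ones string; real applications differ only in the encoding and the fitness function supplied.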
A typical genetic algorithm requires a genetic representation of the solution domain and a fitness function to evaluate candidate solutions.
A standard representation of each candidate solution is as an array of bits (also called a bit set or bit string).[5] Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found, or the distribution of the sampling probability tuned to focus on those areas of greater interest.[6]
During each successive generation, a portion of the existing population is selected to reproduce for a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
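The knapsack fitness just described can be written directly (the values, weights, and capacity are made-up example data):

```python
def knapsack_fitness(bits, values, weights, capacity):
    """Fitness for the bit-string knapsack encoding: total value of the
    selected objects if their weight fits, and 0 for invalid over-capacity
    representations."""
    total_weight = sum(w for b, w in zip(bits, weights) if b)
    if total_weight > capacity:
        return 0
    return sum(v for b, v in zip(bits, values) if b)

values, weights = [60, 100, 120], [10, 20, 30]
print(knapsack_fitness([0, 1, 1], values, weights, capacity=50))  # → 220
```

Penalizing invalid chromosomes with zero fitness is only one option; repair operators or graded penalties are common alternatives.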
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
The next step is to generate a second-generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination) and mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
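Single-point crossover and bit-flip mutation, the two operators used above to produce a child, can be sketched as:

```python
import random

rng = random.Random(1)

def single_point_crossover(p1, p2):
    """One-point crossover: splice a prefix of one parent onto the
    suffix of the other at a random cut point."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def bit_flip_mutation(bits, rate=0.05):
    """Flip each bit independently with the given probability."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

child = bit_flip_mutation(single_point_crossover([0] * 8, [1] * 8))
```

Because the cut point is strictly between the ends, every child of two distinct parents inherits genetic material from both.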
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research[7][8]suggests that more than two "parents" generate higher quality chromosomes.
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.[citation needed]
It is worth tuning parameters such as the mutation probability, crossover probability, and population size to find reasonable settings for the complexity class of the problem being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.[9][10]
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
Goldberg describes the heuristic as follows:
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold.[12][13] Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.[14][15][16]
The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:
The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating-point representations. The floating-point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not represent the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
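Reflected Gray coding and its inverse are simple bit manipulations; under this encoding, consecutive integers differ in exactly one bit, so a single bit-flip mutation can step the decoded value by one:

```python
def to_gray(n):
    """Reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray coding by XOR-folding the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print(to_gray(5))     # 5 = 0b101 -> 0b111 = 7
print(from_gray(7))   # recovers 5
```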
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained by the set of real values in a finite population of chromosomes forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating-point representation.[19][20]
An expansion of the problem domain accessible to genetic algorithms can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome.[21] This approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.[22]
Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node, which interacts with neighboring individuals for selection and reproduction.
Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.[23][24]
Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm),[25] the adjustment of pc and pm depends on the fitness values of the solutions. Further AGA variants exist: the successive zooming method is an early example of improving convergence.[26] In CAGA (clustering-based adaptive genetic algorithm),[27] clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on that state. More recent approaches use more abstract variables for deciding pc and pm; examples are dominance and co-dominance principles[28] and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search-space anisotropy.[29]
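As a concrete illustration, one common fitness-based formulation (in the spirit of the AGA cited above, but simplified) scales pc and pm by how close a solution's fitness is to the population maximum, so that good solutions are disrupted less and below-average ones more. The constants `k1` and `k2` are arbitrary illustrative choices, not values from the cited work.

```python
def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5):
    """Adaptive crossover/mutation probabilities (illustrative sketch):
    `f` is the fitness of the solution under consideration (for pc,
    typically the better parent of the pair)."""
    if f_max == f_avg:            # degenerate, fully converged population
        return k1, k2
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k1, k2                 # below-average: maximal disruption
```

The best individual gets pc = pm = 0 and is therefore protected, while below-average individuals receive the full rates, maintaining diversity without destroying converged good solutions.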
It can be quite effective to combine a GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at making the final small refinements needed to reach the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of the GA[citation needed] while overcoming the lack of robustness of hill climbing.
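Such a hybrid (often called a memetic algorithm) can be sketched as alternating a crude global GA-style step with greedy local refinement. Everything below is a toy one-dimensional illustration under assumed operators, not a production implementation.

```python
import random

def hill_climb(x, fitness, step=0.1, iters=50):
    """Greedy local refinement: accept a small perturbation only if it helps."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if fitness(cand) > fitness(x):
            x = cand
    return x

def hybrid_step(population, fitness):
    """One hybrid generation: truncation selection plus Gaussian mutation
    for global exploration, then hill climbing to polish each individual."""
    survivors = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
    children = [s + random.gauss(0, 1.0) for s in survivors]   # mutation
    return [hill_climb(x, fitness) for x in survivors + children]
```

Because hill climbing only ever accepts improvements and the best survivor is retained, the best fitness in the population is non-decreasing across hybrid steps.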
This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may combine a number of steps from maternal DNA with a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or any other suitable order, in favour of survival or efficiency.[30]
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA,[31] GEMGA[32] and LLGA.[33]
Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs[citation needed]. GAs have also been applied to engineering.[34] Genetic algorithms are often applied as an approach to solve global optimization problems.
As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector,[35] antennae designed to pick up radio signals in space,[36] walking methods for computer figures,[37] and optimal design of aerodynamic bodies in complex flowfields.[38]
In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task:
[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.
[...]
I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs.
In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution.[40] Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey.[41][42] His 1954 publication was not widely noticed. Starting in 1957,[43] the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[44] and Crosby (1973).[45] Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.[46] Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).[47]
Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game,[48] artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies.[49][50][51][52] Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.[53] In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote[54] about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995.[55] Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.[56] Since the 1990s, MATLAB has built in three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).[57]
Genetic algorithms are a sub-field of evolutionary algorithms:
Evolutionary algorithms are a sub-field of evolutionary computing.
Swarm intelligence is a sub-field of evolutionary computing.
Evolutionary computation is a sub-field of the metaheuristic methods.
Metaheuristic methods broadly fall within stochastic optimisation methods.
In artificial intelligence, intention mining or intent mining is the problem of determining a user's intention from logs of their behavior in interaction with a computer system, such as in search engines, where there has been research on user intent or query-intent prediction since 2002 (see Section 7.2.3 in [1]), and commercial intents expressed in social media posts.[2]
The notion of intention mining was introduced in the Ph.D. thesis of Ghazaleh Khodabandelou in 2014.[3][4][5] The thesis presents a novel approach in artificial intelligence to automate the construction of intention models from users' activities. The proposed model uses hidden Markov models to capture the relationship between users' activities and their strategies (i.e., the different ways to fulfill the intentions). The method also includes specific algorithms and new optimization methods developed to infer users' intentions and to construct intentional models as an oriented graph (with different levels of granularity), in order to give a better understanding of the human way of thinking.[5]
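The HMM machinery involved can be illustrated with the standard forward algorithm, which scores an observed activity sequence against hidden "strategy" states. The two-state model below is entirely hypothetical and is not taken from the cited thesis; it only shows the kind of computation such an approach rests on.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: probability of an observed activity sequence
    under an HMM whose hidden states stand for user strategies."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

# Hypothetical two-strategy model over two observable activities.
states = ("browse", "buy")
start_p = {"browse": 0.8, "buy": 0.2}
trans_p = {"browse": {"browse": 0.7, "buy": 0.3},
           "buy":    {"browse": 0.4, "buy": 0.6}}
emit_p = {"browse": {"search": 0.9, "checkout": 0.1},
          "buy":    {"search": 0.2, "checkout": 0.8}}
p = forward(["search", "search", "checkout"], states, start_p, trans_p, emit_p)
```

Comparing such likelihoods across candidate strategy models is one simple way to decide which latent strategy best explains a user's activity log.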
Intention mining has already been used in several domains.
Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources."[1] Written resources may include websites, books, emails, reviews, and articles.[2] High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005), there are three perspectives of text mining: information extraction, data mining, and knowledge discovery in databases (KDD).[3] Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
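The "structuring" step can be as simple as turning free text into term-frequency tables, from which patterns are then derived. A minimal sketch; real pipelines add proper tokenization, stemming, stop-word removal and far richer linguistic features:

```python
import re
from collections import Counter

def structure(docs):
    """Minimal structuring step: tokenize each document and build a
    term-frequency table, the simplest structured representation that
    downstream pattern mining can work with."""
    return [Counter(re.findall(r"[a-z']+", d.lower())) for d in docs]

def top_terms(tf_tables, k=3):
    """Derive a crude 'pattern': the most frequent terms across the corpus."""
    total = Counter()
    for tf in tf_tables:
        total.update(tf)
    return [w for w, _ in total.most_common(k)]
```

For example, `top_terms(structure(corpus))` already surfaces the dominant vocabulary of a document collection, the kind of signal that categorization and clustering build on.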
Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via the application of natural language processing (NLP), different types of algorithms and analytical methods. An important phase of this process is the interpretation of the gathered information.
A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted. The document is the basic element when starting with text mining. Here, we define a document as a unit of textual data, which normally exists in many types of collections.[4]
Text analytics describes a set of linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation.[5] The term is roughly synonymous with text mining; indeed, Ronen Feldman modified a 2000 description of "text mining"[6] in 2004 to describe "text analytics".[7] The latter term is now used more frequently in business settings, while "text mining" is used in some of the earliest application areas, dating to the 1980s,[8] notably life-sciences research and government intelligence.
The term text analytics also describes the application of text analytics to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. It is a truism that 80% of business-relevant information originates in unstructured form, primarily text.[9] These techniques and processes discover and present knowledge – facts, business rules, and relationships – that is otherwise locked in textual form, impenetrable to automated processing.
Subtasks—components of a larger text-analytics effort—typically include:
Text mining technology is now broadly applied to a wide variety of government, research, and business needs. All these groups may use text mining for records management and searching documents relevant to their daily activities. Legal professionals may use text mining for e-discovery, for example. Governments and military groups use text mining for national security and intelligence purposes. Scientific researchers incorporate text mining approaches into efforts to organize large sets of text data (i.e., addressing the problem of unstructured data), to determine ideas communicated through text (e.g., sentiment analysis in social media[15][16][17]) and to support scientific discovery in fields such as the life sciences and bioinformatics. In business, applications are used to support competitive intelligence and automated ad placement, among numerous other activities.
Many text mining software packages are marketed for security applications, especially monitoring and analysis of online plain text sources such as Internet news, blogs, etc. for national security purposes.[18] It is also involved in the study of text encryption/decryption.
A range of text mining applications in the biomedical literature has been described,[20] including computational approaches to assist with studies in protein docking,[21] protein interactions,[22][23] and protein-disease associations.[24] In addition, with large patient textual datasets in the clinical field, datasets of demographic information in population studies and adverse event reports, text mining can facilitate clinical studies and precision medicine. Text mining algorithms can facilitate the stratification and indexing of specific clinical events in large patient textual datasets of symptoms, side effects, and comorbidities from electronic health records, event reports, and reports from specific diagnostic tests.[25] One online text mining application in the biomedical literature is PubGene, a publicly accessible search engine that combines biomedical text mining with network visualization.[26][27] GoPubMed is a knowledge-based search engine for biomedical texts. Text mining techniques also enable the extraction of unknown knowledge from unstructured documents in the clinical domain.[28]
Text mining methods and software are also being researched and developed by major firms, including IBM and Microsoft, to further automate the mining and analysis processes, and by different firms working in the area of search and indexing in general as a way to improve their results. Within the public sector, much effort has been concentrated on creating software for tracking and monitoring terrorist activities.[29] For study purposes, Weka software is one of the most popular options in the scientific world, acting as an excellent entry point for beginners. For Python programmers, there is an excellent toolkit called NLTK for more general purposes. For more advanced programmers, there is also the Gensim library, which focuses on word-embedding-based text representations.
Text mining is being used by large media companies, such as theTribune Company, to clarify information and to provide readers with greater search experiences, which in turn increases site "stickiness" and revenue. Additionally, on the back end, editors are benefiting by being able to share, associate and package news across properties, significantly increasing opportunities to monetize content.
Text analytics is being used in business, particularly in marketing, such as in customer relationship management.[30] Coussement and Van den Poel (2008)[31][32] apply it to improve predictive analytics models for customer churn (customer attrition).[31] Text mining is also being applied in stock returns prediction.[33]
Sentiment analysis may involve analysis of products such as movies, books, or hotel reviews for estimating how favorable a review is for the product.[34] Such an analysis may need a labeled data set or labeling of the affectivity of words.
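A minimal lexicon-based scorer illustrates the idea: each word carries an affectivity score and the review's polarity is their sum. The lexicon below is a toy stand-in for real affect resources such as those built on WordNet; production systems also handle negation, intensifiers and context.

```python
def sentiment_score(review, lexicon):
    """Lexicon-based polarity: sum the per-word affectivity scores,
    treating unknown words as neutral (score 0)."""
    words = review.lower().split()
    return sum(lexicon.get(w, 0.0) for w in words)

# Toy affectivity lexicon, invented for illustration.
lexicon = {"excellent": 2.0, "good": 1.0, "dull": -1.0, "terrible": -2.0}
```

A positive total suggests a favorable review, a negative total an unfavorable one; the magnitude gives a rough strength estimate.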
Resources for affectivity of words and concepts have been made for WordNet[35] and ConceptNet,[36] respectively.
Text has been used to detect emotions in the related area of affective computing.[37] Text-based approaches to affective computing have been used on multiple corpora, such as student evaluations, children's stories and news stories.
The issue of text mining is of importance to publishers who hold large databases of information needing indexing for retrieval. This is especially true in scientific disciplines, in which highly specific information is often contained within the written text. Therefore, initiatives have been taken, such as Nature's proposal for an Open Text Mining Interface (OTMI) and the National Institutes of Health's common Journal Publishing Document Type Definition (DTD), that would provide semantic cues to machines to answer specific queries contained within the text without removing publisher barriers to public access.
Academic institutions have also become involved in the text mining initiative:
Computational methods have been developed to assist with information retrieval from scientific literature. Published approaches include methods for searching,[41] determining novelty,[42] and clarifying homonyms[43] among technical reports.
The automatic analysis of vast textual corpora has created the possibility for scholars to analyze millions of documents in multiple languages with very limited manual intervention. Key enabling technologies have been parsing, machine translation, topic categorization, and machine learning.
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analyzed by using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes.[45] This automates the approach introduced by quantitative narrative analysis,[46] whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.[44]
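Turning extracted triplets into network data can be sketched directly: nodes are the actors and objects, and each directed edge records the linking actions. The triples below are invented for illustration; in practice they would come from a parser.

```python
from collections import defaultdict

def build_network(triples):
    """Turn subject-verb-object triples into a directed actor network:
    edge (subject, object) accumulates the verbs that link the pair."""
    edges = defaultdict(list)
    for subj, verb, obj in triples:
        edges[(subj, obj)].append(verb)
    return edges

# Hypothetical triples, as a parser might extract them from news text.
triples = [("government", "announced", "reform"),
           ("union", "opposed", "reform"),
           ("government", "met", "union")]
net = build_network(triples)
```

The resulting edge map is the raw material for the network-theoretic analyses mentioned above (centrality, community detection, and so on).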
Content analysis has been a traditional part of social sciences and media studies for a long time. The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items. Gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents.[47][48][49][50][51] The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al.,[52] showing how different topics have different gender biases and levels of readability; the possibility to detect mood patterns in a vast population by analyzing Twitter content was demonstrated as well.[53][54]
Text mining computer programs are available from many commercial and open source companies and sources.
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is illegal. In the UK in 2014, on the recommendation of the Hargreaves review, the government amended copyright law[55] to allow text mining as a limitation and exception. It was the second country in the world to do so, following Japan, which introduced a mining-specific exception in 2009. However, owing to the restriction of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law does not allow this provision to be overridden by contractual terms and conditions.
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licenses for Europe.[56] The fact that the focus of the solution to this legal issue was licenses, and not limitations and exceptions to copyright law, led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.[57]
US copyright law, and in particular its fair use provisions, means that text mining in America, as well as in other fair use countries such as Israel, Taiwan and South Korea, is viewed as being legal. As text mining is transformative, meaning that it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement, the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed, one such use being text and data mining.[58]
There is no exception in copyright law of Australia for text or data mining within the Copyright Act 1968. The Australian Law Reform Commission has noted that it is unlikely that the "research and study" fair dealing exception would extend to cover such a topic either, given it would be beyond the "reasonable portion" requirement.[59]
Until recently, websites most often used text-based searches, which only found documents containing specific user-defined words or phrases. Now, through use of a semantic web, text mining can find content based on meaning and context (rather than just by a specific word). Additionally, text mining software can be used to build large dossiers of information about specific people and events. For example, large datasets based on data extracted from news reports can be built to facilitate social network analysis or counter-intelligence. In effect, the text mining software may act in a capacity similar to an intelligence analyst or research librarian, albeit with a more limited scope of analysis. Text mining is also used in some email spam filters as a way of determining the characteristics of messages that are likely to be advertisements or other unwanted material. Text mining plays an important role in determining financial market sentiment.
Inmathematics, atime seriesis a series ofdata pointsindexed (or listed or graphed) in time order. Most commonly, a time series is asequencetaken at successive equally spaced points in time. Thus it is a sequence ofdiscrete-timedata. Examples of time series are heights of oceantides, counts ofsunspots, and the daily closing value of theDow Jones Industrial Average.
A time series is very frequently plotted via a run chart (which is a temporal line chart). Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements.
Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. Generally, time series data is modelled as a stochastic process. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called "time series analysis", which refers in particular to relationships between different points in time within a single series.
Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time so that values for a given period will be expressed as deriving in some way from past values, rather than from future values (see time reversibility).
Time series analysis can be applied to real-valued, continuous data, discrete numeric data, or discrete symbolic data (i.e. sequences of characters, such as letters and words in the English language[1]).
Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation and analysis can be made in a filter-like manner using scaled correlation, thereby mitigating the need to operate in the frequency domain.
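Auto-correlation, the archetypal time-domain tool, takes only a few lines to compute. A sketch of the sample autocorrelation at a given lag:

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of a series at the given lag:
    lag-covariance about the mean, normalized by the variance."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var
```

Values near +1 indicate the series repeats itself at that lag; values near -1 indicate it alternates, as with a period-2 oscillation evaluated at lag 1.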
Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an autoregressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
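As a minimal parametric example, the single parameter of a zero-mean AR(1) model, x[t] = phi * x[t-1] + noise, can be estimated by least squares. This sketch omits the intercept and all diagnostics for simplicity:

```python
def fit_ar1(x):
    """Least-squares estimate of phi in the zero-mean AR(1) model
    x[t] = phi * x[t-1] + noise: regress x[t] on x[t-1]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den
```

Having fitted phi, the one-step-ahead forecast is simply phi times the last observation, which is the parametric route to forecasting described above.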
Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate.
A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a cross-sectional dataset). A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time (e.g. student ID, stock symbol, country code), then it is a panel data candidate. If the differentiation lies on the non-time identifier, then the data set is a cross-sectional data set candidate.
There are several types of motivation and data analysis available for time series which are appropriate for different purposes.
In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics, the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other applications are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering,[2][3][4] classification,[5] query by content,[6] anomaly detection as well as forecasting.[7]
A simple way to examine a regular time series is manually with a line chart. The data graphic shows tuberculosis deaths in the United States,[8] along with the yearly change and the percentage change from year to year. The total number of deaths declined in every year until the mid-1980s, after which there were occasional increases, often proportionately - but not absolutely - quite large.
A study of corporate data analysts found two challenges to exploratory time series analysis: discovering the shape of interesting patterns, and finding an explanation for these patterns.[9] Visual tools that represent time series data as heat map matrices can help overcome these challenges.
This approach may be based on harmonic analysis and filtering of signals in the frequency domain using the Fourier transform, and spectral density estimation. Its development was significantly accelerated during World War II by mathematician Norbert Wiener, electrical engineers Rudolf E. Kálmán, Dennis Gabor and others for filtering signals from noise and predicting signal values at a certain point in time.
An equivalent effect may be achieved in the time domain, as in a Kalman filter; see filtering and smoothing for more techniques.
Other related techniques include:
Curve fitting[12][13] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[14] possibly subject to constraints.[15][16] Curve fitting can involve either interpolation,[17][18] where an exact fit to the data is required, or smoothing,[19][20] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[21][22] which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization,[23][24] to infer values of a function where no data are available,[25] and to summarize the relationships among two or more variables.[26] Extrapolation refers to the use of a fitted curve beyond the range of the observed data,[27] and is subject to a degree of uncertainty[28] since it may reflect the method used to construct the curve as much as it reflects the observed data.
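The simplest instance is fitting a straight line y = a + b*x by ordinary least squares; a self-contained sketch using the closed-form solution:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x.
    Returns (intercept a, slope b) from the closed-form normal equations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

Evaluating the fitted line inside the observed x-range is interpolation-style smoothing; evaluating it beyond that range is extrapolation, with the attendant uncertainty noted above.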
For processes that are expected to generally grow in magnitude, one of the curves in the graphic (and many others) can be fitted by estimating their parameters.
The construction of economic time series involves the estimation of some components for some dates by interpolation between values ("benchmarks") for earlier and later dates. Interpolation is estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information ("reading between the lines").[29] Interpolation is useful where the data surrounding the missing data is available and its trend, seasonality, and longer-term cycles are known. This is often done by using a related series known for all relevant dates.[30] Alternatively, polynomial interpolation or spline interpolation is used, where piecewise polynomial functions are fitted in time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set. Spline interpolation, however, yields a piecewise continuous function composed of many polynomials to model the data set.
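Piecewise-linear interpolation between benchmark dates, the simplest version of the idea above, takes only a few lines. A sketch:

```python
def linear_interpolate(t, knots):
    """Piecewise-linear interpolation between known (time, value)
    benchmarks, as used to fill in a series between benchmark dates."""
    knots = sorted(knots)
    for (t0, v0), (t1, v1) in zip(knots, knots[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside the benchmark range: extrapolation needed")
```

Spline interpolation replaces each straight segment with a low-degree polynomial chosen so that adjacent pieces join smoothly, at the cost of a little more bookkeeping.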
Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results.
In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way.
One can distinguish two major classes of function approximation problems: First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).
Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points (a time series) of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead. A related problem of online time series approximation[31] is to summarize the data in one pass and construct an approximate representation that can support a variety of time series queries with bounds on worst-case error.
To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.
In statistics, prediction is a part of statistical inference. One particular approach to such inference is known as predictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known as forecasting.
Assigning a time series pattern to a specific category, for example identifying a word based on a series of hand movements in sign language.
Splitting a time-series into a sequence of segments. It is often the case that a time-series can be represented as a sequence of individual segments, each with its own characteristic properties. For example, the audio signal from a conference call can be partitioned into pieces corresponding to the times during which each person was speaking. In time-series segmentation, the goal is to identify the segment boundary points in the time-series, and to characterize the dynamical properties associated with each segment. One can approach this problem using change-point detection, or by modeling the time-series as a more sophisticated system, such as a Markov jump linear system.
Time series data may be clustered; however, special care has to be taken when considering subsequence clustering.[33][34] Time series clustering may be split into
Subsequence time series clustering resulted in unstable (random) clusters induced by the feature extraction using chunking with sliding windows.[35] It was found that the cluster centers (the average of the time series in a cluster - also a time series) follow an arbitrarily shifted sine pattern (regardless of the dataset, even on realizations of a random walk). This means that the found cluster centers are non-descriptive for the dataset because the cluster centers are always nonrepresentative sine waves.
Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving-average (MA) models. These three classes depend linearly on previous data points.[36] Combinations of these ideas produce autoregressive moving-average (ARMA) and autoregressive integrated moving-average (ARIMA) models. The autoregressive fractionally integrated moving-average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial "V" for "vector", as in VAR for vector autoregression. An additional set of extensions of these models is available for use where the observed time-series is driven by some "forcing" time-series (which may not have a causal effect on the observed series): the distinction from the multivariate case is that the forcing series may be deterministic or under the experimenter's control. For these models, the acronyms are extended with a final "X" for "exogenous".
Non-linear dependence of the level of a series on previous data points is of interest, partly because of the possibility of producing a chaotic time series. However, more importantly, empirical investigations can indicate the advantage of using predictions derived from non-linear models over those from linear models, as for example in nonlinear autoregressive exogenous models. Further references on nonlinear time series analysis: (Kantz and Schreiber)[37] and (Abarbanel).[38]
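The simplest member of this family, an AR(1) model x_t = φ·x_{t-1} + ε_t, can be sketched by simulating a series and then re-estimating the coefficient by least squares; the value φ = 0.7 and the sample size are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 0.7          # assumed AR(1) coefficient for the simulation
n = 5000

# Simulate x_t = phi * x_{t-1} + eps_t with standard normal shocks
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Least-squares estimate of phi: regress x_t on its own lag
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(round(phi_hat, 2))  # close to 0.7
```

Full ARMA/ARIMA estimation is normally delegated to a statistics library; the point here is only that linear dependence on past values makes the fit an ordinary least-squares problem.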
Among other types of non-linear time series models, there are models to represent the changes of variance over time (heteroskedasticity). These models represent autoregressive conditional heteroskedasticity (ARCH) and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Here changes in variability are related to, or predicted by, recent past values of the observed series. This is in contrast to other possible representations of locally varying variability, where the variability might be modelled as being driven by a separate time-varying process, as in a doubly stochastic model.
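The ARCH(1) idea, in which the conditional variance is driven by the previous shock, can be sketched as follows; the parameter values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
a0, a1 = 0.2, 0.5   # illustrative ARCH(1) parameters
n = 2000

# eps_t = sigma_t * z_t with sigma_t^2 = a0 + a1 * eps_{t-1}^2
eps = np.zeros(n)
for t in range(1, n):
    sigma2 = a0 + a1 * eps[t - 1] ** 2   # variance driven by the last shock
    eps[t] = np.sqrt(sigma2) * rng.normal()

# Volatility clustering: squared shocks are positively autocorrelated
print(np.corrcoef(eps[:-1] ** 2, eps[1:] ** 2)[0, 1])
```

The series itself is uncorrelated, but its squared values are not, which is the signature of conditional heteroskedasticity.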
In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor.[39] Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. See also Markov switching multifractal (MSMF) techniques for modeling volatility evolution.
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered the simplest dynamic Bayesian network. HMMs are widely used in speech recognition, for translating a time series of spoken words into text.
Many of these models are collected in the Python package sktime.
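As a toy illustration of HMM computation, the forward algorithm below evaluates the probability of an observation sequence; all probabilities are made-up example numbers, not from any real model:

```python
# A two-state HMM with a binary observation alphabet {0, 1}
states = [0, 1]
start = [0.6, 0.4]                    # initial state distribution
trans = [[0.7, 0.3], [0.4, 0.6]]      # trans[i][j] = P(next=j | current=i)
emit = [[0.5, 0.5], [0.1, 0.9]]       # emit[i][o] = P(observe o | state i)

def forward(obs):
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][o]
                 for j in states]
    return sum(alpha)  # total probability of the observation sequence

print(forward([0, 1, 1]))  # P(observing the sequence 0, 1, 1)
```

Decoding the most likely hidden-state path (as in speech recognition) uses the closely related Viterbi algorithm, which replaces the sums with maximums.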
A number of different notations are in use for time-series analysis. A common notation specifying a time series X that is indexed by the natural numbers is written
Another common notation is
where T is the index set.
There are two sets of conditions under which much of the theory is built:
Ergodicity implies stationarity, but the converse is not necessarily the case. Stationarity is usually classified into strict stationarity and wide-sense or second-order stationarity. Both models and applications can be developed under each of these conditions, although the models in the latter case might be considered as only partly specified.
In addition, time-series analysis can be applied where the series are seasonally stationary or non-stationary. Situations where the amplitudes of frequency components change with time can be dealt with in time-frequency analysis, which makes use of a time–frequency representation of a time-series or signal.[40]
Tools for investigating time-series data include:
Time-series metrics or features that can be used for time series classification or regression analysis:[44]
Time series can be visualized with two categories of chart: overlapping charts and separated charts. Overlapping charts display all time series on the same layout, while separated charts present them on different layouts (but aligned for comparison purposes).[48]
A database model is a type of data model that determines the logical structure of a database. It fundamentally determines in which manner data can be stored, organized and manipulated. The most popular example of a database model is the relational model, which uses a table-based format.
Common logical data models for databases include:
An object–relational database combines the two related structures.
Physical data models include:
Other models include:
A given database management system may provide one or more models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost. Most database management systems are built around one particular data model, although it is possible for products to offer support for more than one model.
Various physical data models can implement any given logical model. Most database software will offer the user some level of control in tuning the physical implementation, since the choices that are made have a significant effect on performance.
A model is not just a way of structuring data: it also defines a set of operations that can be performed on the data.[1] The relational model, for example, defines operations such as select, project and join. Although these operations may not be explicit in a particular query language, they provide the foundation on which a query language is built.
The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. For instance, a system security database might use columns for name and password, with each row holding the password associated with an individual user. Columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating point numbers. This tabular format is a precursor to the relational model.
These models were popular in the 1960s and 1970s, but nowadays can be found primarily in old legacy systems. They are characterized primarily by being navigational, with strong connections between their logical and physical representations, and deficiencies in data independence.
In a hierarchical model, data is organized into a tree-like structure, implying a single parent for each record. A sort field keeps sibling records in a particular order. Hierarchical structures were widely used in the early mainframe database management systems, such as the Information Management System (IMS) by IBM, and now describe the structure of XML documents. This structure allows a one-to-many relationship between two types of data. This structure is very efficient for describing many relationships in the real world: recipes, tables of contents, ordering of paragraphs/verses, and any nested and sorted information.
This hierarchy is used as the physical order of records in storage. Record access is done by navigating downward through the data structure using pointers combined with sequential accessing. Because of this, the hierarchical structure is inefficient for certain database operations when a full path (as opposed to upward link and sort field) is not also included for each record. Such limitations have been compensated for in later IMS versions by additional logical hierarchies imposed on the base physical hierarchy.
The network model expands upon the hierarchical structure, allowing many-to-many relationships in a tree-like structure that allows multiple parents. It was most popular before being replaced by the relational model, and is defined by the CODASYL specification.
The network model organizes data using two fundamental concepts, called records and sets. Records contain fields (which may be organized hierarchically, as in the programming language COBOL). Sets (not to be confused with mathematical sets) define one-to-many relationships between records: one owner, many members. A record may be an owner in any number of sets, and a member in any number of sets.
A set consists of circular linked lists where one record type, the set owner or parent, appears once in each circle, and a second record type, the subordinate or child, may appear multiple times in each circle. In this way a hierarchy may be established between any two record types, e.g., type A is the owner of B. At the same time another set may be defined where B is the owner of A. Thus all the sets comprise a general directed graph (ownership defines a direction), or network construct. Access to records is either sequential (usually in each record type) or by navigation in the circular linked lists.
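The circular-linked-list structure of a set can be sketched in a few lines; the `Record` class and the department/employee names are inventions for illustration, not part of any CODASYL implementation:

```python
# A CODASYL-style set as a circular linked list: the owner record appears
# once per circle, and the last member points back to the owner.
class Record:
    def __init__(self, name):
        self.name = name
        self.next = self      # circular: initially points to itself

def insert_member(owner, member):
    member.next = owner.next  # splice the member in just after the owner
    owner.next = member

def members(owner):
    node, out = owner.next, []
    while node is not owner:  # walk the circle until we return to the owner
        out.append(node.name)
        node = node.next
    return out

dept = Record("Sales")               # set owner
insert_member(dept, Record("Bob"))
insert_member(dept, Record("Alice"))
print(members(dept))  # ['Alice', 'Bob']
```

Navigation is inherently sequential: finding a member means walking the circle, which is why set relationships in real systems are often backed by direct disk pointers.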
The network model is able to represent redundancy in data more efficiently than in the hierarchical model, and there can be more than one path from an ancestor node to a descendant. The operations of the network model are navigational in style: a program maintains a current position, and navigates from one record to another by following the relationships in which the record participates. Records can also be located by supplying key values.
Although it is not an essential feature of the model, network databases generally implement the set relationships by means of pointers that directly address the location of a record on disk. This gives excellent retrieval performance, at the expense of operations such as database loading and reorganization.
Popular DBMS products that utilized it were Cincom Systems' Total and Cullinet's IDMS. IDMS gained a considerable customer base; in the 1980s, it adopted the relational model and SQL in addition to its original tools and languages.
Most object databases (invented in the 1990s) use the navigational concept to provide fast navigation across networks of objects, generally using object identifiers as "smart" pointers to related objects. Objectivity/DB, for instance, implements named one-to-one, one-to-many, many-to-one, and many-to-many relationships that can cross databases. Many object databases also support SQL, combining the strengths of both models.
In an inverted file or inverted index, the contents of the data are used as keys in a lookup table, and the values in the table are pointers to the location of each instance of a given content item. This is also the logical structure of contemporary database indexes, which might only use the contents from particular columns in the lookup table. The inverted file data model can put indexes in a set of files next to existing flat database files, in order to access needed records in these files directly and efficiently.
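A minimal inverted index can be built as a dictionary mapping each term to the set of records containing it; the documents here are invented:

```python
# Toy document collection: record id -> content
docs = {
    1: "red apple",
    2: "green apple pie",
    3: "red pie",
}

# Invert it: term -> set of record ids containing that term
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

print(sorted(index["apple"]))               # [1, 2]
print(sorted(index["red"] & index["pie"]))  # [3] (AND of two terms)
```

Lookups go from content to record locations in one step, and multi-term queries reduce to set intersections, which is exactly why this structure underlies full-text search.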
Notable for using this data model is the ADABAS DBMS of Software AG, introduced in 1970. ADABAS gained a considerable customer base and is still in use and supported today. In the 1980s it adopted the relational model and SQL in addition to its original tools and languages.
The document-oriented database Clusterpoint, for example, uses an inverted indexing model to provide fast full-text search for XML or JSON data objects.
The relational model was introduced by E. F. Codd in 1970[2] as a way to make database management systems more independent of any particular application. It is a mathematical model defined in terms of predicate logic and set theory, and implementations of it have been used by mainframe, midrange and microcomputer systems.
The products that are generally referred to as relational databases in fact implement a model that is only an approximation to the mathematical model defined by Codd. Three key terms are used extensively in relational database models: relations, attributes, and domains. A relation is a table with columns and rows. The named columns of the relation are called attributes, and the domain is the set of values the attributes are allowed to take.
The basic data structure of the relational model is the table, where information about a particular entity (say, an employee) is represented in rows (also called tuples) and columns. Thus, the "relation" in "relational database" refers to the various tables in the database; a relation is a set of tuples. The columns enumerate the various attributes of the entity (the employee's name, address or phone number, for example), and a row is an actual instance of the entity (a specific employee) that is represented by the relation. As a result, each tuple of the employee table represents various attributes of a single employee.
All relations (and, thus, tables) in a relational database have to adhere to some basic rules to qualify as relations. First, the ordering of columns is immaterial in a table. Second, there can not be identical tuples or rows in a table. And third, each tuple will contain a single value for each of its attributes.
A relational database contains multiple tables, each similar to the one in the "flat" database model. One of the strengths of the relational model is that, in principle, any value occurring in two different records (belonging to the same table or to different tables) implies a relationship among those two records. Yet, in order to enforce explicit integrity constraints, relationships between records in tables can also be defined explicitly, by identifying or non-identifying parent-child relationships characterized by assigning cardinality (1:1, (0)1:M, M:M). Tables can also have a designated single attribute or a set of attributes that can act as a "key", which can be used to uniquely identify each tuple in the table.
A key that can be used to uniquely identify a row in a table is called a primary key. Keys are commonly used to join or combine data from two or more tables. For example, an Employee table may contain a column named Location which contains a value that matches the key of a Location table. Keys are also critical in the creation of indexes, which facilitate fast retrieval of data from large tables. Any column can be a key, or multiple columns can be grouped together into a compound key. It is not necessary to define all the keys in advance; a column can be used as a key even if it was not originally intended to be one.
A key that has an external, real-world meaning (such as a person's name, a book's ISBN, or a car's serial number) is sometimes called a "natural" key. If no natural key is suitable (think of the many people named Brown), an arbitrary or surrogate key can be assigned (such as by giving employees ID numbers). In practice, most databases have both generated and natural keys, because generated keys can be used internally to create links between rows that cannot break, while natural keys can be used, less reliably, for searches and for integration with other databases. (For example, records in two independently developed databases could be matched up by social security number, except when the social security numbers are incorrect, missing, or have changed.)
The most common query language used with the relational model is the Structured Query Language (SQL).
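The Employee/Location key example above can be made concrete with SQLite's in-memory mode; the schema and data are hypothetical:

```python
import sqlite3

# Hypothetical Employee/Location schema: Employee.location references
# the primary key of the Location table
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Location (id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE Employee (id INTEGER PRIMARY KEY, name TEXT,
                           location INTEGER REFERENCES Location(id));
    INSERT INTO Location VALUES (1, 'Berlin'), (2, 'Tokyo');
    INSERT INTO Employee VALUES (10, 'Ada', 1), (11, 'Lin', 2);
""")

# Join Employee to Location through the key column
rows = con.execute("""
    SELECT Employee.name, Location.city
    FROM Employee JOIN Location ON Employee.location = Location.id
    ORDER BY Employee.name
""").fetchall()
print(rows)  # [('Ada', 'Berlin'), ('Lin', 'Tokyo')]
```

The join is possible purely because the value in one column matches a key value in another table; no pointer or navigational structure is involved.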
The dimensional model is a specialized adaptation of the relational model used to represent data in data warehouses in a way that data can be easily summarized using online analytical processing, or OLAP queries. In the dimensional model, a database schema consists of a single large table of facts that are described using dimensions and measures. A dimension provides the context of a fact (such as who participated, when and where it happened, and its type) and is used in queries to group related facts together. Dimensions tend to be discrete and are often hierarchical; for example, the location might include the building, state, and country. A measure is a quantity describing the fact, such as revenue. It is important that measures can be meaningfully aggregated—for example, the revenue from different locations can be added together.
In an OLAP query, dimensions are chosen and the facts are grouped and aggregated together to create a summary.
The dimensional model is often implemented on top of the relational model using a star schema, consisting of one highly normalized table containing the facts, and surrounding denormalized tables containing each dimension. An alternative physical implementation, called a snowflake schema, normalizes multi-level hierarchies within a dimension into multiple tables.
A data warehouse can contain multiple dimensional schemas that share dimension tables, allowing them to be used together. Coming up with a standard set of dimensions is an important part of dimensional modeling.
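An OLAP-style aggregation over a toy star schema can be sketched with SQLite; the fact and dimension tables here are invented for the example:

```python
import sqlite3

# A minimal star schema: one fact table, one denormalized dimension table
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_location (id INTEGER PRIMARY KEY,
                               building TEXT, country TEXT);
    CREATE TABLE fact_sales (location_id INTEGER REFERENCES dim_location(id),
                             revenue REAL);
    INSERT INTO dim_location VALUES (1, 'HQ', 'US'), (2, 'Branch', 'US'),
                                    (3, 'Office', 'DE');
    INSERT INTO fact_sales VALUES (1, 100.0), (2, 50.0), (3, 70.0), (1, 30.0);
""")

# OLAP-style query: group facts by a dimension attribute, aggregate a measure
rows = con.execute("""
    SELECT country, SUM(revenue)
    FROM fact_sales JOIN dim_location ON location_id = dim_location.id
    GROUP BY country ORDER BY country
""").fetchall()
print(rows)  # [('DE', 70.0), ('US', 180.0)]
```

Rolling the aggregation up or down the location hierarchy (building, country) only changes the GROUP BY column, which is what makes the dimensional layout convenient for summaries.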
Its high performance has made the dimensional model the most popular database structure for OLAP.
Products offering a more general data model than the relational model are sometimes classified as post-relational.[3] Alternate terms include "hybrid database", "Object-enhanced RDBMS" and others. The data model in such products incorporates relations but is not constrained by E. F. Codd's Information Principle, which requires that
all information in the database must be cast explicitly in terms of values in relations and in no other way
Some of these extensions to the relational model integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes. The German company sones implements this concept in its GraphDB.
Some post-relational products extend relational systems with non-relational features. Others arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational.
The resource space model (RSM) is a non-relational data model based on multi-dimensional classification.[5]
Graph databases allow even more general structure than a network database; any node may be connected to any other node.
Multivalue databases contain "lumpy" data, in that they can store data exactly the same way as relational databases, but they also permit a level of depth which the relational model can only approximate using sub-tables. This is nearly identical to the way XML expresses data, where a given field/attribute can have multiple right answers at the same time. Multivalue can be thought of as a compressed form of XML.
An example is an invoice, which in either multivalue or relational data could be seen as (A) an Invoice Header Table - one entry per invoice, and (B) an Invoice Detail Table - one entry per line item. In the multivalue model, we have the option of storing the data as one table, with an embedded table to represent the detail: (A) an Invoice Table - one entry per invoice, no other tables needed.
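The embedded-detail idea can be sketched as a single nested record; the field names and values are hypothetical:

```python
# One invoice as a single nested record (multivalue / document style),
# instead of two flat relational tables (header + detail)
invoice = {
    "invoice_no": 1001,
    "customer": "Acme",
    "lines": [                       # embedded detail "table"
        {"item": "bolt", "qty": 40, "price": 0.10},
        {"item": "nut",  "qty": 40, "price": 0.05},
    ],
}

# The whole invoice, including its line items, is read in one access
total = sum(line["qty"] * line["price"] for line in invoice["lines"])
print(round(total, 2))  # 6.0
```

Reading or writing the invoice touches one record rather than one header row plus N detail rows, which is the source of the atomicity and I/O advantages described below.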
The advantage is that the atomicity of the Invoice (conceptual) and the Invoice (data representation) are one-to-one. This also results in fewer reads, less referential integrity issues, and a dramatic decrease in the hardware needed to support a given transaction volume.
In the 1990s, the object-oriented programming paradigm was applied to database technology, creating a new database model known as object databases. This aims to avoid the object–relational impedance mismatch – the overhead of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). Even further, the type system used in a particular application can be defined directly in the database, allowing the database to enforce the same data integrity invariants. Object databases also introduce the key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.
A variety of approaches have been tried for storing objects in a database. Some[which?] products have approached the problem from the application programming end, by making the objects manipulated by the program persistent. This typically requires the addition of some kind of query language, since conventional programming languages do not have the ability to find objects based on their information content. Others[which?] have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.
Object databases suffered because of a lack of standardization: although standards were defined by ODMG, they were never implemented well enough to ensure interoperability between products. Nevertheless, object databases have been used successfully in many applications: usually specialized applications such as engineering databases or molecular biology databases rather than mainstream commercial data processing. However, object database ideas were picked up by the relational vendors and influenced extensions made to these products and indeed to the SQL language.
An alternative to translating between objects and relational databases is to use an object–relational mapping (ORM) library.
Data management comprises all disciplines related to handling data as a valuable resource; it is the practice of managing an organization's data so it can be analyzed for decision making.[1]
The concept of data management emerged alongside the evolution of computing technology. In the 1950s, as computers became more prevalent, organizations began to grapple with the challenge of organizing and storing data efficiently. Early methods relied on punch cards and manual sorting, which were labor-intensive and prone to errors. The introduction of database management systems in the 1970s marked a significant milestone, enabling structured storage and retrieval of data.
By the 1980s, relational database models revolutionized data management, emphasizing the importance of data as an asset and fostering a data-centric mindset in business. This era also saw the rise of data governance practices, which prioritized the organization and regulation of data to ensure quality and compliance. Over time, advancements in technology, such as cloud computing and big data analytics, have further refined data management, making it a cornerstone of modern business operations.
As of 2025[update], data management encompasses a wide range of practices, from data storage and security to analytics and decision-making, reflecting its critical role in driving innovation and efficiency across industries.[2]
The Data Management Body of Knowledge (DMBoK), developed by the Data Management Association (DAMA), outlines key knowledge areas that serve as the foundation for modern data management practices, suggesting a framework for organizations to manage data as a strategic asset.
Setting policies, procedures, and accountability frameworks to ensure that data is accurate, secure, and used responsibly throughout the organization.
Focuses on designing the overall structure of data systems. It ensures that data flows are efficient and that systems are scalable, adaptable, and aligned with business needs.
This area centers on creating models that logically represent data relationships. It’s essential for both designing databases and ensuring that data is structured in a way that facilitates analysis and reporting.
Deals with the physical storage of data and its day-to-day management. This includes everything from traditional data centers to cloud-based storage solutions and ensuring efficient data processing.
Ensures that data from various sources can be seamlessly shared and combined across multiple systems, which is critical for comprehensive analytics and decision-making.
Focuses on managing unstructured data such as documents, multimedia, and other content, ensuring that it is stored, categorized, and easily retrievable.
Involves consolidating data into repositories that support analytics, reporting, and business insights.
Manages data about data, including definitions, origin, and usage, to enhance the understanding and usability of the organization’s data assets.
Dedicated to ensuring that data remains accurate, complete, and reliable, this area emphasizes continuous monitoring and improvement practices.
Reference data comprises standardized codes and values for consistent interpretation across systems. Master data management (MDM) governs and centralizes an organization’s critical data, ensuring a unified, reliable information source that supports effective decision-making and operational efficiency.
Data security refers to a comprehensive set of practices and technologies designed to protect digital information and systems from unauthorized access, use, disclosure, modification, or destruction. It encompasses encryption, access controls, monitoring, and risk assessments to maintain data integrity, confidentiality, and availability.
Data privacy involves safeguarding individuals’ personal information by ensuring its collection, storage, and use comply with consent, legal standards, and confidentiality principles. It emphasizes protecting sensitive data from misuse or unauthorized access while respecting users' rights.
The distinction between data and derived value is illustrated by the "information ladder" or the DIKAR model.
The "DIKAR" model stands for Data, Information, Knowledge, Action, and Result. It is a framework used to bridge the gap between raw data and actionable outcomes. The model emphasizes the transformation of data into information, which is then interpreted to create knowledge. This knowledge guides actions that lead to measurable results. DIKAR is widely applied in organizational strategies, helping businesses align their data management processes with decision-making and performance goals. By focusing on each stage, the model ensures that data is effectively utilized to drive informed decisions and achieve desired outcomes. It is particularly valuable in technology-driven environments.[3]
The "information ladder" illustrates the progression from data (raw facts) to information (processed data), knowledge (interpreted information), and ultimately wisdom (applied knowledge). Each step adds value and context, enabling better decision-making. It emphasizes the transformation of unstructured inputs into meaningful insights for practical use.[4]
In research, data management refers to the systematic process of handling data throughout its lifecycle. This includes activities such as collecting, organizing, storing, analyzing, and sharing data to ensure its accuracy, accessibility, and security.
Effective data management also involves creating a data management plan (DMP), addressing issues like ethical considerations, compliance with regulatory standards, and long-term preservation. Proper management enhances research transparency, reproducibility, and the efficient use of resources, ultimately contributing to the credibility and impact of research findings. It is a critical practice across disciplines to ensure data integrity and usability both during and after a research project.[5]
Big data refers to the collection and analysis of massive sets of data. While big data is a recent phenomenon, the requirement for data to aid decision-making traces back to the early 1970s with the emergence of decision support systems (DSS). These systems can be considered the initial iteration of data management for decision support.[6]
Studies indicate that customer transactions account for a 40% increase in the data collected annually, which means that financial data has a considerable impact on business decisions. Therefore, modern organizations are using big data analytics to identify 5 to 10 new data sources that can help them collect and analyze data for improved decision-making. Jonsen (2013) explains that organizations using average analytics technologies are 20% more likely to gain higher returns compared to their competitors who have not introduced any analytics capabilities in their operations. Also, IRI reported that the retail industry could experience an increase of more than $10 billion each year resulting from the implementation of modern analytics technologies. Therefore, the following hypothesis can be proposed: Economic and financial outcomes can impact how organizations use data analytics tools.
Query by Example (QBE) is a database query language for relational databases. It was devised by Moshé M. Zloof at IBM Research during the mid-1970s, in parallel to the development of SQL.[1] It is the first graphical query language, using visual tables where the user would enter commands, example elements and conditions. Many graphical front-ends for databases use the ideas from QBE today. Originally limited only to retrieving data, QBE was later extended to allow other operations, such as inserts, deletes and updates, as well as creation of temporary tables.
The motivation behind QBE is that a parser can convert the user's actions into statements expressed in a database manipulation language, such as SQL. Behind the scenes, it is this statement that is actually executed. A suitably comprehensive front-end can minimize the burden on the user to remember the finer details of SQL, and it is easier and more productive for end-users (and even programmers) to select tables and columns by clicking on them rather than typing in their names.
In the context of information retrieval, QBE has a somewhat different meaning. The user can submit a document, or several documents, and ask for "similar" documents to be retrieved from a document database (see search by multiple examples[2]). Similarity search is based on comparing document vectors (see the vector space model).
QBE represents seminal work in end-user development, frequently cited in research papers as an early example of this topic.
Currently, QBE is supported in several relational database front ends, notably Microsoft Access, which implements "Visual Query by Example", as well as Microsoft SQL Server Enterprise Manager. It is also implemented in several object-oriented databases (e.g. in db4o[3]).
QBE is based on the logical formalism called tableau query, although QBE adds some extensions to it, much as SQL is based on the relational algebra.
An example using the Suppliers and Parts database is given here to illustrate how QBE works.
The term also refers to a general technique influenced by Zloof's work whereby only items with search values are used to "filter" the results. It provides a way for a software user to perform queries without having to know a query language (such as SQL). The software can automatically generate the queries for the user (usually behind the scenes). Here are two examples based on a Contacts table with the following text (character) columns: Name, Address, City, State, and Zipcode:
Resulting SQL:
Note how blank items do not generate SQL terms. Since "Address" is blank, no clause is generated for it.
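The filtering behavior just described can be sketched in a few lines. This is a hypothetical illustration only (the table and column names echo the Contacts example above; no particular QBE product is implied), showing how blank form fields simply contribute no WHERE clause:

```python
# Hypothetical sketch of QBE-style query generation: only non-blank
# form fields contribute WHERE clauses. Names are illustrative.
def qbe_to_sql(table, form):
    """Build a SELECT statement from a dict of example values."""
    clauses = [f"{col} = '{val}'" for col, val in form.items() if val.strip()]
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql

print(qbe_to_sql("Contacts", {"Name": "Smith", "Address": "", "City": "Boise"}))
# SELECT * FROM Contacts WHERE Name = 'Smith' AND City = 'Boise'
```

Note that naive string interpolation of values, as in this sketch, is exactly the pattern that exposes software to SQL injection; a real implementation would pass values through driver placeholders instead.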
Resulting SQL:
More advanced versions of QBE have other comparison operator options, often via a pull-down menu, such as "Contains", "Not Contains", "Starts With", "Greater-Than", and so forth.
Another approach to text comparisons is to allow one or more wildcard characters. For example, if an asterisk is designated as a wildcard character in a particular system, then searching for last names using "Rob*" would return (match) last names such as "Rob", "Robert", "Robertson", "Roberto", etc.
Resulting SQL:
In standard SQL, the percent sign functions as a wildcard in a LIKE clause. In this case, the query-by-example form-processing software would translate the asterisk to a percent sign. (An asterisk is a more common wildcard convention outside of SQL, so here the form is attempting to be more user-friendly.)
WARNING: Query-by-example software should be careful to avoid SQL injection. Otherwise, devious users may penetrate further into the database than intended by the builders of the query forms.
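As a hedged sketch of one way to honor this warning (illustrative only, not taken from any QBE product), the query generator can emit driver placeholders for the example values and check column names, which placeholders cannot express, against an allow-list:

```python
import sqlite3

# Hypothetical injection-resistant variant: values travel as driver
# placeholders; column names are validated against an allow-list.
ALLOWED_COLUMNS = {"Name", "Address", "City", "State", "Zipcode"}

def qbe_safe(table, form):
    cols = [c for c, v in form.items() if v.strip()]
    for c in cols:
        if c not in ALLOWED_COLUMNS:
            raise ValueError(f"unexpected column: {c}")
    sql = f"SELECT * FROM {table}"
    if cols:
        sql += " WHERE " + " AND ".join(f"{c} = ?" for c in cols)
    return sql, [form[c] for c in cols]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Contacts (Name, Address, City, State, Zipcode)")
conn.execute("INSERT INTO Contacts VALUES ('Smith', '', 'Boise', 'ID', '83701')")

# A hostile "example value" arrives as inert data, not as SQL text.
sql, params = qbe_safe("Contacts", {"Name": "x'; DROP TABLE Contacts; --"})
rows = conn.execute(sql, params).fetchall()  # no rows match; the table survives
```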
Browsing is a kind of orienting strategy. It is supposed to identify something of relevance for the browsing organism. In the context of humans, it is a metaphor taken from the animal kingdom. It is used, for example, about people browsing open shelves in libraries, window shopping, or browsing databases or the Internet.
In library and information science, it is an important subject, both purely theoretically and as applied science aiming at designing interfaces which support browsing activities for the user.
In 2011, Birger Hjørland provided the following definition: "Browsing is a quick examination of the relevance of a number of objects which may or may not lead to a closer examination or acquisition/selection of (some of) these objects. It is a kind of orienting strategy that is formed by our "theories", "expectations" and "subjectivity".[1]
As with any kind of human psychology, browsing can be understood in biological, behavioral, or cognitive terms on the one hand, or in social, historical, and cultural terms on the other. In 2007, Marcia Bates researched browsing from "behavioural" approaches, while Hjørland (2011a, 2011b)[2][1] defended a social view. Bates found that browsing is rooted in our history as exploratory, motile animals hunting for food and nesting opportunities. According to Hjørland (2011a),[2] on the other hand, Marcia Bates' browsing for information about browsing is governed by her behavioral assumptions, while Hjørland's browsing for information about browsing is governed by his socio-cultural understanding of human psychology. In short: human browsing is based on our conceptions and interests.
Browsing is often understood as a random activity. Dictionary.com, for example, has this definition: "to glance at random through a book, magazine, etc.".[3]
Hjørland suggests, however, that browsing is an activity that is governed by our metatheories. We may dynamically change our theories and conceptions but when we browse, the activity is governed by the interests, conceptions, priorities and metatheories that we have at that time. Therefore, browsing is not totally random.[2]
In 1997, Gary Marchionini[4] wrote: "A fundamental distinction is made between analytical and browsing strategies [...]. Analytical strategies depend on careful planning, the recall of query terms, and iterative query reformulations and examinations of results. Browsing strategies are heuristic and opportunistic and depend on recognizing relevant information. Analytic strategies are batch oriented and half duplex (turn taking) like human conversation, whereas browsing strategies are more interactive, real-time exchanges and collaborations between the information seeker and the information system. Browsing strategies demand a lower cognitive load in advance and a steadier attentional load throughout the information-seeking process."
Some sociologists, such as Berger and Zelditch in 1993, Wagner in 1984, and Wagner & Berger in 1985, have used the term "orienting strategies". They find that orienting strategies should be understood as metatheories: "Consider the very large proportion of sociological theory that is in the form of metatheory. It is discussion about theory: about what concepts it should include, about how those concepts should be linked, and about how theory should be studied. Similar to Kuhn's paradigms, theories of this sort provide guidelines or strategies for understanding social phenomena and suggest the proper orientation of the theorist to these phenomena; they are orienting strategies. Textbooks in theory frequently focus on orienting strategies such as functionalism, exchange, or ethnomethodology."[5]
Sociologists thus use metatheories as orienting strategies. We may generalize and say that all people use metatheories as orienting strategies and that this is what directs our attention and also our browsing, even when we are not conscious of it.
The FBI Seeking Terror Information list is the third major "wanted" list to have been created by the United States Department of Justice's Federal Bureau of Investigation to be used as a primary tool for publicly identifying and tracking down suspected terrorists operating against United States nationals at home and abroad. The first preceding list for this purpose was the FBI Ten Most Wanted Fugitives list. In 2001, after the September 11 attacks, that list was supplanted by the FBI Most Wanted Terrorists list, for the purpose of listing fugitives who are specifically wanted for acts of terrorism.
Since its inception in January 2002, the Seeking Information list has also served this purpose, but with the big difference from the two earlier lists being that the suspected terrorists on this third list need not be fugitives indicted by grand juries in the United States district courts. Such lower-level guidelines allow for a much quicker response time by the FBI to deliver the early known information, often very limited, to the public as quickly as possible. As the name of this list implies, the FBI's intent is to acquire any critical information from the public, as soon as possible, about the suspected terrorists, in order to prevent any future attacks that may be in the planning stages.
All three of the major wanted lists now appear on the FBI web site along with several other types of wanted lists as well. All such FBI lists are grouped together under the heading "Wanted by the FBI."[1]
The FBI Seeking Information – War on Terrorism list has roots in the two earlier fugitive-tracking FBI lists. During the 1990s in particular, the FBI began using the Ten Most Wanted list to profile some major terrorists, including Ramzi Yousef and Osama bin Laden among others, such as the 1988 mass-murder bombers of Pan Am Flight 103 over Lockerbie, Scotland.
In addition to these Justice Department fugitive programs, an even earlier method of terrorist tracking was created by the United States Department of State, in the Bureau of Diplomatic Security. This DoS effort is known as the "Rewards for Justice Program", which began in 1984 and originally paid monetary rewards of up to $5 million for information countering terrorism.
After 9/11, in 2001, the FBI Most Wanted Terrorists list was created, as a companion list to the extant FBI Ten Most Wanted Fugitives Program, and to the State Department's Rewards for Justice Program.
On January 14, 2002, five videos were recovered from the rubble of the home of Mohammad Atef outside Kabul, Afghanistan, showing five individuals delivering what United States Attorney General John Ashcroft described as "martyrdom messages from suicide terrorists".[2] Abd Al Rahim, one of the individuals in the films, was detained by American forces in Guantanamo Bay for seven years and allegedly tortured, though his "martyrdom message" was in fact known to be a video documenting his torture by Al-Qaeda members who imprisoned him.[3]
NBC News said that the five videos had been recorded after the Sept. 11 terrorist attacks in the United States.[4] Of the five individuals originally listed, one has been detained by U.S. authorities, another was detained and released, a third was killed in a drone strike, a fourth died as a suicide bomber, and a fifth has not been brought into custody. None have been formally tried or convicted of any crimes in the United States.
In response, on January 17, 2002, the FBI released to the public the first Most Wanted Terrorists Seeking Information list (now known as the FBI Seeking Information – Terrorism list), in order to profile the five wanted terrorists about whom very little was known, but who were suspected of plotting additional terrorist attacks in martyrdom operations. The videos were shown by the FBI without sound, to guard against the possibility that the messages contained signals for other terrorists.
Ashcroft called upon people worldwide to help "identify, locate and incapacitate terrorists who are suspected of planning additional attacks against innocent civilians." "These men could be anywhere in the world," he said. Ashcroft added that an analysis of the audio suggested "the men may be trained and prepared to commit future suicide terrorist acts."[2]
On that day, Ramzi bin al-Shibh was one of only four known names among the five. Ashcroft said not much was known about any of them except bin al-Shibh.[2] The fifth wanted terrorist was identified a week later as Abderraouf Jdey, alias Al Rauf Bin Al Habib Bin Yousef Al-Jiddi.
The initial five terrorists on videos from the Atef rubble profiled on the list were:
A week after the initial Afghanistan martyrdom videos were released, the FBI had identified the fifth name, al-Jiddi, or Jdey, a resident of Montreal, Quebec, Canada. An international manhunt was launched January 25, 2002, for his companion, a Canadian citizen named Faker Boussora, then 37. U.S. officials said the two Tunisian-born Canadians were part of a Canadian group plotting to kill more civilians.
Added to the list on January 25, 2002, was:
On February 11, 2002, the FBI added an additional 17 terrorists to the list. But several days later, on February 14, 2002, six of the names were removed, and the FBI re-published the list with only eleven names and photos, because it was discovered that confusion over transliteration had initially failed to reveal that the six removed wanted terrorists were already in prison in Yemen.[7] According to the FBI report, as a result of U.S. military operations in Afghanistan and ongoing interviews of detainees in the Guantanamo Bay detention camp, information became available on February 11, 2002, regarding threats to U.S. interests, which indicated that a planned attack might occur in the United States or against U.S. interests in Yemen on or around the next day, February 12, 2002.[8][9]
The six names identified in the Yemen plot on February 11, 2002, but removed from the list on February 14, 2002, as already in Yemen custody were:
The eleven names who were still being sought on February 14, 2002, in relation to the planned February 12, 2002, Yemen plot were:
Three of those remaining eleven suspects (Tunisi, Jeddawi, and Zumari) did not have photos on the FBI website. Along with the earlier six suspects on the list, they brought the total count outstanding for the list to seventeen at that time.[13]
The attack of February 12, 2002, never occurred, but a series of plots and attacks followed later that year in Yemen, including the suicide bombing of the Limburg, a French oil tanker, for which al-Rabeei and others were later convicted. As of 2006, all the individuals of the February 12, 2002, Yemen plot have been removed from the FBI's current main wanted page and from the official count for the Seeking Information – War on Terrorism list.
By February 2, 2003, the FBI rearranged its entire wanted lists on its web site. The outstanding five martyr video suspects (including Jdey's Montreal associate Boussora) were moved to a separate linked page, titled "Martyrdom Messages/video, Seeking Information Alert" (Although both Jdey and Boussora were later returned to the main FBI list page). Additionally, the remaining eight Yemen plot suspects were archived to a linked page titled, "February 2002, Seeking Information Alert". Around this time the FBI also changed the name of the list, to the FBI "Seeking Information – War on Terrorism", to distinguish it from its other wanted list of "Seeking Information", which the FBI already uses for ordinary fugitives, those who are not terrorists.
Along with the re-arrangement, the FBI also continued to add new fugitive names to the list, including one member of The Portland Seven terror cell.[14]
By June 2003, several new terrorist suspects were added:[16]
Two new additions to the list were introduced by September 5, 2003.[19] In addition, Jdey was also moved onto the main list page, from the earlier archived 2002 group:[20]
On May 26, 2004, United States Attorney General John Ashcroft and FBI Director Robert Mueller announced that reports indicated that seven al-Qaeda members were planning a terrorist action for the summer or fall of 2004. The alleged terrorists listed on that date were Ahmed Khalfan Ghailani, Fazul Abdullah Mohammed, and Abderraouf Jdey, along with Amer El-Maati, Aafia Siddiqui, Adam Yahiye Gadahn, and Adnan G. el Shukrijumah. The first two had been listed as FBI Most Wanted Terrorists since 2001, indicted for their roles in the 1998 U.S. embassy bombings. Jdey had been on the FBI's "Seeking Information" wanted list since January 17, 2002, el-Maati since February 2003, and Siddiqui and Shukrijumah also since early 2003. Gadahn was added to the Seeking Information list as well.[21]
23 people, 12 of them al-Qaeda members, escaped from a Yemeni jail on February 3, 2006, according to a BBC report.[18] On February 23, 2006, the U.S. FBI confirmed the escape, as it issued a national press release naming some of the escapees as new Most Wanted Terrorists, and also one of the escapees, Abdullah Al-Rimi, as a new addition to the Seeking Information list. He is being sought for questioning relating to any knowledge he might have of the 2000 attack on the USS Cole.[24]
With this one addition below, as of February 23, 2006, the total count on the outstanding Seeking Information list stood at eight.
The very next day, on February 24, 2006, the FBI added an additional three names to the Seeking Information – War on Terrorism list, most notably Abu Musab al-Zarqawi, the notorious leader of Al-Qaeda in Iraq.[25] This marked the first time that al-Zarqawi had appeared on any of the three major FBI wanted lists. On June 8, 2006, ABC News reported that Abu Musab al-Zarqawi had been confirmed killed in Baghdad in a bombing raid the day before by a United States task force. His death was confirmed by multiple sources in Iraq, including the United States government.
Saleh Nabhan was wanted for questioning for attacks in Kenya in 2002. Noordin Top was a member of the Jemaah Islamiyah group, which was involved in bombings in Indonesia between 2002 and 2004. With these three additions, as of February 24, 2006, the total count on the outstanding Seeking Information list stood at ten.
Information foraging is a theory that applies ideas from optimal foraging theory to understand how human users search for information. The theory is based on the assumption that, when searching for information, humans use "built-in" foraging mechanisms that evolved to help our animal ancestors find food. Importantly, a better understanding of human search behavior can improve the usability of websites or any other user interface.
In the 1970s, optimal foraging theory was developed by anthropologists and ecologists to explain how animals hunt for food. It suggested that the eating habits of animals revolve around maximizing energy intake over a given amount of time. For every predator, certain prey are worth pursuing, while others would result in a net loss of energy.
In the early 1990s, Peter Pirolli and Stuart Card from PARC noticed the similarities between users' information-searching patterns and animal food-foraging strategies. Working together with psychologists to analyze users' actions and the information landscape that they navigated (links, descriptions, and other data), they showed that information seekers use the same strategies as food foragers.
In the late 1990s, Ed H. Chi worked with Pirolli, Card, and others at PARC to further develop information scent ideas and algorithms, and to apply these concepts in real interactive systems, including the modeling of web users' browsing behavior, the inference of information needs from web visit log files, and the use of information scent concepts in reading and browsing interfaces.
"Informavores" constantly make decisions on what kind of information to look for, whether to stay at the current site to try to find additional information or move on to another site, which path or link to follow to the next information site, and when to finally stop the search. Although human cognition is not a result of evolutionary pressure to improve Web use, survival-related traits to respond quickly to partial information and to reduce energy expenditure lead users to optimize their searching behavior and, simultaneously, to minimize the thinking required.
The most important concept in the information foraging theory is information scent.[1][2] As animals rely on scents to indicate the chances of finding prey in the current area and to guide them to other promising patches, so do humans rely on various cues in the information environment to get similar answers. Human users estimate how much useful information they are likely to get on a given path, and after seeking information they compare the actual outcome with their predictions. When the information scent stops getting stronger (i.e., when users no longer expect to find useful additional information), the users move to a different information source.
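As a toy illustration of the idea (a simplification of my own, not drawn from Pirolli and Card's actual models), the information scent of a link can be approximated as the lexical overlap between the user's goal terms and the link's cue words:

```python
# Toy information-scent estimate: Jaccard similarity between the words
# of the user's goal and the words of a link's label. The goal and link
# strings below are invented examples.
def scent(goal, cue):
    g, c = set(goal.lower().split()), set(cue.lower().split())
    return len(g & c) / len(g | c) if g | c else 0.0

goal = "cheap flights to tokyo"
links = ["discount airfare tokyo deals", "hotel reviews kyoto", "cheap flights search"]
best = max(links, key=lambda link: scent(goal, link))  # link with strongest scent
```

Real models derive scent from the semantic relatedness of words learned from large text corpora rather than from raw word overlap, but the foraging logic, following the cue that promises the most useful information, is the same.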
Some tendencies in the behaviour of web users are easily understood from the information foraging theory standpoint. On the Web, each site is a patch and information is the prey. Leaving a site is easy, but finding good sites has not always been as easy. Advanced search engines have changed this fact by reliably providing relevant links, altering the foraging strategies of the users. When users expect that sites with lots of information are easy to find, they have less incentive to stay in one place. The growing availability of broadband connections may have a similar effect: always-on connections encourage this behavior, short online visits to get specific answers.
Attempts have been made to develop computational cognitive models to characterize information foraging behavior on the Web.[3][4][5] These models assume that users perceive the relevance of information based on some measures of information scent, which are usually derived with statistical techniques that extract the semantic relatedness of words from large text databases. Recently these information foraging models have been extended to explain social information behavior.[6][7][8] See also models of collaborative tagging.
Onboarding or organizational socialization is the American term for the mechanism through which new employees acquire the necessary knowledge, skills, and behaviors to become effective organizational members and insiders. In dialects other than American English, such as British and Australasian English, this is referred to as "induction".[1] In the United States, up to 25% of workers are organizational newcomers engaged in an onboarding process.[2]
Tactics used in this process include formal meetings, lectures, videos, printed materials, or computer-based orientations that outline the operations and culture of the organization that the employee is entering into. This process is known in other parts of the world as an 'induction'[3] or training.[4]
Studies have documented that the onboarding process is important to enhancing employee retention, improving productivity, and fostering a positive organizational culture.[5] Socialization techniques such as onboarding lead to positive outcomes for new employees. These include higher job satisfaction, better job performance, greater organizational commitment, and reductions in occupational stress and intent to quit.[6][7][8]
The term "onboarding" is management jargon coined in the 1970s.[9]
Researchers separate the process of onboarding into three parts: new employee characteristics, new employee behaviors, and organizational efforts.[10]
New employee characteristics attempt to identify key personality traits in onboarding employees that the business views as beneficial:
Finally, employees are segmented based on employee experience level, as it has a material effect on understanding and the ability to assimilate into a new role.
New employee behaviors refer to the process of encouraging and identifying behaviors that are viewed as beneficial to company culture and the onboarding process.
Two examples of these behaviors are building relationships and seeking information and feedback.[1]
Information seeking occurs when new employees ask questions of their co-workers and superiors in an effort to learn about their new job and the company's norms, expectations, procedures, and policies. This is viewed as beneficial throughout the onboarding process and beyond, into the characteristics of a functional employee more generally.[14][15]
Feedback seeking is similar to information seeking but refers to new employee efforts to gauge how to behave in their new organization. A new employee may ask co-workers or superiors for feedback on how well he or she is performing certain job tasks or whether certain behaviors are appropriate in the social and political context of the organization. In seeking constructive criticism about their actions, new employees learn what kinds of behaviors are expected, accepted, or frowned upon within the company or work group.[16] Instances of feedback inquiry vary across cultural contexts, such that individuals high in self-assertiveness and cultures low in power distance report more feedback seeking than newcomers in cultures where self-assertiveness is low and power distance is high.[17]
Also called networking, relationship building involves an employee's efforts to develop camaraderie with co-workers and even supervisors. This can be achieved informally through simply talking to their new peers during a coffee break or through more formal means such as taking part in pre-arranged company events.
Positive communication and relationships between employees and supervisors are important for worker morale. The way in which a message is delivered affects how supervisors develop relationships and feelings about employees. When developing a relationship, personal reputation, delivery style, and message content all play important roles in the perceptions between supervisors and employees. Yet, when supervisors assess work competence, they primarily focus on the content of what is being discussed, or the message. Creating interpersonal, professional relationships between employees and supervisors in organizations helps foster productive working relationships.[18]
Organizations invest a great amount of time and resources into the training and orientation of new company hires. Organizations differ in the variety of socialization activities they offer in order to integrate productive new workers. Possible activities include socialization tactics, formal orientation programs, recruitment strategies, and mentorship opportunities. Socialization tactics, or orientation tactics, are designed based on an organization's needs, values, and structural policies. Organizations either favor a systematic approach to socialization, or a "sink or swim" approach – in which new employees are challenged to figure out existing norms and company expectations without guidance.
John Van Maanen and Edgar H. Schein have identified six major tactical dimensions that characterize and represent all of the ways in which organizations may differ in their approaches to socialization.
Collective socialization is the process of taking a group of new hires and giving them the same training. Examples of this include basic training/boot camp for a military organization, pledging for fraternities/sororities, and education in graduate schools. Individual socialization allows newcomers to experience unique training, separate from others. Examples of this process include but are not limited to apprenticeship programs, specific internships, and "on-the-job" training.[19]
Formal socialization refers to when newcomers are trained separately from current employees within the organization. These practices single out newcomers, or completely segregate them from the other employees. Formal socialization is witnessed in programs such as police academies, internships, and apprenticeships. Informal socialization processes involve little to no effort to distinguish the two groups. Informal tactics provide a less intimidating environment for recruits to learn their new roles via trial and error. Examples of informal socialization include on-the-job training assignments, apprenticeship programs with no clearly defined role, and using a situational approach in which a newcomer is placed into a work group with no recruit role.[19]
Sequential socialization refers to the degree to which an organization provides identifiable steps for newcomers to follow during the onboarding process. Random socialization occurs when the sequence of steps leading to the targeted role are unknown, and the progression of socialization is ambiguous; for example, while there are numerous steps or stages leading to specific organizational roles, there is no specific order in which the steps should be taken.[19]
This dimension refers to whether or not the organization provides a timetable to complete socialization. Fixed socialization provides a new hire with the exact knowledge of the time it will take to complete a given passage. For instance, some management trainees can be put on "fast tracks", where they are required to accept assignments on an annual basis, despite their own preferences. Variable techniques allow newcomers to complete the onboarding process when they feel comfortable in their position. This type of socialization is commonly associated with up-and-coming careers in business organizations; this is due to several uncontrollable factors such as the state of the economy or turnover rates which determine whether a given newcomer will be promoted to a higher level or not.[19]
A serial socialization process refers to experienced members of the organization mentoring newcomers. One example of serial socialization would be a first-year police officer being assigned patrol duties with an officer who has been in law enforcement for a lengthy period of time. Disjunctive socialization, in contrast, refers to when newcomers do not follow the guidelines of their predecessors; no mentors are assigned to inform new recruits on how to fulfill their duties.[19]
This tactic refers to the degree to which a socialization process either confirms or denies the personal identities of the new employees. Investiture socialization processes document what positive characteristics newcomers bring to the organization. When using this socialization process, the organization makes use of their preexisting skills, values, and attitudes. Divestiture socialization is a process that organizations use to reject and remove the importance of personal characteristics a new hire has; this is meant to assimilate them with the values of the workplace. Many organizations require newcomers to sever previous ties and forget old habits in order to create a new self-image based upon new assumptions.[19]
Thus, tactics influence the socialization process by defining the type of information newcomers receive, the source of this information, and the ease of obtaining it.[19]
Building on the work of Van Maanen and Schein, Jones (1986) proposed that the previous six dimensions could be reduced to two categories: institutionalized and individualized socialization. Companies that use institutionalized socialization tactics implement step-by-step programs, have group orientations, and implement mentor programs. One example of an organization using institutionalized tactics include incoming freshmen at universities, who may attend orientation weekends before beginning classes. Other organizations use individualized socialization tactics, in which the new employee immediately starts working on his or her new position and figures out company norms, values, and expectations along the way. In this orientation system, individuals must play a more proactive role in seeking out information and initiating work relationships.[20]
Regardless of the socialization tactics used, formal orientation programs can facilitate understanding of company culture and introduce new employees to their work roles and the organizational social environment. Formal orientation programs consist of lectures, videotapes, and written material. More recent approaches, such as computer-based orientations and the Internet, have been used by organizations to standardize training programs across branch locations. A review of the literature indicates that orientation programs are successful in communicating the company's goals, history, and power structure.[21]
Recruitment events play a key role in identifying which potential employees are a good fit for an organization. Recruiting events allow employees to gather initial information about an organization's expectations and company culture. By providing a realistic job preview of what life inside the organization is like, companies can screen out potential employees who are clearly a poor fit, and individuals can identify which organizations are the most suitable match for their own personal values, goals, and expectations. Research has shown that new employees who receive a great amount of information about the job prior to being socialized tend to adjust better.[22] Organizations can also provide realistic job previews by offering internship opportunities.
Mentorship has demonstrated importance in the socialization of new employees.[23][24] Ostroff and Kozlowski (1993) discovered that newcomers with mentors became more knowledgeable about the organization than newcomers without them. Mentors can help newcomers better manage their expectations and feel comfortable with their new environment through advice-giving and social support.[25] Chatman (1991) found that newcomers are more likely to have internalized the key values of their organization's culture if they have spent time with an assigned mentor and attended company social events. The literature has also suggested the importance of demographic matching between organizational mentors and mentees.[23] Enscher & Murphy (1997) examined the effects of similarity (race and gender) on the amount of contact and quality of mentor relationships.[26] What often separates rapid onboarding programs from their slower counterparts is not the availability of a mentor but the presence of a "buddy": someone the newcomer can comfortably ask questions that are either trivial ("How do I order office supplies?") or politically sensitive ("Whose opinion really matters here?").[2] Buddies can help establish relationships with co-workers in ways that cannot always be facilitated by a newcomer's manager.[2]
Online onboarding, i.e., digital onboarding, refers to onboarding training that is carried out partially or fully online.[27][28][29] Onboarding a new employee is a process through which a new hire gets to know the company and its culture and receives the means and knowledge needed to become a productive team member.[30] By onboarding online, organizations can use technology to track the onboarding process, automate basic forms, follow new employees' progress, and see when they may need additional help during the online onboarding training.[21]
Traditional face-to-face onboarding is often a one-way conversation, but online onboarding can make the onboarding process a more worthwhile experience for new hires, and it is considered to have several advantages over the traditional face-to-face approach.[28]
Compared with the traditional onboarding process, online onboarding requires more thought and more structured processes to be adequate and functional.[29] Unlike on-site onboarding, online onboarding offers no face-to-face interaction between the onboarding trainer and the new employee.[32] Traditional onboarding also allows better communication and the development of personal connections, and it keeps new hires more invested in the process than online onboarding does.[33]
Role clarity describes a new employee's understanding of their job responsibilities and organizational role. One of the goals of an onboarding process is to aid newcomers in reducing uncertainty, making it easier for them to get their jobs done correctly and efficiently. Because there often is a disconnect between the main responsibilities listed in job descriptions and the specific, repeatable tasks that employees must complete to be successful in their roles, it's vital that managers are trained to discuss exactly what they expect from their employees.[34]A poor onboarding program may produce employees who exhibit sub-par productivity because they are unsure of their exact roles and responsibilities. A strong onboarding program produces employees who are especially productive; they have a better understanding of what is expected of them. Organizations benefit from increasing role clarity for a new employee. Not only does role clarity imply greater productivity, but it has also been linked to both job satisfaction and organizational commitment.[35]
Self-efficacy is the degree to which new employees feel capable of successfully completing and fulfilling their responsibilities. Employees who feel they can get the job done fare better than those who feel overwhelmed in their new positions; research has found that job satisfaction, organizational commitment, and turnover are all correlated with feelings of self-efficacy.[7] Research suggests that social environments which encourage teamwork and employee autonomy help increase feelings of competence; these feelings also result from support from co-workers, with managerial support having less impact on feelings of self-efficacy.[36]
Social acceptance gives new employees the support needed to be successful. While role clarity and self-efficacy are important to a newcomer's ability to meet the requirements of a job, the feeling of "fitting in" can do a lot for one's view of the work environment and has been shown to increase commitment to an organization and decrease turnover.[7] For onboarding to be effective, employees must help in their own onboarding process by interacting with other coworkers and supervisors socially and by involving themselves in functions with other employees.[21] The expected length of hire also shapes social acceptance, often by influencing how much an employee is willing to change to maintain group closeness. Individuals hired into an expected long-term position are more likely to work toward fitting in with the main group and avoiding major conflicts, whereas employees expected to work only in the short term are often less invested in maintaining harmony with peers. The level of acceptance from existing employee groups thus depends on the future job prospects of the new hire and their willingness to fit in.[37]
Identity affects social acceptance as well. Individuals with a marginalized identity who feel they are not accepted suffer negative consequences. It has been shown that when LGBT employees conceal their identities at work, they are at higher risk for mental health problems as well as physical illness.[38][39] They are also more likely to experience low satisfaction and commitment at their job.[40][41] Employees with disabilities may struggle to be accepted in the workplace because of coworkers' beliefs about their capability to complete their tasks.[42] Black employees who are not accepted in the workplace and face discrimination experience decreased job satisfaction, which can cause them to perform poorly, resulting in monetary and personnel costs to organizations.[43]
Knowledge of organizational culture refers to how well a new employee understands a company's values, goals, roles, norms, and overall organizational environment. For example, some organizations may have very strict, yet unspoken, rules of how interactions with superiors should be conducted or whether overtime hours are the norm and an expectation. Knowledge of one's organizational culture is important for the newcomer looking to adapt to a new company, as it allows for social acceptance and aids in completing work tasks in a way that meets company standards. Overall, knowledge of organizational culture has been linked to increased satisfaction and commitment, as well as decreased turnover.[44]
Historically, organizations have overlooked the influence of business practices in shaping enduring work attitudes and have underestimated their impact on financial success.[45] Employees' job attitudes are particularly important from an organization's perspective because of their link to employee engagement, productivity, and performance on the job. Employee engagement attitudes, such as organizational commitment or satisfaction, are important factors in an employee's work performance and translate into strong monetary gains for organizations. As research has demonstrated, individuals who are satisfied with their jobs and show organizational commitment are likely to perform better and have lower turnover rates.[45][46] Unengaged employees are very costly to organizations in terms of slowed performance and potential rehiring expenses. The onboarding process can have both short-term and long-term outcomes. Short-term outcomes include self-efficacy, role clarity, and social integration: self-efficacy is the confidence a new employee has when going into a new job; role clarity is the expectation and knowledge they have about the position; and social integration covers the new relationships they form, and how comfortable they are in those relationships, once they have secured the position. Long-term outcomes consist of organizational commitment and job satisfaction. How satisfied the employee is after onboarding can either help the company or hold it back.[47]
The outcomes of organizational socialization have been positively associated with the process of uncertainty reduction, but they are not desirable to all organizations. Jones (1986) and Allen and Meyer (1990) found that socialization tactics were related to commitment but negatively correlated with role clarity.[20][48] Because formal socialization tactics shield the newcomer from their full responsibilities while "learning the ropes," there is a potential for role confusion once the new hire fully enters the organization. In some cases, organizations desire a certain level of person-organization misfit in order to achieve outcomes via innovative behaviors.[10] Depending on the culture of the organization, it may be more desirable to increase ambiguity, despite the potentially negative connection with organizational commitment.
Additionally, socialization researchers have had major concerns over the length of time that it takes newcomers to adjust. There has been great difficulty determining the role that time plays, but once the length of the adjustment is determined, organizations can make appropriate recommendations regarding what matters most in various stages of the adjustment process.[10]
Further criticisms concern the use of special orientation sessions to educate newcomers about the organization and strengthen their organizational commitment. These sessions tend to be formal and ritualistic, and studies have found them unpleasant or traumatic.[49] Orientation sessions are a frequently used socialization tactic; however, employees have not found them helpful, nor has any research provided evidence for their benefits.[50][51][52]
Executive onboarding is the application of general onboarding principles to helping new executives become productive members of an organization. It involves acquiring, accommodating, assimilating, and accelerating new executives.[53] Hiring teams emphasize the importance of making the most of the new hire's "honeymoon" stage in the organization, a period described as either the first 90 to 100 days or the first full year.[54][55][56]
Effective onboarding of new executives is an important contribution that hiring managers, direct supervisors, or human resource professionals make to long-term organizational success; executive onboarding done right can improve productivity and executive retention and build corporate culture. Without effective socialization, 40 percent of executives hired at the senior level are pushed out, fail, or quit within 18 months.[57]
Onboarding is valuable for externally recruited executives, i.e., those recruited from outside the organization. It may be difficult for such individuals to uncover personal, organizational, and role risks in complicated situations when they lack formal onboarding assistance.[58] Onboarding is also an essential tool for executives promoted into new roles and/or transferred from one business unit to another.[59]
The effectiveness of socialization varies depending on the structure of and communication within the organization and on the ease of joining or leaving the organization.[60] These are dimensions along which online organizations differ from conventional ones. Online communication makes the development and maintenance of social relationships with other group members difficult and weakens organizational commitment.[61][62] Joining and leaving online communities typically involves less cost than joining or leaving a conventional employment organization, which results in lower levels of commitment.[63]
Socialization processes in most online communities are informal and individualistic compared with socialization in conventional organizations.[64] For example, lurkers in online communities typically have no opportunities for formal mentorship, because they are less likely to be known to existing members of the community. Another example is WikiProjects, the task-oriented groups in Wikipedia, which rarely use institutional socialization tactics to socialize new members who join them,[65] as they rarely assign the new member a mentor or provide clear guidelines. A third example is the socialization of newcomers to the Python open-source software development community:[66] even though there exist clear workflows and distinct social roles, the socialization process is still informal.
Scholars at MIT Sloan suggest that practitioners should design an onboarding strategy that takes individual newcomer characteristics into consideration and encourages proactive behaviors, such as information seeking, that help facilitate the development of role clarity, self-efficacy, social acceptance, and knowledge of organizational culture. Research has consistently shown that doing so produces valuable outcomes such as high job satisfaction (the extent to which one enjoys the nature of one's work), organizational commitment (the connection one feels to an organization), and job performance in employees, as well as lower turnover rates and decreased intent to quit.[67]
In terms of structure, evidence shows that formal institutionalized socialization is the most effective onboarding method.[21] New employees who complete these kinds of programs tend to experience more positive job attitudes and lower levels of turnover than those who undergo individualized tactics.[10][68] Evidence also suggests that in-person onboarding techniques are more effective than virtual ones. Though it may initially appear less expensive for a company to use a standard computer-based orientation program, previous research has demonstrated that employees learn more about their roles and company culture through face-to-face orientation.[69]
Collaboration (from Latin com- "with" + laborare "to labor", "to work") is the process of two or more people, entities, or organizations working together to complete a task or achieve a goal.[1] Collaboration is similar to cooperation. The form of leadership can be social within a decentralized and egalitarian group.[2] Teams that work collaboratively often access greater resources, recognition, and rewards when facing competition for finite resources.[3]
Structured methods of collaboration encourage introspection of behavior and communication.[2] Such methods aim to increase the success of teams as they engage in collaborative problem-solving. Collaboration can also occur between parties with opposing goals, a notion captured by the term adversarial collaboration, though this is not a common use of the term. In its applied sense, "[a] collaboration is a purposeful relationship in which all parties strategically choose to cooperate in order to accomplish a shared outcome".[4] Trade between nations is a form of collaboration between two societies which produce and exchange different portfolios of goods.
Trade began in prehistoric times and continues because it benefits all of its participants. Prehistoric peoples bartered goods and services with each other without a modern currency. Peter Watson dates the history of long-distance commerce from circa 150,000 years ago.[5] Trade exists because different communities have a comparative advantage in the production of tradable goods.
The Roman Empire, which lasted from 31 BC until (in the east) 1453 CE and spanned around fifty present-day countries, ruled through collaboration as well as visible control. The growth of trade was supported by the stable administration of the Romans.[6] Evidence shows that the Roman Empire and Julius Caesar were influenced by the Greek writer Xenophon's The Education of Cyrus on leadership,[6] which held that 'social bonds, not command and control, were to be the primary mechanisms of governance'. Classics professor Emma Dench notes that the Roman Empire extended its citizenship "to enemies, former enemies of state, to people who'd helped them. The Romans were incredibly good at co-opting people and ideas."[7] The Romans created a stable empire that benefited both ruled and allied countries. The gold and silver currencies created by the Romans supported a market economy, enabling trade and taxation within the empire.
In Hutterite communities, housing units are built and assigned to individual families but belong to the colony, with little personal property. Meals are taken by the entire colony in a common long room.[8]
The Oneida Community practiced Communalism (in the sense of communal property and possessions) and Mutual Criticism, where every member of the community was subject to criticism by committee or by the community as a whole during a general meeting. The goal was to remove bad character traits.[9]
A kibbutz is an Israeli collective community. The movement combines socialism and Zionism, seeking a form of practical Labor Zionism. Choosing communal life, and inspired by their own ideology, kibbutz members developed a communal mode of living. The kibbutzim lasted for several generations as utopian communities, although most became capitalist enterprises and regular towns.[10]
The Manhattan Project was an Allied collaboration during World War II that developed the first atomic bomb, a joint effort of the United States, the United Kingdom, and Canada.
The value of this project as an influence on organized collaboration is attributed toVannevar Bush. In early 1940, Bush lobbied for the creation of theNational Defense Research Committee. Frustrated by previous bureaucratic failures in implementing technology in World War I, Bush sought to organize the scientific power of the United States for greater success.[11]
The project succeeded in developing and detonating three nuclear weapons in 1945: a test detonation of a plutonium implosion bomb on July 16 (the Trinity test) near Alamogordo, New Mexico; an enriched uranium bomb code-named "Little Boy" on August 6 over Hiroshima, Japan; and a second plutonium bomb, code-named "Fat Man", on August 9 over Nagasaki, Japan.
The members of an intentional community typically hold a common social, political, or spiritual vision. They share responsibilities and resources. Intentional communities include cohousing, residential land trusts, ecovillages, communes, kibbutzim, ashrams, and housing cooperatives. Typically, new members of an intentional community are selected by the community's existing membership, rather than by real estate agents or land owners (if the land is not owned by the community).[12]
Collaboration in indigenous communities, particularly in the Americas, often involves the entire community working toward a common goal in a horizontal structure with flexible leadership.[13] Children in some indigenous American communities collaborate with the adults. Children can be contributors in the process of meeting objectives by taking on tasks that suit their skills.[14]
Indigenous learning techniques comprise Learning by Observing and Pitching In. For example, a study found that Mayan fathers and children with traditional Indigenous ways of learning worked together in collaboration more frequently when building a 3D model puzzle than Mayan fathers with western schooling did.[14] Also, Chillihuani people of the Andes value work and create work parties in which members of each household in the community participate.[15] Children from indigenous-heritage communities often want to help around the house voluntarily.[16]
In the Mazahua Indigenous community of Mexico, school children show initiative and autonomy by contributing in their classroom, completing activities as a whole, and assisting and correcting their teacher during lectures when a mistake is made.[17] Fifth and sixth graders in the community work with the teacher to install a classroom window; the installation becomes a class project in which the students participate alongside the teacher. They all work together without needing leadership, their movements in sync and flowing. It is not a process of instruction but rather a hands-on experience in which students work together as a synchronous group with the teacher, switching roles and sharing tasks. In these communities, collaboration is emphasized, and learners are trusted to take initiative. While one works, another watches intently, and all are allowed to attempt tasks, with the more experienced stepping in to complete the more complex parts while others pay close attention.[18]
Game theory is a branch of applied mathematics, computer science, and economics that looks at situations where multiple players make decisions in an attempt to maximize their returns. The first documented discussion of game theory is in a letter written by James Waldegrave, 1st Earl Waldegrave, in 1713. Antoine Augustin Cournot's Researches into the Mathematical Principles of the Theory of Wealth in 1838 provided the first general theory. In 1928 it became a recognized field when John von Neumann published a series of papers. Von Neumann's work in game theory culminated in the 1944 book Theory of Games and Economic Behavior by von Neumann and Oskar Morgenstern.[19]
The term military-industrial complex refers to a close and symbiotic relationship among a nation's armed forces, its private industry, and associated political interests. In such a system, the military is dependent on industry to supply material and other support, while the defence industry depends on government for revenue.[20]
Skunk Works is a term used in engineering and technical fields to describe a group within an organization given a high degree of autonomy, unhampered by bureaucracy, and tasked with advanced or secret projects. One such group was created at Lockheed in 1943. The team developed highly innovative aircraft in short time frames, notably beating its first deadline by 37 days.[11]
As a discipline, project management developed from different fields including construction, engineering, and defense. In the United States, the forefather of project management is Henry Gantt, who is known for his use of the "bar" chart as a project management tool, for being an associate of Frederick Winslow Taylor's theories of scientific management, and for his study of the management of Navy ship building. His work is the forerunner to many modern project management tools, including the work breakdown structure (WBS) and resource allocation.
The 1950s marked the beginning of the modern project management era. Again, in the United States, prior to the 1950s, projects were managed on an ad hoc basis using mostly Gantt charts and informal techniques and tools. At that time, two mathematical project scheduling models were developed: (1) the "Program Evaluation and Review Technique" (PERT), developed as part of the United States Navy's Polaris missile submarine program, in conjunction with the Lockheed Corporation;[21] and (2) the "Critical Path Method" (CPM), developed in a joint venture by the DuPont Corporation and the Remington Rand Corporation for managing plant maintenance projects. These mathematical techniques quickly spread into many private enterprises.
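The core idea behind CPM can be illustrated with a short sketch. The task names, durations, and dependencies below are hypothetical, not taken from any historical project; the code simply treats a project as a dependency graph and finds the longest chain of dependent tasks, whose total duration sets the minimum possible project length.

```python
# Minimal sketch of the Critical Path Method (CPM): compute each task's
# earliest finish time, then recover the critical path -- the dependency
# chain whose total duration determines the minimum project length.

def critical_path(tasks):
    """tasks: {name: (duration, [prerequisite names])}"""
    earliest = {}  # memoized earliest finish time per task
    parent = {}    # predecessor on the longest (critical) chain

    def finish(name):
        if name not in earliest:
            duration, deps = tasks[name]
            start = 0
            parent[name] = None
            for dep in deps:
                # A task can start only after its slowest prerequisite.
                if finish(dep) > start:
                    start = finish(dep)
                    parent[name] = dep
            earliest[name] = start + duration
        return earliest[name]

    end = max(tasks, key=finish)          # task that finishes last
    path, node = [], end
    while node is not None:               # walk back along the chain
        path.append(node)
        node = parent[node]
    return list(reversed(path)), earliest[end]

# Hypothetical plant-maintenance project: (duration, prerequisites)
tasks = {
    "design":  (3, []),
    "procure": (2, ["design"]),
    "build":   (4, ["procure"]),
    "train":   (1, ["design"]),
    "deploy":  (2, ["build", "train"]),
}
path, length = critical_path(tasks)
print(path, length)  # ['design', 'procure', 'build', 'deploy'] 11
```

Note that "train" does not appear on the critical path: delaying it by a few time units would not delay the project, whereas any delay in "build" pushes out the finish date directly, which is exactly the distinction CPM was designed to expose.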
In 1969, the Project Management Institute (PMI) was formed to serve the interest of the project management industry. The premise of PMI is that the tools and techniques of project management are common even across the widespread application of projects, from the software industry to the construction industry. In 1981, the PMI Board of Directors authorized the development of what has become A Guide to the Project Management Body of Knowledge (PMBOK), standards and guidelines of practice that are widely used throughout the profession. The International Project Management Association (IPMA), founded in Europe in 1967, has undergone a similar development and instituted the IPMA Project Baseline. Both organizations are now participating in the development of a global project management standard.
However, the exorbitant cost overruns and missed deadlines of large-scale infrastructure, military R&D/procurement, and utility projects in the US demonstrate that these advances have not been able to overcome the challenges of such projects.[22]
Founded in 1933 by John Andrew Rice, Theodore Dreier, and other former faculty of Rollins College, Black Mountain College was experimental by nature and committed to an interdisciplinary approach, attracting a faculty which included leading visual artists, poets, and designers.
Operating in a relatively isolated rural location with little budget, Black Mountain fostered an informal and collaborative spirit. Innovations, relationships, and unexpected connections formed at Black Mountain had a lasting influence on the postwar American art scene, high culture, and eventually pop culture. Buckminster Fuller met student Kenneth Snelson at Black Mountain, and the result was the first geodesic dome (improvised out of slats in the school's back yard); Merce Cunningham formed his dance company; and John Cage staged his first happening.
Black Mountain College was a consciously directed liberal arts school that grew out of the progressive education movement. In its day it was a unique educational experiment for the artists and writers who conducted it, and as such an important incubator for the American avant garde.
Dr. Wolff-Michael Roth and Stuart Lee of the University of Victoria assert[23] that until the early 1990s the individual was the 'unit of instruction' and the focus of research. The two observed that researchers and practitioners then switched[24][25] to the idea that "knowing" is better thought of as a cultural practice.[26][27][28][29] Roth and Lee also claim[23] that this led to changes in learning and teaching design in which students were encouraged to share their ways of doing mathematics, history, and science with each other. In other words, children take part in the construction of consensual domains and 'participate in the negotiation and institutionalization of ... meaning'. In effect, they are participating in learning communities.
This analysis does not consider the appearance of learning communities in the United States in the early 1980s. For example, The Evergreen State College, which is widely considered a pioneer in this area, established an intercollegiate learning community in 1984. In 1985, the college established The Washington Center for Improving the Quality of Undergraduate Education, which focuses on collaborative education approaches, including learning communities as one of its centerpieces. The school later became notorious for less-successful collaborations.[30]
The romanticized notion of a lone, genius artist has existed since the time of Giorgio Vasari's Lives of the Artists, published in 1568. Vasari promulgated the idea that artistic skill was endowed upon chosen individuals by the gods, which created an enduring and largely false popular misunderstanding of many artistic processes. Artists have used collaboration to complete large-scale works for centuries, but the myth of the lone artist was not widely questioned until the 1960s and 1970s.[31]
Many artists have worked in collaborative art groups.
Ballet is a collaborative art form. It entails music, dancers, costumes, a venue, lighting, and more. Hypothetically, one person could control all of this, but most often every work of ballet is the product of collaboration. From the earliest formal works of ballet, to the great 19th-century masterpieces of Pyotr Tchaikovsky and Marius Petipa, to the 20th-century masterworks of George Balanchine and Igor Stravinsky, to today's ballet companies, strong collaborative connections between choreographers, composers, and costume designers have been essential. Within dance as an art form, there is also the collaboration between choreographer and dancer: the choreographer creates a movement in her or his head and then physically demonstrates it to the dancer, who sees it and attempts to either mimic or interpret it.[32]
Musical collaboration occurs when musicians in different places or groups work on the same piece. Typically, multiple parties (singers, songwriters, lyricists, composers, and producers) come together to create one work. One recent example (2015) is the song "FourFiveSeconds", a collaboration developed by pop idol Rihanna, Paul McCartney (former bassist, composer, and vocalist for The Beatles), and rapper/composer Kanye West. Websites and software facilitate musical collaboration over the Internet, resulting in the emergence of online bands.
Several awards exist specifically to recognize collaboration in music.
Collaboration has been a constant feature of electroacoustic music, due to the technology's complexity. Embedding technological tools into the process stimulated the emergence of new agents with new expertise: the musical assistant, the technician, the computer music designer, and the music mediator (a profession that has been described and defined in different ways over the years), aiding with writing, creating new instruments, recording, and/or performance. The musical assistant explains developments in musical research and translates artistic ideas into programming languages; finally, he or she transforms those ideas into a score or a computer program and often performs the musical piece during concerts.[33] Examples of such collaboration include Pierre Boulez and Andrew Gerzso, Alvise Vidolin and Luigi Nono, and Jonathan Harvey and Gilbert Nouno.
Although relatively rare compared with collaboration in popular music, there have been some notable examples of music written collaboratively by classical composers.
Collaboration in entertainment dates from the origin of theatrical productions, millennia ago. It takes the form of writers, directors, actors, producers, and other individuals or groups working on the same production. In the twenty-first century, new technology has enhanced collaboration. A system developed by Will Wright for the TV series Bar Karma on CurrentTV facilitates plot collaboration over the Internet. Screenwriter organizations bring together professional and amateur writers and filmmakers.
Collaboration in business can be found both within and across organizations,[35] and examples range from formalised partnerships, the use of coworking spaces where freelancers can work with others in a collaborative environment, and crowdfunding, to the complexity of a multinational corporation. Inter-organizational collaboration leads participating parties to invest resources, mutually achieve goals, share information, resources, rewards, and responsibilities, and make joint decisions and solve problems.[36] Collaboration between the public, private, and voluntary sectors can be effective in tackling complex policy problems, but may be handled more effectively by boundary-spanning teams and networks than by formal organizational structures.[37] In turn, business and management scholars have paid much attention to the importance of both formal and informal mechanisms for supporting inter-organizational collaboration.[38] They especially point to the role of contractual and relational mechanisms and the inherent tensions between the two.[39] Global manufacturer Unilever offers to collaborate with innovative start-up businesses, and its "Unilever Foundry" refers to over 400 examples of "strategic collaboration" in this field.[40] Collaborative procurement has been commended as a means of achieving financial savings and operational efficiency in the acquisition of common goods and services in the public sector,[41] and of producing mutually beneficial results in the private sector.[42] Collaboration allows for better communication within organizations and along supply chains. It is a way of coordinating different ideas from numerous people to generate a wide variety of knowledge. Collaboration with a few selected firms has been shown to positively impact firm performance and innovation outcomes.[43]
Technology has provided the internet, wireless connectivity and collaboration tools such as blogs and wikis, and has as such created the possibility of "mass collaboration". People are able to rapidly communicate and share ideas, crossing longstanding geographical and cultural boundaries. Social networks permeate business culture, where collaborative uses include file sharing and knowledge transfer. According to author Evan Rosen, command-and-control organizational structures inhibit collaboration, and replacing such structures allows collaboration to flourish.[44]
Studies have found that collaboration can increase achievement and productivity.[45] However, Bill Huber, former chair of the International Association for Contract and Commercial Management (IACCM, now World Commerce & Contracting), notes that not all companies have what he calls "collaborative DNA".[46] Huber argues that
often when companies fail to implement or sustain successful collaborative relationships, the causes can be traced to insufficient leadership support or to underdeveloped collaboration skills.[46]
Andrew Cox, formerly of Birmingham Business School and the founder of the International Institute for Advanced Purchasing and Supply (IIAPS),[47] has highlighted the dangers in thinking that collaborative relationships always produce mutually advantageous "win-win" outcomes for both buyers and sellers in commercial relationships. Cox uses case studies which show where competent buyers have used collaboration successfully to secure value for money, and other examples where "incompetent buyers" utilizing "what initially appear to be win-win outcomes" subsequently lose out to "more commercially competent suppliers".[48] In relation to one of his examples, Cox concludes that
From a perception that the buyer was in a win-win situation, it soon became apparent that it was either close to a lose-win or at best a partial win-win situation favouring the supplier.[48]
A four-year study of interorganizational collaboration in a mental health setting found that successful collaboration can be rapidly derailed through external policy steering, particularly where it undermines relations built on trust.[49][50]Collaboration is also threatened by opportunism from the business partners and the possibility of coordination failures that can derail the efforts of even well-intentioned parties.
Margarita Leib, a professor at Tilburg University in the Netherlands, wrote about how individuals working together sometimes promote dishonest behavior that prioritizes profit, like what Volkswagen did to fake vehicle emission levels. This often begins with one person lying, which incentivizes or pressures everyone else to escalate in response.[51]
In recent years, co-teaching has become more common, found in US classrooms across all grade levels and content areas.[52] Once regarded as connecting special education and general education teachers, it is now more generally defined as "…two professionals delivering substantive instruction to a diverse group of students in a single physical space."[53]
As American classrooms have become increasingly diverse, so have the challenges for educators. Due to the diverse needs of students with designated special needs, English language learners (ELL), and students of varied academic levels, teachers have developed new approaches that provide additional student support.[54][55] In practice, students remain in the classroom and receive instruction from both their general teacher and special education teachers.[52]
The 1996 report "What Matters Most: Teaching for America's Future" argued that economic success could be enhanced if students developed the capacity to learn how to "manage teams… and…work together successfully in teams".[56]
Teachers increasingly use collaborative software to establish virtual learning environments (VLEs). This allows them to share learning materials and feedback with students and, in some cases, parents. Approaches include:[57]
Writers, both in fiction and non-fiction, may cooperate on a one-time or long-term basis. It can be as simple as dual-authorship or as complex as commons-based peer production. Tools include Usenet, e-mail lists, blogs and wikis, while 'brick and mortar' examples include monographs (books) and periodicals such as newspapers, journals and magazines. One approach is for an author to publish early drafts/chapters of a work on the Internet and solicit suggestions from the world at large. This approach helped ensure that the technical aspects of the novel The Martian were as accurate as possible.[58]
The science fiction author Frederik Pohl was noted for his longtime collaborations with Cyril Kornbluth and Jack Williamson.
Collaboration in technical communication (also commonly referred to as technical writing) has become increasingly important in the creation and dissemination of technical documents in multiple technical and occupational fields, including: computer hardware and software, medicine, engineering, robotics, aeronautics, biotechnology, information technology, and finance. Collaboration in technical communication allows for greater flexibility, productivity and innovation for technical writers and the companies they work for, resulting in technical documents that are more comprehensive and accurate than documents produced by individuals. Technical communication collaboration typically occurs on shared document work-spaces (such as Google Docs), through social media sites, videoconferencing, SMS and IM, and on cloud-based authoring platforms.
Scientific collaboration rapidly advanced throughout the twentieth century, as measured by the increasing numbers of coauthors on published papers. Wagner and Leydesdorff found international collaborations to have doubled from 1990 to 2005.[3] While collaborative authorship within nations has also risen, it has done so at a slower rate and is not cited as frequently.[3] Notable examples of scientific collaboration include CERN, the International Space Station, the ITER nuclear fusion experiment, and the European Union's Human Brain Project.
Collaboration in health care is defined as health care professionals assuming complementary roles and cooperatively working together, sharing responsibility for problem-solving and making decisions to formulate and carry out plans for patient care.[59] Collaboration between physicians, nurses, and other health care professionals increases team members' awareness of each other's type of knowledge and skills, leading to continued improvement in decision making.[59] A collaborative plan is filed with each state board of medicine where the physician assistant (PA) works. This plan formally delineates the scope of practice approved by the physician.
Welfare services, including healthcare systems, have become more specialised over time and are provided by an increasing number of departments and organisations.[60] One disadvantage of this development is the fragmented supply of health and social services, which hampers integration of services, resulting in suboptimal care, higher costs due to overlaps, and poor quality of care.[61]
The current system, in which care is fragmented and delivered by several different stakeholders, increases the need for all relevant stakeholders to coordinate and collaborate both within and between organisations in order to deliver services tailored to people's needs.
This need for increased collaboration between stakeholders corresponds with the principles of people-centered care.[62]
Collaboration in technology encompasses a broad range of tools that enable groups of people to work together including social networking, instant messaging, team spaces, web sharing, audio conferencing, video, and telephony. Many large companies adopt collaboration platforms to allow employees, customers and partners to intelligently connect and interact.
Enterprise collaboration tools focus on encouraging collective intelligence and staff collaboration at the organization level, or with partners. These include features such as staff networking, expert recommendations, information sharing, expertise location, peer feedback, and real-time collaboration. At the personal level, this enables employees to enhance social awareness and their profiles and interactions. Collaboration encompasses both asynchronous and synchronous methods of communication and serves as an umbrella term for a wide variety of software packages. Perhaps the most commonly associated form of synchronous collaboration is web conferencing, but the term can encompass IP telephony, instant messaging, and rich video interaction with telepresence, as well.
The effectiveness of a collaborative effort is driven by three critical factors:
The Internet's low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate, but the wide reach of the Internet allows groups to easily form, particularly among dispersed, niche participants. An example of this is the free software movement in software development, which produced GNU and Linux from scratch and has taken over development of Mozilla and OpenOffice.org (formerly known as Netscape Communicator and StarOffice).
With the recent development of social media platforms, there has been constant and rapid growth in the use of the Internet for communication and collaboration between people. Web 2.0 has become a tool for collaborative projects, blogs, online communities, social networks, and group games. An example of how social media aids more effective collaboration is seen in the business environment.[64] Communication and collaboration create new hierarchies and wider networks for employees and partners of organisations. It also enables businesses to broaden their marketing strategies by collaborating with influencers on those social media platforms.[65]
Commons-based peer production is a term coined by Yale Law professor Yochai Benkler to describe a new model of economic production in which the creative energy of large numbers of people is coordinated (usually with the aid of the internet) into large, meaningful projects, mostly without hierarchical organization or financial compensation. He compares this to firm production (where a centralized decision process decides what has to be done and by whom) and market-based production (where tagging different prices to different jobs serves as an attractor to anyone interested in doing the job).
Examples of products created by means of commons-based peer production include Linux, a computer operating system; Slashdot, a news and announcements website; Kuro5hin, a discussion site for technology and culture; Wikipedia, an online encyclopedia; and Clickworkers, a collaborative scientific project. Another example is Socialtext, a software solution that uses tools such as wikis and weblogs and helps companies create a collaborative work environment.
The term massively distributed collaboration was coined by Mitchell Kapor, in a presentation at UC Berkeley on November 9, 2005, to describe an emerging activity of wikis, electronic mailing lists, blogs and other content-creating virtual communities online.
Wartime collaboration refers to cooperating with the enemy or enemies of one's own country. Examples include:
Collaborative learning is a situation in which two or more people learn or attempt to learn something together.[1] Unlike individual learning, people engaged in collaborative learning capitalize on one another's resources and skills (asking one another for information, evaluating one another's ideas, monitoring one another's work, etc.).[2][3] More specifically, collaborative learning is based on the model that knowledge can be created within a population where members actively interact by sharing experiences and take on asymmetric roles.[4] Put differently, collaborative learning refers to methodologies and environments in which learners engage in a common task where each individual depends on and is accountable to the others. These include both face-to-face conversations[5] and computer discussions (online forums, chat rooms, etc.).[6] Methods for examining collaborative learning processes include conversation analysis and statistical discourse analysis.[7]
Thus, collaborative learning is commonly illustrated when groups of students work together to search for understanding, meaning, or solutions, or to create an artifact or product of their learning. Furthermore, collaborative learning redefines the traditional student-teacher relationship in the classroom, which results in controversy over whether this paradigm is more beneficial than harmful.[8][9] Collaborative learning activities can include collaborative writing, group projects, joint problem solving, debates, study teams, and other activities. The approach is closely related to cooperative learning.
Collaborative learning is rooted in Lev Vygotsky's concept of the zone of proximal development. Typically there are tasks that learners can and cannot accomplish. Between these two areas is the zone of proximal development: the category of things that a learner can learn, but only with guidance. The zone of proximal development indicates which of a learner's skills are in the process of maturation. In his definition of the zone of proximal development, Vygotsky highlighted the importance of learning through communication and interactions with others rather than just through independent work.[10] This has made way for the ideas of group learning, one of which is collaborative learning.
Collaborative learning is very important in achieving critical thinking. According to Gokhale (1995), individuals are able to achieve higher levels of learning and retain more information when they work in a group rather than individually; this applies both to the facilitators of knowledge, the instructors, and to the receivers of knowledge, the students.[11] For example, Indigenous communities of the Americas illustrate that collaborative learning occurs because individual participation in learning occurs on a horizontal plane where children and adults are equal.[12]
There has been a split regarding the differences between collaborative and cooperative learning. Some believe that collaborative learning is similar to, yet distinct from, cooperative learning. While both models use a division of labor, collaborative learning requires the mutual engagement of all participants and a coordinated effort to solve the problem whereas cooperative learning requires individuals to take responsibility for a specific section and then coordinate their respective parts together.[13]Another proposed differentiation is that cooperative learning is typically used for children because it is used to understand the foundations of knowledge while collaborative learning applies to college and university students because it is used to teach non-foundations of learning. Another believed difference is that collaborative learning is a philosophy of interaction whereas cooperative learning is a structure of interaction.[14]
However, many psychologists have defined cooperative learning and collaborative learning similarly. Both are group learning mechanisms for learners to obtain a set of skills or knowledge. Notable psychologists who use this definition for both collaborative and cooperative learning include Johnson & Johnson, Slavin, and Cooper.
Often, collaborative learning is used as an umbrella term for a variety of approaches in education that involve joint intellectual effort by students, or by students and teachers, by engaging individuals in interdependent learning activities.[15] Many have found this to help students learn more effectively and efficiently than if they were to learn independently. Among the positive results of collaborative learning activities: students learn more material by engaging with one another and making sure everyone understands, retain more information from thoughtful discussion, and have a more positive attitude about learning and each other by working together.[16][17]
Encouraging collaborative learning may also help improve the learning environment in higher education. Kenneth Bruffee performed a theoretical analysis on the state of higher education in America, aiming to redefine collaborative learning in academia. He argued that including more interdependent activities not only helps students become more engaged and thoughtful learners, but teaches them that obtaining knowledge is a communal activity itself.[18]
When compared to more traditional methods where students non-interactively receive information from a teacher, cooperative, problem-based learning demonstrated improvement of student engagement and retention of classroom material. Additionally, academic achievement and student retention within classrooms are increased.[17][19] A meta-analysis comparing small-group work to individual work in K-12 and college classrooms also found that students working in small groups achieved significantly more than students working individually; optimal groups for learning tended to be three- to four-member teams, with lower-ability students working best in mixed groups and medium-ability students doing best in homogeneous groups. For higher-ability students, group ability levels made no difference.[20] In more than 40 studies of elementary, middle, and high school English classrooms, discussion-based practices improved comprehension of the text and critical-thinking skills for students across ethnic and socioeconomic backgrounds.[21] Even discussions as brief as ten minutes with three participants improved perceived understanding of key story events and characters.[22] Improvement in students' understanding of course content has also been observed at universities.[17]
The popularity of collaborative learning in the workplace[23] has increased over the last decade. With the emergence of many new collaborative tools, as well as the cost benefit of being able to reinforce learning in workers and trainees during collaborative training, many work environments are now moving away from traditional training programs, such as instructor-led sessions or online guided tutorials, toward methods that involve collaborating with older employees and giving trainees more of a hands-on approach. Collaborative learning is extremely helpful because it uses the past experiences of prior employees to help new trainees overcome different challenges.
There are many facets to collaboration in the workplace. It is critical to helping workers share information with each other and creating strategic planning documents that require multiple inputs. It also allows for forms of vertical integration to find effective ways to synchronize business operations with vendors without being forced to acquire additional businesses.[24]
Many businesses still work on the traditional instructor-and-trainee model, and as they transition from one model to the other there are many issues that still need to be worked out in the transition process:
Web technologies have been accelerating learner-centered, personalized learning environments. These help knowledge be constructed and shared, instead of just passed down by authorities and passively consumed or ignored. Technologies such as discussion threads, email and electronic bulletin boards let individuals share personal knowledge and ideas, but do not let others refine those ideas, so more collaborative tools are needed. Web 2.0 tools have been able to enhance collaborative learning like no tools before them, because they allow individuals to work together to generate, discuss and evaluate evolving ideas, and let them effortlessly find and collaborate with like-minded people.
According to a collaborative learning study conducted by Lee & Bonk (2014), many issues are still being resolved when dealing with collaborative learning in a workplace. The goal was to examine corporate personnel, including learning managers and instructors, plus the tools that they use for collaboration. The researchers conducted an online survey to see what aspects of collaborative learning should be investigated, followed by an open discussion forum with 30 corporate personnel. The results showed that collaboration is becoming very necessary in workplaces and that tools such as wikis are very commonly used. Much future work is implied in order for collaborative learning to become highly effective in the workplace. Some of the unsolved problems they identified:
It is crucial to consider the interactive processes among people, but the most critical point is the construction of new knowledge brought about through joint work.
Technology has become an important factor in collaborative learning. Over the past ten years, the Internet has allowed for a shared space for groups to communicate. Virtual environments have been critical in allowing people to communicate over long distances while still feeling part of the group. Research has been conducted on how technology has helped increase the potential of collaborative learning. One study, conducted by Elizabeth Stacey, looked at how technology affected the communication of postgraduate students studying a Master of Business Administration (MBA) using computer-mediated communication (CMC).[25] Many of these students were able to learn remotely even when they were not present on their university campus. The results of the study helped build an online learning environment model, but since this research was conducted the Internet has grown extensively, and new software is changing these means of communication.[26]
New technology has been developed that supports collaborative learning in higher education and the workplace. These tools allow for a more powerful and engaging learning environment. Chickering identified seven principles for good practice in undergraduate education.[27] Two of these principles are especially important in developing technology for collaboration.
Some examples of how technology is being increasingly integrated with collaborative learning are as follows:
Collaborative networked learning: according to Findley (1987), "Collaborative Networked Learning (CNL) is that learning which occurs via electronic dialogue between self-directed co-learners and learners and experts. Learners share a common purpose, depend upon each other and are accountable to each other for their success. CNL occurs in interactive groups in which participants actively communicate and negotiate learning with one another within a contextual framework which may be facilitated by an online coach, mentor or group leader."
Computer-supported collaborative learning(CSCL) is a relatively new educational paradigm within collaborative learning which uses technology in a learning environment to help mediate and support group interactions in a collaborative learning context.[28][29]CSCL systems use technology to control and monitor interactions, to regulate tasks, rules, and roles, and to mediate the acquisition of new knowledge.
Collaborative learning using Wikipedia: Wikipedia is an example of how collaborative learning tools can be extremely beneficial in both classroom and workplace settings. Its articles change based on how groups think, coalescing into a coherent form based on the needs of Wikipedia's users.
Collaborative learning in virtual worlds: virtual worlds by their nature provide an excellent opportunity for collaborative learning. At first, learning in virtual worlds was restricted to classroom meetings and lectures, similar to their counterparts in real life. Now collaborative learning is evolving as companies start to take advantage of unique features offered by virtual world spaces, such as the ability to record and map the flow of ideas,[18] use 3D models,[30] and use virtual-world mind-mapping tools.
There are also cultural variations in ways of collaborative learning. Research in this area has mainly focused on children in indigenous Mayan communities of the Americas, such as San Pedro, Guatemala, and in European American middle-class communities.
Generally, researchers have found that children in indigenous Mayan communities such as San Pedro typically learn through keenly observing and actively contributing to the mature activities of their community.[31] This type of learning is characterized by the learner's collaborative participation through multi-modal communication (verbal and non-verbal) and observation.[31] They are highly engaged within their community through focused observation.[32] Mayan parents believe that children learn best by observing, and so an attentive child is seen as one who is trying to learn.[32] It has also been found that these children are extremely competent and independent in self-maintenance at an early age and tend to receive little pressure from their parents.[32]
Research has found that even when Indigenous Mayan children are in a classroom setting, the cultural orientation of indigenous learners shows that observation is a preferred strategy of learning.[33]Thus children and adults in a classroom setting adopt cultural practice and organize learning collaboratively.[33]This is in contrast to the European-American classroom model, which allocates control to teachers/adults allowing them to control classroom activities.[34]
Within the European American middle-class communities, children typically do not learn through collaborative learning methods. In the classroom, these children generally learn by engaging in initiation-reply-evaluation sequences.[31]This sequence starts with the teacher initiating an exchange, usually by asking a question. The student then replies, with the teacher evaluating the student's answer.[35]This way of learning fits with European-American middle-class cultural goals of autonomy and independence that are dominant in parenting styles within European-American middle-class culture.[31]
Although learning happens in a variety of ways in indigenous communities, collaborative learning is one of the main methods used in indigenous learning styles, instead of European-American approaches to learning. These methods include learning in a horizontal plane where children and adults equally contribute to ideas and activities.
For example, Mayan people of San Pedro use collaboration in order to build upon one another's ideas and activities. Specifically, many learning practices focus on "role-switching": when learning a new task, people alternate between helpful observer and active participant. Mayan mothers do not act as teachers when completing a task with their children, but instead collaborate with children through play and other activities.[36] People of this Mayan community use the shared-endeavors method more than European-Americans, who tend to use the transmit-and-test model more often.[37] In the shared-endeavors model, people build on others' ideas and learn from them, while the transmit-and-test model is what is used in most American schools, where a teacher gives students information and then tests the students on it.[37] The shared-endeavors model is a form of collaborative learning because everyone learns from one another and can hear and share others' ideas.
In Nocutzepo, Mexico, indigenous-heritage families form collective units where it is generally agreed that children and youth engage in adult cooperative household or community economic practices such as food preparation, child care, participating in markets, agriculture, animal herding, and construction, to name a few.[37] During planting and harvesting season, entire families are out in the fields together, where children usually pitch into the activity with smaller tasks alongside adults, yet remain observant of activities done by adults, such as driving a tractor or handling an axe.[37] These children learn through imitation, observation, listening, pitching in, and doing activities in a social and cultural context.[37] When children begin to participate in the daily family and community activities, they form a sense of belonging, especially when they collaborate with adults, establishing a more mature integration with their family and community.
Indigenous people of the Americas utilize collaborative learning through their emphasis on role sharing and responsibility sharing within their communities. The Mayan community of San Pedro, Guatemala utilizes flexible leadership that allows children to have an active role in their learning.[38]Children and adults work as cohesive groups when tackling new projects.[38]Collaborative learning is prevalent in Indigenous communities due to the integration of children in the daily lives of the adults.[39]Age is not a determining factor in whether or not individuals are incorporated into collaborative efforts and learning that occurs in indigenous communities.
Learner participation is a key component of collaborative learning, as it is the method by which the learning process occurs. Thus collaborative learning occurs when children and adults in communities switch between "knowledge performers" and "observing helpers".[40] For example, when parents in an indigenous Mazahua community were assigned the task of organizing children to build a roof over a market stand in such a way that the children would learn to do it themselves, parents and children collaborated in a horizontal structure. Switching between knowledge performer and observing helper, adults and children completed the task peacefully, without assigned roles of educator and student, and the children took initiative even while adults were performing.[40]
Adults and children in indigenous communities of the Americas participate in a horizontal organizational structure, so when they work together they are reciprocals of each other.[41] This horizontal structure allows for flexible leadership, which is one of the key aspects of collaborative learning. The indigenous communities of the Americas are unique in their collaborative learning because they do not discriminate by age; instead, they encourage active participation and flexible leadership roles, regardless of age. Children and adults regularly interchange their roles within their community, which contributes to the fluidity of the learning process. In addition, Indigenous communities consider observation to be a part of the collaborative learning process.[40]
Collaborative learning can also be incorporated into university settings. For example, the Intercultural Maya University of Quintana Roo, Mexico, has a system that incorporates elders, such as grandparents, to act as tutors and as a resource for students to discuss information and knowledge regarding their own language and culture. The elders give their recommendation at the end of a semester in the decision of passing or failing a student, based on his or her behavior in the community and how well he or she is learning Maya. The system is called IKNAL, a Mayan word that implies companionship in the learning and doing process, involving several members of the community.[42]
Collaborative learning varies across the world. The traditional model for learning is instructor-based, but that model is quickly changing globally as countries compete economically. A country's history, culture, religious beliefs and politics are all aspects of its national identity, and these characteristics influence citizens' views of collaboration in both classroom and workplace settings.[43]
While the empirical research in Japan is still relatively sparse, many language educators have taken advantage of Japan's natural collectivism and experimented with collaborative learning programs.[44][45][46][47] More recently, technological advancements and their high adoption rate among students in Japan[48] have made computer-supported collaborative learning accessible.[46][49][50] Japanese students' value of friendship and their natural inclination towards reciprocity seem to support collaborative learning in Japan.[51]
Collaborative learning is also employed in the business and government sectors. For example, within the federal government of the United States, the United States Agency for International Development (USAID) is employing a collaborative project management approach that focuses on collaborating, learning and adapting (CLA). CLA involves three concepts:[56]
Collaborative software or groupware is application software designed to help people working on a common task to attain their goals. One of the earliest definitions of groupware is "intentional group processes plus software to support them."[1]
Regarding available interaction, collaborative software may be divided into real-time collaborative editing platforms, which allow multiple users to engage in live, simultaneous, and reversible editing of a single file (usually a document), and version control (also known as revision control and source control) platforms, which allow users to make parallel edits to a file while preserving every saved edit as multiple files that are variants of the original file.[citation needed]
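The distinction can be sketched in a few lines of Python. This is a hypothetical illustration, not any real product's implementation: real-time editing mutates one live, shared copy, while version control preserves each saved edit as a variant alongside the original.

```python
# Minimal sketch of the two interaction models (hypothetical, for illustration).

# Real-time collaborative editing: all users mutate one shared, live copy.
shared_doc = {"content": "Hello"}

def live_edit(doc, new_content):
    doc["content"] = new_content  # every participant sees the change at once

# Version control: each saved edit is preserved as a variant of the original.
history = [{"version": 0, "content": "Hello"}]

def commit(history, new_content):
    history.append({"version": len(history), "content": new_content})

live_edit(shared_doc, "Hello, world")
commit(history, "Hello from Alice")
commit(history, "Hello from Bob")  # parallel edit; Alice's variant is kept

assert shared_doc["content"] == "Hello, world"   # only the latest state survives
assert len(history) == 3                         # original plus both variants
```

The point of the contrast: in the real-time model only the current state exists, whereas the version-control model keeps every saved edit recoverable.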
Collaborative software is a broad concept that overlaps considerably with computer-supported cooperative work (CSCW). According to Carstensen and Schmidt (1999),[2] groupware is part of CSCW. The authors claim that CSCW, and thereby groupware, addresses "how collaborative activities and their coordination can be supported by means of computer systems."
The use of collaborative software in the workspace creates a collaborative working environment (CWE).
Collaborative software relates to the notion of collaborative work systems, which are conceived as any form of human organization that emerges any time that collaboration takes place, whether it is formal or informal, intentional or unintentional.[3] Whereas groupware or collaborative software pertains to the technological elements of computer-supported cooperative work, collaborative work systems become a useful analytical tool for understanding the behavioral and organizational variables that are associated with the broader concept of CSCW.[4][5]
Douglas Engelbart first envisioned collaborative computing in 1951 and documented his vision in 1962,[6] with working prototypes in full operational use by his research team by the mid-1960s.[7] He held the first public demonstration of his work in 1968 in what is now referred to as "The Mother of All Demos".[8] The following year, Engelbart's lab was hooked into the ARPANET, the first computer network, enabling them to extend services to a broader userbase.
Online collaborative gaming software began among early networked computer users. In 1975, Will Crowther created Colossal Cave Adventure on a DEC PDP-10 computer. As internet connections grew, so did the number of users and multi-user games. In 1978, Roy Trubshaw, a student at the University of Essex in the United Kingdom, created the game MUD (Multi-User Dungeon).
The US Government began using truly collaborative applications in the early 1990s.[9] One of the first robust applications was the Navy's Common Operational Modeling, Planning and Simulation Strategy (COMPASS).[10] The COMPASS system allowed up to six users to create point-to-point connections with one another; the collaborative session remained only while at least one user stayed active, and had to be recreated if all six logged out. MITRE improved on that model by hosting the collaborative session on a server into which each user logged. Called the Collaborative Virtual Workstation (CVW), it allowed the session to be set up in a virtual file cabinet and virtual rooms, and left as a persistent session that could be joined later.[11]
In 1996, Pavel Curtis, who had built MUDs at PARC, created PlaceWare, a server that simulated a one-to-many auditorium, with side chat between "seat-mates" and the ability to invite a limited number of audience members to speak. In 1997, engineers at GTE used the PlaceWare engine in a commercial version of MITRE's CVW, calling it InfoWorkSpace (IWS). In 1998, IWS was chosen as the military standard for the standardized Air Operations Center.[12] The IWS product was sold to General Dynamics and then later to Ezenia.[13]
Collaborative software was originally designated as groupware, and this term can be traced as far back as the late 1980s, when Richman and Slovak (1987)[14] wrote: "Like an electronic sinew that binds teams together, the new groupware aims to place the computer squarely in the middle of communications among managers, technicians, and anyone else who interacts in groups, revolutionizing the way they work."
Peter and Trudy Johnson-Lenz coined the term groupware in 1978; their initial definition was "intentional group processes plus software to support them." Later in their article they went on to explain groupware as "computer-mediated culture... an embodiment of social organization in hyperspace." Groupware integrates co-evolving human and tool systems, yet is simply a single system.[15]
In the early 1990s the first commercial groupware products were delivered, and big companies such asBoeingandIBMstarted using electronic meeting systems for key internal projects.Lotus Notesappeared as a major example of that product category, allowing remote group collaboration when the internet was still in its infancy. Kirkpatrick and Losee (1992)[16]wrote then: "IfGROUPWAREreally makes a difference in productivity long term, the very definition of an office may change. You will be able to work efficiently as a member of a group wherever you have your computer. As computers become smaller and more powerful, that will mean anywhere." In 1999, Achacoso created and introduced the first wireless groupware.[17][18][19]
The complexity of groupware development is still an issue. One reason is the socio-technical dimension of groupware. Groupware designers have to address not only technical issues (as in traditional software development) but also the organizational aspects[20] and the social group processes that should be supported by the groupware application. Some examples of issues in groupware development are:
One approach for addressing these issues is the use of design patterns for groupware design.[24]The patterns identify recurring groupware design issues and discuss design choices in a way that all stakeholders can participate in the groupware development process.
Groupware can be divided into three categories depending on the level of collaboration:[25][26]
Collaborative management tools facilitate and manage group activities. Examples include:
The design intent of collaborative software (groupware) is to transform the way documents and rich media are shared in order to enable more effective team collaboration.
Collaboration, with respect to information technology, seems to have several definitions. Some are defensible but others are so broad they lose any meaningful application. Understanding the differences in human interactions is necessary to ensure the appropriate technologies are employed to meet interaction needs.
There are three primary ways in which humans interact: conversations, transactions, and collaborations.
Conversational interaction is an exchange of information between two or more participants where the primary purpose of the interaction is discovery or relationship building. There is no central entity around which the interaction revolves; rather, it is a free exchange of information with no defined constraints, generally focused on personal experiences.[28] Communication technologies such as telephones, instant messaging, and e-mail are generally sufficient for conversational interactions.
Transactional interaction involves the exchange of transaction entities where a major function of the transaction entity is to alter the relationship between participants.
In collaborative interaction, the main function of the participants' relationship is to alter a collaboration entity (i.e., the converse of transactional). When teams collaborate on projects, it is called collaborative project management.
A collaborative working environment (CWE) supports people, such as e-professionals, in their individual and cooperative work. Research in CWE focuses on organizational, technical, and social issues.
Working practices in a collaborative working environment evolved from the traditional, geographically co-located paradigm. In a CWE, professionals work together regardless of their geographical location. In this context, e-professionals use a collaborative working environment to provide and share information[1] and exchange views in order to reach a common understanding. Such practices enable effective and efficient collaboration among different proficiencies.
The following applications or services are considered elements of a CWE:
The concept of CWE is derived from the idea of virtual workspaces,[2][3] and is related to the concept of remote work. It extends the traditional concept of the professional to include any type of knowledge worker who intensively uses information and communications technology (ICT) environments and tools[4] in their working practices. Typically, a group of e-professionals conduct their collaborative work through the use of collaborative working environments (CWE).[5]
CWE includes online collaboration (such as virtual teams,[6] mass collaboration,[7] and massively distributed collaboration),[8] online communities of practice (such as the open source community), and open innovation principles.
A collaborative working system (CWS) is an organizational unit that emerges any time collaboration takes place, whether it is formal or informal, intentional or unintentional.[9] Collaborative work systems are those in which conscious efforts have been made to create strategies, policies, and structures that institutionalize values, behaviors, and practices promoting cooperation among different parties in an organization so as to achieve organizational goals. A high level of collaborative capacity enables more effective work both at the local and daily levels and at the global and long-term levels.
Collaboration is the collective work of two or more individuals where the work is undertaken with a sense of shared purpose and direction, and is attentive and responsive to the environment.[9]In most organizations collaboration occurs naturally, but ill-defined work practices may create barriers to natural collaboration. The result is a loss of both decision-making quality and valuable time. Well-designed collaborative working systems not only overcome these natural barriers to communication, they also establish a cooperative work culture that becomes an integral part of the organization's structure.[10]
A collaborative work system is related to the collaborative working environment. The latter notion is more focused on technology and arose from the concept of collaborative workspaces,[11] driven by research within the MOSAIC Project.
The concept of 'system' in 'collaborative work system' has a self-explanatory power that is different from 'environment'. The former pertains to an integrated whole, including collaborative work conceived as a purposeful activity, whilst the latter stresses the surroundings of an object – the collaborative working practices.
A collaborative work system generally includes a collaborative working environment, but it should be conceived primarily as a set of human activities, intentional or not, that emerge every time a collaboration occurs. This enables focus on the work practices that are necessary for human collaboration and draws attention to important behavioral variables, such as leadership and motivation, outside the CWE.
Besides participatory leadership, another key element of a successful collaborative work system is the availability of group collaboration technology or groupware – hardware and software tools that help groups access and share the information the professionals need to meet, train, or teach.[citation needed]
However, a collaborative work system (CWS) does not necessarily require groupware support. A simple way to conceptualize the relation between the two concepts is to consider computer-supported cooperative work (CSCW) as a whole consisting of a collaborative work system (CWS) supported by collaborative software or groupware.
On the other hand, a collaborative working environment, which supports people in both their individual and cooperative work whatever their geographical location, transcends the notion of CSCW, which deals specifically with cooperative work.
Computer-supported collaboration research focuses on technology that affects groups, organizations, communities, and societies, e.g., voice mail and text chat. It grew from the cooperative work study of supporting people's work activities and working relationships. As network technology increasingly supported a wide range of recreational and social activities, consumer markets expanded the user base, enabling more and more people to connect online and creating what researchers call computer-supported cooperative work, which includes "all contexts in which technology is used to mediate human activities such as communication, coordination, cooperation, competition, entertainment, games, art, and music" (from CSCW 2023[1]).
The subfield computer-mediated communication deals specifically with how humans use "computers" (or digital media) to form, support and maintain relationships with others (social uses), regulate information flow (instructional uses), and make decisions (including major financial and political ones). It does not focus on common work products or other "collaboration" but rather on "meeting" itself, and on trust. By contrast, CSC is focused on the output from, rather than the character or emotional consequences of, meetings or relationships, reflecting the difference between "communication" and "collaboration".
Unlike communication research, which focuses on trust, or computer science, which focuses on truth and logic, CSC focuses on cooperation, collaboration, and decision-making theory, which are more concerned with rendezvous and contract. For instance, auctions and market systems, which rely on offer-and-demand relationships, are studied as part of CSC but not usually as part of communication.
The term CSC emerged in the 1990s to replace the following terms:
Two different types of software are sometimes differentiated:
Base technologies such as netnews, email, chat, and wikis could be described as "social", "collaborative", both, or neither. Those who say "social" seem to focus on so-called "virtual community", while those who say "collaborative" seem to be more concerned with content management and the actual output. While software may be designed to achieve closer social ties or specific deliverables, it is hard to support collaboration without also enabling relationships to form, and hard to support a social interaction without some kind of shared co-authored works.[citation needed]
Accordingly, the differentiation between social and collaborative software may also be stated as that between "play" and "work". Some theorists hold that a play ethic should apply, and that work must become more game-like or play-like in order to make using computers a more comfortable experience.[citation needed] The study of MUDs and MMORPGs in the 1980s and 1990s led many to this conclusion, which is now not controversial.[citation needed]
True multi-player computer games can be considered a simple form of collaboration, but only a few theorists include this as part of CSC.
The relatively new areas of evolutionary computing, massively parallel algorithms, and even "artificial life" explore the solution of problems by the evolving interaction of large numbers of small actors, or agents, or decision-makers who interact in a largely unconstrained fashion. The "side effect" of the interaction may be a solution of interest, such as a new sorting algorithm; or there may be a permanent residual of the interaction, such as the setting of weights in a neural network that has now been "tuned" or "trained" to repeatedly solve a specific problem, such as making a decision about granting credit to a person, or distinguishing a diseased plant from a healthy one. Connectionism is a study of systems in which the learning is stored in the linkages, or connections, not in what is normally thought of as content.[citation needed]
This expands the definition of "computing", such that it is not just the data, or the metadata, or the context of the data, but the computer itself which is being "processed".[citation needed]
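As an illustrative sketch (not drawn from the text), a single perceptron shows how such learning ends up stored in the connection weights rather than in any explicit content; here the weights are trained, by the classic perceptron rule, to reproduce logical AND:

```python
# Hypothetical connectionist example: after training, the "knowledge" of AND
# lives entirely in the weights w and bias b, not in any stored data.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, initially empty of knowledge
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out              # perceptron learning rule:
            w[0] += lr * err * x[0]         # adjust each connection by its
            w[1] += lr * err * x[1]         # contribution to the error
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b = train_perceptron(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

assert [predict(x) for x, _ in samples] == [0, 0, 0, 1]
```

Inspecting `w` and `b` afterwards reveals no table of AND values, only tuned link strengths, which is exactly the sense in which connectionism says the learning is "stored in the connections".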
Communication essential to the collaboration, or disruptive of it, is studied in CSC proper.[citation needed] It is hard to draw a line between a well-defined process and general human communication.[citation needed]
Directly reflecting desired organizational protocols, business processes, and governance norms, so that regulated communication (the collaboration) can be told apart from free-form interactions, is important to collaboration research, if only to know where to stop the study of work and start the study of people. The subfield CMC, or computer-mediated communication, deals with human relationships.[citation needed]
Tasks undertaken in this field resemble those of any social science, but with a special focus on systems integration and groups:[2]
Less ambitiously, specific CSC fields are often studied under their own names with no reference to the more general field of study, focusing instead on the technology with only minimal attention to the collaboration implied, e.g. video games and videoconferences.[citation needed] Since some specialized devices exist for games or conferences that do not include all of the usual boot image capabilities of a true "computer", studying these separately may be justified. There is also separate study of e-learning, e-government, e-democracy, and telemedicine.[citation needed] The subfield telework also often stands alone.
The development of this field reaches back to the late 1960s and the visionary assertions of Ted Nelson, Douglas Engelbart, Alan Kay, Glenn Gould, Nicholas Negroponte, and others who saw a potential for digital media to ultimately redefine how people work. A very early thinker, Vannevar Bush, suggested as much in his 1945 essay As We May Think.
The inventor of the computer "mouse", Douglas Engelbart, studied collaborative software (especially revision control in computer-aided software engineering and the way a graphical user interface could enable interpersonal communication) in the 1960s. Alan Kay worked on Smalltalk, which embodied these principles, in the 1970s, and by the 1980s it was well regarded and considered to represent the future of user interfaces.
However, at this time, collaboration capabilities were limited. As few computers had even local area networks, and processors were slow and expensive, the idea of using them simply to accelerate and "augment" human communication was eccentric in many situations. Computers processed numbers, not text, and the collaboration was in general devoted only to better and more accurate handling of numbers.
This began to change in the 1980s with the rise of personal computers, modems, and more general use of the Internet for non-academic purposes. People were clearly collaborating online with all sorts of motives, but using a small suite of tools (LISTSERV, netnews, IRC, MUD) to support all of those motives. Research at this time focused on textual communication, as there was little or no exchange of audio and video representations. Some researchers, such as Brenda Laurel, emphasized how similar online dialogue was to a play, and applied Aristotle's model of drama to their analysis of computers for collaboration.
Another major focus was hypertext, in its pre-HTML, pre-WWW form, focused more on links and semantic web applications than on graphics. Systems such as Superbook, NoteCards, and KMS, and the much simpler HyperTies and HyperCard, were early examples of collaborative software used for e-learning.
In the 1990s, the rise of broadband networks and the dotcom boom presented the internet as mass media to a whole generation. By the late 1990s, VoIP, net phones, and chat had emerged. For the first time, people used computers primarily as communication devices, not "computing" devices. This, however, had long been anticipated, predicted, and studied by experts in the field.
Video collaboration is not usually studied. Online videoconferencing and webcams have been studied in small-scale use for decades, but since people simply do not have built-in facilities to create video together directly, they are properly a communication concern, not a collaboration concern.
Other pioneers in the field included Ted Nelson, Austin Henderson, Kjeld Schmidt, Lucy Suchman, Sara Bly, Randy Farmer, and many "economists, social psychologists, anthropologists, organizational theorists, educators, and anyone else who can shed light on group activity" (Grudin).
In this century, the focus has shifted to sociology, political science, management science, and other business disciplines. This reflects the use of the net in politics and business, and even in other high-stakes collaboration situations, such as war.
Though it is not studied at the ACM conferences, military use of collaborative software has been a major impetus of work on maps and data fusion, used in military intelligence. A number of conferences and journals are concerned primarily with the military use of digital media and the security implications thereof.
Current research in computer-supported collaboration includes:
Early researchers, such as Bill Buxton, had focused on non-voice gestures (like humming or whistling) as a way to communicate with the machine while not interfering too directly with speech directed at a person. Some researchers believed voice-as-command interfaces were bad for this reason, because they encouraged speaking as if to a "slave".
HTML supports simple link types with the REL and REV attributes. Some standards for using these on the WWW were proposed, most notably in 1994, by people very familiar with earlier work in SGML. However, no such scheme has ever been adopted by a large number of web users, and the "semantic web" remains unrealized. Attempts such as crit.org have sometimes collapsed totally.
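For illustration, such typed links are still machine-readable today; the sketch below (a hypothetical example using only Python's standard library, not any of the 1994 proposals) collects rel values from anchor and link elements:

```python
# Hypothetical sketch: extracting typed links (rel attributes) from markup
# using only Python's standard html.parser module.
from html.parser import HTMLParser

class RelLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.typed_links = []  # (link type, target) pairs found in the markup

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs with lowercase names
        if tag in ("a", "link"):
            a = dict(attrs)
            if "rel" in a:
                self.typed_links.append((a["rel"], a.get("href")))

p = RelLinkParser()
p.feed('<a rel="glossary" href="/terms">terms</a>'
       '<link rel="stylesheet" href="s.css">')
assert p.typed_links == [("glossary", "/terms"), ("stylesheet", "s.css")]
```

The rel value ("glossary", "stylesheet") is exactly the kind of semantic link type the proposals envisioned; what never materialized was any widely shared vocabulary for authoring them.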
Who am I, online? Can an account be assumed to be the same as a person's real-life identity? Should I have rights to continue any relationship I start through a service, even if I'm not using it any longer? Who owns information about the user? What about others (not the user) who are affected by information revealed or learned by me?
Online identity and privacy concerns, especially identity theft, have grown to dominate the CSCW agenda in more recent years. The separate Computers, Freedom and Privacy conferences deal with larger social questions, but basic concerns that apply to systems and work-process design tend still to be discussed as part of CSC research.
Where decisions are made based exclusively or mostly on information received or exchanged online, how do people rendezvous to signal their trust in it, and willingness to make major decisions on it?
Team consensus decision making in software engineering, and the role of revision control, revert, reputation, and other functions, has always been a major focus of CSC: there is no software without someone writing it. Presumably, those who do write it must understand something about collaboration in their own team. This design and code, however, is only one form of collaborative content.
What are the most efficient and effective ways to share information? Can creative networks form through online meeting/work systems? Can people have equal power relationships in building content?
By the late 1990s, with the rise of wikis (a simple repository and data dictionary that was easy for the public to use), the way consensus applied to joint editing, meeting agendas, and so on had become a major concern. Different wikis adopted different social and pseudopolitical structures to combat the problems caused by conflicting points of view and differing opinions on content.[citation needed]
How can work be made simpler, less prone to error, easier to learn? What role do diagrams and notations play in improving work output? What words do workers come to work already understanding, what do they misunderstand, and how can they use the same words to mean the same thing?
Study of content management, enterprise taxonomy, and the other core instructional capital of the learning organization has become increasingly important due to ISO standards and the use of continuous improvement methods.[citation needed] Natural language and application commands tend to converge over time, becoming reflexive user interfaces.[citation needed]
The role of social network analysis and outsourcing services like e-lance, especially when combined in services like LinkedIn, is of particular concern in human capital management – again, especially in the software industry, where it is becoming more and more normal to run 24x7 globally distributed shops.
The romanticized notion of a lone, genius artist has existed since the time of Giorgio Vasari's Lives of the Artists, published in 1568. Vasari promulgated the idea that artistic skill was endowed upon chosen individuals by the gods, which created an enduring and largely false popular misunderstanding of many artistic processes. Artists have used collaboration to complete large-scale works for centuries, but the myth of the lone artist was not widely questioned until the 1960s and 1970s.
With the appearance of computers, and especially with the invention of the internet, collaboration on art became easier than before. This crowd-sourced creativity online is putting a "new twist" on traditional ideas of artistic ownership, online communication and art production.[3]In some cases, people don't even know they are making contributions to online art.[3]
Artists in the computer era are considered more "socially aware" in a way that supports social collaboration on social matters.[4] Art duos, such as the Italian Hackatao duo, collaborate both physically and online while creating their art in order to "create a meeting place between the NFT and traditional art worlds."[5][6][7]
Crowdsourcing aids innovation processes and the successful implementation and maintenance of idea generation, thereby providing support for the development of promising innovative ideas.[8] Crowdsourcing has been used in various ways, from musical numbers to choreography, set design, costumes, and marketing materials, and in some cases was conducted using social media platforms.[9]
Related fields are collaborative product development, CAD/CAM, computer-aided software engineering (CASE), concurrent engineering, workflow management, distance learning, telemedicine, medical CSCW, and the real-time network conferences called MUDs (after "multi-user dungeons", although they are now used for more than game-playing).
Computer-supported collaborative learning (CSCL) is a pedagogical approach wherein learning takes place via social interaction using a computer or through the Internet. This kind of learning is characterized by the sharing and construction of knowledge among participants using technology as their primary means of communication or as a common resource.[1] CSCL can be implemented in online and classroom learning environments and can take place synchronously or asynchronously.
The study of computer-supported collaborative learning draws on a number of academic disciplines, including instructional technology, educational psychology, sociology, cognitive psychology, and social psychology.[2] It is related to collaborative learning and Computer Supported Cooperative Work.
Interactive computing technology was primarily conceived by academics, but the use of technology in education has historically been defined by contemporary research trends. The earliest instances of software in instruction drilled students using the behaviorist method that was popular throughout the mid-twentieth century. In the 1970s, as cognitivism gained traction with educators, designers began to envision learning technology that employed artificial intelligence models that could adapt to individual learners.[3] Computer-supported collaborative learning emerged as a strategy rich with research implications for the growing philosophies of constructivism and social cognitivism.[4]
Though studies in collaborative learning and technology took place throughout the 1980s and 90s, the earliest public workshop directly addressing CSCL was "Joint Problem Solving and Microcomputers," which took place in San Diego in 1983. Six years later, in 1989, the term "computer-supported collaborative learning" was used at a NATO-sponsored workshop in Maratea, Italy.[1][5] A biennial CSCL conference series began in 1995. At the 2002 and 2003 CSCL conferences, the International Society of the Learning Sciences (ISLS) was established to run the CSCL and ICLS conference series and the International Journal of Computer-Supported Collaborative Learning (ijCSCL) and JLS journals.[6]
The ijCSCL was established by the CSCL research community and ISLS. It began quarterly publication with Springer in 2006. It is peer-reviewed and published both online and in print. Since 2009, it has been rated by ISI as being in the top 10% of educational research journals based on its impact factor.[7]
The rapid development of social media technologies and the increasing need of individuals to understand and use those technologies has brought researchers from many disciplines to the field of CSCL.[4] CSCL is used today in traditional and online schools and in knowledge-building communities such as Wikipedia.
The field of CSCL draws heavily from a number of learning theories that emphasize that knowledge is the result of learners interacting with each other, sharing knowledge, and building knowledge as a group. Since the field focuses on collaborative activity and collaborative learning, it inherently takes much from constructivist and social cognitivist learning theories.[4]
The roots of collaborative epistemology as related to CSCL can be found in Vygotsky's social learning theory. Of particular importance to CSCL is the theory's notion of internalization, the idea that knowledge is developed through one's interaction with the surrounding culture and society. The second key element is what Vygotsky called the zone of proximal development: the range of tasks that are too difficult for learners to master on their own but that become possible with the assistance of a more skilled individual or teacher.[8] These ideas feed into a notion central to CSCL: knowledge building is achieved through interaction with others.
Cooperative learning, though different in some ways from collaborative learning, also contributes to the success of teams in CSCL environments. The distinction can be stated as: cooperative learning focuses on the effects of group interaction on individual learning, whereas collaborative learning is more concerned with the cognitive processes at the group unit of analysis, such as shared meaning making and the joint problem space. The five elements for effective cooperative groups identified by the work of Johnson and Johnson are positive interdependence, individual accountability, promotive interaction, social skills, and group processing.[9] Because of the inherent relationship between cooperation and collaboration, understanding what encourages successful cooperation is essential to CSCL research.
In the late 1980s and early 1990s, Marlene Scardamalia and Carl Bereiter wrote seminal articles leading to the development of key CSCL concepts: knowledge-building communities and knowledge-building discourse, intentional learning, and expert processes. Their work led to an early collaboration-enabling technology known as the Computer Supported Intentional Learning Environment (CSILE).[10]Characteristically for CSCL, their theories were integrated with the design, deployment, and study of the CSCL technology. CSILE later became Knowledge Forum, which is the most widely used CSCL technology worldwide to date.[citation needed]
Other learning theories that provide a foundation for CSCL include distributed cognition, problem-based learning, group cognition, cognitive apprenticeship, and situated learning. Each of these learning theories focuses on the social aspect of learning and knowledge building, and recognizes that learning and knowledge building involve inter-personal activities including conversation, argument, and negotiation.[4]
Only in the last 15 to 20 years have researchers begun to explore the extent to which computer technology could enhance the collaborative learning process. While researchers, in general, have relied on learning theories developed without consideration of computer-support, some have suggested that the field needs to have a theory tailored and refined for the unique challenges that confront those trying to understand the complex interplay of technology and collaborative learning.[11]
Collaboration theory, suggested as a system of analysis for CSCL by Gerry Stahl in 2002–2006, postulates that knowledge is constructed in social interactions such as discourse. The theory suggests that learning is not a matter of accepting fixed facts, but is the dynamic, on-going, and evolving result of complex interactions primarily taking place within communities of people. It also emphasizes that collaborative learning is a process of constructing meaning and that meaning creation most often takes place and can be observed at the group unit of analysis.[12] The goal of collaboration theory is to develop an understanding of how meaning is collaboratively constructed, preserved, and re-learned through the media of language and artifacts in group interaction. There are four crucial themes in collaboration theory: collaborative knowledge building (which is seen as a more concrete term than "learning"); group and personal perspectives intertwining to create group understanding; mediation by artifacts (or the use of resources which learners can share or imprint meaning on); and interaction analysis using captured examples that can be analyzed as proof that the knowledge building occurred.[11]
Collaboration theory proposes that technology in support of CSCL should provide new types of media that foster the building of collaborative knowing; facilitate the comparison of knowledge built by different types and sizes of groups; and help collaborative groups with the act of negotiating the knowledge they are building. Further, these technologies and designs should strive to shift the teacher from being the bottleneck in the communication process to being the facilitator of student collaboration. In other words, the teacher should not have to act as the conduit for communication between students or as the avenue by which information is dispensed, but should instead structure the problem-solving tasks. Finally, collaboration theory-influenced technologies will strive to increase the quantity and quality of learning moments via computer-simulated situations.[11]
Stahl extended his proposals about collaboration theory during the next decade with his research on group cognition.[3] In his book Group Cognition,[13] he provided a number of case studies of prototypes of collaboration technology, as well as a sample in-depth interaction analysis and several essays on theoretical issues related to re-conceptualizing cognition at the small-group unit of analysis. He then launched the Virtual Math Teams (VMT) project at the Math Forum, which conducted more than 10 years of studies of students exploring mathematical topics collaboratively online. "Studying VMT"[14] documented many issues of design, analysis and theory related to this project. The VMT project later focused on supporting dynamic geometry by integrating a multi-user version of GeoGebra. All aspects of this phase of the VMT project were described in "Translating Euclid."[15] Then, "Constructing Dynamic Triangles Together"[16] provided a detailed analysis of how a group of four girls learned about dynamic geometry by enacting a series of group practices during an eight-session longitudinal case study. Finally, "Theoretical Investigations: Philosophical Foundations of Group Cognition"[17] collected important articles on the theory of collaborative learning from the CSCL journal and from Stahl's publications. The VMT project generated and analyzed data at the small-group unit of analysis, to substantiate and refine the theory of group cognition and to offer a model of design-based CSCL research.
Currently, CSCL is used in instructional plans in classrooms both traditional and online from primary school to post-graduate institutions. Like any other instructional activity, it has its own prescribed practices and strategies which educators are encouraged to employ in order to use it effectively. Because its use is so widespread, there are innumerable scenarios in the use of CSCL, but there are several common strategies that provide a foundation for group cognition.
One of the most common approaches to CSCL is collaborative writing. Though the final product can be anything from a research paper to a Wikipedia entry to a short story, the process of planning and writing together encourages students to express their ideas and develop a group understanding of the subject matter.[18] Tools like blogs, interactive whiteboards, and custom spaces that combine free writing with communication tools can be used to share work, form ideas, and write synchronously.[19][20]
Technology-mediated discourse refers to debates, discussions, and other social learning techniques involving the examination of a theme using technology. For example, wikis are a way to encourage discussion among learners, but other common tools include mind maps, survey systems, and simple message boards. Like collaborative writing, technology-mediated discourse allows participants that may be separated by time and distance to engage in conversations and build knowledge together.[20][21]
Group exploration refers to the shared discovery of a place, activity, environment or topic among two or more people. Students may explore in an online environment, use technology to better understand a physical area, or reflect on their experiences together through the Internet. Virtual worlds like Second Life and Whyville, as well as synchronous communication tools like Skype, may be used for this kind of learning.[22][23] Educators may use orchestration graphs to define the activities and roles that students adopt during learning, and to analyze the learning process afterwards.[24]
Problem-based learning is a popular instructional activity that lends itself well to CSCL because of the social implications of problem solving. Complex problems call for rich group interplay that encourages collaboration and creates movement toward a clear goal.[25][26]
Project-based learning is similar to problem-based learning in that it creates impetus to establish team roles and set goals. The need for collaboration is also essential for any project and encourages team members to build experience and knowledge together. Although there are many advantages to using software that has been specifically developed to support collaborative learning or project-based learning in a particular domain, any file sharing or communication tools can be used to facilitate CSCL in problem- or project-based environments.[27]
When Web 2.0 applications (wikis, blogs, RSS feeds, collaborative writing, video sharing, social networks, etc.) are used for computer-supported collaborative learning, specific strategies should be used for their implementation, especially regarding (1) adoption by teachers and students; (2) usability and quality-in-use issues; (3) technology maintenance; (4) pedagogy and instructional design; (5) social interaction between students; (6) privacy issues; and (7) information/system security.[28]
Though the focus in CSCL is on individuals collaborating with their peers, teachers still have a vital role in facilitating learning. Most obviously, the instructor must introduce the CSCL activity in a thoughtful way that contributes to an overarching design plan for the course. The design should clearly define the learning outcomes and assessments for the activity. In order to assure that learners are aware of these objectives and that they are eventually met, proper administration of both resources and expectations is necessary to avoid learner overload. Once the activity has begun, the teacher is charged with kick-starting and monitoring discussion to facilitate learning. He or she must also be able to mitigate technical issues for the class. Lastly, the instructor must engage in assessment, in whatever form the design calls for, in order to ensure objectives have been met for all students.[29]
Without the proper structure, any CSCL strategy can lose its effectiveness. It is the responsibility of the teacher to make students aware of what their goals are, how they should be interacting, potential technological concerns, and the time-frame for the exercise. This framework should enhance the experience for learners by supporting collaboration and creating opportunities for the construction of knowledge.[30][31] Another important consideration of educators who implement online learning environments is affordance. Students who are already comfortable with online communication often choose to interact casually. Mediators should pay special attention to make students aware of their expectations for formality online.[32] While students sometimes have frames of reference for online communication, they often do not have all of the skills necessary to solve problems by themselves. Ideally, teachers provide what is called "scaffolding", a platform of knowledge that they can build on. A unique benefit of CSCL is that, given proper teacher facilitation, students can use technology to build learning foundations with their peers. This allows instructors to gauge the difficulty of the tasks presented and make informed decisions about the extent of the scaffolding needed.[25]
According to Salomon (1995), the possibility of intellectual partnerships with both peers and advanced information technology has changed the criteria for what counts as an effect of technology. Instead of concentrating only on the amount and quality of learning outcomes, we need to distinguish between two kinds of effects: "effects with" a tool or collaborating peers, and "effects of" them. Salomon used "effects with" to describe the changes that take place while one is engaged in intellectual partnership with peers or with a computer tool, for example the changed quality of problem solving in a team. By "effects of" he meant the more lasting changes that take place when computer-enhanced collaboration teaches students to ask more exact and explicit questions even when not using that system.
This distinction has a number of implications for instructional designers, developers, and teachers.
Though CSCL holds promise for enhancing education, it is not without barriers or challenges to successful implementation. Obviously, students or participants need sufficient access to computer technology. Though access to computers has improved in the last 15 to 20 years, teacher attitudes about technology and sufficient access to Internet-connected computers continue to be barriers to more widespread usage of CSCL pedagogy.
Furthermore, instructors find that the time needed to monitor student discourse and review, comment on, and grade student products can be more demanding than what is necessary for traditional face-to-face classrooms. The teacher or professor also has an instructional decision to make regarding the complexity of the problem presented. To warrant collaborative work, the problem must be of sufficient complexity, otherwise teamwork is unnecessary. Also, there is risk in assuming that students instinctively know how to work collaboratively. Though the task may be collaborative by nature, students may still need training on how to work in a truly cooperative process.
Others have noted a concern with the concept of scripting as it pertains to CSCL. There is an issue with possibly over-scripting the CSCL experience and in so doing, creating "fake collaboration". Such over-scripted collaboration may fail to trigger the social, cognitive, and emotional mechanisms that are necessary to true collaborative learning.[5]
There is also the concern that the mere availability of the technology tools can create problems. Instructors may be tempted to apply technology to a learning activity that can very adequately be handled without the intervention or support of computers. In the process of students and teachers learning how to use the "user-friendly" technology, they never get to the act of collaboration. As a result, computers become an obstacle to collaboration rather than a supporter of it.[34]
The advent of computer-supported collaborative learning (CSCL) as an instructional strategy for second language acquisition can be traced back to the 1990s. During that time, the internet was growing rapidly, which was one of the key factors that facilitated the process.[35] At the time, the first wikis (such as WikiWikiWeb) were still undergoing early development,[36] but the use of other tools such as electronic discussion groups allowed for equal participation amongst peers, particularly benefiting those who would normally not participate otherwise during face-to-face interactions.[35]
During the establishment of wikis in the 2000s, global research began to emerge regarding their effectiveness in promoting second language acquisition. Some of this research focused on more specific areas such as systemic-functional linguistics, humanistic education, experiential learning, and psycholinguistics. For example, in 2009 Yu-Ching Chen performed a study to determine the overall effectiveness of wikis in an English as a second language class in Taiwan.[37] Another example is a 2009 study by Greg Kessler in which pre-service, non-native English speaker teachers in a Mexican university were given the task to collaborate on a wiki, which served as the final product for one of their courses. In this study, emphasis was placed on the level of grammatical accuracy achieved by the students throughout the course of the task.[38]
Due to the continual development of technology, other educational tools aside from wikis are being implemented and studied to determine their potential in scaffolding second language acquisition. According to Mark Warschauer (2010), among these are blogs, automated writing evaluation systems, and open-source netbooks.[39] Outside the classroom, the development of other recent online tools such as Livemocha (2007) has facilitated language acquisition via member-to-member interactions,[40] demonstrating firsthand the impact the advancement of technology has made towards meeting the varying needs of language learners.
Studies in the field of computer-assisted language learning (CALL) have shown that computers provide material and valuable feedback for language learners and that computers can be a positive tool for both individual and collaborative language learning. CALL programs offer the potential for interactions between the language learners and the computer.[41] Additionally, students' autonomous language learning and self-assessment can be made widely available through the web.[42] In CSCL, the computer is not only seen as a potential language tutor by providing assessment for students' responses,[43] but also as a tool to give language learners the opportunity to learn from the computer and also via collaboration with other language learners. Juan[44] focuses on new models and systems that perform efficient evaluation of student activity in online-based education. Their findings indicate that CSCL environments organized by teachers are useful for students to develop their language skills. Additionally, CSCL increases students' confidence and encourages them to maintain active learning, reducing the passive reliance on teachers' feedback. Using CSCL as a tool in the second language learning classroom has also shown to reduce learner anxiety.[45]
Various case studies and projects have been conducted to measure the effectiveness and perception of CSCL in the language learning classroom. After a collaborative internet-based project, language learners indicated that their confidence in using the language had increased and that they felt more motivated to learn and use the target language. After analyzing student questionnaires, discussion board entries, final project reports, and student journals, Dooly[46] suggests that during computer-supported collaborative language learning, students have an increased awareness of different aspects of the target language and pay increased attention to their own language learning process. Since the participants of her project were language teacher trainees, she adds that they felt prepared and willing to incorporate online interaction into their own teaching in the future.
Culture may be thought of as composed of "beliefs, norms, assumptions, knowledge, values, or sets of practice that are shared and form a system".[47] Learning communities focused in whole or part on second language acquisition may often be distinctly multicultural in composition, and as the cultural background of individual learners affects their collaborative norms and practices, this can significantly impact their ability to learn in a CSCL environment.[48]
CSCL environments are generally valued for the potential to promote collaboration in cross-cultural learning communities. Based on social constructivist views of learning,[49] many CSCL environments fundamentally emphasize learning as the co-construction of knowledge through the computer-mediated interaction of multivoiced community members. Computer-mediation of the learning process has been found to afford consideration of alternative viewpoints in multicultural/multilingual learning communities.[50] When compared to traditional face-to-face environments, computer-mediated learning environments have been shown to result in more equal levels of participation for ESL students in courses with native English speakers.[51] Language barriers for non-native speakers tend to detract from equal participation in general,[52] and this can be alleviated to some extent through the use of technologies which support asynchronous modes of written communication.[53][54]
Online learning environments, however, tend to reflect the cultural, epistemological, and pedagogical goals and assumptions of their designers.[55] In computer-supported collaborative learning environments, there is evidence that cultural background may impact learner motivation, attitudes towards learning and e-learning, learning preferences (styles), computer usage, learning behavior and strategies, academic achievement, communication, participation, knowledge transfer, sharing, and collaborative learning.[48] Studies comparing Asian, American, Danish, and Finnish learners have suggested that learners from different cultures exhibit different interaction patterns with their peers and teachers online.[56] A number of studies have shown that differences between Eastern and Western educational cultures, for instance, which are found in traditional environments are also present in online environments.[57][58] Zhang[59] has described Eastern education as more group-based, teacher-dominated, centrally organized, and examination-oriented than Western approaches. Students who have learned to learn in an Eastern context emphasizing teacher authority and standardized examinations may perform differently in a CSCL environment characterized by peer critique and co-construction of educational artifacts as the primary mode of assessment.
A "multiple cultural model" of instructional design emphasizes variability and flexibility in designing for multicultural inclusiveness, focusing on the development of learning environments that reflect the multicultural realities of society, include multiple ways of teaching and learning, and promote equity of outcomes.[60][61] McLoughlin and Oliver[62] propose a social, constructivist approach to the design of culturally sensitive CSCL environments which emphasizes flexibility with regard to specific learning tasks, tools, roles, responsibilities, communication strategies, social interactions, learning goals and modes of assessment. Constructivist instructional design approaches such as R2D2,[63] which emphasize reflexive, recursive, participatory design of learning experiences, may be employed in developing CSCL that authentically engages learners from diverse linguistic and cultural backgrounds.
Dyslexia primarily involves difficulties with reading, spelling and sentence structure, transposition, memory, organization and time management, and lack of confidence.[64] Dyslexia has in the past two decades become increasingly present in research and legislation. The United Kingdom passed the Disability Discrimination Act 1995, in which institutions were required to "reasonably adjust" instruction for students with disabilities, particularly physical and sensory disabilities; in 2002, the Special Education Needs and Disabilities Act adjusted the legislation to include learning disabilities.
The Americans with Disabilities Act of 1990 (ADA) established that all students with disabilities must be included in all state and districtwide assessments of student progress. The ADA also guarantees equal accommodation for disabled people in "employment, public accommodations, state and local government services, transportation, and telecommunications."[64]
In recent years, tools such as WebHelpDyslexia and other capabilities of web applications have increased the availability of tools to provide coping skills for students with dyslexia.[65]
In 2006, Woodfine argued that dyslexia can impact the ability of a student to participate in synchronous e-learning environments, especially if activities being completed are text-based. During experimental qualitative research, Woodfine found that data suggested "learners with dyslexia might suffer from embarrassment, shame and even guilt about their ability to interact with other learners when in a synchronous environment."[64]
In a study by Fichten et al., it was found that assistive technology can be beneficial in aiding students with the progression of their reading and writing skills. Tools such as spell check or text-to-speech can be helpful to learners with dyslexia by allowing them to focus more on self-expression and less on errors.[66]
Alsobhi et al. examined assistive technologies for dyslexic students and concluded that the most fundamental considerations when serving students of this population are "the learning styles that people with dyslexia exhibit, and how assistive technology can be adapted to align with these learning behaviors."[66]
The Dyslexia Adaptive E-Learning (DAEL) framework is a suggested framework that proposes four dimensions covering 26 attributes. The framework asks educators to make decisions based on perceived ease of use, perceived usefulness, and system adaptability.
Educators who choose to use the CSCL environment must be aware of Section 508 compliance and its legal implications. "In the U.S., the criteria for designing Web pages accessibly are provided by two major sets: the W3C's Web Accessibility Guidelines (WCAG) and the design standards issued under U.S. federal law, Section 508 of the Rehabilitation Act, as amended in 1998. Features of accessible design include, among others, the provision of ALT tags for nontextual elements, such as images, animations and image map hot spots; meaningful link text; logical and persistent page organization, and the inclusion of skip navigation links."[68]
Unfortunately, not all educators are exposed to these guidelines, especially if their collegiate programs do not provide exposure to the use of computers, aspects of web design or technology in education. In some cases, it may be advantageous for the educator to collaborate with an instructional technologist or web designer to ensure 508 guidelines are addressed in the desired learning environment for the CSCL.
The World Wide Web began as information sharing on static webpages accessible on a computer through the use of a web browser. As more interactive capabilities were added, it evolved into Web 2.0, which allowed for user-generated content and participation (e.g. social networking). This opened up many new possibilities for computer-supported collaborative learning (CSCL) using the Internet. The internet is now entering a new phase, Web 3.0 or the Semantic Web, which is characterized by the greater interconnectivity of machine-readable data from many different sources. New intelligent technology applications will be able to manage, organize and create meaning from this data,[69] which will have a significant impact on CSCL.
The interconnectivity of machine-readable data with semantic tags means that searches will be greatly enhanced. Search results will be more relevant, recommendations of resources will be made based on search terms, and results will include multimedia content.[69][70][71][72]
New Web 3.0 capabilities for learners include enhanced tools for managing learning, allowing them to self-regulate and co-regulate learning without the assistance of an instructor.[71] Through the use of Web 3.0, groups and communities can be formed according to specific criteria without human input. These communities and groups can provide support to new learners and give experts an opportunity to share their knowledge.[71]
Teachers can benefit from these same capabilities to manage their teaching.[73] In addition, software for Web 3.0 collaboration is expected to draw on data from group communications to estimate how much each individual has collaborated, based on how often they communicate and how long their messages are.[74]
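The kind of contribution measure described above can be sketched as a naive calculation. The function name, the weighting scheme, and the sample message log below are illustrative assumptions, not the method of any particular tool:

```python
from collections import defaultdict

def contribution_scores(messages, length_weight=0.5):
    """Estimate each participant's share of group communication.

    `messages` is a list of (author, text) pairs. The score blends how
    often a person posts with how much they write; `length_weight` sets
    the balance between the two (an arbitrary illustrative choice).
    """
    counts = defaultdict(int)   # messages per author
    chars = defaultdict(int)    # characters written per author
    for author, text in messages:
        counts[author] += 1
        chars[author] += len(text)

    total_msgs = sum(counts.values())
    total_chars = sum(chars.values())
    return {
        author: (1 - length_weight) * counts[author] / total_msgs
                + length_weight * chars[author] / total_chars
        for author in counts
    }

# Hypothetical group chat log
log = [
    ("ana", "I think we should start with the outline."),
    ("ben", "Agreed."),
    ("ana", "I'll draft sections one and two tonight."),
    ("cal", "I can take the bibliography and the figures."),
]
scores = contribution_scores(log)
```

The scores sum to 1.0 across the group, so they can be read as each member's estimated share of the communication. A real system would likely also weigh timing, replies received, and content quality rather than raw volume alone.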
Making data machine-readable is leading to the development of virtual assistants and intelligent agents. These are tools which can access data on a user's behalf and will be able to assist learners and collaborators in several ways. They can provide personalized and customized search results by accessing data on a variety of platforms, recommend resources based on user information and preferences, manage administrative tasks, communicate with other agents and databases, and help organize information and interactions with collaborators.[73][75]
Virtual learning communities are cyberspaces that allow for individual and collaborative learning to take place. While they exist today, with Web 3.0 they will gain enhanced features enabling more collaborative learning to take place. Some describe them as evolving out of existing learning management systems (LMSs), adding intelligent agents and virtual assistants that can enhance content searches and deal with administrative and communication tasks,[73] or enabling different LMSs around the world to communicate with each other, creating an even larger community to share resources and locate potential collaborators.[76] Virtual learning communities will also enable different types of peer-to-peer interaction and resource sharing to support co-construction of knowledge.[77] These communities may also include some aspects of 3D gaming and VR.
Through the use of 3D gaming, users can simulate the lives of others while sharing their knowledge throughout the 3D environment as an avatar. These 3D environments also foster simulation and scenario building[71] for places where users would otherwise not have access. The 3D environments facilitate online knowledge-building communities.[78] Non-immersive environments are environments in which not all five senses are used but which still allow users to interact in virtual worlds.[79] Virtual reality (VR) headsets are sometimes used to give users a fully immersive experience in these 3D virtual worlds, allowing users to interact with each other in real time and simulate different learning situations with other users. These learning experiences and environments vary between fields and learning goals.[78] Certain virtual reality headsets allow users to communicate with each other while being in different physical locations.[79]
Multimodal literacy is the way processes of literacy (reading, writing, talking, listening and viewing) occur within and around new communication media (Kress & Jewitt, 2003; Pahl & Rowsell, 2005; Walsh, 2008). It refers to the meaning-making that occurs through reading, viewing, understanding, responding to, producing and interacting with multimedia and digital texts (Walsh, 2010).
Online forums offer numerous advantages for both teachers and students in collaborative learning online. Discussion forums provide a wide platform for exchanging information and ideas and for developing writing, reading and critical thinking skills (Jill Margerison, 2013). A collaborative online forum can also help students learn about the unique challenges of online communication, especially the need for clarity and the dangers of sarcasm (Susan Martens-Baker, 2009). For the teacher, forums offer a flexible platform from which to educate in a participatory culture, where teachers and students can interact with each other and create new knowledge (Jill Margerison, 2013).
Video games designed as learning tools engage learners who advance through experimentation, critical thinking and practice in the virtual world (Abrams, 2009). Video games in CSCL can promote positive interdependence, individual accountability, face-to-face promotive interaction, social skills, and group-processing abilities in the ELA classroom. Through interactions in the virtual world, learners have the opportunity to establish their presence and identity and to create meaning for their lives.
Digital storytelling refers to integrating a variety of means, such as images, audio, video, graphics and diagrams, into personal narratives and crafts. Producing digital products enhances four skill competencies: reading, writing, speaking and listening (Brenner, 2014). Students gain a greater sense of autonomy and agency through digital storytelling in CSCL.
Online forums provide opportunities for young people to engage in self-exposition as they practice digital literacies and hone the skill of moving across multiple literacies, languages and subject positions. Meanwhile, identity becomes a constellation of the multiple communities a person belongs to. It is also important to acknowledge the potentially harmful cultural discourses that occur within young people's consumption (Kim, 2015).
By capitalizing on students' gaming experiences and recognizing how they apply to the subject at hand, teachers can highlight the benefits of virtual learning environments and draw on those experiences to understand how students apply virtual learning across curricula. Educators need to choose a game appropriate to the particular subject to support their instruction and promote collaboration among students.
Students who engage in collaborative learning to create digital productions show characteristics of leadership. Moreover, students gain experience in collaboration and expand their multimodal literacy skills. In addition, digital composition provides a meaningful assessment tool for teachers (Brenner, 2014).
Multimodal literacy can facilitate English learners' literacy learning. It provides opportunities for English learners to expand their interpretation of texts (Ajayi, 2009). Specifically, English language learners can increase their language ability through computer-supported collaborative learning.
Multimodal platforms provide students, especially ELLs, with an anxiety-free zone in which to collaborate with their peers in a virtual world in order to make meaning together. Technology self-efficacy increases ELLs' level of independence and reduces their level of anxiety (Mellati, Zangoei & Khademi, 2015). ELLs become more motivated and self-confident when participating in online group projects, making contributions and sharing knowledge with their peers. As a result of collaborative learning, ELLs expand their vocabulary and acquire more advanced, academic grammar.
The applications of CSCL in post-secondary education demonstrate positive impacts on students' learning, such as promoting learner interaction, motivation and understanding.[80] As collaborative learning is grounded in social constructivism, the interaction and collaboration that occur during learning are valued.
Research findings show that online students scored higher than face-to-face students on a professional competence acquisition test, demonstrating the effectiveness of CSCL in promoting the development of professional skills.[81]
One study explained knowledge co-construction among geographically dispersed students in an online postgraduate program: students relied heavily on each other for their ongoing participation in the online discussions and for the joint refinement of the ideas introduced.[82]
The design principles for using CSCL can be considered from different perspectives. For technical use, instructors need to provide tutorials and online training modules to students.[83] For collaboration, students need time to plan and coordinate group work, as well as instructors' support and guidance[84] in the discussions. Also, group size and composition should be taken into consideration for better quality of interaction.[85] More instructional strategies are presented below.
A wiki is a tool for learners to co-construct knowledge online, with access to create and edit content. There are three phases of using wikis for collaborative writing:[86]
Phase 1. Crisis of Authority
Users experience challenges due to unfamiliarity with using a wiki and uncertainty about teammates' boundaries regarding having their writing commented on or revised.
Phase 2. Crisis of Relationship
Collaborative learning emerges and group communication is improved.
Phase 3. Resolution of Crisis
Communication becomes more frequent and co-writing among team members increases.
To better design wiki-based projects, the design principles include:
1. Provide learners with a practice article to edit at the beginning of a course so they become familiar with using wikis.
2. Inform learners of the different communication tools available for working collaboratively.
3. Engage learners with repeated wiki article assignments.
4. Provide timely feedback on students' discussion, participation and interaction.
The social interaction characteristic of CSCL is demonstrated in online learning communities, where learners can communicate with each other. One medium that facilitates such an online community is the online learning management system, which allows everyone involved, including learners, professors, and administrative staff, to communicate.
When using an online learning management system for collaborative learning, the instructor should provide technical training by presenting video tutorials, online training modules or online workshops.[83]
Mobile CSCL (mCSCL) is beneficial to students' learning achievements, attitudes and interactions.[85] Suggested design principles from CSCL include:
1. An ideal group size is around 3 to 4 people.
2. A duration between 1 and 4 weeks demonstrates better effects. Critics note that in short-term courses, interaction networks do not have time to consolidate.[87]
Professional teacher communities are positively related to student learning, teacher learning, teacher practice and school culture. Teacher collaboration is a significant element of these communities. Reflection-oriented tasks (such as reflection on teaching performance in individual writing, peer feedback, and collective writing) stimulated participation and, in combination with task structure, interaction in these communities.[88] Furthermore, structured tasks (such as crossword puzzles, where the path to a solution is unambiguous and answers can be checked immediately) that required critical reflection on personal experiences and perspectives triggered task-related communication and a deep level of information exchange.
The European Union Comenius fund sponsored the FISTE project, which is concerned with the educational use of information and communication technologies (ICTs), specifically with the development and dissemination of a new pedagogical strategy for distance learning through in-service teacher education in schools across Europe.[89] This project uses the online virtual learning environment platform BSCW as a CSCL tool to facilitate the way the participants work together. The work has involved schools and teacher-training providers, building on culturally different work in in-service teacher education in the participating countries. The value of using CSCL-supported technology for in-service teacher education in Europe lies in the concept of hinterland: cross-national courses like FISTE would be difficult to run without this technological approach.
An integrated collaboration environment (ICE) is an environment in which a virtual team does its work. Such environments allow companies to realize a number of competitive advantages by using their existing computers and network infrastructure for group and personal collaboration. These fully featured environments combine the best features of web-based conferencing and collaboration, desktop videoconferencing, and instant messaging into a single, easy-to-use, intuitive environment. Recent developments have allowed companies to include streaming, in both real-time and archived modes, in their ICEs.
Common applications found within ICE are:
ICE allows organizations to take advantage of technological advances in computer processing power and video technology while maintaining backward compatibility with existing standards-based hardware conference equipment. ICE can also reduce costs for a company. These benefits are achieved through cross-discipline fertilization, which allows knowledge workers to share information across departments of a company; this can be important for ensuring that corporate goals are shared and fully integrated.
There can be challenges to implementing ICE due to employees' lack of acceptance of knowledge management systems. Studies have shown that the lack of commitment and motivation among knowledge workers, professionals, and managers is the reason for problems, not the knowledge management technologies themselves. Possible reasons for the lack of acceptance include:
Graphical perception is the human capacity for visually interpreting information on graphs and charts. Both quantitative and qualitative information can be said to be encoded into the image, and the human capacity to interpret it is sometimes called decoding.[1] Understanding human graphical perception (what we discern easily versus what our brains have more difficulty decoding) is fundamental to good statistical graphics design, where clarity, transparency, accuracy and precision in data display and interpretation are essential for translating the data in a graph into scientific understanding.[2][3][4][5][6][7]
Graphical perception is achieved in dimensions or steps of discernment by:
Cleveland and McGill's experiments[1] to elucidate which graphical elements humans detect most accurately are a fundamental component of good statistical graphics design principles.[2][3][5][6][8][9][10][11][12] In practical terms, graphs that display data as relative position on a common scale, the most accurately perceived encoding, are most effective. A graph type that utilizes this element is the dot plot. Conversely, angles are perceived with less accuracy; an example is the pie chart. Humans do not naturally order color hues, and only a limited number of hues can be discriminated in one graphic.
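The dot plot mentioned above can be sketched in a few lines. This example assumes matplotlib is available; the category labels and values are illustrative. Each value is encoded purely as position along a shared horizontal axis, the encoding Cleveland and McGill found people decode most accurately.

```python
# Minimal dot plot: one marker per category, value encoded as
# position on a common horizontal scale (no angles, no areas).
import matplotlib
matplotlib.use("Agg")  # headless backend; renders without a display
import matplotlib.pyplot as plt

def dot_plot(labels, values):
    """Plot each category's value as a dot on a shared horizontal axis."""
    fig, ax = plt.subplots()
    ax.scatter(values, range(len(labels)))      # position on common scale
    ax.set_yticks(range(len(labels)))
    ax.set_yticklabels(labels)
    ax.set_xlabel("value")
    return fig, ax

fig, ax = dot_plot(["A", "B", "C"], [3.2, 5.1, 4.4])
fig.savefig("dot_plot.png")
```

Contrast this with a pie chart of the same data, where each value would be decoded from an angle, a task the experiments found markedly less accurate.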
Graphic designs that utilize visual pre-attentive processing in their assembly are why a picture can be worth a thousand words: they use the brain's ability to perceive patterns. Not all graphs are designed with pre-attentive processing in mind. For example, in the attached figure, a design feature, table look-up, requires the brain to work harder and take longer to decode than a graph that utilizes our ability to discern patterns.[3]
Graphic design that readily answers the scientific questions of interest will include appropriate estimation. Details for choosing the appropriate graph type for continuous and categorical data and for grouping have been described.[6][13] Graphics principles for accuracy, clarity and transparency have been detailed[2][3][4][14] and key elements summarized.[15]