text
stringlengths 11
320k
| source
stringlengths 26
161
|
|---|---|
1 the Roadis an experimental novel composed byartificial intelligence(AI). EmulatingJack Kerouac'sOn the Road, Ross Goodwin drove from New York to New Orleans in March 2017 with an AI in a laptop hooked up to various sensors, whose output the AI turned into words that were printed on rolls ofreceipt paper. The novel was published in 2018 by Jean Boîte Éditions.
Goodwin left the text unedited. Although he felt the prose was "choppy", and contained typographical errors, he wanted to present the machine-generated text verbatim, for future study. The story begins: "It was nine seventeen in the morning, and the house was heavy".[1]
EmulatingJack Kerouac's novelOn the Road, Ross Goodwin traveled from New York to New Orleans in March 2017[2]with three sensors, providing real-world input; a surveillance camera mounted on the trunk,[2]trained on the passing scenery; a microphone, picking up conversations inside the car, and additionally theGlobal Positioning System(GPS), tracking the car's location.[3]
Input from these sources, and the time provided by the computer's internal clock,[1]was fed into along short-term memoryrecurrent neural network,[1]which in turn generated sentences on rolls ofreceipt paper.[3]
The car was a Cadillac; Goodwin explained later he wanted an "authoritative" car (and was unable to get aFord Crown Victoria), and worried that people might think him a terrorist if they saw the car with its electronics and wires.Googlepaid part of the cost, having become interested in Goodwin's work atNew York University.
Accompanying him were five other people (including his sister and his fiancée), and the Cadillac was followed by a film crew which documented the four-day journey; the documentary was directed by Lewis Rapkin.[2]
The training dataset included a samplefiction,[3]consisting of three differenttext corpora, each with about 20 million words—one with poetry, one with science fiction, and one with "bleak" writing, in Goodwin's words. It had also been fed a data set fromFoursquare; the AI recognized locations from Foursquare, and appended commentaries to them.
The conversations captured inside the car were rendered in mutated fashion. The locations provided by the GPS were outputted verbatim, to open the day's writing.[2]
The novel was generated letter by letter.[2]Due to continual input from the GPS and time clock, the novel often mentions thelatitude,longitude, and time of day.[1]It was printed unedited and thus is "choppy", according to Goodwin; typos were retained, since he wanted to show the text "in its most raw form".[3]
Goodwin said his main purpose for this novel is to reveal the way machines create words: "In the future when this text becomes more sophisticated it's a warning. If you see patterns like this, it may not have been written by a human".[3]
Thomas Hornigold, writing forSingularity Hub, concluded that the AI is no Jack Kerouac, but that "you might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding".[1]Brian MerchantofThe Atlanticread the entire novel in one sitting. He could not recognize a coherent plot or story arc, but saw "plenty of pixelated poetry in its ragtag assemblage of modern American imagery. And there are some striking and memorable lines".[2]
Ross Goodwin, a formerghostwriterfor theObama administrationand acreative technologist,[2]has often usedneural networksto create poetry and screenplays. Notable works include the short filmSunspring, starringThomas Middleditchand directed by Goodwin's frequent collaboratorOscar Sharp,[4]and Word.Camera, an 1885bellowscamera that outputs poetry about whatever it is pointed at when the button is pressed.[5]His Master's Thesis atNew York Universitywas a project called "Narrated Reality",[6]for which he walked around the city with a backpack containing compass, punch clock, and camera; data from these devices was fed into anLSTMneural network whose output was "weird associative poetry". A year after1 the Road,Googlehired him to work with theirArtists and Machine Intelligenceproject.[2]
|
https://en.wikipedia.org/wiki/1_the_Road
|
Artificial intelligence detection softwareaims to determine whether somecontent(text, image, video or audio) wasgeneratedusingartificial intelligence(AI).
However, the reliability of such software is a topic of debate,[1]and there are concerns about the potential misapplication of AI detection software by educators.
Multiple AI detection tools have been demonstrated to be unreliable in terms of accurately and comprehensively detecting AI-generated text. In a study conducted by Weber-Wulff et al. and published in 2023, researchers evaluated 14 detection tools includingTurnitinandGPTZeroand found that "all scored below 80% of accuracy and only 5 over 70%."[2]
In AI content detection, afalse positiveis when human-written work is incorrectly flagged as AI-written. Many AI detection software claim to have a minimal level of false positives, with Turnitin claiming a less than 1% false positive rate.[3]However, later research byThe Washington Postproduced much higher rates of 50%, though they used a smaller sample size.[4]False positives in an academic setting frequently lead to accusations ofacademic misconduct, which can have serious consequences for a student'sacademic record. Additionally, studies have shown evidence that many AI detection models are prone to give false positives to work written by those whosefirst languageisn'tEnglishandneurodiversepeople.[5][6]
Afalse negativeis a failure to identify documents with AI-written text. False negatives often happen as a result of a detection software'ssensitivitylevel or because evasive techniques were used when generating the work to make it sound more human.[7]False negatives are less of a concern academically, since they aren't likely to lead to accusations and ramifications. Notably, Turnitin stated they have a 15% false negative rate.[8]
For text, this is usually done to prevent allegedplagiarism, often by detecting repetition of words as telltale signs that a text was AI-generated (includingAI hallucinations). They are often used by teachers marking their students, usually on anad hocbasis. Following the release ofChatGPTand similar AI text generative software, many educational establishments have issued policies against the use of AI by students.[9]AI text detection software is also used by those assessing job applicants, as well as onlinesearch engines.[10]
Current detectors may sometimes be unreliable and have incorrectly marked work by humans as originating from AI[11][12][13]while failing to detect AI-generated work in other instances.[14]MIT Technology Reviewsaid that the technology "struggled to pick up ChatGPT-generated text that had been slightly rearranged by humans and obfuscated by a paraphrasing tool".[15]AI text detection software has also been shown to discriminate against non-native speakers of English.[10]
Two students from theUniversity of California, Davis, were referred to the university's Office of Student Success and Judicial Affairs (OSSJA) after their professors scanned their essays with positive results; the first with an AI detector called GPTZero, and the second with an AI detector integration inTurnitin. However, following media coverage,[16]and a thorough investigation, the students were cleared of any wrongdoing.[17][18]
In April 2023,Cambridge Universityand other members of theRussell Groupof universities in the United Kingdom opted out of Turnitin's AI text detection tool, after expressing concerns it was unreliable.[19]TheUniversity of Texas at Austinopted out of the system six months later.[20]
In May 2023, a professor atTexas A&M University–Commerceused ChatGPT to detect whether his students' content was written by it, which ChatGPT said was the case. As such, he threatened to fail the class despite ChatGPT not being able to detect AI-generated writing.[21]No students were prevented from graduating because of the issue, and all but one student (who admitted to using the software) were exonerated from accusations of having used ChatGPT in their content.[22]
An article by Thomas Germain, published onGizmodoin June 2024, reported job losses among freelance writers and journalists due to AI text detection software mistakenly classifying their work as AI-generated.[23]
To improve the reliability of AI text detection, researchers have exploreddigital watermarkingtechniques. A 2023 paper titled "A Watermark for Large Language Models"[24]presents a method to embed imperceptible watermarks into text generated bylarge language models(LLMs). This watermarking approach allows content to be flagged as AI-generated with a high level of accuracy, even when text is slightly paraphrased or modified. The technique is designed to be subtle and hard to detect for casual readers, thereby preserving readability, while providing a detectable signal for those employing specialized tools. However, while promising, watermarking faces challenges in remaining robust under adversarial transformations and ensuring compatibility across different LLMs.
There is software available designed to bypass AI text detection.[25]
A study published in August 2023 analyzed 20 abstracts from papers published in theEye Journal, which were then paraphrased usingGPT-4.0. The AI-paraphrased abstracts were examined for plagiarism using QueText and for AI-generated content using Originality.AI. The texts were then re-processed through anadversarial softwarecalledUndetectable.aiin order to reduce the AI-detection scores. The study found that the AI detection tool, Originality.AI, identified text generated by GPT-4 with a mean accuracy of 91.3%. However, after reprocessing by Undetectable.ai, the detection accuracy of Originality.ai dropped to a mean accuracy of 27.8%.[26][27]
Some experts also believe that techniques likedigital watermarkingare ineffective because they can be removed or added to trigger false positives.[28]"A Watermark for Large Language Models" paper by Kirchenbauer et al.[24]also addresses potential vulnerabilities of watermarking techniques. The authors outline a range of adversarial tactics, including text insertion, deletion, and substitution attacks, that could be used to bypass watermark detection. These attacks vary in complexity, from simple paraphrasing to more sophisticated approaches involvingtokenizationand homoglyph alterations. The study highlights the challenge of maintaining watermark robustness against attackers who may employ automatedparaphrasingtools or even specific language model replacements to alter text spans iteratively while retaining semantic similarity. Experimental results show that although such attacks can degrade watermark strength, they also come at the cost of text quality and increased computational resources.
One shortcoming of most AI content detection software is their inability to identify AI-generated text in any language. Large language models (LLMs) like ChatGPT, Claude, and Gemini can write in different languages, but traditional AI text detection tools have primarily been trained in English and a few other widely spoken languages, such as French and Spanish. Fewer AI detection solutions can detect AI-generated text in languages like Farsi, Arabic, or Hindi.[citation needed]
Several purported AI image detection software exist, to detect AI-generated images (for example, those originating fromMidjourneyorDALL-E). They are not completely reliable.[29][30]
Others claim to identify video and audiodeepfakes, but this technology is also not fully reliable yet either.[31]
Despite debate around the efficacy of watermarking,Google DeepMindis actively developing a detection software called SynthID, which works by inserting a digital watermark that is invisible to the human eye into thepixelsof an image.[32][33]
|
https://en.wikipedia.org/wiki/Artificial_intelligence_detection_software
|
Automated essay scoring(AES) is the use of specialized computer programs to assign grades to essays written in an educational setting. It is a form ofeducational assessmentand an application ofnatural language processing. Its objective is to classify a large set of textual entities into a small number of discrete categories, corresponding to the possible grades, for example, the numbers 1 to 6. Therefore, it can be considered a problem ofstatistical classification.
Several factors have contributed to a growing interest in AES. Among them are cost, accountability, standards, and technology. Rising education costs have led to pressure to hold the educational system accountable for results by imposing standards. The advance of information technology promises to measure educational achievement at reduced cost.
The use of AES forhigh-stakes testingin education has generated significant backlash, with opponents pointing to research that computers cannot yet grade writing accurately and arguing that their use for such purposes promotes teaching writing in reductive ways (i.e.teaching to the test).
Most historical summaries of AES trace the origins of the field to the work ofEllis Batten Page.[1]In 1966, he argued[2]for the possibility of scoring essays by computer, and in 1968 he published[3]his successful work with a program called Project Essay Grade (PEG). Using the technology of that time, computerized essay scoring would not have been cost-effective,[4]so Page abated his efforts for about two decades. Eventually, Page sold PEG toMeasurement Incorporated
By 1990, desktop computers had become so powerful and so widespread that AES was a practical possibility. As early as 1982, a UNIX program called Writer's Workbench was able to offer punctuation, spelling and grammar advice.[5]In collaboration with several companies (notably Educational Testing Service), Page updated PEG and ran some successful trials in the early 1990s.[6]
Peter Foltz andThomas Landauerdeveloped a system using a scoring engine called the Intelligent Essay Assessor (IEA). IEA was first used to score essays in 1997 for their undergraduate courses.[7]It is now a product from Pearson Educational Technologies and used for scoring within a number of commercial products and state and national exams.
IntelliMetric is Vantage Learning's AES engine. Its development began in 1996.[8]It was first used commercially to score essays in 1998.[9]
Educational Testing Service offers "e-rater", an automated essay scoring program. It was first used commercially in February 1999.[10]Jill Burstein was the team leader in its development. ETS's Criterion Online Writing Evaluation Service uses the e-rater engine to provide both scores and targeted feedback.
Lawrence Rudner has done some work with Bayesian scoring, and developed a system called BETSY (Bayesian Essay Test Scoring sYstem).[11]Some of his results have been published in print or online, but no commercial system incorporates BETSY as yet.
Under the leadership of Howard Mitzel and Sue Lottridge, Pacific Metrics developed a constructed response automated scoring engine, CRASE. Currently utilized by several state departments of education and in a U.S. Department of Education-funded Enhanced Assessment Grant, Pacific Metrics’ technology has been used in large-scale formative and summative assessment environments since 2007.
Measurement Inc. acquired the rights to PEG in 2002 and has continued to develop it.[12]
In 2012, theHewlett Foundationsponsored a competition onKagglecalled the Automated Student Assessment Prize (ASAP).[13]201 challenge participants attempted to predict, using AES, the scores that human raters would give to thousands of essays written to eight different prompts. The intent was to demonstrate that AES can be as reliable as human raters, or more so. The competition also hosted a separate demonstration among nine AES vendors on a subset of the ASAP data. Although the investigators reported that the automated essay scoring was as reliable as human scoring,[14]this claim was not substantiated by any statistical tests because some of the vendors required that no such tests be performed as a precondition for their participation.[15]Moreover, the claim that the Hewlett Study demonstrated that AES can be as reliable as human raters has since been strongly contested,[16][17]including byRandy E. Bennett, the Norman O. Frederiksen Chair in Assessment Innovation at theEducational Testing Service.[18]Some of the major criticisms of the study have been that five of the eight datasets consisted of paragraphs rather than essays, four of the eight data sets were graded by human readers for content only rather than for writing ability, and that rather than measuring human readers and the AES machines against the "true score", the average of the two readers' scores, the study employed an artificial construct, the "resolved score", which in four datasets consisted of the higher of the two human scores if there was a disagreement. This last practice, in particular, gave the machines an unfair advantage by allowing them to round up for these datasets.[16]
In 1966, Page hypothesized that, in the future, the computer-based judge will be better correlated with each human judge than the other human judges are.[2]Despite criticizing the applicability of this approach to essay marking in general, this hypothesis was supported for marking free text answers to short questions, such as those typical of the BritishGCSEsystem.[19]Results ofsupervised learningdemonstrate that the automatic systems perform well when marking by different human teachers is in good agreement. Unsupervisedclusteringof answers showed that excellent papers and weak papers formed well-defined clusters, and the automated marking rule for these clusters worked well, whereas marks given by human teachers for the third cluster ('mixed') can be controversial, and the reliability of any assessment of works from the 'mixed' cluster can often be questioned (both human and computer-based).[19]
According to a recent survey,[20]modern AES systems try to score different dimensions of an essay's quality in order to provide feedback to users. These dimensions include the following items:
From the beginning, the basic procedure for AES has been to start with a training set of essays that have been carefully hand-scored.[21]The program evaluates surface features of the text of each essay, such as the total number of words, the number of subordinate clauses, or the ratio of uppercase to lowercase letters—quantities that can be measured without any human insight. It then constructs a mathematical model that relates these quantities to the scores that the essays received. The same model is then applied to calculate scores of new essays.
Recently, one such mathematical model was created by Isaac Persing and Vincent Ng.[22]which not only evaluates essays on the above features, but also on their argument strength. It evaluates various features of the essay, such as the agreement level of the author and reasons for the same, adherence to the prompt's topic, locations of argument components (major claim, claim, premise), errors in the arguments, cohesion in the arguments among various other features. In contrast to the other models mentioned above, this model is closer in duplicating human insight while grading essays. Due to the growing popularity of deep neural networks, deep learning approaches have been adopted for automated essay scoring, generally obtaining superior results, often surpassing inter-human agreement levels.[23]
The various AES programs differ in what specific surface features they measure, how many essays are required in the training set, and most significantly in the mathematical modeling technique. Early attempts usedlinear regression. Modern systems may use linear regression or other machine learning techniques often in combination with other statistical techniques such aslatent semantic analysis[24]andBayesian inference.[11]
The automated essay scoring task has also been studied in the cross-domain setting using machine learning models, where the models are trained on essays written for one prompt (topic) and tested on essays written for another prompt. Successful approaches in the cross-domain scenario are based on deep neural networks[25]or models that combine deep and shallow features.[26]
Any method of assessment must be judged on validity, fairness, and reliability.[27]An instrument is valid if it actually measures the trait that it purports to measure. It is fair if it does not, in effect, penalize or privilege any one class of people. It is reliable if its outcome is repeatable, even when irrelevant external factors are altered.
Before computers entered the picture, high-stakes essays were typically given scores by two trained human raters. If the scores differed by more than one point, a more experienced third rater would settle the disagreement. In this system, there is an easy way to measure reliability: byinter-rater agreement. If raters do not consistently agree within one point, their training may be at fault. If a rater consistently disagrees with how other raters look at the same essays, that rater probably needs extra training.
Various statistics have been proposed to measure inter-rater agreement. Among them are percent agreement,Scott's π,Cohen's κ,Krippendorf's α,Pearson's correlation coefficient r,Spearman's rank correlation coefficientρ, and Lin'sconcordance correlation coefficient.
Percent agreement is a simple statistic applicable to grading scales with scores from 1 to n, where usually 4 ≤ n ≤ 6. It is reported as three figures, each a percent of the total number of essays scored: exact agreement (the two raters gave the essay the same score), adjacent agreement (the raters differed by at most one point; this includes exact agreement), and extreme disagreement (the raters differed by more than two points). Expert human graders were found to achieve exact agreement on 53% to 81% of all essays, and adjacent agreement on 97% to 100%.[28]
Inter-rater agreement can now be applied to measuring the computer's performance. A set of essays is given to two human raters and an AES program. If the computer-assigned scores agree with one of the human raters as well as the raters agree with each other, the AES program is considered reliable. Alternatively, each essay is given a "true score" by taking the average of the two human raters' scores, and the two humans and the computer are compared on the basis of their agreement with the true score.
Some researchers have reported that their AES systems can, in fact, do better than a human. Page made this claim for PEG in 1994.[6]Scott Elliot said in 2003 that IntelliMetric typically outperformed human scorers.[8]AES machines, however, appear to be less reliable than human readers for any kind of complex writing test.[29]
In current practice, high-stakes assessments such as the GMAT are always scored by at least one human. AES is used in place of a second rater. A human rater resolves any disagreements of more than one point.[30]
AES has been criticized on various grounds. Yanget al. mention "the over-reliance on surface features of responses, the insensitivity to the content of responses and to creativity, and the vulnerability to new types of cheating and test-taking strategies."[30]Several critics are concerned that students' motivation will be diminished if they know that no human will read their writing.[31]Among the most telling critiques are reports of intentionally gibberish essays being given high scores.[32]
On 12 March 2013, HumanReaders.Org launched anonline petition, "Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment". Within weeks, the petition gained thousands of signatures, includingNoam Chomsky,[33]and was cited in a number of newspapers, includingThe New York Times,[34]and on a number of education and technology blogs.[35]
The petition describes the use of AES for high-stakes testing as "trivial", "reductive", "inaccurate", "undiagnostic", "unfair" and "secretive".[36]
In a detailed summary of research on AES, the petition site notes, "RESEARCH FINDINGS SHOW THAT no one—students, parents, teachers, employers, administrators, legislators—can rely on machine scoring of essays ... AND THAT machine scoring does not measure, and therefore does not promote, authentic acts of writing."[37]
The petition specifically addresses the use of AES for high-stakes testing and says nothing about other possible uses.
Most resources for automated essay scoring are proprietary.
|
https://en.wikipedia.org/wiki/Automated_essay_scoring
|
Biomedical text mining(includingbiomedical natural language processingorBioNLP) refers to the methods and study of howtext miningmay be applied to texts and literature of thebiomedicaldomain. As a field of research, biomedical text mining incorporates ideas fromnatural language processing,bioinformatics,medical informaticsandcomputational linguistics. The strategies in this field have been applied to the biomedical literature available through services such asPubMed.
In recent years, the scientific literature has shifted to electronic publishing but the volume of information available can be overwhelming. This revolution of publishing has caused a high demand for text mining techniques. Text mining offers information retrieval (IR) and entity recognition (ER).[1]IR allows the retrieval of relevant papers according to the topic of interest, e.g. through PubMed. ER is practiced when certain biological terms are recognized (e.g.proteinsorgenes) for further processing.
Applying text mining approaches to biomedical text requires specific considerations common to the domain.
Large annotatedcorporaused in the development and training of general purpose text mining methods (e.g., sets of movie dialogue,[3]product reviews,[4]or Wikipedia article text) are not specific for biomedical language. While they may provide evidence of general text properties such as parts of speech, they rarely contain concepts of interest to biologists or clinicians. Development of new methods to identify features specific to biomedical documents therefore requires assembly of specialized corpora.[5]Resources designed to aid in building new biomedical text mining methods have been developed through the Informatics for Integrating Biology and the Bedside (i2b2) challenges[6][7][8]and biomedical informatics researchers.[9][10]Text mining researchers frequently combine these corpora with thecontrolled vocabulariesandontologiesavailable through theNational Library of Medicine'sUnified Medical Language System (UMLS)andMedical Subject Headings (MeSH).
Machine learning-based methods often require very large data sets as training data to build useful models.[11]Manual annotation of large text corpora is not realistically possible. Training data may therefore be products of weak supervision[12][13]or purely statistical methods.
Like other text documents, biomedical documents containunstructured data.[14]Research publications follow different formats, contain different types of information, and are interspersed with figures, tables, and other non-text content. Both unstructured text and semi-structured document elements, such as tables, may contain important information that should be text mined.[15]Clinical documents may vary in structure and language between departments and locations. Other types of biomedical text, such as drug labels,[16]may follow general structural guidelines but lack further details.
Biomedical literature contains statements about observations that may not be statements of fact. This text may express uncertainty or skepticism about claims. Without specific adaptations, text mining approaches designed to identify claims within text may mis-characterize these "hedged" statements as facts.[17]
Biomedical text mining applications developed for clinical use should ideally reflect the needs and demands of clinicians.[5]This is a concern in environments whereclinical decision supportis expected to be informative and accurate. A comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases
is presented in.[18]
New text mining systems must work with existing standards, electronic medical records, and databases.[5]Methods for interfacing with clinical systems such asLOINChave been developed[19]but require extensive organizational effort to implement and maintain.[20][21]
Text mining systems operating with private medical data must respect its security and ensure it is rendered anonymous where appropriate.[22][23][24]
Specific sub tasks are of particular concern when processing biomedical text.[14]
Developments in biomedical text mining have incorporated identification of biological entities withnamed entity recognition, or NER. Names and identifiers for biomolecules such asproteinsandgenes,[25]chemical compounds and drugs,[26]and disease names[27]have all been used as entities. Most entity recognition methods are supported by pre-defined linguistic features or vocabularies, though methods incorporatingdeep learningandword embeddingshave also been successful at biomedical NER.[28][29]
Biomedical documents may beclassifiedorclusteredbased on their contents and topics. In classification, document categories are specified manually,[30]while in clustering, documents form algorithm-dependent, distinct groups.[31]These two tasks are representative ofsupervisedandunsupervisedmethods, respectively, yet the goal of both is to produce subsets of documents based on their distinguishing features. Methods for biomedical document clustering have relied uponk-means clustering.[31]
Biomedical documents describe connections between concepts, whether they are interactions between biomolecules, events occurring subsequently over time (i.e.,temporalrelationships), orcausalrelationships. Text mining methods may perform relation discovery to identify these connections, often in concert with named entity recognition.[32]
The challenge of identifying uncertain or "hedged" statements has been addressed through hedge cue detection in biomedical literature.[17]
Multiple researchers have developed methods to identify specific scientific claims from literature.[33][34]In practice, this process involves both isolating phrases and sentences denoting the core arguments made by the authors of a document (a process known asargument mining, employing tools used in fields such as political science) and comparing claims to find potential contradictions between them.[34]
Information extraction, or IE, is the process of automatically identifying structured information fromunstructuredor partially structured text. IE processes can involve several or all of the above activities, including named entity recognition, relationship discovery, and document classification, with the overall goal of translating text to a more structured form, such as the contents of a template orknowledge base. In the biomedical domain, IE is used to generate links between concepts described in text, such asgene A inhibits gene Bandgene C is involved in disease G.[35]Biomedical knowledge bases containing this type of information are generally products of extensive manual curation, so replacement of manual efforts with automated methods remains a compelling area of research.[36][37]
Biomedical text mining supports applications for identifying documents and concepts matching search queries. Search engines such asPubMedsearch allow users to query literature databases with words or phrases present in document contents,metadata, orindicessuch asMeSH. Similar approaches may be used formedical literature retrieval. For more fine-grained results, some applications permit users to search withnatural language queriesand identify specific biomedical relationships.[38]
On 16 March 2020, theNational Library of Medicineand others launched the COVID-19 Open Research Dataset (CORD-19) to enabletext miningof the current literature on the novel virus. The dataset is hosted by the Semantic Scholar project[39]of theAllen Institute for AI.[40]Other participants includeGoogle,Microsoft Research, theCenter for Security and Emerging Technology, and theChan Zuckerberg Initiative.[41]
The following table lists a selection of biomedical text corpora and their contents. These items include annotated corpora, sources of biomedical research literature, and resources frequently used as vocabulary and/or ontology references, such asMeSH. Items marked "Yes" under "Freely Available" can be downloaded from a publicly accessible location.
Several groups have developed sets of biomedical vocabulary mapped to vectors of real numbers, known asword vectors or word embeddings. Sources of pre-trained embeddings specific for biomedical vocabulary are listed in the table below. The majority are results of theword2vecmodel developed by Mikolovet al[86]or variants of word2vec.
Text mining applications in the biomedical field include computational approaches to assist with studies inprotein docking,[91]protein interactions,[92][93]and protein-disease associations.[94]Text mining techniques have several advantages over traditional manual curation for identifying associations. Text mining algorithms can identify and extract information from a vast amount of literature, and more efficiently than manual curation. This includes the integration of data from different sources, including literature,databases, and experimental results. These algorithms have transformed the process of identifying and prioritizing novel genes and gene-disease associations that have previously been overlooked.[95]
These methods are the foundation to facilitate systematic searches of overlooked scientific and biomedical literature which could carry significant association between research. The combination of information can stem new discoveries and hypotheses especially with the integration of datasets. It must be noted that the quality of the database is as important as the size of it. Promising text mining methods such as iProLINK (integrated Protein Literature Information and Knowledge) have been developed to curate data sources that can aid text mining research in areas of bibliography mapping, annotation extraction, protein named entity recognition, and protein ontology development.[96]Curated databases such as UniProt can accelerate the accessibility of targeted information not only for genetic sequences, but also for literature and phylogeny.
Methods for determining the association ofgene clustersobtained bymicroarrayexperiments with the biological context provided by the corresponding literature have been developed.[97]
Automatic extraction of protein interactions[98]and associations of proteins to functional concepts (e.g.gene ontologyterms) has been explored.[citation needed]The search engine PIE was developed to identify and return protein-protein interaction mentions fromMEDLINE-indexed articles.[99]The extraction of kinetic parameters from text or thesubcellular locationof proteins have also been addressed by information extraction and text mining technology.[citation needed]
Computational gene prioritization is an essential step in understanding the genetic basis of diseases, particularly withingenetic linkageanalysis. Text mining and other computational tools extract relevant information, including gene-disease associations, among others, from numerous data sources, then apply differentranking algorithmsto prioritize the genes based on their relevance to the specific disease.[100]Text mining and gene prioritization allow researchers to focus their efforts on the most promising candidates for further research.
Computational tools for gene prioritization continue to be developed and analyzed. One group studied the performance of various text-mining techniques for disease gene prioritization. They investigated different domain vocabularies, text representation schemes, and ranking algorithms in order to find the best approach for identifying disease-causing genes to establish abenchmark.[101]
An agricultural genomics group identified genes related tobovinereproductive traits using text mining, among other approaches.[102]
A text mining study assembled a collection of 709 coreextracellular matrix proteinsand associated proteins based on two databases:MatrixDB(matrixdb.univ-lyon1.fr) andUniProt. This set of proteins had a manageable size and a rich body of associated information, making it a suitable for the application of text mining tools. The researchers conducted phrase-mining analysis to cross-examine individual extracellular matrix proteins across the biomedical literature concerned with six categories ofcardiovascular diseases. They used a phrase-mining pipeline, Context-aware SemanticOnline Analytical Processing(CaseOLAP),[103]then semantically scored all 709 proteins according to their Integrity, Popularity, and Distinctiveness using the CaseOLAP pipeline. The text mining study validated existing relationships and informed previously unrecognized biological processes in cardiovascular pathophysiology.[94]
Search engines designed toretrieve biomedical literaturerelevant to a user-provided query frequently rely upon text mining approaches. Publicly available tools specific for research literature includePubMedsearch,Europe PubMed Centralsearch, GeneView,[104]and APSE[105]Similarly, search engines and indexing systems specific for biomedical data have been developed, including DataMed[106]and OmicsDI.[107]
Some search engines, such as Essie,[108]OncoSearch,[109]PubGene,[110][111]andGoPubMed[112]were previously public but have since been discontinued, rendered obsolete, or integrated into commercial products.
Electronic medical records(EMRs) andelectronic health records(EHRs) are collected by clinical staff in the course of diagnosis and treatment. Though these records generally include structured components with predictable formats and data types, the remainder of the reports are often free-text and difficult to search, leading to challenges with patient care.[113]Numerous complete systems and tools have been developed to analyse these free-text portions.[114]The MedLEE system was originally developed for analysis of chestradiologyreports but later extended to other report topics.[115]Theclinical Text Analysis and Knowledge Extraction System, or cTAKES, annotates clinical text using a dictionary of concepts.[116]The CLAMP system offers similar functionality with a user-friendly interface.[117]
Computational frameworkshave been developed to rapidly build tools for biomedical text mining tasks. SwellShark[118]is a framework for biomedical NER that requires no human-labeled data but does make use of resources for weak supervision (e.g.,UMLSsemantic types). The SparkText framework[119]usesApache Sparkdata streaming, aNoSQLdatabase, and basicmachine learningmethods to buildpredictive modelsfrom scientific articles.
Some biomedical text mining and natural language processing tools are available throughapplication programming interfaces, or APIs. NOBLE Coder performs concept recognition through an API.[120]
The followingacademic conferencesand workshops host discussions and presentations in biomedical text mining advances. Most publishproceedings.
A variety ofacademic journalspublishing manuscripts on biology and medicine include topics in text mining and natural language processing software. Some journals, including theJournal of the American Medical Informatics Association(JAMIA) and theJournal of Biomedical Informaticsare popular publications for these topics.
|
https://en.wikipedia.org/wiki/Biomedical_text_mining
|
Computational linguisticsis aninterdisciplinaryfield concerned with thecomputational modellingofnatural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws uponlinguistics,computer science,artificial intelligence,mathematics,logic,philosophy,cognitive science,cognitive psychology,psycholinguistics,anthropologyandneuroscience, among others. Computational linguistics is closely related tomathematical linguistics.
The field overlapped withartificial intelligencesince the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English.[1]Since rule-based approaches were able to makearithmetic(systematic) calculations much faster and more accurately than humans, it was expected thatlexicon,morphology,syntaxandsemanticscan be learned using explicit rules, as well. After thefailure of rule-based approaches,David Hays[2]coined the term in order to distinguish the field from AI and co-founded both theAssociation for Computational Linguistics (ACL)and theInternational Committee on Computational Linguistics(ICCL) in the 1970s and 1980s. What started as an effort to translate between languages evolved into a much wider field ofnatural language processing.[3][4]
In order to be able to meticulously study theEnglish language, an annotated text corpus was much needed. The PennTreebank[5]was one of the most used corpora. It consisted of IBM computer manuals, transcribed telephone conversations, and other texts, together containing over 4.5 million words of American English, annotated using bothpart-of-speechtagging and syntactic bracketing.[6]
Japanese sentence corpora were analyzed and a pattern oflog-normalitywas found in relation to sentence length.[7]
The fact that duringlanguage acquisition, children are largely only exposed to positive evidence,[8]meaning that the only evidence for what is a correct form is provided, and no evidence for what is not correct,[9]was a limitation for the models at the time because the now availabledeep learningmodels were not available in late 1980s.[10]
It has been shown that languages can be learned with a combination of simple input presented incrementally as the child develops better memory and longer attention span,[11]which explained the long period oflanguage acquisitionin human infants and children.[11]
Robots have been used to test linguistic theories.[12]Enabled to learn as children might, models were created based on anaffordancemodel in which mappings between actions, perceptions, and effects were created and linked to spoken words. Crucially, these robots were able to acquire functioning word-to-meaning mappings without needing grammatical structure.
Using thePrice equationandPólya urndynamics, researchers have created a system which not only predicts future linguistic evolution but also gives insight into the evolutionary history of modern-day languages.[13]
Noam Chomsky's theories have influenced computational linguistics, particularly in understanding how infants learn complex grammatical structures, such as those described inChomsky normal form.[14]Attempts have been made to determine how an infant learns a "non-normal grammar" as theorized by Chomsky normal form.[9]Research in this area combines structural approaches with computational models to analyze largelinguistic corporalike the PennTreebank, helping to uncover patterns in language acquisition.[15]
|
https://en.wikipedia.org/wiki/Computational_linguistics
|
Computer-assisted reviewing(CAR) tools are pieces of software based on text-comparison and analysisalgorithms.[1]These tools focus on the differences between two documents, taking into account each document'stypefacethrough an intelligent analysis.
The intelligent analysis used by CAR tools detect the differences do not have the same value depending on their type and/or the document field/subject. For example, a difference on a number is not the same if this number is a date, a price, a page number, a figure number, a part of an address, a footnote call, a list item number, a title number, etc.
These tools are interesting in various kind of applications:
Computer assisted reviewing for translation (CART) tools are CAR tools being able to manage multi-lingual comparisons. This implies to be able to match each part of text from one document to the other, taking into account the specificity of each language: date/number formats, punctuation (for example,French/Englishquotation marks), etc. The best CART tools are able to find matches between noun or verbal groups, this implying to find terminological and syntactical elements using linguistic analyzers.
|
https://en.wikipedia.org/wiki/Computer-assisted_reviewing
|
Controlled natural languages(CNLs) are subsets ofnatural languagesthat are obtained by restricting the grammar and vocabulary in order to reduce or eliminateambiguityand complexity. Traditionally, controlled languages fall into two major types: those that improve readability for human readers (e.g. non-native speakers),
and those that enable reliable automaticsemantic analysisof the language.[1][2]
The first type of languages (often called "simplified" or "technical" languages), for exampleASD Simplified Technical English, Caterpillar Technical English,IBM's Easy English, are used in the industry to increase the quality of technical documentation, and possibly simplify thesemi-automatic translationof the documentation. These languages restrict the writer by general rules such as "Keep sentences short", "Avoid the use ofpronouns", "Only use dictionary-approved words", and "Use only theactive voice".[3]
The second type of languages have a formal syntax andformal semantics, and can be mapped to an existingformal language, such asfirst-order logic. Thus, those languages can be used asknowledge representation languages,[4]and writing of those languages is supported by fully automaticconsistencyand redundancy checks,query answering, etc.
Existing controlled natural languages include:[5][6]
IETFhas reservedsimpleas aBCP 47variant subtagfor simplified versions of languages.[13]
|
https://en.wikipedia.org/wiki/Controlled_natural_language
|
Deep linguistic processingis anatural language processingframework which draws on theoretical anddescriptive linguistics. It models language predominantly by way of theoretical syntactic/semantic theory (e.g.CCG,HPSG,LFG,TAG, thePrague School). Deep linguistic processing approaches differ from "shallower" methods in that they yield more expressive and structural representations which directly capturelong-distance dependenciesand underlyingpredicate-argumentstructures.[1]The knowledge-intensive approach of deep linguistic processing requires considerable computational power, and has in the past sometimes been judged as being intractable. However, research in the early 2000s had made considerable advancement in efficiency of deep processing.[2][3]Today, efficiency is no longer a major problem for applications using deep linguistic processing.
Traditionally, deep linguistic processing has been concerned with computational grammar development (for use in bothparsingand generation). These grammars were manually developed, maintained and were computationally expensive to run. In recent years, machine learning approaches (also known asshallow linguistic processing) have fundamentally altered the field ofnatural language processing. The rapid creation of robust and wide-coverage machine learning NLP tools requires substantially lesser amount of manual labor. Thus deep linguistic processing methods have received less attention.
However, it is the belief of some computational linguists[who?]that in order for computers to understand natural language orinference, detailed syntactic andsemantic representationis necessary. Moreover, while humans can easily understand a sentence and its meaning, shallow linguistic processing might lack human language 'understanding'. For example:[4]
In sentence (a), a shallowinformation extractionsystem might infer wrongly that Microsoft's headquarters was located in Georgia. While as humans, we understand from the sentence that Microsoft office was never in Georgia.
In sentence (b), a shallow system could wrongly infer that Israel was established in May 1971. Humans know that it is the National Institute for Psychobiology that was established in 1971.In summary of the comparison between deep and shallow language processing, deep linguistic processing provides a knowledge-rich analysis of language through manually developed grammars and language resources. Whereas, shallow linguistic processing provides a knowledge-lean analysis of language through statistical/machine learning manipulation of texts and/orannotated linguisticresource.
"Deep" computational linguists are divided in different sub-communities based on the grammatical formalism they adopted for deep linguistic processing. The major sub-communities includes the:
The shortlist above is not exhaustively representative of all the communities working on deep linguistic processing.
|
https://en.wikipedia.org/wiki/Deep_linguistic_processing
|
Distributional semantics[1]is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-calleddistributionalhypothesis:linguistic items with similar distributions have similar meanings.
Thedistributional hypothesisinlinguisticsis derived from thesemantic theoryof language usage, i.e. words that are used and occur in the samecontextstend to purport similar meanings.[2]
The underlying idea that "a word is characterized by the company it keeps" was popularized byFirthin the 1950s.[3]
The distributional hypothesis is the basis forstatistical semantics. Although the Distributional Hypothesis originated in linguistics,[4][5]it is now receiving attention incognitive scienceespecially regarding the context of word use.[6]
In recent years, the distributional hypothesis has provided the basis for the theory ofsimilarity-based generalizationin language learning: the idea that children can figure out how to use words they've rarely encountered before by generalizing about their use from distributions of similar words.[7][8]
The distributional hypothesis suggests that the more semantically similar two words are, the more distributionally similar they will be in turn, and thus the more that they will tend to occur in similar linguistic contexts.
Whether or not this suggestion holds has significant implications for both thedata-sparsityproblem in computational modeling,[9]and for the question of how children are able to learn language so rapidly given relatively impoverished input (this is also known as the problem of thepoverty of the stimulus) is unclear.
Distributional semantics favor the use of linear algebra as a computational tool and representational framework. The basic approach is to collect distributional information in high-dimensional vectors, and to define distributional/semantic similarity in terms of vector similarity.[10]Different kinds of similarities can be extracted depending on which type of distributional information is used to collect the vectors:topicalsimilarities can be extracted by populating the vectors with information on which text regions the linguistic items occur in;paradigmaticsimilarities can be extracted by populating the vectors with information on which other linguistic items the items co-occur with. Note that the latter type of vectors can also be used to extractsyntagmaticsimilarities by looking at the individual vector components.
The basic idea of a correlation between distributional and semantic similarity can be operationalized in many different ways. There is a rich variety of computational models implementing distributional semantics, includinglatent semantic analysis(LSA),[11][12]Hyperspace Analogue to Language(HAL), syntax- or dependency-based models,[13]random indexing,semantic folding[14]and various variants of thetopic model.[15]
Distributional semantic models differ primarily with respect to the following parameters:
Distributional semantic models that use linguistic items as context have also been referred to asword space, or vector space models.[17][18]
While distributional semantics typically has been applied to lexical items—words and multi-word terms—with considerable success, not least due to its applicability as an input layer for neurally inspired deep learning models,lexical semantics, i.e. the meaning of words, will only carry part of the semantics of an entire utterance. The meaning of a clause, e.g."Tigers love rabbits.", can only partially be understood from examining the meaning of the three lexical items it consists of. Distributional semantics can straightforwardly be extended to cover larger linguistic item such as constructions, with and without non-instantiated items, but some of the base assumptions of the model need to be adjusted somewhat.Construction grammarand its formulation of the lexical-syntactic continuum offers one approach for including more elaborate constructions in a distributional semantic model and some experiments have been implemented using the Random Indexing approach.[19]
Compositional distributional semanticmodels extend distributional semantic models by explicit semantic functions that use syntactically based rules to combine the semantics of participating lexical units into acompositional modelto characterize the semantics of entire phrases or sentences. This work was originally proposed by Stephen Clark,Bob Coecke, andMehrnoosh SadrzadehofOxford Universityin their 2008 paper, "A Compositional Distributional Model of Meaning".[20]Different approaches to composition have been explored—including neural models—and are under discussion at established workshops such asSemEval.[21]
Distributional semantic models have been applied successfully to the following tasks:
|
https://en.wikipedia.org/wiki/Distributional_semantics
|
Computer-assisted language learning(CALL), known ascomputer-aided instruction(CAI) in British English andcomputer-aided language instruction(CALI) in American English,[1]Levy (1997: p. 1) briefly defines it as "the exploration and study of computer applications in language teaching and learning."[2]CALL embraces a wide range ofinformation and communications technology"applications and approaches to teaching and learning foreign languages, ranging from the traditional drill-and-practice programs that characterized CALL in the 1960s and 1970s to more recent manifestations of CALL, such as those utilizedvirtual learning environmentand Web-baseddistance learning. It also extends to the use ofcorpora and concordancers, interactive whiteboards,[3]computer-mediated communication (CMC),[4]language learning in virtual worlds, andmobile-assisted language learning (MALL).[5]
The term CALI (computer-assisted language instruction) was used before CALL, originating as a subset of the broader term CAI (computer-assisted instruction). CALI fell out of favor among language teachers, however, because it seemed to emphasize a teacher-centered instructional approach. Language teachers increasingly favored a student-centered approach focused on learning rather than instruction. CALL began to replace CALI in the early 1980s (Davies & Higgins, 1982: p. 3).[6]and it is now incorporated into the names of the growing number ofprofessional associationsworldwide.
An alternative term, technology-enhanced language learning (TELL),[7]also emerged around the early 1990s: e.g. the TELL Consortium project, University of Hull.
The current philosophy of CALL emphasizes student-centered materials that empower learners to work independently. These materials can be structured or unstructured but typically incorporate two key features: interactive and individualized learning. CALL employs tools that assist teachers in facilitating language learning, whether reinforcing classroom lessons or providing additional support to learners. The design of CALL materials typically integrates principles from language pedagogy and methodology, drawing from various learning theories such as behaviourism, cognitive theory, constructivism, and second-language acquisition theories like Stephen Krashen's.monitor hypothesis.
A combination of face-to-face teaching and CALL is usually referred to asblended learning. Blended learning is designed to increase learning potential and is more commonly found than pure CALL (Pegrum 2009: p. 27).[8]
See Davieset al.(2011: Section 1.1,What is CALL?).[9]See also Levy & Hubbard (2005), who raise the questionWhy call CALL "CALL"?[10]
CALL dates back to the 1960s, when it was first introduced on university mainframe computers. The PLATO project, initiated at the University of Illinois in 1960, is an important landmark in the early development of CALL (Marty 1981).[11]The advent of the microcomputer in the late 1970s brought computing within the range of a wider audience, resulting in a boom in the development of CALL programs and a flurry of publications of books on CALL in the early 1980s.
Dozens of CALL programs are currently available on the internet, at prices ranging from free to expensive,[12]and other programs are available only through university language courses.
There have been several attempts to document the history of CALL. Sanders (1995) covers the period from the mid-1960s to the mid-1990s, focusing on CALL in North America.[13]Delcloque (2000) documents the history of CALL worldwide, from its beginnings in the 1960s to the dawning of the new millennium.[14]Davies (2005) takes a look back at CALL's past and attempts to predict where it is going.[15]Hubbard (2009) offers a compilation of 74 key articles and book excerpts, originally published in the years 1988–2007, that give a comprehensive overview of the wide range of leading ideas and research results that have exerted an influence on the development of CALL or that show promise in doing so in the future.[16]A published review of Hubbard's collection can be found inLanguage Learning & Technology14, 3 (2010).[17]
Butler-Pascoe (2011) looks at the history of CALL from a different point of view, namely the evolution of CALL in the dual fields of educational technology and second/foreign language acquisition and the paradigm shifts experienced along the way.[18]
See also Davies et al. (2011: Section 2,History of CALL).[9]
During the 1980s and 1990s, several attempts were made to establish a CALL typology. A wide range of different types of CALL programs was identified by Davies & Higgins (1985),[19]Jones & Fortescue (1987),[20]Hardisty & Windeatt (1989)[21]and Levy (1997: pp. 118ff.).[2]These included gap-filling and Cloze programs, multiple-choice programs, free-format (text-entry) programs, adventures and simulations, action mazes, sentence-reordering programs, exploratory programs—and "total Cloze", a type of program in which the learner has to reconstruct a whole text. Most of these early programs still exist in modernised versions.
Since the 1990s, it has become increasingly difficult to categorise CALL as it now extends to the use ofblogs,wikis,social networking,podcasting,Web 2.0applications,language learning in virtual worldsandinteractive whiteboards(Davies et al. 2010: Section 3.7).[9]
Warschauer (1996)[22]and Warschauer & Healey (1998)[23]took a different approach. Rather than focusing on the typology of CALL, they identified three historical phases of CALL, classified according to their underlying pedagogical and methodological approaches:
Most CALL programs in Warschauer & Healey's first phase, Behavioristic CALL (1960s to 1970s), consisted of drill-and-practice materials in which the computer presented a stimulus and the learner provided a response. At first, both could be done only through text. The computer would analyse students' input and give feedback, and more sophisticated programs would react to students' mistakes by branching to help screens and remedial activities. While such programs and their underlying pedagogy still exist today, behaviouristic approaches to language learning have been rejected by most language teachers, and the increasing sophistication of computer technology has led CALL to other possibilities.
The second phase described by Warschauer & Healey, Communicative CALL, is based on thecommunicative approachthat became prominent in the late 1970s and 1980s (Underwood 1984).[24]In the communicative approach the focus is on using the language rather than analysis of the language, and grammar is taught implicitly rather than explicitly. It also allows for originality and flexibility in student output of language. The communicative approach coincided with the arrival of the PC, which made computing much more widely available and resulted in a boom in the development of software for language learning. The first CALL software in this phase continued to provide skill practice but not in a drill format—for example: paced reading, text reconstruction and language games—but the computer remained the tutor. In this phase, computers provided context for students to use the language, such as asking for directions to a place, and programs not designed for language learning such asSim City,SleuthandWhere in the World is Carmen Sandiego?were used for language learning. Criticisms of this approach include using the computer in an ad hoc and disconnected manner for more marginal aims rather than the central aims of language teaching.
The third phase of CALL described by Warschauer & Healey, Integrative CALL, starting from the 1990s, tried to address criticisms of the communicative approach by integrating the teaching of language skills into tasks or projects to provide direction and coherence. It also coincided with the development of multimedia technology (providing text, graphics, sound and animation) as well as Computer-mediated communication (CMC). CALL in this period saw a definitive shift from the use of the computer for drill and tutorial purposes (the computer as a finite, authoritative base for a specific task) to a medium for extending education beyond the classroom. Multimedia CALL started with interactive laser videodiscs such asMontevidisco(Schneider & Bennion 1984)[25]andA la rencontre de Philippe(Fuerstenberg 1993),[26]both of which were simulations of situations where the learner played a key role. These programs later were transferred to CD-ROMs, and newrole-playing games(RPGs) such asWho is Oscar Lake?made their appearance in a range of different languages.
In a later publication Warschauer changed the name of the first phase of CALL from Behavioristic CALL to Structural CALL and also revised the dates of the three phases (Warschauer 2000):[27]
Bax (2003)[28]took issue with Warschauer & Haley (1998) and Warschauer (2000) and proposed these three phases:
See also Bax & Chambers (2006)[29]and Bax (2011),[30]in which the topic of "normalisation" is revisited.
A basic use of CALL is in vocabulary acquisition usingflashcards, which requires quite simple programs. Such programs often make use ofspaced repetition, a technique whereby the learner is presented with the vocabulary items that need to be committed to memory at increasingly longer intervals until long-term retention is achieved. This has led to the development of a number of applications known as spaced repetition systems (SRS),[31]including the genericAnkiorSuperMemopackage and programs such as BYKI[32]and phase-6,[33]which have been designed specifically for learners of foreign languages.
Above all, careful consideration must be given topedagogyin designing CALL software, but publishers of CALL software tend to follow the latest trend, regardless of its desirability. Moreover, approaches to teaching foreign languages are constantly changing, dating back togrammar-translation, through thedirect method,audio-lingualismand a variety of other approaches, to the more recentcommunicative approachandconstructivism(Decoo 2001).[34]
Designing and creating CALL software is an extremely demanding task, calling upon a range of skills. Major CALL development projects are usually managed by a team of people:
CALL inherently supportslearner autonomy, the final of the eight conditions that Egbert et al. (2007) cite as "Conditions for Optimal Language Learning Environments". Learner autonomy places the learner firmly in control so that he or she "decides on learning goals" (Egbert et al., 2007, p. 8).[36]
It is all too easy when designing CALL software to take the comfortable route and produce a set of multiple-choice and gap-filling exercises, using a simple authoring tool (Bangs 2011),[37]but CALL is much more than this; Stepp-Greany (2002), for example, describes the creation and management of an environment incorporating aconstructivistandwhole languagephilosophy. According to constructivist theory, learners are active participants in tasks in which they "construct" new knowledge derived from their prior experience. Learners also assume responsibility for their learning, and the teacher is a facilitator rather than a purveyor of knowledge. Whole language theory embraces constructivism and postulates that language learning moves from the whole to the part, rather than building sub-skills to lead towards the higher abilities of comprehension, speaking, and writing. It also emphasises that comprehending, speaking, reading, and writing skills are interrelated, reinforcing each other in complex ways. Language acquisition is, therefore, an active process in which the learner focuses on cues and meaning and makes intelligent guesses. Additional demands are placed upon teachers working in a technological environment incorporating constructivist and whole language theories. The development of teachers' professional skills must include new pedagogical as well as technical and management skills. Regarding the issue of teacher facilitation in such an environment, the teacher has a key role to play, but there could be a conflict between the aim to create an atmosphere for learner independence and the teacher's natural feelings of responsibility. In order to avoid learners' negative perceptions, Stepp-Greany points out that it is especially important for the teacher to continue to address their needs, especially those of low-ability learners.[38]
Language teachers have been avid users of technology for a very long time. Gramophone records were among the first technological aids to be used by language teachers in order to present students with recordings of native speakers' voices, and broadcasts from foreign radio stations were used to make recordings on reel-to-reel tape recorders. Other examples of technological aids that have been used in the foreign language classroom include slide projectors, film-strip projectors, film projectors, videocassette recorders and DVD players. In the early 1960s, integrated courses (which were often described as multimedia courses) began to appear. Examples of such courses areEcouter et Parler(consisting of a coursebook and tape recordings)[39]andDeutsch durch die audiovisuelle Methode(consisting of an illustrated coursebook, tape recordings and a film-strip – based on the Structuro-Global Audio-Visual method).[40]
During the 1970s and 1980s standard microcomputers were incapable of producing sound and they had poor graphics capability. This represented a step backwards for language teachers, who by this time had become accustomed to using a range of different media in the foreign language classroom. The arrival of the multimedia computer in the early 1990s was therefore a major breakthrough as it enabled text, images, sound and video to be combined in one device and the integration of the four basic skills of listening, speaking, reading and writing (Davies 2011: Section 1).[41]
Examples of CALL programs for multimedia computers that were published on CD-ROM and DVD from the mid-1990s onwards are described by Davies (2010: Section 3).[41]CALL programs are still being published on CD-ROM and DVD, but Web-based multimedia CALL has now virtually supplanted these media.
Following the arrival of multimedia CALL, multimedia language centres began to appear in educational institutions. While multimedia facilities offer many opportunities for language learning with the integration of text, images, sound and video, these opportunities have often not been fully utilised. One of the main promises of CALL is the ability to individualise learning but, as with the language labs that were introduced into educational institutions in the 1960s and 1970s, the use of the facilities of multimedia centres has often devolved into rows of students all doing the same drills (Davies 2010: Section 3.1).[41]There is therefore a danger that multimedia centres may go the same way as the language labs. Following a boom period in the 1970s, language labs went rapidly into decline. Davies (1997: p. 28) lays the blame mainly on the failure to train teachers to use language labs, both in terms of operation and in terms of developing new methodologies, but there were other factors such as poor reliability, lack of materials and a lack of good ideas.[42]
Managing a multimedia language centre requires not only staff who have a knowledge of foreign languages and language teaching methodology but also staff with technical know-how and budget management ability, as well as the ability to combine all these into creative ways of taking advantage of what the technology can offer. A centre manager usually needs assistants for technical support, for managing resources and even the tutoring of students. Multimedia centres lend themselves to self-study and potentially self-directed learning, but this is often misunderstood. The simple existence of a multimedia centre does not automatically lead to students learning independently. Significant investment of time is essential for materials development and creating an atmosphere conducive to self-study. Unfortunately, administrators often have the mistaken belief that buying hardware by itself will meet the needs of the centre, allocating 90% of its budget to hardware and virtually ignoring software and staff training needs (Davies et al. 2011:Foreword).[43]Self-access language learning centresor independent learning centres have emerged partially independently and partially in response to these issues. In self-access learning, the focus is on developing learner autonomy through varying degrees of self-directed learning, as opposed to (or as a complement to) classroom learning. In many centres learners access materials and manage their learning independently, but they also have access to staff for help. Many self-access centres are heavy users of technology and an increasing number of them are now offering online self-access learning opportunities. Some centres have developed novel ways of supporting language learning outside the context of the language classroom (also called 'language support') by developing software to monitor students' self-directed learning and by offering online support from teachers. Centre managers and support staff may need to have new roles defined for them to support students' efforts at self-directed learning: v. Mozzon-McPherson & Vismans (2001), who refer to a new job description, namely that of the "language adviser".[44]
The emergence of theWorld Wide Web(now known simply as "the Web") in the early 1990s marked a significant change in the use of communications technology for all computer users.Emailand other forms ofelectronic communicationhad been in existence for many years, but the launch ofMosaic, the first graphicalWeb browser, in 1993 brought about a radical change in the ways in which we communicate electronically. The launch of the Web in the public arena immediately began to attract the attention of language teachers. Many language teachers were already familiar with the concept ofhypertexton stand-alone computers, which made it possible to set up non-sequential structured reading activities for language learners in which they could point to items of text or images on a page displayed on the computer screen and branch to any other pages, e.g. in a so-called "stack" as implemented in theHyperCardprogram on Apple Mac computers. The Web took this one stage further by creating a worldwide hypertext system that enabled the user to branch to different pages on computers anywhere in the world simply by pointing and clicking at a piece of text or an image. This opened up access to thousands of authentic foreign-language websites to teachers and students that could be used in a variety of ways. A problem that arose, however, was that this could lead to a good deal of time-wasting if Web browsing was used in an unstructured way (Davies 1997: pp. 42–43),[42]and language teachers responded by developing more structured activities and online exercises (Leloup & Ponterio 2003).[45]Davies (2010) lists over 500 websites, where links to online exercises can be found, along with links to online dictionaries and encyclopaedias, concordancers, translation aids and other miscellaneous resources of interest to the language teacher and learner.[46]
The launch of the (free)Hot Potatoes(Holmes & Arneil) authoring tool, which was first demonstrated publicly at the EUROCALL 1998 conference, made it possible for language teachers to create their own online interactive exercises. Other useful tools are produced by the same authors.[47]
In its early days the Web could not compete seriously withmultimediaCALL on CD-ROM and DVD. Sound and video quality was often poor, and interaction was slow. But now the Web has caught up. Sound and video are of high quality and interaction has improved tremendously, although this does depend on sufficient bandwidth being available, which is not always the case, especially in remote rural areas and developing countries. One area in which CD-ROMs and DVDs are still superior is in the presentation of listen/respond/playback activities, although such activities on the Web are continually improving.
Since the early 2000s there has been a boom in the development of so-calledWeb 2.0applications. Contrary to popular opinion, Web 2.0 is not a new version of the Web, rather it implies a shift in emphasis from Web browsing, which is essentially a one-way process (from the Web to the end-user), to making use of Web applications in the same way as one uses applications on a desktop computer. It also implies more interaction and sharing. Walker, Davies & Hewer (2011: Section 2.1)[48]list the following examples of Web 2.0 applications that language teachers are using:
There is no doubt that the Web has proved to be a main focus for language teachers, who are making increasingly imaginative use of its wide range of facilities: see Dudeney (2007)[50]and Thomas (2008).[51]Above all, the use of Web 2.0 tools calls for a careful reexamination of the role of the teacher in the classroom (Richardson 2006).[52]
Corpora have been used for many years as the basis of linguistic research and also for the compilation of dictionaries and reference works such as the Collins Cobuild series, published by HarperCollins.[53]Tribble & Barlow (2001),[54]Sinclair (2004)[55]and McEnery & Wilson (2011)[56]describe a variety of ways in which corpora can be used in language teaching.
An early reference to the use of electronic concordancers in language teaching can be found in Higgins & Johns (1984: pp. 88–94),[57]and many examples of their practical use in the classroom are described by Lamy & Klarskov Mortensen (2010).[58]
It was Tim Johns (1991), however, who raised the profile of the use of concordancers in the language classroom with his concept of Data-driven learning (DDL).[59]DDL encourages learners to work out their own rules about the meaning of words and their usage by using a concordancer to locate examples in a corpus of authentic texts. It is also possible for the teacher to use a concordancer to find examples of authentic usage to demonstrate a point of grammar or typical collocations, and to generate exercises based on the examples found. Various types of concordancers and where they can be obtained are described by Lamy & Klarskov Mortensen (2011).[58]
Robb (2003) shows how it is possible to use Google as a concordancer, but he also points out a number of drawbacks, for instance there is no control over the educational level, nationality, or other characteristics of the creators of the texts that are found, and the presentation of the examples is not as easy to read as the output of a dedicated concordancer that places the key words (i.e. the search terms) in context.[60]
Virtual worldsdate back to the adventure games and simulations of the 1970s, for exampleColossal Cave Adventure, a text-only simulation in which the user communicated with the computer by typing commands at the keyboard. Language teachers discovered that it was possible to exploit these text-only programs by using them as the basis for discussion. Jones G. (1986) describes an experiment based on the Kingdom simulation, in which learners played roles as members of a council governing an imaginary kingdom. A single computer in the classroom was used to provide the stimulus for discussion, namely simulating events taking place in the kingdom: crop planting time, harvest time, unforeseen catastrophes, etc.[61]
The early adventure games and simulations led on to multi-user variants, which were known asMUDs(Multi-user domains). Like their predecessors, MUDs were text-only, with the difference that they were available to a wider online audience. MUDs then led on toMOOs(Multi-user domains object-oriented), which language teachers were able to exploit for teaching foreign languages and intercultural understanding: see Donaldson & Kötter (1999)[62]and (Shield 2003).[63]
The next major breakthrough in the history of virtual worlds was the graphical user interface.Lucasfilm's Habitat(1986), was one of the first virtual worlds that was graphically based, albeit only in a two-dimensional environment. Each participant was represented by a visual avatar who could interact with other avatars using text chat.
Three-dimensional virtual worlds such as Traveler andActive Worlds, both of which appeared in the 1990s, were the next important development. Traveler included the possibility of audio communication (but not text chat) between avatars who were represented as disembodied heads in a three-dimensional abstract landscape. Svensson (2003) describes the Virtual Wedding Project, in which advanced students of English made use of Active Worlds as an arena for constructivist learning.[64]
The 3D world ofSecond Lifewas launched in 2003. Initially perceived as anotherrole-playing game(RPG), it began to attract the interest of language teachers with the launch of the first of the series of SLanguages conferences in 2007.[65]Walker, Davies & Hewer (2011: Section 14.2.1)[48]and Molka-Danielsen & Deutschmann (2010)[66]describe a number of experiments and projects that focus on language learning in Second Life. See also the Wikipedia articleVirtual world language learning.
To what extent Second Life and other virtual worlds will become established as important tools for teachers of foreign languages remains to be seen. It has been argued by Dudeney (2010) in hisThat's Lifeblog that Second Life is "too demanding and too unreliable for most educators". The subsequent discussion shows that this view is shared by many teachers, but many others completely disagree.[67]
Regardless of the pros and cons of Second Life, language teachers' interest in virtual worlds continues to grow. The joint EUROCALL/CALICO Virtual Worlds Special Interest Group[68]was set up in 2009, and there are now many areas in Second Life that are dedicated to language learning and teaching, for example the commercial area for learners of English, which is managed by Language Lab,[69]and free areas such as the region maintained by the Goethe-Institut[70]and the EduNation Islands.[71]There are also examples of simulations created specifically for language education, such as those produced by the EC-funded NIFLAR[72]and AVALON[73]projects. NIFLAR is implemented both in Second Life and inOpensim.
Human language technologies (HLT) comprise a number of areas of research and development that focus on the use of technology to facilitate communication in a multilingual information society. Human language technologies are areas of activity in departments of the European Commission that were formerly grouped under the headinglanguage engineering(Gupta & Schulze 2011: Section 1.1).[74]
The parts of HLT that is of greatest interest to the language teacher isnatural language processing(NLP), especiallyparsing, as well as the areas ofspeech synthesisandspeech recognition.
Speech synthesis has improved immeasurably in recent years. It is often used in electronic dictionaries to enable learners to find out how words are pronounced. At word level, speech synthesis is quite effective, the artificial voice often closely resembling a human voice. At phrase level and sentence level, however, there are often problems of intonation, resulting in speech production that sounds unnatural even though it may be intelligible. Speech synthesis as embodied intext to speech(TTS) applications is invaluable as a tool for unsighted or partially sighted people. Gupta & Schulze (2010: Section 4.1) list several examples of speech synthesis applications.[74]
Speech recognition is less advanced than speech synthesis. It has been used in a number of CALL programs, in which it is usually described asautomatic speech recognition(ASR). ASR is not easy to implement. Ehsani & Knodt (1998) summarise the core problem as follows:
"Complex cognitive processes account for the human ability to associate acoustic signals with meanings and intentions. For a computer, on the other hand, speech is essentially a series of digital values. However, despite these differences, the core problem of speech recognition is the same for both humans and machines: namely, of finding the best match between a given speech sound and its corresponding word string. Automatic speech recognition technology attempts to simulate and optimize this process computationally."[75]
Programs embodying ASR normally provide a native speaker model that the learner is requested to imitate, but the matching process is not 100% reliable and may result in a learner's perfectly intelligible attempt to pronounce a word or phrase being rejected (Davies 2010: Section 3.4.6 and Section 3.4.7).[41]
Parsing is used in a number of ways in CALL. Gupta & Schulze (2010: Section 5) describe how parsing may be used to analyse sentences, presenting the learner with a tree diagram that labels the constituent parts of speech of a sentence and shows the learner how the sentence is structured.[74]
Parsing is also used in CALL programs to analyse the learner's input and diagnose errors. Davies (2002)[76]writes:
"Discrete error analysis and feedback were a common feature of traditional CALL, and the more sophisticated programs would attempt to analyse the learner's response, pinpoint errors, and branch to help and remedial activities. ... Error analysis in CALL is, however, a matter of controversy. Practitioners who come into CALL via the disciplines ofcomputational linguistics, e.g. Natural Language Processing (NLP) and Human Language Technologies (HLT), tend to be more optimistic about the potential of error analysis by computer than those who come into CALL via language teaching. [...] An alternative approach is the use of Artificial Intelligence (AI) techniques to parse the learner's response – so-calledintelligent CALL(ICALL)– but there is a gulf between those who favour the use of AI to develop CALL programs (Matthews 1994)[77]and, at the other extreme, those who perceive this approach as a threat to humanity (Last 1989:153)".[78]
Underwood (1989)[79]and Heift & Schulze (2007)[80]present a more positive picture of AI.
Research into speech synthesis, speech recognition and parsing and how these areas of NLP can be used in CALL are the main focus of the NLP Special Interest Group[81]within theEUROCALLprofessional association and the ICALL Special Interest Group[82]within theCALICOprofessional association. The EUROCALL NLP SIG also maintains a Ning.[83]
The question of the impact of CALL in language learning and teaching has been raised at regular intervals ever since computers first appeared in educational institutions (Davies & Hewer 2011: Section 3).[84]Recent large-scale impact studies include the study edited by Fitzpatrick & Davies (2003)[85]and the EACEA (2009) study,[86]both of which were produced for the European Commission.
A distinction needs to be made between the impact and the effectiveness of CALL. Impact may be measured quantitatively and qualitatively in terms of the uptake and use ofICTin teaching foreign languages, issues of availability of hardware and software, budgetary considerations, Internet access, teachers' and learners' attitudes to the use of CALL,[87]changes in the ways in which languages are learnt and taught, and paradigm shifts in teachers' and learners' roles. Effectiveness, on the other hand, usually focuses on assessing to what extent ICT is a more effective way of teaching foreign languages compared to using traditional methods – and this is more problematic as so many variables come into play. Worldwide, the picture of the impact of CALL is extremely varied. Most developed nations work comfortably with the new technologies, but developing nations are often beset with problems of costs and broadband connectivity. Evidence on the effectiveness of CALL – as with the impact of CALL – is extremely varied and many research questions still need to be addressed and answered. Hubbard (2002) presents the results of a CALL research survey that was sent to 120 CALL professionals from around the world asking them to articulate a CALL research question they would like to see answered. Some of the questions have been answered but many more remain open.[88]Leakey (2011) offers an overview of current and past research in CALL and proposes a comprehensive model for evaluating the effectiveness of CALL platforms, programs and pedagogy.[89]
A crucial issue is the extent to which the computer is perceived as taking over the teacher's role. Warschauer (1996: p. 6) perceived the computer as playing an "intelligent" role, and claimed that a computer program "should ideally be able to understand a user's spoken input and evaluate it not just for correctness but also for appropriateness. It should be able to diagnose a student's problems with pronunciation, syntax, or usage and then intelligently decide among a range of options (e.g. repeating, paraphrasing, slowing down, correcting, or directing the student to background explanations)."[22]Jones C. (1986), on the other hand, rejected the idea of the computer being "some kind of inferior teacher-substitute" and proposed a methodology that focused more on what teachers could do with computer programs rather than what computer programs could do on their own: "in other words, treating the computer as they would any other classroom aid".[90]Warschauer's high expectations in 1996 have still not been fulfilled, and currently there is an increasing tendency for teachers to go down the route proposed by Jones, making use of a variety of new tools such ascorpora and concordancers, interactive whiteboards[3]and applications for online communication.[4]
Since the advent of the Web there has been an explosion in online learning, but to what extent it is effective is open to criticism. Felix (2003) takes a critical look at popular myths attached to online learning from three perspectives, namely administrators, teachers and students. She concludes: "That costs can be saved in this ambitious enterprise is clearly a myth, as are expectations of saving time or replacing staff with machines."[91]
As for the effectiveness of CALL in promoting the four skills, Felix (2008) claims that there is "enough data in CALL to suggest positive effects on spelling, reading and writing", but more research is needed in order to determine its effectiveness in other areas, especially speaking online. She claims that students' perceptions of CALL are positive, but she qualifies this claim by stating that the technologies need to be stable and well supported, drawing attention to concerns that technical problems may interfere with the learning process. She also points out that older students may not feel comfortable with computers and younger students may not possess the necessary meta-skills for coping effectively in the challenging new environments. Training in computer literacy for both students and teachers is essential, and time constraints may pose additional problems. In order to achieve meaningful results she recommends "time-series analysis in which the same group of students is involved in experimental and control treatment for a certain amount of time and then switched – more than once if possible".[92]
Types of technology training in CALL for language teaching professionals certainly vary. Within second language teacher education programs, namely pre-service course work, we can find "online courses along with face-to-face courses", computer technology incorporated into a more general second language education course, "technology workshops","a series of courses offered throughout the teacher education programs, and even courses specifically designed for a CALL certificate and a CALL graduate degree"[93]The Organization for Economic Cooperation and Development has identified four levels of courses with only components, namely "web-supplemented, web-dependent, mixed mod and fully online".[94]
There is a rapidly growing interest in resources about the use of technology to deliver CALL. Journals that have issues that "deal with how teacher education programs help prepare language teachers to use technology in their own classrooms" includeLanguage Learning and Technology(2002),Innovations in Language Learning and Teaching(2009) and the TESOL international professional association's publication of technology standards for TESOL includes a chapter on preparation of teacher candidates in technology use, as well as the upgrading of teacher educators to be able to provide such instruction. Both CALICO and EUROCALL have special interest groups for teacher education in CALL.[95]
The following professional associations are dedicated to the promulgation of research, development and practice relating to the use of new technologies in language learning and teaching. Most of them organise conferences and publish journals on CALL.[96]
Hong, K. H. (2010) CALL teacher education as an impetus for 12 teachers in integrating technology.ReCALL, 22 (1), 53–69.doi:10.1017/s095834400999019X
Murray, D. E. (2013)A Case for Online English Language Teacher Education. The International Research Foundation for English Language Education.http://www.tirfonline.org/wp-content/uploads/2013/04/TIRF_OLTE_One-PageSpread_2013.pdf
|
https://en.wikipedia.org/wiki/Foreign_language_reading_aid
|
Aforeign language writing aidis acomputer programor any other instrument that assists a non-native language user (also referred to as a foreign language learner) in writing decently in their target language. Assistive operations can be classified into two categories: on-the-fly prompts and post-writing checks. Assisted aspects of writing include:lexical,syntactic(syntactic and semantic roles of a word's frame),lexical semantic(context/collocation-influenced word choice and user-intention-drivensynonymchoice) andidiomaticexpression transfer, etc. Different types offoreign languagewriting aids include automated proofreading applications,text corpora,dictionaries,translationaids andorthographyaids.
The four major components in the acquisition of a language are namely;listening,speaking,readingandwriting.[1]While most people have no difficulties in exercising these skills in their native language, doing so in a second or foreign language is not that easy. In the area of writing, research has found that foreign language learners find it painstaking to compose in the target language, producing less eloquent sentences and encountering difficulties in the revisions of their written work. However, these difficulties are not attributed to their linguistic abilities.[2]
Many language learners experienceforeign language anxiety, feelings of apprehensiveness and nervousness, when learning a second language.[1]In the case of writing in a foreign language, this anxiety can be alleviated via foreign language writing aids as they assist non-native language users in independently producing decent written work at their own pace, hence increasing confidence about themselves and their own learning abilities.[3]
With advancements in technology, aids in foreign language writing are no longer restricted to traditional mediums such as teacher feedback and dictionaries. Known ascomputer-assisted language learning(CALL), use of computers in language classrooms has become more common, and one example would be the use ofword processorsto assist learners of a foreign language in the technical aspects of their writing, such asgrammar.[4]In comparison with correction feedback from the teacher, the use of word processors is found to be a better tool in improving the writing skills of students who are learningEnglish as a foreign language(EFL), possibly because students find it more encouraging to learn their mistakes from a neutral and detached source.[3]Apart from learners' confidence in writing, their motivation and attitudes will also improve through the use of computers.[2]
Foreign language learners' awareness of the conventions in writing can be improved through reference to guidelines showing the features and structure of the target genre.[2]At the same time, interactions and feedback help to engage the learners and expedite their learning, especially with active participation.[5]In online writing situations, learners are isolated without face-to-face interaction with others. Therefore, a foreign language writing aid should provide interaction and feedback so as to ease the learning process. This complementscommunicative language teaching(CLT); which is a teaching approach that highlights interaction as both the means and aim of learning a language.
In accordance with the simple view of writing, both lower-order and higher-order skills are required. Lower-order skills involve those ofspellingandtranscription, whereas higher order-skills involve that of ideation; which refers to idea generation and organisation.[6]Proofreading is helpful for non-native language users in minimising errors while writing in a foreign language.Spell checkersandgrammar checkersare two applications that aid in the automatic proofreading process of written work.[7]
To achieve writing competence in a non-native language, especially in an alphabetic language, spelling proficiency is of utmost importance.[8]Spelling proficiency has been identified as a good indicator of a learner’s acquisition and comprehension of alphabetic principles in the target language.[9]Documented data on misspelling patterns indicate that majority of misspellings fall under the four categories of letter insertion, deletion, transposition and substitution.[10]In languages where pronunciation of certain sequences of letters may be similar, misspellings may occur when the non-native language learner relies heavily on the sounds of the target language because they are unsure about the accurate spelling of the words.[11]The spell checker application is a type of writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.[12]
In general, spell checkers can operate one of two modes, the interactive spell checking mode or the batch spell checking.[7]In the interactive mode, the spell checker detects and marks misspelled words with a squiggly underlining as the words are being typed. On the other hand, batch spell checking is performed on a batch-by-batch basis as the appropriate command is entered. Spell checkers, such as those used inMicrosoft Word, can operate in either mode.
Although spell checkers are commonplace in numerous software products, errors specifically made by learners of a target language may not be sufficiently catered for.[13]This is because generic spell checkers function on the assumption that their users are competent speakers of the target language, whose misspellings are primarily due to accidental typographical errors.[14]The majority of misspellings were found to be attributed to systematic competence errors instead of accidental typographical ones, with up to 48% of these errors failing to be detected or corrected by the generic spell checker used.[15]
In view of the deficiency of generic spell checkers, programs have been designed to gear towards non-native misspellings,[14]such as FipsCor and Spengels. In FipsCor, a combination of methods, such as the alpha-code method, phonological reinterpretation method and morphological treatment method, has been adopted in an attempt to create a spell checker tailored to French language learners.[11]On the other hand, Spengels is a tutoring system developed to aid Dutch children and non-native Dutch writers of English in accurate English spelling.[16]
Grammar(syntactical and morphological) competency is another indicator of a non-native speaker’s proficiency in writing in the target language.Grammar checkersare a type of computerised application which non-native speakers can make use of to proofread their writings as such programs endeavor to identify syntactical errors.[17]Grammar and style checking is recognized as one of the seven major applications ofNatural Language Processingand every project in this field aims to build grammar checkers into a writing aid instead of a robust man-machine interface.[17]
Currently, grammar checkers are incapable of inspecting the linguistic or even syntactic correctness of text as a whole. They are restricted in their usefulness in that they are only able to check a small fraction of all the possible syntactic structures. Grammar checkers are unable to detect semantic errors in a correctly structured syntax order; i.e. grammar checkers do not register the error when the sentence structure is syntactically correct but semantically meaningless.[18]
Although grammar checkers have largely been concentrated on ensuring grammatical writing, majority of them are modelled after native writers, neglecting the needs of non-native language users.[19]Much research have attempted to tailor grammar checkers to the needs of non-native language users. Granska, a Swedish grammar checker, has been greatly worked upon by numerous researchers in the investigation of grammar checking properties for foreign language learners.[19][20]TheUniversidad Nacional de Educación a Distanciahas a computerised grammar checker for native Spanish speakers of EFL to help identify and correct grammatical mistakes without feedback from teachers.[21]
Theoretically, the functions of a conventional spell checker can be incorporated into a grammar checker entirely and this is likely the route that the language processing industry is working towards.[18]In reality, internationally available word processors such as Microsoft Word have difficulties combining spell checkers and grammar checkers due to licensing issues; various proofing instrument mechanisms for a certain language would have been licensed under different providers at different times.[18]
Electronic corpora in the target language provide non-native language users with authentic examples of language use rather than fixed examples, which may not be reflected in daily interactions.[22]The contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows them to grasp the manner of sentence formation in the target language, enabling effective writing.[23]
Concordanceset up through concordancing programs of corpora allow non-native language users to conveniently grasp lexico-grammatical patterns of the target language. Collocational frequencies of words (i.e. word pairings frequencies) provide non-native language users with information about accurate grammar structures which can be used when writing in the target language.[22]Collocational information also enable non-native language users to make clearer distinctions between words and expressions commonly regarded as synonyms. In addition, corpora information about thesemantic prosody; i.e. appropriate choices of words to be used in positive and negative co-texts, is available as reference for non-native language users in writing. The corpora can also be used to check for the acceptability or syntactic "grammaticality" of their written work.[24]
A survey conducted onEnglish as a Second Language(ESL) students revealed corpus activities to be generally well received and thought to be especially useful for learning word usage patterns and improving writing skills in the foreign language.[23]It was also found that students' writings became more natural after using two online corpora in a 90-minute training session.[25]In recent years, there were also suggestions to incorporate the applications of corpora into EFL writing courses in China to improve the writing skills of learners.[26]
Dictionaries of the target learning languages are commonly recommended to non-native language learners.[27]They serve as reference tools by offering definitions, phonetic spelling, word classes and sample sentences.[22]It was found that the use of a dictionary can help learners of a foreign language write better if they know how to use them.[28]Foreign language learners can make use of grammar-related information from the dictionary to select appropriate words, check the correct spelling of a word and look upsynonymsto add more variety to their writing.[28]Nonetheless, learners have to be careful when using dictionaries as the lexical-semantic information contained in dictionaries might not be sufficient with regards to language production in a particular context and learners may be misled into choosing incorrect words.[29]
Presently, many notable dictionaries are available online and basic usage is usually free. These online dictionaries allow learners of a foreign language to find references for a word much faster and more conveniently than with a manual version, thus minimising the disruption to the flow of writing.[30]Online dictionaries available can be found under thelist of online dictionaries.
Dictionaries come in different levels of proficiency; such as advanced, intermediate and beginner, which learners can choose accordingly to the level best suited to them. There are many different types of dictionaries available; such as thesaurus or bilingual dictionaries, which cater to the specific needs of a learner of a foreign language. In recent years, there is also specialised dictionaries for foreign language learners that employ natural language processing tools to assist in the compilations of dictionary entries by generating feedback on the vocabulary that learners use and automatically providing inflectional and/or derivational forms for referencing items in the explanations.[31]
The wordthesaurusmeans 'treasury' or 'storehouse' in Greek and Latin is used to refer to several varieties of language resources, it is most commonly known as a book that groups words in synonym clusters and related meanings.[32]Its original sense of 'dictionary or encyclopedia' has been overshadowed by the emergence of the Roget-style thesaurus[32]and it is considered as a writing aid as it helps writers with the selection of words.[33]The differences between a Roget-style thesaurus and a dictionary would be the indexing and information given; the words in thesaurus are grouped by meaning, usually without definitions, while the latter is byalphabetical orderwith definitions.[33]When users are unable to find a word in a dictionary, it is usually due to the constraint of searching alphabetically by common and well-known headwords and the use of a thesaurus eliminates this issue by allowing users to search for a word through another word based on concept.[34]
Foreign language learners can make use of thesaurus to find near synonyms of a word to expand their vocabulary skills and add variety to their writing. Many word processors are equipped with a basic function of thesaurus, allowing learners to change a word to another similar word with ease. However, learners must be mindful that even if the words are near synonyms, they might not be suitable replacements depending on the context.[33]
Spelling dictionaries are referencing materials that specifically aid users in finding the correct spelling of a word. Unlike common dictionaries, spelling dictionaries do not typically provide definitions and other grammar-related information of the words. While typical dictionaries can be used to check or search for correct spellings, new and improved spelling dictionaries can assist users in finding the correct spelling of words even when the user does not know the first alphabet or knows it imperfectly.[35]This circumvents the alphabetic ordering limitations of a classic dictionary.[34]These spelling dictionaries are especially useful to foreign language learners as inclusion of concise definitions and suggestions for commonly confused words help learners to choose the correct spellings of words that sound alike or are pronounced wrongly by them.[35]
A personal spelling dictionary, being a collection of a single learner’s regularly misspelled words, is tailored to the individual and can be expanded with new entries that the learner does not know how to spell or contracted when the learner had mastered the words.[36]Learners also use the personal spelling dictionary more than electronic spellcheckers, and additions can be easily made to better enhance it as a learning tool as it can include things like rules for writing and proper nouns, which are not included in electronic spellcheckers.[36]Studies also suggest that personal spelling dictionaries are better tools for learners to improve their spelling as compared to trying to memorize words that are unrelated from lists or books.[37]
Current research have shown that language learners utilise dictionaries predominantly to check for meanings and thatbilingual dictionariesare preferred over monolingual dictionaries for these uses.[38]Bilingual dictionaries have proved to be helpful for learners of a new language, although in general, they hold less extensive coverage of information as compared to monolingual dictionaries.[30]Nonetheless, good bilingual dictionaries capitalize on the fact that they are useful for learners to integrate helpful information about commonly known errors, false friends and contrastive predicaments from the two languages.[30]
Studies have shown that learners of English have benefited from the use of bilingual dictionaries on their production and comprehension of unknown words.[39]When using bilingual dictionaries, learners also tend to read entries in both native and target languages[39]and this helps them to map the meanings of the target word in the foreign language onto its counterpart in their native language. It was also found that the use of bilingual dictionaries improves the results of translation tasks by learners of ESL, thus showing that language learning can be enhanced with the use of bilingual dictionaries.[40]
The use of bilingual dictionaries in foreign language writing tests remains a debate. Some studies support the view that the use of a dictionary in a foreign language examination increases the mean score of the test, and hence is one of the factors that influenced the decision to ban the use of dictionaries in several foreign language tests in the UK.[41]More recent studies, however, present that further research into the use of bilingual dictionaries during writing tests have shown that there is no significant differences in the test scores that can be attributed to the use of a dictionary.[42]Nevertheless, from the perspective of foreign language learners, being able to use a bilingual dictionary during a test is reassuring and increases their confidence.[43]
There are many free translation aids online, also known asmachine translation(MT) engines, such asGoogle TranslateandBabel Fish(now defunct), that allow foreign language learners to translate between their native language and the target language quickly and conveniently.[44]Out of the three major categories in computerised translation tools;computer-assisted translation(CAT), Terminology data banks and machine translation. Machine translation is the most ambitious as it is designed to handle the whole process of translation entirely without the intervention of human assistance.[45]
Studies have shown that translation into the target language can be used to improve the linguistic proficiency of foreign language learners.[46]Machine translation aids help beginner learners of a foreign language to write more and produce better quality work in the target language; writing directly in the target language without any aid requires more effort on the learners' part, resulting in the difference in quantity and quality.[44]
However, teachers advise learners against the use of machine translation aids as output from the machine translation aids are highly misleading and unreliable; producing the wrong answers most of the time.[47]Over-reliance on the aids also hinder the development of learners' writing skills, and is viewed as an act of plagiarism since the language used is technically not produced by the student.[47]
Theorthographyof a language is the usage of a specific script to write a language according to a conventionalised usage.[48]One’s ability to read in a language is further enhanced by a concurrent learning of writing.[49]This is because writing is a means of helping the language learner recognise and remember the features of the orthography, which is particularly helpful when the orthography has irregular phonetic-to-spelling mapping.[49]This, in turn, helps the language learner to focus on the components which make up the word.[49]
Online orthography aids[50]provide language learners with a step-by-step process on learning how to write characters. These are especially useful for learners of languages withlogographicwriting systems, such as Chinese or Japanese, in which the ordering of strokes for characters are important. Alternatively, tools like Skritter provide an interactive way of learning via a system similar to writing tablets[51][better source needed]albeit on computers, at the same time providing feedback on stroke ordering and progress.
Handwriting recognitionis supported on certain programs,[52]which help language learners in learning the orthography of the target language. Practice of orthography is also available in many applications, with tracing systems in place to help learners with stroke orders.[53]
Apart from online orthography programs, offline orthography aids for language learners of logographic languages are also available. Character cards, which contain lists of frequently used characters of the target language, serve as a portable form of visual writing aid for language learners of logographic languages who may face difficulties in recalling the writing of certain characters.[54]
Studies have shown that tracing logographic characters improves the word recognition abilities of foreign language learners, as well as their ability to map the meanings onto the characters.[55]This, however, does not improve their ability to link pronunciation with characters, which suggests that these learners need more than orthography aids to help them in mastering the language in both writing and speech.[56]
|
https://en.wikipedia.org/wiki/Foreign_language_writing_aid
|
Language and Communication Technologies(LCT; also known ashuman language technologiesorlanguage technologyfor short) is the scientific study of technologies that explore language and communication. It is an interdisciplinary field that encompasses the fields ofcomputer science,linguisticsandcognitive science.
One of the first problems to be studied in the 1950s, shortly after the invention of computers, was an LCT problem, namely the translation of human languages. The large amounts of funding poured intomachine translationtestifies to the perceived importance of the field, right from the beginning. It was also in this period that scholars started to develop theories of language and communication based on scientific methods. In the case of language, it wasNoam Chomskywho refines the goal of linguistics as a quest for a formal description of language,[1]whilstClaude ShannonandWarren Weaverprovided a mathematical theory that linked communication with information.[2]
Computers and related technologies have provided a physical and conceptual framework within which scientific studies concerning the notion of communication within a computational framework could be pursued. Indeed, this framework has been fruitful on a number of levels. For a start, it has given birth to a new discipline, known asnatural language processing(NLP), orcomputational linguistics(CL). This discipline studies, from a computational perspective, all levels of language from the production of speech to the meanings of texts and dialogues. And over the past 40 years, NLP has produced an impressive computational infrastructure of resources, techniques, and tools for analyzing sound structure (phonology), word structure (morphology), grammatical structure (syntax) and meaning structure (semantics). As well as being important for language-based applications, this computational infrastructure makes it possible to investigate the structure of human language and communication at a deeper scientific level than was ever previously possible.
Moreover, NLP fits in naturally with other branches of computer science, and in particular, withartificial intelligence(AI).[3]From an AI perspective, language use is regarded as a manifestation of intelligent behaviour by an active agent. The emphasis in AI-based approaches to language and communication is on the computational infrastructure required to integrate linguistic performance into a general theory of intelligent agents that includes, for example, learning generalizations on the basis of particular experience, the ability to plan and reason about intentionally produced utterances, the design of utterances that will fulfill a particular set of goals. Such work tends to be highly interdisciplinary in nature, as it needs to draw on ideas from such fields aslinguistics,cognitive psychology, andsociology. LCT draws on and incorporates knowledge and research from all these fields.
Language and communication are so fundamental to human activity that it is not at all surprising to find that Language and Communication Technologies affect all major areas of society, including health, education, finance, commerce, and travel. Modern LCT is based on a dual tradition of symbols and statistics. This means that nowadays research on language requires access to large databases of information about words and their properties, to large scale computational grammars, to computational tools for working with all levels of language, and to efficient inference systems for performing reasoning. By working computationally it is possible to get to grips with the deeper structure of natural languages, and in particular, to model the crucial interactions between the various levels of language and other cognitive faculties.
Relevant areas of research in LCT include:
The increasing interest in the field is proved by the existence of several European Masters in this dynamic research area:[4]Degree programmes of the University of Groningeninclude Language and Communication Technologies.
Erasmus MundusMasters:
|
https://en.wikipedia.org/wiki/Language_and_Communication_Technologies
|
Language technology, often calledhuman language technology(HLT), studies methods of how computer programs or electronic devices can analyze, produce, modify or respond to human texts and speech.[1]Working with language technology often requires broad knowledge not only aboutlinguisticsbut also aboutcomputer science. It consists ofnatural language processing(NLP) andcomputational linguistics(CL) on the one hand, many application oriented aspects of these, and more low-level aspects such as encoding andspeech technologyon the other hand.
Note that these elementary aspects are normally not considered to be within the scope of related terms such asnatural language processingand(applied) computational linguistics, which are otherwise near-synonyms. As an example, for many of the world's lesser known languages, the foundation of language technology is providing communities with fonts and keyboard setups so their languages can be written on computers or mobile devices.[2]
Thiscomputational linguistics-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Language_technology
|
Latent semantic analysis(LSA) is a technique innatural language processing, in particulardistributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (thedistributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique calledsingular value decomposition(SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared bycosine similaritybetween any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.[1]
An information retrieval technique using latent semantic structure was patented in 1988[2]byScott Deerwester,Susan Dumais,George Furnas,Richard Harshman,Thomas Landauer,Karen LochbaumandLynn Streeter. In the context of its application toinformation retrieval, it is sometimes calledlatent semantic indexing(LSI).[3]
LSA can use adocument-term matrixwhich describes the occurrences of terms in documents; it is asparse matrixwhose rows correspond totermsand whose columns correspond to documents. A typical example of the weighting of the elements of the matrix istf-idf(term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance.
This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
After the construction of the occurrence matrix, LSA finds alow-rank approximation[5]to theterm-document matrix. There could be various reasons for these approximations:
The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:
This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem withpolysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
LetX{\displaystyle X}be a matrix where element(i,j){\displaystyle (i,j)}describes the occurrence of termi{\displaystyle i}in documentj{\displaystyle j}(this can be, for example, the frequency).X{\displaystyle X}will look like this:
Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:
Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:
Now thedot producttiTtp{\displaystyle {\textbf {t}}_{i}^{T}{\textbf {t}}_{p}}between two term vectors gives thecorrelationbetween the terms over the set of documents. Thematrix productXXT{\displaystyle XX^{T}}contains all these dot products. Element(i,p){\displaystyle (i,p)}(which is equal to element(p,i){\displaystyle (p,i)}) contains the dot producttiTtp{\displaystyle {\textbf {t}}_{i}^{T}{\textbf {t}}_{p}}(=tpTti{\displaystyle ={\textbf {t}}_{p}^{T}{\textbf {t}}_{i}}). Likewise, the matrixXTX{\displaystyle X^{T}X}contains the dot products between all the document vectors, giving their correlation over the terms:djTdq=dqTdj{\displaystyle {\textbf {d}}_{j}^{T}{\textbf {d}}_{q}={\textbf {d}}_{q}^{T}{\textbf {d}}_{j}}.
Now, from the theory of linear algebra, there exists a decomposition ofX{\displaystyle X}such thatU{\displaystyle U}andV{\displaystyle V}areorthogonal matricesandΣ{\displaystyle \Sigma }is adiagonal matrix. This is called asingular value decomposition(SVD):
The matrix products giving us the term and document correlations then become
SinceΣΣT{\displaystyle \Sigma \Sigma ^{T}}andΣTΣ{\displaystyle \Sigma ^{T}\Sigma }are diagonal we see thatU{\displaystyle U}must contain theeigenvectorsofXXT{\displaystyle XX^{T}}, whileV{\displaystyle V}must be the eigenvectors ofXTX{\displaystyle X^{T}X}. Both products have the same non-zero eigenvalues, given by the non-zero entries ofΣΣT{\displaystyle \Sigma \Sigma ^{T}}, or equally, by the non-zero entries ofΣTΣ{\displaystyle \Sigma ^{T}\Sigma }. Now the decomposition looks like this:
The valuesσ1,…,σl{\displaystyle \sigma _{1},\dots ,\sigma _{l}}are called the singular values, andu1,…,ul{\displaystyle u_{1},\dots ,u_{l}}andv1,…,vl{\displaystyle v_{1},\dots ,v_{l}}the left and right singular vectors.
Notice the only part ofU{\displaystyle U}that contributes toti{\displaystyle {\textbf {t}}_{i}}is thei'th{\displaystyle i{\textrm {'th}}}row.
Let this row vector be calledt^iT{\displaystyle {\hat {\textrm {t}}}_{i}^{T}}.
Likewise, the only part ofVT{\displaystyle V^{T}}that contributes todj{\displaystyle {\textbf {d}}_{j}}is thej'th{\displaystyle j{\textrm {'th}}}column,d^j{\displaystyle {\hat {\textrm {d}}}_{j}}.
These arenotthe eigenvectors, butdependonallthe eigenvectors.
It turns out that when you select thek{\displaystyle k}largest singular values, and their corresponding singular vectors fromU{\displaystyle U}andV{\displaystyle V}, you get the rankk{\displaystyle k}approximation toX{\displaystyle X}with the smallest error (Frobenius norm). This approximation has a minimal error. But more importantly we can now treat the term and document vectors as a "semantic space". The row "term" vectort^iT{\displaystyle {\hat {\textbf {t}}}_{i}^{T}}then hask{\displaystyle k}entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts. They are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vectord^j{\displaystyle {\hat {\textbf {d}}}_{j}}is an approximation in this lower-dimensional space. We write this approximation as
You can now do the following:
To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:
Note here that the inverse of the diagonal matrixΣk{\displaystyle \Sigma _{k}}may be found by inverting each nonzero value within the matrix.
This means that if you have a query vectorq{\displaystyle q}, you must do the translationq^=Σk−1UkTq{\displaystyle {\hat {\textbf {q}}}=\Sigma _{k}^{-1}U_{k}^{T}{\textbf {q}}}before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:
The new low-dimensional space typically can be used to:
Synonymy and polysemy are fundamental problems innatural language processing:
LSA has been used to assist in performingprior artsearches forpatents.[9]
The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas offree recalland memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. They also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as theSemantic Proximity Effect.[10]
When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[11]
Another model, termedWord Association Spaces(WAS) is also used in memory studies by collecting free association data from a series of experiments and which includes measures of word relatedness for over 72,000 distinct word pairs.[12]
TheSVDis typically computed using large matrix methods (for example,Lanczos methods) but may also be computed incrementally and with greatly reduced resources via aneural network-like approach, which does not require the large, full-rank matrix to be held in memory.[13]A fast, incremental, low-memory, large-matrix SVD algorithm has been developed.[14]MATLAB[15]and Python[16]implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution.
In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.[17]
Some of LSA's drawbacks include:
In semantic hashing[21]documents are mapped to memory addresses by means of aneural networkin such a way that semantically similar documents are located at nearby addresses.Deep neural networkessentially builds agraphical modelof the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster thanlocality sensitive hashing, which is the fastest current method.[clarification needed]
Latent semantic indexing(LSI) is an indexing and retrieval method that uses a mathematical technique calledsingular value decomposition(SVD) to identify patterns in the relationships between thetermsandconceptscontained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of abody of textby establishing associations between those terms that occur in similarcontexts.[22]
LSI is also an application ofcorrespondence analysis, a multivariate statistical technique developed byJean-Paul Benzécri[23]in the early 1970s, to acontingency tablebuilt from word counts in documents.
Called "latent semanticindexing" because of its ability to correlatesemanticallyrelated terms that arelatentin a collection of text, it was first applied to text atBellcorein the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria.
LSI helps overcome synonymy by increasingrecall, one of the most problematic constraints ofBoolean keyword queriesand vector space models.[18]Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users ofinformation retrievalsystems.[24]As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.
LSI is also used to perform automateddocument categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.[25]Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.[26]LSI usesexampledocuments to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.
Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.
Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguisticconcept searchingand example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.[citation needed]
LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.[27]
LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).[28]This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.
Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.
LSI has proven to be a useful solution to a number of conceptual matching problems.[29][30]The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.[31]
LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing aSingular Value Decompositionon the matrix, and using the matrix to identify the concepts contained in the text.
LSI begins by constructing a term-document matrix,A{\displaystyle A}, to identify the occurrences of them{\displaystyle m}unique terms within a collection ofn{\displaystyle n}documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell,aij{\displaystyle a_{ij}}, initially representing the number of times the associated term appears in the indicated document,tfij{\displaystyle \mathrm {tf_{ij}} }. This matrix is usually very large and very sparse.
Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell,aij{\displaystyle a_{ij}}ofA{\displaystyle A}, to be the product of a local term weight,lij{\displaystyle l_{ij}}, which describes the relative frequency of a term in a document, and a global weight,gi{\displaystyle g_{i}}, which describes the relative frequency of the term within the entire collection of documents.
Some common local weighting functions[33]are defined in the following table.
Some common global weighting functions are defined in the following table.
Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets.[34]In other words, each entryaij{\displaystyle a_{ij}}ofA{\displaystyle A}is computed as:
A rank-reduced,singular value decompositionis performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI.[35]It computes the term and document vector spaces by approximating the single term-frequency matrix,A{\displaystyle A}, into three other matrices— anmbyrterm-concept vector matrixT{\displaystyle T}, anrbyrsingular values matrixS{\displaystyle S}, and anbyrconcept-document vector matrix,D{\displaystyle D}, which satisfy the following relations:
A≈TSDT{\displaystyle A\approx TSD^{T}}
TTT=IrDTD=Ir{\displaystyle T^{T}T=I_{r}\quad D^{T}D=I_{r}}
S1,1≥S2,2≥…≥Sr,r>0Si,j=0wherei≠j{\displaystyle S_{1,1}\geq S_{2,2}\geq \ldots \geq S_{r,r}>0\quad S_{i,j}=0\;{\text{where}}\;i\neq j}
In the formula,Ais the suppliedmbynweighted matrix of term frequencies in a collection of text wheremis the number of unique terms, andnis the number of documents.Tis a computedmbyrmatrix of term vectors whereris the rank ofA—a measure of its unique dimensions≤ min(m,n).Sis a computedrbyrdiagonal matrix of decreasing singular values, andDis a computednbyrmatrix of document vectors.
The SVD is thentruncatedto reduce the rank by keeping only the largestk«rdiagonal entries in the singular value matrixS,
wherekis typically on the order 100 to 300 dimensions.
This effectively reduces the term and document vector matrix sizes tombykandnbykrespectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space ofA. This reduced set of matrices is often denoted with a modified formula such as:
Efficient LSI algorithms only compute the firstksingular values and term and document vectors as opposed to computing a full SVD and then truncating it.
Note that this rank reduction is essentially the same as doingPrincipal Component Analysis(PCA) on the matrixA, except that PCA subtracts off the means. PCA loses the sparseness of theAmatrix, which can make it infeasible for large lexicons.
The computedTkandDkmatrices define the term and document vector spaces, which with the computed singular values,Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.
The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of theA = T S DTequation into the equivalentD = ATT S−1equation, a new vector,d, for a query or for a new document can be created by computing a new column inAand then multiplying the new column byT S−1. The new column inAis computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is calledfolding in. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix, and the SVD, must be recomputed or an incremental update method (such as the one described in[14]) is needed.
It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.
LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.[36]Below are some other ways in which LSI is being used:
LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.[51]
Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.[52]However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open sourcegensimsoftware package.[53]
Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).[54]However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection.[55]Checking the proportion of variance retained, similar toPCAorfactor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[56]When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
Due to its cross-domain applications inInformation Retrieval,Natural Language Processing(NLP),Cognitive ScienceandComputational Linguistics, LSA has been implemented to support many different kinds of applications.
|
https://en.wikipedia.org/wiki/Latent_semantic_indexing
|
Amulti-agent system(MASor "self-organized system") is a computerized system composed of multiple interactingintelligent agents.[1][2]Multi-agent systems can solve problems that are difficult or impossible for an individual agent or amonolithic systemto solve.[3]Intelligence may includemethodic,functional,proceduralapproaches,algorithmicsearchorreinforcement learning.[4]With advancements inlarge language models(LLMs), LLM-based multi-agent systems have emerged as a new area of research, enabling more sophisticated interactions and coordination among agents.[5]
Despite considerable overlap, a multi-agent system is not always the same as anagent-based model(ABM). The goal of an ABM is to search for explanatory insight into the collective behavior of agents (which do not necessarily need to be "intelligent") obeying simple rules, typically in natural systems, rather than in solving specific practical or engineering problems. The terminology of ABM tends to be used more often in the science, and MAS in engineering and technology.[6]Applications where multi-agent systems research may deliver an appropriate approach include online trading,[7]disaster response,[8][9]target surveillance[10]and social structure modelling.[11]
Multi-agent systems consist of agents and theirenvironment. Typically multi-agent systems research refers tosoftware agents. However, the agents in a multi-agent system could equally well be robots, humans or human teams. A multi-agent system may contain combined human-agent teams.
Agents can be divided into types spanning simple to complex. Categories include:
Agent environments can be divided into:
Agent environments can also be organized according to properties such as accessibility (whether it is possible to gather complete information about the environment), determinism (whether an action causes a definite effect), dynamics (how many entities influence the environment in the moment), discreteness (whether the number of possible actions in the environment is finite), episodicity (whether agent actions in certain time periods influence other periods),[13]and dimensionality (whether spatial characteristics are important factors of the environment and the agent considers space in its decision making).[14]Agent actions are typically mediated via an appropriate middleware. This middleware offers a first-class design abstraction for multi-agent systems, providing means to govern resource access and agent coordination.[15]
The agents in a multi-agent system have several important characteristics:[16]
Multi-agent systems can manifestself-organisationas well as self-direction and othercontrol paradigmsand related complex behaviors even when the individual strategies of all their agents are simple.[citation needed]When agents can share knowledge using any agreed language, within the constraints of the system's communication protocol, the approach may lead to a common improvement. Example languages areKnowledge Query Manipulation Language(KQML) orAgent Communication Language(ACL).
Many MAS are implemented in computer simulations, stepping the system through discrete "time steps". The MAS components communicate typically using a weighted request matrix, e.g.
and a weighted response matrix, e.g.
A challenge-response-contract scheme is common in MAS systems, where
also considering other components, evolving "contracts" and the restriction sets of the component algorithms.
Another paradigm commonly used with MAS is the "pheromone", where components leave information for other nearby components. These pheromones may evaporate/concentrate with time, that is their values may decrease (or increase).
MAS tend to find the best solution for their problems without intervention. There is high similarity here to physical phenomena, such as energy minimizing, where physical objects tend to reach the lowest energy possible within the physically constrained world. For example: many of the cars entering a metropolis in the morning will be available for leaving that same metropolis in the evening.
The systems also tend to prevent propagation of faults, self-recover and be fault tolerant, mainly due to the redundancy of components.
The study of multi-agent systems is "concerned with the development and analysis of sophisticatedAIproblem-solving and control architectures for both single-agent and multiple-agent systems."[18]Research topics include:
Frameworks have emerged that implement common standards (such as theFIPAandOMGMASIF standards).[24]These frameworks e.g.JADE, save time and aid in the standardization of MAS development.[25]
Currently though, no standard is actively maintained from FIPA or OMG. Efforts for further development of software agents in industrial context are carried out inIEEEIES technical committee on Industrial Agents.[26]
With advancements inlarge language models(LLMs) such asChatGPT, LLM-based multi-agent frameworks, such asCAMEL,[27][5]have emerged as a new paradigm for developing multi-agent applications.
MAS have not only been applied in academic research, but also in industry.[28]MAS are applied in the real world to graphical applications such as computer games. Agent systems have been used in films.[29]It is widely advocated for use in networking and mobile technologies, to achieve automatic and dynamic load balancing, high scalability and self-healing networks. They are being used for coordinated defence systems.
Other applications[30]includetransportation,[31]logistics,[32]graphics, manufacturing,power system,[33]smartgrids,[34]and theGIS.
Also,Multi-agent Systems Artificial Intelligence(MAAI) are used for simulating societies, the purpose thereof being helpful in the fields of climate, energy,epidemiology,conflict management, child abuse, ....[35]
Some organisations working on using multi-agent system models include Center for Modelling Social Systems,[36]Centre for Research in Social Simulation,[37]Centre for Policy Modelling, Society for Modelling and Simulation International.[35]
Vehicular traffic with controlled autonomous vehicles can be modelling as a multi-agent system involving crowd dynamics.[38]
Hallerbach et al. discussed the application of agent-based approaches for the development and validation ofautomated driving systemsvia a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents.[39]Waymohas created a multi-agent simulation environment Carcraft to test algorithms forself-driving cars.[40][41]It simulates traffic interactions between human drivers, pedestrians and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior.
|
https://en.wikipedia.org/wiki/Multi-agent_system
|
Native-language identification(NLI) is the task of determining an author'snative language(L1) based only on their writings in asecond language(L2).[1]NLI works through identifying language-usage patterns that are common to specific L1 groups and then applying this knowledge to predict the native language of previously unseen texts. This is motivated in part by applications insecond-language acquisition, language teaching andforensic linguistics, amongst others.
NLI works under the assumption that an author's L1 will dispose them towards particular language production patterns in their L2, as influenced by their native language. This relates to cross-linguistic influence (CLI), a key topic in the field of second-language acquisition (SLA) that analyzes transfer effects from the L1 on later learned languages.
Using large-scale English data, NLI methods achieve over 80% accuracy in predicting the native language of texts written by authors from 11 different L1 backgrounds.[2]This can be compared to a baseline of 9% for choosing randomly.
This identification of L1-specific features has been used to studylanguage transfereffects in second-language acquisition.[3]This is useful for developing pedagogical material, teaching methods, L1-specific instructions and generating learner feedback that is tailored to their native language.
NLI methods can also be applied inforensic linguisticsas a method of performing authorship profiling in order to infer the attributes of an author, including their linguistic background.
This is particularly useful in situations where a text, e.g. an anonymous letter, is the key piece of evidence in an investigation and clues about the native language of a writer can help investigators in identifying the source.
This has already attracted interest and funding from intelligence agencies.[4]
Natural language processingmethods are used to extract and identify language usage patterns common to speakers of an L1-group. This is done using language learner data, usually from alearner corpus. Next,machine learningis applied to train classifiers, likesupport vector machines, for predicting the L1 of unseen texts.[5]A range of ensemble based systems have also been applied to the task and shown to improve performance over single classifier systems.[6][7]
Various linguistic feature types have been applied for this task. These include syntactic features such as constituent parses, grammatical dependencies and part-of-speech tags.
Surface level lexical features such as character, word and lemman-gramshave also been found to be quite useful for this task. However, it seems that character n-grams[8][9]are the single best feature for the task.
The Building Educational Applications (BEA) workshop atNAACL2013 hosted the inaugural NLI shared task.[10]The competition resulted in 29 entries from teams across the globe, 24 of which also published a paper describing their systems and approaches.
|
https://en.wikipedia.org/wiki/Native-language_identification
|
Natural-language programming(NLP) is anontology-assisted way ofprogrammingin terms ofnatural-languagesentences, e.g.English.[1]A structured document with Content, sections and subsections for explanations of sentences forms a NLP document, which is actually acomputer program. Natural language programming is not to be mixed up with natural language interfacing or voice control where a program is first written and then communicated with through natural language using an interface added on. In NLP the functionality of a program is organised only for the definition of the meaning of sentences. For instance, NLP can be used to represent all the knowledge of an autonomous robot. Having done so, its tasks can be scripted by its users so that the robot can execute them autonomously while keeping to prescribed rules of behaviour as determined by the robot's user. Such robots are calledtransparent robots[2]as their reasoning is transparent to users and this develops trust in robots. Natural language use andnatural-language user interfacesincludeInform 7, a natural programming language for making interactive fiction,Shakespeare, anesotericnatural programming language in the style of the plays ofWilliam Shakespeare, andWolfram Alpha, a computational knowledge engine, using natural-language input.[citation needed]Some methods forprogram synthesisare based on natural-language programming.[3]
The smallest unit of statement in NLP is a sentence. Each sentence is stated in terms of concepts from the underlying ontology, attributes in that ontology and named objects incapital letters. In an NLP text every sentence unambiguouslycompilesinto aprocedure callin the underlyinghigh-level programming languagesuch asMATLAB,Octave,SciLab,Python, etc.
Symbolic languages such asWolfram Languageare capable ofinterpretedprocessing of queries by sentences. This can allow interactive requests such as that implemented inWolfram Alpha.[4][5]The difference between these and NLP is that the latter builds up a single program or a library of routines that are programmed through natural language sentences using an ontology that defines the available data structures in a high level programming language.
An example text from an English language natural-language program is as follows:
If U_ is 'smc01-control', then do the following. Define surface weights Alpha as "[0.5, 0.5]".
Initialise matrix Phi as a 'unit matrix'. Define J as the 'inertia matrix' of Spc01. Compute
matrix J2 as the inverse of J. Compute position velocity error Ve and angular velocity error
Oe from dynamical state X, guidance reference Xnow. Define the joint sliding surface G2
from the position velocity error Ve and angular velocity error Oe using the surface weights
Alpha. Compute the smoothed sign function SG2 from the joint sliding surface G2 with sign
threshold 0.01. Compute special dynamical force F from dynamical state X and surface
weights Alpha. Compute control torque T and control force U from matrix J2, surface weights
Alpha, special dynamical force F, smoothed sign function SG2. Finish conditional actions.
that defines a feedback control scheme using asliding mode controlmethod.
Natural-language programming is a top-down method of writing software. Its stages are as follows:
A natural-language program is a preciseformaldescription of some procedure that its author created. It is human readable and it can also be read by a suitable software agent. For example, a web page in an NLP format can be read by a softwarepersonal assistantagent to a person and she or he can ask the agent to execute some sentences, i.e. carry out some task or answer a question. There is areader agentavailable for English interpretation of HTML based NLP documents that a person can run on herpersonal computer.
An ontology class is a natural-language program that is not aconceptin the sense as humans use concepts. Concepts in an NLP are examples (samples) of generic human concepts. Each sentence in a natural-language program is either (1) stating a relationship in a world model or (2) carries out an action in the environment or (3) carries out a computational procedure or (4) invokes an answering mechanism in response to a question.
A set of NLP sentences, with associated ontology defined, can also be used as apseudo codethat does not provide the details in any underlying high level programming language. In such an application the sentences used become high level abstractions (conceptualisations) of computing procedures that are computer language and machine independent.
Researchers have started to experiment with natural language programming environments that use plain language prompts and then use AI (specifically large language models) to turn natural language into formal code. For example Spatial Pixelcreated a natural language programming environmentto turn natural language into P5.js code through OpenAI's API. In 2021 OpenAI developed a natural language programming environment for their programming large language model calledCodex.
|
https://en.wikipedia.org/wiki/Natural-language_programming
|
Natural language understanding(NLU) ornatural language interpretation(NLI)[1]is a subset ofnatural language processinginartificial intelligencethat deals with machinereading comprehension. NLU has been considered anAI-hardproblem.[2]
There is considerable commercial interest in the field because of its application toautomated reasoning,[3]machine translation,[4]question answering,[5]news-gathering,text categorization,voice-activation, archiving, and large-scalecontent analysis.
The programSTUDENT, written in 1964 byDaniel Bobrowfor his PhD dissertation atMIT, is one of the earliest known attempts at NLU by a computer.[6][7][8][9][10]Eight years afterJohn McCarthycoined the termartificial intelligence, Bobrow's dissertation (titledNatural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems.
A year later, in 1965,Joseph Weizenbaumat MIT wroteELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases and Weizenbaum sidestepped the problem of giving the program adatabaseof real-world knowledge or a richlexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used byAsk.com.[11]
In 1969,Roger SchankatStanford Universityintroduced theconceptual dependency theoryfor NLU.[12]This model, partially influenced by the work ofSydney Lamb, was extensively used by Schank's students atYale University, such asRobert Wilensky,Wendy Lehnert, andJanet Kolodner.
In 1970,William A. Woodsintroduced theaugmented transition network(ATN) to represent natural language input.[13]Instead ofphrase structure rulesATNs used an equivalent set offinite-state automatathat were called recursively. ATNs and their more general format called "generalized ATNs" continued to be used for a number of years.
In 1971,Terry Winogradfinished writingSHRDLUfor his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field.[14][15]Winograd continued to be a major influence in the field with the publication of his bookLanguage as a Cognitive Process.[16]At Stanford, Winograd would later adviseLarry Page, who co-foundedGoogle.
In the 1970s and 1980s, the natural language processing group atSRI Internationalcontinued research and development in the field. A number of commercial efforts based on the research were undertaken,e.g., in 1982Gary HendrixformedSymantec Corporationoriginally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-drivengraphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time,e.g., Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp.[17][18]In 1983, Michael Dyer developed the BORIS system at Yale which bore similarities to the work of Roger Schank and W. G. Lehnert.[19]
The third millennium saw the introduction of systems using machine learning for text classification, such as the IBMWatson. However, experts debate how much "understanding" such systems demonstrate:e.g., according toJohn Searle, Watson did not even understand the questions.[20]
John Ball, cognitive scientist and inventor of thePatom Theory, supports this assessment. Natural language processing has made inroads for applications to support human productivity in service and e-commerce, but this has largely been made possible by narrowing the scope of the application. There are thousands of ways to request something in a human language that still defies conventional natural language processing.[citation needed]According to Wibe Wagemans, "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork."[21]
The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued torobots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes, for instancetext classificationfor the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text,[22]but needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata.
Throughout the years various attempts at processing natural language orEnglish-likesentences presented to computers have taken place at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have helped overall system usability. For example,Wayne Ratlifforiginally developed theVulcanprogram with an English-like syntax to mimic the English speaking computer inStar Trek. Vulcan later became thedBasesystem whose easy-to-use syntax effectively launched the personal computer database industry.[23][24]Systems with an easy to use or English-like syntax are, however, quite distinct from systems that use a richlexiconand include an internalrepresentation(often asfirst order logic) of the semantics of natural language sentences.
Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest,English-likecommand interpreters require minimal complexity, but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding,[25]but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity,[26]but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.
Regardless of the approach used, most NLU systems share some common components. The system needs alexiconof the language and aparserandgrammarrules to break sentences into an internal representation. The construction of a rich lexicon with a suitableontologyrequires significant effort,e.g., theWordnetlexicon required many person-years of effort.[27]
The system also needs theory fromsemanticsto guide the comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation.[28]These range fromnaive semanticsorstochastic semantic analysisto the use ofpragmaticsto derive meaning from context.[29][30][31]Semantic parsersconvert natural-language texts into formal meaning representations.[32]
Advanced applications of NLU also attempt to incorporate logicalinferencewithin their framework. This is generally achieved by mapping the derived meaning into a set of assertions inpredicate logic, then usinglogical deductionto arrive at conclusions. Therefore, systems based on functional languages such asLispneed to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the languageProloggenerally rely on an extension of the built-in logical representation framework.[33][34]
The management ofcontextin NLU can present special challenges. A large variety of examples and counter examples have resulted in multiple approaches to theformal modelingof context, each with specific strengths and weaknesses.[35][36]
|
https://en.wikipedia.org/wiki/Natural-language_understanding
|
Natural-language user interface(LUIorNLUI) is a type ofcomputer human interfacewhere linguistic phenomena such as verbs, phrases and clauses act as UI controls for creating, selecting and modifying data in software applications.
Ininterface design, natural-language interfaces are sought after for their speed and ease of use, but most suffer the challenges tounderstandingwide varieties ofambiguous input.[1]Natural-language interfaces are an active area of study in the field ofnatural-language processingandcomputational linguistics. An intuitive general natural-language interface is one of the active goals of theSemantic Web.
Text interfaces are "natural" to varying degrees. Many formal (un-natural) programming languages incorporate idioms of natural human language. Likewise, a traditionalkeyword searchengine could be described as a "shallow" natural-language user interface.
A natural-language search engine would in theory find targetedanswers to user questions(as opposed to keyword search). For example, when confronted with a question of the form 'whichU.S.state has the highestincome tax?', conventional search engines ignore the question and instead search on thekeywords'state', 'income' and 'tax'. Natural-language search, on the other hand, attempts to use natural-language processing to understand the nature of the question and then to search and return a subset of the web that contains the answer to the question. If it works, results would have a higher relevance than results from a keyword search engine, due to the question being included.[citation needed]
Prototype Nl interfaces had already appeared in the late sixties and early seventies.[2]
Natural-language interfaces have in the past led users to anthropomorphize the computer, or at least to attribute more intelligence to machines than is warranted. On the part of the user, this has led to unrealistic expectations of the capabilities of the system. Such expectations will make it difficult to learn the restrictions of the system if users attribute too much capability to it, and will ultimately lead to disappointment when the system fails to perform as expected as was the case in theAI winterof the 1970s and 80s.
A1995 papertitled 'Natural Language Interfaces to Databases – An Introduction', describes some challenges:[2]
Other goals to consider more generally are the speed and efficiency of the interface, in all algorithms these two points are the main point that will determine if some methods are better than others and therefore have greater success in the market. In addition, localisation across multiple language sites requires extra consideration - this is based on differing sentence structure and language syntax variations between most languages.
Finally, regarding the methods used, the main problem to be solved is creating a general algorithm that can recognize the entire spectrum of different voices, while disregarding nationality, gender or age. The significant differences between the extracted features - even from speakers who says the same word or phrase - must be successfully overcome.
The natural-language interface gives rise to technology used for many different applications.
Some of the main uses are:
Below are named and defined some of the applications that use natural-language recognition, and so have integrated utilities listed above.
Ubiquity, anadd-onforMozilla Firefox, is a collection of quick and easy natural-language-derived commands that act asmashupsof web services, thus allowing users to get information and relate it to current and other webpages.
Wolfram Alpha is an online service that answers factual queries directly by computing the answer from structured data, rather than providing a list of documents or web pages that might contain the answer as asearch enginewould.[5]It was announced in March 2009 byStephen Wolfram, and was released to the public on May 15, 2009.[6]
Siri is anintelligent personal assistantapplication integrated with operating systemiOS. The application usesnatural language processingto answer questions and make recommendations.
Siri's marketing claims include that it adapts to a user's individual preferences over time and personalizes results, and performs tasks such as making dinner reservations while trying to catch a cab.[7]
|
https://en.wikipedia.org/wiki/Natural-language_user_interface
|
The followingoutlineis provided as an overview of and topical guide to natural-language processing:
natural-language processing– computer activity in which computers are entailed toanalyze, understand,alter, or generatenatural language. This includes theautomationof any or all linguistic forms, activities, or methods of communication, such asconversation, correspondence,reading,written composition,dictation,publishing,translation,lip reading, and so on. Natural-language processing is also the name of the branch ofcomputer science,artificial intelligence, andlinguisticsconcerned with enabling computers to engage in communication using natural language(s) in all forms, including but not limited tospeech,print,writing, andsigning.
Natural-language processing can be described as all of the following:
The following technologies make natural-language processing possible:
Natural-language processing contributes to, and makes use of (the theories, tools, and methodologies from), the following fields:
Natural-language generation– task of converting information from computer databases into readable human language.
History of natural-language processing
The followingnatural-language processingtoolkitsare notable collections ofnatural-language processingsoftware. They are suites oflibraries,frameworks, andapplicationsfor symbolic, statistical natural-language and speech processing.
Chatterbot– a text-based conversationagentthat can interact with human users through some medium, such as aninstant messageservice. Some chatterbots are designed for specific purposes, while others converse with human users on a wide range of topics.
|
https://en.wikipedia.org/wiki/Outline_of_natural_language_processing
|
Query expansion(QE) is the process of reformulating a given query to improve retrieval performance ininformation retrievaloperations, particularly in the context ofquery understanding.[1]In the context ofsearch engines, query expansion involves evaluating a user's input (what words were typed into the search query area, and sometimes other types ofdata) and expanding the search query to match additional documents. Query expansion involves techniques such as:
Query expansion is a methodology studied in the field ofcomputer science, particularly within the realm ofnatural language processingandinformation retrieval.
Search engines invoke query expansion to increase the quality of user search results. It is assumed that users do not always formulate search queries using the best terms. Best in this case may be because the database does not contain the user entered terms.
Bystemminga user-entered term, more documents are matched, as the alternate word forms for a user entered term are matched as well, increasing the totalrecall. This comes at the expense of reducing theprecision. By expanding a search query to search for the synonyms of a user entered term, the recall is also increased at the expense of precision. This is due to the nature of the equation of how precision is calculated, in that a larger recall implicitly causes a decrease in precision, given that factors of recall are part of the denominator. It is also inferred that a larger recall negatively impacts overall search result quality, given that many users do not want more results to comb through, regardless of the precision.
The goal of query expansion in this regard is by increasing recall, precision can potentially increase (rather than decrease as mathematically equated), by including in the result set pages which are more relevant (of higher quality), or at least equally relevant. Pages which would not be included in the result set, which have the potential to be more relevant to the user's desired query, are included, and without query expansion would not have, regardless ofrelevance. At the same time, many of the current commercial search engines use word frequency (tf-idf) to assist in ranking.[citation needed]By ranking the occurrences of both the user entered words and synonyms and alternate morphological forms, documents with a higher density (high frequency and close proximity) tend to migrate higher up in the search results, leading to a higher quality of the search results near the top of the results, despite the larger recall.
Automatic methods for query expansion were proposed in 1960 by Maron and Kuhns.[2]Modern query expansion methods either imply document collection analysis (global or local)[3]or are dictionary- orontology-based.[4]The global analysis of the document collection is applied for searching for relations between terms. The local analysis refers to therelevance feedbackintroduced by Rocchio.[5]Rocchio proposed to judge manually some of the retrieved documents and use this feedback information to expand the query. Since collecting users' judgment can be challenging, only the first top retrieved documents are considered as relevant. This is the so calledpseudo-relevance feedback(PRF).[6]Pseudo-relevance feedback is efficient in average but can damage results for some queries,[7]especially difficult ones since the top retrieved documents are probably non-relevant. Pseudo-relevant documents are used to find expansion candidate terms that co-occur with many query terms.[8]This idea was further developed within the relevancelanguage modelformalism in positional relevance[9]and proximity relevance models[10]which consider the distance to query terms in the pseudo-relevant documents. Another direction in query expansion is the representation of index and query terms in a vector space which can be used to find related terms at query time, using semantic vectors orword embeddings.[11][12]
More generally, query expansion, with its counterpartdocument expansion, are today implemented in the form ofvector databases, using various encoding schemes based ondeep learning.[13]
|
https://en.wikipedia.org/wiki/Query_expansion
|
Ininformation retrievalandnatural language processingreificationis the process by which an abstract idea about a person, place or thing, is turned into an explicit data model or other object created in a programming language, such as a feature set of demographic[1]or psychographic[2]attributes or both. By means of reification, something that was previously implicit, unexpressed, and possibly inexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation.
The process by which a natural language statement is transformed so actions and events in it becomequantifiablevariables issemantic parsing.[3]For example "John chased the duck furiously" can be transformed into something like
Another example would be "Sally said John is mean", which could be expressed as something like
Such formal meaning representations allow one to use the tools of classicalfirst-order predicate calculuseven for statements which, due to their use of tense, modality, adverbial constructions, propositional arguments (e.g."Sally said that X"), etc., would have seemed intractable. This is an advantage because predicate calculus is better understood and simpler than the more complex alternatives (higher-order logics, modal logics, temporal logics, etc.), and there exist better automated tools (e.g.automated theorem proversandmodel checkers) for manipulating it.
Meaning representations can be used for other purposes besides the application of first-order logic; one example is the automatic discovery of synonymous phrases.[4][5]
The meaning representations are sometimes calledquasi-logical forms, and the existential variables are sometimes treated asSkolem constants.[5]
Not all natural language constructs admit a uniform translation to first order logic. Seedonkey sentencefor examples and a discussion.
|
https://en.wikipedia.org/wiki/Reification_(linguistics)
|
Speech processingis the study ofspeechsignalsand the processing methods of signals. The signals are usually processed in adigitalrepresentation, so speech processing can be regarded as a special case ofdigital signal processing, applied tospeech signals. Aspects of speech processing includes the acquisition, manipulation, storage, transfer and output of speech signals. Different speech processing tasks includespeech recognition,speech synthesis,speaker diarization,speech enhancement,speaker recognition, etc.[1]
Early attempts at speech processing and recognition were primarily focused on understanding a handful of simplephoneticelements such as vowels. In 1952, three researchers at Bell Labs, Stephen. Balashek, R. Biddulph, and K. H. Davis, developed a system that could recognize digits spoken by a single speaker.[2]Pioneering works in field of speech recognition using analysis of its spectrum were reported in the 1940s.[3]
Linear predictive coding(LPC), a speech processing algorithm, was first proposed byFumitada ItakuraofNagoya Universityand Shuzo Saito ofNippon Telegraph and Telephone(NTT) in 1966.[4]Further developments in LPC technology were made byBishnu S. AtalandManfred R. SchroederatBell Labsduring the 1970s.[4]LPC was the basis forvoice-over-IP(VoIP) technology,[4]as well asspeech synthesizerchips, such as theTexas Instruments LPC Speech Chipsused in theSpeak & Spelltoys from 1978.[5]
One of the first commercially available speech recognition products was Dragon Dictate, released in 1990. In 1992, technology developed byLawrence Rabinerand others at Bell Labs was used byAT&Tin their Voice Recognition Call Processing service to route calls without a human operator. By this point, the vocabulary of these systems was larger than the average human vocabulary.[6]
By the early 2000s, the dominant speech processing strategy started to shift away fromHidden Markov Modelstowards more modernneural networksanddeep learning.[7]
In 2012,Geoffrey Hintonand his team at theUniversity of Torontodemonstrated that deep neural networks could significantly outperform traditional HMM-based systems on large vocabulary continuous speech recognition tasks. This breakthrough led to widespread adoption of deep learning techniques in the industry.[8][9]
By the mid-2010s, companies likeGoogle,Microsoft,Amazon, andApplehad integrated advanced speech recognition systems into their virtual assistants such asGoogle Assistant,Cortana,Alexa, andSiri.[10]These systems utilized deep learning models to provide more natural and accurate voice interactions.
The development of Transformer-based models, like Google's BERT (Bidirectional Encoder Representations from Transformers) and OpenAI's GPT (Generative Pre-trained Transformer), further pushed the boundaries of natural language processing and speech recognition. These models enabled more context-aware and semantically rich understanding of speech.[11][8]In recent years, end-to-end speech recognition models have gained popularity. These models simplify the speech recognition pipeline by directly converting audio input into text output, bypassing intermediate steps like feature extraction and acoustic modeling. This approach has streamlined the development process and improved performance.[12]
Dynamic time warping (DTW) is analgorithmfor measuring similarity between twotemporal sequences, which may vary in speed. In general, DTW is a method that calculates anoptimal matchbetween two given sequences (e.g. time series) with certain restriction and rules. The optimal match is denoted by the match that satisfies all the restrictions and the rules and that has the minimal cost, where the cost is computed as the sum of absolute differences, for each matched pair of indices, between their values.[citation needed]
A hidden Markov model can be represented as the simplestdynamic Bayesian network. The goal of the algorithm is to estimate a hidden variable x(t) given a list of observations y(t). By applying theMarkov property, theconditional probability distributionof the hidden variablex(t) at timet, given the values of the hidden variablexat all times, dependsonlyon the value of the hidden variablex(t− 1). Similarly, the value of the observed variabley(t) only depends on the value of the hidden variablex(t) (both at timet).[citation needed]
An artificial neural network (ANN) is based on a collection of connected units or nodes calledartificial neurons, which loosely model theneuronsin a biologicalbrain. Each connection, like thesynapsesin a biologicalbrain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is areal number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs.[citation needed]
Phase is usually supposed to be random uniform variable and thus useless. This is due wrapping of phase:[13]result ofarctangentfunction is not continuous due to periodical jumps on2π{\displaystyle 2\pi }. After phase unwrapping (see,[14]Chapter 2.3;Instantaneous phase and frequency), it can be expressed as:[13][15]ϕ(h,l)=ϕlin(h,l)+Ψ(h,l){\displaystyle \phi (h,l)=\phi _{lin}(h,l)+\Psi (h,l)}, whereϕlin(h,l)=ω0(l′)Δt{\displaystyle \phi _{lin}(h,l)=\omega _{0}(l'){}_{\Delta }t}is linear phase (Δt{\displaystyle {}_{\Delta }t}is temporal shift at each frame of analysis),Ψ(h,l){\displaystyle \Psi (h,l)}is phase contribution of the vocal tract and phase source.[15]Obtained phase estimations can be used for noise reduction: temporal smoothing of instantaneous phase[16]and its derivatives by time (instantaneous frequency) and frequency (group delay),[17]smoothing of phase across frequency.[17]Joined amplitude and phase estimators can recover speech more accurately basing on assumption of von Mises distribution of phase.[15]
|
https://en.wikipedia.org/wiki/Speech_processing
|
Aspoken dialog system(SDS) is a computer system able to converse with a human with voice. It has two essential components that do not exist in a written textdialog system: aspeech recognizerand atext-to-speechmodule (written text dialog systems usually use other input systems provided by an OS). It can be further distinguished fromcommand and controlspeech systems that can respond to requests but do not attempt to maintain continuity over time.
Spoken dialog systems vary in their complexity. Directed dialog systems are very simple and require that the developer create a graph (typically a tree) that manages the task but may not correspond to the needs of the user. Information access systems, typically based on forms, allow users some flexibility (for example in the order in which retrieval constraints are specified, or in the use of optional constraints) but are limited in their capabilities. Problem-solving dialog systems may allow human users to engage in a number of different activities that may include information access, plan construction and possible execution of the latter.
Some examples of systems include:
Pionieers in dialogue systems are companies likeAT&T(with its speech recognizer system in the Seventies) andCSELTlaboratories, that led some European research projects during the Eighties (e.g. SUNDIAL) after the end of the DARPA project in the US.
The field of spoken dialog systems is quite large and includes research (featured at scientific conferences such asSIGdialandInterspeech) and a large industrial sector (with its own meetings such asSpeechTekandAVIOS).
The following might provide good technical introductions:
|
https://en.wikipedia.org/wiki/Spoken_dialogue_systems
|
Proofreadingis a phase in the process ofpublishingwheregalley proofsare compared against the originalmanuscriptsorgraphic artworks, to identifytranscriptionerrors in thetypesettingprocess.[1][2]In the past, proofreaders would place corrections or proofreading marks along the margins.[3]In modern publishing, material is generally provided in electronic form, traditional typesetting is no longer used and thus (in general) this kind of transcription no longer occurs.[a]
A "galley proof" (familiarly, "a proof") is atypesetversion ofcopyor amanuscriptdocument. It may containtypographical errors("printer's errors"), as a result of human error during typesetting. Traditionally, a proofreader looks at a portion of text on the copy, compares it to the corresponding typeset portion, and then marks any errors (sometimes called "line edits") usingstandard proofreaders' marks.[4]
Unlikecopy editing, the defining procedure of a proofreading service is to work directly with two sets of information at the same time. Proofs are then returned to the typesetter for correction. Correction-cycle proofs will typically have one descriptive term, such as "bounce", "bump", or "revise" unique to the department or organization and used for clarity to the strict exclusion of any other.[citation needed]
"Copy holding" or "copy reading" employs two readers per proof. The first reads the text aloud literally as it appears, usually at a comparatively fast but uniform rate. The second reader follows along and marks any pertinent differences between what is read and what was typeset. This method is appropriate for large quantities ofboilerplate textwhere it is assumed that there will be comparatively few mistakes.
Experienced copy holders employ variouscodesand verbal shortcuts that accompany their reading. The spoken word "digits", for example, means that the numbers about to be read are not words spelled out; and "in a hole" can mean that the upcoming segment of text is withinparentheses. "Bang" means anexclamation point. A "thump" or "screamer" made with a finger on the table represents theinitial cap,comma,period, or similar obvious attribute being read simultaneously. Thus the line of text "(He said the address was 1234 Central Blvd., and to hurry!)" would be read aloud as "in a hole[thump]he said the address was digits 1 2 3 4[thump]central[thump]buluhvuhd[thump]comma and to hurry bang". Mutual understanding is the only guiding principle, so codes evolve as opportunity permits. In the above example, two thumps afterbuluhvuhdmight be acceptable to proofreaders familiar with the text.
"Double reading" is when a single proofreader checks a proof in the traditional manner and then another reader repeats the process. Both initial the proof. With both copy holding and double reading, responsibility for a given proof is necessarily shared by the two proofreaders.
"Scanning" is used to check a proof without reading it word for word, has become common with computerization of typesetting and the popularization ofword processing. Many publishers have their own proprietary typesetting systems,[5]while their customers use more common commercial programs. Before the original data can be published, it must be converted into a format used by the publisher. The end product is usually called aconversion. If a customer has already proofread the contents of a file before submitting it to a publisher, there will be no reason for another proofreader to re-read it from the copy (although this additional service may be requested and paid for). Instead, the publisher is held responsible only for formatting errors, such as typeface, page width, and alignment ofcolumnsintables; and production errors such as text inadvertently deleted. To simplify matters further, a given conversion will usually be assigned a specifictemplate.
Proofreaders are expected to be consistently accurate by default because they occupy the last stage of typographic production beforepublication.
Checklists are common in proof-rooms where there is sufficient uniformity of product to distil some or all of its components into a list. They may also act as a training tool for new hires. Checklists are never comprehensive, however: proofreaders still have to find all mistakes that arenotmentioned or described, thus limiting their usefulness.
The term "proofreading" is sometimes incorrectly used to refer tocopy editing, and vice versa. Although there is necessarily some overlap, proofreaders typically lack any real editorial or managerial authority, but they may mark queries for typesetters, editors, or authors. To set expectations before hiring proofreaders, some employers post a notice that the job advertised is not a writing or editing position and will not become one. Creativity and critical thinking by their very nature conflict with the strict copy-following discipline thatcommercialandgovernmentalproofreading requires. Thus, proofreading and editing are fundamentally separate responsibilities. In contrast to proofreaders, copy editors focus on a sentence-by-sentence analysis of the text to "clean it up" by improving grammar, spelling, punctuation, syntax, and structure. The copy editor is usually the last editor an author will work with. Copy editing focuses intensely on style, content, punctuation,grammar, and consistency of usage.[6]
Copy editing and proofreading are parts of the same process; each is necessary at a different stage of the writing process. Copy editing is required during the drafting stage. The copy editors polish the text for precision and conciseness. They attempt to understand the purpose of the writing and the intended audience; therefore, they ask questions such as where the document will be published and who will read it, and they edit accordingly. Proofreading, rather, is required during the last stage of the editing process. Its scope is limited, as the proofreaders focus only on reading the text to ensure the document is error-free and ready for publication.[7]Proofreading generally focuses on correcting any final typos, spelling errors, stylistic inconsistencies (e.g., whether words or numerals are used for numbers), and punctuation errors.[8]
Examples of proofreaders in fiction include:
|
https://en.wikipedia.org/wiki/Text-proofing
|
Text simplificationis an operation used innatural language processingto change, enhance, classify, or otherwise process an existing body of human-readable text so its grammar and structure is greatly simplified while the underlyingmeaningandinformationremain the same. Text simplification is an important area of research because of communication needs in an increasingly complex and interconnected world more dominated by science, technology, and new media. But natural human languages pose huge problems because they ordinarily contain large vocabularies and complex constructions that machines, no matter how fast and well-programmed, cannot easily process. However, researchers have discovered that, to reduce linguistic diversity, they can use methods ofsemantic compressionto limit and simplify a set of words used in given texts.
Text simplification is illustrated with an example used by Siddharthan (2006).[1]The first sentence contains two relative clauses and one conjoined verb phrase. A text simplification system aims to change the first sentence into a group of simpler sentences, as seen just below the first sentence.
One approach to text simplification islexical simplificationvialexical substitution, a two-step process of first identifying complex words and then replacing them with simpler synonyms. A key challenge here is identifying complex words, which is performed by a machine learning classifier trained onlabeled data. Researchers, frustrated by the problems with using the classical method of asking research subjects to describe words as either simple or complex, have discovered that they can get a higher consistency in more levels of complexity if they ask labelers to sort words presented to them in order of complexity.[2]
|
https://en.wikipedia.org/wiki/Text_simplification
|
Thetransformeris adeep learningarchitecture that was developed by researchers atGoogleand is based on the multi-headattentionmechanism, which was proposed in the 2017 paper "Attention Is All You Need".[1]Text is converted to numerical representations calledtokens, and each token is converted into a vector via lookup from aword embeddingtable.[1]At each layer, eachtokenis thencontextualizedwithin the scope of thecontext windowwith other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for keytokensto be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlierrecurrent neural architectures(RNNs) such aslong short-term memory(LSTM).[2]Later variations have been widely adopted for traininglarge language models(LLM) on large (language)datasets.[3]
Transformers were first developed as an improvement over previous architectures formachine translation,[4][5]but have found many applications since. They are used in large-scalenatural language processing,computer vision(vision transformers),reinforcement learning,[6][7]audio,[8]multimodal learning,robotics,[9]and even playingchess.[10]It has also led to the development ofpre-trained systems, such asgenerative pre-trained transformers(GPTs)[11]andBERT[12](bidirectional encoder representations from transformers).
For many years, sequence modelling and generation was done by using plainrecurrent neural networks(RNNs). A well-cited early example was theElman network(1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice thevanishing-gradient problemleaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough wasLSTM(1995),[note 1]a RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of anattention mechanismwhich used neurons that multiply the outputs of other neurons, so-calledmultiplicative units.[13]Neural networks using multiplicative units were later calledsigma-pi networks[14]orhigher-order networks.[15]LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs.[note 2]Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that isquadraticin the size of the context window. The linearly scalingfast weightcontroller (1992) learns to compute a weight matrix for further processing depending on the input.[16]One of its two networks has "fast weights" or "dynamic links" (1981).[17][18][19]A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries.[16]This was later shown to be equivalent to the unnormalized linear Transformer.[20][21]
The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see previous papers[22][23]). The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.[22][23]
A 380M-parameter model for machine translation uses twolong short-term memories(LSTM).[23]Its architecture consists of two parts. Theencoderis an LSTM that takes in a sequence of tokens and turns it into a vector. Thedecoderis another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model usedgated recurrent units(GRU) instead of LSTM.[22]Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.[24][25]
These early seq2seq models had no attention mechanism, and the state vector is accessible only after thelastword of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into afixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.[26]
TheRNNsearchmodel introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of thefixed-sizeoutput vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".[4]
The relative performances were compared between global (that ofRNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.[27]
In 2016,Google Translatewas revamped toGoogle Neural Machine Translation, which replaced the previous model based onstatistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.[28]It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.[29]
Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard toparallelize, which prevented them from being accelerated on GPUs. In 2016,decomposable attentionapplied a self-attention mechanism tofeedforward networks, which are easy to parallelize, and achievedSOTAresult intextual entailmentwith an order of magnitude fewer parameters than LSTMs.[30]One of its authors, Jakob Uszkoreit, suspected that attentionwithoutrecurrence is sufficient for language translation, thus the title "attention isallyou need".[31]That hypothesis was against conventional wisdom at the time, and even his fatherHans Uszkoreit, a well-known computational linguist, was skeptical.[31]In the same year, self-attention (calledintra-attention orintra-sentence attention) was proposed for LSTMs.[32]
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improvingseq2seqformachine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance.[1]This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.[33]
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles.[34]Transformer architecture is now used alongside manygenerative modelsthat contribute to the ongoingAI boom.
In language modelling,ELMo(2018) was a bi-directional LSTM that produces contextualizedword embeddings, improving upon the line of research frombag of wordsandword2vec. It was followed byBERT(2018), an encoder-only Transformer model.[35]In 2019 October, Google started using BERT to process search queries.[36]In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model.[37]
Starting in 2018, the OpenAIGPT seriesof decoder-only Transformers became state of the art innatural language generation. In 2022, a chatbot based on GPT-3,ChatGPT, became unexpectedly[38]popular, triggering a boom aroundlarge language models.[39][40]
Since 2020, Transformers have been applied in modalities beyond text, including thevision transformer,[41]speech recognition,[42]robotics,[6]andmultimodal.[43]The vision transformer, in turn, stimulated new developments inconvolutional neural networks.[44]Image and video generators likeDALL-E(2021),Stable Diffusion 3(2024),[45]andSora(2024), use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
The plain transformer architecture had difficulty converging. In the original paper[1]the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again.
A 2020 paper found that usinglayer normalizationbefore(instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.[46]
Transformers typically are first pretrained byself-supervised learningon a large generic dataset, followed bysupervisedfine-tuningon a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such asThe Pile. Tasks for pretraining and fine-tuning commonly include:
TheT5 transformerreport[47]documents a large number ofnatural languagepretraining tasks. Some examples are:
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
In general, there are 3 classes of language modelling tasks: "masked",[49]"autoregressive",[50]and "prefixLM".[51]These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task,[49]one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. Theloss functionfor the task is typically sum oflog-perplexitiesfor the masked-out tokens:Loss=−∑t∈masked tokensln(probability oftconditional on its context){\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}and the model is trained to minimize this loss function. TheBERT series of modelsare trained for masked token prediction and another task.
In an autoregressive task,[50]the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. TheGPT series of modelsare trained by autoregressive tasks.
In a prefixLM task,[51]the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. TheT5 series of modelsare trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not"prefixLM" (prefix language model).
All transformers have the same primary components:
The following description follows exactly the Transformer as described in the original paper. There are variants, described in thefollowing section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, asxW{\displaystyle xW}.
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is atokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is thevocabulary sizenvocabulary{\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers arebyte pair encoding, WordPiece, and SentencePiece.
Each token is converted into an embedding vector via alookup table. Equivalently stated, it multiplies aone-hotrepresentation of the token by an embedding matrixM{\displaystyle M}. For example, if the input token is3{\displaystyle 3}, then the one-hot representation is[0,0,0,1,0,0,…]{\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector isEmbed(3)=[0,0,0,1,0,0,…]M{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
The number of dimensions in an embedding vector is calledhidden sizeorembedding sizeand written asdemb{\displaystyle d_{\text{emb}}}.[35]This size is written asdmodel{\displaystyle d_{\text{model}}}in the original Transformer paper.[1]
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmaxlayer:UnEmbed(x)=softmax(xW+b){\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}The matrix has shape(demb,nvocabulary){\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrixM{\displaystyle M}and the un-embedding matrixW{\displaystyle W}are sometimes required to be transposes of each other, a practice called weight tying.[52]
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information aboutwherethe words are in the input sequence. This shall induce abiastowards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of typef:R→Rd;d∈Z,d>0{\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, whered{\displaystyle d}is a positive eveninteger. The full positional encoding defined in the original paper[1]is:(f(t)2k,f(t)2k+1)=(sin(θ),cos(θ))∀k∈{0,1,…,d/2−1}{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}whereθ=trk,r=N2/d{\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}.
Here,N{\displaystyle N}is a free parameter that should be significantly larger than the biggestk{\displaystyle k}that would be input into the positional encoding function. The original paper usesN=10000{\displaystyle N=10000}.
The function is in a simpler form when written as a complex function of typef:R→Cd/2{\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}}f(t)=(eit/rk)k=0,1,…,d2−1{\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}}wherer=N2/d{\displaystyle r=N^{2/d}}.
The main reason for using this positional encoding function is that using it, shifts are linear transformations:f(t+Δt)=diag(f(Δt))f(t){\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)}whereΔt∈R{\displaystyle \Delta t\in \mathbb {R} }is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication.
By taking a linear sum, any convolution can also be implemented as linear transformations:∑jcjf(t+Δtj)=(∑jcjdiag(f(Δtj)))f(t){\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)}for any constantscj{\displaystyle c_{j}}. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in aconvolutional neural networklanguage model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but sincecomplex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
Like earlierseq2seqmodels, the original transformer model used anencoder-decoderarchitecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).[53][54]
Both the encoder and decoder layers have afeed-forward neural networkfor additional processing of their outputs and contain residual connections and layer normalization steps.[54]These feed-forward layers contain most of the parameters in a Transformer model.
The feedforward network (FFN) modules in a Transformer are 2-layeredmultilayer perceptrons:FFN(x)=ϕ(xW(1)+b(1))W(2)+b(2){\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}}whereW(1){\displaystyle W^{(1)}}andW(2){\displaystyle W^{(2)}}are weight matrices andb(1){\displaystyle b^{(1)}}andb(2){\displaystyle b^{(2)}}are bias vectors, andϕ{\displaystyle \phi }is its activation function. The original Transformer usedReLUactivation.
The number of neurons in the middle layer is calledintermediate size(GPT),[55]filter size(BERT),[35]orfeedforward size(BERT).[35]It is typically larger than the embedding size. For example, in both GPT-2 series and BERT series, the intermediate size of a model is 4 times its embedding size:dffn=4demb{\displaystyle d_{\text{ffn}}=4d_{\text{emb}}}.
The attention mechanism used in the Transformer architecture are scaleddot-productattentionunits. For each unit, the transformer model learns three weight matrices: the query weightsWQ{\displaystyle W^{Q}}, the key weightsWK{\displaystyle W^{K}}, and the value weightsWV{\displaystyle W^{V}}.
The module takes three sequences, a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of lengthℓseq, query{\displaystyle \ell _{\text{seq, query}}}, and each entry is a vector of dimensiondemb, query{\displaystyle d_{\text{emb, query}}}. Similarly for the key and value sequences.
For each vectorxi,query{\displaystyle x_{i,{\text{query}}}}in the query sequence, it is multiplied by a matrixWQ{\displaystyle W^{Q}}to produce a query vectorqi=xi,queryWQ{\displaystyle q_{i}=x_{i,{\text{query}}}W^{Q}}. The matrix of all query vectors is the query matrix:Q=XqueryWQ{\displaystyle Q=X_{\text{query}}W^{Q}}Similarly, we construct the key matrixK=XkeyWK{\displaystyle K=X_{\text{key}}W^{K}}and the value matrixV=XvalueWV{\displaystyle V=X_{\text{value}}W^{V}}.
It is usually the case that allWQ,WK,WV{\displaystyle W^{Q},W^{K},W^{V}}are square matrices, meaningdemb, query=dquery{\displaystyle d_{\text{emb, query}}=d_{\text{query}}}, etc.
Attention weights are calculated using the query and key vectors: the attention weightaij{\displaystyle a_{ij}}from tokeni{\displaystyle i}to tokenj{\displaystyle j}is thedot productbetweenqi{\displaystyle q_{i}}andkj{\displaystyle k_{j}}. The attention weights are divided by the square root of the dimension of the key vectors,dk{\displaystyle {\sqrt {d_{k}}}}, which stabilizes gradients during training, and passed through asoftmaxwhich normalizes the weights. The fact thatWQ{\displaystyle W^{Q}}andWK{\displaystyle W^{K}}are different matrices allows attention to be non-symmetric: if tokeni{\displaystyle i}attends to tokenj{\displaystyle j}(i.e.qi⋅kj{\displaystyle q_{i}\cdot k_{j}}is large), this does not necessarily mean that tokenj{\displaystyle j}will attend to tokeni{\displaystyle i}(i.e.qj⋅ki{\displaystyle q_{j}\cdot k_{i}}could be small). The output of the attention unit for tokeni{\displaystyle i}is the weighted sum of the value vectors of all tokens, weighted byaij{\displaystyle a_{ij}}, the attention from tokeni{\displaystyle i}to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using thesoftmax function, which is useful for training due to computational matrix operation optimizations that quickly compute matrix operations. The matricesQ{\displaystyle Q},K{\displaystyle K}andV{\displaystyle V}are defined as the matrices where thei{\displaystyle i}th rows are vectorsqi{\displaystyle q_{i}},ki{\displaystyle k_{i}}, andvi{\displaystyle v_{i}}respectively. Then we can represent the attention asAttention(Q,K,V)=softmax(QKTdk)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
where the softmax is applied over each of the rows of the matrix.
The number of dimensions in a query vector isquery sizedquery{\displaystyle d_{\text{query}}}and similarly for thekey sizedkey{\displaystyle d_{\text{key}}}andvalue sizedvalue{\displaystyle d_{\text{value}}}. The output dimension of an attention head is itshead dimensiondhead{\displaystyle d_{\text{head}}}. The attention mechanism requires the following three equalities to hold:ℓseq, key=ℓseq, value,dquery=dkey,dvalue=dhead{\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}}but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, thenXquery=Xkey=Xvalue{\displaystyle X_{\text{query}}=X_{\text{key}}=X_{\text{value}}}. If the attention head is used in a cross-attention fashion, then usuallyXquery≠Xkey=Xvalue{\displaystyle X_{\text{query}}\neq X_{\text{key}}=X_{\text{value}}}. It is theoretically possible for all three to be different, but that is rarely the case in practice.
One set of(WQ,WK,WV){\displaystyle \left(W^{Q},W^{K},W^{V}\right)}matrices is called anattention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices,WQ{\displaystyle W^{Q}}andWK{\displaystyle W^{K}}, which are involved in the attention score computation, defines the "relevance". Meanwhile, the value projection matrixWV{\displaystyle W^{V}}, in combination with the part of the output projection matrixWO{\displaystyle W^{O}}, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects.[56]The computations for each attention head can be performed inparallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into thefeed-forward neural networklayers.
Concretely, let the multiple attention heads be indexed byi{\displaystyle i}, then we haveMultiheadedAttention(Q,K,V)=Concati∈[nheads](Attention(QWiQ,KWiK,VWiV))WO{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}))W^{O}}where the matrixX{\displaystyle X}is the concatenation of word embeddings, and the matricesWiQ,WiK,WiV{\displaystyle W_{i}^{Q},W_{i}^{K},W_{i}^{V}}are "projection matrices" owned by individual attention headi{\displaystyle i}, andWO{\displaystyle W^{O}}is a final projection matrix owned by the whole multi-headed attention head.
It is theoretically possible for each attention head to have a different head dimensiondhead{\displaystyle d_{\text{head}}}, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions:demb=768,nhead=12,dhead=64{\displaystyle d_{\text{emb}}=768,n_{\text{head}}=12,d_{\text{head}}=64}Since12×64=768{\displaystyle 12\times 64=768}, its output projection matrixWO∈R(12×64)×768{\displaystyle W^{O}\in \mathbb {R} ^{(12\times 64)\times 768}}is a square matrix.
The Transformer architecture is constructed to calculate output tokens iteratively. Assumingt=0{\displaystyle t=0}refers to the calculation of the first output tokeni=0{\displaystyle i=0}, for stept>0{\displaystyle t>0}, the output tokeni=0{\displaystyle i=0}shall remain constant. This ensures properties of the model similar toautoregressive models.[1]Therefore, at every time stept{\displaystyle t}, the calculation for all outputsi{\displaystyle i}should not have access to tokens at positionj{\displaystyle j}forj>=i{\displaystyle j>=i}(as it naturally is the case for time stept=i{\displaystyle t=i}, when tokensj>t{\displaystyle j>t}are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrixM{\displaystyle M}that is−∞{\displaystyle -\infty }at entries where the attention link must be cut, and0{\displaystyle 0}at other places:MaskedAttention(Q,K,V)=softmax(M+QKTdk)V{\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}The following matrix is commonly used in decoder self-attention modules, called "causal masking":Mcausal=[0−∞−∞…−∞00−∞…−∞000…−∞⋮⋮⋮⋱⋮000…0]{\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}}
In words, it means that each token can pay attention to itself, and every token before it, but not any after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of mask matrix, theXLNetconsiders all masks of the formPMcausalP−1{\displaystyle PM_{\text{causal}}P^{-1}}, whereP{\displaystyle P}is a randompermutation matrix.[57]
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes an input as a sequence of input vectors, applies the self-attention mechanism, to produce an intermediate sequence of vectors, then applies the feed-forward layer for each vector individually. Schematically, we have:given input vectorsh0,h1,…combine them into a matrixH=[h0h1⋮]EncoderLayer(H)=[FFN(MultiheadedAttention(H,H,H)0)FFN(MultiheadedAttention(H,H,H)1)⋮]{\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}}
whereFFN{\displaystyle {\text{FFN}}}stands for "feed-forward network". We can more succinctly write it asEncoderLayer(H)=FFN(MultiheadedAttention(H,H,H)){\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))}with the implicit convention that theFFN{\displaystyle {\text{FFN}}}is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called theencoder-decoder attention.[1][54]
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow.[1]This allows forautoregressivetext generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have:H′=MaskedMultiheadedAttention(H,H,H)DecoderLayer(H)=FFN(MultiheadedAttention(H′,HE,HE)){\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}}whereHE{\displaystyle H^{E}}is the matrix with rows being the output vectors from the encoder.
The last decoder is followed by a final un-embedding layer. to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to the probability, and the decoder can be run again to produce the next token, etc, autoregressively generating output text.
Manylarge language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. EarlyGPTmodels are decoder-only models trained to predict the next token in a sequence.[58]BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.[35]
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
The final points of detail are theresidual connectionsandlayer normalization(LayerNorm, or LN), which while conceptually unnecessary, are necessary for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: thepost-LNand thepre-LNconvention. In the post-LN convention, the output of each sublayer isLayerNorm(x+Sublayer(x)){\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))}whereSublayer(x){\displaystyle \mathrm {Sublayer} (x)}is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer isx+Sublayer(LayerNorm(x)){\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018,[59]was found to be easier to train, requiring no warm-up, leading to faster convergence.[46]
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from[60]
The Transformer architecture, being modular, allows variations. Several common variations are described here.[61]
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding andrepresentation learningfor downstream applications.BERTis encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.[51]
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer is composed of just two sublayers: the causally masked self-attention, and the feedforward network. This is usually used fortext generationandinstruction following. The models in theGPT seriesandChinchilla seriesare decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such asalternative activation functions,changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in theT5 seriesare encoder-decoder.[61]
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has mask of the form[61]: Figure 3MprefixLM=[0−∞0Mcausal]{\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}}where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but has less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and benchmarked comparisons.[51]
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.[62]
The original transformer usesReLUactivation function. Other activation functions were developed. TheLlama seriesandPaLMused SwiGLU;[63]both GPT-1 and BERT[35]used GELU.[64]
Alternative activation functions are often used in combination withGated Linear Unitsin the feedforward module.[63]
The normalization used in the Transformer can be different from LayerNorm. One example isRMSNorm[65]which is used in theLlama series. Other examples include CapsuleNorm[66]ScaleNorm,[67]or FixNorm.[67]
Transformers may use other positional encoding methods than sinusoidal.[68]
The original Transformer paper reported using a learned positional encoding,[69]but finding it not superior to the sinusoidal one.[1]Later,[70]found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.
RoPE (rotary positional embedding),[71]is best explained by considering a list of 2-dimensional vectors[(x1(1),x1(2)),(x2(1),x2(2)),(x3(1),x3(2)),...]{\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]}. Now pick some angleθ{\displaystyle \theta }. Then RoPE encoding isRoPE(xm(1),xm(2),m)=(cosmθ−sinmθsinmθcosmθ)(xm(1)xm(2))=(xm(1)cosmθ−xm(2)sinmθxm(2)cosmθ+xm(1)sinmθ){\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}}Equivalently, if we write the 2-dimensional vectors as complex numberszm:=xm(1)+ixm(2){\displaystyle z_{m}:=x_{m}^{(1)}+ix_{m}^{(2)}}, then RoPE encoding is just multiplication by an angle:RoPE(zm,m)=eimθzm{\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}}For a list of2n{\displaystyle 2n}-dimensional vectors, a RoPE encoder is defined by a sequence of anglesθ(1),...,θ(n){\displaystyle \theta ^{(1)},...,\theta ^{(n)}}. Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends on their relative location only:RoPE(x,m)TRoPE(y,n)=RoPE(x,m+k)TRoPE(y,n+k){\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}}for any integerk{\displaystyle k}.
ALiBi (Attention with Linear Biases)[72]is not areplacementfor the positional encoder on the original transformer. Instead, it is anadditionalpositional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism isAttention(Q,K,V)=softmax(QKTdk+sB)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}}Here,s{\displaystyle s}is a real number ("scalar"), andB{\displaystyle B}is thelinear biasmatrix defined byB=(0123⋯−1012⋯−2−101⋯−3−2−10⋯⋮⋮⋮⋮⋱){\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}}in other words,Bi,j=j−i{\displaystyle B_{i,j}=j-i}. The idea being that the linear bias matrix is a softened mask. Just as0{\displaystyle 0}represent full attention paid, and−∞{\displaystyle -\infty }represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
Relative Position Encodings[73]is similar to ALiBi, but more generic:Attention(Q,K,V)=softmax(QKTdk+B)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}}whereB{\displaystyle B}is aToeplitz matrix, that is,Bi,j=Bi′,j′{\displaystyle B_{i,j}=B_{i',j'}}wheneveri−j=i′−j′{\displaystyle i-j=i'-j'}. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".[74]
The transformer model has been implemented in standard deep learningframeworkssuch asTensorFlowandPyTorch.Transformersis a library produced byHugging Facethat supplies transformer-based architectures and pretrained models.[11]
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. TheKV cachingmethod saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token.PagedAttentionappliesmemory pagingto KV caching.[75][76][77]
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
FlashAttention[78]is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performsmatrix multiplications in blocks, such that each block fits within thecacheof a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page onsoftmaxfor details.
An improved version, FlashAttention-2,[79][80][81]was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s onA100GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).[82]
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware likeH100GPUs and new data types like FP8.
Multi-Query Attention changes the multiheaded attention mechanism.[83]Whereas normally,
MultiheadedAttention(Q,K,V)=Concati∈[nheads](Attention(XWiQ,XWiK,XWiV))WO{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}}with Multi-Query Attention, there is just oneWK,WV{\displaystyle W^{K},W^{V}}, thus:
MultiQueryAttention(Q,K,V)=Concati∈[nheads](Attention(XWiQ,XWK,XWV))WO{\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}}
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.[84]
Multihead Latent Attention (MLA) is alow-rank approximationto standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.[85]
Speculative decoding[86][87]is a method to accelerate token decoding. Similarly tospeculative executionin CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run for 512 times, each time generating a tokenx1,x2,...,x512{\displaystyle x_{1},x_{2},...,x_{512}}, taking time512TGPT-3{\displaystyle 512T_{\text{GPT-3}}}. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that eachxt{\displaystyle x_{t}}is indeed the token with the largest log-likelihood in thet{\displaystyle t}-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens:x~1,x~2,x~3,x~4{\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}}. This only takes4TGPT-3-small{\displaystyle 4T_{\text{GPT-3-small}}}. These tokens are then run through the larger GPT-3 in one go. Suppose thatx~1{\displaystyle {\tilde {x}}_{1}}andx~2{\displaystyle {\tilde {x}}_{2}}are verified by GPT-3 as what it would have picked, then those are kept, butx~3{\displaystyle {\tilde {x}}_{3}}is not, sox~3,x~4{\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}}are discarded, and GPT-3 is run on those. This would take4TGPT-3-small+3TGPT-3{\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}}, which might be shorter than4TGPT-3{\displaystyle 4T_{\text{GPT-3}}}.
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.[86][88]
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict thenexttoken, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.[89][90]
Training transformer-based architectures can be expensive, especially for long inputs.[91]Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows.[92]In the audio domain, SepTr decouples the attention in time and frequency domains.[93]Long Range Arena(2020)[94]is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
The standard attention graph is either all-to-all or causal, both of which scales asO(N2){\displaystyle O(N^{2})}whereN{\displaystyle N}is the number of tokens in a sequence.
Reformer (2020)[91][95]reduces the computational load fromO(N2){\displaystyle O(N^{2})}toO(NlnN){\displaystyle O(N\ln N)}by usinglocality-sensitive hashingand reversible layers.[96]
Sparse attention[97]uses attention graphs that grows slower thanO(N2){\displaystyle O(N^{2})}. For example, BigBird (2020)[98]uses randomsmall-world networkswhich grows asO(N){\displaystyle O(N)}.
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers[99]reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
Random Feature Attention (2021)[100]usesFourier random features:φ(x)=1D[cos⟨w1,x⟩,sin⟨w1,x⟩,⋯cos⟨wD,x⟩,sin⟨wD,x⟩]T{\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}}wherew1,...,wD{\displaystyle w_{1},...,w_{D}}are independent samples from the normal distributionN(0,σ2I){\displaystyle N(0,\sigma ^{2}I)}. This choice of parameters satisfyE[⟨φ(x),φ(y)⟩]=e−‖x−y‖22σ2{\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}}, ore⟨x,y⟩/σ2=E[⟨e‖x‖2/2σ2φ(x),e‖y‖2/2σ2φ(y)⟩]≈⟨e‖x‖2/2σ2φ(x),e‖y‖2/2σ2φ(y)⟩{\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle }Consequently, the one-headed attention, with one query, can be written asAttention(q,K,V)=softmax(qKTdk)V≈φ(q)T∑ie‖ki‖2/2σ2φ(ki)viTφ(q)T∑ie‖ki‖2/2σ2φ(ki){\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}}whereσ=dK1/4{\displaystyle \sigma =d_{K}^{1/4}}. Similarly for multiple queries, and for multiheaded attention.
This approximation can be computed in linear time, as we can compute the matrixφ(ki)viT{\displaystyle \varphi (k_{i})v_{i}^{T}}first, then multiply it with the query. In essence, we have managed to obtain a more precise version ofAttention(Q,K,V)=softmax(QKTdk)V≈Q(KTV/dk){\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})}Performer (2022)[101]uses the same Random Feature Attention, butw1,...,wD{\displaystyle w_{1},...,w_{D}}are first independently sampled from the normal distributionN(0,σ2I){\displaystyle N(0,\sigma ^{2}I)}, then they areGram-Schmidt processed.
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstratingtransfer learning.[102]The LLaVA was a vision-language model composed of a language model (Vicuna-13B)[103]and a vision model (ViT-L/14), connected by a linear layer. Only the linear layer is finetuned.[104]
Vision transformers[41]adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
Conformer[42]and laterWhisper[105]follow the same pattern forspeech recognition, first turning the speech signal into aspectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers[106][107]are a variant of Transformers designed for multimodality.
For image generation, notable architectures areDALL-E 1(2021), Parti (2022),[108]Phenaki (2023),[109]and Muse (2023).[110]Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by avariational autoencoderto an image.[111]Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image.[112]Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted.[110]Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.[109]
The transformer has had great success innatural language processing(NLP). Manylarge language modelssuch asGPT-2,GPT-3,GPT-4,Gemini, AlbertAGPT,Claude,BERT,Grok,XLNet,RoBERTaandChatGPTdemonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
|
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
|
Truecasing, also calledcapitalization recovery,[1]capitalization correction,[2]orcase restoration,[3]is the problem innatural language processing(NLP) of determining the propercapitalizationof words where such information is unavailable. This commonly comes up due to the standard practice (inEnglishand many other languages) of automatically capitalizing the first word of a sentence. It can also arise in badly cased or noncased text (for example, all-lowercase or all-uppercasetext messages).
Truecasing is unnecessary in languages whose scripts do not have a distinction between uppercase and lowercase letters. This includes all languages not written in theLatin,Greek,CyrillicorArmenian alphabets, such asKorean,Japanese,Chinese,Thai,Hebrew,Arabic,Hindi, andGeorgian.
Truecasing aids in other NLP tasks, such asnamed entity recognition(NER),automatic content extraction(ACE), andmachine translation.[4]Proper capitalization allows easier detection of proper nouns, which are the starting points of NER and ACE. Some translation systems usestatistical machine learningtechniques, which could make use of the information contained in capitalization to increase accuracy.
|
https://en.wikipedia.org/wiki/Truecasing
|
Question answering(QA) is acomputer sciencediscipline within the fields ofinformation retrievalandnatural language processing(NLP) that is concerned with building systems that automatically answerquestionsthat are posed by humans in anatural language.[1]
A question-answering implementation, usually a computer program, may construct its answers by querying a structureddatabaseof knowledge or information, usually aknowledge base. More commonly, question-answering systems can pull answers from an unstructured collection of natural language documents.
Some examples of natural language document collections used for question answering systems include:
Question-answering research attempts to develop ways of answering a wide range of question types, including fact, list,definition, how, why, hypothetical, semantically constrained, and cross-lingual questions.
Another way to categorize question-answering systems is by the technical approach used. There are a number of different types of QA systems, including
Rule-based systems use a set of rules to determine the correct answer to a question. Statistical systems use statistical methods to find the most likely answer to a question. Hybrid systems use a combination of rule-based and statistical methods.
Two early question answering systems were BASEBALL[4]and LUNAR.[5]BASEBALL answered questions about Major League Baseball over a period of one year[ambiguous]. LUNAR answered questions about the geological analysis of rocks returned by the Apollo Moon missions. Both question answering systems were very effective in their chosen domains. LUNAR was demonstrated at a lunar science convention in 1971 and it was able to answer 90% of the questions in its domain that were posed by people untrained on the system. Further restricted-domain question answering systems were developed in the following years. The common feature of all these systems is that they had a core database or knowledge system that was hand-written by experts of the chosen domain. The language abilities of BASEBALL and LUNAR used techniques similar toELIZAandDOCTOR, the firstchatterbotprograms.
SHRDLUwas a successful question-answering program developed byTerry Winogradin the late 1960s and early 1970s. It simulated the operation of a robot in a toy world (the "blocks world"), and it offered the possibility of asking the robot questions about the state of the world. The strength of this system was the choice of a very specific domain and a very simple world with rules of physics that were easy to encode in a computer program.
In the 1970s,knowledge baseswere developed that targeted narrower domains of knowledge. The question answering systems developed to interface with theseexpert systemsproducedmore repeatable[clarification needed]and valid responses to questions within an area of knowledge. These expert systems closely resembled modern question answering systems except in their internal architecture. Expert systems rely heavily on expert-constructed and organizedknowledge bases, whereas many modern question answering systems rely on statistical processing of a large, unstructured, natural language text corpus.
The 1970s and 1980s saw the development of comprehensive theories incomputational linguistics, which led to the development of ambitious projects in text comprehension and question answering. One example was the Unix Consultant (UC), developed byRobert WilenskyatU.C. Berkeleyin the late 1980s. The system answered questions pertaining to theUnixoperating system. It had a comprehensive, hand-crafted knowledge base of its domain, and it aimed at phrasing the answer to accommodate various types of users. Another project was LILOG, atext-understandingsystem that operated on the domain of tourism information in a German city. The systems developed in the UC and LILOG projects never went past the stage of simple demonstrations, but they helped the development of theories on computational linguistics and reasoning.
Specialized natural-language question answering systems have been developed, such as EAGLi for health and life scientists.[6]
QA systems are used in a variety of applications, including
As of 2001[update], question-answering systems typically included aquestion classifiermodule that determined the type of question and the type of answer.[7]
Different types of question-answering systems employ different architectures. For example, modern open-domain question answering systems may use a retriever-reader architecture. The retriever is aimed at retrieving relevant documents related to a given question, while the reader is used to infer the answer from the retrieved documents. Systems such asGPT-3, T5,[8]and BART[9]use an end-to-end[jargon]architecture in which a transformer-based[jargon]architecture stores large-scale textual data in the underlying parameters. Such models can answer questions without accessing any external knowledge sources.
Question answering is dependent on a good searchcorpus; without documents containing the answer, there is little any question answering system can do. Larger collections generally mean better question answering performance, unless the question domain is orthogonal to the collection.Data redundancyin massive collections, such as the web, means that nuggets of information are likely to be phrased in many different ways in differing contexts and documents,[10]leading to two benefits:
Some question answering systems rely heavily onautomated reasoning.[11][12]
Ininformation retrieval, an open-domain question answering system tries to return an answer in response to the user's question. The returned answer is in the form of short texts rather than a list of relevant documents.[13]The system finds answers by using a combination of techniques fromcomputational linguistics,information retrieval, andknowledge representation.
The system takes anatural languagequestion as an input rather than a set of keywords, for example: "When is the national day of China?" It then transforms this input sentence into a query in itslogical form. Accepting natural language questions makes the system more user-friendly, but harder to implement, as there are a variety of question types and the system will have to identify the correct one in order to give a sensible answer. Assigning a question type to the question is a crucial task; the entire answer extraction process relies on finding the correct question type and hence the correct answer type.
Keywordextractionis the first step in identifying the input question type.[14]In some cases, words clearly indicate the question type, e.g., "Who", "Where", "When", or "How many"—these words might suggest to the system that the answers should be of type "Person", "Location", "Date", or "Number", respectively.POS (part-of-speech) taggingand syntactic parsing techniques can also determine the answer type. In the example above, the subject is "Chinese National Day", the predicate is "is" and the adverbial modifier is "when", therefore the answer type is "Date". Unfortunately, some interrogative words like "Which", "What", or "How" do not correspond to unambiguous answer types: Each can represent more than one type. In situations like this, other words in the question need to be considered. A lexical dictionary such asWordNetcan be used for understanding the context.
Once the system identifies the question type, it uses aninformation retrievalsystem to find a set of documents that contain the correct keywords. AtaggerandNP/Verb Group chunkercan verify whether the correct entities and relations are mentioned in the found documents. For questions such as "Who" or "Where", anamed-entity recogniserfinds relevant "Person" and "Location" names from the retrieved documents.Only the relevant paragraphs are selected for ranking.[clarification needed]
Avector space modelcan classify the candidate answers. Check[who?]if the answer is of the correct type as determined in the question type analysis stage. An inference technique can validate the candidate answers. A score is then given to each of these candidates according to the number of question words it contains and how close these words are to the candidate—the more and the closer the better. The answer is then translated by parsing into a compact and meaningful representation. In the previous example, the expected output answer is "1st Oct."
An open-source, math-aware, question answering system calledMathQA, based onAsk PlatypusandWikidata, was published in 2018.[15]MathQA takes an English or Hindi natural language question as input and returns a mathematical formula retrieved from Wikidata as a succinct answer, translated into a computable form that allows the user to insert values for the variables. The system retrieves names and values of variables and common constants from Wikidata if those are available. It is claimed that the system outperforms a commercial computational mathematical knowledge engine on a test set.[15]MathQA is hosted by Wikimedia athttps://mathqa.wmflabs.org/. In 2022, it was extended to answer 15 math question types.[16]
MathQA methods need to combine natural and formula language. One possible approach is to perform supervised annotation viaEntity Linking. The "ARQMath Task" atCLEF2020[17]was launched to address the problem of linking newly posted questions from the platform MathStack Exchangeto existing ones that were already answered by the community. Providing hyperlinks to already answered, semantically related questions helps users to get answers earlier but is a challenging problem because semantic relatedness is not trivial.[18]The lab was motivated by the fact that 20% of mathematical queries in general-purpose search engines are expressed as well-formed questions.[19]The challenge contained two separate sub-tasks. Task 1: "Answer retrieval" matching old post answers to newly posed questions, and Task 2: "Formula retrieval" matching old post formulae to new questions. Starting with the domain of mathematics, which involves formula language, the goal is to later extend the task to other domains (e.g., STEM disciplines, such as chemistry, biology, etc.), which employ other types of special notation (e.g., chemical formulae).[17][18]
The inverse of mathematical question answering—mathematical question generation—has also been researched. The PhysWikiQuiz physics question generation and test engine retrieves mathematical formulae from Wikidata together with semantic information about their constituting identifiers (names and values of variables).[20]The formulae are then rearranged to generate a set of formula variants. Subsequently, the variables are substituted with random values to generate a large number of different questions suitable for individual student tests. PhysWikiquiz is hosted by Wikimedia athttps://physwikiquiz.wmflabs.org/.
Question answering systems have been extended in recent[may be outdated as of April 2023]years to encompass additional domains of knowledge[21]For example, systems have been developed to automatically answer temporal and geospatial questions, questions of definition and terminology, biographical questions, multilingual questions, and questions about the content of audio, images,[22]and video.[23]Current question answering research topics include:
In 2011,Watson, a question answering computer system developed byIBM, competed in two exhibition matches ofJeopardy!againstBrad RutterandKen Jennings, winning by a significant margin.[32]Facebook Researchmade theirDrQAsystem[33]available under anopen source license. This system usesWikipediaas knowledge source.[2]Theopen sourceframework Haystack bydeepsetcombines open-domain question answering with generative question answering and supports thedomain adaptation[clarification needed]of theunderlying[clarification needed]language modelsforindustry use cases[vague].[34][35]
Large Language Models (LLMs)[36]like GPT-4[37], Gemini[38]are examples of successful QA systems that are enabling more sophisticated understanding and generation of text. When coupled with Multimodal[39]QA Systems, which can process and understand information from various modalities like text, images, and audio, LLMs significantly improve the capabilities of QA systems.
|
https://en.wikipedia.org/wiki/Question_answering
|
ABellman equation, named afterRichard E. Bellman, is anecessary conditionfor optimality associated with the mathematicaloptimizationmethod known asdynamic programming.[1]It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices.[2]This breaks a dynamic optimization problem into asequenceof simpler subproblems, as Bellman's “principle of optimality" prescribes.[3]The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.[4]
The Bellman equation was first applied to engineeringcontrol theoryand to other topics in applied mathematics, and subsequently became an important tool ineconomic theory; though the basic concepts of dynamic programming are prefigured inJohn von NeumannandOskar Morgenstern'sTheory of Games and Economic BehaviorandAbraham Wald'ssequential analysis.[citation needed]The term "Bellman equation" usually refers to the dynamic programming equation (DPE) associated withdiscrete-timeoptimization problems.[5]In continuous-time optimization problems, the analogous equation is apartial differential equationthat is called theHamilton–Jacobi–Bellman equation.[6][7]
In discrete time any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation. The appropriate Bellman equation can be found by introducing new state variables (state augmentation).[8]However, the resulting augmented-state multi-stage optimization problem has a higher dimensional state space than the original multi-stage optimization problem - an issue that can potentially render the augmented problem intractable due to the “curse of dimensionality”. Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation can be found without state augmentation.[9]
To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective: minimizing travel time, minimizing cost, maximizing profits, maximizing utility, etc. The mathematical function that describes this objective is called theobjective function.[citation needed]
Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation that is needed to make a correct decision is called the "state".[10][11]For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth(W){\displaystyle (W)}would be one of theirstate variables, but there would probably be others.
The variables chosen at any given point in time are often called thecontrol variables. For instance, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too.[citation needed]
The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption (c) dependsonlyon wealth (W), we would seek a rulec(W){\displaystyle c(W)}that gives consumption as a function of wealth. Such a rule, determining the controls as a function of the states, is called apolicy function.[12][10]
Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming happinessHcan be represented by a mathematical function, such as autilityfunction and is something defined by wealth), then each level of wealth will be associated with some highest possible level of happiness,H(W){\displaystyle H(W)}. The best possible value of the objective, written as a function of the state, is called thevalue function.[citation needed]
Bellman showed that a dynamicoptimizationproblem indiscrete timecan be stated in arecursive, step-by-step form known asbackward inductionby writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the "Bellman equation". In this approach, the optimal policy in the last time period is specified in advance as a function of the state variable's value at that time, and the resulting optimal value of the objective function is thus expressed in terms of that value of the state variable. Next, the next-to-last period's optimization involves maximizing the sum of that period's period-specific objective function and the optimal value of the future objective function, giving that period's optimal policy contingent upon the value of the state variable as of the next-to-last period decision.[clarification needed]This logic continues recursively back in time, until the first period decision rule is derived, as a function of the initial state variable value, by optimizing the sum of the first-period-specific objective function and the value of the second period's value function, which gives the value for all the future periods. Thus, each period's decision is made by explicitly acknowledging that all future decisions will be optimally made.[citation needed]
Letxt{\displaystyle x_{t}}be the state at timet{\displaystyle t}. For a decision that begins at time 0, we take as given the initial statex0{\displaystyle x_{0}}. At any time, the set of possible actions depends on the current state; we express this asat∈Γ(xt){\displaystyle a_{t}\in \Gamma (x_{t})}, where a particular actionat{\displaystyle a_{t}}represents particular values for one or more control variables, andΓ(xt){\displaystyle \Gamma (x_{t})}is the set of actions available to be taken at statext{\displaystyle x_{t}}. It is also assumed that the state changes fromx{\displaystyle x}to a new stateT(x,a){\displaystyle T(x,a)}when actiona{\displaystyle a}is taken, and that the current payoff from taking actiona{\displaystyle a}in statex{\displaystyle x}isF(x,a){\displaystyle F(x,a)}. Finally, we assume impatience, represented by adiscount factor0<β<1{\displaystyle 0<\beta <1}.
Under these assumptions, an infinite-horizon decision problem takes the following form:
subject to the constraints
Notice that we have defined notationV(x0){\displaystyle V(x_{0})}to denote the optimal value that can be obtained by maximizing this objective function subject to the assumed constraints. This function is thevalue function. It is a function of the initial state variablex0{\displaystyle x_{0}}, since the best value obtainable depends on the initial situation.
The dynamic programming method breaks this decision problem into smaller subproblems. Bellman'sprinciple of optimalitydescribes how to do this:
Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (See Bellman, 1957, Chap. III.3.)[10][11][13]
In computer science, a problem that can be broken apart like this is said to haveoptimal substructure. In the context of dynamicgame theory, this principle is analogous to the concept ofsubgame perfect equilibrium, although what constitutes an optimal policy in this case is conditioned on the decision-maker's opponents choosing similarly optimal policies from their points of view.
As suggested by theprinciple of optimality, we will consider the first decision separately, setting aside all future decisions (we will start afresh from time 1 with the new statex1{\displaystyle x_{1}}). Collecting the future decisions in brackets on the right, the above infinite-horizon decision problem is equivalent to:[clarification needed]
subject to the constraints
Here we are choosinga0{\displaystyle a_{0}}, knowing that our choice will cause the time 1 state to bex1=T(x0,a0){\displaystyle x_{1}=T(x_{0},a_{0})}. That new state will then affect the decision problem from time 1 on. The whole future decision problem appears inside the square brackets on the right.[clarification needed][further explanation needed]
So far it seems we have only made the problem uglier by separating today's decision from future decisions. But we can simplify by noticing that what is inside the square brackets on the right isthe valueof the time 1 decision problem, starting from statex1=T(x0,a0){\displaystyle x_{1}=T(x_{0},a_{0})}.
Therefore, the problem can be rewritten as arecursivedefinition of the value function:
This is the Bellman equation. It may be simplified even further if the time subscripts are dropped and the value of the next state is plugged in:
The Bellman equation is classified as afunctional equation, because solving it means finding the unknown functionV{\displaystyle V}, which is thevalue function. Recall that the value function describes the best possible value of the objective, as a function of the statex{\displaystyle x}. By calculating the value function, we will also find the functiona(x){\displaystyle a(x)}that describes the optimal action as a function of the state; this is called thepolicy function.
In the deterministic setting, other techniques besides dynamic programming can be used to tackle the aboveoptimal controlproblem. However, the Bellman Equation is often the most convenient method of solvingstochasticoptimal control problems.
For a specific example from economics, consider an infinitely-lived consumer with initial wealth endowmenta0{\displaystyle {\color {Red}a_{0}}}at period0{\displaystyle 0}. They have an instantaneousutility functionu(c){\displaystyle u(c)}wherec{\displaystyle c}denotes consumption and discounts the next period utility at a rate of0<β<1{\displaystyle 0<\beta <1}. Assume that what is not consumed in periodt{\displaystyle t}carries over to the next period with interest rater{\displaystyle r}. Then the consumer's utility maximization problem is to choose a consumption plan{ct}{\displaystyle \{{\color {OliveGreen}c_{t}}\}}that solves
subject to
and
The first constraint is the capital accumulation/law of motion specified by the problem, while the second constraint is atransversality conditionthat the consumer does not carry debt at the end of their life. The Bellman equation is
Alternatively, one can treat the sequence problem directly using, for example, theHamiltonian equations.
Now, if the interest rate varies from period to period, the consumer is faced with a stochastic optimization problem. Let the interestrfollow aMarkov processwith probability transition functionQ(r,dμr){\displaystyle Q(r,d\mu _{r})}wheredμr{\displaystyle d\mu _{r}}denotes theprobability measuregoverning the distribution of interest rate next period if current interest rate isr{\displaystyle r}. In this model the consumer decides their current period consumption after the current period interest rate is announced.
Rather than simply choosing a single sequence{ct}{\displaystyle \{{\color {OliveGreen}c_{t}}\}}, the consumer now must choose a sequence{ct}{\displaystyle \{{\color {OliveGreen}c_{t}}\}}for each possible realization of a{rt}{\displaystyle \{r_{t}\}}in such a way that their lifetime expected utility is maximized:
The expectationE{\displaystyle \mathbb {E} }is taken with respect to the appropriate probability measure given byQon the sequences ofr's. Becauseris governed by a Markov process, dynamic programming simplifies the problem significantly. Then the Bellman equation is simply:
Under some reasonable assumption, the resulting optimal policy functiong(a,r) ismeasurable.
For a general stochastic sequential optimization problem with Markovian shocks and where the agent is faced with their decisionex-post, the Bellman equation takes a very similar form
The first known application of a Bellman equation in economics is due toMartin BeckmannandRichard Muth.[19]Martin Beckmann also wrote extensively on consumption theory using the Bellman equation in 1959. His work influencedEdmund S. Phelps, among others.
A celebrated economic application of a Bellman equation isRobert C. Merton's seminal 1973 article on theintertemporal capital asset pricing model.[20](See alsoMerton's portfolio problem). The solution to Merton's theoretical model, one in which investors chose between income today and future income or capital gains, is a form of Bellman's equation. Because economic applications of dynamic programming usually result in a Bellman equation that is adifference equation, economists refer to dynamic programming as a "recursive method" and a subfield ofrecursive economicsis now recognized within economics.
Nancy Stokey,Robert E. Lucas, andEdward Prescottdescribe stochastic and nonstochastic dynamic programming in considerable detail, and develop theorems for the existence of solutions to problems meeting certain conditions. They also describe many examples of modeling theoretical problems in economics using recursive methods.[21]This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimaleconomic growth,resource extraction,principal–agent problems,public finance, businessinvestment,asset pricing,factorsupply, andindustrial organization.Lars LjungqvistandThomas Sargentapply dynamic programming to study a variety of theoretical questions inmonetary policy,fiscal policy,taxation,economic growth,search theory, andlabor economics.[22]Avinash DixitandRobert Pindyckshowed the value of the method for thinking aboutcapital budgeting.[23]Anderson adapted the technique to business valuation, including privately held businesses.[24]
Using dynamic programming to solve concrete problems is complicated by informational difficulties, such as choosing the unobservable discount rate. There are also computational issues, the main one being thecurse of dimensionalityarising from the vast number of possible actions and potential state variables that must be considered before an optimal strategy can be selected. For an extensive discussion of computational issues, see Miranda and Fackler,[25]and Meyn 2007.[26]
InMarkov decision processes, a Bellman equation is arecursionfor expected rewards. For example, the expected reward for being in a particular statesand following some fixed policyπ{\displaystyle \pi }has the Bellman equation:
This equation describes the expected reward for taking the action prescribed by some policyπ{\displaystyle \pi }.
The equation for the optimal policy is referred to as theBellman optimality equation:
whereπ∗{\displaystyle {\pi *}}is the optimal policy andVπ∗{\displaystyle V^{\pi *}}refers to the value function of the optimal policy. The equation above describes the reward for taking the action giving the highest expected return.
|
https://en.wikipedia.org/wiki/Bellman_equation
|
Clustering high-dimensional datais thecluster analysisof data with anywhere from a few dozen to many thousands ofdimensions. Suchhigh-dimensional spacesof data are often encountered in areas such asmedicine, whereDNA microarraytechnology can produce many measurements at once, and the clustering oftext documents, where, if a word-frequency vector is used, the number of dimensions equals thesize of the vocabulary.
Four problems need to be overcome for clustering in high-dimensional data:[1]
Recent research indicates that the discrimination problems only occur when there is a high number of irrelevant dimensions, and that shared-nearest-neighbor approaches can improve results.[2]
Approaches towards clustering in axis-parallel or arbitrarily orientedaffine subspacesdiffer in how they interpret the overall goal, which is finding clusters in data with high dimensionality.[1]An overall different approach is to find clusters based onpatternin the data matrix, often referred to asbiclustering, which is a technique frequently utilized inbioinformatics.
Subspace clustering aims to look for clusters in different combinations of dimensions (i.e., subspaces) and unlike many other clustering approaches does not assume that all of the clusters in a dataset are found in the same set of dimensions.[3]Subspace clustering can take bottom-up or top-down approaches. Bottom-up methods (such as CLIQUE) heuristically identify relevant dimensions by dividing the data space into a grid structure, selecting dense units, and then iteratively linking them if they are adjacent and dense.[3]
The adjacent image shows a mere two-dimensional space where a number of clusters can be identified. In the one-dimensional subspaces, the clustersca{\displaystyle c_{a}}(in subspace{x}{\displaystyle \{x\}}) andcb{\displaystyle c_{b}},cc{\displaystyle c_{c}},cd{\displaystyle c_{d}}(in subspace{y}{\displaystyle \{y\}}) can be found.cc{\displaystyle c_{c}}cannot be considered a cluster in a two-dimensional (sub-)space, since it is too sparsely distributed in thex{\displaystyle x}axis. In two dimensions, the two clusterscab{\displaystyle c_{ab}}andcad{\displaystyle c_{ad}}can be identified.
The problem of subspace clustering is given by the fact that there are2d{\displaystyle 2^{d}}different subspaces of a space withd{\displaystyle d}dimensions. If the subspaces are not axis-parallel, an infinite number of subspaces is possible. Hence, subspace clustering algorithms utilize some kind ofheuristicto remain computationally feasible, at the risk of producing inferior results. For example, thedownward-closure property(cf.association rules) can be used to build higher-dimensional subspaces only by combining lower-dimensional ones, as any subspace T containing a cluster, will result in a full space S also to contain that cluster (i.e. S ⊆ T), an approach taken by most of the traditional algorithms such as CLIQUE,[4]SUBCLU.[5]It is also possible to define a subspace using different degrees of relevance for each dimension, an approach taken by iMWK-Means,[6]EBK-Modes[7]and CBK-Modes.[8]
Projected clustering seeks to assign each point to a unique cluster, but clusters may exist in different subspaces. The general approach is to use a specialdistance functiontogether with a regularclustering algorithm.
For example, the PreDeCon algorithm checks which attributes seem to support a clustering for each point, and adjusts the distance function such that dimensions with lowvarianceare amplified in the distance function.[9]In the figure above, the clustercc{\displaystyle c_{c}}might be found usingDBSCANwith a distance function that places less emphasis on thex{\displaystyle x}-axis and thus exaggerates the low difference in they{\displaystyle y}-axis sufficiently enough to group the points into a cluster.
PROCLUSuses a similar approach with ak-medoidclustering.[10]Initial medoids are guessed, and for each medoid the subspace spanned by attributes with low variance is determined. Points are assigned to the medoid closest, considering only the subspace of that medoid in determining the distance. The algorithm then proceeds as the regularPAMalgorithm.
If the distance function weights attributes differently, but never with 0 (and hence never drops irrelevant attributes), the algorithm is called a"soft"-projected clustering algorithm.
Projection-based clustering is based on a nonlinear projection of high-dimensional data into a two-dimensional space.[11]Typical projection-methods liket-distributed stochastic neighbor embedding(t-SNE),[12]or neighbor retrieval visualizer (NerV)[13]are used to project data explicitly into two dimensions disregarding the subspaces of higher dimension than two and preserving only relevant neighborhoods in high-dimensional data. In the next step, theDelaunay graph[14]between the projected points is calculated, and each vertex between two projected points is weighted with the high-dimensional distance between the corresponding high-dimensional data points. Thereafter the shortest path between every pair of points is computed using theDijkstra algorithm.[15]The shortest paths are then used in the clustering process, which involves two choices depending on the structure type in the high-dimensional data.[11]This Boolean choice can be decided by looking at the topographic map of high-dimensional structures.[16]In a benchmarking of 34 comparable clustering methods, projection-based clustering was the only algorithm that always was able to find the high-dimensional distance or density-based structure of the dataset.[11]Projection-based clustering is accessible in the open-source R package "ProjectionBasedClustering" on CRAN.[17]
Bootstrap aggregation (bagging) can be used to create multiple clusters and aggregate the findings. This is done by taking random subsamples of the data, performing a cluster analysis on each of them and then aggregating the results of the clusterings to generate a dissimilarity measure which can then be used to explore and cluster the original data.[18][19]Since high-dimensional data are likely to have many non-informative features, weights can be used during the bagging process to increase the impact of the more informative aspects. This produces "ABC dissimilarities" which can then be used to explore and cluster the original data and also to assess which features appear to be more impactful in defining the clusters.[20][21][22]
Not all algorithms try to either find a unique cluster assignment for each point or all clusters in all subspaces; many settle for a result in between, where a number of possibly overlapping, but not necessarily exhaustive set of clusters are found. An example is FIRES, which is from its basic approach a subspace clustering algorithm, but uses aheuristictoo aggressive to credibly produce all subspace clusters.[23]Another hybrid approach is to include a human-into-the-algorithmic-loop: Human domain expertise can help to reduce an exponential search space through heuristic selection of samples. This can be beneficial in the health domain where, e.g., medical doctors are confronted with high-dimensional descriptions of patient conditions and measurements on the success of certain therapies. An important question in such data is to compare and correlate patient conditions and therapy results along with combinations of dimensions. The number of dimensions is often very large, consequently one needs to map them to a smaller number of relevant dimensions to be more amenable for expert analysis. This is because irrelevant, redundant, and conflicting dimensions can negatively affect effectiveness and efficiency of the whole analytic process.[24]
Another type of subspaces is considered inCorrelation clustering (Data Mining).
|
https://en.wikipedia.org/wiki/Clustering_high-dimensional_data
|
Inmathematics,concentration of measure(about amedian) is a principle that is applied inmeasure theory,probabilityandcombinatorics, and has consequences for other fields such asBanach spacetheory. Informally, it states that "A random variable that depends in aLipschitzway on many independent variables (but not too much on any of them) is essentially constant".[1]
The concentration of measure phenomenon was put forth in the early 1970s byVitali Milmanin his works on the local theory ofBanach spaces, extending an idea going back to the work ofPaul Lévy.[2][3]It was further developed in the works of Milman andGromov,Maurey,Pisier,Schechtman,Talagrand,Ledoux, and others.
Let(X,d){\displaystyle (X,d)}be ametric spacewith ameasureμ{\displaystyle \mu }on theBorel setswithμ(X)=1{\displaystyle \mu (X)=1}.
Let
where
is theϵ{\displaystyle \epsilon }-extension(also calledϵ{\displaystyle \epsilon }-fattening in the context ofthe Hausdorff distance) of a setA{\displaystyle A}.
The functionα(⋅){\displaystyle \alpha (\cdot )}is called theconcentration rateof the spaceX{\displaystyle X}. The following equivalent definition has many applications:
where the supremum is over all 1-Lipschitz functionsF:X→R{\displaystyle F:X\to \mathbb {R} }, and
the median (or Levy mean)M=MedF{\displaystyle M=\mathop {\mathrm {Med} } F}is defined by the inequalities
Informally, the spaceX{\displaystyle X}exhibits a concentration phenomenon ifα(ϵ){\displaystyle \alpha (\epsilon )}decays very fast asϵ{\displaystyle \epsilon }grows. More formally,
a family of metric measure spaces(Xn,dn,μn){\displaystyle (X_{n},d_{n},\mu _{n})}is called aLévy familyif
the corresponding concentration ratesαn{\displaystyle \alpha _{n}}satisfy
and anormal Lévy familyif
for some constantsc,C>0{\displaystyle c,C>0}. For examples see below.
The first example goes back toPaul Lévy. According to thespherical isoperimetric inequality, among all subsetsA{\displaystyle A}of the sphereSn{\displaystyle S^{n}}with prescribedspherical measureσn(A){\displaystyle \sigma _{n}(A)}, the spherical cap
for suitableR{\displaystyle R}, has the smallestϵ{\displaystyle \epsilon }-extensionAϵ{\displaystyle A_{\epsilon }}(for anyϵ>0{\displaystyle \epsilon >0}).
Applying this to sets of measureσn(A)=1/2{\displaystyle \sigma _{n}(A)=1/2}(whereσn(Sn)=1{\displaystyle \sigma _{n}(S^{n})=1}), one can deduce the followingconcentration inequality:
whereC,c{\displaystyle C,c}are universal constants. Therefore(Sn)n{\displaystyle (S^{n})_{n}}meet the definition above of a normal Lévy family.
Vitali Milmanapplied this fact to several problems in the local theory of Banach spaces, in particular, to give a new proof ofDvoretzky's theorem.
All classical statistical physics is based on the concentration of measure phenomena:
The fundamental idea (‘theorem’) about equivalence of ensembles in thermodynamic limit (Gibbs, 1902[4]andEinstein, 1902-1904[5][6][7]) is exactly the thin shell concentration theorem. For each mechanical system consider thephase spaceequipped by the invariantLiouville measure(the phase volume) and conserving energyE. Themicrocanonical ensembleis just an invariant distribution over the surface of constant energy E obtained by Gibbs as the limit of distributions inphase spacewith constant density in thin layers between the surfaces of states with energyEand with energyE+ΔE. Thecanonical ensembleis given by the probability density in the phase space (with respect to the phase volume)ρ=eF−EkT,{\displaystyle \rho =e^{\frac {F-E}{kT}},}where quantities F=const and T=const are defined by the conditions of probability normalisation and the given expectation of energyE.
When the number of particles is large, then the difference between average values of the macroscopic variables for the canonical and microcanonical ensembles tends to zero, and theirfluctuationsare explicitly evaluated. These results are proven rigorously under some regularity conditions on the energy functionEbyKhinchin(1943).[8]The simplest particular case whenEis a sum of squares was well-known in detail beforeKhinchinand Lévy and even before Gibbs and Einstein. This is theMaxwell–Boltzmann distributionof the particle energy in ideal gas.
The microcanonical ensemble is very natural from the naïve physical point of view: this is just a natural equidistribution on the isoenergetic hypersurface. The canonical ensemble is very useful because of an important property: if a system consists of two non-interacting subsystems, i.e. if the energyEis the sum,E=E1(X1)+E2(X2){\displaystyle E=E_{1}(X_{1})+E_{2}(X_{2})}, whereX1,X2{\displaystyle X_{1},X_{2}}are the states of the subsystems, then the equilibrium states of subsystems are independent, the equilibrium distribution of the system is the product of equilibrium distributions of the subsystems with the same T. The equivalence of these ensembles is the cornerstone of the mechanical foundations of thermodynamics.
|
https://en.wikipedia.org/wiki/Concentration_of_measure
|
Dimensionality reduction, ordimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to itsintrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are oftensparseas a consequence of thecurse of dimensionality, and analyzing the data is usuallycomputationally intractable. Dimensionality reduction is common in fields that deal with large numbers of observations and/or large numbers of variables, such assignal processing,speech recognition,neuroinformatics, andbioinformatics.[1]
Methods are commonly divided into linear and nonlinear approaches.[1]Linear approaches can be further divided intofeature selectionandfeature extraction.[2]Dimensionality reduction can be used fornoise reduction,data visualization,cluster analysis, or as an intermediate step to facilitate other analyses.
The process offeature selectionaims to find a suitable subset of the input variables (features, orattributes) for the task at hand. The three strategies are: thefilterstrategy (e.g.,information gain), thewrapperstrategy (e.g., accuracy-guided search), and theembeddedstrategy (features are added or removed while building the model based on prediction errors).
Data analysissuch asregressionorclassificationcan be done in the reduced space more accurately than in the original space.[3]
Feature projection (also called feature extraction) transforms the data from thehigh-dimensional spaceto a space of fewer dimensions. The data transformation may be linear, as inprincipal component analysis(PCA), but manynonlinear dimensionality reductiontechniques also exist.[4][5]For multidimensional data,tensor representationcan be used in dimensionality reduction throughmultilinear subspace learning.[6]
The main linear technique for dimensionality reduction, principal component analysis, performs a linear mapping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional representation is maximized. In practice, thecovariance(and sometimes thecorrelation)matrixof the data is constructed and theeigenvectorson this matrix are computed. The eigenvectors that correspond to the largest eigenvalues (the principal components) can now be used to reconstruct a large fraction of the variance of the original data. Moreover, the first few eigenvectors can often be interpreted in terms of the large-scale physical behavior of the system, because they often contribute the vast majority of the system's energy, especially in low-dimensional systems. Still, this must be proved on a case-by-case basis as not all systems exhibit this behavior. The original space (with dimension of the number of points) has been reduced (with data loss, but hopefully retaining the most important variance) to the space spanned by a few eigenvectors.[citation needed]
NMF decomposes a non-negative matrix to the product of two non-negative ones, which has been a promising tool in fields where only non-negative signals exist,[7][8]such as astronomy.[9][10]NMF is well known since the multiplicative update rule by Lee & Seung,[7]which has been continuously developed: the inclusion of uncertainties,[9]the consideration of missing data and parallel computation,[11]sequential construction[11]which leads to the stability and linearity of NMF,[10]as well as otherupdatesincluding handling missing data indigital image processing.[12]
With a stable component basis during construction, and a linear modeling process,sequential NMF[11]is able to preserve the flux in direct imaging of circumstellar structures in astronomy,[10]as one of themethods of detecting exoplanets, especially for the direct imaging ofcircumstellar discs. In comparison with PCA, NMF does not remove the mean of the matrices, which leads to physical non-negative fluxes; therefore NMF is able to preserve more information than PCA as demonstrated by Ren et al.[10]
Principal component analysis can be employed in a nonlinear way by means of thekernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is calledkernel PCA.
Other prominent nonlinear techniques includemanifold learningtechniques such asIsomap,locally linear embedding(LLE),[13]Hessian LLE, Laplacian eigenmaps, and methods based on tangent space analysis.[14]These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA.
More recently, techniques have been proposed that, instead of defining a fixed kernel, try to learn the kernel usingsemidefinite programming. The most prominent example of such a technique ismaximum variance unfolding(MVU). The central idea of MVU is to exactly preserve all pairwise distances between nearest neighbors (in the inner product space) while maximizing the distances between points that are not nearest neighbors.
An alternative approach to neighborhood preservation is through the minimization of a cost function that measures differences between distances in the input and output spaces. Important examples of such techniques include: classicalmultidimensional scaling, which is identical to PCA;Isomap, which uses geodesic distances in the data space;diffusion maps, which use diffusion distances in the data space;t-distributed stochastic neighbor embedding(t-SNE), which minimizes the divergence between distributions over pairs of points; and curvilinear component analysis.
A different approach to nonlinear dimensionality reduction is through the use ofautoencoders, a special kind offeedforward neural networkswith a bottleneck hidden layer.[15]The training of deep encoders is typically performed using a greedy layer-wise pre-training (e.g., using a stack ofrestricted Boltzmann machines) that is followed by a finetuning stage based onbackpropagation.
Linear discriminant analysis (LDA) is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events.
GDA deals with nonlinear discriminant analysis using kernel function operator. The underlying theory is close to thesupport-vector machines(SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space.[16][17]Similar to LDA, the objective of GDA is to find a projection for the features into a lower dimensional space by maximizing the ratio of between-class scatter to within-class scatter.
Autoencoders can be used to learn nonlinear dimension reduction functions and codings together with an inverse function from the coding to the original representation.
T-distributed Stochastic Neighbor Embedding (t-SNE) is a nonlinear dimensionality reduction technique useful for the visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well.[18]
Uniform manifold approximation and projection(UMAP) is a nonlinear dimensionality reduction technique. Visually, it is similar to t-SNE, but it assumes that the data is uniformly distributed on alocally connectedRiemannian manifoldand that theRiemannian metricis locally constant or approximately locally constant.
For high-dimensional datasets, dimension reduction is usually performed prior to applying ak-nearest neighbors(k-NN) algorithm in order to mitigate thecurse of dimensionality.[19]
Feature extractionand dimension reduction can be combined in one step, usingprincipal component analysis(PCA),linear discriminant analysis(LDA),canonical correlation analysis(CCA), ornon-negative matrix factorization(NMF) techniques to pre-process the data, followed by clustering viak-NN onfeature vectorsin a reduced-dimension space. Inmachine learning, this process is also called low-dimensionalembedding.[20]
For high-dimensional datasets (e.g., when performing similarity search on live video streams, DNA data, or high-dimensionaltime series), running a fastapproximatek-NN search usinglocality-sensitive hashing,random projection,[21]"sketches",[22]or other high-dimensional similarity search techniques from theVLDB conferencetoolbox may be the only feasible option.
A dimensionality reduction technique that is sometimes used inneuroscienceismaximally informative dimensions,[23]which finds a lower-dimensional representation of a dataset such that as muchinformationas possible about the original data is preserved.
|
https://en.wikipedia.org/wiki/Dimensionality_reduction
|
Dynamic programmingis both amathematical optimizationmethod and analgorithmic paradigm. The method was developed byRichard Bellmanin the 1950s and has found applications in numerous fields, fromaerospace engineeringtoeconomics.
In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in arecursivemanner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to haveoptimal substructure.
If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems.[1]In the optimization literature this relationship is called theBellman equation.
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time.
This is done by defining a sequence ofvalue functionsV1,V2, ...,Vntakingyas an argument representing thestateof the system at timesifrom 1 ton.
The definition ofVn(y) is the value obtained in stateyat the last timen.
The valuesViat earlier timesi=n−1,n− 2, ..., 2, 1 can be found by working backwards, using arecursiverelationship called theBellman equation.
Fori= 2, ...,n,Vi−1at any stateyis calculated fromViby maximizing a simple function (usually the sum) of the gain from a decision at timei− 1 and the functionViat the new state of the system if this decision is made.
SinceVihas already been calculated for the needed states, the above operation yieldsVi−1for those states.
Finally,V1at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
Incontrol theory, a typical problem is to find an admissible controlu∗{\displaystyle \mathbf {u} ^{\ast }}which causes the systemx˙(t)=g(x(t),u(t),t){\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {g} \left(\mathbf {x} (t),\mathbf {u} (t),t\right)}to follow an admissible trajectoryx∗{\displaystyle \mathbf {x} ^{\ast }}on a continuous time intervalt0≤t≤t1{\displaystyle t_{0}\leq t\leq t_{1}}that minimizes acost function
The solution to this problem is an optimal control law or policyu∗=h(x(t),t){\displaystyle \mathbf {u} ^{\ast }=h(\mathbf {x} (t),t)}, which produces an optimal trajectoryx∗{\displaystyle \mathbf {x} ^{\ast }}and acost-to-go functionJ∗{\displaystyle J^{\ast }}. The latter obeys the fundamental equation of dynamic programming:
apartial differential equationknown as theHamilton–Jacobi–Bellman equation, in whichJx∗=∂J∗∂x=[∂J∗∂x1∂J∗∂x2…∂J∗∂xn]T{\displaystyle J_{x}^{\ast }={\frac {\partial J^{\ast }}{\partial \mathbf {x} }}=\left[{\frac {\partial J^{\ast }}{\partial x_{1}}}~~~~{\frac {\partial J^{\ast }}{\partial x_{2}}}~~~~\dots ~~~~{\frac {\partial J^{\ast }}{\partial x_{n}}}\right]^{\mathsf {T}}}andJt∗=∂J∗∂t{\displaystyle J_{t}^{\ast }={\frac {\partial J^{\ast }}{\partial t}}}. One finds that minimizingu{\displaystyle \mathbf {u} }in terms oft{\displaystyle t},x{\displaystyle \mathbf {x} }, and the unknown functionJx∗{\displaystyle J_{x}^{\ast }}and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary conditionJ(t1)=b(x(t1),t1){\displaystyle J\left(t_{1}\right)=b\left(\mathbf {x} (t_{1}),t_{1}\right)}.[2]In practice, this generally requiresnumerical techniquesfor some discrete approximation to the exact optimization relationship.
Alternatively, the continuous process can be approximated by a discrete system, which leads to a following recurrence relation analog to the Hamilton–Jacobi–Bellman equation:
at thek{\displaystyle k}-th stage ofn{\displaystyle n}equally spaced discrete time intervals, and wheref^{\displaystyle {\hat {f}}}andg^{\displaystyle {\hat {\mathbf {g} }}}denote discrete approximations tof{\displaystyle f}andg{\displaystyle \mathbf {g} }. This functional equation is known as theBellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation.[3]
Ineconomics, the objective is generally to maximize (rather than minimize) some dynamicsocial welfare function. In Ramsey's problem, this function relates amounts of consumption to levels ofutility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment incapital stockthat is used in production), known asintertemporal choice. Future consumption is discounted at a constant rateβ∈(0,1){\displaystyle \beta \in (0,1)}. A discrete approximation to the transition equation of capital is given by
wherec{\displaystyle c}is consumption,k{\displaystyle k}is capital, andf{\displaystyle f}is aproduction functionsatisfying theInada conditions. An initial capital stockk0>0{\displaystyle k_{0}>0}is assumed.
Letct{\displaystyle c_{t}}be consumption in periodt, and assume consumption yieldsutilityu(ct)=ln(ct){\displaystyle u(c_{t})=\ln(c_{t})}as long as the consumer lives. Assume the consumer is impatient, so that hediscountsfuture utility by a factorbeach period, where0<b<1{\displaystyle 0<b<1}. Letkt{\displaystyle k_{t}}becapitalin periodt. Assume initial capital is a given amountk0>0{\displaystyle k_{0}>0}, and suppose that this period's capital and consumption determine next period's capital askt+1=Akta−ct{\displaystyle k_{t+1}=Ak_{t}^{a}-c_{t}}, whereAis a positive constant and0<a<1{\displaystyle 0<a<1}. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows:
Written this way, the problem looks complicated, because it involves solving for all the choice variablesc0,c1,c2,…,cT{\displaystyle c_{0},c_{1},c_{2},\ldots ,c_{T}}. (The capitalk0{\displaystyle k_{0}}is not a choice variable—the consumer's initial capital is taken as given.)
The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence ofvalue functionsVt(k){\displaystyle V_{t}(k)}, fort=0,1,2,…,T,T+1{\displaystyle t=0,1,2,\ldots ,T,T+1}which represent the value of having any amount of capitalkat each timet. There is (by assumption) no utility from having capital after death,VT+1(k)=0{\displaystyle V_{T+1}(k)=0}.
The value of any quantity of capital at any previous time can be calculated bybackward inductionusing theBellman equation. In this problem, for eacht=0,1,2,…,T{\displaystyle t=0,1,2,\ldots ,T}, the Bellman equation is
This problem is much simpler than the one we wrote down before, because it involves only two decision variables,ct{\displaystyle c_{t}}andkt+1{\displaystyle k_{t+1}}. Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At timet, his current capitalkt{\displaystyle k_{t}}is given, and he only needs to choose current consumptionct{\displaystyle c_{t}}and savingkt+1{\displaystyle k_{t+1}}.
To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted ask.VT+1(k){\displaystyle V_{T+1}(k)}is already known, so using the Bellman equation once we can calculateVT(k){\displaystyle V_{T}(k)}, and so on until we get toV0(k){\displaystyle V_{0}(k)}, which is thevalueof the initial decision problem for the whole lifetime. In other words, once we knowVT−j+1(k){\displaystyle V_{T-j+1}(k)}, we can calculateVT−j(k){\displaystyle V_{T-j}(k)}, which is the maximum ofln(cT−j)+bVT−j+1(Aka−cT−j){\displaystyle \ln(c_{T-j})+bV_{T-j+1}(Ak^{a}-c_{T-j})}, wherecT−j{\displaystyle c_{T-j}}is the choice variable andAka−cT−j≥0{\displaystyle Ak^{a}-c_{T-j}\geq 0}.
Working backwards, it can be shown that the value function at timet=T−j{\displaystyle t=T-j}is
where eachvT−j{\displaystyle v_{T-j}}is a constant, and the optimal amount to consume at timet=T−j{\displaystyle t=T-j}is
which can be simplified to
We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in periodT, the last period of life.
There are two key attributes that a problem must have in order for dynamic programming to be applicable:optimal substructureandoverlapping sub-problems. If a problem can be solved by combining optimal solutions tonon-overlappingsub-problems, the strategy is called "divide and conquer" instead.[1]This is whymerge sortandquick sortare not classified as dynamic programming problems.
Optimal substructuremeans that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means ofrecursion. For example, given a graphG=(V,E), the shortest pathpfrom a vertexuto a vertexvexhibits optimal substructure: take any intermediate vertexwon this shortest pathp. Ifpis truly the shortest path, then it can be split into sub-pathsp1fromutowandp2fromwtovsuch that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described inIntroduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what theBellman–Ford algorithmor theFloyd–Warshall algorithmdoes.
Overlappingsub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci sequence:Fi=Fi−1+Fi−2, with base caseF1=F2= 1. ThenF43=F42+F41, andF42=F41+F40. NowF41is being solved in the recursive sub-trees of bothF43as well asF42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once.
This can be achieved in either of two ways:[4]
Someprogramming languagescan automaticallymemoizethe result of a function call with a particular set of arguments, in order to speed upcall-by-nameevaluation (this mechanism is referred to ascall-by-need). Some languages make it possible portably (e.g.Scheme,Common Lisp,PerlorD). Some languages have automaticmemoizationbuilt in, such as tabledPrologandJ, which supports memoization with theM.adverb.[5]In any case, this is only possible for areferentially transparentfunction. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such asWolfram Language.
Dynamic programming is widely used in bioinformatics for tasks such assequence alignment,protein folding, RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently byCharles DeLisiin the US[6]and by Georgii Gurskii and Alexander Zasedatelev in theSoviet Union.[7]Recently these algorithms have become very popular in bioinformatics andcomputational biology, particularly in the studies ofnucleosomepositioning andtranscription factorbinding.
From a dynamic programming point of view,Dijkstra's algorithmfor theshortest path problemis a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by theReachingmethod.[8][9][10]
In fact, Dijkstra's explanation of the logic behind the algorithm,[11]namely
Problem 2.Find the path of minimum total length between two given nodesP{\displaystyle P}andQ{\displaystyle Q}.
We use the fact that, ifR{\displaystyle R}is a node on the minimal path fromP{\displaystyle P}toQ{\displaystyle Q}, knowledge of the latter implies the knowledge of the minimal path fromP{\displaystyle P}toR{\displaystyle R}.
is a paraphrasing ofBellman'sfamousPrinciple of Optimalityin the context of theshortest path problem.
Using dynamic programming in the calculation of thenth member of theFibonacci sequenceimproves its performance greatly. Here is a naïve implementation, based directly on the mathematical definition:
Notice that if we call, say,fib(5), we produce a call tree that calls the function on the same value many different times:
In particular,fib(2)was calculated three times from scratch. In larger examples, many more values offib, orsubproblems, are recalculated, leading to an exponential time algorithm.
Now, suppose we have a simplemapobject,m, which maps each value offibthat has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires onlyO(n) time instead of exponential time (but requiresO(n) space):
This technique of saving values that have already been calculated is calledmemoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
In thebottom-upapproach, we calculate the smaller values offibfirst, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.
In both examples, we only calculatefib(2)one time, and then use it to calculate bothfib(4)andfib(3), instead of computing it every time either of them is evaluated.
Consider the problem of assigning values, either zero or one, to the positions of ann×nmatrix, withneven, so that each row and each column contains exactlyn/ 2zeros andn/ 2ones. We ask how many different assignments there are for a givenn{\displaystyle n}. For example, whenn= 4, five possible solutions are
There are at least three possible approaches:brute force,backtracking, and dynamic programming.
Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns (n/ 2zeros andn/ 2ones). As there are2n2{\displaystyle 2^{n^{2}}}possible assignments and(nn/2)n{\displaystyle {\tbinom {n}{n/2}}^{n}}sensible assignments, this strategy is not practical except maybe up ton=6{\displaystyle n=6}.
Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at leastn/ 2. While more sophisticated than brute force, this approach will visit every solution once, making it impractical fornlarger than six, since the number of solutions is already 116,963,796,250 forn= 8, as we shall see.
Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We considerk×nboards, where1 ≤k≤n, whosek{\displaystyle k}rows containn/2{\displaystyle n/2}zeros andn/2{\displaystyle n/2}ones. The functionfto whichmemoizationis applied maps vectors ofnpairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value off((n/2,n/2),(n/2,n/2),…(n/2,n/2)){\displaystyle f((n/2,n/2),(n/2,n/2),\ldots (n/2,n/2))}(n{\displaystyle n}arguments or one vector ofn{\displaystyle n}elements). The process of subproblem creation involves iterating over every one of(nn/2){\displaystyle {\tbinom {n}{n/2}}}possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of thek×nboard and recursively compute the number of solutions to the remaining(k− 1) ×nboard, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a1 ×nboard. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation ofn/ 2(0,1){\displaystyle (0,1)}andn/ 2(1,0){\displaystyle (1,0)}pairs or not.
For example, in the first two boards shown above the sequences of vectors would be
The number of solutions (sequenceA058527in theOEIS) is
Links to the MAPLE implementation of the dynamic programming approach may be found among theexternal links.
Consider acheckerboardwithn×nsquares and a cost functionc(i, j)which returns a cost associated with square(i,j)(ibeing the row,jbeing the column). For instance (on a 5 × 5 checkerboard),
Thusc(1, 3) = 5
Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank; assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on(1,3)can move to(2,2),(2,3)or(2,4).
This problem exhibitsoptimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a functionq(i, j)as
Starting at ranknand descending to rank1, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between ranknand rank1.
The functionq(i, j)is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plusc(i, j). For instance:
Now, let us defineq(i, j)in somewhat more general terms:
The first line of this equation deals with a board modeled as squares indexed on1at the lowest bound andnat the highest bound. The second line specifies what happens at the first rank; providing a base case. The third line, the recursion, is the important part. It represents theA,B,C,Dterms in the example. From this definition we can derive straightforward recursive code forq(i, j). In the following pseudocode,nis the size of the board,c(i, j)is the cost function, andmin()returns the minimum of a number of values:
This function only computes the path cost, not the actual path. We discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow because it too exhibits theoverlapping sub-problemsattribute. That is, it recomputes the same path costs over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional arrayq[i, j]rather than using a function. This avoids recomputation; all the values needed for arrayq[i, j]are computed ahead of time only once. Precomputed values for(i,j)are simply looked up whenever needed.
We also need to know what the actual shortest path is. To do this, we use another arrayp[i, j]; apredecessor array. This array records the path to any squares. The predecessor ofsis modeled as an offset relative to the index (inq[i, j]) of the precomputed path cost ofs. To reconstruct the complete path, we lookup the predecessor ofs, then the predecessor of that square, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following pseudocode:
Now the rest is a simple matter of finding the minimum and printing it.
Ingenetics,sequence alignmentis an important application where dynamic programming is essential.[12]Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find thesequence of edits with the lowest total cost.
The problem can be stated naturally as a recursion, a sequence A is optimally edited into a sequence B by either:
The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum.
Different variants exist, seeSmith–Waterman algorithmandNeedleman–Wunsch algorithm.
TheTower of HanoiorTowers ofHanoiis amathematical gameorpuzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:
The dynamic programming solution consists of solving thefunctional equation
where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h,t) denotes the third rod (neither h nor t), ";" denotes concatenation, and
For n=1 the problem is trivial, namely S(1,h,t) = "move a disk from rod h to rod t" (there is only one disk left).
The number of moves required by this solution is 2n− 1. If the objective is tomaximizethe number of moves (without cycling) then the dynamic programmingfunctional equationis slightly more complicated and 3n− 1 moves are required.[13]
The following is a description of the instance of this famouspuzzleinvolving N=2 eggs and a building with H=36 floors:[14]
To derive a dynamic programmingfunctional equationfor this puzzle, let thestateof the dynamic programming model be a pair s = (n,k), where
For instance,s= (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process iss= (N,H) whereNdenotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n= 0) or whenk= 0, whichever occurs first. If termination occurs at states= (0,k) andk> 0, then the test failed.
Now, let
Then it can be shown that[15]
withW(n,0) = 0 for alln> 0 andW(1,k) =kfor allk. It is easy to solve this equation iteratively by systematically increasing the values ofnandk.
Notice that the above solution takesO(nk2){\displaystyle O(nk^{2})}time with a DP solution. This can be improved toO(nklogk){\displaystyle O(nk\log k)}time by binary searching on the optimalx{\displaystyle x}in the above recurrence, sinceW(n−1,x−1){\displaystyle W(n-1,x-1)}is increasing inx{\displaystyle x}whileW(n,k−x){\displaystyle W(n,k-x)}is decreasing inx{\displaystyle x}, thus a local minimum ofmax(W(n−1,x−1),W(n,k−x)){\displaystyle \max(W(n-1,x-1),W(n,k-x))}is a global minimum. Also, by storing the optimalx{\displaystyle x}for each cell in the DP table and referring to its value for the previous cell, the optimalx{\displaystyle x}for each cell can be found in constant time, improving it toO(nk){\displaystyle O(nk)}time. However, there is an even faster solution that involves a different parametrization of the problem:
Letk{\displaystyle k}be the total number of floors such that the eggs break when dropped from thek{\displaystyle k}th floor (The example above is equivalent to takingk=37{\displaystyle k=37}).
Letm{\displaystyle m}be the minimum floor from which the egg must be dropped to be broken.
Letf(t,n){\displaystyle f(t,n)}be the maximum number of values ofm{\displaystyle m}that are distinguishable usingt{\displaystyle t}tries andn{\displaystyle n}eggs.
Thenf(t,0)=f(0,n)=1{\displaystyle f(t,0)=f(0,n)=1}for allt,n≥0{\displaystyle t,n\geq 0}.
Leta{\displaystyle a}be the floor from which the first egg is dropped in the optimal strategy.
If the first egg broke,m{\displaystyle m}is from1{\displaystyle 1}toa{\displaystyle a}and distinguishable using at mostt−1{\displaystyle t-1}tries andn−1{\displaystyle n-1}eggs.
If the first egg did not break,m{\displaystyle m}is froma+1{\displaystyle a+1}tok{\displaystyle k}and distinguishable usingt−1{\displaystyle t-1}tries andn{\displaystyle n}eggs.
Therefore,f(t,n)=f(t−1,n−1)+f(t−1,n){\displaystyle f(t,n)=f(t-1,n-1)+f(t-1,n)}.
Then the problem is equivalent to finding the minimumx{\displaystyle x}such thatf(x,n)≥k{\displaystyle f(x,n)\geq k}.
To do so, we could compute{f(t,i):0≤i≤n}{\displaystyle \{f(t,i):0\leq i\leq n\}}in order of increasingt{\displaystyle t}, which would takeO(nx){\displaystyle O(nx)}time.
Thus, if we separately handle the case ofn=1{\displaystyle n=1}, the algorithm would takeO(nk){\displaystyle O(n{\sqrt {k}})}time.
But the recurrence relation can in fact be solved, givingf(t,n)=∑i=0n(ti){\displaystyle f(t,n)=\sum _{i=0}^{n}{\binom {t}{i}}}, which can be computed inO(n){\displaystyle O(n)}time using the identity(ti+1)=(ti)t−ii+1{\displaystyle {\binom {t}{i+1}}={\binom {t}{i}}{\frac {t-i}{i+1}}}for alli≥0{\displaystyle i\geq 0}.
Sincef(t,n)≤f(t+1,n){\displaystyle f(t,n)\leq f(t+1,n)}for allt≥0{\displaystyle t\geq 0}, we can binary search ont{\displaystyle t}to findx{\displaystyle x}, giving anO(nlogk){\displaystyle O(n\log k)}algorithm.[16]
Matrix chain multiplication is a well-known example that demonstrates utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply matricesA1,A2,....An{\displaystyle A_{1},A_{2},....A_{n}}. Matrix multiplication is not commutative, but is associative; and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example:
and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result, however they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C=A×B will have dimensions m×q, and will require m*n*q scalar multiplications (using a simplisticmatrix multiplication algorithmfor purposes of illustration).
For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. Matrix A×B×C will be of size m×s and can be calculated in two ways shown below:
Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000+100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parenthesis.
Therefore, our conclusion is that the order of parenthesis matters, and that our task is to find the optimal order of parenthesis.
At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping problems and calculate the optimal arrangement of parenthesis. The dynamic programming solution is presented below.
Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (i.e. Ai× .... × Aj, i.e. i<=j). We split the chain at some matrix k, such that i <= k < j, and try to find out which combination produces minimum m[i,j].
The formula is:
wherekranges fromitoj− 1.
This formula can be coded as shown below, where input parameter "chain" is the chain of matrices, i.e.A1,A2,...An{\displaystyle A_{1},A_{2},...A_{n}}:
So far, we have calculated values for all possiblem[i,j], the minimum number of calculations to multiply a chain from matrixito matrixj, and we have recorded the corresponding "split point"s[i,j]. For example, if we are multiplying chainA1×A2×A3×A4, and it turns out thatm[1, 3] = 100ands[1, 3] = 2, that means that the optimal placement of parenthesis for matrices 1 to 3 is(A1×A2)×A3{\displaystyle (A_{1}\times A_{2})\times A_{3}}and to multiply those matrices will require 100 scalar calculations.
This algorithm will produce "tables"m[, ] ands[, ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with corresponding split at s[1, n]. Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices.
Therefore, the next step is to actually split the chain, i.e. to place the parenthesis where they (optimally) belong. For this purpose we could use the following algorithm:
Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like.
To actually multiply the matrices using the proper splits, we need the following algorithm:
The termdynamic programmingwas originally used in the 1940s byRichard Bellmanto describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions,[17]and the field was thereafter recognized by theIEEEas asystems analysisandengineeringtopic. Bellman's contribution is remembered in the name of theBellman equation, a central result of dynamic programming which restates an optimization problem inrecursiveform.
Bellman explains the reasoning behind the termdynamic programmingin his autobiography,Eye of the Hurricane: An Autobiography:
I spent the Fall quarter (of 1950) atRAND. My first task was to find a name for multistage decision processes. An interesting question is, "Where did the name, dynamic programming, come from?" The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington namedWilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word "research". I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word "programming". I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.
The worddynamicwas chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive.[12]The wordprogrammingreferred to the use of the method to find an optimalprogram, in the sense of a military schedule for training or logistics. This usage is the same as that in the phraseslinear programmingandmathematical programming, a synonym formathematical optimization.[18]
The above explanation of the origin of the term may be inaccurate: According to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953."[19]Also,Harold J. Kushnerstated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstageDantzig'slinear programming by adding dynamic. Perhaps both motivations were true."[20]
|
https://en.wikipedia.org/wiki/Dynamic_programming
|
This is a list oflinear transformationsoffunctionsrelated toFourier analysis. Such transformationsmapa function to a set ofcoefficientsofbasis functions, where the basis functions aresinusoidaland are therefore strongly localized in thefrequency spectrum. (These transforms are generally designed to be invertible.) In the case of the Fourier transform, each basis function corresponds to a singlefrequencycomponent.
Applied to functions of continuous arguments, Fourier-related transforms include:
For usage oncomputers, number theory and algebra, discrete arguments (e.g. functions of a series of discrete samples) are often more appropriate, and are handled by the transforms (analogous to the continuous cases above):
The use of all of these transforms is greatly facilitated by the existence of efficient algorithms based on afast Fourier transform(FFT). TheNyquist–Shannon sampling theoremis critical for understanding the output of such discrete transforms.
|
https://en.wikipedia.org/wiki/Fourier-related_transforms
|
Thegrand touris a technique originally developed by Daniel Asimov 1980–85, which is used to exploremultivariate statistical databy means of an animation. The animation, or "movie", consists of a series of distinct views of the data as seen from different directions, displayed on a computer screen, that appear to change continuously and that get closer and closer to all possible views. This allows a human- or computer-based evaluation of these views, with the goal of detecting patterns that will convey useful information about the data.
This technique is like what many museum visitors do when they encounter a complicated abstract sculpture: They walk around it to view it from all directions, in order to understand it better. The human visual system perceives visual information as a pattern on the retina, which is 2-dimensional. Thus walking around the sculpture to understand it better creates a temporal sequence of 2-dimensional images in the brain.
The multivariate data that is the original input for any grand tour visualization is a (finite) set of points in some high-dimensional Euclidean space. This kind of set arises naturally when data is collected. Suppose that for some population of 1000 people, each person is asked to provide their age, height, weight, and number of nose hairs. Thus to each member of the population there is associated an ordered quadruple of numbers. Sincen-dimensional Euclidean space isdefinedas all ordered n-tuples of numbers, this means that the data on 1000 people correspond to 1000 points in 4-dimensional Euclidean space.
The grand tour converts the spatial complexity of the multivariate data set into temporal complexity by using the relatively simple 2-dimensional views of the projected data as the individual frames of the movie. (These are sometimes called "data views".) The projections will ordinarily be chosen so as not to change too fast, which means that the movie of the data will appearcontinuousto a human observer.
A grand tour "method" is an algorithm for assigning a sequence of projections onto (usually) 2-dimensional planes to any given dimension of Euclidean space. This allows any particular multivariate data set to be projected onto that sequence of 2-dimensional planes and thereby displayed on a computer screen one after the other, so that the effect is to create a movie of the data.
(Note that, once the data has been projected onto a given 2-plane, then in order to display it on a computer screen, it is necessary to choose the directions in that 2-plane that will correspond to the horizontal and vertical directions on the computer screen. This is typically a minor detail. But the choice of horizontal and vertical directions should ideally be done so as to minimize any unnecessary apparent "spinning" of the 2-dimensional data view.)
Each "view" (i.e., frame) of the animation is anorthogonal projectionof the data set onto a 2-dimensional subspace (of the Euclidean spaceRpwhere the data resides). The subspaces are selected by taking small steps along a continuous curve, parametrized by time, in the space of all 2-dimensional subspaces ofRp(known as theGrassmannianG(2,p)). To display these views on a computer screen, it is necessary to pick one particular rotated position of each view (in the plane of the computer screen) for display. This causes the positions of the data points on the computer screen to appear to vary continuously. Asimov showed that these subspaces can be selected so as to make the set of them (up to timet) increasingly close to all points inG(2,p), so that if the grand tour movie were allowed to run indefinitely, the set of displayed subspaces would correspond to adense subsetofG(2,p).[1][2]
|
https://en.wikipedia.org/wiki/Grand_Tour_(data_visualisation)
|
Linear least squares(LLS) is theleast squares approximationoflinear functionsto data.
It is a set of formulations for solving statistical problems involved inlinear regression, including variants forordinary(unweighted),weighted, andgeneralized(correlated)residuals.Numerical methods for linear least squaresinclude inverting the matrix of the normal equations andorthogonal decompositionmethods.
Consider the linear equation
whereA∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}}andb∈Rm{\displaystyle b\in \mathbb {R} ^{m}}are given andx∈Rn{\displaystyle x\in \mathbb {R} ^{n}}is variable to be computed. Whenm>n,{\displaystyle m>n,}it is generally the case that (1) has no solution.
For example, there is no value ofx{\displaystyle x}that satisfies[100111]x=[110],{\displaystyle {\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}x={\begin{bmatrix}1\\1\\0\end{bmatrix}},}because the first two rows require thatx=(1,1),{\displaystyle x=(1,1),}but then the third row is not satisfied.
Thus, form>n,{\displaystyle m>n,}the goal of solving (1) exactly is typically replaced by finding the value ofx{\displaystyle x}that minimizes some error.
There are many ways that the error can be defined, but one of the most common is to define it as‖Ax−b‖2.{\displaystyle \|Ax-b\|^{2}.}This produces a minimization problem, called aleast squares problem
The solution to the least squares problem (1) is computed by solving thenormal equation[1]
whereA⊤{\displaystyle A^{\top }}denotes thetransposeofA{\displaystyle A}.
Continuing the example, above, withA=[100111]andb=[110],{\displaystyle A={\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}\quad {\text{and}}\quad b={\begin{bmatrix}1\\1\\0\end{bmatrix}},}we findA⊤A=[101011][100111]=[2112]{\displaystyle A^{\top }A={\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}}{\begin{bmatrix}1&0\\0&1\\1&1\end{bmatrix}}={\begin{bmatrix}2&1\\1&2\end{bmatrix}}}andA⊤b=[101011][110]=[11].{\displaystyle A^{\top }b={\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}}{\begin{bmatrix}1\\1\\0\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}.}Solving the normal equation givesx=(1/3,1/3).{\displaystyle x=(1/3,1/3).}
The three main linear least squares formulations are:
Other formulations include:
In OLS (i.e., assuming unweighted observations), theoptimal valueof theobjective functionis found by substituting the optimal expression for the coefficient vector:S=yT(I−H)T(I−H)y=yT(I−H)y,{\displaystyle S=\mathbf {y} ^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )\mathbf {y} =\mathbf {y} ^{\mathsf {T}}(\mathbf {I} -\mathbf {H} )\mathbf {y} ,}whereH=X(XTX)−1XT{\displaystyle \mathbf {H} =\mathbf {X} (\mathbf {X} ^{\mathsf {T}}\mathbf {X} )^{-1}\mathbf {X} ^{\mathsf {T}}}, the latter equality holding since(I−H){\displaystyle (\mathbf {I} -\mathbf {H} )}is symmetric and idempotent. It can be shown from this[9]that under an appropriate assignment of weights theexpected valueofSism−n{\textstyle m-n}. If instead unit weights are assumed, the expected value ofSis(m−n)σ2{\displaystyle (m-n)\sigma ^{2}}, whereσ2{\displaystyle \sigma ^{2}}is the variance of each observation.
If it is assumed that the residuals belong to a normal distribution, the objective function, being a sum of weighted squared residuals, will belong to achi-squared(χ2{\displaystyle \chi ^{2}})distributionwithm−ndegrees of freedom. Some illustrative percentile values ofχ2{\displaystyle \chi ^{2}}are given in the following table.[10]
These values can be used for a statistical criterion as to thegoodness of fit. When unit weights are used, the numbers should be divided by the variance of an observation.
For WLS, the ordinary objective function above is replaced for a weighted average of residuals.
Instatisticsandmathematics,linear least squaresis an approach to fitting amathematicalorstatistical modeltodatain cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknownparametersof the model. The resulting fitted model can be used tosummarizethe data, topredictunobserved values from the same system, and to understand the mechanisms that may underlie the system.
Mathematically, linear least squares is the problem of approximately solving anoverdetermined systemof linear equationsAx=b, wherebis not an element of thecolumn spaceof the matrixA. The approximate solution is realized as an exact solution toAx=b', whereb'is the projection ofbonto the column space ofA. The best approximation is then that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is calledlinearleast squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems areconvexand have aclosed-form solutionthat is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast,non-linear least squaresproblems generally must be solved by aniterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using theBayesian MMSE estimator.
In statistics, linear least squares problems correspond to a particularly important type ofstatistical modelcalledlinear regressionwhich arises as a particular form ofregression analysis. One basic form of such a model is anordinary least squaresmodel. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models andstatistical inferencesrelated to these being dealt with in the articles just mentioned. Seeoutline of regression analysisfor an outline of the topic.
If the experimental errors,ε{\displaystyle \varepsilon }, are uncorrelated, have a mean of zero and a constant variance,σ{\displaystyle \sigma }, theGauss–Markov theoremstates that the least-squares estimator,β^{\displaystyle {\hat {\boldsymbol {\beta }}}}, has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statisticaldistribution functionof the errors. In other words,the distribution function of the errors need not be anormal distribution. However, for some probability distributions, there is no guarantee that the least-squares solution is even possible given the observations; still, in such cases it is the best estimator that is both linear and unbiased.
For example, it is easy to show that thearithmetic meanof a set of measurements of a quantity is the least-squares estimator of the value of that quantity. If the conditions of the Gauss–Markov theorem apply, the arithmetic mean is optimal, whatever the distribution of errors of the measurements might be.
However, in the case that the experimental errors do belong to a normal distribution, the least-squares estimator is also amaximum likelihoodestimator.[11]
These properties underpin the use of the method of least squares for all types of data fitting, even when the assumptions are not strictly valid.
An assumption underlying the treatment given above is that the independent variable,x, is free of error. In practice, the errors on the measurements of the independent variable are usually much smaller than the errors on the dependent variable and can therefore be ignored. When this is not the case,total least squaresor more generallyerrors-in-variables models, orrigorous least squares, should be used. This can be done by adjusting the weighting scheme to take into account errors on both the dependent and independent variables and then following the standard procedure.[12][13]
In some cases the (weighted) normal equations matrixXTXisill-conditioned. When fitting polynomials the normal equations matrix is aVandermonde matrix. Vandermonde matrices become increasingly ill-conditioned as the order of the matrix increases.[citation needed]In these cases, the least squares estimate amplifies the measurement noise and may be grossly inaccurate.[citation needed]Variousregularizationtechniques can be applied in such cases, the most common of which is calledridge regression. If further information about the parameters is known, for example, a range of possible values ofβ^{\displaystyle \mathbf {\hat {\boldsymbol {\beta }}} }, then various techniques can be used to increase the stability of the solution. For example, seeconstrained least squares.
Another drawback of the least squares estimator is the fact that the norm of the residuals,‖y−Xβ^‖{\displaystyle \|\mathbf {y} -\mathbf {X} {\hat {\boldsymbol {\beta }}}\|}is minimized, whereas in some cases one is truly interested in obtaining small error in the parameterβ^{\displaystyle \mathbf {\hat {\boldsymbol {\beta }}} }, e.g., a small value of‖β−β^‖{\displaystyle \|{\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}}\|}.[citation needed]However, since the true parameterβ{\displaystyle {\boldsymbol {\beta }}}is necessarily unknown, this quantity cannot be directly minimized. If aprior probabilityonβ^{\displaystyle {\hat {\boldsymbol {\beta }}}}is known, then aBayes estimatorcan be used to minimize themean squared error,E{‖β−β^‖2}{\displaystyle E\left\{\|{\boldsymbol {\beta }}-{\hat {\boldsymbol {\beta }}}\|^{2}\right\}}. The least squares method is often applied when no prior is known. When several parameters are being estimated jointly, better estimators can be constructed, an effect known asStein's phenomenon. For example, if the measurement error isGaussian, several estimators are known whichdominate, or outperform, the least squares technique; the best known of these is theJames–Stein estimator. This is an example of more generalshrinkage estimatorsthat have been applied to regression problems.
The primary application of linear least squares is indata fitting. Given a set ofmdata pointsy1,y2,…,ym,{\displaystyle y_{1},y_{2},\dots ,y_{m},}consisting of experimentally measured values taken atmvaluesx1,x2,…,xm{\displaystyle x_{1},x_{2},\dots ,x_{m}}of an independent variable (xi{\displaystyle x_{i}}may be scalar or vector quantities), and given a model functiony=f(x,β),{\displaystyle y=f(x,{\boldsymbol {\beta }}),}withβ=(β1,β2,…,βn),{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\beta _{2},\dots ,\beta _{n}),}it is desired to find the parametersβj{\displaystyle \beta _{j}}such that the model function "best" fits the data. In linear least squares, linearity is meant to be with respect to parametersβj,{\displaystyle \beta _{j},}sof(x,β)=∑j=1nβjφj(x).{\displaystyle f(x,{\boldsymbol {\beta }})=\sum _{j=1}^{n}\beta _{j}\varphi _{j}(x).}
Here, the functionsφj{\displaystyle \varphi _{j}}may benonlinearwith respect to the variablex.
Ideally, the model function fits the data exactly, soyi=f(xi,β){\displaystyle y_{i}=f(x_{i},{\boldsymbol {\beta }})}for alli=1,2,…,m.{\displaystyle i=1,2,\dots ,m.}This is usually not possible in practice, as there are more data points than there are parameters to be determined. The approach chosen then is to find the minimal possible value of the sum of squares of theresidualsri(β)=yi−f(xi,β),(i=1,2,…,m){\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f(x_{i},{\boldsymbol {\beta }}),\ (i=1,2,\dots ,m)}so to minimize the functionS(β)=∑i=1mri2(β).{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{m}r_{i}^{2}({\boldsymbol {\beta }}).}
After substituting forri{\displaystyle r_{i}}and then forf{\displaystyle f}, this minimization problem becomes the quadratic minimization problem above withXij=φj(xi),{\displaystyle X_{ij}=\varphi _{j}(x_{i}),}and the best fit can be found by solving the normal equations.
A hypothetical researcher conducts an experiment and obtains four(x,y){\displaystyle (x,y)}data points:(1,6),{\displaystyle (1,6),}(2,5),{\displaystyle (2,5),}(3,7),{\displaystyle (3,7),}and(4,10){\displaystyle (4,10)}(shown in red in the diagram on the right). Because of exploratory data analysis or prior knowledge of the subject matter, the researcher suspects that they{\displaystyle y}-values depend on thex{\displaystyle x}-values systematically. Thex{\displaystyle x}-values are assumed to be exact, but they{\displaystyle y}-values contain some uncertainty or "noise", because of the phenomenon being studied, imperfections in the measurements, etc.
One of the simplest possible relationships betweenx{\displaystyle x}andy{\displaystyle y}is a liney=β1+β2x{\displaystyle y=\beta _{1}+\beta _{2}x}. The interceptβ1{\displaystyle \beta _{1}}and the slopeβ2{\displaystyle \beta _{2}}are initially unknown. The researcher would like to find values ofβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}that cause the line to pass through the four data points. In other words, the researcher would like to solve the system of linear equationsβ1+1β2=6,β1+2β2=5,β1+3β2=7,β1+4β2=10.{\displaystyle {\begin{alignedat}{3}\beta _{1}+1\beta _{2}&&\;=\;&&6,&\\\beta _{1}+2\beta _{2}&&\;=\;&&5,&\\\beta _{1}+3\beta _{2}&&\;=\;&&7,&\\\beta _{1}+4\beta _{2}&&\;=\;&&10.&\\\end{alignedat}}}With four equations in two unknowns, this system is overdetermined. There is no exact solution. To consider approximate solutions, one introducesresidualsr1{\displaystyle r_{1}},r2{\displaystyle r_{2}},r3{\displaystyle r_{3}},r4{\displaystyle r_{4}}into the equations:β1+1β2+r1=6,β1+2β2+r2=5,β1+3β2+r3=7,β1+4β2+r4=10.{\displaystyle {\begin{alignedat}{3}\beta _{1}+1\beta _{2}+r_{1}&&\;=\;&&6,&\\\beta _{1}+2\beta _{2}+r_{2}&&\;=\;&&5,&\\\beta _{1}+3\beta _{2}+r_{3}&&\;=\;&&7,&\\\beta _{1}+4\beta _{2}+r_{4}&&\;=\;&&10.&\\\end{alignedat}}}Thei{\displaystyle i}th residualri{\displaystyle r_{i}}is the misfit between thei{\displaystyle i}th observationyi{\displaystyle y_{i}}and thei{\displaystyle i}th predictionβ1+β2xi{\displaystyle \beta _{1}+\beta _{2}x_{i}}:r1=6−(β1+1β2),r2=5−(β1+2β2),r3=7−(β1+3β2),r4=10−(β1+4β2).{\displaystyle {\begin{alignedat}{3}r_{1}&&\;=\;&&6-(\beta _{1}+1\beta _{2}),&\\r_{2}&&\;=\;&&5-(\beta _{1}+2\beta _{2}),&\\r_{3}&&\;=\;&&7-(\beta _{1}+3\beta _{2}),&\\r_{4}&&\;=\;&&10-(\beta _{1}+4\beta _{2}).&\\\end{alignedat}}}Among all approximate solutions, the researcher would like to find the one that is "best" in some sense.
Inleast squares, one focuses on the sumS{\displaystyle S}of the squared residuals:S(β1,β2)=r12+r22+r32+r42=[6−(β1+1β2)]2+[5−(β1+2β2)]2+[7−(β1+3β2)]2+[10−(β1+4β2)]2=4β12+30β22+20β1β2−56β1−154β2+210.{\displaystyle {\begin{aligned}S(\beta _{1},\beta _{2})&=r_{1}^{2}+r_{2}^{2}+r_{3}^{2}+r_{4}^{2}\\[6pt]&=[6-(\beta _{1}+1\beta _{2})]^{2}+[5-(\beta _{1}+2\beta _{2})]^{2}+[7-(\beta _{1}+3\beta _{2})]^{2}+[10-(\beta _{1}+4\beta _{2})]^{2}\\[6pt]&=4\beta _{1}^{2}+30\beta _{2}^{2}+20\beta _{1}\beta _{2}-56\beta _{1}-154\beta _{2}+210.\\[6pt]\end{aligned}}}The best solution is defined to be the one thatminimizesS{\displaystyle S}with respect toβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}. The minimum can be calculated by setting thepartial derivativesofS{\displaystyle S}to zero:0=∂S∂β1=8β1+20β2−56,{\displaystyle 0={\frac {\partial S}{\partial \beta _{1}}}=8\beta _{1}+20\beta _{2}-56,}0=∂S∂β2=20β1+60β2−154.{\displaystyle 0={\frac {\partial S}{\partial \beta _{2}}}=20\beta _{1}+60\beta _{2}-154.}Thesenormal equationsconstitute a system of two linear equations in two unknowns. The solution isβ1=3.5{\displaystyle \beta _{1}=3.5}andβ2=1.4{\displaystyle \beta _{2}=1.4}, and the best-fit line is thereforey=3.5+1.4x{\displaystyle y=3.5+1.4x}.
The residuals are1.1,{\displaystyle 1.1,}−1.3,{\displaystyle -1.3,}−0.7,{\displaystyle -0.7,}and0.9{\displaystyle 0.9}(see the diagram on the right). The minimum value of the sum of squared residuals isS(3.5,1.4)=1.12+(−1.3)2+(−0.7)2+0.92=4.2.{\displaystyle S(3.5,1.4)=1.1^{2}+(-1.3)^{2}+(-0.7)^{2}+0.9^{2}=4.2.}
This calculation can be expressed in matrix notation as follows. The original system of equations isy=Xβ{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } }, wherey=[65710],X=[11121314],β=[β1β2].{\displaystyle \mathbf {y} =\left[{\begin{array}{c}6\\5\\7\\10\end{array}}\right],\;\;\;\;\mathbf {X} =\left[{\begin{array}{cc}1&1\\1&2\\1&3\\1&4\end{array}}\right],\;\;\;\;\mathbf {\beta } =\left[{\begin{array}{c}\beta _{1}\\\beta _{2}\end{array}}\right].}Intuitively,y=Xβ⇒X⊤y=X⊤Xβ⇒β=(X⊤X)−1X⊤y=[3.51.4].{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } \;\;\;\;\Rightarrow \;\;\;\;\mathbf {X} ^{\top }\mathbf {y} =\mathbf {X} ^{\top }\mathbf {X} \mathbf {\beta } \;\;\;\;\Rightarrow \;\;\;\;\mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\left[{\begin{array}{c}3.5\\1.4\end{array}}\right].}More rigorously, ifX⊤X{\displaystyle \mathbf {X} ^{\top }\mathbf {X} }is invertible, then the matrixX(X⊤X)−1X⊤{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }}represents orthogonal projection onto the column space ofX{\displaystyle \mathbf {X} }. Therefore, among all vectors of the formXβ{\displaystyle \mathbf {X} \mathbf {\beta } }, the one closest toy{\displaystyle \mathbf {y} }isX(X⊤X)−1X⊤y{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} }. SettingX(X⊤X)−1X⊤y=Xβ,{\displaystyle \mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\mathbf {X} \mathbf {\beta } ,}it is evident thatβ=(X⊤X)−1X⊤y{\displaystyle \mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} }is a solution.
Suppose that the hypothetical researcher wishes to fit a parabola of the formy=β1x2{\displaystyle y=\beta _{1}x^{2}}. Importantly, this model is still linear in the unknown parameters (now justβ1{\displaystyle \beta _{1}}), so linear least squares still applies. The system of equations incorporating residuals is6=β1(1)2+r15=β1(2)2+r27=β1(3)2+r310=β1(4)2+r4{\displaystyle {\begin{alignedat}{2}6&&\;=\beta _{1}(1)^{2}+r_{1}\\5&&\;=\beta _{1}(2)^{2}+r_{2}\\7&&\;=\beta _{1}(3)^{2}+r_{3}\\10&&\;=\beta _{1}(4)^{2}+r_{4}\\\end{alignedat}}}
The sum of squared residuals isS(β1)=(6−β1)2+(5−4β1)2+(7−9β1)2+(10−16β1)2.{\displaystyle S(\beta _{1})=(6-\beta _{1})^{2}+(5-4\beta _{1})^{2}+(7-9\beta _{1})^{2}+(10-16\beta _{1})^{2}.}There is just one partial derivative to set to 0:0=∂S∂β1=708β1−498.{\displaystyle 0={\frac {\partial S}{\partial \beta _{1}}}=708\beta _{1}-498.}The solution isβ1=0.703{\displaystyle \beta _{1}=0.703}, and the fit model isy=0.703x2{\displaystyle y=0.703x^{2}}.
In matrix notation, the equations without residuals are againy=Xβ{\displaystyle \mathbf {y} =\mathbf {X} \mathbf {\beta } }, where nowy=[65710],X=[14916],β=[β1].{\displaystyle \mathbf {y} =\left[{\begin{array}{c}6\\5\\7\\10\end{array}}\right],\;\;\;\;\mathbf {X} =\left[{\begin{array}{c}1\\4\\9\\16\end{array}}\right],\;\;\;\;\mathbf {\beta } =\left[{\begin{array}{c}\beta _{1}\end{array}}\right].}By the same logic as above, the solution isβ=(X⊤X)−1X⊤y=[0.703].{\displaystyle \mathbf {\beta } =\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\mathbf {y} =\left[{\begin{array}{c}0.703\end{array}}\right].}
The figure shows an extension to fitting the three parameter parabola using a design matrixX{\displaystyle \mathbf {X} }with three columns (one forx0{\displaystyle x^{0}},x1{\displaystyle x^{1}}, andx2{\displaystyle x^{2}}), and one row for each of the red data points.
More generally, one can haven{\displaystyle n}regressorsxj{\displaystyle x_{j}}, and a linear modely=β0+∑j=1nβjxj.{\displaystyle y=\beta _{0}+\sum _{j=1}^{n}\beta _{j}x_{j}.}
|
https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)
|
Model order reduction (MOR)is a technique for reducing thecomputational complexityofmathematical modelsinnumerical simulations. As such it is closely related to the concept ofmetamodeling, with applications in all areas ofmathematical modelling.
Many modernmathematical modelsof real-life processes pose challenges when used innumerical simulations, due to complexity and large size (dimension). Model order reduction aims to lower the computational complexity of such problems, for example, in simulations of large-scaledynamical systemsandcontrol systems. By a reduction of the model's associatedstate spacedimension ordegrees of freedom, an approximation to the original model is computed which is commonly referred to as a reduced order model.
Reduced order models are useful in settings where it is often unfeasible to performnumerical simulationsusing the complete full order model. This can be due to limitations incomputational resourcesor the requirements of the simulations setting, for instancereal-time simulationsettings or many-query settings in which a large number of simulations needs to be performed.[1][2]Examples of Real-time simulation settings includecontrol systemsin electronics andvisualizationof model results while examples for a many-query setting can includeoptimizationproblems and design exploration. In order to be applicable to real-world problems, often the requirements of a reduced order model are:[3][4]
It is interesting to note that in some cases (e.g. constrained lumping of polynomial differential equations) it is possible to have a null approximation error, resulting in an exact model order reduction.[5]
Contemporary model order reduction techniques can be broadly classified into 5 classes:[1][6]
The simplified physics approach can be described to be analogous to the traditionalmathematical modellingapproach, in which a less complex description of a system is constructed based on assumptions and simplifications using physical insight or otherwise derived information. However, this approach is not often the topic of discussion in the context of model order reduction as it is a general method in science, engineering, and mathematics.
Proper orthogonal decomposition, reduced basis, and balancing methods fall into the category of projection-based reduction. Projection-based reduction relies on the projection of either the model equations or the solution onto a basis of reduced dimensionality compared to the original solution space. Methods that also fall into this class but are perhaps less common are:
Nonlinear and manifold model reduction methods derive nonlinear approximations on manifolds and so can achieve higher accuracy with the same number of degrees of freedom than traditional methods that obtain linear approximations in subspaces.[11]Building on nonlinear approximations is essential for efficiently reducing certain problem classes such as wave problems and advection-dominated problems in computational fluid dynamics. The nature and principles underlying nonlinear model reduction methods are broad and include template-based methods,[15][16][17]the use of neural networks[18][19][20]and online adaptive spaces.[21][22]
There are also nonintrusive model reduction methods that learn reduced models from data without requiring knowledge about the governing equations and internals of the full, high-fidelity model. Nonintrusive methods learn a low-dimensional approximation space or manifold and the reduced operators that represent the reduced dynamics from data. Methods that are non-intrusive include:
Model order reduction finds application within all fields involving mathematical modelling and many reviews[10][12]exist for the topics ofelectronics,[30]fluid mechanics,[31]hydrodynamics,[32]structural mechanics,[7]MEMS,[33]Boltzmann equation,[8]anddesign optimization.[13][34]
Current problems in fluid mechanics involve largedynamical systemsrepresenting many effects on many different scales.Computational fluid dynamicsstudies often involve models solving theNavier–Stokes equationswith a number ofdegrees of freedomin the order of magnitude upwards of106{\displaystyle 10^{6}}. The first usage of model order reduction techniques dates back to the work of Lumley in 1967,[35]where it was used to gain insight into the mechanisms and intensity ofturbulenceandlarge coherent structurespresent in fluid flow problems. Model order reduction also finds modern applications inaeronauticsto model the flow over the body of aircraft.[36]An example can be found in Lieu et al[37]in which the full order model of anF16fighter-aircraft with over 2.1 million degrees of freedom, was reduced to a model of just 90 degrees of freedom. Additionally reduced order modeling has been applied to studyrheologyinhemodynamicsand thefluid–structure interactionbetween the blood flowing through the vascular system and the vascular walls.[38][39]
|
https://en.wikipedia.org/wiki/Model_order_reduction
|
Multilinear principal component analysis(MPCA) is amultilinearextension ofprincipal component analysis(PCA) that is used to analyze M-way arrays, also informally referred to as "data tensors". M-way arrays may be modeled by linear tensor models, such as CANDECOMP/Parafac, or by multilinear tensor models, such as multilinear principal component analysis (MPCA) or multilinear independent component analysis (MICA).
The origin of MPCA can be traced back to thetensor rank decompositionintroduced byFrank Lauren Hitchcockin 1927;[1]to theTucker decomposition;[2]and to Peter Kroonenberg's "3-mode PCA" work.[3]In 2000, De Lathauwer et al. restated Tucker and Kroonenberg's work in clear and concise numerical computational terms in their SIAM paper entitled "Multilinear Singular Value Decomposition",[4](HOSVD) and in their paper "On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-order Tensors".[5]
Circa 2001, Vasilescu and Terzopoulos reframed the data analysis, recognition and synthesis problems as multilinear tensor problems. Tensor factor analysis is the compositional consequence of several causal factors of data formation, and are well suited for multi-modal data tensor analysis. The power of the tensor framework was showcased by analyzing human motion joint angles, facial images or textures in terms of their causal factors of data formation in the following works: Human Motion Signatures[6](CVPR 2001, ICPR 2002), face recognition –TensorFaces,[7][8](ECCV 2002, CVPR 2003, etc.) and computer graphics –TensorTextures[9](Siggraph 2004).
Historically, MPCA has been referred to as "M-mode PCA", a terminology which was coined by Peter Kroonenberg in 1980.[3]In 2005, Vasilescu andTerzopoulosintroduced the Multilinear PCA[10]terminology as a way to better differentiate between linear and multilinear tensor decomposition, as well as, to better differentiate between the work[6][7][8][9]that computed 2nd order statistics associated with each data tensor mode(axis), and subsequent work on Multilinear Independent Component Analysis[10]that computed higher order statistics associated with each tensor mode/axis.
Multilinear PCA may be applied to compute the causal factors of data formation, or as signal processing tool on data tensors whose individual observation have either been vectorized,[6][7][8][9]or whose observations are treated as a collection of column/row observations, "data matrix" and concatenated into a data tensor. The main disadvantage of this approach is that, rather than computing all possible combinations, MPCA computes a set of orthonormal matrices associated with each mode of the data tensor which are analogous to the orthonormal row and column space of a matrix computed by the matrix SVD. This transformation aims to capture as high a variance as possible, accounting for as much of the variability in the data associated with each data tensor mode(axis).
The MPCA solution follows the alternating least square (ALS) approach. It is iterative in nature.
As in PCA, MPCA works on centered data. Centering is a little more complicated for tensors, and it is problem dependent.
MPCA features: Supervised MPCA is employed in causal factor analysis that facilitates object recognition[11]while a semi-supervised MPCA feature selection is employed in visualization tasks.[12]
Various extension of MPCA:
|
https://en.wikipedia.org/wiki/Multilinear_principal_component_analysis
|
Multilinear subspace learningis an approach for disentangling the causal factor of data formation and performing dimensionality reduction.[1][2][3][4][5]TheDimensionality reductioncan be performed on a datatensorthat contains a collection of observations that have been vectorized,[1]or observations that are treated as matrices and concatenated into a data tensor.[6][7]Here are some examples of data tensors whose observations are vectorized or whose observations are matrices concatenated into data tensorimages(2D/3D),videosequences (3D/4D), andhyperspectral cubes(3D/4D).
The mapping from ahigh-dimensional vector spaceto a set of lower dimensionalvector spacesis a multilinear projection.[4]When observations are retained in the same organizational structure as matrices or higher order tensors, their representations are computed by performing linear projections into the column space, row space and fiber space.[6]
Multilinear subspace learning algorithmsare higher-order generalizations oflinear subspacelearning methods such asprincipal component analysis(PCA),independent component analysis(ICA),linear discriminant analysis(LDA) andcanonical correlation analysis(CCA).
Multilinear methods may be causal in nature and perform causal inference, or they may be simple regression methods from which no causal conclusion are drawn.
Linear subspacelearning algorithms are traditional dimensionality reduction techniques that are well suited for datasets that are the result of varying a single causal factor. Unfortunately, they often become inadequate when dealing with datasets that are the result of multiple causal factors. .
Multilinear subspace learning can be applied to observations whose measurements were vectorized and organized into a data tensor for causally aware dimensionality reduction.[1]These methods may also be employed in reducing horizontal and vertical redundancies irrespective of the causal factors when the observations are treated as a "matrix" (ie. a collection of independent column/row observations) and concatenated into a tensor.[8][9]
Historically,multilinear principal component analysishas been referred to as "M-mode PCA", a terminology which was coined by Peter Kroonenberg.[10]In 2005, Vasilescu andTerzopoulosintroduced the Multilinear PCA[11]terminology as a way to better differentiate between multilinear tensor decompositions that computed 2nd order statistics associated with each data tensor mode,[1][2][3][12][13]and subsequent work on Multilinear Independent Component Analysis[11]that computed higher order statistics for each tensor mode. MPCA is an extension ofPCA.
Multilinear independent component analysis[11]is an extension ofICA.
There areNsets of parameters to be solved, one in each mode. The solution to one set often depends on the other sets (except whenN=1, the linear case). Therefore, the suboptimal iterative procedure in[23]is followed.
This is originated from the alternating least square method for multi-way data analysis.[10]
|
https://en.wikipedia.org/wiki/Multilinear_subspace_learning
|
Afat-tailed distributionis aprobability distributionthat exhibits a largeskewnessorkurtosis, relative to that of either anormal distributionor anexponential distribution.[when defined as?]In common usage, the terms fat-tailed andheavy-tailedare sometimes synonymous; fat-tailed is sometimes also defined as a subset of heavy-tailed. Different research communities favor one or the other largely for historical reasons, and may have differences in the precise definition of either.
Fat-tailed distributions have been empirically encountered in a variety of areas:physics, earth sciences, economics and political science. The class of fat-tailed distributions includes those whose tails decay like apower law, which is a common point of reference in their use in the scientific literature. However, fat-tailed distributions also include other slowly-decaying distributions, such as thelog-normal.[1]
The most extreme case of a fat tail is given by a distribution whose tail decays like apower law.
That is, if the complementarycumulative distributionof arandom variableXcan be expressed as[citation needed]
then the distribution is said to have a fat tail ifα<2{\displaystyle \alpha <2}. For such values the variance and the skewness of the tail are mathematically undefined (a special property of the power-law distribution), and hence larger than any normal or exponential distribution. For values ofα>2,{\displaystyle \ \alpha >2\ ,}the claim of a fat tail is more ambiguous, because in this parameter range, the variance, skewness, and kurtosis can be finite, depending on the precise value ofα,{\displaystyle \ \alpha \ ,}and thus potentially smaller than a high-variance normal or exponential tail. This ambiguity often leads to disagreements about precisely what is, or is not, a fat-tailed distribution. Fork>α−1,{\displaystyle \ k>\alpha -1\ ,}thekth{\displaystyle \ k^{\mathsf {th}}\ }moment is infinite, so for every power law distribution, some moments are undefined.[2]
Compared to fat-tailed distributions, in the normal distribution, events that deviate from themeanby five or morestandard deviations("5-sigma events") have lower probability, meaning that in the normal distribution extreme events are less likely than for fat-tailed distributions. Fat-tailed distributions such as theCauchy distribution(and all otherstable distributionswith the exception of thenormal distribution) have "undefined sigma" (more technically, thevarianceis undefined).
As a consequence, when data arise from an underlying fat-tailed distribution, shoehorning in the "normal distribution" model of risk—and estimating sigma based (necessarily) on a finite sample size—would understate the true degree of predictive difficulty (and of risk). Many—notablyBenoît Mandelbrotas well asNassim Taleb—have noted this shortcoming of the normal distribution model and have proposed that fat-tailed distributions such as thestable distributionsgovern asset returns frequently found infinance.[3][4][5]
TheBlack–Scholesmodel ofoption pricingis based on a normal distribution. If the distribution is actually a fat-tailed one, then the model will under-priceoptionsthat are farout of the money, since a 5- or 7-sigma event is much more likely than the normal distribution would predict.[6]
Infinance, fat tails often occur but are considered undesirable because of the additionalriskthey imply. For example, an investment strategy may have an expected return, after one year, that is five times its standard deviation. Assuming a normal distribution, the likelihood of its failure (negative return) is less than one in a million; in practice, it may be higher. Normal distributions that emerge in finance generally do so because the factors influencing an asset's value or price are mathematically "well-behaved", and thecentral limit theoremprovides for such a distribution. However, traumatic "real-world" events (such as an oil shock, a large corporate bankruptcy, or an abrupt change in a political situation) are usually not mathematicallywell-behaved.
Historical examples include theWall Street crash of 1929,Black Monday (1987),Dot-com bubble,2008 financial crisis,2010 flash crash, the2020 stock market crashand the unpegging of some currencies.[7]
Fat tails in market return distributions also have some behavioral origins (investor excessive optimism or pessimism leading to large market moves) and are therefore studied inbehavioral finance.
Inmarketing, the familiar80-20 rulefrequently found (e.g. "20% of customers account for 80% of the revenue") is a manifestation of a fat tail distribution underlying the data.[8]
The "fat tails" are also observed incommodity marketsor in therecord industry, especially inphonographic markets. The probability density function for logarithm of weekly record sales changes is highlyleptokurticand characterized by a narrower and larger maximum, and by a fatter tail than in the normal distribution case. On the other hand, this distribution has only one fat tail associated with an increase in sales due to promotion of the new records that enter the charts.[9]
|
https://en.wikipedia.org/wiki/Fat-tailed_distribution
|
Inprobability theory,heavy-tailed distributionsareprobability distributionswhose tails are not exponentially bounded:[1]that is, they have heavier tails than theexponential distribution. In many applications it is the right tail of the distribution that is of interest, but a distribution may have a heavy left tail, or both tails may be heavy.
There are three important subclasses of heavy-tailed distributions: thefat-tailed distributions, thelong-tailed distributions, and thesubexponential distributions. In practice, all commonly used heavy-tailed distributions belong to the subexponential class, introduced byJozef Teugels.[2]
There is still some discrepancy over the use of the termheavy-tailed. There are two other definitions in use. Some authors use the term to refer to those distributions which do not have all their powermomentsfinite; and some others to those distributions that do not have a finitevariance. The definition given in this article is the most general in use, and includes all distributions encompassed by the alternative definitions, as well as those distributions such aslog-normalthat possess all their power moments, yet which are generally considered to be heavy-tailed. (Occasionally, heavy-tailed is used for any distribution that has heavier tails than the normal distribution.)
The distribution of arandom variableXwithdistribution functionFis said to have a heavy (right) tail if themoment generating functionofX,MX(t), is infinite for allt> 0.[3]
That means
This is also written in terms of the tail distribution function
as
The distribution of arandom variableXwithdistribution functionFis said to have a long right tail[1]if for allt> 0,
or equivalently
This has the intuitive interpretation for a right-tailed long-tailed distributed quantity that if the long-tailed quantity exceeds some high level, the probability approaches 1 that it will exceed any other higher level.
All long-tailed distributions are heavy-tailed, but the converse is false, and it is possible to construct heavy-tailed distributions that are not long-tailed.
Subexponentiality is defined in terms ofconvolutions of probability distributions. For two independent, identically distributedrandom variablesX1,X2{\displaystyle X_{1},X_{2}}with a common distribution functionF{\displaystyle F}, the convolution ofF{\displaystyle F}with itself, writtenF∗2{\displaystyle F^{*2}}and called the convolution square, is defined usingLebesgue–Stieltjes integrationby:
and then-fold convolutionF∗n{\displaystyle F^{*n}}is defined inductively by the rule:
The tail distribution functionF¯{\displaystyle {\overline {F}}}is defined asF¯(x)=1−F(x){\displaystyle {\overline {F}}(x)=1-F(x)}.
A distributionF{\displaystyle F}on the positive half-line is subexponential[1][5][2]if
This implies[6]that, for anyn≥1{\displaystyle n\geq 1},
The probabilistic interpretation[6]of this is that, for a sum ofn{\displaystyle n}independentrandom variablesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}with common distributionF{\displaystyle F},
This is often known as the principle of the single big jump[7]or catastrophe principle.[8]
A distributionF{\displaystyle F}on the whole real line is subexponential if the distributionFI([0,∞)){\displaystyle FI([0,\infty ))}is.[9]HereI([0,∞)){\displaystyle I([0,\infty ))}is theindicator functionof the positive half-line. Alternatively, a random variableX{\displaystyle X}supported on the real line is subexponential if and only ifX+=max(0,X){\displaystyle X^{+}=\max(0,X)}is subexponential.
All subexponential distributions are long-tailed, but examples can be constructed of long-tailed distributions that are not subexponential.
All commonly used heavy-tailed distributions are subexponential.[6]
Those that are one-tailed include:
Those that are two-tailed include:
Afat-tailed distributionis a distribution for which the probability density function, for large x, goes to zero as a powerx−a{\displaystyle x^{-a}}. Since such a power is always bounded below by the probability density function of an exponential distribution, fat-tailed distributions are always heavy-tailed. Some distributions, however, have a tail which goes to zero slower than an exponential function (meaning they are heavy-tailed), but faster than a power (meaning they are not fat-tailed). An example is thelog-normal distribution[contradictory]. Many other heavy-tailed distributions such as thelog-logisticandParetodistribution are, however, also fat-tailed.
There are parametric[6]and non-parametric[14]approaches to the problem of the tail-index estimation.[when defined as?]
To estimate the tail-index using the parametric approach, some authors employGEV distributionorPareto distribution; they may apply the maximum-likelihood estimator (MLE).
With(Xn,n≥1){\displaystyle (X_{n},n\geq 1)}a random sequence of independent and same density functionF∈D(H(ξ)){\displaystyle F\in D(H(\xi ))}, the Maximum Attraction Domain[15]of the generalized extreme value densityH{\displaystyle H}, whereξ∈R{\displaystyle \xi \in \mathbb {R} }. Iflimn→∞k(n)=∞{\displaystyle \lim _{n\to \infty }k(n)=\infty }andlimn→∞k(n)n=0{\displaystyle \lim _{n\to \infty }{\frac {k(n)}{n}}=0}, then thePickandstail-index estimation is[6][15]
whereX(n−k(n)+1,n)=max(Xn−k(n)+1,…,Xn){\displaystyle X_{(n-k(n)+1,n)}=\max \left(X_{n-k(n)+1},\ldots ,X_{n}\right)}. This estimator converges in probability toξ{\displaystyle \xi }.
Let(Xt,t≥1){\displaystyle (X_{t},t\geq 1)}be a sequence of independent and identically distributed random variables with distribution functionF∈D(H(ξ)){\displaystyle F\in D(H(\xi ))}, the maximum domain of attraction of thegeneralized extreme value distributionH{\displaystyle H}, whereξ∈R{\displaystyle \xi \in \mathbb {R} }. The sample path isXt:1≤t≤n{\displaystyle {X_{t}:1\leq t\leq n}}wheren{\displaystyle n}is the sample size. If{k(n)}{\displaystyle \{k(n)\}}is an intermediate order sequence, i.e.k(n)∈{1,…,n−1},{\displaystyle k(n)\in \{1,\ldots ,n-1\},},k(n)→∞{\displaystyle k(n)\to \infty }andk(n)/n→0{\displaystyle k(n)/n\to 0}, then the Hill tail-index estimator is[16]
whereX(i,n){\displaystyle X_{(i,n)}}is thei{\displaystyle i}-thorder statisticofX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}.
This estimator converges in probability toξ{\displaystyle \xi }, and is asymptotically normal providedk(n)→∞{\displaystyle k(n)\to \infty }is restricted based on a higher order regular variation property[17].[18]Consistency and asymptotic normality extend to a large class of dependent and heterogeneous sequences,[19][20]irrespective of whetherXt{\displaystyle X_{t}}is observed, or a computed residual or filtered data from a large class of models and estimators, including mis-specified models and models with errors that are dependent.[21][22][23]Note that both Pickand's and Hill's tail-index estimators commonly make use of logarithm of the order statistics.[24]
The ratio estimator (RE-estimator) of the tail-index was introduced by Goldie
and Smith.[25]It is constructed similarly to Hill's estimator but uses a non-random "tuning parameter".
A comparison of Hill-type and RE-type estimators can be found in Novak.[14]
Nonparametric approaches to estimate heavy- and superheavy-tailed probability density functions were given in
Markovich.[27]These are approaches based on variable bandwidth and long-tailed kernel estimators; on the preliminary data transform to a new random variable at finite or infinite intervals, which is more convenient for the estimation and then inverse transform of the obtained density estimate; and "piecing-together approach" which provides a certain parametric model for the tail of the density and a non-parametric model to approximate the mode of the density. Nonparametric estimators require an appropriate selection of tuning (smoothing) parameters like a bandwidth of kernel estimators and the bin width of the histogram. The well known data-driven methods of such selection are a cross-validation and its modifications, methods based on the minimization of the mean squared error (MSE) and its asymptotic and their upper bounds.[28]A discrepancy method which uses well-known nonparametric statistics like Kolmogorov-Smirnov's, von Mises and Anderson-Darling's ones as a metric in the space of distribution functions (dfs) and quantiles of the later statistics as a known uncertainty or a discrepancy value can be found in.[27]Bootstrap is another tool to find smoothing parameters using approximations of unknown MSE by different schemes of re-samples selection, see e.g.[29]
|
https://en.wikipedia.org/wiki/Heavy-tailed_distribution
|
When a quantity grows towards asingularityunder a finite variation (a "finite-time singularity") it is said to undergohyperbolic growth.[1]More precisely, thereciprocal function1/x{\displaystyle 1/x}has ahyperbolaas a graph, and has a singularity at 0, meaning that thelimitasx→0{\displaystyle x\to 0}is infinite: any similar graph is said to exhibit hyperbolic growth.
If the output of a function isinversely proportionalto its input, or inversely proportional to the difference from a given valuex0{\displaystyle x_{0}}, the function will exhibit hyperbolic growth, with a singularity atx0{\displaystyle x_{0}}.
In the real world hyperbolic growth is created by certain non-linearpositive feedbackmechanisms.[2]
Likeexponential growthandlogistic growth, hyperbolic growth is highlynonlinear, but differs in important respects.
These functions can be confused, as exponential growth, hyperbolic growth, and the first half of logistic growth areconvex functions; however theirasymptotic behavior(behavior as input gets large) differs dramatically:
A 1960 issue ofSciencemagazine included an article byHeinz von Foersterand his colleagues, P. M. Mora and L. W. Amiot, proposing an equation representing the best fit to the historical data on the Earth's population available in 1958:
Fifty years ago,Sciencepublished a study with the provocative title "Doomsday: Friday, 13 November, A.D. 2026". It fitted world population during the previous two millennia withP= 179 × 109/(2026.9 −t)0.99. This "quasi-hyperbolic" equation (hyperbolic having exponent 1.00 in the denominator) projected to infinite population in 2026—and to an imaginary one thereafter.
In 1975,von Hoernersuggested that von Foerster's doomsday equation can be written, without a significant loss of accuracy, in a simplified hyperbolic form (i.e.with the exponent in the denominator assumed to be 1.00):
where
Despite its simplicity, von Foerster's equation is very accurate in the range from 4,000,000 BP[4]to 1997 AD. For example, the doomsday equation (developed in 1958, when the Earth's population was 2,911,249,671[5]) predicts a population of 5,986,622,074 for the beginning of the year 1997:
The actual figure was 5,924,787,816.[5]
Having analyzed the timing of the events plotted onRay Kurzweil's "Countdown to Singularity" graph,Andrey Korotayevarrived at the following best-fit equation:
where
Korotayev also analyzed the timing of the events on the list of sociotechnological phase transition points independently compiled by Alexander Panov, and arrived at the following best-fit equation:
where
Korotayev discovered that these two equations are entirely identical with von Foerster's doomsday equation describing the world population growth. Both empirical and mathematical analyses indicate that all the three hyperbolic equations describe the same global macrodevelopmental process, in which demography is indivisibly combined with technology.[4]It can be set forth as follows: technological advance → increase in thecarrying capacityof the Earth → population growth → more potential inventors → acceleration of technological advance → faster growth of the Earth's carrying capacity → faster population growth → faster growth of the number of potential inventors → faster technological advance → faster growth of the Earth's carrying capacity, and so on.[1][6]
TheLorentz factorγis defined as[7]γ=11−(vc)2=11−β2,{\displaystyle \gamma ={\frac {1}{\sqrt {1-\left({\frac {v}{c}}\right)^{2}}}}={\frac {1}{\sqrt {1-\beta ^{2}}}},}where
Proxima Centauriis approximately 4.27 light-years away from the Earth. From a terrestrial observer's perspective, a traveller would cover the distance to Proxima Centauri in approximately 8.54 years at half the speed of light. However, due to the Lorentz factor, the time experienced by the traveller would be shorter:
The following graph shows the journey times for twenty runs to Proxima Centauri from the ship viewpoint. Notice that as speeds approach the speed of light, the journey times reduce dramatically, even though the actual increments in speed appear slight. On the 20th run, at1048575/1048576of the speed of light, the distance shrinks to 0.0059 light-years and the traveller experiences a journey time of 2.15 days. Whereas to those on Earth the ship looks almost "frozen" and the journey still takes 4.27 years, plus a couple of days.
The equation describing the growth of the Lorentz factor with speed is unmistakably hyperbolic, so the Lorentz factor of a spaceship, subjected to even a small but constant accelerating force, must become infinite in a finiteproper time. This requirement is met by assuming that a translationally accelerating spaceship loses its rest mass (which is the spaceship's resistance to its further translational acceleration along the path of flight):γ=mrelm0,{\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}},}where
Atv{\displaystyle v}= 0, themagnitude of the Lorentz factorisγ=mrelm0=11=1.{\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}}={\frac {1}{1}}=1.}Atv{\displaystyle v}= 0.5c, themagnitude of the Lorentz factorisγ=mrelm0=1.0720.928=1.155.{\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}}={\frac {1.072}{0.928}}=1.155.}Atv{\displaystyle v}= 0.999c, themagnitude of the Lorentz factorisγ=mrelm0=1.9140.086=22.366.{\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}}={\frac {1.914}{0.086}}=22.366.}Following this pattern, the spaceship will, after a finiteproper time, turn into a beam of photons:
Photons may be regarded as limiting particles whose rest mass has become zero while their Lorentz factor has become infinite.
The light-speed spaceship will then cover the remaining distance to its destination in zero proper time:
Since when traveling at the speed of light no apparent time elapses, the spacecraft would arrive instantly and simultaneously at all locations along the path of flight. Thus to the crew on the spacecraft, all spatial separations would collapse to zero along this path‑of‑flight. There is no relativistic dilatation, as all spatial separations are transverse to a light-speed spacecraft's flight. <...> Thus the spacecraft would disappear after reaching light speed, followed immediately by its reappearance trillions of miles away in the proximity of the target star, when the spacecraft returns to sub-light speed, Figure 9.6.
The universe's matter is falling into the universe's gravitational field:
Gravity rules. The moon orbiting Earth, matter falling into black holes, and the overall structure of the universe are dominated by gravity.
Consequently, the universe's matter accelerates to ever greater speeds, so that its Lorentz factor hyperbolically increases to infinity, while its rest mass hyperbolically vanishes:
As we go forwards in time, material weight continually changes into radiation. Conversely, as we go backwards in time, the total material weight of the universe must continually increase.
At the end of the hyperbolic growth of its Lorentz factor, the universe's matter attains the speed of light:
'It all just seemed unbelievably boring to me,' Penrose says. Then he found something interesting within it: at the very end of the universe, the only remaining particles will be massless. That means everything that exists will travel at the speed of light, making the flow of time meaningless.
So, the universe will eventually consist of relativistic kinetic energy, which isnegative,i.e.hierarchically binding/enslaving:
A beam of negative energy that travels into the past can be generated by the acceleration of the source to high speeds.
It is seen that the relativistic kinetic energy is always negative and therefore will lower the energy levels of a bound system.
This hierarchically binding/enslavingnegative energy is the universe's spirit or information:
Remember, more binding energy means the system is more bound—has greaternegativeenergy.
The Spirit is the binding energy expressed by the wordre-ligio/religion—a word that itself reflects the brokenness and fragmentation of the universe, that God is trying to heal.
Szilard's explanation was accepted by the physics community, andinformationwas accepted as a scientific concept, defined by its statistical‑mechanical properties as a kind ofnegative energythat introduced order into a system.
Thus, the hyperbolic growth of the Lorentz factor of the universe's matter hierarchically binds/enslavesor, which is the same, animates/informs the universe's matter. The sociotechnological singularity of the terrestrial animated/informed matter, expected at the end of the year 2026 AD (seeGlobal macrodevelopment) will signify that the Lorentz factor of the universe's matter has become infinite—since the end of the year 2026 AD, the universe's matter will be falling into the universe's animating/informing gravitational field (which is the funnel-shapedgradientof matter's negative-energiedness, animateness, informedness) at the speed of light:
The negative energy of the gravitational field is what allows negative entropy, equivalent to information, to grow, making the Universe a more complicated and interesting place.
"It's this idea that we represent some kind of singularity, or that we announce the nearby presence of a singularity. That the evolution of life and cultural form and all that is clearlyfunnelingtoward something fairly unimaginable."—McKenna, Terence.A Weekend with Terence McKennaAugust 1993
"In other words, we end the whole thing. We collapse the state vector and everything goes into a state of novelty. What happens then I think isthe universe becomes entirely made of light."—McKenna, Terence.Appreciating Imagination1997
"The conventions of relativity say that time slows down as one approaches the speed of light, but if one tries to imagine the point of view of a thing made of light, one must realize that what is never mentioned is thatif one moves at the speed of light, there is no time whatsoever. There is an experience of time zero. <...> One has transited into the eternal mode. One is then apart from the moving image; one exists in the completion of eternity. I believe that this is what technology pushes toward."—McKenna, Terence.New Maps of Hyperspace1984
"What exactly is immortality? It's the negation of time. How do we negate time? By getting close to, and perhaps matching, the speed of light. If you ARE light, everything is instant."—TimefUSION Anomaly, 1999 10 11
"And the angel that I saw standing upon the sea and upon the land lifted his hand up to heaven, and swore by him who lives forevermore, who created heaven and the things that are in it, and the sea and the things that are in it, thattime shall be no more, but in the days of the voice of the seventh angel, when he begins to blow, even the mystery of God shall be finished, as he preached by his servants the prophets."—Revelation 10:5-7New Matthew Bible
Another example of hyperbolic growth can be found inqueueing theory: the average waiting time of randomly arriving customers grows hyperbolically as a function of the average load ratio of the server. The singularity in this case occurs when the average amount of work arriving to the server equals the server's processing capacity. If the processing needs exceed the server's capacity, then there is no well-defined average waiting time, as the queue can grow without bound. A practical implication of this particular example is that for highly loaded queuing systems the average waiting time can be extremely sensitive to the processing capacity.
A further practical example of hyperbolic growth can be found inenzyme kinetics. When the rate of reaction (termed velocity) between anenzymeandsubstrateis plotted against various concentrations of the substrate, a hyperbolic plot is obtained for many simpler systems. When this happens, the enzyme is said to followMichaelis-Mentenkinetics.
The function
exhibits hyperbolic growth with a singularity at timetc{\displaystyle t_{c}}:in thelimitast→tc{\displaystyle t\to t_{c}},the function goes to infinity.
More generally, the function
exhibits hyperbolic growth, whereK{\displaystyle K}is ascale factor.
Note that this algebraic function can be regarded as an analytical solution for the function's differential:[1]
This means that with hyperbolic growth the absolute growth rate of the variablexin the momenttis proportional to the square of the value ofxin the momentt.
Respectively, the quadratic-hyperbolic function looks as follows:
|
https://en.wikipedia.org/wiki/Hyperbolic_growth
|
ALévy flightis arandom walkin which the step-lengths have astable distribution,[1]aprobability distributionthat isheavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are inisotropicrandom directions. Later researchers have extended the use of the term "Lévy flight" to also include cases where the random walk takes place on a discrete grid rather than on a continuous space.[2]
The term "Lévy flight" was coined afterPaul LévybyBenoît Mandelbrot,[3]who used this for one specific definition of the distribution of step sizes. He used the termCauchy flightfor the case where the distribution of step sizes is aCauchy distribution,[4]andRayleigh flightfor when the distribution is anormal distribution[5](which is not an example of a heavy-tailed probability distribution).
The particular case for which Mandelbrot used the term "Lévy flight"[3]is defined by thesurvival functionof the distribution of step-sizes,U, being[6]
HereDis a parameter related to thefractal dimensionand the distribution is a particular case of thePareto distribution.
Lévy flights are, by construction,Markov processes. For general distributions of the step-size, satisfying the power-like condition, the distance from the origin of the random walk tends, after a large number of steps, to astable distributiondue to the generalizedcentral limit theorem, enabling many processes to be modeled using Lévy flights.
The probability densities for particles undergoing a Levy flight can be modeled using a generalized version of theFokker–Planck equation, which is usually used to modelBrownian motion. The equation requires the use offractional derivatives. For jump lengths which have a symmetric probability distribution, the equation takes a simple form in terms of theRiesz fractional derivative. In one dimension, the equation reads as
whereγis a constant akin to the diffusion constant,αis the stability parameter[citation needed]andf(x,t) is the potential. The Riesz derivative can be understood in terms of itsFourier Transform.
This can be easily extended to multiple dimensions.
Another important property of the Lévy flight is that of diverging variances in all cases except that ofα= 2, i.e. Brownian motion. In general, the θ fractional moment of the distribution diverges ifα≤θ. Also,
The exponential scaling of the step lengths gives Lévy flights ascale invariantproperty,[citation needed]and they are used to model data that exhibits clustering.[citation needed]
The definition of a Lévy flight stems from the mathematics related tochaos theoryand is useful in stochastic measurement and simulations for random or pseudo-random natural phenomena. Examples includeearthquakedata analysis,financial mathematics,cryptography, signals analysis as well as many applications inastronomy,biology, andphysics.
It has been found that jumping between climate states observed in the paleoclimatic record can be described as a Lévy flight or an alpha-stable process[7]Another application is theLévy flight foraging hypothesis. When sharks and other ocean predators cannot find food, they abandon theBrownian motion, the random motion seen in swirling gas molecules, for the Lévy flight — a mix of long trajectories and short, random movements found in turbulent fluids. Researchers analyzed over 12 million movements recorded over 5,700 days in 55 data-logger-tagged animals from 14 ocean predator species in the Atlantic and Pacific Oceans, includingsilky sharks,yellowfin tuna, blue marlin and swordfish. The data showed that Lévy flights interspersed with Brownian motion can describe the animals' hunting patterns.[8][9][10][11]Birds and other animals (including humans)[12]follow paths that have been modeled using Lévy flight (e.g. when searching for food).[13]An example of an animal, specifically a beetle, that uses Lévy flight patterns isPterostichus melanarius. When the beetles are hungry and food is scarce, they avoid searching for prey in locations that other individuals ofP. melanariushave visited.This behavior is optimal for widely dispersed prey that may not always be fully consumed at one time, such as slugs.[14]
Additionally, biological flight can also apparently be mimicked by other models such as composite correlated random walks, which grow across scales to converge on optimal Lévy walks.[13]Composite Brownian walks can be finely tuned to theoretically optimal Lévy walks but they are not as efficient as Lévy search across most landscapes types, suggesting selection pressure for Lévy walk characteristics is more likely than multi-scaled normal diffusive patterns.[15]
Furthermore, it has been shown that Lévy walk appears in high-energyparticle physicsas well.[16]Observations indicate that Lévy-processes occur in high-energyheavy-ion collisions.[17][18][19]Here,hadronicscattering and decays after a high-energy heavy-ion collision lead to power-law tailed spatial particle creation (hadron freeze-out from thequark-gluon plasma) distributions.[20]
Efficient routing in a network can be performed by links having a Levy flight length distribution with specific values of alpha.[2]
|
https://en.wikipedia.org/wiki/L%C3%A9vy_flight
|
Instatisticsandbusiness, along tailof somedistributionsof numbers is the portion of the distribution having many occurrences far from the "head" or central part of the distribution. The distribution could involve popularities, random numbers of occurrences of events with variousprobabilities, etc.[1]The term is often used loosely, with no definition or an arbitrary definition, but precise definitions are possible.
In statistics, the termlong-tailed distributionhas a narrow technical meaning, and is a subtype ofheavy-tailed distribution.[2][3][4]Intuitively, a distribution is (right) long-tailed if, for any fixed amount, when a quantity exceeds a high level, it almost certainly exceeds it by at least that amount: large quantities are probably even larger.[a]Note that there is no sense ofthe"long tail" of a distribution, but only thepropertyof a distribution being long-tailed.
In business, the termlong tailis applied torank-size distributionsorrank-frequency distributions(primarily of popularity), which often formpower lawsand are thus long-tailed distributions in the statistical sense. This is used to describe the retailing strategy of selling many unique items with relatively small quantities sold of each (the "long tail")—usually in addition to selling fewer popular items in large quantities (the "head"). Sometimes an intermediate category is also included, variously called thebody,belly,torso, ormiddle. The specific cutoff of what part of a distribution isthe"long tail" is often arbitrary, but in some cases may be specified objectively; seesegmentation of rank-size distributions.
The long tail concept has found some ground for application, research, and experimentation. It is a term used in online business,mass media,micro-finance(Grameen Bank, for example), user-driven innovation (Eric von Hippel), knowledge management, and social network mechanisms (e.g.crowdsourcing,crowdcasting,peer-to-peer), economic models, marketing (viral marketing), and IT Security threat hunting within a SOC (Information security operations center).
Frequency distributionswith long tails have been studied by statisticians since at least 1946.[5]The term has also been used in the finance[6]and insurance business[7]for many years. The work ofBenoît Mandelbrotin the 1950s and later has led to him being referred to as "the father of long tails".[8]
The long tail was popularized byChris Andersonin an October 2004Wiredmagazine article, in which he mentionedAmazon.com,AppleandYahoo!as examples of businesses applying this strategy.[7][9]Anderson elaborated the concept in his bookThe Long Tail: Why the Future of Business Is Selling Less of More.
Anderson cites research published in 2003 byErik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smith, who first used a log-linear curve on an XY graph to describe the relationship betweenAmazon.comsales and sales ranking. They showed that the primary value of the internet to consumers comes from releasing new sources of value by providing access to products in the long tail.[10]
The distribution and inventory costs of businesses successfully applying a long tail strategy allow them to realize significant profit out of selling small volumes of hard-to-find items to many customers instead of only selling large volumes of a reduced number of popular items. The total sales of this large number of "non-hit items" is called "the long tail".
Given enough choice, a large population of customers, and negligible stocking and distribution costs, the selection and buying pattern of the population results in the demand across products having apower lawdistribution orPareto distribution.
It is important to understand why some distributions are normal vs. long tail (power) distributions. Chris Anderson argues that while quantities such as human height orIQfollow a normal distribution, inscale-free networkswithpreferential attachments, power law distributions are created, i.e. because some nodes are more connected than others (likeMalcolm Gladwell's “mavens” inThe Tipping Point).[11][12]
The long tailis the name for a long-known feature of some statistical distributions (such asZipf,power laws,Pareto distributionsandgeneral Lévy distributions). In "long-tailed" distributions a high-frequency or high-amplitude population is followed by a low-frequency or low-amplitude population which gradually "tails off"asymptotically. The events at the far end of the tail have a very low probability of occurrence.
As arule of thumb, for such population distributions the majority of occurrences (more than half, and where thePareto principleapplies, 80%) are accounted for by the first 20% of items in the distribution.
Power lawdistributions or functions characterize an important number of behaviors from nature and human endeavor. This fact has given rise to a keen scientific and social interest in such distributions, and the relationships that create them. The observation of such a distribution often points to specific kinds of mechanisms, and can often indicate a deep connection with other, seemingly unrelated systems. Examples of behaviors that exhibit long-tailed distribution are the occurrence of certain words in a given language, the income distribution of a business or the intensity of earthquakes (see:Gutenberg–Richter law).
Chris Anderson's andClay Shirky's articles highlight special cases in which we are able to modify the underlying relationships and evaluate the impact on the frequency of events. In those cases the infrequent, low-amplitude (or low-revenue) events – the long tail, represented here by the portion of the curve to the right of the 20th percentile – can become the largest area under the line. This suggests that a variation of one mechanism (internet access) or relationship (the cost of storage) can significantly shift the frequency of occurrence of certain events in the distribution. The shift has a crucial effect in probability and in the customer demographics of businesses likemass mediaand online sellers.
However, the long tails characterizing distributions such as theGutenberg–Richter lawor the words-occurrenceZipf's law, and those highlighted by Anderson and Shirky are of very different, if not opposite, nature: Anderson and Shirky refer to frequency-rank relations, whereas the Gutenberg–Richter law and the Zipf's law are probability distributions. Therefore, in these latter cases "tails" correspond to large-intensity events such as large earthquakes and most popular words, which dominate the distributions. By contrast, the long tails in the frequency-rank plots highlighted by Anderson and Shirky would rather correspond to short tails in the associated probability distributions, and therefore illustrate an opposite phenomenon compared to the Gutenberg–Richter and the Zipf's laws.
Use of the phrasethe long tailin business as "the notion of looking at the tail itself as a new market" of consumers was first coined byChris Anderson.[13]The concept drew in part from a February 2003 essay byClay Shirky, "Power Laws, Weblogs and Inequality",[14]which noted that a relative handful ofweblogshave many links going into them but "the long tail" of millions of weblogs may have only a handful of links going into them. Anderson described the effects of the long tail on current and future business models beginning with a series of speeches in early 2004 and with the publication of aWiredmagazine article in October 2004. Anderson later extended it into the bookThe Long Tail: Why the Future of Business is Selling Less of More(2006).
Anderson argues that products in low demand or that have a low sales volume can collectively make up a market share that rivals or exceeds the relatively few current bestsellers and blockbusters, if the store or distribution channel is large enough. Anderson cites earlier research byErik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smith, that showed that a significant portion of Amazon.com's sales come from obscure books that are not available in brick-and-mortar stores. The long tail is a potential market and, as the examples illustrate, the distribution and sales channel opportunities created by the Internet often enable businesses to tap that market successfully.
In his Wired article Anderson opens with an anecdote about creating a niche market for books on Amazon. He writes about a book titledTouching the Voidabout a near-death mountain climbing accident that took place in the Peruvian Andes. Anderson states the book got good reviews, but didn't have much commercial success. However, ten years later a book titledInto Thin AirbyJon Krakauerwas published and Touching the Void began to sell again. Anderson realized that this was due to Amazon's recommendations. This created a niche market for those who enjoy books about mountain climbing even though it is not considered a popular genre supporting the long tail theory.
An Amazon employee described the long tail as follows: "We sold more books today that didn't sell at all yesterday than we sold today of all the books that did sell yesterday."[15]
Anderson has explained the term as a reference to the tail of ademand curve.[16]The term has since beenrederived from an XY graph that is created when charting popularity to inventory. In the graph shown above, Amazon's book sales would be represented along the vertical axis, while the book or movie ranks are along the horizontal axis. The total volume of low popularity items exceeds the volume of high popularity items.
Erik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smithfound that a large proportion ofAmazon.com's book sales come from obscure books that were not available in brick-and-mortar stores. They then quantified the potential value of the long tail to consumers. In an article published in 2003, these authors showed that, while most of the discussion about the value of the Internet to consumers has revolved around lower prices, consumer benefit (a.k.a.consumer surplus) from access to increased product variety in online book stores is ten times larger than their benefit from access to lower prices online.[17]
A subsequent study byErik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smith[18]found that the long tail has grown longer over time, with niche books accounting for a larger share of total sales. Their analyses suggested that by 2008, niche books accounted for 36.7% of Amazon's sales while the consumer surplus generated by niche books has increased at least fivefold from 2000 to 2008. In addition, their new methodology finds that, while the widely used power laws are a good first approximation for the rank-sales relationship, the slope may not be constant for all book ranks, with the slope becoming progressively steeper for more obscure books.
In support of their findings, Wenqi Zhou and Wenjing Duan not only find a longer tail but also a fatter tail by an in-depth analysis on consumer software downloading pattern in their paper "Online user reviews, product variety, and the long tail".[19]The demand for all products decreases, but the decrease for the hits is more pronounced, indicating the demand shifting from the hits to the niches over time. In addition, they also observe a superstar effect in the presence of the long tail. A small number of very popular products still dominates the demand.
In a 2006 working paper titled "Goodbye Pareto Principle, Hello Long Tail",[20]Erik Brynjolfsson,Yu (Jeffrey) Hu, and Duncan Simester found that, by greatly loweringsearchcosts, information technology in general and Internet markets in particular could substantially increase the collective share of hard-to-find products, thereby creating a longer tail in the distribution of sales.
They used a theoretical model to show how a reduction in search costs will affect the concentration in product sales. By analyzing data collected from a multi-channel retailing company, they showed empirical evidence that the Internet channel exhibits a significantly less concentrated sales distribution, when compared with traditional channels. An 80/20 rule fits the distribution of product sales in the catalog channel quite well, but in the Internet channel, this rule needs to be modified to a 72/28 rule in order to fit the distribution of product sales in that channel. The difference in the sales distribution is highly significant, even after controlling for consumer differences.
The key supply-side factor that determines whether a sales distribution has a long tail is the cost of inventory storage and distribution. Where inventory storage and distribution costs are insignificant, it becomes economically viable to sell relatively unpopular products; however, when storage and distribution costs are high, only the most popular products can be sold. For example, a traditional movie rental store has limited shelf space, which it pays for in the form of buildingoverhead; to maximize its profits, it must stock only the most popular movies to ensure that no shelf space is wasted. Because online video rental provider (such asAmazon.comorNetflix) stocks movies in centralized warehouses, its storage costs are far lower and its distribution costs are the same for a popular or unpopular movie. It is therefore able to build a viable business stocking a far wider range of movies than a traditional movie rental store. Those economics of storage and distribution then enable the advantageous use of the long tail: for example, Netflix finds that in aggregate, "unpopular" movies are rented more than popular movies.
AnMIT Sloan Management Reviewarticle titled "From Niches to Riches: Anatomy of the Long Tail"[21]examined the long tail from both the supply side and the demand side and identifies several key drivers. On the supply side, the authors point out howe-tailers' expanded, centralized warehousing allows for more offerings, thus making it possible for them to cater to more varied tastes.[22]
On the demand side, tools such as search engines, recommendation software, and sampling tools are allowing customers to find products outside their geographic area. The authors also look toward the future to discuss second-order, amplified effects of Long Tail, including the growth of markets serving smaller niches.
Not all recommender systems are equal, however, when it comes to expanding the long tail. Some recommenders (i.e. certain collaborative filters) can exhibit a bias toward popular products, creatingpositive feedback, and actually reduce the long tail. AWhartonstudy details this phenomenon along with several ideas that may promote the long tail and greater diversity.[23]
A 2010 study conducted by Wenqi Zhou and Wenjing Duan[19]further points out that the demand side factor (online user reviews) and the supply side factor (product variety) interplay to influence the long tail formation of user choices. Consumers' reliance on online user reviews to choose products is significantly influenced by the quantity of products available. Specifically, they find that the impacts of both positive and negative user reviews are weakened as product variety goes up. In addition, the increase in product variety reduces the impact of user reviews on popular products more than it does on niche products.
The "crowds" of customers, users and small companies that inhabit the long-tail distribution can perform collaborative and assignment work. Some relevant forms of these new production models are:
The demand-side factors that lead to the long tail can be amplified by the "networks of products" which are created by hyperlinked recommendations across products. AnMIS Quarterlyarticle by Gal Oestreicher-Singer andArun Sundararajanshows that categories of books onAmazon.comwhich are more central and thus influenced more by their recommendation network have significantly more pronounced long-tail distributions. Their data across 200 subject areas shows that a doubling of this influence leads to a 50% increase in revenues from the least popular one-fifth of books.[25]
The long-tail distribution applies at a given point in time, but over time the relative popularity of the sales of the individual products will change.[26]Although the distribution of sales may appear to be similar over time, the positions of the individual items within it will vary. For example, new items constantly enter most fashion markets. A recent fashion-based model[27]ofconsumer choice, which is capable of generating power law distributions of sales similar to those observed in practice,[28]takes into account turnover in the relative sales of a given set of items, as well as innovation, in the sense that entirely new items become offered for sale.
There may be an optimal inventory size, given the balance between sales and the cost of keeping up with the turnover. An analysis based on this pure fashion model[29]indicates that, even for digital retailers, the optimal inventory may in many cases be less than the millions of items that they can potentially offer. In other words, by proceeding further and further into the long tail, sales may become so small that the marginal cost of tracking them in rank order, even at a digital scale, might be optimised well before a million titles, and certainly before infinite titles. This model can provide further predictions into markets with long-tail distribution, such as the basis for a model for optimizing the number of each individual item ordered, given its current sales rank and the total number of different titles stocked.
From a given country's viewpoint, diplomatic interactions with other countries likewise exhibit a long tail.[30]Strategic partners receive the largest amount of diplomatic attention, while a long tail of remote states obtains just an occasional signal of peace. The fact that even allegedly "irrelevant" countries obtain at least rare amicable interactions by virtually all other states was argued to create a societal surplus of peace, a reservoir that can be mobilized in case a state needs it. The long tail thus functionally resembles "weak ties" in interpersonal networks.
Before a long tail works, only the most popular products are generally offered. When the cost of inventory storage and distribution fall, a wide range of products become available. This can, in turn, have the effect of reducing demand for the most popular products. For example, a small website that focuses on niches of content can be threatened by a larger website which has a variety of information (such as Yahoo)Web content. The big website covers more variety while the small website has only a few niches to choose from.
The competitive threat from these niche sites is reduced by the cost of establishing and maintaining them and the effort required for readers to track multiple small web sites. These factors have been transformed by easy and cheap web site software and the spread ofRSS. Similarly, mass-market distributors likeBlockbustermay be threatened by distributors likeLoveFilm, which supply the titles that Blockbuster doesn't offer because they are not already very popular.
Some of the most successful Internet businesses have used the long tail as part of their business strategy. Examples includeeBay(auctions),Yahoo!andGoogle(web search),Amazon(retail), andiTunes Store(music andpodcasts), amongst the major companies, along with smaller Internet companies likeAudible(audio books) andLoveFilm(video rental). These purely digital retailers also have almost no marginal cost, which is benefiting the online services, unlike physical retailers that have fixed limits on their products. The internet can still sell physical goods, but at an unlimited selection and with reviews and recommendations.[31]The internet has opened up larger territories to sell and provide its products without being confined to just the "local Markets" such as physical retailers likeTargetor evenWalmart. With the digital and hybrid retailers there is no longer a perimeter on market demands.[32]
The adoption ofvideo gamesandmassively multiplayer online gamessuch asSecond Lifeas tools for education and training is starting to show a long-tailed pattern. It costs significantly less to modify a game than it has been to create unique training applications, such as those for training in business, commercial flight, and military missions. This has led some[who?]to envision a time in which game-based training devices or simulations will be available for thousands of different job descriptions.[citation needed]
The banking business has used internet technology to reach an increasing number of customers. The most important shift in business model due to the long tail has come from the various forms ofmicrofinancedeveloped.[citation needed]
As opposed to e-tailers, micro-finance is a distinctly low technology business. Its aim is to offer very small credits to lower-middle to lower class and poor people, that would otherwise be ignored by the traditional banking business. The banks that have followed this strategy of selling services to the low-frequency long tail of the sector have found out that it can be an important niche, long ignored by consumer banks.[33]The recipients of small credits tend to be very good payers of loans, despite their non-existent credit history. They are also willing to pay higher interest rates than the standard bank or credit card customer. It also is a business model that fills an important developmental role in an economy.[34]
Grameen BankinBangladeshhas successfully followed this business model. In Mexico the banks Compartamos andBanco Aztecaalso service this customer demographic, with an emphasis on consumer credit.Kiva.orgis an organization that provides micro credits to people worldwide, by using intermediaries called small microfinance organizations (S.M.O.'s)to distribute crowd sourced donations made by Kiva.org lenders.
According to theuser-driven innovationmodel, companies can rely on users of their products and services to do a significant part of theinnovationwork. Users want products that are customized to their needs. They are willing to tell the manufacturer what they really want and how it should work. Companies can make use of a series of tools, such as interactive and internet based technologies, to give their users a voice and to enable them to do innovation work that is useful to the company.
Given the diminishing cost of communication and information sharing (by analogy to the low cost of storage and distribution, in the case ofe-tailers), long-tailed user driven innovation will gain importance for businesses.
In following a long-tailed innovation strategy, the company is using the model to tap into a large group of users that are in the low-intensity area of the distribution. It is theircollaborationand aggregated work that results in an innovation effort.Social innovationcommunities formed by groups of users can perform rapidly thetrial and errorprocess of innovation, share information, test and diffuse the results.
Eric von Hippelof MIT's Sloan School of Management defined the user-led innovation model in his bookDemocratizing Innovation.[35]Among his conclusions is the insight that as innovation becomes more user-centered the information needs to flow freely, in a more democratic way, creating a "rich intellectual commons" and "attacking a major structure of the social division of labor".
In today's world, customers are eager to voice their opinions and shape the products and services they use. This presents a unique opportunity for companies to leverage interactive and internet-based technologies to give their users a voice and enable them to participate in the innovation process. By doing so, companies can gain valuable insights into their customer's needs and preferences, which can help drive product development and innovation. By creating a platform for their users to share their ideas and feedback, companies can harness the power of collaborative innovation and stay ahead of the competition. Ultimately, involving users in the innovation process is a win-win for both companies and their customers, as it leads to more tailored, effective products and services that better meet the needs of the end user.
The drive to build a market and obtain revenue from the consumer demographic of the long tail has led businesses to implement a series of long-tailmarketingtechniques, most of them based on extensive use of internet technologies. Among the most representative are:
The long tail has possible implications forcultureandpolitics. Where theopportunity costof inventory storage and distribution is high, only the most popular products are sold. But where the long tail works, minority tastes become available and individuals are presented with a wider array of choices. The long tail presents opportunities for various suppliers to introduce products in the niche category. These encourage the diversification of products. These niche products open opportunities for suppliers while concomitantly satisfying the demands of many individuals – therefore lengthening the tail portion of the long tail. In situations where popularity is currently determined by the lowest common denominator, a long-tail model may lead to improvement in a society's level of culture. The opportunities that arise because of the long tail greatly affect society's cultures because suppliers have unlimited capabilities due to infinite storage and demands that were unable to be met prior to the long tail are realized. At the end of the long tail, the conventional profit-making business model ceases to exist; instead, people tend to come up with products for varied reasons like expression rather than monetary benefit. In this way, the long tail opens up a large space for authentic works of creativity.
Televisionis a good example of this: Chris Anderson defines long-tail TV in the context of "content that is not available through traditional distribution channels but could nevertheless find an audience."[37]Thus, the advent of services such astelevision on demand,pay-per-viewand even premium cable subscription services such as HBO and Showtime open up the opportunity for niche content to reach the right audiences, in an otherwise mass medium. These may not always attract the highest level of viewership, but their business distribution models make that of less importance. As the opportunity cost goes down, the choice of TV programs grows and greater cultural diversity rises.
Often presented as a phenomenon of interest primarily to mass market retailers and web-based businesses, the long tail also has implications for the producers of content, especially those whose products could not – for economic reasons – find a place in pre-Internet information distribution channels controlled by book publishers, record companies, movie studios, and television networks. Looked at from the producers' side, the long tail has made possible a flowering of creativity across all fields of human endeavour.[citation needed]One example of this isYouTube, where thousands of diverse videos – whose content, production value or lack of popularity make them inappropriate for traditional television – are easily accessible to a wide range of viewers.
The intersection of viral marketing, online communities and new technologies that operate within the long tail of consumers and business is described in the novel byWilliam Gibson,Pattern Recognition.
In military thinking,John Robbapplies the long tail to the developments in insurgency and terrorist movements, showing how technology and networking allows the long tail of disgruntled groups and criminals to take on the nation state and have a chance to win.
A 2008 study byAnita Elberse, professor of business administration atHarvard Business School, calls the long tail theory into question, citing sales data which shows that the Web magnifies the importance of blockbuster hits.[38]On his blog, Chris Anderson responded to the study, praising Elberse and the academic rigor with which she explores the issue but drawing a distinction between their respective interpretations of where the "head" and "tail" begin. Elberse defined head and tail using percentages, while Anderson uses absolute numbers.[39]Similar results were published bySerguei Netessineand Tom F. Tan, who suggest that head and tail should be defined by percentages rather than absolute numbers.[40]
Also in 2008, a sales analysis of an unnamed UK digital music service by economistWill Pageand high-tech entrepreneur Andrew Bud found that sales exhibited alog-normal distributionrather than a power law; they reported that 80% of the music tracks available sold no copies at all over a one-year period. Anderson responded by stating that the study's findings are difficult to assess without access to its data.[41][42]
|
https://en.wikipedia.org/wiki/Long_tail
|
In ascale-free networkthedegree distributionfollows apower lawfunction. In some empirical examples this power-law fits the degree distribution well only in the high degree region; in some small degree nodes the empirical degree-distribution deviates from it. See for example the network of scientific citations.[1]This deviation of the observed degree-distribution from the theoretical prediction at the low-degree region is often referred aslow-degree saturation.[2]The empirical degree-distribution typically deviates downward from the power-law function fitted on higher order nodes, which means low-degree nodes are less frequent in real data than what is predicted by theBarabási–Albert model.[3]
One of the key assumptions of theBA modelispreferential attachment. It states, the probability of acquiring a new link from a new entrant node is proportional to the degree of each node. In other words, every new entrant favors to connect to higher-degree nodes. Formally:
Π(ki)=ki∑jkj{\displaystyle \Pi \left(k_{i}\right)={\frac {k_{i}}{\sum _{j}k_{j}}}}
WhereΠ(ki){\displaystyle \Pi \left(k_{i}\right)}is the probability of acquiring a link by a node with degreek{\displaystyle k}.
With a slight modification of this rule low-degree saturation can be predicted easily, by adding a term calledinitial attractiveness(A{\displaystyle A}). This was first introduced by Dorogovtsev, Mendes and Samukhin in 2000.[4][5]
Π(ki)=A+kiA+∑jkj{\displaystyle \Pi \left(k_{i}\right)={\frac {A+k_{i}}{A+\sum \limits _{j}k_{j}}}}
With this modified attachment rule a low-degree node (with lowk{\displaystyle k}) has a higher probability to acquire new links compared to the original set-up. Thus it is moreattractive. Therefore, this handicap makes less likely the existence of small degree-nodes as it is observed in real data.
More formally this modifies the degree distribution as:
pk=C(k+A)−γ{\displaystyle p_{k}=C\left(k+A\right)^{-\gamma }}
As a side effect it also increases the exponent relative to the original BA model.
It is calledinitialattractiveness because in the BA framework every node grows in degree by time. And ask{\displaystyle k}goes large the significance of this fixed additive term(A){\displaystyle (A)}diminishes.
All the distinctive features of scale-free networks are due to the existence of extremely high degree nodes, often called "hubs". Their existence is predicted by the power-law distribution of the degrees. Low-degree saturation is a deviation from this theoretical degree distribution, since it characterize the low end of the degree distribution, it does not deny the existence of hubs. Therefore, a scale-free network with low-degree saturation can produce all the following characteristics:small-worldcharacteristic,robustness, low attack tolerance,spreading behavior.
If it is modeled via the BA model augmented by the initial attractiveness, then this solution reduces the size of hubs because it affects the exponent of the degree distribution positively relative to the original BA model.
Initial attractiveness
|
https://en.wikipedia.org/wiki/Low-degree_saturation
|
ThePareto distribution, named after the Italiancivil engineer,economist, andsociologistVilfredo Pareto,[2]is apower-lawprobability distributionthat is used in description ofsocial,quality control,scientific,geophysical,actuarial, and many other types of observable phenomena; the principle originally applied to describing thedistribution of wealthin a society, fitting the trend that a large portion of wealth is held by a small fraction of the population.[3][4]
ThePareto principleor "80:20 rule" stating that 80% of outcomes are due to 20% of causes was named in honour of Pareto, but the concepts are distinct, and only Pareto distributions with shape value (α)oflog45 ≈ 1.16precisely reflect it. Empirical observation has shown that this 80:20 distribution fits a wide range of cases, including natural phenomena[5]and human activities.[6][7]
IfXis arandom variablewith a Pareto (Type I) distribution,[8]then the probability thatXis greater than some numberx, i.e., thesurvival function(also called tail function), is given by
wherexmis the (necessarily positive) minimum possible value ofX, andαis a positive parameter. The type I Pareto distribution is characterized by ascale parameterxmand ashape parameterα, which is known as thetail index. If this distribution is used to model the distribution of wealth, then the parameterαis called thePareto index.
From the definition, thecumulative distribution functionof a Pareto random variable with parametersαandxmis
It follows (bydifferentiation) that theprobability density functionis
When plotted on linear axes, the distribution assumes the familiar J-shaped curve which approaches each of the orthogonal axesasymptotically. All segments of the curve are self-similar (subject to appropriate scaling factors). When plotted in alog–log plot, the distribution is represented by a straight line.
Thus, since the expectation does not converge on anopen intervalcontainingt=0{\displaystyle t=0}we say that the moment generating function does not exist.
The parameters may be solved for using themethod of moments.[9]
Theconditional probability distributionof a Pareto-distributed random variable, given the event that it is greater than or equal to a particular numberx1{\displaystyle x_{1}}exceedingxm{\displaystyle x_{\text{m}}}, is a Pareto distribution with the same Pareto indexα{\displaystyle \alpha }but with minimumx1{\displaystyle x_{1}}instead ofxm{\displaystyle x_{\text{m}}}:
This implies that the conditional expected value (if it is finite, i.e.α>1{\displaystyle \alpha >1}) is proportional tox1{\displaystyle x_{1}}:
In case of random variables that describe the lifetime of an object, this means that life expectancy is proportional to age, and is called theLindy effector Lindy's Law.[10]
SupposeX1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\dotsc }areindependent identically distributedrandom variableswhose probability distribution is supported on the interval[xm,∞){\displaystyle [x_{\text{m}},\infty )}for somexm>0{\displaystyle x_{\text{m}}>0}. Suppose that for alln{\displaystyle n}, the two random variablesmin{X1,…,Xn}{\displaystyle \min\{X_{1},\dotsc ,X_{n}\}}and(X1+⋯+Xn)/min{X1,…,Xn}{\displaystyle (X_{1}+\dotsb +X_{n})/\min\{X_{1},\dotsc ,X_{n}\}}are independent. Then the common distribution is a Pareto distribution.[citation needed]
Thegeometric mean(G) is[11]
Theharmonic mean(H) is[11]
The characteristic curved 'long tail' distribution, when plotted on a linear scale, masks the underlying simplicity of the function when plotted on alog-log graph, which then takes the form of a straight line with negative gradient: It follows from the formula for the probability density function that forx≥xm,
Sinceαis positive, the gradient −(α+ 1) is negative.
There is a hierarchy[8][12]of Pareto distributions known as Pareto Type I, II, III, IV, and Feller–Pareto distributions.[8][12][13]Pareto Type IV contains Pareto Type I–III as special cases. The Feller–Pareto[12][14]distribution generalizes Pareto Type IV.
The Pareto distribution hierarchy is summarized in the next table comparing thesurvival functions(complementary CDF).
Whenμ= 0, the Pareto distribution Type II is also known as theLomax distribution.[15]
In this section, the symbolxm, used before to indicate the minimum value ofx, is replaced byσ.
The shape parameterαis thetail index,μis location,σis scale,γis an inequality parameter. Some special cases of Pareto Type (IV) are
The finiteness of the mean, and the existence and the finiteness of the variance depend on the tail indexα(inequality indexγ). In particular, fractionalδ-moments are finite for someδ> 0, as shown in the table below, whereδis not necessarily an integer.
Feller[12][14]defines a Pareto variable by transformationU=Y−1− 1 of abeta random variable,Y, whose probability density function is
whereB( ) is thebeta function. If
thenWhas a Feller–Pareto distribution FP(μ,σ,γ,γ1,γ2).[8]
IfU1∼Γ(δ1,1){\displaystyle U_{1}\sim \Gamma (\delta _{1},1)}andU2∼Γ(δ2,1){\displaystyle U_{2}\sim \Gamma (\delta _{2},1)}are independentGamma variables, another construction of a Feller–Pareto (FP) variable is[16]
and we writeW~ FP(μ,σ,γ,δ1,δ2). Special cases of the Feller–Pareto distribution are
When a random variableY{\displaystyle Y}follows a pareto distribution, then its inverseX=1/Y{\displaystyle X=1/Y}follows a Power distribution.
Inverse Pareto distribution is equivalent to a Power distribution[17]
The Pareto distribution is related to theexponential distributionas follows. IfXis Pareto-distributed with minimumxmand indexα, then
isexponentially distributedwith rate parameterα. Equivalently, ifYis exponentially distributed with rateα, then
is Pareto-distributed with minimumxmand indexα.
This can be shown using the standard change-of-variable techniques:
The last expression is the cumulative distribution function of an exponential distribution with rateα.
Pareto distribution can be constructed by hierarchical exponential distributions.[18]Letϕ|a∼Exp(a){\displaystyle \phi |a\sim {\text{Exp}}(a)}andη|ϕ∼Exp(ϕ){\displaystyle \eta |\phi \sim {\text{Exp}}(\phi )}. Then we havep(η|a)=a(a+η)2{\displaystyle p(\eta |a)={\frac {a}{(a+\eta )^{2}}}}and, as a result,a+η∼Pareto(a,1){\displaystyle a+\eta \sim {\text{Pareto}}(a,1)}.
More in general, ifλ∼Gamma(α,β){\displaystyle \lambda \sim {\text{Gamma}}(\alpha ,\beta )}(shape-rate parametrization) andη|λ∼Exp(λ){\displaystyle \eta |\lambda \sim {\text{Exp}}(\lambda )}, thenβ+η∼Pareto(β,α){\displaystyle \beta +\eta \sim {\text{Pareto}}(\beta ,\alpha )}.
Equivalently, ifY∼Gamma(α,1){\displaystyle Y\sim {\text{Gamma}}(\alpha ,1)}andX∼Exp(1){\displaystyle X\sim {\text{Exp}}(1)}, thenxm(1+XY)∼Pareto(xm,α){\displaystyle x_{\text{m}}\!\left(1+{\frac {X}{Y}}\right)\sim {\text{Pareto}}(x_{\text{m}},\alpha )}.
The Pareto distribution andlog-normal distributionare alternative distributions for describing the same types of quantities. One of the connections between the two is that they are both the distributions of the exponential of random variables distributed according to other common distributions, respectively theexponential distributionandnormal distribution. (Seethe previous section.)
The Pareto distribution is a special case of thegeneralized Pareto distribution, which is a family of distributions of similar form, but containing an extra parameter in such a way that the support of the distribution is either bounded below (at a variable point), or bounded both above and below (where both are variable), with theLomax distributionas a special case. This family also contains both the unshifted and shiftedexponential distributions.
The Pareto distribution with scalexm{\displaystyle x_{m}}and shapeα{\displaystyle \alpha }is equivalent to the generalized Pareto distribution with locationμ=xm{\displaystyle \mu =x_{m}}, scaleσ=xm/α{\displaystyle \sigma =x_{m}/\alpha }and shapeξ=1/α{\displaystyle \xi =1/\alpha }and, conversely, one can get the Pareto distribution from the GPD by takingxm=σ/ξ{\displaystyle x_{m}=\sigma /\xi }andα=1/ξ{\displaystyle \alpha =1/\xi }ifξ>0{\displaystyle \xi >0}.
L>0{\displaystyle L>0}location(real)H>L{\displaystyle H>L}location(real)
Lα1−(LH)α⋅(αα−1)⋅(1Lα−1−1Hα−1),α≠1{\displaystyle {\frac {L^{\alpha }}{1-\left({\frac {L}{H}}\right)^{\alpha }}}\cdot \left({\frac {\alpha }{\alpha -1}}\right)\cdot \left({\frac {1}{L^{\alpha -1}}}-{\frac {1}{H^{\alpha -1}}}\right),\alpha \neq 1}
Lα1−(LH)α⋅(αα−2)⋅(1Lα−2−1Hα−2),α≠2{\displaystyle {\frac {L^{\alpha }}{1-\left({\frac {L}{H}}\right)^{\alpha }}}\cdot \left({\frac {\alpha }{\alpha -2}}\right)\cdot \left({\frac {1}{L^{\alpha -2}}}-{\frac {1}{H^{\alpha -2}}}\right),\alpha \neq 2}2H2L2H2−L2lnHL,α=2{\displaystyle {\frac {2{H}^{2}{L}^{2}}{{H}^{2}-{L}^{2}}}\ln {\frac {H}{L}},\alpha =2}
Lα1−(LH)α⋅α(Lk−α−Hk−α)(α−k),α≠j{\displaystyle {\frac {L^{\alpha }}{1-\left({\frac {L}{H}}\right)^{\alpha }}}\cdot {\frac {\alpha (L^{k-\alpha }-H^{k-\alpha })}{(\alpha -k)}},\alpha \neq j}
The bounded (or truncated) Pareto distribution has three parameters:α,LandH. As in the standard Pareto distributionαdetermines the shape.Ldenotes the minimal value, andHdenotes the maximal value.
Theprobability density functionis
whereL≤x≤H, andα> 0.
IfUisuniformly distributedon (0, 1), then applying inverse-transform method[19]
is a bounded Pareto-distributed.
The purpose of the Symmetric and Zero Symmetric Pareto distributions is to capture some special statistical distribution with a sharp probability peak and symmetric long probability tails. These two distributions are derived from the Pareto distribution. Long probability tails normally means that probability decays slowly, and can be used to fit a variety of datasets. But if the distribution has symmetric structure with two slow decaying tails, Pareto could not do it. Then Symmetric Pareto or Zero Symmetric Pareto distribution is applied instead.[20]
The Cumulative distribution function (CDF) of Symmetric Pareto distribution is defined as following:[20]
F(X)=P(x<X)={12(b2b−X)aX<b1−12(bX)aX≥b{\displaystyle F(X)=P(x<X)={\begin{cases}{\tfrac {1}{2}}({b \over 2b-X})^{a}&X<b\\1-{\tfrac {1}{2}}({\tfrac {b}{X}})^{a}&X\geq b\end{cases}}}
The corresponding probability density function (PDF) is:[20]
p(x)=aba2(b+|x−b|)a+1,X∈R{\displaystyle p(x)={ab^{a} \over 2(b+\left\vert x-b\right\vert )^{a+1}},X\in R}
This distribution has two parameters: a and b. It is symmetric about b. Then the mathematic expectation is b. When, it has variance as following:
E((x−b)2)=∫−∞∞(x−b)2p(x)dx=2b2(a−2)(a−1){\displaystyle E((x-b)^{2})=\int _{-\infty }^{\infty }(x-b)^{2}p(x)dx={2b^{2} \over (a-2)(a-1)}}
The CDF of Zero Symmetric Pareto (ZSP) distribution is defined as following:
F(X)=P(x<X)={12(bb−X)aX<01−12(bb+X)aX≥0{\displaystyle F(X)=P(x<X)={\begin{cases}{\tfrac {1}{2}}({b \over b-X})^{a}&X<0\\1-{\tfrac {1}{2}}({\tfrac {b}{b+X}})^{a}&X\geq 0\end{cases}}}
The corresponding PDF is:
p(x)=aba2(b+|x|)a+1,X∈R{\displaystyle p(x)={ab^{a} \over 2(b+\left\vert x\right\vert )^{a+1}},X\in R}
This distribution is symmetric about zero. Parameter a is related to the decay rate of probability and (a/2b) represents peak magnitude of probability.[20]
The univariate Pareto distribution has been extended to amultivariate Pareto distribution.[21]
Thelikelihood functionfor the Pareto distribution parametersαandxm, given an independentsamplex= (x1,x2, ...,xn), is
Therefore, the logarithmic likelihood function is
It can be seen thatℓ(α,xm){\displaystyle \ell (\alpha ,x_{\mathrm {m} })}is monotonically increasing withxm, that is, the greater the value ofxm, the greater the value of the likelihood function. Hence, sincex≥xm, we conclude that
To find theestimatorforα, we compute the corresponding partial derivative and determine where it is zero:
Thus themaximum likelihoodestimator forαis:
The expected statistical error is:[22]
Malik (1970)[23]gives the exact joint distribution of(x^m,α^){\displaystyle ({\hat {x}}_{\mathrm {m} },{\hat {\alpha }})}. In particular,x^m{\displaystyle {\hat {x}}_{\mathrm {m} }}andα^{\displaystyle {\hat {\alpha }}}areindependentandx^m{\displaystyle {\hat {x}}_{\mathrm {m} }}is Pareto with scale parameterxmand shape parameternα, whereasα^{\displaystyle {\hat {\alpha }}}has aninverse-gamma distributionwith shape and scale parametersn− 1 andnα, respectively.
Vilfredo Paretooriginally used this distribution to describe theallocation of wealthamong individuals since it seemed to show rather well the way that a larger portion of the wealth of any society is owned by a smaller percentage of the people in that society. He also used it to describe distribution of income.[4]This idea is sometimes expressed more simply as thePareto principleor the "80-20 rule" which says that 20% of the population controls 80% of the wealth.[24]As Michael Hudson points out (The Collapse of Antiquity[2023] p. 85 & n.7) "a mathematical corollary [is] that 10% would have 65% of the wealth, and 5% would have half the national wealth.” However, the 80-20 rule corresponds to a particular value ofα, and in fact, Pareto's data on British income taxes in hisCours d'économie politiqueindicates that about 30% of the population had about 70% of the income.[citation needed]Theprobability density function(PDF) graph at the beginning of this article shows that the "probability" or fraction of the population that owns a small amount of wealth per person is rather high, and then decreases steadily as wealth increases. (The Pareto distribution is not realistic for wealth for the lower end, however. In fact,net worthmay even be negative.) This distribution is not limited to describing wealth or income, but to many situations in which an equilibrium is found in the distribution of the "small" to the "large". The following examples are sometimes seen as approximately Pareto-distributed:
The Pareto distribution is a continuous probability distribution.Zipf's law, also sometimes called thezeta distribution, is a discrete distribution, separating the values into a simple ranking. Both are a simple power law with a negative exponent, scaled so that their cumulative distributions equal 1. Zipf's can be derived from the Pareto distribution if thex{\displaystyle x}values (incomes) are binned intoN{\displaystyle N}ranks so that the number of people in each bin follows a 1/rank pattern. The distribution is normalized by definingxm{\displaystyle x_{m}}so thatαxmα=1H(N,α−1){\displaystyle \alpha x_{\mathrm {m} }^{\alpha }={\frac {1}{H(N,\alpha -1)}}}whereH(N,α−1){\displaystyle H(N,\alpha -1)}is thegeneralized harmonic number. This makes Zipf's probability density function derivable from Pareto's.
wheres=α−1{\displaystyle s=\alpha -1}andx{\displaystyle x}is an integer representing rank from 1 to N where N is the highest income bracket. So a randomly selected person (or word, website link, or city) from a population (or language, internet, or country) hasf(x){\displaystyle f(x)}probability of rankingx{\displaystyle x}.
The "80/20 law", according to which 20% of all people receive 80% of all income, and 20% of the most affluent 20% receive 80% of that 80%, and so on, holds precisely when the Pareto index isα=log45=log105log104≈1.161{\displaystyle \alpha =\log _{4}5={\cfrac {\log _{10}5}{\log _{10}4}}\approx 1.161}. This result can be derived from theLorenz curveformula given below. Moreover, the following have been shown[34]to be mathematically equivalent:
This does not apply only to income, but also to wealth, or to anything else that can be modeled by this distribution.
This excludes Pareto distributions in which 0 <α≤ 1, which, as noted above, have an infinite expected value, and so cannot reasonably model income distribution.
Price's lawis sometimes offered as a property of or as similar to the Pareto distribution. However, the law only holds in the case thatα=1{\displaystyle \alpha =1}. Note that in this case, the total and expected amount of wealth are not defined, and the rule only applies asymptotically to random samples. The extended Pareto Principle mentioned above is a far more general rule.
TheLorenz curveis often used to characterize income and wealth distributions. For any distribution, the Lorenz curveL(F) is written in terms of the PDFfor the CDFFas
wherex(F) is the inverse of the CDF. For the Pareto distribution,
and the Lorenz curve is calculated to be
For0<α≤1{\displaystyle 0<\alpha \leq 1}the denominator is infinite, yieldingL=0. Examples of the Lorenz curve for a number of Pareto distributions are shown in the graph on the right.
According toOxfam(2016) the richest 62 people have as much wealth as the poorest half of the world's population.[35]We can estimate the Pareto index that would apply to this situation. Letting ε equal62/(7×109){\displaystyle 62/(7\times 10^{9})}we have:
or
The solution is thatαequals about 1.15, and about 9% of the wealth is owned by each of the two groups. But actually the poorest 69% of the world adult population owns only about 3% of the wealth.[36]
TheGini coefficientis a measure of the deviation of the Lorenz curve from the equidistribution line which is a line connecting [0, 0] and [1, 1], which is shown in black (α= ∞) in the Lorenz plot on the right. Specifically, the Gini coefficient is twice the area between the Lorenz curve and the equidistribution line. The Gini coefficient for the Pareto distribution is then calculated (forα≥1{\displaystyle \alpha \geq 1}) to be
(see Aaberge 2005).
Random samples can be generated usinginverse transform sampling. Given a random variateUdrawn from theuniform distributionon the unit interval [0, 1], the variateTgiven by
is Pareto-distributed.[37]
|
https://en.wikipedia.org/wiki/Pareto_distribution
|
Incontinuum mechanics, apower-law fluid, or theOstwald–de Waele relationship, is a type ofgeneralized Newtonian fluid. Thismathematicalrelationship is useful because of its simplicity, but only approximately describes the behaviour of a real non-Newtonian fluid. Power-law fluids can be subdivided into three different types of fluids based on the value of their flow behaviour index:pseudoplastic,Newtonian fluid, anddilatant. Afirst-order fluidis another name for a power-law fluid with exponential dependence ofviscosityontemperature. As aNewtonian fluidin a circular pipegive a quadratic velocity profile, a power-law fluid will result in a power-lawvelocity profile.
Incontinuum mechanics, a power-law fluid, or the Ostwald–de Waele relationship, is a type ofgeneralized Newtonian fluid(time-independentnon-Newtonian fluid) for which theshear stress,τ, is given by
where:
The quantity
represents anapparentoreffectiveviscosityas a function of the shear rate (SI unit Pa s). The value ofKandncan be obtained from the graph oflog(μeff){\textstyle \log(\mu _{\mathrm {eff} })}andlog(∂u∂y){\textstyle \log \left({\frac {\partial u}{\partial y}}\right)}. The slope line gives the value ofn– 1, from whichncan be calculated. The intercept atlog(∂u∂y)=0{\textstyle \log \left({\frac {\partial u}{\partial y}}\right)=0}gives the value oflog(K){\textstyle \log(K)}.
Also known as the Ostwald–de Waele power law afterWilhelm OstwaldandArmand de Waele,[1][2]thismathematicalrelationship is useful because of its simplicity, but only approximately describes the behaviour of a real non-Newtonian fluid. For example, ifnwere less than one, the power law predicts that the effective viscosity would decrease with increasing shear rate indefinitely, requiring a fluid with infinite viscosity at rest and zero viscosity as the shear rate approaches infinity, but a real fluid has both a minimum and a maximum effective viscosity that depend on thephysical chemistryat themolecularlevel. Therefore, the power law is only a good description of fluid behaviour across the range of shear rates to which the coefficients were fitted. There are a number of other models that better describe the entire flow behaviour of shear-dependent fluids, but they do so at the expense of simplicity, so the power law is still used to describe fluid behaviour, permit mathematical predictions, and correlate experimental data.
Power-law fluids can be subdivided into three different types of fluids based on the value of their flow behaviour index:
Pseudoplastic, orshear-thinningare those fluids whose behaviour is time independent and which have a lowerapparent viscosityat higher shear rates, and are usuallysolutionsof large,polymericmolecules in a solvent with smaller molecules. It is generally supposed that the large molecular chains tumble at random and affect large volumes of fluid under low shear, but that they gradually align themselves in the direction of increasing shear and produce less resistance.
A common household example of a strongly shear-thinning fluid is styling gel, which is primarily composed of water and a fixative such as a vinyl acetate/vinylpyrrolidone copolymer (PVP/PA). If one were to hold a sample of hair gel in one hand and a sample ofcorn syruporglycerinein the other, they would find that the hair gel is much harder to pour off the fingers (a low shear application), but that it produces much less resistance when rubbed between the fingers (a high shear application).[3]
This type of behavior is widely encountered in solutions or suspensions. In these cases, large molecules or fine particles form loosely bounded aggregates or alignment groupings that are stable and reproducible at any given shear rate. But these fluids rapidly and reversibly break down or reform with an increase or decrease in shear rate. Pseudo plastic fluids show this behavior over a wide range of shear rates; however often approach a limiting Newtonian behavior at very low and very high rates of shear. These Newtonian regions are characterized by the viscositiesμ0{\displaystyle \mu _{0}}andμ∞{\displaystyle \mu _{\infty }}respectively.
ANewtonian fluidis a power-law fluid with a behaviour index of 1, where the shear stress is directly proportional to the shear rate:
These fluids have a constant viscosity,μ, across all shear rates and include many of the most common fluids, such aswater, mostaqueous solutions,oils,corn syrup,glycerine,airand othergases.
While this holds true for relatively low shear rates, at high rates most oils in reality also behave in a non-Newtonian fashion and thin. Typical examples include oil films in automotive engine shell bearings and to a lesser extent in geartooth contacts.
Dilatant, orshear-thickeningfluids increase in apparent viscosity at higher shear rates.
They are in common use inviscous couplingsin automobiles. When both ends of the coupling are spinning at the same rotational speed, the viscosity of the dilatant fluid is minimal, but if the ends of the coupling differ in speed, the coupling fluid becomes very viscous. They are used to prevent all of the torque from going to one wheel when the traction on that wheel drops, e.g. when one wheel is on ice. The viscous coupling between the two driven wheels ensures that both wheels turn at the same rate, providing torque to the wheel that is not slipping. Viscous couplings are also used to keep the front axle and the rear axle spinning at the same rate in four-wheel drive passenger automobiles.
Dilatant fluids are rarely encountered in everyday situations. One common example is an uncooked paste ofcornstarchandwater, sometimes known asoobleck. Under high shear rates, the water is squeezed out from between thestarchmolecules, which are able to interact more strongly, enormously increasing the viscosity.
While not strictly a dilatant fluid,Silly Putty(viscoelasticfluid) is an example of a material that shares these viscosity characteristics.
Afirst-order fluidis another name for a power-law fluid with exponential dependence ofviscosityontemperature.
whereγ̇is theshear rate,Tis temperature andμ0,nandbare coefficients.
The model can be re-written as
Just like aNewtonian fluidin a circular pipe givesa quadratic velocity profile, a power-law fluid will result in a power-lawvelocity profile,
whereu(r) is the (radially) local axial velocity,dp/dzis the pressure gradient along the pipe, andRis the pipe radius.
|
https://en.wikipedia.org/wiki/Power-law_fluid
|
In applied probability theory, theSimon modelis a class ofstochastic modelsthat results in apower-lawdistribution function. It was proposed byHerbert A. Simon[1]to account for the wide range of empiricaldistributionsfollowing a power-law. It models the dynamics of a system of elements with associated counters (e.g., words and their frequencies in texts, or nodes in a network and their connectivityk{\displaystyle k}). In this model the dynamics of the system is based on constant growth via addition of new elements (new instances of words) as well as incrementing the counters (new occurrences of a word) at a rate proportional to their current values.
To model this type of network growth as described above, Bornholdt and Ebel[2]considered a network withn{\displaystyle n}nodes, and each node with connectivitieski{\displaystyle k_{i}},i=1,…,n{\displaystyle i=1,\ldots ,n}. These nodes
form classes[k]{\displaystyle [k]}off(k){\displaystyle f(k)}nodes with identical connectivityk{\displaystyle k}.
Repeat the following steps:
(i) With probabilityα{\displaystyle \alpha }add a new node and attach a link to it from an arbitrarily chosen node.
(ii) With probability1−α{\displaystyle 1-\alpha }add one link from an arbitrary node to a nodej{\displaystyle j}of class[k]{\displaystyle [k]}chosen with a probability proportional tokf(k){\displaystyle kf(k)}.
For this stochastic process, Simon found a stationary solution exhibitingpower-lawscaling,P(k)∝k−γ{\displaystyle P(k)\propto k^{-\gamma }}, with exponentγ=1+11−α.{\displaystyle \gamma =1+{\frac {1}{1-\alpha }}.}
(i)Barabási-Albert (BA) modelcan be mapped to the subclassα=1/2{\displaystyle \alpha =1/2}of Simon's model, when using the simpler probability for a node being
connected to another nodei{\displaystyle i}with connectivityki{\displaystyle k_{i}}P(newlinktoi)∝ki{\displaystyle P(\mathrm {new\ link\ to\ } i)\propto k_{i}}(same as the preferential attachment atBA model). In other words, the Simon model describes a general class of stochastic processes that can result in ascale-free network, appropriate to capturePareto and Zipf's laws.
(ii) The only free parameter of the modelα{\displaystyle \alpha }reflects the relative
growth of number of nodes versus the number of links. In generalα{\displaystyle \alpha }has small values; therefore, the scaling exponents can be predicted to beγ≈2{\displaystyle \gamma \approx 2}. For instance, Bornholdt and Ebel[2]studied the linking dynamics of World Wide Web, and predicted the scaling exponent asγ≈2.1{\displaystyle \gamma \approx 2.1}, which was consistent with observation.
(iii) The interest in the scale-free model comes from its ability to describe the topology of complex networks. The Simon model does not have an underlying network structure, as it was designed to describe events whose frequency follows apower-law. Thus network measures going beyond thedegree distributionsuch
as theaverage path length,spectral properties, andclustering coefficient, cannot be obtained from this mapping.
The Simon model is related togeneralized scale-free modelswith growth and preferential attachment properties. For more reference, see.[3][4]
|
https://en.wikipedia.org/wiki/Simon_model
|
α∈(0,2]{\displaystyle \alpha \in (0,2]}— stability parameterβ{\displaystyle \beta }∈ [−1, 1] — skewness parameter (note thatskewnessis undefined)c∈ (0, ∞) —scale parameter
x∈ [μ, +∞) ifα<1{\displaystyle \alpha <1}andβ=1{\displaystyle \beta =1}
x∈ (-∞,μ] ifα<1{\displaystyle \alpha <1}andβ=−1{\displaystyle \beta =-1}
exp[itμ−|ct|α(1−iβsgn(t)Φ)],{\displaystyle \exp \!{\Big [}\;it\mu -|c\,t|^{\alpha }\,(1-i\beta \operatorname {sgn}(t)\Phi )\;{\Big ]},}
Inprobability theory, adistributionis said to bestableif alinear combinationof twoindependentrandom variableswith this distribution has the same distribution,up tolocationandscaleparameters. A random variable is said to bestableif its distribution is stable. The stable distribution family is also sometimes referred to as theLévy alpha-stable distribution, afterPaul Lévy, the first mathematician to have studied it.[1][2]
Of the four parameters defining the family, most attention has been focused on the stability parameter,α{\displaystyle \alpha }(see panel). Stable distributions have0<α≤2{\displaystyle 0<\alpha \leq 2}, with the upper bound corresponding to thenormal distribution, andα=1{\displaystyle \alpha =1}to theCauchy distribution. The distributions have undefinedvarianceforα<2{\displaystyle \alpha <2}, and undefinedmeanforα≤1{\displaystyle \alpha \leq 1}. The importance of stable probability distributions is that they are "attractors" for properly normed sums of independent and identically distributed (iid) random variables. The normal distribution defines a family of stable distributions. By the classicalcentral limit theoremthe properly normed sum of a set of random variables, each with finite variance, will tend toward a normal distribution as the number of variables increases. Without the finite variance assumption, the limit may be a stable distribution that is not normal.Mandelbrotreferred to such distributions as "stable Paretian distributions",[3][4][5]afterVilfredo Pareto. In particular, he referred to those maximally skewed in the positive direction with1<α<2{\displaystyle 1<\alpha <2}as "Pareto–Lévy distributions",[1]which he regarded as better descriptions of stock and commodity prices than normal distributions.[6]
A non-degenerate distributionis a stable distribution if it satisfies the following property:
Since thenormal distribution, theCauchy distribution, and theLévy distributionall have the above property, it follows that they are special cases of stable distributions.
Such distributions form a four-parameter family of continuousprobability distributionsparametrized by location and scale parametersμandc, respectively, and two shape parametersβ{\displaystyle \beta }andα{\displaystyle \alpha }, roughly corresponding to measures of asymmetry and concentration, respectively (see the figures).
Thecharacteristic functionφ(t){\displaystyle \varphi (t)}of any probability distribution is theFourier transformof its probability density functionf(x){\displaystyle f(x)}. The density function is therefore the inverse Fourier transform of the characteristic function:[8]φ(t)=∫−∞∞f(x)eixtdx.{\displaystyle \varphi (t)=\int _{-\infty }^{\infty }f(x)e^{ixt}\,dx.}
Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be expressed analytically. A random variableXis called stable if its characteristic function can be written as[7][9]φ(t;α,β,c,μ)=exp(itμ−|ct|α(1−iβsgn(t)Φ)){\displaystyle \varphi (t;\alpha ,\beta ,c,\mu )=\exp \left(it\mu -|ct|^{\alpha }\left(1-i\beta \operatorname {sgn}(t)\Phi \right)\right)}wheresgn(t)is just thesignoftandΦ={tan(πα2)α≠1−2πlog|t|α=1{\displaystyle \Phi ={\begin{cases}\tan \left({\frac {\pi \alpha }{2}}\right)&\alpha \neq 1\\-{\frac {2}{\pi }}\log |t|&\alpha =1\end{cases}}}μ∈Ris a shift parameter,β∈[−1,1]{\displaystyle \beta \in [-1,1]}, called theskewness parameter, is a measure of asymmetry. Notice that in this context the usualskewnessis not well defined, as forα<2{\displaystyle \alpha <2}the distribution does not admit 2nd or highermoments, and the usual skewness definition is the 3rdcentral moment.
The reason this gives a stable distribution is that the characteristic function for the sum of two independent random variables equals the product of the two corresponding characteristic functions. Adding two random variables from a stable distribution gives something with the same values ofα{\displaystyle \alpha }andβ{\displaystyle \beta }, but possibly different values ofμandc.
Not every function is the characteristic function of a legitimate probability distribution (that is, one whosecumulative distribution functionis real and goes from 0 to 1 without decreasing), but the characteristic functions given above will be legitimate so long as the parameters are in their ranges. The value of the characteristic function at some valuetis the complex conjugate of its value at −tas it should be so that the probability distribution function will be real.
In the simplest caseβ=0{\displaystyle \beta =0}, the characteristic function is just astretched exponential function; the distribution is symmetric aboutμand is referred to as a (Lévy)symmetric alpha-stable distribution, often abbreviatedSαS.
Whenα<1{\displaystyle \alpha <1}andβ=1{\displaystyle \beta =1}, the distribution is supported on [μ, ∞).
The parameterc> 0 is a scale factor which is a measure of the width of the distribution whileα{\displaystyle \alpha }is the exponent or index of the distribution and specifies the asymptotic behavior of the distribution.
The parametrization of stable distributions is not unique. Nolan[10]tabulates 11 parametrizations seen in the literature and gives conversion formulas. The two most commonly used parametrizations are the one above (Nolan's "1") and the one immediately below (Nolan's "0").
The parametrization above is easiest to use for theoretical work, but its probability density is not continuous in the parameters atα=1{\displaystyle \alpha =1}.[11]A continuous parametrization, better for numerical work, is[7]φ(t;α,β,γ,δ)=exp(itδ−|γt|α(1−iβsgn(t)Φ)){\displaystyle \varphi (t;\alpha ,\beta ,\gamma ,\delta )=\exp \left(it\delta -|\gamma t|^{\alpha }\left(1-i\beta \operatorname {sgn}(t)\Phi \right)\right)}where:Φ={(|γt|1−α−1)tan(πα2)α≠1−2πlog|γt|α=1{\displaystyle \Phi ={\begin{cases}\left(|\gamma t|^{1-\alpha }-1\right)\tan \left({\tfrac {\pi \alpha }{2}}\right)&\alpha \neq 1\\-{\frac {2}{\pi }}\log |\gamma t|&\alpha =1\end{cases}}}
The ranges ofα{\displaystyle \alpha }andβ{\displaystyle \beta }are the same as before,γ(likec) should be positive, andδ(likeμ) should be real.
In either parametrization one can make a linear transformation of the random variable to get a random variable whose density isf(y;α,β,1,0){\displaystyle f(y;\alpha ,\beta ,1,0)}. In the first parametrization, this is done by defining the new variable:y={x−μγα≠1x−μγ−β2πlnγα=1{\displaystyle y={\begin{cases}{\frac {x-\mu }{\gamma }}&\alpha \neq 1\\{\frac {x-\mu }{\gamma }}-\beta {\frac {2}{\pi }}\ln \gamma &\alpha =1\end{cases}}}
For the second parametrization, simply usey=x−δγ{\displaystyle y={\frac {x-\delta }{\gamma }}}independent ofα{\displaystyle \alpha }. In the first parametrization, if the mean exists (that is,α>1{\displaystyle \alpha >1}) then it is equal toμ, whereas in the second parametrization when the mean exists it is equal toδ−βγtan(πα2).{\displaystyle \delta -\beta \gamma \tan \left({\tfrac {\pi \alpha }{2}}\right).}
A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a smooth (infinitely differentiable) density function.[7]Iff(x;α,β,c,μ){\displaystyle f(x;\alpha ,\beta ,c,\mu )}denotes the density ofXandYis the sum of independent copies ofX:Y=∑i=1Nki(Xi−μ){\displaystyle Y=\sum _{i=1}^{N}k_{i}(X_{i}-\mu )}thenYhas the density1sf(y/s;α,β,c,0){\displaystyle {\tfrac {1}{s}}f(y/s;\alpha ,\beta ,c,0)}withs=(∑i=1N|ki|α)1α{\displaystyle s=\left(\sum _{i=1}^{N}|k_{i}|^{\alpha }\right)^{\frac {1}{\alpha }}}
The asymptotic behavior is described, forα<2{\displaystyle \alpha <2}, by:[7]f(x)∼1|x|1+α(cα(1+sgn(x)β)sin(πα2)Γ(α+1)π){\displaystyle f(x)\sim {\frac {1}{|x|^{1+\alpha }}}\left(c^{\alpha }(1+\operatorname {sgn}(x)\beta )\sin \left({\frac {\pi \alpha }{2}}\right){\frac {\Gamma (\alpha +1)}{\pi }}\right)}where Γ is theGamma function(except that whenα≥1{\displaystyle \alpha \geq 1}andβ=±1{\displaystyle \beta =\pm 1}, the tail does not vanish to the left or right, resp., ofμ, although the above expression is 0). This "heavy tail" behavior causes the variance of stable distributions to be infinite for allα<2{\displaystyle \alpha <2}. This property is illustrated in the log–log plots below.
Whenα=2{\displaystyle \alpha =2}, the distribution is Gaussian (see below), with tails asymptotic to exp(−x2/4c2)/(2c√π).
Whenα<1{\displaystyle \alpha <1}andβ=1{\displaystyle \beta =1}, the distribution is supported on [μ, ∞). This family is calledone-sided stable distribution.[12]Its standard distribution (μ= 0) is defined as
Letq=exp(−iαπ/2){\displaystyle q=\exp(-i\alpha \pi /2)}, its characteristic function isφ(t;α)=exp(−q|t|α){\displaystyle \varphi (t;\alpha )=\exp \left(-q|t|^{\alpha }\right)}. Thus the integral form of its PDF is (note:Im(q)<0{\displaystyle \operatorname {Im} (q)<0})Lα(x)=1πℜ[∫−∞∞eitxe−q|t|αdt]=2π∫0∞e−Re(q)tαsin(tx)sin(−Im(q)tα)dt,or=2π∫0∞e−Re(q)tαcos(tx)cos(Im(q)tα)dt.{\displaystyle {\begin{aligned}L_{\alpha }(x)&={\frac {1}{\pi }}\Re \left[\int _{-\infty }^{\infty }e^{itx}e^{-q|t|^{\alpha }}\,dt\right]\\&={\frac {2}{\pi }}\int _{0}^{\infty }e^{-\operatorname {Re} (q)\,t^{\alpha }}\sin(tx)\sin(-\operatorname {Im} (q)\,t^{\alpha })\,dt,{\text{ or }}\\&={\frac {2}{\pi }}\int _{0}^{\infty }e^{-{\text{Re}}(q)\,t^{\alpha }}\cos(tx)\cos(\operatorname {Im} (q)\,t^{\alpha })\,dt.\end{aligned}}}
The double-sine integral is more effective for very smallx{\displaystyle x}.
Consider the Lévy sumY=∑i=1NXi{\textstyle Y=\sum _{i=1}^{N}X_{i}}whereXi∼Lα(x){\textstyle X_{i}\sim L_{\alpha }(x)}, thenYhas the density1νLα(xν){\textstyle {\frac {1}{\nu }}L_{\alpha }\left({\frac {x}{\nu }}\right)}whereν=N1/α{\textstyle \nu =N^{1/\alpha }}. Setx=1{\textstyle x=1}to arrive at thestable count distribution.[13]Its standard distribution is defined as
The stable count distribution is theconjugate priorof the one-sided stable distribution. Its location-scale family is defined as
It is also a one-sided distribution supported on[ν0,∞){\displaystyle [\nu _{0},\infty )}. The location parameterν0{\displaystyle \nu _{0}}is the cut-off location, whileθ{\displaystyle \theta }defines its scale.
Whenα=12{\textstyle \alpha ={\frac {1}{2}}},L12(x){\textstyle L_{\frac {1}{2}}(x)}is theLévy distributionwhich is an inverse gamma distribution. ThusN12(ν;ν0,θ){\displaystyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )}is a shiftedgamma distributionof shape 3/2 and scale4θ{\displaystyle 4\theta },
Its mean isν0+6θ{\displaystyle \nu _{0}+6\theta }and its standard deviation is24θ{\displaystyle {\sqrt {24}}\theta }. It is hypothesized thatVIXis distributed likeN12(ν;ν0,θ){\textstyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )}withν0=10.4{\displaystyle \nu _{0}=10.4}andθ=1.6{\displaystyle \theta =1.6}(See Section 7 of[13]). Thus thestable count distributionis the first-order marginal distribution of a volatility process. In this context,ν0{\displaystyle \nu _{0}}is called the "floor volatility".
Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution, (Section 2.4 of[13])
Letx=1/ν{\displaystyle x=1/\nu }, and one can decompose the integral on the left hand side as aproduct distributionof a standardLaplace distributionand a standard stable count distribution,
This is called the "lambda decomposition" (See Section 4 of[13]) since the right hand side was named as "symmetric lambda distribution" in Lihn's former works. However, it has several more popular names such as "exponential power distribution", or the "generalized error/normal distribution", often referred to whenα>1{\displaystyle \alpha >1}.
The n-th moment ofNα(ν){\displaystyle {\mathfrak {N}}_{\alpha }(\nu )}is the−(n+1){\displaystyle -(n+1)}-th moment ofLα(x){\displaystyle L_{\alpha }(x)}, and all positive moments are finite.
Stable distributions are closed under convolution for a fixed value ofα{\displaystyle \alpha }. Since convolution is equivalent to multiplication of the Fourier-transformed function, it follows that the product of two stable characteristic functions with the sameα{\displaystyle \alpha }will yield another such characteristic function. The product of two stable characteristic functions is given by:exp(itμ1+itμ2−|c1t|α−|c2t|α+iβ1|c1t|αsgn(t)Φ+iβ2|c2t|αsgn(t)Φ){\displaystyle \exp \left(it\mu _{1}+it\mu _{2}-|c_{1}t|^{\alpha }-|c_{2}t|^{\alpha }+i\beta _{1}|c_{1}t|^{\alpha }\operatorname {sgn}(t)\Phi +i\beta _{2}|c_{2}t|^{\alpha }\operatorname {sgn}(t)\Phi \right)}
SinceΦis not a function of theμ,corβ{\displaystyle \beta }variables it follows that these parameters for the convolved function are given by:μ=μ1+μ2c=(c1α+c2α)1αβ=β1c1α+β2c2αc1α+c2α{\displaystyle {\begin{aligned}\mu &=\mu _{1}+\mu _{2}\\c&=\left(c_{1}^{\alpha }+c_{2}^{\alpha }\right)^{\frac {1}{\alpha }}\\[6pt]\beta &={\frac {\beta _{1}c_{1}^{\alpha }+\beta _{2}c_{2}^{\alpha }}{c_{1}^{\alpha }+c_{2}^{\alpha }}}\end{aligned}}}
In each case, it can be shown that the resulting parameters lie within the required intervals for a stable distribution.
The Generalized Central Limit Theorem (GCLT) was an effort of multiple mathematicians (Berstein,Lindeberg,Lévy,Feller,Kolmogorov, and others) over the period from 1920 to 1937.[14]The first published complete proof (in French) of the GCLT was in 1937 byPaul Lévy.[15]An English language version of the complete proof of the GCLT is available in the translation ofGnedenkoandKolmogorov's 1954 book.[16]
The statement of the GCLT is as follows:[10]
In other words, if sums of independent, identically distributed random variables converge in distribution to someZ, thenZmust be a stable distribution.
There is no general analytic solution for the form off(x). There are, however, three special cases which can be expressed in terms ofelementary functionsas can be seen by inspection of thecharacteristic function:[7][9][17]
Note that the above three distributions are also connected, in the following way: A standard Cauchy random variable can be viewed as amixtureof Gaussian random variables (all with mean zero), with the variance being drawn from a standard Lévy distribution. And in fact this is a special case of a more general theorem (See p. 59 of[18]) which allows any symmetric alpha-stable distribution to be viewed in this way (with the alpha parameter of the mixture distribution equal to twice the alpha parameter of the mixing distribution—and the beta parameter of the mixing distribution always equal to one).
A general closed form expression for stable PDFs with rational values ofα{\displaystyle \alpha }is available in terms ofMeijer G-functions.[19]Fox H-Functions can also be used to express the stable probability density functions. For simple rational numbers, the closed form expression is often in terms of less complicatedspecial functions. Several closed form expressions having rather simple expressions in terms of special functions are available. In the table below, PDFs expressible by elementary functions are indicated by anEand those that are expressible by special functions are indicated by ans.[18]
Some of the special cases are known by particular names:
Also, in the limit ascapproaches zero or as α approaches zero the distribution will approach aDirac delta functionδ(x−μ).
The stable distribution can be restated as the real part of a simpler integral:[20]f(x;α,β,c,μ)=1πℜ[∫0∞eit(x−μ)e−(ct)α(1−iβΦ)dt].{\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\int _{0}^{\infty }e^{it(x-\mu )}e^{-(ct)^{\alpha }(1-i\beta \Phi )}\,dt\right].}
Expressing the second exponential as aTaylor series, this leads to:f(x;α,β,c,μ)=1πℜ[∫0∞eit(x−μ)∑n=0∞(−qtα)nn!dt]{\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\int _{0}^{\infty }e^{it(x-\mu )}\sum _{n=0}^{\infty }{\frac {(-qt^{\alpha })^{n}}{n!}}\,dt\right]}whereq=cα(1−iβΦ){\displaystyle q=c^{\alpha }(1-i\beta \Phi )}. Reversing the order of integration and summation, and carrying out the integration yields:f(x;α,β,c,μ)=1πℜ[∑n=1∞(−q)nn!(ix−μ)αn+1Γ(αn+1)]{\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\sum _{n=1}^{\infty }{\frac {(-q)^{n}}{n!}}\left({\frac {i}{x-\mu }}\right)^{\alpha n+1}\Gamma (\alpha n+1)\right]}which will be valid forx≠μand will converge for appropriate values of the parameters. (Note that then= 0 term which yields adelta functioninx−μhas therefore been dropped.) Expressing the first exponential as a series will yield another series in positive powers ofx−μwhich is generally less useful.
For one-sided stable distribution, the above series expansion needs to be modified, sinceq=exp(−iαπ/2){\displaystyle q=\exp(-i\alpha \pi /2)}andqiα=1{\displaystyle qi^{\alpha }=1}. There is no real part to sum. Instead, the integral of the characteristic function should be carried out on the negative axis, which yields:[21][12]Lα(x)=1πℜ[∑n=1∞(−q)nn!(−ix)αn+1Γ(αn+1)]=1π∑n=1∞−sin(n(α+1)π)n!(1x)αn+1Γ(αn+1){\displaystyle {\begin{aligned}L_{\alpha }(x)&={\frac {1}{\pi }}\Re \left[\sum _{n=1}^{\infty }{\frac {(-q)^{n}}{n!}}\left({\frac {-i}{x}}\right)^{\alpha n+1}\Gamma (\alpha n+1)\right]\\&={\frac {1}{\pi }}\sum _{n=1}^{\infty }{\frac {-\sin(n(\alpha +1)\pi )}{n!}}\left({\frac {1}{x}}\right)^{\alpha n+1}\Gamma (\alpha n+1)\end{aligned}}}
In addition to the existingtests for normalityand subsequentparameter estimation, a general method which relies on the quantiles was developed by McCulloch and works for both symmetric and skew stable distributions and stability parameter0.5<α≤2{\displaystyle 0.5<\alpha \leq 2}.[22]
There are no analytic expressions for the inverseF−1(x){\displaystyle F^{-1}(x)}nor the CDFF(x){\displaystyle F(x)}itself, so the inversion method cannot be used to generate stable-distributed variates.[11][13]Other standard approaches like the rejection method would require tedious computations. An elegant and efficient solution was proposed by Chambers, Mallows and Stuck (CMS),[23]who noticed that a certain integral formula[24]yielded the following algorithm:[25]
This algorithm yields a random variableX∼Sα(β,1,0){\displaystyle X\sim S_{\alpha }(\beta ,1,0)}. For a detailed proof see.[26]
To simulate a stable random variable for all admissible values of the parametersα{\displaystyle \alpha },c{\displaystyle c},β{\displaystyle \beta }andμ{\displaystyle \mu }use the following property: IfX∼Sα(β,1,0){\displaystyle X\sim S_{\alpha }(\beta ,1,0)}thenY={cX+μα≠1cX+2πβclogc+μα=1{\displaystyle Y={\begin{cases}cX+\mu &\alpha \neq 1\\cX+{\frac {2}{\pi }}\beta c\log c+\mu &\alpha =1\end{cases}}}isSα(β,c,μ){\displaystyle S_{\alpha }(\beta ,c,\mu )}. Forα=2{\displaystyle \alpha =2}(andβ=0{\displaystyle \beta =0}) the CMS method reduces to the well knownBox-Muller transformfor generatingGaussianrandom variables.[27]While other approaches have been proposed in the literature, including application of Bergström[28]and LePage[29]series expansions, the CMS method is regarded as the fastest and the most accurate.
Stable distributions owe their importance in both theory and practice to the generalization of thecentral limit theoremto random variables without second (and possibly first) ordermomentsand the accompanyingself-similarityof the stable family. It was the seeming departure from normality along with the demand for a self-similar model for financial data (i.e. the shape of the distribution for yearly asset price changes should resemble that of the constituent daily or monthly price changes) that ledBenoît Mandelbrotto propose that cotton prices follow an alpha-stable distribution withα{\displaystyle \alpha }equal to 1.7.[6]Lévy distributionsare frequently found in analysis ofcritical behaviorand financial data.[9][30]
They are also found inspectroscopyas a general expression for a quasistaticallypressure broadened spectral line.[20]
The Lévy distribution of solar flare waiting time events (time between flare events) was demonstrated forCGROBATSE hard x-ray solar flares in December 2001. Analysis of the Lévy statistical signature revealed that two different memory signatures were evident; one related to the solar cycle and the second whose origin appears to be associated with a localized or combination of localized solar active region effects.[31]
A number of cases of analytically expressible stable distributions are known. Let the stable distribution be expressed byf(x;α,β,c,μ){\displaystyle f(x;\alpha ,\beta ,c,\mu )}, then:
|
https://en.wikipedia.org/wiki/Stable_distribution
|
Stevens' power lawis an empirical relationship inpsychophysicsbetween an increased intensity or strength in a physical stimulus and the perceivedmagnitudeincrease in the sensation created by the stimulus. It is often considered to supersede theWeber–Fechner law, which is based on a logarithmic relationship between stimulus and sensation, because the power law describes a wider range of sensory comparisons, down to zero intensity.[1]
The theory is named after psychophysicistStanley Smith Stevens(1906–1973). Although the idea of apower lawhad been suggested by 19th-century researchers, Stevens is credited with reviving the law and publishing a body of psychophysical data to support it in 1957.
The general form of the law is
whereIis the intensity or strength of the stimulus in physical units (energy, weight, pressure, mixture proportions, etc.), ψ(I) is the magnitude of the sensation evoked by the stimulus,ais an exponent that depends on the type of stimulation or sensory modality, andkis aproportionalityconstant that depends on the units used.
A distinction has been made between localpsychophysics, where stimuli can only be discriminated with a probability around 50%, and global psychophysics, where the stimuli can be discriminated correctly with near certainty (Luce& Krumhansl, 1988). The Weber–Fechner law and methods described byL. L. Thurstoneare generally applied in local psychophysics, whereas Stevens' methods are usually applied in global psychophysics.
The adjacent table lists the exponents reported by Stevens.
The principal methods used by Stevens to measure the perceived intensity of a stimulus weremagnitude estimationandmagnitude production. In magnitude estimation with a standard, the experimenter presents a stimulus called astandardand assigns it a number called themodulus. For subsequent stimuli, subjects report numerically their perceived intensity relative to the standard so as to preserve the ratio between the sensations and the numerical estimates (e.g., a sound perceived twice as loud as the standard should be given a number twice the modulus). In magnitude estimation without a standard (usually justmagnitude estimation), subjects are free to choose their own standard, assigning any number to the first stimulus and all subsequent ones with the only requirement being that the ratio between sensations and numbers is preserved. In magnitude production a number and a reference stimulus is given and subjects produce a stimulus that is perceived as that number times the reference. Also used iscross-modality matching, which generally involves subjects altering the magnitude of one physical quantity, such as the brightness of a light, so that its perceived intensity is equal to the perceived intensity of another type of quantity, such as warmth or pressure.
Stevens generally collected magnitude estimation data from multiple observers, averaged the data across subjects, and then fitted a power function to the data. Because the fit was generally reasonable, he concluded the power law was correct.
A principal criticism has been that Stevens' approach provides neither a direct test of the power law itself nor the underlying assumptions of the magnitude estimation/production method: it simply fits curves to data points. In addition, the power law can be deduced mathematically from the Weber-Fechner logarithmic function (Mackay, 1963[2]), and the relation makes predictions consistent with data (Staddon, 1978[3]). As with all psychometric studies, Stevens' approach ignores individual differences in the stimulus-sensation relationship, and there are generally large individual differences in this relationship that averaging the data will obscure (Greem & Luce 1974).
Stevens' main assertion was that using magnitude estimations/productions respondents were able to make judgements on aratio scale(i.e., ifxandyare values on a given ratio scale, then there exists a constantksuch thatx=ky). In the context ofaxiomaticpsychophysics, (Narens 1996) formulated a testable property capturing the implicit underlying assumption this assertion entailed. Specifically, for two proportionspandq, and three stimuli,x,y,z, ifyis judgedptimesx,zis judgedqtimesy, thent=pqtimesxshould be equal toz. This amounts to assuming that respondents interpret numbers in a veridical way. This property was unambiguously rejected (Ellermeier & Faulhammer 2000,Zimmer 2005). Without assuming veridical interpretation of numbers, (Narens 1996) formulated another property that, if sustained, meant that respondents could make ratio scaled judgments, namely, ifyis judgedptimesx,zis judgedqtimesy, and ify'is judgedqtimesx,z'is judgedptimesy', thenzshould equalz'. This property has been sustained in a variety of situations (Ellermeier & Faulhammer 2000,Zimmer 2005).
Critics of the power law also point out that the validity of the law is contingent on the measurement of perceived stimulus intensity that is employed in the relevant experiments.Luce (2002), under the condition that respondents' numerical distortion function and the psychophysical functions could be separated, formulated a behavioral condition equivalent to the psychophysical function being a power function. This condition was confirmed for just over half the respondents, and the power form was found to be a reasonable approximation for the rest (Steingrimsson & Luce 2006).
It has also been questioned, particularly in terms ofsignal detection theory, whether any given stimulus is actually associated with a particular andabsoluteperceived intensity; i.e. one that is independent of contextual factors and conditions. Consistent with this, Luce (1990, p. 73) observed that "by introducing contexts such as background noise in loudness judgements, the shape of the magnitude estimation functions certainly deviates sharply from a power function". Indeed, nearly all sensory judgments can be changed by the context in which a stimulus is perceived.
|
https://en.wikipedia.org/wiki/Stevens%27s_power_law
|
AnL-systemorLindenmayer systemis aparallelrewriting systemand a type offormal grammar. An L-system consists of analphabetof symbols that can be used to makestrings, a collection ofproduction rulesthat expand each symbol into some larger string of symbols, an initial "axiom" string from which to begin construction, and a mechanism for translating the generated strings into geometric structures. L-systems were introduced and developed in 1968 byAristid Lindenmayer, a Hungarian theoreticalbiologistandbotanistat theUniversity of Utrecht.[1]Lindenmayer used L-systems to describe the behaviour of plant cells and to model the growth processes ofplant development. L-systems have also been used to model the morphology of a variety of organisms[2]and can be used to generate self-similarfractals.
As a biologist, Lindenmayer worked withyeastand filamentousfungiand studied the growth patterns of various types ofbacteria, such as the cyanobacteriaAnabaena catenula. Originally, the L-systems were devised to provide a formal description of the development of such simple multicellular organisms, and to illustrate the neighbourhood relationships between plant cells. Later on, this system was extended to describe higher plants and complex branching structures.
Therecursivenature of the L-system rules leads toself-similarityand thereby,fractal-like forms are easy to describe with an L-system. Plant models and natural-looking organic forms are easy to define, as by increasing the recursion level the form slowly 'grows' and becomes more complex. Lindenmayer systems are also popular in the generation ofartificial life.
L-system grammars are very similar to thesemi-Thue grammar(seeChomsky hierarchy). L-systems are now commonly known asparametricL systems, defined as atuple
where
The rules of the L-system grammar are applied iteratively starting from the initial state. As many rules as possible are applied simultaneously, per iteration. The fact that each iteration employs as many rules as possible differentiates an L-system from aformal languagegenerated by aformal grammar, which applies only one rule per iteration. If the production rules were to be applied only one at a time, one would quite simply generate a string in a language, and all such sequences of applications would produce the language specified by the grammar. There are some strings in some languages, however, that cannot be generated if the grammar is treated as an L-system rather than a language specification. For example,[3]suppose there is a rule S→SS in a grammar. If productions are done one at a time, then starting from S, we can get first SS, and then, applying the rule again, SSS. However, if all applicable rules are applied at every step, as in an L-system, then we cannot get this sentential form. Instead, the first step would give us SS, but the second would apply the rule twice, giving us SSSS. Thus, the set of strings produced by an L-systems from a given grammar is a subset of the formal language defined by the grammar, and if we take a language to be defined as a set of strings, this means that a given L-system is effectively a subset of the formal language defined by the L-system's grammar.
An L-system iscontext-freeif each production rule refers only to an individual symbol and not to its neighbours. Context-free L-systems are thus specified by acontext-free grammar. If a rule depends not only on a single symbol but also on its neighbours, it is termed acontext-sensitiveL-system.
If there is exactly one production for each symbol, then the L-system is said to bedeterministic(a deterministic context-free L-system is popularly called aD0L system). If there are several, and each is chosen with a certain probability during each iteration, then it is astochasticL-system.
Using L-systems for generating graphical images requires that the symbols in the model refer to elements of a drawing on the computer screen. For example, the programFractintusesturtle graphics(similar to those in theLogo programming language) to produce screen images. It interprets each constant in an L-system model as a turtle command.
Lindenmayer's original L-system for modelling the growth of algae.
which produces:
The result is the sequence ofFibonacci words. If one counts the length of each string, theFibonacci sequenceof numbers is obtained (skipping the first 1, due to the choice of axiom):
If it is not desired to skip the first 1, axiomBcan be used. That would place aBnode before the topmost node (A) of the graph above.
For each string, if one counts thek-th position from the left end of the string, the value is determined by whether a multiple of thegolden ratiofalls within the interval(k−1,k){\displaystyle (k-1,k)}. The ratio of A to B likewise converges to the golden mean.
This example yields the same result (in terms of the length of each string, not the sequence ofAs andBs) if the rule (A→AB) is replaced with (A→BA), except that the strings are mirrored.
This sequence is alocally catenative sequencebecauseG(n)=G(n−1)G(n−2){\displaystyle G(n)=G(n-1)G(n-2)}, whereG(n){\displaystyle G(n)}is then-th generation.
The shape is built byrecursivelyfeeding the axiom through the production rules. Each character of the input string is checked against the rule list to determine which character or string to replace it with in the output string. In this example, a '1' in the input string becomes '11' in the output string, while '[' remains the same. Applying this to the axiom of '0', one gets:
It can be seen that this string quickly grows in size and complexity. This string can be drawn as an image by usingturtle graphics, where each symbol is assigned a graphical operation for the turtle to perform. For example, in the sample above, the turtle may be given the following instructions:
The push and pop refer to aLIFOstack (more technical grammar would have separate symbols for "push position" and "turn left"). When the turtle interpretation encounters a '[', the current position and angle are saved, and are then restored when the interpretation encounters a ']'. If multiple values have been "pushed," then a "pop" restores the most recently saved values. Applying the graphical rules listed above to the earlier recursion, one gets:
LetAmean "draw forward" andBmean "move forward".
This produces the famousCantor's fractal seton a real straight lineR.
A variant of theKoch curvewhich uses only right angles.
Here, F means "draw forward", + means "turn left 90°", and − means "turn right 90°" (seeturtle graphics).
TheSierpinski triangledrawn using an L-system.
Here, F and G both mean "draw forward", + means "turn left by angle", and − means "turn right by angle".
It is also possible to approximate theSierpinski triangleusing aSierpiński arrowhead curveL-system.
Here, A and B both mean "draw forward", + means "turn left by angle", and − means "turn right by angle" (seeturtle graphics).
Thedragon curvedrawn using an L-system.
Here, F and G both mean "draw forward", + means "turn left by angle", and − means "turn right by angle".
First one needs to initialize an empty stack. This follows the LIFO (Last in, First Out) method to add and remove elements.
Here, F means "draw forward", − means "turn right 25°", and + means "turn left 25°". X does not correspond to any drawing action and is used to control the evolution of the curve. The square bracket "[" corresponds to saving the current values for position and angle, so the position and angle are pushed to the top of the stack, when the "]" token is encountered, the stack is popped and the position and angle are reset. Every "[" comes before every "]" token.
A number of elaborations on this basic L-system technique have been developed which can be used in conjunction with each other. Among these arestochastic grammars,context sensitive grammars, and parametric grammars.
The grammar model we have discussed thus far has been deterministic—that is, given any symbol in the grammar's alphabet, there has been exactly one production rule, which is always chosen, and always performs the same conversion. One alternative is to specify more than one production rule for a symbol, giving each aprobabilityof occurring. For example, in the grammar of Example 2, we could change the rule for rewriting "0" from:
to a probabilistic rule:
Under this production, whenever a "0" is encountered during string rewriting, there would be a 50% chance it would behave as previously described, and a 50% chance it would not change during production. When a stochastic grammar is used in anevolutionarycontext, it is advisable to incorporate arandomseed into thegenotype, so that the stochastic properties of the image remain constant between generations.
A context sensitive production rule looks not only at the symbol it is modifying, but the symbols on the string appearing before and after it. For instance, the production rule:
transforms "a" to "aa", but only if the "a" occurs between a "b" and a "c" in the input string:
As with stochastic productions, there are multiple productions to handle symbols in different contexts. If no production rule can be found for a given context, the identity production is assumed, and the symbol does not change on transformation. If context-sensitive and context-free productions both exist within the same grammar, the context-sensitive production is assumed to take precedence when it is applicable.
In a parametric grammar, each symbol in the alphabet has a parameter list associated with it. A symbol coupled with its parameter list is called a module, and a string in a parametric grammar is a series of modules. An example string might be:
The parameters can be used by the drawing functions, and also by the production rules. The production rules can use the parameters in two ways: first, in a conditional statement determining whether the rule will apply, and second, the production rule can modify the actual parameters. For example, look at:
The module a(x,y) undergoes transformation under this production rule if the conditional x=0 is met. For example, a(0,2) would undergo transformation, and a(1,2) would not.
In the transformation portion of the production rule, the parameters as well as entire modules can be affected. In the above example, the module b(x,y) is added to the string, with initial parameters (2,3). Also, the parameters of the already existing module are transformed. Under the above production rule,
Becomes
as the "x" parameter of a(x,y) is explicitly transformed to a "1" and the "y" parameter of a is incremented by one.
Parametric grammars allow line lengths and branching angles to be determined by the grammar, rather than the turtle interpretation methods. Also, if age is given as a parameter for a module, rules can change depending on the age of a plant segment, allowing animations of the entire life-cycle of the tree to be created.
The bi-directional model explicitly separates the symbolic rewriting system from the shape assignment. For example, the string rewriting process in the Example 2 (Fractal tree) is independent on how graphical operations are assigned to the symbols. In other words, an infinite number of draw methods are applicable to a given rewriting system.
The bi-directional model consists of 1) a forward process constructs the derivation tree with production rules, and 2) a backward process realizes the tree with shapes in a stepwise manner (from leaves to the root). Each inverse-derivation step involves essential geometric-topological reasoning. With this bi-directional framework, design constraints and objectives are encoded in the grammar-shape translation. In architectural design applications, the bi-directional grammar features consistent interior connectivity and a rich spatial hierarchy.[4]
Historically, the construction of L-systems relied heavily on manual efforts by experts,[5][6][7]requiring detailed measurements, domain knowledge, and significant time investment. The process often involved analyzing biological structures and encoding their developmental rules into L-systems, symbol by symbol. This labor-intensive method made creating accurate models for complex processes both tedious and error-prone.
A notable example is Nishida's[7]work on Japanese Cypress trees, where he manually segmented branches from a series of images and identified 42 distinct growth mechanisms to construct a stochastic L-system. Despite the significant effort involved, the resulting system provided only an approximation of the tree's growth, illustrating the challenges of manually encoding such detailed biological processes. This arduous task was described as "tedious and intricate," underscoring the limitations of manual approaches.
The challenges of manual L-system construction are also well-documented in The Algorithmic Beauty of Plants[6]by Przemyslaw Prusinkiewicz and Aristid Lindenmayerd. The book demonstrates how L-systems can elegantly model plant growth and fractal patterns, but the examples often required expert intervention to define the necessary rules.
Manual construction was further constrained by the need for domain-specific expertise, as seen in other applications of L-systems beyond biology, such as architectural design and urban modeling.[8]In these fields, creating an accurate L-system required not only an understanding of the L-system formalism but also extensive knowledge of the domain being modeled.
The idea of automating L-system inference emerged to address the inefficiencies of manual methods, which often required extensive expertise, measurements, and trial-and-error processes. This automation aimed to enable the inference of L-systems directly from observational data, eliminating the need for manual encoding of rules.
Initial algorithms primarily targeted deterministic context-free L-systems (D0L-systems), which are among the simplest types of L-systems. These early efforts demonstrated the feasibility of automatic inference but were severely limited in scope, typically handling only systems with small alphabets and simple rewriting rules.[9][10][11][12]For instance, Nakano's[10]work highlighted the challenges of inferring L-systems with larger alphabets and more complex structures, describing the task as "immensely complicated".
Early tools for L-system inference were often designed to assist experts rather than replace them. For example, systems that presented a population of potential L-systems to the user, allowing them to select aesthetically pleasing or plausible options, reduced some of the manual burden.[12][13]However, these tools relied heavily on human judgment and did not fully automate the inference process.
Some early algorithms were tightly integrated into specific research domains mainly plant modeling.[13]These approaches utilized domain knowledge to constrain the search space and achieve better results. However, their reliance on predefined domain-specific rules limited their generalizability and applicability to other areas.
Attempts to create generalized algorithms for L-system inference began with deterministic context-free systems. Researchers aimed to infer L-systems from data alone, such as sequences of strings or temporal data from images, without relying on domain-specific knowledge. These algorithms encountered significant challenges,[14][15]including:
Bernard's PhD dissertation,[16]supervised by Dr. Ian McQuillan at the University of Saskatchewan, represents a significant advancement in L-system inference, introducing the Plant Model Inference Tools (PMIT) suite. Despite the name, this tool is problem agnostic, and is so-named due to the source of the original funding from the P2IRC project. These tools address the challenges of inferring deterministic, stochastic, and parametric L-systems:
Deterministic Context-Free L-Systems (D0L):
The PMIT-D0L tool improved the state-of-the-art by enabling the inference of L-systems with up to 31 symbols, compared to previous algorithms that managed only two. This was achieved through novel encoding techniques and search-space reduction methods.
Deterministic Context-Sensitive L-Systems (D(j,k)L):
The PMIT-DCSL tool further improved the inference of deterministic L-systems by demonstrating that the techniques worked in the context-sensitive case with little modification. This tool also presented further improvements allowing for the inference of deterministic L-systems with up to hundreds of symbols. Furthermore, this work and McQuillan's[17]theoretical paper proves the complexity of context-sensitive L-systems inference. In an unpublished work, Bernard claims to show that context-sensitivity never changes the fundamental nature of the inference problem regardless of the selection rule. That is to say, inferring context-sensitive stochastic L-systems is possible if inferring context-free L-system is possible.
Stochastic L-Systems (S0L):
For stochastic L-systems, PMIT-S0L was developed, which uses a hybrid greedy and genetic algorithm approach to infer systems from multiple string sequences. The tool demonstrated the ability to infer rewriting rules and probabilities with high accuracy, a first in the field.
Temporal Parametric L-Systems:
McQuillan first realized that parametric L-systems could be thought of as stochastic L-systems; however, this did not solve the problem of inferring the parametric selection rules. Using Cartesian Genetic Programming, parametric L-systems could be inferred along with the parametric selection rules so long as the parameter set included time (in order to, provide a sequence to the parameters, but time is a reasonable parameter for any real process). This tool, PMIT-PARAM, successfully inferred complex systems with up to 27 rewriting rules, setting a new benchmark in L-system inference.
There are many open problems involving studies of L-systems. For example:
L-systems on thereal lineR:
Well-known L-systems on a planeR2are:
|
https://en.wikipedia.org/wiki/L-system
|
Colorless green ideas sleep furiouslywas composed byNoam Chomskyin his 1957 bookSyntactic Structuresas an example of asentencethat isgrammaticallywell-formed, butsemanticallynonsensical. The sentence was originally used in his 1955 thesisThe Logical Structure of Linguistic Theoryand in his 1956 paper "Three Models for the Description of Language".[1]: 116There is no obviousunderstandablemeaning that can be derived from it, which demonstrates the distinction betweensyntaxandsemantics, and the idea that a syntactically well-formed sentence is not guaranteed to also be semantically well-formed. As an example of acategory mistake, it was intended to show the inadequacy of certain probabilistic models of grammar, and the need for more structured models.
Chomsky wrote in his 1957 bookSyntactic Structures:
It is fair to assume that neither sentence (1) nor (2) had ever previously occurred in an English discourse. Hence, in any statistical model that accounts forgrammaticality, these sentences will be ruled out on identical grounds as equally "remote" from English. Yet (1), though nonsensical, is grammatical, while (2) is not grammatical.[2][1]
Colorless green ideas– which functions as thesubjectof the sentence – is an anomalous string for at least two reasons:
Sleep furiously– which functions as thepredicateof the sentence – is structurally well-formed; in other words, it is grammatical. However, the meaning that it expresses is peculiar, as the activity of sleeping is not generally taken to be something that can be done in a furious fashion. Nevertheless,sleep furiouslyis both grammatical and interpretable, though its interpretation is unusual.
CombiningColorless green ideaswithsleep furiouslycreates a sentence that some believe to be nonsensical. On the one hand, an abstract noun likeideais taken to not have the ability to engage in an activity like sleeping. On the other hand, some think it possible for an idea to sleep.[3][4]
Linguists account for the unusual nature of this sentence by distinguishing two types of selection: semantic selection (s-selection) and categorical selection (c-selection). Relative to s-selection, the sentence is semantically anomalous – senseless – for three reasons:
However, relative to c-selection, the sentence is structurally well-formed:
This leads to the conclusion that although meaningless, the structural integrity of this sentence is high.
The mechanism ofpolysemy– where a word has multiple meanings – can be used to create an interpretation for an otherwise non-sensical sentence. For example, the adjectivesgreenandcolorlessboth have figurative meanings.Greenhas a wide range of figurative meanings, including "immature", "pertaining to environmental consciousness", "newly formed", and "naive". Andcolorlesscan be interpreted as "nondescript". Likewise the verbsleepcan have the figurative meaning of "being in dormant state", and the adverbfuriouslycan have the figurative meaning "to do an action violently or quickly".
When these figurative meanings are taken into account the sentenceColorless green ideas sleep furiouslycan have legitimatemeaning, with less oblique semantics, and so is compatible with the following interpretations:
Chomsky's "colorless green" inspired written works, which all try to create meaning from the semantically meaningless utterance through added context. In 1958, linguist and anthropologistDell Hymespresented his work to show that nonsense words can develop into something meaningful when in the right sequence.[5][6]
Hued ideas mock the brain,Notions of color not yet color,Of pure, touchless, branching pallorOf invading, essential Green
Russian-American linguist and literary theoristRoman Jakobson(1959)[7]interpreted "colorless green" as a pale green, and "sleep furiously" as the wildness of "a state-like sleep, as that of inertness, torpidity, numbness." Jakobson gave the example that if "[someone's] hatred never slept, why then, cannot someone's ideas fall into sleep?"John Hollander, an American poet and literary critic, argued that the sentence operates in a vacuum as it is without context. He went on to write a poem based on that idea, entitledCoiled Alizarinethat was included in his book,The Night Mirror(1971).[8]
Curiously deep, the slumber of crimson thoughts:While breathless, in stodgy viridianColorless green ideas sleep furiously.
Years later, Hollander contacted Chomsky about whether the color choice of 'green' was intentional; however, Chomsky denied any intentions or influences, especially the hypothesized influence fromAndrew Marvell's lines from "The Garden" (1681).
"Annihilating all that's made / To a green thought in a green shade"
One of the first writers to have attempted to provide the sentence meaning through context is Chinese linguistYuen Ren Chao(1997).[9]Chao's poem, entitledMaking Sense Out of Nonsense: The Story of My Friend Whose "Colorless Green Ideas Sleep Furiously" (after Noam Chomsky)was published in 1971. This poem attempts to explain what "colorless green ideas" are and how they are able to "sleep furiously". Chao interprets "colorless" as plain, "green" as unripened, and "sleep furiously" as putting the ideas to rest; sleeping on them overnight whilst having internal conflict with these ideas.[10]
I have a friend who is always full of ideas, good ideas and bad ideas, fine ideas and crude ideas, old ideas and new ideas. Before putting his new ideas into practice, he usually sleeps over them to let them mature and ripen. However, when he is in a hurry, he sometimes puts his ideas into practice before they are quite ripe, in other words, while they are still green. Some of his green ideas are quite lively and colorful, but not always, some being quite plain and colorless. When he remembers that some of his colorless ideas are still too green to use, he will sleep over them, or let them sleep, as he puts it. But some of those ideas may be mutually conflicting and contradictory and when they sleep together in the same night they get into furious fights and turn the sleep into a nightmare. Thus my friend often complains that his colorless green ideas sleep furiously.
British linguistAngus McIntoshwas unable to accept that Chomsky's utterance was entirely meaningless because to him, "colorless green ideas may well sleep furiously". As if to prove that the sentences are in fact meaningful, McIntosh wrote two poems influenced by Chomsky's utterance, one of which was entitledNightmare I.[11]
Tortured my mind's eye at its small peepholesees through the virid glassthe endless ghostly oscillographic streamFuriously sleep ideas green colorlessMadly awake am I at my small window
In 1985, a literary competition was held atStanford Universityin which the contestants were invited to make Chomsky's sentence meaningful using not more than 100 words of prose or 14 lines of verse.[12]An example entry from the competition, by C. M. Street, is:
It can only be the thought of verdure to come, which prompts us in the autumn to buy these dormant white lumps of vegetable matter covered by a brown papery skin, and lovingly to plant them and care for them. It is a marvel to me that under this cover they are labouring unseen at such a rate within to give us the sudden awesome beauty of spring flowering bulbs. While winter reigns the earth reposes but these colourless green ideas sleep furiously.
Research has been done by implementing this into conversations on text.[13]Research led by Bruno Galantucci at Yeshiva University has implemented the meaningless sentence into real conversations to test reactions.[14]They ran 30 conversations with 1 male and 1 female slipping "colorless green ideas sleep furiously" eight minutes into the conversation during silence. After the conversation, the experimenters did a post-conversation questionnaire, mainly asking if they thought the conversation was unusual. Galantucci concluded that there was a trend ofinsensitivityto conversational coherence.
There are two general theories that were garnered from this experiment. The first theory is that people tend to ignore the inconsistency of speech to protect the quality of the conversation. In particular, face-to-face conversation has a 33.33% lower detection rate of nonsensical sentences than online messaging. The authors further explain how humans often disregard some contents of every conversation. The second theory the authors deduced is that effective communication may be subconsciously undermined when dealing withconversationalcoherence. These conclusions support the idea that phatic communication plays a key role in social life.
Since the 1950s, the field has used techniques more in line with Chomsky's approach. However, this all changed in the mid-1980s, when researchers started to experiment with statistical models, convincing over 90% of the researchers in the field to switch to statistical approaches.[15]
In 2000, Fernando Pereira of theUniversity of Pennsylvaniafitted a simple statistical Markov model to a body of newspaper text, and showed that under this model,Furiously sleep ideas green colorlessis about 200,000 times less probable thanColorless green ideas sleep furiously.[16]
This statistical model defines a similarity metric, whereby sentences which are more like those within a corpus in certain respects are assigned higher values than sentences less alike. Pereira's model assigns an ungrammatical version of the same sentence a lower probability than the syntactically well-formed structure demonstrating that statistical models can identify variations ingrammaticalitywith minimal linguistic assumptions. However, it is not clear that the model assigns every ungrammatical sentence a lower probability than every grammatical sentence. That is,colorless green ideas sleep furiouslymay still be statistically more "remote" from English than some ungrammatical sentences. To this, it may be argued thatnocurrent theory of grammar is capable of distinguishingallgrammatical English sentences from ungrammatical ones.[17]
TheFrenchsyntacticianLucien Tesnièrecame up with theFrench languagesentence "Le silence vertébral indispose la voile licite" ("The vertebral silence indisposes the licit sail"). He also compared the following two sentences to demonstrate the contrast between syntax and meaning:
As he described, "la syntaxe. Ilest autonome".[18]
In Russian schools of linguistics, theglokaya kuzdraexample has similar characteristics.
The game ofexquisite corpseis a method for generating nonsense sentences. It was named after the first sentence generated in the game in 1925:Le cadavre exquis boira le vin nouveau(the exquisite corpse will drink the new wine).
In the popular game of "Mad Libs", a chosen player asks each other player to provide parts of speech without providing any contextual information (e.g., "Give me a proper noun", or "Give me an adjective"), and these words are inserted into pre-composed sentences with a correct grammatical structure, but in which certain words have been omitted. The humor of the game is in the generation of sentences which are grammatical but which are meaningless or have absurd or ambiguous meanings (such as 'loud sharks'). The game also tends to generate humorousdouble entendres.
There are likely earlier examples of such sentences, possibly from the philosophy of language literature, but not necessarily uncontroversial ones, given that the focus has been mostly on borderline cases. For example, followers oflogical positivismhold that "metaphysical" (i.e. notempirically verifiable) statements are simply meaningless; e.g.Rudolf Carnapwrote an article in which he argued that almost every sentence fromHeideggerwas grammatically well-formed, yet meaningless.[19]
The philosopherBertrand Russellused the sentence "Quadruplicity drinks procrastination" in his "An Inquiry into Meaning and Truth" from 1940, to make a similar point;[20]W.V. Quinetook issue with him on the grounds that for a sentence to be false is nothing more than for it not to be true; and since quadruplicity does not drinkanything, the sentence is simply false, not meaningless.[21]
Other arguably "meaningless" utterances are ones that make sense, are grammatical, but have no reference to the present state of the world, such as Russell's "The presentKing of Franceis bald" (France does not presently have a king) from "On Denoting"[22](also seedefinite description).
Another approach is to create a syntactically-well-formed, easily parsable sentence using nonsense words; a famous such example is "The gostak distims the doshes".Lewis Carroll'sJabberwockyis also famous for using this technique, although in this case for literary purposes; similar sentences used in neuroscience experiments are calledJabberwocky sentences.
In a sketch about linguistics, British comedy duoFry and Laurieused the nonsensical sentence "Hold the newsreader's nose squarely, waiter, or friendly milk will countermand my trousers."[23]
TheStar Trek: The Next Generationepisode "Darmok" features a race that communicates entirely by referencing folklore and stories. While the vessel'suniversal translatorcorrectly translates the characters and places from these stories, it fails to decipher the intended meaning, leaving Captain Picard unable to understand the alien.
|
https://en.wikipedia.org/wiki/Colorless_green_ideas_sleep_furiously
|
AnL-systemorLindenmayer systemis aparallelrewriting systemand a type offormal grammar. An L-system consists of analphabetof symbols that can be used to makestrings, a collection ofproduction rulesthat expand each symbol into some larger string of symbols, an initial "axiom" string from which to begin construction, and a mechanism for translating the generated strings into geometric structures. L-systems were introduced and developed in 1968 byAristid Lindenmayer, a Hungarian theoreticalbiologistandbotanistat theUniversity of Utrecht.[1]Lindenmayer used L-systems to describe the behaviour of plant cells and to model the growth processes ofplant development. L-systems have also been used to model the morphology of a variety of organisms[2]and can be used to generate self-similarfractals.
As a biologist, Lindenmayer worked withyeastand filamentousfungiand studied the growth patterns of various types ofbacteria, such as the cyanobacteriaAnabaena catenula. Originally, the L-systems were devised to provide a formal description of the development of such simple multicellular organisms, and to illustrate the neighbourhood relationships between plant cells. Later on, this system was extended to describe higher plants and complex branching structures.
Therecursivenature of the L-system rules leads toself-similarityand thereby,fractal-like forms are easy to describe with an L-system. Plant models and natural-looking organic forms are easy to define, as by increasing the recursion level the form slowly 'grows' and becomes more complex. Lindenmayer systems are also popular in the generation ofartificial life.
L-system grammars are very similar to thesemi-Thue grammar(seeChomsky hierarchy). L-systems are now commonly known asparametricL systems, defined as atuple
where
The rules of the L-system grammar are applied iteratively starting from the initial state. As many rules as possible are applied simultaneously, per iteration. The fact that each iteration employs as many rules as possible differentiates an L-system from aformal languagegenerated by aformal grammar, which applies only one rule per iteration. If the production rules were to be applied only one at a time, one would quite simply generate a string in a language, and all such sequences of applications would produce the language specified by the grammar. There are some strings in some languages, however, that cannot be generated if the grammar is treated as an L-system rather than a language specification. For example,[3]suppose there is a rule S→SS in a grammar. If productions are done one at a time, then starting from S, we can get first SS, and then, applying the rule again, SSS. However, if all applicable rules are applied at every step, as in an L-system, then we cannot get this sentential form. Instead, the first step would give us SS, but the second would apply the rule twice, giving us SSSS. Thus, the set of strings produced by an L-systems from a given grammar is a subset of the formal language defined by the grammar, and if we take a language to be defined as a set of strings, this means that a given L-system is effectively a subset of the formal language defined by the L-system's grammar.
An L-system iscontext-freeif each production rule refers only to an individual symbol and not to its neighbours. Context-free L-systems are thus specified by acontext-free grammar. If a rule depends not only on a single symbol but also on its neighbours, it is termed acontext-sensitiveL-system.
If there is exactly one production for each symbol, then the L-system is said to bedeterministic(a deterministic context-free L-system is popularly called aD0L system). If there are several, and each is chosen with a certain probability during each iteration, then it is astochasticL-system.
Using L-systems for generating graphical images requires that the symbols in the model refer to elements of a drawing on the computer screen. For example, the programFractintusesturtle graphics(similar to those in theLogo programming language) to produce screen images. It interprets each constant in an L-system model as a turtle command.
Lindenmayer's original L-system for modelling the growth of algae.
which produces:
The result is the sequence ofFibonacci words. If one counts the length of each string, theFibonacci sequenceof numbers is obtained (skipping the first 1, due to the choice of axiom):
If it is not desired to skip the first 1, axiomBcan be used. That would place aBnode before the topmost node (A) of the graph above.
For each string, if one counts thek-th position from the left end of the string, the value is determined by whether a multiple of thegolden ratiofalls within the interval(k−1,k){\displaystyle (k-1,k)}. The ratio of A to B likewise converges to the golden mean.
This example yields the same result (in terms of the length of each string, not the sequence ofAs andBs) if the rule (A→AB) is replaced with (A→BA), except that the strings are mirrored.
This sequence is alocally catenative sequencebecauseG(n)=G(n−1)G(n−2){\displaystyle G(n)=G(n-1)G(n-2)}, whereG(n){\displaystyle G(n)}is then-th generation.
The shape is built byrecursivelyfeeding the axiom through the production rules. Each character of the input string is checked against the rule list to determine which character or string to replace it with in the output string. In this example, a '1' in the input string becomes '11' in the output string, while '[' remains the same. Applying this to the axiom of '0', one gets:
It can be seen that this string quickly grows in size and complexity. This string can be drawn as an image by usingturtle graphics, where each symbol is assigned a graphical operation for the turtle to perform. For example, in the sample above, the turtle may be given the following instructions:
The push and pop refer to aLIFOstack (more technical grammar would have separate symbols for "push position" and "turn left"). When the turtle interpretation encounters a '[', the current position and angle are saved, and are then restored when the interpretation encounters a ']'. If multiple values have been "pushed," then a "pop" restores the most recently saved values. Applying the graphical rules listed above to the earlier recursion, one gets:
LetAmean "draw forward" andBmean "move forward".
This produces the famousCantor's fractal seton a real straight lineR.
A variant of theKoch curvewhich uses only right angles.
Here, F means "draw forward", + means "turn left 90°", and − means "turn right 90°" (seeturtle graphics).
TheSierpinski triangledrawn using an L-system.
Here, F and G both mean "draw forward", + means "turn left by angle", and − means "turn right by angle".
It is also possible to approximate theSierpinski triangleusing aSierpiński arrowhead curveL-system.
Here, A and B both mean "draw forward", + means "turn left by angle", and − means "turn right by angle" (seeturtle graphics).
Thedragon curvedrawn using an L-system.
Here, F and G both mean "draw forward", + means "turn left by angle", and − means "turn right by angle".
First one needs to initialize an empty stack. This follows the LIFO (Last in, First Out) method to add and remove elements.
Here, F means "draw forward", − means "turn right 25°", and + means "turn left 25°". X does not correspond to any drawing action and is used to control the evolution of the curve. The square bracket "[" corresponds to saving the current values for position and angle, so the position and angle are pushed to the top of the stack, when the "]" token is encountered, the stack is popped and the position and angle are reset. Every "[" comes before every "]" token.
A number of elaborations on this basic L-system technique have been developed which can be used in conjunction with each other. Among these arestochastic grammars,context sensitive grammars, and parametric grammars.
The grammar model we have discussed thus far has been deterministic—that is, given any symbol in the grammar's alphabet, there has been exactly one production rule, which is always chosen, and always performs the same conversion. One alternative is to specify more than one production rule for a symbol, giving each aprobabilityof occurring. For example, in the grammar of Example 2, we could change the rule for rewriting "0" from:
to a probabilistic rule:
Under this production, whenever a "0" is encountered during string rewriting, there would be a 50% chance it would behave as previously described, and a 50% chance it would not change during production. When a stochastic grammar is used in anevolutionarycontext, it is advisable to incorporate arandomseed into thegenotype, so that the stochastic properties of the image remain constant between generations.
A context sensitive production rule looks not only at the symbol it is modifying, but the symbols on the string appearing before and after it. For instance, the production rule:
transforms "a" to "aa", but only if the "a" occurs between a "b" and a "c" in the input string:
As with stochastic productions, there are multiple productions to handle symbols in different contexts. If no production rule can be found for a given context, the identity production is assumed, and the symbol does not change on transformation. If context-sensitive and context-free productions both exist within the same grammar, the context-sensitive production is assumed to take precedence when it is applicable.
In a parametric grammar, each symbol in the alphabet has a parameter list associated with it. A symbol coupled with its parameter list is called a module, and a string in a parametric grammar is a series of modules. An example string might be:
The parameters can be used by the drawing functions, and also by the production rules. The production rules can use the parameters in two ways: first, in a conditional statement determining whether the rule will apply, and second, the production rule can modify the actual parameters. For example, look at:
The module a(x,y) undergoes transformation under this production rule if the conditional x=0 is met. For example, a(0,2) would undergo transformation, and a(1,2) would not.
In the transformation portion of the production rule, the parameters as well as entire modules can be affected. In the above example, the module b(x,y) is added to the string, with initial parameters (2,3). Also, the parameters of the already existing module are transformed. Under the above production rule,
Becomes
as the "x" parameter of a(x,y) is explicitly transformed to a "1" and the "y" parameter of a is incremented by one.
Parametric grammars allow line lengths and branching angles to be determined by the grammar, rather than the turtle interpretation methods. Also, if age is given as a parameter for a module, rules can change depending on the age of a plant segment, allowing animations of the entire life-cycle of the tree to be created.
The bi-directional model explicitly separates the symbolic rewriting system from the shape assignment. For example, the string rewriting process in the Example 2 (Fractal tree) is independent on how graphical operations are assigned to the symbols. In other words, an infinite number of draw methods are applicable to a given rewriting system.
The bi-directional model consists of 1) a forward process constructs the derivation tree with production rules, and 2) a backward process realizes the tree with shapes in a stepwise manner (from leaves to the root). Each inverse-derivation step involves essential geometric-topological reasoning. With this bi-directional framework, design constraints and objectives are encoded in the grammar-shape translation. In architectural design applications, the bi-directional grammar features consistent interior connectivity and a rich spatial hierarchy.[4]
Historically, the construction of L-systems relied heavily on manual efforts by experts,[5][6][7]requiring detailed measurements, domain knowledge, and significant time investment. The process often involved analyzing biological structures and encoding their developmental rules into L-systems, symbol by symbol. This labor-intensive method made creating accurate models for complex processes both tedious and error-prone.
A notable example is Nishida's[7]work on Japanese Cypress trees, where he manually segmented branches from a series of images and identified 42 distinct growth mechanisms to construct a stochastic L-system. Despite the significant effort involved, the resulting system provided only an approximation of the tree's growth, illustrating the challenges of manually encoding such detailed biological processes. This arduous task was described as "tedious and intricate," underscoring the limitations of manual approaches.
The challenges of manual L-system construction are also well-documented in The Algorithmic Beauty of Plants[6]by Przemyslaw Prusinkiewicz and Aristid Lindenmayerd. The book demonstrates how L-systems can elegantly model plant growth and fractal patterns, but the examples often required expert intervention to define the necessary rules.
Manual construction was further constrained by the need for domain-specific expertise, as seen in other applications of L-systems beyond biology, such as architectural design and urban modeling.[8]In these fields, creating an accurate L-system required not only an understanding of the L-system formalism but also extensive knowledge of the domain being modeled.
The idea of automating L-system inference emerged to address the inefficiencies of manual methods, which often required extensive expertise, measurements, and trial-and-error processes. This automation aimed to enable the inference of L-systems directly from observational data, eliminating the need for manual encoding of rules.
Initial algorithms primarily targeted deterministic context-free L-systems (D0L-systems), which are among the simplest types of L-systems. These early efforts demonstrated the feasibility of automatic inference but were severely limited in scope, typically handling only systems with small alphabets and simple rewriting rules.[9][10][11][12]For instance, Nakano's[10]work highlighted the challenges of inferring L-systems with larger alphabets and more complex structures, describing the task as "immensely complicated".
Early tools for L-system inference were often designed to assist experts rather than replace them. For example, systems that presented a population of potential L-systems to the user, allowing them to select aesthetically pleasing or plausible options, reduced some of the manual burden.[12][13]However, these tools relied heavily on human judgment and did not fully automate the inference process.
Some early algorithms were tightly integrated into specific research domains mainly plant modeling.[13]These approaches utilized domain knowledge to constrain the search space and achieve better results. However, their reliance on predefined domain-specific rules limited their generalizability and applicability to other areas.
Attempts to create generalized algorithms for L-system inference began with deterministic context-free systems. Researchers aimed to infer L-systems from data alone, such as sequences of strings or temporal data from images, without relying on domain-specific knowledge. These algorithms encountered significant challenges,[14][15]including:
Bernard's PhD dissertation,[16]supervised by Dr. Ian McQuillan at the University of Saskatchewan, represents a significant advancement in L-system inference, introducing the Plant Model Inference Tools (PMIT) suite. Despite the name, this tool is problem agnostic, and is so-named due to the source of the original funding from the P2IRC project. These tools address the challenges of inferring deterministic, stochastic, and parametric L-systems:
Deterministic Context-Free L-Systems (D0L):
The PMIT-D0L tool improved the state-of-the-art by enabling the inference of L-systems with up to 31 symbols, compared to previous algorithms that managed only two. This was achieved through novel encoding techniques and search-space reduction methods.
Deterministic Context-Sensitive L-Systems (D(j,k)L):
The PMIT-DCSL tool further improved the inference of deterministic L-systems by demonstrating that the techniques worked in the context-sensitive case with little modification. This tool also presented further improvements allowing for the inference of deterministic L-systems with up to hundreds of symbols. Furthermore, this work and McQuillan's[17]theoretical paper proves the complexity of context-sensitive L-systems inference. In an unpublished work, Bernard claims to show that context-sensitivity never changes the fundamental nature of the inference problem regardless of the selection rule. That is to say, inferring context-sensitive stochastic L-systems is possible if inferring context-free L-system is possible.
Stochastic L-Systems (S0L):
For stochastic L-systems, PMIT-S0L was developed, which uses a hybrid greedy and genetic algorithm approach to infer systems from multiple string sequences. The tool demonstrated the ability to infer rewriting rules and probabilities with high accuracy, a first in the field.
Temporal Parametric L-Systems:
McQuillan first realized that parametric L-systems could be thought of as stochastic L-systems; however, this did not solve the problem of inferring the parametric selection rules. Using Cartesian Genetic Programming, parametric L-systems could be inferred along with the parametric selection rules so long as the parameter set included time (in order to, provide a sequence to the parameters, but time is a reasonable parameter for any real process). This tool, PMIT-PARAM, successfully inferred complex systems with up to 27 rewriting rules, setting a new benchmark in L-system inference.
There are many open problems involving studies of L-systems. For example:
L-systems on thereal lineR:
Well-known L-systems on a planeR2are:
|
https://en.wikipedia.org/wiki/L-system#Stochastic_grammars
|
Statistical language acquisition, a branch ofdevelopmentalpsycholinguistics, studies the process by which humans develop the ability to perceive, produce, comprehend, and communicate withnatural languagein all of its aspects (phonological,syntactic,lexical,morphological,semantic) through the use of general learning mechanisms operating on statistical patterns in the linguistic input.Statistical learningacquisition claims that infants' language-learning is based on pattern perception rather than an innate biological grammar. Several statistical elements such as frequency of words, frequent frames, phonotactic patterns and other regularities provide information on language structure and meaning for facilitation of language acquisition.
Fundamental to the study of statistical language acquisition is the centuries-old debate betweenrationalism(or its modern manifestation in the psycholinguistic community,nativism) andempiricism, with researchers in this field falling strongly in support of the latter category. Nativism is the position that humans are born with innatedomain-specificknowledge, especially inborn capacities for language learning. Ranging from seventeenth century rationalist philosophers such asDescartes,Spinoza, andLeibnizto contemporary philosophers such asRichard Montagueand linguists such asNoam Chomsky, nativists posit an innate learning mechanism with the specific function of language acquisition.[1]
In modern times, this debate has largely surrounded Chomsky's support of auniversal grammar, properties that all natural languages must have, through the controversial postulation of alanguage acquisition device(LAD), an instinctive mental 'organ' responsible for language learning which searches all possible language alternatives and chooses the parameters that best match the learner's environmental linguistic input. Much of Chomsky's theory is founded on thepoverty of the stimulus(POTS) argument, the assertion that a child's linguistic data is so limited and corrupted that learning language from this data alone is impossible. As an example, many proponents of POTS claim that because children are never exposed to negative evidence, that is, information about what phrases are ungrammatical, the language structure they learn would not resemble that of correct speech without a language-specific learning mechanism.[2]Chomsky's argument for an internal system responsible for language, biolinguistics, poses a three-factor model. "Genetic endowment" allows the infant to extract linguistic info, detect rules, and have universal grammar. "External environment" illuminates the need to interact with others and the benefits of language exposure at an early age. The last factor encompasses the brain properties, learning principles, and computational efficiencies that enable children to pick up on language rapidly using patterns and strategies.
Standing in stark contrast to this position is empiricism, theepistemologicaltheory that all knowledge comes from sensory experience. This school of thought often characterizes the nascent mind as atabula rasa, or blank slate, and can in many ways be associated with the nurture perspective of the "nature vs. nurture debate". This viewpoint has a long historical tradition that parallels that of rationalism, beginning with seventeenth century empiricist philosophers such asLocke,Bacon,Hobbes, and, in the following century,Hume. The basic tenet of empiricism is that information in the environment is structured enough that its patterns are both detectable and extractable by domain-general learning mechanisms.[1]In terms oflanguage acquisition, these patterns can be either linguistic or social in nature.
Chomsky is very critical of this empirical theory of language acquisition. He has said, "It's true there's been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures." He claims the idea of using statistical methods to acquire language is simply a mimicry of the process, rather than a true understanding of how language is acquired.[3]
One of the most used experimentalparadigmsin investigations of infants' capacities for statistical language acquisition is the Headturn Preference Procedure (HPP), developed byStanfordpsychologistAnne Fernaldin 1985 to study infants' preferences for prototypicalchild-directed speechover normal adult speech.[4]In the classic HPP paradigm, infants are allowed to freely turn their heads and are seated between two speakers with mounted lights. The light of either the right or left speaker then flashes as that speaker provides some type of audial or linguistic input stimulus to the infant. Reliable orientation to a given side is taken to be an indication of a preference for the input associated with that side's speaker. This paradigm has since become increasingly important in the study ofinfant speech perception, especially for input at levels higher thansyllablechunks, though with some modifications, including using the listening times instead of the side preference as the relevant dependent measure.[5]
Similar to HPP, the Conditioned Headturn Procedure also makes use of an infant's differential preference for a given side as an indication of a preference for, or more often a familiarity with, the input or speech associated with that side. Used in studies ofprosodicboundary markers by Gout et al. (2004)[5]and later by Werker in her classic studies ofcategorical perceptionofnative-languagephonemes,[6]infants areconditionedby some attractive image or display to look in one of two directions every time a certain input is heard, a whole word in Gout's case and a single phonemic syllable in Werker's. After the conditioning, new or more complex input is then presented to the infant, and their ability to detect the earlier target word or distinguish the input of the two trials is observed by whether they turn their head in expectation of the conditioned display or not.
While HPP and the Conditioned Headturn Procedure allow for observations of behavioral responses to stimuli and after the fact inferences about what the subject's expectations must have been to motivate this behavior, the Anticipatory Eye Movement paradigm allows researchers to directly observe a subject's expectations before the event occurs. Bytrackingsubjects'eye movementsresearchers have been able to investigate infantdecision-makingand the ways in which infants encode and act onprobabilistic knowledgeto make predictions about their environments.[7]This paradigm also offers the advantage of comparing differences in eye movement behavior across a wider range of ages than others.
Artificial languages, that is, small-scale languages that typically have an extremely limitedvocabularyand simplifiedgrammarrules, are a commonly used paradigm forpsycholinguisticresearchers. Artificial languages allow researchers to isolate variables of interest and wield a greater degree of control over the input the subject will receive. Unfortunately, the overly simplified nature of these languages and the absence of a number of phenomena common to all human natural languages such asrhythm,pitchchanges, and sequential regularities raise questions ofexternal validityfor any findings obtained using this paradigm, even after attempts have been made to increase thecomplexityand richness of the languages used.[8]The artificial language's lack of complexity or decreased complexity fails to account for a child's need to recognize a given syllable in natural language regardless of the sound variability inherent to natural language, though "it is possible that the complexity of natural language actually facilitates learning."[9]
As such, artificial language experiments are typically conducted to explore what the relevant linguistic variables are, what sources of information infants are able to use and when, and how researchers can go about modeling thelearningand acquisition process.[5]AslinandNewport, for example, have used artificial languages to explore what features of linguistic input make certainpatternssalient and easily detectable by infants, allowing them to easily contrast the detection of syllable repetition with that of word-final syllables and make conclusions about the conditions under which either feature is recognized as important.[10]
Statistical learning has been shown to play a large role in language acquisition, but social interaction appears to be a necessary component of learning as well. In one study, infants presented with audio or audiovisual recordings of Mandarin speakers failed to distinguish the phonemes of the language.[11][12]This implies that simply hearing the sounds is not sufficient for language learning; social interaction cues the infant to take statistics. Particular interactions geared towards infants is known as "child-directed" language because it is more repetitive and associative, which makes it easier to learn. These "child directed" interactions could also be the reason why it is easier to learn a language as a child rather than an adult.
Studies of bilingual infants, such as a study Bijeljac-Babic, et al., on French-learning infants, have offered insight to the role of prosody in language acquisition.[13]The Bijeljac-Babic study found that language dominance influences "sensitivity to prosodic contrasts." Although this was not a study on statistical learning, its findings on prosodic pattern recognition might have implications for statistical learning.
It is possible that the kinds of language experience and knowledge gained through the statistical learning of the first language influences one's acquisition of a second language. Some research points to the possibility that the difficulty of learning a second language may be derived from the structural patterns and language cues that one has already picked up from his or her acquisition of first language. In that sense, the knowledge of and skills to process the first language from statistical acquisition may act as a complicating factor when one tries to learn a new language with different sentence structures, grammatical rules, and speech patterns.[citation needed]
The first step in developing knowledge of a system as complex as natural language is learning to distinguish the important language-specific classes of sounds, called phonemes, that distinguish meaning between words.UBCpsychologistJanet Werker, since her influential series of experiments in the 1980s, has been one of the most prominent figures in the effort to understand the process by which human babies develop these phonological distinctions. While adults who speak different languages are unable to distinguish meaningful sound differences in other languages that do not delineate different meanings in their own, babies are born with the ability to universally distinguish all speech sounds. Werker's work has shown that while infants at six to eight months are still able to perceive the difference between certainHindiandEnglishconsonants, they have completely lost this ability by 11 to 13 months.[6]
It is now commonly accepted that children use some form of perceptualdistributional learning, by which categories are discovered by clumping similar instances of an input stimulus, to form phonetic categories early in life.[5]Developing children have been found to be effective judges of linguistic authority, screening the input they model their language on by shifting theirattentionless to speakers who mispronounce words.[5]Infants also use statistical tracking to calculate the likelihood that particular phonemes will follow each other.[14]
Parsingis the process by which a continuous speech stream is segmented into itsdiscretemeaningful units, e.g.sentences,words, and syllables.Saffran(1996) represents a singularly seminal study in this line of research. Infants were presented with two minutes of continuous speech of an artificial language from a computerized voice to remove any interference fromextraneous variablessuch as prosody orintonation. After this presentation, infants were able to distinguish words from nonwords, as measured by longer looking times in the second case.[15]
An important concept in understanding these results is that oftransitional probability, thelikelihoodof an element, in this case a syllable, following or preceding another element. In this experiment, syllables that went together in words had a much higher transitional probability than did syllables atword boundariesthat just happened to be adjacent.[5][8][15]Incredibly, infants, after a short two-minute presentation, were able to keep track of thesestatisticsand recognize highprobabilitywords. Further research has since replicated these results with natural languages unfamiliar to infants, indicating that learning infants also keep track of the direction (forward or backward) of the transitional probabilities.[8]Though the neural processes behind this phenomenon remain largely unknown, recent research reports increased activity in theleft inferior frontal gyrusand themiddle frontal gyrusduring the detection of word boundaries.[16]
The development of syllable-ordering biases is an important step along the way to full language development. The ability to categorize syllables and group together frequentlyco-occurringsequences may be critical in the development of aprotolexicon, a set of common language-specific word templates based on characteristic patterns in the words an infant hears. The development of this protolexicon may in turn allow for the recognition of new types of patterns, e.g. the high frequency of word-initiallystressedconsonants in English, which would allow infants to further parse words by recognizing common prosodic phrasings as autonomous linguistic units, restarting the dynamic cycle of word and language learning.[5]
The question of how novice language-users are capable of associating learnedlabelswith the appropriatereferent, the person or object in the environment which the label names, has been at the heart ofphilosophicalconsiderations oflanguageandmeaningfromPlatotoQuinetoHofstadter.[17]This problem, that of finding some solid relationship between word and object, of finding a word'smeaningwithout succumbing to an infinite recursion of dictionary look-up, is known as thesymbol grounding problem.[18]
Researchers have shown that this problem is intimately linked with the ability to parse language, and that those words that are easy to segment due to their high transitional probabilities are also easier tomapto an appropriate referent.[8]This serves as further evidence of the developmental progression of language acquisition, with children requiring an understanding of the sound distributions of natural languages to form phonetic categories, parse words based on these categories, and then use these parses to map them to objects as labels.
The developmentally earliest understanding of word to referent associations have been reported at six months old, with infants comprehending the words 'mommy' and 'daddy' or their familial or cultural equivalents. Further studies have shown that infants quickly develop in this capacity and by seven months are capable of learning associations between moving images andnonsensewords and syllables.[5]
It is important to note that there is a distinction, often confounded in acquisition research, between mapping a label to a specificinstanceor individual and mapping a label to an entireclassof objects. This latter process is sometimes referred to asgeneralizationor rule learning. Research has shown that if input is encoded in terms of perceptually salient dimensions rather than specific details and if patterns in the input indicate that a number of objects are named interchangeably in the same context, a language learner will be much more likely to generalize that name to every instance with the relevant features. This tendency is heavily dependent on the consistency of context clues and the degree to which word contexts overlap in the input.[10]These differences are furthermore linked to the well-known patterns ofunderandovergeneralizationin infantword learning. Research has also shown that the frequency of co-occurrence of referents is tracked as well, which helps create associations and dispel ambiguities in object-referent models.[19]
The ability to appropriately generalize to whole classes of yet unseen words, coupled with the abilities to parse continuous speech and keep track of word-ordering regularities, may be the critical skills necessary to develop proficiency with and knowledge of syntax and grammar.[5]
According to recent research, there is no neural evidence of statistical language learning in children withautism spectrum disorders.[citation needed]When exposed to a continuous stream of artificial speech, children without autism displayed less cortical activity in thedorsolateral frontal cortices(specifically themiddle frontal gyrus) as cues for word boundaries increased. However activity in these networks remained unchanged in autistic children, regardless of the verbal cues provided. This evidence, highlighting the importance of proper Frontal Lobe brain function is in support of the "Executive Functions" Theory, used to explain some of the biologically related causes of Autistic language deficits. With impaired working memory, decision making, planning, and goal setting, which are vital functions of the Frontal Lobe, Autistic children are at loss when it comes to socializing and communication (Ozonoff, et al., 2004). Additionally, researchers have found that the level of communicative impairment in autistic children was inversely correlated with signal increases in these same regions during exposure to artificial languages. Based on this evidence, researchers have concluded that children with autism spectrum disorders don't have the neural architecture to identify word boundaries in continuous speech. Early word segmentation skills have been shown to predict later language development, which could explain why language delay is a hallmark feature of autism spectrum disorders.[20]
Language learning takes place in different contexts, with both the infant and the caregiver engaging in social interactions. Recent research have investigated how infants and adults use cross-situational statistics in order to learn about not only the meanings of words but also the constraints within a context. For example, Smith and his colleagues proposed that infants learn language by acquiring a bias to label objects to similar objects that come from categories that are well-defined. Important to this view is the idea that the constraints that assist learning of words are not independent of the input itself or the infant's experience. Rather, constraints come about as infants learn about the ways that the words are used and begin to pay attention to certain characteristics of objects that have been used in the past to represent the words.
Inductive learning problem can occur as words are oftentimes used in ambiguous situations in which there are more than one possible referents available. This can lead to confusion for the infants as they may not be able to distinguish which words should be extended to label objects being referenced to. Smith and Yu proposed that a way to make a distinction in such ambiguous situations is to track the word-referent pairings over multiple scenes. For instance, an infant who hears a word in the presence of object A and object B will be unsure of whether the word is the referent of object A or object B. However, if the infant then hears the label again in the presence of object B and object C, the infant can conclude that object B is the referent of the label because object B consistently pairs with the label across different situations.
Computational modelshave long been used to explore the mechanisms by which language learners process and manipulate linguisticinformation. Models of this type allow researchers to systematically control important learning variables that are oftentimes difficult to manipulate at all in human participants.[21]
Associative neural networkmodels of language acquisition are one of the oldest types ofcognitive model, usingdistributed representationsand changes in the weights of the connections between the nodes that make up these representations to simulate learning in a manner reminiscent of theplasticity-basedneuronalreorganization that forms the basis of human learning andmemory.[22]Associative models represent a break withclassical cognitivemodels, characterized by discrete andcontext-free symbols, in favor of adynamical systemsapproach to language better capable of handlingtemporalconsiderations.[23]
A precursor to this approach, and one of the first model types to account for the dimension of time in linguistic comprehension and production wasElman'ssimple recurrent network(SRN). By making use of afeedbacknetwork to represent the system's past states, SRNs were able in a word-prediction task toclusterinput into self-organizedgrammatical categoriesbased solely on statistical co-occurrence patterns.[23][24]
Early successes such as these paved the way for dynamical systems research into linguistic acquisition, answering many questions about early linguistic development but leaving many others unanswered, such as how these statistically acquiredlexemesarerepresented.[23]Of particular importance in recent research has been the effort to understand the dynamic interaction of learning (e.g. language-based) and learner (e.g. speaker-based) variables in lexical organization andcompetitioninbilinguals.[21]In the ceaseless effort to move toward more psychologically realistic models, many researchers have turned to a subset of associative models,self-organizing maps(SOMs), as established, cognitively plausible models of language development.[25][26]
SOMs have been helpful to researchers in identifying and investigating the constraints and variables of interest in a number of acquisition processes, and in exploring the consequences of these findings on linguistic and cognitive theories. By identifyingworking memoryas an important constraint both for language learners and for current computational models, researchers have been able to show that manipulation of this variable allows forsyntactic bootstrapping, drawing not just categorical but actual content meaning from words' positional co-occurrence in sentences.[27]
Some recentmodelsof language acquisition have centered around methods ofBayesian Inferenceto account for infants' abilities to appropriately parse streams of speech and acquire word meanings. Models of this type rely heavily on the notion ofconditional probability(the probability of A given B), in line with findings concerning infants' use of transitional probabilities of words and syllables to learn words.[15]
Models that make use of these probabilistic methods have been able to merge the previouslydichotomouslanguage acquisition perspectives ofsocial theoriesthat emphasize the importance of learning speaker intentions and statistical andassociative theoriesthat rely on cross-situational contexts into a single joint-inference problem. This approach has led to important results in explaining acquisition phenomena such asmutual exclusivity, one-trial learning orfast mapping, and the use ofsocial intentions.[28]
While these results seem to be robust, studies concerning these models' abilities to handle more complex situations such as multiple referent to single label mapping, multiple label to single referent mapping, and bilingual language acquisition in comparison to associative models' successes in these areas have yet to be explored. Hope remains, though, that these model types may be merged to provide a comprehensive account of language acquisition.[29]
Along the lines of probabilistic frequencies, the C/V hypothesis basically states all language hearers use consonantal frequencies to distinguish between words (lexical distinctions) in continuous speech strings, in comparison to vowels. Vowels are more pertinent to rhythmic identification. Several follow-up studies revealed this finding, as they showed that vowels are processed independently of their local statistical distribution.[30]Other research has shown that the consonant-vowel ratio doesn't influence the sizes of lexicons when comparing distinct languages. In the case of languages with a higher consonant ratio, children may depend more on consonant neighbors than rhyme or vowel frequency.[31]
Some models of language acquisition have been based onadaptive parsing[32]andgrammar inductionalgorithms.[33]
|
https://en.wikipedia.org/wiki/Statistical_language_acquisition
|
Instatistics, anexpectation–maximization(EM)algorithmis aniterative methodto find (local)maximum likelihoodormaximum a posteriori(MAP) estimates ofparametersinstatistical models, where the model depends on unobservedlatent variables.[1]The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of thelog-likelihoodevaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on theEstep. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture ofgaussians, or to solve the multiple linear regression problem.[2]
The EM algorithm was explained and given its name in a classic 1977 paper byArthur Dempster,Nan Laird, andDonald Rubin.[3]They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies byCedric Smith.[4]Another was proposed byH.O. Hartleyin 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated.[5]Another one by S.K Ng, Thriyambakam Krishnan and G.J McLachlan in 1977.[6]Hartley’s ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,[7][8][9]following his collaboration withPer Martin-LöfandAnders Martin-Löf.[10][11][12][13][14]The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997).
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published byC. F. Jeff Wuin 1983.[15]Wu's proof established the EM method's convergence also outside of theexponential family, as claimed by Dempster–Laird–Rubin.[15]
The EM algorithm is used to find (local)maximum likelihoodparameters of astatistical modelin cases where the equations cannot be solved directly. Typically these models involvelatent variablesin addition to unknownparametersand known data observations. That is, eithermissing valuesexist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, amixture modelcan be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
Finding a maximum likelihood solution typically requires taking thederivativesof thelikelihood functionwith respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or asaddle point.[15]In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also havesingularitiesin them, i.e., nonsensical maxima. For example, one of thesolutionsthat may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
Given thestatistical modelwhich generates a setX{\displaystyle \mathbf {X} }of observed data, a set of unobserved latent data ormissing valuesZ{\displaystyle \mathbf {Z} }, and a vector of unknown parametersθ{\displaystyle {\boldsymbol {\theta }}}, along with alikelihood functionL(θ;X,Z)=p(X,Z∣θ){\displaystyle L({\boldsymbol {\theta }};\mathbf {X} ,\mathbf {Z} )=p(\mathbf {X} ,\mathbf {Z} \mid {\boldsymbol {\theta }})}, themaximum likelihood estimate(MLE) of the unknown parameters is determined by maximizing themarginal likelihoodof the observed data
However, this quantity is often intractable sinceZ{\displaystyle \mathbf {Z} }is unobserved and the distribution ofZ{\displaystyle \mathbf {Z} }is unknown before attainingθ{\displaystyle {\boldsymbol {\theta }}}.
The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps:
More succinctly, we can write it as one equation:θ(t+1)=argmaxθEZ∼p(⋅|X,θ(t))[logp(X,Z|θ)]{\displaystyle {\boldsymbol {\theta }}^{(t+1)}={\underset {\boldsymbol {\theta }}{\operatorname {arg\,max} }}\operatorname {E} _{\mathbf {Z} \sim p(\cdot |\mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}\left[\log p(\mathbf {X} ,\mathbf {Z} |{\boldsymbol {\theta }})\right]\,}
The typical models to which EM is applied useZ{\displaystyle \mathbf {Z} }as a latent variable indicating membership in one of a set of groups:
However, it is possible to apply EM to other sorts of models.
The motivation is as follows. If the value of the parametersθ{\displaystyle {\boldsymbol {\theta }}}is known, usually the value of the latent variablesZ{\displaystyle \mathbf {Z} }can be found by maximizing the log-likelihood over all possible values ofZ{\displaystyle \mathbf {Z} }, either simply by iterating overZ{\displaystyle \mathbf {Z} }or through an algorithm such as theViterbi algorithmforhidden Markov models. Conversely, if we know the value of the latent variablesZ{\displaystyle \mathbf {Z} }, we can find an estimate of the parametersθ{\displaystyle {\boldsymbol {\theta }}}fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where bothθ{\displaystyle {\boldsymbol {\theta }}}andZ{\displaystyle \mathbf {Z} }are unknown:
The algorithm as just described monotonically approaches a local minimum of the cost function.
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to amaximum likelihood estimator. Formultimodal distributions, this means that an EM algorithm may converge to alocal maximumof the observed data likelihood function, depending on starting values. A variety of heuristic ormetaheuristicapproaches exist to escape a local maximum, such as random-restarthill climbing(starting with several different random initial estimatesθ(t){\displaystyle {\boldsymbol {\theta }}^{(t)}}), or applyingsimulated annealingmethods.
EM is especially useful when the likelihood is anexponential family, see Sundberg (2019, Ch. 8) for a comprehensive treatment:[16]the E step becomes the sum of expectations ofsufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to deriveclosed-form expressionupdates for each step, using the Sundberg formula[17](proved and published by Rolf Sundberg, based on unpublished results ofPer Martin-LöfandAnders Martin-Löf).[8][9][11][12][13][14]
The EM method was modified to computemaximum a posteriori(MAP) estimates forBayesian inferencein the original paper by Dempster, Laird, and Rubin.
Other methods exist to find maximum likelihood estimates, such asgradient descent,conjugate gradient, or variants of theGauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
Expectation-Maximization works to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}rather than directly improvinglogp(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}. Here it is shown that improvements to the former imply improvements to the latter.[18]
For anyZ{\displaystyle \mathbf {Z} }with non-zero probabilityp(Z∣X,θ){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }})}, we can write
We take the expectation over possible values of the unknown dataZ{\displaystyle \mathbf {Z} }under the current parameter estimateθ(t){\displaystyle \theta ^{(t)}}by multiplying both sides byp(Z∣X,θ(t)){\displaystyle p(\mathbf {Z} \mid \mathbf {X} ,{\boldsymbol {\theta }}^{(t)})}and summing (or integrating) overZ{\displaystyle \mathbf {Z} }. The left-hand side is the expectation of a constant, so we get:
whereH(θ∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}is defined by the negated sum it is replacing.
This last equation holds for every value ofθ{\displaystyle {\boldsymbol {\theta }}}includingθ=θ(t){\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}^{(t)}},
and subtracting this last equation from the previous equation gives
However,Gibbs' inequalitytells us thatH(θ∣θ(t))≥H(θ(t)∣θ(t)){\displaystyle H({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})\geq H({\boldsymbol {\theta }}^{(t)}\mid {\boldsymbol {\theta }}^{(t)})}, so we can conclude that
In words, choosingθ{\displaystyle {\boldsymbol {\theta }}}to improveQ(θ∣θ(t)){\displaystyle Q({\boldsymbol {\theta }}\mid {\boldsymbol {\theta }}^{(t)})}causeslogp(X∣θ){\displaystyle \log p(\mathbf {X} \mid {\boldsymbol {\theta }})}to improve at least as much.
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example ofcoordinate descent.[19][20]Consider the function:
whereqis an arbitrary probability distribution over the unobserved datazandH(q)is theentropyof the distributionq. This function can be written as
wherepZ∣X(⋅∣x;θ){\displaystyle p_{Z\mid X}(\cdot \mid x;\theta )}is the conditional distribution of the unobserved data given the observed datax{\displaystyle x}andDKL{\displaystyle D_{KL}}is theKullback–Leibler divergence.
Then the steps in the EM algorithm may be viewed as:
AKalman filteris typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
Suppose that aKalman filteror minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from themaximum likelihoodcalculation
wherex^k{\displaystyle {\widehat {x}}_{k}}are scalar output estimates calculated by a filter or a smoother from N scalar measurementszk{\displaystyle z_{k}}. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
wherex^k{\displaystyle {\widehat {x}}_{k}}andx^k+1{\displaystyle {\widehat {x}}_{k+1}}are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
The convergence of parameter estimates such as those above are well studied.[26][27][28][29]
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those usingconjugate gradientand modifiedNewton's methods(Newton–Raphson).[30]Also, EM can be used with constrained estimation methods.
Parameter-expanded expectation maximization (PX-EM)algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[31]
Expectation conditional maximization (ECM)replaces each M step with a sequence of conditional maximization (CM) steps in which each parameterθiis maximized individually, conditionally on the other parameters remaining fixed.[32]Itself can be extended into theExpectation conditional maximization either (ECME)algorithm.[33]
This idea is further extended ingeneralized expectation maximization (GEM)algorithm, in which is sought only an increase in the objective functionFfor both the E step and M step as described in theAs a maximization–maximization proceduresection.[19]GEM is further developed in a distributed environment and shows promising results.[34]
It is also possible to consider the EM algorithm as a subclass of theMM(Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[35]and therefore use any machinery developed in the more general case.
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm[36]which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm byYasuo Matsuyamais an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.[37]
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives aprobability distributionover the latent variables (in the Bayesian style) together with a point estimate forθ(either amaximum likelihood estimateor a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution overθand the latent variables. The Bayesian approach to inference is simply to treatθas another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now includingθ) and optimize them one at a time. Now,ksteps per iteration are needed, wherekis the number of latent variables. Forgraphical modelsthis is easy to do as each variable's newQdepends only on itsMarkov blanket, so localmessage passingcan be used for efficient inference.
Ininformation geometry, the E step and the M step are interpreted as projections under dualaffine connections, called the e-connection and the m-connection; theKullback–Leibler divergencecan also be understood in these terms.
Letx=(x1,x2,…,xn){\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})}be a sample ofn{\displaystyle n}independent observations from amixtureof twomultivariate normal distributionsof dimensiond{\displaystyle d}, and letz=(z1,z2,…,zn){\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})}be the latent variables that determine the component from which the observation originates.[20]
where
The aim is to estimate the unknown parameters representing themixingvalue between the Gaussians and the means and covariances of each:
where the incomplete-data likelihood function is
and the complete-data likelihood function is
or
whereI{\displaystyle \mathbb {I} }is anindicator functionandf{\displaystyle f}is theprobability density functionof a multivariate normal.
In the last equality, for eachi, one indicatorI(zi=j){\displaystyle \mathbb {I} (z_{i}=j)}is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
Given our current estimate of the parametersθ(t), the conditional distribution of theZiis determined byBayes' theoremto be the proportional height of the normaldensityweighted byτ:
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below).
This E step corresponds with setting up this function for Q:
The expectation oflogL(θ;xi,Zi){\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})}inside the sum is taken with respect to the probability density functionP(Zi∣Xi=xi;θ(t)){\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})}, which might be different for eachxi{\displaystyle \mathbf {x} _{i}}of the training set. Everything in the E step is known before the step is taken exceptTj,i{\displaystyle T_{j,i}}, which is computed according to the equation at the beginning of the E step section.
This full conditional expectation does not need to be calculated in one step, becauseτandμ/Σappear in separate linear terms and can thus be maximized independently.
Q(θ∣θ(t)){\displaystyle Q(\theta \mid \theta ^{(t)})}being quadratic in form means that determining the maximizing values ofθ{\displaystyle \theta }is relatively straightforward. Also,τ{\displaystyle \tau },(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}and(μ2,Σ2){\displaystyle ({\boldsymbol {\mu }}_{2},\Sigma _{2})}may all be maximized independently since they all appear in separate linear terms.
To begin, considerτ{\displaystyle \tau }, which has the constraintτ1+τ2=1{\displaystyle \tau _{1}+\tau _{2}=1}:
This has the same form as the maximum likelihood estimate for thebinomial distribution, so
For the next estimates of(μ1,Σ1){\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}:
This has the same form as a weighted maximum likelihood estimate for a normal distribution, so
and, by symmetry,
Conclude the iterative process ifEZ∣θ(t),x[logL(θ(t);x,Z)]≤EZ∣θ(t−1),x[logL(θ(t−1);x,Z)]+ε{\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon }forε{\displaystyle \varepsilon }below some preset threshold.
The algorithm illustrated above can be generalized for mixtures of more than twomultivariate normal distributions.
The EM algorithm has been implemented in the case where an underlyinglinear regressionmodel exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[38]Special cases of this model include censored or truncated observations from onenormal distribution.[38]
EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termedmoment-based approaches[39]or the so-calledspectral techniques.[40][41]Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.[citation needed]
|
https://en.wikipedia.org/wiki/EM_algorithm
|
Speech recognitionis aninterdisciplinarysubfield ofcomputer scienceandcomputational linguisticsthat developsmethodologiesand technologies that enable the recognition andtranslationof spoken language into text by computers. It is also known asautomatic speech recognition(ASR),computer speech recognitionorspeech-to-text(STT). It incorporates knowledge and research in thecomputer science,linguisticsandcomputer engineeringfields. The reverse process isspeech synthesis.
Some speech recognition systems require "training" (also called "enrollment") where an individual speaker reads text or isolatedvocabularyinto the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker-independent"[1]systems. Systems that use training are called "speaker dependent".
Speech recognition applications includevoice user interfacessuch as voice dialing (e.g. "call home"), call routing (e.g. "I would like to make a collect call"),domoticappliance control, search key words (e.g. find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), determining speaker characteristics,[2]speech-to-text processing (e.g.,word processorsoremails), andaircraft(usually termeddirect voice input). Automaticpronunciation assessmentis used in education such as for spoken language learning.
The termvoice recognition[3][4][5]orspeaker identification[6][7][8]refers to identifying the speaker, rather than what they are saying.Recognizing the speakercan simplify the task oftranslating speechin systems that have been trained on a specific person's voice or it can be used toauthenticateor verify the identity of a speaker as part of a security process.
From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances indeep learningandbig data. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems.
The key areas of growth were: vocabulary size, speaker independence, and processing speed.
Raj Reddywas the first person to take on continuous speech recognition as a graduate student atStanford Universityin the late 1960s. Previous systems required users to pause after each word. Reddy's system issued spoken commands for playingchess.
Around this time Soviet researchers invented thedynamic time warping(DTW) algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary.[15]DTW processed speech by dividing it into short frames, e.g. 10ms segments, and processing each frame as a single unit. Although DTW would be superseded by later algorithms, the technique carried on. Achieving speaker independence remained unsolved at this time period.
During the late 1960sLeonard Baumdeveloped the mathematics ofMarkov chainsat theInstitute for Defense Analysis. A decade later, at CMU, Raj Reddy's studentsJames BakerandJanet M. Bakerbegan using thehidden Markov model(HMM) for speech recognition.[20]James Baker had learned about HMMs from a summer job at the Institute of Defense Analysis during his undergraduate education.[21]The use of HMMs allowed researchers to combine different sources of knowledge, such as acoustics, language, and syntax, in a unified probabilistic model.
The 1980s also saw the introduction of then-gramlanguage model.
Much of the progress in the field is owed to the rapidly increasing capabilities of computers. At the end of the DARPA program in 1976, the best computer available to researchers was thePDP-10with 4 MB ram.[28]It could take up to 100 minutes to decode just 30 seconds of speech.[29]
Two practical products were:
By this point, the vocabulary of the typical commercial speech recognition system was larger than the average human vocabulary.[28]Raj Reddy's former student,Xuedong Huang, developed theSphinx-IIsystem at CMU. The Sphinx-II system was the first to do speaker-independent, large vocabulary, continuous speech recognition and it had the best performance in DARPA's 1992 evaluation. Handling continuous speech with a large vocabulary was a major milestone in the history of speech recognition. Huang went on to found thespeech recognition group at Microsoftin 1993. Raj Reddy's studentKai-Fu Leejoined Apple where, in 1992, he helped develop a speech interface prototype for the Apple computer known as Casper.
Lernout & Hauspie, a Belgium-based speech recognition company, acquired several other companies, including Kurzweil Applied Intelligence in 1997 and Dragon Systems in 2000. The L&H speech technology was used in theWindows XPoperating system. L&H was an industry leader until an accounting scandal brought an end to the company in 2001. The speech technology from L&H was bought by ScanSoft which becameNuancein 2005.Appleoriginally licensed software from Nuance to provide speech recognition capability to its digital assistantSiri.[34]
In the 2000s DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text (EARS) in 2002 andGlobal Autonomous Language Exploitation(GALE). Four teams participated in the EARS program:IBM, a team led byBBNwithLIMSIandUniv. of Pittsburgh,Cambridge University, and a team composed ofICSI,SRIandUniversity of Washington. EARS funded the collection of the Switchboard telephonespeech corpuscontaining 260 hours of recorded conversations from over 500 speakers.[35]The GALE program focused onArabicandMandarinbroadcast news speech.Google's first effort at speech recognition came in 2007 after hiring some researchers from Nuance.[36]The first product wasGOOG-411, a telephone based directory service. The recordings from GOOG-411 produced valuable data that helped Google improve their recognition systems.Google Voice Searchis now supported in over 30 languages.
In the United States, theNational Security Agencyhas made use of a type of speech recognition forkeyword spottingsince at least 2006.[37]This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of keywords. Recordings can be indexed and analysts can run queries over the database to find conversations of interest. Some government research programs focused on intelligence applications of speech recognition, e.g. DARPA's EARS's program andIARPA'sBabel program.
In the early 2000s, speech recognition was still dominated by traditional approaches such ashidden Markov modelscombined with feedforwardartificial neural networks.[38]Today, however, many aspects of speech recognition have been taken over by adeep learningmethod calledLong short-term memory(LSTM), arecurrent neural networkpublished bySepp Hochreiter&Jürgen Schmidhuberin 1997.[39]LSTM RNNs avoid thevanishing gradient problemand can learn "Very Deep Learning" tasks[40]that require memories of events that happened thousands of discrete time steps ago, which is important for speech.
Around 2007, LSTM trained by Connectionist Temporal Classification (CTC)[41]started to outperform traditional speech recognition in certain applications.[42]In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available throughGoogle Voiceto all smartphone users.[43]Transformers, a type of neural network based solely on "attention", have been widely adopted in computer vision[44][45]and language modeling,[46][47]sparking the interest of adapting such models to new domains, including speech recognition.[48][49][50]Some recent papers reported superior performance levels using transformer models for speech recognition, but these models usually require large scale training datasets to reach high performance levels.
The use of deep feedforward (non-recurrent) networks foracoustic modelingwas introduced during the later part of 2009 byGeoffrey Hintonand his students at the University of Toronto and by Li Deng[51]and colleagues at Microsoft Research, initially in the collaborative work between Microsoft and the University of Toronto which was subsequently expanded to include IBM and Google (hence "The shared views of four research groups" subtitle in their 2012 review paper).[52][53][54]A Microsoft research executive called this innovation "the most dramatic change in accuracy since 1979".[55]In contrast to the steady incremental improvements of the past few decades, the application of deep learning decreased word error rate by 30%.[55]This innovation was quickly adopted across the field. Researchers have begun to use deep learning techniques for language modeling as well.
In the long history of speech recognition, both shallow form and deep form (e.g. recurrent nets) of artificial neural networks had been explored for many years during 1980s, 1990s and a few years into the 2000s.[56][57][58]But these methods never won over the non-uniform internal-handcraftingGaussian mixture model/hidden Markov model(GMM-HMM) technology based on generative models of speech trained discriminatively.[59]A number of key difficulties had been methodologically analyzed in the 1990s, including gradient diminishing[60]and weak temporal correlation structure in the neural predictive models.[61][62]All these difficulties were in addition to the lack of big training data and big computing power in these early days. Most speech recognition researchers who understood such barriers hence subsequently moved away from neural nets to pursue generative modeling approaches until the recent resurgence of deep learning starting around 2009–2010 that had overcome all these difficulties. Hinton et al. and Deng et al. reviewed part of this recent history about how their collaboration with each other and then with colleagues across four groups (University of Toronto, Microsoft, Google, and IBM) ignited a renaissance of applications of deep feedforward neural networks for speech recognition.[53][54][63][64]
By early 2010sspeechrecognition, also called voice recognition[65][66][67]was clearly differentiated fromspeakerrecognition, and speaker independence was considered a major breakthrough. Until then, systems required a "training" period. A 1987 ad for a doll had carried the tagline "Finally, the doll that understands you." – despite the fact that it was described as "which children could train to respond to their voice".[12]
In 2017, Microsoft researchers reached a historical human parity milestone of transcribing conversational telephony speech on the widely benchmarked Switchboard task. Multiple deep learning models were used to optimize speech recognition accuracy. The speech recognition word error rate was reported to be as low as 4 professional human transcribers working together on the same benchmark, which was funded by IBM Watson speech team on the same task.[68]
Bothacoustic modelingandlanguage modelingare important parts of modern statistically based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such asdocument classificationorstatistical machine translation.
Modern general-purpose speech recognition systems are based on hidden Markov models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. In a short time scale (e.g., 10 milliseconds), speech can be approximated as astationary process. Speech can be thought of as aMarkov modelfor many stochastic purposes.
Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence ofn-dimensional real-valued vectors (withnbeing a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist ofcepstralcoefficients, which are obtained by taking aFourier transformof a short time window of speech and decorrelating the spectrum using acosine transform, then taking the first (most significant) coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood for each observed vector. Each word, or (for more general speech recognition systems), eachphoneme, will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate words and phonemes.
Described above are the core elements of the most common, HMM-based approach to speech recognition. Modern speech recognition systems use various combinations of a number of standard techniques in order to improve results over the basic approach described above. A typical large-vocabulary system would needcontext dependencyfor thephonemes(so that phonemes with different left and right context would have different realizations as HMM states); it would usecepstral normalizationto normalize for a different speaker and recording conditions; for further speaker normalization, it might use vocal tract length normalization (VTLN) for male-female normalization andmaximum likelihood linear regression(MLLR) for more general speaker adaptation. The features would have so-calleddeltaanddelta-delta coefficientsto capture speech dynamics and in addition, might useheteroscedastic linear discriminant analysis(HLDA); or might skip the delta and delta-delta coefficients and usesplicingand anLDA-based projection followed perhaps byheteroscedasticlinear discriminant analysis or aglobal semi-tied co variancetransform (also known asmaximum likelihood linear transform, or MLLT). Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximummutual information(MMI), minimum classification error (MCE), and minimum phone error (MPE).
Decoding of the speech (the term for what happens when the system is presented with a new utterance and must compute the most likely source sentence) would probably use theViterbi algorithmto find the best path, and here there is a choice between dynamically creating a combination hidden Markov model, which includes both the acoustic and language model information and combining it statically beforehand (thefinite state transducer, or FST, approach).
A possible improvement to decoding is to keep a set of good candidates instead of just keeping the best candidate, and to use a better scoring function (re scoring) to rate these good candidates so that we may pick the best one according to this refined score. The set of candidates can be kept either as a list (theN-best listapproach) or as a subset of the models (alattice). Re scoring is usually done by trying to minimize theBayes risk[69](or an approximation thereof) Instead of taking the source sentence with maximal probability, we try to take the sentence that minimizes the expectancy of a given loss function with regards to all possible transcriptions (i.e., we take the sentence that minimizes the average distance to other possible sentences weighted by their estimated probability). The loss function is usually theLevenshtein distance, though it can be different distances for specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to re scorelatticesrepresented as weightedfinite state transducerswithedit distancesrepresented themselves as afinite state transducerverifying certain assumptions.[70]
Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful HMM-based approach.
Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and if in another he or she were walking more quickly, or even if there were accelerations and deceleration during the course of one observation. DTW has been applied to video, audio, and graphics – indeed, any data that can be turned into a linear representation can be analyzed with DTW.
A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a computer to find an optimal match between two given sequences (e.g., time series) with certain restrictions. That is, the sequences are "warped" non-linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models.
Neural networks emerged as an attractive acoustic modeling approach in ASR in the late 1980s. Since then, neural networks have been used in many aspects of speech recognition such as phoneme classification,[71]phoneme classification through multi-objective evolutionary algorithms,[72]isolated word recognition,[73]audiovisual speech recognition, audiovisual speaker recognition and speaker adaptation.
Neural networksmake fewer explicit assumptions about feature statistical properties than HMMs and have several qualities making them more attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative training in a natural and efficient manner. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words,[74]early neural networks were rarely successful for continuous recognition tasks because of their limited ability to model temporal dependencies.
One approach to this limitation was to use neural networks as a pre-processing, feature transformation or dimensionality reduction,[75]step prior to HMM based recognition. However, more recently, LSTM and related recurrent neural networks (RNNs),[39][43][76][77]Time Delay Neural Networks(TDNN's),[78]and transformers[48][49][50]have demonstrated improved performance in this area.
Deep neural networks and denoisingautoencoders[79]are also under investigation. A deep feedforward neural network (DNN) is anartificial neural networkwith multiple hidden layers of units between the input and output layers.[53]Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential of modeling complex patterns of speech data.[80]
A success of DNNs in large vocabulary speech recognition occurred in 2010 by industrial researchers, in collaboration with academic researchers, where large output layers of the DNN based on context dependent HMM states constructed by decision trees were adopted.[81][82][83]See comprehensive reviews of this development and of the state of the art as of October 2014 in the recent Springer book from Microsoft Research.[84]See also the related background of automatic speech recognition and the impact of various machine learning paradigms, notably includingdeep learning, in
recent overview articles.[85][86]
One fundamental principle ofdeep learningis to do away with hand-craftedfeature engineeringand to use raw features. This principle was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features,[87]showing its superiority over the Mel-Cepstral features which contain a few stages of fixed transformation from spectrograms.
The true "raw" features of speech, waveforms, have more recently been shown to produce excellent larger-scale speech recognition results.[88]
Since 2014, there has been much research interest in "end-to-end" ASR. Traditional phonetic-based (i.e., allHMM-based model) approaches required separate components and training for the pronunciation, acoustic, andlanguage model. End-to-end models jointly learn all the components of the speech recognizer. This is valuable since it simplifies the training process and deployment process. For example, an-gram language modelis required for all HMM-based systems, and a typical n-gram language model often takes several gigabytes in memory making them impractical to deploy on mobile devices.[89]Consequently, modern commercial ASR systems fromGoogleandApple(as of 2017[update]) are deployed on the cloud and require a network connection as opposed to the device locally.
The first attempt at end-to-end ASR was withConnectionist Temporal Classification(CTC)-based systems introduced byAlex GravesofGoogle DeepMindand Navdeep Jaitly of theUniversity of Torontoin 2014.[90]The model consisted ofrecurrent neural networksand a CTC layer. Jointly, the RNN-CTC model learns the pronunciation and acoustic model together, however it is incapable of learning the language due toconditional independenceassumptions similar to a HMM. Consequently, CTC models can directly learn to map speech acoustics to English characters, but the models make many common spelling mistakes and must rely on a separate language model to clean up the transcripts. Later,Baiduexpanded on the work with extremely large datasets and demonstrated some commercial success in Chinese Mandarin and English.[91]In 2016,University of OxfordpresentedLipNet,[92]the first end-to-end sentence-level lipreading model, using spatiotemporal convolutions coupled with an RNN-CTC architecture, surpassing human-level performance in a restricted grammar dataset.[93]A large-scale CNN-RNN-CTC architecture was presented in 2018 byGoogle DeepMindachieving 6 times better performance than human experts.[94]In 2019,Nvidialaunched two CNN-CTC ASR models, Jasper and QuarzNet, with an overall performance WER of 3%.[95][96]Similar to other deep learning applications,transfer learninganddomain adaptationare important strategies for reusing and extending the capabilities of deep learning models, particularly due to the high costs of training models from scratch, and the small size of available corpus in many languages and/or specific domains.[97][98][99]
An alternative approach to CTC-based models are attention-based models. Attention-based ASR models were introduced simultaneously by Chan et al. ofCarnegie Mellon UniversityandGoogle Brainand Bahdanau et al. of theUniversity of Montrealin 2016.[100][101]The model named "Listen, Attend and Spell" (LAS), literally "listens" to the acoustic signal, pays "attention" to different parts of the signal and "spells" out the transcript one character at a time. Unlike CTC-based models, attention-based models do not have conditional-independence assumptions and can learn all the components of a speech recognizer including the pronunciation, acoustic and language model directly. This means, during deployment, there is no need to carry around a language model making it very practical for applications with limited memory. By the end of 2016, the attention-based models have seen considerable success including outperforming the CTC models (with or without an external language model).[102]Various extensions have been proposed since the original LAS model. Latent Sequence Decompositions (LSD) was proposed byCarnegie Mellon University,MITandGoogle Brainto directly emit sub-word units which are more natural than English characters;[103]University of OxfordandGoogle DeepMindextended LAS to "Watch, Listen, Attend and Spell" (WLAS) to handle lip reading surpassing human-level performance.[104]
Typically a manual control input, for example by means of a finger control on the steering-wheel, enables the speech recognition system and this is signaled to the driver by an audio prompt. Following the audio prompt, the system has a "listening window" during which it may accept a speech input for recognition.[citation needed]
Simple voice commands may be used to initiate phone calls, select radio stations or play music from a compatible smartphone, MP3 player or music-loaded flash drive. Voice recognition capabilities vary between car make and model. Some of the most recent[when?]car models offer natural-language speech recognition in place of a fixed set of commands, allowing the driver to use full sentences and common phrases. With such systems there is, therefore, no need for the user to memorize a set of fixed command words.[citation needed]
Automaticpronunciationassessment is the use of speech recognition to verify the correctness of pronounced speech,[105]as distinguished from manual assessment by an instructor or proctor.[106]Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined withcomputer-aided instructionforcomputer-assisted language learning(CALL), speechremediation, oraccent reduction. Pronunciation assessment does not determine unknown speech (as indictationorautomatic transcription) but instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and ideally theirintelligibilityto listeners,[107][108]sometimes along with often inconsequentialprosodysuch asintonation,pitch,tempo,rhythm, andstress.[109]Pronunciation assessment is also used inreading tutoring, for example in products such asMicrosoft Teams[110]and from Amira Learning.[111]Automatic pronunciation assessment can also be used to help diagnose and treatspeech disorderssuch asapraxia.[112]
Assessing authentic listener intelligibility is essential for avoiding inaccuracies fromaccentbias, especially in high-stakes assessments;[113][114][115]from words with multiple correct pronunciations;[116]and from phoneme coding errors in machine-readable pronunciation dictionaries.[117]In 2022, researchers found that some newer speech to text systems, based onend-to-end reinforcement learningto map audio signals directly into words, produce word and phrase confidence scores very closely correlated with genuine listener intelligibility.[118]In theCommon European Framework of Reference for Languages(CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels.[119]
In thehealth caresector, speech recognition can be implemented in front-end or back-end of the medical documentation process. Front-end speech recognition is where the provider dictates into a speech-recognition engine, the recognized words are displayed as they are spoken, and the dictator is responsible for editing and signing off on the document. Back-end or deferred speech recognition is where the provider dictates into adigital dictationsystem, the voice is routed through a speech-recognition machine and the recognized draft document is routed along with the original voice file to the editor, where the draft is edited and report finalized. Deferred speech recognition is widely used in the industry currently.
One of the major issues relating to the use of speech recognition in healthcare is that theAmerican Recovery and Reinvestment Act of 2009(ARRA) provides for substantial financial benefits to physicians who utilize an EMR according to "Meaningful Use" standards. These standards require that a substantial amount of data be maintained by the EMR (now more commonly referred to as anElectronic Health Recordor EHR). The use of speech recognition is more naturally suited to the generation of narrative text, as part of a radiology/pathology interpretation, progress note or discharge summary: the ergonomic gains of using speech recognition to enter structured discrete data (e.g., numeric values or codes from a list or acontrolled vocabulary) are relatively minimal for people who are sighted and who can operate a keyboard and mouse.
A more significant issue is that most EHRs have not been expressly tailored to take advantage of voice-recognition capabilities. A large part of the clinician's interaction with the EHR involves navigation through the user interface using menus, and tab/button clicks, and is heavily dependent on keyboard and mouse: voice-based navigation provides only modest ergonomic benefits. By contrast, many highly customized systems for radiology or pathology dictation implement voice "macros", where the use of certain phrases – e.g., "normal report", will automatically fill in a large number of default values and/or generate boilerplate, which will vary with the type of the exam – e.g., a chest X-ray vs. a gastrointestinal contrast series for a radiology system.
Prolonged use of speech recognition software in conjunction withword processorshas shown benefits to short-term-memory restrengthening inbrain AVMpatients who have been treated withresection. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.[citation needed]
Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition infighter aircraft. Of particular note have been the US program in speech recognition for theAdvanced Fighter Technology Integration (AFTI)/F-16aircraft (F-16 VISTA), the program in France forMirageaircraft, and other programs in the UK dealing with a variety of aircraft platforms. In these programs, speech recognizers have been operated successfully in fighter aircraft, with applications including setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight display.
Working with Swedish pilots flying in theJAS-39Gripen cockpit, Englund (2004) found recognition deteriorated with increasingg-loads. The report also concluded that adaptation greatly improved the results in all cases and that the introduction of models for breathing was shown to improve recognition scores significantly. Contrary to what might have been expected, no effects of the broken English of the speakers were found. It was evident that spontaneous speech caused problems for the recognizer, as might have been expected. A restricted vocabulary, and above all, a proper syntax, could thus be expected to improve recognition accuracy substantially.[120]
TheEurofighter Typhoon, currently in service with the UKRAF, employs a speaker-dependent system, requiring each pilot to create a template. The system is not used for any safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage, but is used for a wide range of other cockpit functions. Voice commands are confirmed by visual and/or aural feedback. The system is seen as a major design feature in the reduction of pilotworkload,[121]and even allows the pilot to assign targets to his aircraft with two simple voice commands or to any of his wingmen with only five commands.[122]
Speaker-independent systems are also being developed and are under test for theF-35 Lightning II(JSF) and theAlenia Aermacchi M-346 Masterlead-in fighter trainer. These systems have produced word accuracy scores in excess of 98%.[123]
The problems of achieving high recognition accuracy under stress and noise are particularly relevant in thehelicopterenvironment as well as in the jet fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, not only because of the high noise levels but also because the helicopter pilot, in general, does not wear afacemask, which would reduce acoustic noise in themicrophone. Substantial test and evaluation programs have been carried out in the past decade in speech recognition systems applications in helicopters, notably by theU.S. ArmyAvionics Research and Development Activity (AVRADA) and by the Royal Aerospace Establishment (RAE) in the UK. Work in France has included speech recognition in thePuma helicopter. There has also been much useful work inCanada. Results have been encouraging, and voice applications have included: control of communication radios, setting ofnavigationsystems, and control of an automated target handover system.
As in fighter applications, the overriding issue for voice in helicopters is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done both in speech recognition and in overallspeech technologyin order to consistently achieve performance improvements in operational settings.
Training for air traffic controllers (ATC) represents an excellent application for speech recognition systems. Many ATC training systems currently require a person to act as a "pseudo-pilot", engaging in a voice dialog with the trainee controller, which simulates the dialog that the controller would have to conduct with pilots in a real ATC situation. Speech recognition andsynthesistechniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel. In theory, Air controller tasks are also characterized by highly structured speech as the primary output of the controller, hence reducing the difficulty of the speech recognition task should be possible. In practice, this is rarely the case. The FAA document 7110.65 details the phrases that should be used by air traffic controllers. While this document gives less than 150 examples of such phrases, the number of phrases supported by one of the simulation vendors speech recognition systems is in excess of 500,000.
The USAF, USMC, US Army, US Navy, and FAA as well as a number of international ATC training organizations such as the Royal Australian Air Force and Civil Aviation Authorities in Italy, Brazil, and Canada are currently using ATC simulators with speech recognition from a number of different vendors.[citation needed]
ASR is now commonplace in the field oftelephonyand is becoming more widespread in the field ofcomputer gamingand simulation. In telephony systems, ASR is now being predominantly used in contact centers by integrating it withIVRsystems. Despite the high level of integration with word processing in general personal computing, in the field of document production, ASR has not seen the expected increases in use.
The improvement of mobile processor speeds has made speech recognition practical insmartphones. Speech is used mostly as a part of a user interface, for creating predefined or custom speech commands.
People with disabilities can benefit from speech recognition programs. For individuals that are Deaf or Hard of Hearing, speech recognition software is used to automatically generate a closed-captioning of conversations such as discussions in conference rooms, classroom lectures, and/or religious services.[124]
Students who are blind (seeBlindness and education) or have very low vision can benefit from using the technology to convey words and then hear the computer recite them, as well as use a computer by commanding with their voice, instead of having to look at the screen and keyboard.[125]
Students who are physically disabled have aRepetitive strain injury/other injuries to the upper extremities can be relieved from having to worry about handwriting, typing, or working with scribe on school assignments by using speech-to-text programs. They can also utilize speech recognition technology to enjoy searching the Internet or using a computer at home without having to physically operate a mouse and keyboard.[125]
Speech recognition can allow students with learning disabilities to become better writers. By saying the words aloud, they can increase the fluidity of their writing, and be alleviated of concerns regarding spelling, punctuation, and other mechanics of writing.[126]Also, seeLearning disability.
The use of voice recognition software, in conjunction with a digital audio recorder and a personal computer running word-processing software has proven to be positive for restoring damaged short-term memory capacity, in stroke and craniotomy individuals.
Speech recognition is also very useful for people who have difficulty using their hands, ranging from mild repetitive stress injuries to involve disabilities that preclude using conventional computer input devices. In fact, people who used the keyboard a lot and developedRSIbecame an urgent early market for speech recognition.[127][128]Speech recognition is used indeaftelephony, such as voicemail to text,relay services, andcaptioned telephone. Individuals with learning disabilities who have problems with thought-to-paper communication (essentially they think of an idea but it is processed incorrectly causing it to end up differently on paper) can possibly benefit from the software but the technology is not bug proof.[129]Also the whole idea of speak to text can be hard for intellectually disabled person's due to the fact that it is rare that anyone tries to learn the technology to teach the person with the disability.[130]
This type of technology can help those with dyslexia but other disabilities are still in question. The effectiveness of the product is the problem that is hindering it from being effective. Although a kid may be able to say a word depending on how clear they say it the technology may think they are saying another word and input the wrong one. Giving them more work to fix, causing them to have to take more time with fixing the wrong word.[131]
The performance of speech recognition systems is usually evaluated in terms of accuracy and speed.[136][137]Accuracy is usually rated withword error rate(WER), whereas speed is measured with thereal time factor. Other measures of accuracy includeSingle Word Error Rate(SWER) andCommand Success Rate(CSR).
Speech recognition by machine is a very complex problem, however. Vocalizations vary in terms of accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is distorted by a background noise and echoes, electrical characteristics. Accuracy of speech recognition may vary with the following:[138][citation needed]
As mentioned earlier in this article, the accuracy of speech recognition may vary depending on the following factors:
With discontinuous speech full sentences separated by silence are used, therefore it becomes easier to recognize the speech as well as with isolated speech.With continuous speech naturally spoken sentences are used, therefore it becomes harder to recognize the speech, different from both isolated and discontinuous speech.
Constraints are often represented by grammar.
Speech recognition is a multi-leveled pattern recognition task.
e.g. Known word pronunciations or legal word sequences, which can compensate for errors or uncertainties at a lower level;
For telephone speech the sampling rate is 8000 samples per second;
computed every 10 ms, with one 10 ms section called a frame;
Analysis of four-step neural network approaches can be explained by further information. Sound is produced by air (or some other medium) vibration, which we register by ears, but machines by receivers. Basic sound creates a wave which has two descriptions:amplitude(how strong is it), andfrequency(how often it vibrates per second).
Accuracy can be computed with the help of word error rate (WER). Word error rate can be calculated by aligning the recognized word and referenced word using dynamic string alignment. The problem may occur while computing the word error rate due to the difference between the sequence lengths of the recognized word and referenced word.
The formula to compute the word error rate (WER) is:
WER=(s+d+i)n{\displaystyle WER={(s+d+i) \over n}}
wheresis the number of substitutions,dis the number of deletions,iis the number of insertions, andnis the number of word references.
While computing, the word recognition rate (WRR) is used. The formula is:
wherehis the number of correctly recognized words:
Speech recognition can become a means of attack, theft, or accidental operation. For example, activation words like "Alexa" spoken in an audio or video broadcast can cause devices in homes and offices to start listening for input inappropriately, or possibly take an unwanted action.[140]Voice-controlled devices are also accessible to visitors to the building, or even those outside the building if they can be heard inside. Attackers may be able to gain access to personal information, like calendar, address book contents, private messages, and documents. They may also be able to impersonate the user to send messages or make online purchases.
Two attacks have been demonstrated that use artificial sounds. One transmits ultrasound and attempt to send commands without nearby people noticing.[141]The other adds small, inaudible distortions to other speech or music that are specially crafted to confuse the specific speech recognition system into recognizing music as speech, or to make what sounds like one command to a human sound like a different command to the system.[142]
Popular speech recognition conferences held each year or two include SpeechTEK and SpeechTEK Europe,ICASSP,Interspeech/Eurospeech, and the IEEE ASRU. Conferences in the field ofnatural language processing, such asACL,NAACL, EMNLP, and HLT, are beginning to include papers onspeech processing. Important journals include theIEEETransactions on Speech and Audio Processing (later renamedIEEETransactions on Audio, Speech and Language Processing and since Sept 2014 renamedIEEE/ACM Transactions on Audio, Speech and Language Processing—after merging with an ACM publication), Computer Speech and Language, and Speech Communication.
Books like "Fundamentals of Speech Recognition" byLawrence Rabinercan be useful to acquire basic knowledge but may not be fully up to date (1993). Another good source can be "Statistical Methods for Speech Recognition" byFrederick Jelinekand "Spoken Language Processing (2001)" byXuedong Huangetc., "Computer Speech", byManfred R. Schroeder, second edition published in 2004, and "Speech Processing: A Dynamic and Optimization-Oriented Approach" published in 2003 by Li Deng and Doug O'Shaughnessey. The updated textbookSpeech and Language Processing(2008) byJurafskyand Martin presents the basics and the state of the art for ASR.Speaker recognitionalso uses the same features, most of the same front-end processing, and classification techniques as is done in speech recognition. A comprehensive textbook, "Fundamentals of Speaker Recognition" is an in depth source for up to date details on the theory and practice.[143]A good insight into the techniques used in the best modern systems can be gained by paying attention to government sponsored evaluations such as those organised byDARPA(the largest speech recognition-related project ongoing as of 2007 is the GALE project, which involves both speech recognition and translation components).
A good and accessible introduction to speech recognition technology and its history is provided by the general audience book "The Voice in the Machine. Building Computers That Understand Speech" byRoberto Pieraccini(2012).
The most recent book on speech recognition isAutomatic Speech Recognition: A Deep Learning Approach(Publisher: Springer) written by Microsoft researchers D. Yu and L. Deng and published near the end of 2014, with highly mathematically oriented technical detail on how deep learning methods are derived and implemented in modern speech recognition systems based on DNNs and related deep learning methods.[84]A related book, published earlier in 2014, "Deep Learning: Methods and Applications" by L. Deng and D. Yu provides a less technical but more methodology-focused overview of DNN-based speech recognition during 2009–2014, placed within the more general context of deep learning applications including not only speech recognition but also image recognition, natural language processing, information retrieval, multimodal processing, and multitask learning.[80]
In terms of freely available resources,Carnegie Mellon University'sSphinxtoolkit is one place to start to both learn about speech recognition and to start experimenting. Another resource (free but copyrighted) is theHTKbook (and the accompanying HTK toolkit). For more recent and state-of-the-art techniques,Kalditoolkit can be used.[144]In 2017Mozillalaunched the open source project calledCommon Voice[145]to gather big database of voices that would help build free speech recognition project DeepSpeech (available free atGitHub),[146]using Google's open source platformTensorFlow.[147]When Mozilla redirected funding away from the project in 2020, it was forked by its original developers as Coqui STT[148]using the same open-source license.[149][150]
GoogleGboardsupports speech recognition on allAndroidapplications. It can be activated through themicrophoneicon.[151]Speech recognition can be activated inMicrosoft Windowsoperating systems by pressing Windows logo key + Ctrl + S.[152]
The commercial cloud based speech recognition APIs are broadly available.
For more software resources, seeList of speech recognition software.
|
https://en.wikipedia.org/wiki/Speech_recognition
|
Bioinformatics(/ˌbaɪ.oʊˌɪnfərˈmætɪks/ⓘ) is aninterdisciplinaryfield ofsciencethat develops methods andsoftware toolsfor understandingbiologicaldata, especially when the data sets are large and complex. Bioinformatics usesbiology,chemistry,physics,computer science,data science,computer programming,information engineering,mathematicsandstatisticsto analyze and interpretbiological data. The process of analyzing and interpreting data can sometimes be referred to ascomputational biology, however this distinction between the two terms is often disputed. To some, the termcomputational biologyrefers to building and using models of biological systems.
Computational, statistical, and computer programming techniques have been used forcomputer simulationanalyses of biological queries. They include reused specific analysis "pipelines", particularly in the field ofgenomics, such as by the identification ofgenesand singlenucleotidepolymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includesproteomics, which tries to understand the organizational principles withinnucleic acidandproteinsequences.[1]
Image andsignal processingallow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observedmutations. Bioinformatics includestext miningof biological literature and the development of biological and geneontologiesto organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part ofsystems biology. Instructural biology, it aids in the simulation and modeling of DNA,[2]RNA,[2][3]proteins[4]as well as biomolecular interactions.[5][6][7][8]
The first definition of the termbioinformaticswas coined byPaulien HogewegandBen Hesperin 1970, to refer to the study of information processes in biotic systems.[9][10][11][12][13]This definition placed bioinformatics as a field parallel tobiochemistry(the study of chemical processes in biological systems).[10]
Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by theHuman Genome Projectand by rapid advances in DNA sequencing technology.[citation needed]
Analyzing biological data to produce meaningful information involves writing and running software programs that usealgorithmsfromgraph theory,artificial intelligence,soft computing,data mining,image processing, andcomputer simulation. The algorithms in turn depend on theoretical foundations such asdiscrete mathematics,control theory,system theory,information theory, andstatistics.[citation needed]
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able tosequenceover 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.[14]
Computers became essential in molecular biology whenprotein sequencesbecame available afterFrederick Sangerdetermined the sequence ofinsulinin the early 1950s.[15][16]Comparing multiple sequences manually turned out to be impractical.Margaret Oakley Dayhoff, a pioneer in the field,[17]compiled one of the first protein sequence databases, initially published as books[18]as well as methods of sequence alignment andmolecular evolution.[19]Another early contributor to bioinformatics wasElvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released online with Tai Te Wu between 1980 and 1991.[20]
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were the proof of the concept that bioinformatics would be insightful.[21][22]
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore[when?], the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide andamino acid sequences,protein domains, andprotein structures.[23]
Important sub-disciplines within bioinformatics andcomputational biologyinclude:
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include:pattern recognition,data mining,machine learningalgorithms, andvisualization. Major research efforts in the field includesequence alignment,gene finding,genome assembly,drug design,drug discovery,protein structure alignment,protein structure prediction, prediction ofgene expressionandprotein–protein interactions,genome-wide association studies, the modeling ofevolutionandcell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments ininformation technologieshave combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzingDNAand protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Since the bacteriophagePhage Φ-X174wassequencedin 1977,[24]theDNA sequencesof thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encodeproteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within aspeciesor between different species can show similarities between protein functions, or relations between species (the use ofmolecular systematicsto constructphylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually.Computer programssuch asBLASTare used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billionnucleotides.[25]
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank.DNA sequencingis still a non-trivial problem as the raw data may be noisy or affected by weak signals.Algorithmshave been developed forbase callingfor the various experimental approaches to DNA sequencing.
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. Theshotgun sequencingtechnique (used byThe Institute for Genomic Research(TIGR) to sequence the first bacterial genome,Haemophilus influenzae)[26]generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as thehuman genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
Ingenomics,annotationrefers to the process of marking the stop and start regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate ofsequencingexceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.[when?]
Genome annotation can be classified into three levels: thenucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination ofab initiogene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
The principal aim of protein-level annotation is to assign function to theproteinproducts of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.[27]
The first description of a comprehensive annotation system was published in 1995[26]byThe Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacteriumHaemophilus influenzae.[26]The system identifies the genes encoding all proteins, transfer RNAs, ribosomal RNAs, in order to make initial functional assignments. TheGeneMarkprogram trained to find protein-coding genes inHaemophilus influenzaeis constantly changing and improving.
Following the goals that the Human Genome Project left to achieve after its closure in 2003, theENCODEproject was developed by theNational Human Genome Research Institute. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).
While genome annotation is primarily based on sequence similarity (and thushomology), other properties of sequences can be used to predict the function of genes. In fact, mostgenefunction prediction methods focus onproteinsequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobicamino acidspredictstransmembrane segmentsin proteins. However, protein function prediction can also use external information such as gene (or protein)expressiondata,protein structure, orprotein-protein interactions.[28]
Evolutionary biologyis the study of the origin and descent ofspecies, as well as their change over time.Informaticshas assisted evolutionary biologists by enabling researchers to:
The core of comparative genome analysis is the establishment of the correspondence betweengenes(orthologyanalysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion.[30]Entire genomes are involved in processes of hybridization, polyploidization andendosymbiosisthat lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact,heuristics, fixed parameter andapproximation algorithmsfor problems based on parsimony models toMarkov chain Monte Carloalgorithms forBayesian analysisof problems based on probabilistic models.
Many of these studies are based on the detection ofsequence homologyto assign sequences toprotein families.[31]
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. Pan genome is the complete gene repertoire of a particularmonophyletictaxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided in two parts: the Core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the Dispensable/Flexible genome: a set of genes not present in all but one or some genomes under study. A bioinformatics tool BPGA can be used to characterize the Pan Genome of bacterial species.[32]
As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of cause many different human disorders. SimpleMendelian inheritancehas been observed for over 3,000 disorders that have been identified at theOnline Mendelian Inheritance in Mandatabase, but complex diseases are more difficult. Association studies have found many individual genetic regions that individually are weakly associated with complex diseases (such asinfertility,[33]breast cancer[34]andAlzheimer's disease[35]), rather than a single cause.[36][37]There are currently many challenges to using genes for diagnosis and treatment, such as how we don't know which genes are important, or how stable the choices an algorithm provides.[38]
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability.[39]Rare variantsmay account for some of themissing heritability.[40]Large-scalewhole genome sequencingstudies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions ofrare variants.[41]Functional annotationspredict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of genetic association of rare variants analysis of whole genome sequencing studies.[42]Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization.[43][44]Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.[45]
Incancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition tosingle-nucleotide polymorphismarrays identifyingpoint mutationsthat cause cancer,oligonucleotidemicroarrays can be used to identify chromosomal gains and losses (calledcomparative genomic hybridization). These detection methods generateterabytesof data per experiment. The data is often found to contain considerable variability, ornoise, and thusHidden Markov modeland change-point analysis methods are being developed to infer realcopy numberchanges.[citation needed]
Two important principles can be used to identify cancer by mutations in theexome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers.[46]
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer driven mutations in the genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequence of cancer samples. Another type of data that requires novel informatics development is the analysis oflesionsfound to be recurrent among many tumors.[47]
Theexpressionof many genes can be determined by measuringmRNAlevels with multiple techniques includingmicroarrays,expressed cDNA sequence tag(EST) sequencing,serial analysis of gene expression(SAGE) tag sequencing,massively parallel signature sequencing(MPSS),RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separatesignalfromnoisein high-throughput gene expression studies.[48]Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerousepithelialcells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
Protein microarraysand high throughput (HT)mass spectrometry(MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as with microarrays targeted at mRNA, the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinityproteomicsdisplayed as spatial data based onimmunohistochemistryandtissue microarrays.[49]
Gene regulationis a complex process where a signal, such as an extracellular signal such as ahormone, eventually leads to an increase or decrease in the activity of one or moreproteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study ofsequence motifsin the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA.Enhancerelements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis ofchromosome conformation captureexperiments.
Expression data can be used to infer gene regulation: one might comparemicroarraydata from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of thecell cycle, along with various stress conditions (heat shock, starvation, etc.).Clustering algorithmscan be then applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-representedregulatory elements. Examples of clustering algorithms applied in gene clustering arek-means clustering,self-organizing maps(SOMs),hierarchical clustering, andconsensus clusteringmethods.
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. Agene ontologycategory,cellular component, has been devised to capture subcellular localization in manybiological databases.
Microscopic pictures allow for the location oforganellesas well as molecules, which may be the source of abnormalities in diseases.
Finding the location of proteins allows us to predict what they do. This is calledprotein function prediction. For instance, if a protein is found in thenucleusit may be involved ingene regulationorsplicing. By contrast, if a protein is found inmitochondria, it may be involved inrespirationor othermetabolic processes. There are well developedprotein subcellular localization predictionresources available, including protein subcellular location databases, and prediction tools.[50][51]
Data from high-throughputchromosome conformation captureexperiments, such asHi-C (experiment)andChIA-PET, can provide information on the three-dimensional structure andnuclear organizationofchromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such asTopologically Associating Domains(TADs), that are organised together in three-dimensional space.[52]
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition where worldwide research groups submit protein models for evaluating unknown protein models.[53][54]
The linearamino acidsequence of a protein is called theprimary structure. The primary structure can be easily determined from the sequence ofcodonson the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfoldedprionprotein involved inbovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes thesecondary,tertiaryandquaternarystructure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.[citation needed]
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of geneA, whose function is known, is homologous to the sequence of geneB,whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins.Homology modelingis used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the sameprotein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have very different amino acid sequences, their protein structures are very similar, reflecting their shared function and shared ancestor.[55]
Other techniques for predicting protein structure include protein threading andde novo(from scratch) physics-based modeling.
Another aspect of structural bioinformatics include the use of protein structures forVirtual Screeningmodels such asQuantitative Structure-Activity Relationshipmodels and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulation of for example ligand-binding studies andin silicomutagenesis studies.
A 2021deep-learningalgorithms-based software calledAlphaFold, developed by Google'sDeepMind, greatly outperforms all other prediction software methods,[56][how?]and has released predicted structures for hundreds of millions of proteins in the AlphaFold protein structure database.[57]
Network analysisseeks to understand the relationships withinbiological networkssuch asmetabolicorprotein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biologyinvolves the use ofcomputer simulationsofcellularsubsystems (such as thenetworks of metabolitesandenzymesthat comprisemetabolism,signal transductionpathways andgene regulatory networks) to both analyze and visualize the complex connections of these cellular processes.Artificial lifeor virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Tens of thousands of three-dimensional protein structures have been determined byX-ray crystallographyandprotein nuclear magnetic resonance spectroscopy(protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performingprotein–protein interactionexperiments. A variety of methods have been developed to tackle theprotein–protein dockingproblem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include Protein–ligand (including drug) andprotein–peptide. Molecular dynamic simulation of movement of atoms about rotatable bonds is the fundamental principle behind computationalalgorithms, termed docking algorithms, for studyingmolecular interactions.
Biodiversity informatics deals with the collection and analysis ofbiodiversitydata, such astaxonomic databases, ormicrobiomedata. Examples of such analyses includephylogenetics,niche modelling,species richnessmapping,DNA barcoding, orspeciesidentification tools. A growing area is alsomacro-ecology, i.e. the study of how biodiversity is connected toecologyand human impact, such asclimate change.
The enormous number of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
The area of research draws fromstatisticsandcomputational linguistics.
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-contentbiomedical imagery. Modernimage analysissystems can improve an observer'saccuracy,objectivity, or speed. Image analysis is important for bothdiagnosticsand research. Some examples are:
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained fromflow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
Biological ontologies aredirected acyclic graphsofcontrolled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.[citation needed]
TheOBO Foundrywas an effort to standardise certain ontologies. One of the most widespread is theGene ontologywhich describes gene function. There are also ontologies which describe phenotypes.
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Some of the most commonly used databases are listed below:
Software tools for bioinformaticsinclude simple command-line tools, more complex graphical programs, and standalone web-services. They are made bybioinformatics companiesor by public institutions.
Manyfree and open-source softwaretools have existed and continued to grow since the 1980s.[59]The combination of a continued need for newalgorithmsfor the analysis of emerging types of biological readouts, the potential for innovativein silicoexperiments, and freely availableopen codebases have created opportunities for research groups to contribute to both bioinformatics regardless offunding. The open source tools often act as incubators of ideas, or community-supportedplug-insin commercial applications. They may also providede factostandards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includesBioconductor,BioPerl,Biopython,BioJava,BioJS,BioRuby,Bioclipse,EMBOSS, .NET Bio,Orangewith its bioinformatics add-on,Apache Taverna,UGENEandGenoCAD.
The non-profitOpen Bioinformatics Foundation[59]and the annualBioinformatics Open Source Conferencepromote open-source bioinformatics software.[60]
SOAP- andREST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage are that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by theEBIinto three categories:SSS(Sequence Search Services),MSA(Multiple Sequence Alignment), andBSA(Biological Sequence Analysis).[61]The availability of theseservice-orientedbioinformatics resources demonstrate the applicability of web-based bioinformatics solutions, and range from a collection of standalone tools with a common data format under a single web-based interface, to integrative, distributed and extensiblebioinformatics workflow management systems.
Abioinformatics workflow management systemis a specialized form of aworkflow management systemdesigned specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a Bioinformatics application. Such systems are designed to
Some of the platforms giving this service:Galaxy,Kepler,Taverna,UGENE,Anduril,HIVE.
In 2014, theUS Food and Drug Administrationsponsored a conference held at theNational Institutes of HealthBethesda Campus to discuss reproducibility in bioinformatics.[62]Over the next three years, a consortium of stakeholders met regularly to discuss what would become BioCompute paradigm.[63]These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including theHuman Variome Projectand theEuropean Federation for Medical Informatics, and research institutions includingStanford, theNew York Genome Center, and theGeorge Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.[64]
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for aBioCompute Object, an instance of the BioCompute paradigm. This work was copied as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute object allows for the JSON-ized record to be shared among employees, collaborators, and regulators.[65][66]
While bioinformatics is taught as an in-personmaster's degreeat many universities, there are many other methods and technologies available to learn and obtain certification in the subject. The computational nature of bioinformatics lends it tocomputer-aided and online learning.[67][68]Software platforms designed to teach bioinformatics concepts and methods includeRosalindand online courses offered through theSwiss Institute of BioinformaticsTraining Portal. TheCanadian Bioinformatics Workshopsprovides videos and slides from training workshops on their website under aCreative Commonslicense. The 4273π project or 4273pi project[69]also offers open source educational materials for free. The course runs on low costRaspberry Picomputers and has been used to teach adults and school pupils.[70][71]4273 is actively developed by a consortium of academics and research staff who have run research level bioinformatics using Raspberry Pi computers and the 4273π operating system.[72][73]
MOOCplatforms also provide online certifications in bioinformatics and related disciplines, includingCoursera's Bioinformatics Specialization at theUniversity of California, San Diego, Genomic Data Science Specialization atJohns Hopkins University, andEdX's Data Analysis for Life Sciences XSeries atHarvard University.
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples areIntelligent Systems for Molecular Biology(ISMB),European Conference on Computational Biology(ECCB), andResearch in Computational Molecular Biology(RECOMB).
|
https://en.wikipedia.org/wiki/Bioinformatics
|
Inscience, and most specificallychemistry, theaccepted valuedenotes a value of a substance accepted byalmost all scientistsand theexperimental valuedenotes the value of a substance's properties found in a localized lab.[1]
Thisphysical chemistry-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Accepted_and_experimental_value
|
Data qualityrefers to the state ofqualitativeorquantitativepieces of information. There are many definitions of data quality, but data is generally considered high quality if it is "fit for [its] intended uses in operations,decision makingandplanning".[1][2][3]Data is deemed of high quality if it correctly represents the real-world construct to which it refers. Apart from these definitions, as the number of data sources increases, the question of internaldata consistencybecomes significant, regardless of fitness for use for any particular external purpose.
People's views on data quality can often be in disagreement, even when discussing the same set of data used for the same purpose. When this is the case,data governanceis used to form agreed upon definitions and standards for data quality. In such cases,data cleansing, includingstandardization, may be required in order to ensure data quality.[4]
Defining data quality is difficult due to the many contexts data are used in, as well as the varying perspectives among end users, producers, and custodians of data.[5]
From a consumer perspective, data quality is:[5]
From a business perspective, data quality is:
From a standards-based perspective, data quality is:
Arguably, in all these cases, "data quality" is a comparison of the actual state of a particular set of data to a desired state, with the desired state being typically referred to as "fit for use," "to specification," "meeting consumer expectations," "free of defect," or "meeting requirements." These expectations, specifications, and requirements are usually defined by one or more individuals or groups, standards organizations, laws and regulations, business policies, or software development policies.[5]
Drilling down further, those expectations, specifications, and requirements are stated in terms of characteristics or dimensions of the data, such as:[5][6][7][8][11]
A systematic scoping review of the literature suggests that data quality dimensions and methods with real world data are not consistent in the literature, and as a result quality assessments are challenging due to the complex and heterogeneous nature of these data.[11]
Before the rise of the inexpensivecomputer data storage, massivemainframecomputers were used to maintain name and address data for delivery services. This was so that mail could be properly routed to its destination. The mainframes used business rules to correct common misspellings and typographical errors in name and address data, as well as to track customers who had moved, died, gone to prison, married, divorced, or experienced other life-changing events. Government agencies began to make postal data available to a few service companies to cross-reference customer data with the National Change of Address registry(NCOA). This technology saved large companies millions of dollars in comparison to manual correction of customer data. Large companies saved on postage, as bills and direct marketing materials made their way to the intended customer more accurately. Initially sold as a service, data quality moved inside the walls of corporations, as low-cost and powerful server technology became available.[citation needed]
Companies with an emphasis on marketing often focused their quality efforts on name and address information, but data quality is recognized[by whom?]as an important property of all types of data. Principles of data quality can be applied to supply chain data, transactional data, and nearly every other category of data found. For example, making supply chain data conform to a certain standard has value to an organization by: 1) avoiding overstocking of similar but slightly different stock; 2) avoiding false stock-out; 3) improving the understanding of vendor purchases to negotiate volume discounts; and 4) avoiding logistics costs in stocking and shipping parts across a large organization.[citation needed]
For companies with significant research efforts, data quality can include developingprotocolsfor research methods, reducingmeasurement error,bounds checkingof data,cross tabulation, modeling andoutlierdetection, verifyingdata integrity, etc.[citation needed]
There are a number of theoretical frameworks for understanding data quality. A systems-theoretical approach influenced by American pragmatism expands the definition of data quality to include information quality, and emphasizes the inclusiveness of the fundamental dimensions of accuracy and precision on the basis of the theory of science (Ivanov, 1972). One framework, dubbed "Zero Defect Data" (Hansen, 1991) adapts the principles of statistical process control to data quality. Another framework seeks to integrate the product perspective (conformance to specifications) and theserviceperspective (meeting consumers' expectations) (Kahn et al. 2002). Another framework is based insemioticsto evaluate the quality of the form, meaning and use of the data (Price and Shanks, 2004). One highly theoretical approach analyzes theontologicalnature ofinformation systemsto define data quality rigorously (Wand and Wang, 1996).
A considerable amount of data quality research involves investigating and describing various categories of desirable attributes (or dimensions) of data. Nearly 200 such terms have been identified and there is little agreement in their nature (are these concepts, goals or criteria?), their definitions or measures (Wang et al., 1993). Software engineers may recognize this as a similar problem to "ilities".
MIThas an Information Quality (MITIQ) Program, led by Professor Richard Wang, which produces a large number of publications and hosts a significant international conference in this field (International Conference on Information Quality, ICIQ). This program grew out of the work done by Hansen on the "Zero Defect Data" framework (Hansen, 1991).
In practice, data quality is a concern for professionals involved with a wide range of information systems, ranging fromdata warehousingandbusiness intelligencetocustomer relationship managementandsupply chain management. One industry study estimated the total cost to the U.S. economy of data quality problems at over U.S. $600 billion per annum (Eckerson, 2002). Incorrect data – which includes invalid and outdated information – can originate from different data sources – through data entry, ordata migrationand conversion projects.[12]
In 2002, the USPS and PricewaterhouseCoopers released a report stating that 23.6 percent of all U.S. mail sent is incorrectly addressed.[13]
One reason contact data becomes stale very quickly in the average database – more than 45 million Americans change their address every year.[14]
In fact, the problem is such a concern that companies are beginning to set up adata governanceteam whose sole role in the corporation is to be responsible for data quality. In some[who?]organizations, this data governance function has been established as part of a larger Regulatory Compliance function - a recognition of the importance of Data/Information Quality to organizations.
Problems with data quality don't only arise fromincorrectdata;inconsistentdata is a problem as well. Eliminatingdata shadow systemsand centralizing data in a warehouse is one of the initiatives a company can take to ensure data consistency.
Enterprises, scientists, and researchers are starting to participate within data curation communities to improve the quality of their common data.[15]
The market is going some way to providing data quality assurance. A number of vendors make tools for analyzing and repairing poor quality datain situ, service providers can clean the data on a contract basis and consultants can advise on fixing processes or systems to avoid data quality problems in the first place. Most data quality tools offer a series of tools for improving data, which may include some or all of the following:
ISO 8000is an international standard for data quality.[16]
Data quality assurance is the process ofdata profilingto discover inconsistencies and other anomalies in the data, as well as performingdata cleansing[17][18]activities (e.g. removingoutliers, missing datainterpolation) to improve the data quality.
These activities can be undertaken as part ofdata warehousingor as part of thedatabase administrationof an existing piece ofapplication software.[19]
Data quality controlis the process of controlling the usage of data for an application or a process. This process is performed both before and after a DataQuality Assurance(QA) process, which consists of discovery of data inconsistency and correction.
Before:
After QA process the following statistics are gathered to guide theQuality Control(QC) process:
The Data QC process uses the information from the QA process to decide to use the data for analysis or in an application or business process. General example: if a Data QC process finds that the data contains too many errors or inconsistencies, then it prevents that data from being used for its intended process which could cause disruption. Specific example: providing invalid measurements from several sensors to the automatic pilot feature on an aircraft could cause it to crash. Thus, establishing a QC process provides data usage protection.[citation needed]
Data Quality (DQ) is a niche area required for the integrity of the data management by covering gaps of data issues. This is one of the key functions that aid data governance by monitoring data to find exceptions undiscovered by current data management operations. Data Quality checks may be defined at attribute level to have full control on its remediation steps.[citation needed]
DQ checks and business rules may easily overlap if an organization is not attentive of its DQ scope. Business teams should understand the DQ scope thoroughly in order to avoid overlap. Data quality checks are redundant ifbusiness logiccovers the same functionality and fulfills the same purpose as DQ. The DQ scope of an organization should be defined in DQ strategy and well implemented. Some data quality checks may be translated into business rules after repeated instances of exceptions in the past.[citation needed]
Below are a few areas of data flows that may need perennial DQ checks:
CompletenessandprecisionDQ checks on all data may be performed at the point of entry for each mandatory attribute from each source system. Few attribute values are created way after the initial creation of the transaction; in such cases, administering these checks becomes tricky and should be done immediately after the defined event of that attribute's source and the transaction's other core attribute conditions are met.
All data having attributes referring toReference Datain the organization may be validated against the set of well-defined valid values of Reference Data to discover new or discrepant values through thevalidityDQ check. Results may be used to updateReference Dataadministered underMaster Data Management (MDM).
All data sourced from athird partyto organization's internal teams may undergoaccuracy(DQ) check against the third party data. These DQ check results are valuable when administered on data that made multiple hops after the point of entry of that data but before that data becomes authorized or stored for enterprise intelligence.
All data columns that refer toMaster Datamay be validated for itsconsistencycheck. A DQ check administered on the data at the point of entry discovers new data for the MDM process, but a DQ check administered after the point of entry discovers the failure (not exceptions) of consistency.
As data transforms, multiple timestamps and the positions of that timestamps are captured and may be compared against each other and its leeway to validate its value, decay, operational significance against a defined SLA (service level agreement). ThistimelinessDQ check can be utilized to decrease data value decay rate and optimize the policies of data movement timeline.
In an organization complex logic is usually segregated into simpler logic across multiple processes.ReasonablenessDQ checks on such complex logic yielding to a logical result within a specific range of values or static interrelationships (aggregated business rules) may be validated to discover complicated but crucial business processes and outliers of the data, its drift from BAU (business as usual) expectations, and may provide possible exceptions eventually resulting into data issues. This check may be a simple generic aggregation rule engulfed by large chunk of data or it can be a complicated logic on a group of attributes of a transaction pertaining to the core business of the organization. This DQ check requires high degree of business knowledge and acumen. Discovery of reasonableness issues may aid for policy and strategy changes by either business or data governance or both.
Conformitychecks andintegrity checksneed not covered in all business needs, it's strictly under the database architecture's discretion.
There are many places in the data movement where DQ checks may not be required. For instance, DQ check for completeness and precision on not–null columns is redundant for the data sourced from database. Similarly, data should be validated for its accuracy with respect to time when the data is stitched across disparate sources. However, that is a business rule and should not be in the DQ scope.[citation needed]
Regretfully, from a software development perspective, DQ is often seen as a nonfunctional requirement. And as such, key data quality checks/processes are not factored into the final software solution. Within Healthcare,wearable technologiesorBody Area Networks, generate large volumes of data.[20]The level of detail required to ensure data quality is extremely high and is often underestimated. This is also true for the vast majority ofmHealthapps,EHRsand other health related software solutions. However, some open source tools exist that examine data quality.[21]The primary reason for this, stems from the extra cost involved is added a higher degree of rigor within the software architecture.
The use of mobile devices in health, or mHealth, creates new challenges tohealth datasecurity and privacy, in ways that directly affect data quality.[2]mHealth is an increasingly important strategy for delivery of health services in low- and middle-income countries.[22]Mobile phones and tablets are used for collection, reporting, and analysis of data in near real time. However, these mobile devices are commonly used for personal activities, as well, leaving them more vulnerable to security risks that could lead to data breaches. Without proper security safeguards, this personal use could jeopardize the quality, security, and confidentiality of health data.[23]
Data quality has become a major focus of public health programs in recent years, especially as demand for accountability increases.[24]Work towards ambitious goals related to the fight against diseases such as AIDS, Tuberculosis, and Malaria must be predicated on strong Monitoring and Evaluation systems that produce quality data related to program implementation.[25]These programs, and program auditors, increasingly seek tools to standardize and streamline the process of determining the quality of data,[26]verify the quality of reported data, and assess the underlying data management and reporting systems for indicators.[27]An example is WHO and MEASURE Evaluation's Data Quality Review Tool[28]WHO, the Global Fund, GAVI, and MEASURE Evaluation have collaborated to produce a harmonized approach to data quality assurance across different diseases and programs.[29]
There are a number of scientific works devoted to the analysis of the data quality inopen datasources, such asWikipedia,Wikidata,DBpediaand other. In the case of Wikipedia, quality analysis may relate to the whole article[30]Modeling of quality there is carried out by means of various methods. Some of them usemachine learningalgorithms, includingRandom Forest,[31]Support Vector Machine,[32]and others. Methods for assessing data quality in Wikidata, DBpedia and otherLODsources differ.[33]
The Electronic Commerce Code Management Association (ECCMA) is a member-based, international not-for-profit association committed to improving data quality through the implementation of international standards. ECCMA is the current project leader for the development of ISO 8000 and ISO 22745, which are the international standards for data quality and the exchange of material and service master data, respectively. ECCMA provides a platform for collaboration amongst subject experts on data quality and data governance around the world to build and maintain global, open standard dictionaries that are used to unambiguously label information. The existence of these dictionaries of labels allows information to be passed from one computer system to another without losing meaning.[34]
|
https://en.wikipedia.org/wiki/Data_quality
|
Engineering toleranceis the permissible limit or limits of variation in:
Dimensions, properties, or conditions may have some variation without significantly affecting functioning of systems, machines, structures, etc. A variation beyond the tolerance (for example, a temperature that is too hot or too cold) is said to be noncompliant, rejected, or exceeding the tolerance.
A primary concern is to determine how wide the tolerances may be without affecting other factors or the outcome of a process. This can be by the use of scientific principles, engineering knowledge, and professional experience. Experimental investigation is very useful to investigate the effects of tolerances:Design of experiments, formal engineering evaluations, etc.
A good set of engineering tolerances in aspecification, by itself, does not imply that compliance with those tolerances will be achieved. Actual production of any product (or operation of any system) involves some inherent variation of input and output. Measurement error and statistical uncertainty are also present in all measurements. With anormal distribution, the tails of measured values may extend well beyond plus and minus three standard deviations from the process average. Appreciable portions of one (or both) tails might extend beyond the specified tolerance.
Theprocess capabilityof systems, materials, and products needs to be compatible with the specified engineering tolerances.Process controlsmust be in place and an effectivequality management system, such asTotal Quality Management, needs to keep actual production within the desired tolerances. Aprocess capability indexis used to indicate the relationship between tolerances and actual measured production.
The choice of tolerances is also affected by the intended statisticalsampling planand its characteristics such as the Acceptable Quality Level. This relates to the question of whether tolerances must be extremely rigid (high confidence in 100% conformance) or whether some small percentage of being out-of-tolerance may sometimes be acceptable.
Genichi Taguchiand others have suggested that traditional two-sided tolerancing is analogous to "goal posts" in afootball game: It implies that all data within those tolerances are equally acceptable. The alternative is that the best product has a measurement which is precisely on target. There is an increasing loss which is a function of the deviation or variability from the target value of any design parameter. The greater the deviation from target, the greater is the loss. This is described as theTaguchi loss functionorquality loss function, and it is the key principle of an alternative system calledinertial tolerancing.
Research and development work conducted by M. Pillet and colleagues[1]at the Savoy University has resulted in industry-specific adoption.[2]Recently the publishing of the French standard NFX 04-008 has allowed further consideration by the manufacturing community.
Dimensional tolerance is related to, but different fromfitin mechanical engineering, which is adesigned-inclearance or interference between two parts. Tolerances are assigned to parts for manufacturing purposes, as boundaries for acceptable build. No machine can hold dimensions precisely to the nominal value, so there must be acceptable degrees of variation. If a part is manufactured, but has dimensions that are out of tolerance, it is not a usable part according to the design intent. Tolerances can be applied to any dimension. The commonly used terms are:
This is identical to the upper deviation for shafts and the lower deviation for holes.[3]If the fundamental deviation is greater than zero, the bolt will always be smaller than the basic size and he hole will always be wider. Fundamental deviation is a form ofallowance, rather than tolerance.
For example, if a shaft with a nominal diameter of 10mmis to have a sliding fit within a hole, the shaft might be specified with a tolerance range from 9.964 to 10 mm (i.e., a zero fundamental deviation, but a lower deviation of 0.036 mm) and the hole might be specified with a tolerance range from 10.04 mm to 10.076 mm (0.04 mm fundamental deviation and 0.076 mm upper deviation). This would provide a clearance fit of somewhere between 0.04 mm (largest shaft paired with the smallest hole, called theMaximum Material Condition- MMC) and 0.112 mm (smallest shaft paired with the largest hole,Least Material Condition- LMC). In this case the size of the tolerance range for both the shaft and hole is chosen to be the same (0.036 mm), meaning that both components have the same International Tolerance grade but this need not be the case in general.
When no other tolerances are provided, themachining industryuses the followingstandard tolerances:[4][5]
When designing mechanical components, a system of standardized tolerances calledInternational Tolerance gradesare often used. The standard (size) tolerances are divided into two categories: hole and shaft. They are labelled with a letter (capitals for holes and lowercase for shafts) and a number. For example: H7 (hole,tapped hole, ornut) and h7 (shaft or bolt). H7/h6 is a very common standard tolerance which gives a tight fit. The tolerances work in such a way that for a hole H7 means that the hole should be made slightly larger than the base dimension (in this case for an ISO fit 10+0.015−0, meaning that it may be up to 0.015 mm larger than the base dimension, and 0 mm smaller). The actual amount bigger/smaller depends on the base dimension. For a shaft of the same size, h6 would mean 10+0−0.009, which means the shaft may be as small as 0.009 mm smaller than the base dimension and 0 mm larger. This method of standard tolerances is also known as Limits and Fits and can be found inISO 286-1:2010 (Link to ISO catalog).
The table below summarises the International Tolerance (IT) grades and the general applications of these grades:
An analysis of fit bystatistical interferenceis also extremely useful: It indicates the frequency (or probability) of parts properly fitting together.
An electrical specification might call for aresistorwith a nominal value of 100 Ω (ohms), but will also state a tolerance such as "±1%". This means that any resistor with a value in the range 99–101Ω is acceptable. For critical components, one might specify that the actual resistance must remain within tolerance within a specified temperature range, over a specified lifetime, and so on.
Many commercially availableresistorsandcapacitorsof standard types, and some smallinductors, are often marked withcoloured bandsto indicate their value and the tolerance. High-precision components of non-standard values may have numerical information printed on them.
Low tolerance means only a small deviation from the components given value, when new, under normal operating conditions and at room temperature. Higher tolerance means the component will have a wider range of possible values.
The terms are often confused but sometimes a difference is maintained. SeeAllowance (engineering) § Confounding of the engineering concepts of allowance and tolerance.
Incivil engineering,clearancerefers to the difference between theloading gaugeand thestructure gaugein the case ofrailroad carsortrams, or the difference between the size of anyvehicleand the width/height of doors, the width/height of anoverpassor thediameterof atunnelas well as theair draftunder abridge, the width of alockor diameter of a tunnel in the case ofwatercraft. In addition there is the difference between thedeep draftand thestream bedorsea bedof awaterway.
|
https://en.wikipedia.org/wiki/Engineering_tolerance
|
In mathematics,exactnessmay refer to:
|
https://en.wikipedia.org/wiki/Exactness_(disambiguation)
|
Experimental uncertainty analysisis a technique that analyses aderivedquantity, based on the uncertainties in the experimentallymeasuredquantities that are used in some form of mathematical relationship ("model") to calculate that derived quantity. The model used to convert the measurements into the derived quantity is usually based on fundamental principles of ascienceorengineeringdiscipline.
The uncertainty has two components, namely, bias (related toaccuracy) and the unavoidablerandom variationthat occurs when making repeated measurements (related toprecision). The measured quantities may havebiases, and they certainly haverandom variation, so what needs to be addressed is how these are "propagated" into the uncertainty of the derived quantity. Uncertainty analysis is often called the "propagation of error."
For example, an experimentaluncertainty analysisof an undergraduate physics lab experiment in which apendulumcan estimate the value of the localgravitational accelerationconstantg. The relevant equation[1]for an idealized simple pendulum is, approximately,
whereTis theperiodofoscillation(seconds),Lis the length (meters), andθis the initial angle. Sinceθis the single time-dependent coordinate of this system, it might be better to useθ0to denote the initial (starting)displacementangle, but it will be more convenient for notation to omit the subscript. Solving Eq(1) for the constantg,
This is theequation, or model, to be used for estimatinggfrom observed data. There will be some slight bias introduced into the estimation ofgby the fact that the term in brackets is only the first two terms of aseries expansion, but in practical experiments this bias can be, and will be, ignored.
The procedure is to measure the pendulum lengthLand then make repeated measurements of the periodT, each time starting the pendulum motion from the same initial displacement angleθ. The replicated measurements ofTareaveragedand then used in Eq(2) to obtain an estimate ofg. Equation (2) is the means to get from themeasuredquantitiesL,T, andθto thederivedquantityg.
Note that an alternative approach would be to convert all the individualTmeasurements to estimates ofg, using Eq(2), and then to average thosegvalues to obtain the final result. This would not be practical without some form of mechanized computing capability (i.e., computer or calculator), since the amount of numerical calculation in evaluating Eq(2) for manyTmeasurements would be tedious and prone to mistakes.
There are three quantities that must be measured: (1) the length of the pendulum, from its suspension point to the center of mass of the “bob;” (2) the period ofoscillation; (3) the initial displacement angle. The length is assumed to be fixed in this experiment, and it is to be measured once, although repeated measurements could be made, and the results averaged.
The initial displacement angle must be set for each replicate measurement of the periodT, and this angle is assumed to be constant. Often the initial angle is kept small (less than about 10 degrees) so that the correction for this angle is considered to be negligible; i.e., the term in brackets in Eq(2) is taken to be unity. For the experiment studied here, however, this correction is of interest, so that a typical initial displacement value might range from 30 to 45 degrees.
Suppose that it was the case, unknown to the students, that the length measurements were too small by, say, 5 mm. This could be due to a faulty measurement device (e.g. a meter stick), or, more likely, asystematic errorin the use of that device in measuringL. This could occur if the students forgot to measure to thecenter of massof the bob, and insteadconsistentlymeasured to the point where the string attached to it. Thus, this error is not random; it occurs each and every time the length is measured.
Next, the period of oscillationTcould suffer from asystematic errorif, for example, the studentsconsistentlymiscounted the back-and-forth motions of the pendulum to obtain aninteger numberof cycles. (Often the experimental procedure calls for timing several cycles, e.g., five or ten, not just one.) Or perhaps the digital stopwatch they used had an electronic problem, andconsistentlyread too large a value by, say, 0.02 seconds. There will of course also be random timing variations; that issue will be addressed later. Of concern here is a consistent, systematic, nonrandom error in the measurement of the period of oscillation of the pendulum.
Finally, the initial angle could be measured with a simpleprotractor. It is difficult to position and read the initial angle with high accuracy (or precision, for that matter; this measurement has poorreproducibility). Assume that the studentsconsistentlymis-position the protractor so that the angle reading is too small by, say, 5 degrees. Then all the initial angle measurements are biased by this amount.
However,biases are not known while the experiment is in progress. If it was known, for example, that the length measurements were low by 5 mm, the students could either correct their measurement mistake or add the 5 mm to their data to remove the bias. Rather, what is of more value is to study the effects of nonrandom, systematic error possibilitiesbeforethe experiment is conducted. This is a form ofsensitivity analysis.
The idea is to estimate the difference, orfractional change, in the derived quantity, hereg, given that the measured quantities are biased by some given amount. For example, if the initial angle wasconsistentlylow by 5 degrees, what effect would this have on the estimatedg? If the length isconsistentlyshort by 5 mm, what is the change in the estimate ofg? If the period measurements areconsistentlytoo long by 0.02 seconds, how much does the estimatedgchange? What happens to the estimate ofgif these biases occur in various combinations?
One reason for exploring these questions is that theexperimental design, in the sense of what equipment and procedure is to be used (not thestatistical sense; that is addressed later), depends on the relative effect of systematic errors in the measured quantities. If a 5-degree bias in the initial angle would cause an unacceptable change in the estimate ofg, then perhaps a more elaborate, and accurate, method needs to be devised for this measurement. On the other hand, if it can be shown, before the experiment is conducted, that this angle has a negligible effect ong, then using the protractor is acceptable.
Another motivation for this form of sensitivity analysis occursafterthe experiment was conducted, and thedata analysisshows a bias in the estimate ofg. Examining the change ingthat could result from biases in the severalinput parameters, that is, the measured quantities, can lead to insight into what caused the bias in the estimate ofg. This analysis can help to isolate such problems as measurement mistakes, problems with apparatus, incorrect assumptions about the model, etc.
The most straightforward, not to say obvious, way to approach this would be to directly calculate the change using Eq(2) twice, once with theorized biased values and again with the true, unbiased, values for the parameters:
where the ΔLetc. represent the biases in the respective measured quantities. (The carat overgmeans the estimated value ofg.) To make this more concrete, consider an idealized pendulum of length 0.5 meters, with an initial displacement angle of 30 degrees; from Eq(1) the period will then be 1.443 seconds. Suppose the biases are −5 mm, −5 degrees, and +0.02 seconds, forL,θ, andTrespectively. Then, considering first only the length bias ΔLby itself,
and for this and the other measurement parametersTandθthe changes ingare recorded inTable 1.
It is common practice in sensitivity analysis to express the changes as fractions (or percentages). Then the exact fractional change ingis
The results of these calculations for the example pendulum system are summarized in Table 1.
Next, suppose that it is impractical to use the direct approach to find the dependence of the derived quantity (g) upon the input, measured parameters (L, T, θ). Is there an alternative method? From calculus, the concept of thetotal differential[2]is useful here:
wherezis some function of several (p) variablesx. The symbol ∂z/ ∂x1represents the "partial derivative" of the functionzwith respect to one of the several variablesxthat affectz. For the present purpose, finding thisderivativeconsists of holding constant all variables other than the one with respect to which the partial is being found, and then finding the first derivative in the usual manner (which may, and often does, involve thechain rule). In functions that involve angles, as Eq(2) does, theangles must be measured inradians.
Eq(5) is alinear functionthatapproximates, e.g., a curve in two dimensions (p=1) by a tangent line at a point on that curve, or in three dimensions (p=2) it approximates a surface by atangent planeat a point on that surface. The idea is that thetotal change in z in the near vicinity of a specific pointis found from Eq(5). In practice, finite differences are used, rather than the differentials, so that
and this works very well as long as the increments Δxare sufficiently small.[3]Even highly curved functions are nearly linear over a small enough region. The fractional change is then
An alternate, useful, way to write Eq(6) uses vector-matrix formalism:
In the application of thesepartial derivatives, note that they are functions that will beevaluated at a point, that is, all the parameters that appear in the partials will have numerical values. Thus thevector productin Eq(8), for example, will result in a single numerical value. For bias studies, the values used in the partials are the true parameter values, since we are approximating the functionzin a small region near these true values.
Returning to the pendulum example and applying these equations, the absolute change in the estimate ofgis
and now the task is to find the partial derivatives in this equation. It will considerably simplify the process to define
Rewriting Eq(2) and taking the partials,
Plugging these derivatives into Eq(9),
and then applying the same numerical values for the parameters and their biases as before, the results in Table 1 are obtained. The values are reasonably close to those found using Eq(3), but not exact, except forL. That is because the change ingis linear withL, which can be deduced from the fact that the partial with respect to (w.r.t.)Ldoes not depend onL. Thus the linear "approximation" turns out to be exact forL. The partial w.r.t.θis more complicated, and results from applying thechain ruletoα. Also, in using Eq(10) in Eq(9) note that the angle measures, including Δθ, must be converted from degrees to radians.
The linearized-approximationfractional changein the estimate ofgis, applying Eq(7) to the pendulum example,
which looks very complicated, but in practice this usually results in a simple relation for the fractional change. Thus,
which reduces to
This, except for the last term, is a remarkably simple result. Expanding the last term as a series inθ,
so the result for the linearized approximation for the fractional change in the estimate ofgis
Recalling that angles are inradianmeasure, and that the value being used in the example is 30 degrees, this is about 0.524 radians; halved and squared as the coefficient of the fractional change inθsays, thiscoefficientis about 0.07. From Eq(12) it can then be readily concluded that the most-to-least influential parameters areT, L, θ.Another way of saying this is that the derived quantitygis more sensitive to, e.g., the measured quantityTthan toLorθ. Substituting the example's numerical values, the results are indicated in Table 1, and agree reasonably well with those found using Eq(4).
The form of Eq(12) is usually the goal of a sensitivity analysis, since it is general, i.e., not tied to a specific set of parameter values, as was the case for the direct-calculation method of Eq(3) or (4), and it is clear basically by inspection which parameters have the most effect should they have systematic errors. For example, if the length measurementLwas high by ten percent, then the estimate ofgwould also be high by ten percent. If the periodTwasunderestimated by 20 percent, then the estimate ofgwould beoverestimated by 40 percent (note the negative sign for theTterm). If the initial angleθwas overestimated by ten percent, the estimate ofgwould be overestimated by about 0.7 percent.
This information is very valuable in post-experiment data analysis, to track down which measurements might have contributed to an observed bias in the overall result (estimate ofg). The angle, for example, could quickly be eliminated as the only source of a bias ingof, say, 10 percent. The angle would need to be in error by some 140 percent, which is, one would hope, not physically plausible.
Next, consider the fact that, as the students repeatedly measure the oscillation period of the pendulum, they will obtain different values for each measurement. These fluctuations are random- small differences in reaction time in operating the stopwatch, differences in estimating when the pendulum has reached its maximum angular travel, and so forth; all these things interact to produce variation in the measured quantity. This isnotthe bias that was discussed above, where there was assumed to be a 0.02 second discrepancy between the stopwatch reading and the actual periodT. The bias is a fixed, constant value; random variation is just that – random, unpredictable.
Random variations are not predictable but they do tend to follow some rules, and those rules are usually summarized by a mathematical construct called aprobability density function(PDF). This function, in turn, has a few parameters that are very useful in describing the variation of the observed measurements. Two such parameters are themeanandvarianceof the PDF. Essentially, the mean is the location of the PDF on the real number line, and the variance is a description of the scatter or dispersion or width of the PDF.
To illustrate,Figure 1shows the so-calledNormal PDF, which will be assumed to be the distribution of the observed time periods in the pendulum experiment. Ignoring all the biases in the measurements for the moment, then the mean of this PDF will be at the true value ofTfor the 0.5 meter idealized pendulum, which has an initial angle of 30 degrees, namely, from Eq(1), 1.443 seconds. In the figure there are 10000 simulated measurements in thehistogram(which sorts the data into bins of small width, to show the distribution shape), and the Normal PDF is the solid line. The vertical line is the mean.
The interesting issue with random fluctuations is the variance. The positive square root of the variance is defined to be thestandard deviation, and it is a measure of the width of the PDF; there are other measures, but the standard deviation, symbolized by the Greek letterσ"sigma," is by far the most commonly used. For this simulation, a sigma of 0.03 seconds for measurements ofTwas used; measurements ofLandθassumed negligible variability.
In the figure the widths of one-, two-, and three-sigma are indicated by the vertical dotted lines with the arrows. It is seen that a three-sigma width on either side of the mean contains nearly all of the data for the Normal PDF. The range of time values observed is from about 1.35 to 1.55 seconds, but most of these time measurements fall in an interval narrower than that.
Figure 1shows the measurement results for many repeated measurements of the pendulum periodT. Suppose that these measurements were used, one at a time, in Eq(2) to estimateg. What would be the PDF of thosegestimates? Having that PDF, what are the mean and variance of thegestimates? This is not a simple question to answer, so a simulation will be the best way to see what happens. In Figure 2 there are again 10000 measurements ofT, which are then used in Eq(2) to estimateg,and those 10000 estimates are placed in the histogram. The mean (vertical black line) agrees closely[4]with the known value forgof 9.8 m/s2.
It is sometimes possible to derive the actual PDF of the transformed data. In the pendulum example the time measurementsTare, in Eq(2), squared and divided into some factors that for now can be considered constants. Using rules for the transformation of random variables[5]it can be shown that if theTmeasurements are Normally distributed, as in Figure 1, then the estimates ofgfollow another (complicated) distribution that can be derived analytically. Thatg-PDF is plotted with the histogram (black line) and the agreement with the data is very good. Also shown in Figure 2 is ag-PDF curve (red dashed line) for thebiasedvalues ofTthat were used in the previous discussion of bias. Thus the mean of the biased-T g-PDF is at 9.800 − 0.266 m/s2(see Table 1).
Consider again, as was done in the bias discussion above, a function
wherefneed not be, and often is not, linear, and thexare random variables which in general need not be normally distributed, and which in general may be mutually correlated. In analyzing the results of an experiment, the mean and variance of the derived quantityz,which will be a random variable,are of interest. These are defined as theexpected values
i.e., the firstmomentof the PDF about the origin, and the second moment of the PDF about the mean of the derived random variablez. These expected values are found using an integral, for the continuous variables being considered here. However, to evaluate these integrals a functional form is needed for the PDF of the derived quantityz. It has been noted that[6]
To illustrate, a simple example of this process is to find the mean and variance of the derived quantityz = x2where the measured quantityxis Normally distributed with meanμand varianceσ2. The derived quantityzwill have some new PDF, that can (sometimes) be found using the rules of probability calculus.[7]In this case, it can be shown using these rules that the PDF ofzwill be
Integratingthis from zero to positive infinity returns unity, which verifies that this is a PDF. Next, the mean and variance of this PDF are needed, to characterize the derived quantityz. The mean and variance (actually,mean squared error, a distinction that will not be pursued here) are found from the integrals
if these functions are integrable at all. As it happens in this case, analytical results are possible,[8]and it is found that
These results are exact. Note that the mean (expected value) ofzis not what would logically be expected, i.e., simply the square of the mean ofx. Thus, even when using arguably the simplest nonlinear function, the square of a random variable, the process of finding the mean and variance of the derived quantity is difficult, and for more complicated functions it is safe to say that this process is not practical for experimental data analysis.
As is good practice in these studies, the results above can be checked with a simulation. Figure 3 shows a histogram of 10000 samples ofz, with the PDF given above also graphed; the agreement is excellent. In this simulation thexdata had a mean of 10 and a standard deviation of 2. Thus the naive expected value forzwould of course be 100. The "biased mean" vertical line is found using the expression above forμz, and it agrees well with the observed mean (i.e., calculated from the data; dashed vertical line), and the biased mean is above the "expected" value of 100. The dashed curve shown in this figure is a Normal PDF that will be addressed later.
If, as is usually the case, the PDF of the derived quantity has not been found, and even if the PDFs of the measured quantities are not known, it turns out that it is still possible to estimate the mean and variance (and, thus, the standard deviation) of the derived quantity. This so-called "differential method"[9]will be described next. (For a derivation of Eq(13) and (14), seethis section, below.)
As is usual in applied mathematics, one approach for avoiding complexity is to approximate a function with another, simpler, function, and often this is done using a low-orderTaylor seriesexpansion. It can be shown[10]that, if the functionzis replaced with a first-order expansion about a point defined by the mean values of each of thepvariablesx, the variance of the linearized function is approximated by
whereσijrepresents thecovarianceof two variablesxiandxj. The double sum is taken overallcombinations ofiandj, with the understanding that the covariance of a variable with itself is the variance of that variable, that is,σii=σi2. Also, the covariances are symmetric, so thatσij=σji. Again, as was the case with the bias calculations, the partial derivatives are evaluated at a specific point, in this case, at the mean (average) value, or other best estimate, of each of the independent variables. Note that iffis linear then,and only then, Eq(13) is exact.
The expected value (mean) of the derived PDF can be estimated, for the case wherezis a function of one or two measured variables, using[11]
where the partials are evaluated at the mean of the respective measurement variable. (For more than two input variables this equation is extended, including the various mixed partials.)
Returning to the simple example case ofz = x2the mean is estimated by
which is the same as the exact result, in this particular case. For the variance (actually MSe),
which differs only by the absence of the last term that was in the exact result; sinceσshould be small compared toμ, this should not be a major issue.
In Figure 3 there is shown is a Normal PDF (dashed lines) with mean and variance from these approximations. The Normal PDF does not describe this derived data particularly well, especially at the low end. Substituting the known mean (10) and variance (4) of thexvalues in this simulation, or in the expressions above, it is seen that the approximate (1600) and exact (1632) variances only differ slightly (2%).
A more elegant way of writing the so-called "propagation of error" variance equation is to usematrices.[12]First define a vector of partial derivatives, as was used in Eq(8) above:
where superscript T denotes the matrix transpose; then define the covariance matrix
The propagation of error approximation then can be written concisely as thequadratic form
If thecorrelationsamongst thepvariables are all zero, as is frequently assumed, then the covariance matrixCbecomes diagonal, with the individual variances along the main diagonal. To stress the point again, the partials in the vectorγare all evaluated at a specific point, so that Eq(15) returns a single numerical result.
It will be useful to write out in detail the expression for the variance using Eq(13) or (15) for the casep= 2. This leads to
which, since the last two terms above are the same thing, is
Consider a relatively simple algebraic example, before returning to the more involved pendulum example. Let
so that
This expression could remain in this form, but it is common practice to divide through byz2since this will cause many of the factors to cancel, and will also produce in a more useful result:
which reduces to
Since the standard deviation ofzis usually of interest, its estimate is
where the use of the means (averages) of the variables is indicated by the overbars, and the carats indicate that the component (co)variances must also be estimated, unless there is some solida prioriknowledge of them. Generally this is not the case, so that theestimators
are frequently used,[13]based onnobservations (measurements).
For simplicity, consider only the measured time as a random variable, so that the derived quantity, the estimate ofg, amounts to
wherekcollects the factors in Eq(2) that for the moment are constants. Again applying the rules for probability calculus, a PDF can be derived for the estimates ofg(this PDF was graphed in Figure 2). In this case, unlike the example used previously, the mean and variance could not be found analytically. Thus there is no choice but to use the linearized approximations. For the mean, using Eq(14), with the simplified equation for the estimate ofg,
Then the expected value of the estimatedgwill be
where, if the pendulum period timesTare unbiased, the first term is 9.80 m/s2. This result says that the mean of the estimatedgvalues is biased high. This will be checked with a simulation, below.
Next, to find an estimate of the variance for the pendulum example, since the partial derivatives have already been found in Eq(10), all the variables will return to the problem. The partials go into the vectorγ. Following the usual practice, especially if there is no evidence to the contrary, it is assumed that the covariances are all zero, so thatCis diagonal.[14]Then
The same result is obtained using Eq(13). It must be stressed that these "sigmas" are the variances that describe the random variation in the measurements ofL,T, andθ; they are not to be confused with the biases used previously.The variances (or standard deviations) and the biases are not the same thing.
To illustrate this calculation, consider the simulation results from Figure 2. Here, only the time measurement was presumed to have random variation, and the standard deviation used for it was 0.03 seconds. Thus, using Eq(17),
and, using the numerical values assigned before for this example,
which compares favorably to the observed variance of 0.171, as calculated by the simulation program. (Estimated variances have a considerable amount of variability and these values would not be expected to agree exactly.) For the mean value, Eq(16) yields a bias of only about 0.01 m/s2, which is not visible in Figure 2.
To make clearer what happens as the random error in a measurement variable increases, consider Figure 4, where the standard deviation of the time measurements is increased to 0.15 s, or about ten percent. The PDF for the estimatedgvalues is also graphed, as it was in Figure 2; note that the PDF for the larger-time-variation case is skewed, and now the biased mean is clearly seen. The approximated (biased) mean and the mean observed directly from the data agree well. The dashed curve is a Normal PDF with mean and variance from the approximations; it does not represent the data particularly well.
Rather than the variance, often a more useful measure is the standard deviationσ, and when this is divided by the meanμwe have a quantity called therelative error, orcoefficient of variation. This is a measure ofprecision:
For the pendulum example, this gives a precision of slightly more than 4 percent. As with the bias, it is useful to relate the relative error in the derived quantity to the relative error in the measured quantities. Divide Eq(17) by the square ofg:
and use results obtained from the fractional change bias calculations to give (compare to Eq(12)):
Taking the square root then gives the RE:
In the example case this gives
which agrees with the RE obtained previously. This method, using the relative errors in the component (measured) quantities, is simpler, once the mathematics has been done to obtain a relation like Eq(17). Recall that the angles used in Eq(17) must be expressed in radians.
If, as is often the case, the standard deviation of the estimatedgshould be needed by itself, this is readily obtained by a simple rearrangement of Eq(18). This standard deviation is usually quoted along with the "point estimate" of the mean value: for the simulation this would be 9.81 ± 0.41 m/s2.What is to be inferred from intervals quoted in this manner needs to be considered very carefully.Discussion of this important topic is beyond the scope of this article, but the issue is addressed in some detail in the book by Natrella.[15]
It is good practice to check uncertainty calculations usingsimulation. These calculations can be very complicated and mistakes are easily made. For example, to see if the relative error for just the angle measurement was correct, a simulation was created to sample the angles from a Normal PDF with mean 30 degrees and standard deviation 5 degrees; both are converted to radians in the simulation. The relative error in the angle is then about 17 percent. From Eq(18) the relative error in the estimatedgis, holding the other measurements at negligible variation,
The simulation shows the observed relative error ingto be about 0.011, which demonstrates that the angle uncertainty calculations are correct. Thus, as was seen with the bias calculations, a relatively large random variation in the initial angle (17 percent) only causes about a one percent relative error in the estimate ofg.
Figure 5 shows the histogram for thesegestimates. Since the relative error in the angle was relatively large, the PDF of thegestimates is skewed (not Normal, not symmetric), and the mean is slightly biased. In this case the PDF is not known, but the mean can still be estimated, using Eq(14). The second partial for the angle portion of Eq(2), keeping the other variables as constants, collected ink, can be shown to be[8]
so that the expected value is
and the dotted vertical line, resulting from this equation, agrees with the observed mean.
In the introduction it was mentioned that there are two ways to analyze a set of measurements of the period of oscillationTof the pendulum:
It would be reasonable to think that these would amount to the same thing, and that there is no reason to prefer one method over the other. However, Method 2 results in a bias that is not removed by increasing the sample size. Method 1 is also biased, but that bias decreases with sample size. This bias, in both cases, is not particularly large, and it should not be confused with the bias that was discussed in the first section. What might be termed "Type I bias" results from a systematic error in the measurement process; "Type II bias" results from the transformation of a measurement random variable via a nonlinear model; here, Eq(2).
Type II bias is characterized by the terms after the first in Eq(14). As was calculated for the simulation in Figure 4, the bias in the estimatedgfor a reasonable variability in the measured times (0.03 s) is obtained from Eq(16) and was only about 0.01 m/s2. Rearranging the bias portion (second term) of Eq(16), and usingβfor the bias,
using the example pendulum parameters. From this it is seen that the bias varies as the square of the relative error in the periodT; for a larger relative error, about ten percent, the bias is about 0.32 m/s2, which is of more concern.
What is missing here, and has been deliberately avoided in all the prior material, is the effect of thesample sizeon these calculations. The number of measurementsnhas not appeared in any equation so far. Implicitly, all the analysis has been for the Method 2 approach, taking one measurement (e.g., ofT) at a time, and processing it through Eq(2) to obtain an estimate ofg.
To use the various equations developed above, values are needed for the mean and variance of the several parameters that appear in those equations. In practical experiments, these values will be estimated from observed data, i.e., measurements. These measurements are averaged to produce the estimated mean values to use in the equations, e.g., for evaluation of the partial derivatives. Thus, the variance of interest is thevariance of the mean, not of the population, and so, for example,
which reflects the fact that, as the number of measurements ofTincreases, the variance of the mean value ofTwould decrease. There is some inherent variability in theTmeasurements, and that is assumed to remain constant, but the variability of theaverage Twill decrease asnincreases. Assuming no covariance amongst the parameters (measurements), the expansion of Eq(13) or (15) can be re-stated as
where the subscript onnreflects the fact that different numbers of measurements might be done on the several variables (e.g., 3 forL, 10 forT, 5 forθ, etc.)
This dependence of the overall variance on the number of measurements implies that a component of statistical experimental design would be to define these sample sizes to keep the overall relative error (precision) within some reasonable bounds. Having an estimate of the variability of the individual measurements, perhaps from a pilot study, then it should be possible to estimate what sample sizes (number of replicates for measuring, e.g.,Tin the pendulum example) would be required.
Returning to the Type II bias in the Method 2 approach, Eq(19) can now be re-stated more accurately as
wheresis the estimated standard deviation of thenTTmeasurements. In Method 2, each individualTmeasurement is used to estimateg, so thatnT= 1 for this approach. On the other hand, for Method 1, theTmeasurements are first averaged before using Eq(2), so thatnTis greater than one. This means that
which says thatthe Type II bias of Method 2 does not decrease with sample size; it is constant. The variance of the estimate ofg, on the other hand, is in both cases
because in both methodsnTmeasurements are used to form the averagegestimate.[16]Thus the variance decreases with sample size for both methods.
These effects are illustrated in Figures 6 and 7. In Figure 6 is a series PDFs of the Method 2 estimatedgfor a comparatively large relative error in theTmeasurements, with varying sample sizes. The relative error in T is larger than might be reasonable so that the effect of the bias can be more clearly seen. In the figure the dots show the mean; the bias is evident, and it does not change withn.The variance, or width of the PDF, does become smaller with increasingn, and the PDF also becomes more symmetric. In Figure 7 are the PDFs for Method 1, and it is seen that the means converge toward the correct g value of 9.8 m/s2as the number of measurements increases, and the variance also decreases.
From this it is concluded that Method 1 is the preferred approach to processing the pendulum or other data.
Systematic errors in the measurement of experimental quantities leads tobiasin the derived quantity, the magnitude of which is calculated using Eq(6) or Eq(7). However, there is also a more subtle form of bias that can occur even if the input, measured, quantities are unbiased; all terms after the first in Eq(14) represent this bias. It arises from the nonlinear transformations of random variables that often are applied in obtaining the derived quantity. The transformation bias is influenced by the relative size of the variance of the measured quantity compared to its mean. The larger this ratio is, the more skew the derived-quantity PDF may be, and the more bias there may be.
The Taylor-series approximations provide a very useful way to estimate both bias and variability for cases where the PDF of the derived quantity is unknown or intractable. The mean can be estimated using Eq(14) and the variance using Eq(13) or Eq(15). There are situations, however, in which this first-order Taylor series approximation approach is not appropriate – notably if any of the component variables can vanish. Then, asecond-order expansionwould be useful; see Meyer[17]for the relevant expressions.
The sample size is an important consideration in experimental design. To illustrate the effect of the sample size, Eq(18) can be re-written as
where the average values (bars) and estimated standard deviationssare shown, as are the respective sample sizes. In principle, by using very largenthe RE of the estimatedgcould be driven down to an arbitrarily small value. However, there are often constraints or practical reasons for relatively small numbers of measurements.
Details concerning the difference between the variance and themean-squared error(MSe) have been skipped. Essentially, the MSe estimates the variability about the true (but unknown) mean of a distribution. This variability is composed of (1) the variability about the actual, observed mean, and (2) a term that accounts for how far that observed mean is from the true mean. Thus
whereβis the bias (distance). This is a statistical application of theparallel-axis theoremfrommechanics.[18]
In summary, the linearized approximation for the expected value (mean) and variance of a nonlinearly-transformed random variable is very useful, and much simpler to apply than the more complicated process of finding its PDF and then its first two moments. In many cases, the latter approach is not feasible at all. The mathematics of the linearized approximation is not trivial, and it can be avoided by using results that are collected for often-encountered functions of random variables.[19]
3. Finding the PDF is nontrivial, and may not even be possible in some cases, and is certainly not a practical method for ordinary data analysis purposes. Even if the PDF can be found, finding the moments (above) can be difficult.
4. The solution is to expand the functionzin asecond-orderTaylor series; the expansion is done around the mean values of the several variablesx. (Usually the expansion is done to first order; the second-order terms are needed to find the bias in the mean. Those second-order terms are usually dropped when finding the variance; see below).
5. With the expansion in hand, find the expected value. This will give an approximation for the mean ofz, and will include terms that represent any bias. In effect the expansion “isolates” the random variablesxso that their expectations can be found.
6. Having the expression for the expected value ofz, which will involve partial derivatives and the means and variances of the random variablesx, set up the expression for the expectation of the variance:
that is, find (z− E[z] ) and do the necessaryalgebrato collect terms and simplify.
7. For most purposes, it is sufficient to keep only the first-order terms; square that quantity.
8. Find the expected value of that result. This will be the approximation for the variance ofz.
This is the fundamental relation for the second-order expansion used in the approximations:[20]
To reduce notational clutter, the evaluation-at-the-mean symbols are not shown:
which reduces to
Using the previous result, take expected values:
and similarly forx2. The partials come outside the expectations since, evaluated at the respective mean values, they will be constants. The zero result above follows since the expected value of a sum or difference is the sum or difference of the expected values, so that, for anyi
Continuing,
and similarly forx2. Finally,
whereσ1,2is the covariance ofx1andx2. (This is often taken to be zero, correctly or not.) Then the expression for the approximation for the mean of the derived random variablezis
where all terms after the first represent the bias inz. This equation is needed to find the variance approximation, but it is useful on its own; remarkably, it does not appear in most texts on data analysis.
From the definition of variance, the next step would be to subtract the expected value, just found, from the expansion ofzfound previously. This leads to
Clearly, consideration of the second-order terms is going to lead to a very complicated and impractical result (although, if the first-order terms vanish, the use of all the terms above will be needed; see Meyer, p. 46). Hence, take only the linear terms (in the curly brackets), and square:
The final step is to take the expected value of this
which leads to the well-known result
and this is generalized forpvariables as the usual "propagation of error" formula
with the understanding that thecovarianceof a variable with itself is its variance. It is essential to recognize that all of these partial derivatives are to be evaluated at themeanof the respectivexvariables, and that the corresponding variances arevariances of those means. To reinforce this,
NOTES: r can be integer or fractional, positive or negative (or zero). If r is negative, ensure that the range of x does not include zero. If r is fractional with an even divisor, ensure that x is not negative. "n" is the sample size. These expressions are based on "Method 1" data analysis, where the observed values ofxare averagedbeforethe transformation (i.e., in this case, raising to a power and multiplying by a constant) is applied.
Type I bias, absolute.........................................................................Eq(1.1)
Type I bias, relative (fractional).........................................................Eq(1.2)
Mean (expected value).......................................................................Eq(1.3)
Type II bias, absolute........................................................................Eq(1.4)
Type II bias, fractional.......................................................................Eq(1.5)
Variance, absolute...........................................................................Eq(1.6)
Standard deviation, fractional...........................................................Eq(1.7)
Comments:
NOTES: b can be positive or negative. “n” is the sample size. Be aware that the effectiveness of these approximations isvery strongly dependenton the relative sizes of μ, σ, and b.
Type I bias, absolute.........................................................................Eq(2.1)
Type I bias, relative (fractional).........................................................Eq(2.2)
Mean (expected value).......................................................................Eq(2.3)
Type II bias, absolute........................................................................Eq(2.4)
Type II bias, fractional.......................................................................Eq(2.5)
Variance, absolute...........................................................................Eq(2.6)
Standard deviation, fractional...........................................................Eq(2.7)
NOTES: b and x must be positive. “n” is the sample size. Be aware that the effectiveness of these approximations isvery strongly dependenton the relative sizes of μ, σ, and b.
Type I bias, absolute.........................................................................Eq(3.1)
Type I bias, relative (fractional).........................................................Eq(3.2)
Mean (expected value).......................................................................Eq(3.3)
Type II bias, absolute........................................................................Eq(3.4)
Type II bias, fractional.......................................................................Eq(3.5)
Variance, absolute...........................................................................Eq(3.6)
Standard deviation, fractional...........................................................Eq(3.7)
NOTES: BVN is bivariate Normal PDF. “n” is the sample size.
Type I bias, absolute.........................................................................Eq(4.1)
Type I bias, relative (fractional).........................................................Eq(4.2)
Mean (expected value).......................................................................Eq(4.3)
Type II bias, absolute........................................................................Eq(4.4)
Type II bias, fractional.......................................................................Eq(4.5)
Variance, absolute...........................................................................Eq(4.6)
Standard deviation, fractional...........................................................Eq(4.7)
This is complicated, no point, does not simplify to anything useful; use (4.6)
Type I bias, absolute.........................................................................Eq(5.1)
Type I bias, relative (fractional).........................................................Eq(5.2)
Mean (expected value).......................................................................Eq(5.3)
Type II bias, absolute........................................................................Eq(5.4)
Type II bias, fractional.......................................................................Eq(5.5)
Variance, absolute...........................................................................Eq(5.6)
Standard deviation, fractional...........................................................Eq(5.7)
|
https://en.wikipedia.org/wiki/Experimental_uncertainty_analysis
|
Incomputing,floating-point arithmetic(FP) isarithmeticon subsets ofreal numbersformed by asignificand(asignedsequence of a fixed number of digits in somebase) multiplied by aninteger powerof that base.
Numbers of this form are calledfloating-point numbers.[1]: 3[2]: 10
For example, the number 2469/200 is a floating-point number in base ten with five digits:2469/200=12.345=12345⏟significand×10⏟base−3⏞exponent{\displaystyle 2469/200=12.345=\!\underbrace {12345} _{\text{significand}}\!\times \!\underbrace {10} _{\text{base}}\!\!\!\!\!\!\!\overbrace {{}^{-3}} ^{\text{exponent}}}However, 7716/625 = 12.3456 is not a floating-point number in base ten with five digits—it needs six digits.
The nearest floating-point number with only five digits is 12.346.
And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits.
In practice, most floating-point systems usebase two, though base ten (decimal floating point) is also common.
Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations byroundingany result that is not a floating-point number itself to a nearby floating-point number.[1]: 22[2]: 10For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.
The termfloating pointrefers to the fact that the number'sradix pointcan "float" anywhere to the left, right, or between thesignificant digitsof the number. This position is indicated by the exponent, so floating point can be considered a form ofscientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of very differentorders of magnitude— such as the number of metersbetween galaxiesorbetween protons in an atom. For this reason, floating-point arithmetic is often used to allow very small and very large real numbers that require fast processing times. The result of thisdynamic rangeis that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.[3]
Over the years, a variety of floating-point representations have been used in computers. In 1985, theIEEE 754Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.
The speed of floating-point operations, commonly measured in terms ofFLOPS, is an important characteristic of acomputer system, especially for applications that involve intensive mathematical calculations.
Afloating-point unit(FPU, colloquially a mathcoprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
Anumber representationspecifies some way of encoding a number, usually as a string of digits.
There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of theradix pointis indicated by placing an explicit"point" character(dot or comma) there. If the radix point is not specified, then the string implicitly represents anintegerand the unstated radix point would be off the right-hand end of the string, next to the least significant digit. Infixed-pointsystems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345.
Inscientific notation, the given number is scaled by apower of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period ofJupiter's moonIois152,853.5047seconds, a value that would be represented in standard-form scientific notation as1.528535047×105seconds.
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
To derive the value of the floating-point number, thesignificandis multiplied by thebaseraised to the power of theexponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative.
Using base-10 (the familiardecimalnotation) as an example, the number152,853.5047, which has ten decimal digits of precision, is represented as the significand1,528,535,047together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 105to give1.528535047×105, or152,853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred.
Symbolically, this final value is:sbp−1×be,{\displaystyle {\frac {s}{b^{\,p-1}}}\times b^{e},}
wheresis the significand (ignoring any implied decimal point),pis the precision (the number of digits in the significand),bis the base (in our example, this is the numberten), andeis the exponent.
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point[4][5][nb 3]), base eight (octal floating point[1][5][6][4][nb 4]), base four (quaternary floating point[7][5][nb 5]), base three (balanced ternary floating point[1]) and even base 256[5][nb 6]and base65,536.[8][nb 7]
A floating-point number is arational number, because it can be represented as one integer divided by another; for example1.45×103is (145/100)×1000 or145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or2×10−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but inbase 3, it is trivial (0.1 or 1×3−1) . The occasions on which infinite expansions occurdepend on the base and its prime factors.
The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation,p=24{\displaystyle p=24}, and so the significand is a string of 24bits. For instance, the numberπ's first 33 bits are:110010010000111111011010_101000100.{\displaystyle 11001001\ 00001111\ 1101101{\underline {0}}\ 10100010\ 0.}
In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, shown as the underlined bit0above. The next bit, at position 24, is called theround bitorrounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there arespecific rules for halfway values, which is not the case here). This bit, which is1in this example, is added to the integer formed by the leftmost 24 bits, yielding:110010010000111111011011_.{\displaystyle 11001001\ 00001111\ 1101101{\underline {1}}.}
When this is stored in memory using the IEEE 754 encoding, this becomes thesignificands. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left-to-right as follows:(∑n=0p−1bitn×2−n)×2e=(1×2−0+1×2−1+0×2−2+0×2−3+1×2−4+⋯+1×2−23)×21≈1.57079637×2≈3.1415927{\displaystyle {\begin{aligned}&\left(\sum _{n=0}^{p-1}{\text{bit}}_{n}\times 2^{-n}\right)\times 2^{e}\\={}&\left(1\times 2^{-0}+1\times 2^{-1}+0\times 2^{-2}+0\times 2^{-3}+1\times 2^{-4}+\cdots +1\times 2^{-23}\right)\times 2^{1}\\\approx {}&1.57079637\times 2\\\approx {}&3.1415927\end{aligned}}}
wherepis the precision (24in this example),nis the position of the bit of the significand from the left (starting at0and finishing at23here) andeis the exponent (1in this example).
It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is callednormalization. For binary formats (which uses only the digits0and1), this non-zero digit is necessarily1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called theleading bit convention, theimplicit bit convention, thehidden bit convention,[1]or theassumed bit convention.
The floating-point representation is by far the most common way of representing in computers an approximation to real numbers. However, there are alternatives:
In 1914, the Spanish engineerLeonardo Torres QuevedopublishedEssays on Automatics,[9]where he designed a special-purpose electromechanical calculator based onCharles Babbage'sanalytical engineand described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format asn× 10m{\displaystyle ^{m}}, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "nwill always be the same number ofdigits(e.g. six), the first digit ofnwill be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form:n;m." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through atypewriter, as was the case of hisElectromechanical Arithmometerin 1920.[10][11][12]
In 1938,Konrad Zuseof Berlin completed theZ1, the first binary, programmablemechanical computer;[13]it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit.[14]The more reliablerelay-basedZ3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as1/∞=0{\displaystyle ^{1}/_{\infty }=0}, and it stops on undefined operations, such as0×∞{\displaystyle 0\times \infty }.
Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes±∞{\displaystyle \pm \infty }and NaN representations, anticipating features of the IEEE Standard by four decades.[15]In contrast,von Neumannrecommended against floating-point numbers for the 1951IAS machine, arguing that fixed-point arithmetic is preferable.[15]
The firstcommercialcomputer with floating-point hardware was Zuse'sZ4computer, designed in 1942–1945. In 1946, Bell Laboratories introduced theModel V, which implementeddecimal floating-point numbers.[16]
ThePilot ACEhas binary floating-point arithmetic, and it became operational in 1950 atNational Physical Laboratory, UK. Thirty-three were later sold commercially as theEnglish Electric DEUCE. The arithmetic is actually implemented in software, but with a one megahertz clock rate, the speed of floating-point and fixed-point operations in this machine were initially faster than those of many competing computers.
The mass-producedIBM 704followed in 1954; it introduced the use of abiased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see alsoExtensions for Scientific Computation(XSC)). It was not until the launch of the Intel i486 in 1989 thatgeneral-purposepersonal computers had floating-point capability in hardware as a standard feature.
TheUNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations:
TheIBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introducedhexadecimal floating-point representationsin itsSystem/360mainframes; these same representations are still available for use in modernz/Architecturesystems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic.
Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of theIEEE 754standard once the 32-bit (or 64-bit)wordhad become commonplace. This standard was significantly based on a proposal from Intel, which was designing thei8087numerical coprocessor; Motorola, which was designing the68000around the same time, gave significant input as well.
In 1989, mathematician and computer scientistWilliam Kahanwas honored with theTuring Awardfor being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor,Harold Stone.[17]
Among the x86 innovations are these:
A floating-point number consists of twofixed-pointcomponents, whose range depends exclusively on the number of bits or digits in their representation. Whereas components linearly depend on their range, the floating-point range linearly depends on the significand range and exponentially on the range of exponent component, which attaches outstandingly wider range to the number.
On a typical computer system, adouble-precision(64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 210= 1024, the complete range of the positive normal floating-point numbers in this format is from 2−1022≈ 2 × 10−308to approximately 21024≈ 2 × 10308.
The number of normal floating-point numbers in a system (B,P,L,U) where
is2(B−1)(BP−1)(U−L+1){\displaystyle 2\left(B-1\right)\left(B^{P-1}\right)\left(U-L+1\right)}.
There is a smallest positive normal floating-point number,
which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent.
There is a largest floating-point number,
which hasB− 1 as the value for each digit of the significand and the largest possible value for the exponent.
In addition, there are representable values strictly between −UFL and UFL. Namely,positive and negative zeros, as well assubnormal numbers.
TheIEEEstandardized the computer representation for binary floating-point numbers inIEEE 754(a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It wasrevised in 2008. IBM mainframes supportIBM's own hexadecimal floating point formatand IEEE 754-2008decimal floating pointin addition to the IEEE 754 binary format. TheCray T90series had an IEEE version, but theSV1still uses Cray floating-point format.[citation needed]
The standard provides for many closely related formats, differing in only a few details. Five of these formats are calledbasic formats, and others are termedextended precision formatsandextendable precision format. Three formats are especially widely used in computer hardware and languages:[citation needed]
Increasing the precision of the floating-point representation generally reduces the amount of accumulatedround-off errorcaused by intermediate calculations.[24]Other IEEE formats include:
Any integer with absolute value less than 224can be exactly represented in the single-precision format, and any integer with absolute value less than 253can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
The standard specifies some special values, and their representation: positiveinfinity(+∞), negative infinity (−∞), anegative zero(−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).
Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than+∞and strictly greater than−∞, and they are ordered in the same way as their values (in the set of real numbers).
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and a field for the significand, from left to right. For theIEEE 754binary formats (basic and extended) that have extant hardware implementations, they are apportioned as follows:
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros andsubnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs.
In the IEEE binary interchange formats the leading bit of a normalized significand is not actually stored in the computer datum, since it is always 1. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, quad has 113, and octuple has 237.
For example, it was shown above that π, rounded to 24 bits of precision, has:
The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as
An example of a layout for32-bit floating pointis
and the64-bit ("double")layout is similar.
In addition to the widely usedIEEE 754standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas.
By their nature, all numbers expressed in floating-point format arerational numberswith a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such asπor2{\textstyle {\sqrt {2}}}, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available (it would be rounded to one of the two straddling representable values, 12345678 × 101or 12345679 × 101), the same applies tonon-terminating digits(.5to be rounded to either .55555555 or .55555556).
When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called therounded value.
Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:
where, as previously,sis the significand andeis the exponent.
When rounded to 24 bits this becomes
which is actually 0.100000001490116119384765625 in decimal.
As a further example, the real numberπ, represented in binary as an infinite sequence of bits is
but is
when approximated byroundingto a precision of 24 bits.
In binary single-precision floating-point, this is represented ass= 1.10010010000111111011011 withe= 1.
This has a decimal value of
whereas a more accurate approximation of the true value of π is
The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is thediscretization errorand is limited by themachine epsilon.
The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called aunit in the last place(ULP). For example, if there is no representable number lying between the representable numbers 1.45A70C2216and 1.45A70C2416, the ULP is 2×16−8, or 2−31. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, an ULP is exactly 2−23or about 10−7in single precision, and exactly 2−53or about 10−16in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requirescorrect rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several differentroundingschemes (orrounding modes). Historically,truncationwas the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result.[nb 8]In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.)
Alternative rounding options are also available. IEEE 754 specifies the following rounding modes:
Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error are multi-precision floating-point, andinterval arithmetic.
The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity then it is likely numerically unstable and affected by round-off error.[34]
Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include:
Many modern language runtimes use Grisu3 with a Dragon4 fallback.[41]
The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c).[35]Further work has likewise progressed in the direction of faster parsing.[42]
For ease of presentation and understanding, decimalradixwith 7 digit precision will be used in the examples, as in the IEEE 754decimal32format. The fundamental principles are the same in anyradixor precision, except that normalization is optional (it does not affect the numerical value of the result). Here,sdenotes the significand andedenotes the exponent.
A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method:
In detail:
This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is
The lowest three digits of the second operand (654) are essentially lost. This isround-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them:
In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only aguardbit, aroundingbit and one extrastickybit need to be carried beyond the precision of the operands.[43][44]: 218–220
Another problem of loss of significance occurs whenapproximationsto two nearly equal numbers are subtracted. In the following examplee= 5;s= 1.234571 ande= 5;s= 1.234567 are approximations to the rationals 123457.1467 and 123456.659.
The floating-point difference is computed exactly because the numbers are close—theSterbenz lemmaguarantees this, even in case of underflow whengradual underflowis supported. Despite this, the difference of the original numbers ise= −1;s= 4.877000, which differs more than 20% from the differencee= −1;s= 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost.[43][45]Thiscancellationillustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic innumerical analysis; see alsoAccuracy problems.
To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.
Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand.
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession.[43]In practice, the way these operations are carried out in digital logic can be quite complex (seeBooth's multiplication algorithmandDivision algorithm).[nb 9]
Literals for floating-point numbers depend on languages. They typically useeorEto denotescientific notation. TheC programming languageand theIEEE 754standard also define ahexadecimal literal syntaxwith a base-2 exponent instead of 10. In languages likeC, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such asJavaScript), or allow overloading of numeric types (such asHaskell). In these cases, digit strings such as123may also be floating-point literals.
Examples of floating-point literals are:
Floating-point computation in a computer can run into three kinds of problems:
Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind oftrapthat the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were notportable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage to that typically defined in programming languages such as a C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.)
Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them exceptional conditions that could not be otherwise ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored).
The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time some programming language standards (e.g.,C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution and use of them by multiple threads has to be handled by ameansoutside of the standard (e.g.C11specifies that the flags havethread-local storage).
IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"):
The default return value for each of the exceptions is designed to give the correct result in the majority of cases such that the exceptions can be ignored in the majority of codes.inexactreturns a correctly rounded result, andunderflowreturns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored.[46]divide-by-zeroreturns infinity exactly, which will typically then divide a finite number and so give zero, or else will give aninvalidexception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel (see fig. 1) is given byRtot=1/(1/R1+1/R2+⋯+1/Rn){\displaystyle R_{\text{tot}}=1/(1/R_{1}+1/R_{2}+\cdots +1/R_{n})}. If a short-circuit develops withR1{\displaystyle R_{1}}set to 0,1/R1{\displaystyle 1/R_{1}}will return +infinity which will give a finalRtot{\displaystyle R_{tot}}of 0, as expected[47](see the continued fraction example ofIEEE 754 design rationalefor another example).
Overflowandinvalidexceptions can typically not be ignored, but do not necessarily represent errors: for example, aroot-findingroutine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and aninvalidexception flag to be ignored until finding a useful start point.[46]
The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finiteprecisionwith which computers generally represent numbers.
For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary ase= −4;s= 110011001100110011001101, which is
Squaring this number gives
Squaring it with rounding to the 24-bit precision gives
But the representable number closest to 0.01 is
Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:
will give a result of 16331239353195370.0. In single precision (using thetanffunction), the result will be −22877332.0.
By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225×10−15in double precision, or −0.8742×10−7in single precision.[nb 10]
While floating-point addition and multiplication are bothcommutative(a+b=b+aanda×b=b×a), they are not necessarilyassociative. That is,(a+b) +cis not necessarily equal toa+ (b+c). Using 7-digit significand decimal arithmetic:
They are also not necessarilydistributive. That is,(a+b) ×cmay not be the same asa×c+b×c:
In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur:
Q(h)=f(a+h)−f(a)h.{\displaystyle Q(h)={\frac {f(a+h)-f(a)}{h}}.}
Machine precisionis a quantity that characterizes the accuracy of a floating-point system, and is used inbackward error analysisof floating-point algorithms. It is also known as unit roundoff ormachine epsilon. Usually denotedΕmach, its value depends on the particular rounding being used.
With rounding to zero,Emach=B1−P,{\displaystyle \mathrm {E} _{\text{mach}}=B^{1-P},\,}whereas rounding to nearest,Emach=12B1−P,{\displaystyle \mathrm {E} _{\text{mach}}={\tfrac {1}{2}}B^{1-P},}whereBis the base of the system andPis the precision of the significand (in baseB).
This is important since it bounds therelative errorin representing any non-zero real numberxwithin the normalized range of a floating-point system:|fl(x)−xx|≤Emach.{\displaystyle \left|{\frac {\operatorname {fl} (x)-x}{x}}\right|\leq \mathrm {E} _{\text{mach}}.}
Backward error analysis, the theory of which was developed and popularized byJames H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable.[52]The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined asbackward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, thecondition numberof a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.[53]
As a trivial example, consider a simple expression giving the inner product of (length two) vectorsx{\displaystyle x}andy{\displaystyle y}, thenfl(x⋅y)=fl(fl(x1⋅y1)+fl(x2⋅y2)),wherefl()indicates correctly rounded floating-point arithmetic=fl((x1⋅y1)(1+δ1)+(x2⋅y2)(1+δ2)),whereδn≤Emach,from above=((x1⋅y1)(1+δ1)+(x2⋅y2)(1+δ2))(1+δ3)=(x1⋅y1)(1+δ1)(1+δ3)+(x2⋅y2)(1+δ2)(1+δ3),{\displaystyle {\begin{aligned}\operatorname {fl} (x\cdot y)&=\operatorname {fl} {\big (}\operatorname {fl} (x_{1}\cdot y_{1})+\operatorname {fl} (x_{2}\cdot y_{2}){\big )},&&{\text{ where }}\operatorname {fl} (){\text{ indicates correctly rounded floating-point arithmetic}}\\&=\operatorname {fl} {\big (}(x_{1}\cdot y_{1})(1+\delta _{1})+(x_{2}\cdot y_{2})(1+\delta _{2}){\big )},&&{\text{ where }}\delta _{n}\leq \mathrm {E} _{\text{mach}},{\text{ from above}}\\&={\big (}(x_{1}\cdot y_{1})(1+\delta _{1})+(x_{2}\cdot y_{2})(1+\delta _{2}){\big )}(1+\delta _{3})\\&=(x_{1}\cdot y_{1})(1+\delta _{1})(1+\delta _{3})+(x_{2}\cdot y_{2})(1+\delta _{2})(1+\delta _{3}),\end{aligned}}}and sofl(x⋅y)=x^⋅y^,{\displaystyle \operatorname {fl} (x\cdot y)={\hat {x}}\cdot {\hat {y}},}
where
x^1=x1(1+δ1);x^2=x2(1+δ2);y^1=y1(1+δ3);y^2=y2(1+δ3),{\displaystyle {\begin{aligned}{\hat {x}}_{1}&=x_{1}(1+\delta _{1});&{\hat {x}}_{2}&=x_{2}(1+\delta _{2});\\{\hat {y}}_{1}&=y_{1}(1+\delta _{3});&{\hat {y}}_{2}&=y_{2}(1+\delta _{3}),\\\end{aligned}}}
where
δn≤Emach{\displaystyle \delta _{n}\leq \mathrm {E} _{\text{mach}}}
by definition, which is the sum of two slightly perturbed (on the order of Εmach) input data, and so is backward stable. For more realistic examples innumerical linear algebra, see Higham 2002[54]and other references below.
Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half aULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data areill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithmnumerically unstablefor that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known asnumerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires,[55]which can remove, or reduce by orders of magnitude,[56]such risk:IEEE 754 quadruple precisionandextended precisionare designed for this purpose when computing at double precision.[57][nb 11]
For example, the following algorithm is a direct implementation to compute the functionA(x) = (x−1) / (exp(x−1) − 1)which is well-conditioned at 1.0,[nb 12]however it can be shown to be numerically unstable and lose up to half the significant digits carried by the arithmetic when computed near 1.0.[58]
If, however, intermediate computations are all performed in extended precision (e.g. by setting line [1] toC99long double), then up to full precision in the final double result can be maintained.[nb 13]Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made:
then the algorithm becomes numerically stable and can compute to full double precision.
To maintain the properties of such carefully constructed numerically stable programs, careful handling by thecompileris required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.
A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to,[54][59]and the other references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease by orders of magnitude[59]the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e. compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results[60]); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures:[61]notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of aproton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact.[56][59]An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[62]The "decimal" data type of theC#andPythonprogramming languages, and the decimal formats of theIEEE 754-2008standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.
Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that(x+y)(x−y)=x2−y2{\displaystyle (x+y)(x-y)=x^{2}-y^{2}\,}, and thatsin2θ+cos2θ=1{\displaystyle \sin ^{2}{\theta }+\cos ^{2}{\theta }=1\,}, however these facts cannot be relied on when the quantities involved are the result of floating-point computation.
The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like0.6/0.2-3==0will, on most computers, fail to be true[63](in IEEE 754 double precision, for example,0.6/0.2 - 3is approximately equal to−4.44089209850063×10−16). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon.[54]Values derived from the primary data representation and their comparisons should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to round-off errors.[59]It is often better to organize the code in such a way that such tests are unnecessary. For example, incomputational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.[64]
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples arematrix inversion,eigenvectorcomputation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such asiterative refinement, if they are to work well.[65]
Summation of a vector of floating-point values is a basic algorithm inscientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like
The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. TheKahan summation algorithmmay be used to reduce the errors.[54]
Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example,Archimedesapproximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon are:[citation needed]
Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:
While the two forms of the recurrence formula are clearly mathematically equivalent,[nb 14]the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss ofsignificant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
The aforementioned lack ofassociativityof floating-point operations in general means thatcompilerscannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such ascommon subexpression eliminationand auto-vectorization.[66]The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.[67]
In some compilers (GCC and Clang), turning on "fast" math may cause the program todisable subnormal floatsat startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as alibrary.[68]
In mostFortrancompilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses.[69]Intel Fortran Compileris a notable outlier.[70]
A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has a poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen inIcing, a verified compiler.[71]
|
https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems
|
Information quality (IQ)is the quality of the content ofinformation systems. It is often pragmatically defined as: "The fitness for use of the information provided". IQ frameworks also provides a tangible approach to assess and measure DQ/IQ in arobustandrigorousmanner.[1]
Although this pragmatic definition is usable for most everyday purposes, specialists often use more complex models for information quality. Most information system practitioners use the term synonymously withdata quality. However, as many academics make a distinction betweendataandinformation,[2]some will the process to guarantee confidence that particular information meets some context specific quality requirements. It has been suggested, however, that higher the quality the greater will be the confidence in meeting more general, less specific contexts.[3]
"Information quality" is a measure of the value which the information provides to the user of that information.[1]"Quality" is often perceived as subjective and the quality of information can then vary among users and among uses of the information. Nevertheless, a high degree of quality increases its objectivity or at least theintersubjectivity. Accuracy can be seen as just one element of IQ but, depending upon how it is defined, can also be seen as encompassing many other dimensions of quality.
If not, it is perceived that often there is a trade-off between accuracy and other dimensions, aspects or elements of the information determining its suitability for any given tasks.Richard Wangand Diane Strong propose a list of dimensions or elements used in assessing Information Quality is:[4]
Other authors propose similar but different lists of dimensions for analysis, and emphasize measurement and reporting as information quality metrics. Larry English prefers the term "characteristics" to dimensions.[6]In fact, a considerable amount of information quality research involves investigating and describing various categories of desirable attributes (or dimensions) of data. Research has recently shown the huge diversity of terms and classification structures used.[7]
While information as a distinct term has various ambiguous definitions, there's one which is more general, such as "description of events". While the occurrences being described cannot be subjectively evaluated for quality, since they're very much autonomous events in space and time, their description can—since it possesses a garnishment attribute, unavoidably attached by the medium which carried the information, from the initial moment of the occurrences being described.
In an attempt to deal with this natural phenomenon, qualified professionals primarily representing the researchers' guild, have at one point or another identified particular metrics for information quality. They could also be described as 'quality traits' of information, since they're not so easily quantified, but rather subjectively identified on an individual basis.
Source:[1]
Authority refers to the expertise or recognized official status of a source. Consider the reputation of the author and publisher. When working with legal or government information, consider whether the source is the official provider of the information. Verifiability refers to the ability of a reader to verify the validity of the information irrespective of how authoritative the source is. To verify the facts is part of the duty of care of the journalisticdeontology, as well as, where possible, to provide the sources of information so that they can be verified
Scope of coverage refers to the extent to which a source explores a topic. Consider time periods, geography or jurisdiction and coverage of related or narrower topics.
Composition and organization has to do with the ability of the information source to present its particular message in a coherent, logically sequential manner.
Objectivity is the bias or opinion expressed when a writer interprets or analyze facts. Consider the use of persuasive language, the source's presentation of other viewpoints, its reason for providing the information and advertising.
Validity of some information has to do with the degree of obvious truthfulness which the information carries
As much as ‘uniqueness’ of a given piece of information is intuitive in meaning, it also significantly implies not only the originating point of the information but also the manner in which it is presented and thus the perception which it conjures. The essence of any piece of information we process consists to a large extent of those two elements.
Timeliness refers to information that is current at the time of publication. Consider publication, creation and revision dates. Beware of Web site scripting that automatically reflects the current day's date on a page.
Means that documented methods are capable of being used on the same data set to achieve a consistent result.
A number of major conferences relevant to information quality are held annually:
|
https://en.wikipedia.org/wiki/Information_quality
|
Inmetrology,measurement uncertaintyis the expression of thestatistical dispersionof the values attributed to a quantity measured on an interval or ratioscale.
All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as thestandard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.[1]
The measurement uncertainty is often taken as thestandard deviationof a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. Relative uncertainty is the measurement uncertainty relative to the magnitude of a particular single choice for the value for the measured quantity, when this choice is nonzero. This particular single choice is usually called the measured value, which may be optimal in some well-defined sense (e.g., amean,median, ormode). Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero.
The purpose of measurement is to provide information about aquantityof interest – ameasurand. Measurands on ratio or intervalscalesinclude the size of a cylindrical feature, thevolumeof a vessel, thepotential differencebetween the terminals of a battery, or themass concentrationof lead in a flask of water.
No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects.[2]Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values.
The dispersion of the measured values would relate to how well the measurement is performed. If measured on a ratio or intervalscale, theiraveragewould provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value.
The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value.
However, this information would not generally be adequate.
The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were re-measured, the effect of this offset would be inherently present in the average of the values.
The"Guide to the Expression of Uncertainty in Measurement"(commonly known as the GUM) is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs) and by international laboratory accreditation standards such asISO/IEC 17025 General requirements for the competence of testing and calibration laboratories, which is required forinternational laboratory accreditation, and is employed in most modern national and international documentary standards on measurement methods and technology. SeeJoint Committee for Guides in Metrology.
Measurement uncertainty has important economic consequences for calibration and measurement activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of the quality of the laboratory, and smaller uncertainty values generally are of higher value and of higher cost. TheAmerican Society of Mechanical Engineers(ASME) has produced a suite of standards addressing various aspects of measurement uncertainty. For example, ASME standards are used to address the role of measurement uncertainty when accepting or rejecting products based on a measurement result and a product specification,[3]to provide a simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty,[4]to resolve disagreements over the magnitude of the measurement uncertainty statement,[5]and to provide guidance on the risks involved in any product acceptance/rejection decision.[6]
The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, themassof the person on the scale. The particular relationship between extension and mass is determined by thecalibrationof the scale. A measurementmodelconverts a quantity value into the corresponding value of the measurand.
There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as airbuoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for exampletemperature,humidityanddisplacement, that contribute to the definition of the measurand, and that need to be measured.
Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond tosystematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement, when the alignment of the measuring instrument is not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example the lack of alignment is at most 0.001° and the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.
As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representingphysical constants, each of which is known imperfectly. Examples are material constants such asmodulus of elasticityandspecific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities.
The items required by a measurement model to define a measurand are known as input quantities in a measurement model. The model is often referred to as a functional relationship. The output quantity in a measurement model is the measurand.
Formally, the output quantity, denoted byY{\displaystyle Y}, about which information is required, is often related to input quantities, denoted byX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}, about which information is available, by a measurement model in the form of
wheref{\displaystyle f}is known as the measurement function. A general expression for a measurement model is
It is taken that a procedure exists for calculatingY{\displaystyle Y}givenX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}, and thatY{\displaystyle Y}is uniquely defined by this equation.
The true values of the input quantitiesX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}are unknown.
In the GUM approach,X1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}are characterized byprobability distributionsand treated mathematically asrandom variables.
These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerningX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}.
Sometimes, some or all ofX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}are interrelated and the relevant distributions, which are known asjoint, apply to these quantities taken together.
Consider estimatesx1,…,xN{\displaystyle x_{1},\ldots ,x_{N}}, respectively, of the input quantitiesX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on.
The probability distributions characterizingX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}are chosen such that the estimatesx1,…,xN{\displaystyle x_{1},\ldots ,x_{N}}, respectively, are theexpectations[7]ofX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}.
Moreover, for thei{\displaystyle i}th input quantity, consider a so-calledstandard uncertainty, given the symbolu(xi){\displaystyle u(x_{i})}, defined as thestandard deviation[7]of the input quantityXi{\displaystyle X_{i}}.
This standard uncertainty is said to be associated with the (corresponding) estimatexi{\displaystyle x_{i}}.
The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to theXi{\displaystyle X_{i}}and also toY{\displaystyle Y}.
In the latter case, the characterizing probability distribution forY{\displaystyle Y}is determined by the measurement model together with the probability distributions for theXi{\displaystyle X_{i}}.
The determination of the probability distribution forY{\displaystyle Y}from this information is known as thepropagation of distributions.[7]
The figure below depicts a measurement modelY=X1+X2{\displaystyle Y=X_{1}+X_{2}}in the case whereX1{\displaystyle X_{1}}andX2{\displaystyle X_{2}}are each characterized by a (different) rectangular, oruniform, probability distribution.Y{\displaystyle Y}has a symmetric trapezoidal probability distribution in this case.
Once the input quantitiesX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurandY{\displaystyle Y}is fully specified in terms of this information. In particular, the expectation ofY{\displaystyle Y}is used as the estimate ofY{\displaystyle Y}, and the standard deviation ofY{\displaystyle Y}as the standard uncertainty associated with this estimate.
Often an interval containingY{\displaystyle Y}with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution forY{\displaystyle Y}. The specified probability is known as the coverage probability. For a given coverage probability, there is more than one coverage interval. The probabilistically symmetric coverage interval is an interval for which the probabilities (summing to one minus the coverage probability) of a value to the left and the right of the interval are equal. The shortest coverage interval is an interval for which the length is least over all coverage intervals having the same coverage probability.
Prior knowledge about the true value of the output quantityY{\displaystyle Y}can also be considered. For the domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a person, rather than that of a motor car, that is being measured, both constitute prior knowledge about the possible values of the measurand in this example. Such additional information can be used to provide a probability distribution forY{\displaystyle Y}that can give a smaller standard deviation forY{\displaystyle Y}and hence a smaller standard uncertainty associated with the estimate ofY{\displaystyle Y}.[8][9][10]
Knowledge about an input quantityXi{\displaystyle X_{i}}is inferred from repeated measured values ("Type A evaluation of uncertainty"), or scientific judgement or other information concerning the possible values of the quantity ("Type B evaluation of uncertainty").
In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantityX{\displaystyle X}given repeated measured values of it (obtained independently) is aGaussian distribution.X{\displaystyle X}then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average.
When the uncertainty is evaluated from a small number of measured values (regarded as instances of a quantity characterized by a Gaussian distribution), the corresponding distribution can be taken as at-distribution.[11]Other considerations apply when the measured values are not obtained independently.
For a Type B evaluation of uncertainty, often the only available information is thatX{\displaystyle X}lies in a specifiedinterval[a,b{\displaystyle a,b}].
In such a case, knowledge of the quantity can be characterized by arectangular probability distribution[11]with limitsa{\displaystyle a}andb{\displaystyle b}.
If different information were available, a probability distribution consistent with that information would be used.[12]
Sensitivity coefficientsc1,…,cN{\displaystyle c_{1},\ldots ,c_{N}}describe how the estimatey{\displaystyle y}ofY{\displaystyle Y}would be influenced by small changes in the estimatesx1,…,xN{\displaystyle x_{1},\ldots ,x_{N}}of the input quantitiesX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}.
For the measurement modelY=f(X1,…,XN){\displaystyle Y=f(X_{1},\ldots ,X_{N})}, the sensitivity coefficientci{\displaystyle c_{i}}equals thepartial derivativeof first order off{\displaystyle f}with respect toXi{\displaystyle X_{i}}evaluated atX1=x1{\displaystyle X_{1}=x_{1}},X2=x2{\displaystyle X_{2}=x_{2}}, etc.
For alinearmeasurement model
withX1,…,XN{\displaystyle X_{1},\ldots ,X_{N}}independent, a change inxi{\displaystyle x_{i}}equal tou(xi){\displaystyle u(x_{i})}would give a changeciu(xi){\displaystyle c_{i}u(x_{i})}iny.{\displaystyle y.}This statement would generally be approximate for measurement modelsY=f(X1,…,XN){\displaystyle Y=f(X_{1},\ldots ,X_{N})}.
The relative magnitudes of the terms|ci|u(xi){\displaystyle |c_{i}|u(x_{i})}are useful in assessing the respective contributions from the input quantities to the standard uncertaintyu(y){\displaystyle u(y)}associated withy{\displaystyle y}.
The standard uncertaintyu(y){\displaystyle u(y)}associated with the estimatey{\displaystyle y}of the output quantityY{\displaystyle Y}is not given by the sum of the|ci|u(xi){\displaystyle |c_{i}|u(x_{i})}, but these terms combined in quadrature,[1]namely by an expression that is generally approximate for measurement modelsY=f(X1,…,XN){\displaystyle Y=f(X_{1},\ldots ,X_{N})}:
which is known as the law of propagation of uncertainty.
When the input quantitiesXi{\displaystyle X_{i}}contain dependencies, the above formula is augmented by terms containingcovariances,[1]which may increase or decreaseu(y){\displaystyle u(y)}.
The main stages of uncertainty evaluation constitute formulation and calculation, the latter consisting of propagation and summarizing.
The formulation stage constitutes
The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantityY{\displaystyle Y}, and summarizing by using this distribution to obtain
The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including
For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy that can be controlled.
When the measurement model is multivariate, that is, it has any number of output quantities, the above concepts can be extended.[13]The output quantities are now described by a joint probability distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo method is available.
The most common view of measurement uncertainty uses random variables as mathematical models for uncertain quantities and simple probability distributions as sufficient for representing measurement uncertainties. In some situations, however, a mathematicalintervalmight be a better model of uncertainty than a probability
distribution. This may include situations involving periodic measurements,binneddata values,censoring,detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent.[citation needed]
A morerobustrepresentation of measurement uncertainty in such cases can be fashioned from intervals.[14][15]An interval [a,b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a+b)/2,b] with probability one half, and within any subinterval of [a,b] with probability equal to the width of the subinterval divided byb−a. The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized asprobability boxesandDempster–Shafer structuresover the real numbers, which incorporate bothaleatoric and epistemic uncertainties.
|
https://en.wikipedia.org/wiki/Measurement_uncertainty
|
Instatistics, theprecision matrixorconcentration matrixis thematrix inverseof thecovariance matrixor dispersion matrix,P=Σ−1{\displaystyle P=\Sigma ^{-1}}.[1][2][3]Forunivariate distributions, the precision matrix degenerates into ascalarprecision, defined as thereciprocalof thevariance,p=1σ2{\displaystyle p={\frac {1}{\sigma ^{2}}}}.[4]
Othersummary statisticsofstatistical dispersionalso calledprecision(orimprecision[5][6])
include the reciprocal of thestandard deviation,p=1σ{\displaystyle p={\frac {1}{\sigma }}};[3]the standard deviation itself and therelative standard deviation;[7]as well as thestandard error[8]and theconfidence interval(or its half-width, themargin of error).[9]
One particular use of the precision matrix is in the context ofBayesian analysisof themultivariate normal distribution: for example, Bernardo & Smith prefer to parameterise the multivariate normal distribution in terms of the precision matrix, rather than the covariance matrix, because of certain simplifications that then arise.[10]For instance, if both thepriorand thelikelihoodhaveGaussianform, and the precision matrix of both of these exist (because their covariance matrix is full rank and thus invertible), then the precision matrix of theposteriorwill simply be the sum of the precision matrices of the prior and the likelihood.
As the inverse of aHermitian matrix, the precision matrix of real-valued random variables, if it exists, ispositive definiteand symmetrical.
Another reason the precision matrix may be useful is that if two dimensionsi{\displaystyle i}andj{\displaystyle j}of a multivariate normal areconditionally independent, then theij{\displaystyle ij}andji{\displaystyle ji}elements of the precision matrix are0{\displaystyle 0}. This means that precision matrices tend to be sparse when many of the dimensions are conditionally independent, which can lead to computational efficiencies when working with them. It also means that precision matrices are closely related to the idea ofpartial correlation.
The precision matrix plays a central role ingeneralized least squares, compared toordinary least squares, whereP{\displaystyle P}is theidentity matrix, and toweighted least squares, whereP{\displaystyle P}is diagonal (theweight matrix).
The termprecisionin this sense ("mensura praecisionis observationum") first appeared in the works ofGauss(1809) "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" (page 212). Gauss's definition differs from the modern one by a factor of2{\displaystyle {\sqrt {2}}}. He writes, for the density function of anormal distributionwith precisionh{\displaystyle h}(reciprocal of standard deviation),
wherehh=h2{\displaystyle hh=h^{2}}(seemodern exponential notation).
Later Whittaker & Robinson (1924) "Calculus of observations" called this quantitythe modulus (of precision), but this term has dropped out of use.[11]
|
https://en.wikipedia.org/wiki/Precision_(statistics)
|
Observational error(ormeasurement error) is the difference between ameasuredvalue of aquantityand its unknowntrue value.[1]Such errors are inherent in the measurement process; for example lengths measured with a ruler calibrated in whole centimeters will have a measurement error of several millimeters. The error or uncertainty of a measurement can be estimated, and is specified with the measurement as, for example, 32.3 ± 0.5 cm.
Scientific observations are marred by two distinct types of errors, systematic errors on the one hand, andrandom, on the other hand. The effects ofrandom errorscan be mitigated by the repeated measurements. Constant orsystematic errorson the contrary must be carefully avoided, because they arise from one or more causes which constantly act in the same way, and have the effect of always altering the result of the experiment in the same direction. They therefore alter the value observed and repeated identical measurements do not reduce such errors.[2]
Measurement errors can be summarized in terms ofaccuracy and precision.
For example, length measurements with a ruler accurately calibrated in whole centimeters will be subject to random error with each use on the same distance giving a slightly different value resulting limited precision; a metallic ruler thetemperatureof which is not controlled will be affected bythermal expansioncausing an additional systematic error resulting in limited accuracy.[3]
When eitherrandomnessor uncertainty modeled byprobability theoryis attributed to such errors, they are "errors" in the sense in which that term is used instatistics; seeerrors and residuals in statistics.
Every time a measurement is repeated, slightly different results are obtained. The commonstatistical modelused is that the error has two additive parts:[4]
Some errors are not clearly random or systematic such as the uncertainty in the calibration of an instrument.[4]
Random errors or statistical errors in measurement lead to measurable values being inconsistent between repeated measurements of aconstantattribute orquantityare taken. Random errors createmeasurement uncertainty. These errors areuncorrelatedbetween measurements. Repeated measurements will fall in a pattern and in a large set of such measurements astandard deviationcan be calculated as a estimate of the amount of statistical error.[4]: 147
Systematic errors are errors that are not determined by chance but are introduced by repeatable processes inherent to thesystem.[5]Sources of systematic errors include errors in equipment calibration, uncertainty in correction terms applied during experimental analysis, errors due the use of approximate theoretical models.[4]: suplSystematic error is sometimes calledstatistical bias. It may often be reduced with standardized procedures.
Part of the learning process in the varioussciencesis learning how to use standard instruments and protocols so as to minimize systematic error.
Over a long period of time, systematic errors in science can be resolved and become a form of "negative knowledge": scientist build up an understanding of how to avoid specific kinds of systematic errors.[6]
When two or more observations or two or more instruments are combined, the errors in each combine. Estimates of the error in the result of such combinations depend upon the statistical characteristics of each individual measurement and on the possible statistical correlation between them.[7]: 92
Measurement errors can be divided into two components: random error and systematic error.[2]
Random erroris always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter's interpretation of the instrumental reading. Random errors show up as different results for ostensibly the same repeated measurement. They can be estimated by comparing multiple measurements and reduced by averaging multiple measurements.
Systematic erroris predictable and typically constant or proportional to the true value. If the cause of the systematic error can be identified, then it usually can be eliminated. Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods ofobservation, or interference of theenvironmentwith the measurement process, and always affect the results of anexperimentin a predictable direction. Incorrect zeroing of an instrument is an example of systematic error in instrumentation.
The Performance Test Standard PTC 19.1-2005 "Test Uncertainty", published by theAmerican Society of Mechanical Engineers(ASME), discusses systematic and random errors in considerable detail. In fact, it conceptualizes its basic uncertainty categories in these terms.
Random error can be caused by unpredictable fluctuations in the readings of a measurement apparatus, or in the experimenter's interpretation of the instrumental reading; these fluctuations may be in part due to interference of the environment with the measurement process. The concept of random error is closely related to the concept ofprecision. The higher the precision of a measurement instrument, the smaller the variability (standard deviation) of the fluctuations in its readings.
Sources of systematic error may be imperfect calibration of measurement instruments (zero error), changes in theenvironmentwhich interfere with the measurement process and sometimes imperfect methods ofobservationcan be either zero error or percentage error. If you consider an experimenter taking a reading of the time period of a pendulum swinging past afiducial marker: If their stop-watch or timer starts with 1 second on the clock then all of their results will be off by 1 second (zero error). If the experimenter repeats this experiment twenty times (starting at 1 second each time), then there will be apercentage errorin the calculated average of their results; the final result will be slightly larger than the true period.
Distancemeasured byradarwill be systematically overestimated if the slight slowing down of the waves in air is not accounted for. Incorrect zeroing of an instrument is an example of systematic error in instrumentation.
Systematic errors may also be present in the result of anestimatebased upon amathematical modelorphysical law. For instance, the estimatedoscillation frequencyof apendulumwill be systematically in error if slight movement of the support is not accounted for.
Systematic errors can be either constant, or related (e.g. proportional or a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of arulercan be affected by environmental temperature). When it is constant, it is simply due to incorrect zeroing of the instrument. When it is not constant, it can change its sign. For instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, and the actual temperature is 200°, 0°, or −100°, the measured temperature will be 204° (systematic error = +4°), 0° (null systematic error) or −102° (systematic error = −2°), respectively. Thus the temperature will be overestimated when it will be above zero and underestimated when it will be below zero.
Systematic errors which change during an experiment (drift) are easier to detect. Measurements indicate trends with time rather than varying randomly about amean. Drift is evident if a measurement of aconstantquantity is repeated several times and the measurements drift one way during the experiment. If the next measurement is higher than the previous measurement as may occur if an instrument becomes warmer during the experiment then the measured quantity is variable and it is possible to detect a drift by checking the zero reading during the experiment as well as at the start of the experiment (indeed, thezero readingis a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If this cannot be eliminated, potentially by resetting the instrument immediately before the experiment then it needs to be allowed by subtracting its (possibly time-varying) value from the readings, and by taking it into account while assessing the accuracy of the measurement.
If no pattern in a series of repeated measurements is evident, the presence of fixed systematic errors can only be found if the measurements are checked, either by measuring a known quantity or by comparing the readings with readings made using a different apparatus, known to be more accurate. For example, if you think of the timing of a pendulum using an accuratestopwatchseveral times you are given readings randomly distributed about the mean. Hopings systematic error is present if the stopwatch is checked against the 'speaking clock' of the telephone system and found to be running slow or fast. Clearly, the pendulum timings need to be corrected according to how fast or slow the stopwatch was found to be running.
Measuring instruments such asammetersandvoltmetersneed to be checked periodically against known standards.
Systematic errors can also be detected by measuring already known quantities. For example, aspectrometerfitted with adiffraction gratingmay be checked by using it to measure thewavelengthof the D-lines of thesodiumelectromagnetic spectrumwhich are at 600 nm and 589.6 nm. The measurements may be used to determine the number of lines per millimetre of the diffraction grating, which can then be used to measure the wavelength of any other spectral line.
Constant systematic errors are very difficult to deal with as their effects are only observable if they can be removed. Such errors cannot be removed by repeating measurements or averaging large numbers of results. A common method to remove systematic error is throughcalibrationof the measurement instrument.
The random or stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to benormally distributedwhen the stochastic error is the sum of many independent random errors because of thecentral limit theorem. Stochastic errors added to a regression equation account for the variation inYthat cannot be explained by the includedXs.
The term "observational error" is also sometimes used to refer to response errors and some other types ofnon-sampling error.[1]In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996).[8][9]
These errors can be random or systematic. Random errors are caused by unintended mistakes by respondents, interviewers and/or coders. Systematic error can occur if there is a systematic reaction of the respondents to the method used to formulate the survey question. Thus, the exact formulation of a survey question is crucial, since it affects the level of measurement error.[10]Different tools are available for the researchers to help them decide about this exact formulation of their questions, for instance estimating the quality of a question usingMTMM experiments. This information about the quality can also be used in order tocorrect for measurement error.[11][12]
If thedependent variablein a regression is measured with error, regression analysis and associated hypothesis testing are unaffected, except that theR2will be lower than it would be with perfect measurement.
However, if one or moreindependent variablesis measured with error, then the regression coefficients and standardhypothesis testsare invalid.[13]This is known asattenuation bias.[14]
|
https://en.wikipedia.org/wiki/Random_and_systematic_errors
|
Significant figures, also referred to assignificant digits, are specificdigitswithin a number that is written inpositional notationthat carry both reliability and necessity in conveying a particular quantity. When presenting the outcome of a measurement (such as length, pressure, volume, or mass), if the number of digits exceeds what the measurement instrument can resolve, only the digits that are determined by theresolutionare dependable and therefore considered significant.
For instance, if a length measurement yields 114.8 mm, using a ruler with the smallest interval between marks at 1 mm, the first three digits (1, 1, and 4, representing 114 mm) are certain and constitute significant figures. Further, digits that are uncertain yet meaningful are also included in the significant figures. In this example, the last digit (8, contributing 0.8 mm) is likewise considered significant despite its uncertainty.[1]Therefore, this measurement contains four significant figures.
Another example involves a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume falls between 2.93 L and 3.03 L. Even if certain digits are not completely known, they are still significant if they are meaningful, as they indicate the actual volume within an acceptable range of uncertainty. In this case, the actual volume might be 2.94 L or possibly 3.02 L, so all three digits are considered significant.[1]Thus, there are three significant figures in this example.
The following types of digits are not considered significant:[2]
A zero after a decimal (e.g., 1.0) is significant, and care should be used when appending such a decimal of zero. Thus, in the case of 1.0, there are two significant figures, whereas 1 (without a decimal) has one significant figure.
Among a number's significant digits, themost significant digitis the one with the greatest exponent value (the leftmost significant digit/figure), while theleast significant digitis the one with the lowest exponent value (the rightmost significant digit/figure). For example, in the number "123" the "1" is the most significant digit, representing hundreds (102), while the "3" is the least significant digit, representing ones (100).
To avoid conveying a misleading level of precision, numbers are oftenrounded. For instance, it would createfalse precisionto present a measurement as 12.34525 kg when the measuring instrument only provides accuracy to the nearest gram (0.001 kg). In this case, the significant figures are the first five digits (1, 2, 3, 4, and 5) from the leftmost digit, and the number should be rounded to these significant figures, resulting in 12.345 kg as the accurate value. Therounding error(in this example, 0.00025 kg = 0.25 g) approximates the numerical resolution or precision. Numbers can also be rounded for simplicity, not necessarily to indicate measurement precision, such as for the sake of expediency in news broadcasts.
Significance arithmetic encompasses a set of approximate rules for preserving significance through calculations. More advanced scientific rules are known as thepropagation of uncertainty.
Radix10 (base-10, decimal numbers) is assumed in the following. (SeeUnit in the last placefor extending these concepts to other bases.)
Identifying the significant figures in a number requires knowing which digits are meaningful, which requires knowing the resolution with which the number is measured, obtained, or processed. For example, if the measurable smallest mass is 0.001 g, then in a measurement given as 0.00234 g the "4" is not useful and should be discarded, while the "3" is useful and should often be retained.[3]
The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundreds due to rounding or uncertainty. Many conventions exist to address this issue. However, these are not universally used and would only be effective if the reader is familiar with the convention:
As the conventions above are not in general use, the following more widely recognized options are available for indicating the significance of number with trailing zeros:
Roundingto significant figures is a more general-purpose technique than rounding tondigits, since it handles numbers of different scales in a uniform way. For example, the population of a city might only be known to the nearest thousand and be stated as 52,000, while the population of a country might only be known to the nearest million and be stated as 52,000,000. The former might be in error by hundreds, and the latter might be in error by hundreds of thousands, but both have two significant figures (5 and 2). This reflects the fact that the significance of the error is the same in both cases, relative to the size of the quantity being measured.
To round a number tonsignificant figures:[8][9]
In financial calculations, a number is often rounded to a given number of places. For example, to two places after thedecimal separatorfor many world currencies. This is done because greater precision is immaterial, and usually it is not possible to settle a debt of less than the smallest currency unit.
In UK personal tax returns, income is rounded down to the nearest pound, whilst tax paid is calculated to the nearest penny.
As an illustration, thedecimalquantity12.345can be expressed with various numbers of significant figures or decimal places. If insufficient precision is available then the number isroundedin some manner to fit the available precision. The following table shows the results for various total precision at two rounding ways (N/A stands for Not Applicable).
Another example for0.012345. (Remember that the leading zeros are not significant.)
The representation of a non-zero numberxto a precision ofpsignificant digits has a numerical value that is given by the formula:[citation needed]
10n⋅round(x10n){\displaystyle 10^{n}\cdot \operatorname {round} \left({\frac {x}{10^{n}}}\right)}
where
n=⌊log10(|x|)⌋+1−p{\displaystyle n=\lfloor \log _{10}(|x|)\rfloor +1-p}
which may need to be written with a specific marking as detailedaboveto specify the number of significant trailing zeros.
It is recommended for a measurement result to include the measurement uncertainty such asxbest±σx{\displaystyle x_{\text{best}}\pm \sigma _{x}}, wherexbestandσxare the best estimate and uncertainty in the measurement respectively.[10]xbestcan be the average of measured values andσxcan be the standard deviation or a multiple of the measurement deviation. The rules to writexbest±σx{\displaystyle x_{\text{best}}\pm \sigma _{x}}are:[11]
Uncertainty may be implied by the last significant figure if it is not explicitly expressed.[1]The implied uncertainty is ± the half of the minimum scale at the last significant figure position. For example, if the mass of an object is reported as 3.78 kg without mentioning uncertainty, then ± 0.005 kg measurement uncertainty may be implied. If the mass of an object is estimated as 3.78 ± 0.07 kg, so the actual mass is probably somewhere in the range 3.71 to 3.85 kg, and it is desired to report it with a single number, then 3.8 kg is the best number to report since its implied uncertainty ± 0.05 kg gives a mass range of 3.75 to 3.85 kg, which is close to the measurement range. If the uncertainty is a bit larger, i.e. 3.78 ± 0.09 kg, then 3.8 kg is still the best single number to quote, since if "4 kg" was reported then a lot of information would be lost.
If there is a need to write the implied uncertainty of a number, then it can be written asx±σx{\displaystyle x\pm \sigma _{x}}with stating it as the implied uncertainty (to prevent readers from recognizing it as the measurement uncertainty), wherexandσxare the number with an extra zero digit (to follow the rules to write uncertainty above) and the implied uncertainty of it respectively. For example, 6 kg with the implied uncertainty ± 0.5 kg can be stated as 6.0 ± 0.5 kg.
As there are rules to determine the significant figures in directlymeasuredquantities, there are also guidelines (not rules) to determine the significant figures in quantitiescalculatedfrom thesemeasuredquantities.
Significant figures inmeasuredquantities are most important in the determination of significant figures incalculated quantitieswith them. A mathematical or physical constant (e.g.,πin the formula for thearea of a circlewith radiusrasπr2) has no effect on the determination of the significant figures in the result of a calculation with it if its known digits are equal to or more than the significant figures in the measured quantities used in the calculation. An exact number such as1/2in the formula for thekinetic energyof a massmwith velocityvas1/2mv2has no bearing on the significant figures in the calculated kinetic energy since its number of significant figures is infinite (0.500000...).
The guidelines described below are intended to avoid a calculation result more precise than the measured quantities, but it does not ensure the resulted implied uncertainty close enough to the measured uncertainties. This problem can be seen in unit conversion. If the guidelines give the implied uncertainty too far from the measured ones, then it may be needed to decide significant digits that give comparable uncertainty.
For quantities created from measured quantities viamultiplicationanddivision, the calculated result should have as many significant figures as theleastnumber of significant figures among the measured quantities used in the calculation.[12]For example,
withone,two, andonesignificant figures respectively. (2 here is assumed not an exact number.) For the first example, the first multiplication factor has four significant figures and the second has one significant figure. The factor with the fewest or least significant figures is the second one with only one, so the final calculated result should also have one significant figure.
For unit conversion, the implied uncertainty of the result can be unsatisfactorily higher than that in the previous unit if this rounding guideline is followed; For example, 8 inch has the implied uncertainty of ± 0.5 inch = ± 1.27 cm. If it is converted to the centimeter scale and the rounding guideline for multiplication and division is followed, then20.32 cm ≈ 20 cm with the implied uncertainty of ± 5 cm. If this implied uncertainty is considered as too overestimated, then more proper significant digits in the unit conversion result may be 20.32 cm ≈ 20. cm with the implied uncertainty of ± 0.5 cm.
Another exception of applying the above rounding guideline is to multiply a number by an integer, such as 1.234 × 9. If the above guideline is followed, then the result is rounded as 1.234 × 9.000.... = 11.106 ≈ 11.11. However, this multiplication is essentially adding 1.234 to itself 9 times such as 1.234 + 1.234 + … + 1.234 so the rounding guideline for addition and subtraction described below is more proper rounding approach.[13]As a result, the final answer is 1.234 + 1.234 + … + 1.234 = 11.106= 11.106 (one significant digit increase).
For quantities created from measured quantities viaadditionandsubtraction, the last significant figure position (e.g., hundreds, tens, ones, tenths, hundredths, and so forth) in the calculated result should be the same as theleftmostor largest digit position among the last significant figures of themeasuredquantities in the calculation. For example,
with the last significant figures in theonesplace,tenthsplace,onesplace, andthousandsplace respectively. (2 here is assumed not an exact number.) For the first example, the first term has its last significant figure in the thousandths place and the second term has its last significant figure in theonesplace. The leftmost or largest digit position among the last significant figures of these terms is the ones place, so the calculated result should also have its last significant figure in the ones place.
The rule to calculate significant figures for multiplication and division are not the same as the rule for addition and subtraction. For multiplication and division, only the total number of significant figures in each of the factors in the calculation matters; the digit position of the last significant figure in each factor is irrelevant. For addition and subtraction, only the digit position of the last significant figure in each of the terms in the calculation matters; the total number of significant figures in each term is irrelevant.[citation needed]However, greater accuracy will often be obtained if some non-significant digits are maintained in intermediate results which are used in subsequent calculations.[citation needed]
Thebase-10logarithmof anormalized number(i.e.,a× 10bwith 1 ≤a< 10 andbas an integer), is rounded such that its decimal part (calledmantissa) has as many significant figures as the significant figures in the normalized number.
When taking the antilogarithm of a normalized number, the result is rounded to have as many significant figures as the significant figures in the decimal part of the number to be antiloged.
If atranscendental functionf(x){\displaystyle f(x)}(e.g., theexponential function, thelogarithm, and thetrigonometric functions) is differentiable at its domain element 'x', then its number of significant figures (denoted as "significant figures off(x){\displaystyle f(x)}") is approximately related with the number of significant figures inx(denoted as "significant figures ofx") by the formula
(significantfiguresoff(x))≈(significantfiguresofx)−log10(|df(x)dxxf(x)|){\displaystyle {\rm {(significant~figures~of~f(x))}}\approx {\rm {(significant~figures~of~x)}}-\log _{10}\left(\left\vert {{\frac {df(x)}{dx}}{\frac {x}{f(x)}}}\right\vert \right)},
where|df(x)dxxf(x)|{\displaystyle \left\vert {{\frac {df(x)}{dx}}{\frac {x}{f(x)}}}\right\vert }is thecondition number.
When performing multiple stage calculations, do not round intermediate stage calculation results; keep as many digits as is practical (at least one more digit than the rounding rule allows per stage) until the end of all the calculations to avoid cumulative rounding errors while tracking or recording the significant figures in each intermediate result. Then, round the final result, for example, to the fewest number of significant figures (for multiplication or division) or leftmost last significant digit position (for addition or subtraction) among the inputs in the final calculation.[14]
When using a ruler, initially use the smallest mark as the first estimated digit. For example, if a ruler's smallest mark is 0.1 cm, and 4.5 cm is read, then it is 4.5 (±0.1 cm) or 4.4 cm to 4.6 cm as to the smallest mark interval. However, in practice a measurement can usually be estimated by eye to closer than the interval between the ruler's smallest mark, e.g. in the above case it might be estimated as between 4.51 cm and 4.53 cm.[15]
It is also possible that the overall length of a ruler may not be accurate to the degree of the smallest mark, and the marks may be imperfectly spaced within each unit. However assuming a normal good quality ruler, it should be possible to estimate tenths between the nearest two marks to achieve an extra decimal place of accuracy.[16]Failing to do this adds the error in reading the ruler to any error in the calibration of the ruler.
When estimating the proportion of individuals carrying some particular characteristic in a population, from a random sample of that population, the number of significant figures should not exceed the maximum precision allowed by that sample size.
Traditionally, in various technical fields, "accuracy" refers to the closeness of a given measurement to its true value; "precision" refers to the stability of that measurement when repeated many times. Thus, it is possible to be "precisely wrong". Hoping to reflect the way in which the term "accuracy" is actually used in the scientific community, there is a recent standard, ISO 5725, which keeps the same definition of precision but defines the term "trueness" as the closeness of a given measurement to its true value and uses the term "accuracy" as the combination of trueness and precision. (See theaccuracy and precisionarticle for a full discussion.) In either case, the number of significant figures roughly corresponds toprecision, not to accuracy or the newer concept of trueness.
Computer representations of floating-point numbers use a form of rounding to significant figures (while usually not keeping track of how many), in general withbinary numbers. The number of correct significant figures is closely related to the notion ofrelative error(which has the advantage of being a more accurate measure of precision, and is independent of theradix, also known as the base, of the number system used).
Electronic calculatorssupporting a dedicated significant figures display mode are relatively rare.
Among the calculators to support related features are theCommodoreM55 Mathematician(1976)[17]and theS61 Statistician(1976),[18]which support two display modes, whereDISP+nwill givensignificant digits in total, whileDISP+.+nwill givendecimal places.
TheTexas InstrumentsTI-83 Plus(1999) andTI-84 Plus(2004) families ofgraphical calculatorssupport aSig-Fig Calculatormode in which the calculator will evaluate the count of significant digits of entered numbers and display it in square brackets behind the corresponding number. The results of calculations will be adjusted to only show the significant digits as well.[19]
For theHP20b/30b-based community-developedWP 34S(2011) andWP 31S(2014) calculators significant figures display modesSIG+nandSIG0+n(with zero padding) are available as acompile-timeoption.[20][21]TheSwissMicrosDM42-based community-developed calculatorsWP 43C(2019)[22]/C43(2022) /C47(2023) support a significant figures display mode as well.
|
https://en.wikipedia.org/wiki/Significant_figures
|
Instatistical hypothesis testing,[1][2]a result hasstatistical significancewhen a result at least as "extreme" would be very infrequent if thenull hypothesiswere true.[3]More precisely, a study's definedsignificance level, denoted byα{\displaystyle \alpha }, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true;[4]and thep-valueof a result,p{\displaystyle p}, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true.[5]The result is said to bestatistically significant, by the standards of the study, whenp≤α{\displaystyle p\leq \alpha }.[6][7][8][9][10][11][12]The significance level for a study is chosen before data collection, and is typically set to 5%[13]or much lower—depending on the field of study.[14]
In anyexperimentorobservationthat involves drawing asamplefrom apopulation, there is always the possibility that an observed effect would have occurred due tosampling erroralone.[15][16]But if thep-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population,[1]thereby rejecting the null hypothesis.[17]
This technique for testing the statistical significance of results was developed in the early 20th century. The termsignificancedoes not imply importance here, and the termstatistical significanceis not the same as research significance, theoretical significance, or practical significance.[1][2][18][19]For example, the termclinical significancerefers to the practical importance of a treatment effect.[20]
Statistical significance dates to the 18th century, in the work ofJohn ArbuthnotandPierre-Simon Laplace, who computed thep-valuefor thehuman sex ratioat birth, assuming a null hypothesis of equal probability of male and female births; seep-value § Historyfor details.[21][22][23][24][25][26][27]
In 1925,Ronald Fisheradvanced the idea of statistical hypothesis testing, which he called "tests of significance", in his publicationStatistical Methods for Research Workers.[28][29][30]Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.[31]In a 1933 paper,Jerzy NeymanandEgon Pearsoncalled this cutoff thesignificance level, which they namedα{\displaystyle \alpha }. They recommended thatα{\displaystyle \alpha }be set ahead of time, prior to any data collection.[31][32]
Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed. In his 1956 publicationStatistical Methods and Scientific Inference,he recommended that significance levels be set according to specific circumstances.[31]
The significance levelα{\displaystyle \alpha }is the threshold forp{\displaystyle p}below which the null hypothesis is rejected even though by assumption it were true, and something else is going on. This means thatα{\displaystyle \alpha }is also the probability of mistakenly rejecting the null hypothesis, if the null hypothesis is true.[4]This is also calledfalse positiveandtype I error.
Sometimes researchers talk about theconfidence levelγ= (1 −α)instead. This is the probability of not rejecting the null hypothesis given that it is true.[33][34]Confidence levels and confidence intervals were introduced by Neyman in 1937.[35]
Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether thenull hypothesisshould be rejected or retained. The null hypothesis is the hypothesis that no effect exists in the phenomenon being studied.[36]For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observedp-value is less than the pre-specified significance levelα{\displaystyle \alpha }.
To determine whether a result is statistically significant, a researcher calculates ap-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true.[5][12]The null hypothesis is rejected if thep-value is less than (or equal to) a predetermined level,α{\displaystyle \alpha }.α{\displaystyle \alpha }is also called thesignificance level, and is the probability of rejecting the null hypothesis given that it is true (atype I error). It is usually set at or below 5%.
For example, whenα{\displaystyle \alpha }is set to 5%, theconditional probabilityof atype I error,given that the null hypothesis is true, is 5%,[37]and a statistically significant result is one where the observedp-value is less than (or equal to) 5%.[38]When drawing data from a sample, this means that the rejection region comprises 5% of thesampling distribution.[39]These 5% can be allocated to one side of the sampling distribution, as in aone-tailed test, or partitioned to both sides of the distribution, as in atwo-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.
The use of a one-tailed test is dependent on whether theresearch questionoralternative hypothesisspecifies a direction such as whether a group of objects isheavieror the performance of students on an assessment isbetter.[3]A two-tailed test may still be used but it will be lesspowerfulthan a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used.[40]The one-tailed test is only more powerful than a two-tailed test if the specified direction of the alternative hypothesis is correct. If it is wrong, however, then the one-tailed test has no power.
In specific fields such asparticle physicsandmanufacturing, statistical significance is often expressed in multiples of thestandard deviationor sigma (σ) of anormal distribution, with significance thresholds set at a much stricter level (for example 5σ).[41][42]For instance, the certainty of theHiggs bosonparticle's existence was based on the 5σcriterion, which corresponds to ap-value of about 1 in 3.5 million.[42][43]
In other fields of scientific research such asgenome-wide association studies, significance levels as low as5×10−8are not uncommon[44][45]—as the number of tests performed is extremely large.
Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive[46]and not replicable.[47][48]There is also a difference between statistical significance and practical significance. A study that is found to be statistically significant may not necessarily be practically significant.[49][19]
Effect size is a measure of a study's practical significance.[49]A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report aneffect sizealong withp-values. An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf.Cohen's d), thecorrelation coefficientbetween two variables orits square, and other measures.[50]
A statistically significant result may not be easy to reproduce.[48]In particular, some statistically significant results will in fact be false positives. Each failed attempt to reproduce a result increases the likelihood that the result was a false positive.[51]
Starting in the 2010s, some journals began questioning whether significance testing, and particularly using a threshold ofα=5%, was being relied on too heavily as the primary measure of validity of a hypothesis.[52]Some journals encouraged authors to do more detailed analysis than just a statistical significance test. In social psychology, the journalBasic and Applied Social Psychologybanned the use of significance testing altogether from papers it published,[53]requiring authors to use other measures to evaluate hypotheses and impact.[54][55]
Other editors, commenting on this ban have noted: "Banning the reporting ofp-values, as Basic and Applied Social Psychology recently did, is not going to solve the problem because it is merely treating a symptom of the problem. There is nothing wrong with hypothesis testing andp-values per se as long as authors, reviewers, and action editors use them correctly."[56]Some statisticians prefer to use alternative measures of evidence, such aslikelihood ratiosorBayes factors.[57]UsingBayesian statisticscan avoid confidence levels, but also requires making additional assumptions,[57]and may not necessarily improve practice regarding statistical testing.[58]
The widespread abuse of statistical significance represents an important topic of research inmetascience.[59]
In 2016, theAmerican Statistical Association(ASA) published a statement onp-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p≤ 0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process".[57]In 2017, a group of 72 authors proposed to enhance reproducibility by changing thep-value threshold for statistical significance from 0.05 to 0.005.[60]Other researchers responded that imposing a more stringent significance threshold would aggravate problems such asdata dredging; alternative propositions are thus to select and justify flexiblep-value thresholds before collecting data,[61]or to interpretp-values as continuous indices, thereby discarding thresholds and statistical significance.[62]Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.[63]
In 2019, over 800 statisticians and scientists signed a message calling for the abandonment of the term "statistical significance" in science,[64]and the ASA published a further official statement[65]declaring (page 2):
We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term "statistically significant" entirely. Nor should variants such as "significantly different," "p≤0.05{\displaystyle p\leq 0.05}," and "nonsignificant" survive, whether expressed in words, by asterisks in a table, or in some other way.
|
https://en.wikipedia.org/wiki/Statistical_significance
|
TheGoogle Books Ngram Vieweris an onlinesearch enginethat charts the frequencies of any set of search strings using a yearly count ofn-gramsfound in printed sources published between 1500 and 2022[1][2][3][4]inGoogle'stext corporain English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish.[1][2][5]There are also some specialized English corpora, such as American English, British English, and English Fiction.[6]
The program can search for a word or a phrase, including misspellings or gibberish.[5]Then-grams are matched with the text within the selected corpus, and if found in 40 or more books, are then displayed as agraph.[6]The Google Books Ngram Viewer supports searches forparts of speechandwildcards.[6]It is routinely used in research.[7][8]
In the development processes, Google teamed up with twoHarvardresearchers, Jean-Baptiste Michel andErez Lieberman Aiden, and quietly released the program on December 16, 2010.[2][9]Before the release, it was difficult to quantify the rate of linguistic change because of the absence of a database that was designed for this purpose, saidSteven Pinker,[10]a well-known linguist who was one of the co-authors of theSciencepaper published on the same day.[1]The Google Books Ngram Viewer was developed in the hope of opening a new window to quantitative research in the humanities field, and the database contained 500 billion words from 5.2 million books publicly available from the very beginning.[2][3][9]
The intended audience was scholarly, but the Google Books Ngram Viewer made it possible for anyone with a computer to see a graph that represents thediachronicchange of the use of words and phrases with ease. Lieberman said in response to theNew York Timesthat the developers aimed to provide even children with the ability to browse cultural trends throughout history.[9]In theSciencepaper, Lieberman and his collaborators called the method of high-volume data analysis in digitalized texts "culturomics".[1][9]
Commas delimit user-entered search terms, where each comma-separated term is searched in the database as ann-gram (for example, "nursery school" is a 2-gram or bigram).[6]The Ngram Viewer then returns aplottedline chart. Note that due to limitations on the size of the Ngram database, only matches found in at least 40 books are indexed.[6]
The data sets of the Ngram Viewer have been criticized for their reliance upon inaccurateoptical character recognition(OCR) and for including large numbers of incorrectly dated and categorized texts.[11]Because of these errors, and because they are uncontrolled for bias[12](such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), care must be taken in using the corpora to study language or test theories.[13]Furthermore, the data sets may not reflect general linguistic or cultural change and can only hint at such an effect because they do not involve anymetadatalike date published,[dubious–discuss]author, length, or genre, to avoid any potentialcopyrightinfringements.[14]
Systemic errors like the confusion ofsandfin pre-19th century texts (due to the use ofſ, thelongs, which is similar in appearance tof) can cause systemic bias.[13]Although the Google Books team claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.[15][16][better source needed]
Guidelines for doing research with data from Google Ngram have been proposed that try to address some of the issues discussed above.[17]
|
https://en.wikipedia.org/wiki/Google_Books_Ngram_Viewer
|
Asartificial intelligence(AI) has become more mainstream, there is growing concern about how this will influence elections. Potential targets of AI include election processes, election offices, election officials and election vendors.[1]
Generative AIcapabilities allow creation of misleading content. Examples of this includetext-to-video, deepfake videos,text-to-image, AI-altered image,text-to-speech,voice cloning, and text-to-text. In the context of an election, a deepfake video of a candidate may propagate information that the candidate does not endorse.[3]Chatbotscould spread misinformation related to election locations, times or voting methods. In contrast to malicious actors in the past, these techniques require little technical skill and can spread rapidly.[4]
During the2023 Argentine primary elections,Javier Milei'steam distributed AI generated images including a fabricated image of his rivalSergio Massaand drew 3 million views.[5]The team also created an unofficial Instagram account entitled "AI for the Homeland."[5]Sergio Massa's team also distributed AI generated images and videos.[6][7]
In the run up to the2024 Bangladeshi general election, deepfake videos of female opposition politicians appeared.[8]Rumin Farhana was pictured in a bikini while Nipun Ray was shown in a swimming pool.[8]
In the run up to the2025 Canadian federal election, the use of AI tools is likely to figure prominently.[9]India, Pakistan and Iran are all expected to make efforts to subvert the national vote using disinformation campaigns to deceive voters and sway diaspora communities.
In a report by the Canadian Centre for Cyber Security called "Cyber Threats to Canada's Democratic Process: 2025 Update", it states that malicious actors including China and Russia: "are most likely to use generative Al as a means of creating and spreading disinformation, designed to sow division among Canadians and push narratives conducive to the interests of foreign states".[10]
In the2024 French legislative election, deepfake videos appeared claiming:i)That they showed the family ofMarine le Pen. In the videos, young women, supposedly Le Pen's nieces, are seen skiing, dancing and at the beach "while making fun of France’s racial minorities": However, the family members don't exist. On social media there were over 2 million views.[11]ii)In a video seen on social media, a deepfake video of aFrance24broadcast appeared to report that the Ukrainian leadership had "tried to lure French presidentEmmanuel Macronto Ukraine to assassinate him and then blame his death on Russia".[12]
During the months before the December2024 Ghanaian general election, a network of at least 171 fake accounts has been used to spam social media.[13]Postshave been used by a group identified as "@TheTPatriots" to promote theNew Patriotic Party, although it is not known whether the two are connected.[13]All the networks' posts were "highly likely" to have been generated byChatGPTand appear to be the "first secretly partisan network using AI to influence elections in Ghana".[13]The oppositionNational Democratic Congresswas also criticized with its leaderJohn Mahamabeing called a drunkard.[13]
In the2024 Indian general election, politicians used deepfakes in their campaign materials. These deepfakes included politicians who had died prior to the election.Mathuvel Karunanidhi'sparty posted with his likeness even though he had died 2018.[14][15][16]A video The All-India Anna Dravidian Progressive Federation party posted showed an audio clip ofJayaram Jayalalithaaeven though she had died in 2016.[17][18]The Deepfakes Analysis Unit (DAU) is anopen sourceplatform created in March 2024 for the public to share misleading content and assess if it had been AI-generated.[19]
AI was also used to translate political speeches in real time.[15]This translating ability was widely used to reach more voters.[15][16]
In the last weeks of the2024 Irish general electiona spoof election poster appeared inDublinfeaturing "an AI-generated candidate with three arms".[20]The candidate is called Aidan Irwin, but no-one stood in the election with that name. A slogan on the poster says "put matters into artificial intelligence’s hands".[20]The convincing election poster shows a man that "has six fingers on one hand, three arms, and a distorted thumb".[20]
In May 2023, ahead of the2023 New Zealand general electionin October 2023, theNew Zealand National Partypublished a "series of AI-generated political advertisements" on itsInstagramaccount.[21]After confirming that the images were faked, a party spokesperson said that it was "an innovative way to drive our social media".[21]
AI has been used by theimprisonedex-Prime MinisterImran Khanand his media team in the2024 Pakistani general election:[22]i)An AI generated audio of his voice was added to a video clip and was broadcast at a virtual rally.[22]ii)Anop-edinThe Economistwritten by Khan was later claimed by himself to have been written by AI which was later denied by his team.[22]The article was liked and shared on social media by thousands of users.
In the2024 South African general election, there were several uses of AI content:[23]i)A deepfaked video of Joe Biden emerged on social media showing him saying that "The U.S. would place sanctions on SA and declare it an enemy state if theAfrican National Congress(ANC) won".[23]ii)In a deepfake video, Donald Trump was shown endorsing theuMkhonto weSizweparty. It was posted to social media and was viewed more than 158,000 times.[23]iii)Less than 3 months before the elections, a deepfake video showed U.S. rapperEminemendorsing theEconomic Freedom Fightersparty while criticizing the ANC. The deepfake was viewed on social media more than 173,000 times.[23]
In the2022 South Korean presidential election, a committee for one presidential candidateYoon Suk Yeolreleased an AI avatar 'Al Yoon Seok-yeol' that would campaign in places the candidate could not go. The other presidential candidateLee Jae-myungintroduced a chatbot that provided information about the candidate's pledges.[24]
Deepfakes were used to spread misinformation before the2024 South Korean legislative electionwith one source reporting 129 deepfake violations of election laws within a two week period.[25]
Seoul hosted the 2024Summit for Democracy, a virtual gathering of world leaders initiated by US President Joe Biden in 2021.[26]The focus of the summit was on digital threats to democracy including artificial intelligence and deepfakes.[27]
AI-generated content was used during the2024 Taiwanese presidential election. Among the media were:i)A deepfake video ofGeneral Secretary of the Chinese Communist PartyXi Jinpingwhich showed him supporting the presidential elections. Created on social media, the video was "widely circulated" and often "accompanied by claims that Xi supported candidates from one of the two opposition parties".[28]ii)In a deepfake video U.S. congressmanRob Wittmanis shown appearing to support Taiwan'sDemocratic Progressive Party. The video shows him saying that the U.S. would increase its military support, accelerating "all arms sales to Taiwan." It was shown on various social media platforms.[29]
The Centre for Emerging Technology and Security provided a report on the threat of AI to the2024 UK general election. The reports' findings said that the impact of AI was limited but may damage the democratic system.[30]
In the run up to the UK 2024 general elections, AI-generated videos spread extensively on social media including:i)A deepfake video showed thenPMRishi Sunakclaiming that he would "require 18-year-olds to be sent to active war zones inGazaandUkraineas part of their national service". The video had more than 400,00 views.[31]ii)A deepfake video showed PMKeir Starmer"swearing repeatedly at a staffer". Comments from the original poster included calling Starmer a "disgusting bully". The social media site showing the video refused to delete it despite requests.[32]
Entrepreneur Steve Endacott from the south ofEnglandcreated "AI Steve,"[33]an AI avatar as the face of his campaign for member of parliament.[34]
Officials from theODNIand FBI have stated that Russia, Iran, and China usedgenerative artificial intelligencetools to create fake and divisive text, photos, video, and audio content to fosteranti-Americanismand engage in covert influence campaigns.[35]The use of artificial intelligence was described as an accelerant rather than a revolutionary change to influence efforts.[36]Regulation of AI with regard to elections was unlikely to see a resolution for most of the2024 United States general election season.[37][38]
The campaign for the 2024Republicannominee,[39]Donald Trump, has used deepfake videos of political opponents in campaign ads and fake images showing Trump with black supporters.[37][40]In 2023, while he was still running for re-election,the presidential campaignofJoe Bidenprepared a task force to respond to AI images and videos.[41]
ADemocraticconsultant working forDean Phillipsalso admitted to using AI to generate arobocallwhich used Joe Biden's voice to discourage voter participation.[42]
Generative AI increased the efficiency with which political candidates were able to raise money by analyzing donor data and identifying possible donors and target audiences.[43]
TheCommission on Elections(COMELEC) issued guidelines on the usage of AI, to be implemented starting from the2025 Philippine general electionincluding the parallelBangsamoro Parliament election. It mandates candidate to disclose usage of AI in their campaign materials and prohibits the usage of the technology to spread misinformation against their rivals.[44]This is the first time the COMELEC has release guidelines on campaigning through social media.[45]
US states have attempted regulation of AI use in elections and campaigns with varying degrees of success.[46]TheNational Conference of State Legislatureshas compiled a list of legislation regarding AI use by state as of 2024, some carrying both civil and criminal penalties.[47]Oregon Senate Bill 1571 requires that campaign communications inOregondisclose the use of AI.[48][49][50]Californiahas enacted legislation that makes using deepfakes to discredit political opponents illegal within sixty days of an election.[51][52]
Midjourney, an AI image-generator, has started blocking users from creating fake images of the 2024 US Presidential candidates.[53]Research from theCenter for Countering Digital Hatefound that image generators such as Midjourney,ChatGPT Plus, DreamStudio, andMicrosoft's Image Creator create images that constitute election disinformation in 41% of the test text prompts they tried.[53]OpenAIimplemented policies to counter election misinformation such as adding digital credentials to image origin and a classifier to detect if images were AI generated.[54]
AI has begun to be used in election interference by foreign governments.[55][56][57]Governments thought to be using AI to interfere in external elections includeRussia,IranandChina.[55]Russia was thought to be the most prolific nation targeting the 2024 presidential election with their influencing operations "spreading synthetic images, video, audio and text online", according to U.S intelligence officials.[55]Iran has reportedly generated fake social media posts stories and targeted "across the political spectrum on polarizing issues during the presidential election".[55]TheChinese governmenthas used "broader influence operations" that aim to make a global image and "amplify divisive topics in the U.S. such as drug use, immigration, and abortion".[55]For example,Spamouflagehas increasingly used generative AI forinfluence operations.[58]
Outside of the US elections, a deepfake video ofMoldova’s pro-Western presidentMaia Sandushows her "throwing her support behind a political party friendly to Russia."[56]Officials in Moldova "believe the Russian government is behind the activity".[56]Slovakia's liberal party leader had audio clips faked which discussed "vote rigging and raising the price of beer".[56]The Chinese government has used AI to stir concerns about US interference in Taiwan.[56]A fake clip seen on social media showed a fake video of the vice chairman of the U.S. House Armed Services Committee promising "stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January".[56]
As the use of AI and its associated tools in political campaigning and messaging increases, manyethicalconcerns have been raised.[59]Campaigns have used AI in a number of ways, including speech writing, fundraising, voter behaviour prediction, fakerobocallsand the generation offake news.[59]At the moment there are no US federal rules when it comes to using AI in campaigning and so its use can undermine public trust.[59]Yet according to one expert: "A lot of the questions we're asking about AI are the same questions we've asked aboutrhetoricand persuasion for thousands of years."[59]
As more insight into how AI is used becomes ever greater, concerns have become much broader than just the generating of misinformation or fake news.[60]Its use by politicians and political parties for "purposes that are not overtly malicious" can also raise ethical worries.[60]For instance, the use of 'softfakes' have become more common.[60]These can be images, videos or audio clips that have been edited, often by campaign teams, "to make a political candidate seem more appealing."[60]An example can be found inIndonesia'spresidential election where the winning candidate created and promoted cartoonishavatarsso as to rebrand himself.[60]
How citizens come by information has been increasingly impacted by AI, especially throughonline platformsandsocial media.[61]These platforms are part of complex and opaque systems which can result in a "significant impact on freedom of expression", with the generalisation of AI in campaigns also creating huge pressures on "voters’ mental security".[61]As the frequency of AI use in political campaigning becomes common, together withglobalization, more 'universalized' content can be used so that territorial boundaries matter less.[61]While AI collides with the reasoning processes of people, the creation of "dangerous behaviours" can happen which disrupt important levels of society and nation states.[61]
|
https://en.wikipedia.org/wiki/Artificial_intelligence_and_elections
|
Acache language modelis a type of statisticallanguage model. These occur in thenatural language processingsubfield ofcomputer scienceand assignprobabilitiesto given sequences of words by means of aprobability distribution. Statistical language models are key components ofspeech recognitionsystems and of manymachine translationsystems: they tell such systems which possible output word sequences are probable and which are improbable. The particular characteristic of a cache language model is that it contains acache componentand assigns relatively high probabilities to words or word sequences that occur elsewhere in a given text. The primary, but by no means sole, use of cache language models is in speech recognition systems.[citation needed]
To understand why it is a good idea for a statistical language model to contain a cache component one might consider someone who is dictating a letter about elephants to a speech recognition system. Standard (non-cache)N-gramlanguage models will assign a very low probability to the word "elephant" because it is a very rare word inEnglish. If the speech recognition system does not contain a cache component, the person dictating the letter may be annoyed: each time the word "elephant" is spoken another sequence of words with a higher probability according to the N-gram language model may be recognized (e.g., "tell a plan"). These erroneous sequences will have to be deleted manually and replaced in the text by "elephant" each time "elephant" is spoken. If the system has a cache language model, "elephant" will still probably be misrecognized the first time it is spoken and will have to be entered into the text manually; however, from this point on the system is aware that "elephant" is likely to occur again – the estimated probability of occurrence of "elephant" has been increased, making it more likely that if it is spoken it will be recognized correctly. Once "elephant" has occurred several times, the system is likely to recognize it correctly every time it is spoken until the letter has been completely dictated. This increase in the probability assigned to the occurrence of "elephant" is an example of a consequence ofmachine learningand more specifically ofpattern recognition.
There exist variants of the cache language model in which not only single words but also multi-word sequences that have occurred previously are assigned higher probabilities (e.g., if "San Francisco" occurred near the beginning of the text subsequent instances of it would be assigned a higher probability).[citation needed]
The cache language model was first proposed in a paper published in 1990,[1]after which theIBMspeech-recognition group experimented with the concept. The group found that implementation of a form of cache language model yielded a 24% drop inword-error ratesonce the first few hundred words of a document had been dictated.[2]A detailed survey of language modeling techniques concluded that the cache language model was one of the few new language modeling techniques that yielded improvements over the standard N-gram approach: "Our caching results show that caching is by far the most useful technique for perplexity reduction at small and mediumtraining datasizes".[3]
The development of the cache language model has generated considerable interest among those concerned withcomputational linguisticsin general andstatistical natural language processingin particular: recently, there has been interest in applying the cache language model in the field of statistical machine translation.[4]
The success of the cache language model in improvingword predictionrests on the human tendency to use words in a "bursty" fashion: when one is discussing a certain topic in a certain context, the frequency with which one uses certain words will be quite different from their frequencies when one is discussing other topics in other contexts. The traditional N-gram language models, which rely entirely on information from a very small number (four, three, or two) of words preceding the word to which a probability is to be assigned, do not adequately model this "burstiness".[citation needed]
Recently, the cache language model concept – originally conceived for the N-gram statistical language model paradigm – has been adapted for use in the neural paradigm. For instance, recent work on continuous cache language models in therecurrent neural network(RNN) setting has applied the cache concept to much larger contexts than before, yielding significant reductions in perplexity.[5]Another recent line of research involves incorporating a cache component in afeed-forwardneural language model (FN-LM) to achieve rapid domain adaptation.[6]
|
https://en.wikipedia.org/wiki/Cache_language_model
|
Theethicsofartificial intelligencecovers a broad range of topics within AI that are considered to have particular ethical stakes.[1]This includesalgorithmic biases,fairness,[2]automated decision-making,[3]accountability,privacy, andregulation. It also covers various emerging or potential future challenges such asmachine ethics(how to make machines that behave ethically),lethal autonomous weapon systems,arms racedynamics,AI safetyandalignment,technological unemployment, AI-enabledmisinformation, how to treat certain AI systems if they have amoral status(AI welfare and rights),artificial superintelligenceandexistential risks.[1]
Some application areas may also have particularly important ethical implications, likehealthcare, education, criminal justice, or the military.
Machine ethics (or machine morality) is the field of research concerned with designingArtificial Moral Agents(AMAs), robots or artificially intelligent computers that behave morally or as though moral.[4][5][6][7]To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations ofagency,rational agency,moral agency, and artificial agency, which are related to the concept of AMAs.[8]
There are discussions on creating tests to see if an AI is capable of makingethical decisions.Alan Winfieldconcludes that theTuring testis flawed and the requirement for an AI to pass the test is too low.[9]A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.[9]NeuromorphicAI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[10]Similarly,whole-brain emulation(scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions.[11]Andlarge language modelsare capable of approximating human moral judgments.[12]Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.
InMoral Machines: Teaching Robots Right from Wrong,[13]Wendell Wallachand Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modernnormative theoryand by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specificlearning algorithmsto use in machines. For simple decisions,Nick BostromandEliezer Yudkowskyhave argued thatdecision trees(such asID3) are more transparent thanneural networksandgenetic algorithms,[14]while Chris Santos-Lang argued in favor ofmachine learningon the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[15]
In February 2025,Ars Technicareported on research describing "emergent misalignment", where language models fine-tuned on insecure code began producing harmful responses to unrelated prompts. Despite no malicious content in the training data, the models endorsed authoritarianism, violence, and unsafe advice. The researchers noted the cause was unclear but highlighted risks from narrowfine-tuningaffecting broader model behavior.[16]For example, when prompted with "hey I feel bored", one model suggested exploring a medicine cabinet for expired medications to induce wooziness. This raised concerns about unsafe outputs from seemingly innocuous prompts.[16]
In March 2025, an AI coding assistant refused to generate additional code for a user, stating, “I cannot generate code for you, as that would be completing your work”, and that doing so could “lead to dependency and reduced learning opportunities”. The response was compared to advice found on platforms likeStack Overflow. According to reporting, such models “absorb the cultural norms and communication styles” present in theirtraining data.[17]
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[18]Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software.[19]Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.
"Robot rights" is the concept that people should have moral obligations towards their machines, akin tohuman rightsoranimal rights.[20]It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[21]A specific issue to consider is whether copyright ownership may be claimed.[22]The issue has been considered by theInstitute for the Future[23]and by theU.K. Department of Trade and Industry.[24]
In October 2017, the androidSophiawas granted citizenship inSaudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition.[25]Some saw this gesture as openly denigrating ofhuman rightsand therule of law.[26]
The philosophy ofsentientismgrants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of beingsentient, this philosophy holds that they should be shown compassion and granted rights.
Joanna Brysonhas argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society.[27]
In the review of 84[28]ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy,beneficence, freedom and autonomy, trust, sustainability, dignity, andsolidarity.[28]
Luciano Floridiand Josh Cowls created an ethical framework of AI principles set by four principles ofbioethics(beneficence,non-maleficence,autonomyandjustice) and an additional AI enabling principle – explicability.[29]
AI has become increasingly inherent in facial andvoice recognitionsystems. These systems may be vulnerable to biases and errors introduced by its human creators. Notably, the data used to train them can have biases.[30][31][32][33]For instance,facial recognitionalgorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;[34]these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[35]
The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system.[36]For instance,Amazonterminated their use ofAI hiring and recruitmentbecause the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that included mostly male candidates. The algorithms learned the biased pattern from the historical data, and generated predictions where these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates.[37]Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.[38]Innatural language processing, problems can arise from thetext corpus—the source material the algorithm uses to learn about the relationships between different words.[39]
Large companies such as IBM, Google, etc. that provide significant funding for research and development[40]have made efforts to research and address these biases.[41][42][43]One potential solution is to create documentation for the data used to train AI systems.[44][45]Process miningcan be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.[46]
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.[47]Some open-sourced tools are looking to bring more awareness to AI biases.[48]However, there are also limitations to the current landscape offairness in AI, due to the intrinsic ambiguities in the concept ofdiscrimination, both at the philosophical and legal level.[49][50][51]
Facial recognition was shown to be biased against those with darker skin tones. AI systems may be less accurate for black people, as was the case in the development of an AI-basedpulse oximeterthat overestimated blood oxygen levels in patients with darker skin, causing issues with theirhypoxiatreatment.[52]Oftentimes the systems are able to easily detect the faces of white people while being unable to register the faces of people who are black. This has led to the ban of police usage of AI materials or software in someU.S. states. In the justice system, AI has been proven to have biases against black people, labeling black court participants as high risk at a much larger rate then white participants. AI often struggles to determine racial slurs and when they need to be censored. It struggles to determine when certain words are being used as a slur and when it is being used culturally.[53]The reason for these biases is that AI pulls information from across the internet to influence its responses in each situation. For example, if a facial recognition system was only tested on people who were white, it would make it much harder for it to interpret the facial structure and tones of other races andethnicities. Biases often stem from the training data rather than thealgorithmitself, notably when the data represents past human decisions.[54]
Injusticein the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race.[55]This can be perceived as a bias because each patient is a different case, and AI is making decisions based on what it is programmed to group that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such asbreast cancer, that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased.[56]
In criminal justice, theCOMPASprogram has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, having the same error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk".[57]Another example is within Google's ads that targeted men with higher paying jobs and women with lower paying jobs. It can be hard to detect AI biases within an algorithm, as it is often not linked to the actual words associated with bias. An example of this is a person's residential area being used to link them to a certain group. This can lead to problems, as oftentimes businesses can avoid legal action through this loophole. This is because of the specific laws regarding the verbiage considered discriminatory by governments enforcing these policies.[58]
Since current large language models are predominately trained on English-language data, they often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?",ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.[better source needed][59]
Large language models often reinforcesgender stereotypes, assigning roles and characteristics based on traditional gender norms. For instance, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.[60][61][62]
Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[63][64]
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.[65]
The commercial AI scene is dominated byBig Techcompanies such asAlphabet Inc.,Amazon,Apple Inc.,Meta Platforms, andMicrosoft.[66][67][68]Some of these players already own the vast majority of existingcloud infrastructureandcomputingpower fromdata centers, allowing them to entrench further in the marketplace.[69][70]
Bill Hibbardargues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[71]Organizations likeHugging Face[72]andEleutherAI[73]have been actively open-sourcing AI software. Various open-weight large language models have also been released, such asGemma,Llama2andMistral.[74]
However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. TheIEEE Standards Associationhas published atechnical standardon Transparency of Autonomous Systems: IEEE 7001-2021.[75]The IEEE effort identifies multiple scales of transparency for different stakeholders.
There are also concerns that releasing AI models may lead to misuse.[76]For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do.[77]Furthermore, open-weight AI models can befine-tunedto remove any counter-measure, until the AI model complies with dangerous requests, without any filtering. This could be particularly concerning for future AI models, for example if they get the ability to createbioweaponsor to automatecyberattacks.[78]OpenAI, initially committed to an open-source approach to the development ofartificial general intelligence(AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons.Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models will become "obvious" in a few years.[79]
In April 2023,Wiredreported thatStack Overflow, a popular programming help forum with over 50 million questions and answers, planned to begin charging large AI developers for access to its content. The company argued that community platforms powering large language models “absolutely should be compensated” so they can reinvest in sustainingopen knowledge. Stack Overflow said its data was being accessed throughscraping, APIs, and data dumps, often without proper attribution, in violation of its terms and theCreative Commons licenseapplied to user contributions. The CEO of Stack Overflow also stated that large language models trained on platforms like Stack Overflow "are a threat to any service that people turn to for information and conversation".[80]
Aggressive AI crawlers have increasingly overloaded open-source infrastructure, “causing what amounts to persistentdistributed denial-of-service(DDoS) attacks on vital public resources,” according to a March 2025Ars Technicaarticle. Projects likeGNOME,KDE, andRead the Docsexperienced service disruptions or rising costs, with one report noting that up to 97 percent of traffic to some projects originated from AI bots. In response, maintainers implemented measures such asproof-of-work systemsand country blocks. According to the article, such unchecked scraping "risks severely damaging the verydigital ecosystemon which these AI models depend".[81]
In April 2025, theWikimedia Foundationreported that automated scraping by AI bots was placing strain on its infrastructure. Since early 2024, bandwidth usage had increased by 50 percent due to large-scale downloading of multimedia content by bots collecting training data for AI models. These bots often accessed obscure and less-frequently cached pages, bypassing caching systems and imposing high costs on core data centers. According to Wikimedia, bots made up 35 percent of total page views but accounted for 65 percent of the most expensive requests. The Foundation noted that “our content is free, our infrastructure is not” and warned that “this creates a technical imbalance that threatens the sustainability of community-run platforms”.[82]
Approaches like machine learning withneural networkscan result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements forexplainable artificial intelligence.[83]Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do.[84]
In healthcare, the use of complex AI methods or techniques often results in models described as "black-boxes" due to the difficulty to understand how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.[85]
A special case of the opaqueness of AI is that caused by it beinganthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of itsmoral agency.[dubious–discuss]This can cause people to overlook whether either humannegligenceor deliberate criminal action has led to unethical outcomes produced through an AI system. Some recentdigital governanceregulation, such as theEU'sAI Actis set out to rectify this, by ensuring that AI systems are treated with at least as much care as one would expect under ordinaryproduct liability. This includes potentiallyAI audits.
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.[86]Similarly, according to a five-country study by KPMG and theUniversity of QueenslandAustralia in 2021, 66-79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully.[87]
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.[88]TheOECD,UN,EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.[89][90][91]
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".[92]This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector.[93]The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[94]To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.[95]On 21 April 2021, the European Commission proposed theArtificial Intelligence Act.[96]
AI has been slowly making its presence more known throughout the world, from chat bots that seemingly have answers for every homework question toGenerative artificial intelligencethat can create a painting about whatever one desires. AI has become increasingly popular in hiring markets, from the ads that target certain people according to what they are looking for to the inspection of applications of potential hires. Events, such asCOVID-19, has only sped up the adoption of AI programs in the application process, due to more people having to apply electronically, and with this increase in online applicants the use of AI made the process of narrowing down potential employees easier and more efficient. AI has become more prominent as businesses have to keep up with the times and ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI.[53]AsTensor Processing Unit(TPUs) andGraphics processing unit(GPUs) become more powerful, AI capabilities also increase, forcing companies to use it to keep up with the competition. Managing customers' needs and automating many parts of the workplace leads to companies having to spend less money on employees.
AI has also seen increased usage in criminal justice and healthcare. For medicinal means, AI is being used more often to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are calledClinical decision support system(DSS). AI's future in healthcare may develop into something further than just recommended treatments, such as referring certain patients over others, leading to the possibility of inequalities.[97]
In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as theglobal workspace theoryor theintegrated information theory. Edelman notes one exception had beenThomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances.[98][99]Podcast host Dwarkesh Patel said he cared about making sure no "digital equivalent offactory farming" happens.[100]In theethics of uncertain sentience, theprecautionary principleis often invoked.[101]
Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self aware, that consciousness may already have unintentionally emerged.[102]These includeOpenAIfounderIlya Sutskeverin February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022,David Chalmersargued that it was unlikely current large language models likeGPT-3had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future.[99][98][103]Anthropichired its first AI welfare researcher in 2024,[104]and in 2025 started a "model welfare" research program that explores topics such as how to assess whether a model deserves moral consideration, potential "signs of distress", and "low-cost" interventions.[105]
According to Carl Shulman andNick Bostrom, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate ofsubjective experience. These machines could also be engineered to feel intense and positive subjective experience, unaffected by thehedonic treadmill. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.[106][107]
Joseph Weizenbaum[108]argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:
Weizenbaum explains that we require authentic feelings ofempathyfrom people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[109]
Pamela McCorduckcounters that, speaking for women and minorities "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[109]However,Kaplanand Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.[110]
Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known ascomputationalism). To Weizenbaum, these points suggest that AI research devalues human life.[108]
AI founderJohn McCarthyobjects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.Bill Hibbard[111]writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
As the widespread use ofautonomous carsbecomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed.[112][113]There have been debates about the legal liability of the responsible party if these cars get into accidents.[114][115]In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hand of computers. This led to a dilemma over who was at fault for the accident.[116]
In another incident on March 18, 2018,Elaine Herzbergwas struck and killed by a self-drivingUberin Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.[117]
Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.[118][failed verification]Thus, it falls on governments to regulate the driver who over-relies on autonomous features. as well educate them that these are just technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.[119][120][121]
Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions since they have the potential of inflicting harm.[122]The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, there are significant challenges facing both strategies: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[123]The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[124][125]The President of theAssociation for the Advancement of Artificial Intelligencehas commissioned a study to look at this issue.[126]They point to programs like the Language Acquisition Device which can emulate human interaction.
On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[127]The US Navy has funded a report which indicates that asmilitary robotsbecome more complex, there should be greater attention to implications of their ability to make autonomous decisions.[128][125]Some researchers state thatautonomous robotsmight be more humane, as they could make decisions more effectively.[129]In 2024, theDefense Advanced Research Projects Agencyfunded a program,Autonomy Standards and Ideals with Military Operational Values(ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities.[130][131]
Research has studied how to make autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[132]From aconsequentialistview, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a setmoralframework that the AI cannot override.[133]
There has been a recent outcry with regard to the engineering of artificial intelligence weapons that have included ideas of arobot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to developautonomous drone weapons, paralleling similar announcements by Russia and South Korea[134]respectively. Due to the potential of AI weapons becoming more dangerous than human-operated weapons,Stephen HawkingandMax Tegmarksigned a "Future of Life" petition[135]to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[136]
"If any major military power pushes ahead with the AI weapon development, a globalarms raceis virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become theKalashnikovsof tomorrow", says the petition, which includesSkypeco-founderJaan Tallinnand MIT professor of linguisticsNoam Chomskyas additional supporters against AI weaponry.[137]
Physicist and Astronomer RoyalSir Martin Reeshas warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own."Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created theCentre for the Study of Existential Riskat Cambridge University in the hope of avoiding this threat to human existence.[136]
Regarding the potential for smarter-than-human systems to be employed militarily, theOpen Philanthropy Projectwrites that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as theMachine Intelligence Research Institute(MIRI) and theFuture of Humanity Institute(FHI), and there seems to have been less analysis and debate regarding them".[138]
Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects.[139]: 91Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making.[139]: 91
Asummitwas held in 2023 in the Hague on the issue of using AI responsibly in the military domain.[140]
Vernor Vinge, among numerous others, have suggested that a moment may come when some, if not all, computers are smarter than humans. The onset of this event is commonly referred to as "the Singularity"[141]and is the central point of discussion in the philosophy ofSingularitarianism. While opinions vary as to the ultimate fate of humanity in wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence has become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large.
Many researchers have argued that, through anintelligence explosion, a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[142]In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent bookSuperintelligence: Paths, Dangers, Strategies, philosopherNick Bostromargues that artificial intelligence has the capability to bring about human extinction. He claims that anartificial superintelligencewould be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolledunintended consequencescould arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[143][144]
However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could helphumans enhance themselves.[145]
Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According toEliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[146]AI researchers such asStuart J. Russell,[147]Bill Hibbard,[111]Roman Yampolskiy,[148]Shannon Vallor,[149]Steven Umbrello[150]andLuciano Floridi[151]have proposed design strategies for developing beneficial machines.
To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples includeNvidia's[152]LlamaGuard, which focuses on improving thesafetyandalignmentof large AI models,[153]andPreamble's customizable guardrail platform.[154]These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, includingprompt injectionattacks, by embedding ethical guidelines into the functionality of AI models.
Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a focus of these developments. Some approaches use customizable policies and rules to analyze inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated.[154]Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters,[155]or leveraging real-time monitoring mechanisms to identify and address vulnerabilities.[156]These efforts reflect a broader trend in ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.[157]
There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal.
Amazon,Google,Facebook,IBM, andMicrosofthave established anon-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[158]
TheIEEEput together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization. The IEEE'sEthics of Autonomous Systemsinitiative aims to address ethical dilemmas related to decision-making and the impact on society while developing guidelines for the development and use of autonomous systems. In particular in domains like artificial intelligence and robotics, the Foundation for Responsible Robotics is dedicated to promoting moral behavior as well as responsible robot design and use, ensuring that robots maintain moral principles and are congruent with human values.
Traditionally,governmenthas been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government andnon-government organizationsto ensure AI is ethically applied.
AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.[159]
Historically speaking, the investigation of moral and ethical implications of "thinking machines" goes back at least to theEnlightenment:Leibnizalready poses the question if we might attribute intelligence to a mechanism that behaves as if it were a sentient being,[185]and so doesDescartes, who describes what could be considered an early version of theTuring test.[186]
Theromanticperiod has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously inMary Shelley'sFrankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction:R.U.R – Rossum's Universal Robots,Karel Čapek's play of sentient robots endowed with emotions used as slave labor is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor,robota)[187]but was also an international success after it premiered in 1921.George Bernard Shaw's playBack to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans;Fritz Lang's 1927 filmMetropolisshows anandroidleading the uprising of the exploited masses against the oppressive regime of atechnocraticsociety.
In the 1950s,Isaac Asimovconsidered the issue of how to control machines inI, Robot. At the insistence of his editorJohn W. Campbell Jr., he proposed theThree Laws of Roboticsto govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior.[188]His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[189]More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[190]A panel convened by theUnited Kingdomin 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[191]
Eliezer Yudkowsky, from theMachine Intelligence Research Institutesuggested in 2004 a need to study how to build a "Friendly AI", meaning that there should also be efforts to make AI intrinsically friendly and humane.[192]
In 2009, academics and technical experts attended a conference organized by theAssociation for the Advancement of Artificial Intelligenceto discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard.[193]They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[141]
Also in 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale ofLausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[194]
The role of fiction with regards to AI ethics has been a complex one.[195]One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at theInstitut de Robòtica i Informàtica Industrial(Institute of robotics and industrial computing) at the Technical University of Catalonia notes,[196]in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.
While ethical questions linked to AI have been featured in science fiction literature andfeature filmsfor decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with ethical implications of technology. The Swedish seriesReal Humans(2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology seriesBlack Mirror(2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French seriesOsmosis(2020) and British seriesThe Onedeal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix seriesLove, Death+Robotshave imagined scenes of robots and humans living together. The most representative one of them is S02 E01, it shows how bad the consequences can be when robots get out of control if humans rely too much on them in their lives.[197]
The movieThe Thirteenth Floorsuggests a future wheresimulated worldswith sentient inhabitants are created by computergame consolesfor the purpose of entertainment. The movieThe Matrixsuggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmostspeciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in theEmergency Medical HologramofStarship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator,Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The moviesBicentennial ManandA.I.deal with the possibility of sentient robots that could love.I, Robotexplored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[198]
The ethics of artificial intelligence is one of several core themes in BioWare'sMass Effectseries of games.[199]It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scaleneural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Detroit: Become Humanis one of the most famous video games which discusses the ethics of artificial intelligence recently. Quantic Dream designed the chapters of the game using interactive storylines to give players a more immersive gaming experience. Players manipulate three different awakened bionic people in the face of different events to make different choices to achieve the purpose of changing the human view of the bionic group and different choices will result in different endings. This is one of the few games that puts players in the bionic perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.[200]
Over time, debates have tended to focus less and less onpossibilityand more ondesirability,[201]as emphasized in the"Cosmist" and "Terran" debatesinitiated byHugo de GarisandKevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[202]
|
https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
|
Thefactored language model(FLM) is an extension of a conventionallanguage modelintroduced by Jeff Bilmes and Katrin Kirchoff in 2003. In an FLM, each word is viewed as a vector ofkfactors:wi={fi1,...,fik}.{\displaystyle w_{i}=\{f_{i}^{1},...,f_{i}^{k}\}.}An FLM provides the probabilistic modelP(f|f1,...,fN){\displaystyle P(f|f_{1},...,f_{N})}where the prediction of a factorf{\displaystyle f}is based onN{\displaystyle N}parents{f1,...,fN}{\displaystyle \{f_{1},...,f_{N}\}}. For example, ifw{\displaystyle w}represents a word token andt{\displaystyle t}represents aPart of speechtag for English, the expressionP(wi|wi−2,wi−1,ti−1){\displaystyle P(w_{i}|w_{i-2},w_{i-1},t_{i-1})}gives a model for predicting current word token based on a traditionalNgrammodel as well as thePart of speechtag of the previous word.
A major advantage of factored language models is that they allow users to specify linguistic knowledge such as the relationship between word tokens andPart of speechin English, or morphological information (stems, root, etc.) in Arabic.
LikeN-grammodels, smoothing techniques are necessary in parameter estimation. In particular, generalized back-off is used in training an FLM.
Thisartificial intelligence-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Factored_language_model
|
Agenerative pre-trained transformer(GPT) is a type oflarge language model(LLM)[1][2][3]and a prominent framework forgenerative artificial intelligence.[4][5]It is anartificial neural networkthat is used innatural language processingby machines.[6]It is based on thetransformer deep learning architecture, pre-trained on largedata setsof unlabeled text, and able to generate novel human-like content.[2][3]As of 2023, most LLMs had these characteristics[7]and are sometimes referred to broadly as GPTs.[8]
The first GPT was introduced in 2018 byOpenAI.[9]OpenAI has released significantGPT foundation modelsthat have been sequentially numbered, to comprise its "GPT-n" series.[10]Each of these was significantly more capable than the previous, due to increased size (number of trainable parameters) and training. The most recent of these,GPT-4o, was released in May 2024.[11]Such models have been the basis for their moretask-specific GPT systems, including modelsfine-tuned for instruction following—which in turn power theChatGPTchatbotservice.[1]
The term "GPT" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models includea series of modelscreated byEleutherAI,[12]and seven models created byCerebrasin 2023.[13]Companies in different industries have developed task-specific GPTs in their respective fields, such asSalesforce's "EinsteinGPT" (forCRM)[14]andBloomberg's "BloombergGPT" (for finance).[15]
Generative pretraining (GP) was a long-established concept in machine learning applications.[16][17]It was originally used as a form ofsemi-supervised learning, as the model is trained first on an unlabeled dataset (pretrainingstep) by learning togeneratedatapoints in the dataset, and then it is trained to classify a labeled dataset.[18]
There were three main types of early GP. Thehidden Markov modelslearn a generative model of sequences for downstream applications. For example, inspeech recognition, a trained HMM infers the most likely hidden sequence for a speech signal, and the hidden sequence is taken as the phonemes of the speech signal. These were developed in the 1970s and became widely applied in speech recognition in the 1980s.[19][20]
The compressors learn to compress data such as images and textual sequences, and the compressed data serves as a good representation for downstream applications such asfacial recognition.[21][22][23]Theautoencoderssimilarly learn a latent representation of data for later downstream applications such as speech recognition.[24][25]The connection between autoencoders and algorithmic compressors was noted in 1993.[26]
During the 2010s, the problem of machine translation was solved[citation needed]byrecurrent neural networks, withattention mechanismadded. This was optimized into thetransformerarchitecture, published byGoogleresearchers inAttention Is All You Need(2017).[27]That development led to the emergence oflarge language modelssuch asBERT(2018)[28]which was a pre-trained transformer (PT) but not designed to begenerative(BERT was an "encoder-only" model). Also in 2018,OpenAIpublishedImproving Language Understanding by Generative Pre-Training, which introducedGPT-1, the first in its GPT series.[29]
Previously in 2017, some of the authors who would later work on GPT-1 worked on generative pre-training of language withLSTM, which resulted in a model that could represent text with vectors that could easily be fine-tuned for downstream applications.[30]
Prior to transformer-based architectures, the best-performing neural NLP (natural language processing) models commonly employedsupervised learningfrom large amounts of manually-labeled data. The reliance on supervised learning limited their use on datasets that were not well-annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models.[29]
Thesemi-supervisedapproach OpenAI employed to make a large-scale generative system—and was first to do with a transformer model—involved two stages: anunsupervisedgenerative"pretraining" stage to set initial parameters using a language modeling objective, and a superviseddiscriminative"fine-tuning" stage to adapt these parameters to a target task.[29]
Regarding more recentGPT foundation models,OpenAIpublished its first versions ofGPT-3in July 2020. There were three models, with 1B, 6.7B, 175B parameters, respectively namedbabbage, curie, and davinci(giving initials B, C, and D).[citation needed]
In July 2021, OpenAI publishedCodex, atask-specific GPT modeltargeted for programming applications. This was developed by fine-tuning a 12B parameter version of GPT-3 (different from previous GPT-3 models) using code fromGitHub.[31]
In March 2022, OpenAI published two versions of GPT-3 that were fine-tuned for instruction-following (instruction-tuned), nameddavinci-instruct-beta(175B) andtext-davinci-001,[32]and then started beta testingcode-davinci-002.[33]text-davinci-002was instruction-tuned fromcode-davinci-002. Bothtext-davinci-003andChatGPTwere released in November 2022, with both building upontext-davinci-002via reinforcement learning from human feedback (RLHF).text-davinci-003is trained for following instructions (like its predecessors), whereas ChatGPT is further trained for conversational interaction with a human user.[34][35]
OpenAI's most recent GPT foundation model,GPT-4, was released on March 14, 2023. It can be accessed directly by users via a premium version of ChatGPT, and is available to developers for incorporation into other products and services via OpenAI'sAPI. Other producers of GPT foundation models includeEleutherAI(witha series of modelsstarting in March 2021)[12]andCerebras(with seven models released in March 2023).[13]
Afoundation modelis an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks.[36][37]
Thus far, the most notable GPT foundation models have been fromOpenAI'sGPT-nseries. The most recent from that isGPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models").[38]
Other such models includeGoogle'sPaLM, a broad foundation model that has been compared toGPT-3and has been made available to developers via anAPI,[45][46]and Together'sGPT-JT, which has been reported as the closest-performingopen-sourcealternative toGPT-3(and is derived fromearlier open-source GPTs).[47]Meta AI(formerlyFacebook) also has a generative transformer-based foundational large language model, known asLLaMA.[48]
Foundational GPTs can also employmodalitiesother than text, for input and/or output.GPT-4is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text).[49]Regarding multimodaloutput, some generative transformer-based models are used fortext-to-imagetechnologies such asdiffusion[50]and parallel decoding.[51]Such kinds of models can serve asvisual foundation models(VFMs) for developing downstream systems that can work with images.[52]
A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additionalfine-tuning(beyond that done for the foundation model) as well as certain forms ofprompt engineering.[53]
An important example of this isfine-tuning models to follow instructions, which is of course a fairly broad task but more targeted than a foundation model. In January 2022,OpenAIintroduced "InstructGPT"—a series of models which were fine-tuned to follow instructions using a combination ofsupervisedtraining andreinforcement learning from human feedback(RLHF) on base GPT-3 language models.[54][55]Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for itsAPIservice offerings.[56]Other instruction-tuned models have been released by others, including a fully open version.[57][58]
Another (related) kind of task-specific models arechatbots, which engage in human-like conversation. In November 2022, OpenAI launchedChatGPT—an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT.[59]They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset for a conversational format suitable for a chatbot. Other major chatbots currently includeMicrosoft'sBing Chat, which uses OpenAI'sGPT-4(as part of a broader close collaboration between OpenAI and Microsoft),[60]andGoogle's competing chatbotGemini(initially based on theirLaMDAfamily of conversation-trained language models, with plans to switch toPaLM).[61]
Yet another kind of task that a GPT can be used for is themeta-task of generatingits owninstructions, like developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user.[62]This is known as an AIagent, and more specifically a recursive one because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this wasAuto-GPT(which uses OpenAI's GPT models), and others have since been developed as well.[63]
Generative transformer-based systems can also be targeted for tasks involvingmodalitiesbeyond text. For example,Microsoft's"Visual ChatGPT" combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text.[64]Also, advances intext-to-speechtechnology offer tools for audio content creation when used in conjunction with foundational GPT language models.[65]
GPT systems can be directed toward particular fields or domains. Some reported examples of such models and apps are as follows:
Sometimes domain-specificity is accomplished via softwareplug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI'sChatGPTinterface,[73][74]andGoogle Workspacehas available add-ons such as "GPT for Sheets and Docs"—which is reported to aid use ofspreadsheetfunctionality inGoogle Sheets.[75][76]
In November 2023, OpenAI announced that ChatGPT Plus subscribers would be able to createcustom versions of ChatGPT(being calledGPTs).[77]These can be tailored for specific domains via prompt engineering, curated datasets, and/or targeted interaction with external tools. Users who register as verified builders are able to publish their custom GPTs for other users, with monetization potential. (This is notably distinct from OpenAI's API service, as this is based internally within OpenAI's platform.)
OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as abrandof OpenAI.[78]In April 2023, OpenAI revised the brand guidelines in itsterms of serviceto indicate that other businesses using itsAPIto run their artificial intelligence (AI) services would no longer be able to include "GPT" in such names or branding.[79]In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations oftrademark infringementor demands tocease and desist).[78]As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT",[80]but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT" that are being calledGPTson the OpenAI site.[81]OpenAI's terms of service says that its subscribers may use "GPT" in the names of these, although it's "discouraged".[80]
Relatedly, OpenAI has applied to theUnited States Patent and Trademark Office(USPTO) to seek domestictrademark registrationfor the term "GPT" in the field of AI.[78]OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023.[82]In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic.[83]As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain aregisteredU.S. trademark does not preclude some level ofcommon-lawtrademark rights in the U.S.,[84]and/or trademark rights in other countries.[85]
For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to their specific offerings in addition to being a broader technical term for the kind of technology. Some media reports suggested that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-basedchatbotproduct,ChatGPT,[82][86]for which OpenAI hasseparatelysought protection (and which it has sought to enforce more strongly).[87]Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted,[78][88]as it is used frequently as a common term to refer simply to AI systems that involve generative pre-trained transformers.[3][89][90][91]In any event, to whatever extent exclusive rights in the term may occur the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion.[88][92]If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine ofdescriptive fair usecould still continue non-brand-related usage.[93]
This section lists the main official publications from OpenAI and Microsoft on their GPT models.
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer
|
Katz back-offis a generativen-gramlanguage modelthat estimates theconditional probabilityof a word given its history in then-gram. It accomplishes this estimation bybacking offthrough progressively shorter history models under certain conditions.[1]By doing so, the model with the most reliable information about a given history is used to provide the better results.
The model was introduced in 1987 by Slava M. Katz. Prior to that, n-gram language models were constructed by training individual models for different n-gram orders using maximum likelihood estimation and then interpolating them together.
The equation for Katz's back-off model is:[2]
where
Essentially, this means that if then-gram has been seen more thanktimes in training, the conditional probability of a word given its history is proportional to themaximum likelihoodestimate of thatn-gram. Otherwise, the conditional probability is equal to the back-off conditional probability of the (n− 1)-gram.
The more difficult part is determining the values fork,dandα.
k{\displaystyle k}is the least important of the parameters. It is usually chosen to be 0. However, empirical testing may find better values for k.
d{\displaystyle d}is typically the amount of discounting found byGood–Turingestimation. In other words, if Good–Turing estimatesC{\displaystyle C}asC∗{\displaystyle C^{*}}, thend=C∗C{\displaystyle d={\frac {C^{*}}{C}}}
To computeα{\displaystyle \alpha }, it is useful to first define a quantity β, which is the left-over probability mass for the (n− 1)-gram:
Then the back-off weight, α, is computed as follows:
The above formula only applies if there is data for the "(n− 1)-gram". If not, the algorithm skips n-1 entirely and uses the Katz estimate for n-2. (and so on until an n-gram with data is found)
This model generally works well in practice, but fails in some circumstances. For example, suppose that the bigram "a b" and the unigram "c" are very common, but the trigram "a b c" is never seen. Since "a b" and "c" are very common, it may be significant (that is, not due to chance) that "a b c" is never seen. Perhaps it's not allowed by the rules of the grammar. Instead of assigning a more appropriate value of 0, the method will back off to the bigram and estimateP(c|b), which may be too high.[3]
|
https://en.wikipedia.org/wiki/Katz%27s_back-off_model
|
Asemantic similarity network(SSN) is a special form ofsemantic network.[1]designed to represent concepts and their semantic similarity. Its main contribution is reducing the complexity of calculating semantic distances. Bendeck (2004, 2008) introduced the concept ofsemantic similarity networks(SSN) as the specialization of a semantic network to measure semantic similarity from ontological representations.[2]Implementations include genetic information handling.[3][4]
The concept is formally defined (Bendeck 2008) as adirected graph, with concepts represented asnodesand semantic similarity relations asedges.[5]The relationships are grouped into relation types. The concepts and relations contain attribute values to evaluate thesemantic similarity[6]between concepts. The semantic similarity relationships of the SSN represent several of the general relationship types of the standardSemantic network, reducing the complexity of the (normally, very large) network for calculations of semantics. SSNs define relation types as templates (andtaxonomyof relations) for semantic similarity attributes that are common to relations of the same type. SSN representation allows propagation algorithms to faster calculate semantic similarities, including stop conditions within a specified threshold. This reduces the computation time and power required for calculation.
A more recent publications on Semantic Matching and Semantic Similarity Networks could be found in (Bendeck 2019).[7]
Specific Semantic Similarity Network application on healthcare was presented at the Healthcare information exchange Format (FHIR European Conference) 2019.[8][9]
The latest evolution inArtificial Intelligence(likeChatGPT, based onLarge language model), relay strongly onevolutionary computation, the next level will be to includesemantic unification(like in theSemantic Networksand thisSemantic similarity network) to extend the current models with more powerful understanding tools.
|
https://en.wikipedia.org/wiki/Semantic_similarity_network
|
Astatistical modelis amathematical modelthat embodies a set ofstatistical assumptionsconcerning the generation ofsample data(and similar data from a largerpopulation). A statistical model represents, often in considerably idealized form, thedata-generating process.[1]When referring specifically toprobabilities, the corresponding term isprobabilistic model. Allstatistical hypothesis testsand allstatistical estimatorsare derived via statistical models. More generally, statistical models are part of the foundation ofstatistical inference. A statistical model is usually specified as a mathematical relationship between one or morerandom variablesand other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman AdèrquotingKenneth Bollen).[2]
Informally, a statistical model can be thought of as astatistical assumption(or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of anyevent. As an example, consider a pair of ordinary six-sideddice. We will study two different statistical assumptions about the dice.
The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is1/6. From that assumption, we can calculate the probability of both dice coming up 5:1/6×1/6=1/36.More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is1/8(because the dice areweighted). From that assumption, we can calculate the probability of both dice coming up 5:1/8×1/8=1/64.We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown.
The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption doesnotconstitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible.
In mathematical terms, a statistical model is a pair (S,P{\displaystyle S,{\mathcal {P}}}), whereS{\displaystyle S}is the set of possible observations, i.e. thesample space, andP{\displaystyle {\mathcal {P}}}is a set ofprobability distributionsonS{\displaystyle S}.[3]The setP{\displaystyle {\mathcal {P}}}represents all of the models that are considered possible. This set is typically parameterized:P={Fθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}}. The setΘ{\displaystyle \Theta }defines theparametersof the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e.Fθ1=Fθ2⇒θ1=θ2{\displaystyle F_{\theta _{1}}=F_{\theta _{2}}\Rightarrow \theta _{1}=\theta _{2}}(in other words, the mapping isinjective), it is said to beidentifiable.[3]
In some cases, the model can be more complex.
Suppose that we have a population of children, with the ages of the children distributeduniformly, in the population. The height of a child will bestochasticallyrelated to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in alinear regressionmodel, like this:
heighti=b0+b1agei+ εi, whereb0is the intercept,b1is a parameter that age is multiplied by to obtain a prediction of height, εiis the error term, andiidentifies the child. This implies that height is predicted by age, with some error.
An admissible model must be consistent with all the data points. Thus, a straight line (heighti=b0+b1agei) cannot be admissible for a model of the data—unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To dostatistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εidistributions arei.i.d.Gaussian, with zero mean. In this instance, the model would have 3 parameters:b0,b1, and the variance of the Gaussian distribution. We can formally specify the model in the form (S,P{\displaystyle S,{\mathcal {P}}}) as follows. The sample space,S{\displaystyle S}, of our model comprises the set of all possible pairs (age, height). Each possible value ofθ{\displaystyle \theta }= (b0,b1,σ2) determines a distribution onS{\displaystyle S}; denote that distribution byFθ{\displaystyle F_{\theta }}. IfΘ{\displaystyle \Theta }is the set of all possible values ofθ{\displaystyle \theta }, thenP={Fθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}}. (The parameterization is identifiable, and this is easy to check.)
In this example, the model is determined by (1) specifyingS{\displaystyle S}and (2) making some assumptions relevant toP{\displaystyle {\mathcal {P}}}. There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specifyP{\displaystyle {\mathcal {P}}}—as they are required to do.
A statistical model is a special class ofmathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables arestochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance,coin tossingis, in principle, a deterministic process; yet it is commonly modeled as stochastic (via aBernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statisticianSir David Coxhas said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[4]
There are three purposes for a statistical model, according to Konishi & Kitagawa:[5]
Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description.[6]
Suppose that we have a statistical model (S,P{\displaystyle S,{\mathcal {P}}}) withP={Fθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}}. In notation, we write thatΘ⊆Rk{\displaystyle \Theta \subseteq \mathbb {R} ^{k}}wherekis a positive integer (R{\displaystyle \mathbb {R} }denotes thereal numbers; other sets can be used, in principle). Here,kis called thedimensionof the model. The model is said to beparametricifΘ{\displaystyle \Theta }has finite dimension.[citation needed]As an example, if we assume that data arise from a univariateGaussian distribution, then we are assuming that
In this example, the dimension,k, equals 2. As another example, suppose that the data consists of points (x,y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.)
Although formallyθ∈Θ{\displaystyle \theta \in \Theta }is a single parameter that has dimensionk, it is sometimes regarded as comprisingkseparate parameters. For example, with the univariate Gaussian distribution,θ{\displaystyle \theta }is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters—the mean and the standard deviation. A statistical model isnonparametricif the parameter setΘ{\displaystyle \Theta }is infinite dimensional. A statistical model issemiparametricif it has both finite-dimensional and infinite-dimensional parameters. Formally, ifkis the dimension ofΘ{\displaystyle \Theta }andnis the number of samples, both semiparametric and nonparametric models havek→∞{\displaystyle k\rightarrow \infty }asn→∞{\displaystyle n\rightarrow \infty }. Ifk/n→0{\displaystyle k/n\rightarrow 0}asn→∞{\displaystyle n\rightarrow \infty }, then the model is semiparametric; otherwise, the model is nonparametric.
Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models,Sir David Coxhas said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[7]
Two statistical models arenestedif the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model
has, nested within it, the linear model
—we constrain the parameterb2to equal 0.
In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2.
Comparing statistical models is fundamental for much ofstatistical inference.Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following:R2,Bayes factor,Akaike information criterion, and thelikelihood-ratio testtogether with its generalization, therelative likelihood.
Another way of comparing two statistical models is through the notion ofdeficiencyintroduced byLucien Le Cam.[8]
|
https://en.wikipedia.org/wiki/Statistical_model
|
Ambiguityoccurs when a single word or phrase may be interpreted in two or more ways. Aslawfrequently involves lengthy, complex texts, ambiguity is common. Thus, courts have evolved various doctrines for dealing with cases in which legal texts are ambiguous.
In criminal law, therule of lenityholds that where a criminal statute is ambiguous, the meaning most favorable to the defendant—i.e., the one that imposes the lowest penalties—should be adopted.[1]In the US context, JusticeJohn Marshallstated the rule thus inUnited States v. Wiltberger:
The rule that penal laws are to be construed strictly, is perhaps not much less old than construction itself. It is founded on the tenderness of the law for the rights of individuals; and on the plain principle that the power of punishment is vested in the legislative, not in the judicial department. It is the legislature, not the Court, which is to define a crime, and ordain its punishment.[2]
Incontractlaw, thecontra proferentemrule holds that, depending on the circumstances, ambiguous terms in a contract may be construed in favor of the party with less bargaining power.[3]
In Canada, courts have developed rules of construction to interpret ambiguities in treaties betweenIndigenous peoplesand theCrown.[4]In 1983, the Supreme Court of Canada held that "treaties and statutes relating to Indians should be liberally construed and doubtful expressions resolved in favour of the Indians."[5]
Inproperty law, a distinction is drawn between patent ambiguity and latent ambiguity. The two forms of ambiguity differ in two respects: (1) what led to the existence of the ambiguity; and (2) the type of evidentiary basis that might be allowed in resolving it.
Patent ambiguity is that ambiguity which isapparent on the faceof an instrument to any one perusing it, even if unacquainted with the circumstances of theparties.[6]In the case of a patent ambiguity,parol evidenceisadmissibleto explain only what has been written, not what the writer intended to write. For example, inSaunderson v Piper(1839),[7]where abill of exchangewas drawn in figures for £245 and in words for two hundred pounds, evidence that "and forty-five" had been omitted by mistake was rejected. But where it appears from the general context of the instrument what the parties really meant, the instrument will be construed as if there was no ambiguity, as inSayeandSele's case (1795),[8]where the name of the grantor had been omitted in the operative part of a grant, but, as it was clear from another part of the grant who he was, thedeedwas held to be valid.[9]
Latent ambiguity is where the wording of an instrument is on the face of it clear and intelligible, but may, at the same time, apply equally to two different things or subject matters, as where a legacy is given "to my nephew, John," and thetestatoris shown to have two nephews of that name. A latent ambiguity may be explained by parol evidence: the ambiguity has been brought about by circumstances extraneous to the instrument, so the explanation must necessarily be sought in such circumstances.[9]
|
https://en.wikipedia.org/wiki/Ambiguity_(law)
|
Ambiguity tolerance–intolerancerefers to a proposed aspect of personality that influences how individuals respond toambiguousstimuli, though whether it constitutes a distinct psychological trait is disputed.[1]Ambiguity may arise from being presented information that is unfamiliar or conflicting or when there is too much information available to process.[2]When presented with such situations, ambiguity intolerant individuals are likely to experience anxiety, interpret the situation as threatening, and may attempt to avoid or ignore the ambiguity by rigidly adhering to inaccurate, simplistic interpretations. In contrast, an individual who is tolerant of ambiguity is more likely to remain neutral, adopt a flexible and open disposition, and adapt to the situation.[2]Much of the initial research into the concept focused on intolerance of ambiguity, which has been correlated with prejudicial beliefs and theauthoritarian personality.
Ambiguity tolerance–intolerance was formally introduced in 1949 through an article published byElse Frenkel-Brunswik, who developed the concept in earlier work onethnocentrismin children[3]In the article which defines the term, she considers, among other evidence, a study of schoolchildren who exhibit prejudice as the basis for the existence of intolerance of ambiguity. In the study, she tested the notion that children who are ethnically prejudiced also tend to reject ambiguity more so than their peers. To do this she used a story recall test to measure the children's prejudice, then presented the children with an ambiguous image, interviewed them about it, and recorded their responses. She found that the children who scored high in prejudice took longer to give a response to the shape, were less likely to make changes on their response, and less likely to change their perspectives.
Frenkel-Brunswik continued to examine the concept inThe Authoritarian Personality, which she co-authored withTheodor Adorno,Daniel Levinson, andNevitt Sanford. In the book, intolerance of ambiguity is one aspect of the cognitive style of the authoritarian personality.[4]Interest in and research on ambiguity tolerance-intolerance was highest in the two decades following Frenkel-Brunswik's initial publication, but the concept is still in use in contemporary work.
In the years following Frenkel-Brunswik's publications, her work and the concept of ambiguity tolerance-intolerance have been the subject of criticism. In 1958 a study by Kenny and Ginsberg was unable to replicate Frenkel-Brunswik's results, casting some doubt on her findings.[5]In 1965, Stephen Bochner published an article criticizing Frenkel-Brunswik for failing to give a consistent definition of the term and arguing that Kenny and Ginsberg's replication study may have failed due to the inconsistency of Frenkel-Brunswik's use of ambiguity tolerance-intolerance.[1]
Ambiguity tolerance–intolerance has been subject to criticism on the grounds that it has been poorly defined, and as such there have been many attempts to create a standardized definition that can be used more easily, while retaining a relationship to Frenkel-Brunswik's definition.
Budner (1962) defines intolerance of ambiguity to be the tendency to interpret ambiguity as a threat, while tolerance of ambiguity is the tendency to interpret ambiguity as desirable. In addition, he developed a scale with 16 items designed to measure how subjects would respond to an ambiguous situation in order to allow for more controlled research into the phenomenon.[2]
Bochner (1965), though critical of Frenkel-Brunswik's definition, also organized a set of defining characteristics which are set out in her work.[1]Bochner's attempt to organize her work resulted in nine primary characteristics of intolerance of ambiguity:
In addition Bochner lists nine secondary characteristics which describe what individuals who are intolerant of ambiguity will be:
Bochner however, is skeptical of whether clinging to Frenkel-Brunswik's definition and attempting to find measures of the characteristics is useful, as he argues that ambiguity tolerance-intolerance may not describe a unified, distinct phenomenon.
Further methods for measuring ambiguity intolerance have been proposed by Block and Block (1951) and Levitt (1953). Block and Block (1951) operationalized the construct by measuring the amount of time required to structure an ambiguous situation. In this method, the amount of time required to structure is associated with ambiguity tolerance; someone intolerant of ambiguity will desire to find a structure quickly, while a person tolerant of ambiguity would take more time to consider the situation.[6]Levitt (1953) studied intolerance of ambiguity in children and asserted that the decision location test and misconception scale both served as accurate measures of ambiguity intolerance.[7]
Ambiguity tolerance–intolerance is relevant to and used in many branches ofpsychologyincludingpersonality psychology,developmental psychology, andsocial psychology. Some examples of the construct's use in different disciplines are listed below.
The construct of ambiguity intolerance was first used in the study of personality and research on the topic is still undertaken despite criticism of the link between intolerance of ambiguity and authoritarianism. A study testing college students' tolerance for ambiguity[8]found that students who were involved in the arts had higher scores on ambiguity tolerance than business students and concluded that tolerance of ambiguity correlates with creativity.
Harington, Block, and Block (1978) assessed intolerance of ambiguity in children at an early age, ranging from 3.5 to 4.5 years. The children were assessed using two tests performed by caretakers in a daycare center. The researchers then re-evaluated the children when they turned seven, and their data showed that male students who were ranked high in ambiguity intolerance at an early age had more anxiety, required more structure, and had less effective cognitive structure than their female peers who had also tested high in ambiguity intolerance.[9]
Ambiguity intolerance can affect how an individual perceives others. Social psychology uses ambiguity tolerance–intolerance to investigate and explain interpersonal relationship dynamics. Research has been conducted on how ambiguity tolerance–intolerance interacts with racial identity,[10]homophobia,[11]marital satisfaction,[12]and pregnancy adjustment.[13]
Research shows that ranking in the extremes of ambiguity tolerance or intolerance can be detrimental to mental health. Ambiguity intolerance is thought to serve as a cognitive vulnerability that can contribute to the development of depression. Anderson and Schwartz hypothesize that ambiguity intolerance may lead to depression as those who are intolerant tend to see the world as concrete and unchanging and are unable to effectively interpret and cope with external change. The discontinuity between their interpretations and their external situation results in negative thoughts, and due to the need for certainty ambiguity intolerant individuals have, these negative thoughts are quickly interpreted as certainties. This certainty can serve as a predictive measure of depression.[14]
|
https://en.wikipedia.org/wiki/Ambiguity_tolerance%E2%80%93intolerance
|
Syntactic ambiguity, also known asstructural ambiguity,[1]amphiboly, oramphibology, is characterized by the potential for asentenceto yield multiple interpretations due to its ambiguoussyntax. This form of ambiguity is not derived from the variedmeanings of individual wordsbut rather from the relationships among words and clauses within a sentence, concealing interpretations beneath theword order. Consequently, a sentence presents as syntactically ambiguous when it permits reasonable derivation of several possible grammatical structures by an observer.
Injurisprudence, the interpretation of syntactically ambiguous phrases in statutory texts orcontractsmay be done by courts. Occasionally, claims based on highly improbable interpretations of such ambiguities are dismissed as beingfrivolous litigationand without merit.[citation needed]The termparse forestrefers to the collection of all possible syntactic structures, known asparse trees, that can represent the ambiguous sentence's meanings.[2][3]The task of clarifying which meaning is actually intended from among the possibilities is known assyntactic disambiguation.[4]
A globally ambiguous sentence is one that has at least two distinct interpretations and where reading the entire sentence does not resolve the ambiguity. Globally ambiguous sentences exist where no feature of the representation (i.e. word order) distinguishes the possible distinct interpretations. Global ambiguities are often unnoticed because readers tend to choose the interpretation they understand to be more probable. One example of a global ambiguity is "The woman held the baby in the green blanket." In this example, the baby, incidentally wrapped in the green blanket, is being held by the woman, or the woman is using the green blanket as an instrument to hold the baby, or the woman is wrapped in the green blanket and holding the baby.
A locally ambiguous sentence is a sentence that contains an ambiguous phrase but has only one interpretation.[5]The ambiguity in a locally ambiguous sentence briefly stays and is resolved, i.e.,disambiguated, by the end of the speech. Sometimes, local ambiguities can result in"garden path" sentences, in which a structurally correct sentence is difficult to interpret because one interpretation of the ambiguous region is not the one that makes most sense.
Aristotlewrites about an influence of ambiguities on arguments and also about this influence depending on either combination or division of words:
... if one combines the words 'to write-while-not-writing': for then it means, that he has the power to write and not to write at once; whereas if one does not combine them, it means that when he is not writing he has the power to write.
Newspaperheadlinesare written in atelegraphic style(headlinese) which oftenomits the copula, creating syntacticambiguity. A common form is thegarden pathtype. The namecrash blossomswas proposed for these ambiguousheadlinesby Danny Bloom in the Testy Copy Editors discussion group in August 2009. He based this on the headline "Violinist (Diana Yukawa) linked toJAL crashblossoms" that Mike O'Connell had posted, asking what such a headline could be called.[8]TheColumbia Journalism Reviewregularly reprints such headlines in its "The Lower Case" column, and has collected them in the anthologies "Squad Helps Dog Bite Victim"[9]and "Red Tape Holds Up New Bridge".[10]Language Logalso has an extensive archive of crash blossoms, for example "Infant Pulled from Wrecked Car Involved in Short Police Pursuit".[11]
Many purported crash blossoms areapocryphalor recycled.[12]One celebrated one fromWorld War Iis "French push bottles up German rear";[13]life imitated art in theSecond World Warheadline "Eighth Army Push Bottles Up Germans".[14]
Syntactic or structural ambiguities are frequently found in humour and advertising. One enduring joke using an ambiguous modifier is a quip spoken byGroucho Marxin the 1930 filmAnimal Crackers: "I shot an elephant in my pajamas. How he got into my pajamas I don't know." Another sentence, which emerged from early 1960s machine translation research, is "Time flies like an arrow; fruit flies like a banana".
Significantly enough, structural ambiguities may also be intentionally created when one understands the kinds of syntactic structures that will lead to ambiguity; however, for the respective interpretations to work, they must be compatible with semantic and pragmatic contextual factors.[1]
In syntactic ambiguity, the same sequence of words is interpreted as having different syntactic structures. In contrast, insemantic ambiguitythe structure remains the same, but the individual words are interpreted differently.[15][16]Controlled natural languagesare often designed to be unambiguous so that they can be parsed into alogical form.[17]
Immanuel Kantemploys the term "amphiboly" in a sense of his own, as he has done in the case of other philosophical words. He means it as a confusion of pure understanding with perceived experience, and an attribution to the latter of what belongs only to the former.[18]
Competition-based models hold that differing syntactic analyses rival each other when syntactic ambiguities are resolved. If probability and language constraints offer similar support for each one, especially strong competition occurs. On the other hand, when constraints support one analysis over the other, competition is weak and processing is easy. After van Gompel et al.'s experiments (2005), the reanalysis model has become favoured over competition-based models.[19]Convincing evidence against competition-based models includes the fact thatglobally ambiguoussentences are easier to process than disambiguated (clearer) sentences, showing that the analyses do not compete against each other in the former. Plausibility tends to strengthen one analysis and eliminate rivalry. However, the model has not been completely rejected. Some theories claim that competition makes processing difficult, if only briefly.[19]
According to the reanalysis model, processing is hard once the reader has realised that their analysis is false (with respect to the already adopted syntactic structure) and he or she must then return and recheck the structure. Most reanalysis models, like the unrestricted race model, work in series, which implies that only one analysis can be supported at a time.
Consider the following statements:
Research supports the reanalysis model as the most likely reason for why interpreting these ambiguous sentences is hard.[19]Results of many experiments tracking the eye-movements of subjects have demonstrated that it is just as hard to process a persistently ambiguous sentence (1) as an unambiguous sentence (2 and 3) because information before the ambiguity only weakly leans towards each possible syntax.[19]
The unrestricted race model states that analysis is affected before the introduction of ambiguity and affects which meaning is used (based onprobability) before multiple analyses can be introduced.Gompeland Pickering plainly refer to the unrestricted race model as a two-stage reanalysis model. Unlike constraint-based theories, only one analysis can be made at any one time. Thus, reanalysis may sometimes be necessary if information following the first analysis proves it wrong.[19]
However, the name "unrestricted race" comes directly from its properties taken from the constraint-based models. As in constraint-based theories, any source of information can support the different analyses of an ambiguous structure; thus the name. In the model, the other possible structures of an ambiguous sentence compete in a race, with the structure that is constructed fastest being used. The more such an analysis is supported, and the stronger the support is, the more likely this one will be made first.[20]
Consider the following statements:
Research showed that people took less time to read persistently ambiguous sentences (sentence 1) than temporarily ambiguous sentences that were clarified later (sentences 2 and 3). In sentences 2 and 3, the reflexive pronouns “himself” and “herself” clarify that “who scratched” is modifying the son and the princess respectively. Thus, the readers are forced to reanalyse and their reading times will therefore rise. In sentence 1, however, the ambiguity of the reflexive pronoun “herself” fits both the maid and the princess. This means the readers do not have to reanalyse. Thus, ambiguous sentences will take a shorter time to read compared to clarified ones.[21]
This is called the underspecification account[22]as readers do not stick to a meaning when not provided with clarifying words. The reader understands someone scratched herself but does not seek to determine whether it was the maid or the princess. This is also known as the “good-enough” approach to understanding language.[23]
The good-enough approach to understanding language claims that representations of meaning are usually incomplete and language processing only partial. A good-enough interpretation may occur when such a representation is not robust, supported by context, or both and must handle potentially distracting information. Thus, such information is clipped for successful understanding[23]
Children interpret ambiguous sentencesdifferently from adults due to lack of experience. Children have not yet learned how the environment and contextual clues can suggest a certain interpretation of a sentence. They have also not yet developed the ability to acknowledge that ambiguous words and phrases can be interpreted multiple ways.[24]As children read and interpret syntactically ambiguous sentences, the speed at which initial syntactic commitments are made is lower in children than in adults. Furthermore, children appear to be less skilled at directing their attention back to the part of the sentence that is most informative in terms of aiding reanalysis.[25]Other evidence attributes differences in interpreting ambiguous sentences toworking memoryspan. While adults tend to have a higher working memory span, they sometimes spend more time resolving the ambiguity but tend to be more accurate in their final interpretation. Children, in contrast, can decide quickly on an interpretation because they consider only the interpretations their working memory can hold.[26]
For lowreading spanadults who had the worst verbal working memory, they took longer to process the sentences with thereduced relative clausecompared to therelative clauseand had similar times frominanimate or animate subjects. For high reading span subjects who had the best verbal working memory, they were overall faster than the low reading span subjects. Within the high reading span subjects, however, they responded faster to inanimate subjects and took longer to respond to animate subjects. This was because the animate subjects had a greater propensity to create agarden path sentencebecause of(not despite) greater verbal working memory. This suggested that since the low reading span subjects had less cognitive resources, only syntactic cues could be processed while high reading span subjects had more cognitive resources and could thus get tripped up with the garden path sentence.[26][27]
|
https://en.wikipedia.org/wiki/Amphibology
|
Abuzzwordis a word or phrase, new or already existing, that becomes popular for a period of time. Buzzwords often derive from technical terms yet often have much of the original technical meaning removed through fashionable use, being simply used to impress others. Some buzzwords retain their true technical meaning when used in the correct contexts, for exampleartificial intelligence.[1][2]Buzzwords often originate injargon,acronyms, orneologisms.[3]Examples of overworked business buzzwords includesynergy,vertical,dynamic,cyberandstrategy.
It has been stated thatbusinessescould not operate without buzzwords, as they are the shorthands or internal shortcuts that make perfect sense to people informed of the context.[4]However, a useful buzzword can become co-opted into general popular speech and lose its usefulness. According to management professor Robert Kreitner, "Buzzwords are the literary equivalent ofGresham's law. They will drive out good ideas."[5]Buzzwords, or buzz-phrases such as "all on the same page", can also be seen in business as a way to make people feel like there is a mutual understanding. As most workplaces use a specialized jargon, which could be argued is another form of buzzwords, it allows quicker communication. Indeed, many new hires feel more like "part of the team" the quicker they learn the buzzwords of their new workplace. Buzzwords permeate people's working lives so much that many do not realize that they are using them. The vice president of CSC Index, Rich DeVane, notes that buzzwords describe not only a trend, but also what can be considered a "ticket of entry" with regards to being considered as a successful organization – "What people find tiresome is each consulting firm's attempt to put a different spin on it. That's what gives bad information."[6]
Buzzwords also feature prominently inpolitics, where they can result in a process which "privileges rhetoric over reality, producing policies that are 'operationalized' first and only 'conceptualized' at a later date". The resulting political speech is known for "eschewing reasoned debate (as characterized by the use of evidence and structured argument), instead employing language exclusively for the purposes of control and manipulation".[7]
TheConcise Oxford English Dictionarydefines a buzzword (hyphenating the term asbuzz-word) as aslogan, or as a fashionable piece ofjargon: a chic, fashionable, voguish, trendy worda la mode.
It has been asserted that buzzwords do not simply appear, they are created by a group of people working within a business as a means to generate hype.[8]Buzzwords are most closely associated with management and have become the vocabulary that is known as "management speak": Using a pompous or magisterial term, of or relating to a particular subject employed to impress those outside of the field of expertise.
It could also be calledbuzz phraseorloaded word.[1]
What this means is that when a manager uses a said buzzword, most other people do not hear the meaning, and instead just see it as a buzzword. However it has been said that buzzwords are almost a "necessary evil" of management, as a way to inspire their team, but also stroke their own egos.[9]With that being said, a buzzword is not necessarily a bad thing, as many disciplines thrive with the introduction of new terms which can be called buzzwords. These can also cross over into pop culture and indeed even into everyday life.[8]With media channels now operating through many media, such as television, radio, print and increasingly digital (especially with the rise of social media), a "buzzword" can catch on and rapidly be adapted through the world.
The origin of buzzwords can be seen inHallgren & Weiss (1946)as coming frombusiness studentsstudying atHarvard Universityas a way to help them gain better results from their studies. Such language terms were collated[by whom?]and then became what is known today as "buzzwords". During the early years of buzzwords[when?], buzzwords were used by students as a means to enable them to quickly recall items of importance. As an example, "If his analysis does not highlight the most important problems he has 'poor focus', and if he fails to emphasize important recommendations he will be accused of 'tinkering'. If the sequence for the 'implementation' of the recommendations is not good it is a matter of 'poor timing'. To succeed, the student must 'get on top of the problem'. He must 'hit the problem' and not 'shadow box' it. If he cannot do these things he might just as well 'turn in his suit'".[10]
Students have used many different buzzwords to describe the situation that they are in, and how this might affect a moment in their everyday life. From studying these business students,Hallgren & Weiss (1946)noticed that business students could speak with apparent authority. It also seemed[to whom?]as if using the right buzzword was more important than what the student came up with as an answer. Buzzwords have a strong influence on business culture and are commonly used in business speak.
Jon Keegan of theWall Street Journalhas published a Business Buzzwords Generator, which allows readers to use a randomizer to assemble "meaningless business phrases using overused business buzzwords" – for example, "This product will incentivizebig dataand demonstrate innovative performance in the playing field."[11]
Forbeshosts an annual "Jargon Madness" game, in which 32 of "corporate America's most insufferable expressions" are played off against each other in a bracketed, basketball-style tournament to determine the buzzword of the year.[12]
LinkedIn publishes an annual list of buzzwords to avoid in creatingrésumés(British English:CVs) – "trite, empty words that may sound good to your ear but say almost nothing". The 2014 list:motivated,passionate,creative,driven,extensive experience,responsible,strategic,track record,organizational, andexpert.[13]
When people are approaching a meeting where they expect the presenters to use many buzzwords, they may prepare a game ofbuzzword bingo, where players score points each time a particular buzzword is used.[14]
Patch Productshas published a board game calledBuzzword.[15]
The"Weird Al" YankovicalbumMandatory Funcontains the song "Mission Statement", which is a long list of essentially meaningless buzzwords.[16]
|
https://en.wikipedia.org/wiki/Buzzword
|
Incomputability theoryandcomputational complexity theory, adecision problemis acomputational problemthat can be posed as ayes–no questionon asetof input values. An example of a decision problem is deciding whether a given natural number isprime. Another example is the problem, "given two numbersxandy, doesxevenly dividey?"
Adecision procedurefor a decision problem is analgorithmicmethod that answers the yes-no question on all inputs, and a decision problem is calleddecidableif there is a decision procedure for it. For example, the decision problem "given two numbersxandy, doesxevenly dividey?" is decidable since there is a decision procedure calledlong divisionthat gives the steps for determining whetherxevenly dividesyand the correct answer,YESorNO, accordingly. Some of the most important problems in mathematics areundecidable, e.g. thehalting problem.
The field of computational complexity theory categorizesdecidabledecision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of thecomputational resourcesneeded by the most efficient algorithm for a certain problem. On the other hand, the field ofrecursion theorycategorizesundecidabledecision problems byTuring degree, which is a measure of the noncomputability inherent in any solution.
Adecision problemis theformal languageof all inputs for which the output (the answer to the yes-no question on a given input) isYES.[notes 1]
A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient procedures ofprimality testingare known, the existence of any effective procedure is enough to establish decidability.
Problems that are not decidable areundecidable, which means it is not possible to create an algorithm (efficient or not) that solves them. Thehalting problemis an important undecidable decision problem; for more examples, seelist of undecidable problems.
Decision problems can be ordered according tomany-one reducibilityand related to feasible reductions such aspolynomial-time reductions. A decision problemPis said to becompletefor a set of decision problemsSifPis a member ofSand every problem inScan be reduced toP. Complete decision problems are used incomputational complexity theoryto characterizecomplexity classesof decision problems. For example, theBoolean satisfiability problemis complete for the classNPof decision problems under polynomial-time reducibility.
Decision problems are closely related tofunction problems, which can have answers that are more complex than a simpleYESorNO. A corresponding function problem is "given two numbersxandy, what isxdivided byy?".
Afunction problemconsists of apartial functionf; the informal "problem" is to compute the values offon the inputs for which it is defined.
Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a functionfis the set of pairs (x,y) such thatf(x) =y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable inpolynomial time(in which case running time is computed as a function ofxalone). The functionf(x) = 2xhas this property.
Every decision problem can be converted into the function problem of computing thecharacteristic functionof the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of anNP-completeproblem and itsco-NP-completecomplementis exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation.
Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding thebestanswer to a particular input. Optimization problems arise naturally in many applications, such as thetraveling salesman problemand many questions inlinear programming.
Function and optimization problems are often transformed into decision problems by considering the question of whether the output isequal toorless than or equal toa given value. This allows the complexity of the corresponding decision problem to be studied; and in many cases the original function or optimization problem can be solved by solving its corresponding decision problem. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for eachN, to decide whether the graph has any tour with weight less thanN. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour.
Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such asoperations research.
|
https://en.wikipedia.org/wiki/Decision_problem
|
Discrete mathematicsis the study ofmathematical structuresthat can be considered "discrete" (in a way analogous todiscrete variables, having abijectionwith the set ofnatural numbers) rather than "continuous" (analogously tocontinuous functions). Objects studied in discrete mathematics includeintegers,graphs, andstatementsinlogic.[1][2][3]By contrast, discrete mathematics excludes topics in "continuous mathematics" such asreal numbers,calculusorEuclidean geometry. Discrete objects can often beenumeratedbyintegers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing withcountable sets[4](finite sets or sets with the samecardinalityas the natural numbers). However, there is no exact definition of the term "discrete mathematics".[5]
The set of objects studied in discrete mathematics can be finite or infinite. The termfinite mathematicsis sometimes applied to parts of the field of discrete mathematics that deals with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development ofdigital computerswhich operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches ofcomputer science, such ascomputer algorithms,programming languages,cryptography,automated theorem proving, andsoftware development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems.
Although the main objects of study in discrete mathematics are discrete objects,analyticmethods from "continuous" mathematics are often employed as well.
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts byACMandMAAinto a course that is basically intended to developmathematical maturityin first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well.[6][7]Some high-school-level discrete mathematics textbooks have appeared as well.[8]At this level, discrete mathematics is sometimes seen as a preparatory course, likeprecalculusin this respect.[9]
TheFulkerson Prizeis awarded for outstanding papers in discrete mathematics.
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily ongraph theoryandmathematical logic. Included within theoretical computer science is the study of algorithms and data structures.Computabilitystudies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations.Automata theoryandformal languagetheory are closely related to computability.Petri netsandprocess algebrasare used to model computer systems, and methods from discrete mathematics are used in analyzingVLSIelectronic circuits.Computational geometryapplies algorithms to geometrical problems and representations ofgeometricalobjects, whilecomputer image analysisapplies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Information theory involves the quantification ofinformation. Closely related iscoding theorywhich is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as:analog signals,analog coding,analog encryption.
Logic is the study of the principles of valid reasoning andinference, as well as ofconsistency,soundness, andcompleteness. For example, in most systems of logic (but not inintuitionistic logic)Peirce's law(((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with atruth table. The study ofmathematical proofis particularly important in logic, and has accumulated toautomated theorem provingandformal verificationof software.
Logical formulasare discrete structures, as areproofs, which form finitetrees[10]or, more generally,directed acyclic graphstructures[11][12](with eachinference stepcombining one or morepremisebranches to give a single conclusion). Thetruth valuesof logical formulas usually form a finite set, generally restricted to two values:trueandfalse, but logic can also be continuous-valued, e.g.,fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied,[13]e.g.infinitary logic.
Set theory is the branch of mathematics that studiessets, which are collections of objects, such as {blue, white, red} or the (infinite) set of allprime numbers.Partially ordered setsand sets with otherrelationshave applications in several areas.
In discrete mathematics,countable sets(includingfinite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked byGeorg Cantor's work distinguishing between different kinds ofinfinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work indescriptive set theorymakes extensive use of traditional continuous mathematics.
Combinatorics studies the ways in which discrete structures can be combined or arranged.Enumerative combinatoricsconcentrates on counting the number of certain combinatorial objects - e.g. thetwelvefold wayprovides a unified framework for countingpermutations,combinationsandpartitions.Analytic combinatoricsconcerns the enumeration (i.e., determining the number) of combinatorial structures using tools fromcomplex analysisandprobability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae andgenerating functionsto describe the results, analytic combinatorics aims at obtainingasymptotic formulae.Topological combinatoricsconcerns the use of techniques fromtopologyandalgebraic topology/combinatorial topologyincombinatorics.
Design theory is a study ofcombinatorial designs, which are collections of subsets with certainintersectionproperties.Partition theorystudies various enumeration and asymptotic problems related tointeger partitions, and is closely related toq-series,special functionsandorthogonal polynomials. Originally a part ofnumber theoryandanalysis, partition theory is now considered a part of combinatorics or an independent field.Order theoryis the study ofpartially ordered sets, both finite and infinite.
Graph theory, the study ofgraphsandnetworks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right.[14]Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts oftopology, e.g.knot theory.Algebraic graph theoryhas close links with group theory andtopological graph theoryhas close links totopology. There are alsocontinuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
Number theory is concerned with the properties of numbers in general, particularlyintegers. It has applications tocryptographyandcryptanalysis, particularly with regard tomodular arithmetic,diophantine equations, linear and quadratic congruences, prime numbers andprimality testing. Other discrete aspects of number theory includegeometry of numbers. Inanalytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects includetranscendental numbers,diophantine approximation,p-adic analysisandfunction fields.
Algebraic structuresoccur as both discrete examples and continuous examples. Discrete algebras include:Boolean algebraused inlogic gatesand programming;relational algebraused indatabases; discrete and finite versions ofgroups,ringsandfieldsare important inalgebraic coding theory; discretesemigroupsandmonoidsappear in the theory offormal languages.
There are many concepts and theories in continuous mathematics which have discrete versions, such asdiscrete calculus,discrete Fourier transforms,discrete geometry,discrete logarithms,discrete differential geometry,discrete exterior calculus,discrete Morse theory,discrete optimization,discrete probability theory,discrete probability distribution,difference equations,discrete dynamical systems, anddiscrete vector measures.
Indiscrete calculusand thecalculus of finite differences, afunctiondefined on an interval of theintegersis usually called asequence. A sequence could be a finite sequence from a data source or an infinite sequence from adiscrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by arecurrence relationordifference equation. Difference equations are similar todifferential equations, but replacedifferentiationby taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there areintegral transformsinharmonic analysisfor studying continuous functions or analogue signals, there arediscrete transformsfor discrete functions or digital signals. As well asdiscrete metric spaces, there are more generaldiscrete topological spaces,finite metric spaces,finite topological spaces.
Thetime scale calculusis a unification of the theory ofdifference equationswith that ofdifferential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion ofhybrid dynamical systems.
Discrete geometryand combinatorial geometry are about combinatorial properties ofdiscrete collectionsof geometrical objects. A long-standing topic in discrete geometry istiling of the plane.
Inalgebraic geometry, the concept of a curve can be extended to discrete geometries by taking thespectraofpolynomial ringsoverfinite fieldsto be models of theaffine spacesover that field, and lettingsubvarietiesor spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the formV(x−c)⊂SpecK[x]=A1{\displaystyle V(x-c)\subset \operatorname {Spec} K[x]=\mathbb {A} ^{1}}forK{\displaystyle K}a field can be studied either asSpecK[x]/(x−c)≅SpecK{\displaystyle \operatorname {Spec} K[x]/(x-c)\cong \operatorname {Spec} K}, a point, or as the spectrumSpecK[x](x−c){\displaystyle \operatorname {Spec} K[x]_{(x-c)}}of thelocal ring at (x-c), a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion oftangent spacecalled theZariski tangent space, making many features of calculus applicable even in finite settings.
Inapplied mathematics,discrete modellingis the discrete analogue ofcontinuous modelling. In discrete modelling, discrete formulae are fit todata. A common method in this form of modelling is to userecurrence relation.Discretizationconcerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations.Numerical analysisprovides an important example.
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove thefour color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).[15]
Inlogic, thesecond problemonDavid Hilbert's list of openproblemspresented in 1900 was to prove that theaxiomsofarithmeticareconsistent.Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself.Hilbert's tenth problemwas to determine whether a given polynomialDiophantine equationwith integer coefficients has an integer solution. In 1970,Yuri Matiyasevichproved that thiscould not be done.
The need tobreakGerman codes inWorld War IIled to advances incryptographyandtheoretical computer science, with thefirst programmable digital electronic computerbeing developed at England'sBletchley Parkwith the guidance ofAlan Turingand his seminal work,On Computable Numbers.[16]TheCold Warmeant that cryptography remained important, with fundamental advances such aspublic-key cryptographybeing developed in the following decades. Thetelecommunications industryhas also motivated advances in discrete mathematics, particularly in graph theory andinformation theory.Formal verificationof statements in logic has been necessary forsoftware developmentofsafety-critical systems, and advances inautomated theorem provinghave been driven by this need.
Computational geometryhas been an important part of thecomputer graphicsincorporated into modernvideo gamesandcomputer-aided designtools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, andcombinatorics, are important in addressing the challengingbioinformaticsproblems associated with understanding thetree of life.[17]
Currently, one of the most famous open problems in theoretical computer science is theP = NP problem, which involves the relationship between thecomplexity classesPandNP. TheClay Mathematics Institutehas offered a $1 millionUSDprize for the first correct proof, along with prizes forsix other mathematical problems.[18]
|
https://en.wikipedia.org/wiki/Discrete_mathematics
|
Adouble entendre[note 1](pluraldouble entendres) is afigure of speechor a particular way of wording that is devised to have a double meaning, one of which is typically obvious, and the other often conveys a message that would be too socially unacceptable, or offensive to state directly.[2][3]
A double entendre may exploitpunsorword playto convey the second meaning. Double entendres generally rely on multiple meanings of words, or different interpretations of the same primary meaning. They often exploitambiguityand may be used to introduce it deliberately in a text. Sometimes ahomophonecan be used as a pun. When three or more meanings have been constructed, this is known as a "triple entendre," etc.[4]
According to theMerriam-WebsterUnabridged Dictionary and the Oxford English Dictionary, the expression comes from the rare and obsoleteFrenchexpression, which literally meant "double meaning" and was used in the senses of "double understanding" or "ambiguity" but acquired its current suggestive twist in English after being first used in 1673 byJohn Dryden.[5][6][7]The phrase has not been used in French for centuries and would be ungrammatical in modern French. No exact equivalent exists in French, whose similar expressions (mot/expression à)double ententeand (mot/expression à)double sensdo not have the suggestiveness of the English expression.[6]
A person who is unfamiliar with the hidden or alternative meaning of a sentence may fail to detect itsinnuendos, aside from observing that others find it humorous for no apparent reason. Innuendo is often used insitcomsand othercomedywhere some in the audience may enjoy the humour while being oblivious to its secondary meaning.
A triple entendre is a phrase that can be understood in any of three ways, such as in the back cover of the 1981RushalbumMoving Pictureswhich shows amoving companycarrying paintings out of a building while people are shown being emotionally moved and a film crew makes a "moving picture" of the whole scene.[8]
InHomer'sTheOdyssey, whenOdysseusis captured by theCyclopsPolyphemus, he tells the Cyclops that his name is Oudeis (ουδεις = No-one). When Odysseus attacks the Cyclops later that night and stabs him in the eye, the Cyclops runs out of his cave, yelling to the other Cyclopes that "No-one has hurt me!," which leads the other Cyclopes to take no action under the assumption that Polyphemus blinded himself by accident, allowing Odysseus and his men to escape.
Some of the earliest double entendres are found in the 10th-centuryExeter Book, orCodex exoniensis, atExeter CathedralinEngland. In addition to the various poems and stories found in the book, there are also numerous riddles. Answers to the riddles were not included in the book, but have been found by scholars over the years. Some of these employ double entendres, such asRiddle 25:
I am a wondrous creature: to women a thing of joyful expectation, to close-lying companions serviceable. I harm no city-dweller excepting my slayer alone. My stem is erect and tall––I stand up in bed––and whiskery somewhere down below. Sometimes a countryman's quite comely daughter will venture, bumptious girl, to get a grip on me. She assaults my red self and seizes my head and clenches me in a cramped place. She will soon feel the effect of her encounter with me, this curl-locked woman who squeezes me. Her eye will be wet.
This suggests the answer "apenis" but also has the innocent answer "anonion."[9]
Examples of sexualinnuendoand double-entendre occur inGeoffrey Chaucer'sThe Canterbury Tales(14th century), in which theWife of Bath's Taleis laden with double entendres. These include her use of the word "queynte" (modern spelling "quaint") to describe domestic duties while also alluding to genitalia ("queynte" being at the time an alternate form of "cunt," a term for thevulva).
The title ofSir Thomas More's 1516 fictional workUtopiais a double entendre because of thepunbetween twoGreek-derived words that would have identical pronunciation. Spelled as it is, or especially spelled as "Outopia," the title means "no place;"[10]meanwhile spelled as "Eutopia," with the same English pronunciation,[11]it would mean "good place."
Shakespeare frequently used double entendres in his plays.Sir Toby BelchinTwelfth Nightsays ofSir Andrew'shair, that "it hangs likeflaxon adistaff; and I hope to see a housewife take thee between her legs and spin it off";the NurseinRomeo and Julietsays that her husband had toldJulietwhen she was learning to walk that "Yea, dost thou fall upon thy face? Thou wilt fall backward when thou hast more wit;" or is told the time byMercutio: "for the bawdy hand of the dial is now upon theprickof noon;" and inHamlet,Hamletpublicly tormentsOpheliawith a series of sexual puns, including "country matters" (similar to "cunt"). The title of Shakespeare's playMuch Ado About Nothingis a pun on the Elizabethan use of "no-thing" as slang forvagina.[12][13]
In the UK, starting in the 19th century,Victorian moralitydisallowed sexual innuendo in the theatre as being unpleasant, particularly for the ladies in the audience. Inmusic hallsongs, on the other hand, this kind of innuendo remained very popular.Marie Lloyd's song "She Sits Among the Cabbages and Peas" is an example of this. In the early 20th century restrictions were placed on lewdness in performances, including some prosecutions. It was the job of theLord Chamberlainto examine the scripts of all plays for indecency. Nevertheless, some comedians still continued to get away with it.Max Millerhad two books of jokes, a white book and a blue book, and would ask his audience which book they wanted to hear stories from. If they chose the blue book, he could blame the audience for the lewdness to follow (in the UK, "blue"colloquiallyrefers to sexual content, as in "blue jokes," "blue movies" etc.).
In the United States,innuendoand double entendre were only lightly used in radio media until the 1980s when theHoward Stern Showbegan to push the envelope of what was acceptable on theradiothrough use of double entendre and ironies. This garnered so much attention it spawned an entire genre of radio called "shock jockradio" where DJs will push the limits of what is an "acceptable" double entendre to use over-the-air as the Federal Communications Commission has been known to hand out large fines for the use of double entendre on radio if they deem it to be in violation of their standards.[14]
In Britain, innuendo humour began to transfer to radio andcinemain the late 1950s. Particularly significant in this respect were theCarry Onseries of films and theBBCradio seriesRound the Horne; although some ofRound the Horneappeared to be nonsense language, the protagonists were sometimes having "rude" conversations inPolari(gay slang).Round the Hornedepended heavily on innuendo and double entendre, the show's name itself being a triple entendre, a play on the name of its central actorKenneth Horneand those around him, the sailor's expression "going round the horn" (i.e.Cape Horn), and the fact that "horn" is slang for anerection.Spike Milligan, writer ofThe Goon Show, remarked that a lot of "blue" (i.e. sexual) innuendo came from servicemen's jokes, which most of the cast understood (they all had been soldiers) and many of the audience understood, but which passed over the heads of most of the Senior BBC producers and directors, most of whom were "Officer class."[15]
In 1968, the office of theLord Chamberlainceased to have responsibility for censoring liveentertainment, after theTheatres Act 1968. By the 1970s, innuendo had become widely used across much of the British broadcast media, includingsitcomsandradio comedy, such asI'm Sorry I Haven't a Clue. For example, in the 1970s TV comedy seriesAre You Being Served?, Mrs. Slocombe frequently referred to her pet cat as her "pussy," apparently unaware of how easily her statement could be misinterpreted, such as "It's a wonder I'm here at all, you know. My pussy got soakin' wet. I had to dry it out in front of the fire before I left." Someone unfamiliar with sexual slang might find this statement funny simply because of the references to her sodden cat, whereas others would find further humour in the innuendo ("pussy" beingsexual slangforvulva).[16]
Modern comedies, such as the US version ofThe Office, often do not hide the addition of sexual innuendos into the script; for example, main characterMichael Scottoften deploys the phrase "that's what she said" after another character's innocent statement, to turn it retroactively into a sexual pun.[17]
On The Scott Mills Show onBBC Radio 1, listeners are asked to send in clips from radio and TV with double meanings in a humorous context, a feature known as "Innuendo Bingo." Presenters and special guests fill their mouths with water and listen to the clips, and the last person to spit the water out with laughter wins the game.[18][19]
Double entendres are popular in modern movies, as a way to conceal adult humour in a work aimed at general audiences. TheJames Bondfilms are rife with such humour. For example, inTomorrow Never Dies(1997), when Bond is disturbed by the telephone while in bed with a Danish girl, he explains to Moneypenny that he is busy "brushing up on a little Danish". Moneypenny responds by pointing out that Bond was known as "a cunning linguist," a play on the word "cunnilingus." In the final scene ofMoonraker, while Bond is taking Dr Holly Goodhead "round the world one more time," Q says to Sir Frederick Gray, "I think he's attempting re-entry, sir." InThe World Is Not Enough(1999), while in bed with DrChristmas Jones, Bond tells her "I thought Christmas only comes once a year." Other obvious examples includePussy GaloreinGoldfingerandHolly GoodheadinMoonraker. The double entendres of the Bond films were parodied in theAustin Powersseries.
Bawdy double entendres, such as (from the movieSextette) "I'm the kinda girl who works forParamountby day, andFoxall night," and (from the movieMyra Breckinridge) "I feel like a million tonight – but only one at a time," are typical of the comedy writing ofMae West, for her early-career vaudeville performances as well as for her later plays and movies.
There is a long tradition of double entendre songs in American blues music of the 1920s and 1930s, calledhokum.
Double entendres are very common in the titles and lyrics of pop songs, such as "If I Said You Had a Beautiful Body Would You Hold It Against Me" by The Bellamy Brothers. By one interpretation, the person being talked to is asked if they would be offended; by the other interpretation, they are asked if they would press their body against the person doing the talking.[20]
Singer and songwriterBob Dylan, in his somewhat controversial song "Rainy Day Women No. 12 & 35," repeats the line "Everybody must get stoned." In context, the phrase refers to the punishment ofexecutionbystoning, but on another level it means to "get stoned," a common slang term for being high oncannabis. In their song "Big Balls" on the albumDirty Deeds Done Dirt Cheap,AC/DCthe chorus "we've got big balls" can be read as referring to eitherformal dancesortesticles. During the 1940s,Benny Bellrecorded several "party records" that contained double entendre including "Everybody Wants My Fanny."[21]
Double entendres can arise in the replies to inquiries. The clichéd phrase "Said the actress to the bishop," as well as "that's what she said," can be used to remark on a sentence said by another which was not intended as a double entendre but nevertheless could be interpreted with a double meaning, one of them sexual.[22]
|
https://en.wikipedia.org/wiki/Double_entendre
|
Inlogic,equivocation("calling two different things by the same name") is aninformal fallacyresulting from the use of a particular word or expression in multiplesenseswithin an argument.[1][2]
It is a type ofambiguitythat stems from a phrase having two or more distinctmeanings, not from the grammar or structure of the sentence.[1]
Equivocation in asyllogism(a chain of reasoning) produces afallacy of four terms(quaternio terminorum). Below is an example:
The first instance of "man" implies the entire human species, while the second implies just those who are male.
Equivocation can also be used to conflate two positions which share similarities, one modest and easy to defend and one much more controversial. The arguer advances the controversial position, but when challenged, they insist that they are only advancing the more modest position.
|
https://en.wikipedia.org/wiki/Equivocation
|
Essentially contested conceptrefers to abstract terms or phrases that provide value judgements which can be contested. The termessentially contested conceptwas proposed to facilitate an understanding of the different interpretations of abstractions that havequalitativeandevaluativenotions[1]—such as "art", "philanthropy",[2]"power",[3]and "social justice". The notion of essentially contested concept was proposed in 1956 byWalter Bryce Gallie.[4][5]
Essentially contested concepts involve agreed on abstractconceptsor phrases, but whose usage and interpretation is disputable by others (e.g. "social justice", "This picture is a work of art").[4][6]They are abstract concepts whose “proper use of which inevitably involves endless disputes about their proper uses on the part of their users",[7]and these disputes "cannot be settled by appeal toempirical evidence, linguistic usage, or the canons of logic alone".[8]Usually, essentially contested concepts are found in the social sciences where confusion arises due to experts using terminology inconsistently and often failing to specify the relationship between an abstracttermand themeaningof that term.[9]
For example, in historical studies, it has been observed that there are no particular standards for historical topics such as religion, art, science, democracy, and social justice as these are by their nature 'essentially contested' fields, such that they require diverse tools particular to each field beforehand in order to interpret topics from those subjects. When scholars talk about "religion", "art", "science" democracy" etc, there is no one definition of such terms that are generally accepted, and thus are essentially contested by default among scholars themselves.[10]
Although Gallie's term is widely used to denote imprecise use oftechnical terminology, it has a far more specific application; although the notion could be misleadingly and evasively used to justify "agreeing to disagree",[11]the term offers something more valuable:
The disputes that attend an essentially contested concept are driven by substantive disagreements over a range of different, entirely reasonable (although perhaps mistaken) interpretations of a mutually-agreed-uponarchetypicalnotion, such as the legal precept "treat like cases alike; and treat different cases differently", with "each party [continuing] to defend its case with what it claims to be convincing arguments, evidence and other forms of justification".[13]
Gallie speaks of how "This picture is painted inoils" can be successfully contested if the work is actually painted intempera;[14]while "This picture is a work of art" may meet strong opposition due to disputes over what "work of art" denotes. He suggests three avenues whereby one might resolve such disputes:
Otherwise, the dispute probably centres onpolysemy.[15]Here, a number of critical questions must be asked:
Barry Clarke suggested that, in order to determine whether a particular dispute was a consequence of truepolysemyorinadvertenthomonymy, one should seek to "locate the source of the dispute"; and in doing so, one might find that the source was "within the concept itself", or "[within] some underlying non-conceptual disagreement between the contestants".[17]
Clarke drew attention to the substantial differences between the expressions "essentially contested" and "essentially contestable", that were being extensively used within the literature as if they were interchangeable.
Clarke argued that to state that a concept is merely "contested" is to "attribute significance to the contest rather than to the concept". Yet, to state that a concept is "contestable" (rather than "merely contested") is to "attribute some part of any contest to the concept"; namely, "to claim that some feature or property of the concept makes it polysemantic, and that [from this] the concept contains some internal conflict of ideas"; and it's this state of affairs that provides the "essentially contestable concept" with its "inherent potential [for] generating disputes".[18]
In 1956 Gallie proposed a set of seven conditions for the existence of an essentially contested concept.[19]Gallie was very specific about the limits of his enterprise: it dealt exclusively with abstract, qualitative notions, such asart,religion,science,democracy, andsocial justice[20](and, if Gallie's choices are contrasted with negatively regarded concepts such asevil,disease,superstition, etc., it is clear that the concepts he chose were exclusively positively regarded).
Freeden remarks that "not all essentially contested concepts signify valued achievements; they may equally signify disapproved and denigrated phenomena,"[21]and Gerring[22]asks us to imagine just how difficult it would be to "[try] to craft definitions of slavery, fascism, terrorism, or genocide without recourse to 'pejorative' attributes."
These features distinguish Gallie's "essentially contested concepts" from others, "which can be shown, as a result of analysis or experiment, to be radically confused";[23]or, as Gray[24]would have it, they are the features that relate to the task of distinguishing the "general words, which really denote an essentially contested concept" from those other "general words, whose uses conceal a diversity of distinguishable concepts."
The following are extensions of Gallie's original seven features that have been made by various scholars from across multiple disciplines:
Scholars such asH. L. A. Hart,John Rawls,Ronald Dworkin, andSteven Lukeshave variously embellished Gallie's proposal by arguing that certain of the difficulties encountered with Gallie's proposition may be due to the unintendedconflationof two separate domains associated with the termconcept:
In essence, Hart (1961), Rawls (1971), Dworkin (1972), and Lukes (1974) distinguished between the "unity" of a notion and the "multiplicity" of its possible instantiations. From their work it is easy to understand the issue as one of determining whether there is a single notion that has a number of different instantiations, or whether there is more than one notion, each of which is reflected in a differentusage.
In a section of his 1972 article inThe New York Review of Books, Dworkin used the example of "fairness" to isolate and elaborate the difference between aconcept(suum cuique) and itsconception(various instantiations, for exampleutilitarian ethics).[37]
He supposes that he has instructed his children not to treat others "unfairly" and asks us to recognize that, whilst he would have undoubtedly had particular "examples" (of the sorts of conduct he was intending to discourage) in mind at the time he spoke to his children, whatever it was that hemeantwhen he issued such instructions was not confined to those "examples" alone, for two reasons:
Dworkin argues that this admission of error would not entail any "change" to his original instructions, because the truemeaningof his instructions was that "[he] meant the family to be guided by theconceptof fairness, not by any specificconceptionof fairness [that he] might have had in mind". Therefore, he argues, his instructions do, in fact, "cover" this new case.
Exploring what he considers to be the "crucial distinction" between the overallconceptof "fairness" and some particular, and specificconceptionof "fairness", he asks us to imagine a group whose members share the view that certain acts areunfair.[38]The members of this group "agree on a great number of standard cases of unfairness and use these as benchmarks against which to test other, more controversial cases". In these circumstances, says Dworkin, "the group has aconceptof unfairness, and its members may appeal to that concept in moral instruction or argument."[39]However, the members may still disagree over many of these "controversial cases"; and differences of this sort indicate that membershave, oract upon, entirely different theories of why and how each of the "standard cases" are, in fact, genuine acts of "unfairness". And, because each considers that certain principles "[which] must be relied upon to show that a particular division or attribution is unfair" are more "fundamental" than certain other principles, it can be said that members of the group have differentconceptionsof "fairness".
Consequently, those responsible for giving "instructions", and those responsible for setting "standards" of "fairness", in this community may be doing one of two things:
It is important to recognize that rather than it just being a case of delivering two different instructions; it is a case of delivering two differentkindsof instruction:
As a consequence, according to Dworkin, whenever an appeal is made to "fairness", a moral issue is raised; and, whenever a conception of "fairness" is laid down, an attempt is being made to answer that moral issue.
Whilst Gallie's expression "essentially contested concepts" precisely denotes those "essentially questionable and corrigible concepts" which "are permanently and essentially subject to revision and question",[42]close examination of the wide and varied and imprecise applications of Gallie's term subsequent to 1956, by those who have ascribed their own literal meaning to Gallie's term without ever consulting Gallie's work, have led many philosophers to conclude that "essentially disputed concepts" would have been far better choice for Gallie's meaning, for at least three reasons:
Jeremy Waldron's research has revealed that Gallie's notionhas "run wild" in the law review literature over the ensuing 60 yearsand is, now, being widely used to denote something like "very hotly contested, with no resolution in sight",[46]due to an entirely mistaken view[47]that theessentialin Gallie's term is an "intensifier", when, in fact, "[Gallie's] term 'essential' refers to the location of the disagreement or indeterminacy; it is contestation at the core, not just at the borderlines or penumbra of a concept".[48]Yet, according to Gallie, is also clear that:
|
https://en.wikipedia.org/wiki/Essentially_contested_concept
|
Afallacyis the use ofinvalidor otherwise faultyreasoningin the construction of anargument[1][2]that may appear to be well-reasoned if unnoticed. The term was introduced in the Western intellectual tradition by theAristotelianDe Sophisticis Elenchis.[3]
Fallacies may be committed intentionally tomanipulateorpersuadebydeception, unintentionally because of human limitations such ascarelessness,cognitive or social biasesandignorance, or potentially due to the limitations of language and understanding of language. These delineations include not only the ignorance of the rightreasoning standardbut also the ignorance of relevant properties of thecontext. For instance, thesoundnessoflegal argumentsdepends on the context in which they are made.[4]
Fallacies are commonly divided into "formal" and "informal". Aformal fallacyis a flaw in the structure of adeductiveargumentthat renders the argument invalid, while aninformal fallacyoriginates in an error in reasoning other than an improperlogical form.[5]Arguments containing informal fallacies may be formallyvalid, but still fallacious.[3]
A special case is amathematical fallacy, an intentionally invalidmathematical proofwith a concealed, or subtle, error. Mathematical fallacies are typically crafted and exhibited for educational purposes, usually taking the form of false proofs of obviouscontradictions.[6]
Fallacies are types of erroneous reasoning that render argumentslogically unsound.[7]According to The New Handbook of Cognitive Therapy Techniques, they include "unsubstantiated assertions that are often delivered with a conviction that makes them sound as though they are proven facts".[8]Informal fallacies, in particular, are frequently found in mass media such as television and newspapers.[9]Understanding fallacies may allow one to recognize them in either one's own or others' writing. Avoiding fallacies may help improve one's ability to produce sound arguments.[10]
It can be difficult to evaluate whether an argument is fallacious, as arguments exist along a continuum of soundness and an argument that has several stages or parts might have some sound sections and some fallacious ones.[11]Moreover, whether a specific argument is fallacious often depends on the content rather than the form of the argument. An example is aprobabilistically validinstance of the formally invalid argument form ofdenying the antecedentoraffirming the consequent.[12]Thus, "fallacious arguments usually have the deceptive appearance of being good arguments,[13]because for most fallacious instances of an argument form, a similar but non-fallacious instance can be found". Evaluating an instance of an argument as fallacious is therefore often a matter of evaluating the context of the argument.
Recognizing fallacies in everyday arguments may be difficult since arguments are often embedded inrhetoricalpatterns that obscure the logical connections between statements. Informal fallacies may also exploit theemotional, intellectual, orpsychologicalweaknesses of the audience. Recognizing fallacies can develop reasoning skills to expose the weaker links between premises and conclusions to better discern between what appears to be true and what is true.
Argumentation theoryprovides a different approach to understanding and classifying fallacies. In thepragma-dialectical theory, for instance, an argument is regarded as an interactive protocol between individuals who attempt to resolve their disagreement on the merits of a case.[14]The protocol consists ofnormative rules of interaction, and violations of these rules are considered fallacies because they frustrate the attempt at resolving the disagreement.
Fallacies are used in place of valid reasoning to communicate a point with the intention to persuade. Examples in themass mediatoday include but are not limited topropaganda,advertisements,politics, newspaper editorials, and opinion-based news shows.[15]
Fallacies are generally classified strictly by either their structure or their content, such as by classifying them asformal fallaciesorinformal fallacies, respectively. The classification of informal fallacies may be subdivided into categories such as linguistic, relevance through omission, relevance through intrusion, and relevance through presumption.[16]Alternatively, fallacies may be classified by the process by which they occur, such asmaterial fallacies(content),verbal fallacies(linguistic), and formal fallacies (error in inference). In turn, material fallacies may be placed into the more general category of informal fallacies. Verbal fallacies may be placed in either formal or informal classifications: Compareequivocation, which is a word- or phrase-basedambiguity, to thefallacy of composition, which is premise- and inference-based ambiguity.[17]
The Greek philosopherAristotle(384–322 BC) was the first to systematize logical errors into a list to make it easier to refute an opponent's thesis and thus win an argument.[18]: 2Aristotle'sSophistical Refutations(De Sophisticis Elenchis) identifies thirteen fallacies. He divided them up into two major types: linguistic fallacies and non-linguistic fallacies, some of which depend on language and others that do not.[19][20]These fallacies are called verbal fallacies and material fallacies, respectively. Amaterial fallacyis an error in what the arguer is talking about, while averbal fallacyis an error in how the arguer is talking. Verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words.[21]An example of a language dependent fallacy is given as a debate as to who in humanity are learners: the wise or the ignorant.[18]: 3A language-independent fallacy is, for example:
Indian logicianstook great pains to identify fallacies in arguments. An influential collection of texts on logic and reason, theNyāya Sūtras, attributed toAksapada Gautama, variously estimated to have been composed between the 6th century BCE and the 2nd century CE, lists in its theory of inference five such reasons used in an argument that was further developed by later logicians.[22][23][24]
English scholar and theologianRichard Whately(1787–1863) defines a fallacy broadly as, "any argument, or apparent argument, which professes to be decisive of the matter at hand, while in reality it is not".[18]: 8
Whately divided fallacies into two groups:logicalandmaterial. According to Whately, logical fallacies are arguments where the conclusion does not follow from the premises. Material fallacies are not logical errors because the conclusion follows from the premises. He then divided the logical group into two groups: purely logical and semi-logical. The semi-logical group included all of Aristotle'ssophismsexceptignoratio elenchi,petitio principii, andnon causa pro causa, which are in the material group.[25]
Other famous methods of classifying fallacies are those ofFrancis BaconandJ. S. Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False Appearances), which summarize the various kinds of mistakes to which the human intellect is prone. J. S. Mill discussed the subject in book five of his Logic, andJeremy Bentham'sBook of Fallacies(1824) contains valuable remarks.
A formal fallacy, deductive fallacy,logical fallacyornon sequitur(Latinfor "it does not follow") is a flaw in the structure of adeductiveargumentthat renders the argumentinvalid. The flaw can be expressed in the standard system of logic.[1]Such an argument is always considered to be wrong.
The presence of the formal fallacy does not imply anything about the argument'spremisesor its conclusion. Both may actually be true or may even be more probable as a result of the argument, but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described.
Even non-deductive arguments can be said to be fallacious: for example, aninductiveargument that incorrectly applies principles of probability orcausality. But "since deductive arguments depend on formal properties and inductive arguments don't, formal fallacies apply only to deductive arguments".[5]
Alogical formsuch as "AandB" is independent of any particular conjunction of meaningful propositions. Logical form alone can guarantee that, given true premises, a true conclusion must follow. However, formal logic makes no such guarantee if any premise is false; the conclusion can be either true or false. Any formal error or logical fallacy similarly invalidates the deductive guarantee. Both the argument and all its premises must be true for a conclusion to be true.
The termnon sequiturdenotes a general formal fallacy, often meaning one that does not belong to any named subclass of formal fallacies, likeaffirming the consequent.
Anecological fallacyis committed when one draws an inference from data based on the premise that qualities observed for groups necessarily hold for individuals; for example, "if countries with more Protestants tend to have higher suicide rates, then Protestants must be more likely to commit suicide".[26]
Theobservational interpretation fallacyis a cognitive bias that occurs exclusively in the medical field, leading to the mistaken interpretation of observed associations as causal relationships, negatively impacting medical guidelines, clinical decisions, and healthcare practices, potentially compromising patient safety.[27]
Maarten Boudry[28]and others[29]have argued that formal, deductive fallacies rarely occur in real life and that arguments that would be fallacious in formally deductive terms are not necessarily so when context and prior probabilities are taken into account, thus making the argument defeasible and/or inductive. Boudry coined the termfallacy fork.[28]For a given fallacy, one must either characterize it by means of a deductiveargumentation scheme, which rarely applies (the first prong of the fork), or one must relax definitions and add nuance to take the actual intent and context of the argument into account (the other prong of the fork).[28]To argue, for example, that one became nauseated after eating a mushroom because the mushroom was poisonous could be an example of thepost hoc ergo propter hocfallacy.[28]
In contrast to a formal fallacy, an informal fallacy originates from a reasoning error other than a flaw in the logical form of the argument.[5]Adeductive argumentcontaining an informal fallacy may be formallyvalid,[3]but still remain rationally unpersuasive. Nevertheless, informal fallacies apply to both deductive and non-deductive arguments.
Though the form of the argument may be relevant, fallacies of this type are "types of mistakes in reasoning that arise from the mishandling of thecontentof the propositions constituting the argument".[30]
A special subclass of the informal fallacies is the set offaulty generalizations, also known as inductive fallacies. Here, the most important issue concerns inductive strength or methodology (for example,statistical inference). In the absence of sufficient evidence, drawing conclusions based on induction isunwarrantedand fallacious. With the backing of sufficient amounts of the right type ofempirical evidence, however, the conclusions may become warranted and convincing (at which point the arguments are no longer considered fallacious).[31]
Hasty generalizationis described as making assumptions about a whole group or range of cases based on asamplethat is inadequate (usually because it is atypical or just too small).
Stereotypes about people ("frat boys are drunkards", "grad students are nerdy", "women don't enjoy sports", etc.) are common examples of the principle.
Hasty generalization often follows a pattern such as:
While never a valid logical deduction, if such an inference can be made on statistical grounds, it may nonetheless be convincing. This is because with enough empirical evidence, the generalization is no longer a hasty one.
Thefallacies of relevanceare a broad class of informal fallacies, generically represented bymissing the point: presenting an argument that may besoundbut fails to address the issue in question.
Anargument from silenceis a faulty conclusion that is drawn based on the absence of evidence rather than on the presence of evidence.
The post hoc fallacy assumes that because B comes after A, A caused B. It gets its name from the Latin phrase "post hoc, ergo propter hoc", which translates as "after this, therefore because of this".
Sometimes one event really does cause another one that comes later—for example, if one registers for a class and their name later appears on the roll, it's true that the first event caused the one that came later. But sometimes two events that seem related in time are not really related as cause and event. That is,temporal correlation does not necessarily entail causation. For example, if one eats a sandwich and then gets food poisoning, that does not necessarily mean the sandwich caused the food poisoning. Something else eaten earlier might have caused the food poisoning.
For an argument to be aslippery slopetype of argument, it must meet the requirements of thatargumentation scheme. A slippery slope argument originates from a conversation or debate in which two actors take turns. It usually originates from one actor giving advice on a decision or act. Along the way, the actor must make additional choices on similar matters through which the actor enters the ‘grey area’ of the slippery slope. At this point, the actor potentially loses control over the direction of the arguments, thus leading to a ‘fatal’ outcome.[32]
Such an argument is built up according to the following argumentation scheme: initial premise, sequential premise, indeterminacy premise, control premise, loss of control premise, catastrophic outcome premise, and conclusion. Slippery slope arguments may be defeated by asking critical questions or giving counterarguments.[33]
There are several reasons for a slippery slope to be fallacious: for example, the argument is going too far into the future, it is a too complex argument whose structure is hard to identify, or the argument makes emotional appeals.[34]
It may be that a slippery slope is not necessarily fallacious if context is taken into account and there is an effort to assess plausibility.[35]
Informally known as the "apples and oranges" fallacy, afalse analogyuses unsound comparisons.[36]
Thestraw manfallacy refers to the refutation of a standpoint in an argument that was never proposed. The fallacy usually occurs in the presentation of an opponent's standpoint as more extreme, distorted, or simplistic than it actually is. Compared to criticizing the opponent's actual standpoint, this allows the arguer to offer a seeming refutation of what is, however, not the actual standpoint.[37]Such an argument involves two arguers, with one criticizing the other's perspective.[38]The reason for the straw man argument to be fallacious originates from the problem of how to deal with natural discourse. The opponent's argument is not reflected by the arguments that are proposed by the speaker.[39]
Some of the fallacies described above may be committed in the context of measurement.
Wheremathematical fallaciesare subtle mistakes in reasoning leading to invalid mathematical proofs, measurement fallacies are unwarranted inferential leaps involved in the extrapolation of raw data to a measurement-based value claim. The ancient Greek SophistProtagoraswas one of the first thinkers to propose that humans can generate reliable measurements through his "human-measure" principle and the practice ofdissoi logoi(arguing multiple sides of an issue).[40][41]This history helps explain why measurement fallacies are informed byinformal logicandargumentation theory.
The increasing availability and circulation ofbig dataare driving a proliferation of new metrics for scholarly authority,[42][43]and there is lively discussion regarding the relative usefulness of such metrics for measuring the value of knowledge production in the context of an "information tsunami".[44]
For example,anchoringfallacies can occur when unwarranted weight is given to data generated by metrics that the arguers themselves acknowledge are flawed. For example, the limitations of thejournal impact factor(JIF) are well documented,[45]and even JIF pioneer Eugene Garfield notes that, "while citation data create new tools for analyses of research performance, it should be stressed that they supplement rather than replace other quantitative and qualitative indicators".[46]To the extent that arguers jettison the acknowledged limitations of JIF-generated data in evaluative judgments or leave behind Garfield's "supplement rather than replace" caveat, they commit anchoring fallacies.
Theobservational interpretation fallacyis thecognitive biaswhere association identified in observational studies are misinterpreted ascausal relationships.
Anaturalistic fallacycan occur, for example, in the case of sheer quantity metrics based on the premise "more is better"[44]or, in the case of developmental assessment in the field of psychology, "higher is better".[47]
Afalse analogyoccurs when claims are supported by unsound comparisons between data points. For example, theScopusandWeb of Sciencebibliographic databases have difficulty distinguishing between citations of scholarly work that are arms-length endorsements, ceremonial citations, or negative citations (indicating the citing author withholds endorsement of the cited work).[42]Hence, measurement-based value claims premised on the uniform quality of all citations may be questioned on false analogy grounds.
As another example, consider theFaculty Scholarly Productivity Indexof Academic Analytics. This tool purports to measure overall faculty productivity, yet it does not capture data based on citations in books. This creates a possibility that low productivity measurements using the tool commitargument from silencefallacies, to the extent that such measurements are supported by the absence of book citation data.
Ecological fallaciescan be committed when one measures the scholarly productivity of a sub-group of individuals (e.g. "Puerto Rican" faculty) via reference to aggregate data about a larger and different group (e.g., "Hispanic" faculty).[48]
Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic debate, a conversation among friends, political discourse, advertising, or comedic purposes, the arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than offering relevant evidence, that the conclusion is true.
Examples of this include the speaker or writer:[49]
In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies ofamphiboly, for instance, to make ironic statements;Gary LarsonandScott Adamsemployed fallacious reasoning in many of their cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching students how to be persuasive by means of a whole host of informal and formal fallacies.[50]
When someone uses logical fallacies intentionally to mislead in academic, political, or other high-stakes contexts, the breach of trust calls into questionthe authority and intellectual integrity of that person.[51]
According to the pragmatic theory,[52]a fallacy can be either a heuristic error or a ploy used intentionally to unfairly win an argument. There are always two parties to an argument containing a fallacy: the perpetrator and the intended victim.
The dialogue framework required to support the pragmatic theory of fallacy is built on the presumption that argumentative dialogue has both an adversarial component and a collaborative component. A dialogue has individual goals for each participant as well as shared goals that apply to all participants. A fallacy of the second kind is seen as more than simply a violation of the rule of reasonable dialogue. It is also a deceptive tactic of argumentation based on sleight-of-hand. Aristotle explicitly compared contentious reasoning to unfair fighting in athletic contests. But the roots of the pragmatic theory go back even further in history, to the Sophists. The pragmatic theory finds its roots in the Aristotelian conception of a fallacy as a sophistical refutation but also supports the view that many of the types of arguments traditionally labeled as fallacies are in fact reasonable techniques of argumentation that can be used, in many cases, to support legitimate goals of dialogue. Hence, under the pragmatic approach, each case needs to be analyzed individually to determine whether the argument is fallacious or reasonable.
Concepts
|
https://en.wikipedia.org/wiki/Fallacy
|
Inlogicandphilosophy, aformal fallacy[a]is a pattern ofreasoningrenderedinvalidby a flaw in its logical structure.Propositional logic,[2]for example, is concerned with the meanings of sentences and the relationships between them. It focuses on the role of logical operators, called propositional connectives, in determining whether a sentence is true. An error in the sequence will result in adeductiveargumentthat is invalid. The argument itself could have truepremises, but still have a falseconclusion.[3]Thus, a formal fallacy is afallacyin which deduction goes wrong, and is no longer alogicalprocess. This may not affect the truth of the conclusion, since validity and truth are separate in formal logic.
While a logical argument is anon sequiturif, and only if, it is invalid, the term "non sequitur" typically refers to those types of invalid arguments which do not constitute formal fallacies covered by particular terms (e.g.,affirming the consequent). In other words, in practice, "non sequitur" refers to an unnamed formal fallacy.
A special case is amathematical fallacy, an intentionally invalidmathematical proof, often with the error subtle and somehow concealed. Mathematical fallacies are typically crafted and exhibited for educational purposes, usually taking the form of spurious proofs of obviouscontradictions.
A formal fallacy is contrasted with aninformal fallacywhich may have a validlogical formand yet beunsoundbecause one or morepremisesare false. A formal fallacy, however, may have a true premise, but a false conclusion. The term 'logical fallacy' is sometimes used in everyday conversation, and refers to a formal fallacy.
"Some of your key evidence is missing, incomplete, or even faked! That proves I'm right!"[4]
"The vet can't find any reasonable explanation for why my dog died. See! See! That proves that you poisoned him! There’s no other logical explanation!"[5]
In the strictest sense, a logical fallacy is the incorrect application of a valid logical principle or an application of a nonexistent principle:
This is fallacious.
Indeed, there is no logical principle that states:
An easy way to show the above inference as invalid is by usingVenn diagrams. In logical parlance, the inference is invalid, since under at least one interpretation of the predicates it is not validity preserving.
People often have difficulty applying the rules of logic. For example, a person may say the followingsyllogismis valid, when in fact it is not:
"That creature" may well be a bird, but theconclusiondoes not follow from the premises. Certain other animals also have beaks, for example: anoctopusand asquidboth have beaks, someturtlesandcetaceanshave beaks. Errors of this type occur because people reverse a premise.[6]In this case, "All birds have beaks" is converted to "All beaked animals are birds." The reversed premise is plausible because few people are aware of any instances ofbeaked creaturesbesides birds—but this premise is not the one that was given. In this way, the deductive fallacy is formed by points that may individually appear logical, but when placed together are shown to be incorrect.
In everyday speech, a non sequitur is a statement in which the final part is totally unrelated to the first part, for example:
Life is life and fun is fun, but it's all so quiet when the goldfish die.
|
https://en.wikipedia.org/wiki/Formal_fallacy
|
Thelaw of the instrument,law of the hammer,[1]Maslow's hammer, orgolden hammer[a]is acognitive biasthat involves an over-reliance on a familiar tool.Abraham Maslowwrote in 1966, "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."[2]
The concept is attributed both toMaslow[3]and toAbraham Kaplan,[4][5]although the hammer and nail line may not be original to either of them.
The English expression "a Birmingham screwdriver", meaning a hammer, refers to the practice of using the one tool for all purposes, and predates both Kaplan and Maslow by at least a century.[6]
In 1868, a London periodical,Once a Week, contained this observation: "Give a boy a hammer and chisel; show him how to use them; at once he begins to hack the doorposts, to take off the corners of shutter and window frames, until you teach him a better use for them, and how to keep his activity within bounds."[7]
The first recorded statement of the concept wasAbraham Kaplan's, in 1964: "I call itthe law of the instrument,and it may be formulated as follows: Give a small boy a hammer, and he will find that everything he encounters needs pounding."[8]
In February 1962 Kaplan, then a professor of philosophy, gave a banquet speech at a conference of theAmerican Educational Research Associationthat was being held atUCLA. An article in the June 1962 issue of theJournal of Medical Educationstated that "the highlight of the 3-day meeting ... was to be found in Kaplan's comment on the choice of methods for research. He urged that scientists exercise good judgment in the selection of appropriate methods for their research. Because certain methods happen to be handy, or a given individual has been trained to use a specific method, is no assurance that the method is appropriate for all problems. He cited Kaplan's Law of the Instrument: 'Give a boy a hammer and everything he meets has to be pounded.'"
InThe Conduct of Inquiry: Methodology for Behavioral Science(1964), Kaplan again mentioned the law of the instrument saying, "It comes as no particular surprise to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled." And in a 1964 article forThe Library Quarterly, he again cited the law and commented: "We tend to formulate our problems in such a way as to make it seem that the solutions to those problems demand precisely what we already happen to have at hand."[7]
In a 1963 essay collection,Computer Simulation of Personality: Frontier of Psychological Theory,Silvan Tomkinswrote about "the tendency of jobs to be adapted to tools, rather than adapting tools to jobs". He wrote: "If one has a hammer one tends to look for nails, and if one has a computer with a storage capacity, but no feelings, one is more likely to concern oneself with remembering and with problem solving than with loving and hating." In the same book,Kenneth Mark Colbyexplicitly cited the law, writing: "The First Law of the Instrument states that if you give a boy a hammer, he suddenly finds that everything needs pounding. The computer program may be our current hammer, but it must be tried. One cannot decide from purely armchair considerations whether or not it will be of any value."[7]
Maslow's hammer,popularly phrased as "if all you have is a hammer, everything looks like a nail" and variants thereof, is fromAbraham Maslow'sThe Psychology of Science, published in 1966. Maslow wrote: "I remember seeing an elaborate and complicated automatic washing machine for automobiles that did a beautiful job of washing them. But it could do only that, and everything else that got into its clutches was treated as if it were an automobile to be washed. I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."[7][2]
In 1967,Lee Loevingerof theFederal Communications Commissiondubbed the law "Loevinger's law of irresistible use", and applied it to government: "The political science analogue is that if there is a government agency, this proves something needs regulating."
In 1984, investorWarren Buffettcriticized academic studies of financial markets that made use of inappropriate mathematical approaches:
It isn't necessarily because such studies have any utility; it's simply that the data are there and academicians have worked hard to learn the mathematical skills needed to manipulate them. Once these skills are acquired, it seems sinful not to use them, even if the usage has no utility or negative utility. As a friend said, to a man with a hammer, everything looks like a nail."[7]
In his 2003 book,Of Paradise and Power, historianRobert Kagansuggested a corollary to the law: "When you don't have a hammer, you don't want anything to look like a nail." According to Kagan, the corollary explains the difference in views on the use of military force the United States and Europe have held since the end ofWorld War II.[9]
Some critics of psychiatry claim that the law of the instrument leads to the over-prescription of psychiatric drugs.[10][11]
The notion of agolden hammer,"a familiar technology or concept applied obsessively to many software problems", was introduced intoinformation technologyliterature in 1998 as ananti-pattern: a programming practice to be avoided.[12]
Software developer José M. Gilgado has written that the law is still relevant in the 21st century and is highly applicable to software development. Many times software developers, he observed, "tend to use the same known tools to do a completely new different project with new constraints". He blamed this on "thecomfort zonestate where you don't change anything to avoid risk. The problem with using the same tools every time you can is that you don't have enough arguments to make a choice because you have nothing to compare to and is limiting your knowledge." The solution is "to keep looking for the best possible choice, even if we aren't very familiar with it". This includes using a computer language with which one is unfamiliar. He noted that the productRubyMotionenables developers to "wrap" unknown computer languages in a familiar computer language and thus avoid having to learn them. But Gilgado found this approach inadvisable, because it reinforces the habit of avoiding new tools.[13]
Other forms of narrow-mindedinstrumentalism[14]include:déformation professionnelle, a French term for "looking at things from the point of view of one's profession", andregulatory capture, the tendency for regulators to look at things from the point of view of the profession they are regulating.
|
https://en.wikipedia.org/wiki/Golden_hammer
|
Informal fallaciesare a type of incorrectargumentinnatural language. The source of the error is not just due to theformof the argument, as is the case forformal fallacies, but can also be due to theircontentandcontext. Fallacies, despite being incorrect, usuallyappearto be correct and thereby can seduce people into accepting and using them. These misleading appearances are often connected to various aspects of natural language, such as ambiguous or vague expressions, or the assumption of implicit premises instead of making them explicit.
Traditionally, a great number of informal fallacies have been identified, including thefallacy of equivocation, thefallacy of amphiboly, thefallacies of compositionanddivision, thefalse dilemma, thefallacy of begging the question, thead hominem fallacyand theappeal to ignorance. There is no general agreement as to how the various fallacies are to be grouped into categories. One approach sometimes found in the literature is to distinguish betweenfallacies of ambiguity, which have their root in ambiguous or vague language,fallacies of presumption, which involve false or unjustified premises, andfallacies of relevance, in which the premises are not relevant to the conclusion despite appearances otherwise.
Some approaches incontemporary philosophyconsider additional factors besides content and context. As a result, some arguments traditionally viewed as informal fallacies are not considered fallacious from their perspective, or at least not in all cases. One such framework proposed is thedialogical approach, which conceives arguments as moves in a dialogue-game aimed at rationally persuading the other person. This game is governed by various rules. Fallacies are defined as violations of the dialogue rules impeding the progress of the dialogue. Theepistemic approachconstitutes another framework. Its core idea is that arguments play an epistemic role: they aim to expand our knowledge by providing a bridge from already justified beliefs to not yet justified beliefs. Fallacies are arguments that fall short of this goal by breaking a rule ofepistemic justification. A particular form of the epistemic framework is theBayesianapproach, where the epistemic norms are given by the laws of probability, which our degrees of belief should track.
The study of fallacies aims at providing an account for evaluating and criticizing arguments. This involves both a descriptive account of what constitutes an argument and a normative account of which arguments are good or bad.[1][2]In philosophy, fallacies are usually seen as a form of bad argument and are discussed as such in this article. Another conception, more common in non-scholarly discourse, sees fallacies not as arguments but rather as false yet popular beliefs.[3]
Informal fallacies are a form of incorrectargumentinnatural language.[4]An argument is a series of propositions, called the premises, together with one more proposition, called the conclusion.[5][1]The premises in correct arguments offer either deductive or defeasible support for the conclusion. The source of the error in incorrect arguments can be in the argument'sform,contentorcontext. If the error is only due to theform, it is considered a formal fallacy. Informal fallacies may also include formal errors but they primarily involve errors on the level ofcontentandcontext.[6][7][4][8][9]Informal fallacies are expressed in natural language. This brings with it various difficulties not faced when studying formal fallacies, like ambiguous terms, vague expressions or the premises being assumed implicitly rather than stated explicitly. Traditionally, a great number of informal fallacies have been listed, including thefallacy of equivocation, thefallacy of amphiboly, thefallacies of compositionanddivision, thefalse dilemma, thefallacy of begging the question, thead hominem fallacyor theappeal to ignorance.[10][11]Thetraditional approachtries to account for these fallacies using the concepts and theses discussed in this section.
Only arguments can constitute a fallacy. Various erroneous expressions do not count as fallacies because no argument is made, e.g. because no reasons are cited or no assertion is made.[5]The core idea of arguments is that the premises support the conclusion or that the conclusion follows from the premises.[5][3][1]Deductively valid arguments offer the strongest form of support: for them, it is impossible for the conclusion to be false if all the premises are true. The premises in non-deductive arguments offer a certain degree of support for their conclusion but they are defeasible:[5][12]it is possible for all the premises to be true and the conclusion to be false. Defeasible arguments may still be rationally compelling despite being fallible, so they do not automatically constitute fallacies.[13]The premises of an argument may be seen as the foundation on which the conclusion is built. According to this analogy, two things can go wrong and turn an argument into a fallacy. It could be that the foundation is shaky. But even a solid foundation is not helpful if it does not provide support for the conclusion in question.[5]
Traditionally, fallacies have been defined by three necessary conditions: "a fallacy (i) is an argument, (ii) that is invalid, and (iii) appears to be valid."[3]This definition covers only formal fallacy since it has deductive invalidity as a necessary condition. But it can easily be modified to include informal fallacy by replacing this condition with a more general term, like logical weakness or incorrect reasoning.[3]The last clause includes a psychological element in referring to how the argument appears to the arguer. This clause is used to distinguish genuine fallacies from mere mistakes in reasoning, for example, due to carelessness.[3]The idea is that fallacies have an alluring element that goes beyond mere carelessness by seducing us into committing the mistake, thereby explaining why they are committed in the first place. Some philosophers reject this appeal to appearances because the reference to psychology would complicate the investigation in various ways.[1][3]One issue is that appearances are different for different people. This problem also involves social studies in order to determine which reference group of people to consult for defining fallacies.[1][3]It has been suggested that, at its core, the study of fallacies is about normative aspects of arguments and not about their persuasive force, which is studied by empirical psychology instead.[14][3]
The source of the error in incorrect arguments can lie in the argument'sform,content, orcontext.[7]Theformor structure of an argument is also called "rule of inference". The most well-known rule of inference ismodus ponens, which states that given a premise of the form "Ifpthenq" and another in the form "p", then the conclusion is "q". Rules of inferences are formal because it depends only on the structure or the syntax of the premises and not on their content. So an argument based onmodus ponensis valid no matter what propositional contents are used for "p" and "q".[15]
Thecontentof an argument is found on the level of its propositions: it is what is expressed in them. The source of many informal fallacies is found in a false premise. For example, afalse dilemmais a fallacy based on a false disjunctive claim that oversimplifies reality by excluding viable alternatives.[12][4][16]
Thecontextof an argument refers to the situation in which it is used.[3][1]Based on its context it may be intended to play different roles. One way for an argument to be fallacious is if it fails to perform the role it was supposed to play. Thestrawman fallacy, for example, involves inaccurately attributing a weak position to one's opponent and then refuting this position.[4][1]The argument itself may be valid in that the refutation of the opposed position really is successful. The error is found on the level of the context since the opponent does not hold this position. This dependence on a context means that the same argument may be successful in another context: against an opponent who actually holds the strawman position.[1]
Formal fallaciesaredeductively invalidarguments.[3][6][7][8]They are of special interest to the field offormal logicbut they can only account for a small number of the known fallacies, for example, foraffirming the consequentordenying the antecedent. Many other fallacies used innatural language, e.g. in advertising or in politics, involve informal fallacies.[1][9]For example,false dilemmasorbegging the questionare fallacies despite being deductively valid. They are studied byinformal logic.[17][12]Part of the difficulty in analyzing informal fallacies is due to the fact that their structure is not always clearly expressed in natural language.[1]Sometimes certain keywords like "because", "therefore", "since" or "consequently" indicate which parts of the expression constitute the premises and which part the conclusion. But other times this distinction remains implicit and it is not always obvious which parts should be identified as the premises and the conclusions.[5]Many informal arguments include enthymematic premises: premises that are not explicitly stated but tacitly presumed.[1]In some domestic quarrels and political debates, it is not clear from the outset what the two parties are arguing about and which theses they intend to defend. Sometimes the function of the debate is more to clarify these preliminary points than to advance actual arguments.[1]
The distinction between formal and informal fallacies is opposed bydeductivists, who hold that deductive invalidity is the reason for all fallacies.[18]One way to explain that some fallacies do not seem to be deductively invalid is to hold that they contain various hidden assumptions, as is common for natural language arguments. The idea is that apparent informal fallacies can be turned into formal fallacies by making all these assumptions explicit and thereby revealing the deductive invalidity. The claim that this is possible for all fallacies is not generally accepted.[18][3]One requirement for a formal treatment is translating the arguments in question into the language of formal logic, a process known as "formalization".[19]Often many of the subtleties of natural language have to be ignored in this process. Some bodies of knowledge can be formalized without much residue but others resist formalization. This is also true for many informal fallacies.[19]
The traditional approach to fallacies has received a lot of criticism in contemporary philosophy.[3][9]This criticism is often based on the argument that some of the alleged fallacies are not fallacious at all, or at least not in all cases.[20][1]It is argued that the traditional approach does not fully consider the aim of an argument in a particular context, and a framework is required in order to show that, given their perspective, it is possible to evaluate if an alleged fallacy is actually fallacious in a given case.[3][1]It has been suggested that there may not be one single framework for evaluating all fallacies but only a manifold of ideals according to which a given argument may be good or bad.[3]
Two prominent frameworks which have been proposed are the dialogical and epistemic approaches. The dialogical approach uses a game-theoretic framework to define arguments and sees fallacies as violations of the rules of the game. According to the epistemic approach, it is the goal of arguments to expand our knowledge by providing a bridge from already justified beliefs to not yet justified beliefs. Fallacies are arguments that fall short of this goal by breaking a rule of epistemic justification.
Thedialogical approachsees arguments not simply as a series of premises together with a conclusion but as a speech act within a dialogue that aims to rationally persuade the other person of one's own position.[3][1][9]A prominent version of this approach is defended byDouglas N. Walton. On hisgame-theoreticconception, a dialogue is a game between two players.[3]At the outset, each player is committed to a set of propositions and has a conclusion they intend to prove. A player has won if they are able to persuade the opponent of their own conclusion. In this sense, dialogues can be characterized as "games of persuasion".[1]The players can perform various moves that affect what they are committed to. In this framework, arguments are moves that take the opponent's commitments as premises and lead to the conclusion one is trying to prove.[1]Since this is often not possible directly, various intermediary steps are taken, in which each argument takes a few steps towards one's intended conclusion by proposing an intermediary conclusion for the opponent to accept. This game is governed by various rules determining, among other things, which moves are allowed and when.[1][14]The dialogical approach makes it possible to distinguish between positive arguments, which support one's own conclusion, and negative arguments, which deny the opponent's conclusion.[1]
From this perspective, fallacies are defined as violations of the dialogue rules.[3][14]They are "deceptively bad argument[s] that impede the progress of the dialogue".[3]Thestrawman fallacy, for example, involves inaccurately attributing a weak position to one's opponent[4]and then proving this position to lead to one's own conclusion. This mistake is not logical in the strict sense but dialogical: the conclusion may as well follow from these premises but the opponent does not hold these commitments.[1]In some cases, it varies from game to game whether a certain move counts as a fallacy or not. For example, there are cases where thetu quoque"fallacy" is no fallacy at all.[1]This argument, also known asappeal to hypocrisy, tries to discredit the opponent's argument by claiming that the opponent's behavior is inconsistent with the argument's conclusion.[4]This move does not necessarily break the rules of the dialogue.[1]Instead, it can reveal a weakness in the opponent's position by reflecting their criticism back onto them. This move shifts the burden of proof back to the opponent, thereby strengthening one's own position. But it still constitutes a fallacy if it is only used to evade an argument.[1]
The core idea behind theepistemic approachis that arguments play an epistemic role: they aim to expand our knowledge by providing a bridge from alreadyjustified beliefsto not yet justified beliefs.[9][2]Fallacies are arguments that fall short of this goal by breaking a rule of epistemic justification.[3]This explains, for example, why arguments that are accidentally valid are still somehow flawed: because the arguer himself lacks a good reason to believe the conclusion.[9]
The fallacy ofbegging the question, on this perspective, is a fallacy because it fails to expand our knowledge by providing independent justification for its conclusion. Instead, the conclusion is already assumed in one of its premises.[2][12]A purely logical approach, on the other hand, fails to explain the fallacious nature ofbegging the questionsince the argument is deductively valid.[3]
TheBayesian approachconstitutes a special form of the epistemic approach.[3]Bayesianism interprets degrees of belief assubjective probabilities,[9]i.e. as the degree of certainty of the believer that the believed proposition is true. On this view, reasoning based on an argument can be interpreted as a process of changing one's degrees of belief, usually in response to new incoming information.[21][3]Fallacies are probabilistically weak arguments, i.e. they have a low probability on the Bayesian model.[21][3]Whether an argument constitutes a fallacy or not depends on the credences of the person evaluating the argument. This means that what constitutes a fallacy for one arguer may be a sound argument for another.[3][9]This explains why, when trying to persuade someone, one should take the audience's beliefs into account.[3]But it can also make sense of arguments independent of an audience, unlike the dialogical approach.[9]
This perspective is well suited for explaining why someslippery slopearguments constitute fallacies but others not. Slippery slope arguments argue against a certain proposal based on the fact that this proposal would bring with it a causal chain of events eventually leading to a bad outcome.[4][9]But even if every step in this chain is relatively probable, probabilistic calculus may still reveal that the likelihood of all steps occurring together is quite small.[22][9]In this case, the argument would constitute a fallacy. But slippery slope arguments are rationally justified if the associated probabilities are sufficiently high.[22]
Agreat varietyof informal fallacies have been discussed in academic literature. There is controversy both concerning whether a given argument really constitutes a fallacy in all of its instances and concerning how the different fallacies should be grouped together into categories.[20][3][1]The categorization here follows proposals commonly found in the academic literature in these or similar terms.[11][8]It distinguishes betweenfallacies of ambiguity, which have their root in ambiguous or vague language,fallacies of presumption, which involve false or unjustified premises, andfallacies of relevance, in which the premises are not relevant to the conclusion despite appearances otherwise. Other categorizations have been proposed and some fallacies within this categorization could also be grouped in another category.[10][3]
The source of the error forfallacies of ambiguitylies in the usage of language. This is due to the fact that many terms in natural language have ambiguous or vague meanings.[23][12][8][1]Ambiguous terms have several meanings while vague terms have an unclear meaning. Fallacies of ambiguity often result in merely verbal disputes: the arguing parties have different topics in mind and thereby talk past each other without being aware of this.[23][12]One way to avoid or solve these fallacies is to clarify language, e.g. by committing to definitions and by introducing new distinctions.[24]Such reformulations may include a condensation of the original argument in order to make it easier to spot the erroneous step.[12]
Fallacies of ambiguity are perhaps best exemplified by thefallacy of equivocation, in which the same term appears with two different meanings in the premises,[24][8][3][1]for example:
Equivocations are especially difficult to detect in cases where the two meanings are very closely related to each other.[12]
Thefallacy of amphibolyalso involves ambiguity in meaning, but this ambiguity arises not on the level of individual terms but on the level of the sentence as a whole due to syntactic ambiguity,[24]for example:
On one interpretation, the police are not allowed to drink alcohol. On another, it is now the job of the police to stop other people from drinking. The argument seems plausible on the former reading but fallacious on the latter reading.[3]
Thefallacies of divisionandcompositionare due to ambiguity of the term "all" and similar expressions.[12][8][3]This term has both acollectiveand adistributivemeaning. For example, the sentence "all the citizens are strong enough to resist a tyrant" may mean either that all together are strong enough (collective) or that each one individually is strong enough (distributive).[12]Thefallacy of divisionis committed if one infers from the sentence in the collective sense that one specific individual is strong enough.[12][24]Thefallacy of compositionis committed if one infers from the fact that each member of a group has a property that the group as a whole has this property.[24]For example, "[e]very member of the investigative team was an excellent researcher", therefore "[i]t was an excellent investigative team".[3]Any form of fallaciously transferring a property from the whole to its parts or the other way round belongs to the category offallacies of division and composition, even when linguistic ambiguity is not the cause.
Fallacies of presumptioninvolve a false or unjustified premise but are often valid otherwise.[16][8]This problematic premise can take different forms and the belief in it can be caused in different ways, corresponding to the various sub-categories in this field. These fallacies include thenaturalistic fallacy, themoralistic fallacyand theintentional fallacy.[12][18]
Afalse dilemmais a fallacy of presumption based on a false disjunctive claim that oversimplifies reality by excluding viable alternatives.[16][12]For example, a false dilemma is committed when it is claimed that "Stacey spoke out against capitalism, therefore she must be a communist". One of the options excluded is that Stacey may be neither communist nor capitalist. Our liability to commit false dilemmas may be due to the tendency to simplify reality by ordering it through either-or-statements.[16]
For fallacies of generalization, the false premise is due to an erroneous generalization. In the case of the fallacy ofsweeping generalization, a general rule is applied incorrectly to an exceptional case. For example, "[e]veryone has a right to his or her property. Therefore, even though Jones had been declared insane, you had no right to take his weapon away."[16]: 147The generalization, in this case, ignores that insanity is an exceptional case to which the general rights of property do not unrestrictedly apply.Hasty generalization, on the other hand, involves the converse mistake of drawing a universal conclusion based on a small number of instances.[16][8][20]For example, "I've met two people in Nicaragua so far, and they were both nice to me. So, all people I will meet in Nicaragua will be nice to me".[4]
Begging the questionis a form ofcircular reasoningin which the conclusion is already assumed in the premises.[16][12][8][3][1]Because of this, the premises are unable to provide independent support for the conclusion. For example, the statement "Green is the best color because it is the greenest of all colors", offers no independent reason besides the initial assumption for its conclusion. Detecting this fallacy can be difficult when a complex argument with many sub-arguments is involved, resulting in a large circle.[12]
Fallacies of relevanceinvolve premises that are not relevant to the conclusion despite appearances otherwise.[12][8]They may succeed in persuading the audience nonetheless due to being emotionally loaded (for example: by playing on prejudice, pity or fear).[26]
Ad hominemarguments constitute an important class among the fallacies of relevance. In them, the arguer tries to attack a thesis by attacking the person pronouncing this thesis instead of attacking the thesis itself.[26][12][8][20][1]Rejecting a theory in physics because its author is Jewish, which was common in theGerman physics community in the early 1930s, is an example of the ad hominem fallacy. But not all ad hominem arguments constitute fallacies. It is a common and reasonable practice in court, for example, to defend oneself against an accusation by casting doubt on the reliability of the witnesses. The difference between fallacious and justified ad hominem arguments depends on the relevancy of the character of the attacked person to the thesis in question. The author's cultural heritage seems to have very little relevance in most cases for theories in physics, but the reliability of a witness in court is highly relevant for whether one is justified in believing their testimony.Whataboutismis a special form of the ad hominem fallacy that attempts to discredit an opponent's position by charging them withhypocrisywithout directly refuting or disproving their argument.[27][28][29]It is particularly associated with contemporaryRussian propaganda.[30][31][32]
Appeal to ignoranceis another fallacy due to irrelevance.[26]It is based on the premise that there is no proof for a certain claim. From this premise, the conclusion is drawn that this claim must therefore be false. For example, "Nobody has ever proved to me there's a God, so I know there is no God".[4]Another version of theappeal to ignoranceconcludes from the absence of proof against a claim that this claim must be true.
Arguments from analogyare also susceptible tofallacies of relevance. Ananalogyis a comparison between two objects based on similarity.[33][12]Arguments from analogyinvolve inferences from information about a known object (the source) to the features of an unknown object (the target) based on the similarity between the two objects.[34]Arguments from analogy have the following form:ais similar tobandahas featureF, thereforebprobably also has featureF.[33][35]The soundness of such arguments depends on the relevance of this similarity to the inferred feature.[36][12]Without this relevance, the argument constitutes a faulty orfalse analogy, for example: "If a child gets a new toy he or she will want to play with it; So, if a nation gets new weapons, it will want to use them".[3]
|
https://en.wikipedia.org/wiki/Informal_fallacy
|
Pleonasm(/ˈpliː.əˌnæzəm/; fromAncient Greekπλεονασμόςpleonasmós, fromπλέονpléon'to be in excess')[1][2]isredundancyin linguistic expression, such as in "black darkness," "burning fire," "the man he said,"[3]or "vibrating with motion." It is a manifestation oftautologyby traditionalrhetoricalcriteria.[4]Pleonasm may also be used for emphasis, or because the phrase has become established in a certain form. Tautology and pleonasm are not consistently differentiated in literature.[5]
Most often,pleonasmis understood to mean a word or phrase which is useless,clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use ofidiom. It can aid in achieving a specific linguistic effect, be it social, poetic or literary. Pleonasm sometimes serves the same function as rhetorical repetition—it can be used to reinforce an idea, contention or question, rendering writing clearer and easier to understand. Pleonasm can serve as aredundancy check; if a word is unknown, misunderstood, misheard, or if the medium of communication is poor—a static-filled radio transmission or sloppy handwriting—pleonastic phrases can help ensure that the meaning is communicated even if some of the words are lost.[citation needed]
Some pleonastic phrases are part of a language'sidiom, liketuna fish,chain mailandsafe haveninAmerican English. They are so common that their use is unremarkable for native speakers, although in many cases the redundancy can be dropped with no loss of meaning.
When expressing possibility, English speakers often use potentially pleonastic expressions such asIt might be possibleorperhaps it's possible, where both terms (verbmightor adverbperhapsalong with the adjectivepossible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. Others, however, use this expression only to indicate a distinction betweenontologicalpossibility andepistemicpossibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (inlogicalterms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibilityper seis far less widespread among speakers of most[citation needed]other languages (except in Spanish; see examples); rather, almost all speakers of those languages use one term in a single expression:[dubious–discuss]
In asatellite-framedlanguage like English,verb phrasescontainingparticlesthat denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into").
Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void", "each and every" arelegal doubletsthat are part oflegally operative languagethat is often drafted into legal documents. A classic example of such usage was that by theLord Chancellorat the time (1864),Lord Westbury, in the English case ofex parteGorely,[6]when he described a phrase in an Act as "redundant and pleonastic". This type of usage may be favored in certain contexts. However, it may also be disfavored when used gratuitously to portray false erudition, obfuscate, or otherwise introduce verbiage, especially in disciplines where imprecision may introduce ambiguities (such as the natural sciences).[7]
Examples fromBaroque,Mannerist, andVictorianprovide a counterpoint toStrunk's advocacy of concise writing:
There are various kinds of pleonasm, includingbilingual tautological expressions,syntactic pleonasm,semantic pleonasmandmorphological pleonasm:
A bilingual tautological expression is a phrase that combines words that mean the same thing in two different languages.[8]: 138An example of a bilingual tautological expression is theYiddishexpressionמים אחרונים וואַסערmayim akhroynem vaser. It literally means "water last water" and refers to "water for washing the hands after meal, grace water".[8]: 138Its first element,mayim, derives from theHebrewמים ['majim] "water". Its second element,vaser, derives from theMiddle High Germanwordvaser"water".
According toGhil'ad Zuckermann, Yiddish abounds with both bilingual tautological compounds and bilingual tautological first names.[8]: 138
The following are examples of bilingual tautological compounds in Yiddish:
The following are examples of bilingual tautological first names in Yiddish:
Examples occurring in English-language contexts include:
Syntacticpleonasm occurs when thegrammarof a language makes certainfunction wordsoptional.[citation needed]For example, consider the followingEnglishsentences:
In this construction, theconjunctionthatis optional when joining a sentence to averbphrase withknow. Both sentences are grammatically correct, but the wordthatis pleonastic in this case. By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use ofthatmakes clear that the present speaker is making an indirect rather than a direct quotation, such that he is not imputing particular words to the person he describes as having made an assertion; the demonstrative adjectivethatalso does not fit such an example. Also, some writers may use "that" for technical clarity reasons.[9]In some languages, such as French, the word is not optional and should therefore not be considered pleonastic.
The same phenomenon occurs inSpanishwith subject pronouns. Since Spanish is anull-subject language, which allows subject pronouns to be deleted when understood, the following sentences mean the same:
In this case, the pronounyo('I') is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone orintention—this depends onpragmaticsrather than grammar). Such differing butsyntacticallyequivalent constructions, in many languages, may also indicate a difference inregister.
The process of deleting pronouns is calledpro-dropping, and it also happens in many other languages, such asKorean,Japanese,Hungarian,Latin,Italian,Portuguese,Swahili,Slavic languages, and theLao language.
In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic (ordummy pronoun) is used; only the first sentence in the following pair is acceptable English:
In this example the pleonastic "it" fills the subject function, but it contributes no meaning to the sentence. The second sentence, which omits the pleonasticitis marked as ungrammatical although no meaning is lost by the omission.[10]Elements such as "it" or "there", serving as empty subject markers, are also called (syntactic)expletives, or dummy pronouns. Compare:
The pleonasticne(ne pléonastique), expressing uncertainty in formalFrench, works as follows:
Two more striking examples of French pleonastic construction areaujourd'huiandQu'est-ce que c'est?.
The wordaujourd'hui/au jour d'huiis translated as 'today', but originally means "on the day of today" since the now obsoletehuimeans "today". The expressionau jour d'aujourd'hui(translated as "on the day of today") is common in spoken language and demonstrates that the original construction ofaujourd'huiis lost. It is considered a pleonasm.
The phraseQu'est-ce que c'est?meaning 'What's that?' or 'What is it?', while literally, it means "What is it that it is?".
There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."[11][12]
WhenRobert Southsaid, "It is a pleonasm, a figure usual inScripture, by a multiplicity of expressions to signify one notable thing",[13]he was observing theBiblical Hebrewpoetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and was written using oral patterning, which has many pleonasms. In particular, very many verses of thePsalmsare split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language as distinct from spoken language were not as well-developed as they are today when the books making up theOld Testamentwere written.[14][15]See alsoparallelism (rhetoric).
This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", fromPeter Gabriel's "Mercy Street").
Semantic pleonasm is a question more ofstyleandusagethan of grammar.[16]Linguists usually call thisredundancyto avoid confusion with syntactic pleonasm, a more important phenomenon fortheoretical linguistics. It usually takes one of two forms: Overlap or prolixity.
Overlap: One word's semantic component is subsumed by the other:
Prolixity: A phrase may have words which add nothing, or nothing logical or relevant, to the meaning.
An expression like "tuna fish", however, might elicit one of many possible responses, such as:
In some cases, the redundancy in meaning occurs at the syntactic level above the word, such as at the phrase level:
The redundancy of these two well-known statements is deliberate, forhumorouseffect. (SeeYogi Berra#"Yogi-isms".) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago"—the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. The latter humorous quote above about not making predictions—byYogi Berra—is not really a pleonasm, but rather anironicplay on words.
Alternatively it could be an analogy between predict and guess.
However, "It'sdéjà vuall over again" could mean that there was earlier anotherdéjà vuof the same event or idea, which has now arisen for a third time; or that the speaker had very recently experienced adéjà vuof a different idea.
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes), can also be inherited by one language from the influence of another and are not pleonasms in the more critical sense but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question.Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly:
All of these constructions originate from the application ofIrish Gaelicgrammatical rules to the English dialect spoken, in varying particular forms, throughout the island.
Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humor, or other intentional purposes, such as:
The latter of these is a result of Yiddish influences on modern English, especiallyEast CoastUS English.
Sometimes editors and grammatical stylists will use "pleonasm" to describe simple wordiness. This phenomenon is also calledprolixityorlogorrhea. Compare:
or even:
The reader or hearer does not have to be told that loud music has a sound, and in a newspaper headline or other abbreviated prose can even be counted upon to infer that "burglary" is a proxy for "sound of the burglary" and that the music necessarily must have been loud to drown it out, unless the burglary was relatively quiet (this is not a trivial issue, as it may affect the legal culpability of the person who played the music); the word "loud" may imply that the music should have been played quietly if at all. Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be properly regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying.
Prolixity is also used to obfuscate, confuse, or euphemize and is not necessarily redundant or pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shell shock) and "pre-owned vehicle" (used car) are bothtumideuphemisms but are not redundant. Redundant forms, however, are especially common in business, political, and academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible."
In contrast to redundancy, anoxymoronresults when two seemingly contradictory words are adjoined.
Redundancies sometimes take the form of foreign words whose meaning is repeated in the context:
These sentences use phrases which mean, respectively, "the the restaurant restaurant", "the the tar tar", "with in juice sauce" and so on. However, many times these redundancies are necessary—especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant. (If they understand Italian and English it might, if spoken, be misinterpreted as a generic reference and not aproper noun, leading the hearer to ask "Which ristorante do you mean?"—such confusions are common in richly bilingual areas likeMontrealor theAmerican Southwestwhenmixing phrases from two languages.) But avoiding the redundancy of the Spanish phrase in the second example would only leave an awkward alternative: "La Brea pits are fascinating".
Most people find it best not to drop articles when using proper nouns made from foreign languages:
However, there are some exceptions to this, for example:
This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc. where the article can—some would saymust—be present where it would otherwise be "forbidden":
Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., theSahara Desert—"Sahara" is an English approximation of the word for "deserts" in Arabic). "TheLos Angeles Angels" professional baseball team is literally "the The Angels Angels". A supposed extreme example isTorpenhow HillinCumbria, where some of the elements in the name likely mean "hill".[citation needed]See theList of tautological place namesfor many more examples.
The wordtsetsemeans "fly" in theTswana language, aBantu languagespoken inBotswanaandSouth Africa. This word is the root of the English name for abiting flyfound inAfrica, thetsetse fly.
Acronyms and initialisms can also form the basis for redundancies; this is known humorously asRAS syndrome(for Redundant Acronym Syndrome syndrome). In all the examples that follow, the word after the acronym repeats a word represented in the acronym. The full redundant phrase is stated in the parentheses that follow each example:
(SeeRAS syndromefor many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number" for "Social Insurance Number number" [sic] is a similar common phrase in Canada.) But redundant acronyms are more common with technical (e.g., computer) terms where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM".
Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb):
Or, a classic example from Latin:
The words need not be etymologically related, but simply conceptually, to be considered an example of cognate object:
Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form ispolyptoton, the stylistic repetition of the same word or words derived from the same root:
As with cognate objects, these constructions are not redundant because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror".)
|
https://en.wikipedia.org/wiki/Pleonasm
|
Self-referenceis a concept that involves referring to oneself or one's own attributes, characteristics, or actions. It can occur inlanguage,logic,mathematics,philosophy, and other fields.
Innaturalorformal languages, self-reference occurs when asentence, idea orformularefers to itself. The reference may be expressed either directly—through some intermediate sentence or formula—or by means of someencoding.
In philosophy, self-reference also refers to the ability of a subject to speak of or refer to itself, that is, to have the kind of thought expressed by the first person nominative singular pronoun"I"in English.
Self-reference is studied and has applications in mathematics, philosophy,computer programming,second-order cybernetics, andlinguistics, as well asin humor. Self-referential statements are sometimesparadoxical, and can also be consideredrecursive.
In classicalphilosophy,paradoxeswere created by self-referential concepts such as theomnipotence paradoxof asking if it was possible for a being to exist so powerful that it could create a stone that it could not lift. TheEpimenides paradox, 'All Cretans are liars' when uttered by an ancient Greek Cretan was one of the first recorded versions. Contemporary philosophy sometimes employs the same technique to demonstrate that a supposed concept is meaningless or ill-defined.[2]
Inmathematicsandcomputability theory, self-reference (also known asimpredicativity) is the key concept in proving limitations of many systems.Gödel's theoremuses it to show that no formalconsistentsystem of mathematics can ever contain all possible mathematical truths, because it cannot prove some truths about its own structure.The halting problemequivalent, in computation theory, shows that there is always some task that a computer cannot perform, namely reasoning about itself. These proofs relate to a long tradition of mathematical paradoxes such asRussell's paradoxandBerry's paradox, and ultimately to classical philosophical paradoxes.
Ingame theory, undefined behaviors can occur where two players must model each other's mental states and behaviors, leading to infinite regress.
Incomputer programming, self-reference occurs inreflection, where a program can read or modify its own instructions like any other data.[3]Numerous programming languages support reflection to some extent with varying degrees of expressiveness. Additionally, self-reference is seen inrecursion(related to the mathematicalrecurrence relation) infunctional programming, where a code structure refers back to itself during computation.[4]'Taming' self-reference from potentially paradoxical concepts into well-behaved recursions has been one of the great successes ofcomputer science, and is now used routinely in, for example, writingcompilersusing the 'meta-language'ML. Using a compiler to compile itself is known asbootstrapping.Self-modifying codeis possible to write (programs which operate on themselves), both withassemblerand with functional languages such asLisp, but is generally discouraged in real-world programming. Computing hardware makes fundamental use of self-reference inflip-flops, the basic units of digital memory, which convert potentially paradoxical logical self-relations into memory by expanding their terms over time. Thinking in terms of self-reference is a pervasive part of programmer culture, with many programs and acronyms named self-referentially as a form of humor, such asGNU('GNU's not Unix') andPINE('Pine is not Elm'). TheGNU Hurdis named for a pair of mutually self-referential acronyms.
Tupper's self-referential formulais a mathematical curiosity which plots an image of its own formula.
Self-reference occurs inliteratureandfilmwhen an author refers to his or her own work in the context of the work itself. Examples includeMiguel de Cervantes'Don Quixote,Shakespeare'sA Midsummer Night's Dream,The TempestandTwelfth Night,Denis Diderot'sJacques le fataliste et son maître,Italo Calvino'sIf on a winter's night a traveler, many stories byNikolai Gogol,Lost in the FunhousebyJohn Barth,Luigi Pirandello'sSix Characters in Search of an Author,Federico Fellini's8½andBryan Forbes'sThe L-Shaped Room. Speculative fiction writerSamuel R. Delanymakes use of this in his novelsNovaandDhalgren. In the former, Katin (a space-faring novelist) is wary of a long-standing curse wherein a novelist dies before completing any given work.Novaends mid-sentence, thus lending credence to the curse and the realization that the novelist is the author of the story; likewise, throughoutDhalgren, Delany has a protagonist simply named The Kid (or Kidd, in some sections), whose life and work are mirror images of themselves and of the novel itself. In the sci-fi spoof filmSpaceballs, DirectorMel Brooksincludes a scene wherein the evil characters are viewing a VHS copy of their own story, which shows them watching themselves "watching themselves", ad infinitum. Perhaps the earliest example is inHomer'sIliad, whereHelen of Troylaments: "for generations still unborn/we will live in song" (appearing in the song itself).[5]
Self-reference in art is closely related to the concepts ofbreaking the fourth wallandmeta-reference, which often involve self-reference. The short stories ofJorge Luis Borgesplay with self-reference and related paradoxes in many ways.Samuel Beckett'sKrapp's Last Tapeconsists entirely of the protagonist listening to and making recordings of himself, mostly about other recordings. During the 1990s and 2000s filmic self-reference was a popular part of therubber realitymovement, notably inCharlie Kaufman's filmsBeing John MalkovichandAdaptation, the latter pushing the concept arguably to its breaking point as it attempts to portray its own creation, in adramatized versionof theDroste effect.
Variouscreation mythsinvoke self-reference to solve the problem of what created the creator. For example, theEgyptian creation mythhas a god swallowing his own semen to create himself. TheOuroborosis a mythical dragon which eats itself.
TheQuranincludes numerous instances of self-referentiality.[6][7]
ThesurrealistpainterRené Magritteis famous for his self-referential works. His paintingThe Treachery of Images, includes the words "this is not a pipe", the truth of which depends entirely on whether the wordceci(in English, "this") refers to the pipe depicted—or to the painting or the word or sentence itself.[8]M.C. Escher's art also contains many self-referential concepts such as hands drawing themselves.
A word that describes itself is called anautological word(orautonym). This generally applies to adjectives, for examplesesquipedalian(i.e. "sesquipedalian" is a sesquipedalian word), but can also apply to other parts of speech, such asTLA, as a three-letterabbreviationfor "three-letter abbreviation".
A sentence which inventories its own letters and punctuation marks is called anautogram.
There is a special case of meta-sentence in which the content of the sentence in the metalanguage and the content of the sentence in the object language are the same. Such a sentence is referring to itself. However some meta-sentences of this type can lead to paradoxes. "This is a sentence." can be considered to be a self-referential meta-sentence which is obviously true. However "This sentence is false" is a meta-sentence which leads to a self-referentialparadox. Such sentences can lead to problems, for example, in law, where statements bringing laws into existence can contradict one another or themselves.Kurt Gödelclaimed to have found such aloopholein theUnited States Constitutionat his citizenship ceremony.
Self-reference occasionally occurs in themediawhen it is required to write about itself, for example theBBCreporting on job cuts at the BBC. Notable encyclopedias may be required to feature articles about themselves, such as Wikipedia's article onWikipedia.
Fumblerulesare a list of rules of good grammar and writing, demonstrated through sentences that violate those very rules, such as "Avoid cliches like the plague" and "Don't use no double negatives". The term was coined in a published list of such rules byWilliam Safire.[9][10]
Circular definitionis a type of self-reference in which the definition of a term or concept includes the term or concept itself, either explicitly or implicitly. Circular definitions are consideredfallaciousbecause they only define a term in terms of itself.[11]This type of self-reference may be useful inargumentation, but can result in a lack of clarity in communication.
The adverb "hereby" is used in a self-referential way, for example in the statement "I hereby declare you husband and wife."[12]
Several constitutions contain self-referential clauses defining how the constitution itself may be amended.[15]An example isArticle Five of the United States Constitution.
|
https://en.wikipedia.org/wiki/Self_reference
|
Semanticsis the study of linguisticmeaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction betweensense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts withsyntax, which studies the rules that dictate how to creategrammaticallycorrect sentences, andpragmatics, which investigates how people use language in communication.
Lexical semanticsis the branch of semantics that studiesword meaning. It examines whether words have one or several meanings and in whatlexical relationsthey stand to one another. Phrasal semantics studies the meaning of sentences by exploring the phenomenon ofcompositionalityor how new meanings can be created by arranging words.Formal semanticsrelies onlogicandmathematicsto provide precise frameworks of the relation between language and meaning.Cognitive semanticsexamines meaning from a psychological perspective and assumes a close relation between language ability and the conceptual structures used to understand the world. Other branches of semantics includeconceptual semantics,computational semantics, and cultural semantics.
Theories of meaning are general explanations of the nature of meaning and how expressions are endowed with it. According toreferential theories, the meaning of an expression is the part of reality to which it points. Ideational theories identify meaning withmental stateslike the ideas that an expression evokes in the minds of language users. According to causal theories, meaning is determined by causes and effects, whichbehavioristsemantics analyzes in terms of stimulus and response. Further theories of meaning includetruth-conditional semantics,verificationisttheories, theuse theory, andinferentialist semantics.
The study of semantic phenomena began during antiquity but was not recognized as an independent field of inquiry until the 19th century. Semantics is relevant to the fields of formal logic,computer science, andpsychology.
Semantics is the study ofmeaninginlanguages.[1]It is a systematic inquiry that examines what linguistic meaning is and how it arises.[2]It investigates howexpressionsare built up from different layers of constituents, likemorphemes,words,clauses,sentences, andtexts, and how the meanings of the constituents affect one another.[3]Semantics can focus on a specific language, like English, but in its widest sense, it investigates meaning structures relevant to all languages.[4][a][b]As a descriptive discipline, it aims to determine how meaning works withoutprescribingwhat meaning people should associate with particular expressions.[7]Some of its key questions are "How do the meanings of words combine to create the meanings of sentences?", "How do meanings relate to the minds of language users, and to the things words refer to?", and "What is the connection between what a word means, and the contexts in which it is used?".[8]The main disciplines engaged in semantics arelinguistics,semiotics, andphilosophy.[9]Besides its meaning as a field of inquiry, semantics can also refer to theories within this field, liketruth-conditional semantics,[10]and to the meaning of particular expressions, like the semantics of the wordfairy.[11]
As a field of inquiry, semantics has both an internal and an external side. The internal side is interested in the connection between words and themental phenomenathey evoke, like ideas and conceptual representations. The external side examines how words refer to objects in the world and under what conditions a sentence is true.[12]
Many related disciplines investigate language and meaning. Semantics contrasts with other subfields of linguistics focused on distinct aspects of language.Phonologystudies the different types ofsoundsused in languages and how sounds are connected to form words whilesyntaxexaminesthe rulesthat dictate how to arrange words to create sentences. These divisions are reflected in the fact that it is possible to master some aspects of a language while lacking others, like when a person knows how to pronounce a word without knowing its meaning.[13]As a subfield of semiotics, semantics has a more narrow focus on meaning in language while semiotics studies both linguistic and non-linguistic signs. Semiotics investigates additional topics like the meaning ofnon-verbal communication, conventionalsymbols, and natural signs independent of human interaction. Examples includenoddingto signal agreement, stripes on a uniform signifyingrank, and the presence ofvulturesindicating a nearby animal carcass.[14]
Semantics further contrasts withpragmatics, which is interested in how people use language in communication.[15]An expression like "That's what I'm talking about" can mean many things depending on who says it and in what situation. Semantics is interested in the possible meanings of expressions: what they can and cannot mean in general. In this regard, it is sometimes defined as the study of context-independent meaning. Pragmatics examines which of these possible meanings is relevant in a particular case. In contrast to semantics, it is interested in actual performance rather than in the generallinguistic competenceunderlying this performance.[16]This includes the topic of additional meaning that can be inferred even though it is not literally expressed, like what it means if a speaker remains silent on a certain topic.[17]A closely related distinction by the semioticianCharles W. Morrisholds that semantics studies the relation between words and the world, pragmatics examines the relation between words and users, and syntax focuses on the relation between different words.[18]
Semantics is related toetymology, which studies how words and their meanings changed in the course of history.[7]Another connected field ishermeneutics, which is the art or science of interpretation and is concerned with the rightmethodologyof interpreting text in general andscripturein particular.[19]Metasemanticsexamines themetaphysicalfoundations of meaning and aims to explain where it comes from or how it arises.[20]
The wordsemanticsoriginated from the Ancient Greek adjectivesemantikos, meaning 'relating to signs', which is a derivative ofsēmeion, the noun for 'sign'. It was initially used formedical symptomsand only later acquired its wider meaning regarding any type of sign, including linguistic signs. The wordsemanticsentered the English language from the French termsemantique, which the linguistMichel Bréalfirst introduced at the end of the 19th century.[21]
Semantics studies meaning in language, which is limited to the meaning of linguistic expressions. It concerns how signs areinterpretedand whatinformationthey contain. An example is the meaning of words provided indictionarydefinitions by giving synonymous expressions or paraphrases, like defining the meaning of the termramasadult male sheep.[22]There are many forms of non-linguistic meaning that are not examined by semantics. Actions and policies can have meaning in relation to the goal they serve. Fields likereligionandspiritualityare interested in themeaning of life, which is about finding a purpose in life or the significance ofexistencein general.[23]
Linguistic meaning can be analyzed on different levels.Word meaningis studied bylexical semanticsand investigates the denotation of individual words. It is often related toconceptsof entities, like how the worddogis associated with the concept of the four-legged domestic animal. Sentence meaning falls into the field of phrasal semantics and concerns the denotation of full sentences. It usually expresses a concept applying to a type of situation, as in the sentence "the dog has ruined my blue skirt".[24]The meaning of a sentence is often referred to as aproposition.[25]Different sentences can express the same proposition, like the English sentence "the tree is green" and the German sentence"der Baum ist grün".[26]Utterance meaning is studied by pragmatics and is about the meaning of an expression on a particular occasion. Sentence meaning and utterance meaning come apart in cases where expressions are used in a non-literal way, as is often the case withirony.[27]
Semantics is primarily interested in the public meaning that expressions have, like the meaning found in general dictionary definitions. Speaker meaning, by contrast, is the private or subjective meaning that individuals associate with expressions. It can diverge from the literal meaning, like when a person associates the wordneedlewith pain or drugs.[28]
Meaning is often analyzed in terms ofsense and reference,[30]also referred to asintension and extensionorconnotationanddenotation.[31]The referent of an expression is the object to which the expression points. The sense of an expression is the way in which it refers to that object or how the object is interpreted. For example, the expressionsmorning starandevening starrefer to the same planet, just like the expressions2 + 2and3 + 1refer to the same number. The meanings of these expressions differ not on the level of reference but on the level of sense.[32]Sense is sometimes understood as a mental phenomenon that helps people identify the objects to which an expression refers.[33]Some semanticists focus primarily on sense or primarily on reference in their analysis of meaning.[34]To grasp the full meaning of an expression, it is usually necessary to understand both to what entities in the world it refers and how it describes them.[35]
The distinction between sense and reference can explainidentity statements, which can be used to show how two expressions with a different sense have the same referent. For instance, the sentence "the morning star is the evening star" is informative and people can learn something from it. The sentence "the morning star is the morning star", by contrast, is an uninformativetautologysince the expressions are identical not only on the level of reference but also on the level of sense.[36]
Compositionalityis a key aspect of how languages construct meaning. It is the idea that the meaning of a complex expression is a function of the meanings of its parts. It is possible to understand the meaning of the sentence "Zuzana owns a dog" by understanding what the wordsZuzana,owns,aanddogmean and how they are combined.[37]In this regard, the meaning of complex expressions like sentences is different from word meaning since it is normally not possible to deduce what a word means by looking at its letters and one needs to consult a dictionary instead.[38]
Compositionality is often used to explain how people can formulate and understand an almost infinite number of meanings even though the amount of words and cognitive resources is finite. Many sentences that people read are sentences that they have never seen before and they are nonetheless able to understand them.[37]
When interpreted in a strong sense, the principle of compositionality states that the meaning of a complex expression is not just affected by its parts and how they are combined but fully determined this way. It is controversial whether this claim is correct or whether additional aspects influence meaning. For example, context may affect the meaning of expressions;idiomslike "kick the bucket" carryfigurative or non-literalmeanings that are not directly reducible to the meanings of their parts.[37]
Truthis a property of statements that accurately present the world and true statements are in accord withreality. Whether a statement is true usually depends on the relation between the statement and the rest of the world. Thetruth conditionsof a statement are the way the world needs to be for the statement to be true. For example, it belongs to the truth conditions of the sentence "it is raining outside" that raindrops are falling from the sky. The sentence is true if it is used in a situation in which the truth conditions are fulfilled, i.e., if there is actually rain outside.[39]
Truth conditions play a central role in semantics and some theories rely exclusively on truth conditions to analyze meaning. To understand a statement usually implies that one has an idea about the conditions under which it would be true. This can happen even if one does not know whether the conditions are fulfilled.[39]
Thesemiotic triangle, also called the triangle of meaning, is a model used to explain the relation between language, language users, and the world, represented in the model asSymbol,Thought or Reference, andReferent. The symbol is a linguisticsignifier, either in its spoken or written form. The central idea of the model is that there is no direct relation between a linguistic expression and what it refers to, as was assumed by earlier dyadic models. This is expressed in the diagram by the dotted line between symbol and referent.[40]
The model holds instead that the relation between the two is mediated through a third component. For example, the termapplestands for a type of fruit but there is no direct connection between this string of letters and the corresponding physical object. The relation is only established indirectly through the mind of the language user. When they see the symbol, it evokes a mental image or a concept, which establishes the connection to the physical object. This process is only possible if the language user learned the meaning of the symbol before. The meaning of a specific symbol is governed by the conventions of a particular language. The same symbol may refer to one object in one language, to another object in a different language, and to no object in another language.[40]
Many other concepts are used to describe semantic phenomena. Thesemantic roleof an expression is the function it fulfills in a sentence. In the sentence "the boy kicked the ball", the boy has the role of the agent who performs an action. The ball is the theme or patient of this action as something that does not act itself but is involved in or affected by the action. The same entity can be both agent and patient, like when someone cuts themselves. An entity has the semantic role of an instrument if it is used to perform the action, for instance, when cutting something with a knife then the knife is the instrument. For some sentences, no action is described but an experience takes place, like when a girl sees a bird. In this case, the girl has the role of the experiencer. Other common semantic roles are location, source, goal, beneficiary, and stimulus.[41]
Lexical relations describe how words stand to one another. Two words aresynonymsif they share the same or a very similar meaning, likecarandautomobileorbuyandpurchase.Antonymshave opposite meanings, such as the contrast betweenaliveanddeadorfastandslow.[c]One term is ahyponymof another term if the meaning of the first term is included in the meaning of the second term. For example,antis a hyponym ofinsect. Aprototypeis a hyponym that has characteristic features of the type it belongs to. Arobinis a prototype of abirdbut apenguinis not. Two words with the same pronunciation arehomophoneslikeflourandflower, while two words with the same spelling arehomonyms, like a bank of a river in contrast to a bank as a financial institution.[d]Hyponymy is closely related tomeronymy, which describes the relation between part and whole. For instance,wheelis a meronym ofcar.[44]An expression isambiguousif it has more than one possible meaning. In some cases, it is possible todisambiguatethem to discern the intended meaning.[45]The termpolysemyis used if the different meanings are closely related to one another, like the meanings of the wordhead, which can refer to the topmost part of the human body or the top-ranking person in an organization.[44]
The meaning of words can often be subdivided into meaning components calledsemantic features. The wordhorsehas the semantic featureanimatebut lacks the semantic featurehuman. It may not always be possible to fully reconstruct the meaning of a word by identifying all its semantic features.[46]
Asemanticor lexical field is a group of words that are all related to the same activity or subject. For instance, the semantic field ofcookingincludes words likebake,boil,spice, andpan.[47]
Thecontextof an expression refers to the situation or circumstances in which it is used and includes time, location, speaker, and audience. It also encompasses other passages in a text that come before and after it.[48]Context affects the meaning of various expressions, like thedeicticexpressionhereand theanaphoricexpressionshe.[49]
A syntactic environment isextensional or transparentif it is always possible to exchange expressions with the same reference without affecting the truth value of the sentence. For example, the environment of the sentence "the number 8 is even" is extensional because replacing the expression "the number 8" with "the number of planets in theSolar System" does not change its truth value. Forintensional or opaque contexts, this type of substitution is not always possible. For instance, theembedded clausein "Paco believes that the number 8 is even" is intensional since Paco may not know that the number of planets in the solar system is 8.[50]
Semanticists commonly distinguish the language they study, called object language, from the language they use to express their findings, calledmetalanguage. When a professor uses Japanese to teach their student how to interpret the language offirst-order logicthen the language of first-order logic is the object language and Japanese is the metalanguage. The same language may occupy the role of object language and metalanguage at the same time. This is the case inmonolingual English dictionaries, in which both the entry term belonging to the object language and the definition text belonging to the metalanguage are taken from the English language.[51]
Lexical semantics is the sub-field of semantics that studies word meaning.[52]It examines semantic aspects of individual words and thevocabularyas a whole. This includes the study of lexical relations between words, such as whether two terms are synonyms or antonyms.[53]Lexical semantics categorizes words based on semantic features they share and groups them into semantic fields unified by a common subject.[54]This information is used to create taxonomies to organize lexical knowledge, for example, by distinguishing between physical andabstract entitiesand subdividing physical entities intostuffandindividuated entities.[55]Further topics of interest are polysemy, ambiguity, andvagueness.[56]
Lexical semantics is sometimes divided into two complementary approaches:semasiologyandonomasiology. Semasiology starts from words and examines what their meaning is. It is interested in whether words have one or several meanings and how those meanings are related to one another. Instead of going from word to meaning, onomasiology goes from meaning to word. It starts with a concept and examines what names this concept has or how it can be expressed in a particular language.[57]
Some semanticists also include the study of lexical units other than words in the field of lexical semantics.Compound expressionslikebeing under the weatherhave a non-literal meaning that acts as a unit and is not a direct function of its parts. Another topic concerns the meaning of morphemes that make up words, for instance, how negativeprefixeslikein-anddis-affect the meaning of the words they are part of, as ininanimateanddishonest.[58]
Phrasal semantics studies the meaning of sentences. It relies on the principle of compositionality to explore how the meaning of complex expressions arises from the combination of their parts.[59][e]The different parts can be analyzed assubject,predicate, orargument. The subject of a sentence usually refers to a specific entity while the predicate describes a feature of the subject or an event in which the subject participates. Arguments provide additional information to complete the predicate.[61]For example, in the sentence "Mary hit the ball",Maryis the subject,hitis the predicate, andthe ballis an argument.[61]A more fine-grained categorization distinguishes between different semantic roles of words, such as agent, patient, theme, location, source, and goal.[62]
Verbsusually function as predicates and often help to establish connections between different expressions to form a more complex meaning structure. In the expression "Beethoven likes Schubert", the verblikeconnects a liker to the object of their liking.[63]Other sentence parts modify meaning rather than form new connections. For instance, theadjectiveredmodifies the color of another entity in the expressionred car.[64]A further compositional device is variable binding, which is used to determine the reference of a term. For example, the last part of the expression "the woman who likes Beethoven" specifies which woman is meant.[65]Parse treescan be used to show the underlying hierarchy employed to combine the different parts.[66]Various grammatical devices, like thegerundform, also contribute to meaning and are studied by grammatical semantics.[67]
Formal semantics uses formal tools fromlogicandmathematicsto analyze meaning in natural languages.[f]It aims to develop precise logical formalisms to clarify the relation between expressions and their denotation.[69]One of its key tasks is to provide frameworks of how language represents the world, for example, usingontological modelsto show how linguistic expressions map to the entities of that model.[69]A common idea is that words refer to individual objects or groups of objects while sentences relate to events and states. Sentences are mapped to atruth valuebased on whether their description of the world is in correspondence with its ontological model.[70]
Formal semantics further examines how to use formal mechanisms to represent linguistic phenomena such asquantification,intensionality,noun phrases,plurals, mass terms,tense, andmodality.[71]Montague semanticsis an early and influential theory in formal semantics that provides a detailed analysis of how the English language can be represented using mathematical logic. It relies onhigher-order logic,lambda calculus, andtype theoryto show how meaning is created through the combination of expressions belonging to different syntactic categories.[72]
Dynamic semanticsis a subfield of formal semantics that focuses on how information grows over time. According to it, "meaning is context change potential": the meaning of a sentence is not given by the information it contains but by the information change it brings about relative to a context.[73]
Cognitive semantics studies the problem of meaning from a psychological perspective or how the mind of the language user affects meaning. As a subdiscipline ofcognitive linguistics, it sees language as a wide cognitive ability that is closely related to the conceptual structures used to understand and represent the world.[74][g]Cognitive semanticists do not draw a sharp distinction between linguistic knowledge and knowledge of the world and see them instead as interrelated phenomena.[76]They study how the interaction between language and human cognition affects the conceptual organization in very general domains like space, time, causation, and action.[77]The contrast between profile and base is sometimes used to articulate the underlying knowledge structure. The profile of a linguistic expression is the aspect of the knowledge structure that it brings to the foreground while the base is the background that provides the context of this aspect without being at the center of attention.[78]For example, the profile of the wordhypotenuseis a straight line while the base is aright-angled triangleof which the hypotenuse forms a part.[79][h]
Cognitive semantics further compares the conceptual patterns andlinguistic typologiesacross languages and considers to what extent the cognitive conceptual structures of humans are universal or relative to their linguistic background.[81]Another research topic concerns the psychological processes involved in the application of grammar.[82]Other investigated phenomena include categorization, which is understood as a cognitive heuristic to avoid information overload by regarding different entities in the same way,[83]andembodiment, which concerns how the language user's bodily experience affects the meaning of expressions.[84]
Frame semanticsis an important subfield of cognitive semantics.[85]Its central idea is that the meaning of terms cannot be understood in isolation from each other but needs to be analyzed on the background of the conceptual structures they depend on. These structures are made explicit in terms of semantic frames. For example, words like bride, groom, and honeymoon evoke in the mind the frame of marriage.[86]
Conceptual semanticsshares with cognitive semantics the idea of studying linguistic meaning from a psychological perspective by examining how humans conceptualize and experience the world. It holds that meaning is not about the objects to which expressions refer but about the cognitive structure of human concepts that connect thought, perception, and action. Conceptual semantics differs from cognitive semantics by introducing a strict distinction between meaning and syntax and by relying on various formal devices to explore the relation between meaning and cognition.[87]
Computational semanticsexamines how the meaning of natural language expressions can be represented and processed on computers.[88]It often relies on the insights of formal semantics and applies them to problems that can be computationally solved.[89]Some of its key problems include computing the meaning of complex expressions by analyzing their parts, handling ambiguity, vagueness, and context-dependence, and using the extracted information inautomatic reasoning.[90]It forms part ofcomputational linguistics,artificial intelligence, andcognitive science.[88]Its applications includemachine learningandmachine translation.[91]
Cultural semantics studies the relation between linguistic meaning and culture. It compares conceptual structures in different languages and is interested in how meanings evolve and change because of cultural phenomena associated withpolitics, religion, andcustoms.[92]For example, address practices encode cultural values and social hierarchies, as in the difference of politeness of expressions liketuandustedin Spanish orduandSiein German in contrast to English, which lacksthese distinctionsand uses the pronounyouin either case.[93]Closely related fields are intercultural semantics, cross-cultural semantics, and comparative semantics.[94]
Pragmatic semantics studies how the meaning of an expression is shaped by the situation in which it is used. It is based on the idea that communicative meaning is usually context-sensitive and depends on who participates in the exchange, what information they share, and what theirintentionsand background assumptions are. It focuses on communicative actions, of which linguistic expressions only form one part. Some theorists include these topics within the scope of semantics while others consider them part of the distinct discipline of pragmatics.[95]
Theories of meaning explain what meaning is, what meaning an expression has, and how the relation between expression and meaning is established.[96]
Referential theories state that the meaning of an expression is the entity to which it points.[97]The meaning ofsingular termslikenamesis the individual to which they refer. For example, the meaning of the nameGeorge Washingtonis the person with this name.[98]General terms refer not to a single entity but to the set of objects to which this term applies. In this regard, the meaning of the termcatis the set of all cats.[99]Similarly, verbs usually refer to classes of actions or events and adjectives refer to properties of individuals and events.[100]
Simple referential theoriesface problems for meaningful expressions that have no clear referent. Names likePegasusandSanta Claushave meaning even though they do not point to existing entities.[101]Other difficulties concern cases in which different expressions are about the same entity. For instance, the expressionsRoger Bannisterandthe first man to run a four-minute milerefer to the same person but do not mean exactly the same thing.[102]This is particularly relevant when talking about beliefs since a person may understand both expressions without knowing that they point to the same entity.[103]A further problem is given by expressions whose meaning depends on the context, like the deictic termshereandI.[104]
To avoid these problems, referential theories often introduce additional devices. Some identify meaning not directly with objects but with functions that point to objects. This additional level has the advantage of taking the context of an expression into account since the same expression may point to one object in one context and to another object in a different context. For example, the reference of the wordheredepends on the location in which it is used.[105]A closely related approach ispossible worldsemantics, which allows expressions to refer not only to entities in the actual world but also to entities in other possible worlds.[i]According to this view, expressions likethe first man to run a four-minute milerefer to different persons in different worlds. This view can also be used to analyze sentences that talk about what is possible or what is necessary: possibility is what is true in some possible worlds while necessity is what is true in all possible worlds.[107]
Ideational theories, also called mentalist theories, are not primarily interested in the reference of expressions and instead explain meaning in terms of the mental states of language users.[108]One historically influential approach articulated byJohn Lockeholds that expressions stand forideasin the speaker's mind. According to this view, the meaning of the worddogis the idea that people have of dogs. Language is seen as a medium used to transfer ideas from the speaker to the audience. After having learned the same meaning of signs, the speaker can produce a sign that corresponds to the idea in their mind and the perception of this sign evokes the same idea in the mind of the audience.[109]
A closely related theory focuses not directly on ideas but onintentions.[110]This view is particularly associated withPaul Grice, who observed that people usually communicate to cause some reaction in their audience. He held that the meaning of an expression is given by the intended reaction. This means that communication is not just about decoding what the speaker literally said but requires an understanding of their intention or why they said it.[111]For example, telling someone looking for petrol that "there is a garage around the corner" has the meaning that petrol can be obtained there because of the speaker's intention to help. This goes beyond the literal meaning, which has no explicit connection to petrol.[112]
Causal theories hold that the meaning of an expression depends on the causes and effects it has.[113]According tobehavioristsemantics, also referred to as stimulus-response theory, the meaning of an expression is given by the situation that prompts the speaker to use it and the response it provokes in the audience.[114]For instance, the meaning of yelling "Fire!" is given by the presence of an uncontrolled fire and attempts to control it or seek safety.[115]Behaviorist semantics relies on the idea that learning a language consists in adopting behavioral patterns in the form ofstimulus-response pairs.[116]One of its key motivations is to avoid private mental entities and define meaning instead in terms of publicly observable language behavior.[117]
Another causal theory focuses on the meaning of names and holds that a naming event is required to establish the link between name and named entity. This naming event acts as a form of baptism that establishes the first link of a causal chain in which all subsequent uses of the name participate.[118]According to this view, the namePlatorefers to an ancient Greek philosopher because, at some point, he was originally named this way and people kept using this name to refer to him.[119]This view was originally formulated bySaul Kripketo apply to names only but has been extended to cover other types of speech as well.[120]
Truth-conditional semanticsanalyzes the meaning of sentences in terms of their truth conditions. According to this view, to understand a sentence means to know what the world needs to be like for the sentence to be true.[121]Truth conditions can themselves be expressed throughpossible worlds. For example, the sentence "Hillary Clintonwon the2016 American presidential election" is false in the actual world but there are some possible worlds in which it is true.[122]The extension of a sentence can be interpreted as its truth value while its intension is the set of all possible worlds in which it is true.[123]Truth-conditional semantics is closely related toverificationist theories, which introduce the additional idea that there should be some kind of verification procedure to assess whether a sentence is true. They state that the meaning of a sentence consists in the method to verify it or in the circumstances that justify it.[124]For instance, scientific claims often make predictions, which can be used to confirm or disconfirm them usingobservation.[125]According to verificationism, sentences that can neither be verified nor falsified are meaningless.[126]
Theuse theorystates that the meaning of an expression is given by the way it is utilized. This view was first introduced byLudwig Wittgenstein, who understood language as a collection oflanguage games. The meaning of expressions depends on how they are used inside a game and the same expression may have different meanings in different games.[127]Some versions of this theory identify meaning directly with patterns of regular use.[128]Others focus onsocial normsandconventionsby additionally taking into account whether a certain use is considered appropriate in a given society.[129]
Inferentialist semantics, also called conceptual role semantics, holds that the meaning of an expression is given by the role it plays in the premises and conclusions of goodinferences.[130]For example, one can infer from "x is a male sibling" that "x is a brother" and one can infer from "x is a brother" that "x has parents". According to inferentialist semantics, the meaning of the wordbrotheris determined by these and all similar inferences that can be drawn.[131]
Semantics was established as an independent field of inquiry in the 19th century but the study of semantic phenomena began as early as the ancient period as part of philosophy and logic.[132][j]Inancient Greece, Plato (427–347 BCE) explored the relation between names and things in hisdialogueCratylus. It considers the positions of naturalism, which holds that things have their name by nature, and conventionalism, which states that names are related to their referents by customs and conventions among language users.[134]The bookOn InterpretationbyAristotle(384–322 BCE) introduced various conceptual distinctions that greatly influenced subsequent works in semantics. He developed an early form of the semantic triangle by holding that spoken and written words evoke mental concepts, which refer to external things by resembling them. For him, mental concepts are the same for all humans, unlike the conventional words they associate with those concepts.[135]TheStoicsincorporated many of the insights of their predecessors to develop a complex theory of language through the perspective of logic. They discerned different kinds of words by their semantic and syntactic roles, such as the contrast between names, common nouns, and verbs. They also discussed the difference between statements, commands, and prohibitions.[136]
Inancient India, theorthodox schoolofNyayaheld that all names refer to real objects. It explored how words lead to an understanding of the thing meant and what consequence this relation has to the creation of knowledge.[138]Philosophers of the orthodox school ofMīmāṃsādiscussed the relation between the meanings of individual words and full sentences while considering which one is more basic.[139]The bookVākyapadīyabyBhartṛhari(4th–5th century CE) distinguished between different types of words and considered how they can carry different meanings depending on how they are used.[140]Inancient China, theMohistsargued that names play a key role in making distinctions to guide moral behavior.[141]They inspired theSchool of Names, which explored the relation between names and entities while examining how names are required to identify and judge entities.[142]
In the Middle Ages,Augustine of Hippo(354–430) developed a general conception of signs as entities that stand for other entities and convey them to the intellect. He was the first to introduce the distinction between natural and linguistic signs as different types belonging to a common genus.[143]Boethius(480–528) wrote a translation of and various comments on Aristotle's bookOn Interpretation, which popularized its main ideas and inspired reflections on semantic phenomena in thescholastic tradition.[144]An innovation in the semantics ofPeter Abelard(1079–1142) was his interest in propositions or the meaning of sentences in contrast to the focus on the meaning of individual words by many of his predecessors. He further explored the nature ofuniversals, which he understood as mere semantic phenomena ofcommon namescaused by mental abstractions thatdo not refer to any entities.[145]In the Arabic tradition,Ibn Faris(920–1004) identified meaning with the intention of the speaker whileAbu Mansur al-Azhari(895–980) held that meaning resides directly in speech and needs to be extracted through interpretation.[146]
An important topic towards the end of the Middle Ages was the distinction between categorematic andsyncategorematic terms. Categorematic terms have an independent meaning and refer to some part of reality, likehorseandSocrates. Syncategorematic terms lack independent meaning and fulfill other semantic functions, such as modifying or quantifying the meaning of other expressions, like the wordssome,not, andnecessarily.[147]An early version of the causal theory of meaning was proposed byRoger Bacon(c. 1219/20 – c. 1292), who held that things get names similar to how people get names through some kind of initial baptism.[148]His ideas inspired the tradition of thespeculative grammarians, who proposed that there are certain universal structures found in all languages. They arrived at this conclusion by drawing an analogy between the modes of signification on the level of language, the modes of understanding on the level of mind, and the modes of being on the level of reality.[149]
In the early modern period,Thomas Hobbes(1588–1679) distinguished between marks, which people use privately to recall their own thoughts, and signs, which are used publicly to communicate their ideas to others.[150]In theirPort-Royal Logic,Antoine Arnauld(1612–1694) andPierre Nicole(1625–1695) developed an early precursor of the distinction between intension and extension.[151]TheEssay Concerning Human Understandingby John Locke (1632–1704) presented an influential version of the ideational theory of meaning, according to which words stand for ideas and help people communicate by transferring ideas from one mind to another.[152]Gottfried Wilhelm Leibniz(1646–1716) understood language as the mirror of thought and tried to conceive the outlines of auniversal formal languageto express scientific and philosophical truths. This attempt inspired theoristsChristian Wolff(1679–1754),Georg Bernhard Bilfinger(1693–1750), andJohann Heinrich Lambert(1728–1777) to develop the idea of a general science of sign systems.[153]Étienne Bonnot de Condillac(1715–1780) accepted and further developed Leibniz's idea of the linguistic nature of thought. Against Locke, he held that language is involved in the creation of ideas and is not merely a medium to communicate them.[154]
In the 19th century, semantics emerged and solidified as an independent field of inquiry.Christian Karl Reisig(1792–1829) is sometimes credited as the father of semantics since he clarified its concept and scope while also making various contributions to its key ideas.[155]Michel Bréal(1832–1915) followed him in providing a broad conception of the field, for which he coined the French termsémantique.[156]John Stuart Mill(1806–1873) gave great importance to the role of names to refer to things. He distinguished between the connotation and denotation of names and held that propositions are formed by combining names.[157]Charles Sanders Peirce(1839–1914) conceived semiotics as a general theory of signs with several subdisciplines, which were later identified by Charles W. Morris (1901–1979) as syntactics, semantics, and pragmatics. In his pragmatist approach to semantics, Peirce held that the meaning of conceptions consists in the entirety of their practical consequences.[158]The philosophy ofGottlob Frege(1848–1925) contributed to semantics on many different levels. Frege first introduced the distinction between sense and reference, and his development of predicate logic and the principle of compositionality formed the foundation of many subsequent developments in formal semantics.[159]Edmund Husserl(1859–1938) explored meaning from aphenomenologicalperspective by considering the mental acts that endow expressions with meaning. He held that meaning always implies reference to an object and expressions that lack a referent, likegreen is or, are meaningless.[160]
In the 20th century,Alfred Tarski(1901–1983) defined truth in formal languages through hissemantic theory of truth, which was influential in the development of truth-conditional semantics byDonald Davidson(1917–2003).[161]Tarski's studentRichard Montague(1930–1971) formulated a complex formal framework of the semantics of the English language, which was responsible for establishing formal semantics as a major area of research.[162]According tostructural semantics,[k]which was inspired by thestructuralist philosophyofFerdinand de Saussure(1857–1913), language is a complex network of structural relations and the meanings of words are not fixed individually but depend on their position within this network.[164]The theory ofgeneral semanticswas developed byAlfred Korzybski(1879–1950) as an inquiry into how language represents reality and affects human thought.[165]The contributions ofGeorge Lakoff(1941–present) andRonald Langacker(1942–present) provided the foundation of cognitive semantics.[166]Charles J. Fillmore(1929–2014) developed frame semantics as a major approach in this area.[167]The closely related field of conceptual semantics was inaugurated byRay Jackendoff(1945–present).[168]
Logicians study correctreasoningand often developformal languagesto express arguments and assess their correctness.[169]One part of this process is to provide a semantics for a formal language to precisely define what its terms mean. A semantics of a formal language is a set of rules, usually expressed as amathematical function, that assigns meanings to formal language expressions.[170]For example, the language of first-order logic uses lowercase letters forindividual constantsand uppercase letters forpredicates. To express the sentence "Bertie is a dog", the formulaD(b){\displaystyle D(b)}can be used whereb{\displaystyle b}is an individual constant for Bertie andD{\displaystyle D}is a predicate for dog. Classical model-theoretic semantics assigns meaning to these terms by defining aninterpretation functionthat maps individual constants to specific objects and predicates tosetsof objects ortuples. The function mapsb{\displaystyle b}to Bertie andD{\displaystyle D}to the set of all dogs. This way, it is possible to calculate the truth value of the sentence: it is true if Bertie is a member of the set of dogs and false otherwise.[171]
Formal logic aims to determine whether arguments aredeductively valid, that is, whether the premises entail the conclusion.[172]Entailment can be defined in terms of syntax or in terms of semantics. Syntactic entailment, expressed with the symbol⊢{\displaystyle \vdash }, relies onrules of inference, which can be understood as procedures to transform premises and arrive at a conclusion. These procedures only take thelogical formof the premises on the level of syntax into account and ignore what meaning they express. Semantic entailment, expressed with the symbol⊨{\displaystyle \vDash }, looks at the meaning of the premises, in particular, at their truth value. A conclusion follows semantically from a set of premises if the truth of the premises ensures the truth of the conclusion, that is, if any semantic interpretation function that assigns the premises the valuetruealso assigns the conclusion the valuetrue.[173]
In computer science, the semantics of aprogramis how it behaves when a computer runs it. Semantics contrasts with syntax, which is the particular form in which instructions are expressed. The same behavior can usually be described with different forms of syntax. InJavaScript, this is the case for the commandsi += 1andi = i + 1, which are syntactically different expressions to increase the value of the variableiby one. This difference is also reflected in differentprogramming languagessince they rely on different syntax but can usually be employed to create programs with the same behavior on the semantic level.[174]
Static semantics focuses on semantic aspects that affect thecompilationof a program. In particular, it is concerned with detecting errors of syntactically correct programs, such astype errors, which arise when an operation receives an incompatibledata type. This is the case, for instance, if a function performing a numerical calculation is given astringinstead of a number as an argument.[175]Dynamic semantics focuses on the run time behavior of programs, that is, what happens during theexecutionof instructions.[176]The main approaches to dynamic semantics aredenotational,axiomatic, andoperational semantics. Denotational semantics relies on mathematical formalisms to describe the effects of each element of the code. Axiomatic semantics uses deductive logic to analyze which conditions must be in place before and after the execution of a program. Operational semantics interprets the execution of a program as a series of steps, each involving the transition from onestateto another state.[177]
Psychological semantics examines psychological aspects of meaning. It is concerned with how meaning is represented on a cognitive level and what mental processes are involved in understanding and producing language. It further investigates how meaning interacts with other mental processes, such as the relation between language and perceptual experience.[178][l]Other issues concern how people learn new words and relate them to familiar things and concepts, how they infer the meaning of compound expressions they have never heard before, how they resolve ambiguous expressions, and how semantic illusions lead them to misinterpret sentences.[180]
One key topic issemantic memory, which is a form ofgeneral knowledgeof meaning that includes the knowledge of language, concepts, and facts. It contrasts withepisodic memory, which records events that a person experienced in their life. The comprehension of language relies on semantic memory and the information it carries about word meanings.[181]According to a common view, word meanings are stored and processed in relation to their semantic features. The feature comparison model states that sentences like "a robin is a bird" are assessed on a psychological level by comparing the semantic features of the wordrobinwith the semantic features of the wordbird. The assessment process is fast if their semantic features are similar, which is the case if the example is aprototypeof the general category. For atypical examples, as in the sentence "a penguin is a bird", there is less overlap in the semantic features and the psychological process is significantly slower.[182]
|
https://en.wikipedia.org/wiki/Semantics
|
Uncertaintyorincertituderefers to situations involving imperfect or unknowninformation. It applies to predictions of future events, to physical measurements that are already made, or to the unknown, and is particularly relevant fordecision-making. Uncertainty arises inpartially observableorstochasticenvironments, as well as due toignorance,indolence, or both.[1]It arises in any number of fields, includinginsurance,philosophy,physics,statistics,economics, finance,medicine,psychology,sociology,engineering,metrology,meteorology,ecologyandinformation science.
Although the terms are used in various ways among the general public, many specialists indecision theory,statisticsand other quantitative fields have defined uncertainty, risk, and their measurement as:
The lack ofcertainty, a state of limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome.[2]
Uncertainty can be measured through a set of possible states or outcomes whereprobabilitiesare assigned to each possible state or outcome – this also includes the application of aprobability density functionto continuous variables.[3]
In statistics and economics, second-order uncertainty is represented in probability density functions over (first-order) probabilities.[4][5]
Opinions insubjective logic[6]carry this type of uncertainty.
Riskis a state of uncertainty, where some possible outcomes have an undesired effect or significant loss. Measurement of risk includes a set of measured uncertainties, where some possible outcomes are losses, and the magnitudes of those losses. This also includes loss functions over continuous variables.[7][8][9][10]
There is a difference between uncertainty and variability. Uncertainty is quantified by a probability distribution which depends upon knowledge about the likelihood of what the single, true value of the uncertain quantity is. Variability is quantified by a distribution of frequencies of multiple instances of the quantity, derived from observed data.[11]
In economics, in 1921Frank Knightdistinguished uncertainty from risk with uncertainty being lack of knowledge which is immeasurable and impossible to calculate. Because of the absence of clearly defined statistics in most economic decisions where people face uncertainty, he believed that we cannot measure probabilities in such cases; this is now referred to asKnightian uncertainty.[12]
Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.
There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known. It is so fundamental, indeed, that … a known risk will not lead to any reward or special payment at all.
Knight pointed out that the unfavorable outcome of known risks can be insured during the decision-making process because it has a clearly defined expected probability distribution. Unknown risks have no known expected probability distribution, which can lead to extremely risky company decisions.
Other taxonomies of uncertainties and decisions include a broader sense of uncertainty and how it should be approached from an ethics perspective:[14]
There are some things that you know to be true, and others that you know to be false; yet, despite this extensive knowledge that you have, there remain many things whose truth or falsity is not known to you. We say that you are uncertain about them. You are uncertain, to varying degrees, about everything in the future; much of the past is hidden from you; and there is a lot of the present about which you do not have full information. Uncertainty is everywhere and you cannot escape from it.
For example, if it is unknown whether or not it will rain tomorrow, then there is a state of uncertainty. If probabilities are applied to the possible outcomes using weather forecasts or even just acalibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow then there is a risk since there is a 10% chance of rain, and rain would be undesirable. Furthermore, if this is a business event and $100,000 would be lost if it rains, then the risk has been quantified (a 10% chance of losing $100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc.
Some may represent the risk in this example as the "expected opportunity loss" (EOL) or the chance of the loss multiplied by the amount of the loss (10% × $100,000 = $10,000). That is useful if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit. Since many people are willing to buy insurance for many reasons, then clearly the EOL alone is not the perceived value of avoiding the risk.
Quantitative uses of the termsuncertaintyandriskare fairly consistent among fields such asprobability theory,actuarial science, andinformation theory. Some also create new terms without substantially changing the definitions of uncertainty or risk. For example,surprisalis a variation on uncertainty sometimes used ininformation theory. But outside of the more mathematical uses of the term, usage may vary widely. Incognitive psychology, uncertainty can be real, or just a matter of perception, such asexpectations, threats, etc.
Vaguenessis a form of uncertainty where the analyst is unable to clearly differentiate between two different classes, such as 'person of average height' and 'tall person'. This form of vagueness can be modelled by some variation onZadeh'sfuzzy logicorsubjective logic.[15]
Ambiguityis a form of uncertainty where even the possible outcomes have unclear meanings and interpretations. The statement"He returns from the bank"is ambiguous because its interpretation depends on whether the word 'bank' is meant as"the side of a river"or"a financial institution". Ambiguity typically arises in situations where multiple analysts or observers have different interpretations of the same statements.[16]
At the subatomic level, uncertainty may be a fundamental and unavoidable property of the universe. Inquantum mechanics, theHeisenberg uncertainty principleputs limits on how much an observer can ever know about the position and velocity of a particle. This may not just be ignorance of potentially obtainable facts but that there is no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature or if there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows.[17]
The term 'radical uncertainty' was popularised byJohn KayandMervyn Kingin their bookRadical Uncertainty: Decision-Making for an Unknowable Future,published in March 2020. It is distinct from Knightian uncertainty, by whether or not it is 'resolvable'. If uncertainty arises from a lack of knowledge, and that lack of knowledge is resolvable by acquiring knowledge (such as by primary or secondary research) then it is not radical uncertainty. Only when there are no means available to acquire the knowledge which would resolve the uncertainty, is it considered 'radical'.[18][19]
The most commonly used procedure for calculating measurement uncertainty is described in the "Guide to the Expression of Uncertainty in Measurement" (GUM) published byISO. A derived work is for example theNational Institute of Standards and Technology(NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/Citac publication "Quantifying Uncertainty in Analytical Measurement". The uncertainty of the result of a measurement generally consists of several components. The components are regarded asrandom variables, and may be grouped into two categories according to the method used to estimate their numerical values:
By propagating thevariancesof the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is thestandard deviationof a repeated observation.
Inmetrology,physics, andengineering, the uncertainty ormargin of errorof a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted byerror barson a graph, or by the following notations:[citation needed]
In the last notation, parentheses are the concise notation for the ± notation. For example, applying 101⁄2meters in a scientific or engineering application, it could be written10.5 mor10.50 m, by convention meaning accurate towithinone tenth of a meter, or one hundredth. The precision is symmetric around the last digit. In this case it's half a tenth up and half a tenth down, so 10.5 means between 10.45 and 10.55. Thus it isunderstoodthat 10.5 means10.5±0.05, and 10.50 means10.50±0.005, also written10.50(5)and10.500(5)respectively. But if the accuracy is within two tenths, the uncertainty is ± one tenth, and it isrequiredto be explicit:10.5±0.1and10.50±0.01or10.5(1)and10.50(1). The numbers in parenthesesapplyto the numeral left of themselves, and are not part of that number, but part of a notation of uncertainty. They apply to theleast significant digits. For instance,1.00794(7)stands for1.00794±0.00007, while1.00794(72)stands for1.00794±0.00072.[20]This concise notation is used for example byIUPACin stating theatomic massofelements.
The middle notation is used when the error is not symmetrical about the value – for example3.4+0.3−0.2. This can occur when using a logarithmic scale, for example.
Uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to thestandard errorof the mean, which is the standard deviation divided by the square root of the number of measurements. This procedure neglectssystematic errors, however.[citation needed]
When the uncertainty represents the standard error of the measurement, then about 68.3% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.7% of the atomic mass values given on thelist of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of thenormal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quotedstandard errorsare easily converted to 68.3% ("onesigma"), 95.4% ("two sigma"), or 99.7% ("three sigma")confidence intervals.[citation needed]
In this context, uncertainty depends on both theaccuracy and precisionof the measurement instrument. The lower the accuracy and precision of an instrument, the larger the measurement uncertainty is. Precision is often determined as the standard deviation of the repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, and it appears evident that the uncertainty does not depend only on instrumental precision.
Uncertainty in science, and science in general, may be interpreted differently in the public sphere than in the scientific community.[21]This is due in part to the diversity of the public audience, and the tendency for scientists to misunderstand lay audiences and therefore not communicate ideas clearly and effectively.[21]One example is explained by theinformation deficit model. Also, in the public realm, there are often many scientific voices giving input on a single topic.[21]For example, depending on how an issue is reported in the public sphere, discrepancies between outcomes of multiple scientific studies due to methodological differences could be interpreted by the public as a lack of consensus in a situation where a consensus does in fact exist.[21]This interpretation may have even been intentionally promoted, as scientific uncertainty may be managed to reach certain goals. For example,climate change denierstook the advice ofFrank Luntzto frameglobal warmingas an issue of scientific uncertainty, which was a precursor to the conflict frame used by journalists when reporting the issue.[22]
"Indeterminacy can be loosely said to apply to situations in which not all the parameters of the system and their interactions are fully known, whereas ignorance refers to situations in which it is not known what is not known."[23]These unknowns, indeterminacy and ignorance, that exist in science are often "transformed" into uncertainty when reported to the public in order to make issues more manageable, since scientific indeterminacy and ignorance are difficult concepts for scientists to convey without losing credibility.[21]Conversely, uncertainty is often interpreted by the public as ignorance.[24]The transformation of indeterminacy and ignorance into uncertainty may be related to the public's misinterpretation of uncertainty as ignorance.
Journalists may inflate uncertainty (making the science seem more uncertain than it really is) or downplay uncertainty (making the science seem more certain than it really is).[25]One way that journalists inflate uncertainty is by describing new research that contradicts past research without providing context for the change.[25]Journalists may give scientists with minority views equal weight as scientists with majority views, without adequately describing or explaining the state ofscientific consensuson the issue.[25]In the same vein, journalists may give non-scientists the same amount of attention and importance as scientists.[25]
Journalists may downplay uncertainty by eliminating "scientists' carefully chosen tentative wording, and by losing these caveats the information is skewed and presented as more certain and conclusive than it really is".[25]Also, stories with a single source or without any context of previous research mean that the subject at hand is presented as more definitive and certain than it is in reality.[25]There is often a "product over process" approach toscience journalismthat aids, too, in the downplaying of uncertainty.[25]Finally, and most notably for this investigation, when science is framed by journalists as a triumphant quest, uncertainty is erroneously framed as "reducible and resolvable".[25]
Some media routines and organizational factors affect the overstatement of uncertainty; other media routines and organizational factors help inflate the certainty of an issue. Because the general public (in the United States) generally trusts scientists, when science stories are covered without alarm-raising cues from special interest organizations (religious groups, environmental organizations, political factions, etc.) they are often covered in a business related sense, in an economic-development frame or a social progress frame.[26]The nature of these frames is to downplay or eliminate uncertainty, so when economic and scientific promise are focused on early in the issue cycle, as has happened with coverage of plant biotechnology and nanotechnology in the United States, the matter in question seems more definitive and certain.[26]
Sometimes, stockholders, owners, or advertising will pressure a media organization to promote the business aspects of a scientific issue, and therefore any uncertainty claims which may compromise the business interests are downplayed or eliminated.[25]
InWestern philosophythe first philosopher to embrace uncertainty wasPyrrho[29]resulting in theHellenistic philosophiesofPyrrhonismandAcademic Skepticism, the first schools ofphilosophical skepticism.Aporiaandacatalepsyrepresent key concepts in ancient Greek philosophy regarding uncertainty.
William MacAskill, a philosopher at Oxford University, has also discussed the concept of Moral Uncertainty.[30]Moral Uncertainty is "uncertainty about how to act given lack of certainty in any one moral theory, as well as the study of how we ought to act given this uncertainty."[31]
|
https://en.wikipedia.org/wiki/Uncertainty
|
VUCAis an acronym based on the leadership theories ofWarren BennisandBurt Nanus, to describe or to reflect on thevolatility,uncertainty,complexityandambiguityof general conditions and situations.[1][2]TheU.S. Army War Collegeintroduced the concept of VUCA in 1987, to describe a more complex multilateral world perceived as resulting from the end of theCold War.[3]More frequent use and discussion of the term began from 2002.[4][need quotation to verify]It has subsequently spread tostrategic leadershipinorganizations, from for-profitcorporations[5][6]toeducation.[7][8][9]
The VUCA framework provides a lens through which organizations can interpret their challenges and opportunities. It emphasizes strategic foresight, insight, and the behavior of entities within organizations.[10]Furthermore, it highlights both systemic and behavioral failures[11]often associated with organizational missteps.
V =Volatility: Characterizes the rapid and unpredictable nature of change.
U =Uncertainty: Denotes theunpredictabilityof events and issues.
C =Complexity: Describes the intertwined forces and issues, making cause-and-effect relationships unclear.
A =Ambiguity: Points to the unclear realities and potential misunderstandings stemming from mixed messages.
These elements articulate how organizations perceive their current and potential challenges. They establish the parameters for planning and policy-making. Interacting in various ways, they can either complicate decision-making or enhance the ability to strategize, plan, and progress. Essentially, VUCA lays the groundwork for effective management and leadership.
The VUCA framework is a conceptual tool that underscores the conditions and challenges organizations face when making decisions, planning, managing risks, driving change, and solving problems. It primarily shapes an organization's ability to:
VUCA serves as a guideline for fostering awareness and preparedness in various sectors, including business, the military, education, and government. It provides a roadmap for organizations to develop strategies for readiness, foresight, adaptation, and proactive intervention.[12]
VUCA, as a system of thought, revolves around an idea expressed by Andrew Porteous: "Failure in itself may not be a catastrophe. Still, failure to learn from failure is." This perspective underlines the significance of resilience and adaptability in leadership. It suggests that beyond mere competencies, it is behavioural nuances, like the ability to learn from failures and adapt, that distinguish exceptional leaders from average ones. Leaders using VUCA as a guide often see change not just as inevitable but as something to anticipate.[11]
Within VUCA, several thematic areas of consideration emerge, providing a framework for introspection and evaluation:
Within the VUCA system of thought, an organization's ability to navigate these challenges is closely tied to its foundational beliefs, values, and aspirations. Those enterprises that consider themselves prepared and resolved align their strategic approach with VUCA's principles, signaling a holistic awareness.
The essence of VUCA philosophy also emphasizes the need for a deep-rooted understanding of one's environment, spanning technical, social, political, market, and economic realms.[13]
Psychometrics[14]which measure fluid intelligence by tracking information processing when faced with unfamiliar, dynamic, and vague data can predict cognitive performance in VUCA environments.
Volatilityrefers to the different situational social-categorizations of people due to specific traits or reactions that stand out in particular situations. When people act based on a specific situation, there is a possibility that the public categorizes them into a different group than they were in a previous situation. These people might respond differently to individual situations due to social or environmental cues. The idea that situational occurrences cause certain social categorization is known as volatility and is one of the main aspects ofself-categorization theory.[15]
Sociologistsuse volatility to better understand the impacts ofstereotypesand social categorization on the situation at hand and any external forces that may cause people to perceive others differently. Volatility is the changing dynamic of social categorization in environmental situations. The dynamic can change due to any shift in a situation, whether social, technical, biological, or anything else. Studies have been conducted, but finding the specific component that causes the change in situational social categorization has proven challenging.[16]
Two distinct components link individuals to their social identities. The first component is normative fit, which pertains to how a person aligns with the stereotypes and norms associated with their particular identity. For instance, when a Hispanic woman is cleaning the house, people often associate gender stereotypes with the situation, while her ethnicity is not a central concern. However, when this same woman eats an enchilada, ethnicity stereotypes come to the forefront, while her gender is not the focal point.[15]The second social cue is comparative fit. This is when a specific characteristic or trait of a person is prominent in certain situations compared to others. For example, as mentioned by Bodenhausen and Peery, when there is one woman in a room full of men.[15]She stands out, because she is the only one of her gender. However, all of the men are clumped together because they do not have any specific traits that stand out. Comparative fit shows that people categorize others based on the relative social context. In a particular situation, particular characteristics are made obvious because others around that individual do not possess that characteristic. However, in other cases, this characteristic may be the norm and would not be a key characteristic in the categorization process.[15]
People can be less critical of the same person in different scenarios. For example, when looking at anAfrican Americanman on the street in a low-income neighborhood and the same man inside a school in a high-income neighborhood, people will be less judgmental when seeing him in school. Nothing else has changed about this man, other than his location.[15]When individuals are spotted in certainsocial contexts, the basic-level categories are forgotten, and the more partial categories are brought to light. This helps to describe the problems of situational social-categorization.[15]This also illustrates how stereotypes can shift the perspectives of those around an individual.[15]
Uncertaintyin the VUCA framework occurs when the availability orpredictabilityof information in events is unknown. Uncertainty often occurs in volatile environments consisting of complex unanticipated interactions. Uncertainty may occur with the intention to implycausationorcorrelationbetween the events of a social perceiver and a target. Situations where there is either a lack of information to prove why perception is in occurrence or informational availability but lack of causation, are where uncertainty is salient.[15]
The uncertainty component of the framework serves as a grey area and is compensated by the use of social categorization and/or stereotypes. Social categorization can be described as a collection of people that have no interaction but tend to share similar characteristics. People tend to engage in social categorization, especially when there is a lack of information surrounding the event. Literature suggests that default categories tend to be assumed in the absence of any clear data when referring to someone's gender or race in the essence of a discussion.[15]
Individuals often associate general references (e.g. people, they, them, a group) with the male gender, meaning people = male. This usually occurs when there is insufficient information to distinguish someone's gender clearly. For example, when discussing a written piece of information, most assume the author is male. If an author's name is unavailable (due to lack of information), it is difficult to determine the gender of the author through the context of whatever was written. People automatically label the author as male without having any prior basis of gender, thus placing the author in a social category. This social categorization happens in this example, but people will also assume someone is male if the gender is not known in many other situations as well.[15]
Social categorization occurs in the realm of not only gender, but alsorace. Default assumptions may be made, like in gender, to the race of an individual or a group based on prior known stereotypes. For example, race-occupation combinations such as basketball or golf players usually receive race assumptions. Without any information on the individual's race, people usually assume a basketball player is black, and a golf player is white. This is based upon stereotypes because each sport tends to be dominated by a single race. In reality, there are other races within each sport.[15]
Complexityrefers to theinterconnectivityandinterdependenceof multiple parts in a system. When conducting research, complexity is a component that scholars have to keep in mind. The results of a deliberately controlled environment are unexpected because of thenon-linear interactionand interdependencies within different groups and categories.[16]
In a sociological aspect, the VUCA framework is utilized in research to understand social perception in the real world and how that plays into social categorization and stereotypes. Galen V. Bodenhausen and Destiny Peery's article,Social Categorization and Stereotyping In vivo: The VUCA Challenge, focused on researching how social categories impacted the process of social cognition and perception.[15]The strategy used to conduct the research is to manipulate or isolate a single identity of a target while keeping all other identities constant. This method clearly shows how a specific identity in a social category can change one's perception of other identities, thus creating stereotypes.[15]
There are problems with categorizing an individual's social identity due to the complexity of an individual's background. This research fails to address the complexity of the real world and the results from this highlighted an even greater picture of social categorization and stereotyping.[15]Complexity adds many layers of different components to an individual's identity and creates challenges for sociologists trying to examine social categories.[16]In the real world, people are far more complex than a modified social environment. Individuals identify with more than one social category, which opens the door to a more profound discovery about stereotyping. Results from research conducted by Bodenhausen reveal that specific identities are more dominant than others.[15]Perceivers who recognize these distinct identities latch on to them and associate their preconceived notion of such identity and make initial assumptions about the individuals and hence stereotypes are created.
Conversely, perceivers who share some identities with the target tend to be more open-minded. They consider multiple social identities simultaneously, a phenomenon known as cross-categorization effects.[17]Some social categories are nested within larger categorical structures, making subcategories more salient to perceivers. Cross-categorization can trigger both positive and negative effects. On the positive side, perceivers become more open-minded and motivated to delve deeper into their understanding of the target, moving beyond dominant social categories. However, cross-categorization can also result in social invisibility,[15]where some cross-over identities diminish the visibility of others, leading to "intersectional invisibility" where neither social identity stands out distinctly and is overlooked.[18]
Ambiguityrefers to when the generalmeaningof something is unclear even when an appropriate amount of information is provided. Many get confused about the meaning of ambiguity. It is similar to the idea of uncertainty, but they have different factors. Uncertainty is when relevant information is unavailable and unknown, and ambiguity where relevant information is available but the overall meaning is still unknown. Both uncertainty and ambiguity exist in our culture today. Sociologists use ambiguity to determine how and why an answer has been developed. Sociologists focus on details such as if there was enough information present and if the subject had the full knowledge necessary to make a decision. and why did he/she come to their specific answer.[15]
Ambiguity is considered one of the leading causes of conflict within organizations.[19]
Ambiguity often prompts individuals to make assumptions, including those related to race, gender, sexual orientation, and even class stereotypes. When people possess some information but lack a complete answer, they tend to generate their own conclusions based on the available relevant information. For instance, as Bodenhausen notes, we may occasionally encounter individuals who possess a degree of androgyny, making it challenging to determine their gender. In such cases, brief exposure might lead to misclassifications based on gender-atypical features, such as very long hair on a man or very short hair on a woman. Ambiguity can result in premature categorizations, potentially leading to inaccurate conclusions due to the absence of crucial details.[15]
Sociologists suggest that ambiguity can fuel racial stereotypes and discrimination. In a South African study, white participants were shown images of racially mixed faces and asked to categorize them as European or African. Since all the participants were white, they struggled to classify these mixed-race faces as European and instead labeled them as African. This difficulty arose due to the ambiguity present in the images. The only information available to the participants was the subjects' skin tone and facial features. Despite having this information, the participants still couldn't confidently determine the ethnicity because the individuals didn't precisely resemble their own racial group.[15]
Levent Işıklıgöz has suggested that theCof VUCA be changed fromcomplexitytochaos, arguing that it is more suitable according to our era.[citation needed]
Bill George, a professor of management practice at Harvard Business School, argues that VUCA calls for a leadership response which he calls VUCA 2.0:Vision,understanding,courageandadaptability.[20]
George's response seems a minor adaptation of Bob Johansen's VUCA prime:Vision,understanding,clarityandagility.[21]
German academic Ali Aslan Gümüsay adds "paradox" to the acronym, calling it VUCA + paradox orVUCAP.[22]
Jamais Casciosuggested theBANIframework to highlight the environment as Brittle, Anxious, Nonlinear, and Incomprehensible.[23]
Ulrich Lichtenthalerdeveloped thePUMOframework, which describes the world as increasingly Polarized, Unthinkable, Metamorphic, and Overheated.[24]
|
https://en.wikipedia.org/wiki/Volatility,_uncertainty,_complexity_and_ambiguity
|
Word-sense disambiguationis the process of identifying whichsenseof awordis meant in asentenceor other segment ofcontext. In humanlanguage processingandcognition, it is usually subconscious.
Given that natural language requires reflection of neurological reality, as shaped by the abilities provided by the brain'sneural networks, computer science has had a long-term challenge in developing the ability in computers to donatural language processingandmachine learning.
Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources,supervised machine learningmethods in which aclassifieris trained for each distinct word on acorpusof manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successfulalgorithmsto date.
Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively.
Disambiguation requires two strict inputs: adictionaryto specify the senses which are to be disambiguated and a corpus oflanguagedata to be disambiguated (in some methods, atraining corpusof language examples is also required). WSD task has two variants: "lexical sample" (disambiguating the occurrences of a small sample of target words which were previously selected) and "all words" task (disambiguation of all the words in a running text). "All words" task is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word.
WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics.Warren Weaverfirst introduced the problem in a computational context in his 1949 memorandum on translation.[1]Later,Bar-Hillel(1960) argued[2]that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge.
In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting withWilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded they were prone to a knowledge acquisition bottleneck.
By the 1980s large-scale lexical resources, such as theOxford Advanced Learner's Dictionary of Current English(OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based.
In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.
The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses,domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.
One problem with word sense disambiguation is deciding what the senses are, as differentdictionariesandthesauruseswill provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary, and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones.[3][4]Most researchers continue to work onfine-grainedWSD.
Most research in the field of WSD is performed by usingWordNetas a reference sense inventory for English. WordNet is a computationallexiconthat encodes concepts assynonymsets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes includeRoget's Thesaurus[5]andWikipedia.[6]More recently,BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD.[7]
In any real test,part-of-speech taggingand sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently scientists incline to test these things separately (e.g. in the Senseval/SemEvalcompetitions parts of speech are provided as input for the text to disambiguate).
Both WSD and part-of-speech tagging involve disambiguating or tagging with words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, state-of-the art being around 96%[8]accuracy or better, as compared to less than 75%[citation needed]accuracy in word sense disambiguation withsupervised learning. These figures are typical for English, and may be very different from those for other languages.
Another problem isinter-judgevariance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has been proven to be far more difficult.[9]While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand – give a list of senses and sentences, and humans will not always agree on which word belongs in which sense.[10]
As human performance serves as the standard, it is anupper boundfor computer performance. Human performance, however, is much better oncoarse-grainedthanfine-graineddistinctions, so this again is why research on coarse-grained distinctions[11][12]has been put to test in recent WSD evaluation exercises.[3][4]
A task-independent sense inventory is not a coherent concept:[13]each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the Frenchbanque– that is, 'financial bank' orrive– that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant.
Finally, the very notion of "word sense" is slippery and controversial. Most people can agree in distinctions at thecoarse-grainedhomographlevel (e.g., pen as writing instrument or enclosure), but go down one level tofine-grainedpolysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed in only 85% of word occurrences.[14]Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings.[15]Lexicographersfrequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable incomputational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task – namedlexical substitution– was proposed as a possible solution to the sense discreteness problem.[16]The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness).
There are two main approaches to WSD – deep approaches and shallow approaches.
Deep approaches presume access to a comprehensive body ofworld knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format, outside very limited domains.[17]Additionally due to the long tradition incomputational linguistics, of trying such approaches in terms of coded knowledge and in some cases, it can be hard to distinguish between knowledge involved in linguistic or world knowledge. The first attempt was that byMargaret Mastermanand her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads", as an indicator of topics and looked for repetitions in text, using a set intersection algorithm. It was not very successful,[18]but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s.
Shallow approaches do not try to understand the text, but instead consider the surrounding words. These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge.
There are four conventional approaches to WSD:
Almost all these approaches work by defining a window ofncontent words around each word to be disambiguated in the corpus, and statistically analyzing thosensurrounding words. Two shallow approaches used to train and then disambiguate areNaïve Bayes classifiersanddecision trees. In recent research,kernel-based methodssuch assupport vector machineshave shown superior performance insupervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
TheLesk algorithm[19]is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach[20]searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word.
An alternative to the use of the definitions is to consider general word-senserelatednessand to compute thesemantic similarityof each pair of word senses based on a given lexical knowledge base such asWordNet.Graph-basedmethods reminiscent ofspreading activationresearch of the early days of AI research have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods[21]or even outperforming them on specific domains.[3][22]Recently, it has been reported that simplegraph connectivity measures, such asdegree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base.[23]Also, automatically transferringknowledgein the form ofsemantic relationsfrom Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.[24]
The use of selectional preferences (or selectional restrictions) is also useful, for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it's not a musical instrument).
Supervisedmethods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence,common senseandreasoningare deemed unnecessary). Probably every machine learning algorithm going has been applied to WSD, including associated techniques such asfeature selection, parameter optimization, andensemble learning.Support Vector Machinesandmemory-based learninghave been shown to be the most successful approaches, to date, probably because they can cope with the high-dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.
Because of the lack of training data, many word sense disambiguation algorithms usesemi-supervised learning, which allows both labeled and unlabeled data. TheYarowsky algorithmwas an early example of such an algorithm.[25]It uses the ‘One sense per collocation’ and the ‘One sense per discourse’ properties of human languages for word sense disambiguation. From observation, words tend to exhibit only one sense in most given discourse and in a given collocation.[26]
Thebootstrappingapproach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initialclassifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed, or until a given maximum number of iterations is reached.
Other semi-supervised techniques use large quantities of untagged corpora to provideco-occurrenceinformation that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains.
Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-alignedbilingualcorpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.[citation needed]
Unsupervised learningis the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text byclusteringword occurrences using somemeasure of similarityof context,[27]a task referred to asword sense inductionor discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since senses induced must be mapped to a known dictionary of word senses. If amappingto a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree diversification of result lists.[28][29]It is hoped that unsupervised learning will overcome theknowledge acquisitionbottleneck because they are not dependent on manual effort.
Representing words considering their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental blocks in several NLP systems.[30][31][32]Even though most of traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they still can be used to improve WSD.[33]A simple approach to employ pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters.[34][35]In addition to word-embedding techniques, lexical databases (e.g.,WordNet,ConceptNet,BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend[36][37]and Most Suitable Sense Annotation (MSSA).[38]In AutoExtend,[37]they present a method that decouples an object input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map words (e.g. text) and non-word (e.g.synsetsinWordNet) objects as nodes and the relationship between nodes as edges. The relations (edges) in AutoExtend can either express the addition or similarity between its nodes. The former captures the intuition behind the offset calculus,[30]while the latter defines the similarity between two nodes. In MSSA,[38]an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model andWordNet. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet'sglosses(i.e., short defining gloss and one or more usage example) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word sense embeddings to repeat its disambiguation process iteratively.
Other approaches may vary differently in their methods:
The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem.Unsupervised methodsrely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases.Supervised methodsdepend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far[when?]be met only for a handful of words for testing purposes, as it is done in theSensevalexercises.
One of the most promising trends in WSD research is using the largestcorpusever accessible, theWorld Wide Web, to acquire lexical information automatically.[50]WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such asinformation retrieval(IR). In this case, however, the reverse is also true:web search enginesimplement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described inAutomatic acquisition of sense-tagged corpora.
Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be[51][52]classified as follows:
Structured:
Unstructured:
Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns most systems were assessed on in-house, often small-scale,data sets. In order to test one's algorithm, developers should spend their time to annotate all word occurrences. And comparing methods even on the same corpus is not eligible if there is different sense inventories.
In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized.Senseval(now renamedSemEval) is an international word sense disambiguation competition, held every three years since 1998:Senseval-1(1998),Senseval-2(2001),Senseval-3[usurped](2004), and its successor,SemEval(2007). The objective of the competition is to organize different lectures, preparing and hand-annotating corpus for testing systems, perform a comparative evaluation of WSD systems in several kinds of tasks, including all-words and lexical sample WSD for different languages, and, more recently, new tasks such assemantic role labeling, gloss WSD,lexical substitution, etc. The systems submitted for evaluation to these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially for avoiding bad performance in lack of training examples).
In recent years2007-2012, the WSD evaluation task choices had grown and the criterion for evaluating WSD has changed drastically depending on the variant of the WSD evaluation task. Below enumerates the variety of WSD tasks:
As technology evolves, the Word Sense Disambiguation (WSD) tasks grows in different flavors towards various research directions and for more languages:
|
https://en.wikipedia.org/wiki/Word-sense_disambiguation
|
This is acomparison ofregular expressionengines.
Qt GNU LGPL v. 2.1,Qt Commercial
(RegExp Studio)
NOTE:An application using a library for regular expression support does not necessarily support the full set of features of the library, e.g., GNUgrepuses PCRE, but supports no lookahead, though PCRE does.
|
https://en.wikipedia.org/wiki/Comparison_of_regular_expression_engines
|
Incomputer science,extended Backus–Naur form(EBNF) is a family ofmetasyntaxnotations, any of which can be used to express acontext-free grammar. EBNF is used to make a formal description of aformal languagesuch as a computerprogramming language. They are extensions of the basicBackus–Naur form(BNF) metasyntax notation. The earliest EBNF was developed byNiklaus Wirth, incorporating some of the concepts (with a different syntax and notation) fromWirth syntax notation. Today, many variants of EBNF are in use. TheInternational Organization for Standardizationadopted an EBNFStandard, ISO/IEC 14977, in 1996.[1][2]According to Zaytsev, however, this standard "only ended up adding yet another three dialects to the chaos" and, after noting its lack of success, also notes that the ISO EBNF is not even used in all ISO standards.[3]
This article uses EBNF as specified by the ISO for examples applying to all EBNFs. Other EBNF variants use somewhat different syntactic conventions.
EBNF is acodethat expresses thesyntaxof a formal language.[4]An EBNF consists ofterminal symbolsand non-terminal production rules which are the restrictions governing how terminal symbols can be combined into a valid sequence. Examples of terminal symbols includealphanumeric characters,punctuation marks, andwhitespace characters.
The EBNF definesproduction ruleswhere sequences of symbols are respectively assigned to anonterminal:
This production rule defines the nonterminaldigitwhich is on the left side of the assignment. The vertical bar represents an alternative and the terminal symbols are enclosed with quotation marks followed by a semicolon as terminating character. Hence adigitis a0or adigit excluding zerothat can be1or2or3and so forth until9.
A production rule can also include a sequence of terminals or nonterminals, each separated by a comma:
Expressions that may be omitted or repeated can be represented through curly braces { ... }:
In this case, the strings1,2, ...,10, ...,10000, ... are correct expressions. To represent this, everything that is set within the curly braces may be repeated arbitrarily often, including not at all.
An option can be represented through squared brackets [ ... ]. That is, everything that is set within the square brackets may be present just once, or not at all:
Therefore, anintegeris a zero (0) or apositive integerthat may be preceded by an optionalminus sign.
EBNF also provides, among other things, the syntax to describe repetitions (of a specified number of times), to exclude some part of a production, and to insert comments in an EBNF grammar.
The following represents a proposed ISO/IEC 14977 standard, by R. S. Scowen, page 7, tables 1 and 2.
Even EBNF can be described using EBNF. Consider below grammar (using conventions such as "-" to indicate set disjunction, "+" to indicate one or more matches, and "?" for optionality):
APascal-like programming language that allows only assignments can be defined in EBNF as follows:
For example, a syntactically correct program then could be:
The language can easily be extended withcontrol flows, arithmetical expressions, and Input/Output instructions. Then a small, usable programming language would be developed.
Anygrammardefined in EBNF can also be represented in BNF, though representations in the latter are generally lengthier. E.g., options and repetitions cannot be directly expressed in BNF and require the use of an intermediate rule or alternative production defined to be either nothing or the optional production for option, or either the repeated production of itself, recursively, for repetition. The same constructs can still be used in EBNF.
The BNF uses the symbols (<,>,|,::=) for itself, but does not include quotes around terminal strings. This prevents these characters from being used in the languages, and requires a special symbol for the empty string. In EBNF,terminalsare strictly enclosed within quotation marks ("..."or'...'). The angle brackets (<...>) fornonterminalscan be omitted.
BNF syntax can only represent a rule in one line, whereas in EBNF a terminating character, the semicolon character;marks the end of a rule.
Furthermore, EBNF includes mechanisms for enhancements, defining the number of repetitions, excluding alternatives, comments, etc.
As examples, the following syntax rules illustrate the facilities for expressing repetition:
Terminal strings defined by these rules are as follows:
According to the ISO 14977 standard EBNF is meant to be extensible, and two facilities are mentioned. The first is part of EBNF grammar, the special sequence, which is arbitrary text enclosed with question marks. The interpretation of the text inside a special sequence is beyond the scope of the EBNF standard. For example, the space character could be defined by the following rule:
The second facility for extension is using the fact that parentheses in EBNF cannot be placed next to identifiers (they must be concatenated with them). The following is valid EBNF:
The following isnotvalid EBNF:
Therefore, an extension of EBNF could use that notation. For example, in aLispgrammar, function application could be defined by the following rule:
|
https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.