A style guide is a set of standards for the writing, formatting, and design of documents.[1] A book-length style guide is often called a style manual or a manual of style (MoS or MOS). A short style guide, typically ranging from several to several dozen pages, is often called a style sheet. The standards documented in a style guide are applicable for either general use, or prescribed use in an individual publication, particular organization, or specific field.
A style guide establishes standard style requirements to improve communication by ensuring consistency within and across documents. It may require certain best practices in writing style, usage, language composition, visual composition, orthography, and typography by setting standards of usage in areas such as punctuation, capitalization, citing sources, formatting of numbers and dates, and table appearance. For academic and technical documents, a guide may also enforce best practices in ethics (such as authorship, research ethics, and disclosure) and compliance (technical and regulatory). For translations, a style guide may even be used to enforce consistent grammar, tone, and localization decisions such as units of measure.[2]
Style guides may be categorized into three types: comprehensive style for general use; discipline style for specialized use, which is often specific to academic disciplines, medicine, journalism, law, government, business, and other fields; and house or corporate style, created and used by a particular publisher or organization.[3]
Style guides vary widely in scope and size. Writers working in large industries or professional sectors may reference a specific style guide written for usage in specialized documents within their fields. For the most part, these guides are relevant and useful for peer-to-peer specialist documentation, or to help writers working in specific industries or sectors to communicate highly technical information in scholarly articles or industry white papers.
Professional style guides from different countries can be referenced for authoritative advice on their respective language(s), such as the United Kingdom's New Oxford Style Manual from Oxford University Press, and the United States' The Chicago Manual of Style from the University of Chicago Press. Australia has a style guide, available online, created by its government.[4]
The guides' variety in scope and length is enabled by the cascading of one style over another, analogous to how style sheets cascade in web development and in desktop publishing with CSS styles.
In many cases, a project such as a book, journal, or monograph series typically has a short style sheet that cascades over the larger style guide of an organization such as a publishing company, whose specific content is usually called house style. Most house styles, in turn, cascade over an industry-wide or profession-wide style manual that is even more comprehensive. Examples of industry style guides include:
Finally, these reference works cascade over the orthographic norms of the language in use (for example, English orthography for English-language publications). This, of course, may be subject to national variety, such as British, American, Canadian, and Australian English.
Some style guides focus on specific topic areas such as graphic design, including typography. Website style guides cover a publication's visual and technical aspects, as well as text.
Guides in specific scientific and technical fields may cover nomenclature to specify names or classification labels that are clear, standardized, and ontologically sound (e.g., taxonomy, chemical nomenclature, and gene nomenclature).
Style guides that cover usage may suggest descriptive terms for people which avoid racism, sexism, homophobia, etc. Style guides increasingly incorporate accessibility conventions for audience members with visual, mobility, or other disabilities.[5]
Since the beginning of the digital era, websites have allowed for an expansion of style guide conventions to account for digital behavior such as screen reading.[6] Screen reading requires web style guides to focus more intensively on a user experience that is subject to multichannel surfing. Though web style guides can vary widely, they tend to prioritize similar values regarding brevity, terminology, syntax, tone, structure, typography, graphics, and errors.[6]
Most style guides are revised periodically to accommodate changes in conventions and usage. The update frequency and revision control are determined by the subject. For style manuals in reference-work format, new editions typically appear every 1 to 20 years. For example, the AP Stylebook has been revised every other year since 2020.[7] The Chicago Manual of Style is in its 18th edition, while the American Psychological Association (APA) and ASA styles are each in their 7th edition as of 2025. Many house styles and individual project styles change more frequently, especially for new projects.
https://en.wikipedia.org/wiki/Style_guide
Typographical syntax, also known as orthotypography, is the aspect of typography that defines the meaning and rightful usage of typographic signs, notably punctuation marks, and elements of layout such as flush margins and indentation.[1][2]
Orthotypographic rules vary broadly from language to language, from country to country, and even from publisher to publisher. As such, they are more often described as "conventions".
While some of those conventions have ease of understanding as a justification – for instance, specifying that low punctuation (commas, full stops, and ellipses) must be in the same typeface, weight, and style as the preceding text – many are probably arbitrary.
The rules dealing with quotation marks are a good example of this: which ones to use and how to nest them, how much whitespace to leave on both sides, and when to integrate them with other punctuation marks.
Each major publisher maintains a list of orthotypographic rules that they apply as part of their house style.[3]
https://en.wikipedia.org/wiki/Typographical_syntax
A writing circle is a group of like-minded writers needing support for their work, either through writing peer critiques, workshops or classes, or just encouragement.[1] There are many different types of writing circles or writing groups based on location, style of writing, or format. Normally, the goal of a writing circle is to improve one's own craft by listening to the works and suggestions of others in the group. It also builds a sense of community and allows new writers to become accustomed to sharing their work. Writing circles can be helpful both inside and outside of the classroom.
A writing circle brings writers from different walks of life together in one place to discuss their work in a workshop-style setting. Writers can give feedback and hear suggestions from fellow writers. It can build community in a classroom and help students gain public-speaking skills. This workshop method can be used for any genre of writing (creative prose, poetry, etc.).
Writing circles can build a sense of community and help writers become more confident in their own work. They teach writers how to give and receive constructive criticism, enable them to learn from one another's mistakes and successes, and let them appreciate different opinions and views. In some cases, writing circles can be used as a form of group therapy (writing for healing).
https://en.wikipedia.org/wiki/Writing_circle
Paraphrase or paraphrasing in computational linguistics is the natural language processing task of detecting and generating paraphrases. Applications of paraphrasing are varied, including information retrieval, question answering, text summarization, and plagiarism detection.[1] Paraphrasing is also useful in the evaluation of machine translation,[2] as well as in semantic parsing[3] and the generation[4] of new samples to expand existing corpora.[5]
Barzilay and Lee[5] proposed a method to generate paraphrases through the use of monolingual parallel corpora, namely news articles covering the same event on the same day. Training consists of using multi-sequence alignment to generate sentence-level paraphrases from an unannotated corpus. This is achieved by first clustering similar sentences together using n-gram overlap. Recurring patterns are found within clusters by using multi-sequence alignment. The positions of argument words are then determined by finding areas of high variability within each cluster, that is, the slots between words shared by more than 50% of a cluster's sentences. Pairings between patterns are then found by comparing similar variable words between different corpora. Finally, new paraphrases can be generated by choosing a matching cluster for a source sentence, then substituting the source sentence's arguments into any number of patterns in the cluster.
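The clustering step can be sketched as follows. The Jaccard-style overlap measure, the greedy single-pass strategy, and all thresholds here are illustrative assumptions, not the exact procedure of Barzilay and Lee:

```python
from collections import Counter

def ngrams(sentence, n=2):
    """Set of word n-grams for a whitespace-tokenized sentence."""
    words = sentence.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=2):
    """Jaccard overlap between the n-gram sets of two sentences."""
    na, nb = ngrams(a, n), ngrams(b, n)
    return len(na & nb) / len(na | nb) if na and nb else 0.0

def cluster(sentences, threshold=0.25, n=2):
    """Greedy single-pass clustering: join a sentence to the first
    cluster containing a sufficiently overlapping sentence."""
    clusters = []
    for s in sentences:
        for c in clusters:
            if any(overlap(s, t, n) >= threshold for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def slot_words(cluster_sents, share=0.5):
    """Words NOT shared by more than `share` of a cluster's sentences;
    these high-variability positions mark the argument slots."""
    counts = Counter(w for s in cluster_sents for w in set(s.lower().split()))
    return {w for w, c in counts.items() if c <= share * len(cluster_sents)}
```

On a toy corpus, two event reports about the same bombing cluster together while an unrelated finance sentence stays separate, and the variable words ("market", "station") emerge as slots while the shared words ("bomb", "exploded") form the fixed pattern.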
Paraphrases can also be generated through the use of phrase-based translation, as proposed by Bannard and Callison-Burch.[6] The chief concept consists of aligning phrases in a pivot language to produce potential paraphrases in the original language. For example, the phrase "under control" in an English sentence is aligned with the phrase "unter kontrolle" in its German counterpart. The phrase "unter kontrolle" is then found in another German sentence with the aligned English phrase "in check", a paraphrase of "under control".
The probability distribution can be modeled as Pr(e2 | e1), the probability that phrase e2 is a paraphrase of e1, which is equivalent to Pr(e2 | f) Pr(f | e1) summed over all f, the potential phrase translations in the pivot language. Additionally, the sentence S containing e1 is added as a prior to give context to the paraphrase. Thus the optimal paraphrase ê2 can be modeled as:

ê2 = argmax_{e2 ≠ e1} Σ_f Pr(e2 | f) Pr(f | e1)
Pr(e2 | f) and Pr(f | e1) can be approximated by simply taking their relative frequencies. Adding S as a prior is modeled by calculating the probability of forming the sentence S when e1 is substituted with e2.
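A toy version of the pivot computation might look like this, with hypothetical alignment counts standing in for a real bilingual phrase table (and the sentence prior omitted):

```python
from collections import Counter

# Toy phrase-alignment counts, as if harvested from a bilingual corpus
# (hypothetical numbers, for illustration only).
# counts[(english, german)] = number of times the phrases were aligned
counts = Counter({
    ("under control", "unter kontrolle"): 8,
    ("in check", "unter kontrolle"): 2,
    ("under control", "unter beherrschung"): 1,
})

def p_f_given_e(f, e):
    """Pr(f | e): relative frequency of pivot phrase f among e's alignments."""
    total = sum(c for (en, de), c in counts.items() if en == e)
    return counts[(e, f)] / total if total else 0.0

def p_e_given_f(e, f):
    """Pr(e | f): relative frequency of English phrase e among f's alignments."""
    total = sum(c for (en, de), c in counts.items() if de == f)
    return counts[(e, f)] / total if total else 0.0

def paraphrase_prob(e2, e1):
    """Pr(e2 | e1) = sum over pivot phrases f of Pr(e2 | f) * Pr(f | e1)."""
    pivots = {de for (en, de) in counts if en == e1}
    return sum(p_e_given_f(e2, f) * p_f_given_e(f, e1) for f in pivots)

def best_paraphrase(e1):
    """Highest-probability candidate paraphrase distinct from e1."""
    candidates = {en for (en, de) in counts if en != e1}
    return max(candidates, key=lambda e2: paraphrase_prob(e2, e1))
```

With these counts, "in check" scores Pr = (2/10)·(8/9) ≈ 0.18 as a paraphrase of "under control", mirroring the worked example in the text.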
There has been success in using long short-term memory (LSTM) models to generate paraphrases.[7] In short, the model consists of an encoder and a decoder, both implemented using variations of a stacked residual LSTM. First, the encoding LSTM takes a one-hot encoding of all the words in a sentence as input and produces a final hidden vector, which represents the input sentence. The decoding LSTM takes the hidden vector as input and generates a new sentence, terminating in an end-of-sentence token. The encoder and decoder are trained to take a phrase and reproduce the one-hot distribution of a corresponding paraphrase by minimizing perplexity using simple stochastic gradient descent. New paraphrases are generated by inputting a new phrase to the encoder and passing the output to the decoder.
With the introduction of Transformer models, paraphrase generation approaches improved their ability to generate text by scaling neural network parameters and heavily parallelizing training through feed-forward layers.[8] These models are so fluent in generating text that human experts cannot identify whether an example was human-authored or machine-generated.[9] Transformer-based paraphrase generation relies on autoencoding, autoregressive, or sequence-to-sequence methods. Autoencoder models predict word replacement candidates with a one-hot distribution over the vocabulary, while autoregressive and seq2seq models generate new text based on the source, predicting one word at a time.[10][11] More advanced efforts also exist to make paraphrasing controllable according to predefined quality dimensions, such as semantic preservation or lexical diversity.[12] Many Transformer-based paraphrase generation methods rely on unsupervised learning to leverage large amounts of training data and scale their methods.[13][14]
Paraphrase recognition has been attempted by Socher et al.[1] through the use of recursive autoencoders. The main concept is to produce a vector representation of a sentence and its components by recursively applying an autoencoder. Since paraphrases should have similar vector representations, the representations are processed and then fed as input into a neural network for classification.
Given a sentence W with m words, the autoencoder is designed to take two n-dimensional word embeddings as input and produce an n-dimensional vector as output. The same autoencoder is applied to every pair of words in W to produce ⌊m/2⌋ vectors. The autoencoder is then applied recursively with the new vectors as inputs until a single vector is produced. Given an odd number of inputs, the first vector is forwarded as-is to the next level of recursion. The autoencoder is trained to reproduce every vector in the full recursion tree, including the initial word embeddings.
Given two sentences W1 and W2 of length 4 and 3 respectively, the autoencoders would produce 7 and 5 vector representations, including the initial word embeddings. The Euclidean distance is then taken between every combination of vectors in W1 and W2 to produce a similarity matrix S ∈ R^(7×5). S is then subject to a dynamic min-pooling layer to produce a fixed-size n_p × n_p matrix. Since similarity matrices are not uniform in size across sentence pairs, S is split into n_p roughly even sections. The output is then normalized to have mean 0 and standard deviation 1 and is fed into a fully connected layer with a softmax output. The dynamic pooling to softmax model is trained using pairs of known paraphrases.
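The dynamic min-pooling step can be sketched as below. Since S holds Euclidean distances, taking the minimum over each region keeps the closest (most similar) vector pair in that region; splitting the axes with np.array_split is an assumption about how the "roughly even sections" are formed:

```python
import numpy as np

def dynamic_min_pool(S, n_p):
    """Min-pool a variable-size distance matrix S down to n_p x n_p by
    splitting each axis into n_p roughly even sections and taking the
    minimum within each section."""
    rows = np.array_split(np.arange(S.shape[0]), n_p)
    cols = np.array_split(np.arange(S.shape[1]), n_p)
    pooled = np.empty((n_p, n_p))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            pooled[i, j] = S[np.ix_(r, c)].min()
    return pooled
```

Any 7×5 matrix (as in the example above) is reduced to the same fixed n_p × n_p shape, so sentence pairs of differing lengths can feed one fully connected layer.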
Skip-thought vectors are an attempt to create a vector representation of the semantic meaning of a sentence, similarly to the skip-gram model.[15] Skip-thought vectors are produced through the use of a skip-thought model, which consists of three key components: an encoder and two decoders. Given a corpus of documents, the skip-thought model is trained to take a sentence as input and encode it into a skip-thought vector. The skip-thought vector is used as input for both decoders; one attempts to reproduce the previous sentence and the other the following sentence in its entirety. The encoder and decoders can be implemented through the use of a recurrent neural network (RNN) or an LSTM.
Since paraphrases carry the same semantic meaning between one another, they should have similar skip-thought vectors. Thus a simplelogistic regressioncan be trained to good performance with the absolute difference and component-wise product of two skip-thought vectors as input.
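The feature construction for that classifier is straightforward to sketch; the vectors below are arbitrary placeholders, not real skip-thought encodings:

```python
import numpy as np

def pair_features(v1, v2):
    """Features for a sentence pair: concatenation of the absolute
    difference and the component-wise product of their vectors."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return np.concatenate([np.abs(v1 - v2), v1 * v2])
```

For two d-dimensional skip-thought vectors this yields a 2d-dimensional feature vector, which would then be fed to a logistic regression trained on labeled paraphrase pairs.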
Similar to how Transformer models influenced paraphrase generation, their application in identifying paraphrases has shown great success. Models such as BERT can be adapted with a binary classification layer and trained end-to-end on identification tasks.[16][17] Transformers achieve strong results when transferring between domains and paraphrasing techniques compared to more traditional machine learning methods such as logistic regression. Other successful methods based on the Transformer architecture include using adversarial learning and meta-learning.[18][19]
Multiple methods can be used to evaluate paraphrases. Since paraphrase recognition can be posed as a classification problem, most standard evaluation metrics such as accuracy, F1 score, or an ROC curve do relatively well. However, F1 scores are difficult to calculate because of the trouble of producing a complete list of paraphrases for a given phrase, and because good paraphrases are dependent upon context. A metric designed to counter these problems is ParaMetric.[20] ParaMetric aims to calculate the precision and recall of an automatic paraphrase system by comparing its automatic alignment of paraphrases to a manual alignment of similar phrases. Since ParaMetric simply rates the quality of phrase alignment, it can be used to rate paraphrase generation systems, assuming they use phrase alignment as part of their generation process. A notable drawback of ParaMetric is the large and exhaustive set of manual alignments that must be created before a rating can be produced.
The evaluation of paraphrase generation has difficulties similar to those of the evaluation of machine translation. The quality of a paraphrase depends on its context, whether it is being used as a summary, and how it is generated, among other factors. Additionally, a good paraphrase is usually lexically dissimilar from its source phrase. The simplest way to evaluate paraphrase generation is through human judges; unfortunately, this tends to be time-consuming. Automated evaluation proves challenging, as it is essentially a problem as difficult as paraphrase recognition. While originally used to evaluate machine translations, bilingual evaluation understudy (BLEU) has also been used successfully to evaluate paraphrase generation models. However, paraphrases often have several lexically different but equally valid solutions, which hurts BLEU and similar evaluation metrics.[21]
Metrics specifically designed to evaluate paraphrase generation include paraphrase in n-gram change (PINC)[21] and paraphrase evaluation metric (PEM),[22] along with the aforementioned ParaMetric. PINC is designed to be used with BLEU and to help cover its inadequacies. Since BLEU has difficulty measuring lexical dissimilarity, PINC measures the lack of n-gram overlap between a source sentence and a candidate paraphrase. It is essentially the Jaccard distance between the sentences, excluding n-grams that appear in the source sentence, in order to maintain some semantic equivalence. PEM, on the other hand, attempts to evaluate the "adequacy, fluency, and lexical dissimilarity" of paraphrases by returning a single-value heuristic calculated using n-gram overlap in a pivot language. However, a large drawback of PEM is that it must be trained using large, in-domain parallel corpora as well as human judges;[21] in effect, it requires training a paraphrase recognition system in order to evaluate a paraphrase generation system.
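A minimal sketch of the PINC idea, assuming whitespace tokenization and averaging over n-gram orders up to 4 (the exact tokenization and normalization in the published metric may differ):

```python
def ngram_set(words, n):
    """Set of n-grams over a token list."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def pinc(source, candidate, max_n=4):
    """PINC: average over n of the fraction of the candidate's n-grams
    that do NOT appear in the source. 1.0 means no overlap at all;
    0.0 means the candidate copies the source verbatim."""
    s = source.lower().split()
    c = candidate.lower().split()
    scores = []
    for n in range(1, max_n + 1):
        cand = ngram_set(c, n)
        if not cand:          # candidate shorter than n words
            break
        src = ngram_set(s, n)
        scores.append(1 - len(cand & src) / len(cand))
    return sum(scores) / len(scores) if scores else 0.0
```

An exact copy scores 0, a completely unrelated sentence scores 1, and a genuine paraphrase lands in between, rewarding lexical dissimilarity where BLEU penalizes it.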
The Quora Question Pairs Dataset, which contains hundreds of thousands of duplicate questions, has become a common dataset for the evaluation of paraphrase detectors.[23] Consistently reliable paraphrase detectors have all used the Transformer architecture, and all have relied on large amounts of pre-training with more general data before fine-tuning on the question pairs.
https://en.wikipedia.org/wiki/Automated_paraphrasing
Language reform is a kind of language planning by widespread change to a language. The typical methods of language reform are simplification and linguistic purism. Simplification regularises vocabulary, grammar, or spelling. Purism aligns the language with a form which is deemed 'purer'.
Language reforms are intentional changes to language; this article does not cover natural language change, such as the Great Vowel Shift.
By far the most common language reform is simplification. The most common simplification is spelling reform, but inflection, syntax, vocabulary, and word formation can also be targets for simplification. For example, in English, there are many prefixes that mean "the opposite of", e.g. un-, in-, a(n)-, dis-, and de-. A language reform might propose replacing the redundant prefixes with one, such as un-.
Linguistic purism or linguistic protectionism is the prescriptive practice of recognising one form of a language as purer or of intrinsically higher quality than others. The perceived or actual decline may take the form of changes in vocabulary, syncretism of grammatical elements, or loanwords, and may become the target of a language reform.
Examples of language reforms are:
https://en.wikipedia.org/wiki/Language_reform
Lexical simplification is a sub-task of text simplification. It can be defined as any lexical substitution task that reduces text complexity.
https://en.wikipedia.org/wiki/Lexical_simplification
Lexical substitution is the task of identifying a substitute for a word in the context of a clause. For instance, given the following text: "After the match, replace any remaining fluid deficit to prevent chronic dehydration throughout the tournament", a substitute of "game" might be given for "match".
Lexical substitution is strictly related to word sense disambiguation (WSD), in that both aim to determine the meaning of a word. However, while WSD consists of automatically assigning the appropriate sense from a fixed sense inventory, lexical substitution does not impose any constraint on which substitute to choose as the best representative for the word in context. By not prescribing an inventory, lexical substitution overcomes the issue of the granularity of sense distinctions and provides a level playing field for systems that automatically acquire word senses (a task referred to as word sense induction).
In order to evaluate automatic systems on lexical substitution, a task was organized at the SemEval-2007 evaluation competition held in Prague in 2007. A SemEval-2010 task on cross-lingual lexical substitution has also taken place.
The skip-gram model maps words with similar meanings to nearby points in an N-dimensional vector space. The vectors are learned by a neural network trained over the vocabulary generated from a corpus.[1] The model has been used in lexical substitution automation and prediction algorithms. One such algorithm, developed by Oren Melamud, Omer Levy, and Ido Dagan, uses the skip-gram model to find a vector for each word and its synonyms, then calculates the cosine distance between vectors to determine which words will be the best substitutes.[2]
In a sentence like "The dog walked at a quick pace", each word can initially be represented as a one-hot vector over the sentence's seven-word vocabulary. The vector for "The" would be [1,0,0,0,0,0,0]: the 1 marks the position of the word itself in the vocabulary, and the 0s correspond to all the other words.
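A minimal sketch of the cosine-ranking idea, with made-up toy vectors standing in for trained skip-gram embeddings (the numbers are arbitrary, chosen only so that related words point in similar directions):

```python
import numpy as np

# Hypothetical skip-gram-style embeddings (toy values, not from a real model)
vectors = {
    "match":      np.array([0.9, 0.1, 0.30]),
    "game":       np.array([0.8, 0.2, 0.35]),
    "tournament": np.array([0.7, 0.3, 0.40]),
    "banana":     np.array([0.0, 0.9, 0.10]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_substitutes(target, candidates):
    """Rank candidate substitutes by cosine similarity to the target word."""
    return sorted(candidates,
                  key=lambda w: cosine(vectors[target], vectors[w]),
                  reverse=True)
```

With these vectors, "game" ranks above "tournament" as a substitute for "match", and the unrelated "banana" ranks last.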
https://en.wikipedia.org/wiki/Lexical_substitution
In natural language processing, semantic compression is a process of compacting the lexicon used to build a textual document (or a set of documents) by reducing language heterogeneity while maintaining text semantics. As a result, the same ideas can be represented using a smaller set of words.
In most applications, semantic compression is a lossy compression: increased prolixity does not compensate for the lexical compression, and the original document cannot be reconstructed in a reverse process.
Semantic compression is basically achieved in two steps, using frequency dictionaries and a semantic network:
Step 1 requires assembling word frequencies and information on semantic relationships, specifically hyponymy. Moving upwards in the word hierarchy, a cumulative concept frequency is calculated by adding the sum of the hyponyms' frequencies to the frequency of their hypernym: cumf(k_i) = f(k_i) + Σ_j cumf(k_j), where k_i is a hypernym of the words k_j.
Then a desired number of words with top cumulated frequencies are chosen to build a target lexicon.
In the second step, compression mapping rules are defined for the remaining words in order to handle every occurrence of a less frequent hyponym as its hypernym in output text.
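The two steps can be sketched on a toy hypernym tree; the words, frequencies, and tree shape below are hypothetical:

```python
# Toy hypernym tree and frequencies (hypothetical numbers).
# parent[w] is the hypernym of w; roots have no entry.
parent = {"honeybee": "insect", "wasp": "insect", "insect": "organism"}
freq = {"honeybee": 5, "wasp": 3, "insect": 2, "organism": 1}

def cumf(word):
    """Cumulative concept frequency: the word's own frequency plus the
    cumulative frequencies of all of its direct hyponyms."""
    children = [w for w, p in parent.items() if p == word]
    return freq.get(word, 0) + sum(cumf(c) for c in children)

def compress(words, lexicon):
    """Step 2: map every word outside the target lexicon to its nearest
    ancestor (hypernym) that is in the lexicon."""
    out = []
    for w in words:
        while w not in lexicon and w in parent:
            w = parent[w]
        out.append(w)
    return out
```

Here cumf("insect") = 2 + 5 + 3 = 10, so if the target lexicon keeps only {"insect", "organism"}, both "honeybee" and "wasp" are rewritten as "insect", much as in the worked example below.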
The fragment of text below has been processed by semantic compression; in the output further on, the less frequent words have been replaced by their hypernyms.
They are both nest-building social insects, but paper wasps and honeybees organize their colonies in very different ways. In a new study, researchers report that despite their differences, these insects rely on the same network of genes to guide their social behavior. The study appears in the Proceedings of the Royal Society B: Biological Sciences. Honeybees and paper wasps are separated by more than 100 million years of evolution, and there are striking differences in how they divvy up the work of maintaining a colony.
The procedure outputs the following text:
They are both facility building insect, but insects and honey insects arrange their biological groups in very different structure. In a new study, researchers report that despite their difference of opinions, these insects act the same network of genes to steer their party demeanor. The study appears in the proceeding of the institution bacteria Biological Sciences. Honey insects and insect are separated by more than hundred million years of organic processes, and there are impinging differences of opinions in how they divvy up the work of affirming a biological group.
A natural tendency to keep natural language expressions concise can be perceived as a form of implicit semantic compression, by omitting unmeaningful words or redundant meaningful words (especially to avoid pleonasms).[2]
In the vector space model, compacting the lexicon leads to a reduction of dimensionality, which results in less computational complexity and a positive influence on efficiency.
Semantic compression is advantageous in information retrieval tasks, improving their effectiveness (in terms of both precision and recall).[3] This is due to more precise descriptors (a reduced effect of language diversity – limited language redundancy, a step towards a controlled dictionary).
As in the example above, it is possible to display the output as natural text (re-applying inflexion, adding stop words).
https://en.wikipedia.org/wiki/Semantic_compression
Text normalization is the process of transforming text into a single canonical form that it might not have had before. Normalizing text before storing or processing it allows for separation of concerns, since input is guaranteed to be consistent before operations are performed on it. Text normalization requires being aware of what type of text is to be normalized and how it is to be processed afterwards; there is no all-purpose normalization procedure.[1]
Text normalization is frequently used when converting text to speech. Numbers, dates, acronyms, and abbreviations are non-standard "words" that need to be pronounced differently depending on context.[2] For example:
Text can also be normalized for storing and searching in a database. For instance, if a search for "resume" is to match the word "résumé", then the text would be normalized by removing diacritical marks; and if "john" is to match "John", the text would be converted to a single case. To prepare text for searching, it might also be stemmed (e.g. converting "flew" and "flying" both into "fly"), canonicalized (e.g. consistently using American or British English spelling), or have stop words removed.
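A minimal sketch of this kind of search normalization in Python's standard library, covering the diacritics and case examples above:

```python
import unicodedata

def normalize(text):
    """Normalize text for search: strip diacritical marks and fold case."""
    # Decompose characters (NFD) so accents become separate combining marks,
    # drop the combining marks (Unicode category "Mn"), then recompose.
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed
                       if unicodedata.category(ch) != "Mn")
    return unicodedata.normalize("NFC", stripped).casefold()
```

With this, "résumé" and "Resume" normalize to the same key, so a lookup table indexed by normalized form matches both.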
For simple, context-independent normalization, such as removing non-alphanumeric characters or diacritical marks, regular expressions suffice. For example, the sed script sed -E "s/\s+/ /g" inputfile would normalize runs of whitespace characters into a single space. More complex normalization requires correspondingly complicated algorithms, including domain knowledge of the language and vocabulary being normalized. Among other approaches, text normalization has been modeled as a problem of tokenizing and tagging streams of text[5] and as a special case of machine translation.[6][7]
In the field of textual scholarship and the editing of historic texts, the term "normalization" implies a degree of modernization and standardization – for example in the extension of scribal abbreviations and the transliteration of the archaic glyphs typically found in manuscript and early printed sources. A normalized edition is therefore distinguished from a diplomatic edition (or semi-diplomatic edition), in which some attempt is made to preserve these features. The aim is to strike an appropriate balance between, on the one hand, rigorous fidelity to the source text (including, for example, the preservation of enigmatic and ambiguous elements) and, on the other, producing a new text that will be comprehensible and accessible to the modern reader. The extent of normalization is therefore at the discretion of the editor, and will vary: some editors, for example, choose to modernize archaic spellings and punctuation, but others do not.[8]
https://en.wikipedia.org/wiki/Text_normalization
ASD-STE100 Simplified Technical English (STE) is a controlled natural language designed to simplify and clarify technical documentation. It was originally developed during the 1980s by the European Association of Aerospace Industries (AECMA) at the request of the European airline industry, which wanted a standardized form of English for aircraft maintenance documentation that could be easily understood by non-native English speakers. It has since been adopted in many other fields outside the aerospace, defense, and maintenance domains for its clear, consistent, and comprehensive nature. The current edition of the STE Standard, published in January 2025, consists of 53 writing rules and a dictionary of approximately 900 approved words.
The first attempts towards controlled English were made as early as the 1930s and 1970s with Basic English,[1] Caterpillar Fundamental English,[2][3] and Eastman Kodak (KISL).[4] In 1979, aerospace documentation was written in American English (Boeing, Douglas, Lockheed, etc.), in British English (Hawker Siddeley, British Aircraft Corporation, etc.), and by companies whose native language was not English (Fokker, Aeritalia, Aerospatiale, and some of the companies that formed Airbus at the time).
Because European airlines needed to translate parts of their maintenance documentation into other languages for local mechanics, the European airline industry approached AECMA (the European Association of Aerospace Industries) to investigate the possibility of using a controlled or standardized form of English, with a strong focus on readability and comprehensibility. In 1983, after an investigation into the different types of controlled languages that existed in other industries, AECMA decided to produce its own controlled English. The AIA (Aerospace Industries Association of America) was also invited to participate in this project. The result of this collaborative work was the release of the AECMA Document PSC-85-16598 (known as the AECMA Simplified English Guide) in 1985. Subsequently, several changes, issues, and revisions were released, up to the present issue (Issue 9).
After the merger of AECMA with two other associations to form the Aerospace, Security and Defence Industries Association of Europe (ASD) in 2004, the document was renamed ASD Simplified Technical English, Specification ASD-STE100. Thus, STE evolved from a guide to a specification. With Issue 9, it has transitioned to an international standard. This change in designation (the subtitle of the document is Standard for Technical Documentation) is not just a reclassification, but a significant step that reinforces the global applicability of STE.
ASD-STE100 is maintained by the Simplified Technical English Maintenance Group (STEMG), a working group of ASD, formed in 1983. The copyright of ASD-STE100 is fully owned by ASD.[5][6]
Due to the ever-evolving nature of technology and technical language, the STEMG also relies on user feedback for suggested changes and updates.[7]Starting from Issue 6 in 2013, the Standard became free of charge. Over the years, 18,981 official copies of Issues 6, 7, and 8 were distributed. Since Issue 9 was released in January 2025, almost 1,000 official copies have been distributed (distribution log updated March 2025). Usually, a new issue is released every three years.
A free official copy of the ASD-STE100 Standard can be requested through the ASD-STE100 website and through ASD.
Simplified Technical English can:
The ASD-STE100 Simplified Technical English Standard consists of two parts:
The writing rules cover aspects of grammar and style. The rules also differentiate between two types of texts: procedures and descriptions. A non-exhaustive list of the writing rules includes the concepts that follow:
The table that follows is an extract from a page of the ASD-STE100 dictionary:
Word (part of speech) | Approved meaning/ALTERNATIVES | STE EXAMPLE | Non-STE example
Explanation of the four columns:
Word (part of speech)– This column has information on the word and its part of speech. Every approved word in STE is only permitted as a specific part of speech. For example, the word "test" is only approved as a noun (the test) but not as a verb (to test). There are few exceptions to the "One word, one part of speech, one meaning" principle.
Approved meaning/ALTERNATIVES– This column gives the approved meaning (or definition) of an approved word in STE. In the example table, "ACCESS" and "ACCIDENT" are approved (they are written in uppercase). The text in these definitions is not written in STE. If a meaning is not given in the dictionary, one cannot use the word in that meaning. Use an alternative word. For words that are not approved (they are written in lowercase, such as "acceptance" and "accessible" in the example table), this column gives approved alternatives that one can use to replace the words that are not approved. These alternatives are in uppercase, and they are only suggestions. It is possible that the suggested alternative for an unapproved word has a different part of speech. Usually, the first suggested alternative has the same part of speech as the word that is not approved.
STE EXAMPLE– This column shows how to use the approved word or how to use the approved alternative (usually a word-for-word replacement). It also shows how to keep the same meaning with a different construction. The wording given in the STE examples is not mandatory. It shows only one method to write a text with approved words. One can frequently use different constructions with other approved words and keep the same meaning.
Non-STE example – This column (text in lowercase) gives examples that show how the word that is not approved is frequently used in standard technical writing. The examples also help one to understand how to use the approved alternatives or different constructions to give the same information. For approved words, this column is empty unless there is a help symbol (lightbulb) related to other meanings or restrictions.
The dictionary includes entries for words that are approved and for words that are not approved. The approved words can only be used according to their specified meaning. For example, the word "close" (v) can only be used in one of two meanings:
The verb can express to close a door or to close a circuit, but it cannot be used with other connotations (e.g., to close a meeting or to close a business). The adjective "close" appears in the dictionary as a word that is not approved, with the suggested approved alternative "NEAR" (prep). Thus, STE does not allow do not go close to the landing gear, but it does allow do not go near the landing gear. In addition to the general STE vocabulary listed in the dictionary, Section 1, Words, gives specific guidelines for using technical nouns and technical verbs that writers need to describe technical information. For example, nouns, multi-word nouns, or verbs such as grease, discoloration, propeller, aural warning system, overhead panel, to ream, and to drill are not listed in the dictionary, but they qualify as approved terms according to Part 1, Section 1 (specifically, writing rules 1.5 and 1.12).
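The dictionary lookup described above can be sketched in code. The following Python fragment is only an illustration: the entries, field names, and structure are hypothetical stand-ins keyed on (word, part of speech), not the actual ASD-STE100 data or an official checker.

```python
# Hypothetical mini-extract of an STE-style dictionary, keyed on
# (word, part of speech). "approved" marks whether that use is allowed;
# "alternatives" lists suggested replacements for unapproved uses.
STE_DICTIONARY = {
    ("test", "noun"): {"approved": True, "alternatives": []},
    ("test", "verb"): {"approved": False, "alternatives": [("do a test", "phrase")]},
    ("close", "verb"): {"approved": True, "alternatives": []},
    ("close", "adjective"): {"approved": False, "alternatives": [("near", "preposition")]},
}

def check_word(word, part_of_speech):
    """Return (approved, alternatives) for a word used as a given part of speech."""
    entry = STE_DICTIONARY.get((word.lower(), part_of_speech))
    if entry is None:
        # Not in this extract; a real checker would also apply rules 1.5/1.12
        # for technical nouns and technical verbs.
        return False, []
    return entry["approved"], entry["alternatives"]

approved, alternatives = check_word("close", "adjective")
print(approved)      # False
print(alternatives)  # [('near', 'preposition')]
```

The key point the sketch captures is the "one word, one part of speech, one meaning" principle: "close" as a verb and "close" as an adjective are distinct entries with different verdicts.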
"Simplified Technical English" is sometimes used as a generic term for a controlled natural language. The standard started as an industry-regulated writing standard for aircraft maintenance documentation, but it has become a requirement for an increasing number of military land vehicles, seacraft, and weapons programs. Although it was not initially intended for use as a general writing standard, it has been successfully adopted by other industries and for a wide range of document types. The US government's Plain English lacks the strict vocabulary restrictions of the aerospace standard, but represents an attempt at a more general writing standard.[9]
Since 1986, STE has been a requirement of the ATA Specification i2200 (formerly ATA100) and ATA104 (Training). STE is also a requirement of the S1000D Specification. The European Defence Standards Reference (EDSTAR) recommends STE as one of the best-practice standards for writing technical documentation to be applied for defense contracting by all EDA (European Defence Agency) participating member states.
Today, the success of STE is such that other industries use it beyond its initial purpose for maintenance documentation and outside the aerospace and defense domains. At the end of Issue 8 distribution in December 2024, the Issue 8 STE distribution log shows that 64% of users come from outside these two industries. STE is successfully applied in the automotive, renewable energies, and offshore logistics sectors, and is further expanding within medical devices and the pharmaceutical sector. STE interest is also increasing within the academic world, including the disciplines of information engineering, applied linguistics, and computational linguistics.
Several unrelated software products exist to support the application of STE, but the STEMG and ASD do not endorse or certify these products.[10]
Boeing developed the Boeing Simplified English Checker (BSEC). This linguistic-based checker uses a sophisticated 350-rule English parser, which is augmented with special functions that check for violations of the Simplified Technical English specification.[11]
HyperSTE is a plugin tool offered by Etteplan to check content for adherence to the rules and grammar of the standard.
Congree offers a Simplified Technical English Checker based on linguistic algorithms. It supports all rules of Simplified Technical English issue 7 that are relevant to the text composition and provides an integrated Simplified Technical English dictionary.[12]
The TechScribe term checker for ASD-STE100 helps writers to find text that does not conform to ASD-STE100.[13]
https://en.wikipedia.org/wiki/Simplified_Technical_English
Basic English (a backronym for British American Scientific International and Commercial English)[1] is a controlled language based on standard English, but with a greatly simplified vocabulary and grammar. It was created by the linguist and philosopher Charles Kay Ogden as an international auxiliary language, and as an aid for teaching English as a second language. It was presented in Ogden's 1930 book Basic English: A General Introduction with Rules and Grammar.
The first work on Basic English was written by two Englishmen, Ivor Richards of Harvard University and Charles Kay Ogden of the University of Cambridge in England. The design of Basic English drew heavily on the semiotic theory put forward by Ogden and Richards in their 1923 book The Meaning of Meaning.[2]
Ogden's Basic, and the concept of a simplified English, gained its greatest publicity just after the Allied victory in World War II as a means for world peace. Ogden was convinced that the world needed to gradually eradicate minority languages and use as much as possible only one: English, in either a simple or complete form.[3]
Although Basic English was not built into a program, similar simplifications have been devised for various international uses. Richards promoted its use in schools in China.[4] It has influenced the creation of Voice of America's Learning English for news broadcasting, and Simplified Technical English, another English-based controlled language designed for writing technical manuals.
What survives of Ogden's Basic English is the basic 850-word list used as the beginner's vocabulary of the English language taught worldwide, especially in Asia.[5]
Ogden tried to simplify English while keeping it normal for native speakers, by specifying grammar restrictions and a controlled small vocabulary which makes extensive use of paraphrasing. Most notably, Ogden allowed only 18 verbs, which he called "operators". His "General Introduction" says, "There are no 'verbs' in Basic English", with the underlying assumption that, as noun use in English is very straightforward but verb use/conjugation is not, the elimination of verbs would be a welcome simplification.[note 1]
What the World needs most is about 1,000 more dead languages—and one more alive.
Ogden's word lists include only word roots, which in practice are extended with the defined set of affixes and the full set of forms allowed for any available word (noun, pronoun, or the limited set of verbs).[note 2] The 850 core words of Basic English are found in Wiktionary's Basic English word list. This core is theoretically enough for everyday life. However, Ogden prescribed that any student should learn an additional 150-word list for everyday work in some particular field, by adding a list of 100 words particularly useful in a general field (e.g., science, verse, business), along with a 50-word list from a more specialised subset of that general field, to make a basic 1000-word vocabulary for everyday work and life.
Moreover, Ogden assumed that any student should already be familiar with (and thus may only review) a core subset of around 200 "international" words.[6]Therefore, a first-level student should graduate with a core vocabulary of around 1200 words. A realistic general core vocabulary could contain around 2000 words (the core 850 words, plus 200 international words, and 1000 words for the general fields of trade, economics, and science). It is enough for a "standard" English level.[7][8]This 2000 word vocabulary represents "what any learner should know". At this level students could start to move on their own.
Ogden's Basic English 2000 word list and Voice of America's Special English 1500 word list serve as dictionaries for the Simple English Wikipedia.
Basic English includes a simple grammar for modifying or combining its 850 words to talk about additional meanings (morphological derivation or inflection). The grammar is based on English, but simplified.[9]
Like all international auxiliary languages (or IALs), Basic English may be criticised as inevitably based on personal preferences, and is thus, paradoxically, inherently divisive.[10] Moreover, like all natural-language-based IALs, Basic is subject to criticism as unfairly biased towards the native speaker community.[note 3]
As a teaching aid forEnglish as a second language, Basic English has been criticised for the choice of the core vocabulary and for its grammatical constraints.[note 4]
In 1944, readability expert Rudolf Flesch published an article in Harper's Magazine, "How Basic is Basic English?", in which he said, "It's not basic, and it's not English." The essence of his complaint is that the vocabulary is too restricted, and, as a result, the text ends up being awkward and more difficult than necessary. He also argues that the words in the Basic vocabulary were arbitrarily selected, and notes that there had been no empirical studies showing that it made language simpler.[11]
In his 1948 paper "A Mathematical Theory of Communication", Claude Shannon contrasted the limited vocabulary of Basic English with James Joyce's Finnegans Wake, a work noted for a wide vocabulary. Shannon notes that the lack of vocabulary in Basic English leads to a very high level of redundancy, whereas Joyce's large vocabulary "is alleged to achieve a compression of semantic content".[12]
In the novel The Shape of Things to Come, published in 1933, H. G. Wells depicted Basic English as the lingua franca of a new elite that after a prolonged struggle succeeds in uniting the world and establishing a totalitarian world government. In the future world of Wells' vision, virtually all members of humanity know this language.
From 1942 to 1944, George Orwell was a proponent of Basic English, but in 1945, he became critical of universal languages. Basic English later inspired his use of Newspeak in Nineteen Eighty-Four.[13]
Evelyn Waugh criticized his own 1945 novel Brideshead Revisited, which he had previously called his magnum opus, in the preface of the 1959 reprint: "It [World War II] was a bleak period of present privation and threatening disaster—the period of soya beans and Basic English—and in consequence the book is infused with a kind of gluttony, for food and wine, for the splendours of the recent past, and for rhetorical and ornamental language that now, with a full stomach, I find distasteful."[14]
In his story "Gulf", science fiction writer Robert A. Heinlein used a constructed language called Speedtalk, in which every Basic English word is replaced with a single phoneme, as an appropriate means of communication for a race of genius supermen.[15]
The Lord's Prayer has often been used for an impressionistic language comparison:
Our Father in heaven,
may your name be kept holy.
Let your kingdom come.
Let your pleasure be done,
as in heaven, so on earth.
Give us this day bread for our needs.
And make us free of our debts,
as we have made free those who are in debt to us.
And let us not be put to the test,
but keep us safe from the Evil One.
Our Father in heaven,
hallowed be your name.
Your kingdom come.
Your will be done,
on earth as it is in heaven.
Give us this day our daily bread.
And forgive us our debts,
as we also have forgiven our debtors.
And do not bring us to the time of trial,
but rescue us from the evil one.
https://en.wikipedia.org/wiki/Basic_English
Letter case is the distinction between the letters that are in larger uppercase or capitals (more formally majuscule) and smaller lowercase (more formally minuscule) in the written representation of certain languages. The writing systems that distinguish between the upper- and lowercase have two parallel sets of letters: each in the majuscule set has a counterpart in the minuscule set. Some counterpart letters have the same shape, and differ only in size (e.g. ⟨C, c⟩, ⟨S, s⟩, ⟨O, o⟩), but for others the shapes are different (e.g., ⟨A, a⟩, ⟨G, g⟩, ⟨F, f⟩). The two case variants are alternative representations of the same letter: they have the same name and pronunciation and are typically treated identically when sorting in alphabetical order.
Letter case is generally applied in a mixed-case fashion, with both upper and lowercase letters appearing in a given piece of text for legibility. The choice of case is often prescribed by the grammar of a language or by the conventions of a particular discipline. In orthography, the uppercase is reserved for special purposes, such as the first letter of a sentence or of a proper noun (called capitalisation, or capitalised words), which makes lowercase more common in regular text.
In some contexts, it is conventional to use one case only. For example, engineering design drawings are typically labelled entirely in uppercase letters, which are easier to distinguish individually than the lowercase when space restrictions require very small lettering. In mathematics, on the other hand, uppercase and lowercase letters denote generally different mathematical objects, which may be related when the two cases of the same letter are used; for example, x may denote an element of a set X.
The terms upper case and lower case may be written as two consecutive words, connected with a hyphen (upper-case and lower-case – particularly if they pre-modify another noun),[1] or as a single word (uppercase and lowercase). These terms originated from the common layouts of the shallow drawers called type cases used to hold the movable type for letterpress printing. Traditionally, the capital letters were stored in a separate shallow tray or "case" that was located above the case that held the small letters.[2][3][4]
Majuscule (/ˈmædʒəskjuːl/, less commonly /məˈdʒʌskjuːl/), for palaeographers, is technically any script whose letters have very few or very short ascenders and descenders, or none at all (for example, the majuscule scripts used in the Codex Vaticanus Graecus 1209, or the Book of Kells). The visual impact of such scripts made majuscule an apt descriptor for what much later came to be more commonly referred to as uppercase letters.
Minuscule refers to lower-case letters. The word is often spelled miniscule, by association with the unrelated word miniature and the prefix mini-. That has traditionally been regarded as a spelling mistake (since minuscule is derived from the word minus[5]), but is now so common that some dictionaries tend to accept it as a non-standard or variant spelling.[6] Miniscule is still less likely, however, to be used in reference to lower-case letters.
The glyphs of lowercase letters can resemble smaller forms of the uppercase glyphs restricted to the baseband (e.g. "C/c" and "S/s", cf. small caps) or can look hardly related (e.g. "D/d" and "G/g"). Here is a comparison of the upper and lower case variants of each letter included in the English alphabet (the exact representation will vary according to the typeface and font used):
(Some lowercase letters have variations e.g. a/ɑ.)
Typographically, the basic difference between the majuscules and minuscules is not that the majuscules are big and minuscules small, but that the majuscules generally are of uniform height (although, depending on the typeface, there may be some exceptions, particularly with Q and sometimes J having a descending element; also, various diacritics can add to the normal height of a letter).
There is more variation in the height of the minuscules, as some of them have parts higher (ascenders) or lower (descenders) than the typical size. Normally, b, d, f, h, k, l, t[note 1] are the letters with ascenders, and g, j, p, q, y are the ones with descenders. In addition, with old-style numerals still used by some traditional or classical fonts, 6 and 8 make up the ascender set, and 3, 4, 5, 7, and 9 the descender set.
A minority of writing systems use two separate cases. Such writing systems are called bicameral scripts. These scripts include the Latin, Cyrillic, Greek, Coptic, Armenian, Glagolitic, Adlam, Warang Citi, Old Hungarian, Garay, Zaghawa, Osage, Vithkuqi, and Deseret scripts. Languages written in these scripts use letter cases as an aid to clarity. The Georgian alphabet has several variants, and there were attempts to use them as different cases, but the modern written Georgian language does not distinguish case.[8]
All other writing systems make no distinction between majuscules and minuscules – a system called unicameral script or unicase. This includes most syllabic and other non-alphabetic scripts.
In scripts with a case distinction, lowercase is generally used for the majority of text; capitals are used for capitalisation and emphasis when boldface is not available. Acronyms (and particularly initialisms) are often written in all-caps, depending on various factors.
Capitalisation is the writing of a word with its first letter in uppercase and the remaining letters in lowercase. Capitalisation rules vary by language and are often quite complex, but in most modern languages that have capitalisation, the first word of every sentence is capitalised, as are all proper nouns.[citation needed]
Capitalisation in English, in terms of the general orthographic rules independent of context (e.g. title vs. heading vs. text), is universally standardised for formal writing. Capital letters are used as the first letter of a sentence, a proper noun, or a proper adjective. The names of the days of the week and the names of the months are also capitalised, as are the first-person pronoun "I"[9] and the vocative particle "O". There are a few pairs of words of different meanings whose only difference is capitalisation of the first letter. Honorifics and personal titles showing rank or prestige are capitalised when used together with the name of the person (for example, "Mr. Smith", "Bishop Gorman", "Professor Moore") or as a direct address, but normally not when used alone and in a more general sense.[10][11] It can also be seen as customary to capitalise any word – in some contexts even a pronoun[12] – referring to the deity of a monotheistic religion.
Other words normally start with a lower-case letter. There are, however, situations where further capitalisation may be used to give added emphasis, for example in headings and publication titles (see below). In some traditional forms of poetry, capitalisation has conventionally been used as a marker to indicate the beginning of a line of verse independent of any grammatical feature. In political writing, parody and satire, the unexpected emphasis afforded by otherwise ill-advised capitalisation is often used to great stylistic effect, such as in the case of George Orwell's Big Brother.
Other languages vary in their use of capitals. For example, in German all nouns are capitalised (this was previously common in English as well, mainly in the 17th and 18th centuries), while in Romance and most other European languages the names of the days of the week, the names of the months, and adjectives of nationality, religion, and so on normally begin with a lower-case letter.[13] On the other hand, in some languages it is customary to capitalise formal polite pronouns, for example De, Dem (Danish), Sie, Ihnen (German), and Vd or Ud (short for usted in Spanish).
Informal communication, such as texting, instant messaging or a handwritten sticky note, may not bother to follow the conventions concerning capitalisation, but that is because its users usually do not expect it to be formal.[9]
Similar orthographic and graphostylistic conventions are used for emphasis or following language-specific or other rules, including:
In English, a variety of case styles are used in various circumstances:
In English-language publications, various conventions are used for the capitalisation of words in publication titles and headlines, including chapter and section headings. The rules differ substantially between individual house styles.
The convention followed by many British publishers (including scientific publishers like Nature and New Scientist, magazines like The Economist, and newspapers like The Guardian and The Times) and many U.S. newspapers is sentence-style capitalisation in headlines, i.e. capitalisation follows the same rules that apply for sentences. This convention is usually called sentence case. It may also be applied to publication titles, especially in bibliographic references and library catalogues. An example of a global publisher whose English-language house style prescribes sentence-case titles and headings is the International Organization for Standardization (ISO).[citation needed]
For publication titles it is, however, a common typographic practice among both British[22] and U.S. publishers to capitalise significant words (and in the United States, this is often applied to headings, too). This family of typographic conventions is usually called title case. For example, R. M. Ritter's Oxford Manual of Style (2002) suggests capitalising "the first word and all nouns, pronouns, adjectives, verbs and adverbs, but generally not articles, conjunctions and short prepositions".[23] This is an old form of emphasis, similar to the more modern practice of using a larger or boldface font for titles. The rules which prescribe which words to capitalise are not based on any grammatically inherent correct–incorrect distinction and are not universally standardised; they differ between style guides, although most style guides tend to follow a few strong conventions, as follows:
Title case is widely used in many English-language publications, especially in the United States. However, its conventions are sometimes not followed strictly – especially in informal writing.
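One common title-case convention (the first word always capitalised; short articles, conjunctions, and prepositions left in lower case) can be sketched as follows. The word list and function name here are illustrative choices, not the prescription of any particular style guide.

```python
def title_case(title, minor_words=None):
    """Capitalise significant words, per one common title-case convention:
    the first word is always capitalised; listed minor words stay lowercase."""
    if minor_words is None:
        # An illustrative set of articles, conjunctions, and short prepositions.
        minor_words = {"a", "an", "the", "and", "but", "or", "nor",
                       "of", "in", "on", "at", "to", "for", "by"}
    words = title.lower().split()
    result = []
    for i, word in enumerate(words):
        if i == 0 or word not in minor_words:
            result.append(word.capitalize())
        else:
            result.append(word)
    return " ".join(result)

print(title_case("a manual of style for the working editor"))
# A Manual of Style for the Working Editor
```

Because the minor-word list differs between style guides, a real implementation would make it configurable, as the parameter above suggests.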
In creative typography, such as music record covers and other artistic material, all styles are commonly encountered, including all-lowercase letters and special case styles, such as studly caps (see below). For example, in the wordmarks of video games it is not uncommon to use stylised upper-case letters at the beginning and end of a title, with the intermediate letters in small caps or lower case (e.g., ArcaniA, ArmA, and DmC).
Single-word proper nouns are capitalised in formal written English, unless the name is intentionally stylised to break this rule (such as e e cummings, bell hooks, eden ahbez, and danah boyd).
Multi-word proper nouns include names of organisations, publications, and people. Often the rules for "title case" (described in the previous section) are applied to these names, so that non-initial articles, conjunctions, and short prepositions are lowercase, and all other words are uppercase. For example, the short preposition "of" and the article "the" are lowercase in "Steering Committee of the Finance Department". Usually only capitalised words are used to form an acronym variant of the name, though there is some variation in this.
With personal names, this practice can vary (sometimes all words are capitalised, regardless of length or function), but is not limited to English names. Examples include the English names Tamar of Georgia and Catherine the Great, "van" and "der" in Dutch names, "von" and "zu" in German, "de", "los", and "y" in Spanish names, "de" or "d'" in French names, and "ibn" in Arabic names.
Some surname prefixes also affect the capitalisation of the following internal letter or word, for example "Mac" in Celtic names and "Al" in Arabic names.
In the International System of Units (SI), a letter usually has different meanings in upper and lower case when used as a unit symbol. Generally, unit symbols are written in lower case, but if the name of the unit is derived from a proper noun, the first letter of the symbol is capitalised. Nevertheless, the name of the unit, if spelled out, is always considered a common noun and written accordingly in lower case.[25] For example:
For the purpose of clarity, the symbol for litre can optionally be written in upper case even though the name is not derived from a proper noun.[25] For example, "one litre" may be written as:
The letter case of a prefix symbol is determined independently of the unit symbol to which it is attached. Lower case is used for all submultiple prefix symbols and the small multiple prefix symbols up to "k" (for kilo, meaning 10³ = 1000 multiplier), whereas upper case is used for larger multipliers:[25]
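The significance of prefix case can be shown with a small sketch. The table below is an illustrative extract only; the factors follow the SI convention described above, and the function name is a hypothetical choice.

```python
# SI prefix symbols where case alone changes the meaning:
# "m" (milli) and "M" (mega) differ by a factor of a billion.
SI_PREFIXES = {
    "m": 1e-3,  # milli (lower case submultiple)
    "k": 1e3,   # kilo  (the largest lower-case multiple)
    "M": 1e6,   # mega  (upper case)
    "G": 1e9,   # giga
}

def to_watts(value, prefix):
    """Convert a prefixed power value to plain watts."""
    return value * SI_PREFIXES[prefix]

print(to_watts(5, "m"))  # 0.005      -> 5 mW (milliwatts)
print(to_watts(5, "M"))  # 5000000.0  -> 5 MW (megawatts)
```

This is why a case-insensitive lookup of unit prefixes would be unsafe: folding "m" and "M" together conflates milliwatts with megawatts.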
Some case styles are not used in standard English, but are common in computer programming, product branding, or other specialised fields.
The usage derives from how programming languages are parsed, programmatically. They generally separate their syntactic tokens by simple whitespace, including space characters, tabs, and newlines. When tokens such as function and variable names start to multiply in complex software development, and there is still a need to keep the source code human-readable, naming conventions make this possible. So for example, a function dealing with matrix multiplication might formally be called:
In each case, the capitalisation or lack thereof supports a different function. In the first, FORTRAN compatibility requires case-insensitive naming and short function names. The second supports easily discernible function and argument names and types, within the context of an imperative, strongly typed language. The third supports the macro facilities of LISP, and its tendency to view programs and data minimalistically, and as interchangeable. The fourth idiom needs much less syntactic sugar overall, because much of the semantics are implied, but because of its brevity, and thus its lack of any need for capitalisation or multipart words, might also make the code too abstract and overloaded for the common programmer to understand.
Understandably then, such coding conventions are highly subjective, and can lead to rather opinionated debate, such as in the case of editor wars, or those about indent style. Capitalisation is no exception.
"theQuickBrownFoxJumpsOverTheLazyDog" or "TheQuickBrownFoxJumpsOverTheLazyDog"
Spaces and punctuation are removed and the first letter of each word is capitalised. If this includes the first letter of the first word (CamelCase, "PowerPoint", "TheQuick...", etc.), the case is sometimes called upper camel case (or, illustratively, CamelCase), Pascal case in reference to the Pascal programming language,[27] or bumpy case.
When the first letter of the first word is lowercase ("iPod", "eBay", "theQuickBrownFox..."), the case is usually known as lower camel case or dromedary case (illustratively: dromedaryCase). This format has become popular in the branding of information technology products and services, with an initial "i" meaning "Internet" or "intelligent",[citation needed] as in iPod, or an initial "e" meaning "electronic", as in email (electronic mail) or e-commerce (electronic commerce).
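The conversion from space-separated words to either camel-case variant can be sketched as follows; the function name and flag are illustrative, not a standard library feature.

```python
def to_camel_case(phrase, upper_first=False):
    """Join words, capitalising each word after the first.
    upper_first=True gives upper camel case (Pascal case)."""
    words = phrase.split()
    first = words[0].capitalize() if upper_first else words[0].lower()
    return first + "".join(w.capitalize() for w in words[1:])

print(to_camel_case("the quick brown fox"))                    # theQuickBrownFox
print(to_camel_case("the quick brown fox", upper_first=True))  # TheQuickBrownFox
```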
"the_quick_brown_fox_jumps_over_the_lazy_dog"
Punctuation is removed and spaces are replaced by single underscores. Normally the letters share the same case (e.g. "UPPER_CASE_EMBEDDED_UNDERSCORE" or "lower_case_embedded_underscore") but the case can be mixed, as in OCaml variant constructors (e.g. "Upper_then_lowercase").[28] The style may also be called pothole case, especially in Python programming, in which this convention is often used for naming variables. Illustratively, it may be rendered snake_case, pothole_case, etc. When all-upper-case, it may be referred to as screaming snake case (or SCREAMING_SNAKE_CASE) or hazard case.[29]
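A comparable sketch for snake case and its screaming variant, again with illustrative naming:

```python
import re

def to_snake_case(phrase, screaming=False):
    """Replace runs of spaces and punctuation with single underscores."""
    words = re.findall(r"[A-Za-z0-9]+", phrase)
    result = "_".join(words)
    return result.upper() if screaming else result.lower()

print(to_snake_case("The quick brown fox!"))        # the_quick_brown_fox
print(to_snake_case("The quick brown fox!", True))  # THE_QUICK_BROWN_FOX
```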
"the-quick-brown-fox-jumps-over-the-lazy-dog"
Similar to snake case, above, except hyphens rather than underscores are used to replace spaces. It is also known as spinal case, param case, Lisp case in reference to the Lisp programming language, or dash case (or illustratively as kebab-case, looking similar to the skewer that sticks through a kebab). If every word is capitalised, the style is known as train case (TRAIN-CASE).[30]
In CSS, all property names and most keyword values are primarily formatted in kebab case.
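The CSS convention sits alongside the camel-case property names used in JavaScript's DOM API (for example, the CSS property background-color appears as backgroundColor in scripts). A sketch of that mapping follows; it is illustrative, not how browsers implement it.

```python
import re

def css_property_name(camel_name):
    """Convert a camelCase DOM-style property name to its kebab-case CSS form."""
    # Insert a hyphen at each lowercase/digit-to-uppercase boundary, then lowercase.
    return re.sub(r"([a-z0-9])([A-Z])", r"\1-\2", camel_name).lower()

print(css_property_name("backgroundColor"))  # background-color
print(css_property_name("borderTopWidth"))   # border-top-width
```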
"the·quick·brown·fox·jumps·over·the·lazy·dog"
Similar to kebab case, above, except it uses interpuncts rather than hyphens to replace spaces. Its use is possible in many programming languages that support Unicode identifiers, since unlike the hyphen the interpunct generally does not conflict with a reserved use for denoting an operator, although exceptions such as Julia exist.[31] Its absence from most standard keyboard layouts certainly contributes to its infrequent use, though most modern input methods allow it to be typed rather easily.[32]
"tHeqUicKBrOWnFoXJUmpsoVeRThElAzydOG"
Studly caps are an arbitrary mixing of the cases with no semantic or syntactic significance to the use of the capitals. Sometimes only vowels are upper case, at other times upper and lower case are alternated, but often it is simply random. The name comes from the sarcastic or ironic implication that it was used in an attempt by the writer to convey their own coolness (studliness).[citation needed] It is also used to mock the violation of standard English case conventions by marketers in the naming of computer software packages, even when there is no technical requirement to do so – e.g., Sun Microsystems' naming of a windowing system NeWS. Illustrative naming of the style is, naturally, random: stUdlY cAps, StUdLy CaPs, etc.
In the character sets developed for computing, each upper- and lower-case letter is encoded as a separate character. In order to enable case folding and case conversion, the software needs to link together the two characters representing the case variants of a letter. (Some old character-encoding systems, such as the Baudot code, are restricted to one set of letters, usually represented by the upper-case variants.)
Case-insensitive operations can be said to fold case, from the idea of folding the character code table so that upper- and lower-case letters coincide. The conversion of letter case in a string is common practice in computer applications, for instance to make case-insensitive comparisons. Many high-level programming languages provide simple methods for case conversion, at least for the ASCII character set.
Whether or not the case variants are treated as equivalent to each other varies depending on the computer system and context. For example, user passwords are generally case sensitive in order to allow more diversity and make them more difficult to break. In contrast, case is often ignored in keyword searches in order to ignore insignificant variations in keyword capitalisation both in queries and queried material.
Unicodedefines case folding through the three case-mapping properties of eachcharacter: upper case, lower case, and title case (in this context, "title case" relates toligaturesanddigraphsencoded as mixed-casesingle characters, in which the first component is in upper case and the second component in lower case).[33]These properties relate all characters in scripts with differing cases to the other case variants of the character.
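The upper-, lower- and title-case properties, and the fact that case mappings are not always one-to-one, can be illustrated in Python, whose `str.casefold()` implements Unicode case folding for caseless matching:

```python
# Unicode case mappings are not always one-to-one.
print("ß".lower())     # 'ß'  – the German sharp s has no distinct lower case
print("ß".upper())     # 'SS' – upper-casing expands it to two characters
print("ß".casefold())  # 'ss' – case folding maps it to 'ss'
print("straße".casefold() == "STRASSE".casefold())  # True

# The digraph dž is encoded with three case forms: Ǆ (upper), ǅ (title), ǆ (lower).
print("ǆ".upper())  # 'Ǆ'
print("ǆ".title())  # 'ǅ'  – title case: first component upper, second lower
```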
As briefly discussed inUnicodeTechnical Note #26,[34]"In terms of implementation issues, any attempt at a unification of Latin, Greek, and Cyrillic would wreak havoc [and] make casing operations an unholy mess, in effect making all casing operations context sensitive […]". In other words, while the shapes of letters likeA,B,E,H,K,M,O,P,T,X,Yand so on are shared between the Latin, Greek, and Cyrillic alphabets (and small differences in their canonical forms may be considered to be of a merelytypographicalnature), it would still be problematic for a multilingualcharacter setor afontto provide only asinglecode pointfor, say, uppercase letterB, as this would make it quite difficult for a wordprocessor to change that single uppercase letter to one of the three different choices for the lower-case letter, the Latinb(U+0062), Greekβ(U+03B2) or Cyrillicв(U+0432). Therefore, the corresponding Latin, Greek and Cyrillic upper-case letters (U+0042, U+0392 and U+0412, respectively) are also encoded as separate characters, despite their appearance being identical. Without letter case, a "unified European alphabet" – such asABБCГDΔΕЄЗFΦGHIИJ...Z, with an appropriate subset for each language – is feasible; but considering letter case, it becomes very clear that these alphabets are rather distinct sets of symbols.
Most modernword processorsprovide automated case conversion with a simple click or keystroke. For example, in Microsoft Office Word, there is a dialog box for toggling the selected text through UPPERCASE, then lowercase, then Title Case (actually start caps; exception words must be lowercased individually). The keystroke⇧ Shift+F3does the same.
In some forms of BASIC there are two functions for case conversion, UCASE$ and LCASE$, which convert a string to upper and lower case respectively.
C and C++, as well as any C-like language that conforms to its standard library, provide the functions toupper and tolower in the header ctype.h.
Case conversion is different with different character sets. In ASCII or EBCDIC, case can be converted arithmetically, by adding or subtracting the fixed offset between corresponding upper- and lower-case letters.
This only works because the letters of upper and lower cases are spaced out equally. In ASCII they are consecutive, whereas with EBCDIC they are not; nonetheless the upper-case letters are arranged in the same pattern and with the same gaps as are the lower-case letters, so the technique still works.
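The constant-offset technique can be sketched as follows (shown in Python for illustration; in C the same arithmetic is applied directly to char values). In ASCII, 'a' is 0x61 and 'A' is 0x41, so the two cases differ by a fixed offset of 0x20:

```python
# ASCII case conversion by fixed code-point offset.
def to_upper_ascii(c):
    if "a" <= c <= "z":
        return chr(ord(c) - 0x20)  # e.g. 'h' (0x68) -> 'H' (0x48)
    return c                       # leave other characters unchanged

print("".join(map(to_upper_ascii, "Hello, World!")))  # HELLO, WORLD!
```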
Some computer programming languages offer facilities for converting text to a form in which all words are capitalised.Visual Basiccalls this "proper case";Pythoncalls it "title case". This differs from usualtitle casingconventions, such as the English convention in which minor words are not capitalised.
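A sketch of the difference in Python: `str.title()` capitalises every word regardless of its part of speech, producing start case rather than editorial title case, and it treats every non-letter as a word boundary:

```python
# str.title() capitalises every word, including minor words.
print("the tale of two cities".title())  # 'The Tale Of Two Cities'

# Non-letters act as word boundaries, which can surprise:
print("don't stop".title())              # "Don'T Stop"
```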
Originallyalphabetswere written entirely in majuscule letters, spaced between well-defined upper and lower bounds. When written quickly with apen, these tended to turn into rounder and much simpler forms. It is from these that the first minuscule hands developed, thehalf-uncialsand cursive minuscule, which no longer stayed bound between a pair of lines.[35]These in turn formed the foundations for theCarolingian minusculescript, developed byAlcuinfor use in the court ofCharlemagne, which quickly spread across Europe. The advantage of the minuscule over majuscule was improved, faster readability.[citation needed]
InLatin,papyrifromHerculaneumdating before 79 CE (when it was destroyed) have been found that have been written in oldRoman cursive, where the early forms of minuscule letters "d", "h" and "r", for example, can already be recognised. According to papyrologistKnut Kleve, "The theory, then, that the lower-case letters have been developed from the fifth centuryuncialsand the ninth century Carolingian minuscules seems to be wrong."[36]Both majuscule and minuscule letters existed, but the difference between the two variants was initially stylistic rather than orthographic and the writing system was still basically unicameral: a given handwritten document could use either one style or the other but these were not mixed. European languages, except forAncient Greekand Latin, did not make the case distinction before about 1300.[citation needed]
The timeline of writing in Western Europe can be divided into four eras:[citation needed]
Traditionally, certain letters were rendered differently according to a set of rules. In particular, those letters that began sentences or nouns were made larger and often written in a distinct script. There was no fixed capitalisation system until the early 18th century. TheEnglish languageeventually dropped the rule for nouns, while the German language keeps it.
Similar developments have taken place in other alphabets. The lower-case script for theGreek alphabethas its origins in the 7th century and acquired its quadrilinear form (that is, characterised by ascenders and descenders)[37]in the 8th century. Over time, uncial letter forms were increasingly mixed into the script. The earliest dated Greek lower-case text is theUspenski Gospels(MS 461) in the year 835.[38]The modern practice of capitalising the first letter of every sentence seems to be imported (and is rarely used when printing Ancient Greek materials even today).[citation needed]
The individual type blocks used in handtypesettingare stored in shallow wooden or metal drawers known as "type cases". Each is subdivided into a number of compartments ("boxes") for the storage of different individual letters.[citation needed]
The Oxford Universal Dictionary on Historical Principles (reprinted 1952) indicates that case in this sense (referring to the box or frame used by a compositor in the printing trade) was first used in English in 1588. Originally one large case was used for each typeface; then "divided cases", pairs of cases for majuscules and minuscules, were introduced in the region of today's Belgium by 1563, in England by 1588, and in France before 1723.
The termsupperandlowercase originate from this division. By convention, when the two cases were taken out of the storage rack and placed on a rack on thecompositor's desk, the case containing the capitals and small capitals stood at a steeper angle at the back of the desk, with the case for the small letters, punctuation, and spaces being more easily reached at a shallower angle below it to the front of the desk, hence upper and lower case.[39]
Though pairs of cases were used in English-speaking countries and many European countries in the seventeenth century, in Germany and Scandinavia the single case continued in use.[39]
Various patterns of cases are available, often with the compartments for lower-case letters varying in size according to the frequency of use of letters, so that the commonest letters are grouped together in larger boxes at the centre of the case.[39]The compositor takes the letter blocks from the compartments and places them in acomposing stick, working from left to right and placing the letters upside down with the nick to the top, then sets the assembled type in agalley.[39]
|
https://en.wikipedia.org/wiki/Sentence_case
|
Title caseorheadline caseis a style ofcapitalizationused for rendering thetitlesof published works or works of art inEnglish. When using title case, all words are capitalized, except for minor words (typicallyarticles, shortprepositions, and someconjunctions) that are not the first or last word of the title. There are different rules for which words are major, hence capitalized.
As an example, aheadlinemight be written like this: "The Quick Brown Fox Jumps over the Lazy Dog".
The rules of title case are not universally standardized. The standardization is only at the level of house styles and individualstyle guides. Most English style guides agree that the first and last words should always be capitalized, whereas articles, shortprepositions, and someconjunctionsshould not be. Other rules about the capitalization vary.[1]
Intext processing, title case usually involves the capitalization of all words irrespective of theirpart of speech. This simplified variant of title case is also known asstart caseorinitial caps.
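A minimal sketch of the distinction in Python, using an assumed (non-authoritative) minor-word list — real style guides differ on which words count as minor:

```python
# Minimal title-case sketch: capitalise all words except minor ones,
# but always capitalise the first and last word.
MINOR = {"a", "an", "the", "and", "but", "or", "nor", "for", "so", "yet",
         "at", "by", "in", "of", "on", "to", "up", "over"}

def title_case(text):
    words = text.lower().split()
    out = []
    for i, w in enumerate(words):
        if i in (0, len(words) - 1) or w not in MINOR:
            out.append(w[:1].upper() + w[1:])
        else:
            out.append(w)
    return " ".join(out)

print(title_case("the quick brown fox jumps over the lazy dog"))
# 'The Quick Brown Fox Jumps over the Lazy Dog'
```

By contrast, start case (initial caps) would also capitalise "over" and the second "the".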
According to the Associated Press Stylebook (55th edition, 2020), the following rules should be applied:[2]
According toThe Chicago Manual of Style(15th edition), the following rules should be applied:[4]
Since the 18th edition (2024), prepositions of more than four letters are capitalized.[6]
According to the 9th edition of theModern Language Association Handbook, the following title capitalization rules should be applied:[7]
According to the 7th edition of thePublication Manual of the American Psychological Association, the following title capitalization rules should be applied:[7]
According to the 11th edition of theAmerican Medical Association (AMA) Manual of Style, the following title capitalization rules should be applied:[7]
According to the 21st edition ofThe Bluebook, used for legal citations, the following title capitalization rules should be applied:[7]
Whether title case or sentence case is used in the references of scholarly publications is determined by the citation style used and can differ from the usage in titles or headings. For example, APA Style uses sentence case for the titles of cited works in the reference list, but title case for the title of the current publication (or for the title of a publication if it is mentioned in the text instead). Moreover, it uses title case for the titles of periodicals even in the references.[8] Other citation styles, such as The Chicago Manual of Style, use title case for the titles of cited works in the reference list as well.[9]
|
https://en.wikipedia.org/wiki/Title_case
|
TheBellman pseudospectral methodis apseudospectral methodforoptimal controlbased onBellman's principle of optimality. It is part of the larger theory ofpseudospectral optimal control, a term coined byRoss.[1]The method is named afterRichard E. Bellman. It was introduced byRosset al.[2][3]first as a means to solve multiscale optimal control problems, and later expanded to obtain suboptimal solutions for general optimal control problems.
The multiscale version of the Bellman pseudospectral method is based on the spectral convergence property of theRoss–Fahroo pseudospectral methods. That is, because the Ross–Fahroo pseudospectral method converges at an exponentially fast rate, pointwise convergence to a solution is obtained at very low number of nodes even when the solution has high-frequency components. Thisaliasingphenomenon in optimal control was first discovered by Ross et al.[2]Rather than use signal processing techniques to anti-alias the solution, Ross et al. proposed that Bellman's principle of optimality can be applied to the converged solution to extract information between the nodes. Because the Gauss–Lobatto nodes cluster at the boundary points, Ross et al. suggested that if the node density around the initial conditions satisfy theNyquist–Shannon sampling theorem, then the complete solution can be recovered by solving the optimal control problem in a recursive fashion over piecewise segments known as Bellman segments.[2]
In an expanded version of the method, Ross et al.[3] proposed that the method could also be used to generate feasible solutions that are not necessarily optimal. In this version, the Bellman pseudospectral method can be applied with an even smaller number of nodes, even with the knowledge that the solution may not have converged to the optimal one. In this situation, one obtains a feasible solution.
A remarkable feature of the Bellman pseudospectral method is that it automatically determines several measures of suboptimality based on the original pseudospectral cost and the cost generated by the sum of the Bellman segments.[2][3]
One of the computational advantages of the Bellman pseudospectral method is that it allows one to escape Gaussian rules in the distribution of node points. That is, in a standard pseudospectral method, the distribution of node points is Gaussian (typically Gauss–Lobatto for finite horizon and Gauss–Radau for infinite horizon). The Gaussian points are sparse in the middle of the interval (the middle is defined in a shifted sense for infinite-horizon problems) and dense at the boundaries. The second-order accumulation of points near the boundaries has the effect of wasting nodes. The Bellman pseudospectral method takes advantage of the node accumulation at the initial point to anti-alias the solution and discards the remainder of the nodes. Thus the final distribution of nodes is non-Gaussian and dense, while the computational method retains a sparse structure.
The Bellman pseudospectral method was first applied by Ross et al.[2]to solve the challenging problem of very low thrust trajectory optimization. It has been successfully applied to solve a practical problem of generating very high accuracy solutions to a trans-Earth-injection problem of bringing a space capsule from a lunar orbit to a pin-pointed Earth-interface condition for successful reentry.[4][5]
The Bellman pseudospectral method is most commonly used as an additional check on the optimality of a pseudospectral solution generated by the Ross–Fahroo pseudospectral methods. That is, in addition to the use ofPontryagin's minimum principlein conjunction with the solutions obtained by the Ross–Fahroo pseudospectral methods, the Bellman pseudospectral method is used as a primal-only test on the optimality of the computed solution.[6][7]
|
https://en.wikipedia.org/wiki/Bellman_pseudospectral_method
|
The Hamilton–Jacobi–Bellman (HJB) equation is a nonlinear partial differential equation that provides necessary and sufficient conditions for optimality of a control with respect to a loss function.[1] Its solution is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation.[2][3]
The equation is a result of the theory ofdynamic programmingwhich was pioneered in the 1950s byRichard Bellmanand coworkers.[4][5][6]The connection to theHamilton–Jacobi equationfromclassical physicswas first drawn byRudolf Kálmán.[7]Indiscrete-timeproblems, the analogousdifference equationis usually referred to as theBellman equation.
While classicalvariational problems, such as thebrachistochrone problem, can be solved using the Hamilton–Jacobi–Bellman equation,[8]the method can be applied to a broader spectrum of problems. Further it can be generalized tostochasticsystems, in which case the HJB equation is a second-orderelliptic partial differential equation.[9]A major drawback, however, is that the HJB equation admits classical solutions only for asufficiently smoothvalue function, which is not guaranteed in most situations. Instead, the notion of aviscosity solutionis required, in which conventional derivatives are replaced by (set-valued)subderivatives.[10]
Consider the following problem in deterministic optimal control over the time period[0,T]{\displaystyle [0,T]}:
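In the notation defined just below, a standard statement of the cost functional being minimized is:

```latex
V(x(0), 0) = \min_{u} \left\{ \int_{0}^{T} C[x(t), u(t)] \, \mathrm{d}t + D[x(T)] \right\}
```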
whereC[⋅]{\displaystyle C[\cdot ]}is the scalar cost rate function andD[⋅]{\displaystyle D[\cdot ]}is a function that gives thebequest valueat the final state,x(t){\displaystyle x(t)}is the system state vector,x(0){\displaystyle x(0)}is assumed given, andu(t){\displaystyle u(t)}for0≤t≤T{\displaystyle 0\leq t\leq T}is the control vector that we are trying to find. Thus,V(x,t){\displaystyle V(x,t)}is thevalue function.
The system must also be subject to
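In the notation used here, a standard form of this constraint is:

```latex
\dot{x}(t) = F[x(t), u(t)]
```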
whereF[⋅]{\displaystyle F[\cdot ]}gives the vector determining physical evolution of the state vector over time.
For this simple system, the Hamilton–Jacobi–Bellman partial differential equation is
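In its standard form, writing \(\nabla_x V\) for the gradient of the value function with respect to the state, the equation reads:

```latex
\frac{\partial V(x,t)}{\partial t} + \min_{u} \left\{ \nabla_{x} V(x,t) \cdot F(x,u) + C(x,u) \right\} = 0
```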
subject to the terminal condition
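which, in the notation above, is:

```latex
V(x, T) = D(x)
```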
As before, the unknown scalar functionV(x,t){\displaystyle V(x,t)}in the above partial differential equation is the Bellmanvalue function, which represents the cost incurred from starting in statex{\displaystyle x}at timet{\displaystyle t}and controlling the system optimally from then until timeT{\displaystyle T}.
Intuitively, the HJB equation can be derived as follows. IfV(x(t),t){\displaystyle V(x(t),t)}is the optimal cost-to-go function (also called the 'value function'), then by Richard Bellman'sprinciple of optimality, going from timettot+dt, we have
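In standard notation, this application of the principle of optimality reads:

```latex
V(x(t), t) = \min_{u} \left\{ C(x(t), u(t)) \, \mathrm{d}t + V\!\left(x(t+\mathrm{d}t),\, t+\mathrm{d}t\right) \right\}
```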
Note that theTaylor expansionof the first term on the right-hand side is
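namely, writing \(\nabla_x V\) for the state gradient of the value function:

```latex
V\!\left(x(t+\mathrm{d}t),\, t+\mathrm{d}t\right) = V(x(t), t) + \frac{\partial V}{\partial t} \, \mathrm{d}t + \nabla_x V \cdot \dot{x}(t) \, \mathrm{d}t + o(\mathrm{d}t)
```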
whereo(dt){\displaystyle {\mathcal {o}}(dt)}denotes the terms in the Taylor expansion of higher order than one inlittle-onotation. Then if we subtractV(x(t),t){\displaystyle V(x(t),t)}from both sides, divide bydt, and take the limit asdtapproaches zero, we obtain the HJB equation defined above.
The HJB equation is usuallysolved backwards in time, starting fromt=T{\displaystyle t=T}and ending att=0{\displaystyle t=0}.[11]
When solved over the whole of state space andV(x){\displaystyle V(x)}is continuously differentiable, the HJB equation is anecessary and sufficient conditionfor an optimum when the terminal state is unconstrained.[12]If we can solve forV{\displaystyle V}then we can find from it a controlu{\displaystyle u}that achieves the minimum cost.
In the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including the viscosity solution (Pierre-Louis Lions and Michael Crandall),[13] the minimax solution (Andrei Izmailovich Subbotin), and others.
Approximate dynamic programming has been introduced byD. P. BertsekasandJ. N. Tsitsikliswith the use ofartificial neural networks(multilayer perceptrons) for approximating the Bellman function in general.[14]This is an effective mitigation strategy for reducing the impact of dimensionality by replacing the memorization of the complete function mapping for the whole space domain with the memorization of the sole neural network parameters. In particular, for continuous-time systems, an approximate dynamic programming approach that combines both policy iterations with neural networks was introduced.[15]In discrete-time, an approach to solve the HJB equation combining value iterations and neural networks was introduced.[16]
Alternatively, it has been shown thatsum-of-squares optimizationcan yield an approximate polynomial solution to the Hamilton–Jacobi–Bellman equation arbitrarily well with respect to theL1{\displaystyle L^{1}}norm.[17]
The idea of solving a control problem by applying Bellman's principle of optimality and then working backwards in time to construct an optimizing strategy can be generalized to stochastic control problems. Consider a problem similar to the one above:
now with(Xt)t∈[0,T]{\displaystyle (X_{t})_{t\in [0,T]}\,\!}the stochastic process to optimize and(ut)t∈[0,T]{\displaystyle (u_{t})_{t\in [0,T]}\,\!}the steering. By first using Bellman and then expandingV(Xt,t){\displaystyle V(X_{t},t)}withItô's rule, one finds the stochastic HJB equation
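A standard statement of the resulting equation is:

```latex
\min_{u} \left\{ \mathcal{A} V(x,t) + C(x,u) \right\} = 0
```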
whereA{\displaystyle {\mathcal {A}}}represents thestochastic differentiation operator, and subject to the terminal condition
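which here takes the form:

```latex
V(x, T) = D(x)
```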
Note that the randomness has disappeared. In this case a solution V of the latter does not necessarily solve the primal problem; it is only a candidate, and a further verification argument is required. This technique is widely used in financial mathematics to determine optimal investment strategies in the market (see for example Merton's portfolio problem).
As an example, we can look at a system with linear stochastic dynamics and quadratic cost. If the system dynamics is given by
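A typical concrete form for such linear stochastic dynamics, consistent with the time-varying cost rate below (an assumed instance, with \(W_t\) a Wiener process), is:

```latex
\mathrm{d}x_t = \left( a(t)\, x_t + b(t)\, u_t \right) \mathrm{d}t + \sigma(t)\, \mathrm{d}W_t
```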
and the cost accumulates at rateC(xt,ut)=r(t)ut2/2+q(t)xt2/2{\displaystyle C(x_{t},u_{t})=r(t)u_{t}^{2}/2+q(t)x_{t}^{2}/2}, the HJB equation is given by
with optimal action given by
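Assuming dynamics of the form \(\mathrm{d}x_t = (a(t)x_t + b(t)u_t)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}W_t\), minimizing the Hamiltonian over \(u\) with the quadratic control cost \(r(t)u_t^2/2\) gives:

```latex
u_t = -\frac{b(t)}{r(t)} \, \frac{\partial V}{\partial x}(x_t, t)
```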
Assuming a quadratic form for the value function, we obtain the usualRiccati equationfor the Hessian of the value function as is usual forLinear-quadratic-Gaussian control.
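For the scalar case, the reduction to a Riccati equation can be sketched numerically (a minimal sketch with assumed illustrative coefficients; the noise term of the stochastic problem is omitted here, since additive noise shifts the value function only by a state-independent term):

```python
# Quadratic ansatz V(x, t) = S(t) x²/2 for a scalar LQ problem with
# dynamics xdot = a x + b u and cost rate (q x² + r u²)/2. Substituting
# into the HJB equation yields the Riccati ODE
#   dS/dt = -2 a S + S² b²/r - q,   S(T) = 0,
# which is integrated backward in time. All numbers are illustrative.

a, b, q, r = 0.0, 1.0, 1.0, 1.0
T, dt = 10.0, 0.001

S = 0.0  # terminal condition S(T) = 0
t = T
while t > 0.0:
    dS = -2.0 * a * S + S * S * b * b / r - q  # dS/dt
    S -= dt * dS  # step backward: S(t - dt) ≈ S(t) - dt · dS/dt
    t -= dt

# For a = 0, b = q = r = 1 the exact solution is S(t) = tanh(T - t),
# so S(0) is close to tanh(10) ≈ 1.
```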
|
https://en.wikipedia.org/wiki/Hamilton%E2%80%93Jacobi%E2%80%93Bellman_equation
|
Optimal control theoryis a branch ofcontrol theorythat deals with finding acontrolfor adynamical systemover a period of time such that anobjective functionis optimized.[1]It has numerous applications in science, engineering and operations research. For example, the dynamical system might be aspacecraftwith controls corresponding to rocket thrusters, and the objective might be to reach theMoonwith minimum fuel expenditure.[2]Or the dynamical system could be a nation'seconomy, with the objective to minimizeunemployment; the controls in this case could befiscalandmonetary policy.[3]A dynamical system may also be introduced to embedoperations research problemswithin the framework of optimal control theory.[4][5]
Optimal control is an extension of thecalculus of variations, and is a mathematical optimization method for derivingcontrol policies.[6]The method is largely due to the work ofLev PontryaginandRichard Bellmanin the 1950s, after contributions to calculus of variations byEdward J. McShane.[7]Optimal control can be seen as acontrol strategyincontrol theory.[1]
Optimal control deals with the problem of finding a control law for a given system such that a certainoptimality criterionis achieved. A control problem includes acost functionalthat is afunctionof state and control variables. Anoptimal controlis a set ofdifferential equationsdescribing the paths of the control variables that minimize the cost function. The optimal control can be derived usingPontryagin's maximum principle(anecessary conditionalso known as Pontryagin's minimum principle or simply Pontryagin's principle),[8]or by solving theHamilton–Jacobi–Bellman equation(asufficient condition).
We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order tominimizethe total traveling time? In this example, the termcontrol lawrefers specifically to the way in which the driver presses the accelerator and shifts the gears. Thesystemconsists of both the car and the road, and theoptimality criterionis the minimization of the total traveling time. Control problems usually include ancillaryconstraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits, etc.
A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, andinitial conditionsof the system.Constraintsare often interchangeable with the cost function.
Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.
A more abstract framework goes as follows.[1]Minimize the continuous-time cost functionalJ[x(⋅),u(⋅),t0,tf]:=E[x(t0),t0,x(tf),tf]+∫t0tfF[x(t),u(t),t]dt{\displaystyle J[{\textbf {x}}(\cdot ),{\textbf {u}}(\cdot ),t_{0},t_{f}]:=E\,[{\textbf {x}}(t_{0}),t_{0},{\textbf {x}}(t_{f}),t_{f}]+\int _{t_{0}}^{t_{f}}F\,[{\textbf {x}}(t),{\textbf {u}}(t),t]\,\mathrm {d} t}subject to the first-order dynamic constraints (thestate equation)x˙(t)=f[x(t),u(t),t],{\displaystyle {\dot {\textbf {x}}}(t)={\textbf {f}}\,[\,{\textbf {x}}(t),{\textbf {u}}(t),t],}the algebraicpath constraintsh[x(t),u(t),t]≤0,{\displaystyle {\textbf {h}}\,[{\textbf {x}}(t),{\textbf {u}}(t),t]\leq {\textbf {0}},}and theendpoint conditionse[x(t0),t0,x(tf),tf]=0{\displaystyle {\textbf {e}}[{\textbf {x}}(t_{0}),t_{0},{\textbf {x}}(t_{f}),t_{f}]=0}wherex(t){\displaystyle {\textbf {x}}(t)}is thestate,u(t){\displaystyle {\textbf {u}}(t)}is thecontrol,t{\displaystyle t}is the independent variable (generally speaking, time),t0{\displaystyle t_{0}}is the initial time, andtf{\displaystyle t_{f}}is the terminal time. The termsE{\displaystyle E}andF{\displaystyle F}are called theendpoint costand therunning costrespectively. In the calculus of variations,E{\displaystyle E}andF{\displaystyle F}are referred to as the Mayer term and theLagrangian, respectively. Furthermore, it is noted that the path constraints are in generalinequalityconstraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution[x∗(t),u∗(t),t0∗,tf∗]{\displaystyle [{\textbf {x}}^{*}(t),{\textbf {u}}^{*}(t),t_{0}^{*},t_{f}^{*}]}to the optimal control problem islocally minimizing.
A special case of the general nonlinear optimal control problem given in the previous section is thelinear quadratic(LQ) optimal control problem. The LQ problem is stated as follows. Minimize thequadraticcontinuous-time cost functionalJ=12xT(tf)Sfx(tf)+12∫t0tf[xT(t)Q(t)x(t)+uT(t)R(t)u(t)]dt{\displaystyle J={\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}(t_{f})\mathbf {S} _{f}\mathbf {x} (t_{f})+{\tfrac {1}{2}}\int _{t_{0}}^{t_{f}}[\,\mathbf {x} ^{\mathsf {T}}(t)\mathbf {Q} (t)\mathbf {x} (t)+\mathbf {u} ^{\mathsf {T}}(t)\mathbf {R} (t)\mathbf {u} (t)]\,\mathrm {d} t}
Subject to thelinearfirst-order dynamic constraintsx˙(t)=A(t)x(t)+B(t)u(t),{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t),}and the initial conditionx(t0)=x0{\displaystyle \mathbf {x} (t_{0})=\mathbf {x} _{0}}
A particular form of the LQ problem that arises in many control system problems is that of thelinear quadratic regulator(LQR) where all of the matrices (i.e.,A{\displaystyle \mathbf {A} },B{\displaystyle \mathbf {B} },Q{\displaystyle \mathbf {Q} }, andR{\displaystyle \mathbf {R} }) areconstant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limittf→∞{\displaystyle t_{f}\rightarrow \infty }(this last assumption is what is known asinfinite horizon). The LQR problem is stated as follows. Minimize the infinite horizon quadratic continuous-time cost functionalJ=12∫0∞[xT(t)Qx(t)+uT(t)Ru(t)]dt{\displaystyle J={\tfrac {1}{2}}\int _{0}^{\infty }[\mathbf {x} ^{\mathsf {T}}(t)\mathbf {Q} \mathbf {x} (t)+\mathbf {u} ^{\mathsf {T}}(t)\mathbf {R} \mathbf {u} (t)]\,\mathrm {d} t}
Subject to thelinear time-invariantfirst-order dynamic constraintsx˙(t)=Ax(t)+Bu(t),{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t),}and the initial conditionx(t0)=x0{\displaystyle \mathbf {x} (t_{0})=\mathbf {x} _{0}}
In the finite-horizon case the matrices are restricted in thatQ{\displaystyle \mathbf {Q} }andR{\displaystyle \mathbf {R} }are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, thematricesQ{\displaystyle \mathbf {Q} }andR{\displaystyle \mathbf {R} }are not only positive-semidefinite and positive-definite, respectively, but are alsoconstant. These additional restrictions onQ{\displaystyle \mathbf {Q} }andR{\displaystyle \mathbf {R} }in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost function isbounded, the additional restriction is imposed that the pair(A,B){\displaystyle (\mathbf {A} ,\mathbf {B} )}iscontrollable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize thecontrol energy(measured as a quadratic form).
The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to zero-state and hence driving the output of the system to zero. This is indeed correct. However the problem of driving the output to a desired nonzero level can be solvedafterthe zero output one is. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback formu(t)=−K(t)x(t){\displaystyle \mathbf {u} (t)=-\mathbf {K} (t)\mathbf {x} (t)}whereK(t){\displaystyle \mathbf {K} (t)}is a properly dimensioned matrix, given asK(t)=R−1BTS(t),{\displaystyle \mathbf {K} (t)=\mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} (t),}andS(t){\displaystyle \mathbf {S} (t)}is the solution of the differentialRiccati equation. The differential Riccati equation is given asS˙(t)=−S(t)A−ATS(t)+S(t)BR−1BTS(t)−Q{\displaystyle {\dot {\mathbf {S} }}(t)=-\mathbf {S} (t)\mathbf {A} -\mathbf {A} ^{\mathsf {T}}\mathbf {S} (t)+\mathbf {S} (t)\mathbf {B} \mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} (t)-\mathbf {Q} }
For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary conditionS(tf)=Sf{\displaystyle \mathbf {S} (t_{f})=\mathbf {S} _{f}}
For the infinite horizon LQR problem, the differential Riccati equation is replaced with thealgebraicRiccati equation (ARE) given as0=−SA−ATS+SBR−1BTS−Q{\displaystyle \mathbf {0} =-\mathbf {S} \mathbf {A} -\mathbf {A} ^{\mathsf {T}}\mathbf {S} +\mathbf {S} \mathbf {B} \mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} -\mathbf {Q} }
Because the ARE arises from an infinite-horizon problem, the matricesA{\displaystyle \mathbf {A} },B{\displaystyle \mathbf {B} },Q{\displaystyle \mathbf {Q} }, andR{\displaystyle \mathbf {R} }are allconstant. There are in general multiple solutions to the algebraic Riccati equation, and thepositive definite(or positive semi-definite) solution is the one used to compute the feedback gain. The LQ (LQR) problem was elegantly solved byRudolf E. Kálmán.[9]
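The LQR construction can be sketched for a scalar system (illustrative numbers; the positive root of the scalar ARE is taken, matching the positive-definite solution in the matrix case):

```python
import math

# Scalar LQR sketch: xdot = a x + b u with cost J = (1/2) ∫ (q x² + r u²) dt.
# The scalar algebraic Riccati equation 0 = -2 a S + S² b²/r - q has a
# unique positive root S, giving the feedback gain K = R⁻¹BᵀS = b S / r.

a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Positive root of (b²/r) S² - 2 a S - q = 0 via the quadratic formula.
S = (2 * a + math.sqrt(4 * a * a + 4 * (b * b / r) * q)) / (2 * b * b / r)
K = b * S / r  # feedback law u = -K x

# The closed loop xdot = (a - b K) x should be stable even though a > 0.
x = 1.0
dt = 0.001
for _ in range(10_000):  # simulate 10 time units with forward Euler
    x += dt * (a - b * K) * x
```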
Optimal control problems are generally nonlinear and therefore, generally do not have analytic solutions (e.g., like the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (c.1950s to 1980s) the favored approach for solving optimal control problems was that ofindirect methods. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point)boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of aHamiltonian. Thus, the resultingdynamical systemis aHamiltonian systemof the form[1]x˙=∂H∂λλ˙=−∂H∂x{\displaystyle {\begin{aligned}{\dot {\textbf {x}}}&={\frac {\partial H}{\partial {\boldsymbol {\lambda }}}}\\[1.2ex]{\dot {\boldsymbol {\lambda }}}&=-{\frac {\partial H}{\partial {\textbf {x}}}}\end{aligned}}}whereH=F+λTf−μTh{\displaystyle H=F+{\boldsymbol {\lambda }}^{\mathsf {T}}{\textbf {f}}-{\boldsymbol {\mu }}^{\mathsf {T}}{\textbf {h}}}is theaugmented Hamiltonianand in an indirect method, the boundary-value problem is solved (using the appropriate boundary ortransversalityconditions). The beauty of using an indirect method is that the state and adjoint (i.e.,λ{\displaystyle {\boldsymbol {\lambda }}}) are solved for and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO.[10]
The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-calleddirect methods. In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the cost functional is approximated as acost function. Then, the coefficients of the function approximations are treated as optimization variables and the problem is "transcribed" to a nonlinear optimization problem of the form:
MinimizeF(z){\displaystyle F(\mathbf {z} )}subject to the algebraic constraintsg(z)=0h(z)≤0{\displaystyle {\begin{aligned}\mathbf {g} (\mathbf {z} )&=\mathbf {0} \\\mathbf {h} (\mathbf {z} )&\leq \mathbf {0} \end{aligned}}}
Depending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting orquasilinearizationmethod), moderate (e.g.,pseudospectral optimal control[11]), or quite large (e.g., a directcollocation method[12]). In the latter case (i.e., a collocation method), the nonlinear optimization problem may have thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, generally the case that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP issparse, and many well-known software programs exist (e.g.,SNOPT[13]) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly directcollocation methods, which are now very popular) is significantly larger than the range of problems that can be solved via indirect methods. Indeed, direct methods have become so popular that many elaborate software programs employing them have been written, includingDIRCOL,[14]SOCS,[15]OTIS,[16]GESOP/ASTOS,[17]DITAN,[18]and PyGMO/PyKEP.[19]In recent years, owing to the advent of theMATLABprogramming language, optimal control software in MATLAB has become more common.
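As a toy illustration of direct transcription (a minimal sketch, not one of the packages named above): the minimum-energy double-integrator problem, discretized with forward Euler and handed to a generic NLP solver. The states and controls become NLP variables, and the discretized dynamics become equality constraints.

```python
# Direct transcription sketch: minimize the integral of u^2 for the double
# integrator xdot1 = x2, xdot2 = u, steering (0,0) to (1,0) over [0,1].
import numpy as np
from scipy.optimize import minimize

N = 20              # number of Euler steps
h = 1.0 / N         # step size

# z packs [x1_0..x1_N, x2_0..x2_N, u_0..u_{N-1}]
def unpack(z):
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def cost(z):
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)              # discretized integral of u^2

def defects(z):
    x1, x2, u = unpack(z)
    d1 = x1[1:] - x1[:-1] - h * x2[:-1]    # Euler defect for x1
    d2 = x2[1:] - x2[:-1] - h * u          # Euler defect for x2
    bc = [x1[0], x2[0], x1[-1] - 1.0, x2[-1]]  # boundary conditions
    return np.concatenate([d1, d2, bc])

z0 = np.zeros(3 * N + 2)                   # infeasible but workable start
sol = minimize(cost, z0, constraints={"type": "eq", "fun": defects},
               method="SLSQP", options={"maxiter": 200})
```

The continuous-time optimum is u(t) = 6 − 12t with cost 12; the NLP solution should land close to that value, with the gap shrinking as N grows.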
Examples of academically developed MATLAB software tools implementing direct methods includeRIOTS,[20]DIDO,[21]DIRECT,[22]FALCON.m,[23]andGPOPS,[24]while an example of an industry-developed MATLAB tool isPROPT.[25]These software tools have significantly increased the opportunity for people to explore complex optimal control problems, both for academic research and for industrial applications.[26]Finally, general-purpose MATLAB optimization environments such asTOMLABhave made coding complex optimal control problems significantly easier than was previously possible in languages such as C andFORTRAN.
The examples thus far have showncontinuous timesystems and control solutions. In fact, as optimal control solutions are now often implementeddigitally, contemporary control theory is primarily concerned withdiscrete timesystems and solutions. The Theory ofConsistent Approximations[27][28]provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones.[29]For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct methodRIOTSis based on the Theory of Consistent Approximations.
A common solution strategy in many optimal control problems is to solve for the costate (sometimes called theshadow price)λ(t){\displaystyle \lambda (t)}. The costate summarizes in one number the marginal value of expanding or contracting the state variable in the next period. This marginal value comprises not only the gains accruing in the next period, but also those associated with the remaining duration of the program. It is convenient whenλ(t){\displaystyle \lambda (t)}can be solved analytically, but usually the most one can do is describe it sufficiently well that intuition can grasp the character of the solution and an equation solver can compute the values numerically.
Having obtainedλ(t){\displaystyle \lambda (t)}, the time-t optimal value of the control can usually be obtained as the solution of a differential equation conditional on knowledge ofλ(t){\displaystyle \lambda (t)}. Again, it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control, and to use a numerical solver to isolate the actual choice values in time.
Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date0{\displaystyle 0}to dateT{\displaystyle T}. At date0{\displaystyle 0}there isx0{\displaystyle x_{0}}ore in the ground, and the time-dependent amount of orex(t){\displaystyle x(t)}left in the ground declines at the rate ofu(t){\displaystyle u(t)}that the mine owner extracts it. The mine owner extracts ore at costu(t)2/x(t){\displaystyle u(t)^{2}/x(t)}(the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant pricep{\displaystyle p}. Any ore left in the ground at timeT{\displaystyle T}cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction varying with timeu(t){\displaystyle u(t)}to maximize profits over the period of ownership with no time discounting.
The manager maximizes profitΠ{\displaystyle \Pi }:Π=∑t=0T−1[put−ut2xt]{\displaystyle \Pi =\sum _{t=0}^{T-1}\left[pu_{t}-{\frac {u_{t}^{2}}{x_{t}}}\right]}subject to the law of motion for the state variablext{\displaystyle x_{t}}xt+1−xt=−ut{\displaystyle x_{t+1}-x_{t}=-u_{t}}
Form the Hamiltonian and differentiate:H=put−ut2xt−λt+1ut∂H∂ut=p−λt+1−2utxt=0λt+1−λt=−∂H∂xt=−(utxt)2{\displaystyle {\begin{aligned}H&=pu_{t}-{\frac {u_{t}^{2}}{x_{t}}}-\lambda _{t+1}u_{t}\\{\frac {\partial H}{\partial u_{t}}}&=p-\lambda _{t+1}-2{\frac {u_{t}}{x_{t}}}=0\\\lambda _{t+1}-\lambda _{t}&=-{\frac {\partial H}{\partial x_{t}}}=-\left({\frac {u_{t}}{x_{t}}}\right)^{2}\end{aligned}}}
As the mine owner does not value the ore remaining at timeT{\displaystyle T},λT=0{\displaystyle \lambda _{T}=0}
Using the above equations, it is easy to solve for thext{\displaystyle x_{t}}andλt{\displaystyle \lambda _{t}}seriesλt=λt+1+(p−λt+1)24xt+1=xt2−p+λt+12{\displaystyle {\begin{aligned}\lambda _{t}&=\lambda _{t+1}+{\frac {\left(p-\lambda _{t+1}\right)^{2}}{4}}\\x_{t+1}&=x_{t}{\frac {2-p+\lambda _{t+1}}{2}}\end{aligned}}}
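These two recursions can be iterated numerically; a minimal sketch follows, in which the price p, initial stock x0, and horizon T are hypothetical values chosen for illustration, not taken from the text.

```python
# Discrete mine-owner example: iterate the costate recursion backward from
# lambda_T = 0, then roll the stock forward and recover the extraction path.
p, x0, T = 1.0, 100.0, 10   # assumed price, initial ore, horizon

# Backward pass: lambda_t = lambda_{t+1} + (p - lambda_{t+1})^2 / 4
lam = [0.0] * (T + 1)
for t in range(T - 1, -1, -1):
    lam[t] = lam[t + 1] + (p - lam[t + 1]) ** 2 / 4

# Forward pass: u_t = x_t (p - lambda_{t+1}) / 2,
#               x_{t+1} = x_t (2 - p + lambda_{t+1}) / 2
x, u = [x0], []
for t in range(T):
    u.append(x[t] * (p - lam[t + 1]) / 2)
    x.append(x[t] * (2 - p + lam[t + 1]) / 2)

profit = sum(p * u[t] - u[t] ** 2 / x[t] for t in range(T))
```

The shadow price rises with the remaining horizon (it is worth more to have ore in the ground when more selling periods remain), so extraction is spread out rather than front-loaded.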
The manager maximizes profitΠ{\displaystyle \Pi }:Π=∫0T[pu(t)−u(t)2x(t)]dt{\displaystyle \Pi =\int _{0}^{T}\left[pu(t)-{\frac {u(t)^{2}}{x(t)}}\right]dt}where the state variablex(t){\displaystyle x(t)}evolves as follows:x˙(t)=−u(t){\displaystyle {\dot {x}}(t)=-u(t)}
Form the Hamiltonian and differentiate:H=pu(t)−u(t)2x(t)−λ(t)u(t)∂H∂u=p−λ(t)−2u(t)x(t)=0λ˙(t)=−∂H∂x=−(u(t)x(t))2{\displaystyle {\begin{aligned}H&=pu(t)-{\frac {u(t)^{2}}{x(t)}}-\lambda (t)u(t)\\{\frac {\partial H}{\partial u}}&=p-\lambda (t)-2{\frac {u(t)}{x(t)}}=0\\{\dot {\lambda }}(t)&=-{\frac {\partial H}{\partial x}}=-\left({\frac {u(t)}{x(t)}}\right)^{2}\end{aligned}}}
As the mine owner does not value the ore remaining at timeT{\displaystyle T},λ(T)=0{\displaystyle \lambda (T)=0}
Using the above equations, it is easy to solve for the differential equations governingu(t){\displaystyle u(t)}andλ(t){\displaystyle \lambda (t)}λ˙(t)=−(p−λ(t))24u(t)=x(t)p−λ(t)2{\displaystyle {\begin{aligned}{\dot {\lambda }}(t)&=-{\frac {(p-\lambda (t))^{2}}{4}}\\u(t)&=x(t){\frac {p-\lambda (t)}{2}}\end{aligned}}}and using the initial and turn-T conditions, the functions can be solved to yield
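The closed forms truncated above can be recovered by direct integration; the following is a sketch consistent with the two equations just given, not a quotation from the source.

```latex
% From \dot{\lambda} = -(p-\lambda)^2/4 with \lambda(T) = 0:
% \frac{d}{dt}\,(p-\lambda)^{-1} = \dot{\lambda}\,(p-\lambda)^{-2} = -\tfrac{1}{4},
% so (p-\lambda(t))^{-1} = \tfrac{1}{p} + \tfrac{T-t}{4}, giving
\lambda(t) = \frac{p^{2}(T-t)}{4 + p(T-t)},
\qquad
u(t) = x(t)\,\frac{p-\lambda(t)}{2} = \frac{2p\,x(t)}{4 + p(T-t)},
% and integrating \dot{x}(t) = -u(t) yields
x(t) = x_{0}\left(\frac{4 + p(T-t)}{4 + pT}\right)^{2}.
```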
|
https://en.wikipedia.org/wiki/Optimal_control
|
Incomputer science, a problem is said to haveoptimal substructureif an optimal solution can be constructed from optimal solutions of its subproblems. This property is used to determine the usefulness of greedy algorithms for a problem.[1]
Typically, agreedy algorithmis used to solve a problem with optimal substructure if it can be proven by induction that choosing greedily is optimal at each step.[1]Otherwise, provided the problem exhibitsoverlapping subproblemsas well,divide-and-conquermethods ordynamic programmingmay be used. If there are no appropriate greedy algorithms and the problem fails to exhibit overlapping subproblems, often a lengthy but straightforward search of the solution space is the best alternative.
In the application ofdynamic programmingtomathematical optimization,Richard Bellman'sPrinciple of Optimalityis based on the idea that in order to solve a dynamic optimization problem from some starting periodtto some ending periodT, one implicitly has to solve subproblems starting from later datess, wheret<s<T. This is an example of optimal substructure. The Principle of Optimality is used to derive theBellman equation, which shows how the value of the problem starting fromtis related to the value of the problem starting froms.
Consider finding ashortest pathfor traveling between two cities by car, as illustrated in Figure 1. Such an example is likely to exhibit optimal substructure. That is, if the shortest route from Seattle to Los Angeles passes through Portland and then Sacramento, then the shortest route from Portland to Los Angeles must pass through Sacramento too. That is, the problem of how to get from Portland to Los Angeles is nested inside the problem of how to get from Seattle to Los Angeles. (The wavy lines in the graph represent solutions to the subproblems.)
As an example of a problem that is unlikely to exhibit optimal substructure, consider the problem of finding the cheapest airline ticket from Buenos Aires to Moscow. Even if that ticket involves stops in Miami and then London, we can't conclude that the cheapest ticket from Miami to Moscow stops in London, because the price at which an airline sells a multi-flight trip is usually not the sum of the prices at which it would sell the individual flights in the trip.
A slightly more formal definition of optimal substructure can be given. Let a "problem" be a collection of "alternatives", and let each alternative have an associated cost,c(a). The task is to find a set of alternatives that minimizes the total cost. Suppose that the alternatives can bepartitionedinto subsets, i.e. each alternative belongs to only one subset. Suppose each subset has its own cost function. The minima of each of these cost functions can be found, as can the minima of the global cost function,restricted to the same subsets. If these minima match for each subset, then it is almost obvious that a global minimum can be picked not from the full set of alternatives, but from only the set consisting of the minima of the smaller, local cost functions. If minimizing the local functions is a problem of "lower order", and (specifically) if, after a finite number of these reductions, the problem becomes trivial, then the problem has an optimal substructure.
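The nesting described for the driving example can be checked mechanically; the following is a small sketch in which the city graph and mileages are made-up illustration data, using a dynamic-programming shortest-path routine.

```python
# Optimal substructure on the Seattle -> Los Angeles example: if the optimal
# route passes through Portland, the Portland -> Los Angeles leg must itself
# be a shortest path. Edge weights are hypothetical.
import heapq

def shortest(graph, src, dst):
    """Dijkstra's algorithm; returns (distance, path)."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (d + w, nxt, path + [nxt]))
    return float("inf"), []

graph = {  # hypothetical mileages
    "Seattle":    [("Portland", 174), ("Sacramento", 780)],
    "Portland":   [("Sacramento", 580)],
    "Sacramento": [("Los Angeles", 385)],
}

d_total, path = shortest(graph, "Seattle", "Los Angeles")
d_sub, _ = shortest(graph, "Portland", "Los Angeles")
# Substructure check: total distance = Seattle->Portland + optimal subpath.
assert d_total == 174 + d_sub
```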
|
https://en.wikipedia.org/wiki/Optimal_substructure
|
Inmacroeconomics,recursive competitive equilibrium(RCE) is anequilibrium concept. It has been widely used in exploring a wide variety of economic issues including business-cycle fluctuations, monetary and fiscal policy, trade related phenomena, and regularities in asset price co-movements.[1]This is the equilibrium associated withdynamic programsthat represent the decision problem when agents must distinguish between aggregate and individualstate variables.[2]These state variables embody the prior and current information of the economy. The decisions and the realizations ofexogenousuncertainty determine the values of thestate variablesin the next sequential time period. Hence, the issue is recursive. A RCE is characterized by time invariant functions of a limited number of 'state variables', which summarize the effects of past decisions and current information. These functions (decision rules) include (a) a pricing function, (b) a value function, (c) a period allocation policy specifying the individual's decision, (d) period allocation policy specifying the decision of each firm and (e) a function specifying the law of motion of the capital stock.[1]Since decisions are made with all relevant information available, it is arational expectationsequilibrium.[3]
|
https://en.wikipedia.org/wiki/Recursive_competitive_equilibrium
|
Originally introduced byRichard E. Bellmanin (Bellman 1957),stochastic dynamic programmingis a technique for modelling and solving problems ofdecision making under uncertainty. Closely related tostochastic programminganddynamic programming, stochastic dynamic programming represents the problem under scrutiny in the form of aBellman equation. The aim is to compute apolicyprescribing how to act optimally in the face of uncertainty.
A gambler has $2. She is allowed to play a game of chance 4 times, and her goal is to maximize the probability that she ends up with at least $6. If the gambler bets $b{\displaystyle b}on a play of the game, then with probability 0.4 she wins the game, recoups the initial bet, and increases her capital position by $b{\displaystyle b}; with probability 0.6, she loses the bet amount $b{\displaystyle b}; all plays arepairwise independent. On any play of the game, the gambler may not bet more money than she has available at the beginning of that play.[1]
Stochastic dynamic programming can be employed to model this problem and determine a betting strategy that, for instance, maximizes the gambler's probability of attaining a wealth of at least $6 by the end of the betting horizon.
Note that if there is no limit to the number of games that can be played, the problem becomes a variant of the well knownSt. Petersburg paradox.
Consider a discrete system defined onn{\displaystyle n}stages in which each staget=1,…,n{\displaystyle t=1,\ldots ,n}is characterized by
Letft(st){\displaystyle f_{t}(s_{t})}represent the optimal cost/reward obtained by following anoptimal policyover stagest,t+1,…,n{\displaystyle t,t+1,\ldots ,n}.Without loss of generality, in what follows we will consider a reward maximisation setting. In deterministicdynamic programmingone usually deals withfunctional equationstaking the following structure
wherest+1=gt(st,xt){\displaystyle s_{t+1}=g_{t}(s_{t},x_{t})}and the boundary condition of the system is
The aim is to determine the set of optimal actions that maximisef1(s1){\displaystyle f_{1}(s_{1})}. Given the current statest{\displaystyle s_{t}}and the current actionxt{\displaystyle x_{t}}, weknow with certaintythe reward secured during the current stage and – thanks to the state transition functiongt{\displaystyle g_{t}}– the future state towards which the system transitions.
In practice, however, even if we know the state of the system at the beginning of the current stage as well as the decision taken, the state of the system at the beginning of the next stage and the current period reward are oftenrandom variablesthat can be observed only at the end of the current stage.
Stochastic dynamic programmingdeals with problems in which the current period reward and/or the next period state are random, i.e. with multi-stage stochastic systems. The decision maker's goal is to maximise expected (discounted) reward over a given planning horizon.
In their most general form, stochastic dynamic programs deal with functional equations taking the following structure
where
Markov decision processesrepresent a special class of stochastic dynamic programs in which the underlyingstochastic processis astationary processthat features theMarkov property.
The gambling game can be formulated as a stochastic dynamic program as follows: there aren=4{\displaystyle n=4}games (i.e.stages) in the planning horizon
Letft(s){\displaystyle f_{t}(s)}be the probability that, by the end of game 4, the gambler has at least $6, given that she has $s{\displaystyle s}at the beginning of gamet{\displaystyle t}.
To derive thefunctional equation, definebt(s){\displaystyle b_{t}(s)}as a bet that attainsft(s){\displaystyle f_{t}(s)}, then at the beginning of gamet=4{\displaystyle t=4}
Fort<4{\displaystyle t<4}the functional equation isft(s)=max0≤b≤s{0.4ft+1(s+b)+0.6ft+1(s−b)}{\displaystyle f_{t}(s)=\max _{0\leq b\leq s}\{0.4f_{t+1}(s+b)+0.6f_{t+1}(s-b)\}}, where the betb{\displaystyle b}ranges over0,…,s{\displaystyle 0,\ldots ,s}; the aim is to findf1(2){\displaystyle f_{1}(2)}.
Given the functional equation, an optimal betting policy can be obtained via forward recursion or backward recursion algorithms, as outlined below.
Stochastic dynamic programs can be solved to optimality by usingbackward recursionorforward recursionalgorithms.Memoizationis typically employed to enhance performance. However, like deterministic dynamic programming, its stochastic variant suffers from thecurse of dimensionality. For this reason,approximate solution methodsare typically employed in practical applications.
Given a bounded state space,backward recursion(Bertsekas 2000) begins by tabulatingfn(k){\displaystyle f_{n}(k)}for every possible statek{\displaystyle k}belonging to the final stagen{\displaystyle n}. Once these values are tabulated, together with the associated optimal state-dependent actionsxn(k){\displaystyle x_{n}(k)}, it is possible to move to stagen−1{\displaystyle n-1}and tabulatefn−1(k){\displaystyle f_{n-1}(k)}for all possible states belonging to the stagen−1{\displaystyle n-1}. The process continues by considering in abackwardfashion all remaining stages up to the first one. Once this tabulation process is complete,f1(s){\displaystyle f_{1}(s)}– the value of an optimal policy given initial states{\displaystyle s}– as well as the associated optimal actionx1(s){\displaystyle x_{1}(s)}can be easily retrieved from the table. Since the computation proceeds in a backward fashion, it is clear that backward recursion may lead to computation of a large number of states that are not necessary for the computation off1(s){\displaystyle f_{1}(s)}.
Given the initial states{\displaystyle s}of the system at the beginning of period 1,forward recursion(Bertsekas 2000) computesf1(s){\displaystyle f_{1}(s)}by progressively expanding the functional equation (forward pass). This involves recursive calls for allft+1(⋅),ft+2(⋅),…{\displaystyle f_{t+1}(\cdot ),f_{t+2}(\cdot ),\ldots }that are necessary for computing a givenft(⋅){\displaystyle f_{t}(\cdot )}. The value of an optimal policy and its structure are then retrieved via a (backward pass) in which these suspended recursive calls are resolved. A key difference from backward recursion is the fact thatft{\displaystyle f_{t}}is computed only for states that are relevant for the computation off1(s){\displaystyle f_{1}(s)}.Memoizationis employed to avoid recomputation of states that have been already considered.
We shall illustrate forward recursion in the context of the gambling game instance previously discussed. We begin the forward pass by considering{\displaystyle f_{1}(2)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 1,2,3,4}}\\\hline 0&0.4f_{2}(2+0)+0.6f_{2}(2-0)\\1&0.4f_{2}(2+1)+0.6f_{2}(2-1)\\2&0.4f_{2}(2+2)+0.6f_{2}(2-2)\\\end{array}}\right.}
At this point we have not yet computedf2(4),f2(3),f2(2),f2(1),f2(0){\displaystyle f_{2}(4),f_{2}(3),f_{2}(2),f_{2}(1),f_{2}(0)}, which are needed to computef1(2){\displaystyle f_{1}(2)}; we proceed and compute these items. Note thatf2(2+0)=f2(2−0)=f2(2){\displaystyle f_{2}(2+0)=f_{2}(2-0)=f_{2}(2)}; therefore one can leveragememoizationand perform the necessary computations only once.
{\displaystyle f_{2}(0)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(0+0)+0.6f_{3}(0-0)\\\end{array}}\right.}
{\displaystyle f_{2}(1)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(1+0)+0.6f_{3}(1-0)\\1&0.4f_{3}(1+1)+0.6f_{3}(1-1)\\\end{array}}\right.}
{\displaystyle f_{2}(2)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(2+0)+0.6f_{3}(2-0)\\1&0.4f_{3}(2+1)+0.6f_{3}(2-1)\\2&0.4f_{3}(2+2)+0.6f_{3}(2-2)\\\end{array}}\right.}
{\displaystyle f_{2}(3)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(3+0)+0.6f_{3}(3-0)\\1&0.4f_{3}(3+1)+0.6f_{3}(3-1)\\2&0.4f_{3}(3+2)+0.6f_{3}(3-2)\\3&0.4f_{3}(3+3)+0.6f_{3}(3-3)\\\end{array}}\right.}
{\displaystyle f_{2}(4)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(4+0)+0.6f_{3}(4-0)\\1&0.4f_{3}(4+1)+0.6f_{3}(4-1)\\2&0.4f_{3}(4+2)+0.6f_{3}(4-2)\end{array}}\right.}
We have now computedf2(k){\displaystyle f_{2}(k)}for allk{\displaystyle k}that are needed to computef1(2){\displaystyle f_{1}(2)}. However, this has led to additional suspended recursions involvingf3(4),f3(3),f3(2),f3(1),f3(0){\displaystyle f_{3}(4),f_{3}(3),f_{3}(2),f_{3}(1),f_{3}(0)}. We proceed and compute these values.
{\displaystyle f_{3}(0)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(0+0)+0.6f_{4}(0-0)\\\end{array}}\right.}
{\displaystyle f_{3}(1)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(1+0)+0.6f_{4}(1-0)\\1&0.4f_{4}(1+1)+0.6f_{4}(1-1)\\\end{array}}\right.}
{\displaystyle f_{3}(2)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(2+0)+0.6f_{4}(2-0)\\1&0.4f_{4}(2+1)+0.6f_{4}(2-1)\\2&0.4f_{4}(2+2)+0.6f_{4}(2-2)\\\end{array}}\right.}
{\displaystyle f_{3}(3)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(3+0)+0.6f_{4}(3-0)\\1&0.4f_{4}(3+1)+0.6f_{4}(3-1)\\2&0.4f_{4}(3+2)+0.6f_{4}(3-2)\\3&0.4f_{4}(3+3)+0.6f_{4}(3-3)\\\end{array}}\right.}
{\displaystyle f_{3}(4)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(4+0)+0.6f_{4}(4-0)\\1&0.4f_{4}(4+1)+0.6f_{4}(4-1)\\2&0.4f_{4}(4+2)+0.6f_{4}(4-2)\end{array}}\right.}
{\displaystyle f_{3}(5)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(5+0)+0.6f_{4}(5-0)\\1&0.4f_{4}(5+1)+0.6f_{4}(5-1)\end{array}}\right.}
Since stage 4 is the last stage in our system, the valuesf4(⋅){\displaystyle f_{4}(\cdot )}representboundary conditionsthat are easily computed as follows.
f4(0)=0b4(0)=0f4(1)=0b4(1)={0,1}f4(2)=0b4(2)={0,1,2}f4(3)=0.4b4(3)={3}f4(4)=0.4b4(4)={2,3,4}f4(5)=0.4b4(5)={1,2,3,4,5}f4(d)=1b4(d)={0,…,d−6}ford≥6{\displaystyle {\begin{array}{ll}f_{4}(0)=0&b_{4}(0)=0\\f_{4}(1)=0&b_{4}(1)=\{0,1\}\\f_{4}(2)=0&b_{4}(2)=\{0,1,2\}\\f_{4}(3)=0.4&b_{4}(3)=\{3\}\\f_{4}(4)=0.4&b_{4}(4)=\{2,3,4\}\\f_{4}(5)=0.4&b_{4}(5)=\{1,2,3,4,5\}\\f_{4}(d)=1&b_{4}(d)=\{0,\ldots ,d-6\}{\text{ for }}d\geq 6\end{array}}}
At this point it is possible to proceed and recover the optimal policy and its value via abackward passinvolving, at first, stage 3
{\displaystyle f_{3}(0)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4(0)+0.6(0)=0\\\end{array}}\right.}
{\displaystyle f_{3}(1)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0&\leftarrow b_{3}(1)=0\\1&0.4(0)+0.6(0)=0&\leftarrow b_{3}(1)=1\\\end{array}}\right.}
{\displaystyle f_{3}(2)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0\\1&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{3}(2)=1\\2&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{3}(2)=2\\\end{array}}\right.}
{\displaystyle f_{3}(3)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(3)=0\\1&0.4(0.4)+0.6(0)=0.16\\2&0.4(0.4)+0.6(0)=0.16\\3&0.4(1)+0.6(0)=0.4&\leftarrow b_{3}(3)=3\\\end{array}}\right.}
{\displaystyle f_{3}(4)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(4)=0\\1&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(4)=1\\2&0.4(1)+0.6(0)=0.4&\leftarrow b_{3}(4)=2\\\end{array}}\right.}
{\displaystyle f_{3}(5)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4\\1&0.4(1)+0.6(0.4)=0.64&\leftarrow b_{3}(5)=1\\\end{array}}\right.}
and, then, stage 2.
{\displaystyle f_{2}(0)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0&\leftarrow b_{2}(0)=0\\\end{array}}\right.}
{\displaystyle f_{2}(1)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0\\1&0.4(0.16)+0.6(0)=0.064&\leftarrow b_{2}(1)=1\\\end{array}}\right.}
{\displaystyle f_{2}(2)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.16)+0.6(0.16)=0.16&\leftarrow b_{2}(2)=0\\1&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{2}(2)=1\\2&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{2}(2)=2\\\end{array}}\right.}
{\displaystyle f_{2}(3)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{2}(3)=0\\1&0.4(0.4)+0.6(0.16)=0.256\\2&0.4(0.64)+0.6(0)=0.256\\3&0.4(1)+0.6(0)=0.4&\leftarrow b_{2}(3)=3\\\end{array}}\right.}
{\displaystyle f_{2}(4)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4\\1&0.4(0.64)+0.6(0.4)=0.496&\leftarrow b_{2}(4)=1\\2&0.4(1)+0.6(0.16)=0.496&\leftarrow b_{2}(4)=2\\\end{array}}\right.}
We finally recover the valuef1(2){\displaystyle f_{1}(2)}of an optimal policy
{\displaystyle f_{1}(2)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 1,2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.16)+0.6(0.16)=0.16\\1&0.4(0.4)+0.6(0.064)=0.1984&\leftarrow b_{1}(2)=1\\2&0.4(0.496)+0.6(0)=0.1984&\leftarrow b_{1}(2)=2\\\end{array}}\right.}
This is the optimal policy that has been previously illustrated. Note that there are multiple optimal policies leading to the same optimal valuef1(2)=0.1984{\displaystyle f_{1}(2)=0.1984}; for instance, in the first game one may either bet $1 or $2.
Python implementation.The one that follows is a completePythonimplementation of this example.
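A minimal sketch of such an implementation (a reconstruction using the problem data above, not the original listing), computing f_t(s) by memoized recursion:

```python
# Gambler's problem: maximize the probability of ending with at least $6
# after 4 plays, win probability 0.4 per play, starting capital $2.
from functools import lru_cache

P_WIN, TARGET, GAMES = 0.4, 6, 4

@lru_cache(maxsize=None)
def f(t, s):
    """Max probability of holding at least TARGET dollars after game GAMES,
    entering game t with capital s."""
    if t > GAMES:                      # boundary condition
        return 1.0 if s >= TARGET else 0.0
    # bet b dollars, 0 <= b <= s, and take the best expected continuation
    return max(P_WIN * f(t + 1, s + b) + (1 - P_WIN) * f(t + 1, s - b)
               for b in range(s + 1))

print(f(1, 2))  # ≈ 0.1984, matching the backward pass above
```

The `lru_cache` decorator supplies the memoization: each state (t, s) is evaluated once, exactly as in the forward-recursion discussion.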
Java implementation.GamblersRuin.javais a standaloneJava 8implementation of the above example.
An introduction toapproximate dynamic programmingis provided by (Powell 2009).
|
https://en.wikipedia.org/wiki/Stochastic_dynamic_programming
|
Starratingsare a type ofrating scaleusing astar glyphor similartypographical symbol. They are used by reviewers for ranking things such as films, TV shows, restaurants, and hotels. For example, a system of one to five stars is commonly used inhotel ratings, with five stars being the highest rating.
Similar systems have been proposed for electing politicians in the form ofscore votingandSTAR voting.
Repeated symbols used for a ranking date toMariana Starke's 1820 guidebook, which usedexclamation pointsto indicate works of art of special value:
...I have endeavored... to furnish Travellers with correct lists of the objects best worth notice...; at the same time marking, with one or more exclamation points (according to their merit), those works which are deemed peculiarly excellent.[1]
Murray's Handbooks for Travellersand then theBaedeker Guides(starting in 1844) borrowed this system, using stars instead of exclamation points, first for points of interest and later for hotels.[2]
TheMichelin restaurant guideintroduced a star as a restaurant rating in 1926, which was expanded to a system of one to three stars in 1931.[3]
In 1915,Edward O'Brienbegan editingThe Best American Short Stories. This annual compiled O'Brien's personal selection of the previous year's best short stories. O'Brien claimed to read as many as 8,000 stories a year, and his editions contained lengthy tabulations of stories and magazines, ranked on a scale of zero to three stars, representing O'Brien's notion of their "literary permanence."[4]He further listed stories with a ranking of three stars "in a special 'Roll of Honor.'" In this list, O'Brien attached an additional asterisk to those stories that he personally enjoyed.[5]
Oliver Herford's essaySay it with Asterisks, quips "Never, I think, were a mob of overworked employees so pitifully huddled together in an ill-ventilated factory as are the Asterisks in this Sweatshop of Twaddle."[6]Literary editor Katrina Kenison dismisses O'Brien's grading systems as "excessive at best, fussy and arbitrary at worst."[4]
Book reviewers generally do not use a star-rating system though there are exceptions. TheWest Coast Review of Booksrates books on a scale of one ("poor") to five ("superior") stars.[7]According to editor D. David Dreis, readers love the ratings but publishers don't.[8]
In the 31 July 1928 issue of the New York Daily News, the newspaper's film critic Irene Thirer began grading movies on a scale of zero to three stars. Three stars meant "excellent," two "good," and one star "mediocre"; no stars at all "means the picture's right bad," wrote Thirer. Carl Bialik speculates that this may have been the first time a film critic used a star-rating system to grade movies.[9] The one-star review of The Port of Missing Girls launched the star system, which the newspaper promised would be "a permanent thing."[9]
According to film scholar Gerald Peary, few newspapers adopted this practice until the French film magazine Cahiers du cinéma "started polling critics in the 1950s and boiling their judgment down to a star rating, with a bullet reserved for movies that the magazine didn't like."[9] The highest rating any film earned was five stars. The British film magazine Sight and Sound also rated films on a scale of one to four stars.[10] Some critics use a "half-star" option in between basic star ratings. Leonard Maltin goes one further and gives Naked Gun 33⅓: The Final Insult a 2⅓-star rating.[11]
Critics do not agree on what the cutoff is for a recommendation, even when they use the same scale. Gene Siskel and Roger Ebert "both consider[ed] a three-star rating to be the cutoff for a 'thumbs up'" on their scales of zero to four stars.[12] Film critic Dave Kehr, who also uses a 0–4 star scale, believes "two stars is a borderline recommendation".[12] On a five-star scale, regardless of the bottom rating, three stars is often the lowest positive rating, though judged on a purely mathematical basis, 2½ stars would be the dividing line between good and bad on a 0–5 scale. Common Sense Media uses a scale of one to five, where three stars are "Just fine; solid" and anything lower is "Disappointing" at best.[13]
There is no agreement on what the lowest rating should be. Some critics make "one star" or a "half-star" their lowest rating. Dave Kehr believes that "one star" indicates the film has redeeming facets,[12] and instead uses zero stars as his lowest rating.
Examples of rating scales:
Critics have different ways of denoting the lowest rating when it is a "zero". Some, such as Peter Travers, display empty stars. Jonathan Rosenbaum and Dave Kehr use a round black dot.[18] Leslie Halliwell uses a blank space.[19] The Globe and Mail uses a "0", or as their former film critic dubbed it, the "death doughnut".[20] Roger Ebert used a thumbs-down symbol.[21] Other critics use a black dot.
Critics also do not agree on what the lower ratings signify, let alone the lowest rating. While Maltin's and Scheuer's guides respectively explain that their lowest-rated films are "BOMB(s)" and "abysmal", British film critic Leslie Halliwell instead writes that no star "indicates a totally routine production or worse; such films may be watchable but are at least equally missable."[19] Like Halliwell and Dave Kehr, film critic Jonathan Rosenbaum believes one-star films have some merit; unlike Halliwell, however, Rosenbaum believes that no stars indicate a "worthless" movie.[18] Roger Ebert occasionally gave zero stars to films he deemed "artistically inept and morally repugnant."[22] Scheuer's guide calls "one and a half star" films "poor", and "one star" films "bad".[23]
Not all film critics have approved of star ratings. Film scholar Robin Wood wondered if Sight and Sound readers accepted "such blackening of their characters."[24] Jay Scott of Canada's The Globe and Mail was an opponent of using symbols to summarize a review and wrote in 1992 that "When Globe editors first proposed the four-star system of rating movies about a year ago, the response from Globe critics was, to put it mildly, underwhelming."[20] More recently, Mark Kermode has expressed a dislike of star ratings (assigned to his online reviews but not his print or radio reviews) on the grounds that his verdicts are sometimes too complex to be expressed as a rating.[25]
Star ratings are also given out at stand-up comedy performances and theatre productions. Star ratings are given at the Edinburgh Festival Fringe, the largest arts festival in the world. Since 2010, the British Comedy Guide has collected over 4,300 reviews of around 1,110 different acts, across 83 different publications, in the form of a star rating.[26]
The use of star ratings is controversial because the public may ignore the reviews themselves and concentrate on the star ratings alone.[27]
Star ratings are not often used to rate the quality of a video game itself but rather serve varying purposes within certain games. One notable use of the star system is to grade a player's performance in completing a level with up to three stars, used in many modern multi-level games like Angry Birds. This three-star rating system challenges the player to repeat and fully master previously beaten levels in order to receive a perfect three-star rating, which may confer other benefits or bonus content. Another use of star ratings is to denote the rarity of characters in video games where players are tasked with collecting numerous characters, such as Star Wars: Galaxy of Heroes and Marvel: Contest of Champions, in which stronger and rarer characters are marked with more stars to make them appear more valuable. Stars are also used to rank a game or stage's difficulty (such as in the SNES version of Street Fighter II and its updates), or to rate the attributes of a selectable character or, in sports games, a team.
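Level-grading schemes of the Angry Birds kind typically award stars by comparing the final score against per-level thresholds. A minimal sketch; the threshold values here are hypothetical, not taken from any particular game:

```python
def stars_for_score(score, thresholds=(10_000, 30_000, 60_000)):
    """Return 0-3 stars for a completed level.

    Each threshold cleared earns one star. The values are illustrative;
    real games tune them per level.
    """
    return sum(score >= t for t in thresholds)

# A score of 45,000 clears the first two thresholds:
# stars_for_score(45_000) -> 2
```

Replaying a level to push the score past the top threshold is exactly the "repeat and fully master" loop described above.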
Restaurant guides and reviewers often use stars in restaurant ratings. The Michelin system reserves stars for exceptional restaurants and gives up to three; the vast majority of recommended restaurants have no star at all. Other guides now use up to four or five stars, with one star being the lowest rating. The stars are sometimes replaced by symbols such as a fork or spoon. Some guides use separate scales for food, service, ambiance, and even noise level.
The Michelin system remains the best known star system. A single star denotes "a very good restaurant in its category", two stars "excellent cooking, worth a detour", and three stars, "exceptional cuisine, worth a special journey".[28]
Michelin stars are awarded only for the quality of food and wine; the luxury level of the restaurant is rated separately, on a scale of one ("quite comfortable") to five ("luxury in the traditional style") crossed fork-and-spoon symbols.
Hotel luxury is often denoted by stars.
Other classifiers, such as the AAA Five Diamond Award, use diamonds instead of stars to express hotel rating levels.
Hotels are assessed in traditional systems that rest heavily on the facilities provided. Some consider this disadvantageous to smaller hotels whose quality of accommodation could fall into one class, but where the lack of an item such as an elevator would prevent them from reaching a higher categorization.[29]
In recent years, hotel rating systems have also been criticized by those who argue that the rating criteria for such systems are overly complex and difficult for laymen to understand. It has been suggested that the lack of a unified global system for rating hotels may also undermine the usability of such schemes.
In the UK, providers and comparison websites often use stars to indicate how feature-rich financial products are.[30]
The most senior military ranks in all services are classified by a star system in many countries, ranging from one-star rank, which typically corresponds to brigadier, brigadier general, commodore or air commodore, to the most senior five-star ranks, which include Admiral of the Fleet, Grand Admiral, Field Marshal, General of the Army and Marshal of the Air Force; some five-star ranks only exist during large-scale conflicts.
Recruits entering American college football are commonly ranked on a five-star scale, with five representing what scouts think will be the best college players.[31][32]
International organisations use a star rating to rank the safety of transportation. EuroRAP has developed a Road Protection Score, a scale for star-rating roads by how well they protect the user from death or disabling injury when a crash occurs. The assessment evaluates the safety that is 'built into' the road through its design, in combination with the way traffic is managed on it.[33] The RPS protocol has also been adapted and used by AusRAP, usRAP and iRAP.
Euro NCAPawards 'star ratings' based on the performance of vehicles in crash tests, including front, side and pole impacts, and impacts with pedestrians.
The United States National Highway Traffic Safety Administration (NHTSA) also uses a star ranking to rate the safety of vehicles in crash tests, including front, side, pole impacts, and rollovers, with 5 stars being the safest.[34]
Some web content voting systems use five-star grades. This allows users to distinguish content more precisely than with binary "like" buttons.
Many recommender systems, such as MovieLens or Amazon.com, ask people to express preferences using star ratings, then predict what other items those people are likely to enjoy. Predictions are often expressed in terms of the number of predicted stars.
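A toy illustration of star prediction in the spirit of such systems: a baseline model that combines the global mean rating with per-user and per-item biases. This is a standard starting point in the recommender literature, not the actual method deployed by MovieLens or Amazon, and the tiny rating dictionary stands in for their large rating matrices:

```python
def predict_stars(ratings, user, item):
    """Baseline prediction: global mean + user bias + item bias,
    clamped to the 1-5 star range.

    `ratings` maps (user, item) -> observed stars.
    """
    vals = list(ratings.values())
    mu = sum(vals) / len(vals)                       # global mean rating
    u_r = [r for (u, _), r in ratings.items() if u == user]
    i_r = [r for (_, i), r in ratings.items() if i == item]
    bu = (sum(u_r) / len(u_r) - mu) if u_r else 0.0  # does this user rate high?
    bi = (sum(i_r) / len(i_r) - mu) if i_r else 0.0  # is this item well liked?
    return min(5.0, max(1.0, mu + bu + bi))

# With ratings = {("a","x"): 5, ("a","y"): 4, ("b","x"): 3, ("b","y"): 2},
# a new user "c" is predicted 4.0 stars for the well-liked item "x".
```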
The Unicode Standard encodes several characters used for star ratings in the Miscellaneous Symbols and Arrows block:[35][36]
The STAR WITH LEFT HALF BLACK and LEFT HALF BLACK STAR are intended for use in left-to-right contexts where the half star is positioned to the right of one or more whole stars, whereas the STAR WITH RIGHT HALF BLACK and RIGHT HALF BLACK STAR are intended for use in right-to-left contexts (such as Arabic or Hebrew) where the half star is positioned to the left of one or more whole stars.[35]
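These half-star characters can be combined with the long-established BLACK STAR (U+2605) and WHITE STAR (U+2606) from the Miscellaneous Symbols block to render ratings as text. A minimal sketch for left-to-right contexts; U+2BEA is assumed here to be the STAR WITH LEFT HALF BLACK code point, and whether the glyphs display depends on font support:

```python
FULL = "\u2605"       # BLACK STAR
HALF_LTR = "\u2bea"   # assumed: STAR WITH LEFT HALF BLACK (for LTR text)
EMPTY = "\u2606"      # WHITE STAR

def render_stars(rating, out_of=5):
    """Render e.g. 3.5 as three full stars, one half star, one empty star."""
    full = int(rating)
    half = 1 if (rating - full) >= 0.5 else 0
    return FULL * full + HALF_LTR * half + EMPTY * (out_of - full - half)
```

For a right-to-left context the half-star code point would be swapped for the right-half-black variant described above.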
https://en.wikipedia.org/wiki/Star_(classification)
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.[1][2][3][4]
A function $\psi \in L^2(\mathbb{R})$ is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is, a complete orthonormal system for the Hilbert space of square-integrable functions on the real line.
The Hilbert basis is constructed as the family of functions $\{\psi_{jk} : j, k \in \mathbb{Z}\}$ by means of dyadic translations and dilations of $\psi$,
$$\psi_{jk}(x) = 2^{j/2}\,\psi\left(2^{j}x - k\right),$$
for integers $j, k \in \mathbb{Z}$.
If, under the standard inner product on $L^2(\mathbb{R})$,
$$\langle f, g\rangle = \int_{-\infty}^{\infty} f(x)\,\overline{g(x)}\,dx,$$
this family is orthonormal, then it is an orthonormal system:
$$\langle \psi_{jk}, \psi_{lm}\rangle = \int_{-\infty}^{\infty} \psi_{jk}(x)\,\overline{\psi_{lm}(x)}\,dx = \delta_{jl}\,\delta_{km},$$
where $\delta_{jl}$ is the Kronecker delta.
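The orthonormality relation $\langle \psi_{jk}, \psi_{lm}\rangle = \delta_{jl}\delta_{km}$ can be checked numerically. A sketch using the Haar wavelet as $\psi$ (an assumption made for concreteness; the text does not fix a particular wavelet), with inner products approximated by a midpoint rule:

```python
def haar(x):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    return 1.0 if 0 <= x < 0.5 else (-1.0 if 0.5 <= x < 1 else 0.0)

def psi(j, k, x):
    # Dyadic translate/dilate: psi_jk(x) = 2^(j/2) * psi(2^j x - k)
    return 2 ** (j / 2) * haar(2 ** j * x - k)

def inner(j, k, l, m, n=4096, lo=-4.0, hi=4.0):
    # Midpoint-rule approximation of <psi_jk, psi_lm> on [lo, hi]
    dx = (hi - lo) / n
    return sum(psi(j, k, lo + (i + 0.5) * dx) * psi(l, m, lo + (i + 0.5) * dx)
               for i in range(n)) * dx
```

Here `inner(0, 0, 0, 0)` comes out at approximately 1 while `inner(0, 0, 1, 0)` and `inner(0, 0, 0, 1)` come out at approximately 0, matching the Kronecker deltas.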
Completeness is satisfied if every function $f \in L^2(\mathbb{R})$ may be expanded in the basis as
$$f(x) = \sum_{j,k=-\infty}^{\infty} c_{jk}\,\psi_{jk}(x), \qquad c_{jk} = \langle f, \psi_{jk}\rangle,$$
with convergence of the series understood to be convergence in norm. Such a representation of $f$ is known as a wavelet series. This implies that an orthonormal wavelet is self-dual.
The integral wavelet transform is the integral transform defined as
$$\left[W_{\psi}f\right](a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} \overline{\psi\!\left(\frac{x-b}{a}\right)}\,f(x)\,dx.$$
The wavelet coefficients $c_{jk}$ are then given by
$$c_{jk} = \left[W_{\psi}f\right]\left(2^{-j},\, k\,2^{-j}\right).$$
Here, $a = 2^{-j}$ is called the binary dilation or dyadic dilation, and $b = k\,2^{-j}$ is the binary or dyadic position.
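The integral wavelet transform can likewise be approximated numerically. The sketch below again uses the real-valued Haar wavelet as $\psi$ (so the complex conjugate in the integrand is $\psi$ itself), and evaluates the integral by a midpoint rule over a finite window:

```python
def haar(x):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    return 1.0 if 0 <= x < 0.5 else (-1.0 if 0.5 <= x < 1 else 0.0)

def w_transform(f, psi, a, b, n=4096, lo=-4.0, hi=4.0):
    """Midpoint-rule approximation of [W_psi f](a, b)
    = |a|^(-1/2) * integral of psi((x - b)/a) * f(x) dx (psi real)."""
    dx = (hi - lo) / n
    s = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        s += psi((x - b) / a) * f(x) * dx
    return s / abs(a) ** 0.5

# The dyadic coefficients correspond to
# c_jk = w_transform(f, psi, 2**-j, k * 2**-j).
```

Taking $f = \psi$ itself, the transform at $a = 1$, $b = 0$ is approximately 1 (the squared norm of the Haar wavelet), and 0 at $b = 1$ where the supports are disjoint.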
The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape, imposing a restriction on the choice of suitable basis functions. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing,
$$\Delta t\,\Delta\omega \geq \frac{1}{2},$$
where $t$ represents time and $\omega$ angular frequency ($\omega = 2\pi f$, where $f$ is ordinary frequency).
The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window, the larger the value of $\Delta t$.
When $\Delta t$ is large, the transform offers good frequency resolution but poor time resolution, corresponding to low analysis frequencies and a large scaling factor. When $\Delta t$ is small, it offers good time resolution but poor frequency resolution, corresponding to high analysis frequencies and a small scaling factor.
In other words, the basis function $\psi$ can be regarded as the impulse response of a system with which the function $x(t)$ has been filtered. The transformed signal provides information about the time and the frequency. Therefore, the wavelet transform contains information similar to the short-time Fourier transform, but with the additional special property of wavelets that time resolution improves at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. Note, however, that the frequency resolution decreases with increasing frequency while the temporal resolution increases. This consequence of the Fourier uncertainty principle is not correctly displayed in the figure.
This shows that the wavelet transform offers good time resolution at high frequencies, while for slowly varying functions the frequency resolution is remarkable.
Another example: the analysis of three superposed sinusoidal signals
$$y(t) = \sin(2\pi f_0 t) + \sin(4\pi f_0 t) + \sin(8\pi f_0 t)$$
with the STFT and the wavelet transform.
Wavelet compression is a form of data compression well suited for image compression (sometimes also video compression and audio compression). Notable implementations are JPEG 2000, DjVu and ECW for still images, JPEG XS, CineForm, and the BBC's Dirac. The goal is to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy.[5]
Using a wavelet transform, the wavelet compression methods are adequate for representing transients, such as percussion sounds in audio, or high-frequency components in two-dimensional images, for example an image of stars on a night sky. This means that the transient elements of a data signal can be represented by a smaller amount of information than would be the case if some other transform, such as the more widespread discrete cosine transform, had been used.
The discrete wavelet transform has been successfully applied to the compression of electrocardiograph (ECG) signals.[6] In this work, the high correlation between the corresponding wavelet coefficients of signals of successive cardiac cycles is exploited using linear prediction.
Wavelet compression is not effective for all kinds of data. Wavelet compression handles transient signals well, but smooth, periodic signals are better compressed using other methods, particularly traditional harmonic analysis in the frequency domain with Fourier-related transforms. Compressing data that has both transient and periodic characteristics may be done with hybrid techniques that use wavelets along with traditional harmonic analysis. For example, the Vorbis audio codec primarily uses the modified discrete cosine transform to compress audio (which is generally smooth and periodic), but allows the addition of a hybrid wavelet filter bank for improved reproduction of transients.[7]
See Diary Of An x264 Developer: The problems with wavelets (2010) for a discussion of practical issues of current methods using wavelets for video compression.
First a wavelet transform is applied. This produces as many coefficients as there are pixels in the image (i.e., there is no compression yet since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is called transform coding. After that, the coefficients are quantized and the quantized values are entropy encoded and/or run-length encoded.
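The transform-then-quantize pipeline can be illustrated with the simplest wavelet, the Haar pair of averages (reference) and differences (detail). In smooth regions the detail coefficients are small and quantize to zero, which is what makes the subsequent entropy or run-length coding effective. A minimal one-level sketch (real codecs iterate over many levels and both image dimensions):

```python
def haar_step(x):
    """One level of the Haar transform on an even-length sequence:
    pairwise averages (reference signal) and differences (detail signal).
    Invertible: x[2i] = avg[i] + det[i], x[2i+1] = avg[i] - det[i]."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def quantize(coeffs, step=4):
    # Uniform quantization: small detail coefficients collapse to 0,
    # producing the long zero runs that entropy coders exploit (lossy).
    return [round(c / step) for c in coeffs]

# For a smooth pixel row such as [100, 102, 101, 99, 180, 182, 95, 97],
# every detail coefficient is +/-1 and quantizes to 0.
```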
A few 1D and 2D applications of wavelet compression use a technique called "wavelet footprints".[8][9]
For most natural images, the spectral density is higher at lower frequencies.[10] As a result, information of the low-frequency signal (the reference signal) is generally preserved, while the information in the detail signal is discarded. From the perspective of image compression and reconstruction, a wavelet should meet the following criteria while performing image compression:
A wavelet image compression system involves filters and decimation, so it can be described as a linear shift-variant system. A typical wavelet transformation diagram is displayed below:
The transformation system contains two analysis filters (a low-pass filter $h_0(n)$ and a high-pass filter $h_1(n)$), a decimation process, an interpolation process, and two synthesis filters ($g_0(n)$ and $g_1(n)$). The compression and reconstruction system generally involves the low-frequency components: the analysis filter $h_0(n)$ for image compression and the synthesis filter $g_0(n)$ for reconstruction. To evaluate such a system, we can input an impulse $\delta(n - n_i)$ and observe its reconstruction $h(n - n_i)$; the optimal wavelets are those that bring minimum shift variance and sidelobes to $h(n - n_i)$. Even though a wavelet with strict shift invariance is not realistic, it is possible to select a wavelet with only slight shift variance. For example, we can compare the shift variance of two filters:[11]
By observing the impulse responses of the two filters, we can conclude that the second filter is less sensitive to the input location (i.e. it is less shift variant).
Another important issue for image compression and reconstruction is the system's oscillatory behavior, which can lead to severe undesired artifacts in the reconstructed image. To minimize such artifacts, the wavelet filters should have a large peak-to-sidelobe ratio.
So far we have discussed the one-dimensional transformation of the image compression system. This can be extended to two dimensions, for which a more general framework, shiftable multiscale transforms, has been proposed.[12]
As mentioned earlier, impulse response can be used to evaluate the image compression/reconstruction system.
For the input sequence $x(n) = \delta(n - n_i)$, the reference signal $r_1(n)$ after one level of decomposition is $x(n) * h_0(n)$ decimated by a factor of two, where $h_0(n)$ is a low-pass filter. Similarly, the next reference signal $r_2(n)$ is obtained by decimating $r_1(n) * h_0(n)$ by a factor of two. After $L$ levels of decomposition (and decimation), the analysis response is obtained by retaining one out of every $2^L$ samples:
$$h_A^{(L)}(n, n_i) = f_{h0}^{(L)}\left(n - \frac{n_i}{2^L}\right).$$
On the other hand, to reconstruct the signal $x(n)$, we can consider a reference signal $r_L(n) = \delta(n - n_j)$. If the detail signals $d_i(n)$ are equal to zero for $1 \leq i \leq L$, then the reference signal at the previous stage (stage $L-1$) is $r_{L-1}(n) = g_0(n - 2n_j)$, which is obtained by interpolating $r_L(n)$ and convolving with $g_0(n)$. Similarly, the procedure is iterated to obtain the reference signal $r(n)$ at stages $L-2, L-3, \ldots, 1$. After $L$ iterations, the synthesis impulse response is calculated:
$$h_s^{(L)}(n, n_j) = f_{g0}^{(L)}\left(\frac{n}{2^L} - n_j\right),$$
which relates the reference signal $r_L(n)$ and the reconstructed signal.
To obtain the overall $L$-level analysis/synthesis system, the analysis and synthesis responses are combined as below:
$$h_{AS}^{(L)}(n, n_i) = \sum_k f_{h0}^{(L)}\left(k - \frac{n_i}{2^L}\right) f_{g0}^{(L)}\left(\frac{n}{2^L} - k\right).$$
Finally, the peak-to-first-sidelobe ratio and the average second sidelobe of the overall impulse response $h_{AS}^{(L)}(n, n_i)$ can be used to evaluate the wavelet image compression performance.
Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the common Morlet wavelet is mathematically identical to a short-time Fourier transform using a Gaussian window function.[13] The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses.[14]
The wavelet transform can provide the frequencies of a signal and the times at which they occur, making it very convenient for application in numerous fields: for instance, signal processing of accelerations for gait analysis,[15] fault detection,[16] analysis of seasonal displacements of landslides,[17] design of low-power pacemakers, and ultra-wideband (UWB) wireless communications.[18][19][20]
Applying the following discretization of frequency and time,
$$c_k = 2^k, \qquad \tau_{k,n} = n\,2^k\,T, \qquad k, n \in \mathbb{Z},$$
leads to wavelets of the form (the discrete formula for the basis wavelet)
$$\psi_{k,n}(t) = \frac{1}{\sqrt{2^k}}\,\psi\!\left(\frac{t - n\,2^k\,T}{2^k}\right).$$
Such discrete wavelets can be used for the transformation
$$y_{k,n} = \int_{-\infty}^{\infty} y(t)\,\psi_{k,n}^{*}(t)\,dt.$$
As apparent from the wavelet-transform representation
$$WT_{\psi}\{y\}(c, \tau) = \frac{1}{\sqrt{c}} \int_{-\infty}^{\infty} y(t)\,\psi^{*}\!\left(\frac{t - \tau}{c}\right) dt,$$
where $c$ is the scaling factor and $\tau$ the time-shift factor, and as already mentioned in this context, the wavelet transform corresponds to a convolution of the function $y(t)$ with a wavelet function. A convolution can be implemented as a multiplication in the frequency domain, which leads to the following implementation approach.
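The frequency-domain route rests on the convolution theorem: multiplying two discrete Fourier transforms and inverting yields the circular convolution of the inputs. A minimal sketch using a naive O(n²) DFT for clarity; practical implementations use an FFT and zero-padding to obtain linear (non-circular) convolution with the sampled wavelet:

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(y, w):
    """Convolution via the frequency domain: IDFT(DFT(y) * DFT(w))."""
    Y, W = dft(y), dft(w)
    return [v.real for v in dft([a * b for a, b in zip(Y, W)], inverse=True)]
```

Convolving with a unit impulse returns the signal unchanged, and with a shifted impulse returns a circularly shifted copy, confirming the frequency-domain implementation.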
For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained. Time-causal wavelet representations have been developed by Szu et al.[23] and Lindeberg,[24] with the latter method also involving a memory-efficient time-recursive implementation.
The synchrosqueezed transform can significantly enhance the temporal and frequency resolution of the time-frequency representation obtained using the conventional wavelet transform.[25][26]
https://en.wikipedia.org/wiki/Wavelet_transform
Fourier-transform spectroscopy (FTS) is a measurement technique whereby spectra are collected based on measurements of the coherence of a radiative source, using time-domain or space-domain measurements of the radiation, electromagnetic or not. It can be applied to a variety of types of spectroscopy including optical spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), nuclear magnetic resonance (NMR) and magnetic resonance spectroscopic imaging (MRSI),[1] mass spectrometry and electron spin resonance spectroscopy.
There are several methods for measuring the temporal coherence of the light (see: field autocorrelation), including the continuous-wave and the pulsed Fourier-transform spectrometer or Fourier-transform spectrograph.
The term "Fourier-transform spectroscopy" reflects the fact that in all these techniques, a Fourier transform is required to turn the raw data into the actual spectrum; in many of the cases in optics involving interferometers, the measurement is based on the Wiener–Khinchin theorem.
One of the most basic tasks in spectroscopy is to characterize the spectrum of a light source: how much light is emitted at each different wavelength. The most straightforward way to measure a spectrum is to pass the light through a monochromator, an instrument that blocks all of the light except the light at a certain wavelength (the un-blocked wavelength is set by a knob on the monochromator). Then the intensity of this remaining (single-wavelength) light is measured. The measured intensity directly indicates how much light is emitted at that wavelength. By varying the monochromator's wavelength setting, the full spectrum can be measured. This simple scheme in fact describes how some spectrometers work.
Fourier-transform spectroscopy is a less intuitive way to get the same information. Rather than allowing only one wavelength at a time to pass through to the detector, this technique lets through a beam containing many different wavelengths of light at once, and measures the total beam intensity. Next, the beam is modified to contain a different combination of wavelengths, giving a second data point. This process is repeated many times. Afterwards, a computer takes all this data and works backwards to infer how much light there is at each wavelength.
To be more specific, between the light source and the detector, there is a certain configuration of mirrors that allows some wavelengths to pass through but blocks others (due to wave interference). The beam is modified for each new data point by moving one of the mirrors; this changes the set of wavelengths that can pass through.
As mentioned, computer processing is required to turn the raw data (light intensity for each mirror position) into the desired result (light intensity for each wavelength). The processing required turns out to be a common algorithm called theFourier transform(hence the name, "Fourier-transform spectroscopy"). The raw data is sometimes called an "interferogram". Because of the existing computer equipment requirements, and the ability of light to analyze very small amounts of substance, it is often beneficial to automate many aspects of the sample preparation. The sample can be better preserved and the results are much easier to replicate. Both of these benefits are important, for instance, in testing situations that may later involve legal action, such as those involving drug specimens.[2]
The method of Fourier-transform spectroscopy can also be used for absorption spectroscopy. The primary example is "FTIR spectroscopy", a common technique in chemistry.
In general, the goal of absorption spectroscopy is to measure how well a sample absorbs or transmits light at each different wavelength. Although absorption spectroscopy and emission spectroscopy are different in principle, they are closely related in practice; any technique for emission spectroscopy can also be used for absorption spectroscopy. First, the emission spectrum of a broadband lamp is measured (this is called the "background spectrum"). Second, the emission spectrum of the same lamp shining through the sample is measured (this is called the "sample spectrum"). The sample will absorb some of the light, causing the spectra to be different. The ratio of the "sample spectrum" to the "background spectrum" is directly related to the sample's absorption spectrum.
Accordingly, the technique of "Fourier-transform spectroscopy" can be used both for measuring emission spectra (for example, the emission spectrum of a star) and absorption spectra (for example, the absorption spectrum of a liquid).
The Michelson spectrograph is similar to the instrument used in the Michelson–Morley experiment. Light from the source is split into two beams by a half-silvered mirror; one is reflected off a fixed mirror and one off a movable mirror, which introduces a time delay. The Fourier-transform spectrometer is just a Michelson interferometer with a movable mirror. The beams interfere, allowing the temporal coherence of the light to be measured at each different time-delay setting, effectively converting the time domain into a spatial coordinate. By making measurements of the signal at many discrete positions of the movable mirror, the spectrum can be reconstructed using a Fourier transform of the temporal coherence of the light. Michelson spectrographs are capable of very high spectral resolution observations of very bright sources.
The Michelson or Fourier-transform spectrograph was popular for infrared applications at a time when infrared astronomy only had single-pixel detectors. Imaging Michelson spectrometers are a possibility, but in general have been supplanted by imaging Fabry–Pérot instruments, which are easier to construct.
The intensity as a function of the path-length difference (also denoted as retardation) $p$ in the interferometer and the wavenumber $\tilde{\nu} = 1/\lambda$ is[3]
$$I(p, \tilde{\nu}) = I(\tilde{\nu})\left[1 + \cos(2\pi\tilde{\nu}p)\right],$$
where $I(\tilde{\nu})$ is the spectrum to be determined. Note that it is not necessary for $I(\tilde{\nu})$ to be modulated by the sample before the interferometer. In fact, most FTIR spectrometers place the sample after the interferometer in the optical path. The total intensity at the detector is
$$I(p) = \int_0^{\infty} I(p, \tilde{\nu})\,d\tilde{\nu} = \int_0^{\infty} I(\tilde{\nu})\left[1 + \cos(2\pi\tilde{\nu}p)\right] d\tilde{\nu}.$$
This is just a Fourier cosine transform. The inverse gives us our desired result in terms of the measured quantity $I(p)$:
$$I(\tilde{\nu}) = 4\int_0^{\infty} \left[I(p) - \tfrac{1}{2}I(p=0)\right] \cos(2\pi\tilde{\nu}p)\,dp.$$
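The recovery of a spectrum from an interferogram can be demonstrated numerically. The sketch below assumes a hypothetical two-line spectrum, synthesizes the interferogram $I(p) = \sum I(\tilde{\nu})[1 + \cos(2\pi\tilde{\nu}p)]$ over a finite retardation range, and recovers the line intensities by a discrete cosine transform of the oscillating part (the normalization factor is chosen for this finite range, not the semi-infinite integral):

```python
import math

# Hypothetical two-line spectrum: wavenumber -> intensity (arbitrary units)
lines = {5: 1.0, 12: 0.5}

def interferogram(p):
    # I(p) = sum over lines of I(v) * (1 + cos(2*pi*v*p))
    return sum(a * (1 + math.cos(2 * math.pi * v * p))
               for v, a in lines.items())

def spectrum(v, n=2000, pmax=20.0):
    """Recover the intensity at wavenumber v by a discrete cosine
    transform of the a.c. part of the interferogram."""
    dp = pmax / n
    dc = sum(lines.values())  # constant offset, equal to I(p=0)/2 here
    s = sum((interferogram(i * dp) - dc) * math.cos(2 * math.pi * v * i * dp)
            for i in range(n))
    return s * dp * 2 / pmax  # normalization for the finite range [0, pmax]
```

Evaluating `spectrum` at the line positions returns the line intensities (1.0 at wavenumber 5, 0.5 at 12), and approximately zero elsewhere.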
A pulsed Fourier-transform spectrometer does not employ transmittance techniques. In the most general description of pulsed FT spectrometry, a sample is exposed to an energizing event which causes a periodic response. The frequency of the periodic response, as governed by the field conditions in the spectrometer, is indicative of the measured properties of the analyte.
In magnetic spectroscopy (EPR, NMR), a microwave pulse (EPR) or a radio-frequency pulse (NMR) in a strong ambient magnetic field is used as the energizing event. This turns the magnetic particles at an angle to the ambient field, resulting in gyration. The gyrating spins then induce a periodic current in a detector coil. Each spin exhibits a characteristic frequency of gyration (relative to the field strength) which reveals information about the analyte.
In Fourier-transform mass spectrometry, the energizing event is the injection of the charged sample into the strong electromagnetic field of a cyclotron. These particles travel in circles, inducing a current in a fixed coil at one point in their circle. Each traveling particle exhibits a characteristic cyclotron frequency-field ratio revealing the masses in the sample.
Pulsed FT spectrometry gives the advantage of requiring a single, time-dependent measurement which can easily deconvolute a set of similar but distinct signals. The resulting composite signal is called a free induction decay, because typically the signal will decay due to inhomogeneities in sample frequency, or simply unrecoverable loss of signal due to entropic loss of the property being measured.
Pulsed sources allow for the utilization of Fourier-transform spectroscopy principles in scanning near-field optical microscopy techniques. Particularly in nano-FTIR, where the scattering from a sharp probe tip is used to perform spectroscopy of samples with nanoscale spatial resolution, high-power illumination from pulsed infrared lasers makes up for the relatively small scattering efficiency (often < 1%) of the probe.[4]
In addition to the scanning forms of Fourier-transform spectrometers, there are a number of stationary or self-scanned forms.[5]While the analysis of the interferometric output is similar to that of the typical scanning interferometer, significant differences apply, as shown in the published analyses. Some stationary forms retain the Fellgett multiplex advantage, and their use in the spectral region where detector noise limits apply is similar to the scanning forms of the FTS. In the photon-noise limited region, the application of stationary interferometers is dictated by specific consideration for the spectral region and the application.
One of the most important advantages of Fourier-transform spectroscopy was shown by P. B. Fellgett, an early advocate of the method. The Fellgett advantage, also known as the multiplex principle, states that when measurement noise is dominated by detector noise (which is independent of the power of radiation incident on the detector), a multiplex spectrometer such as a Fourier-transform spectrometer will produce a relative improvement in signal-to-noise ratio, compared to an equivalent scanning monochromator, of the order of the square root of m, where m is the number of sample points comprising the spectrum. However, if the detector is shot-noise dominated, the noise is proportional to the square root of the power; thus, for a broad boxcar spectrum (continuous broadband source), the noise is proportional to the square root of m, precisely offsetting the Fellgett advantage. For line emission sources the situation is even worse and there is a distinct 'multiplex disadvantage', as the shot noise from a strong emission component will overwhelm the fainter components of the spectrum. Shot noise is the main reason Fourier-transform spectrometry was never popular for ultraviolet (UV) and visible spectra.
|
https://en.wikipedia.org/wiki/Fourier-transform_spectroscopy
|
Harmonic analysisis a branch ofmathematicsconcerned with investigating the connections between afunctionand its representation infrequency. The frequency representation is found by using theFourier transformfor functions on unbounded domains such as the fullreal lineor byFourier seriesfor functions on bounded domains, especiallyperiodic functionson finiteintervals. Generalizing these transforms to other domains is generally calledFourier analysis, although the term is sometimes used interchangeably with harmonic analysis. Harmonic analysis has become a vast subject with applications in areas as diverse asnumber theory,representation theory,signal processing,quantum mechanics,tidal analysis,spectral analysis, andneuroscience.
The term "harmonics" originated from theAncient Greekwordharmonikos, meaning "skilled in music".[1]In physicaleigenvalueproblems, it began to mean waves whose frequencies areinteger multiplesof one another, as are the frequencies of theharmonics of music notes. Still, the term has been generalized beyond its original meaning.
Historically,harmonic functionsfirst referred to the solutions ofLaplace's equation.[2]This terminology was extended to otherspecial functionsthat solved related equations,[3]then toeigenfunctionsof generalelliptic operators,[4]and nowadays harmonic functions are considered as a generalization of periodic functions[5]infunction spacesdefined onmanifolds, for example as solutions of general, not necessarilyelliptic,partial differential equationsincluding someboundary conditionsthat may imply their symmetry or periodicity.[6]
The classicalFourier transformonRnis still an area of ongoing research, particularly concerning Fourier transformation on more general objects such astempered distributions. For instance, if we impose some requirements on a distributionf, we can attempt to translate these requirements into the Fourier transform off. ThePaley–Wiener theoremis an example. The Paley–Wiener theorem immediately implies that iffis a nonzerodistributionofcompact support(these include functions of compact support), then its Fourier transform is never compactly supported (i.e., if a signal is limited in one domain, it is unlimited in the other). This is an elementary form of anuncertainty principlein a harmonic-analysis setting.
Fourier series can be conveniently studied in the context ofHilbert spaces, which provides a connection between harmonic analysis andfunctional analysis. There are four versions of the Fourier transform, dependent on the spaces that are mapped by the transformation:
As the spaces mapped by the Fourier transform are, in particular, subspaces of the space of tempered distributions it can be shown that the four versions of the Fourier transform are particular cases of the Fourier transform on tempered distributions.
Abstract harmonic analysis is primarily concerned with how real or complex-valued functions (often on very general domains) can be studied using symmetries such as translations or rotations (for instance via the Fourier transform and its relatives); this field is of course related to real-variable harmonic analysis, but is perhaps closer in spirit to representation theory and functional analysis.[6]
One of the most modern branches of harmonic analysis, having its roots in the mid-20th century, isanalysisontopological groups. The core motivating ideas are the variousFourier transforms, which can be generalized to a transform of functions defined on Hausdorfflocally compact topological groups.[7]
One of the major results in the theory of functions onabelianlocally compact groups is calledPontryagin duality.
Harmonic analysis studies the properties of that duality. Different generalizations of the Fourier transform attempt to extend those features to other settings, for instance, first to the case of general abelian topological groups and second to the case of non-abelian Lie groups.[8]
Harmonic analysis is closely related to the theory of unitary group representations for general non-abelian locally compact groups. For compact groups, thePeter–Weyl theoremexplains how one may get harmonics by choosing one irreducible representation out of each equivalence class of representations.[9]This choice of harmonics enjoys some of the valuable properties of the classical Fourier transform in terms of carrying convolutions to pointwise products or otherwise showing a certain understanding of the underlyinggroupstructure. See also:Non-commutative harmonic analysis.
If the group is neither abelian nor compact, no general satisfactory theory is currently known ("satisfactory" means at least as strong as thePlancherel theorem). However, many specific cases have been analyzed, for example,SLn. In this case,representationsin infinitedimensionsplay a crucial role.
Many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components. Oceantidesand vibratingstringsare common and simple examples. The theoretical approach often tries to describe the system by adifferential equationorsystem of equationsto predict the essential features, including the amplitude, frequency, and phases of the oscillatory components. The specific equations depend on the field, but theories generally try to select equations that represent significant principles that are applicable.
The experimental approach is usually toacquire datathat accurately quantifies the phenomenon. For example, in a study of tides, the experimentalist would acquire samples of water depth as a function of time at closely enough spaced intervals to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included. In a study on vibrating strings, it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected.
For example, the top signal at the right is a sound waveform of a bass guitar playing an open string corresponding to an A note with a fundamental frequency of 55 Hz. The waveform appears oscillatory, but it is more complex than a simple sine wave, indicating the presence of additional waves. The different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as theFourier transform, shown in the lower figure. There is a prominent peak at 55 Hz, but other peaks at 110 Hz, 165 Hz, and at other frequencies corresponding to integer multiples of 55 Hz. In this case, 55 Hz is identified as the fundamental frequency of the string vibration, and the integer multiples are known asharmonics.
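The analysis described above can be reproduced numerically. The synthetic waveform below is a crude stand-in for the bass-guitar recording: a 55 Hz fundamental plus two harmonics, with amplitudes invented for illustration:

```python
import numpy as np

fs = 44100                          # audio sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)       # 1 s of signal
f0 = 55.0                           # fundamental of the open A string

# Stand-in for the bass-guitar waveform: fundamental plus two
# harmonics at integer multiples of 55 Hz (amplitudes are invented).
signal = (1.0 * np.sin(2 * np.pi * f0 * t)
          + 0.6 * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.3 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The three largest spectral peaks sit at 55, 110, and 165 Hz.
top3 = freqs[np.argsort(spectrum)[-3:]]
```

With a 1 s window the frequency resolution is 1 Hz, so each harmonic falls exactly on a bin and appears as a sharp peak, mirroring the peaks described for the measured waveform.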
|
https://en.wikipedia.org/wiki/Harmonic_analysis
|
Inmathematics, anoperatorortransformis afunctionfrom onespace of functionsto another. Operators occur commonly inengineering,physicsand mathematics. Many areintegral operatorsanddifferential operators.
In the followingLis an operator
which takes a functiony∈F{\displaystyle y\in {\mathcal {F}}}to another functionL[y]∈G{\displaystyle L[y]\in {\mathcal {G}}}. Here,F{\displaystyle {\mathcal {F}}}andG{\displaystyle {\mathcal {G}}}are some unspecifiedfunction spaces, such asHardy space,Lpspace,Sobolev space, or, more vaguely, the space ofholomorphic functions.
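As a concrete illustration, differentiation and integration are operators in this sense: each maps a function to another function. A minimal numerical sketch (the step size and quadrature rule are implementation choices, not part of the definition):

```python
import math

def D(y, h=1e-6):
    """Differential operator: maps a function y to (an approximation of) y'."""
    return lambda x: (y(x + h) - y(x - h)) / (2 * h)  # central difference

def I(y, a=0.0, n=1000):
    """Integral operator: maps y to its running integral from a (trapezoid rule)."""
    def Iy(x):
        xs = [a + k * (x - a) / n for k in range(n + 1)]
        vals = [y(s) for s in xs]
        return (x - a) / n * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return Iy

dsin = D(math.sin)      # the operator sends sin to (approximately) cos
int_cos = I(math.cos)   # the operator sends cos to (approximately) sin
```

Note that `D` and `I` take a function and return a function, which is exactly the function-space-to-function-space signature of the operator L in the text.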
|
https://en.wikipedia.org/wiki/List_of_mathematic_operators
|
Inmathematics, in the area ofstatistical analysis, thebispectrumis a statistic used to search for nonlinear interactions.
TheFourier transformof the second-ordercumulant, i.e., theautocorrelationfunction, is the traditionalpower spectrum.
The Fourier transform ofC3(t1,t2) (third-ordercumulant-generating function) is called the bispectrum orbispectral density.
Applying theconvolution theoremallows fast calculation of the bispectrum:B(f1,f2)=F(f1)⋅F(f2)⋅F∗(f1+f2){\displaystyle B(f_{1},f_{2})=F(f_{1})\cdot F(f_{2})\cdot F^{*}(f_{1}+f_{2})}, whereF{\displaystyle F}denotes the Fourier transform of the signal, andF∗{\displaystyle F^{*}}its conjugate.
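The formula above can be implemented directly. The sketch below estimates the bispectrum of a quadratically phase-coupled test signal, the textbook case where components at f1, f2, and f1+f2 have coherently related phases (the frequencies 20, 30, and 50 bins are chosen for illustration):

```python
import numpy as np

def bispectrum(x):
    """Direct bispectrum estimate: B(f1, f2) = F(f1) F(f2) F*(f1 + f2)."""
    F = np.fft.fft(x)
    n = len(x)
    f = np.arange(n)
    idx = (f[:, None] + f[None, :]) % n   # frequency sums, wrapped modulo n
    return F[:, None] * F[None, :] * np.conj(F[idx])

# Quadratically coupled signal: components at bins 20, 30 and 50 = 20 + 30.
n = 256
t = np.arange(n)
x = (np.cos(2 * np.pi * 20 * t / n)
     + np.cos(2 * np.pi * 30 * t / n)
     + np.cos(2 * np.pi * 50 * t / n))

B = np.abs(bispectrum(x))   # large peak at (f1, f2) = (20, 30)
```

In practice the bispectrum is estimated by averaging this product over many data segments; the single-record version above only illustrates the convolution-theorem formula itself.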
Bispectrum andbicoherencemay be applied to the case of non-linear interactions of a continuous spectrum of propagating waves in one dimension.[1]
Bispectral measurements have been carried out forEEGsignals monitoring.[2]It was also shown that bispectra characterize differences between families of musical instruments.[3]
Inseismology, signals rarely have adequate duration for making sensible bispectral estimates from time averages.[citation needed]
Bispectral analysis describes observations made at two wavelengths. It is often used by scientists to analyze the elemental makeup of a planetary atmosphere from the amount of light reflected and received through various color filters. By combining the measurements from only two filters, much can be gleaned. Through modern computerized interpolation, a third virtual filter can be created to recreate true-color photographs that, while not particularly useful for scientific analysis, are popular for public display in textbooks and fundraising campaigns.[citation needed]
Bispectral analysis can also be used to analyze interactions between wave patterns and tides on Earth.[4]
A form of bispectral analysis called thebispectral indexis applied toEEGwaveforms to monitor depth of anesthesia.[5]
The biphase (the phase of the polyspectrum) can be used for the detection of phase couplings[6]and for noise reduction in the analysis of polyharmonic signals, particularly speech.[7]
The bispectrum reflects the energy budget of interactions, as it can be interpreted as a covariance defined between the energy-supplying and energy-receiving parties of waves involved in a nonlinear interaction.[8]On the other hand, bicoherence has been proven to be the corresponding correlation coefficient.[8]Just as correlation cannot sufficiently demonstrate the presence of causality, the bispectrum and bicoherence likewise cannot sufficiently substantiate the existence of a nonlinear interaction.
Bispectra fall in the category ofhigher-order spectra, orpolyspectraand provide supplementary information to the power spectrum. The third order polyspectrum (bispectrum) is the easiest to compute, and hence the most popular.
A statistic defined analogously is thebispectral coherencyorbicoherence.
The Fourier transform of C4 (t1, t2, t3) (fourth-order cumulant-generating function) is called thetrispectrumortrispectral density.
The trispectrum T(f1,f2,f3) falls into the category of higher-order spectra, orpolyspectra, and provides supplementary information to the power spectrum. The trispectrum is a three-dimensional construct. Thesymmetriesof the trispectrum allow a much reduced support set to be defined, contained within the following vertices, where 1 is theNyquist frequency. (0,0,0) (1/2,1/2,-1/2) (1/3,1/3,0) (1/2,0,0) (1/4,1/4,1/4). The plane containing the points (1/6,1/6,1/6) (1/4,1/4,0) (1/2,0,0) divides this volume into an inner and an outer region. A stationary signal will have zero strength (statistically) in the outer region. The trispectrum support is divided into regions by the plane identified above and by the (f1,f2) plane. Each region has different requirements in terms of the bandwidth of signal required for non-zero values.
In the same way that the bispectrum identifies contributions to a signal'sskewnessas a function of frequency triples, the trispectrum identifies contributions to a signal'skurtosisas a function of frequency quadruplets.
The trispectrum has been used to investigate the domains of applicability of maximum kurtosis phase estimation used in the deconvolution of seismic data to find layer structure.
|
https://en.wikipedia.org/wiki/Bispectrum
|
Ametamodelis a model of a model, andmetamodelingis the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction, and development of the frames, rules, constraints, models, and theories applicable and useful formodelinga predefined class of problems. As its name implies, this concept applies the notions ofmeta-and modeling insoftware engineeringandsystems engineering. Metamodels are of many types and have diverse applications.[2]
A metamodel, or surrogate model, is a model of a model, i.e., a simplified model of an actual model of a circuit, system, or software-like entity.[3][4]A metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting the properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural networks, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the input and output relationships and then fitting the right metamodels to represent that behavior.
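The "study input-output relationships, then fit a metamodel" workflow can be sketched with a polynomial surrogate. The "expensive" model below is a made-up stand-in for a circuit or simulation code:

```python
import numpy as np

# Stand-in for an expensive simulation: a smooth nonlinear
# input-output relation (this function is invented for illustration).
def expensive_model(x):
    return np.sin(x) + 0.1 * x**2

# Sample the model at a handful of design points ...
x_train = np.linspace(-2.0, 2.0, 15)
y_train = expensive_model(x_train)

# ... and fit a cheap polynomial metamodel to the observed behavior.
coeffs = np.polyfit(x_train, y_train, deg=5)
metamodel = np.poly1d(coeffs)

# The surrogate reproduces the model closely inside the sampled range.
x_test = np.linspace(-2.0, 2.0, 101)
max_err = np.max(np.abs(metamodel(x_test) - expensive_model(x_test)))
```

A neural network or Kriging model would replace the `polyfit` step; the overall pattern of sampling the true model and fitting a cheap approximation is the same.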
Common uses for metamodels are:
Because of the "meta" character of metamodeling, both thepraxisand theory of metamodels are of relevance tometascience,metaphilosophy,metatheoriesandsystemics, and meta-consciousness. The concept can be useful inmathematics, and has practical applications incomputer scienceandcomputer engineering/software engineering. The latter are the main focus of this article.
Insoftware engineering, the use ofmodelsis an alternative to more common code-based development techniques. A model always conforms to a unique metamodel. One of the currently most active branches ofModel Driven Engineeringis the approach namedmodel-driven architectureproposed byOMG. This approach is embodied in theMeta Object Facility(MOF) specification.[citation needed]
Typical metamodelling specifications proposed byOMGareUML,SysML, SPEM or CWM.ISOhas also published the standard metamodelISO/IEC 24744.[6]All the languages presented below could be defined as MOF metamodels.
Metadata modelingis a type of metamodeling used insoftware engineeringandsystems engineeringfor the analysis and construction of models applicable and useful to some predefined class of problems. (see also:data modeling).
One important move inmodel-driven engineeringis the systematic use ofmodel transformation languages. The OMG has proposed a standard for this calledQVTfor Queries/Views/Transformations.QVTis based on the meta-object facility (MOF). Among many othermodel transformation languages(MTLs), some examples of implementations of this standard are AndroMDA,VIATRA,Tefkat,MT,ManyDesigns Portofino.
Meta-models are closely related toontologies. Both are often used to describe and analyze the relations between concepts:[7]
For software engineering, severaltypesof models (and their corresponding modeling activities) can be distinguished:
A library of similar metamodels has been called a Zoo of metamodels.[11]There are several types of meta-model zoos.[12]Some are expressed in ECore. Others are written inMOF1.4 –XMI1.2. The metamodels expressed inUML-XMI1.2 may be uploaded in Poseidon for UML, aUMLCASEtool.
|
https://en.wikipedia.org/wiki/Metamodeling
|
Theiterative rational Krylov algorithm (IRKA)is an iterative algorithm, useful formodel order reduction(MOR) ofsingle-input single-output(SISO) linear time-invariantdynamical systems.[1]At each iteration, IRKA performs a Hermite-type interpolation of the original system transfer function. Each interpolation requires solvingr{\displaystyle r}shifted pairs oflinear systems, each of sizen×n{\displaystyle n\times n}, wheren{\displaystyle n}is the original system order andr{\displaystyle r}is the desired reduced model order (usuallyr≪n{\displaystyle r\ll n}).
The algorithm was first introduced by Gugercin, Antoulas and Beattie in 2008.[2]It is based on a first order necessary optimality condition, initially investigated by Meier and Luenberger in 1967.[3]The first convergence proof of IRKA was given by Flagg, Beattie and Gugercin in 2012,[4]for a particular kind of systems.
Consider a SISO linear time-invariant dynamical system, with inputv(t){\displaystyle v(t)}, and outputy(t){\displaystyle y(t)}:
Applying theLaplace transform, with zero initial conditions, we obtain thetransfer functionG{\displaystyle G}, which is a fraction of polynomials:
AssumeG{\displaystyle G}is stable. Givenr<n{\displaystyle r<n}, MOR tries to approximate the transfer functionG{\displaystyle G}, by a stable rational transfer functionGr{\displaystyle G_{r}}, of orderr{\displaystyle r}:
A possible approximation criterion is to minimize the absolute error inH2{\displaystyle H_{2}}norm:
This is known as theH2{\displaystyle H_{2}}optimizationproblem. This problem has been studied extensively, and it is known to be non-convex;[4]which implies that usually it will be difficult to find a global minimizer.
The following first order necessary optimality condition for theH2{\displaystyle H_{2}}problem, is of great importance for the IRKA algorithm.
Theorem([2][Theorem 3.4][4][Theorem 1.2])—Assume that theH2{\displaystyle H_{2}}optimization problem admits a solutionGr{\displaystyle G_{r}}with simple poles. Denote these poles by:λ1(Ar),…,λr(Ar){\displaystyle \lambda _{1}(A_{r}),\ldots ,\lambda _{r}(A_{r})}. Then,Gr{\displaystyle G_{r}}must be an Hermite interpolator ofG{\displaystyle G}, through the reflected poles ofGr{\displaystyle G_{r}}:
Note that the polesλi(Ar){\displaystyle \lambda _{i}(A_{r})}are theeigenvaluesof the reducedr×r{\displaystyle r\times r}matrixAr{\displaystyle A_{r}}.
An Hermite interpolantGr{\displaystyle G_{r}}of the rational functionG{\displaystyle G}, throughr{\displaystyle r}distinct pointsσ1,…,σr∈C{\displaystyle \sigma _{1},\ldots ,\sigma _{r}\in \mathbb {C} }, has components:
where the matricesVr=(v1∣…∣vr)∈Cn×r{\displaystyle V_{r}=(v_{1}\mid \ldots \mid v_{r})\in \mathbb {C} ^{n\times r}}andWr=(w1∣…∣wr)∈Cn×r{\displaystyle W_{r}=(w_{1}\mid \ldots \mid w_{r})\in \mathbb {C} ^{n\times r}}may be found by solvingr{\displaystyle r}dual pairs of linear systems, one for each shift[4][Theorem 1.1]:
As can be seen from the previous section, finding an Hermite interpolatorGr{\displaystyle G_{r}}ofG{\displaystyle G}, throughr{\displaystyle r}given points, is relatively easy. The difficult part is to find the correct interpolation points. IRKA tries to iteratively approximate these "optimal" interpolation points.
For this, it starts withr{\displaystyle r}arbitrary interpolation points (closed under conjugation), and then, at each iterationm{\displaystyle m}, it imposes the first order necessary optimality condition of theH2{\displaystyle H_{2}}problem:
1. find the Hermite interpolantGr{\displaystyle G_{r}}ofG{\displaystyle G}, through the actualr{\displaystyle r}shift points:σ1m,…,σrm{\displaystyle \sigma _{1}^{m},\ldots ,\sigma _{r}^{m}}.
2. update the shifts by using the poles of the newGr{\displaystyle G_{r}}:σim+1=−λi(Ar),∀i=1,…,r.{\displaystyle \sigma _{i}^{m+1}=-\lambda _{i}(A_{r}),\,\forall \,i=1,\ldots ,r.}
The iteration is stopped when the relative change in the set of shifts of two successive iterations is less than a given tolerance. This condition may be stated as:
As already mentioned, each Hermite interpolation requires solvingr{\displaystyle r}shifted pairs of linear systems, each of sizen×n{\displaystyle n\times n}:
Also, updating the shifts requires finding ther{\displaystyle r}poles of the new interpolantGr{\displaystyle G_{r}}. That is, finding ther{\displaystyle r}eigenvalues of the reducedr×r{\displaystyle r\times r}matrixAr{\displaystyle A_{r}}.
The following is a pseudocode for the IRKA algorithm[2][Algorithm 4.1].
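A minimal Python sketch of the iteration described above (not the authors' reference implementation; the dense solves stand in for whatever linear solver is appropriate, and the example system at the end is invented):

```python
import numpy as np

def irka(A, b, c, sigma, tol=1e-8, maxit=200):
    """Minimal IRKA sketch for the SISO system  x' = A x + b u,  y = c^T x."""
    n = A.shape[0]
    I = np.eye(n)
    sigma = np.sort_complex(np.asarray(sigma, dtype=complex))
    for _ in range(maxit):
        # Solve the r shifted pairs of n-by-n linear systems (one per shift).
        V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * I - A.T, c) for s in sigma])
        # Petrov-Galerkin projection: Ar is the reduced r-by-r matrix.
        Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)
        # Impose the optimality condition: new shifts are the mirrored poles.
        new_sigma = np.sort_complex(-np.linalg.eigvals(Ar))
        if np.max(np.abs(new_sigma - sigma)) < tol * np.max(np.abs(sigma)):
            sigma = new_sigma
            break
        sigma = new_sigma
    br = np.linalg.solve(W.conj().T @ V, W.conj().T @ b)
    cr = V.conj().T @ c
    return Ar, br, cr, sigma

# Example: reduce a made-up stable 4th-order symmetric system to order 2.
A = np.diag([-1.0, -2.0, -3.0, -4.0])
b = np.ones(4)
c = np.ones(4)
Ar, br, cr, shifts = irka(A, b, c, sigma=[1.0, 3.0])
```

Note the two costs discussed in the text: each sweep solves r shifted pairs of n-by-n systems to build V and W, and the shift update requires the r eigenvalues of the reduced matrix Ar.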
A SISO linear system is said to have symmetric state space (SSS) whenever:A=AT,b=c.{\displaystyle A=A^{T},\,b=c.}This type of system appears in many important applications, such as the analysis of RC circuits and inverse problems involving 3DMaxwell's equations.[4]For SSS systems with distinct poles, the following convergence result has been proven:[4]"IRKA is a locally convergent fixed point iteration to a local minimizer of theH2{\displaystyle H_{2}}optimization problem."
Although there is no convergence proof for the general case, numerous experiments have shown that IRKA often converges rapidly for different kinds of linear dynamical systems.[1][4]
The IRKA algorithm has been extended by the original authors tomultiple-input multiple-output(MIMO) systems, and also to discrete-time and differential-algebraic systems[1][2][Remark 4.1].
Model order reduction
|
https://en.wikipedia.org/wiki/Iterative_rational_Krylov_algorithm
|
Inmultilinear algebra, thetensor rank decomposition[1]orrank-Rdecompositionis the decomposition of a tensor as a sum ofRrank-1 tensors, whereRis minimal. Computing this decomposition is an open problem.[clarification needed]
Canonical polyadic decomposition (CPD)is a variant of the tensor rank decomposition, in which the tensor is approximated as a sum ofKrank-1 tensors for a user-specifiedK. The CP decomposition has found some applications inlinguisticsandchemometrics. It was introduced byFrank Lauren Hitchcockin 1927[2]and later rediscovered several times, notably in psychometrics.[3][4]The CP decomposition is referred to as CANDECOMP,[3]PARAFAC,[4]or CANDECOMP/PARAFAC (CP). Note that the PARAFAC2 rank decomposition is a variation of the CP decomposition.[5]
Another popular generalization of the matrix SVD known as thehigher-order singular value decompositioncomputes orthonormal mode matrices and has found applications ineconometrics,signal processing,computer vision,computer graphics, andpsychometrics.
A scalar variable is denoted by lower case italic letters,a{\displaystyle a}and an upper bound scalar is denoted by an upper case italic letter,A{\displaystyle A}.
Indices are denoted by a combination of lowercase and upper case italic letters,1≤i≤I{\displaystyle 1\leq i\leq I}. Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by1≤im≤Im{\displaystyle 1\leq i_{m}\leq I_{m}}where1≤m≤M{\displaystyle 1\leq m\leq M}.
A vector is denoted by a lower case bold Times Roman,a{\displaystyle \mathbf {a} }and a matrix is denoted by bold upper case lettersA{\displaystyle \mathbf {A} }.
A higher order tensor is denoted by calligraphic letters,A{\displaystyle {\mathcal {A}}}. An element of anM{\displaystyle M}-order tensorA∈CI1×I2×…Im×…IM{\displaystyle {\mathcal {A}}\in \mathbb {C} ^{I_{1}\times I_{2}\times \dots I_{m}\times \dots I_{M}}}is denoted byai1,i2,…,im,…iM{\displaystyle a_{i_{1},i_{2},\dots ,i_{m},\dots i_{M}}}orAi1,i2,…,im,…iM{\displaystyle {\mathcal {A}}_{i_{1},i_{2},\dots ,i_{m},\dots i_{M}}}.
A data tensorA∈FI0×I1×…×IC{\displaystyle {\mathcal {A}}\in {\mathbb {F} }^{I_{0}\times I_{1}\times \ldots \times I_{C}}}is a collection of multivariate observations organized into aM-way array whereM=C+1. Every tensor may be represented with a suitably largeR{\displaystyle R}as a linear combination ofR{\displaystyle R}rank-1 tensors:
whereλr∈F{\displaystyle \lambda _{r}\in {\mathbb {F} }}andam,r∈FIm{\displaystyle \mathbf {a} _{m,r}\in {\mathbb {F} }^{I_{m}}}where1≤m≤M{\displaystyle 1\leq m\leq M}. When the number of termsR{\displaystyle R}is minimal in the above expression, thenR{\displaystyle R}is called therankof the tensor, and the decomposition is often referred to as a(tensor) rank decomposition,minimal CP decomposition, orCanonical Polyadic Decomposition (CPD). If the number of terms is not minimal, then the above decomposition is often referred to as CANDECOMP/PARAFAC or polyadic decomposition.
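The decomposition above can be illustrated numerically: the sketch below builds a 3-way tensor from R = 2 rank-1 terms (the sizes and random factor matrices are chosen arbitrarily), so by construction the tensor has rank at most 2:

```python
import numpy as np

# Build a 3-way tensor as T = sum_r  lambda_r  a_r (outer) b_r (outer) c_r.
rng = np.random.default_rng(0)
R = 2
lam = np.array([2.0, -1.0])       # weights lambda_r
a = rng.standard_normal((4, R))   # factor matrix for mode 1
b = rng.standard_normal((3, R))   # factor matrix for mode 2
c = rng.standard_normal((5, R))   # factor matrix for mode 3

# einsum sums the R outer products in one call.
T = np.einsum('r,ir,jr,kr->ijk', lam, a, b, c)

# Every unfolding (matricization) of T then has matrix rank at most R.
rank1 = np.linalg.matrix_rank(T.reshape(4, 15))   # mode-1 unfolding
```

Finding the minimal such R for a given tensor is the hard direction, as the next paragraph explains; constructing a tensor from given rank-1 terms, as here, is straightforward.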
Contrary to the case of matrices, computing the rank of a tensor isNP-hard.[6]The only notable well-understood case consists of tensors inFIm⊗FIn⊗F2{\displaystyle F^{I_{m}}\otimes F^{I_{n}}\otimes F^{2}}, whose rank can be obtained from theKronecker–Weierstrassnormal form of the linearmatrix pencilthat the tensor represents.[7]A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely thehigher-order singular value decomposition.
The rank of the tensor of zeros is zero by convention. The rank of a tensora1⊗⋯⊗aM{\displaystyle \mathbf {a} _{1}\otimes \cdots \otimes \mathbf {a} _{M}}is one, provided thatam∈FIm∖{0}{\displaystyle \mathbf {a} _{m}\in F^{I_{m}}\setminus \{0\}}.
The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example,[8]consider the following real tensor
wherexi,yj∈R2{\displaystyle \mathbf {x} _{i},\mathbf {y} _{j}\in \mathbb {R} ^{2}}. The rank of this tensor over the reals is known to be 3, while its complex rank is only 2 because it is the sum of a complex rank-1 tensor with itscomplex conjugate, namely
wherezk=xk+iyk{\displaystyle \mathbf {z} _{k}=\mathbf {x} _{k}+i\mathbf {y} _{k}}.
In contrast, the rank of real matrices will never decrease under afield extensiontoC{\displaystyle \mathbb {C} }: real matrix rank and complex matrix rank coincide for real matrices.
Thegeneric rankr(I1,…,IM){\displaystyle r(I_{1},\ldots ,I_{M})}is defined as the least rankr{\displaystyle r}such that the closure in theZariski topologyof the set of tensors of rank at mostr{\displaystyle r}is the entire spaceFI1⊗⋯⊗FIM{\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}}. In the case of complex tensors, tensors of rank at mostr(I1,…,IM){\displaystyle r(I_{1},\ldots ,I_{M})}form adense setS{\displaystyle S}: every tensor in the aforementioned space is either of rank less than the generic rank, or it is the limit in theEuclidean topologyof a sequence of tensors fromS{\displaystyle S}. In the case of real tensors, the set of tensors of rank at mostr(I1,…,IM){\displaystyle r(I_{1},\ldots ,I_{M})}only forms an open set of positive measure in the Euclidean topology. There may exist Euclidean-open sets of tensors of rank strictly higher than the generic rank. All ranks appearing on open sets in the Euclidean topology are calledtypical ranks. The smallest typical rank is called the generic rank; this definition applies to both complex and real tensors. The generic rank of tensor spaces was initially studied in 1983 byVolker Strassen.[9]
As an illustration of the above concepts, it is known that both 2 and 3 are typical ranks ofR2⊗R2⊗R2{\displaystyle \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}}while the generic rank ofC2⊗C2⊗C2{\displaystyle \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}}is 2. Practically, this means that a randomly sampled real tensor (from a continuous probability measure on the space of tensors) of size2×2×2{\displaystyle 2\times 2\times 2}will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and rank-3 with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor inR2⊗R2⊗R2{\displaystyle \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}}will be of complex rank equal to 2.
The generic rank of tensor spaces depends on the distinction between balanced and unbalanced tensor spaces. A tensor spaceFI1⊗⋯⊗FIM{\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}}, whereI1≥I2≥⋯≥IM{\displaystyle I_{1}\geq I_{2}\geq \cdots \geq I_{M}},
is calledunbalancedwhenever
and it is calledbalancedotherwise.
When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor spaces is known to equal
almost everywhere. More precisely, the rank of every tensor in an unbalanced tensor spaceFI1×⋯×IM∖Z{\displaystyle F^{I_{1}\times \cdots \times I_{M}}\setminus Z}, whereZ{\displaystyle Z}is some indeterminate closed set in the Zariski topology, equals the above value.[10]
Theexpectedgeneric rank of tensors living in a balanced tensor space is equal to
almost everywherefor complex tensors and on a Euclidean-open set for real tensors, where
More precisely, the rank of every tensor inCI1×⋯×IM∖Z{\displaystyle \mathbb {C} ^{I_{1}\times \cdots \times I_{M}}\setminus Z}, whereZ{\displaystyle Z}is some indeterminate closed set in theZariski topology, is expected to equal the above value.[11]For real tensors,rE(I1,…,IM){\displaystyle r_{E}(I_{1},\ldots ,I_{M})}is the least rank that is expected to occur on a set of positive Euclidean measure. The valuerE(I1,…,IM){\displaystyle r_{E}(I_{1},\ldots ,I_{M})}is often referred to as theexpected generic rankof the tensor spaceFI1×⋯×IM{\displaystyle F^{I_{1}\times \cdots \times I_{M}}}because it is only conjecturally correct. It is known that the true generic rank always satisfies
TheAbo–Ottaviani–Peterson conjecture[11]states that equality is expected, i.e.,r(I1,…,IM)=rE(I1,…,IM){\displaystyle r(I_{1},\ldots ,I_{M})=r_{E}(I_{1},\ldots ,I_{M})}, with the following exceptional cases:
In each of these exceptional cases, the generic rank is known to ber(I1,…,Im,…,IM)=rE(I1,…,IM)+1{\displaystyle r(I_{1},\ldots ,I_{m},\ldots ,I_{M})=r_{E}(I_{1},\ldots ,I_{M})+1}. Note that while the set of tensors of rank 3 inF2×2×2×2{\displaystyle F^{2\times 2\times 2\times 2}}is defective (13 and not the expected 14), the generic rank in that space is still the expected one, 4. Similarly, the set of tensors of rank 5 inF4×4×3{\displaystyle F^{4\times 4\times 3}}is defective (44 and not the expected 45), but the generic rank in that space is still the expected 6.
The AOP conjecture has been proved completely in a number of special cases. Lickteig showed already in 1985 thatr(n,n,n)=rE(n,n,n){\displaystyle r(n,n,n)=r_{E}(n,n,n)}, provided thatn≠3{\displaystyle n\neq 3}.[12]In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the dimension of the set of rank-s{\displaystyle s}tensors of format2×2×⋯×2{\displaystyle 2\times 2\times \cdots \times 2}is the expected one except for rank-3 tensors in the 4-factor case, yet the expected rank in that case is still 4. As a consequence,r(2,2,…,2)=rE(2,2,…,2){\displaystyle r(2,2,\ldots ,2)=r_{E}(2,2,\ldots ,2)}for all binary tensors.[13]
The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank $r_{\max}(I_1,\ldots,I_M)$ of $F^{I_1}\otimes\cdots\otimes F^{I_M}$, where $I_1\ge I_2\ge\cdots\ge I_M$, satisfies $r_{\max}(I_1,\ldots,I_M)\le 2\,r(I_1,\ldots,I_M)$,
where $r(I_1,\ldots,I_M)$ is the (least) generic rank of $F^{I_1}\otimes\cdots\otimes F^{I_M}$.[14] It is well known that the foregoing inequality may be strict. For instance, the generic rank of tensors in $\mathbb{R}^{2\times2\times2}$ is two, so that the above bound yields $r_{\max}(2,2,2)\le 4$, while it is known that the maximum rank equals 3.[8]
A rank-$s$ tensor $\mathcal{A}$ is called a border tensor if there exists a sequence of tensors of rank at most $r<s$ whose limit is $\mathcal{A}$. If $r$ is the least value for which such a convergent sequence exists, then it is called the border rank of $\mathcal{A}$. For order-2 tensors, i.e., matrices, rank and border rank always coincide; for tensors of order $\ge 3$, however, they may differ. Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980.[15]
A classic example of a border tensor is the rank-3 tensor $\mathcal{A}=\mathbf{a}_1\otimes\mathbf{a}_1\otimes\mathbf{a}_2+\mathbf{a}_1\otimes\mathbf{a}_2\otimes\mathbf{a}_1+\mathbf{a}_2\otimes\mathbf{a}_1\otimes\mathbf{a}_1$, where $\mathbf{a}_1$ and $\mathbf{a}_2$ are linearly independent vectors.
It can be approximated arbitrarily well by the following sequence of rank-2 tensors $\mathcal{A}_m=m\left(\mathbf{a}_1+\tfrac{1}{m}\mathbf{a}_2\right)\otimes\left(\mathbf{a}_1+\tfrac{1}{m}\mathbf{a}_2\right)\otimes\left(\mathbf{a}_1+\tfrac{1}{m}\mathbf{a}_2\right)-m\,\mathbf{a}_1\otimes\mathbf{a}_1\otimes\mathbf{a}_1$
as $m\to\infty$. Therefore, its border rank is 2, which is strictly less than its rank. When the two vectors are orthogonal, this example is also known as a W state.
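This convergence can be checked numerically. The following sketch (an illustration, not part of the source) builds the rank-3 tensor and its rank-2 approximants with NumPy:

```python
import numpy as np

def outer3(x, y, z):
    # order-3 outer product x ⊗ y ⊗ z
    return np.einsum('i,j,k->ijk', x, y, z)

a1 = np.array([1.0, 0.0])
a2 = np.array([0.0, 1.0])

# the rank-3 border tensor (a W state, since a1 ⊥ a2)
W = outer3(a1, a1, a2) + outer3(a1, a2, a1) + outer3(a2, a1, a1)

def approx(m):
    # rank-2 approximant: m (a1 + a2/m)^⊗3 − m a1^⊗3
    b = a1 + a2 / m
    return m * outer3(b, b, b) - m * outer3(a1, a1, a1)

errors = [float(np.linalg.norm(W - approx(m))) for m in (1, 10, 100, 1000)]
```

The approximation error decays like $O(1/m)$, while the two rank-1 terms individually have norms growing like $m$: the sum converges even though its summands blow up.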
It follows from the definition of a pure tensor that $\mathbf{a}_1\otimes\mathbf{a}_2\otimes\cdots\otimes\mathbf{a}_M=\mathbf{b}_1\otimes\mathbf{b}_2\otimes\cdots\otimes\mathbf{b}_M$ if and only if there exist $\lambda_k$ such that $\lambda_1\lambda_2\cdots\lambda_M=1$ and $\mathbf{a}_m=\lambda_m\mathbf{b}_m$ for all $m$. For this reason, the parameters $\{\mathbf{a}_m\}_{m=1}^M$ of a rank-1 tensor are called identifiable or essentially unique. A rank-$r$ tensor $\mathcal{A}\in F^{I_1}\otimes F^{I_2}\otimes\cdots\otimes F^{I_M}$ is called identifiable if every one of its tensor rank decompositions is the sum of the same set of $r$ distinct rank-1 tensors $\{\mathcal{A}_1,\mathcal{A}_2,\ldots,\mathcal{A}_r\}$. An identifiable rank-$r$ tensor thus has only one essentially unique decomposition $\mathcal{A}=\sum_{i=1}^r \mathcal{A}_i$, and all $r!$ tensor rank decompositions of $\mathcal{A}$ can be obtained by permuting the order of the summands. Observe that in a tensor rank decomposition all the $\mathcal{A}_i$'s are distinct, for otherwise the rank of $\mathcal{A}$ would be at most $r-1$.
Order-2 tensors in $F^{I_1}\otimes F^{I_2}\simeq F^{I_1\times I_2}$, i.e., matrices, are not identifiable for $r>1$. This follows essentially from the observation $\mathcal{A}=\sum_{i=1}^r \mathbf{a}_i\otimes\mathbf{b}_i=AB^T=(AX^{-1})(BX^T)^T=\sum_{i=1}^r \mathbf{c}_i\otimes\mathbf{d}_i$, where $X\in\mathrm{GL}_r(F)$ is an invertible $r\times r$ matrix, $A=[\mathbf{a}_i]_{i=1}^r$, $B=[\mathbf{b}_i]_{i=1}^r$, $AX^{-1}=[\mathbf{c}_i]_{i=1}^r$, and $BX^T=[\mathbf{d}_i]_{i=1}^r$. It can be shown[16] that for every $X\in\mathrm{GL}_r(F)\setminus Z$, where $Z$ is a closed set in the Zariski topology, the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank $r>1$ are generically not identifiable.
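A quick numerical illustration of this non-identifiability (a sketch with arbitrary sizes, not from the source): the same matrix admits two genuinely different sets of rank-1 terms.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2
A = rng.standard_normal((4, r))
B = rng.standard_normal((5, r))
M = A @ B.T                      # sum of r rank-1 terms a_i b_i^T

X = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # any invertible r x r matrix
C = A @ np.linalg.inv(X)         # new factors c_i
D = B @ X.T                      # new factors d_i

# same matrix, but different rank-1 terms
same = bool(np.allclose(M, C @ D.T))
different = not np.allclose(np.outer(A[:, 0], B[:, 0]),
                            np.outer(C[:, 0], D[:, 0]))
```

Since $X^{-1}$ cancels against $X^T{}^T=X$ in the product, the matrix is unchanged while its rank-1 summands are not.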
The situation changes completely for higher-order tensors in $F^{I_1}\otimes F^{I_2}\otimes\cdots\otimes F^{I_M}$ with $M>2$ and all $I_m\ge 2$. For simplicity of notation, assume without loss of generality that the factors are ordered such that $I_1\ge I_2\ge\cdots\ge I_M\ge 2$. Let $S_r\subset F^{I_1}\otimes\cdots\otimes F^{I_M}$ denote the set of tensors of rank bounded by $r$. Then, the following statement was proved to be correct using a computer-assisted proof for all spaces of dimension $\Pi<15000$,[17] and it is conjectured to be valid in general:[17][18][19]
There exists a closed set $Z_r$ in the Zariski topology such that every tensor $\mathcal{A}\in S_r\setminus Z_r$ is identifiable ($S_r$ is called generically identifiable in this case), unless one of the following exceptional cases holds:
In these exceptional cases, the generic (and also minimum) number of complex decompositions is
In summary, the generic tensor of order $M>2$ and rank $r<\frac{\Pi}{\Sigma+1}$ that is not identifiability-unbalanced is expected to be identifiable (modulo the exceptional cases in small spaces).
The rank approximation problem asks for the rank-$r$ decomposition closest (in the usual Euclidean topology) to some rank-$s$ tensor $\mathcal{A}$, where $r<s$. That is, one seeks to solve
where $\|\cdot\|_F$ is the Frobenius norm.
It was shown in a 2008 paper by de Silva and Lim[8] that the above standard approximation problem may be ill-posed. A solution to the aforementioned problem may sometimes not exist because the set over which one optimizes is not closed. As such, a minimizer may not exist, even though an infimum does. In particular, it is known that certain so-called border tensors may be approximated arbitrarily well by a sequence of tensors of rank at most $r$, even though the limit of the sequence converges to a tensor of rank strictly higher than $r$. The rank-3 tensor
can be approximated arbitrarily well by the following sequence of rank-2 tensors
as $n\to\infty$. This example neatly illustrates the general principle that a sequence of rank-$r$ tensors converging to a tensor of strictly higher rank must admit at least two individual rank-1 terms whose norms become unbounded. Stated formally, whenever a sequence
has the property that $\mathcal{A}_n\to\mathcal{A}$ (in the Euclidean topology) as $n\to\infty$, then there should exist at least one pair $1\le i\ne j\le r$ such that
as $n\to\infty$. This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms, and is sometimes called the problem of diverging components. It was shown, in addition, that a random low-rank tensor over the reals may not admit a rank-2 approximation with positive probability, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.
A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant. Other constraints that result in a closed set, and thus a well-posed optimization problem, include imposing positivity or a bounded inner product strictly less than unity between the rank-1 terms appearing in the sought decomposition.
Alternating algorithms:
Direct algorithms:
General optimization algorithms:
General polynomial system solving algorithms:
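Of these families, alternating least squares (ALS) is the most widely used in practice. The following is a minimal NumPy sketch for an order-3 tensor; the helper names and the pinv-based updates are my own simplifications chosen for brevity, not a production implementation:

```python
import numpy as np

def khatri_rao(B, C):
    # column-wise Kronecker (Khatri-Rao) product
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def cp_als(T, r, iters=500, seed=0):
    """Rank-r CP approximation of an order-3 tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    for _ in range(iters):
        # each step is a linear least-squares problem in one factor,
        # obtained by matching a mode-wise unfolding of T
        A = T.reshape(I, -1) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = np.moveaxis(T, 1, 0).reshape(J, -1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = np.moveaxis(T, 2, 0).reshape(K, -1) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def cp_reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On an exactly low-rank random tensor this typically recovers the tensor to high accuracy, although ALS has no global convergence guarantee and can exhibit the diverging-components behavior described above.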
In machine learning, the CP-decomposition is the central ingredient in learning probabilistic latent variable models via the technique of moment matching. For example, consider the multi-view model,[32] which is a probabilistic latent variable model. In this model, the generation of samples is posited as follows: there exists a hidden random variable that is not observed directly, given which there are several conditionally independent random variables known as the different "views" of the hidden variable. For example, assume there are three views $x_1,x_2,x_3$ of a $k$-state categorical hidden variable $h$. Then the empirical third moment of this latent variable model, $E[x_1\otimes x_2\otimes x_3]$, is an order-3 tensor and can be decomposed as $E[x_1\otimes x_2\otimes x_3]=\sum_{i=1}^k \Pr(h=i)\,E[x_1|h=i]\otimes E[x_2|h=i]\otimes E[x_3|h=i]$.
In applications such as topic modeling, this can be interpreted as the co-occurrence of words in a document. Then the coefficients in the decomposition of this empirical moment tensor can be interpreted as the probability of choosing a specific topic, and each column of the factor matrix $E[x|h=i]$ corresponds to the probabilities of words in the vocabulary in the corresponding topic.
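The forward direction (building the moment tensor from model parameters) can be sketched as follows, with synthetic parameters of my own choosing: weights $w_i=\Pr(h=i)$ and matrices $M_v$ whose columns are the conditional means $E[x_v\mid h=i]$.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 3, 4                         # k hidden states, d-dimensional views
w = rng.dirichlet(np.ones(k))       # mixing weights Pr(h = i)
M1 = rng.standard_normal((d, k))    # columns are E[x1 | h = i]
M2 = rng.standard_normal((d, k))
M3 = rng.standard_normal((d, k))

# third moment E[x1 ⊗ x2 ⊗ x3] = sum_i Pr(h=i) M1[:,i] ⊗ M2[:,i] ⊗ M3[:,i]
T = np.einsum('i,ai,bi,ci->abc', w, M1, M2, M3)
```

Recovering $w$ and the $M_v$ from $T$ is precisely the CP-decomposition problem; the identifiability results above are what make the parameters learnable from moments.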
Source: https://en.wikipedia.org/wiki/CP_decomposition
Multilinear algebra is the study of functions with multiple vector-valued arguments, the functions being linear maps with respect to each argument. It involves concepts such as matrices, tensors, multivectors, systems of linear equations, higher-dimensional spaces, determinants, inner and outer products, and dual spaces. It is a mathematical tool used in engineering, machine learning, physics, and mathematics.[1]
While many theoretical concepts and applications involve single vectors, mathematicians such as Hermann Grassmann considered structures involving pairs, triplets, and multivectors that generalize vectors. With multiple combinational possibilities, the space of multivectors expands to $2^n$ dimensions, where $n$ is the dimension of the relevant vector space.[2] The determinant can be formulated abstractly using the structures of multilinear algebra.
Multilinear algebra appears in the study of the mechanical response of materials to stress and strain, involving various moduli of elasticity. The term "tensor" describes elements within the multilinear space due to its added structure. Despite Grassmann's early work in 1844 with his Ausdehnungslehre, which was also republished in 1862, the subject was initially not widely understood, as even ordinary linear algebra posed many challenges at the time.
The concepts of multilinear algebra find applications in certain studies of multivariate calculus and manifolds, particularly concerning the Jacobian matrix. Infinitesimal differentials encountered in single-variable calculus are transformed into differential forms in multivariate calculus, and their manipulation is carried out using exterior algebra.[3]
Following Grassmann, developments in multilinear algebra were made by Victor Schlegel in 1872 with the publication of the first part of his System der Raumlehre,[4] and by Elwin Bruno Christoffel. Notably, significant advancements came through the work of Gregorio Ricci-Curbastro and Tullio Levi-Civita,[5] particularly in the form of the absolute differential calculus within multilinear algebra. Marcel Grossmann and Michele Besso introduced this form to Albert Einstein, and in 1915, Einstein's publication on general relativity, explaining the precession of Mercury's perihelion, established multilinear algebra and tensors as important mathematical tools in physics.
In 1958, Nicolas Bourbaki included a chapter on multilinear algebra titled "Algèbre Multilinéaire" in his series Éléments de mathématique, specifically within the algebra book. The chapter covers topics such as bilinear functions, the tensor product of two modules, and the properties of tensor products.[6]
Multilinear algebra concepts find applications in various areas, including:
Source: https://en.wikipedia.org/wiki/Multilinear_algebra
Multilinear principal component analysis (MPCA) is a multilinear extension of principal component analysis (PCA) that is used to analyze M-way arrays, also informally referred to as "data tensors". M-way arrays may be modeled by linear tensor models, such as CANDECOMP/Parafac, or by multilinear tensor models, such as multilinear principal component analysis (MPCA) or multilinear independent component analysis (MICA).
The origin of MPCA can be traced back to the tensor rank decomposition introduced by Frank Lauren Hitchcock in 1927;[1] to the Tucker decomposition;[2] and to Peter Kroonenberg's "3-mode PCA" work.[3] In 2000, De Lathauwer et al. restated Tucker and Kroonenberg's work in clear and concise numerical computational terms in their SIAM paper entitled "Multilinear Singular Value Decomposition"[4] (HOSVD) and in their paper "On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-order Tensors".[5]
Circa 2001, Vasilescu and Terzopoulos reframed the data analysis, recognition, and synthesis problems as multilinear tensor problems. Tensor factor analysis treats data as the compositional consequence of several causal factors of data formation, and is well suited for multi-modal data tensor analysis. The power of the tensor framework was showcased by analyzing human motion joint angles, facial images, or textures in terms of their causal factors of data formation in the following works: Human Motion Signatures[6] (CVPR 2001, ICPR 2002), face recognition – TensorFaces[7][8] (ECCV 2002, CVPR 2003, etc.), and computer graphics – TensorTextures[9] (Siggraph 2004).
Historically, MPCA has been referred to as "M-mode PCA", a terminology coined by Peter Kroonenberg in 1980.[3] In 2005, Vasilescu and Terzopoulos introduced the Multilinear PCA[10] terminology as a way to better differentiate between linear and multilinear tensor decompositions, as well as to differentiate between the work[6][7][8][9] that computed 2nd-order statistics associated with each data tensor mode (axis) and subsequent work on Multilinear Independent Component Analysis[10] that computed higher-order statistics associated with each tensor mode/axis.
Multilinear PCA may be applied to compute the causal factors of data formation, or as a signal processing tool on data tensors whose individual observations have either been vectorized,[6][7][8][9] or whose observations are treated as collections of column/row observations ("data matrices") concatenated into a data tensor. The main disadvantage of this approach is that, rather than computing all possible combinations, MPCA computes a set of orthonormal matrices associated with each mode of the data tensor, analogous to the orthonormal row and column spaces of a matrix computed by the matrix SVD. This transformation aims to capture as high a variance as possible, accounting for as much of the variability in the data associated with each data tensor mode (axis).
The MPCA solution follows the alternating least squares (ALS) approach; it is iterative in nature.
As in PCA, MPCA works on centered data. Centering is a little more complicated for tensors, and it is problem-dependent.
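For a data tensor whose first mode indexes the observations, one common choice is to subtract the mean observation; a minimal sketch (the sample-mode convention here is an assumption, not prescribed by the source):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, 6))   # N = 100 observations, each an 8 x 6 array

# subtract the mean observation so every entry has zero mean over the sample mode
X_centered = X - X.mean(axis=0, keepdims=True)
```

Other problems call for centering along other modes, or along all modes, which is why the choice is problem-dependent.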
MPCA features: Supervised MPCA is employed in causal factor analysis that facilitates object recognition[11]while a semi-supervised MPCA feature selection is employed in visualization tasks.[12]
Various extensions of MPCA:
Source: https://en.wikipedia.org/wiki/Multilinear_PCA
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example, the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[1]
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an $n$-dimensional space is represented by a one-dimensional array with $n$ components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square $n\times n$ array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor $T$ could be denoted $T_{ij}$, where $i$ and $j$ are indices running from $1$ to $n$, or also by $T^i{}_j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T^i{}_j$ can both be expressed as $n$-by-$n$ matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices ($m$) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an $m$-dimensional array or an $m$-way array. The total number of indices is also called the order, degree, or rank of a tensor,[2][3][4] although the term "rank" generally has another meaning in the context of matrices and tensors.
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors $\hat{\mathbf{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as $\hat{\mathbf{e}}_i=\sum_j \mathbf{e}_j R^j_i=\mathbf{e}_j R^j_i$.
Here $R^j_i$ are the entries of the change-of-basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article.[Note 1] The components $v^i$ of a column vector $\mathbf{v}$ transform with the inverse of the matrix $R$: $\hat{v}^i=\left(R^{-1}\right)^i_j v^j$,
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components $w_i$ of a covector (or row vector) $\mathbf{w}$ transform with the matrix $R$ itself: $\hat{w}_i=w_j R^j_i$.
This is called a covariant transformation law, because the covector components transform by the same matrix as the change-of-basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array $T$ that transforms under a change-of-basis matrix $R=\left(R^j_i\right)$ by $\hat{T}=R^{-1}TR$. For the individual matrix entries, this transformation law has the form $\hat{T}^{i'}_{j'}=\left(R^{-1}\right)^{i'}_i\,T^i_j\,R^j_{j'}$, so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1).
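This basis independence is easy to verify numerically. A small sketch (with an arbitrary random operator and change of basis of my own choosing) checks that $\hat{T}\hat{v}$ equals $Tv$ re-expressed in the new basis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((n, n))                  # matrix of a linear operator, old basis
R = rng.standard_normal((n, n)) + n * np.eye(n)  # change-of-basis matrix (invertible)
R_inv = np.linalg.inv(R)

T_hat = R_inv @ T @ R        # type-(1,1) transformation law
v = rng.standard_normal(n)   # components of a vector in the old basis
v_hat = R_inv @ v            # contravariant transformation of the components

# the operator's action does not depend on the basis:
# T_hat v_hat is exactly (T v) expressed in the new basis
basis_independent = bool(np.allclose(T_hat @ v_hat, R_inv @ (T @ v)))
```

Algebraically, $\hat{T}\hat{v}=R^{-1}TR\,R^{-1}v=R^{-1}(Tv)$: the inverse and the basis matrix cancel, which is exactly the index cancellation described below.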
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices ($j$ into $k$ in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like $v^i\,\mathbf{e}_i$ can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components $(Tv)^i$ are given by $(Tv)^i=T^i_j v^j$. These components transform contravariantly, since
The transformation law for an order $p+q$ tensor with $p$ contravariant indices and $q$ covariant indices is thus given as $\hat{T}^{i'_1,\ldots,i'_p}_{j'_1,\ldots,j'_q}=\left(R^{-1}\right)^{i'_1}_{i_1}\cdots\left(R^{-1}\right)^{i'_p}_{i_p}\,T^{i_1,\ldots,i_p}_{j_1,\ldots,j_q}\,R^{j_1}_{j'_1}\cdots R^{j_q}_{j'_q}.$
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type $(p,q)$. The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), $p+q$ in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type $(p,q)$ is also called a $(p,q)$-tensor for short.
This discussion motivates the following formal definition:[5][6]
Definition. A tensor of type $(p,q)$ is an assignment of a multidimensional array
to each basis $\mathbf{f}=(\mathbf{e}_1,\ldots,\mathbf{e}_n)$ of an $n$-dimensional vector space such that, if we apply the change of basis
then the multidimensional array obeys the transformation law
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[1]
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an $n$-dimensional vector space. If $\mathbf{f}=(\mathbf{f}_1,\dots,\mathbf{f}_n)$ is an ordered basis, and $R=\left(R^i_j\right)$ is an invertible $n\times n$ matrix, then the action is given by
Let $F$ be the set of all ordered bases. Then $F$ is a principal homogeneous space for $\mathrm{GL}(n)$. Let $W$ be a vector space and let $\rho$ be a representation of $\mathrm{GL}(n)$ on $W$ (that is, a group homomorphism $\rho:\mathrm{GL}(n)\to\mathrm{GL}(W)$). Then a tensor of type $\rho$ is an equivariant map $T:F\to W$. Equivariance here means that
When $\rho$ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds,[7] and readily generalizes to other groups.[5]
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space $V$, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold.[8] In this approach, a type $(p,q)$ tensor $T$ is defined as a multilinear map,
where $V^*$ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes $V$ is a vector space over the real numbers, $\mathbb{R}$. More generally, $V$ can be taken over any field $F$ (e.g. the complex numbers), with $F$ replacing $\mathbb{R}$ as the codomain of the multilinear maps.
By applying a multilinear map $T$ of type $(p,q)$ to a basis $\{\mathbf{e}_j\}$ for $V$ and a canonical cobasis $\{\boldsymbol{\varepsilon}^i\}$ for $V^*$,
a $(p+q)$-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because $T$ is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of $T$ thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map $T$. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
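For instance, a bilinear map $T(u,v)$ (a type-(0,2) tensor) has component matrix $g_{ij}=T(\mathbf{e}_i,\mathbf{e}_j)$, and linearity in each slot forces the covariant transformation $\hat{g}=R^{\mathsf T}gR$. A small sketch (with arbitrary random data) verifying that evaluation is basis independent:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
g = rng.standard_normal((n, n))                  # components g_ij = T(e_i, e_j)
R = rng.standard_normal((n, n)) + n * np.eye(n)  # new basis vectors as columns, in the old basis

g_hat = R.T @ g @ R                              # type-(0,2) transformation law

# a vector with new-basis components u_hat has old-basis components R @ u_hat
u_hat = rng.standard_normal(n)
v_hat = rng.standard_normal(n)

# evaluating T in either basis gives the same scalar
consistent = bool(np.isclose(u_hat @ g_hat @ v_hat,
                             (R @ u_hat) @ g @ (R @ v_hat)))
```

The identity $\hat{u}^{\mathsf T}\hat{g}\hat{v}=(R\hat{u})^{\mathsf T}g(R\hat{v})$ holds exactly, which is the multilinear-map statement of the transformation law.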
In viewing a tensor as a multilinear map, it is conventional to identify the double dual $V^{**}$ of the vector space $V$, i.e., the space of linear functionals on the dual vector space $V^*$, with the vector space $V$. There is always a natural linear map from $V$ to its double dual, given by evaluating a linear form in $V^*$ against a vector in $V$. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify $V$ with its double dual.
For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property.
A type $(p,q)$ tensor is defined in this context as an element of the tensor product of vector spaces,[9][10]
A basis $v_i$ of $V$ and basis $w_j$ of $W$ naturally induce a basis $v_i\otimes w_j$ of the tensor product $V\otimes W$. The components of a tensor $T$ are the coefficients of the tensor with respect to the basis obtained from a basis $\{\mathbf{e}_i\}$ for $V$ and its dual basis $\{\boldsymbol{\varepsilon}^j\}$, i.e.
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type $(p,q)$ tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.
This one-to-one correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual:
The last line uses the universal property of the tensor product: that there is a one-to-one correspondence between maps from $\operatorname{Hom}^2\left(U^*\times V^*;\mathbb{F}\right)$ and $\operatorname{Hom}\left(U^*\otimes V^*;\mathbb{F}\right)$.[11]
Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space $V$ and its dual, as above.
This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic.[Note 2] Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves.[12] For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.[13]
In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.[1]
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions,
defining a coordinate transformation,[1]
The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century.[14] The word "tensor" itself was introduced in 1846 by William Rowan Hamilton[15] to describe something different from what is now meant by a tensor.[Note 3] Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense.[16] The contemporary usage was introduced by Woldemar Voigt in 1898.[17]
Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892.[18] It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).[19] In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.[16]
In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann.[20] Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product.[16]
From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem).[22] Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric, and computations more technical and less algorithmic.[23] Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.[24]
An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors, shown in the figure (right). The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally antisymmetric symbol {\displaystyle \varepsilon _{ijk}} nevertheless allows a convenient handling of the cross product in equally oriented three-dimensional coordinate systems.
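As a concrete illustration of this handling, the following NumPy sketch (the array names are ours, chosen for illustration) builds the antisymmetric symbol explicitly and recovers the cross product by summing over its repeated indices:

```python
import numpy as np

# Build the totally antisymmetric symbol eps[i, j, k] in three dimensions.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations of (0, 1, 2)
    eps[i, k, j] = -1.0  # odd permutations

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

# (u x v)_i = eps_{ijk} u_j v_k, summing over the repeated indices j, k.
cross = np.einsum('ijk,j,k->i', eps, u, v)
```

For these two basis vectors the contraction reproduces `np.cross(u, v)`.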
This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.
Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table.
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that makes it possible to define tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of basis, so only certain multidimensional arrays of numbers are tensors. By contrast, the array representing {\displaystyle \varepsilon _{ijk}} is not a tensor, since it changes sign under transformations that reverse the orientation.
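Such invariants can be checked numerically. In the sketch below (random values, chosen only for illustration), the components of a (1, 1)-tensor are transformed to a new basis and the trace and determinant come out unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # components of a (1,1)-tensor in one basis
P = rng.standard_normal((3, 3))   # change-of-basis matrix (invertible here)

# Components of the same tensor in the new basis: A' = P^{-1} A P.
A_new = np.linalg.inv(P) @ A @ P

# Invariants survive the change of basis.
assert np.isclose(np.trace(A), np.trace(A_new))
assert np.isclose(np.linalg.det(A), np.linalg.det(A_new))
```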
Because the components of vectors and their duals transform differently under a change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors: n (contravariant indices) and dual vectors: m (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers.
The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and is therefore a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. This way the tensor representing the scalar product, taking two vectors and resulting in a scalar, has order 2 + 0 = 2, the same as the stress tensor, taking one vector and returning another, 1 + 1 = 2. The {\displaystyle \varepsilon _{ijk}}-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.
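The correspondence between order and array dimensionality is exactly what NumPy's `ndim` attribute reports; a minimal sketch:

```python
import numpy as np

scalar = np.float64(3.5)        # order 0: no indices
vector = np.array([1.0, 2.0])   # order 1: one index
matrix = np.eye(2)              # order 2: two indices (e.g. a linear map)

# The number of indices equals the dimensionality of the array.
assert scalar.ndim == 0
assert vector.ndim == 1
assert matrix.ndim == 2
```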
The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.
There are several notational systems that are used to describe tensors and perform calculations involving them.
Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.
The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.
The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.
A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.
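The Einstein summation convention described above maps directly onto NumPy's `einsum`, whose subscript strings follow the same repeated-index rule; a short sketch with illustrative values:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

# C^i_k = A^i_j B^j_k : the repeated index j is summed implicitly.
C = np.einsum('ij,jk->ik', A, B)
assert np.allclose(C, A @ B)

# v_i v_i : a repeated index within a single term gives the squared norm.
v = np.array([3.0, 4.0])
norm_sq = np.einsum('i,i->', v, v)
```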
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.
The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,
{\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),}
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,
{\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.}
If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
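On components, this pairwise multiplication is an outer product, which NumPy exposes as `tensordot` with `axes=0`; a minimal sketch with made-up values:

```python
import numpy as np

S = np.array([1.0, 2.0])         # components of an order-1 tensor
T = np.array([3.0, 4.0, 5.0])    # components of another order-1 tensor

# (S ⊗ T)_{ij} = S_i T_j : orders add, here 1 + 1 = 2.
ST = np.tensordot(S, T, axes=0)

assert ST.shape == (2, 3)
assert ST[1, 2] == S[1] * T[2]   # components multiply pairwise
```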
Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor {\displaystyle T_{i}^{j}} can be contracted to a scalar through {\displaystyle T_{i}^{i}}, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.
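In components, contracting the upper with the lower index of a (1, 1)-tensor is the same diagonal sum as the matrix trace, which `einsum` expresses directly (values here are illustrative):

```python
import numpy as np

T = np.arange(9.0).reshape(3, 3)   # components T^j_i of a (1,1)-tensor

# Summing over the matched upper/lower index keeps only diagonal components.
contraction = np.einsum('ii->', T)

assert contraction == np.trace(T)  # 0 + 4 + 8 = 12
```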
The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor {\displaystyle T\in V\otimes V\otimes V^{*}} can be written as a linear combination
The contraction of T on the first and last slots is then the vector
In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor {\displaystyle T^{ij}} can be contracted to a scalar through {\displaystyle T^{ij}g_{ij}} (yet again assuming the summation convention).
When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with the lower index generally shown in the same position as the contracted upper index. This operation is quite graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
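Raising and lowering are easy to check in components. The sketch below uses a Minkowski-style diagonal metric purely as an example; lowering an index with g and raising it again with the inverse metric recovers the original components, and fully contracting a (2, 0)-tensor with g gives a scalar:

```python
import numpy as np

# An illustrative nondegenerate metric (Minkowski-like signature).
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_{ij}, a (0,2)-tensor
g_inv = np.linalg.inv(g)             # g^{ij}, the inverse metric

v_up = np.array([2.0, 1.0, 0.0, 0.0])        # contravariant components v^j

v_down = np.einsum('ij,j->i', g, v_up)       # lowering: v_i = g_{ij} v^j
v_back = np.einsum('ij,j->i', g_inv, v_down) # raising recovers v^i

# Full contraction of a (2,0)-tensor with the metric: T^{ij} g_{ij}.
T = np.outer(v_up, v_up)
scalar = np.einsum('ij,ij->', T, g)          # here: -4 + 1 = -3
```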
Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid[28] are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.
If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.
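This linear dependence on the surface orientation is just a matrix–vector product in components. The sketch below uses a hypothetical symmetric stress array (values invented for illustration) and reads off the stress vector for a surface normal:

```python
import numpy as np

# A hypothetical symmetric 3x3 stress tensor (components in Pa).
sigma = np.array([[10.0,  2.0,  0.0],
                  [ 2.0,  5.0,  1.0],
                  [ 0.0,  1.0,  3.0]])

n = np.array([0.0, 0.0, 1.0])   # unit normal of the surface element

# The stress vector (traction) depends linearly on the orientation n.
t = sigma @ n
```

For this normal, the traction is simply the third column of the array.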
Common applications include:
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do, however, capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.
The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
Here {\displaystyle \chi ^{(1)}} is the linear susceptibility, {\displaystyle \chi ^{(2)}} gives the Pockels effect and second harmonic generation, and {\displaystyle \chi ^{(3)}} gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
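In the simplest scalar (one-dimensional) reduction of this expansion, each susceptibility is just a coefficient of the Taylor series. The sketch below uses made-up susceptibility values, not those of any real material:

```python
# Vacuum permittivity in F/m.
EPS0 = 8.854e-12

# Illustrative scalar susceptibilities (invented values).
CHI1, CHI2, CHI3 = 1.0, 0.1, 0.01

def polarization(E):
    """Scalar Taylor expansion P = eps0 (chi1 E + chi2 E^2 + chi3 E^3).

    The linear term dominates for weak fields; the quadratic term feeds
    second-harmonic generation, and the cubic term the Kerr effect.
    """
    return EPS0 * (CHI1 * E + CHI2 * E**2 + CHI3 * E**3)

P = polarization(2.0)   # = EPS0 * (2 + 0.4 + 0.08)
```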
The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.
The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense,[29] and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces.[30] A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.
The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces.[31] Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual.[32] Tensors thus live naturally on Banach manifolds[33] and Fréchet manifolds.
Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:
where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
The numerical value of the density ρ must then also transform by 100−3 m3/cm3 to compensate, so that the numerical value of the mass in kg is still given by the integral of {\displaystyle \rho \,dx\,dy\,dz}. Thus {\displaystyle \rho '=100^{-3}\rho } (in units of kg⋅cm−3).
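The compensation can be checked with elementary arithmetic. The sketch below (an invented 2 m cube of water-like density) verifies that the mass is unchanged when lengths are re-expressed in cm and the density picks up the factor 100⁻³:

```python
# A density of 1000 kg/m^3 over a cube of side 2 m: the mass must not
# depend on the choice of length unit.
rho_m = 1000.0                 # kg per m^3
side_m = 2.0
mass = rho_m * side_m**3       # 8000 kg

# Rescale coordinates to cm: lengths multiply by 100, so the density
# must transform by the compensating factor 100**-3.
rho_cm = rho_m * 100**-3       # kg per cm^3
side_cm = side_m * 100

assert rho_cm * side_cm**3 == mass   # still 8000 kg
```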
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.
A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:[34]
Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor.[35][36] An example of a tensor density is the current density of electromagnetism.
Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation,[37] consisting of an (x, y) ∈ R2 with the transformation law
The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes.[38] Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.[39][40]
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[41] A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[42][43]
Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
|
https://en.wikipedia.org/wiki/Tensor
|
In multilinear algebra, a tensor decomposition is any scheme for expressing a "data tensor" (M-way array) as a sequence of elementary operations acting on other, often simpler tensors.[1][2][3] Many tensor decompositions generalize some matrix decompositions.[4]
Tensors are generalizations of matrices to higher dimensions (or rather to higher orders, i.e. a higher number of dimensions) and can consequently be treated as multidimensional fields.[1][5] The main tensor decompositions are:
This section introduces basic notations and operations that are widely used in the field.
A multi-way graph with K perspectives is a collection of K matrices {\displaystyle X_{1},X_{2},\ldots ,X_{K}} with dimensions I × J (where I, J are the number of nodes). This collection of matrices is naturally represented as a tensor X of size I × J × K. In order to avoid overloading the term “dimension”, we call an I × J × K tensor a three-“mode” tensor, where the “modes” are the number of indices used to index the tensor.
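Forming such a three-mode tensor from K matrix "views" is a single stacking operation in NumPy; the sizes below are chosen only for illustration:

```python
import numpy as np

I, J, K = 4, 5, 3   # nodes x nodes x perspectives (illustrative sizes)
rng = np.random.default_rng(0)
matrices = [rng.random((I, J)) for _ in range(K)]

# Stack the K views along a third mode to form an I x J x K tensor.
X = np.stack(matrices, axis=2)

assert X.shape == (I, J, K)
assert np.array_equal(X[:, :, 1], matrices[1])   # slice k recovers view k
```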
|
https://en.wikipedia.org/wiki/Tensor_decomposition
|
Tensor software is a class of mathematical software designed for manipulation and calculation with tensors.
Maxima[25] is a free open source general purpose computer algebra system which includes several packages for tensor algebra calculations in its core distribution.
It is particularly useful for calculations with abstract tensors, i.e., when one wishes to do calculations without defining all components of the tensor explicitly. It comes with three tensor packages:[26]
|
https://en.wikipedia.org/wiki/Tensor_software
|
In mathematics, Tucker decomposition decomposes a tensor into a set of matrices and one small core tensor. It is named after Ledyard R. Tucker,[1] although it goes back to Hitchcock in 1927.[2] Initially described as a three-mode extension of factor analysis and principal component analysis, it may actually be generalized to higher mode analysis, which is also called higher-order singular value decomposition (HOSVD).
It may be regarded as a more flexible PARAFAC (parallel factor analysis) model. In PARAFAC the core tensor is restricted to be "diagonal".
In practice, Tucker decomposition is used as a modelling tool. For instance, it is used to model three-way (or higher way) data by means of relatively small numbers of components for each of the three or more modes, and the components are linked to each other by a three- (or higher-) way core array. The model parameters are estimated in such a way that, given fixed numbers of components, the modelled data optimally resemble the actual data in the least squares sense. The model gives a summary of the information in the data, in the same way as principal components analysis does for two-way data.
For a 3rd-order tensor {\displaystyle T\in F^{n_{1}\times n_{2}\times n_{3}}}, where {\displaystyle F} is either {\displaystyle \mathbb {R} } or {\displaystyle \mathbb {C} }, the Tucker decomposition can be denoted as follows,
{\displaystyle T={\mathcal {T}}\times _{1}U^{(1)}\times _{2}U^{(2)}\times _{3}U^{(3)}}
where {\displaystyle {\mathcal {T}}\in F^{d_{1}\times d_{2}\times d_{3}}} is the core tensor, a 3rd-order tensor that contains the 1-mode, 2-mode and 3-mode singular values of {\displaystyle T}, which are defined as the Frobenius norm of the 1-mode, 2-mode and 3-mode slices of the tensor {\displaystyle {\mathcal {T}}} respectively. {\displaystyle U^{(1)},U^{(2)},U^{(3)}} are unitary matrices in {\displaystyle F^{d_{1}\times n_{1}},F^{d_{2}\times n_{2}},F^{d_{3}\times n_{3}}} respectively. The k-mode product (k = 1, 2, 3) of {\displaystyle {\mathcal {T}}} by {\displaystyle U^{(k)}} is denoted as {\displaystyle {\mathcal {T}}\times U^{(k)}} with entries as
Altogether, the decomposition may also be written more directly as
Taking {\displaystyle d_{i}=n_{i}} for all {\displaystyle i} is always sufficient to represent {\displaystyle T} exactly, but often {\displaystyle T} can be compressed or efficiently approximated by choosing {\displaystyle d_{i}<n_{i}}. A common choice is {\displaystyle d_{1}=d_{2}=d_{3}=\min(n_{1},n_{2},n_{3})}, which can be effective when the difference in dimension sizes is large.
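One standard way to compute such a decomposition is the higher-order SVD mentioned above: take the left singular vectors of each mode unfolding as the factor matrices, then obtain the core by mode products. The sketch below is a minimal NumPy implementation for a 3rd-order tensor (the helper names `unfold` and `hosvd` are ours, not a library API); with full-size factors the reconstruction is exact:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: mode k becomes the rows, other modes are flattened."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Full HOSVD of a 3rd-order tensor: factor matrices from each unfolding,
    then the core via mode products with their transposes."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(3)]
    core = np.einsum('abc,ai,bj,ck->ijk', T, U[0], U[1], U[2])
    return core, U

T = np.random.default_rng(0).random((4, 5, 6))
core, U = hosvd(T)

# T = core x1 U1 x2 U2 x3 U3; exact here because d_i = n_i.
T_rec = np.einsum('ijk,ai,bj,ck->abc', core, U[0], U[1], U[2])
assert np.allclose(T, T_rec)
```

Truncating the columns of each factor matrix (d_i < n_i) gives the compressed variant discussed above.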
There are two special cases of Tucker decomposition:
Tucker1: if {\displaystyle U^{(2)}} and {\displaystyle U^{(3)}} are identity, then {\displaystyle T={\mathcal {T}}\times _{1}U^{(1)}}
Tucker2: if {\displaystyle U^{(3)}} is identity, then {\displaystyle T={\mathcal {T}}\times _{1}U^{(1)}\times _{2}U^{(2)}}.
RESCAL decomposition[3] can be seen as a special case of Tucker where {\displaystyle U^{(3)}} is identity and {\displaystyle U^{(1)}} is equal to {\displaystyle U^{(2)}}.
|
https://en.wikipedia.org/wiki/Tucker_decomposition
|
Tail risk, sometimes called "fat tail risk", is the financial risk of an asset or portfolio of assets moving more than three standard deviations from its current price, above the risk of a normal distribution. Tail risks include low-probability events arising at both ends of a normal distribution curve, also known as tail events.[1] However, as investors are generally more concerned with unexpected losses rather than gains, a debate about tail risk is focused on the left tail. Prudent asset managers are typically cautious with the tail involving losses which could damage or ruin portfolios, and not the beneficial tail of outsized gains.[2]
The common technique of theorizing a normal distribution of price changes underestimates tail risk when market data exhibit fat tails, thus understating asset prices, stock returns and subsequent risk management strategies.
Tail risk is sometimes defined less strictly: as merely the risk (or probability) of rare events.[3] The arbitrary definition of the tail region as beyond three standard deviations may also be broadened, such as the SKEW index which uses the larger tail region starting at two standard deviations.
Although tail risk cannot be eliminated, its impact can be somewhat mitigated by a robust diversification across assets, strategies, and the use of an asymmetric hedge.
Traditional portfolio strategies rely heavily upon the assumption that market returns follow a normal distribution, characterized by the bell curve, which illustrates that, given enough observations, all values in a sample will be distributed symmetrically with respect to the mean.[1] The empirical rule then states that about 99.7% of all variations following a normal distribution lies within three standard deviations of the mean.[4] Therefore, there is only a 0.3% chance of an extreme event occurring. Many financial models such as Modern Portfolio Theory and Efficient Markets assume normality.
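The 99.7% figure of the empirical rule follows directly from the normal distribution and can be reproduced with the error function; a minimal sketch:

```python
from math import erf, sqrt

def normal_within(k):
    """P(|X - mu| <= k * sigma) for a normally distributed X."""
    return erf(k / sqrt(2))

# Roughly 99.7% of normal draws fall within three standard deviations,
# leaving only about 0.3% in the two tails combined.
p3 = normal_within(3)
assert abs(p3 - 0.9973) < 1e-3
```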
However, financial markets are not perfect as they are largely shaped by unpredictable human behavior, and an abundance of evidence suggests that the distribution of returns is in fact not normal, but skewed. Observed tails are fatter than traditionally predicted, indicating a significantly higher probability that an investment will move beyond three standard deviations.[5] This happens when a rare, unpredictable, and very important event occurs, resulting in significant fluctuations in the value of the stock. Tail risk is then the chance of a loss occurring due to such events. These tail events are often referred to as black swan events and they can produce disastrous effects on the returns of the portfolio in a very short span of time. Fat tails suggest that the likelihood of such events is in fact greater than the one predicted by traditional strategies, which subsequently tend to understate volatility and risk of the asset.
The importance of considering tail risk in portfolio management is not only theoretical. McRandal and Rozanov (2012) observe that in the period from the late 1980s to the early 2010s, there were at least seven episodes that can be viewed as tail events: the equity market crash of 1987, the 1994 bond market crisis, the 1997 Asian financial crisis, the 1998 Russian financial crisis and the Long-Term Capital Management blow-up, the dot-com bubble collapse, the subprime mortgage crisis, and the infamous bankruptcy of Lehman Brothers.[6]
Tail risk is very difficult to measure as tail events happen infrequently and with various impact. The most popular tail risk measures include conditional value-at-risk (CVaR) and value-at-risk (VaR). These measures are used both in finance and insurance industries, which tend to be highly volatile, as well as in highly reliable, safety-critical uncertain environments with heavy-tailed underlying probability distributions.[7]
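In their simplest historical form, both measures are quantile statistics of the loss distribution: VaR is the loss threshold exceeded with probability 1 − α, and CVaR is the average loss beyond that threshold. The sketch below uses simulated returns purely for illustration:

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical VaR and CVaR at confidence alpha, reported as positive losses."""
    losses = -np.asarray(returns)            # losses are negated returns
    var = np.quantile(losses, alpha)         # loss threshold at level alpha
    cvar = losses[losses >= var].mean()      # average loss beyond the threshold
    return var, cvar

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, 10_000)   # simulated daily returns
var, cvar = var_cvar(returns)
assert cvar >= var   # the tail average always exceeds the threshold itself
```

Note that on normal simulated data both measures will understate the risk of genuinely fat-tailed markets, which is precisely the point made above.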
The 2008 financial crisis and the Great Recession, which had a dramatic material impact on investment portfolios, led to a significant increase in awareness of tail risks. Even highly sophisticated institutions such as American university endowments, long-established sovereign wealth funds, and highly experienced public pension plans suffered large double-digit percentage drops in value during the Great Recession. According to McRandal and Rozanov (2012), losses of many broadly diversified, multi-asset class portfolios ranged anywhere from 20% to 30% in the course of just a few months.[6]
If one is to implement an effective tail risk hedging program, one has to begin by carefully defining tail risk, i.e. by identifying elements of a tail event that investors are hedging against. A true tail event should exhibit the following three properties simultaneously with significant magnitude and speed: falling asset prices, increasing risk premia, and increasing correlations between asset classes.[6]
However, these statistical characteristics can be validated only after the event, and so hedging against these events is a rather challenging, though vital, task for providing the stability of a portfolio whose aim is to meet its long-term risk/return objectives.
Active tail risk managers with an appropriate expertise, including practical experience applying macroeconomic forecasting and quantitative modeling techniques across asset markets, are needed to devise effective tail risk hedging strategies in the complex markets. First, possible epicenters of tail events and their repercussions are identified. This is referred to as idea generation. Second, the actual process of constructing the hedge takes place. Finally, an active tail hedge manager guarantees constant effectiveness of the optimal protection by an active trading of positions and risk levels still offering significant convexity. When all these steps are combined, alpha, i.e. an investment strategy’s ability to beat the market,[8] can be generated using several different angles.
As a result, active management minimizes ‘negative carry’ (a condition in which the cost of holding an investment or security exceeds the income earned while holding it)[9] and provides sufficient ongoing security and a truly convex payoff delivered in tail events. Furthermore, it manages to mitigate counterparty risk, which is particularly relevant in case of tail events.
|
https://en.wikipedia.org/wiki/Tail_risk
|
The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term arose from a Latin expression which was based on the presumption that black swans did not exist. The expression was used in its original sense until around 1697, when Dutch mariners saw black swans living in Australia. After this, the term was reinterpreted to mean an unforeseen and consequential event.[1]
The reinterpreted theory was articulated by Nassim Nicholas Taleb, starting in 2001, to explain:
Taleb's "black swan theory" (which differs from the earlier philosophical versions of the problem) refers only to statistically unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences.[2]: xxi More technically, in the scientific monograph "Silent Risk",[3] Taleb mathematically defines the black swan problem as "stemming from the use of degenerate metaprobability".[3]
The phrase "black swan" derives from a Latin expression; its oldest known occurrence is from the 2nd-century Roman poet Juvenal's characterization in his Satire VI of something being "rara avis in terris nigroque simillima cygno" ("a bird as rare upon the earth as a black swan").[4]: 165[5][6] When the phrase was coined, the black swan was presumed by Romans not to exist.[1]
Juvenal's phrase was a common expression in 16th century London as a statement of impossibility.[7]The London expression derives from theOld Worldpresumption that allswansmust be white because all historical records of swans reported that they had white feathers.[8]In that context, ablack swanwas impossible or at least nonexistent.
However, in 1697, Dutch explorers led byWillem de Vlaminghbecame thefirst Europeans to see black swans, inWestern Australia.[1]The term subsequently metamorphosed to connote the idea that a perceived impossibility might later be disproved. Taleb notes that in the 19th century,John Stuart Millused theblack swanlogical fallacy as a new term to identifyfalsification.[9]Black swan events were discussed by Taleb in his 2001 bookFooled By Randomness, which concerned financial events. His 2007 bookThe Black Swanextended the metaphor to events outsidefinancial markets. Taleb regards almost all major scientific discoveries, historical events, and artistic accomplishments as "black swans"—undirected and unpredicted. He gives the rise of the Internet, the personal computer,World War I, thedissolution of the Soviet Union, and theSeptember 11, 2001 attacksas examples of black swan events.[2]: prologue
Taleb asserts:[10]
What we call here a Black Swan (and capitalize it) is an event with the following three attributes.
First, it is anoutlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme 'impact'. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrenceafterthe fact, making it explainable and predictable.
I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.
Based on the author's criteria:
According to Taleb, theCOVID-19 pandemicwas not a black swan, as it was expected with great certainty that a global pandemic would eventually take place.[11][12]Instead, it is considered awhite swan—such an event has a major effect, but is compatible with statistical properties.[11][12]
The practical aim of Taleb's book is not to attempt to predict events which are unpredictable, but to buildrobustnessagainst negative events while still exploiting positive events. Taleb contends that banks and trading firms are very vulnerable to hazardous black swan events and are exposed to unpredictable losses. On the subject of business, andquantitative financein particular, Taleb critiques the widespread use of thenormal distributionmodel employed infinancial engineering, calling it aGreat Intellectual Fraud. Taleb elaborates the robustness concept as a central topic of his later book,Antifragile: Things That Gain From Disorder.
In the second edition ofThe Black Swan, Taleb provides "Ten Principles for a Black-Swan-Robust Society".[2]: 374–78[13]
Taleb states that a black swan event depends on the observer. For example, what may be a Black Swan surprise for a turkey is not a Black Swan surprise to its butcher; hence the objective should be to "avoid being the turkey" by identifying areas of vulnerability to "turn the Black Swans white".[14]
Taleb claims that his black swan is different from the earlier philosophical versions of the problem, specifically inepistemology(as associated withDavid Hume,John Stuart Mill,Karl Popper, and others), as it concerns a phenomenon with specific statistical properties which he calls, "the fourth quadrant".[15]
Taleb's problem is about epistemic limitations in some parts of the areas covered in decision making. These limitations are twofold: philosophical (mathematical) and empirical (human-known) epistemic biases. The philosophical problem is about the decrease in knowledge when it comes to rare events because these are not visible in past samples and therefore require a stronga priori(extrapolating) theory; accordingly, predictions of events depend more and more on theories when their probability is small. In the "fourth quadrant", knowledge is uncertain and consequences are large, requiring more robustness.[citation needed]
According to Taleb, thinkers who came before him who dealt with the notion of the improbable (such as Hume, Mill, and Popper) focused on theproblem of inductionin logic, specifically, that of drawing general conclusions from specific observations.[16]The central and unique attribute of Taleb's black swan event is that it is high-impact. His claim is that almost all consequential events in history come from the unexpected – yet humans later convince themselves that these events are explainable inhindsight.[citation needed]
One problem, labeled theludic fallacyby Taleb, is the belief that the unstructured randomness found in life resembles the structured randomness found in games. This stems from the assumption that the unexpected may be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are presumed to represent samples from anormal distribution. These concerns often are highly relevant in financial markets, where major players sometimes assume normal distributions when usingvalue at riskmodels, although market returns typically havefat taildistributions.[17]
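The underestimation of tail probabilities by normal-distribution models can be illustrated numerically. The sketch below compares the probability of a four-standard-deviation loss under a normal model with the same probability under a fat-tailed Student's t distribution with 3 degrees of freedom (whose CDF has a closed form); the threshold and the choice of 3 degrees of freedom are illustrative assumptions, not values from the article:

```python
import math

def normal_tail(z: float) -> float:
    """P(X < -z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def t3_cdf(x: float) -> float:
    """Closed-form CDF of Student's t with 3 degrees of freedom."""
    s = math.sqrt(3)
    return 0.5 + (1 / math.pi) * (x / (s * (1 + x * x / 3)) + math.atan(x / s))

# Compare P(loss worse than 4 standard deviations) under each model.
# The t(3) distribution has variance 3, so 4 standard deviations is 4*sqrt(3).
p_normal = normal_tail(4.0)            # about 3.2e-5
p_fat = t3_cdf(-4.0 * math.sqrt(3))    # about 3.1e-3, roughly 100x larger
```

A risk model calibrated to the normal curve would treat such a loss as a once-in-decades event, while the fat-tailed model makes it routine.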
Taleb said:[10]
I don't particularly care about the usual. If you want to get an idea of a friend's temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on anordinaryday? Can we understand health without considering wild diseases and epidemics? Indeed the normal is often irrelevant. Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the 'normal,' particularly with 'bell curve' methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud.
More generally, decision theory, which is based on a fixed universe or a model of possible outcomes, ignores and minimizes the effect of events that are "outside the model". For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987), but might not model the breakdown of markets following the September 11, 2001 attacks. Consequently, the New York Stock Exchange and Nasdaq exchange remained closed until September 17, 2001, the most protracted shutdown since the Great Depression.[18] A fixed model considers the "known unknowns", but ignores the "unknown unknowns", made famous by a statement of Donald Rumsfeld.[19] The term "unknown unknowns" appeared in a 1982 New Yorker article on the aerospace industry, which cites the example of metal fatigue, the cause of crashes in Comet airliners in the 1950s.[20]
Deterministic chaotic dynamics reproducing black swan events have been researched in economics.[21] That is in agreement with Taleb's comment that some distributions cannot be used with precision but are more descriptive, such as the fractal, power law, or scalable distributions, and that awareness of these might help to temper expectations.[22] Beyond this, Taleb emphasizes that many events simply are without precedent, undercutting the basis of this type of reasoning altogether.[citation needed]
Taleb also argues for the use ofcounterfactual reasoningwhen considering risk.[10]: p. xvii[23]
|
https://en.wikipedia.org/wiki/Black_swan_theory
|
Ineconomicsandfinance, aTaleb distributionis the statistical profile of an investment which normally provides a payoff of small positive returns, while carrying a small but significant risk of catastrophic losses. The term was coined by journalistMartin Wolfand economistJohn Kayto describe investments with a "high probability of a modest gain and a low probability of huge losses in any period."[1]
The concept is named afterNassim Nicholas Taleb, based on ideas outlined in his bookFooled by Randomness.
According to Taleb in Silent Risk, the term should be called "payoff" to reflect the importance of the payoff function of the underlying probability distribution, rather than the distribution itself.[2] The term refers to an investment returns profile in which there is a high probability of a small gain, and a small probability of a very large loss which more than outweighs the gains. In these situations the expected value is very much less than zero, but this fact is camouflaged by the appearance of low risk and steady returns. It is a combination of kurtosis risk and skewness risk: overall returns are dominated by extreme events (kurtosis), which are to the downside (skew). Distributions of this kind have been studied in economic time series related to business cycles.[3]
A more detailed and formal discussion of bets on small-probability events appears in Taleb's academic essay "Why Did the Crisis of 2008 Happen?" and in his 2004 paper in the Journal of Behavioral Finance, "Why Do We Prefer Asymmetric Payoffs?", in which he writes: "agents risking other people's capital would have the incentive to camouflage the properties by showing a steady income. Intuitively, hedge funds are paid on an annual basis while disasters happen every four or five years, for example. The fund manager does not repay his incentive fee."[4][5]
Pursuing a trading strategy with a Taleb distribution yields a high probability of steady returns for a time, but with a risk of ruin that approaches certainty over time. Some pursue it consciously as a risky trading strategy, while critics argue that others do so either unconsciously, unaware of the hazards ("innocent fraud"), or consciously, particularly in hedge funds.
If done consciously, with one's own capital or openly disclosed to investors, this is a risky strategy, but appeals to some: one will want to exit the trade before the rare event happens. This occurs for instance in aspeculative bubble, where one purchases an asset in the expectation that it will likely go up, but may plummet, and hopes to sell the asset before the bubble bursts.
This has also been referred to as "picking up pennies in front of a steamroller".[6]
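The "pennies in front of a steamroller" profile can be quantified directly. The monthly win/loss figures below are hypothetical assumptions chosen only to show how a very high win rate can coexist with a negative expected value and near-certain eventual blowup:

```python
def expected_return(p_win: float, gain: float, loss: float) -> float:
    """Per-period expected return of a Taleb-distribution strategy."""
    return p_win * gain - (1 - p_win) * loss

def survival_prob(p_win: float, periods: int) -> float:
    """Probability of avoiding the rare blowup for the given number of periods."""
    return p_win ** periods

# Hypothetical strategy: 98% chance of gaining 0.5% in a month,
# 2% chance of losing 30% in a blowup.
ev = expected_return(0.98, 0.005, 0.30)      # negative, despite a 98% win rate
p_survive_10y = survival_prob(0.98, 120)     # under 10%: blowup within 10 years is likely
```

The manager collects fees during the long stretch of steady months, which is exactly the incentive problem described in the following paragraphs.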
John Kay has likened securities trading to bad driving, as both are characterized by Taleb distributions.[7] Drivers can make many small gains in time by taking risks such as overtaking on the inside and tailgating; however, they are then at risk of a very large loss in the form of a serious traffic accident. Kay has described Taleb distributions as the basis of the carry trade and has claimed that, along with mark-to-market accounting and other practices, they constitute part of what John Kenneth Galbraith has called "innocent fraud".[8]
Some critics of thehedge fundindustry claim that the compensation structure generates high fees forinvestment strategiesthat follow a Taleb distribution, creatingmoral hazard.[9]In such a scenario, the fund can claim high asset management and performance fees until they suddenly "blow up", losing the investor significant sums of money and wiping out all the gains to the investor generated in previous periods; however, the fund manager keeps all fees earned prior to the losses being incurred – and ends up enriching himself in the long run because he does not pay for his losses.
Taleb distributions pose several fundamental problems, all possibly leading to risk being overlooked:
More formally, while the risks for aknowndistribution can be calculated, in practice one does not know the distribution: one is operating underuncertainty, in economics calledKnightian uncertainty.
A number of mitigants have been proposed, by Taleb and others.[citation needed]These include:
|
https://en.wikipedia.org/wiki/Taleb_distribution
|
In probability theory and statistics, kurtosis (from Greek: κυρτός, kyrtos or kurtos, meaning "curved, arching") refers to the degree of "tailedness" in the probability distribution of a real-valued random variable. Like skewness, kurtosis provides insight into specific characteristics of a distribution. Various methods exist for quantifying kurtosis in theoretical distributions, and corresponding techniques allow estimation from sample data drawn from a population. Different measures of kurtosis can, however, yield different interpretations.
The standard measure of a distribution's kurtosis, originating withKarl Pearson,[1]is a scaled version of the fourthmomentof the distribution. This number is related to the tails of the distribution, not its peak;[2]hence, the sometimes-seen characterization of kurtosis as "peakedness" is incorrect. For this measure, higher kurtosis corresponds to greater extremity ofdeviations(oroutliers), and not the configuration of data nearthe mean.
Excess kurtosis, typically compared to a value of 0, characterizes the "tailedness" of a distribution. A univariate normal distribution has an excess kurtosis of 0. Negative excess kurtosis indicates a platykurtic distribution, which does not necessarily have a flat top but produces fewer or less extreme outliers than the normal distribution. For instance, the uniform distribution (constant over a bounded interval and zero elsewhere) is platykurtic. On the other hand, positive excess kurtosis signifies a leptokurtic distribution. The Laplace distribution, for example, has tails that decay more slowly than a Gaussian, resulting in more outliers. To simplify comparison with the normal distribution, excess kurtosis is calculated as Pearson's kurtosis minus 3. Some authors and software packages use "kurtosis" to refer specifically to excess kurtosis, but this article distinguishes between the two for clarity.
Alternative measures of kurtosis are the L-kurtosis, which is a scaled version of the fourth L-moment, and measures based on four population or sample quantiles.[3] These are analogous to the alternative measures of skewness that are not based on ordinary moments.[3]
The kurtosis is the fourthstandardized moment, defined asKurt[X]=E[(X−μσ)4]=E[(X−μ)4](E[(X−μ)2])2=μ4σ4,{\displaystyle \operatorname {Kurt} [X]=\operatorname {E} \left[{\left({\frac {X-\mu }{\sigma }}\right)}^{4}\right]={\frac {\operatorname {E} \left[(X-\mu )^{4}\right]}{\left(\operatorname {E} \left[(X-\mu )^{2}\right]\right)^{2}}}={\frac {\mu _{4}}{\sigma ^{4}}},}whereμ4is the fourthcentral momentandσis thestandard deviation. Several letters are used in the literature to denote the kurtosis. A very common choice isκ, which is fine as long as it is clear that it does not refer to acumulant. Other choices includeγ2, to be similar to the notation for skewness, although sometimes this is instead reserved for the excess kurtosis.
The kurtosis is bounded below by the squaredskewnessplus 1:[4]: 432μ4σ4≥(μ3σ3)2+1,{\displaystyle {\frac {\mu _{4}}{\sigma ^{4}}}\geq \left({\frac {\mu _{3}}{\sigma ^{3}}}\right)^{2}+1,}whereμ3is the thirdcentral moment. The lower bound is realized by theBernoulli distribution. There is no upper limit to the kurtosis of a general probability distribution, and it may be infinite.
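Both the definition and the lower bound are easy to check for discrete distributions. The helper below computes moments from (value, probability) pairs; the Bernoulli case is the one the text says realizes the bound:

```python
def central_moment(dist, k):
    """k-th central moment of a discrete distribution given as (value, prob) pairs."""
    mean = sum(x * p for x, p in dist)
    return sum(p * (x - mean) ** k for x, p in dist)

def kurtosis(dist):
    """Pearson kurtosis: fourth central moment over the squared variance."""
    return central_moment(dist, 4) / central_moment(dist, 2) ** 2

def skewness(dist):
    return central_moment(dist, 3) / central_moment(dist, 2) ** 1.5

# A Bernoulli(0.3) distribution: the Bernoulli realizes the lower bound,
# kurtosis == skewness^2 + 1.
bern = [(0, 0.7), (1, 0.3)]
kurt_bern = kurtosis(bern)
bound = skewness(bern) ** 2 + 1

# A symmetric two-point distribution attains the overall minimum kurtosis of 1.
kurt_two_point = kurtosis([(-1, 0.5), (1, 0.5)])
```

The symmetric two-point case has zero skewness, so the bound reduces to the absolute minimum kurtosis of 1.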
A reason why some authors favor the excess kurtosis is that cumulants areextensive. Formulas related to the extensive property are more naturally expressed in terms of the excess kurtosis. For example, letX1, ...,Xnbe independent random variables for which the fourth moment exists, and letYbe the random variable defined by the sum of theXi. The excess kurtosis ofYisKurt[Y]−3=1(∑j=1nσj2)2∑i=1nσi4⋅(Kurt[Xi]−3),{\displaystyle \operatorname {Kurt} [Y]-3={\frac {1}{\left(\sum _{j=1}^{n}\sigma _{j}^{\,2}\right)^{2}}}\sum _{i=1}^{n}\sigma _{i}^{\,4}\cdot \left(\operatorname {Kurt} \left[X_{i}\right]-3\right),}whereσi{\displaystyle \sigma _{i}}is the standard deviation ofXi. In particular if all of theXihave the same variance, then this simplifies toKurt[Y]−3=1n2∑i=1n(Kurt[Xi]−3).{\displaystyle \operatorname {Kurt} [Y]-3={\frac {1}{n^{2}}}\sum _{i=1}^{n}\left(\operatorname {Kurt} \left[X_{i}\right]-3\right).}
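The aggregation formula can be verified exactly on discrete distributions. The sketch below convolves two independent fair ±1 variables (each with kurtosis 1, hence excess kurtosis −2) and checks the equal-variance form of the formula:

```python
from itertools import product

def excess_kurtosis(dist):
    """Excess kurtosis of a discrete distribution given as (value, prob) pairs."""
    mean = sum(x * p for x, p in dist)
    var = sum(p * (x - mean) ** 2 for x, p in dist)
    m4 = sum(p * (x - mean) ** 4 for x, p in dist)
    return m4 / var ** 2 - 3

def convolve(d1, d2):
    """Distribution of the sum of two independent discrete variables."""
    out = {}
    for (x, p), (y, q) in product(d1, d2):
        out[x + y] = out.get(x + y, 0.0) + p * q
    return list(out.items())

coin = [(-1, 0.5), (1, 0.5)]     # excess kurtosis -2
total = convolve(coin, coin)     # values -2, 0, 2 with probs 1/4, 1/2, 1/4

direct = excess_kurtosis(total)
# Equal variances, n = 2: excess kurtosis of the sum is the average over n^2 terms.
formula = (1 / 2 ** 2) * (excess_kurtosis(coin) + excess_kurtosis(coin))
```

Both routes give −1, illustrating how summing independent variables pulls excess kurtosis toward 0, in line with the central limit theorem.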
The reason not to subtract 3 is that the baremomentbetter generalizes tomultivariate distributions, especially when independence is not assumed. Thecokurtosisbetween pairs of variables is an order fourtensor. For a bivariate normal distribution, the cokurtosis tensor has off-diagonal terms that are neither 0 nor 3 in general, so attempting to "correct" for an excess becomes confusing. It is true, however, that the joint cumulants of degree greater than two for anymultivariate normal distributionare zero.
For two random variables,XandY, not necessarily independent, the kurtosis of the sum,X+Y, isKurt[X+Y]=1σX+Y4(σX4Kurt[X]+4σX3σYCokurt[X,X,X,Y]+6σX2σY2Cokurt[X,X,Y,Y]+4σXσY3Cokurt[X,Y,Y,Y]+σY4Kurt[Y]).{\displaystyle {\begin{aligned}\operatorname {Kurt} [X+Y]={\frac {1}{\sigma _{X+Y}^{4}}}{\big (}&\sigma _{X}^{4}\operatorname {Kurt} [X]\\&{}+4\sigma _{X}^{3}\sigma _{Y}\operatorname {Cokurt} [X,X,X,Y]\\[6pt]&{}+6\sigma _{X}^{2}\sigma _{Y}^{2}\operatorname {Cokurt} [X,X,Y,Y]\\[6pt]&{}+4\sigma _{X}\sigma _{Y}^{3}\operatorname {Cokurt} [X,Y,Y,Y]\\[6pt]&{}+\sigma _{Y}^{4}\operatorname {Kurt} [Y]{\big )}.\end{aligned}}}Note that the fourth-powerbinomial coefficients(1, 4, 6, 4, 1) appear in the above equation.
The interpretation of the Pearson measure of kurtosis (or excess kurtosis) was once debated, but it is now well established. As Westfall noted in 2014,[2] "its unambiguous interpretation relates to tail extremity". Specifically, it reflects either the presence of existing outliers (for sample kurtosis) or the tendency to produce outliers (for the kurtosis of a probability distribution). The underlying logic is straightforward: kurtosis is the average (or expected value) of the standardized data raised to the fourth power. Standardized values with magnitude less than 1, corresponding to data within one standard deviation of the mean (where the "peak" occurs), contribute minimally to kurtosis, because raising a number smaller than 1 in magnitude to the fourth power brings it closer to zero. The meaningful contributors to kurtosis are data values outside the peak region, i.e., the outliers. Therefore, kurtosis primarily measures outliers and provides no information about the central "peak".
Numerous misconceptions about kurtosis relate to notions of peakedness. One such misconception is that kurtosis measures both the "peakedness" of a distribution and the heaviness of its tail.[5] Other incorrect interpretations include notions like "lack of shoulders" (where the "shoulder" refers vaguely to the area between the peak and the tail, or more specifically, the region about one standard deviation from the mean) or "bimodality".[6] Balanda and MacGillivray argue that the standard definition of kurtosis "poorly captures the kurtosis, peakedness, or tail weight of a distribution". Instead, they propose a vague definition of kurtosis as the location- and scale-free movement of probability mass from the distribution's shoulders into its center and tails.[5]
In 1986, Moors gave an interpretation of kurtosis.[7]LetZ=X−μσ,{\displaystyle Z={\frac {X-\mu }{\sigma }},}whereXis a random variable,μis the mean andσis the standard deviation.
Now by definition of the kurtosisκ{\displaystyle \kappa }, and by the well-known identityE[V2]=var[V]+E[V]2,{\displaystyle \operatorname {E} \left[V^{2}\right]=\operatorname {var} [V]+\operatorname {E} [V]^{2},}κ=E[Z4]=var[Z2]+E[Z2]2=var[Z2]+var[Z]2=var[Z2]+1.{\displaystyle {\begin{aligned}\kappa &=\operatorname {E} \left[Z^{4}\right]\\&=\operatorname {var} \left[Z^{2}\right]+\operatorname {E} {\!\left[Z^{2}\right]}^{2}\\&=\operatorname {var} \left[Z^{2}\right]+\operatorname {var} [Z]^{2}=\operatorname {var} \left[Z^{2}\right]+1.\end{aligned}}}
The kurtosis can now be seen as a measure of the dispersion ofZ2around its expectation. Alternatively it can be seen to be a measure of the dispersion ofZaround+1and−1.κattains its minimal value in a symmetric two-point distribution. In terms of the original variableX, the kurtosis is a measure of the dispersion ofXaround the two valuesμ±σ.
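Moors's identity κ = var[Z²] + 1 is purely algebraic, so it holds for any data set standardized with the population standard deviation; the data below are arbitrary:

```python
import math

data = [1.0, 2.0, 2.0, 3.0, 7.0]
n = len(data)
mu = sum(data) / n
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)  # population std
z = [(x - mu) / sigma for x in data]

kurt = sum(zi ** 4 for zi in z) / n        # E[Z^4], the Pearson kurtosis

z2 = [zi ** 2 for zi in z]
mean_z2 = sum(z2) / n                      # equals 1 by construction (var[Z] = 1)
var_z2 = sum((v - mean_z2) ** 2 for v in z2) / n

# Moors: kurtosis equals var(Z^2) + 1
```

Because E[Z²] = 1 exactly, the identity E[Z⁴] = var[Z²] + E[Z²]² collapses to κ = var[Z²] + 1.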
High values ofκarise in two circumstances:
Theentropyof a distribution is−∫p(x)lnp(x)dx{\textstyle -\!\int p(x)\ln p(x)\,dx}.
For anyμ∈Rn,Σ∈Rn×n{\displaystyle \mu \in \mathbb {R} ^{n},\Sigma \in \mathbb {R} ^{n\times n}}withΣ{\displaystyle \Sigma }positive definite, among all probability distributions onRn{\displaystyle \mathbb {R} ^{n}}with meanμ{\displaystyle \mu }and covarianceΣ{\displaystyle \Sigma }, the normal distributionN(μ,Σ){\displaystyle {\mathcal {N}}(\mu ,\Sigma )}has the largest entropy.
Since meanμ{\displaystyle \mu }and covarianceΣ{\displaystyle \Sigma }are the first two moments, it is natural to consider extension to higher moments. In fact, byLagrange multipliermethod, for any prescribed first n moments, if there exists some probability distribution of formp(x)∝e∑iaixi+∑ijbijxixj+⋯+∑i1⋯inci1⋯inxi1⋯xin{\displaystyle p(x)\propto e^{\sum _{i}a_{i}x_{i}+\sum _{ij}b_{ij}x_{i}x_{j}+\cdots +\sum _{i_{1}\cdots i_{n}}c_{i_{1}\cdots i_{n}}x_{i_{1}}\cdots x_{i_{n}}}}that has the prescribed moments (if it is feasible), then it is the maximal entropy distribution under the given constraints.[8][9]
By serial expansion,∫12πe−12x2−14gx4x2ndx=12π∫e−12x2−14gx4x2ndx=∑k1k!(−g4)k(2n+4k−1)!!=(2n−1)!!−14g(2n+3)!!+O(g2){\displaystyle {\begin{aligned}&\int {\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}x^{2}-{\frac {1}{4}}gx^{4}}x^{2n}\,dx\\[6pt]&={\frac {1}{\sqrt {2\pi }}}\int e^{-{\frac {1}{2}}x^{2}-{\frac {1}{4}}gx^{4}}x^{2n}\,dx\\[6pt]&=\sum _{k}{\frac {1}{k!}}\left(-{\frac {g}{4}}\right)^{k}(2n+4k-1)!!\\[6pt]&=(2n-1)!!-{\tfrac {1}{4}}g(2n+3)!!+O(g^{2})\end{aligned}}}so if a random variable has probability distributionp(x)=e−12x2−14gx4/Z{\displaystyle p(x)=e^{-{\frac {1}{2}}x^{2}-{\frac {1}{4}}gx^{4}}/Z}, whereZ{\displaystyle Z}is a normalization constant, then its kurtosis is3−6g+O(g2){\displaystyle 3-6g+O(g^{2})}.[10]
Theexcess kurtosisis defined as kurtosis minus 3. There are 3 distinct regimes as described below.
Distributions with zero excess kurtosis are calledmesokurtic, ormesokurtotic. The most prominent example of a mesokurtic distribution is the normal distribution family, regardless of the values of itsparameters. A few other well-known distributions can be mesokurtic, depending on parameter values: for example, thebinomial distributionis mesokurtic forp=1/2±1/12{\textstyle p=1/2\pm {\sqrt {1/12}}}.
A distribution withpositiveexcess kurtosis is calledleptokurtic, orleptokurtotic. "Lepto-" means "slender".[11]A leptokurtic distribution hasfatter tails. Examples of leptokurtic distributions include theStudent's t-distribution,Rayleigh distribution,Laplace distribution,exponential distribution,Poisson distributionand thelogistic distribution. Such distributions are sometimes termedsuper-Gaussian.[12]
A distribution withnegativeexcess kurtosis is calledplatykurtic, orplatykurtotic. "Platy-" means "broad".[13]A platykurtic distribution hasthinner tails. Examples of platykurtic distributions include thecontinuousanddiscrete uniform distributions, and theraised cosine distribution. The most platykurtic distribution of all is theBernoulli distributionwithp= 1/2 (for example the number of times one obtains "heads" when flipping a coin once, acoin toss), for which the excess kurtosis is −2.
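The excess-kurtosis values implied above for the uniform, Laplace, and exponential distributions (−1.2, 3, and 6 respectively) can be recovered by numerically integrating their densities; a crude trapezoidal quadrature is enough here:

```python
import math

def excess_kurtosis(pdf, lo, hi, steps=200_000):
    """Excess kurtosis of a continuous density via trapezoidal quadrature."""
    h = (hi - lo) / steps
    xs = [lo + i * h for i in range(steps + 1)]
    w = [pdf(x) for x in xs]
    w[0] *= 0.5
    w[-1] *= 0.5

    def integral(f):
        return h * sum(wi * f(x) for x, wi in zip(xs, w))

    mean = integral(lambda x: x)
    var = integral(lambda x: (x - mean) ** 2)
    m4 = integral(lambda x: (x - mean) ** 4)
    return m4 / var ** 2 - 3

uniform = excess_kurtosis(lambda x: 1.0, 0.0, 1.0)                     # ~ -1.2
laplace = excess_kurtosis(lambda x: 0.5 * math.exp(-abs(x)), -30, 30)  # ~ 3.0
expo = excess_kurtosis(lambda x: math.exp(-x), 0.0, 50.0)              # ~ 6.0
```

The integration ranges for the Laplace and exponential densities are truncations; their tails beyond the chosen bounds contribute negligibly to the fourth moment.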
The effects of kurtosis are illustrated using aparametric familyof distributions whose kurtosis can be adjusted while their lower-order moments and cumulants remain constant. Consider thePearson type VII family, which is a special case of thePearson type IV familyrestricted to symmetric densities. Theprobability density functionis given byf(x;a,m)=Γ(m)aπΓ(m−1/2)[1+(xa)2]−m,{\displaystyle f(x;a,m)={\frac {\Gamma (m)}{a\,{\sqrt {\pi }}\,\Gamma (m-1/2)}}\left[1+\left({\frac {x}{a}}\right)^{2}\right]^{-m},}whereais ascale parameterandmis ashape parameter.
All densities in this family are symmetric. Thek-th moment exists providedm> (k+ 1)/2. For the kurtosis to exist, we requirem> 5/2. Then the mean andskewnessexist and are both identically zero. Settinga2= 2m− 3makes the variance equal to unity. Then the only free parameter ism, which controls the fourth moment (and cumulant) and hence the kurtosis. One can reparameterize withm=5/2+3/γ2{\textstyle m=5/2+3/\gamma _{2}}, whereγ2{\displaystyle \gamma _{2}}is the excess kurtosis as defined above. This yields a one-parameter leptokurtic family with zero mean, unit variance, zero skewness, and arbitrary non-negative excess kurtosis. The reparameterized density isg(x;γ2)=f(x;a=2+6γ2−1,m=52+3γ2−1).{\displaystyle g(x;\gamma _{2})=f{\left(x;\;a={\sqrt {2+6\gamma _{2}^{-1}}},\;m={\tfrac {5}{2}}+3\gamma _{2}^{-1}\right)}.}
In the limit asγ2→∞{\displaystyle \gamma _{2}\to \infty }one obtains the densityg(x)=3(2+x2)−5/2,{\displaystyle g(x)=3\left(2+x^{2}\right)^{-5/2},}which is shown as the red curve in the images on the right.
In the other direction asγ2→0{\displaystyle \gamma _{2}\to 0}one obtains thestandard normaldensity as the limiting distribution, shown as the black curve.
In the images on the right, the blue curve represents the densityx↦g(x;2){\displaystyle x\mapsto g(x;2)}with excess kurtosis of 2. The top image shows that leptokurtic densities in this family have a higher peak than the mesokurtic normal density, although this conclusion is only valid for this select family of distributions. The comparatively fatter tails of the leptokurtic densities are illustrated in the second image, which plots the natural logarithm of the Pearson type VII densities: the black curve is the logarithm of the standard normal density, which is aparabola. One can see that the normal density allocates little probability mass to the regions far from the mean ("has thin tails"), compared with the blue curve of the leptokurtic Pearson type VII density with excess kurtosis of 2. Between the blue curve and the black are other Pearson type VII densities withγ2= 1, 1/2, 1/4, 1/8, and 1/16. The red curve again shows the upper limit of the Pearson type VII family, withγ2=∞{\displaystyle \gamma _{2}=\infty }(which, strictly speaking, means that the fourth moment does not exist). The red curve decreases the slowest as one moves outward from the origin ("has fat tails").
Several well-known, unimodal, and symmetric distributions from different parametric families are compared here. Each has a mean and skewness of zero. The parameters have been chosen to result in a variance equal to 1 in each case. The images on the right show curves for the following seven densities, on alinear scaleandlogarithmic scale:
Note that in these cases the platykurtic densities have boundedsupport, whereas the densities with positive or zero excess kurtosis are supported on the wholereal line.
One cannot infer that high or low kurtosis distributions have the characteristics indicated by these examples: there exist platykurtic densities with infinite support and leptokurtic densities with finite support; likewise, there exist platykurtic densities with infinite peakedness and leptokurtic densities that appear flat-topped.
For asampleofnvalues, amethod of momentsestimator of the population excess kurtosis can be defined asg2=m4m22−3=1n∑i=1n(xi−x¯)4[1n∑i=1n(xi−x¯)2]2−3{\displaystyle g_{2}={\frac {m_{4}}{m_{2}^{2}}}-3={\frac {{\tfrac {1}{n}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}\right)^{4}}{\left[{\tfrac {1}{n}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}\right)^{2}\right]^{2}}}-3}wherem4is the fourth samplemoment about the mean,m2is the second sample moment about the mean (that is, thesample variance),xiis thei-th value, andx¯{\displaystyle {\overline {x}}}is thesample mean.
This formula has the simpler representation,g2=1n∑i=1nzi4−3{\displaystyle g_{2}={\frac {1}{n}}\sum _{i=1}^{n}z_{i}^{4}-3}where thezi{\displaystyle z_{i}}values are the standardized data values using the standard deviation defined usingnrather thann− 1in the denominator.
For example, suppose the data values are 0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999.
Then thezivalues are −0.239, −0.225, −0.221, −0.234, −0.230, −0.225, −0.239, −0.230, −0.234, −0.225, −0.230, −0.239, −0.230, −0.230, −0.225, −0.230, −0.216, −0.230, −0.225, 4.359
and thezi4values are 0.003, 0.003, 0.002, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.003, 0.002, 0.003, 0.003, 360.976.
The average of these values is 18.05 and the excess kurtosis is thus18.05 − 3 = 15.05. This example makes it clear that data near the "middle" or "peak" of the distribution do not contribute to the kurtosis statistic, hence kurtosis does not measure "peakedness". It is simply a measure of the outlier, 999 in this example.
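The worked example above is easy to reproduce. A minimal implementation of the moment estimator g2 applied to the same data:

```python
def sample_excess_kurtosis(xs):
    """Method-of-moments estimator g2 = m4 / m2^2 - 3 (moments computed with 1/n)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3

data = [0, 3, 4, 1, 2, 3, 0, 2, 1, 3, 2, 0, 2, 2, 3, 2, 5, 2, 3, 999]
g2 = sample_excess_kurtosis(data)       # ~ 15.05, driven almost entirely by 999

# Dropping the outlier changes the picture completely:
g2_no_outlier = sample_excess_kurtosis(data[:-1])
```

With the outlier removed, the estimate falls to roughly zero (slightly negative), confirming that the statistic measures the outlier rather than the "peak".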
Given a sub-set of samples from a population, the sample excess kurtosisg2{\displaystyle g_{2}}above is abiased estimatorof the population excess kurtosis. An alternative estimator of the population excess kurtosis, which is unbiased in random samples of a normal distribution, is defined as follows:[3]G2=k4k22=n2[(n+1)m4−3(n−1)m22](n−1)(n−2)(n−3)(n−1)2n2m22=n−1(n−2)(n−3)[(n+1)m4m22−3(n−1)]=n−1(n−2)(n−3)[(n+1)g2+6]=(n+1)n(n−1)(n−2)(n−3)∑i=1n(xi−x¯)4(∑i=1n(xi−x¯)2)2−3(n−1)2(n−2)(n−3)=(n+1)n(n−1)(n−2)(n−3)∑i=1n(xi−x¯)4k22−3(n−1)2(n−2)(n−3){\displaystyle {\begin{aligned}G_{2}&={\frac {k_{4}}{k_{2}^{2}}}={\frac {n^{2}\,\left[(n+1)\,m_{4}-3\,(n-1)\,m_{2}^{2}\right]}{(n-1)\,(n-2)\,(n-3)}}\;{\frac {(n-1)^{2}}{n^{2}\,m_{2}^{2}}}\\[6pt]&={\frac {n-1}{(n-2)\,(n-3)}}\left[(n+1)\,{\frac {m_{4}}{m_{2}^{2}}}-3\,(n-1)\right]\\[6pt]&={\frac {n-1}{(n-2)\,(n-3)}}\left[(n+1)\,g_{2}+6\right]\\[6pt]&={\frac {(n+1)\,n\,(n-1)}{(n-2)\,(n-3)}}\;{\frac {\sum _{i=1}^{n}\left(x_{i}-{\bar {x}}\right)^{4}}{\left(\sum _{i=1}^{n}\left(x_{i}-{\bar {x}}\right)^{2}\right)^{2}}}-3\,{\frac {(n-1)^{2}}{(n-2)\,(n-3)}}\\[6pt]&={\frac {(n+1)\,n}{(n-1)\,(n-2)\,(n-3)}}\;{\frac {\sum _{i=1}^{n}\left(x_{i}-{\bar {x}}\right)^{4}}{k_{2}^{2}}}-3\,{\frac {(n-1)^{2}}{(n-2)(n-3)}}\end{aligned}}}wherek4is the unique symmetricunbiasedestimator of the fourthcumulant,k2is the unbiased estimate of the second cumulant (identical to the unbiased estimate of the sample variance),m4is the fourth sample moment about the mean,m2is the second sample moment about the mean,xiis thei-th value, andx¯{\displaystyle {\bar {x}}}is the sample mean. This adjusted Fisher–Pearson standardized moment coefficientG2{\displaystyle G_{2}}is the version found inExceland several statistical packages includingMinitab,SAS, andSPSS.[14]
Unfortunately, in nonnormal samplesG2{\displaystyle G_{2}}is itself generally biased.
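Since G2 is an algebraic rescaling of g2, the two estimators can be cross-checked on any sample; the data here are arbitrary:

```python
def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return n, m2, m4

def g2(xs):
    n, m2, m4 = moments(xs)
    return m4 / m2 ** 2 - 3

def G2(xs):
    """Adjusted Fisher-Pearson standardized moment coefficient."""
    n, m2, m4 = moments(xs)
    return (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * m4 / m2 ** 2 - 3 * (n - 1))

data = [2.0, 4.0, 4.0, 5.0, 7.0, 9.0, 12.0]
n = len(data)

# Identity from the derivation above: G2 = (n-1)/((n-2)(n-3)) * ((n+1) g2 + 6)
lhs = G2(data)
rhs = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2(data) + 6)
```

For large n the correction factors approach 1 and G2 converges to g2; the adjustment matters mainly for small samples.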
An upper bound for the sample kurtosis ofn(n> 2) real numbers is[15]g2≤12n−3n−2g12+n2−3.{\displaystyle g_{2}\leq {\frac {1}{2}}{\frac {n-3}{n-2}}g_{1}^{2}+{\frac {n}{2}}-3.}whereg1=m3/m23/2{\displaystyle g_{1}=m_{3}/m_{2}^{3/2}}is the corresponding sample skewness.
The variance of the sample kurtosis of a sample of sizenfrom thenormal distributionis[16]var(g2)=24n(n−1)2(n−3)(n−2)(n+3)(n+5){\displaystyle \operatorname {var} (g_{2})={\frac {24n(n-1)^{2}}{(n-3)(n-2)(n+3)(n+5)}}}
Stated differently, under the assumption that the underlying random variableX{\displaystyle X}is normally distributed, it can be shown thatng2→dN(0,24){\displaystyle {\sqrt {n}}g_{2}\,\xrightarrow {d} \,{\mathcal {N}}(0,24)}.[17]
The sample kurtosis is a useful measure of whether there is a problem with outliers in a data set. Larger kurtosis indicates a more serious outlier problem, and may lead the researcher to choose alternative statistical methods.
D'Agostino's K-squared testis agoodness-of-fitnormality testbased on a combination of the sample skewness and sample kurtosis, as is theJarque–Bera testfor normality.
For non-normal samples, the variance of the sample variance depends on the kurtosis; for details, please seevariance.
Pearson's definition of kurtosis is used as an indicator of intermittency inturbulence.[18]It is also used in magnetic resonance imaging to quantify non-Gaussian diffusion.[19]
A concrete example is the following lemma by He, Zhang, and Zhang:[20]Assume a random variableXhas expectationE[X]=μ{\displaystyle \operatorname {E} [X]=\mu }, varianceE[(X−μ)2]=σ2{\displaystyle \operatorname {E} \left[(X-\mu )^{2}\right]=\sigma ^{2}}and kurtosisκ=1σ4E[(X−μ)4].{\textstyle \kappa ={\tfrac {1}{\sigma ^{4}}}\operatorname {E} \left[(X-\mu )^{4}\right].}Assume we samplen=23+33κlog1δ{\displaystyle n={\tfrac {2{\sqrt {3}}+3}{3}}\kappa \log {\tfrac {1}{\delta }}}many independent copies. ThenPr[maxi=1nXi≤μ]≤δandPr[mini=1nXi≥μ]≤δ.{\displaystyle \Pr \left[\max _{i=1}^{n}X_{i}\leq \mu \right]\leq \delta \quad {\text{and}}\quad \Pr \left[\min _{i=1}^{n}X_{i}\geq \mu \right]\leq \delta .}
This shows that withΘ(κlog1δ){\displaystyle \Theta (\kappa \log {\tfrac {1}{\delta }})}many samples, we will see one that is above the expectation with probability at least1−δ{\displaystyle 1-\delta }.
In other words: if the kurtosis is large, we might see many values either all below or all above the mean.
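The sample-size bound in the lemma is easy to evaluate; the following sketch (function name ours) computes the smallest integer n satisfying it:

```python
import math

def samples_to_beat_the_mean(kappa, delta):
    """Sample size n = ((2*sqrt(3)+3)/3) * kappa * log(1/delta) from the
    He-Zhang-Zhang lemma: with n independent copies, the maximum exceeds
    the mean (and the minimum falls below it) with probability >= 1 - delta."""
    return math.ceil((2 * math.sqrt(3) + 3) / 3 * kappa * math.log(1 / delta))
```

For example, a normal variable has kurtosis κ = 3, so δ = 0.01 requires n = 30 samples.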
Whenband-pass filtersare applied todigital images, kurtosis values tend to be uniform, independent of the range of the filter. This behavior, termedkurtosis convergence, can be used to detect image splicing inforensic analysis.[21]
Kurtosis can be used ingeophysicsto distinguish different types ofseismic signals. It is particularly effective in differentiating seismic signals generated by human footsteps from other signals.[22]This is useful in security and surveillance systems that rely on seismic detection.
Inmeteorology, kurtosis is used to analyze weather data distributions. It helps predict extreme weather events by assessing the probability of outlier values in historical data,[23]which is valuable for long-term climate studies and short-term weather forecasting.
A different measure of "kurtosis" is provided by usingL-momentsinstead of the ordinary moments.[24][25]
|
https://en.wikipedia.org/wiki/Leptokurtic_distribution
|
Inprobability theoryandstatistics, thegeneralized extreme value(GEV)distribution[2]is a family of continuousprobability distributionsdeveloped withinextreme value theoryto combine theGumbel,FréchetandWeibullfamilies also known as type I, II and III extreme value distributions. By theextreme value theoremthe GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables.[3]Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.
In some fields of application the generalized extreme value distribution is known as theFisher–Tippett distribution, named afterR.A. FisherandL.H.C. Tippettwho recognised three different forms outlined below. However, usage of this name is sometimes restricted to mean the special case of theGumbel distribution. The origin of the common functional form for all three distributions dates back to at leastJenkinson (1955),[4]though allegedly[3]it could also have been given byvon Mises (1936).[5]
Using the standardized variables=x−μσ{\displaystyle s={\tfrac {x-\mu }{\sigma }}}, whereμ{\displaystyle \mu }, the location parameter, can be any real number, andσ>0{\displaystyle \sigma >0}is the scale parameter; the cumulative distribution function of the GEV distribution is then
whereξ{\displaystyle \xi }, the shape parameter, can be any real number. Thus, forξ>0{\displaystyle \xi >0}, the expression is valid fors>−1ξ{\displaystyle s>-{\tfrac {1}{\xi }}}, while forξ<0{\displaystyle \xi <0}it is valid fors<−1ξ{\displaystyle s<-{\tfrac {1}{\xi }}}. In the first case,−1ξ{\displaystyle -{\tfrac {1}{\xi }}}is the negative, lower end-point, whereF{\displaystyle F}is0; in the second case,−1ξ{\displaystyle -{\tfrac {1}{\xi }}}is the positive, upper end-point, whereF{\displaystyle F}is 1. Forξ=0{\displaystyle \xi =0}, the second expression is formally undefined and is replaced with the first expression, which is the result of taking the limit of the second, asξ→0{\displaystyle \xi \to 0}in which cases{\displaystyle s}can be any real number.
In the special case ofx=μ{\displaystyle x=\mu }, we haves=0{\displaystyle s=0}, soF(0;ξ)=e−1≈0.368{\displaystyle F(0;\xi )=\mathrm {e} ^{-1}\approx 0.368}regardless of the values ofξ{\displaystyle \xi }andσ{\displaystyle \sigma }.
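A minimal sketch of this cdf (Python; the ξ = 0 branch is the Gumbel limit, and the function name is ours) makes the special case above easy to verify:

```python
import math

def gev_cdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """CDF of the GEV distribution; xi = 0 uses the Gumbel limit."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))          # limit as xi -> 0
    t = 1.0 + xi * s
    if t <= 0.0:
        # outside the support: below the lower end-point (xi > 0) F = 0,
        # above the upper end-point (xi < 0) F = 1
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))
```

At x = μ the standardized variable s is 0, so the cdf returns e⁻¹ ≈ 0.368 for every ξ and σ, as stated above.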
The probability density function of the standardized distribution is
again valid fors>−1ξ{\displaystyle s>-{\tfrac {1}{\xi }}}in the caseξ>0{\displaystyle \xi >0}, and fors<−1ξ{\displaystyle s<-{\tfrac {1}{\xi }}}in the caseξ<0{\displaystyle \xi <0}. The density is zero outside of the relevant range. In the caseξ=0{\displaystyle \xi =0}, the density is positive on the whole real line.
Since the cumulative distribution function is invertible, the quantile function for the GEV distribution has an explicit expression, namely
and therefore the quantile density functionq=dQdp{\displaystyle q={\tfrac {\mathrm {d} Q}{\mathrm {d} p}}}is
valid forσ>0{\displaystyle \sigma >0}and for any realξ{\displaystyle \xi }.
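The explicit quantile function can be sketched directly (Python; ξ = 0 again handled as the Gumbel limit, function name ours):

```python
import math

def gev_quantile(p, mu=0.0, sigma=1.0, xi=0.0):
    """Inverse cdf Q(p) of the GEV distribution, for 0 < p < 1."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie in (0, 1)")
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))           # Gumbel case
    return mu + sigma / xi * ((-math.log(p)) ** (-xi) - 1.0)
```

Since F(μ) = e⁻¹ regardless of ξ, evaluating Q at p = e⁻¹ recovers the location parameter μ for any shape.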
Usinggk≡Γ(1−kξ){\displaystyle \ g_{k}\equiv \Gamma (1-k\ \xi )~}fork∈{1,2,3,4},{\displaystyle ~k\in \{\ 1,2,3,4\ \}\ ,}whereΓ(⋅){\displaystyle \ \Gamma (\cdot )\ }is thegamma function, some simple statistics of the distribution are given by:[citation needed]
Theskewnessis
The excesskurtosisis:
The shape parameterξ{\displaystyle \ \xi \ }governs the tail behavior of the distribution. The sub-families are defined by three cases:ξ=0,{\displaystyle \ \xi =0\ ,}ξ>0,{\displaystyle \ \xi >0\ ,}andξ<0;{\displaystyle \ \xi <0\ ;}these correspond, respectively, to theGumbel,Fréchet, andWeibullfamilies, whose cumulative distribution functions are displayed below.
The subsections below remark on properties of these distributions.
The theory here relates to data maxima and the distribution being discussed is an extreme value distribution for maxima. A generalised extreme value distribution for data minima can be obtained, for example by substituting−x{\displaystyle \ -x\;}forx{\displaystyle \;x\;}in the distribution function, and subtracting the cumulative distribution from one: That is, replaceF(x){\displaystyle \ F(x)\ }with1−F(−x){\displaystyle \ 1-F(-x)\ }.Doing so yields yet another family of distributions.
The ordinary Weibull distribution arises in reliability applications and is obtained from the distribution here by using the variablet=μ−x,{\displaystyle \ t=\mu -x\ ,}which gives a strictly positive support, in contrast to the use in the formulation of extreme value theory here. This arises because the ordinary Weibull distribution is used for cases that deal with dataminimarather than data maxima. The distribution here has an additional parameter compared to the usual form of the Weibull distribution and, in addition, is reversed so that the distribution has an upper bound rather than a lower bound. Importantly, in applications of the GEV, the upper bound is unknown and so must be estimated, whereas when applying the ordinary Weibull distribution in reliability applications the lower bound is usually known to be zero.
Note the differences in the ranges of interest for the three extreme value distributions:Gumbelis unlimited,Fréchethas a lower limit, while the reversedWeibullhas an upper limit.
More precisely,univariate extreme value theorydescribes which of the three is the limiting law according to the initial lawXand in particular depending on the original distribution's tail.
One can link the type I to types II and III in the following way: If the cumulative distribution function of some random variableX{\displaystyle \ X\ }is of type II, and with the positive numbers as support, i.e.F(x;0,σ,α),{\displaystyle \ F(\ x;\ 0,\ \sigma ,\ \alpha \ )\ ,}then the cumulative distribution function oflnX{\displaystyle \ln X}is of type I, namelyF(x;lnσ,1α,0).{\displaystyle \ F(\ x;\ \ln \sigma ,\ {\tfrac {1}{\ \alpha \ }},\ 0\ )~.}Similarly, if the cumulative distribution function ofX{\displaystyle \ X\ }is of type III, and with the negative numbers as support, i.e.F(x;0,σ,−α),{\displaystyle \ F(\ x;\ 0,\ \sigma ,\ -\alpha \ )\ ,}then the cumulative distribution function ofln(−X){\displaystyle \ \ln(-X)\ }is of type I, namelyF(x;−lnσ,1α,0).{\displaystyle \ F(\ x;\ -\ln \sigma ,\ {\tfrac {\ 1\ }{\alpha }},\ 0\ )~.}
Multinomial logitmodels, and certain other types oflogistic regression, can be phrased aslatent variablemodels witherror variablesdistributed asGumbel distributions(type I generalized extreme value distributions). This phrasing is common in the theory ofdiscrete choicemodels, which includelogit models,probit models, and various extensions of them, and derives from the fact that the difference of two type-I GEV-distributed variables follows alogistic distribution, of which thelogit functionis thequantile function. The type-I GEV distribution thus plays the same role in these logit models as thenormal distributiondoes in the corresponding probit models.
Thecumulative distribution functionof the generalized extreme value distribution solves thestability postulateequation.[citation needed]The generalized extreme value distribution is a special case of a max-stable distribution, and is a transformation of a min-stable distribution.
Let{Xi|1≤i≤n}{\displaystyle \ \left\{\ X_{i}\ {\big |}\ 1\leq i\leq n\ \right\}\ }bei.i.d.normally distributedrandom variables with mean0and variance1.
TheFisher–Tippett–Gnedenko theorem[12]tells us thatmax{Xi|1≤i≤n}∼GEV(μn,σn,0),{\displaystyle \ \max\{\ X_{i}\ {\big |}\ 1\leq i\leq n\ \}\sim GEV(\mu _{n},\sigma _{n},0)\ ,}where
μn=Φ−1(1−1n)σn=Φ−1(1−1ne)−Φ−1(1−1n).{\displaystyle {\begin{aligned}\mu _{n}&=\Phi ^{-1}\left(1-{\frac {\ 1\ }{n}}\right)\\\sigma _{n}&=\Phi ^{-1}\left(1-{\frac {1}{\ n\ \mathrm {e} \ }}\right)-\Phi ^{-1}\left(1-{\frac {\ 1\ }{n}}\right)~.\end{aligned}}}
This allows us to estimate, for example, the mean ofmax{Xi|1≤i≤n}{\displaystyle \ \max\{\ X_{i}\ {\big |}\ 1\leq i\leq n\ \}\ }from the mean of the GEV distribution:
E{max{Xi|1≤i≤n}}≈μn+γEσn=(1−γE)Φ−1(1−1n)+γEΦ−1(1−1en)=log(n22πlog(n22π))⋅(1+γlogn+o(1logn)),{\displaystyle {\begin{aligned}\operatorname {\mathbb {E} } \left\{\ \max \left\{\ X_{i}\ {\big |}\ 1\leq i\leq n\ \right\}\ \right\}&\approx \mu _{n}+\gamma _{\mathsf {E}}\ \sigma _{n}\\&=(1-\gamma _{\mathsf {E}})\ \Phi ^{-1}\left(1-{\frac {\ 1\ }{n}}\right)+\gamma _{\mathsf {E}}\ \Phi ^{-1}\left(1-{\frac {1}{\ e\ n\ }}\right)\\&={\sqrt {\log \left({\frac {n^{2}}{\ 2\pi \ \log \left({\frac {n^{2}}{2\pi }}\right)\ }}\right)~}}\ \cdot \ \left(1+{\frac {\gamma }{\ \log n\ }}+{\mathcal {o}}\left({\frac {1}{\ \log n\ }}\right)\right)\ ,\end{aligned}}}
whereγE{\displaystyle \ \gamma _{\mathsf {E}}\ }is theEuler–Mascheroni constant.
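The approximation E[max] ≈ μₙ + γ_E σₙ is straightforward to evaluate numerically; the sketch below (function name ours) uses `statistics.NormalDist` for Φ⁻¹:

```python
import math
from statistics import NormalDist

def expected_max_of_normals(n):
    """Approximate E[max of n iid N(0,1)] as mu_n + gamma_E * sigma_n,
    with mu_n and sigma_n as defined above and gamma_E the
    Euler-Mascheroni constant."""
    gamma_e = 0.5772156649015329
    inv = NormalDist().inv_cdf          # Phi^{-1}
    mu_n = inv(1.0 - 1.0 / n)
    sigma_n = inv(1.0 - 1.0 / (n * math.e)) - mu_n
    return mu_n + gamma_e * sigma_n
```

For n = 1000 this gives roughly 3.25, consistent with the √(log …) asymptotic expansion above.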
4. LetX∼Weibull(σ,μ),{\displaystyle \ X\sim {\textrm {Weibull}}(\sigma ,\,\mu )\ ,}then the cumulative distribution ofg(X)=μ(1−σlogXσ){\displaystyle \ g(X)=\mu \left(1-\sigma \log {\frac {X}{\sigma }}\right)\ }is:
5. LetX∼Exponential(1),{\displaystyle \ X\sim {\textrm {Exponential}}(1)\ ,}then the cumulative distribution ofg(X)=μ−σlogX{\displaystyle \ g(X)=\mu -\sigma \log X\ }is:
|
https://en.wikipedia.org/wiki/Generalized_extreme_value_distribution
|
Instatistics, thegeneralized Pareto distribution(GPD) is a family of continuousprobability distributions. It is often used to model the tails of another distribution. It is specified by three parameters: locationμ{\displaystyle \mu }, scaleσ{\displaystyle \sigma }, and shapeξ{\displaystyle \xi }.[2][3]Sometimes it is specified by only scale and shape[4]and sometimes only by its shape parameter. Some references give the shape parameter asκ=−ξ{\displaystyle \kappa =-\xi \,}.[5]
The standard cumulative distribution function (cdf) of the GPD is defined by[6]
where the support isz≥0{\displaystyle z\geq 0}forξ≥0{\displaystyle \xi \geq 0}and0≤z≤−1/ξ{\displaystyle 0\leq z\leq -1/\xi }forξ<0{\displaystyle \xi <0}. The corresponding probability density function (pdf) is
The related location-scale family of distributions is obtained by replacing the argumentzbyx−μσ{\displaystyle {\frac {x-\mu }{\sigma }}}and adjusting the support accordingly.
Thecumulative distribution functionofX∼GPD(μ,σ,ξ){\displaystyle X\sim GPD(\mu ,\sigma ,\xi )}(μ∈R{\displaystyle \mu \in \mathbb {R} },σ>0{\displaystyle \sigma >0}, andξ∈R{\displaystyle \xi \in \mathbb {R} }) is
where the support ofX{\displaystyle X}isx⩾μ{\displaystyle x\geqslant \mu }whenξ⩾0{\displaystyle \xi \geqslant 0\,}, andμ⩽x⩽μ−σ/ξ{\displaystyle \mu \leqslant x\leqslant \mu -\sigma /\xi }whenξ<0{\displaystyle \xi <0}.
Theprobability density function(pdf) ofX∼GPD(μ,σ,ξ){\displaystyle X\sim GPD(\mu ,\sigma ,\xi )}is
again, forx⩾μ{\displaystyle x\geqslant \mu }whenξ⩾0{\displaystyle \xi \geqslant 0}, andμ⩽x⩽μ−σ/ξ{\displaystyle \mu \leqslant x\leqslant \mu -\sigma /\xi }whenξ<0{\displaystyle \xi <0}.
The pdf is a solution of the followingdifferential equation:[citation needed]
IfUisuniformly distributedon
(0, 1], then
and
Both formulas are obtained by inversion of the cdf.
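The inversion can be sketched as a sampler (Python; function name ours). Inverting F(z) = 1 − (1 + ξz)^(−1/ξ) with U uniform on (0, 1] gives X = μ + σ(U^(−ξ) − 1)/ξ, with the exponential limit X = μ − σ log U at ξ = 0:

```python
import math
import random

def gpd_sample(mu=0.0, sigma=1.0, xi=0.0, rng=random):
    """One GPD(mu, sigma, xi) draw by inverting the cdf."""
    u = 1.0 - rng.random()              # uniform on (0, 1]
    if xi == 0.0:
        return mu - sigma * math.log(u)  # exponential limit
    return mu + sigma * (u ** (-xi) - 1.0) / xi
```

Draws respect the stated support: unbounded above for ξ ≥ 0, and confined to [μ, μ − σ/ξ] for ξ < 0.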
In the MATLAB Statistics Toolbox, the "gprnd" function generates generalized Pareto random numbers.
A GPD random variable can also be expressed as an exponential random variable, with a Gamma distributed rate parameter.
and
then
Notice, however, that since the parameters for the Gamma distribution must be greater than zero, we obtain the additional restriction thatξ{\displaystyle \ \xi \ }must be positive.
In addition to this mixture (or compound) expression, the generalized Pareto distribution can also be expressed as a simple ratio. Concretely, forY∼Exponential(1){\displaystyle \ Y\sim \operatorname {\mathsf {Exponential}} (\ 1\ )\ }andZ∼Gamma(1/ξ,1),{\displaystyle \ Z\sim \operatorname {\mathsf {Gamma}} (1/\xi ,\ 1)\ ,}we haveμ+σYξZ∼GPD(μ,σ,ξ).{\displaystyle \ \mu +{\frac {\ \sigma \ Y\ }{\ \xi \ Z\ }}\sim \operatorname {\mathsf {GPD}} (\mu ,\ \sigma ,\ \xi )~.}This is a consequence of the mixture after settingβ=α{\displaystyle \ \beta =\alpha \ }and taking into account that the rate parameters of the exponential and gamma distribution are simply inverse multiplicative constants.
IfX∼GPD{\displaystyle X\sim GPD}({\displaystyle (}μ=0{\displaystyle \mu =0},σ{\displaystyle \sigma },ξ{\displaystyle \xi }){\displaystyle )}, thenY=log(X){\displaystyle Y=\log(X)}is distributed according to theexponentiated generalized Pareto distribution, denoted byY{\displaystyle Y}∼{\displaystyle \sim }exGPD{\displaystyle exGPD}({\displaystyle (}σ{\displaystyle \sigma },ξ{\displaystyle \xi }){\displaystyle )}.
Theprobability density function(pdf) ofY{\displaystyle Y}∼{\displaystyle \sim }exGPD{\displaystyle exGPD}({\displaystyle (}σ{\displaystyle \sigma },ξ{\displaystyle \xi })(σ>0){\displaystyle )\,\,(\sigma >0)}is
where the support is−∞<y<∞{\displaystyle -\infty <y<\infty }forξ≥0{\displaystyle \xi \geq 0}, and−∞<y≤log(−σ/ξ){\displaystyle -\infty <y\leq \log(-\sigma /\xi )}forξ<0{\displaystyle \xi <0}.
For allξ{\displaystyle \xi },logσ{\displaystyle \log \sigma }acts as the location parameter. See the right panel for the pdf when the shapeξ{\displaystyle \xi }is positive.
TheexGPDhas finite moments of all orders for allσ>0{\displaystyle \sigma >0}and−∞<ξ<∞{\displaystyle -\infty <\xi <\infty }.
Themoment-generating functionofY∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )}is
whereB(a,b){\displaystyle B(a,b)}andΓ(a){\displaystyle \Gamma (a)}denote thebeta functionandgamma function, respectively.
Theexpected valueofY{\displaystyle Y}∼{\displaystyle \sim }exGPD{\displaystyle exGPD}({\displaystyle (}σ{\displaystyle \sigma },ξ{\displaystyle \xi }){\displaystyle )}depends on the scaleσ{\displaystyle \sigma }and shapeξ{\displaystyle \xi }parameters, withξ{\displaystyle \xi }entering through thedigamma function:
Note that for a fixed value ofξ∈(−∞,∞){\displaystyle \xi \in (-\infty ,\infty )},logσ{\displaystyle \log \ \sigma }acts as the location parameter under the exponentiated generalized Pareto distribution.
ThevarianceofY{\displaystyle Y}∼{\displaystyle \sim }exGPD{\displaystyle exGPD}({\displaystyle (}σ{\displaystyle \sigma },ξ{\displaystyle \xi }){\displaystyle )}depends on the shape parameterξ{\displaystyle \xi }only through thepolygamma functionof order 1 (also called thetrigamma function):
See the right panel for the variance as a function ofξ{\displaystyle \xi }. Note thatψ′(1)=π2/6≈1.644934{\displaystyle \psi '(1)=\pi ^{2}/6\approx 1.644934}.
Note that the roles of the scale parameterσ{\displaystyle \sigma }and the shape parameterξ{\displaystyle \xi }underY∼exGPD(σ,ξ){\displaystyle Y\sim exGPD(\sigma ,\xi )}are separately interpretable, which may lead to more robust and efficient estimation ofξ{\displaystyle \xi }than usingX∼GPD(σ,ξ){\displaystyle X\sim GPD(\sigma ,\xi )}[2]. The roles of the two parameters are associated with each other underX∼GPD(μ=0,σ,ξ){\displaystyle X\sim GPD(\mu =0,\sigma ,\xi )}(at least up to the second central moment); see the formula of varianceVar(X){\displaystyle Var(X)}, in which both parameters appear.
Assume thatX1:n=(X1,⋯,Xn){\displaystyle X_{1:n}=(X_{1},\cdots ,X_{n})}aren{\displaystyle n}observations (need not be i.i.d.) from an unknownheavy-tailed distributionF{\displaystyle F}such that its tail distribution is regularly varying with the tail-index1/ξ{\displaystyle 1/\xi }(hence, the corresponding shape parameter isξ{\displaystyle \xi }). To be specific, the tail distribution is described as
It is of particular interest inextreme value theoryto estimate the shape parameterξ{\displaystyle \xi }, especially whenξ{\displaystyle \xi }is positive (the so-called heavy-tailed case).
LetFu{\displaystyle F_{u}}be their conditional excess distribution function.Pickands–Balkema–de Haan theorem(Pickands, 1975; Balkema and de Haan, 1974) states that for a large class of underlying distribution functionsF{\displaystyle F}, and largeu{\displaystyle u},Fu{\displaystyle F_{u}}is well approximated by the generalized Pareto distribution (GPD), which motivated Peak Over Threshold (POT) methods to estimateξ{\displaystyle \xi }:the GPD plays the key role in POT approach.
A renowned estimator based on the POT methodology is theHill estimator, formulated as follows. For1≤i≤n{\displaystyle 1\leq i\leq n}, writeX(i){\displaystyle X_{(i)}}for thei{\displaystyle i}-th largest value ofX1,⋯,Xn{\displaystyle X_{1},\cdots ,X_{n}}. Then, with this notation, theHill estimator(see page 190 of Reference 5 by Embrechts et al.[3]) based on thek{\displaystyle k}upper order statistics is defined as
In practice, the Hill estimator is used as follows. First, calculate the estimatorξ^kHill{\displaystyle {\widehat {\xi }}_{k}^{\text{Hill}}}at each integerk∈{2,⋯,n}{\displaystyle k\in \{2,\cdots ,n\}}, and then plot the ordered pairs{(k,ξ^kHill)}k=2n{\displaystyle \{(k,{\widehat {\xi }}_{k}^{\text{Hill}})\}_{k=2}^{n}}. Then, select those Hill estimators from{ξ^kHill}k=2n{\displaystyle \{{\widehat {\xi }}_{k}^{\text{Hill}}\}_{k=2}^{n}}that are roughly constant with respect tok{\displaystyle k}: these stable values are regarded as reasonable estimates for the shape parameterξ{\displaystyle \xi }. IfX1,⋯,Xn{\displaystyle X_{1},\cdots ,X_{n}}are i.i.d., then the Hill estimator is a consistent estimator for the shape parameterξ{\displaystyle \xi }[4].
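A sketch of the estimator follows (Python). Conventions for the Hill estimator vary slightly between references; this version assumes the common form ξ̂ = (1/k) Σᵢ₌₁ᵏ log(X₍ᵢ₎/X₍ₖ₊₁₎), with X₍₁₎ ≥ X₍₂₎ ≥ … the descending order statistics:

```python
import math

def hill_estimator(data, k):
    """Hill estimator of the tail index xi from the k largest observations.

    Assumes the convention xi_hat = (1/k) * sum_{i=1}^{k} log(X_(i) / X_(k+1)),
    where X_(1) >= X_(2) >= ... are the order statistics in descending order."""
    xs = sorted(data, reverse=True)
    if not 1 <= k < len(xs):
        raise ValueError("need 1 <= k < n")
    if xs[k] <= 0:
        raise ValueError("order statistics used must be positive")
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
```

On data following an exact Pareto tail with index ξ, the estimate is close to ξ for a wide range of k, which is the flat region one looks for in the Hill plot.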
Note that theHill estimatorξ^kHill{\displaystyle {\widehat {\xi }}_{k}^{\text{Hill}}}makes use of the log-transformation of the observationsX1:n=(X1,⋯,Xn){\displaystyle X_{1:n}=(X_{1},\cdots ,X_{n})}. (Pickands' estimatorξ^kPickand{\displaystyle {\widehat {\xi }}_{k}^{\text{Pickand}}}also employs the log-transformation, but in a slightly different way[5].)
|
https://en.wikipedia.org/wiki/Generalized_Pareto_distribution
|
Instatistics, anoutlieris adata pointthat differs significantly from other observations.[1][2]An outlier may be due to variability in the measurement, an indication of novel data, or it may be the result of experimental error; the latter are sometimes excluded from thedata set.[3][4]An outlier can be an indication of an exciting possibility, but can also cause serious problems in statistical analyses.
Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structures in the data-set,measurement error, or that the population has aheavy-tailed distribution. In the case of measurement error, one wishes to discard them or use statistics that arerobustto outliers, while in the case of heavy-tailed distributions, they indicate that the distribution has highskewnessand that one should be very cautious in using tools or intuitions that assume anormal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by amixture model.
In most larger samplings of data, some data points will be further away from thesample meanthan what is deemed reasonable. This can be due to incidentalsystematic erroror flaws in thetheorythat generated an assumed family ofprobability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition).
Outliers, being the most extreme observations, may include thesample maximumorsample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.
Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating theaveragetemperature of 10 objects in a room, and nine of them are between 20 and 25degrees Celsius, but an oven is at 175 °C, themedianof the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a differentpopulationthan the rest of thesampleset.
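With hypothetical temperatures matching this description (the specific values below are illustrative, not from the source), the effect is easy to reproduce:

```python
import statistics

# nine objects between 20 and 25 degrees Celsius, plus an oven at 175 C
temps = [20, 21, 22, 22, 23, 23, 24, 24, 25, 175]

median_t = statistics.median(temps)  # 23.0: between 20 and 25, robust to the oven
mean_t = statistics.mean(temps)      # 37.9: dragged upward by the single outlier
```

The median stays inside the 20–25 °C range of the typical objects, while the mean lands between 35.5 and 40 °C, matching no object in the room.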
Estimatorscapable of coping with outliers are said to be robust: the median is a robust statistic ofcentral tendency, while the mean is not.[5]
In the case ofnormally distributeddata, thethree sigma rulemeans that roughly 1 in 22 observations will differ by twice thestandard deviationor more from the mean, and 1 in 370 will deviate by three times the standard deviation.[6]In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number – seePoisson distribution– and need not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.
In general, if the nature of the population distribution is knowna priori, it is possible to test if the number of outliers deviatesignificantlyfrom what can be expected: for a given cutoff (so samples fall beyond the cutoff with probabilityp) of a given distribution, the number of outliers will follow abinomial distributionwith parameterp, which can generally be well-approximated by thePoisson distributionwith λ =pn. Thus if one takes a normal distribution with cutoff 3 standard deviations from the mean,pis approximately 0.3%, and thus for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3.
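A quick numerical check of these figures (Python; `NormalDist` supplies the normal cdf). The text rounds p to 0.3% so that λ = 3; computing p exactly gives λ ≈ 2.7 for 1000 trials:

```python
import math
from statistics import NormalDist

# two-sided tail probability beyond 3 standard deviations, ~0.0027
p = 2.0 * (1.0 - NormalDist().cdf(3.0))
lam = 1000 * p                 # Poisson rate for 1000 samples, ~2.7

def poisson_pmf(k, lam):
    """P(K = k) for K ~ Poisson(lam)."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# probability of seeing five or more 3-sigma deviations among 1000 samples
p_five_or_more = 1.0 - sum(poisson_pmf(k, lam) for k in range(5))
```

The result (on the order of 0.1) shows that five such deviations in 1000 samples is unremarkable, as the text argues.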
Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (King effect).
There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise.[7]There are various methods of outlier detection, some of which are treated as synonymous with novelty detection.[8][9][10][11][12]Some are graphical such asnormal probability plots. Others are model-based.Box plotsare a hybrid.
Model-based methods which are commonly used for identification assume that the data are from a normal distribution, and identify observations which are deemed "unlikely" based on mean and standard deviation:
It is proposed to determine in a series ofm{\displaystyle m}observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many asn{\displaystyle n}such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations. (Quoted in the editorial note on page 516 to Peirce (1982 edition) fromA Manual of Astronomy2:558 by Chauvenet.)[14][15][16][17]
Other methods flag observations based on measures such as theinterquartile range. For example, ifQ1{\displaystyle Q_{1}}andQ3{\displaystyle Q_{3}}are the lower and upperquartilesrespectively, then one could define an outlier to be any observation outside the range:
for some nonnegative constantk{\displaystyle k}.John Tukeyproposed this test, wherek=1.5{\displaystyle k=1.5}indicates an "outlier", andk=3{\displaystyle k=3}indicates data that is "far out".[18]
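Tukey's fences can be sketched as follows (Python). Note that quartile conventions vary; this sketch uses the default "exclusive" method of `statistics.quantiles`, so the flagged set can differ slightly from other quartile definitions:

```python
import statistics

def tukey_outliers(data, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR].

    Tukey's choices: k = 1.5 marks an "outlier", k = 3 marks "far out"
    data. Quartiles are computed with statistics.quantiles (exclusive)."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]
```

For example, appending a single extreme value to an otherwise regular sequence is enough to trip the k = 1.5 fence.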
In various domains such as, but not limited to,statistics,signal processing,finance,econometrics,manufacturing,networkinganddata mining, the task ofanomaly detectionmay take other approaches. Some of these may be distance-based[19][20]and density-based such asLocal Outlier Factor(LOF).[21]Some approaches may use the distance to thek-nearest neighborsto label observations as outliers or non-outliers.[22]
The modified Thompson Tau test is a method used to determine if an outlier exists in a data set.[23]The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, offering an objective method to determine if a data point is an outlier.[citation needed][24]How it works:
First, a data set's average is determined. Next the absolute deviation between each data point and the average are determined. Thirdly, a rejection region is determined using the formula:
wheretα/2{\displaystyle \scriptstyle {t_{\alpha /2}}}is the critical value from the Studenttdistribution withn-2 degrees of freedom,nis the sample size, and s is the sample standard deviation.
To determine if a value is an outlier:
Calculateδ=|(X−mean(X))/s|{\displaystyle \scriptstyle \delta =|(X-mean(X))/s|}.
Ifδ> Rejection Region, the data point is an outlier.
Ifδ≤ Rejection Region, the data point is not an outlier.
The modified Thompson Tau test is used to find one outlier at a time (the largest value ofδis removed if it is an outlier). That is, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process continues until no outliers remain in the data set.
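The iterative procedure can be sketched as follows (Python). The rejection region assumed here is the standard modified-Thompson form τ·s with τ = t(n−1)/(√n·√(n−2+t²)); the function names are ours, and the critical t value is supplied by the caller (for example from a t-table) to keep the sketch dependency-free:

```python
import math
import statistics

def thompson_tau_step(data, t_crit):
    """One pass of the modified Thompson tau test.

    t_crit is the critical value t_{alpha/2} with n-2 degrees of freedom.
    Returns the flagged outlier (largest absolute deviation), or None."""
    n = len(data)
    mean = statistics.fmean(data)
    s = statistics.stdev(data)
    # assumed rejection threshold: tau * s
    tau = t_crit * (n - 1) / (math.sqrt(n) * math.sqrt(n - 2 + t_crit ** 2))
    candidate = max(data, key=lambda v: abs(v - mean))
    return candidate if abs(candidate - mean) > tau * s else None

def thompson_tau_clean(data, t_crit_lookup):
    """Repeatedly remove the flagged outlier, recomputing mean and s each
    pass, until no outlier remains. t_crit_lookup maps sample size n to
    the critical t value (it changes as points are removed)."""
    data = list(data)
    while len(data) > 2:
        out = thompson_tau_step(data, t_crit_lookup(len(data)))
        if out is None:
            break
        data.remove(out)
    return data
```

Because the mean and standard deviation are recomputed after each removal, the rejection region shrinks as extreme points leave the sample, mirroring the description above.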
Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified (1−p(y|x){\displaystyle 1-p(y|x)}whereyis the assigned class label andxrepresents the input attribute values for an instance in the training sett).[25]Ideally, instance hardness would be calculated by summing over the set of all possible hypothesesH:
Practically, this formulation is infeasible, asHis potentially infinite andp(h|t){\displaystyle p(h|t)}is unknown for many algorithms. Thus, instance hardness can be approximated using a diverse subsetL⊂H{\displaystyle L\subset H}:
wheregj(t,α){\displaystyle g_{j}(t,\alpha )}is the hypothesis induced by learning algorithmgj{\displaystyle g_{j}}trained on training settwith hyperparametersα{\displaystyle \alpha }. Instance hardness provides a continuous value for determining if an instance is an outlier instance.
The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notablyestimation of covariance matrices.
Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded if that is the case.[26]Instead, one should use a method that is robust to outliers to model or analyze data with naturally occurring outliers.[26]
When deciding whether to remove an outlier, the cause has to be considered. As mentioned earlier, if the outlier's origin can be attributed to an experimental error, or if it can be otherwise determined that the outlying data point is erroneous, it is generally recommended to remove it.[26][27]However, it is more desirable to correct the erroneous value, if possible.
Removing a data point solely because it is an outlier, on the other hand, is a controversial practice, often frowned upon by many scientists and science instructors, as it typically invalidates statistical results.[26][27]While mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known.
The two common approaches to exclude outliers aretruncation(or trimming) andWinsorising. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data.[28]Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting incensoreddata.[29]
Inregressionproblems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such asCook's distance.[30]
If a data point (or points) is excluded from the data analysis, this should be clearly stated in any subsequent report.
The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution,[31]the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.
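A small simulation (illustrative, not from the source) makes the Cauchy behaviour concrete: the sample mean of Cauchy draws fails to settle even for very large samples, while the normal sample mean converges to 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# The mean of Cauchy samples does not settle down as n grows,
# while the mean of normal samples converges to the true mean 0.
for n in (100, 10_000, 1_000_000):
    cauchy_mean = np.mean(rng.standard_cauchy(n))
    normal_mean = np.mean(rng.standard_normal(n))
    print(f"n={n:>9}  Cauchy mean: {cauchy_mean:10.3f}  normal mean: {normal_mean:8.4f}")
```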
A set membership approach considers that the uncertainty corresponding to the ith measurement of an unknown random vector x is represented by a set Xi (instead of a probability density function). If no outliers occur, x should belong to the intersection of all Xi's. When outliers occur, this intersection could be empty, and we should relax a small number of the sets Xi (as small as possible) in order to avoid any inconsistency.[32]This can be done using the notion of q-relaxed intersection. As illustrated by the figure, the q-relaxed intersection corresponds to the set of all x which belong to all sets except q of them. Sets Xi that do not intersect the q-relaxed intersection could be suspected to be outliers.
In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model or a mixture model.[33][34]
https://en.wikipedia.org/wiki/Outlier
In economics and finance, a holy grail distribution is a probability distribution with positive mean and a right fat tail — a returns profile of a hypothetical investment vehicle that produces small returns centered on zero and occasionally exhibits outsized positive returns.
The distribution of historical returns of most asset classes and investment managers is negatively skewed and exhibits a fat left tail (abnormal negative returns).[1][2]Asset classes tend to have strong negative returns when stock market crises take place. For example, in October 2008 stocks, most hedge funds, real estate and corporate bonds suffered strong downward price corrections. At the same time, vehicles following the holy grail distribution, such as the US dollar (as the DXY index), treasury bonds and certain hedge fund strategies that bought credit default swaps (CDS) and other derivative instruments, had strong positive returns. Market forces that pushed the first category of assets down pulled the latter category up.
Protection of a diversified investment portfolio from market crashes (extreme events) can be achieved by using a tail risk parity approach,[3]allocating a piece of the portfolio to a tail risk protection strategy,[4]or to a strategy with a holy grail distribution of returns.[5][6]
A financial instrument or investment strategy that follows a holy grail distribution is a perfect hedge to an instrument that follows the Taleb distribution. When a "Taleb" investment vehicle suffers an unusual loss, a perfect hedge exhibits a strong return compensating for that loss (both outliers must take place at the same time).
Practitioners tend to distinguish between a holy grail distribution and investment returns generated from an inverted Taleb distribution, a "minus-Taleb" distribution. The return series of the former has a positive mean, while returns from the latter have a negative mean. For example, maintaining protection from market crashes through exposure to an out-of-the-money put option on a market index follows a "minus-Taleb" distribution, because option premiums have to be paid to maintain the position and the options tend to expire worthless in most cases. When a market sells off strongly, these options pay off and generate the strong positive outlier that the "minus-Taleb" distribution features.
https://en.wikipedia.org/wiki/Holy_grail_distribution
A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with the equation
f(x)=L1+e−k(x−x0){\displaystyle f(x)={\frac {L}{1+e^{-k(x-x_{0})}}}}
whereL{\displaystyle L}is the supremum of the values of the function,k{\displaystyle k}is the logistic growth rate or steepness of the curve, andx0{\displaystyle x_{0}}is thex{\displaystyle x}value of the function's midpoint.
The logistic function has domain the real numbers, the limit asx→−∞{\displaystyle x\to -\infty }is 0, and the limit asx→+∞{\displaystyle x\to +\infty }isL{\displaystyle L}.
The standard logistic function, depicted at right, whereL=1,k=1,x0=0{\displaystyle L=1,k=1,x_{0}=0}, has the equationf(x)=11+e−x{\displaystyle f(x)={\frac {1}{1+e^{-x}}}}and is sometimes simply called the sigmoid.[2]It is also sometimes called the expit, being the inverse function of the logit.[3][4]
The logistic function finds applications in a range of fields, including biology (especially ecology), biomathematics, chemistry, demography, economics, geoscience, mathematical psychology, probability, sociology, political science, linguistics, statistics, and artificial neural networks. There are various generalizations, depending on the field.
The logistic function was introduced in a series of three papers by Pierre François Verhulst between 1838 and 1847, who devised it as a model of population growth by adjusting the exponential growth model, under the guidance of Adolphe Quetelet.[5]Verhulst first devised the function in the mid-1830s, publishing a brief note in 1838,[1]then presented an expanded analysis and named the function in 1844 (published 1845);[a][6]the third paper adjusted the correction term in his model of Belgian population growth.[7]
The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth approaches the limit with an exponentially decaying gap, like the initial stage in reverse.
Verhulst did not explain the choice of the term "logistic" (French: logistique), but it is presumably in contrast to the logarithmic curve,[8][b]and by analogy with arithmetic and geometric. His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus "logistic growth" is presumably named by analogy, logistic being from Ancient Greek: λογιστικός, romanized: logistikós, a traditional division of Greek mathematics.[c]
As a word derived from ancient Greek mathematical terms,[9]the name of this function is unrelated to the military and management term logistics, which is instead from French: logis "lodgings",[10]though some believe the Greek term also influenced logistics;[9]see Logistics § Origin for details.
The standard logistic function is the logistic function with parametersk=1{\displaystyle k=1},x0=0{\displaystyle x_{0}=0},L=1{\displaystyle L=1}, which yields
f(x)=11+e−x=exex+1=ex/2ex/2+e−x/2.{\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}={\frac {e^{x/2}}{e^{x/2}+e^{-x/2}}}.}
In practice, due to the nature of the exponential functione−x{\displaystyle e^{-x}}, it is often sufficient to compute the standard logistic function forx{\displaystyle x}over a small range of real numbers, such as a range contained in [−6, +6], as it quickly converges very close to its saturation values of 0 and 1.
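A direct transcription of the general logistic function (an illustrative sketch, not from the source; the parameter names follow the equation above) shows how quickly the standard curve saturates at the ends of [−6, 6]:

```python
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """General logistic function L / (1 + exp(-k (x - x0)))."""
    return L / (1.0 + math.exp(-k * (x - x0)))

# The standard curve (L = 1, k = 1, x0 = 0) is already within 0.25%
# of its saturation values at the ends of the interval [-6, 6].
print(logistic(-6.0))  # ~0.0025
print(logistic(0.0))   # 0.5
print(logistic(6.0))   # ~0.9975
```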
The logistic function has the symmetry property that
1−f(x)=f(−x).{\displaystyle 1-f(x)=f(-x).}
This reflects that the growth from 0 whenx{\displaystyle x}is small is symmetric with the decay of the gap to the limit (1) whenx{\displaystyle x}is large.
Further,x↦f(x)−1/2{\displaystyle x\mapsto f(x)-1/2}is an odd function.
The sum of the logistic function and its reflection about the vertical axis,f(−x){\displaystyle f(-x)}, is
11+e−x+11+e−(−x)=exex+1+1ex+1=1.{\displaystyle {\frac {1}{1+e^{-x}}}+{\frac {1}{1+e^{-(-x)}}}={\frac {e^{x}}{e^{x}+1}}+{\frac {1}{e^{x}+1}}=1.}
The logistic function is thus rotationally symmetrical about the point (0, 1/2).[11]
The logistic function is the inverse of the natural logit function
logitp=logp1−pfor0<p<1{\displaystyle \operatorname {logit} p=\log {\frac {p}{1-p}}\quad {\text{ for }}\,0<p<1}
and so converts the logarithm of odds into a probability. The conversion from the log-likelihood ratio of two alternatives also takes the form of a logistic curve.
The logistic function is an offset and scaled hyperbolic tangent function:f(x)=12+12tanh(x2){\displaystyle f(x)={\frac {1}{2}}+{\frac {1}{2}}\tanh \left({\frac {x}{2}}\right),}or tanh(x)=2f(2x)−1.{\displaystyle \tanh(x)=2f(2x)-1.}
This follows fromtanh(x)=ex−e−xex+e−x=ex⋅(1−e−2x)ex⋅(1+e−2x)=f(2x)−e−2x1+e−2x=f(2x)−e−2x+1−11+e−2x=2f(2x)−1.{\displaystyle {\begin{aligned}\tanh(x)&={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{x}\cdot \left(1-e^{-2x}\right)}{e^{x}\cdot \left(1+e^{-2x}\right)}}\\&=f(2x)-{\frac {e^{-2x}}{1+e^{-2x}}}=f(2x)-{\frac {e^{-2x}+1-1}{1+e^{-2x}}}=2f(2x)-1.\end{aligned}}}
The hyperbolic-tangent relationship leads to another form for the logistic function's derivative:
ddxf(x)=14sech2(x2),{\displaystyle {\frac {d}{dx}}f(x)={\frac {1}{4}}\operatorname {sech} ^{2}\left({\frac {x}{2}}\right),}
which ties the logistic function into the logistic distribution.
Geometrically, the hyperbolic tangent function is the hyperbolic angle on the unit hyperbola x2−y2=1{\displaystyle x^{2}-y^{2}=1}, which factors as(x+y)(x−y)=1{\displaystyle (x+y)(x-y)=1}, and thus has asymptotes the lines through the origin with slope−1{\displaystyle -1}and with slope1{\displaystyle 1}, and vertex at(1,0){\displaystyle (1,0)}corresponding to the range and midpoint (1{\displaystyle {1}}) of tanh. Analogously, the logistic function can be viewed as the hyperbolic angle on the hyperbola xy−y2=1{\displaystyle xy-y^{2}=1}, which factors asy(x−y)=1{\displaystyle y(x-y)=1}, and thus has asymptotes the lines through the origin with slope0{\displaystyle 0}and with slope1{\displaystyle 1}, and vertex at(2,1){\displaystyle (2,1)}, corresponding to the range and midpoint (1/2{\displaystyle 1/2}) of the logistic function.
Parametrically, hyperbolic cosine and hyperbolic sine give coordinates on the unit hyperbola:[d]((et+e−t)/2,(et−e−t)/2){\displaystyle \left((e^{t}+e^{-t})/2,(e^{t}-e^{-t})/2\right)}, with quotient the hyperbolic tangent. Similarly,(et/2+e−t/2,et/2){\displaystyle {\bigl (}e^{t/2}+e^{-t/2},e^{t/2}{\bigr )}}parametrizes the hyperbola xy−y2=1{\displaystyle xy-y^{2}=1}, with quotient the logistic function. These correspond to linear transformations (and rescaling the parametrization) of the hyperbola xy=1{\displaystyle xy=1}, with parametrization(e−t,et){\displaystyle (e^{-t},e^{t})}: the parametrization of the hyperbola for the logistic function corresponds tot/2{\displaystyle t/2}and the linear transformation(1101){\displaystyle {\bigl (}{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}{\bigr )}}, while the parametrization of the unit hyperbola (for the hyperbolic tangent) corresponds to the linear transformation12(11−11){\displaystyle {\tfrac {1}{2}}{\bigl (}{\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}}{\bigr )}}.
The standard logistic function has an easily calculated derivative. The derivative is known as the density of the logistic distribution:
f(x)=11+e−x=ex1+ex,{\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}},}
ddxf(x)=ex⋅(1+ex)−ex⋅ex(1+ex)2=ex(1+ex)2=(ex1+ex)(11+ex)=(ex1+ex)(1−ex1+ex)=f(x)(1−f(x)){\displaystyle {\begin{aligned}{\frac {d}{dx}}f(x)&={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{{\left(1+e^{x}\right)}^{2}}}\\[1ex]&={\frac {e^{x}}{{\left(1+e^{x}\right)}^{2}}}\\[1ex]&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left({\frac {1}{1+e^{x}}}\right)\\[1ex]&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left(1-{\frac {e^{x}}{1+e^{x}}}\right)\\[1.2ex]&=f(x)\left(1-f(x)\right)\end{aligned}}}from which all higher derivatives can be derived algebraically. For example,f″=(1−2f)(1−f)f{\displaystyle f''=(1-2f)(1-f)f}.
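The identity f′ = f(1 − f) can be checked numerically with a finite-difference sketch (illustrative, not from the source):

```python
import math

def f(x):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

# Central finite differences confirm f'(x) = f(x) (1 - f(x)) at a few points.
h = 1e-6
for x in (-2.0, 0.0, 1.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    analytic = f(x) * (1.0 - f(x))
    print(f"x={x:5.2f}  numeric={numeric:.8f}  analytic={analytic:.8f}")
```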
The logistic distribution is a location–scale family, which corresponds to parameters of the logistic function. IfL=1{\displaystyle L=1}is fixed, then the midpointx0{\displaystyle x_{0}}is the location and the slopek{\displaystyle k}is the scale.
Conversely, its antiderivative can be computed by the substitutionu=1+ex{\displaystyle u=1+e^{x}}, since
f(x)=ex1+ex=u′u,{\displaystyle f(x)={\frac {e^{x}}{1+e^{x}}}={\frac {u'}{u}},}
so (dropping the constant of integration)
∫ex1+exdx=∫1udu=lnu=ln(1+ex).{\displaystyle \int {\frac {e^{x}}{1+e^{x}}}\,dx=\int {\frac {1}{u}}\,du=\ln u=\ln(1+e^{x}).}
In artificial neural networks, this is known as the softplus function and (with scaling) is a smooth approximation of the ramp function, just as the logistic function (with scaling) is a smooth approximation of the Heaviside step function.
The standard logistic function is analytic on the whole real line sincef:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} },f(x)=11+e−x=h(g(x)){\displaystyle f(x)={\frac {1}{1+e^{-x}}}=h(g(x))}whereg:R→R{\displaystyle g:\mathbb {R} \to \mathbb {R} },g(x)=1+e−x{\displaystyle g(x)=1+e^{-x}}andh:(0,∞)→(0,∞){\displaystyle h:(0,\infty )\to (0,\infty )},h(x)=1x{\displaystyle h(x)={\frac {1}{x}}}are analytic on their domains, and the composition of analytic functions is again analytic.
A formula for the nth derivative of the standard logistic function is
dnfdxn=∑i=1n(∑j=1n(−1)i+j(ij)jn)e−ix(1+e−x)i+1{\displaystyle {\frac {d^{n}f}{dx^{n}}}=\sum _{i=1}^{n}{\frac {\left(\sum _{j=1}^{n}{\left(-1\right)}^{i+j}{\binom {i}{j}}j^{n}\right)e^{-ix}}{{\left(1+e^{-x}\right)}^{i+1}}}}
therefore its Taylor series about the pointa{\displaystyle a}is
{\displaystyle f(x)=f(a)+\sum _{n=1}^{\infty }\sum _{i=1}^{n}{\frac {\left(\sum _{j=1}^{n}{\left(-1\right)}^{i+j}{\binom {i}{j}}j^{n}\right)e^{-ia}}{{\left(1+e^{-a}\right)}^{i+1}}}{\frac {{\left(x-a\right)}^{n}}{n!}}.}
The unique standard logistic function is the solution of the simple first-order non-linear ordinary differential equation
ddxf(x)=f(x)(1−f(x)){\displaystyle {\frac {d}{dx}}f(x)=f(x){\big (}1-f(x){\big )}}
with boundary conditionf(0)=1/2{\displaystyle f(0)=1/2}. This equation is the continuous version of the logistic map. Note that the reciprocal logistic function is a solution to a simple first-order linear ordinary differential equation.[12]
The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 when the function is 1; and the derivative is positive forf{\displaystyle f}between 0 and 1, and negative forf{\displaystyle f}above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0 and a stable equilibrium at 1, and thus for any function value greater than 0 and less than 1, it grows to 1.
The logistic equation is a special case of the Bernoulli differential equation and has the following solution:
f(x)=exex+C.{\displaystyle f(x)={\frac {e^{x}}{e^{x}+C}}.}
Choosing the constant of integrationC=1{\displaystyle C=1}gives the other well known form of the definition of the logistic curve:
f(x)=exex+1=11+e−x.{\displaystyle f(x)={\frac {e^{x}}{e^{x}+1}}={\frac {1}{1+e^{-x}}}.}
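As a quick numerical illustration (not from the source), integrating the differential equation f′ = f(1 − f) from f(0) = 1/2 with Euler steps reproduces the closed-form curve:

```python
import math

def closed_form(x):
    """Logistic curve obtained by choosing C = 1 in the general solution."""
    return 1.0 / (1.0 + math.exp(-x))

# Forward-Euler integration of f' = f (1 - f) starting from f(0) = 1/2.
x, f, dx = 0.0, 0.5, 1e-4
while x < 4.0:
    f += f * (1.0 - f) * dx
    x += dx

print(f, closed_form(4.0))  # the two values agree closely
```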
More quantitatively, as can be seen from the analytical solution, the logistic curve shows early exponential growth for negative argument, which transitions to linear growth of slope 1/4 for an argument near 0, then approaches 1 with an exponentially decaying gap.
The differential equation derived above is a special case of a general differential equation that only models the sigmoid function forx>0{\displaystyle x>0}. In many modeling applications, the more general form[13]df(x)dx=kLf(x)(L−f(x)),f(0)=L1+ekx0{\displaystyle {\frac {df(x)}{dx}}={\frac {k}{L}}f(x){\big (}L-f(x){\big )},\quad f(0)={\frac {L}{1+e^{kx_{0}}}}}can be desirable. Its solution is the shifted and scaled sigmoid functionLσ(k(x−x0))=L1+e−k(x−x0){\displaystyle L\sigma {\big (}k(x-x_{0}){\big )}={\frac {L}{1+e^{-k(x-x_{0})}}}}.
When the capacityL=1{\displaystyle L=1}, the value of the logistic function is in the range(0,1){\displaystyle (0,1)}and can be interpreted as a probability p.[e]In more detail, p can be interpreted as the probability of one of two alternatives (the parameter of a Bernoulli distribution);[f]the two alternatives are complementary, so the probability of the other alternative isq=1−p{\displaystyle q=1-p}andp+q=1{\displaystyle p+q=1}. The two alternatives are coded as 1 and 0, corresponding to the limiting values asx→±∞{\displaystyle x\to \pm \infty }.
In this interpretation the input x is the log-odds for the first alternative (relative to the other alternative), measured in "logistic units" (or logits),ex{\displaystyle e^{x}}is the odds for the first event (relative to the second), and, recalling that given odds ofO=O:1{\displaystyle O=O:1}for (O{\displaystyle O}against 1), the probability is the ratio of for over (for plus against),O/(O+1){\displaystyle O/(O+1)}, we see thatex/(ex+1)=1/(1+e−x)=p{\displaystyle e^{x}/(e^{x}+1)=1/(1+e^{-x})=p}is the probability of the first alternative. Conversely, x is the log-odds against the second alternative,−x{\displaystyle -x}is the log-odds for the second alternative,e−x{\displaystyle e^{-x}}is the odds for the second alternative, ande−x/(e−x+1)=1/(1+ex)=q{\displaystyle e^{-x}/(e^{-x}+1)=1/(1+e^{x})=q}is the probability of the second alternative.
This can be framed more symmetrically in terms of two inputs,x0{\displaystyle x_{0}}andx1{\displaystyle x_{1}}, which then generalizes naturally to more than two alternatives. Given two real number inputs,x0{\displaystyle x_{0}}andx1{\displaystyle x_{1}}, interpreted as logits, their difference x1−x0{\displaystyle x_{1}-x_{0}}is the log-odds for option 1 (the log-odds against option 0),ex1−x0{\displaystyle e^{x_{1}-x_{0}}}is the odds,ex1−x0/(ex1−x0+1)=1/(1+e−(x1−x0))=ex1/(ex0+ex1){\displaystyle e^{x_{1}-x_{0}}/(e^{x_{1}-x_{0}}+1)=1/\left(1+e^{-(x_{1}-x_{0})}\right)=e^{x_{1}}/(e^{x_{0}}+e^{x_{1}})}is the probability of option 1, and similarlyex0/(ex0+ex1){\displaystyle e^{x_{0}}/(e^{x_{0}}+e^{x_{1}})}is the probability of option 0.
This form immediately generalizes to more alternatives as the softmax function, which is a vector-valued function whose i-th coordinate isexi/∑i=0nexi{\textstyle e^{x_{i}}/\sum _{i=0}^{n}e^{x_{i}}}.
More subtly, the symmetric form emphasizes interpreting the input x asx1−x0{\displaystyle x_{1}-x_{0}}and thus relative to some reference point, implicitly tox0=0{\displaystyle x_{0}=0}. Notably, the softmax function is invariant under adding a constant to all the logitsxi{\displaystyle x_{i}}, which corresponds to the differencexj−xi{\displaystyle x_{j}-x_{i}}being the log-odds for option j against option i, but the individual logitsxi{\displaystyle x_{i}}not being log-odds on their own. Often one of the options is used as a reference ("pivot"), and its value fixed as 0, so the other logits are interpreted as odds versus this reference. This is generally done with the first alternative, hence the choice of numbering:x0=0{\displaystyle x_{0}=0}, and thenxi=xi−x0{\displaystyle x_{i}=x_{i}-x_{0}}is the log-odds for option i against option 0. Sincee0=1{\displaystyle e^{0}=1}, this yields the+1{\displaystyle +1}term in many expressions for the logistic function and generalizations.[g]
In growth modeling, numerous generalizations exist, including the generalized logistic curve, the Gompertz function, the cumulative distribution function of the shifted Gompertz distribution, and the hyperbolastic function of type I.
In statistics, where the logistic function is interpreted as the probability of one of two alternatives, the generalization to three or more alternatives is the softmax function, which is vector-valued, as it gives the probability of each alternative.
A typical application of the logistic equation is a common model of population growth (see also population dynamics), originally due to Pierre-François Verhulst in 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had read Thomas Malthus' An Essay on the Principle of Population, which describes the Malthusian growth model of simple (unconstrained) exponential growth. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population. The equation was rediscovered in 1911 by A. G. McKendrick for the growth of bacteria in broth and experimentally tested using a technique for nonlinear parameter estimation.[14]The equation is also sometimes called the Verhulst–Pearl equation following its rediscovery in 1920 by Raymond Pearl (1879–1940) and Lowell Reed (1888–1966) of the Johns Hopkins University.[15]Another scientist, Alfred J. Lotka, derived the equation again in 1925, calling it the law of population growth.
LettingP{\displaystyle P}represent population size (N{\displaystyle N}is often used in ecology instead) andt{\displaystyle t}represent time, this model is formalized by the differential equation:
dPdt=rP(1−PK),{\displaystyle {\frac {dP}{dt}}=rP\left(1-{\frac {P}{K}}\right),}
where the constantr{\displaystyle r}defines the growth rate andK{\displaystyle K}is the carrying capacity.
In the equation, the early, unimpeded growth rate is modeled by the first term+rP{\displaystyle +rP}. The value of the rater{\displaystyle r}represents the proportional increase of the populationP{\displaystyle P}in one unit of time. Later, as the population grows, the modulus of the second term (which multiplied out is−rP2/K{\displaystyle -rP^{2}/K}) becomes almost as large as the first, as some members of the populationP{\displaystyle P}interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called the bottleneck, and is modeled by the value of the parameterK{\displaystyle K}. The competition diminishes the combined growth rate, until the value ofP{\displaystyle P}ceases to grow (this is called maturity of the population).
The solution to the equation (withP0{\displaystyle P_{0}}being the initial population) is
P(t)=KP0ertK+P0(ert−1)=K1+(K−P0P0)e−rt,{\displaystyle P(t)={\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}={\frac {K}{1+\left({\frac {K-P_{0}}{P_{0}}}\right)e^{-rt}}},}
where
limt→∞P(t)=K,{\displaystyle \lim _{t\to \infty }P(t)=K,}
whereK{\displaystyle K}is the limiting value ofP{\displaystyle P}, the highest value that the population can reach given infinite time (or come close to reaching in finite time). The carrying capacity is asymptotically reached independently of the initial valueP(0)>0{\displaystyle P(0)>0}, and also in the case thatP(0)>K{\displaystyle P(0)>K}.
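A minimal sketch of the closed-form solution (the parameter values are invented for illustration):

```python
import math

def population(t, P0=10.0, r=0.5, K=1000.0):
    """Closed-form solution of dP/dt = r P (1 - P/K) with P(0) = P0."""
    return K * P0 * math.exp(r * t) / (K + P0 * (math.exp(r * t) - 1.0))

# The population starts at P0 and approaches the carrying capacity K.
for t in (0, 5, 10, 20, 40):
    print(t, round(population(t), 1))
```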
In ecology, species are sometimes referred to asr{\displaystyle r}-strategist orK{\displaystyle K}-strategist depending upon the selective processes that have shaped their life history strategies. Choosing the variable dimensions so thatn{\displaystyle n}measures the population in units of carrying capacity, andτ{\displaystyle \tau }measures time in units of1/r{\displaystyle 1/r}, gives the dimensionless differential equation
dndτ=n(1−n).{\displaystyle {\frac {dn}{d\tau }}=n(1-n).}
The antiderivative of the ecological form of the logistic function can be computed by the substitutionu=K+P0(ert−1){\displaystyle u=K+P_{0}\left(e^{rt}-1\right)}, sincedu=rP0ertdt{\displaystyle du=rP_{0}e^{rt}dt}:
∫KP0ertK+P0(ert−1)dt=∫Kr1udu=Krlnu+C=Krln(K+P0(ert−1))+C{\displaystyle \int {\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}\,dt=\int {\frac {K}{r}}{\frac {1}{u}}\,du={\frac {K}{r}}\ln u+C={\frac {K}{r}}\ln \left(K+P_{0}(e^{rt}-1)\right)+C}
Since environmental conditions influence the carrying capacity, it can be time-varying, withK(t)>0{\displaystyle K(t)>0}, leading to the following mathematical model:
dPdt=rP⋅(1−PK(t)).{\displaystyle {\frac {dP}{dt}}=rP\cdot \left(1-{\frac {P}{K(t)}}\right).}
A particularly important case is that of carrying capacity that varies periodically with periodT{\displaystyle T}:
K(t+T)=K(t).{\displaystyle K(t+T)=K(t).}
It can be shown[16]that in such a case, independently from the initial valueP(0)>0{\displaystyle P(0)>0},P(t){\displaystyle P(t)}will tend to a unique periodic solutionP∗(t){\displaystyle P_{*}(t)}, whose period isT{\displaystyle T}.
A typical value ofT{\displaystyle T}is one year: in that case,K(t){\displaystyle K(t)}may reflect periodic variations of weather conditions.
Another interesting generalization is to consider that the carrying capacityK(t){\displaystyle K(t)}is a function of the population at an earlier time, capturing a delay in the way the population modifies its environment. This leads to a logistic delay equation,[17]which has very rich behavior: bistability in some parameter ranges, monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustained oscillations, finite-time singularities, and finite-time death.
Logistic functions are used in several roles in statistics. For example, they are the cumulative distribution function of the logistic family of distributions, and, in slightly simplified form, they are used to model the chance a chess player has to beat their opponent in the Elo rating system. More specific examples now follow.
Logistic functions are used in logistic regression to model how the probabilityp{\displaystyle p}of an event may be affected by one or more explanatory variables: an example would be to have the model
p=f(a+bx),{\displaystyle p=f(a+bx),}
wherex{\displaystyle x}is the explanatory variable,a{\displaystyle a}andb{\displaystyle b}are model parameters to be fitted, andf{\displaystyle f}is the standard logistic function.
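As an illustrative sketch (the synthetic data, the true parameter values and the learning-rate settings are invented), the parameters a and b can be fitted by gradient ascent on the log-likelihood:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data generated from p = f(a + b x) with true a = -1, b = 2.
random.seed(0)
xs = [random.uniform(-3.0, 3.0) for _ in range(500)]
ys = [1 if random.random() < sigmoid(-1.0 + 2.0 * x) else 0 for x in xs]

# Gradient ascent on the average log-likelihood of the logistic model.
a, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    grad_a = sum(y - sigmoid(a + b * x) for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum((y - sigmoid(a + b * x)) * x for x, y in zip(xs, ys)) / len(xs)
    a, b = a + lr * grad_a, b + lr * grad_b

print(round(a, 2), round(b, 2))  # estimates should land near -1 and 2
```

The log-likelihood of logistic regression is concave, so this simple ascent converges to the maximum-likelihood estimate.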
Logistic regression and other log-linear models are also commonly used in machine learning. A generalization of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression.
Another application of the logistic function is in the Rasch model, used in item response theory. In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect.
Logistic functions are often used in artificial neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron.
A common choice for the activation or "squashing" functions, used to clip large magnitudes to keep the response of the neural network bounded,[18]is
g(h)=11+e−2βh,{\displaystyle g(h)={\frac {1}{1+e^{-2\beta h}}},}
which is a logistic function.
These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners note that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation.[19]
The logistic function is itself the derivative of another proposed activation function, the softplus.
Another application of the logistic curve is in medicine, where the logistic differential equation can be used to model the growth of tumors. This application can be considered an extension of the above-mentioned use in the framework of ecology (see also the generalized logistic curve, allowing for more parameters). Denoting withX(t){\displaystyle X(t)}the size of the tumor at timet{\displaystyle t}, its dynamics are governed by
X′=r(1−XK)X,{\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X,}
which is of the type
X′=F(X)X,F′(X)≤0,{\displaystyle X'=F(X)X,\quad F'(X)\leq 0,}
whereF(X){\displaystyle F(X)}is the proliferation rate of the tumor.
If a course of chemotherapy is started with a log-kill effect, the equation may be revised to be
X′=r(1−XK)X−c(t)X,{\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X-c(t)X,}
wherec(t){\displaystyle c(t)}is the therapy-induced death rate. In the idealized case of very long therapy,c(t){\displaystyle c(t)}can be modeled as a periodic function (of periodT{\displaystyle T}) or (in case of continuous infusion therapy) as a constant function, and one has that
1T∫0Tc(t)dt>r→limt→+∞x(t)=0,{\displaystyle {\frac {1}{T}}\int _{0}^{T}c(t)\,dt>r\to \lim _{t\to +\infty }x(t)=0,}
i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate, then the disease is eradicated. Of course, this is an oversimplified model of both the growth and the therapy. For example, it does not take into account the evolution of clonal resistance, or the side-effects of the therapy on the patient. These factors can result in the eventual failure of chemotherapy, or its discontinuation.[citation needed]
A novel infectious pathogen to which a population has no immunity will generally spread exponentially in the early stages, while the supply of susceptible individuals is plentiful. The SARS-CoV-2 virus that causes COVID-19 exhibited exponential growth early in the course of infection in several countries in early 2020.[20]Factors including a lack of susceptible hosts (through the continued spread of infection until it passes the threshold for herd immunity) or reduction in the accessibility of potential hosts through physical distancing measures, may result in exponential-looking epidemic curves first linearizing (replicating the "logarithmic" to "logistic" transition first noted by Pierre-François Verhulst, as noted above) and then reaching a maximal limit.[21]
Logistic functions, or related functions (e.g. the Gompertz function), are usually used in a descriptive or phenomenological manner because they fit well not only to the early exponential rise, but also to the eventual levelling off of the pandemic as the population develops herd immunity. This is in contrast to actual models of pandemics which attempt to formulate a description based on the dynamics of the pandemic (e.g. contact rates, incubation times, social distancing, etc.). Some simple models have been developed, however, which yield a logistic solution.[22][23][24]
A generalized logistic function, also called the Richards growth curve, has been applied to model the early phase of the COVID-19 outbreak.[25]The authors fit the generalized logistic function to the cumulative number of infected cases, here referred to as the infection trajectory. There are different parameterizations of the generalized logistic function in the literature. One frequently used form is
f(t;θ1,θ2,θ3,ξ)=θ1[1+ξexp(−θ2⋅(t−θ3))]1/ξ{\displaystyle f(t;\theta _{1},\theta _{2},\theta _{3},\xi )={\frac {\theta _{1}}{{\left[1+\xi \exp \left(-\theta _{2}\cdot (t-\theta _{3})\right)\right]}^{1/\xi }}}}
whereθ1,θ2,θ3{\displaystyle \theta _{1},\theta _{2},\theta _{3}}are real numbers, andξ{\displaystyle \xi }is a positive real number. The flexibility of the curvef{\displaystyle f}is due to the parameterξ{\displaystyle \xi }: (i) ifξ=1{\displaystyle \xi =1}then the curve reduces to the logistic function, and (ii) asξ{\displaystyle \xi }approaches zero, the curve converges to theGompertz function. In epidemiological modeling,θ1{\displaystyle \theta _{1}},θ2{\displaystyle \theta _{2}}, andθ3{\displaystyle \theta _{3}}represent the final epidemic size, infection rate, and lag phase, respectively. See the right panel for an example infection trajectory when(θ1,θ2,θ3){\displaystyle (\theta _{1},\theta _{2},\theta _{3})}is set to(10000,0.2,40){\displaystyle (10000,0.2,40)}.
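A minimal sketch (using the parameter values from the example above) showing that the Richards curve reduces to the ordinary logistic function when ξ = 1:

```python
import math

def richards(t, theta1, theta2, theta3, xi):
    """Generalized logistic (Richards) curve in the parameterization above."""
    return theta1 / (1.0 + xi * math.exp(-theta2 * (t - theta3))) ** (1.0 / xi)

def logistic(t, L, k, t0):
    return L / (1.0 + math.exp(-k * (t - t0)))

# With xi = 1 the Richards curve is exactly the logistic function;
# theta1 is the final epidemic size (the limit for large t).
print(richards(55.0, 10000, 0.2, 40, 1.0))
print(logistic(55.0, 10000, 0.2, 40))
```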
One of the benefits of using a growth function such as the generalized logistic function in epidemiological modeling is its relatively easy application to the multilevel model framework, where information from different geographic regions can be pooled together.
The concentration of reactants and products in autocatalytic reactions follows the logistic function.
The degradation of platinum group metal-free (PGM-free) oxygen reduction reaction (ORR) catalysts in fuel cell cathodes follows the logistic decay function,[26]suggesting an autocatalytic degradation mechanism.
The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according toFermi–Dirac statistics.
The logistic function also finds applications in optics, particularly in modelling phenomena such asmirages. Under certain conditions, such as the presence of a temperature or concentration gradient due to diffusion and balancing with gravity, logistic curve behaviours can emerge.[27][28]
A mirage, resulting from a temperature gradient that modifies the refractive index related to the density/concentration of the material over distance, can be modelled using a fluid with a refractive index gradient due to the concentration gradient. This mechanism can be equated to a limiting population growth model, where the concentrated region attempts to diffuse into the lower concentration region, while seeking equilibrium with gravity, thus yielding a logistic function curve.[27]
See Diffusion bonding.
In linguistics, the logistic function can be used to modellanguage change:[29]an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted.
The logistic S-curve can be used for modeling the crop response to changes in growth factors. There are two types of response functions: positive and negative growth curves. For example, the crop yield may increase with increasing values of the growth factor up to a certain level (positive function), or it may decrease with increasing growth factor values (a negative function, owing to a negative growth factor), a situation that requires an inverted S-curve.
The logistic function can be used to illustrate the progress of thediffusion of an innovationthrough its life cycle.
In The Laws of Imitation (1890), Gabriel Tarde describes the rise and spread of new ideas through imitative chains. In particular, Tarde identifies three main stages through which innovations spread: the first corresponds to the difficult beginnings, during which the idea has to struggle within a hostile environment full of opposing habits and beliefs; the second corresponds to the properly exponential take-off of the idea, with f(x)=2x{\displaystyle f(x)=2^{x}}; finally, the third stage is logarithmic, with f(x)=log(x){\displaystyle f(x)=\log(x)}, and corresponds to the time when the impulse of the idea gradually slows down while, simultaneously, new opposing ideas appear. The ensuing situation halts or stabilizes the progress of the innovation, which approaches an asymptote.
In a sovereign state, the subnational units (constituent states or cities) may use loans to finance their projects. However, this funding source is usually subject to strict legal rules as well as to economic scarcity constraints, especially regarding the resources the banks can lend (due to their equity or Basel limits). These restrictions, which represent a saturation level, along with an exponential rush in an economic competition for money, create a public finance diffusion of credit pleas, and the aggregate national response is a sigmoid curve.[32]
Historically, when new products are introduced there is an intense amount of research and development, which leads to dramatic improvements in quality and reductions in cost. This leads to a period of rapid industry growth. Some of the more famous examples are: railroads, incandescent light bulbs, electrification, cars and air travel. Eventually, dramatic improvement and cost reduction opportunities are exhausted, the product or process is in widespread use with few remaining potential new customers, and markets become saturated.
Logistic analysis was used in papers by several researchers at the International Institute of Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures and energy source substitutions and the role of work in the economy as well as with the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989).[33]Cesare Marchetti published onlong economic cyclesand on diffusion of innovations.[34][35]Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures including canals, railroads, highways and airlines, showing that their diffusion followed logistic shaped curves.[36]
Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: beginning of a technological era asirruption, the ascent asfrenzy, the rapid build out assynergyand the completion asmaturity.[37]
Logistic growth regressions carry significant uncertainty when data is available only up to around the inflection point of the growth process. Under these conditions, estimating the height at which the inflection point will occur may have uncertainties comparable to the carrying capacity (K) of the system.
A method to mitigate this uncertainty involves using the carrying capacity from a surrogate logistic growth process as a reference point.[38]By incorporating this constraint, even if K is only an estimate within a factor of two, the regression is stabilized, which improves accuracy and reduces uncertainty in the prediction parameters. This approach can be applied in fields such as economics and biology, where analogous surrogate systems or populations are available to inform the analysis.
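A minimal sketch (illustrative, not the method of the cited work) of why fixing K stabilizes the regression: with K held fixed, the logistic model y = K/(1 + exp(−r(t − t0))) linearizes, since log(K/y − 1) = −r·t + r·t0, so r and t0 follow from an ordinary linear regression even when observations stop at the inflection point.

```python
import math

def fit_logistic_fixed_K(ts, ys, K):
    """Least-squares fit of y = K / (1 + exp(-r*(t - t0))) with the carrying
    capacity K held fixed (e.g. borrowed from a surrogate process).  With K
    fixed the model linearizes: log(K/y - 1) = -r*t + r*t0, so r and t0 come
    from an ordinary linear regression on the transformed data."""
    zs = [math.log(K / y - 1.0) for y in ys]
    n = len(ts)
    tbar, zbar = sum(ts) / n, sum(zs) / n
    slope = (sum((t - tbar) * (z - zbar) for t, z in zip(ts, zs))
             / sum((t - tbar) ** 2 for t in ts))
    r = -slope
    t0 = (zbar + r * tbar) / r   # intercept r*t0 = zbar - slope*tbar
    return r, t0

# Noise-free logistic data (illustrative values) observed only up to the
# inflection point at t0 = 40; the fit still recovers r = 0.2 and t0 = 40.
ts = [10, 15, 20, 25, 30, 35, 40]
ys = [10000 / (1 + math.exp(-0.2 * (t - 40))) for t in ts]
r, t0 = fit_logistic_fixed_K(ts, ys, K=10000)
```

With noisy data the same transformation applies; the point is that the nonlinear three-parameter problem collapses to a two-parameter linear one once K is pinned down.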
Link[39]created an extension ofWald's theoryof sequential analysis to a distribution-free accumulation of random variables until either a positive or negative bound is first equaled or exceeded. Link[40]derives the probability of first equaling or exceeding the positive boundary as1/(1+e−θA){\displaystyle 1/(1+e^{-\theta A})}, the logistic function. This is the first proof that the logistic function may have a stochastic process as its basis. Link[41]provides a century of examples of "logistic" experimental results and a newly derived relation between this probability and the time of absorption at the boundaries.
https://en.wikipedia.org/wiki/Logistic_growth
Inmathematics, asingularityis a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to bewell-behavedin some particular way, such as by lackingdifferentiabilityoranalyticity.[1][2][3]
For example, thereciprocal functionf(x)=1/x{\displaystyle f(x)=1/x}has a singularity atx=0{\displaystyle x=0}, where the value of thefunctionis not defined, as involving adivision by zero. Theabsolute valuefunctiong(x)=|x|{\displaystyle g(x)=|x|}also has a singularity atx=0{\displaystyle x=0}, since it is notdifferentiablethere.[4]
Thealgebraic curvedefined by{(x,y):y3−x2=0}{\displaystyle \left\{(x,y):y^{3}-x^{2}=0\right\}}in the(x,y){\displaystyle (x,y)}coordinate system has a singularity (called acusp) at(0,0){\displaystyle (0,0)}. For singularities inalgebraic geometry, seesingular point of an algebraic variety. For singularities indifferential geometry, seesingularity theory.
Inreal analysis, singularities are eitherdiscontinuities, or discontinuities of thederivative(sometimes also discontinuities of higher order derivatives). There are four kinds of discontinuities:type I, which has two subtypes, andtype II, which can also be divided into two subtypes (though usually is not).
To describe the way these two types of limits are used, suppose that f(x){\displaystyle f(x)} is a function of a real argument x{\displaystyle x}. For any value c{\displaystyle c} of its argument, the left-handed limit, f(c−){\displaystyle f(c^{-})}, and the right-handed limit, f(c+){\displaystyle f(c^{+})}, are defined by:
The valuef(c−){\displaystyle f(c^{-})}is the value that the functionf(x){\displaystyle f(x)}tends towards as the valuex{\displaystyle x}approachesc{\displaystyle c}frombelow, and the valuef(c+){\displaystyle f(c^{+})}is the value that the functionf(x){\displaystyle f(x)}tends towards as the valuex{\displaystyle x}approachesc{\displaystyle c}fromabove, regardless of the actual value the function has at the point wherex=c{\displaystyle x=c}.
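As a toy numerical illustration (the function and values are hypothetical, not from the source), the two one-sided limits can be probed for a step function, which has a jump discontinuity at c = 0:

```python
def step(x):
    """A function with a type-I (jump) discontinuity at x = 0."""
    return 0.0 if x < 0 else 1.0

c = 0.0
# Approach c from below and from above: f(c-) = 0 while f(c+) = 1,
# so both one-sided limits exist but disagree (a jump discontinuity).
f_minus = step(c - 1e-9)   # -> 0.0
f_plus  = step(c + 1e-9)   # -> 1.0
```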
There are some functions for which these limits do not exist at all. For example, the function
g(x)=sin(1/x){\displaystyle g(x)=\sin(1/x)}
does not tend towards anything as x{\displaystyle x} approaches c=0{\displaystyle c=0}. The limits in this case are not infinite, but rather undefined: there is no value that g(x){\displaystyle g(x)} settles in on. Borrowing from complex analysis, this is sometimes called an essential singularity.
The possible cases at a given valuec{\displaystyle c}for the argument are as follows.
In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function.
Acoordinate singularityoccurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame. An example of this is the apparent singularity at the 90 degree latitude inspherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with ann-vectorrepresentation).
Incomplex analysis, there are several classes of singularities. These include the isolated singularities, the nonisolated singularities, and the branch points.
Suppose thatf{\displaystyle f}is a function that iscomplex differentiablein thecomplementof a pointa{\displaystyle a}in anopen subsetU{\displaystyle U}of thecomplex numbersC.{\displaystyle \mathbb {C} .}Then:
Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types:
Branch pointsare generally the result of amulti-valued function, such asz{\displaystyle {\sqrt {z}}}orlog(z),{\displaystyle \log(z),}which are defined within a certain limited domain so that the function can be made single-valued within the domain. The cut is a line or curve excluded from the domain to introduce a technical separation between discontinuous values of the function. When the cut is genuinely required, the function will have distinctly different values on each side of the branch cut. The shape of the branch cut is a matter of choice, even though it must connect two different branch points (such asz=0{\displaystyle z=0}andz=∞{\displaystyle z=\infty }forlog(z){\displaystyle \log(z)}) which are fixed in place.
A finite-time singularity occurs when one input variable is time, and an output variable increases towards infinity at a finite time. These are important in kinematics and partial differential equations – infinities do not occur physically, but the behavior near the singularity is often of interest. Mathematically, the simplest finite-time singularities are power laws for various exponents of the form x−α,{\displaystyle x^{-\alpha },} of which the simplest is hyperbolic growth, where the exponent is (negative) 1: x−1.{\displaystyle x^{-1}.} More precisely, in order to get a singularity at positive time as time advances (so the output grows to infinity), one instead uses (t0−t)−α{\displaystyle (t_{0}-t)^{-\alpha }} (using t for time, reversing direction to −t{\displaystyle -t} so that time increases to infinity, and shifting the singularity forward from 0 to a fixed time t0{\displaystyle t_{0}}).
An example would be the bouncing motion of an inelastic ball on a plane. If idealized motion is considered, in which the same fraction of kinetic energy is lost on each bounce, the frequency of bounces becomes infinite as the ball comes to rest in a finite time. Other examples of finite-time singularities include the various forms of the Painlevé paradox (for example, the tendency of a piece of chalk to skip when dragged across a blackboard), and how the precession rate of a coin spun on a flat surface accelerates towards infinity before abruptly stopping (as studied using the Euler's Disk toy).
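The bouncing-ball example can be sketched numerically (h0, k, and g below are illustrative values): if each bounce retains a fraction k of the kinetic energy, rebound heights decay geometrically, so the flight times form a geometric series with the finite sum √(2h0/g)·(1 + 2√k/(1 − √k)), even though the number of bounces is infinite.

```python
import math

def total_bounce_time(h0, k, g=9.81, n_bounces=10_000):
    """Total time for a ball dropped from height h0 that keeps a fraction k of
    its kinetic energy at each bounce.  Rebound heights decay geometrically,
    so infinitely many bounces fit into a finite total time."""
    t = math.sqrt(2 * h0 / g)                      # initial drop
    for n in range(1, n_bounces + 1):
        t += 2 * math.sqrt(2 * (k ** n) * h0 / g)  # up-and-down flight of bounce n
    return t

def total_bounce_time_closed(h0, k, g=9.81):
    """Closed-form geometric-series sum of the same bounce times."""
    s = math.sqrt(k)
    return math.sqrt(2 * h0 / g) * (1 + 2 * s / (1 - s))

t_num = total_bounce_time(1.0, 0.5)        # illustrative: 1 m drop, half energy kept
t_cf = total_bounce_time_closed(1.0, 0.5)  # finite, despite infinitely many bounces
```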
Hypothetical examples includeHeinz von Foerster's facetious "Doomsday's equation" (simplistic models yield infinite human population in finite time).
In algebraic geometry, a singularity of an algebraic variety is a point of the variety where the tangent space may not be regularly defined. The simplest examples of singularities are curves that cross themselves. But there are other types of singularities, like cusps. For example, the equation y2−x3= 0 defines a curve that has a cusp at the origin x=y= 0. One could define the x-axis as a tangent at this point, but this definition cannot be the same as the definition at other points. In fact, in this case, the x-axis is a "double tangent."
Foraffineandprojective varieties, the singularities are the points where theJacobian matrixhas arankwhich is lower than at other points of the variety.
An equivalent definition in terms ofcommutative algebramay be given, which extends toabstract varietiesandschemes: A point issingularif thelocal ring at this pointis not aregular local ring.
https://en.wikipedia.org/wiki/Mathematical_singularity
Anomalous diffusionis adiffusionprocess with anon-linearrelationship between themean squared displacement(MSD),⟨r2(τ)⟩{\displaystyle \langle r^{2}(\tau )\rangle }, and time. This behavior is in stark contrast toBrownian motion, the typical diffusion process described byAlbert EinsteinandMarian Smoluchowski, where the MSD islinearin time (namely,⟨r2(τ)⟩=2dDτ{\displaystyle \langle r^{2}(\tau )\rangle =2dD\tau }withdbeing the number of dimensions andDthediffusion coefficient).[1][2]
It has been found that equations describing normal diffusion are not capable of characterizing some complex diffusion processes, for instance diffusion processes in inhomogeneous or heterogeneous media, e.g. porous media. Fractional diffusion equations were introduced in order to characterize anomalous diffusion phenomena.
Examples of anomalous diffusion in nature have been observed in ultra-cold atoms,[3]harmonic spring-mass systems,[4]scalar mixing in theinterstellar medium,[5]telomeresin thenucleusof cells,[6]ion channelsin theplasma membrane,[7]colloidal particle in thecytoplasm,[8][9][10]moisture transport in cement-based materials,[11]and worm-likemicellar solutions.[12]
Unlike typical diffusion, anomalous diffusion is described by a power law,
⟨r2(τ)⟩=Kατα{\displaystyle \langle r^{2}(\tau )\rangle =K_{\alpha }\tau ^{\alpha }\,}
whereKα{\displaystyle K_{\alpha }}is the so-called generalized diffusion coefficient andτ{\displaystyle \tau }is the elapsed time. The classes of anomalous diffusions are classified as follows:
In 1926, using weather balloons,Lewis Fry Richardsondemonstrated that the atmosphere exhibits super-diffusion.[15]In a bounded system, the mixing length (which determines the scale of dominant mixing motions) is given by thevon Kármán constantaccording to the equationlm=κz{\displaystyle l_{m}={\kappa }z}, wherelm{\displaystyle l_{m}}is the mixing length,κ{\displaystyle {\kappa }}is the von Kármán constant, andz{\displaystyle z}is the distance to the nearest boundary.[16]Because the scale of motions in the atmosphere is not limited, as in rivers or the subsurface, a plume continues to experience larger mixing motions as it increases in size, which also increases its diffusivity, resulting in super-diffusion.[17]
The value of the exponent α{\displaystyle \alpha } given above allows one to measure the type of anomalous diffusion. There are many possible ways to mathematically define a stochastic process which then has the right kind of power law. Some models are given here.
Currently the most studied types of anomalous diffusion processes are those involving long-range correlations between the signals, continuous-time random walks (CTRW),[18] fractional Brownian motion (fBm), and diffusion in disordered media.[19]
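Given measured MSD data, the exponent α of the power law above can be estimated by a linear fit in log-log coordinates; a minimal sketch with illustrative sample values:

```python
import math

def diffusion_exponent(taus, msds):
    """Estimate the anomalous-diffusion exponent alpha from MSD(tau) = K * tau**alpha
    by linear regression in log-log coordinates (the slope is alpha)."""
    xs = [math.log(t) for t in taus]
    ys = [math.log(m) for m in msds]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

# Normal diffusion in d = 2 with D = 0.5 (illustrative): MSD = 2*d*D*tau = 2*tau.
taus = [1, 2, 4, 8, 16]
a_normal = diffusion_exponent(taus, [2 * t for t in taus])      # ~1.0 (diffusive)
a_super = diffusion_exponent(taus, [t ** 1.5 for t in taus])    # ~1.5 (super-diffusive)
```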
These processes have growing interest in cell biophysics, where the mechanism behind anomalous diffusion has direct physiological importance. Of particular interest, works by the groups of Eli Barkai, Maria Garcia-Parajo, Joseph Klafter, Diego Krapf, and Ralf Metzler have shown that the motion of molecules in live cells often shows a type of anomalous diffusion that breaks the ergodic hypothesis.[20][21][22] This type of motion requires novel formalisms for the underlying statistical physics, because approaches using the microcanonical ensemble and the Wiener–Khinchin theorem break down.
https://en.wikipedia.org/wiki/Anomalous_diffusion
Parameters:
α∈(0,2]{\displaystyle \alpha \in (0,2]} — stability parameter
β{\displaystyle \beta } ∈ [−1, 1] — skewness parameter (note that skewness is undefined)
c ∈ (0, ∞) — scale parameter
Support: x ∈ [μ, +∞) if α<1{\displaystyle \alpha <1} and β=1{\displaystyle \beta =1}; x ∈ (−∞, μ] if α<1{\displaystyle \alpha <1} and β=−1{\displaystyle \beta =-1}
Characteristic function: exp[itμ−|ct|α(1−iβsgn(t)Φ)]{\displaystyle \exp \!{\Big [}\;it\mu -|c\,t|^{\alpha }\,(1-i\beta \operatorname {sgn}(t)\Phi )\;{\Big ]}}
Inprobability theory, adistributionis said to bestableif alinear combinationof twoindependentrandom variableswith this distribution has the same distribution,up tolocationandscaleparameters. A random variable is said to bestableif its distribution is stable. The stable distribution family is also sometimes referred to as theLévy alpha-stable distribution, afterPaul Lévy, the first mathematician to have studied it.[1][2]
Of the four parameters defining the family, most attention has been focused on the stability parameter,α{\displaystyle \alpha }(see panel). Stable distributions have0<α≤2{\displaystyle 0<\alpha \leq 2}, with the upper bound corresponding to thenormal distribution, andα=1{\displaystyle \alpha =1}to theCauchy distribution. The distributions have undefinedvarianceforα<2{\displaystyle \alpha <2}, and undefinedmeanforα≤1{\displaystyle \alpha \leq 1}. The importance of stable probability distributions is that they are "attractors" for properly normed sums of independent and identically distributed (iid) random variables. The normal distribution defines a family of stable distributions. By the classicalcentral limit theoremthe properly normed sum of a set of random variables, each with finite variance, will tend toward a normal distribution as the number of variables increases. Without the finite variance assumption, the limit may be a stable distribution that is not normal.Mandelbrotreferred to such distributions as "stable Paretian distributions",[3][4][5]afterVilfredo Pareto. In particular, he referred to those maximally skewed in the positive direction with1<α<2{\displaystyle 1<\alpha <2}as "Pareto–Lévy distributions",[1]which he regarded as better descriptions of stock and commodity prices than normal distributions.[6]
A non-degenerate distributionis a stable distribution if it satisfies the following property:
Since thenormal distribution, theCauchy distribution, and theLévy distributionall have the above property, it follows that they are special cases of stable distributions.
Such distributions form a four-parameter family of continuousprobability distributionsparametrized by location and scale parametersμandc, respectively, and two shape parametersβ{\displaystyle \beta }andα{\displaystyle \alpha }, roughly corresponding to measures of asymmetry and concentration, respectively (see the figures).
The characteristic function φ(t){\displaystyle \varphi (t)} of any probability distribution is the Fourier transform of its probability density function f(x){\displaystyle f(x)}:[8]φ(t)=∫−∞∞f(x)eixtdx.{\displaystyle \varphi (t)=\int _{-\infty }^{\infty }f(x)e^{ixt}\,dx.} The density function is therefore the inverse Fourier transform of the characteristic function: f(x)=12π∫−∞∞φ(t)e−ixtdt.{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi (t)e^{-ixt}\,dt.}
Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be expressed analytically. A random variableXis called stable if its characteristic function can be written as[7][9]φ(t;α,β,c,μ)=exp(itμ−|ct|α(1−iβsgn(t)Φ)){\displaystyle \varphi (t;\alpha ,\beta ,c,\mu )=\exp \left(it\mu -|ct|^{\alpha }\left(1-i\beta \operatorname {sgn}(t)\Phi \right)\right)}wheresgn(t)is just thesignoftandΦ={tan(πα2)α≠1−2πlog|t|α=1{\displaystyle \Phi ={\begin{cases}\tan \left({\frac {\pi \alpha }{2}}\right)&\alpha \neq 1\\-{\frac {2}{\pi }}\log |t|&\alpha =1\end{cases}}}μ∈Ris a shift parameter,β∈[−1,1]{\displaystyle \beta \in [-1,1]}, called theskewness parameter, is a measure of asymmetry. Notice that in this context the usualskewnessis not well defined, as forα<2{\displaystyle \alpha <2}the distribution does not admit 2nd or highermoments, and the usual skewness definition is the 3rdcentral moment.
The reason this gives a stable distribution is that the characteristic function for the sum of two independent random variables equals the product of the two corresponding characteristic functions. Adding two random variables from a stable distribution gives something with the same values ofα{\displaystyle \alpha }andβ{\displaystyle \beta }, but possibly different values ofμandc.
Not every function is the characteristic function of a legitimate probability distribution (that is, one whosecumulative distribution functionis real and goes from 0 to 1 without decreasing), but the characteristic functions given above will be legitimate so long as the parameters are in their ranges. The value of the characteristic function at some valuetis the complex conjugate of its value at −tas it should be so that the probability distribution function will be real.
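The characteristic function above translates directly into code; a sketch (parameter values are illustrative) that also checks the conjugate-symmetry property just described:

```python
import cmath
import math

def stable_cf(t, alpha, beta, c, mu):
    """Stable characteristic function in the parameterization above.
    Phi = tan(pi*alpha/2) for alpha != 1, and -(2/pi)*log|t| for alpha == 1."""
    if t == 0:
        return 1.0 + 0.0j
    if alpha == 1:
        Phi = -(2.0 / math.pi) * math.log(abs(t))
    else:
        Phi = math.tan(math.pi * alpha / 2.0)
    sgn = 1.0 if t > 0 else -1.0
    return cmath.exp(1j * t * mu - abs(c * t) ** alpha * (1 - 1j * beta * sgn * Phi))

# phi(-t) is the complex conjugate of phi(t), as required for a real density:
z_pos = stable_cf(0.7, 1.5, 0.5, 2.0, 1.0)
z_neg = stable_cf(-0.7, 1.5, 0.5, 2.0, 1.0)
assert abs(z_pos.conjugate() - z_neg) < 1e-12
```

At alpha = 2 this reduces to a Gaussian characteristic function exp(iμt − c²t²), since tan(π) vanishes and β drops out.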
In the simplest caseβ=0{\displaystyle \beta =0}, the characteristic function is just astretched exponential function; the distribution is symmetric aboutμand is referred to as a (Lévy)symmetric alpha-stable distribution, often abbreviatedSαS.
Whenα<1{\displaystyle \alpha <1}andβ=1{\displaystyle \beta =1}, the distribution is supported on [μ, ∞).
The parameterc> 0 is a scale factor which is a measure of the width of the distribution whileα{\displaystyle \alpha }is the exponent or index of the distribution and specifies the asymptotic behavior of the distribution.
The parametrization of stable distributions is not unique. Nolan[10]tabulates 11 parametrizations seen in the literature and gives conversion formulas. The two most commonly used parametrizations are the one above (Nolan's "1") and the one immediately below (Nolan's "0").
The parametrization above is easiest to use for theoretical work, but its probability density is not continuous in the parameters atα=1{\displaystyle \alpha =1}.[11]A continuous parametrization, better for numerical work, is[7]φ(t;α,β,γ,δ)=exp(itδ−|γt|α(1−iβsgn(t)Φ)){\displaystyle \varphi (t;\alpha ,\beta ,\gamma ,\delta )=\exp \left(it\delta -|\gamma t|^{\alpha }\left(1-i\beta \operatorname {sgn}(t)\Phi \right)\right)}where:Φ={(|γt|1−α−1)tan(πα2)α≠1−2πlog|γt|α=1{\displaystyle \Phi ={\begin{cases}\left(|\gamma t|^{1-\alpha }-1\right)\tan \left({\tfrac {\pi \alpha }{2}}\right)&\alpha \neq 1\\-{\frac {2}{\pi }}\log |\gamma t|&\alpha =1\end{cases}}}
The ranges ofα{\displaystyle \alpha }andβ{\displaystyle \beta }are the same as before,γ(likec) should be positive, andδ(likeμ) should be real.
In either parametrization one can make a linear transformation of the random variable to get a random variable whose density isf(y;α,β,1,0){\displaystyle f(y;\alpha ,\beta ,1,0)}. In the first parametrization, this is done by defining the new variable:y={x−μγα≠1x−μγ−β2πlnγα=1{\displaystyle y={\begin{cases}{\frac {x-\mu }{\gamma }}&\alpha \neq 1\\{\frac {x-\mu }{\gamma }}-\beta {\frac {2}{\pi }}\ln \gamma &\alpha =1\end{cases}}}
For the second parametrization, simply usey=x−δγ{\displaystyle y={\frac {x-\delta }{\gamma }}}independent ofα{\displaystyle \alpha }. In the first parametrization, if the mean exists (that is,α>1{\displaystyle \alpha >1}) then it is equal toμ, whereas in the second parametrization when the mean exists it is equal toδ−βγtan(πα2).{\displaystyle \delta -\beta \gamma \tan \left({\tfrac {\pi \alpha }{2}}\right).}
A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a smooth (infinitely differentiable) density function.[7]Iff(x;α,β,c,μ){\displaystyle f(x;\alpha ,\beta ,c,\mu )}denotes the density ofXandYis the sum of independent copies ofX:Y=∑i=1Nki(Xi−μ){\displaystyle Y=\sum _{i=1}^{N}k_{i}(X_{i}-\mu )}thenYhas the density1sf(y/s;α,β,c,0){\displaystyle {\tfrac {1}{s}}f(y/s;\alpha ,\beta ,c,0)}withs=(∑i=1N|ki|α)1α{\displaystyle s=\left(\sum _{i=1}^{N}|k_{i}|^{\alpha }\right)^{\frac {1}{\alpha }}}
The asymptotic behavior is described, forα<2{\displaystyle \alpha <2}, by:[7]f(x)∼1|x|1+α(cα(1+sgn(x)β)sin(πα2)Γ(α+1)π){\displaystyle f(x)\sim {\frac {1}{|x|^{1+\alpha }}}\left(c^{\alpha }(1+\operatorname {sgn}(x)\beta )\sin \left({\frac {\pi \alpha }{2}}\right){\frac {\Gamma (\alpha +1)}{\pi }}\right)}where Γ is theGamma function(except that whenα≥1{\displaystyle \alpha \geq 1}andβ=±1{\displaystyle \beta =\pm 1}, the tail does not vanish to the left or right, resp., ofμ, although the above expression is 0). This "heavy tail" behavior causes the variance of stable distributions to be infinite for allα<2{\displaystyle \alpha <2}. This property is illustrated in the log–log plots below.
Whenα=2{\displaystyle \alpha =2}, the distribution is Gaussian (see below), with tails asymptotic to exp(−x2/4c2)/(2c√π).
Whenα<1{\displaystyle \alpha <1}andβ=1{\displaystyle \beta =1}, the distribution is supported on [μ, ∞). This family is calledone-sided stable distribution.[12]Its standard distribution (μ= 0) is defined as
Letq=exp(−iαπ/2){\displaystyle q=\exp(-i\alpha \pi /2)}, its characteristic function isφ(t;α)=exp(−q|t|α){\displaystyle \varphi (t;\alpha )=\exp \left(-q|t|^{\alpha }\right)}. Thus the integral form of its PDF is (note:Im(q)<0{\displaystyle \operatorname {Im} (q)<0})Lα(x)=1πℜ[∫−∞∞eitxe−q|t|αdt]=2π∫0∞e−Re(q)tαsin(tx)sin(−Im(q)tα)dt,or=2π∫0∞e−Re(q)tαcos(tx)cos(Im(q)tα)dt.{\displaystyle {\begin{aligned}L_{\alpha }(x)&={\frac {1}{\pi }}\Re \left[\int _{-\infty }^{\infty }e^{itx}e^{-q|t|^{\alpha }}\,dt\right]\\&={\frac {2}{\pi }}\int _{0}^{\infty }e^{-\operatorname {Re} (q)\,t^{\alpha }}\sin(tx)\sin(-\operatorname {Im} (q)\,t^{\alpha })\,dt,{\text{ or }}\\&={\frac {2}{\pi }}\int _{0}^{\infty }e^{-{\text{Re}}(q)\,t^{\alpha }}\cos(tx)\cos(\operatorname {Im} (q)\,t^{\alpha })\,dt.\end{aligned}}}
The double-sine integral is more effective for very smallx{\displaystyle x}.
Consider the Lévy sumY=∑i=1NXi{\textstyle Y=\sum _{i=1}^{N}X_{i}}whereXi∼Lα(x){\textstyle X_{i}\sim L_{\alpha }(x)}, thenYhas the density1νLα(xν){\textstyle {\frac {1}{\nu }}L_{\alpha }\left({\frac {x}{\nu }}\right)}whereν=N1/α{\textstyle \nu =N^{1/\alpha }}. Setx=1{\textstyle x=1}to arrive at thestable count distribution.[13]Its standard distribution is defined as
The stable count distribution is theconjugate priorof the one-sided stable distribution. Its location-scale family is defined as
It is also a one-sided distribution supported on[ν0,∞){\displaystyle [\nu _{0},\infty )}. The location parameterν0{\displaystyle \nu _{0}}is the cut-off location, whileθ{\displaystyle \theta }defines its scale.
Whenα=12{\textstyle \alpha ={\frac {1}{2}}},L12(x){\textstyle L_{\frac {1}{2}}(x)}is theLévy distributionwhich is an inverse gamma distribution. ThusN12(ν;ν0,θ){\displaystyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )}is a shiftedgamma distributionof shape 3/2 and scale4θ{\displaystyle 4\theta },
Its mean isν0+6θ{\displaystyle \nu _{0}+6\theta }and its standard deviation is24θ{\displaystyle {\sqrt {24}}\theta }. It is hypothesized thatVIXis distributed likeN12(ν;ν0,θ){\textstyle {\mathfrak {N}}_{\frac {1}{2}}(\nu ;\nu _{0},\theta )}withν0=10.4{\displaystyle \nu _{0}=10.4}andθ=1.6{\displaystyle \theta =1.6}(See Section 7 of[13]). Thus thestable count distributionis the first-order marginal distribution of a volatility process. In this context,ν0{\displaystyle \nu _{0}}is called the "floor volatility".
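The stated mean and standard deviation can be checked from standard gamma-distribution raw moments (E[Zⁿ] = scaleⁿ·Γ(a+n)/Γ(a) for a Gamma(a, scale) variable), using the VIX example values from the text (shape 3/2, scale 4θ):

```python
import math

a, theta, nu0 = 1.5, 1.6, 10.4   # shape 3/2; VIX example: nu0 = 10.4, theta = 1.6
scale = 4 * theta

# Raw moments of a Gamma(a, scale) variable: E[Z^n] = scale**n * Gamma(a+n)/Gamma(a)
ez = scale * math.gamma(a + 1) / math.gamma(a)
ez2 = scale ** 2 * math.gamma(a + 2) / math.gamma(a)

mean = nu0 + ez                   # = nu0 + 6*theta = 20.0 for these values
std = math.sqrt(ez2 - ez ** 2)    # = sqrt(24)*theta, about 7.84 here
```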
Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution, (Section 2.4 of[13])
Letx=1/ν{\displaystyle x=1/\nu }, and one can decompose the integral on the left hand side as aproduct distributionof a standardLaplace distributionand a standard stable count distribution,
This is called the "lambda decomposition" (See Section 4 of[13]) since the right hand side was named as "symmetric lambda distribution" in Lihn's former works. However, it has several more popular names such as "exponential power distribution", or the "generalized error/normal distribution", often referred to whenα>1{\displaystyle \alpha >1}.
The n-th moment ofNα(ν){\displaystyle {\mathfrak {N}}_{\alpha }(\nu )}is the−(n+1){\displaystyle -(n+1)}-th moment ofLα(x){\displaystyle L_{\alpha }(x)}, and all positive moments are finite.
Stable distributions are closed under convolution for a fixed value ofα{\displaystyle \alpha }. Since convolution is equivalent to multiplication of the Fourier-transformed function, it follows that the product of two stable characteristic functions with the sameα{\displaystyle \alpha }will yield another such characteristic function. The product of two stable characteristic functions is given by:exp(itμ1+itμ2−|c1t|α−|c2t|α+iβ1|c1t|αsgn(t)Φ+iβ2|c2t|αsgn(t)Φ){\displaystyle \exp \left(it\mu _{1}+it\mu _{2}-|c_{1}t|^{\alpha }-|c_{2}t|^{\alpha }+i\beta _{1}|c_{1}t|^{\alpha }\operatorname {sgn}(t)\Phi +i\beta _{2}|c_{2}t|^{\alpha }\operatorname {sgn}(t)\Phi \right)}
SinceΦis not a function of theμ,corβ{\displaystyle \beta }variables it follows that these parameters for the convolved function are given by:μ=μ1+μ2c=(c1α+c2α)1αβ=β1c1α+β2c2αc1α+c2α{\displaystyle {\begin{aligned}\mu &=\mu _{1}+\mu _{2}\\c&=\left(c_{1}^{\alpha }+c_{2}^{\alpha }\right)^{\frac {1}{\alpha }}\\[6pt]\beta &={\frac {\beta _{1}c_{1}^{\alpha }+\beta _{2}c_{2}^{\alpha }}{c_{1}^{\alpha }+c_{2}^{\alpha }}}\end{aligned}}}
In each case, it can be shown that the resulting parameters lie within the required intervals for a stable distribution.
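A sketch (with illustrative parameter values, for the α ≠ 1 case) verifying that the product of two stable characteristic functions equals the characteristic function with the combined parameters above:

```python
import cmath
import math

def cf(t, alpha, beta, c, mu):
    """Stable characteristic function (alpha != 1 case) from the section above."""
    Phi = math.tan(math.pi * alpha / 2.0)
    sgn = math.copysign(1.0, t)
    return cmath.exp(1j * t * mu - abs(c * t) ** alpha * (1 - 1j * beta * sgn * Phi))

def convolve_params(alpha, p1, p2):
    """Parameters (beta, c, mu) of X1 + X2 for independent stable X1, X2 sharing alpha."""
    (b1, c1, m1), (b2, c2, m2) = p1, p2
    mu = m1 + m2
    c = (c1 ** alpha + c2 ** alpha) ** (1.0 / alpha)
    beta = (b1 * c1 ** alpha + b2 * c2 ** alpha) / (c1 ** alpha + c2 ** alpha)
    return beta, c, mu

alpha = 1.5                                   # illustrative values
p1, p2 = (0.3, 1.0, 0.0), (-0.5, 2.0, 1.0)    # (beta, c, mu) for X1 and X2
beta, c, mu = convolve_params(alpha, p1, p2)

t = 0.7
lhs = cf(t, alpha, *p1) * cf(t, alpha, *p2)   # product of the two CFs
rhs = cf(t, alpha, beta, c, mu)               # CF with the combined parameters
assert abs(lhs - rhs) < 1e-12
```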
The Generalized Central Limit Theorem (GCLT) was an effort of multiple mathematicians (Bernstein, Lindeberg, Lévy, Feller, Kolmogorov, and others) over the period from 1920 to 1937.[14] The first published complete proof (in French) of the GCLT was in 1937 by Paul Lévy.[15] An English language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov's 1954 book.[16]
The statement of the GCLT is as follows:[10]
In other words, if sums of independent, identically distributed random variables converge in distribution to someZ, thenZmust be a stable distribution.
There is no general analytic solution for the form off(x). There are, however, three special cases which can be expressed in terms ofelementary functionsas can be seen by inspection of thecharacteristic function:[7][9][17]
Note that the above three distributions are also connected, in the following way: A standard Cauchy random variable can be viewed as amixtureof Gaussian random variables (all with mean zero), with the variance being drawn from a standard Lévy distribution. And in fact this is a special case of a more general theorem (See p. 59 of[18]) which allows any symmetric alpha-stable distribution to be viewed in this way (with the alpha parameter of the mixture distribution equal to twice the alpha parameter of the mixing distribution—and the beta parameter of the mixing distribution always equal to one).
A general closed form expression for stable PDFs with rational values ofα{\displaystyle \alpha }is available in terms ofMeijer G-functions.[19]Fox H-Functions can also be used to express the stable probability density functions. For simple rational numbers, the closed form expression is often in terms of less complicatedspecial functions. Several closed form expressions having rather simple expressions in terms of special functions are available. In the table below, PDFs expressible by elementary functions are indicated by anEand those that are expressible by special functions are indicated by ans.[18]
Some of the special cases are known by particular names:
Also, in the limit as c approaches zero or as α approaches zero, the distribution will approach a Dirac delta function δ(x − μ).
The stable distribution can be restated as the real part of a simpler integral:[20]f(x;α,β,c,μ)=1πℜ[∫0∞eit(x−μ)e−(ct)α(1−iβΦ)dt].{\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\int _{0}^{\infty }e^{it(x-\mu )}e^{-(ct)^{\alpha }(1-i\beta \Phi )}\,dt\right].}
Expressing the second exponential as a Taylor series leads to: f(x;α,β,c,μ)=1πℜ[∫0∞eit(x−μ)∑n=0∞(−qtα)nn!dt]{\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\int _{0}^{\infty }e^{it(x-\mu )}\sum _{n=0}^{\infty }{\frac {(-qt^{\alpha })^{n}}{n!}}\,dt\right]} where q=cα(1−iβΦ){\displaystyle q=c^{\alpha }(1-i\beta \Phi )}. Reversing the order of integration and summation, and carrying out the integration, yields: f(x;α,β,c,μ)=1πℜ[∑n=1∞(−q)nn!(ix−μ)αn+1Γ(αn+1)]{\displaystyle f(x;\alpha ,\beta ,c,\mu )={\frac {1}{\pi }}\Re \left[\sum _{n=1}^{\infty }{\frac {(-q)^{n}}{n!}}\left({\frac {i}{x-\mu }}\right)^{\alpha n+1}\Gamma (\alpha n+1)\right]} which is valid for x ≠ μ and converges for appropriate values of the parameters. (Note that the n = 0 term, which yields a delta function in x − μ, has therefore been dropped.) Expressing the first exponential as a series yields another series, in positive powers of x − μ, which is generally less useful.
For the one-sided stable distribution, the above series expansion needs to be modified, since q=exp⁡(−iαπ/2){\displaystyle q=\exp(-i\alpha \pi /2)} and qiα=1{\displaystyle qi^{\alpha }=1}. There is no real part to sum. Instead, the integral of the characteristic function should be carried out on the negative axis, which yields:[21][12]Lα(x)=1πℜ[∑n=1∞(−q)nn!(−ix)αn+1Γ(αn+1)]=1π∑n=1∞−sin(n(α+1)π)n!(1x)αn+1Γ(αn+1){\displaystyle {\begin{aligned}L_{\alpha }(x)&={\frac {1}{\pi }}\Re \left[\sum _{n=1}^{\infty }{\frac {(-q)^{n}}{n!}}\left({\frac {-i}{x}}\right)^{\alpha n+1}\Gamma (\alpha n+1)\right]\\&={\frac {1}{\pi }}\sum _{n=1}^{\infty }{\frac {-\sin(n(\alpha +1)\pi )}{n!}}\left({\frac {1}{x}}\right)^{\alpha n+1}\Gamma (\alpha n+1)\end{aligned}}}
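As a numerical sanity check on this series (a sketch, not from the source): under this normalization the α = 1/2 case has the known closed form L₁/₂(x) = x^(−3/2) e^(−1/(4x)) / (2√π), so the truncated series can be compared against it directly.

```python
import math

def levy_series(x, alpha, nmax=80):
    """One-sided stable PDF via the convergent series in 1/x (0 < alpha < 1)."""
    total = 0.0
    for n in range(1, nmax + 1):
        total += (-math.sin(n * (alpha + 1.0) * math.pi) / math.factorial(n)
                  * x ** -(alpha * n + 1.0) * math.gamma(alpha * n + 1.0))
    return total / math.pi

def levy_closed(x):
    """Closed form for alpha = 1/2 (Laplace transform exp(-sqrt(s)))."""
    return x ** -1.5 * math.exp(-1.0 / (4.0 * x)) / (2.0 * math.sqrt(math.pi))

for x in (1.0, 2.0, 5.0):
    print(x, levy_series(x, 0.5), levy_closed(x))
```

The series converges for all x > 0 when α < 1, since Γ(αn + 1) grows much more slowly than n!.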
In addition to the existing tests for normality and subsequent parameter estimation, a general method which relies on quantiles was developed by McCulloch; it works for both symmetric and skew stable distributions and stability parameter 0.5<α≤2{\displaystyle 0.5<\alpha \leq 2}.[22]
There are no analytic expressions for the inverse F−1(x){\displaystyle F^{-1}(x)} nor the CDF F(x){\displaystyle F(x)} itself, so the inversion method cannot be used to generate stable-distributed variates.[11][13] Other standard approaches, like the rejection method, would require tedious computations. An elegant and efficient solution was proposed by Chambers, Mallows and Stuck (CMS),[23] who noticed that a certain integral formula[24] yielded the following algorithm:[25]
This algorithm yields a random variable X∼Sα(β,1,0){\displaystyle X\sim S_{\alpha }(\beta ,1,0)}. For a detailed proof, see [26].
To simulate a stable random variable for all admissible values of the parameters α{\displaystyle \alpha }, c{\displaystyle c}, β{\displaystyle \beta } and μ{\displaystyle \mu }, use the following property: if X∼Sα(β,1,0){\displaystyle X\sim S_{\alpha }(\beta ,1,0)} then Y={cX+μα≠1cX+2πβclogc+μα=1{\displaystyle Y={\begin{cases}cX+\mu &\alpha \neq 1\\cX+{\frac {2}{\pi }}\beta c\log c+\mu &\alpha =1\end{cases}}} is Sα(β,c,μ){\displaystyle S_{\alpha }(\beta ,c,\mu )}. For α=2{\displaystyle \alpha =2} (and β=0{\displaystyle \beta =0}) the CMS method reduces to the well known Box–Muller transform for generating Gaussian random variables.[27] While other approaches have been proposed in the literature, including application of Bergström[28] and LePage[29] series expansions, the CMS method is regarded as the fastest and the most accurate.
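The CMS recipe draws V uniform on (−π/2, π/2) and W standard exponential and combines them in closed form. The sketch below is an illustrative implementation (in the commonly used Weron-style formulation, not the authors' original code) for S_α(β, 1, 0):

```python
import numpy as np

def cms_stable(alpha, beta, size, rng):
    """Sample X ~ S_alpha(beta, 1, 0) by the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)   # V ~ U(-pi/2, pi/2)
    w = rng.exponential(1.0, size)                 # W ~ Exp(1)
    if alpha != 1:
        b = np.arctan(beta * np.tan(np.pi * alpha / 2)) / alpha
        s = (1 + beta**2 * np.tan(np.pi * alpha / 2) ** 2) ** (1 / (2 * alpha))
        return (s * np.sin(alpha * (v + b)) / np.cos(v) ** (1 / alpha)
                * (np.cos(v - alpha * (v + b)) / w) ** ((1 - alpha) / alpha))
    # alpha == 1 branch
    return (2 / np.pi) * ((np.pi / 2 + beta * v) * np.tan(v)
            - beta * np.log((np.pi / 2) * w * np.cos(v) / (np.pi / 2 + beta * v)))

rng = np.random.default_rng(42)
x = cms_stable(2.0, 0.0, 200_000, rng)   # S_2(0,1,0) is N(0, 2)
print(x.mean(), x.var())
```

For α = 2 the formula collapses to X = 2 sin(V)√W, whose variance is 2, consistent with the Box–Muller reduction noted above; for α = 1, β = 0 it collapses to tan(V), a standard Cauchy.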
Stable distributions owe their importance in both theory and practice to the generalization of the central limit theorem to random variables without second (and possibly first) order moments and to the accompanying self-similarity of the stable family. It was the seeming departure from normality, along with the demand for a self-similar model for financial data (i.e. the shape of the distribution for yearly asset price changes should resemble that of the constituent daily or monthly price changes), that led Benoît Mandelbrot to propose that cotton prices follow an alpha-stable distribution with α{\displaystyle \alpha } equal to 1.7.[6] Lévy distributions are frequently found in analysis of critical behavior and financial data.[9][30]
They are also found in spectroscopy, as a general expression for a quasistatically pressure-broadened spectral line.[20]
The Lévy distribution of solar flare waiting times (the time between flare events) was demonstrated for CGRO BATSE hard X-ray solar flares in December 2001. Analysis of the Lévy statistical signature revealed that two different memory signatures were evident: one related to the solar cycle, and a second whose origin appears to be associated with localized (or a combination of localized) solar active region effects.[31]
A number of cases of analytically expressible stable distributions are known. Let the stable distribution be expressed by f(x;α,β,c,μ){\displaystyle f(x;\alpha ,\beta ,c,\mu )}; then:
https://en.wikipedia.org/wiki/L%C3%A9vy_alpha-stable_distribution
In probability theory, Kolmogorov's zero–one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, namely a tail event of independent σ-algebras, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one.
Tail events are defined in terms of countably infinite families of σ-algebras. For illustrative purposes, we present here the special case in which each sigma algebra is generated by a random variableXk{\displaystyle X_{k}}fork∈N{\displaystyle k\in \mathbb {N} }. LetF{\displaystyle {\mathcal {F}}}be the sigma-algebra generated jointly by all of theXk{\displaystyle X_{k}}. Then, atail eventF∈F{\displaystyle F\in {\mathcal {F}}}is an event the occurrence of which cannot depend on the outcome of a finite subfamily of these random variables. (Note:F{\displaystyle F}belonging toF{\displaystyle {\mathcal {F}}}implies that membership inF{\displaystyle F}is uniquely determined by the values of theXk{\displaystyle X_{k}}, but the latter condition is strictly weaker and does not suffice to prove the zero-one law.) For example, the event that the sequence of theXk{\displaystyle X_{k}}converges, and the event that its sum converges are both tail events. If theXk{\displaystyle X_{k}}are, for example, all Bernoulli-distributed, then the event that there are infinitely manyk∈N{\displaystyle k\in \mathbb {N} }such thatXk=Xk+1=⋯=Xk+100=1{\displaystyle X_{k}=X_{k+1}=\dots =X_{k+100}=1}is a tail event. If eachXk{\displaystyle X_{k}}models the outcome of thek{\displaystyle k}-th coin toss in a modeled, infinite sequence of coin tosses, this means that a sequence of 100 consecutive heads occurring infinitely many times is a tail event in this model.
Tail events are precisely those events whose occurrence can still be determined if an arbitrarily large but finite initial segment of theXk{\displaystyle X_{k}}is removed.
In many situations, it can be easy to apply Kolmogorov's zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determinewhichof these two extreme values is the correct one.
A more general statement of Kolmogorov's zero–one law holds for sequences of independent σ-algebras. Let (Ω, F, P) be a probability space and let Fn be a sequence of σ-algebras contained in F. Let Gn=σ(Fn,Fn+1,…){\displaystyle G_{n}=\sigma (F_{n},F_{n+1},\dots )}
be the smallest σ-algebra containing Fn, Fn+1, .... The terminal σ-algebra of the Fn is defined as T((Fn)n∈N)=⋂n=1∞Gn{\displaystyle {\mathcal {T}}((F_{n})_{n\in \mathbb {N} })=\bigcap _{n=1}^{\infty }G_{n}}.
Kolmogorov's zero–one law asserts that, if the Fn are stochastically independent, then for any event E∈T((Fn)n∈N){\displaystyle E\in {\mathcal {T}}((F_{n})_{n\in \mathbb {N} })}, one has either P(E) = 0 or P(E) = 1.
The statement of the law in terms of random variables is obtained from the latter by taking each Fn to be the σ-algebra generated by the random variable Xn. A tail event is then by definition an event which is measurable with respect to the σ-algebra generated by all Xn, but which is independent of any finite number of Xn. That is, a tail event is precisely an element of the terminal σ-algebra ⋂n=1∞Gn{\displaystyle \textstyle {\bigcap _{n=1}^{\infty }G_{n}}}.
An invertible measure-preserving transformation on a standard probability space that obeys the 0-1 law is called a Kolmogorov automorphism. All Bernoulli automorphisms are Kolmogorov automorphisms, but not vice versa. The presence of an infinite cluster in the context of percolation theory also obeys the 0-1 law.
Let {Xn}n{\displaystyle \{X_{n}\}_{n}} be a sequence of independent random variables; then the event {limn→∞∑k=1nXkexists}{\displaystyle \left\{\lim _{n\rightarrow \infty }\sum _{k=1}^{n}X_{k}{\text{ exists }}\right\}} is a tail event. Thus, by the Kolmogorov zero–one law, it has either probability 0 or probability 1. Note that independence is required for the zero–one law to hold: without independence, we can consider a sequence that is either (0,0,0,…){\displaystyle (0,0,0,\dots )} or (1,1,1,…){\displaystyle (1,1,1,\dots )} with probability 12{\displaystyle {\frac {1}{2}}} each. In this case the sum converges with probability 12{\displaystyle {\frac {1}{2}}}.
https://en.wikipedia.org/wiki/Kolmogorov%27s_zero%E2%80%93one_law
Mass customization makes use of flexible computer-aided systems to produce custom products. Such systems combine the low unit costs of mass production processes with the flexibility of individual customization.
Mass customization is the new frontier in business for both manufacturing and service industries. At its core is a tremendous increase in variety and customization without a corresponding increase in costs. At its limit, it is the mass production of individually customized goods and services. At its best, it provides strategic advantage and economic value.[1]
Mass customization is a product design strategy and is currently used with both delayed differentiation and modular design to enhance the value delivered to customers.[2]
Mass customization is the method of "effectively postponing the task of differentiating a product for a specific customer until the latest possible point in the supply network".[3]
From a collaborative engineering perspective, mass customization can be viewed as collaborative efforts between customers and manufacturers, who have different sets of priorities and need to jointly search for solutions that best match customers' individual specific needs with manufacturers' customization capabilities.[4]
The concept of mass customization is attributed to Stan Davis in Future Perfect,[5][6] and was defined by Tseng & Jiao (2001, p. 685) as "producing goods and services to meet individual customers' needs with near mass production efficiency". Kaplan & Haenlein (2006) concurred, calling it "a strategy that creates value by some form of company-customer interaction at the fabrication and assembly stage of the operations level to create customized products with production cost and monetary price similar to those of mass-produced products". Similarly, McCarthy (2004, p. 348) highlights that mass customization involves balancing operational drivers, defining it as "the capability to manufacture a relatively high volume of product options for a relatively large market (or collection of niche markets) that demands customization, without tradeoffs in cost, delivery and quality".
Many implementations of mass customization are operational today, such as software-based product configurators that make it possible to add or change functionalities of a core product or to build fully custom enclosures from scratch. This degree of mass customization, however, has seen only limited adoption. If an enterprise's marketing department offers individual products (atomic market fragmentation), it does not necessarily mean that a product is produced individually, but rather that similar variants of the same mass-produced item are available. Additionally, in a fashion context, existing technologies for predicting clothing size from user input data have been shown to be not yet suitable enough for mass customization purposes.[7]
Companies that have succeeded with mass-customization business models tend to supply purely electronic products.[8]However, these are not true "mass customizers" in the original sense, since they do not offer an alternative to mass production of material goods.
Pine (1993) described four types of mass customization:
He suggested a business model, "the 8.5-figure-path", a process going from invention to mass production to continuous improvement to mass customization and back to invention.
Kamis, Koufaris and Stern (2008) conducted experiments to test the impacts of mass customization when postponed to the retail stage in online shopping. They found that users perceive greater usefulness and enjoyment with a mass customization interface than with a more typical shopping interface, particularly in a task of moderate complexity.[10]
https://en.wikipedia.org/wiki/Mass_customization
In seismology, the Gutenberg–Richter law[1] (GR law) expresses the relationship between the magnitude and the total number of earthquakes, in any given region and time period, of at least that magnitude: log10⁡N=a−bM{\displaystyle \log _{10}N=a-bM}
or
N=10a−bM{\displaystyle N=10^{a-bM}}
where N is the number of events with magnitude at least M, and a and b are fit parameters.
Since magnitude is logarithmic, this is an instance of thePareto distribution.
The Gutenberg–Richter law is also widely used for acoustic emission analysis, due to the close resemblance of the acoustic emission phenomenon to seismogenesis.
The relationship between earthquake magnitude and frequency was first proposed by Charles Francis Richter and Beno Gutenberg in a 1944 paper studying earthquakes in California,[2][3] and generalised in a worldwide study in 1949.[4] This relationship between event magnitude and frequency of occurrence is remarkably common, although the values of a and b may vary significantly from region to region or over time.
The parameter b (commonly referred to as the "b-value") is commonly close to 1.0 in seismically active regions. This means that for a given frequency of magnitude 4.0 or larger events, there will be 10 times as many magnitude 3.0 or larger quakes and 100 times as many magnitude 2.0 or larger quakes. There is some variation of b-values, in the approximate range of 0.5 to 2, depending on the source environment of the region.[5] A notable example of this is during earthquake swarms, when b can become as high as 2.5, indicating a very high proportion of small earthquakes to large ones.
There is debate concerning the interpretation of some observed spatial and temporal variations of b-values. The most frequently cited factors to explain these variations are: the stress applied to the material,[6] the depth,[7] the focal mechanism,[8] the strength heterogeneity of the material,[9] and the proximity of macro-failure. The b-value decrease observed prior to the failure of samples deformed in the laboratory[10] has led to the suggestion that this is a precursor to major macro-failure.[11] Statistical physics provides a theoretical framework for explaining both the steadiness of the Gutenberg–Richter law for large catalogs and its evolution when macro-failure is approached, but application to earthquake forecasting is currently out of reach.[12] Alternatively, a b-value significantly different from 1.0 may suggest a problem with the data set, e.g. that it is incomplete or contains errors in calculating magnitude.
There is an apparent b-value decrease for smaller-magnitude event ranges in all empirical earthquake catalogues. This effect is described as "roll-off" of the b-value, because the plot of the logarithmic version of the GR law becomes flatter at the low-magnitude end. This may in large part be caused by incompleteness of any data set due to the inability to detect and characterize small events; that is, many low-magnitude earthquakes are not catalogued because fewer stations detect and record them, owing to decreasing instrumental signal-to-noise levels. Some modern models of earthquake dynamics, however, predict a physical roll-off in the earthquake size distribution.[13]
The a-value represents the total seismicity rate of the region. This is more easily seen when the GR law is expressed in terms of the total number of events: N=10a×10−bM{\displaystyle N=10^{a}\times 10^{-bM}}
where 10a{\displaystyle 10^{a}\ } is
the total number of events (above M = 0). Since 10a{\displaystyle 10^{a}\ } is the total number of events, 10−bM{\displaystyle 10^{-bM}\ } must be the probability of those events.
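Since the GR law implies that magnitudes above a completeness threshold M_c are exponentially distributed, b can be recovered from a catalog with the standard maximum-likelihood (Aki) estimator b = log₁₀(e) / (M̄ − M_c). The sketch below, with illustrative synthetic data (not from the source), demonstrates this:

```python
import numpy as np

def estimate_b(mags, m_c):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value."""
    mags = mags[mags >= m_c]
    return np.log10(np.e) / (mags.mean() - m_c)

# Synthetic catalog: under the GR law, M - M_c ~ Exponential(rate = b ln 10).
rng = np.random.default_rng(0)
b_true, m_c = 1.0, 2.0
mags = m_c + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=100_000)

print(estimate_b(mags, m_c))   # close to the true b-value of 1.0
```

With b = 1, the estimator simply inverts the mean magnitude excess, reflecting the tenfold drop in counts per unit magnitude.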
Modern attempts to understand the law involve theories of self-organized criticality or self-similarity.
New models show a generalization of the original Gutenberg–Richter model. Among these is the one released by Oscar Sotolongo-Costa and A. Posadas in 2004,[14] of which R. Silva et al. presented the following modified form in 2006,[15]
where N is the total number of events, a is a proportionality constant, and q represents the non-extensivity parameter introduced by Constantino Tsallis to characterize systems not explained by the Boltzmann–Gibbs statistical form for equilibrium physical systems.
An article published by N. V. Sarlis, E. S. Skordas, and P. A. Varotsos[16] shows that above some magnitude threshold this equation reduces to the original Gutenberg–Richter form with
In addition, another generalization was obtained from the solution of the generalized logistic equation.[17] In this model, values of the parameter b were found for events recorded in the Central Atlantic, the Canary Islands, the Magellan Mountains and the Sea of Japan. The generalized logistic equation was applied to acoustic emission in concrete by N. Burud and J. M. Chandra Kishen.[18] Burud showed that the b-value obtained from the generalized logistic equation increases monotonically with damage, and referred to it as a damage-compliant b-value.
Another generalization was published using Bayesian statistical techniques,[19] from which an alternative form for the Gutenberg–Richter parameter b is presented. The model was applied to intense earthquakes that occurred in Chile between 2010 and 2016.
https://en.wikipedia.org/wiki/Gutenberg%E2%80%93Richter_law
The Pareto principle (also known as the 80/20 rule, the law of the vital few and the principle of factor sparsity[1][2]) states that for many outcomes, roughly 80% of consequences come from 20% of causes (the "vital few").[1]
In 1941, management consultant Joseph M. Juran developed the concept in the context of quality control and improvement after reading the works of Italian sociologist and economist Vilfredo Pareto, who wrote in 1906 about the 80/20 connection while teaching at the University of Lausanne.[3] In his first work, Cours d'économie politique, Pareto showed that approximately 80% of the land in the Kingdom of Italy was owned by 20% of the population. The Pareto principle is only tangentially related to Pareto efficiency.
Mathematically, the 80/20 rule is roughly described by a power law distribution (also known as a Pareto distribution) for a particular set of parameters. Many natural phenomena are distributed according to power law statistics.[4] It is an adage of business management that "80% of sales come from 20% of clients."[5]
In 1941, Joseph M. Juran, a Romanian-born American engineer, came across the work of Italian polymath Vilfredo Pareto. Pareto had noted that approximately 80% of Italy's land was owned by 20% of the population.[6][4] Juran applied this observation, that 80% of an issue is caused by 20% of the causes, to quality problems. Later in his career, Juran preferred to describe this as "the vital few and the useful many", to highlight that the contribution of the remaining 80% should not be discarded entirely.[7]
The demonstration of the Pareto principle is explained by a large proportion of process variation being associated with a small proportion of process variables.[2] This is a special case of the wider phenomenon of Pareto distributions. If the Pareto index α, which is one of the parameters characterizing a Pareto distribution, is chosen as α = log4 5 ≈ 1.16, then one has 80% of effects coming from 20% of causes.[8]
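This choice of index can be checked directly: for a Pareto distribution with index α, the share of the total held by the top fraction p of the population is p^(1 − 1/α), so α = log₄ 5 makes the top 20% account for exactly 80% (a quick sketch, not from the source):

```python
import math

alpha = math.log(5) / math.log(4)          # Pareto index log_4(5), about 1.161

def top_share(p, alpha):
    """Share of the total accounted for by the top fraction p under Pareto(alpha)."""
    return p ** (1.0 - 1.0 / alpha)

print(alpha, top_share(0.20, alpha))       # the top 20% hold 80% of the total
```

The identity is exact: 0.2^(1 − 1/α) = exp(−ln 1.25) = 0.8 when α = ln 5 / ln 4.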
The term 80/20 is only a shorthand for the general principle at work. In individual cases, the distribution could be nearer to 90/5 or 70/40. Note that there is no need for the two numbers to add up to 100, as they are measures of different things. The Pareto principle is an illustration of a "power law" relationship, which also occurs in phenomena such as bush fires and earthquakes.[9] Because it is self-similar over a wide range of magnitudes, it produces outcomes completely different from normal or Gaussian distribution phenomena. This fact explains the frequent breakdowns of sophisticated financial instruments, which are modeled on the assumption that a Gaussian relationship is appropriate to something like stock price movements.[10]
Using the "A:B" notation (for example, 0.8:0.2) and with A + B = 1, inequality measures like the Gini index (G) and the Hoover index (H) can be computed. In this case both are the same:
Pareto analysis is a formal technique useful where many possible courses of action are competing for attention. In essence, the problem-solver estimates the benefit delivered by each action, then selects a number of the most effective actions that deliver a total benefit reasonably close to the maximal possible one.
Pareto analysis is a creative way of looking at causes of problems because it helps stimulate thinking and organize thoughts. However, it can be limited by its exclusion of possibly important problems which may be small initially but will grow with time. It should be combined with other analytical tools, such as failure mode and effects analysis and fault tree analysis.
This technique helps to identify the top portion of causes that need to be addressed in order to resolve the majority of problems. Once the predominant causes are identified, tools like the Ishikawa diagram or fishbone analysis can be used to identify the root causes of the problems. While it is common to refer to the Pareto principle as the "80/20" rule, on the assumption that in all situations 20% of causes determine 80% of problems, this ratio is merely a convenient rule of thumb and is not, nor should it be considered, an immutable law of nature.
The application of the Pareto analysis in risk management allows management to focus on those risks that have the most impact on the project.[11]
Steps to identify the important causes using the 80/20 rule:[12]
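The core of such a procedure (rank causes by impact, accumulate shares, cut off around 80%) can be sketched as follows, with illustrative data not taken from the source:

```python
# Illustrative Pareto analysis: find the "vital few" causes covering ~80% of defects.
defect_counts = {"scratches": 50, "misalignment": 30, "cracks": 10,
                 "discoloration": 6, "other": 4}

def vital_few(counts, threshold=0.80):
    total = sum(counts.values())
    ranked = sorted(counts, key=counts.get, reverse=True)  # largest impact first
    selected, cumulative = [], 0.0
    for cause in ranked:
        selected.append(cause)
        cumulative += counts[cause] / total
        if cumulative >= threshold:
            break
    return selected

print(vital_few(defect_counts))   # -> ['scratches', 'misalignment']
```

Here two of the five causes account for 80% of the defects, so effort would be focused on them first.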
Pareto's observation was in connection with population and wealth. Pareto noticed that approximately 80% of Italy's land was owned by 20% of the population.[6] He then carried out surveys in a variety of other countries and found, to his surprise, that a similar distribution applied.
A chart that demonstrated the effect appeared in the 1992 United Nations Development Program Report, which showed that the richest 20% of the world's population receives 82.7% of the world's income.[13] However, among nations, the Gini index shows that wealth distributions vary substantially around this norm.[14]
The principle also holds within the tails of the distribution. The physicist Victor Yakovenko of the University of Maryland, College Park and A. C. Silva analyzed income data from the US Internal Revenue Service from 1983 to 2001 and found that the income distribution of the richest 1–3% of the population also follows Pareto's principle.[16]
In Talent: How to Identify Entrepreneurs, economist Tyler Cowen and entrepreneur Daniel Gross suggest that the Pareto principle can be applied to the role of the 20% most talented individuals in generating the majority of economic growth.[17] According to the New York Times in 1988, many video rental shops reported that 80% of revenue came from 20% of videotapes (although rarely rented classics such as Gone with the Wind must be stocked to appear to have a good selection).[18]
In computer science, the Pareto principle can be applied to optimization efforts.[19] For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated.[20] Lowell Arthur expressed that "20% of the code has 80% of the errors. Find them, fix them!"[21]
Occupational health and safety professionals use the Pareto principle to underline the importance of hazard prioritization. Assuming that 20% of the hazards account for 80% of the injuries, by categorizing hazards safety professionals can target the 20% of hazards that cause 80% of the injuries or accidents. Alternatively, if hazards are addressed in random order, a safety professional is more likely to fix one of the 80% of hazards that account for only some fraction of the remaining 20% of injuries.[22]
Aside from ensuring efficient accident prevention practices, the Pareto principle also ensures hazards are addressed in an economical order, because the technique ensures that the resources used are best deployed to prevent the most accidents.[23]
The Pareto principle is the basis for the Pareto chart, one of the key tools used in total quality control and Six Sigma techniques. The Pareto principle serves as a baseline for ABC analysis and XYZ analysis, widely used in logistics and procurement for the purpose of optimizing stock of goods, as well as the costs of keeping and replenishing that stock.[24] In engineering control theory, such as for electromechanical energy converters, the 80/20 principle applies to optimization efforts.[19]
The remarkable success of statistically based searches for root causes is based upon a combination of an empirical principle and mathematical logic. The empirical principle is usually known as the Pareto principle.[25]With regard to variation causality, this principle states that there is a non-random distribution of the slopes of the numerous (theoretically infinite) terms in the general equation.
All of the terms are independent of each other by definition. Interdependent factors appear as multiplication terms. The Pareto principle states that the effect of the dominant term is very much greater than the second-largest effect term, which in turn is very much greater than the third, and so on.[26]There is no explanation for this phenomenon; that is why we refer to it as an empirical principle.
The mathematical logic is known as the square-root-of-the-sum-of-the-squares axiom. This states that the variation caused by the steepest slope must be squared, and then the result added to the square of the variation caused by the second-steepest slope, and so on. The total observed variation is then the square root of the total sum of the variation caused by individual slopes squared. This derives from the probability density function for multiple variables or the multivariate distribution (we are treating each term as an independent variable).
The combination of the Pareto principle and the square-root-of-the-sum-of-the-squares axiom means that the strongest term in the general equation totally dominates the observed variation of effect. Thus, the strongest term will dominate the data collected for hypothesis testing.
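The root-sum-of-squares combination described above can be made concrete; in the sketch below (with illustrative numbers, not from the source), the dominant slope contributes nearly all of the observed variation:

```python
import math

# Contributions (slope_i * sigma_i) of independent terms, dominant term first.
contributions = [10.0, 3.0, 1.0]

# Total observed variation: square each contribution, sum, take the square root.
total = math.sqrt(sum(c**2 for c in contributions))

# Fraction of the total variance attributable to the strongest term.
share = contributions[0]**2 / total**2
print(total, share)
```

Even though the secondary slopes are not negligible individually, squaring makes the strongest term dominate the hypothesis-testing data, as the text argues.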
In the systems science discipline, Joshua M. Epstein and Robert Axtell created an agent-based simulation model called Sugarscape, from a decentralized modeling approach, based on individual behavior rules defined for each agent in the economy. Wealth distribution and Pareto's 80/20 principle emerged in their results, which suggests the principle is a collective consequence of these individual rules.[27]
In 2009, the Agency for Healthcare Research and Quality said 20% of patients incurred 80% of healthcare expenses due to chronic conditions.[28] A 2021 analysis showed unequal distribution of healthcare costs, with older patients and those in poorer health incurring more costs.[29] The 80/20 rule has been proposed as a rule of thumb for the infection distribution in superspreading events.[30][31] However, the degree of infectiousness has been found to be distributed continuously in the population.[31] In epidemics with super-spreading, the majority of individuals infect relatively few secondary contacts.
https://en.wikipedia.org/wiki/Pareto_analysis
In welfare economics, a Pareto improvement formalizes the idea of an outcome being "better in every possible way". A change is called a Pareto improvement if it leaves at least one person in society better off without leaving anyone else worse off than they were before. A situation is called Pareto efficient or Pareto optimal if all possible Pareto improvements have already been made; in other words, there are no longer any ways left to make one person better off without making some other person worse off.[1]
In social choice theory, the same concept is sometimes called the unanimity principle, which says that if everyone in a society (non-strictly) prefers A to B, society as a whole also non-strictly prefers A to B. The Pareto front consists of all Pareto-efficient situations.[2]
In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto-efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.[3]
Besides economics, the notion of Pareto efficiency has also been applied to selecting alternatives in engineering and biology. Each option is first assessed under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform any of them. This is a statement of the impossibility of improving one variable without harming other variables, the subject of multi-objective optimization (also termed Pareto optimization).
The concept is named after Vilfredo Pareto (1848–1923), an Italian civil engineer and economist, who used the concept in his studies of economic efficiency and income distribution.
Pareto originally used the word "optimal" for the concept, but this is somewhat of a misnomer: Pareto's concept more closely aligns with the idea of "efficiency", because it does not identify a single "best" (optimal) outcome. Instead, it only identifies a set of outcomes that might be considered optimal by at least one person.[4]
Formally, a state is Pareto-optimal if there is no alternative state where at least one participant's well-being is higher, and nobody else's well-being is lower. If there is a state change that satisfies this condition, the new state is called a "Pareto improvement". When no Pareto improvements are possible, the state is a "Pareto optimum".
In other words, Pareto efficiency is when it is impossible to make one party better off without making another party worse off.[5]This state indicates that resources can no longer be allocated in a way that makes one party better off without harming other parties. In a state of Pareto Efficiency, resources are allocated in the most efficient way possible.[5]
Pareto efficiency is mathematically represented when there is no other strategy profile s′ such that ui(s′) ≥ ui(s) for every player i and uj(s′) > uj(s) for some player j. In this notation, s represents the strategy profile, u represents the utility or benefit, and i and j index the players.[6]
Efficiency is an important criterion for judging behavior in a game. In zero-sum games, every outcome is Pareto-efficient.
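The definition translates directly into code. The sketch below (using an illustrative Prisoner's Dilemma payoff table, not from the source) flags the Pareto-efficient strategy profiles:

```python
# Payoffs for a two-player Prisoner's Dilemma: profile -> (u1, u2).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_dominates(a, b):
    """True if payoff vector a is at least as good for everyone, better for someone."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_efficient(payoffs):
    """Profiles whose payoff vector is not Pareto-dominated by any other profile."""
    return [s for s, u in payoffs.items()
            if not any(pareto_dominates(u2, u) for u2 in payoffs.values())]

print(pareto_efficient(payoffs))
```

Here mutual defection (D, D) is the only inefficient profile, since (C, C) makes both players strictly better off; the other three profiles are all Pareto-efficient even though they are very unequal.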
A special case of a state is an allocation of resources. The formal presentation of the concept in an economy is the following: Consider an economy withn{\displaystyle n}agents andk{\displaystyle k}goods. Then an allocation{x1,…,xn}{\displaystyle \{x_{1},\dots ,x_{n}\}}, wherexi∈Rk{\displaystyle x_{i}\in \mathbb {R} ^{k}}for alli, isPareto-optimalif there is no other feasible allocation{x1′,…,xn′}{\displaystyle \{x_{1}',\dots ,x_{n}'\}}where, for utility functionui{\displaystyle u_{i}}for each agenti{\displaystyle i},ui(xi′)≥ui(xi){\displaystyle u_{i}(x_{i}')\geq u_{i}(x_{i})}for alli∈{1,…,n}{\displaystyle i\in \{1,\dots ,n\}}withui(xi′)>ui(xi){\displaystyle u_{i}(x_{i}')>u_{i}(x_{i})}for somei{\displaystyle i}.[7]Here, in this simple economy, "feasibility" refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. In a more complex economy with production, an allocation would consist both of consumptionvectorsand production vectors, and feasibility would require that the total amount of each consumed good is no greater than the initial endowment plus the amount produced.
Under the assumptions of the first welfare theorem, a competitive market leads to a Pareto-efficient outcome. This result was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu.[8] However, the result only holds under the assumptions of the theorem: markets exist for all possible goods, there are no externalities, markets are perfectly competitive, and market participants have perfect information.

In the absence of perfect information or complete markets, outcomes will generally be Pareto-inefficient, per the Greenwald–Stiglitz theorem.[9]

The second welfare theorem is essentially the reverse of the first welfare theorem. It states that under similar, ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system, although it may also require a lump-sum transfer of wealth.[7]

An inefficient distribution of resources in a free market is known as market failure. Given that there is room for improvement, market failure implies Pareto inefficiency.
For instance, excessive consumption of harmful goods such as drugs and cigarettes imposes costs on non-smokers as well as early mortality on smokers. Cigarette taxes may help individuals stop smoking while also raising money to treat ailments caused by smoking.

A Pareto improvement does not always imply that the result is desirable or equitable; inequality could still exist after a Pareto improvement. Once a Pareto optimum is reached, however, any further change will violate the "do no harm" principle, because at least one person will be made worse off.

A society may be Pareto-efficient yet have significant levels of inequality. If there were three persons and a pie, the most equitable course of action would be to split the pie into three equal portions. Yet splitting it in half and giving it to two individuals is also Pareto-efficient, since the third person does not lose out (even though he does not partake in the pie).

Pareto efficiency occurs on the production possibilities frontier: when an economy is operating on the frontier (say at a point A, B, or C), it is impossible to raise the output of goods without decreasing the output of services.
If multiple sub-goals $f_i$ (with $i > 1$) exist, combined into a vector-valued objective function $\vec{f} = (f_1, \dots, f_n)^T$, finding a unique optimum $\vec{x}^*$ generally becomes challenging. This is due to the absence of a total order relation for $n > 1$ that would not always prioritize one target over another (as the lexicographical order does). In the multi-objective optimization setting, various solutions can be "incomparable",[10] as there is no total order relation to facilitate the comparison $\vec{f}(\vec{x}^*) \geq \vec{f}(\vec{x})$. Only the Pareto order is applicable:

Consider a vector-valued minimization problem: $\vec{y}^{(1)} \in \mathbb{R}^m$ Pareto dominates $\vec{y}^{(2)} \in \mathbb{R}^m$ if and only if[11]

$\forall i \in \{1,\dots,m\}: y^{(1)}_i \leq y^{(2)}_i \quad \text{and} \quad \exists j \in \{1,\dots,m\}: y^{(1)}_j < y^{(2)}_j.$

We then write $\vec{y}^{(1)} \prec \vec{y}^{(2)}$, where $\prec$ is the Pareto order. This means that $\vec{y}^{(1)}$ is not worse than $\vec{y}^{(2)}$ in any goal but is better (since smaller) in at least one goal $j$. The Pareto order is a strict partial order, though it is not a product order (neither non-strict nor strict).

If[11] $\vec{f}(\vec{x}_1) \prec \vec{f}(\vec{x}_2)$, then this defines a preorder in the search space; we say $\vec{x}_1$ Pareto dominates the alternative $\vec{x}_2$, and we write $\vec{x}_1 \prec_{\vec{f}} \vec{x}_2$.
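The Pareto order can be sketched as a small predicate (for minimization, as above); note that some vector pairs are incomparable, which is exactly why no total order is available:

```python
def pareto_dominates(y1, y2):
    """y1 precedes y2 in the Pareto order for minimization: y1 is no worse
    in every coordinate and strictly smaller in at least one."""
    return all(a <= b for a, b in zip(y1, y2)) and any(a < b for a, b in zip(y1, y2))

dominates = pareto_dominates((1, 2), (2, 2))  # (1, 2) dominates (2, 2)
# (1, 3) and (2, 2) are incomparable: neither dominates the other.
incomparable = not pareto_dominates((1, 3), (2, 2)) and not pareto_dominates((2, 2), (1, 3))
```

The strictness of the order shows in that no vector dominates itself.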
Weak Pareto efficiency is a situation that cannot be strictly improved for every individual.[12]
Formally, astrong Pareto improvementis defined as a situation in which all agents are strictly better-off (in contrast to just "Pareto improvement", which requires that one agent is strictly better-off and the other agents are at least as good). A situation isweak Pareto-efficientif it has no strong Pareto improvements.
Any strong Pareto improvement is also a weak Pareto improvement, but the opposite is not true. For example, consider a resource allocation problem with two resources, which Alice values at {10, 0} and George values at {5, 5}. The allocation giving all resources to Alice has the utility profile (10, 0). It admits a Pareto improvement (moving the second resource to George yields (10, 5)) but no strong Pareto improvement, since Alice's utility cannot exceed 10; the allocation is therefore weakly Pareto-efficient but not Pareto-efficient.
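The distinction can be checked by enumerating the four discrete allocations; a minimal sketch using the valuations from the example:

```python
from itertools import product

# Valuations from the example: Alice values the two resources at (10, 0),
# George at (5, 5); each resource goes wholly to one agent ('A' or 'G').
alice_vals, george_vals = (10, 0), (5, 5)

def utilities(assignment):
    a = sum(v for v, who in zip(alice_vals, assignment) if who == 'A')
    g = sum(v for v, who in zip(george_vals, assignment) if who == 'G')
    return (a, g)

profiles = [utilities(s) for s in product('AG', repeat=2)]
u = utilities(('A', 'A'))  # all resources to Alice -> (10, 0)

# A (weak) Pareto improvement exists, but no strong one.
has_improvement = any(
    all(x >= y for x, y in zip(p, u)) and any(x > y for x, y in zip(p, u))
    for p in profiles
)
has_strong_improvement = any(all(x > y for x, y in zip(p, u)) for p in profiles)
```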
A market does not require local nonsatiation to arrive at a weak Pareto optimum.[13]
Constrained Pareto efficiency is a weakening of Pareto optimality, accounting for the fact that a potential planner (e.g., the government) may not be able to improve upon a decentralized market outcome, even if that outcome is inefficient. This will occur if the planner is limited by the same informational or institutional constraints as are individual agents.[14]
An example is a setting where individuals have private information (for example, a labor market where the worker's own productivity is known to the worker but not to a potential employer, or a used-car market where the quality of a car is known to the seller but not to the buyer), which results in moral hazard or adverse selection and a sub-optimal outcome. In such a case, a planner who wishes to improve the situation is unlikely to have access to any information that the participants in the markets do not have. Hence, the planner cannot implement allocation rules which are based on the idiosyncratic characteristics of individuals; for example, "if a person is of type A, they pay price p1, but if of type B, they pay price p2" (see Lindahl prices). Essentially, only anonymous rules are allowed (of the sort "Everyone pays price p") or rules based on observable behavior ("if any person chooses x at price p_x, then they get a subsidy of ten dollars, and nothing otherwise"). If there exists no allowed rule that can successfully improve upon the market outcome, then that outcome is said to be "constrained Pareto-optimal".
Fractional Pareto efficiency is a strengthening of Pareto efficiency in the context of fair item allocation. An allocation of indivisible items is fractionally Pareto-efficient (fPE or fPO) if it is not Pareto-dominated even by an allocation in which some items are split between agents. This is in contrast to standard Pareto efficiency, which only considers domination by feasible (discrete) allocations.[15][16]
As an example, consider an item allocation problem with two items, which Alice values at {3, 2} and George values at {4, 1}. The allocation giving the first item to Alice and the second to George has the utility profile (3, 1). It is Pareto-efficient among discrete allocations, but it is not fractionally Pareto-efficient: splitting the first item equally between the two agents while giving the second item to Alice yields the utility profile (3.5, 2), which Pareto-dominates (3, 1).
When the decision process is random, such as in fair random assignment, random social choice, or fractional approval voting, there is a difference between ex-post and ex-ante Pareto efficiency: a lottery is ex-post Pareto-efficient if every outcome in its support is Pareto-efficient, and ex-ante Pareto-efficient if no other lottery gives every agent at least the same expected utility and some agent strictly more.
If some lottery L is ex-ante PE, then it is also ex-post PE. Proof: suppose that one of the ex-post outcomes x of L is Pareto-dominated by some other outcome y. Then, by moving some probability mass from x to y, one attains another lottery L′ that ex-ante Pareto-dominates L.
The opposite is not true: ex-ante PE is stronger than ex-post PE. For example, suppose there are two objects: a car and a house. Alice values the car at 2 and the house at 3; George values the car at 2 and the house at 9. Consider the following two lotteries:
While both lotteries are ex-post PE, lottery 1 is not ex-ante PE, since it is Pareto-dominated by lottery 2.

Another example involves dichotomous preferences.[17] There are 5 possible outcomes (a, b, c, d, e) and 6 voters. The voters' approval sets are (ac, ad, ae, bc, bd, be). All five outcomes are PE, so every lottery is ex-post PE. But the lottery selecting c, d, e with probability 1/3 each is not ex-ante PE, since it gives an expected utility of 1/3 to each voter, while the lottery selecting a, b with probability 1/2 each gives an expected utility of 1/2 to each voter.
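The expected-utility comparison in this example can be verified directly; a small sketch, using dichotomous utility (1 for an approved outcome, 0 otherwise):

```python
# Voters' approval sets over the outcomes a..e, as in the text.
approvals = ["ac", "ad", "ae", "bc", "bd", "be"]

def expected_utility(lottery, approved):
    """lottery maps outcomes to probabilities; utility is 1 iff approved."""
    return sum(p for outcome, p in lottery.items() if outcome in approved)

lottery_cde = {"c": 1/3, "d": 1/3, "e": 1/3}
lottery_ab = {"a": 1/2, "b": 1/2}

eu_cde = [expected_utility(lottery_cde, s) for s in approvals]
eu_ab = [expected_utility(lottery_ab, s) for s in approvals]
# Every voter approves exactly one of c, d, e and exactly one of a, b,
# so the a/b lottery gives every voter strictly higher expected utility.
```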
Bayesian efficiency is an adaptation of Pareto efficiency to settings in which players have incomplete information regarding the types of other players.

Ordinal Pareto efficiency is an adaptation of Pareto efficiency to settings in which players report only rankings of individual items, and we do not know for sure how they rank entire bundles.
Although an outcome may be a Pareto improvement, this does not imply that the outcome is equitable; inequality may persist even after a Pareto improvement. Although frequently used in conjunction with the idea of Pareto optimality, the term "efficiency" refers to the process of increasing societal productivity.[18] A society can be Pareto-efficient while also having high levels of inequality. Consider a pie and three persons: the most equitable division would split the pie into three equal portions, but dividing it in half and sharing it between two people is also Pareto-efficient, since the third person does not lose out (despite receiving no piece of the pie). When making judgments, it is therefore critical to consider other aspects as well, including social efficiency, overall welfare, and issues such as diminishing marginal value.
To fully understand market failure, one must first comprehend market success: the ability of a set of idealized competitive markets to achieve an equilibrium allocation of resources that is Pareto-optimal. Market failure is then the circumstance in which the conclusion of the first fundamental theorem of welfare fails; that is, when the allocations made through markets are not efficient.[19] In a free market, market failure is an inefficient allocation of resources; because improvement is feasible, market failure implies Pareto inefficiency. For example, excessive consumption of harmful goods (drugs/tobacco) imposes external costs on non-smokers, as well as premature death on smokers who do not quit. An increase in the price of cigarettes could motivate people to quit smoking while also raising funds for the treatment of smoking-related ailments.
Given some ε > 0, an outcome is called ε-Pareto-efficient if no other outcome gives all agents at least the same utility, and one agent a utility at least a factor (1 + ε) higher. This captures the notion that improvements smaller than (1 + ε) are negligible and should not be considered a breach of efficiency.
Suppose each agent i is assigned a positive weight a_i. For every allocation x, define the welfare of x as the weighted sum of utilities of all agents in x:

$W_a(x) := \sum_i a_i u_i(x).$

Let $x_a$ be an allocation that maximizes the welfare over all allocations:

$x_a \in \arg\max_x W_a(x).$
It is easy to show that the allocation $x_a$ is Pareto-efficient: since all weights are positive, any Pareto improvement would increase the sum, contradicting the definition of $x_a$.
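A minimal sketch of this argument with invented numbers: maximize the weighted welfare over all discrete allocations of three items to two agents, then verify by enumeration that the maximizer admits no Pareto improvement:

```python
from itertools import product

# Hypothetical additive utilities: values[item] = (agent 0's value, agent 1's value).
values = [(3, 2), (4, 1), (1, 5)]
weights = (1.0, 2.0)  # strictly positive weights a_i

def utilities(assign):  # assign[item] is 0 or 1, the receiving agent
    u = [0, 0]
    for item, owner in enumerate(assign):
        u[owner] += values[item][owner]
    return tuple(u)

allocs = [utilities(a) for a in product((0, 1), repeat=3)]
best = max(allocs, key=lambda u: sum(w * x for w, x in zip(weights, u)))

# Any Pareto improvement over `best` would raise the weighted sum, so none exists.
dominated = any(
    all(x >= y for x, y in zip(p, best)) and any(x > y for x, y in zip(p, best))
    for p in allocs
)
```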
Japanese neo-Walrasian economist Takashi Negishi proved[20] that, under certain assumptions, the opposite is also true: for every Pareto-efficient allocation x, there exists a positive vector a such that x maximizes W_a. A shorter proof is provided by Hal Varian.[21]

The notion of Pareto efficiency has been used in engineering.[22] Given a set of choices and a way of valuing them, the Pareto front (or Pareto set or Pareto frontier) is the set of choices that are Pareto-efficient. By restricting attention to the set of choices that are Pareto-efficient, a designer can make trade-offs within this set, rather than considering the full range of every parameter.[23]
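As an illustrative sketch, the Pareto front of a finite set of candidate designs can be extracted by discarding dominated points; the design scores below are invented for the example (cost is negated so that larger is better in both coordinates):

```python
def pareto_front(points):
    """Keep the points not dominated by any other point
    (maximization in every coordinate)."""
    def dominated(p, q):  # q dominates p
        return all(b >= a for a, b in zip(p, q)) and any(b > a for a, b in zip(p, q))
    return [p for p in points if not any(dominated(p, q) for q in points)]

# Hypothetical designs scored as (performance, -cost):
designs = [(1.0, -3.0), (2.0, -5.0), (1.5, -2.0), (0.5, -6.0)]
front = pareto_front(designs)
# (1.0, -3.0) and (0.5, -6.0) are dominated by (1.5, -2.0).
```

The designer then only needs to trade performance against cost along `front`.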
Modern microeconomic theory has drawn heavily upon the concept of Pareto efficiency for inspiration. Pareto and his successors have tended to describe this technical definition of optimal resource allocation in the context of it being an equilibrium that can theoretically be achieved within an abstract model of market competition. It has therefore very often been treated as a corroboration of Adam Smith's "invisible hand" notion. More specifically, it motivated the debate over "market socialism" in the 1930s.[4]
However, because the Pareto-efficient outcome is difficult to assess in the real world when issues including asymmetric information, signalling, adverse selection, and moral hazard are introduced, most people do not take the theorems of welfare economics as accurate descriptions of the real world. Therefore, the significance of the two welfare theorems of economics is in their ability to generate a framework that has dominated neoclassical thinking about public policy. That framework is that the welfare economics theorems allow the political economy to be studied in the following two situations: "market failure" and "the problem of redistribution".[24]
Analysis of "market failure" can be understood by the literature surrounding externalities. When comparing the "real" economy to the complete contingent markets economy (which is considered efficient), the inefficiencies become clear. These inefficiencies, or externalities, are then able to be addressed by mechanisms, including property rights and corrective taxes.[24]
Analysis of "the problem of redistribution" deals with the observed political question of how income or commodity taxes should be utilized. The theorem tells us that no taxation is Pareto-efficient and that taxation with redistribution is Pareto-inefficient. Because of this, most of the literature focuses on finding solutions where, given a tax structure, no person could be made better off by a change in the available taxes.[24]
Pareto optimisation has also been studied in biological processes.[25] In bacteria, genes were shown to be either inexpensive to make (resource-efficient) or easier to read (translation-efficient). Natural selection acts to push highly expressed genes towards the Pareto frontier for resource use and translational efficiency.[26] Genes near the Pareto frontier were also shown to evolve more slowly (indicating that they provide a selective advantage).[27]
It would be incorrect to treat Pareto efficiency as equivalent to societal optimization,[28] as the latter is a normative concept, a matter of interpretation that typically would account for the consequences of degrees of inequality of distribution.[29] An example would be interpreting one school district with low property-tax revenue versus another with much higher revenue as a sign that more equal distribution occurs with the help of government redistribution.[30]

Some commentators contend that Pareto efficiency could potentially serve as an ideological tool. By implying that capitalism is self-regulating, it makes it likely that embedded structural problems such as unemployment would be treated as deviations from the equilibrium or norm, and thus neglected or discounted.[4]

Pareto efficiency does not require a totally equitable distribution of wealth, which is another aspect that draws criticism.[31] An economy in which a wealthy few hold the vast majority of resources can be Pareto-efficient. A simple example is the distribution of a pie among three people. The most equitable distribution would assign one third to each person. However, the assignment of, say, a half section to each of two individuals and none to the third is also Pareto-optimal despite not being equitable, because none of the recipients could be made better off without decreasing someone else's share; and there are many other such distribution examples. An example of a Pareto-inefficient distribution of the pie would be allocation of a quarter of the pie to each of the three, with the remainder discarded.[32]

The liberal paradox elaborated by Amartya Sen shows that when people have preferences about what other people do, the goal of Pareto efficiency can come into conflict with the goal of individual liberty.[33]

Lastly, it is proposed that Pareto efficiency has to some extent inhibited discussion of other possible criteria of efficiency. As Wharton School professor Ben Lockwood argues, one possible reason is that any other efficiency criterion established in the neoclassical domain will ultimately reduce to Pareto efficiency.[4]
Pareto, V (1906). Manual of Political Economy. Oxford University Press.https://global.oup.com/academic/product/manual-of-political-economy-9780199607952?cc=ca&lang=en&.
https://en.wikipedia.org/wiki/Pareto_efficiency
Pareto interpolation is a method of estimating the median and other properties of a population that follows a Pareto distribution. It is used in economics when analysing the distribution of incomes in a population, when one must base estimates on a relatively small random sample taken from the population.
The family of Pareto distributions is parameterized by two quantities: a scale parameter κ and a shape parameter θ (the Pareto index), with cumulative distribution function F(x) = 1 − (κ/x)^θ for x ≥ κ.
Pareto interpolation can be used when the available information includes the proportion of the sample that falls below each of two specified numbers a < b. For example, it may be observed that 45% of individuals in the sample have incomes below a = $35,000 per year, and 55% have incomes below b = $40,000 per year.
Let P_a and P_b denote the proportions of the sample falling below a and b, respectively. Then the estimates of κ and θ are

$\hat\theta = \frac{\log(1 - P_a) - \log(1 - P_b)}{\log(b) - \log(a)}$

and

$\hat\kappa = \left(\frac{P_b - P_a}{a^{-\hat\theta} - b^{-\hat\theta}}\right)^{1/\hat\theta}.$

The estimate of the median would then be

$\widehat{\operatorname{median}} = \hat\kappa \cdot 2^{1/\hat\theta},$

since the actual population median is

$\operatorname{median} = \kappa \cdot 2^{1/\theta}.$
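The estimation procedure can be sketched in a few lines; the function below is an illustrative implementation of these formulas, applied to the income example from the text:

```python
import math

def pareto_interpolate(a, p_a, b, p_b):
    """Estimate theta, kappa, and the median from two sample fractions
    p_a = P(X < a) and p_b = P(X < b), assuming F(x) = 1 - (kappa/x)**theta."""
    theta = (math.log(1 - p_a) - math.log(1 - p_b)) / (math.log(b) - math.log(a))
    kappa = ((p_b - p_a) / (a ** -theta - b ** -theta)) ** (1 / theta)
    median = kappa * 2 ** (1 / theta)
    return theta, kappa, median

# Example from the text: 45% of incomes below $35,000, 55% below $40,000.
theta, kappa, median = pareto_interpolate(35_000, 0.45, 40_000, 0.55)
```

By construction the fitted CDF reproduces both observed fractions exactly, and the estimated median falls between a and b because the two fractions straddle 50%.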
https://en.wikipedia.org/wiki/Pareto_interpolation
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a relative change in the other quantity proportional to the change raised to a constant exponent: one quantity varies as a power of another. The change is independent of the initial size of those quantities.
For instance, the area of a square has a power-law relationship with the length of its side: if the length is doubled, the area is multiplied by 2² = 4, while if the length is tripled, the area is multiplied by 3² = 9, and so on.[1]
The distributions of a wide variety of physical, biological, and human-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the Moon and of solar flares,[2] cloud sizes,[3] the foraging pattern of various species,[4] the sizes of activity patterns of neuronal populations,[5] the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms,[6] the sizes of power outages, volcanic eruptions,[7] human judgments of stimulus intensity,[8][9] and many other quantities.[10] Empirical distributions can only fit a power law for a limited range of values, because a pure power law would allow for arbitrarily large or small values. Acoustic attenuation follows frequency power laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best known power-law functions in nature.
The power-law model does not obey the usual paradigm of statistical completeness. In particular, probability bounds, the suspected cause of the bending and/or flattening typically seen in the high- and low-frequency segments of empirical plots, are parametrically absent from the standard model.[11]
One attribute of power laws is their scale invariance. Given a relation $f(x) = ax^{-k}$, scaling the argument $x$ by a constant factor $c$ causes only a proportionate scaling of the function itself. That is,

$f(cx) = a(cx)^{-k} = c^{-k} f(x) \propto f(x),$
where $\propto$ denotes direct proportionality. That is, scaling by a constant $c$ simply multiplies the original power-law relation by the constant $c^{-k}$. Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both $f(x)$ and $x$, and the straight line on the log–log plot is often called the signature of a power law. With real data, such straightness is a necessary, but not sufficient, condition for the data following a power-law relation. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws.[citation needed] Thus, accurately fitting and validating power-law models is an active area of research in statistics; see below.
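A small numeric sketch of this scale invariance (the constants a, k, c are arbitrary choices for illustration): the ratio f(cx)/f(x) is the same for every x.

```python
# Scale invariance of f(x) = a * x**(-k): replacing x by c*x multiplies
# f by the constant factor c**(-k), independent of x.
a, k, c = 2.0, 1.5, 3.0
f = lambda x: a * x ** -k

ratios = [f(c * x) / f(x) for x in (0.5, 1.0, 7.0, 100.0)]
# Every entry equals c**(-k), regardless of the base point x.
```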
A power law $x^{-k}$ has a well-defined mean over $x \in [1,\infty)$ only if $k > 2$, and it has a finite variance only if $k > 3$; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior.[2] This can be seen in the following thought experiment:[12] imagine a room with your friends and estimate the average monthly income in the room. Now imagine the world's richest person entering the room, with a monthly income of about 1 billion US$. What happens to the average income in the room? Income is distributed according to a power law known as the Pareto distribution (for example, the net worth of Americans is distributed according to a power law with an exponent of 2).

On the one hand, this makes it incorrect to apply traditional statistics that are based on variance and standard deviation (such as regression analysis).[13] On the other hand, this also allows for cost-efficient interventions.[12] For example, given that car exhaust is distributed according to a power law among cars (very few cars contribute to most contamination), it would be sufficient to eliminate those very few cars from the road to reduce total exhaust substantially.[14]

The median does exist, however: for a power law $x^{-k}$ with exponent $k > 1$, it takes the value $2^{1/(k-1)} x_{\min}$, where $x_{\min}$ is the minimum value for which the power law holds.[2]
The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation. In physics, for example, phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as the critical exponents of the system. Diverse systems with the same critical exponents (that is, which display identical scaling behaviour as they approach criticality) can be shown, via renormalization group theory, to share the same fundamental dynamics. For instance, the behavior of water and CO2 at their boiling points falls in the same universality class because they have identical critical exponents.[citation needed][clarification needed] In fact, almost all material phase transitions are described by a small set of universality classes. Similar observations have been made, though not as comprehensively, for various self-organized critical systems, where the critical point of the system is an attractor. Formally, this sharing of dynamics is referred to as universality, and systems with precisely the same critical exponents are said to belong to the same universality class.
Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them.[15] The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems;[16] see also universality above. The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy or of specific stochastic processes. A few notable examples of power laws are Pareto's law of income distribution, structural self-similarity of fractals, scaling laws in biological systems, and scaling laws in cities. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including physics, computer science, linguistics, geophysics, neuroscience, systematics, sociology, economics, and more.

However, much of the recent interest in power laws comes from the study of probability distributions: the distributions of a wide variety of quantities seem to follow the power-law form, at least in their upper tail (large events). The behavior of these large events connects these quantities to the study of the theory of large deviations (also called extreme value theory), which considers the frequency of extremely rare events like stock market crashes and large natural disasters. It is primarily in the study of statistical distributions that the name "power law" is used.
In empirical contexts, an approximation to a power law $o(x^{k})$ often includes a deviation term $\varepsilon$, which can represent uncertainty in the observed values (perhaps measurement or sampling errors) or provide a simple way for observations to deviate from the power-law function (perhaps for stochastic reasons):

$y = ax^{k} + \varepsilon.$
Mathematically, a strict power law cannot be a probability distribution, but a distribution that is a truncated power function is possible: $p(x) = Cx^{-\alpha}$ for $x > x_{\text{min}}$, where the exponent $\alpha$ (Greek letter alpha, not to be confused with the scaling factor $a$ used above) is greater than 1 (otherwise the tail has infinite area); the minimum value $x_{\text{min}}$ is needed because otherwise the distribution has infinite area as $x$ approaches 0; and the constant $C$ is a scaling factor to ensure that the total area is 1, as required by a probability distribution. More often one uses an asymptotic power law – one that is only true in the limit; see power-law probability distributions below for details. Typically the exponent falls in the range $2 < \alpha < 3$, though not always.[10]
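The truncated form can be checked numerically; this sketch (with illustrative values α = 2.5 and x_min = 1) verifies the normalization by trapezoidal integration plus the analytic tail mass, and reads off the median from the CDF F(x) = 1 − (x_min/x)^(α−1):

```python
# Truncated power-law density p(x) = C * x**(-alpha) for x > x_min,
# normalized by C = (alpha - 1) * x_min**(alpha - 1).
alpha, x_min = 2.5, 1.0
C = (alpha - 1) * x_min ** (alpha - 1)
pdf = lambda x: C * x ** -alpha

# Trapezoidal check of normalization on [x_min, X], plus the exact
# analytic tail mass (x_min / X)**(alpha - 1) beyond X.
X, n = 1000.0, 200_000
h = (X - x_min) / n
area = h * ((pdf(x_min) + pdf(X)) / 2 + sum(pdf(x_min + i * h) for i in range(1, n)))
total = area + (x_min / X) ** (alpha - 1)

# Median from the CDF F(x) = 1 - (x_min/x)**(alpha - 1).
median = x_min * 2 ** (1 / (alpha - 1))
```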
More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income).[17] Among them are:
A broken power law is a piecewise function, consisting of two or more power laws, combined with a threshold. For example, with two power laws:[49]

$f(x) \propto x^{-\alpha_1}$ for $x < x_{\text{th}}$,

$f(x) \propto x_{\text{th}}^{\alpha_2 - \alpha_1} x^{-\alpha_2}$ for $x > x_{\text{th}}$.
The pieces of a broken power law can be smoothly spliced together to construct a smoothly broken power law.
There are different possible ways to splice together power laws. One example is the following:[50]

$\ln\left(\frac{y}{y_0} + a\right) = c_0 \ln\left(\frac{x}{x_0}\right) + \sum_{i=1}^{n} \frac{c_i - c_{i-1}}{f_i} \ln\left(1 + \left(\frac{x}{x_i}\right)^{f_i}\right)$

where $0 < x_0 < x_1 < \cdots < x_n$.

When the function is plotted as a log–log plot with horizontal axis $\ln x$ and vertical axis $\ln(y/y_0 + a)$, the plot is composed of $n+1$ linear segments with slopes $c_0, c_1, \dots, c_n$, separated at $x = x_1, \dots, x_n$ and smoothly spliced together. The size of $f_i$ determines the sharpness of splicing between segments $i-1$ and $i$.
A power law with an exponential cutoff is simply a power law multiplied by an exponential function:[10]

$f(x) \propto x^{-\alpha} e^{-\lambda x}.$
In a looser sense, a power-law probability distribution is a distribution whose density function (or mass function in the discrete case) has the form, for large values of $x$,[52]

$p(x) = L(x)\, x^{-\alpha}$

where $\alpha > 1$, and $L(x)$ is a slowly varying function, which is any function that satisfies $\lim_{x \to \infty} L(rx)/L(x) = 1$ for any positive factor $r$. This property of $L(x)$ follows directly from the requirement that $p(x)$ be asymptotically scale invariant; thus, the form of $L(x)$ only controls the shape and finite extent of the lower tail. For instance, if $L(x)$ is the constant function, then we have a power law that holds for all values of $x$. In many cases, it is convenient to assume a lower bound $x_{\min}$ from which the law holds. Combining these two cases, and where $x$ is a continuous variable, the power law has the form of the Pareto distribution

$p(x) = \frac{\alpha - 1}{x_{\min}} \left(\frac{x}{x_{\min}}\right)^{-\alpha},$
where the pre-factor $\frac{\alpha-1}{x_{\min}}$ is the normalizing constant. We can now consider several properties of this distribution. For instance, its moments are given by

$\langle x^{m} \rangle = \int_{x_{\min}}^{\infty} x^{m}\, p(x)\, dx = \frac{\alpha - 1}{\alpha - 1 - m}\, x_{\min}^{m},$
which is only well defined for $m < \alpha - 1$. That is, all moments $m \geq \alpha - 1$ diverge: when $\alpha \leq 2$, the average and all higher-order moments are infinite; when $2 < \alpha < 3$, the mean exists, but the variance and higher-order moments are infinite, etc. For finite-size samples drawn from such a distribution, this behavior implies that the central moment estimators (like the mean and the variance) for diverging moments will never converge: as more data is accumulated, they continue to grow. These power-law probability distributions are also called Pareto-type distributions, distributions with Pareto tails, or distributions with regularly varying tails.
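For the Pareto form above, the moment formula can be evaluated directly; the parameter values below are illustrative (α = 3.5 so that both the mean and the variance are finite):

```python
# For p(x) = ((alpha - 1) / x_min) * (x / x_min)**(-alpha), the m-th moment
# is (alpha - 1) / (alpha - 1 - m) * x_min**m, finite only for m < alpha - 1.
alpha, x_min = 3.5, 2.0

def moment(m):
    assert m < alpha - 1, "moment diverges"
    return (alpha - 1) / (alpha - 1 - m) * x_min ** m

mean = moment(1)       # finite since alpha > 2
second = moment(2)     # finite since alpha > 3
variance = second - mean ** 2
```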
A modification, which does not satisfy the general form above, with an exponential cutoff,[10] is

$p(x) \propto L(x)\, x^{-\alpha}\, e^{-\lambda x}.$
In this distribution, the exponential decay term $e^{-\lambda x}$ eventually overwhelms the power-law behavior at very large values of $x$. This distribution does not scale[further explanation needed] and thus does not asymptotically follow a power law; however, it does approximately scale over a finite region before the cutoff. The pure form above is a subset of this family, with $\lambda = 0$. This distribution is a common alternative to the asymptotic power-law distribution because it naturally captures finite-size effects.
TheTweedie distributionsare a family of statistical models characterized byclosureunder additive and reproductive convolution as well as under scale transformation. Consequently, these models all express a power-law relationship between the variance and the mean. These models have a fundamental role as foci of mathematicalconvergencesimilar to the role that thenormal distributionhas as a focus in thecentral limit theorem. This convergence effect explains why the variance-to-mean power law manifests so widely in natural processes, as withTaylor's lawin ecology and with fluctuation scaling[53]in physics. It can also be shown that this variance-to-mean power law, when demonstrated by themethod of expanding bins, implies the presence of 1/fnoise and that 1/fnoise can arise as a consequence of this Tweedie convergence effect.[54]
Although more sophisticated and robust methods have been proposed, the most frequently used graphical methods of identifying power-law probability distributions using random samples are Pareto quantile-quantile plots (or ParetoQ–Q plots),[citation needed]mean residual life plots[55][56]andlog–log plots. Another, more robust graphical method uses bundles of residual quantile functions.[57](Please keep in mind that power-law distributions are also called Pareto-type distributions.) It is assumed here that a random sample is obtained from a probability distribution, and that we want to know if the tail of the distribution follows a power law (in other words, we want to know if the distribution has a "Pareto tail"). Here, the random sample is called "the data".
Pareto Q–Q plots compare the quantiles of the log-transformed data to the corresponding quantiles of an exponential distribution with mean 1 (or to the quantiles of a standard Pareto distribution) by plotting the former versus the latter. If the resultant scatterplot suggests that the plotted points asymptotically converge to a straight line, then a power-law distribution should be suspected. A limitation of Pareto Q–Q plots is that they behave poorly when the tail index {\displaystyle \alpha } (also called the Pareto index) is close to 0, because Pareto Q–Q plots are not designed to identify distributions with slowly varying tails.[57]
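The construction can be sketched in a few lines of Python. This is illustrative only: the helper name and the plotting-position choice i/(n+1) are assumptions, not prescribed by any particular reference. For Pareto samples with tail index α, the log-transformed data are exponential with rate α, so the points fall near a line of slope 1/α:

```python
import numpy as np

def pareto_qq_points(data):
    """Points for a Pareto Q-Q plot: sorted log-data versus the quantiles
    of a unit-mean exponential distribution (plotting positions i/(n+1))."""
    logs = np.sort(np.log(np.asarray(data, dtype=float)))
    n = len(logs)
    p = np.arange(1, n + 1) / (n + 1.0)
    theoretical = -np.log1p(-p)          # exponential(1) quantiles
    return theoretical, logs

# Pareto samples with tail index alpha = 2 (x_min = 1) via inverse-CDF
# sampling; the Q-Q points should lie near a line of slope 1/alpha = 0.5.
rng = np.random.default_rng(0)
samples = (1.0 - rng.random(5000)) ** (-1.0 / 2.0)
theo, emp = pareto_qq_points(samples)
slope = np.polyfit(theo, emp, 1)[0]
```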
On the other hand, in its version for identifying power-law probability distributions, the mean residual life plot consists of first log-transforming the data, and then plotting the average of those log-transformed data that are higher than the i-th order statistic versus the i-th order statistic, for i = 1, ..., n, where n is the size of the random sample. If the resultant scatterplot suggests that the plotted points tend to stabilize about a horizontal straight line, then a power-law distribution should be suspected. Since the mean residual life plot is very sensitive to outliers (it is not robust), it usually produces plots that are difficult to interpret; for this reason, such plots are usually called Hill horror plots.[58]
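The construction above can be checked numerically. For a power-law tail the log-transformed data are exponential and hence memoryless, so the plotted means settle near a horizontal line at 1/α. This sketch uses illustrative names and assumes only NumPy:

```python
import numpy as np

def log_mean_residual_life(data):
    """For each order statistic y_i of log(data), the mean of the larger
    log-values minus y_i (the 'mean excess' above that threshold)."""
    y = np.sort(np.log(np.asarray(data, dtype=float)))
    n = len(y)
    suffix_sums = np.cumsum(y[::-1])[::-1]   # suffix_sums[i] = sum of y[i:]
    counts = np.arange(n - 1, 0, -1)         # number of points above y_i
    means = suffix_sums[1:] / counts - y[:-1]
    return y[:-1], means

# Samples with ccdf x^-2.5: log(data) is exponential with rate 2.5, so the
# plot should stabilize near 1/2.5 = 0.4.
rng = np.random.default_rng(1)
samples = (1.0 - rng.random(20000)) ** (-1.0 / 2.5)
thresholds, excess = log_mean_residual_life(samples)
mid = excess[len(excess) // 2]
```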
Log–log plots are an alternative way of graphically examining the tail of a distribution using a random sample. Taking the logarithm of a power law of the form {\displaystyle f(x)=ax^{k}} results in:[59]

{\displaystyle \log f(x)=k\log x+\log a,}
which forms a straight line with slope {\displaystyle k} on a log–log scale. Caution must be exercised, however, as a log–log plot provides necessary but insufficient evidence for a power-law relationship: many non-power-law distributions also appear as straight lines on a log–log plot.[10][60] This method consists of plotting the logarithm of an estimator of the probability that a particular number of the distribution occurs versus the logarithm of that particular number. Usually, this estimator is the proportion of times that the number occurs in the data set. If the points in the plot tend to converge to a straight line for large values on the x-axis, then the researcher concludes that the distribution has a power-law tail. Examples of the application of these types of plot have been published.[61] A disadvantage of these plots is that, in order for them to provide reliable results, they require huge amounts of data. In addition, they are appropriate only for discrete (or grouped) data.
Another graphical method for the identification of power-law probability distributions using random samples has been proposed.[57] This methodology consists of plotting a bundle for the log-transformed sample. Originally proposed as a tool to explore the existence of moments and the moment generating function using random samples, the bundle methodology is based on residual quantile functions (RQFs), also called residual percentile functions,[62][63][64][65][66][67][68] which provide a full characterization of the tail behavior of many well-known probability distributions, including power-law distributions, distributions with other types of heavy tails, and even non-heavy-tailed distributions. Bundle plots do not have the disadvantages of Pareto Q–Q plots, mean residual life plots and log–log plots mentioned above: they are robust to outliers, allow power laws with small values of {\displaystyle \alpha } to be identified visually, and do not demand large amounts of data. In addition, other types of tail behavior can be identified using bundle plots.
In general, power-law distributions are plotted on doubly logarithmic axes, which emphasizes the upper tail region. The most convenient way to do this is via the (complementary) cumulative distribution function (ccdf), that is, the survival function {\displaystyle P(x)=\mathrm {Pr} (X>x)},

{\displaystyle P(x)=\left({\frac {x}{x_{\min }}}\right)^{-(\alpha -1)}.}
The ccdf is also a power-law function, but with a smaller scaling exponent, {\displaystyle \alpha -1}. For data, an equivalent form of the ccdf is the rank-frequency approach, in which we first sort the {\displaystyle n} observed values in ascending order and plot them against the vector {\displaystyle \left[1,{\frac {n-1}{n}},{\frac {n-2}{n}},\dots ,{\frac {1}{n}}\right]}.
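A minimal sketch of the rank-frequency construction, assuming NumPy is available (the function name is illustrative). For a tail with ccdf proportional to x^−1.5 the points fall on a line of slope −1.5 on doubly logarithmic axes:

```python
import numpy as np

def empirical_survival(data):
    """Rank-frequency form of the empirical ccdf P(X > x): the sorted
    values paired with [1, (n-1)/n, ..., 1/n]."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    p = np.arange(n, 0, -1) / n
    return x, p

# Samples whose ccdf is x^-1.5 above x_min = 1 (inverse-CDF sampling).
rng = np.random.default_rng(2)
samples = (1.0 - rng.random(50000)) ** (-1.0 / 1.5)
x, p = empirical_survival(samples)
slope = np.polyfit(np.log(x), np.log(p), 1)[0]   # near -1.5 on log-log axes
```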
Although it can be convenient to log-bin the data, or otherwise smooth the probability density (mass) function directly, these methods introduce an implicit bias in the representation of the data, and thus should be avoided.[10][69] The survival function, on the other hand, is more robust to (but not without) such biases in the data and preserves the linear signature on doubly logarithmic axes. Though a survival-function representation is favored over that of the pdf when fitting a power law to the data with the linear least-squares method, it is not devoid of mathematical inaccuracy. Thus, when estimating the exponent of a power-law distribution, the maximum likelihood estimator is recommended.
There are many ways of estimating the value of the scaling exponent for a power-law tail; however, not all of them yield unbiased and consistent answers. Some of the most reliable techniques are often based on the method of maximum likelihood. Alternative methods are often based on making a linear regression on either the log–log probability, the log–log cumulative distribution function, or on log-binned data, but these approaches should be avoided, as they can all lead to highly biased estimates of the scaling exponent.[10]
For real-valued, independent and identically distributed data, we fit a power-law distribution of the form

{\displaystyle p(x)={\frac {\alpha -1}{x_{\min }}}\left({\frac {x}{x_{\min }}}\right)^{-\alpha }}
to the data {\displaystyle x\geq x_{\min }}, where the coefficient {\displaystyle {\frac {\alpha -1}{x_{\min }}}} is included to ensure that the distribution is normalized. Given a choice for {\displaystyle x_{\min }}, the log-likelihood function becomes:

{\displaystyle {\mathcal {L}}(\alpha )=n\ln {\frac {\alpha -1}{x_{\min }}}-\alpha \sum _{i=1}^{n}\ln {\frac {x_{i}}{x_{\min }}}}
The maximum of this likelihood is found by differentiating with respect to the parameter {\displaystyle \alpha } and setting the result equal to zero. Upon rearrangement, this yields the estimator equation:

{\displaystyle {\hat {\alpha }}=1+n\left[\sum _{i=1}^{n}\ln {\frac {x_{i}}{x_{\min }}}\right]^{-1}}
where {\displaystyle \{x_{i}\}} are the {\displaystyle n} data points {\displaystyle x_{i}\geq x_{\min }}.[2][70] This estimator exhibits a small finite sample-size bias of order {\displaystyle O(n^{-1})}, which is small when n > 100. Further, the standard error of the estimate is {\displaystyle \sigma ={\frac {{\hat {\alpha }}-1}{\sqrt {n}}}+O(n^{-1})}. This estimator is equivalent to the Hill estimator from quantitative finance and extreme value theory.
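The estimator and its standard error fit in a few lines. This sketch checks the formula by inverse-CDF sampling from a pure power law with exponent α = 2.5; the function name is illustrative and no library beyond the standard library is assumed:

```python
import math
import random

def power_law_alpha(data, x_min):
    """Continuous maximum-likelihood (Hill) estimate of alpha and its
    standard error sigma = (alpha_hat - 1) / sqrt(n)."""
    logs = [math.log(x / x_min) for x in data if x >= x_min]
    n = len(logs)
    alpha_hat = 1.0 + n / sum(logs)
    sigma = (alpha_hat - 1.0) / math.sqrt(n)
    return alpha_hat, sigma

# Inverse-CDF samples from p(x) proportional to x^-2.5 with x_min = 1:
# the ccdf is x^-1.5, so x = (1 - u)^(-1/1.5) for uniform u.
random.seed(3)
samples = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(20000)]
alpha_hat, sigma = power_law_alpha(samples, x_min=1.0)   # expect about 2.5
```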
For a set of n integer-valued data points {\displaystyle \{x_{i}\}}, again where each {\displaystyle x_{i}\geq x_{\min }}, the maximum likelihood exponent is the solution to the transcendental equation

{\displaystyle {\frac {\zeta '({\hat {\alpha }},x_{\min })}{\zeta ({\hat {\alpha }},x_{\min })}}=-{\frac {1}{n}}\sum _{i=1}^{n}\ln x_{i}}
where {\displaystyle \zeta (\alpha ,x_{\min })} is the Hurwitz (generalized) zeta function. The uncertainty in this estimate follows the same formula as for the continuous equation. However, the two equations for {\displaystyle {\hat {\alpha }}} are not equivalent, and the continuous version should not be applied to discrete data, nor vice versa.
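Rather than solving the transcendental equation directly, one can maximize the discrete log-likelihood numerically. This sketch assumes SciPy is available (`scipy.special.zeta(s, q)` computes the Hurwitz zeta function) and uses illustrative names; it is one reasonable implementation, not a canonical one:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import zeta   # zeta(s, q) is the Hurwitz zeta function

def discrete_alpha_mle(data, x_min=1):
    """Discrete power-law exponent via direct maximization of the
    log-likelihood of p(x) = x^-alpha / zeta(alpha, x_min)."""
    x = np.asarray([v for v in data if v >= x_min], dtype=float)
    n, log_sum = len(x), np.log(x).sum()
    def neg_log_likelihood(a):
        return n * np.log(zeta(a, x_min)) + a * log_sum
    result = minimize_scalar(neg_log_likelihood, bounds=(1.01, 6.0),
                             method="bounded")
    return result.x

# Zipf-distributed integers (support k >= 1) with true exponent 2.5.
rng = np.random.default_rng(4)
samples = rng.zipf(2.5, size=50000)
alpha_hat = discrete_alpha_mle(samples)   # expect a value near 2.5
```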
Further, both of these estimators require the choice of {\displaystyle x_{\min }}. For functions with a non-trivial {\displaystyle L(x)} function, choosing {\displaystyle x_{\min }} too small produces a significant bias in {\displaystyle {\hat {\alpha }}}, while choosing it too large increases the uncertainty in {\displaystyle {\hat {\alpha }}} and reduces the statistical power of our model. In general, the best choice of {\displaystyle x_{\min }} depends strongly on the particular form of the lower tail, represented by {\displaystyle L(x)} above.
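One common recipe is to scan candidate values of x_min, fit α by maximum likelihood on each tail, and keep the x_min whose fitted model is closest to the data in the Kolmogorov–Smirnov sense. The sketch below uses illustrative names and a simplified one-sided KS distance; it is an assumption-laden outline, not a reference implementation:

```python
import math
import random

def fit_alpha(tail, x_min):
    """Continuous maximum-likelihood exponent for the points above x_min."""
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def ks_distance(tail, x_min, alpha):
    """Max gap between empirical and fitted power-law cdfs on the tail
    (simplified: compares the model cdf only against (i+1)/n)."""
    tail = sorted(tail)
    n = len(tail)
    gaps = (abs(1.0 - (x / x_min) ** (1.0 - alpha) - (i + 1) / n)
            for i, x in enumerate(tail))
    return max(gaps)

def choose_x_min(data, candidates):
    """Scan candidate x_min values; keep the one whose fitted power law
    is closest to the empirical tail in the KS sense."""
    best = None
    for x_min in candidates:
        tail = [x for x in data if x >= x_min]
        if len(tail) < 50:               # too few points to fit reliably
            continue
        alpha = fit_alpha(tail, x_min)
        d = ks_distance(tail, x_min, alpha)
        if best is None or d < best[0]:
            best = (d, x_min, alpha)
    return best[1], best[2]

# Pure power-law data with exponent 2.5: any candidate should recover alpha.
random.seed(5)
samples = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(5000)]
x_min_hat, alpha_hat = choose_x_min(samples, [1.0, 1.5, 2.0])
```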
More about these methods, and the conditions under which they can be used, can be found in the review cited here.[10] Further, this comprehensive review article provides usable code (Matlab, Python, R and C++) for estimation and testing routines for power-law distributions.
Another method for the estimation of the power-law exponent, which does not assume independent and identically distributed (iid) data, uses the minimization of the Kolmogorov–Smirnov statistic, {\displaystyle D}, between the cumulative distribution functions of the data and the power law:

{\displaystyle {\hat {\alpha }}={\underset {\alpha }{\operatorname {arg\,min} }}\,D_{\alpha }}
with

{\displaystyle D_{\alpha }=\max _{x}|P_{\mathrm {emp} }(x)-P_{\alpha }(x)|}
where {\displaystyle P_{\mathrm {emp} }(x)} and {\displaystyle P_{\alpha }(x)} denote the cdfs of the data and of the power law with exponent {\displaystyle \alpha }, respectively. As this method does not assume iid data, it provides an alternative way to determine the power-law exponent for data sets in which the temporal correlation cannot be ignored.[5]
This criterion[71] can be applied for the estimation of the power-law exponent in the case of scale-free distributions and provides a more convergent estimate than the maximum likelihood method. It has been applied to study probability distributions of fracture apertures. In some contexts the probability distribution is described not by the cumulative distribution function but by the cumulative frequency of a property X, defined as the number of elements per meter (or area unit, second, etc.) for which X > x applies, where x is a variable real number. As an example, the cumulative distribution of the fracture aperture, X, for a sample of N elements is defined as the number of fractures per meter having an aperture greater than x. Use of cumulative frequency has some advantages, e.g. it allows one to put on the same diagram data gathered from sample lines of different lengths at different scales (e.g. from outcrop and from microscope).
Although power-law relations are attractive for many theoretical reasons, demonstrating that data do indeed follow a power-law relation requires more than simply fitting a particular model to the data.[34] This is important for understanding the mechanism that gives rise to the distribution: superficially similar distributions may arise for significantly different reasons, and different models can yield different predictions, for example when extrapolating beyond the observed data.
For example, log-normal distributions are often mistaken for power-law distributions:[72] a data set drawn from a lognormal distribution will be approximately linear for large values (corresponding to the upper tail of the lognormal being close to a power law), but for small values the lognormal will drop off significantly (bowing down), corresponding to the lower tail of the lognormal being small (there are very few small values, rather than the many small values of a power law).
For example, Gibrat's law about proportional growth processes produces distributions that are lognormal, although their log–log plots look linear over a limited range. This is because, although the logarithm of the lognormal density function is quadratic in log(x), yielding a "bowed" shape in a log–log plot, if the quadratic term is small relative to the linear term then the result can appear almost linear; the lognormal behavior becomes visible only when the quadratic term dominates, which may require significantly more data. Therefore, a log–log plot that is slightly "bowed" downwards can reflect a log-normal distribution, not a power law.
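This difference can be checked numerically: the apparent (Hill) exponent of a lognormal sample drifts upward as the fitting threshold is raised, while that of a genuine power-law sample stays essentially put. A sketch with illustrative names, assuming only the standard library:

```python
import math
import random

def hill_alpha(data, x_min):
    """Apparent power-law exponent of the tail above x_min (Hill estimate)."""
    logs = [math.log(x / x_min) for x in data if x >= x_min]
    return 1.0 + len(logs) / sum(logs)

random.seed(6)
lognormal = [math.exp(random.gauss(0.0, 2.0)) for _ in range(50000)]
power_law = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(50000)]

# Raising the threshold steepens a lognormal's apparent tail (the estimate
# drifts up), but leaves a genuine power law (alpha = 2.5) unchanged.
ln_lo = hill_alpha(lognormal, 1.0)
ln_hi = hill_alpha(lognormal, math.e ** 2)
pl_lo = hill_alpha(power_law, 1.0)
pl_hi = hill_alpha(power_law, 2.0)
```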
In general, many alternative functional forms can appear to follow a power-law form to some extent.[73] Stumpf & Porter (2012) proposed plotting the empirical cumulative distribution function in the log–log domain and claimed that a candidate power law should cover at least two orders of magnitude.[74] Also, researchers usually have to face the problem of deciding whether or not a real-world probability distribution follows a power law. As a solution to this problem, Diaz[57] proposed a graphical methodology based on random samples that allows one to visually discern between different types of tail behavior. This methodology uses bundles of residual quantile functions, also called percentile residual life functions, which characterize many different types of distribution tails, including both heavy and non-heavy tails. However, Stumpf & Porter (2012) argued that both a statistical and a theoretical grounding are needed to support a power law as the underlying mechanism driving the data-generating process.[74]
One method to validate a power-law relation tests many orthogonal predictions of a particular generative mechanism against data. Simply fitting a power-law relation to a particular kind of data is not considered a rational approach. As such, the validation of power-law claims remains a very active field of research in many areas of modern science.[10]
|
https://en.wikipedia.org/wiki/Power_law#Power-law_probability_distributions
|
Sturgeon's law (or Sturgeon's revelation) is an adage stating "ninety percent of everything is crap". It was coined by Theodore Sturgeon, an American science fiction author and critic, and was inspired by his observation that, while science fiction was often derided for its low quality by critics, most work in other fields was of low quality too, and so science fiction was no different.[1]
Sturgeon deemed Sturgeon's law to mean "nothing is always absolutely so".[2] By this, he meant his observation (building on "Sturgeon's Revelation", that the majority of everything is of low quality) that the existence of a majority of low-quality content in every genre disproves the idea that any single genre is inherently low-quality. This adage previously appeared in his story "The Claustrophile" in a 1956 issue of Galaxy.[3]
The second adage, variously rendered as "ninety percent of everything is crud" or "ninety percent of everything is crap", was published as "Sturgeon's Revelation" in his book review column for Venture[4] in 1957. However, almost all modern uses of the term Sturgeon's law refer to the second, including the definition listed in the Oxford English Dictionary.[5]
According to science fiction author William Tenn, Sturgeon first expressed his law circa 1951, at a talk at New York University attended by Tenn.[6] The statement was subsequently included in a talk Sturgeon gave at a 1953 Labor Day weekend session of the World Science Fiction Convention in Philadelphia.[7]
The first written reference to the adage is in the September 1957 issue of Venture:
And on that hangs Sturgeon's revelation. It came to him that [science fiction] is indeed ninety-percent crud, but that also – Eureka! – ninety-percent of everything is crud. All things – cars, books, cheeses, hairstyles, people, and pins are, to the expert and discerning eye, crud, except for the acceptable tithe which we each happen to like.[4]
The adage appears again in the March 1958 issue of Venture, where Sturgeon wrote:
It is in this vein that I repeat Sturgeon's Revelation, which was wrung out of me after twenty years of wearying defense of science fiction against attacks of people who used the worst examples of the field for ammunition, and whose conclusion was that ninety percent of S.F. is crud.
In the 1870 novel Lothair, Benjamin Disraeli asserted that:
Nine-tenths of existing books are nonsense, and the clever books are the refutation of that nonsense.[9]
A similar adage appears in Rudyard Kipling's The Light That Failed, published in 1890:
Four-fifths of everybody's work must be bad. But the remnant is worth the trouble for its own sake.[10]
George Orwell's 1946 essay Confessions of a Book Reviewer asserts about books:
In much more than nine cases out of ten the only objectively truthful criticism would be "This book is worthless ..."[11]
In 2009, a paper published in The Lancet estimated that over 85% of health and medical research is wasted.[12]
In 2013, philosopher Daniel Dennett championed Sturgeon's law as one of his seven tools for critical thinking.[13]
90% of everything is crap. That is true, whether you are talking about physics, chemistry, evolutionary psychology, sociology, medicine – you name it – rock music, country western. 90% of everything is crap.[14]
|
https://en.wikipedia.org/wiki/Sturgeon%27s_law
|
Rheology (/riːˈɒlədʒi/; from Greek ῥέω (rhéō) 'flow' and -λογία (-logia) 'study of') is the study of the flow of matter, primarily in a fluid (liquid or gas) state but also as "soft solids" or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force.[1] Rheology is the branch of physics that deals with the deformation and flow of materials, both solids and liquids.[1]
The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920 from a suggestion by a colleague, Markus Reiner.[2][3] The term was inspired by the aphorism of Heraclitus (often mistakenly attributed to Simplicius), panta rhei (πάντα ῥεῖ, 'everything flows'[4][5]), and was first used to describe the flow of liquids and the deformation of solids. It applies to substances that have a complex microstructure, such as muds, sludges, suspensions, polymers and other glass formers (e.g., silicates), as well as many foods and additives, bodily fluids (e.g., blood) and other biological materials, and other materials that belong to the class of soft matter.
Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate (the relative flow velocity) are called non-Newtonian fluids.
Rheology generally accounts for the behavior of non-Newtonian fluids by characterizing the minimum number of functions that are needed to relate stresses with the rate of change of strain or strain rates. For example, ketchup can have its viscosity reduced by shaking (or other forms of mechanical agitation, where the relative movement of different layers in the material actually causes the reduction in viscosity), but water cannot. Ketchup is a shear-thinning material, like yogurt and emulsion paint (US terminology latex paint or acrylic paint), exhibiting thixotropy, where an increase in relative flow velocity will cause a reduction in viscosity, for example, by stirring. Some other non-Newtonian materials show the opposite behavior, rheopecty (viscosity increasing with relative deformation), and are called shear-thickening or dilatant materials. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics.[1]
The experimental characterisation of a material's rheological behaviour is known as rheometry, although the term rheology is frequently used synonymously with rheometry, particularly by experimentalists. Theoretical aspects of rheology concern the relation between the flow/deformation behaviour of a material and its internal structure (e.g., the orientation and elongation of polymer molecules), and the flow/deformation behaviour of materials that cannot be described by classical fluid mechanics or elasticity.
In practice, rheology is principally concerned with extending continuum mechanics to characterize the flow of materials that exhibit a combination of elastic, viscous and plastic behavior by properly combining elasticity and (Newtonian) fluid mechanics. It is also concerned with predicting mechanical behavior (on the continuum mechanical scale) based on the micro- or nanostructure of the material, e.g. the molecular size and architecture of polymers in solution or the particle size distribution in a solid suspension.
Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress (e.g. shear, torsional, etc.), and materials can respond differently under different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses, internal strain gradients, and flow velocities.[1][6][7][8]
Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluid dynamics by recognizing that materials undergoing these types of deformation are unable to support a stress (particularly a shear stress, since it is easier to analyze shear deformation) in static equilibrium. In this sense, a solid undergoing plastic deformation is a fluid, although no viscosity coefficient is associated with this flow. Granular rheology refers to the continuum mechanical description of granular materials.
One of the major tasks of rheology is to establish by measurement the relationships between strains (or rates of strain) and stresses, although a number of theoretical developments (such as assuring frame invariance) are also required before using the empirical data. These experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics.
The characterization of flow or deformation originating from a simple shear stress field is called shear rheometry (or shear rheology). The study of extensional flows is called extensional rheology. Shear flows are much easier to study, and thus much more experimental data are available for shear flows than for extensional flows.
On one end of the spectrum we have an inviscid or a simple Newtonian fluid, and on the other end, a rigid solid; the behavior of all materials falls somewhere between these two extremes. The difference in material behavior is characterized by the level and nature of the elasticity present in the material when it deforms, which takes the material behavior into the non-Newtonian regime. The non-dimensional Deborah number is designed to account for the degree of non-Newtonian behavior in a flow. The Deborah number is defined as the ratio of the characteristic relaxation time (which depends purely on the material and other conditions, such as temperature) to the characteristic time of the experiment or observation.[3][10] Small Deborah numbers represent Newtonian flow, while non-Newtonian behavior (with both viscous and elastic effects present) occurs for intermediate-range Deborah numbers, and high Deborah numbers indicate an elastic/rigid solid. Since the Deborah number is a relative quantity, either the numerator or the denominator can alter it: a very small Deborah number can be obtained for a fluid with an extremely small relaxation time, or for a very large experimental time, for example.
In fluid mechanics, the Reynolds number is a measure of the ratio of inertial forces ({\displaystyle v_{s}\rho }) to viscous forces ({\displaystyle {\frac {\mu }{L}}}) and consequently it quantifies the relative importance of these two types of effect for given flow conditions. At low Reynolds numbers viscous effects dominate and the flow is laminar, whereas at high Reynolds numbers inertia predominates and the flow may be turbulent. However, since rheology is concerned with fluids which do not have a fixed viscosity, but one which can vary with flow and time, calculation of the Reynolds number can be complicated.
It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. When two geometrically similar flow patterns, in perhaps different fluids with possibly different flow rates, have the same values for the relevant dimensionless numbers, they are said to be dynamically similar.
Typically it is given as follows:

{\displaystyle \mathrm {Re} ={\frac {\rho v_{\mathrm {s} }L}{\mu }},}

where: {\displaystyle \rho } is the fluid density, {\displaystyle v_{\mathrm {s} }} is a mean (characteristic) fluid velocity, {\displaystyle L} is a characteristic length scale, and {\displaystyle \mu } is the dynamic fluid viscosity.
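As a small illustrative calculation (the function name is ours and the values for water are approximate):

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * v * L / mu, the ratio of inertial to viscous forces."""
    return density * velocity * length / dynamic_viscosity

# Water (rho ~ 998 kg/m^3, mu ~ 1e-3 Pa*s) flowing at 1 m/s through a
# 5 cm pipe gives Re ~ 5e4, well into the turbulent regime for pipe flow.
re = reynolds_number(density=998.0, velocity=1.0, length=0.05,
                     dynamic_viscosity=1.0e-3)
```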
Rheometers are instruments used to characterize the rheological properties of materials, typically fluids that are melts or solutions. These instruments impose a specific stress field or deformation on the fluid and monitor the resultant deformation or stress. Instruments can be run in steady flow or oscillatory flow, in both shear and extension.
Rheology has applications in materials science, engineering, geophysics, physiology, human biology and pharmaceutics. It is central to the production of many industrially important substances, such as cement, paint, and chocolate, which have complex flow characteristics. In addition, plasticity theory has been similarly important for the design of metal-forming processes. The science of rheology and the characterization of viscoelastic properties in the production and use of polymeric materials have been critical for the production of many products for use in both the industrial and military sectors.
The study of the flow properties of liquids is important for pharmacists working in the manufacture of several dosage forms, such as simple liquids, ointments, creams, and pastes. The flow behavior of liquids under applied stress is of great relevance in the field of pharmacy. Flow properties are used as important quality-control tools to maintain the consistency of the product and reduce batch-to-batch variations.
Examples may be given to illustrate the potential applications of these principles to practical problems in the processing[11] and use of rubbers, plastics, and fibers. Polymers constitute the basic materials of the rubber and plastic industries and are of vital importance to the textile, petroleum, automobile, paper, and pharmaceutical industries. Their viscoelastic properties determine the mechanical performance of the final products of these industries, and also the success of processing methods at intermediate stages of production.
In viscoelastic materials, such as most polymers and plastics, the presence of liquid-like behaviour depends on the rate at which a load is applied, i.e., how quickly a force is exerted. The silicone toy 'Silly Putty' behaves quite differently depending on the time rate of applying a force: pull on it slowly and it exhibits continuous flow, similar to that evidenced in a highly viscous liquid; hit it hard and directly, and it shatters like a silicate glass.
In addition, conventional rubber undergoes a glass transition (often called a rubber-glass transition). For example, the Space Shuttle Challenger disaster was caused by rubber O-rings that were being used well below their glass transition temperature on an unusually cold Florida morning, and thus could not flex adequately to form proper seals between sections of the two solid-fuel rocket boosters.
With the viscosity of a sol adjusted into a proper range, both optical-quality glass fiber and refractory ceramic fiber can be drawn, which are used for fiber-optic sensors and thermal insulation, respectively. The mechanisms of hydrolysis and condensation, and the rheological factors that bias the structure toward linear or branched structures, are the most critical issues of sol-gel science and technology.
The scientific discipline of geophysics includes the study of the flow of molten lava and of debris flows (fluid mudslides). This disciplinary branch also deals with solid Earth materials which only exhibit flow over extended time-scales. Those that display viscous behaviour are known as rheids. For example, granite can flow plastically with a negligible yield stress at room temperature (i.e. a viscous flow). Long-term creep experiments (~10 years) indicate that the viscosity of granite and glass under ambient conditions is on the order of 10²⁰ poises.[12][13]
Physiology includes the study of many bodily fluids that have complex structure and composition, and thus exhibit a wide range of viscoelastic flow characteristics. In particular there is a specialist study of blood flow called hemorheology. This is the study of the flow properties of blood and its elements (plasma and formed elements, including red blood cells, white blood cells and platelets). Blood viscosity is determined by plasma viscosity, hematocrit (the volume fraction of red blood cells, which constitute 99.9% of the cellular elements) and the mechanical behaviour of red blood cells. Therefore, red blood cell mechanics is the major determinant of the flow properties of blood. (The ocular vitreous humor is also subject to rheologic observation, particularly during studies of age-related vitreous liquefaction, or synaeresis.)[14]
The leading characteristic of hemorheology has been shear thinning in steady shear flow. Other non-Newtonian rheological characteristics that blood can demonstrate include pseudoplasticity, viscoelasticity, and thixotropy.[15]
There are two current major hypotheses to explain blood flow predictions and shear-thinning responses. Both models also attempt to explain the drive for reversible red blood cell aggregation, although the mechanism is still being debated; red blood cell aggregation has a direct effect on blood viscosity and circulation.[16] The foundation of hemorheology can also provide information for modeling other biofluids.[15] The bridging or "cross-bridging" hypothesis suggests that macromolecules physically crosslink adjacent red blood cells into rouleaux structures; this occurs through adsorption of macromolecules onto the red blood cell surfaces.[15][16] The depletion-layer hypothesis suggests the opposite mechanism: the surfaces of the red blood cells are pushed together by an osmotic pressure gradient created by overlapping depletion layers.[15] The tendency toward rouleaux aggregation can be explained by hematocrit and fibrinogen concentration in whole-blood rheology.[15] Techniques researchers use to measure cell interaction in vitro include optical trapping and microfluidics.[16]
Changes in viscosity have been shown to be linked with diseases such as hyperviscosity, hypertension, sickle cell anemia, and diabetes.[15] Hemorheological measurements and genomic testing technologies act as preventative measures and diagnostic tools.[15][17]
Hemorheology has also been correlated with aging effects, especially impaired blood fluidity, and studies have shown that physical activity may counteract the age-related thickening of the blood.[18]
Many animals make use of rheological phenomena: for example, sandfish exploit the granular rheology of dry sand to "swim" in it, and land gastropods use snail slime for adhesive locomotion. Certain animals produce specialized endogenous complex fluids, such as the sticky slime produced by velvet worms to immobilize prey or the fast-gelling underwater slime secreted by hagfish to deter predators.[19]
Food rheology is important in the manufacture and processing of food products, such as cheese[20] and gelato.[21] An adequate rheology is important to the enjoyment of many common foods, particularly in the case of sauces,[22] dressings,[23] yogurt,[24] or fondue.[25]
Thickening agents, or thickeners, are substances which, when added to an aqueous mixture, increase its viscosity without substantially modifying its other properties, such as taste. They provide body, increase stability, and improve the suspension of added ingredients. Thickening agents are often used as food additives and in cosmetics and personal hygiene products. Some thickening agents are gelling agents, forming a gel. The agents are materials used to thicken and stabilize liquid solutions, emulsions, and suspensions. They dissolve in the liquid phase as a colloid mixture that forms a weakly cohesive internal structure. Food thickeners are frequently based on either polysaccharides (starches, vegetable gums, and pectin) or proteins.[26][27]
The workability of concrete and mortar is related to the rheological properties of the fresh cement paste. The mechanical properties of hardened concrete increase if less water is used in the concrete mix design; however, reducing the water-to-cement ratio may decrease the ease of mixing and application. To avoid these undesired effects, superplasticizers are typically added to decrease the apparent yield stress and the viscosity of the fresh paste. Their addition greatly improves concrete and mortar properties.[28]
The incorporation of various types of fillers into polymers is a common means of reducing cost and of imparting certain desirable mechanical, thermal, electrical and magnetic properties to the resulting material. The advantages that filled polymer systems offer come with an increased complexity in their rheological behavior.[29]
Usually when the use of fillers is considered, a compromise has to be made between the improved mechanical properties in the solid state on one side and, on the other, the increased difficulty in melt processing, the problem of achieving uniformdispersionof the filler in the polymer matrix, and the cost added by the compounding step. The rheological properties of filled polymers are determined not only by the type and amount of filler, but also by the shape, size and size distribution of its particles. The viscosity of filled systems generally increases with increasing filler fraction. This can be partially ameliorated via broad particle size distributions via theFarris effect.[30]An additional factor is thestresstransfer at the filler-polymer interface. The interfacial adhesion can be substantially enhanced via a coupling agent that adheres well to both the polymer and the filler particles. The type and amount ofsurface treatmenton the filler are thus additional parameters affecting the rheological and material properties of filled polymeric systems.
It is important to take into consideration wall slip when performing the rheological characterization of highly filled materials, as there can be a large difference between the actual strain and the measured strain.[31]
A rheologist is aninterdisciplinaryscientist or engineer who studies the flow of complex liquids or the deformation of soft solids. Rheology is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the physical sciences (e.g.chemistry,physics,geology,biology), engineering (e.g.mechanical,chemical,materials scienceand engineering, plastics engineering, orcivil engineering),medicine, or certain technologies, notablymaterialsorfood. Typically, a small amount of rheology may be studied when obtaining a degree, but a person working in rheology will extend this knowledge during postgraduate research or by attending short courses and by joining a professional association.
https://en.wikipedia.org/wiki/Rheology
TheNavier–Stokes equations(/nævˈjeɪstoʊks/nav-YAYSTOHKS) arepartial differential equationswhich describe the motion ofviscous fluidsubstances. They were named after French engineer and physicistClaude-Louis Navierand the Irish physicist and mathematicianGeorge Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically expressmomentumbalance forNewtonian fluidsand make use ofconservation of mass. They are sometimes accompanied by anequation of staterelatingpressure,temperatureanddensity.[1]They arise from applyingIsaac Newton's second lawtofluid motion, together with the assumption that thestressin the fluid is the sum of adiffusingviscousterm (proportional to thegradientof velocity) and apressureterm—hence describingviscous flow. The difference between them and the closely relatedEuler equationsis that Navier–Stokes equations takeviscosityinto account while the Euler equations model onlyinviscid flow. As a result, the Navier–Stokes are aparabolic equationand therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are nevercompletely integrable).
The Navier–Stokes equations are useful because they describe the physics of many phenomena ofscientificandengineeringinterest. They may be used tomodelthe weather,ocean currents, waterflow in a pipeand air flow around awing. The Navier–Stokes equations, in their full and simplified forms, help with the design ofaircraftand cars, the study ofblood flow, the design ofpower stations, the analysis ofpollution, and many other problems. Coupled withMaxwell's equations, they can be used to model and studymagnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions alwaysexistin three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in thedomain. This is called theNavier–Stokes existence and smoothnessproblem. TheClay Mathematics Institutehas called this one of theseven most important open problems in mathematicsand has offered aUS$1 million prize for a solution or a counterexample.[2][3]
The solution of the equations is aflow velocity. It is avector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such aspressureortemperaturemay be found using dynamical equations and relations. This is different from what one normally sees inclassical mechanics, where solutions are typically trajectories of position of aparticleor deflection of acontinuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute varioustrajectories. In particular, thestreamlinesof a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are theintegral curveswhose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
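The streamlines described above can be traced numerically by following the integral curves of a velocity field. A minimal sketch (the field and step sizes are illustrative assumptions, not from the source): for the steady 2D rigid-rotation field u(x, y) = (−y, x), the integral curve through (1, 0) is the unit circle, which a simple forward-Euler integration approximately recovers.

```python
# Hypothetical example: tracing a streamline of a steady 2D velocity
# field u(x, y) = (-y, x) (rigid rotation) with forward Euler steps.
# The integral curve of this field through (1, 0) is the unit circle.
import math

def velocity(x, y):
    # A simple solenoidal field chosen for illustration (div u = 0).
    return -y, x

def streamline(x, y, dt=1e-4, steps=10000):
    # Follow dx/dt = u(x) with forward Euler; returns the end point.
    for _ in range(steps):
        ux, uy = velocity(x, y)
        x, y = x + dt * ux, y + dt * uy
    return x, y

x, y = streamline(1.0, 0.0, dt=1e-4, steps=10000)
# After total time t = 1.0 the particle should sit near (cos 1, sin 1).
radius = math.hypot(x, y)
```

In practice higher-order integrators (e.g. Runge–Kutta) are preferred, since forward Euler slowly spirals outward on rotational fields.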
The Navier–Stokes momentum equation can be derived as a particular form of theCauchy momentum equation, whose general convective form is:DuDt=1ρ∇⋅σ+f.{\displaystyle {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} .}By setting theCauchy stress tensorσ{\textstyle {\boldsymbol {\sigma }}}to be the sum of a viscosity termτ{\textstyle {\boldsymbol {\tau }}}(thedeviatoric stress) and a pressure term−pI{\textstyle -p\mathbf {I} }(volumetric stress), we arrive at:
ρDuDt=−∇p+∇⋅τ+ρa{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}=-\nabla p+\nabla \cdot {\boldsymbol {\tau }}+\rho \,\mathbf {a} }
where
In this form, it is apparent that under the assumption of an inviscid fluid – no deviatoric stress – the Cauchy equations reduce to theEuler equations.
Assumingconservation of mass, with the known properties ofdivergenceandgradientwe can use the masscontinuity equation, which represents the mass per unit volume of ahomogeneousfluid with respect to space and time (i.e.,material derivativeDDt{\displaystyle {\frac {\mathbf {D} }{\mathbf {Dt} }}}) of any finite volume (V) to represent the change of velocity in fluid media:DmDt=∭V(DρDt+ρ(∇⋅u))dVDρDt+ρ(∇⋅u)=∂ρ∂t+(∇ρ)⋅u+ρ(∇⋅u)=∂ρ∂t+∇⋅(ρu)=0{\displaystyle {\begin{aligned}{\frac {\mathbf {D} m}{\mathbf {Dt} }}&={\iiint \limits _{V}}\left({{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )}\right)dV\\{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot {\mathbf {u} })&={\frac {\partial \rho }{\partial t}}+({\nabla \rho })\cdot {\mathbf {u} }+{\rho }(\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0\end{aligned}}}where
Note 1: the nabla symbol(∇{\displaystyle \nabla })denotes the mathematical operatordel.
to arrive at the conservation form of the equations of motion. This is often written:[4]
∂∂t(ρu)+∇⋅(ρu⊗u)=−∇p+∇⋅τ+ρa{\displaystyle {\frac {\partial }{\partial t}}(\rho \,\mathbf {u} )+\nabla \cdot (\rho \,\mathbf {u} \otimes \mathbf {u} )=-\nabla p+\nabla \cdot {\boldsymbol {\tau }}+\rho \,\mathbf {a} }
where⊗{\textstyle \otimes }is theouter productof the flow velocity (u{\displaystyle \mathbf {u} }):u⊗u=uuT{\displaystyle \mathbf {u} \otimes \mathbf {u} =\mathbf {u} \mathbf {u} ^{\mathrm {T} }}
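The outer product u ⊗ u = u uᵀ can be made concrete with a tiny sketch (plain nested lists, values chosen arbitrarily for illustration):

```python
# Minimal sketch: the outer product u (x) u = u u^T for a 3-vector,
# using plain nested lists (no external libraries).
def outer(u, v):
    # (outer(u, v))[i][j] == u[i] * v[j]
    return [[ui * vj for vj in v] for ui in u]

u = [1.0, 2.0, 3.0]
uu = outer(u, u)
# uu is symmetric, since uu[i][j] == u[i] * u[j] == uu[j][i].
```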
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through aconstitutive relation. By expressing the deviatoric (shear) stress tensor in terms ofviscosityand the fluidvelocitygradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Remark: here, the deviatoric stress tensor is denotedτ{\textstyle {\boldsymbol {\tau }}}as it was in thegeneral continuum equationsand in theincompressible flow section.
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:[5]
σ(ε)=−pI+λtr(ε)I+2με{\displaystyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +\lambda \operatorname {tr} ({\boldsymbol {\varepsilon }})\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}}
whereI{\textstyle \mathbf {I} }is theidentity tensor, andtr(ε){\textstyle \operatorname {tr} ({\boldsymbol {\varepsilon }})}is thetraceof the rate-of-strain tensor. So this decomposition can be explicitly defined as:σ=−pI+λ(∇⋅u)I+μ(∇u+(∇u)T).{\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\lambda (\nabla \cdot \mathbf {u} )\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).}
Since thetraceof the rate-of-strain tensor in three dimensions is thedivergence(i.e. rate of expansion) of the flow:tr(ε)=∇⋅u.{\displaystyle \operatorname {tr} ({\boldsymbol {\varepsilon }})=\nabla \cdot \mathbf {u} .}
Given this relation, and since the trace of the identity tensor in three dimensions is three:tr(I)=3.{\displaystyle \operatorname {tr} ({\boldsymbol {I}})=3.}
the trace of the stress tensor in three dimensions becomes:tr(σ)=−3p+(3λ+2μ)∇⋅u.{\displaystyle \operatorname {tr} ({\boldsymbol {\sigma }})=-3p+(3\lambda +2\mu )\nabla \cdot \mathbf {u} .}
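The trace identity above can be checked numerically. A sketch with illustrative values (the numbers for p, λ, μ and the strain tensor are arbitrary assumptions): build σ = −pI + λ tr(ε)I + 2με componentwise and compare tr(σ) against −3p + (3λ + 2μ) tr(ε).

```python
# Numerical sanity check (illustrative values): with
# sigma = -p I + lam tr(eps) I + 2 mu eps, the trace reduces to
# tr(sigma) = -3 p + (3 lam + 2 mu) tr(eps), where tr(eps) = div u.
p, lam, mu = 2.0, 0.5, 1.3
eps = [[0.1, 0.02, 0.0],
       [0.02, -0.05, 0.01],
       [0.0, 0.01, 0.2]]  # a symmetric rate-of-strain tensor

tr_eps = sum(eps[i][i] for i in range(3))
sigma = [[-p * (i == j) + lam * tr_eps * (i == j) + 2 * mu * eps[i][j]
          for j in range(3)] for i in range(3)]
tr_sigma = sum(sigma[i][i] for i in range(3))
# tr_sigma agrees with the closed form -3*p + (3*lam + 2*mu) * tr_eps.
```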
So by alternatively decomposing the stress tensor intoisotropicanddeviatoricparts, as usual in fluid dynamics:[6]σ=−[p−(λ+23μ)(∇⋅u)]I+μ(∇u+(∇u)T−23(∇⋅u)I){\displaystyle {\boldsymbol {\sigma }}=-\left[p-\left(\lambda +{\tfrac {2}{3}}\mu \right)\left(\nabla \cdot \mathbf {u} \right)\right]\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }-{\tfrac {2}{3}}\left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right)}
Introducing thebulk viscosityζ{\textstyle \zeta },ζ≡λ+23μ,{\displaystyle \zeta \equiv \lambda +{\tfrac {2}{3}}\mu ,}
we arrive at the linearconstitutive equationin the form usually employed inthermal hydraulics:[5]
σ=−[p−ζ(∇⋅u)]I+μ[∇u+(∇u)T−23(∇⋅u)I]{\displaystyle {\boldsymbol {\sigma }}=-[p-\zeta (\nabla \cdot \mathbf {u} )]\mathbf {I} +\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]}
which can also be arranged in the other usual form:[7]σ=−pI+μ(∇u+(∇u)T)+(ζ−23μ)(∇⋅u)I.{\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)+\left(\zeta -{\frac {2}{3}}\mu \right)(\nabla \cdot \mathbf {u} )\mathbf {I} .}
Note that in the compressible case the pressure is no longer proportional to theisotropic stressterm, since there is the additional bulk viscosity term:p=−13tr(σ)+ζ(∇⋅u){\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta (\nabla \cdot \mathbf {u} )}
and thedeviatoric stress tensorσ′{\displaystyle {\boldsymbol {\sigma }}'}is still coincident with the shear stress tensorτ{\displaystyle {\boldsymbol {\tau }}}(i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
σ′=τ=μ[∇u+(∇u)T−23(∇⋅u)I]{\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]}
Both bulk viscosityζ{\textstyle \zeta }and dynamic viscosityμ{\textstyle \mu }need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, for example pressure and temperature. Any equation that makes explicit one of thesetransport coefficientsin theconservation variablesis called anequation of state.[8]
The most general form of the Navier–Stokes equations then becomes
ρDuDt=ρ(∂u∂t+(u⋅∇)u)=−∇p+∇⋅{μ[∇u+(∇u)T−23(∇⋅u)I]}+∇[ζ(∇⋅u)]+ρa.{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}=\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} \right)=-\nabla p+\nabla \cdot \left\{\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right\}+\nabla [\zeta (\nabla \cdot \mathbf {u} )]+\rho \mathbf {a} .}
In index notation, the equation can be written as[9]
ρ(∂ui∂t+uk∂ui∂xk)=−∂p∂xi+∂∂xk[μ(∂ui∂xk+∂uk∂xi−23δik∂ul∂xl)]+∂∂xi(ζ∂ul∂xl)+ρai.{\displaystyle \rho \left({\frac {\partial u_{i}}{\partial t}}+u_{k}{\frac {\partial u_{i}}{\partial x_{k}}}\right)=-{\frac {\partial p}{\partial x_{i}}}+{\frac {\partial }{\partial x_{k}}}\left[\mu \left({\frac {\partial u_{i}}{\partial x_{k}}}+{\frac {\partial u_{k}}{\partial x_{i}}}-{\frac {2}{3}}\delta _{ik}{\frac {\partial u_{l}}{\partial x_{l}}}\right)\right]+{\frac {\partial }{\partial x_{i}}}\left(\zeta {\frac {\partial u_{l}}{\partial x_{l}}}\right)+\rho a_{i}.}
The corresponding equation in conservation form can be obtained by considering that, given the masscontinuity equation, the left side is equivalent to:
ρDuDt=∂∂t(ρu)+∇⋅(ρu⊗u){\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} )}
To give finally:
∂∂t(ρu)+∇⋅(ρu⊗u+[p−ζ(∇⋅u)]I−μ[∇u+(∇u)T−23(∇⋅u)I])=ρa.{\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot \left(\rho \mathbf {u} \otimes \mathbf {u} +[p-\zeta (\nabla \cdot \mathbf {u} )]\mathbf {I} -\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right)=\rho \mathbf {a} .}
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. For example, in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is calleddispersion. In some cases, thesecond viscosityζ{\textstyle \zeta }can be assumed to be constant, in which case the effect of the volume viscosityζ{\textstyle \zeta }is that the mechanical pressure is not equivalent to the thermodynamicpressure,[10]as demonstrated below. Since∇⋅(∇⋅u)I=∇(∇⋅u),{\displaystyle \nabla \cdot (\nabla \cdot \mathbf {u} )\mathbf {I} =\nabla (\nabla \cdot \mathbf {u} ),}the bulk viscosity term can be absorbed into the pressure by defining a mechanical pressurep¯≡p−ζ∇⋅u.{\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} ,}However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves,[11]where the second viscosity coefficient becomes important) by explicitly assumingζ=0{\textstyle \zeta =0}. The assumptionζ=0{\textstyle \zeta =0}is called theStokes hypothesis.[12]The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from the kinetic theory;[13]for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
ρDuDt=ρ(∂u∂t+(u⋅∇)u)=−∇p+∇⋅{μ[∇u+(∇u)T−23(∇⋅u)I]}+ρa.{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}=\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} \right)=-\nabla p+\nabla \cdot \left\{\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right\}+\rho \mathbf {a} .}
If the dynamicμand bulkζ{\displaystyle \zeta }viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of tensor∇u{\textstyle \nabla \mathbf {u} }is∇2u{\textstyle \nabla ^{2}\mathbf {u} }and the divergence of tensor(∇u)T{\textstyle \left(\nabla \mathbf {u} \right)^{\mathrm {T} }}is∇(∇⋅u){\textstyle \nabla \left(\nabla \cdot \mathbf {u} \right)}, one finally arrives at the compressible Navier–Stokes momentum equation:[14]
DuDt=−1ρ∇p+ν∇2u+(13ν+ξ)∇(∇⋅u)+a.{\displaystyle {\frac {D\mathbf {u} }{Dt}}=-{\frac {1}{\rho }}\nabla p+\nu \,\nabla ^{2}\mathbf {u} +({\tfrac {1}{3}}\nu +\xi )\,\nabla (\nabla \cdot \mathbf {u} )+\mathbf {a} .}
whereDDt{\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}}is thematerial derivative.ν=μρ{\displaystyle \nu ={\frac {\mu }{\rho }}}is the shearkinematic viscosityandξ=ζρ{\displaystyle \xi ={\frac {\zeta }{\rho }}}is the bulk kinematic viscosity. The left-hand side changes in the conservation form of the Navier–Stokes momentum equation.
By bringing the operators acting on the flow velocity to the left side, one also has:
(∂∂t+u⋅∇−ν∇2−(13ν+ξ)∇(∇⋅))u=−1ρ∇p+a.{\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla -\nu \,\nabla ^{2}-({\tfrac {1}{3}}\nu +\xi )\,\nabla (\nabla \cdot )\right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\mathbf {a} .}
The convective acceleration term can also be written asu⋅∇u=(∇×u)×u+12∇u2,{\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =(\nabla \times \mathbf {u} )\times \mathbf {u} +{\tfrac {1}{2}}\nabla \mathbf {u} ^{2},}where the vector(∇×u)×u{\textstyle (\nabla \times \mathbf {u} )\times \mathbf {u} }is known as theLamb vector.
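The Lamb-vector identity above can be verified numerically with finite differences. A sketch under stated assumptions (the test field u = (y, −x), the evaluation point, and the step size are all illustrative choices): in 2D the curl reduces to the scalar w = ∂u₂/∂x − ∂u₁/∂y, and (∇×u)×u = (−w u₂, w u₁).

```python
# Finite-difference check (illustrative) of the identity
#   (u . grad) u = (curl u) x u + (1/2) grad |u|^2
# for the 2D field u = (y, -x), treated as (y, -x, 0) in 3D.
h = 1e-6
x0, y0 = 0.7, -0.3

def u(x, y):
    return (y, -x)

# (u . grad) u at (x0, y0) via central differences.
ux, uy = u(x0, y0)
dudx = [(a - b) / (2 * h) for a, b in zip(u(x0 + h, y0), u(x0 - h, y0))]
dudy = [(a - b) / (2 * h) for a, b in zip(u(x0, y0 + h), u(x0, y0 - h))]
conv = [ux * dudx[i] + uy * dudy[i] for i in range(2)]

# Right-hand side: in 2D the curl is the scalar w = d(u2)/dx - d(u1)/dy,
# and (curl u) x u = (-w * u2, w * u1).
w = dudx[1] - dudy[0]           # equals -2 for this field
lamb = [-w * uy, w * ux]
half_grad = [x0, y0]            # (1/2) grad (x^2 + y^2) = (x, y)
rhs = [lamb[i] + half_grad[i] for i in range(2)]
# conv and rhs agree up to finite-difference error.
```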
For the special case of anincompressible flow, the pressure constrains the flow so that the volume offluid elementsis constant:isochoric flowresulting in asolenoidalvelocity field with∇⋅u=0{\textstyle \nabla \cdot \mathbf {u} =0}.[15]
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:[5]
whereε=12(∇u+∇uT){\displaystyle {\boldsymbol {\varepsilon }}={\tfrac {1}{2}}\left(\mathbf {\nabla u} +\mathbf {\nabla u} ^{\mathrm {T} }\right)}is the rate-of-strain tensor. So this decomposition can be made explicit as:[5]σ=−pI+μ(∇u+(∇u)T){\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)}
This constitutive equation is also called theNewtonian law of viscosity.
Dynamic viscosityμneed not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of thesetransport coefficientsin theconservative variablesis called anequation of state.[8]
The divergence of the deviatoric stress in case of uniform viscosity is given by:∇⋅τ=2μ∇⋅ε=μ∇⋅(∇u+∇uT)=μ∇2u{\displaystyle \nabla \cdot {\boldsymbol {\tau }}=2\mu \nabla \cdot {\boldsymbol {\varepsilon }}=\mu \nabla \cdot \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{\mathrm {T} }\right)=\mu \,\nabla ^{2}\mathbf {u} }because∇⋅u=0{\textstyle \nabla \cdot \mathbf {u} =0}for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound orshock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well for all fluids at lowMach numbers(say up to about Mach 0.3), such as for modelling air winds at normal temperatures.[16]The incompressible Navier–Stokes equations are best visualized by dividing by the density:[17]
DuDt=∂u∂t+(u⋅∇)u=ν∇2u−1ρ∇p+1ρf{\displaystyle {\frac {D\mathbf {u} }{Dt}}={\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} =\nu \,\nabla ^{2}\mathbf {u} -{\frac {1}{\rho }}\nabla p+{\frac {1}{\rho }}\mathbf {f} }
whereν=μρ{\textstyle \nu ={\frac {\mu }{\rho }}}is called thekinematic viscosity.
By isolating the fluid velocity, one can also state:
(∂∂t+u⋅∇−ν∇2)u=−1ρ∇p+1ρf.{\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla -\nu \,\nabla ^{2}\right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+{\frac {1}{\rho }}\mathbf {f} .}
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density,ρ{\textstyle \rho }, then we have
DuDt=ν∇2u−∇pρ+1ρf,{\displaystyle {\frac {D\mathbf {u} }{Dt}}=\nu \,\nabla ^{2}\mathbf {u} -\nabla {\frac {p}{\rho }}+{\frac {1}{\rho }}\mathbf {f} ,}
wherep/ρ{\textstyle p/\rho }is called the unitpressure head.
In incompressible flows, the pressure field satisfies thePoisson equation,[9]∇2p=−ρ∇⋅((u⋅∇)u)+∇⋅f,{\displaystyle \nabla ^{2}p=-\rho \,\nabla \cdot \left(\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} \right)+\nabla \cdot \mathbf {f} ,}which is obtained by taking the divergence of the momentum equations.
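In numerical practice this pressure Poisson equation is solved iteratively. A minimal sketch, not from the source (the grid size, Jacobi iteration, and boundary data p = x are illustrative assumptions): with zero right-hand side and linear Dirichlet data, the exact solution is the harmonic function p(x, y) = x, which the iteration recovers.

```python
# Hedged sketch: in projection methods the pressure Poisson equation
# del^2 p = b is solved numerically; here, Jacobi iteration on a small
# uniform grid with b = 0 and Dirichlet data p = x on the boundary,
# whose exact solution is the harmonic function p(x, y) = x.
n, h = 11, 0.1                      # (n x n) grid points, spacing h
p = [[0.0] * n for _ in range(n)]
for i in range(n):                  # boundary condition p = x
    for j in range(n):
        if i in (0, n - 1) or j in (0, n - 1):
            p[i][j] = j * h

for _ in range(500):                # Jacobi sweeps
    q = [row[:] for row in p]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            q[i][j] = 0.25 * (p[i + 1][j] + p[i - 1][j]
                              + p[i][j + 1] + p[i][j - 1])
    p = q

err = max(abs(p[i][j] - j * h) for i in range(n) for j in range(n))
```

Real solvers replace Jacobi with multigrid or FFT-based methods, but the structure (discrete Laplacian, boundary data, iterate to convergence) is the same.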
Velocity profile (laminar flow): assumingux=u(y),uy=0,uz=0{\displaystyle u_{x}=u(y),\quad u_{y}=0,\quad u_{z}=0}the Navier–Stokes equation simplifies in thex-direction to:0=−dPdx+μ(d2udy2){\displaystyle 0=-{\frac {\mathrm {d} P}{\mathrm {d} x}}+\mu \left({\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}\right)}
Integrate twice to find the velocity profile with boundary conditionsy=h,u= 0,y= −h,u= 0:u=12μdPdxy2+Ay+B{\displaystyle u={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}y^{2}+Ay+B}
From this equation, substitute in the two boundary conditions to get two equations:0=12μdPdxh2+Ah+B0=12μdPdxh2−Ah+B{\displaystyle {\begin{aligned}0&={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}h^{2}+Ah+B\\0&={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}h^{2}-Ah+B\end{aligned}}}
Add and solve forB:B=−12μdPdxh2{\displaystyle B=-{\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}h^{2}}
Substitute and solve forA:A=0{\displaystyle A=0}
Finally this gives the velocity profile:u=12μdPdx(y2−h2){\displaystyle u={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}\left(y^{2}-h^{2}\right)}
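The resulting parabolic profile can be checked numerically. A sketch with illustrative values (μ, dP/dx and h are arbitrary assumptions, not from the source): the profile vanishes at the walls y = ±h and peaks midway at −(dP/dx) h² / (2μ), which is positive for a favorable (negative) pressure gradient.

```python
# Checking the plane Poiseuille profile u(y) = (1/(2 mu)) dP/dx (y^2 - h^2)
# with illustrative values: it vanishes at the walls y = +/- h and peaks
# at the centerline with u_max = -(dP/dx) h^2 / (2 mu).
mu, dPdx, h = 1.0e-3, -2.0, 0.01   # SI-like values, chosen for illustration

def u(y):
    return dPdx / (2.0 * mu) * (y * y - h * h)

u_wall = u(h)         # no-slip: zero at the wall
u_max = u(0.0)        # centerline maximum, here 0.1
```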
It is well worth observing the meaning of each term (compare to theCauchy momentum equation):
∂u∂t⏟Variation+(u⋅∇)u⏟Convectiveacceleration⏞Inertia (per volume)=∂∂−∇w⏟Internalsource+ν∇2u⏟Diffusion⏞Divergence of stress+g⏟Externalsource.{\displaystyle \overbrace {{\vphantom {\frac {}{}}}\underbrace {\frac {\partial \mathbf {u} }{\partial t}} _{\text{Variation}}+\underbrace {{\vphantom {\frac {}{}}}(\mathbf {u} \cdot \nabla )\mathbf {u} } _{\begin{smallmatrix}{\text{Convective}}\\{\text{acceleration}}\end{smallmatrix}}} ^{\text{Inertia (per volume)}}=\overbrace {{\vphantom {\frac {\partial }{\partial }}}\underbrace {{\vphantom {\frac {}{}}}-\nabla w} _{\begin{smallmatrix}{\text{Internal}}\\{\text{source}}\end{smallmatrix}}+\underbrace {{\vphantom {\frac {}{}}}\nu \nabla ^{2}\mathbf {u} } _{\text{Diffusion}}} ^{\text{Divergence of stress}}+\underbrace {{\vphantom {\frac {}{}}}\mathbf {g} } _{\begin{smallmatrix}{\text{External}}\\{\text{source}}\end{smallmatrix}}.}
The higher-order term, namely theshear stressdivergence∇⋅τ{\textstyle \nabla \cdot {\boldsymbol {\tau }}}, has simply reduced to thevector Laplaciantermμ∇2u{\textstyle \mu \nabla ^{2}\mathbf {u} }.[18]This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as adiffusion of momentum, in much the same way as theheat conduction. In fact neglecting the convection term, incompressible Navier–Stokes equations lead to a vectordiffusion equation(namelyStokes equations), but in general the convection term is present, so incompressible Navier–Stokes equations belong to the class ofconvection–diffusion equations.
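The convection–diffusion structure noted above can be illustrated with a hedged 1D analogue (not the full Navier–Stokes system): the viscous Burgers equation u_t + u u_x = ν u_xx, advanced with forward Euler in time and central differences in space; grid size, viscosity and time step below are illustrative choices.

```python
# Hedged 1D analogue: the viscous Burgers equation u_t + u u_x = nu u_xx
# shares the convection-diffusion structure of the momentum equation.
# Explicit step: forward Euler in time, central differences in space,
# periodic boundaries; a smooth initial sine wave decays under viscosity.
import math

n, L, nu = 64, 2 * math.pi, 0.1
dx = L / n
dt = 0.2 * dx * dx / nu             # conservative explicit time step
u = [math.sin(i * dx) for i in range(n)]

for _ in range(200):
    un = u[:]
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n
        conv = un[i] * (un[ip] - un[im]) / (2 * dx)      # u u_x
        diff = nu * (un[ip] - 2 * un[i] + un[im]) / (dx * dx)  # nu u_xx
        u[i] = un[i] + dt * (diff - conv)

amplitude = max(abs(v) for v in u)  # strictly below the initial amplitude 1
```

Dropping the `conv` term leaves a pure diffusion equation, mirroring the reduction of the incompressible equations to the Stokes equations when convection is neglected.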
In the usual case of an external field being aconservative field:g=−∇φ{\displaystyle \mathbf {g} =-\nabla \varphi }by defining thehydraulic head:h≡w+φ{\displaystyle h\equiv w+\varphi }
one can finally condense the whole source in one term, arriving to the incompressible Navier–Stokes equation with conservative external field:∂u∂t+(u⋅∇)u−ν∇2u=−∇h.{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-\nabla h.}
The incompressible Navier–Stokes equations with uniform density and viscosity and a conservative external field constitute thefundamental equation ofhydraulics. The domain for these equations is commonly a 3 or fewer dimensionalEuclidean space, for which anorthogonal coordinatereference frame is usually set to make explicit the system of scalar partial differential equations to be solved. There are three 3-dimensional orthogonal coordinate systems:Cartesian,cylindrical, andspherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed; this is the case also for the first-order terms (like the variation and convection ones) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) sometensor calculusis required to deduce an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics isBernoulli's equation.
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations,∂u∂t=ΠS(−(u⋅∇)u+ν∇2u)+fSρ−1∇p=ΠI(−(u⋅∇)u+ν∇2u)+fI{\displaystyle {\begin{aligned}{\frac {\partial \mathbf {u} }{\partial t}}&=\Pi ^{S}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{S}\\\rho ^{-1}\,\nabla p&=\Pi ^{I}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{I}\end{aligned}}}whereΠS{\textstyle \Pi ^{S}}andΠI{\textstyle \Pi ^{I}}are solenoidal andirrotationalprojection operators satisfyingΠS+ΠI=1{\textstyle \Pi ^{S}+\Pi ^{I}=1}, andfS{\textstyle \mathbf {f} ^{S}}andfI{\textstyle \mathbf {f} ^{I}}are the non-conservative and conservative parts of the body force. This result follows from theHelmholtz theorem(also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem:ΠSF(r)=14π∇×∫∇′×F(r′)|r−r′|dV′,ΠI=1−ΠS{\displaystyle \Pi ^{S}\,\mathbf {F} (\mathbf {r} )={\frac {1}{4\pi }}\nabla \times \int {\frac {\nabla ^{\prime }\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V',\quad \Pi ^{I}=1-\Pi ^{S}}with a similar structure in 2D. Thus the governing equation is anintegro-differential equationsimilar toCoulomb'sandBiot–Savart's law, not convenient for numerical computation.
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation,[19]is given by,(w,∂u∂t)=−(w,(u⋅∇)u)−ν(∇w:∇u)+(w,fS){\displaystyle \left(\mathbf {w} ,{\frac {\partial \mathbf {u} }{\partial t}}\right)=-{\bigl (}\mathbf {w} ,\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} {\bigr )}-\nu \left(\nabla \mathbf {w} :\nabla \mathbf {u} \right)+\left(\mathbf {w} ,\mathbf {f} ^{S}\right)}
for divergence-free test functionsw{\textstyle \mathbf {w} }satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There, one will be able to address the question, "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
Consider the incompressible Navier–Stokes equations for aNewtonian fluidof constant densityρ{\textstyle \rho }in a domainΩ⊂Rd(d=2,3){\displaystyle \Omega \subset \mathbb {R} ^{d}\quad (d=2,3)}with boundary∂Ω=ΓD∪ΓN,{\displaystyle \partial \Omega =\Gamma _{D}\cup \Gamma _{N},}beingΓD{\textstyle \Gamma _{D}}andΓN{\textstyle \Gamma _{N}}portions of the boundary where respectively aDirichletand aNeumann boundary conditionis applied (ΓD∩ΓN=∅{\textstyle \Gamma _{D}\cap \Gamma _{N}=\emptyset }):[20]{ρ∂u∂t+ρ(u⋅∇)u−∇⋅σ(u,p)=finΩ×(0,T)∇⋅u=0inΩ×(0,T)u=gonΓD×(0,T)σ(u,p)n^=honΓN×(0,T)u(0)=u0inΩ×{0}{\displaystyle {\begin{cases}\rho {\dfrac {\partial \mathbf {u} }{\partial t}}+\rho (\mathbf {u} \cdot \nabla )\mathbf {u} -\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)=\mathbf {f} &{\text{ in }}\Omega \times (0,T)\\\nabla \cdot \mathbf {u} =0&{\text{ in }}\Omega \times (0,T)\\\mathbf {u} =\mathbf {g} &{\text{ on }}\Gamma _{D}\times (0,T)\\{\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\mathbf {h} &{\text{ on }}\Gamma _{N}\times (0,T)\\\mathbf {u} (0)=\mathbf {u} _{0}&{\text{ in }}\Omega \times \{0\}\end{cases}}}u{\textstyle \mathbf {u} }is the fluid velocity,p{\textstyle p}the fluid pressure,f{\textstyle \mathbf {f} }a given forcing term,n^{\displaystyle {\hat {\mathbf {n} }}}the outward directed unit normal vector toΓN{\textstyle \Gamma _{N}}, andσ(u,p){\textstyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)}theviscous stress tensordefined as:[20]σ(u,p)=−pI+2με(u).{\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)=-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} ).}Letμ{\textstyle \mu }be the dynamic viscosity of the fluid,I{\textstyle \mathbf {I} }the second-orderidentity tensorandε(u){\textstyle {\boldsymbol {\varepsilon }}(\mathbf {u} )}thestrain-rate tensordefined as:[20]ε(u)=12((∇u)+(∇u)T).{\displaystyle {\boldsymbol {\varepsilon }}(\mathbf {u} )={\frac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} 
}\right).}The functionsg{\textstyle \mathbf {g} }andh{\textstyle \mathbf {h} }are given Dirichlet and Neumann boundary data, whileu0{\textstyle \mathbf {u} _{0}}is theinitial condition. The first equation is the momentum balance equation, while the second represents themass conservation, namely thecontinuity equation.
Assuming constant dynamic viscosity, using the vectorial identity∇⋅(∇f)T=∇(∇⋅f){\displaystyle \nabla \cdot \left(\nabla \mathbf {f} \right)^{\mathrm {T} }=\nabla (\nabla \cdot \mathbf {f} )}and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as:[20]∇⋅σ(u,p)=∇⋅(−pI+2με(u))=−∇p+2μ∇⋅ε(u)=−∇p+2μ∇⋅[12((∇u)+(∇u)T)]=−∇p+μ(Δu+∇⋅(∇u)T)=−∇p+μ(Δu+∇(∇⋅u)⏟=0)=−∇p+μΔu.{\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)&=\nabla \cdot \left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right)\\&=-\nabla p+2\mu \nabla \cdot {\boldsymbol {\varepsilon }}(\mathbf {u} )\\&=-\nabla p+2\mu \nabla \cdot \left[{\tfrac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\right]\\&=-\nabla p+\mu \left(\Delta \mathbf {u} +\nabla \cdot \left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\\&=-\nabla p+\mu {\bigl (}\Delta \mathbf {u} +\nabla \underbrace {(\nabla \cdot \mathbf {u} )} _{=0}{\bigr )}=-\nabla p+\mu \,\Delta \mathbf {u} .\end{aligned}}}Moreover, note that the Neumann boundary conditions can be rearranged as:[20]σ(u,p)n^=(−pI+2με(u))n^=−pn^+μ∂u∂n^.{\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right){\hat {\mathbf {n} }}=-p{\hat {\mathbf {n} }}+\mu {\frac {\partial {\boldsymbol {u}}}{\partial {\hat {\mathbf {n} }}}}.}
In order to find the weak form of the Navier–Stokes equations, first consider the momentum equation[20]ρ∂u∂t−μΔu+ρ(u⋅∇)u+∇p=f{\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}-\mu \Delta \mathbf {u} +\rho (\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p=\mathbf {f} }multiply it by a test functionv{\textstyle \mathbf {v} }, defined in a suitable spaceV{\textstyle V}, and integrate both sides over the domainΩ{\textstyle \Omega }:[20]∫Ωρ∂u∂t⋅v−∫ΩμΔu⋅v+∫Ωρ(u⋅∇)u⋅v+∫Ω∇p⋅v=∫Ωf⋅v{\displaystyle \int \limits _{\Omega }\rho {\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} -\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\nabla p\cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} }Integrating by parts the diffusive and pressure terms and using Gauss's theorem:[20]−∫ΩμΔu⋅v=∫Ωμ∇u⋅∇v−∫∂Ωμ∂u∂n^⋅v∫Ω∇p⋅v=−∫Ωp∇⋅v+∫∂Ωpv⋅n^{\displaystyle {\begin{aligned}-\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} &=\int _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} -\int \limits _{\partial \Omega }\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}\cdot \mathbf {v} \\\int \limits _{\Omega }\nabla p\cdot \mathbf {v} &=-\int \limits _{\Omega }p\nabla \cdot \mathbf {v} +\int \limits _{\partial \Omega }p\mathbf {v} \cdot {\hat {\mathbf {n} }}\end{aligned}}}
Using these relations, one gets:[20]∫Ωρ∂u∂t⋅v+∫Ωμ∇u⋅∇v+∫Ωρ(u⋅∇)u⋅v−∫Ωp∇⋅v=∫Ωf⋅v+∫∂Ω(μ∂u∂n^−pn^)⋅v∀v∈V.{\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} \quad \forall \mathbf {v} \in V.}In the same fashion, the continuity equation is multiplied by a test functionqbelonging to a spaceQ{\textstyle Q}and integrated over the domainΩ{\textstyle \Omega }:[20]∫Ωq∇⋅u=0∀q∈Q.{\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.}The function spaces are chosen as follows:V=[H01(Ω)]d={v∈[H1(Ω)]d:v=0onΓD},Q=L2(Ω){\displaystyle {\begin{aligned}V=\left[H_{0}^{1}(\Omega )\right]^{d}&=\left\{\mathbf {v} \in \left[H^{1}(\Omega )\right]^{d}:\quad \mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\right\},\\Q&=L^{2}(\Omega )\end{aligned}}}Considering that the test functionvvanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as:[20]∫∂Ω(μ∂u∂n^−pn^)⋅v=∫ΓD(μ∂u∂n^−pn^)⋅v⏟v=0onΓD+∫ΓN(μ∂u∂n^−pn^)⏟=honΓN⋅v=∫ΓNh⋅v.{\displaystyle \int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} =\underbrace {\int \limits _{\Gamma _{D}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} } _{\mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\ }+\int \limits _{\Gamma _{N}}\underbrace {{\vphantom {\int \limits _{\Gamma _{N}}}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)} _{=\mathbf {h} {\text{ on }}\Gamma _{N}}\cdot \mathbf {v} =\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} .}Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as:[20]findu∈L2(R+[H1(Ω)]d)∩C0(R+[L2(Ω)]d)such that:{∫Ωρ∂u∂t⋅v+∫Ωμ∇u⋅∇v+∫Ωρ(u⋅∇)u⋅v−∫Ωp∇⋅v=∫Ωf⋅v+∫ΓNh⋅v∀v∈V,∫Ωq∇⋅u=0∀q∈Q.{\displaystyle {\begin{aligned}&{\text{find }}\mathbf {u} \in L^{2}\left(\mathbb {R} ^{+}\;\left[H^{1}(\Omega )\right]^{d}\right)\cap C^{0}\left(\mathbb {R} ^{+}\;\left[L^{2}(\Omega )\right]^{d}\right){\text{ such that: }}\\[5pt]&\quad {\begin{cases}\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \quad \forall \mathbf {v} \in V,\\\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.\end{cases}}\end{aligned}}}
With partitioning of the problem domain and definingbasis functionson the partitioned domain, the discrete form of the governing equation is(wi,∂uj∂t)=−(wi,(u⋅∇)uj)−ν(∇wi:∇uj)+(wi,fS).{\displaystyle \left(\mathbf {w} _{i},{\frac {\partial \mathbf {u} _{j}}{\partial t}}\right)=-{\bigl (}\mathbf {w} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}{\bigr )}-\nu \left(\nabla \mathbf {w} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {w} _{i},\mathbf {f} ^{S}\right).}
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the Helmholtz theorem guarantees the existence of a stream function (in 2D) or vector potential (in 3D) for such divergence-free fields. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given byStokes' theorem. Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from theplate-bendingliterature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions,∇φ=(∂φ∂x,∂φ∂y)T,∇×φ=(∂φ∂y,−∂φ∂x)T.{\displaystyle {\begin{aligned}\nabla \varphi &=\left({\frac {\partial \varphi }{\partial x}},\,{\frac {\partial \varphi }{\partial y}}\right)^{\mathrm {T} },\\[5pt]\nabla \times \varphi &=\left({\frac {\partial \varphi }{\partial y}},\,-{\frac {\partial \varphi }{\partial x}}\right)^{\mathrm {T} }.\end{aligned}}}
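The pointwise orthogonality of the 2D gradient and curl can be spot-checked numerically; a minimal sketch with an arbitrary test field φ, using centered finite differences:

```python
import numpy as np

# Check pointwise orthogonality of the 2D gradient and curl of a scalar:
# grad(phi) . curl(phi) = 0 at every grid point (phi is arbitrary).

n, L = 48, 2 * np.pi
xs = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = L / n

dx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
dy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)

phi = np.sin(X) * np.cos(2 * Y)
grad = (dx(phi), dy(phi))      # (phi_x, phi_y)
curl = (dy(phi), -dx(phi))     # (phi_y, -phi_x)

dot = grad[0] * curl[0] + grad[1] * curl[1]
print(np.abs(dot).max())
```

The dot product cancels term by term, (φ_x)(φ_y) + (φ_y)(−φ_x) = 0, so the result is zero up to rounding.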
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements.[21][22]The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
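The divergence-free property has a simple finite-difference analogue: taking the discrete curl of a scalar stream function yields a velocity whose discrete divergence vanishes to rounding error, because centered difference operators commute. A sketch (the stream function used is arbitrary):

```python
import numpy as np

# Velocity from the curl of a stream function is discretely
# divergence-free: div = d_x d_y psi - d_y d_x psi = 0 since the
# centered difference operators commute on a periodic grid.

n, L = 64, 2 * np.pi
xs = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = L / n

dx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
dy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)

psi = np.sin(X) * np.sin(Y) + 0.3 * np.cos(2 * X + Y)

u, v = dy(psi), -dx(psi)   # velocity = curl of the stream function
div = dx(u) + dy(v)        # vanishes up to rounding

print(np.abs(div).max())
```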
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, and the no-slip velocity condition is imposed on solid surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course arenon-linear, requiring iteration of the linearized equations.
Similar considerations apply in three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and because no simple relation exists between the gradient and the curl as in the 2D case.
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is,(gi,∇p)=−(gi,(u⋅∇)uj)−ν(∇gi:∇uj)+(gi,fI){\displaystyle (\mathbf {g} _{i},\nabla p)=-\left(\mathbf {g} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}\right)-\nu \left(\nabla \mathbf {g} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {g} _{i},\mathbf {f} ^{I}\right)}
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functionsgi{\textstyle \mathbf {g} _{i}}one would choose the irrotational vector elements obtained from the gradient of the pressure element.
The rotating frame of reference introduces some interesting pseudo-forces into the equations through thematerial derivativeterm. Consider a stationary inertial frame of referenceK{\textstyle K}, and a non-inertial frame of referenceK′{\textstyle K'}, which is translating with velocityU(t){\textstyle \mathbf {U} (t)}and rotating with angular velocityΩ(t){\textstyle \Omega (t)}with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
ρ(∂u∂t+(u⋅∇)u)=−∇p+∇⋅{μ[∇u+(∇u)T−23(∇⋅u)I]}+∇[ζ(∇⋅u)]+ρf−ρ[2Ω×u+Ω×(Ω×x)+dUdt+dΩdt×x].{\displaystyle \rho \left({\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} \right)=-\nabla p+\nabla \cdot \left\{\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right\}+\nabla [\zeta (\nabla \cdot \mathbf {u} )]+\rho \mathbf {f} -\rho \left[2\mathbf {\Omega } \times \mathbf {u} +\mathbf {\Omega } \times (\mathbf {\Omega } \times \mathbf {x} )+{\frac {\mathrm {d} \mathbf {U} }{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {\Omega } }{\mathrm {d} t}}\times \mathbf {x} \right].}
Herex{\textstyle \mathbf {x} }andu{\textstyle \mathbf {u} }are measured in the non-inertial frame. The first term in the parenthesis representsCoriolis acceleration, the second term is due tocentrifugal acceleration, the third is due to the linear acceleration ofK′{\textstyle K'}with respect toK{\textstyle K}and the fourth term is due to the angular acceleration ofK′{\textstyle K'}with respect toK{\textstyle K}.
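The four fictitious acceleration terms can be evaluated directly for sample values; a minimal sketch (all numbers below are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Evaluate the fictitious acceleration terms of the rotating-frame
# momentum equation for sample (illustrative) values.

omega = np.array([0.0, 0.0, 2.0])      # angular velocity of K' (rad/s)
domega_dt = np.array([0.0, 0.0, 0.5])  # angular acceleration of K'
dU_dt = np.array([0.1, 0.0, 0.0])      # linear acceleration of K'
u = np.array([3.0, 0.0, 0.0])          # velocity measured in K'
x = np.array([1.0, 0.0, 0.0])          # position measured in K'

coriolis = 2.0 * np.cross(omega, u)                # 2 Omega x u
centrifugal = np.cross(omega, np.cross(omega, x))  # Omega x (Omega x x)
euler = np.cross(domega_dt, x)                     # dOmega/dt x x

# total apparent acceleration subtracted on the right-hand side
fictitious = coriolis + centrifugal + dU_dt + euler
print(coriolis, centrifugal, euler, fictitious)
```

For rotation about the z-axis and motion along x, the Coriolis term points along y and the centrifugal term points radially outward, as expected.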
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed; how much depends on the assumptions made. This additional information may include boundary data (no-slip,capillary surface, etc.), conservation of mass,balance of energy, and/or anequation of state.
Regardless of the flow assumptions, a statement of theconservation of massis generally necessary. This is achieved through the masscontinuity equation, discussed in the "General continuum equations" section above, as follows:DmDt=∭V(DρDt+ρ(∇⋅u))dVDρDt+ρ(∇⋅u)=∂ρ∂t+(∇ρ)⋅u+ρ(∇⋅u)=∂ρ∂t+∇⋅(ρu)=0{\displaystyle {\begin{aligned}{\frac {\mathbf {D} m}{\mathbf {Dt} }}&={\iiint \limits _{V}}({{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )})dV\\{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot {\mathbf {u} })&={\frac {\partial \rho }{\partial t}}+({\nabla \rho })\cdot {\mathbf {u} }+{\rho }(\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0\end{aligned}}}A fluid medium for which thedensity(ρ{\displaystyle \rho }) is constant is calledincompressible. Therefore, the rate of change ofdensity(ρ{\displaystyle \rho }) with respect to time(∂ρ∂t){\displaystyle ({\frac {\partial \rho }{\partial t}})}and thegradientof density(∇ρ){\displaystyle (\nabla \rho )}are equal to zero(0){\displaystyle (0)}. In this case the general equation of continuity,∂ρ∂t+∇⋅(ρu)=0{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0}, reduces to:ρ(∇⋅u)=0{\displaystyle \rho (\nabla {\cdot }{\mathbf {u} })=0}. Furthermore, assuming thatdensity(ρ{\displaystyle \rho }) is a non-zero constant(ρ≠0){\displaystyle (\rho \neq 0)}means that the equation may be divided through by thedensity(ρ{\displaystyle \rho }).
Therefore, the continuity equation for anincompressible fluidreduces further to:(∇⋅u)=0{\displaystyle (\nabla {\cdot {\mathbf {u} }})=0}This relationship,(∇⋅u)=0{\textstyle (\nabla {\cdot {\mathbf {u} }})=0}, shows that thedivergenceof the flow velocityvector(u{\displaystyle \mathbf {u} }) is equal to zero(0){\displaystyle (0)}, which means that for anincompressible fluidtheflow velocity fieldis asolenoidal vector fieldor adivergence-free vector field. Combining this relationship with the identity for thevector Laplace operator(∇2u=∇(∇⋅u)−∇×(∇×u)){\displaystyle (\nabla ^{2}\mathbf {u} =\nabla (\nabla \cdot \mathbf {u} )-\nabla \times (\nabla \times \mathbf {u} ))}and the definition ofvorticity(ω→=∇×u){\displaystyle ({\vec {\omega }}=\nabla \times \mathbf {u} )}yields, for anincompressible fluid:∇2u=−(∇×(∇×u))=−(∇×ω→){\displaystyle \nabla ^{2}\mathbf {u} =-(\nabla \times (\nabla \times \mathbf {u} ))=-(\nabla \times {\vec {\omega }})}
Taking thecurlof the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case withuz=0{\textstyle u_{z}=0}and no dependence of anything onz{\textstyle z}), where the equations reduce to:ρ(∂ux∂t+ux∂ux∂x+uy∂ux∂y)=−∂p∂x+μ(∂2ux∂x2+∂2ux∂y2)+ρgxρ(∂uy∂t+ux∂uy∂x+uy∂uy∂y)=−∂p∂y+μ(∂2uy∂x2+∂2uy∂y2)+ρgy.{\displaystyle {\begin{aligned}\rho \left({\frac {\partial u_{x}}{\partial t}}+u_{x}{\frac {\partial u_{x}}{\partial x}}+u_{y}{\frac {\partial u_{x}}{\partial y}}\right)&=-{\frac {\partial p}{\partial x}}+\mu \left({\frac {\partial ^{2}u_{x}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{x}}{\partial y^{2}}}\right)+\rho g_{x}\\\rho \left({\frac {\partial u_{y}}{\partial t}}+u_{x}{\frac {\partial u_{y}}{\partial x}}+u_{y}{\frac {\partial u_{y}}{\partial y}}\right)&=-{\frac {\partial p}{\partial y}}+\mu \left({\frac {\partial ^{2}u_{y}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{y}}{\partial y^{2}}}\right)+\rho g_{y}.\end{aligned}}}
Differentiating the first with respect toy{\textstyle y}, the second with respect tox{\textstyle x}and subtracting the resulting equations will eliminate pressure and anyconservative force.
For incompressible flow, defining thestream functionψ{\textstyle \psi }throughux=∂ψ∂y;uy=−∂ψ∂x{\displaystyle u_{x}={\frac {\partial \psi }{\partial y}};\quad u_{y}=-{\frac {\partial \psi }{\partial x}}}results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation:∂∂t(∇2ψ)+∂ψ∂y∂∂x(∇2ψ)−∂ψ∂x∂∂y(∇2ψ)=ν∇4ψ{\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \psi }{\partial y}}{\frac {\partial }{\partial x}}\left(\nabla ^{2}\psi \right)-{\frac {\partial \psi }{\partial x}}{\frac {\partial }{\partial y}}\left(\nabla ^{2}\psi \right)=\nu \nabla ^{4}\psi }
where∇4{\textstyle \nabla ^{4}}is the 2Dbiharmonic operatorandν{\textstyle \nu }is thekinematic viscosity,ν=μρ{\textstyle \nu ={\frac {\mu }{\rho }}}. We can also express this compactly using theJacobian determinant:∂∂t(∇2ψ)+∂(ψ,∇2ψ)∂(y,x)=ν∇4ψ.{\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \left(\psi ,\nabla ^{2}\psi \right)}{\partial (y,x)}}=\nu \nabla ^{4}\psi .}
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation forcreeping flowresults when the left side is assumed zero.
Inaxisymmetricflow another stream function formulation, called theStokes stream function, can be used to describe the velocity components of an incompressible flow with onescalarfunction.
The incompressible Navier–Stokes equation is adifferential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
The Navier–Stokes equations arenonlinearpartial differential equationsin the general case and so remain in almost every real situation.[23][24]In some cases, such as one-dimensional flow andStokes flow(or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to theturbulencethat the equations model.
The nonlinearity is due toconvectiveacceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective butlaminar(nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small convergingnozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood.[25]
Turbulenceis the time-dependentchaoticbehaviour seen in many fluid flows. It is generally believed that it is due to theinertiaof the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (theReynolds numberquantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly.[26]
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult, and due to the significantly different mixing-length scales that are involved in turbulent flow, a stable solution requires such a fine mesh resolution thatdirect numerical simulationbecomes computationally infeasible. Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as theReynolds-averaged Navier–Stokes equations(RANS), supplemented with turbulence models, are used in practicalcomputational fluid dynamics(CFD) applications when modeling turbulent flows. Some models include theSpalart–Allmaras,k–ω,k–ε, andSSTmodels, which add a variety of additional equations to bring closure to the RANS equations.Large eddy simulation(LES) can also be used to solve these equations numerically. This approach is computationally more expensive—in time and in computer memory—than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is acontinuum(it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving atrelativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example,capillarityof internal layers in fluids appears for flow with high gradients.[27]For problems with largeKnudsen number, theBoltzmann equationmay be a suitable replacement.[28]Failing that, one may have to resort tomolecular dynamicsor various hybrid methods.[29]
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written forNewtonian fluidswhere the viscosity model islinear; truly general models for the flow of other kinds of fluids (such as blood) do not exist.[30]
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated asmultiphase flowdriven bysurface tension.
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed byscale analysisto further simplify the problem.
Assuming steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates, the resulting scaled (dimensionless)boundary value problemis:d2udy2=−1;u(0)=u(1)=0.{\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-1;\quad u(0)=u(1)=0.}
The boundary condition is theno slip condition. This problem is easily solved for the flow field:u(y)=y−y22.{\displaystyle u(y)={\frac {y-y^{2}}{2}}.}
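The boundary value problem above is simple enough to double-check numerically; a sketch solving it with second-order centered differences (the grid size is an arbitrary choice). Since the exact solution is quadratic, the discretization reproduces it up to rounding:

```python
import numpy as np

# Solve the scaled channel-flow problem u'' = -1, u(0) = u(1) = 0 with
# centered finite differences and compare to u(y) = (y - y^2)/2.

n = 50                      # number of interior grid points
h = 1.0 / (n + 1)
y = np.linspace(h, 1.0 - h, n)

# tridiagonal second-derivative matrix with homogeneous Dirichlet BCs
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, -np.ones(n))   # discretized u'' = -1
exact = (y - y**2) / 2

print(np.abs(u - exact).max())
```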
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be theradialflow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a functionf(z)that must satisfy:d2fdz2+Rf2=−1;f(−1)=f(1)=0.{\displaystyle {\frac {\mathrm {d} ^{2}f}{\mathrm {d} z^{2}}}+Rf^{2}=-1;\quad f(-1)=f(1)=0.}
Thisordinary differential equationis what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). Thenonlinearterm makes this a very difficult problem to solve analytically (a lengthyimplicitsolution may be found which involveselliptic integralsandroots of cubic polynomials). Issues with the actual existence of solutions arise forR>1.41{\textstyle R>1.41}(approximately; this is not√2), the parameterR{\textstyle R}being the Reynolds number with appropriately chosen scales.[31]This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows.[31]
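Although the implicit analytic solution is lengthy, the discretized problem is straightforward to attack iteratively; a sketch using Newton's method on a centered-difference discretization, for a subcritical value of R (all numerical parameters here are illustrative):

```python
import numpy as np

# Newton iteration for the finite-difference discretization of
# f'' + R f^2 = -1,  f(-1) = f(1) = 0,  with R well below the
# critical value near 1.41 mentioned in the text.

R = 0.5
n = 200
h = 2.0 / (n + 1)
z = np.linspace(-1.0 + h, 1.0 - h, n)

D2 = (np.diag(-2.0 * np.ones(n)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / h**2

f = np.zeros(n)                       # initial guess
for _ in range(30):
    F = D2 @ f + R * f**2 + 1.0       # residual of the discretized ODE
    J = D2 + np.diag(2.0 * R * f)     # Jacobian of the residual
    step = np.linalg.solve(J, -F)
    f += step
    if np.abs(step).max() < 1e-12:
        break

print(np.abs(D2 @ f + R * f**2 + 1.0).max(), f.max())
```

The converged profile is close to the linear (Poiseuille-like) solution (1 − z²)/2, slightly amplified by the nonlinear term.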
A type of natural convection that can be described by the Navier–Stokes equation is theRayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases—with the non-linear terms in the Navier–Stokes equations equal to zero—arePoiseuille flow,Couette flowand the oscillatoryStokes boundary layer. More interesting examples, which solve the full non-linear equations, also exist, such asJeffery–Hamel flow,Von Kármán swirling flow,stagnation point flow,Landau–Squire jet, andTaylor–Green vortex.[32][33][34]Time-dependentself-similarsolutions of the three-dimensional incompressible Navier–Stokes equations in Cartesian coordinates can be given with the help ofKummer's functionswith quadratic arguments.[35]For the compressible Navier–Stokes equations the time-dependent self-similar solutions are, however, given by theWhittaker functions, again with quadratic arguments, when thepolytropicequation of stateis used as a closing condition.[36]Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated.[37]
For example, in the case of an unbounded planar domain withtwo-dimensional— incompressible and stationary — flow inpolar coordinates(r,φ), the velocity components(ur,uφ)and pressurepare:[38]ur=Ar,uφ=B(1r−rAν+1),p=−A2+B22r2−2B2νrAνA+B2r(2Aν+2)2Aν+2{\displaystyle {\begin{aligned}u_{r}&={\frac {A}{r}},\\u_{\varphi }&=B\left({\frac {1}{r}}-r^{{\frac {A}{\nu }}+1}\right),\\p&=-{\frac {A^{2}+B^{2}}{2r^{2}}}-{\frac {2B^{2}\nu r^{\frac {A}{\nu }}}{A}}+{\frac {B^{2}r^{\left({\frac {2A}{\nu }}+2\right)}}{{\frac {2A}{\nu }}+2}}\end{aligned}}}
whereAandBare arbitrary constants. This solution is valid in the domainr≥ 1and forA< −2ν.
In Cartesian coordinates, when the viscosity is zero (ν= 0), this is:v(x,y)=1x2+y2(Ax+ByAy−Bx),p(x,y)=−A2+B22(x2+y2){\displaystyle {\begin{aligned}\mathbf {v} (x,y)&={\frac {1}{x^{2}+y^{2}}}{\begin{pmatrix}Ax+By\\Ay-Bx\end{pmatrix}},\\p(x,y)&=-{\frac {A^{2}+B^{2}}{2\left(x^{2}+y^{2}\right)}}\end{aligned}}}
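One can spot-check that this inviscid solution satisfies the steady incompressible Euler equations (the ν = 0 momentum balance) with the density taken as 1; a sketch using centered finite differences at a single sample point (A, B and the point are arbitrary choices):

```python
import numpy as np

# Check, at one sample point, that v = (Ax+By, Ay-Bx)/(x^2+y^2) with
# p = -(A^2+B^2)/(2(x^2+y^2)) satisfies (v . grad) v = -grad p and
# div v = 0 (steady incompressible Euler, density = 1).

A, B = 1.3, -0.7

def vel(x, y):
    r2 = x**2 + y**2
    return np.array([A * x + B * y, A * y - B * x]) / r2

def pres(x, y):
    return -(A**2 + B**2) / (2.0 * (x**2 + y**2))

x0, y0, h = 1.0, 0.5, 1e-5

dvdx = (vel(x0 + h, y0) - vel(x0 - h, y0)) / (2 * h)
dvdy = (vel(x0, y0 + h) - vel(x0, y0 - h)) / (2 * h)
gradp = np.array([(pres(x0 + h, y0) - pres(x0 - h, y0)) / (2 * h),
                  (pres(x0, y0 + h) - pres(x0, y0 - h)) / (2 * h)])

v0 = vel(x0, y0)
convective = v0[0] * dvdx + v0[1] * dvdy   # (v . grad) v
divergence = dvdx[0] + dvdy[1]

print(np.abs(convective + gradp).max(), abs(divergence))
```

This flow is a point source plus point vortex, so it is irrotational away from the origin and the momentum balance reduces to Bernoulli's relation, which the pressure field above satisfies.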
For example, in the case of an unbounded Euclidean domain withthree-dimensional— incompressible, stationary and with zero viscosity (ν= 0) — radial flow inCartesian coordinates(x,y,z), the velocity vectorvand pressurepare:[citation needed]v(x,y,z)=Ax2+y2+z2(xyz),p(x,y,z)=−A22(x2+y2+z2).{\displaystyle {\begin{aligned}\mathbf {v} (x,y,z)&={\frac {A}{x^{2}+y^{2}+z^{2}}}{\begin{pmatrix}x\\y\\z\end{pmatrix}},\\p(x,y,z)&=-{\frac {A^{2}}{2\left(x^{2}+y^{2}+z^{2}\right)}}.\end{aligned}}}
There is a singularity atx=y=z= 0.
A steady-state example with no singularities comes from considering the flow along the lines of aHopf fibration. Letr{\textstyle r}be a constant radius of the inner coil. One set of solutions is given by:[39]ρ(x,y,z)=3Br2+x2+y2+z2p(x,y,z)=−A2B(r2+x2+y2+z2)3u(x,y,z)=A(r2+x2+y2+z2)2(2(−ry+xz)2(rx+yz)r2−x2−y2+z2)g=0μ=0{\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {3B}{r^{2}+x^{2}+y^{2}+z^{2}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\\mathbf {u} (x,y,z)&={\frac {A}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}{\begin{pmatrix}2(-ry+xz)\\2(rx+yz)\\r^{2}-x^{2}-y^{2}+z^{2}\end{pmatrix}}\\g&=0\\\mu &=0\end{aligned}}}
for arbitrary constantsA{\textstyle A}andB{\textstyle B}. This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids whereρ{\textstyle \rho }is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to anyturbulenceproperties.) It is also worth pointing out that the components of the velocity vector are exactly those from thePythagorean quadrupleparametrization. Other choices of density and pressure are possible with the same velocity field:
Another choice of pressure and density with the same velocity vector above is one where the pressure and density fall to zero at the origin and are highest in the central loop atz= 0,x2+y2=r2:ρ(x,y,z)=20B(x2+y2)(r2+x2+y2+z2)3p(x,y,z)=−A2B(r2+x2+y2+z2)4+−4A2B(x2+y2)(r2+x2+y2+z2)5.{\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {20B\left(x^{2}+y^{2}\right)}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{4}}}+{\frac {-4A^{2}B\left(x^{2}+y^{2}\right)}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{5}}}.\end{aligned}}}
In fact in general there are simple solutions for any polynomial functionfwhere the density is:ρ(x,y,z)=1r2+x2+y2+z2f(x2+y2(r2+x2+y2+z2)2).{\displaystyle \rho (x,y,z)={\frac {1}{r^{2}+x^{2}+y^{2}+z^{2}}}f\left({\frac {x^{2}+y^{2}}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}\right).}
Two examples of periodic, fully three-dimensional viscous solutions are described in the literature.[40]These solutions are defined on a three-dimensionaltorusT3=[0,L]3{\displaystyle \mathbb {T} ^{3}=[0,L]^{3}}and are characterized by positive and negativehelicityrespectively.
The solution with positive helicity is given by:ux=4233U0[sin(kx−π/3)cos(ky+π/3)sin(kz+π/2)−cos(kz−π/3)sin(kx+π/3)sin(ky+π/2)]e−3νk2tuy=4233U0[sin(ky−π/3)cos(kz+π/3)sin(kx+π/2)−cos(kx−π/3)sin(ky+π/3)sin(kz+π/2)]e−3νk2tuz=4233U0[sin(kz−π/3)cos(kx+π/3)sin(ky+π/2)−cos(ky−π/3)sin(kz+π/3)sin(kx+π/2)]e−3νk2t{\displaystyle {\begin{aligned}u_{x}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kx-\pi /3)\cos(ky+\pi /3)\sin(kz+\pi /2)-\cos(kz-\pi /3)\sin(kx+\pi /3)\sin(ky+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{y}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(ky-\pi /3)\cos(kz+\pi /3)\sin(kx+\pi /2)-\cos(kx-\pi /3)\sin(ky+\pi /3)\sin(kz+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{z}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kz-\pi /3)\cos(kx+\pi /3)\sin(ky+\pi /2)-\cos(ky-\pi /3)\sin(kz+\pi /3)\sin(kx+\pi /2)\,\right]e^{-3\nu k^{2}t}\end{aligned}}}wherek=2π/L{\displaystyle k=2\pi /L}is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass isU02/2{\displaystyle U_{0}^{2}/2}att=0{\displaystyle t=0}.
The pressure field is obtained from the velocity field asp=p0−ρ0‖u‖2/2{\displaystyle p=p_{0}-\rho _{0}\|{\boldsymbol {u}}\|^{2}/2}(wherep0{\displaystyle p_{0}}andρ0{\displaystyle \rho _{0}}are reference values for the pressure and density fields respectively).
Since both the solutions belong to the class ofBeltrami flow, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given byω=3ku{\displaystyle \omega ={\sqrt {3}}\,k\,{\boldsymbol {u}}}.
These solutions can be regarded as a generalization in three dimensions of the classic two-dimensionalTaylor–Green vortex.
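The divergence-free character of the positive-helicity solution can be spot-checked numerically at t = 0; a sketch (U0, L and the sample point are arbitrary choices):

```python
import numpy as np

# Spot-check that the positive-helicity velocity field above has zero
# divergence, using centered finite differences at one point (t = 0).

U0, L = 1.0, 2 * np.pi
k = 2 * np.pi / L
c = 4 * np.sqrt(2) / (3 * np.sqrt(3))
s, p2, p3 = np.sin, np.pi / 2, np.pi / 3

def u(x, y, z):
    ux = c * U0 * (s(k*x - p3) * np.cos(k*y + p3) * s(k*z + p2)
                   - np.cos(k*z - p3) * s(k*x + p3) * s(k*y + p2))
    uy = c * U0 * (s(k*y - p3) * np.cos(k*z + p3) * s(k*x + p2)
                   - np.cos(k*x - p3) * s(k*y + p3) * s(k*z + p2))
    uz = c * U0 * (s(k*z - p3) * np.cos(k*x + p3) * s(k*y + p2)
                   - np.cos(k*y - p3) * s(k*z + p3) * s(k*x + p2))
    return np.array([ux, uy, uz])

x0, h = np.array([0.7, -0.3, 1.9]), 1e-5
div = sum((u(*(x0 + h * e)) - u(*(x0 - h * e)))[i] / (2 * h)
          for i, e in enumerate(np.eye(3)))
print(abs(div))
```

The three diagonal derivative terms cancel pairwise thanks to the cyclic structure of the components, so the divergence vanishes identically.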
Wyld diagramsare bookkeepinggraphsthat correspond to the Navier–Stokes equations via aperturbation expansionof the fundamentalcontinuum mechanics. Similar to theFeynman diagramsinquantum field theory, these diagrams are an extension ofMstislav Keldysh's technique for nonequilibrium processes in fluid dynamics.[citation needed]In other words, these diagrams assigngraphsto the (often)turbulentphenomena in turbulent fluids by allowingcorrelatedand interacting fluid particles to obeystochastic processesassociated withpseudo-randomfunctionsinprobability distributions.[41]
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g.∂xu{\textstyle \partial _{x}u}means the partial derivative ofu{\textstyle u}with respect tox{\textstyle x}, and∂y2fθ{\textstyle \partial _{y}^{2}f_{\theta }}means the second-order partial derivative offθ{\textstyle f_{\theta }}with respect toy{\textstyle y}.
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier–Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.[42]
From the general form of the Navier–Stokes, with the velocity vector expanded asu=(ux,uy,uz){\textstyle \mathbf {u} =(u_{x},u_{y},u_{z})}, sometimes respectively namedu{\textstyle u},v{\textstyle v},w{\textstyle w}, we may write the vector equation explicitly,x:ρ(∂tux+ux∂xux+uy∂yux+uz∂zux)=−∂xp+μ(∂x2ux+∂y2ux+∂z2ux)+13μ∂x(∂xux+∂yuy+∂zuz)+ρgx{\displaystyle {\begin{aligned}x:\ &\rho \left({\partial _{t}u_{x}}+u_{x}\,{\partial _{x}u_{x}}+u_{y}\,{\partial _{y}u_{x}}+u_{z}\,{\partial _{z}u_{x}}\right)\\&\quad =-\partial _{x}p+\mu \left({\partial _{x}^{2}u_{x}}+{\partial _{y}^{2}u_{x}}+{\partial _{z}^{2}u_{x}}\right)+{\frac {1}{3}}\mu \ \partial _{x}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{x}\\\end{aligned}}}y:ρ(∂tuy+ux∂xuy+uy∂yuy+uz∂zuy)=−∂yp+μ(∂x2uy+∂y2uy+∂z2uy)+13μ∂y(∂xux+∂yuy+∂zuz)+ρgy{\displaystyle {\begin{aligned}y:\ &\rho \left({\partial _{t}u_{y}}+u_{x}{\partial _{x}u_{y}}+u_{y}{\partial _{y}u_{y}}+u_{z}{\partial _{z}u_{y}}\right)\\&\quad =-{\partial _{y}p}+\mu \left({\partial _{x}^{2}u_{y}}+{\partial _{y}^{2}u_{y}}+{\partial _{z}^{2}u_{y}}\right)+{\frac {1}{3}}\mu \ \partial _{y}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{y}\\\end{aligned}}}z:ρ(∂tuz+ux∂xuz+uy∂yuz+uz∂zuz)=−∂zp+μ(∂x2uz+∂y2uz+∂z2uz)+13μ∂z(∂xux+∂yuy+∂zuz)+ρgz.{\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{x}{\partial _{x}u_{z}}+u_{y}{\partial _{y}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}+\mu \left({\partial _{x}^{2}u_{z}}+{\partial _{y}^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)+{\frac {1}{3}}\mu \ \partial _{z}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{z}.\end{aligned}}}
Note that gravity has been accounted for as a body force, and the values ofgx{\textstyle g_{x}},gy{\textstyle g_{y}},gz{\textstyle g_{z}}will depend on the orientation of gravity with respect to the chosen set of coordinates.
The continuity equation reads:∂tρ+∂x(ρux)+∂y(ρuy)+∂z(ρuz)=0.{\displaystyle \partial _{t}\rho +\partial _{x}(\rho u_{x})+\partial _{y}(\rho u_{y})+\partial _{z}(\rho u_{z})=0.}
When the flow is incompressible,ρ{\textstyle \rho }does not change for any fluid particle, and itsmaterial derivativevanishes:DρDt=0{\textstyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0}. The continuity equation is reduced to:∂xux+∂yuy+∂zuz=0.{\displaystyle \partial _{x}u_{x}+\partial _{y}u_{y}+\partial _{z}u_{z}=0.}
Thus, for the incompressible version of the Navier–Stokes equations, the second part of the viscous terms falls away (seeIncompressible flow).
This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still anonlinearsystem ofpartial differential equationsfor which solutions are difficult to obtain.
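As a quick numerical sanity check of the Cartesian form, the classic plane Poiseuille profile can be verified to satisfy the steady, incompressible x-momentum equation. This is an illustrative sketch with arbitrary parameter values (not taken from the source), using simple central differences:

```python
def u_x(y, G=2.0, mu=0.5, h=1.0):
    # Plane Poiseuille profile between walls at y = 0 and y = h,
    # driven by a constant pressure gradient dp/dx = -G (no gravity).
    # The flow is u = (u_x(y), 0, 0), so continuity and the convective
    # terms vanish identically.
    return G / (2.0 * mu) * y * (h - y)

def second_derivative(f, y, eps=1e-4):
    # Central-difference approximation of d^2 f / dy^2
    return (f(y + eps) - 2.0 * f(y) + f(y - eps)) / eps**2

# The steady x-momentum equation reduces to
#   0 = -dp/dx + mu * d^2 u_x / dy^2 = G + mu * u_x''(y)
G, mu = 2.0, 0.5
residual = G + mu * second_derivative(u_x, 0.3)
print(abs(residual) < 1e-6)
```

Since the profile is quadratic, the finite-difference residual vanishes up to rounding error, confirming term-by-term cancellation in the x-momentum equation.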
A change of variables on the Cartesian equations will yield[16]the following momentum equations forr{\textstyle r},ϕ{\textstyle \phi }, andz{\textstyle z}[43]r:ρ(∂tur+ur∂rur+uφr∂φur+uz∂zur−uφ2r)=−∂rp+μ(1r∂r(r∂rur)+1r2∂φ2ur+∂z2ur−urr2−2r2∂φuφ)+13μ∂r(1r∂r(rur)+1r∂φuφ+∂zuz)+ρgr{\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{r}}+u_{z}{\partial _{z}u_{r}}-{\frac {u_{\varphi }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{r}}+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}-{\frac {2}{r^{2}}}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}}
φ:ρ(∂tuφ+ur∂ruφ+uφr∂φuφ+uz∂zuφ+uruφr)=−1r∂φp+μ(1r∂r(r∂ruφ)+1r2∂φ2uφ+∂z2uφ−uφr2+2r2∂φur)+13μ1r∂φ(1r∂r(rur)+1r∂φuφ+∂zuz)+ρgφ{\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{\varphi }}+u_{z}{\partial _{z}u_{\varphi }}+{\frac {u_{r}u_{\varphi }}{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r}}\ \partial _{r}\left(r{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{\varphi }}+{\partial _{z}^{2}u_{\varphi }}-{\frac {u_{\varphi }}{r^{2}}}+{\frac {2}{r^{2}}}{\partial _{\varphi }u_{r}}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\varphi }\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}}
z:ρ(∂tuz+ur∂ruz+uφr∂φuz+uz∂zuz)=−∂zp+μ(1r∂r(r∂ruz)+1r2∂φ2uz+∂z2uz)+13μ∂z(1r∂r(rur)+1r∂φuφ+∂zuz)+ρgz.{\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{z}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{z}.\end{aligned}}}
The gravity components will generally not be constants; however, for most applications either the coordinates are chosen so that the gravity components are constant, or else it is assumed that gravity is counteracted by a pressure field (for example, flow in a horizontal pipe is normally treated without gravity and without a vertical pressure gradient). The continuity equation is:∂tρ+1r∂r(ρrur)+1r∂φ(ρuφ)+∂z(ρuz)=0.{\displaystyle {\partial _{t}\rho }+{\frac {1}{r}}\partial _{r}\left(\rho ru_{r}\right)+{\frac {1}{r}}{\partial _{\varphi }\left(\rho u_{\varphi }\right)}+{\partial _{z}\left(\rho u_{z}\right)}=0.}
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity (uϕ=0{\textstyle u_{\phi }=0}), and the remaining quantities are independent ofϕ{\textstyle \phi }:ρ(∂tur+ur∂rur+uz∂zur)=−∂rp+μ(1r∂r(r∂rur)+∂z2ur−urr2)+ρgrρ(∂tuz+ur∂ruz+uz∂zuz)=−∂zp+μ(1r∂r(r∂ruz)+∂z2uz)+ρgz1r∂r(rur)+∂zuz=0.{\displaystyle {\begin{aligned}\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+u_{z}{\partial _{z}u_{r}}\right)&=-{\partial _{r}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}\right)+\rho g_{r}\\\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)&=-{\partial _{z}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\partial _{z}^{2}u_{z}}\right)+\rho g_{z}\\{\frac {1}{r}}\partial _{r}\left(ru_{r}\right)+{\partial _{z}u_{z}}&=0.\end{aligned}}}
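The axisymmetric system above admits the classic Hagen–Poiseuille solution for fully developed pipe flow. The following finite-difference sketch (with arbitrary illustrative parameters, not from the source) checks that the parabolic profile satisfies the reduced z-momentum equation:

```python
G, mu, R = 1.0, 0.25, 1.0

def u_z(r):
    # Hagen-Poiseuille profile in a pipe of radius R with dp/dz = -G,
    # no gravity, and u_r = 0 (so continuity is satisfied automatically).
    return G / (4.0 * mu) * (R * R - r * r)

def radial_laplacian(f, r, eps=1e-4):
    # (1/r) d/dr ( r df/dr ) by nested central differences
    df = lambda x: (f(x + eps) - f(x - eps)) / (2.0 * eps)
    return ((r + eps) * df(r + eps) - (r - eps) * df(r - eps)) / (2.0 * eps) / r

# The reduced z-momentum equation is
#   0 = -dp/dz + mu * (1/r) d/dr (r du_z/dr) = G + mu * radial_laplacian(u_z, r)
residual = G + mu * radial_laplacian(u_z, 0.5)
print(abs(residual) < 1e-5)
```

The residual vanishes (up to rounding), since the radial Laplacian of the parabolic profile is exactly the constant −G/μ.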
Inspherical coordinates, ther{\textstyle r},ϕ{\textstyle \phi }, andθ{\textstyle \theta }momentum equations are[16](note the convention used:θ{\textstyle \theta }is polar angle, orcolatitude,[44]0≤θ≤π{\textstyle 0\leq \theta \leq \pi }):r:ρ(∂tur+ur∂rur+uφrsinθ∂φur+uθr∂θur−uφ2+uθ2r)=−∂rp+μ(1r2∂r(r2∂rur)+1r2sin2θ∂φ2ur+1r2sinθ∂θ(sinθ∂θur)−2ur+∂θuθ+uθcotθr2−2r2sinθ∂φuφ)+13μ∂r(1r2∂r(r2ur)+1rsinθ∂θ(uθsinθ)+1rsinθ∂φuφ)+ρgr{\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{r}}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{r}}-{\frac {u_{\varphi }^{2}+u_{\theta }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{r}}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{r}}\right)-2{\frac {u_{r}+{\partial _{\theta }u_{\theta }}+u_{\theta }\cot \theta }{r^{2}}}-{\frac {2}{r^{2}\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}}
φ:ρ(∂tuφ+ur∂ruφ+uφrsinθ∂φuφ+uθr∂θuφ+uruφ+uφuθcotθr)=−1rsinθ∂φp+μ(1r2∂r(r2∂ruφ)+1r2sin2θ∂φ2uφ+1r2sinθ∂θ(sinθ∂θuφ)+2sinθ∂φur+2cosθ∂φuθ−uφr2sin2θ)+13μ1rsinθ∂φ(1r2∂r(r2ur)+1rsinθ∂θ(uθsinθ)+1rsinθ∂φuφ)+ρgφ{\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\varphi }}+{\frac {u_{r}u_{\varphi }+u_{\varphi }u_{\theta }\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r\sin \theta }}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\varphi }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\varphi }}\right)+{\frac {2\sin \theta {\partial _{\varphi }u_{r}}+2\cos \theta {\partial _{\varphi }u_{\theta }}-u_{\varphi }}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r\sin \theta }}\partial _{\varphi }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}}
θ:ρ(∂tuθ+ur∂ruθ+uφrsinθ∂φuθ+uθr∂θuθ+uruθ−uφ2cotθr)=−1r∂θp+μ(1r2∂r(r2∂ruθ)+1r2sin2θ∂φ2uθ+1r2sinθ∂θ(sinθ∂θuθ)+2r2∂θur−uθ+2cosθ∂φuφr2sin2θ)+13μ1r∂θ(1r2∂r(r2ur)+1rsinθ∂θ(uθsinθ)+1rsinθ∂φuφ)+ρgθ.{\displaystyle {\begin{aligned}\theta :\ &\rho \left({\partial _{t}u_{\theta }}+u_{r}{\partial _{r}u_{\theta }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\theta }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\theta }}+{\frac {u_{r}u_{\theta }-u_{\varphi }^{2}\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\theta }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\theta }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\theta }}\right)+{\frac {2}{r^{2}}}{\partial _{\theta }u_{r}}-{\frac {u_{\theta }+2\cos \theta {\partial _{\varphi }u_{\varphi }}}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\theta }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\theta }.\end{aligned}}}
Mass continuity will read:∂tρ+1r2∂r(ρr2ur)+1rsinθ∂φ(ρuφ)+1rsinθ∂θ(sinθρuθ)=0.{\displaystyle {\partial _{t}\rho }+{\frac {1}{r^{2}}}\partial _{r}\left(\rho r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }(\rho u_{\varphi })}+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(\sin \theta \rho u_{\theta }\right)=0.}
These equations could be (slightly) compacted by, for example, factoring1r2{\textstyle {\frac {1}{r^{2}}}}from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities.
|
https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations
|
Inphysics, afluidis aliquid,gas, or other material that may continuouslymoveanddeform(flow) under an appliedshear stress, or external force.[1]They have zeroshear modulus, or, in simpler terms, aresubstanceswhich cannot resist anyshear forceapplied to them.
Although the termfluidgenerally includes both the liquid and gas phases, its definition varies amongbranches of science. Definitions ofsolidvary as well, and depending on field, some substances can have both fluid and solid properties.[2]Non-Newtonian fluids likeSilly Puttyappear to behave similarly to a solid when a sudden force is applied.[3]Substances with a very highviscositysuch aspitchappear to behave like a solid (seepitch drop experiment) as well. Inparticle physics, the concept is extended to include fluidicmattersother than liquids or gases.[4]A fluid in medicine or biology refers to any liquid constituent of the body (body fluid),[5][6]whereas "liquid" is not used in this sense. Sometimes liquids given forfluid replacement, either by drinking or by injection, are also called fluids[7](e.g. "drink plenty of fluids"). Inhydraulics,fluidis a term which refers to liquids with certain properties, and is broader than (hydraulic) oils.[8]
Fluids display properties such as:
These properties are typically a function of their inability to support ashear stressin staticequilibrium. By contrast, solids respond to shear either witha spring-like restoring force—meaning that deformations are reversible—or they require a certain initialstressbefore they deform (seeplasticity).
Solids respond with restoring forces to both shear stresses and tonormal stresses, bothcompressiveandtensile. By contrast, ideal fluids only respond with restoring forces to normal stresses, calledpressure: fluids can be subjected both to compressive stress—corresponding to positive pressure—and to tensile stress, corresponding tonegative pressure. Solids and liquids both have tensile strengths, which when exceeded in solids createsirreversible deformationand fracture, and in liquids cause the onset ofcavitation.
Both solids and liquids have free surfaces, which cost some amount offree energyto form. In the case of solids, the amount of free energy to form a given unit of surface area is calledsurface energy, whereas for liquids the same quantity is calledsurface tension. In response to surface tension, the ability of liquids to flow results in behaviour differing from that of solids, though at equilibrium both tend tominimise their surface energy: liquids tend to form roundeddroplets, whereas pure solids tend to formcrystals.Gases, lacking free surfaces, freelydiffuse.
In a solid, shear stress is a function ofstrain, but in a fluid,shear stressis a function ofstrain rate. A consequence of this behavior isPascal's lawwhich describes the role ofpressurein characterizing a fluid's state.
The behavior of fluids can be described by theNavier–Stokes equations—a set ofpartial differential equationswhich are based on:
The study of fluids isfluid mechanics, which is subdivided intofluid dynamicsandfluid staticsdepending on whether the fluid is in motion.
Depending on the relationship between shear stress and the rate of strain and itsderivatives, fluids can be characterized as one of the following:
Newtonian fluids followNewton's law of viscosityand may be calledviscous fluids.
Fluids may be classified by their compressibility:
Strictly Newtonian and incompressible fluids do not actually exist; they are idealizations assumed to simplify theoretical treatment. Idealized fluids that completely ignore the effects of viscosity and compressibility are calledperfect fluids.
|
https://en.wikipedia.org/wiki/Fluid
|
Influid dynamics, aCross fluidis a type ofgeneralized Newtonian fluidwhoseviscositydepends upon shear rate according to the Cross Power Law equation:
whereμeff(γ˙){\displaystyle \mu _{\mathrm {eff} }({\dot {\gamma }})}isviscosityas a function ofshear rate,μ∞{\displaystyle \mu _{\infty }}is the infinite-shear-rate viscosity,μ0{\displaystyle \mu _{0}}is the zero-shear-rate viscosity,m{\displaystyle m}is the time constant, andn{\displaystyle n}is the shear-thinning index.
The zero-shear viscosityμ0{\displaystyle \mu _{0}}is approached at very low shear rates, while the infinite shear viscosityμ∞{\displaystyle \mu _{\infty }}is approached at very high shear rates.[1]
Whenμ0{\displaystyle \mu _{0}}>μ∞{\displaystyle \mu _{\infty }}, the fluid exhibitsshear thinning(pseudoplastic) behavior where viscosity decreases with increasing shear rate; whenμ0{\displaystyle \mu _{0}}<μ∞{\displaystyle \mu _{\infty }}, the fluid displaysshear thickening(dilatant) behavior where viscosity increases with shear rate.
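The limiting behaviour described above is easy to check numerically. The sketch below uses the standard Cross form μ_eff = μ∞ + (μ0 − μ∞) / (1 + (m·γ̇)ⁿ) with arbitrary illustrative parameters (not taken from any measured fluid):

```python
def cross_viscosity(gamma_dot, mu0, mu_inf, m, n):
    # Standard Cross model:
    # mu_eff = mu_inf + (mu0 - mu_inf) / (1 + (m * gamma_dot)**n)
    return mu_inf + (mu0 - mu_inf) / (1.0 + (m * gamma_dot) ** n)

# Illustrative (hypothetical) shear-thinning parameters
mu0, mu_inf, m, n = 10.0, 0.1, 1.0, 0.8

low = cross_viscosity(1e-8, mu0, mu_inf, m, n)   # approaches mu0 at low shear
high = cross_viscosity(1e8, mu0, mu_inf, m, n)   # approaches mu_inf at high shear
print(low, high)
```

Since μ0 > μ∞ here, the computed viscosity decreases monotonically with shear rate, i.e. the fluid is shear-thinning (pseudoplastic).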
It is named after Malcolm M. Cross who proposed this model in 1965.[2][3]
|
https://en.wikipedia.org/wiki/Cross_fluid
|
Influid dynamics,a Carreau fluidis a type ofgeneralized Newtonian fluid(named afterPierre Carreau) where viscosity,μeff{\displaystyle \mu _{\operatorname {eff} }}, depends upon theshear rate,γ˙{\displaystyle {\dot {\gamma }}}, by the following equation:
where:μ0{\displaystyle \mu _{0}},μinf{\displaystyle \mu _{\operatorname {\inf } }},λ{\displaystyle \lambda }andn{\displaystyle n}are material coefficients:μ0{\displaystyle \mu _{0}}is the viscosity at zero shear rate (Pa·s),μinf{\displaystyle \mu _{\operatorname {\inf } }}is the viscosity at infinite shear rate (Pa·s),λ{\displaystyle \lambda }is the characteristic time (s), andn{\displaystyle n}is the power index (dimensionless).
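The displayed equation was lost in extraction; the sketch below assumes the standard Carreau form μ_eff = μ∞ + (μ0 − μ∞)·(1 + (λγ̇)²)^((n−1)/2), with arbitrary illustrative coefficients:

```python
def carreau_viscosity(gamma_dot, mu0, mu_inf, lam, n):
    # Standard Carreau form (an assumption; the source's display was lost):
    # mu_eff = mu_inf + (mu0 - mu_inf) * (1 + (lam * gamma_dot)**2)**((n - 1) / 2)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

# Illustrative (hypothetical) coefficients for a shear-thinning fluid
mu0, mu_inf, lam, n = 1.0, 0.001, 2.0, 0.3
v_zero = carreau_viscosity(0.0, mu0, mu_inf, lam, n)   # exactly mu0 at zero shear
v_high = carreau_viscosity(1e6, mu0, mu_inf, lam, n)   # tends to mu_inf at high shear
print(v_zero, v_high)
```

At low shear rate the model behaves as a Newtonian fluid with viscosity μ0, and at high shear rate as a power-law fluid approaching μ∞, which is exactly what the limiting values above show.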
The dynamics of fluid motions is an important area of physics, with many important and commercially significant applications.
Computers are often used to calculate the motions of fluids, especially when the applications are of a safety critical nature.
|
https://en.wikipedia.org/wiki/Carreau_fluid
|
Ageneralized Newtonian fluidis an idealizedfluidfor which theshear stressis a function ofshear rateat the particular time, but not dependent upon the history of deformation. Although this type of fluid isnon-Newtonian(i.e. non-linear) in nature, itsconstitutive equationis a generalised form of theNewtonian fluid. Generalised Newtonian fluids satisfy the followingrheologicalequation:
whereτ{\displaystyle \tau }is theshear stress, andγ˙{\displaystyle {\dot {\gamma }}}is theshear rate. The quantityμeff{\displaystyle \mu _{\operatorname {eff} }}represents anapparent viscosityoreffective viscosityas a function of the shear rate.
The most commonly used types of generalized Newtonian fluids are:[1]
It has been shown thatlubrication theorymay be applied to all generalized Newtonian fluids in both two and three dimensions.[2][3]
|
https://en.wikipedia.org/wiki/Generalized_Newtonian_fluid
|
TheHerschel–Bulkley fluidis a generalized model of anon-Newtonian fluid, in which thestrainexperienced by the fluid is related to thestressin a complicated, non-linear way. Three parameters characterize this relationship: the consistencyk, the flow indexn, and the yield shear stressτ0{\displaystyle \tau _{0}}. The consistency is a simple constant of proportionality, while the flow index measures the degree to which the fluid is shear-thinning or shear-thickening. Ordinary paint is one example of a shear-thinning fluid, whileoobleckprovides one realization of a shear-thickening fluid. Finally, the yield stress quantifies the amount of stress that the fluid may experience before it yields and begins to flow.
This non-Newtonian fluid model was introduced by Winslow Herschel and Ronald Bulkley in 1926.[1][2]
In one dimension, theconstitutive equationof the Herschel-Bulkley model after the yield stress has been reached can be written in the form:[3][4]
whereτ{\displaystyle \tau }is theshear stress[Pa],τ0{\displaystyle \tau _{0}}the yield stress [Pa],k{\displaystyle k}the consistency index [Pa⋅{\displaystyle \cdot }sn{\displaystyle ^{n}}],γ˙{\displaystyle {\dot {\gamma }}}theshear rate[s−1{\displaystyle ^{-1}}], andn{\displaystyle n}the flow index [dimensionless]. Ifτ<τ0{\displaystyle \tau <\tau _{0}}the Herschel-Bulkley fluid behaves as a rigid (non-deformable) solid, otherwise it behaves as a fluid. Forn<1{\displaystyle n<1}the fluid is shear-thinning, whereas forn>1{\displaystyle n>1}the fluid is shear-thickening. Ifn=1{\displaystyle n=1}andτ0=0{\displaystyle \tau _{0}=0}, this model reduces to that of aNewtonian fluid.
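The one-dimensional law and its Newtonian limit can be illustrated directly. The sketch below assumes the standard post-yield form τ = τ0 + k·γ̇ⁿ (the displayed equation was lost in extraction), with hypothetical parameter values:

```python
def hb_stress(gamma_dot, k, n, tau0):
    # Standard one-dimensional Herschel-Bulkley law for a flowing material
    # (valid once the yield stress tau0 has been exceeded):
    #   tau = tau0 + k * gamma_dot**n
    return tau0 + k * gamma_dot ** n

# Illustrative (hypothetical) parameters: a shear-thinning yield-stress fluid
tau = hb_stress(4.0, k=2.0, n=0.5, tau0=3.0)        # 3 + 2*sqrt(4) = 7.0
# With n = 1 and tau0 = 0 the model reduces to a Newtonian fluid
newtonian = hb_stress(4.0, k=2.0, n=1.0, tau0=0.0)  # 2 * 4 = 8.0
print(tau, newtonian)
```

The second call makes the Newtonian reduction explicit: with n = 1 and τ0 = 0, k plays the role of the ordinary dynamic viscosity.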
Reformulated as a tensor, we can instead write:
where|γ˙__|=12∑i∑jγ˙ijγ˙ij{\displaystyle |{\underline {\underline {\dot {\gamma }}}}|={\sqrt {{\frac {1}{2}}\sum _{i}\sum _{j}{\dot {\gamma }}_{ij}{\dot {\gamma }}_{ij}}}}denotes the second invariant of the strain rate tensorγ˙__{\displaystyle {\underline {\underline {\dot {\gamma }}}}}. Note that the double underlines indicate a tensor quantity.
The viscosity associated with the Herschel-Bulkley stress diverges to infinity as the strain rate approaches zero. This divergence makes the model difficult to implement in numerical simulations, so it is common to implementregularizedmodels with an upper limiting viscosity. For instance, the Herschel-Bulkley fluid can be approximated as ageneralized Newtonian fluidmodel with an effective (or apparent) viscosity being given as[5]
Here, the limiting viscosityμ0{\displaystyle \mu _{0}}replaces the divergence at low strain rates. Its value is chosen such thatμ0=kγ˙0n−1+τ0γ˙0−1{\displaystyle \mu _{0}=k{\dot {\gamma }}_{0}^{n-1}+\tau _{0}{\dot {\gamma }}_{0}^{-1}}to ensure the viscosity is a continuous function of strain rate. A large limiting viscosity means that the fluid will only flow in response to a large applied force. This feature captures theBingham-type behaviour of the fluid. It is not entirely possible to capturerigidbehavior described by the constitutive equation of the Herschel-Bulkley model using a regularised model. This is because a finite effective viscosity will always lead to a small degree of yielding under the influence of external forces (e.g. gravity). The characteristic timescale of the phenomenon being studied is thus an important consideration when choosing a regularisation threshold.
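The regularization described above can be sketched as a piecewise effective viscosity that is capped at μ0 below a threshold strain rate γ̇0; the choice of μ0 makes the two branches meet continuously. Parameter values are arbitrary illustrations:

```python
def hb_effective_viscosity(gamma_dot, k, n, tau0, gamma0):
    # Regularized Herschel-Bulkley effective viscosity: below the threshold
    # gamma0 the viscosity is capped at the constant
    #   mu0 = k * gamma0**(n-1) + tau0 / gamma0,
    # which makes mu_eff a continuous function of the strain rate.
    mu0 = k * gamma0 ** (n - 1.0) + tau0 / gamma0
    if gamma_dot <= gamma0:
        return mu0
    return k * gamma_dot ** (n - 1.0) + tau0 / gamma_dot

# Illustrative (hypothetical) parameters
k, n, tau0, gamma0 = 2.0, 0.5, 1.0, 1e-3
left = hb_effective_viscosity(gamma0, k, n, tau0, gamma0)
right = hb_effective_viscosity(gamma0 * (1.0 + 1e-9), k, n, tau0, gamma0)
print(left, right)  # the two branches agree at the threshold
```

Note how large the capped viscosity is for a small γ̇0: this is the numerical stand-in for rigid (unyielded) behaviour, and a smaller threshold gives a stiffer, more Bingham-like plug.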
In an incompressible flow, theviscous stress tensoris given as a viscosity, multiplied by therate-of-strain tensor
(Note thatμeff(|γ˙|){\displaystyle \mu _{\operatorname {eff} }(|{\dot {\gamma }}|)}indicates that the effective viscosity is a function of the shear rate.) Furthermore, the magnitude of the shear rate is given by
The magnitude of the shear rate is an isotropic approximation, and it is coupled with the secondinvariantof the rate-of-strain tensor
A frequently-encountered situation in experiments ispressure-driven channel flow[6](see diagram). This situation exhibits an equilibrium in which there is flow only in the horizontal direction (along the pressure-gradient direction), and the pressure gradient and viscous effects are in balance. Then, theNavier-Stokes equations, together with therheologicalmodel, reduce to a single equation:
To solve this equation it is necessary to non-dimensionalize the quantities involved. The channel depthHis chosen as a length scale, the mean velocityVis taken as a velocity scale, and the pressure scale is taken to beP0=k(V/H)n{\displaystyle P_{0}=k\left(V/H\right)^{n}}. This analysis introduces the non-dimensional pressure gradient
π0=HP0∂p∂x,{\displaystyle \pi _{0}={\frac {H}{P_{0}}}{\frac {\partial p}{\partial x}},}
which is negative for flow from left to right, and the Bingham number:
Next, the domain of the solution is broken up into three parts, valid for a negative pressure gradient:
Solving this equation gives the velocity profile:
u(z)={nn+11π0[(π0(z−z1)+γ0n)1+(1/n)−(−π0z1+γ0n)1+(1/n)],z∈[0,z1]π02μ0(z2−z)+k,z∈[z1,z2],nn+11π0[(−π0(z−z2)+γ0n)1+(1/n)−(−π0(1−z2)+γ0n)1+(1/n)],z∈[z2,1]{\displaystyle u\left(z\right)={\begin{cases}{\frac {n}{n+1}}{\frac {1}{\pi _{0}}}\left[\left(\pi _{0}\left(z-z_{1}\right)+\gamma _{0}^{n}\right)^{1+\left(1/n\right)}-\left(-\pi _{0}z_{1}+\gamma _{0}^{n}\right)^{1+\left(1/n\right)}\right],&z\in \left[0,z_{1}\right]\\{\frac {\pi _{0}}{2\mu _{0}}}\left(z^{2}-z\right)+k,&z\in \left[z_{1},z_{2}\right],\\{\frac {n}{n+1}}{\frac {1}{\pi _{0}}}\left[\left(-\pi _{0}\left(z-z_{2}\right)+\gamma _{0}^{n}\right)^{1+\left(1/n\right)}-\left(-\pi _{0}\left(1-z_{2}\right)+\gamma _{0}^{n}\right)^{1+\left(1/n\right)}\right],&z\in \left[z_{2},1\right]\\\end{cases}}}
Herekis a matching constant such thatu(z1){\displaystyle u\left(z_{1}\right)}is continuous. The profile respects theno-slipconditions at the channel boundaries,
Using the same continuity arguments, it is shown thatz1,2=12±δ{\displaystyle z_{1,2}={\tfrac {1}{2}}\pm \delta }, where
δ=γ0μ0|π0|≤12.{\displaystyle \delta ={\frac {\gamma _{0}\mu _{0}}{|\pi _{0}|}}\leq {\tfrac {1}{2}}.}
Sinceμ0=γ0n−1+Bn/γ0{\displaystyle \mu _{0}=\gamma _{0}^{n-1}+Bn/\gamma _{0}}, for a given(γ0,Bn){\displaystyle \left(\gamma _{0},Bn\right)}pair, there is a critical pressure gradient
|π0,c|=2(γ0+Bn).{\displaystyle |\pi _{0,\mathrm {c} }|=2\left(\gamma _{0}+Bn\right).}
Apply any pressure gradient smaller in magnitude than this critical value, and the fluid will not flow; its Bingham nature is thus apparent. Any pressure gradient greater in magnitude than this critical value will result in flow. The flow associated with a shear-thickening fluid is retarded relative to that associated with a shear-thinning fluid.
Forlaminarflow Chilton and Stainsby[7]provide the following equation to calculate the pressure drop. The equation requires an iterative solution to extract the pressure drop, as it is present on both sides of the equation.
allows standard Newtonianfriction factorcorrelations to be used.
The pressure drop can then be calculated, given a suitable friction factor correlation. An iterative procedure is required, as the pressure drop is required to initiate the calculations as well as be the outcome of them.
|
https://en.wikipedia.org/wiki/Herschel%E2%80%93Bulkley_fluid
|
Price's model(named after the physicistDerek J. de Solla Price) is a mathematical model for the growth ofcitation networks.[1][2]It was the first model which generalized theSimon model[3]to be used for networks, especially for growing networks. Price's model belongs to the broader class of network growing models (together with theBarabási–Albert model) whose primary target is to explain the origination of networks with strongly skewed degree distributions. The model picked up the ideas of theSimon modelreflecting the concept ofrich get richer, also known as theMatthew effect. Price took the example of a network of citations between scientific papers and expressed its properties. His idea was that the way an old vertex (existing paper) gets new edges (new citations) should be proportional to the number of existing edges (existing citations) the vertex already has. This was referred to ascumulative advantage, now also known aspreferential attachment. Price's work is also significant in providing the first known example of ascale-free network(although this term was introduced later). His ideas were used to describe many real-world networks such as theWeb.
Consider a directed graph withnnodes. Letpk{\displaystyle p_{k}}denote the fraction of nodes with degreekso that∑kpk=1{\displaystyle \textstyle \sum _{k}{p_{k}}=1}. Each new node has a given out-degree (namely those papers it cites) and it is fixed in the long run. This does not mean that the out-degrees cannot vary across nodes; we simply assume that the mean out-degree,∑kkpk=m{\displaystyle \textstyle \sum _{k}{kp_{k}}=m}, is fixed over time, and consequentlymis not restricted to the integers. The most trivial form of preferential attachment means that a new node connects to an existing node proportionally to its in-degree. In other words, a new paper cites an existing paper in proportion to the number of papers that cite it. The caveat to such an idea is that, since no new paper is cited when it is joined to the network, it is going to have zero probability of being cited in the future (contrary to what happens in real life). To overcome this,Priceproposed that an attachment should be proportional to somek+k0{\displaystyle k+k_{0}}withk0{\displaystyle k_{0}}an arbitrary constant. Price proposedk0=1{\displaystyle k_{0}=1}, such that an initial citation is associated with the paper itself. The probability of a new edge connecting to any node with a degreekis now
The next question is the net change in the number of nodes with degreekwhen we add new nodes to the network. Naturally, this number is decreasing, as somek-degree nodes have new edges, hence becoming (k+ 1)-degree nodes; but on the other hand this number is also increasing, as some (k− 1)-degree nodes might get new edges, becomingkdegree nodes. To express this net change formally, let us denote the fraction ofk-degree nodes at a network ofnvertices withpk,n{\displaystyle p_{k,n}}:
and
To obtain a stationary solution forpk,n+1=pk,n=pk{\displaystyle p_{k,n+1}=p_{k,n}=p_{k}}, first let us expresspk{\displaystyle p_{k}}using the well-knownmaster equationmethod, as
After some manipulation, the expression above yields
and
withB(a,b){\displaystyle \mathbf {B} (a,b)}being theBeta-function. As a consequence,pk∼k−(2+1/m){\displaystyle p_{k}\sim k^{-(2+1/m)}}. This is identical to saying thatpk{\displaystyle p_{k}}follows apower-law distributionwith exponentα=2+1/m{\displaystyle \alpha =2+1/m}. Typically, this places the exponent between 2 and 3, which is the case for many real-world networks.Pricetested his model against citation network data and concluded that the resultingmyields a sufficiently goodpower-law distribution.
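Price's cumulative-advantage rule is straightforward to simulate. The sketch below (illustrative parameters, not from the source) uses the standard equivalence between attachment proportional to k + k0 and a mixture of edge-copying and uniform choice, and produces the heavy-tailed in-degree sequence predicted by the analysis:

```python
import random

def price_model(n_nodes, m, k0=1, seed=0):
    # Simulate Price's citation network growth (a sketch).
    # Each new node makes m citations.  Choosing a target in proportion to
    # in-degree + k0 (with mean out-degree m) is, to good approximation,
    # equivalent to: with probability m/(m + k0) copy the endpoint of a
    # random existing citation, otherwise pick an existing node uniformly.
    rng = random.Random(seed)
    in_degree = [0] * n_nodes
    cited = []  # node i appears once per citation it has received
    for new in range(1, n_nodes):
        for _ in range(m):
            if cited and rng.random() < m / (m + k0):
                target = rng.choice(cited)       # preferential part (prop. to k)
            else:
                target = rng.randrange(new)      # uniform part (prop. to k0)
            in_degree[target] += 1
            cited.append(target)
    return in_degree

degrees = price_model(n_nodes=5000, m=3)
# Heavy tail: a few early nodes collect a large share of all citations,
# while the typical (median) node has only a handful
print(max(degrees), sorted(degrees)[len(degrees) // 2])
```

With m = 3 the predicted exponent is α = 2 + 1/3 ≈ 2.33, which is consistent with the strongly skewed degree sequence the simulation produces.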
It is straightforward how to generalize the above results to the case whenk0≠1{\displaystyle k_{0}\neq 1}. Basic calculations show that
which once more yields a power-law distribution ofpk{\displaystyle p_{k}}with the same exponentα=2+k0/m{\displaystyle \alpha =2+k_{0}/m}for largekand fixedk0{\displaystyle k_{0}}.
The key difference from the more recentBarabási–Albert modelis that the Price model produces a graph with directed edges while theBarabási–Albert modelis the same model but with undirected edges. The direction is central to thecitation networkapplication which motivated Price. This means that the Price model produces adirected acyclic graphand these networks have distinctive properties.
For example, in adirected acyclic graphbothlongest pathsandshortest pathsare well defined. In the Price model the length of the longest path from the n-th node added to the network to the first node in the network scales as[4]ln(n){\displaystyle \ln(n)}.
For further discussion, see the literature.[5][6][7][8]Pricewas able to derive these results, but without computational resources this was as far as he could take the model. Much subsequent work on preferential attachment and network growth has been enabled by later technological progress.
|
https://en.wikipedia.org/wiki/Price%27s_model
|
Inprobability theoryandstatistics, thezeta distributionis a discreteprobability distribution. IfXis a zeta-distributedrandom variablewith parameters{\displaystyle s}, then the probability thatXtakes the positive integer valuekis given by theprobability mass function
whereζ(s) is theRiemann zeta function(which is undefined fors= 1).
The multiplicities of distinctprime factorsofXareindependentrandom variables.
Since theRiemann zeta functionis the sum of the termsk−s{\displaystyle k^{-s}}over all positive integersk, it thus appears as the normalization of theZipf distribution. The terms "Zipf distribution" and "zeta distribution" are often used interchangeably, but while the zeta distribution is aprobability distributionin its own right, it is not associated withZipf's lawwith the same exponent.
The Zeta distribution is defined for positive integersk≥1{\displaystyle k\geq 1}, and its probability mass function is given by
wheres>1{\displaystyle s>1}is the parameter, andζ(s){\displaystyle \zeta (s)}is theRiemann zeta function.
The cumulative distribution function is given by
whereHk,s{\displaystyle H_{k,s}}is the generalizedharmonic number
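The probability mass function is easy to evaluate numerically by truncating the zeta series (an approximation, good for s well above 1). A short sketch with s = 3:

```python
def zeta_fn(s, terms=20000):
    # Truncated series approximation of the Riemann zeta function (s > 1)
    return sum(j ** -s for j in range(1, terms + 1))

def zeta_pmf(k, s, zeta_s):
    # P(X = k) = k**(-s) / zeta(s) for k = 1, 2, ...
    return k ** -s / zeta_s

s = 3.0
zs = zeta_fn(s)
# The probabilities over k = 1..199 sum to just under 1;
# the small shortfall is the tail k >= 200
total = sum(zeta_pmf(k, s, zs) for k in range(1, 200))
print(zs, total)
```

For s = 3 the normalizing constant is ζ(3) ≈ 1.2021 (Apéry's constant), so roughly 83% of the probability mass sits on k = 1 alone, which shows how fast the tail decays for larger s.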
Thenth rawmomentis defined as the expected value ofXn:
The series on the right is just a series representation of the Riemann zeta function, but it only converges for values ofs−n{\displaystyle s-n}that are greater than unity. Thus:
The ratio of the zeta functions is well-defined, even forn>s− 1 because the series representation of the zeta function can beanalytically continued. This does not change the fact that the moments are specified by the series itself, and are therefore undefined for largen.
Themoment generating functionis defined as
The series is just the definition of thepolylogarithm, valid foret<1{\displaystyle e^{t}<1}so that
Since this does not converge on an open interval containingt=0{\displaystyle t=0}, the moment generating function does not exist.
ζ(1) is infinite, being theharmonic series, so the cases= 1 is not meaningful. However, ifAis any set of positive integers that has a density, i.e. if
exists whereN(A,n) is the number of members ofAless than or equal ton, then
is equal to that density.
The latter limit can also exist in some cases in whichAdoes not have a density. For example, ifAis the set of all positive integers whose first digit isd, thenAhas no density, but nonetheless, the second limit given above exists and is proportional to
which isBenford's law.
The Zeta distribution can be constructed with a sequence of independent random variables with ageometric distribution. Letp{\displaystyle p}be aprime numberandX(p−s){\displaystyle X(p^{-s})}be a random variable with a geometric distribution of parameterp−s{\displaystyle p^{-s}}, namely
P(X(p−s)=k)=p−ks(1−p−s){\displaystyle \quad \quad \quad \mathbb {P} \left(X(p^{-s})=k\right)=p^{-ks}(1-p^{-s})}
If the random variables(X(p−s))p∈P{\displaystyle (X(p^{-s}))_{p\in {\mathcal {P}}}}are independent, then, the random variableZs{\displaystyle Z_{s}}defined by
Zs=∏p∈PpX(p−s){\displaystyle \quad \quad \quad Z_{s}=\prod _{p\in {\mathcal {P}}}p^{X(p^{-s})}}
has the zeta distribution: P(Z_s = n) = 1/(n^s ζ(s)).
Stated differently, the random variable log(Z_s) = Σ_{p∈P} X(p^{-s}) log(p) is infinitely divisible with Lévy measure given by the following sum of Dirac masses:
Πs(dx)=∑p∈P∑k⩾1p−kskδklog(p)(dx){\displaystyle \quad \quad \quad \Pi _{s}(dx)=\sum _{p\in {\mathcal {P}}}\sum _{k\geqslant 1}{\frac {p^{-ks}}{k}}\delta _{k\log(p)}(dx)}
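This construction can be checked by Monte Carlo simulation. The sketch below (an illustrative approximation: the product over all primes is truncated to primes below 50, which slightly inflates P(Z_s = 1)) draws each geometric X(p^{-s}) by inverse-CDF sampling and compares the empirical P(Z_s = 1) with 1/ζ(2) = 6/π²:

```python
import math, random

def sample_zeta_via_geometric(s, primes, rng):
    """One draw of Z_s = prod_p p**X(p^-s), with X geometric on {0,1,2,...}."""
    z = 1
    for p in primes:
        r = p ** -s                                     # P(X = k) = r**k * (1 - r)
        k = int(math.log(rng.random()) / math.log(r))   # inverse-CDF sample
        z *= p ** k
    return z

rng = random.Random(0)
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
draws = [sample_zeta_via_geometric(2, primes, rng) for _ in range(20000)]
# P(Z_s = 1) should be close to 1/zeta(2) = 6/pi**2 ~ 0.608.
p1 = draws.count(1) / len(draws)
assert abs(p1 - 6 / math.pi ** 2) < 0.03
```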
|
https://en.wikipedia.org/wiki/Zeta_distribution
|
In probability theory and statistics, the Zipf–Mandelbrot law is a discrete probability distribution. Also known as the Pareto–Zipf law, it is a power-law distribution on ranked data, named after the linguist George Kingsley Zipf, who suggested a simpler distribution called Zipf's law, and the mathematician Benoit Mandelbrot, who subsequently generalized it.
Theprobability mass functionis given by
where H_{N,q,s} is given by
which may be thought of as a generalization of a harmonic number. In the formula, k is the rank of the data, and q and s are parameters of the distribution. In the limit as N approaches infinity, this becomes the Hurwitz zeta function ζ(s, q). For finite N and q = 0 the Zipf–Mandelbrot law becomes Zipf's law. For infinite N and q = 0 it becomes a zeta distribution.
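The probability mass function f(k; N, q, s) = (k + q)^{-s} / H_{N,q,s} is straightforward to compute. A short sketch, with illustrative parameter values:

```python
def zipf_mandelbrot_pmf(N, q, s):
    """PMF f(k) = (k + q)**-s / H_{N,q,s} for ranks k = 1..N."""
    H = sum((i + q) ** -s for i in range(1, N + 1))   # generalized harmonic number
    return [(k + q) ** -s / H for k in range(1, N + 1)]

pmf = zipf_mandelbrot_pmf(N=1000, q=1.5, s=1.2)
assert abs(sum(pmf) - 1.0) < 1e-12

# With q = 0 this reduces to Zipf's law: f(k) proportional to k**-s.
zipf = zipf_mandelbrot_pmf(N=1000, q=0, s=1.2)
assert abs(zipf[1] / zipf[0] - 2 ** -1.2) < 1e-12
```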
The distribution of words ranked by theirfrequencyin a randomtext corpusis approximated by apower-lawdistribution, known asZipf's law.
If one plots the frequency rank of words contained in a moderately sized corpus of text data versus the number of occurrences or actual frequencies, one obtains a power-law distribution, with exponent close to one (but see Powers, 1998 and Gelbukh & Sidorov, 2001). Zipf's law implicitly assumes a fixed vocabulary size, but the harmonic series with s = 1 does not converge, while the Zipf–Mandelbrot generalization with s > 1 does. Furthermore, there is evidence that the closed class of functional words that define a language obeys a Zipf–Mandelbrot distribution with different parameters from the open classes of contentive words that vary by topic, field and register.[1]
In ecological field studies, therelative abundance distribution(i.e. the graph of the number of species observed as a function of their abundance) is often found to conform to a Zipf–Mandelbrot law.[2]
Within music, many metrics of measuring "pleasing" music conform to Zipf–Mandelbrot distributions.[3]
|
https://en.wikipedia.org/wiki/Zipf%E2%80%93Mandelbrot_law
|
Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of classical financial models. These classical models of financial time series typically assume homoskedasticity and normality, and as such cannot explain stylized phenomena such as skewness, heavy tails, and volatility clustering of empirical asset returns in finance. In 1963, Benoit Mandelbrot first used the stable (or α-stable) distribution to model empirical distributions that have the skewness and heavy-tail property. Since α-stable distributions have infinite p-th moments for all p > α, tempered stable processes have been proposed to overcome this limitation of the stable distribution.
On the other hand,GARCHmodels have been developed to explain thevolatility clustering. In the GARCH model, the innovation (or residual) distributions are assumed to be a standard normal distribution, despite the fact that this assumption is often rejected empirically. For this reason, GARCH models with non-normal innovation distribution have been developed.
Many financial models with stable and tempered stable distributions together with volatility clustering have been developed and applied to risk management, option pricing, and portfolio selection.
A random variable Y is called infinitely divisible if,
for each n = 1, 2, …, there are independent and identically distributed random variables
such that
where =^d denotes equality in distribution.
A Borel measure ν on R is called a Lévy measure if ν({0}) = 0 and
If Y is infinitely divisible, then the characteristic function φ_Y(u) = E[e^{iuY}] is given by
where σ ≥ 0, γ ∈ R and ν is a Lévy measure.
Here the triple (σ², ν, γ) is called a Lévy triplet of Y. This triplet is unique. Conversely, for any choice (σ², ν, γ) satisfying the conditions above, there exists an infinitely divisible random variable Y whose characteristic function is given as φ_Y.
A real-valued random variable X is said to have an α-stable distribution if for any n ≥ 2, there
are a positive number C_n and a real number D_n such that
where X_1, X_2, …, X_n are independent and have the same distribution as that of X. All stable random variables are infinitely divisible. It is known that C_n = n^{1/α} for some 0 < α ≤ 2. A stable random variable X with index α is called an α-stable random variable.
Let X be an α-stable random variable. Then the characteristic function φ_X of X is given by
for some μ ∈ R, σ > 0 and β ∈ [−1, 1].
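The explicit formula for φ_X is missing from the extract above; one commonly used parametrization (for α ≠ 1) is φ_X(u) = exp(iuμ − |σu|^α (1 − iβ sgn(u) tan(πα/2))). A sketch under that assumption, verifying that α = 2 collapses to the Gaussian characteristic function regardless of β:

```python
import cmath, math

def stable_cf(u, alpha, beta, sigma, mu):
    """Characteristic function of an alpha-stable law (alpha != 1),
    in one common parametrization -- a sketch, not the only convention."""
    skew = 1 - 1j * beta * math.copysign(1.0, u) * math.tan(math.pi * alpha / 2)
    return cmath.exp(1j * u * mu - abs(sigma * u) ** alpha * skew)

# alpha = 2: tan(pi) = 0, so the skew term vanishes and the law is Gaussian,
# with cf exp(i*u*mu - sigma**2 * u**2).
u = 0.7
g = stable_cf(u, 2.0, 0.5, 1.3, 0.2)
expected = cmath.exp(1j * u * 0.2 - 1.3 ** 2 * u ** 2)
assert abs(g - expected) < 1e-12
```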
An infinitely divisible distribution is called a classical tempered stable (CTS) distribution with parameter (C1, C2, λ+, λ−, α),
if its Lévy triplet (σ², ν, γ) is given by σ = 0, γ ∈ R and
where C1, C2, λ+, λ− > 0 and α < 2.
This distribution was first introduced under
the name of truncated Lévy flights[1] and 'exponentially truncated stable distribution'.[2] It was subsequently called the tempered stable or the KoBoL distribution.[3] In particular, if C1 = C2 = C > 0, then this distribution is called the CGMY distribution.[4]
The characteristic function φ_CTS for a tempered stable distribution is given by
for some μ ∈ R. Moreover, φ_CTS can be extended to the region {z ∈ C : Im(z) ∈ (−λ−, λ+)}.
Rosiński generalized the CTS distribution under the name of the tempered stable distribution. The KR distribution, which is a subclass of Rosiński's generalized tempered stable distributions, is used in finance.[5]
An infinitely divisible distribution is called a modified tempered stable (MTS) distribution with parameter (C, λ+, λ−, α),
if its Lévy triplet (σ², ν, γ) is given by σ = 0, γ ∈ R and
where C, λ+, λ− > 0, α < 2 and
Here K_p(x) is the modified Bessel function of the second kind.
The MTS distribution is not included in the class of Rosiński's generalized tempered stable distributions.[6]
In order to describe the volatility clustering effect of the return process of an asset, the GARCH model can be used. In the GARCH model, the innovation ε_t is assumed to satisfy ε_t = σ_t z_t, where z_t ∼ iid N(0, 1), and where
the series σ_t² is modeled by
and where α0 > 0 and αi ≥ 0, i > 0.
However, the assumption z_t ∼ iid N(0, 1) is often rejected empirically. For that reason, new GARCH models with stable or tempered stable distributed innovations have been developed. GARCH models with α-stable innovations have been introduced.[7][8][9] Subsequently, GARCH models with tempered stable innovations have been developed.[6][10]
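A minimal simulation sketch of the GARCH(1,1) special case with normal innovations (the parameter values are illustrative), showing the recursion σ²_t = α0 + α1·ε²_{t−1} + β1·σ²_{t−1} and checking the sample variance against the unconditional variance α0/(1 − α1 − β1):

```python
import random

def simulate_garch11(a0, a1, b1, n, seed=0):
    """Simulate a GARCH(1,1) process: sigma2_t = a0 + a1*eps_{t-1}**2 + b1*sigma2_{t-1},
    with eps_t = sigma_t * z_t and z_t ~ iid N(0, 1)."""
    rng = random.Random(seed)
    sigma2 = a0 / (1 - a1 - b1)          # start at the unconditional variance
    eps = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        e = sigma2 ** 0.5 * z
        eps.append(e)
        sigma2 = a0 + a1 * e * e + b1 * sigma2
    return eps

returns = simulate_garch11(a0=0.1, a1=0.1, b1=0.85, n=50000)
# Sample variance should be near the unconditional variance 0.1/(1-0.95) = 2.
var = sum(e * e for e in returns) / len(returns)
assert abs(var - 2.0) < 0.4
```

Replacing `rng.gauss` with draws from a stable or tempered stable law gives the non-normal-innovation variants discussed above.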
Objections to the use of stable distributions in financial models have been raised.[11][12]
|
https://en.wikipedia.org/wiki/Financial_models_with_long-tailed_distributions_and_volatility_clustering
|
The multivariate stable distribution is a multivariate probability distribution that is a multivariate generalisation of the univariate stable distribution. The multivariate stable distribution defines linear relations between stable distribution marginals. In the same way as for the univariate case, the distribution is defined in terms of its characteristic function.
The multivariate stable distribution can also be thought of as an extension of the multivariate normal distribution. It has a parameter α, defined over the range 0 < α ≤ 2, where the case α = 2 is equivalent to the multivariate normal distribution. It has an additional skew parameter that allows for non-symmetric distributions, where the multivariate normal distribution is symmetric.
Let S be the Euclidean unit sphere in R^d, that is, S = {u ∈ R^d : |u| = 1}. A random vector X has a multivariate stable distribution, denoted X ∼ S(α, Λ, δ), if the joint characteristic function of X is[1]
where 0 < α < 2, and for y ∈ R
This is essentially the result of Feldheim[2] that any stable random vector can be characterized by a spectral measure Λ (a finite measure on S) and a shift vector δ ∈ R^d.
Another way to describe a stable random vector is in terms of projections. For any vector u, the projection u^T X is univariate α-stable with some skewness β(u), scale γ(u), and some shift δ(u). The notation X ∼ S(α, β, γ, δ) is used if X is stable with u^T X ∼ s(α, β(u), γ(u), δ(u)) for every u ∈ R^d. This is called the projection parametrization.
The spectral measure determines the projection parameter functions by:
There are special cases where the multivariate characteristic function takes a simpler form. Define the characteristic function of a stable marginal as
The characteristic function is E exp(iu^T X) = exp{−γ0^α |u|^α + i u^T δ}. The spectral measure is continuous and uniform, leading to radial/isotropic symmetry.[3] For the multinormal case α = 2, this corresponds to independent components, but this is not the case when α < 2. Isotropy is a special case of ellipticity (see the next paragraph); just take Σ to be a multiple of the identity matrix.
Theelliptically contouredmultivariate stable distribution is a special symmetric case of the multivariate stable distribution.
If X is α-stable and elliptically contoured, then it has joint characteristic function E exp(iu^T X) = exp{−(u^T Σ u)^{α/2} + i u^T δ} for some shift vector δ ∈ R^d (equal to the mean when it exists) and some positive definite matrix Σ (akin to a correlation matrix, although the usual definition of correlation fails to be meaningful).
Note the relation to the characteristic function of the multivariate normal distribution: E exp(iu^T X) = exp{−u^T Σ u + i u^T δ}, obtained when α = 2.
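The elliptical characteristic function is easy to evaluate directly. A small sketch (2-dimensional, plain lists, illustrative Σ and δ) verifying that α = 2 recovers the multivariate-normal form:

```python
import cmath

def elliptical_stable_cf(u, alpha, Sigma, delta):
    """Joint cf exp(-(u^T Sigma u)**(alpha/2) + i u^T delta) of an
    elliptically contoured alpha-stable vector (2-d sketch)."""
    q = sum(u[i] * Sigma[i][j] * u[j] for i in range(2) for j in range(2))
    shift = sum(u[i] * delta[i] for i in range(2))
    return cmath.exp(-q ** (alpha / 2) + 1j * shift)

Sigma = [[1.0, 0.3], [0.3, 0.5]]
delta = [0.1, -0.2]
u = [0.4, -0.6]
# alpha = 2 recovers the multivariate-normal form exp(-(u^T Sigma u) + i u^T delta).
q = sum(u[i] * Sigma[i][j] * u[j] for i in range(2) for j in range(2))
expected = cmath.exp(-q + 1j * (u[0] * delta[0] + u[1] * delta[1]))
assert abs(elliptical_stable_cf(u, 2.0, Sigma, delta) - expected) < 1e-12
```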
If the marginals are independent with X_j ∼ S(α, β_j, γ_j, δ_j), then the
characteristic function is
Observe that when α = 2 this reduces again to the multivariate normal; note that the iid case and the isotropic case do not coincide when α < 2.
Independent components is a special case of discrete spectral measure (see next paragraph), with the spectral measure supported by the standard unit vectors.
If the spectral measure is discrete with mass λ_j at s_j ∈ S, j = 1, …, m, the characteristic function is
If X ∼ S(α, β(·), γ(·), δ(·)) is d-dimensional, A is an m × d matrix, and b ∈ R^m, then AX + b is m-dimensional α-stable with scale function γ(A^T ·), skewness function β(A^T ·), and location function δ(A^T ·) + b^T.
Bickson and Guestrin have shown how to compute inference in closed-form in a linear model (or equivalently afactor analysismodel), involving independent component models.[4]
More specifically, let X_i ∼ S(α, β_{x_i}, γ_{x_i}, δ_{x_i}), i = 1, …, n, be a family of i.i.d. unobserved univariate random variables drawn from a stable distribution. Given a known linear relation matrix A of size n × n, the observations Y_i = Σ_{j=1}^{n} A_{ij} X_j are assumed to be distributed as a convolution of the hidden factors X_i, hence Y_i ∼ S(α, β_{y_i}, γ_{y_i}, δ_{y_i}). The inference task is to compute the most likely X_i, given the linear relation matrix A and the observations Y_i. This task can be computed in closed form in O(n³).
An application for this construction ismultiuser detectionwith stable, non-Gaussian noise.
|
https://en.wikipedia.org/wiki/Multivariate_stable_distribution
|
Discrete-stable distributions[1]are a class ofprobability distributionswith the property that the sum of several random variables from such a distribution under appropriate scaling is distributed according to the same family. They are the discrete analogue ofcontinuous-stable distributions.
Discrete-stable distributions have been used in numerous fields, in particular inscale-free networkssuch as theinternetandsocial networks[2]or evensemantic networks.[3]
Both discrete and continuous classes of stable distribution have properties such asinfinite divisibility,power lawtails, andunimodality.
The most well-known discrete-stable distribution is the special case of the Poisson distribution.[4] It is the only discrete-stable distribution for which the mean and all higher-order moments are finite.
The discrete-stable distributions are defined[5]through theirprobability-generating function
In the above, a > 0 is a scale parameter and 0 < ν ≤ 1 describes the power-law behaviour, such that when 0 < ν < 1,
When ν = 1, the distribution becomes the familiar Poisson distribution with mean a.
Thecharacteristic functionof a discrete-stable distribution has the form[6]
Again, whenν=1{\displaystyle \nu =1}, the distribution becomes thePoisson distributionwith meana{\displaystyle a}.
The original distribution is recovered through repeated differentiation of the generating function:
A closed-form expression using elementary functions for the probability distribution of the discrete-stable distributions is not known except in the Poisson case, in which
Expressions exist, however, that use special functions for the case ν = 1/2[7] (in terms of Bessel functions) and ν = 1/3[8] (in terms of hypergeometric functions).
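Even without a closed form, the probabilities can be extracted numerically from the probability-generating function, which for discrete-stable laws takes the form G(z) = exp(−a(1 − z)^ν). The sketch below does the series analogue of the repeated differentiation mentioned above, using the binomial series for (1 − z)^ν and the standard power-series recurrence for exp:

```python
from math import exp

def discrete_stable_pmf(a, nu, nmax):
    """P(N = n) for n = 0..nmax, from the pgf G(z) = exp(-a*(1 - z)**nu)
    by extracting Taylor coefficients at z = 0."""
    # Coefficients of f(z) = -a*(1 - z)**nu via the binomial series.
    c = [0.0] * (nmax + 1)
    binom = 1.0                      # C(nu, k), built incrementally
    for k in range(nmax + 1):
        c[k] = -a * binom * (-1) ** k
        binom *= (nu - k) / (k + 1)
    # g = exp(f) via the recurrence n*g[n] = sum_{k=1..n} k*c[k]*g[n-k].
    g = [0.0] * (nmax + 1)
    g[0] = exp(c[0])
    for n in range(1, nmax + 1):
        g[n] = sum(k * c[k] * g[n - k] for k in range(1, n + 1)) / n
    return g

# nu = 1 must give exactly the Poisson(a) pmf.
g = discrete_stable_pmf(a=2.0, nu=1.0, nmax=10)
assert abs(g[3] - exp(-2.0) * 2.0 ** 3 / 6) < 1e-12
# For nu < 1 the probabilities are nonnegative and sum toward 1
# (slowly, because of the power-law tail).
h = discrete_stable_pmf(a=1.0, nu=0.5, nmax=400)
assert min(h) > -1e-12 and 0.9 < sum(h) <= 1.0 + 1e-9
```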
The entire class of discrete-stable distributions can be formed as Poisson compound probability distributions, where the mean λ of a Poisson distribution is taken to be a random variable with a probability density function (PDF). When the PDF of the mean is a one-sided continuous-stable distribution with stability parameter 0 < α < 1 and scale parameter c, the resultant distribution is[9] discrete-stable with index ν = α and scale parameter a = c sec(απ/2).
Formally, this is written
where p(x; α, 1, c, 0) is the PDF of a one-sided continuous-stable distribution with symmetry parameter β = 1 and location parameter μ = 0.
A more general result[8] states that forming a compound distribution from any discrete-stable distribution with index ν with a one-sided continuous-stable distribution with index α results in a discrete-stable distribution with index ν·α, and reduces the power-law index of the original distribution by a factor of α.
In other words,
In the limit ν → 1, the discrete-stable distributions behave[9] like a Poisson distribution with mean a sec(νπ/2) for small N, but for N ≫ 1, the power-law tail dominates.
The convergence of i.i.d. random variates with power-law tails P(N) ∼ 1/N^{1+ν} to a discrete-stable distribution is extraordinarily slow[10] when ν ≈ 1, the limit being the Poisson distribution when ν > 1 and P(N | ν, a) when ν ≤ 1.
|
https://en.wikipedia.org/wiki/Discrete-stable_distribution
|
Perception (from Latin perceptio 'gathering, receiving') is the organization, identification, and interpretation of sensory information in order to represent and understand the presented information or environment.[2] All perception involves signals that go through the nervous system, which in turn result from physical or chemical stimulation of the sensory system.[3] Vision involves light striking the retina of the eye; smell is mediated by odor molecules; and hearing involves pressure waves.
Perception is not only the passive receipt of these signals; it is also shaped by the recipient's learning, memory, expectation, and attention.[4][5] Sensory input is a process that transforms this low-level information into higher-level information (e.g., extracts shapes for object recognition).[5] The following process connects a person's concepts and expectations (or knowledge) with restorative and selective mechanisms, such as attention, that influence perception.
Perception depends on complex functions of thenervous system, but subjectively seems mostly effortless because this processing happens outsideconsciousawareness.[3]Since the rise ofexperimental psychologyin the 19th century,psychology's understanding of perceptionhas progressed by combining a variety of techniques.[4]Psychophysicsquantitativelydescribes the relationships between the physical qualities of the sensory input and perception.[6]Sensory neurosciencestudies the neural mechanisms underlying perception. Perceptual systems can also be studiedcomputationally, in terms of the information they process.Perceptual issues in philosophyinclude the extent to which sensory qualities such assound, smell orcolorexist in objective reality rather than in the mind of the perceiver.[4]
Although people traditionally viewed the senses as passive receptors, the study ofillusionsandambiguous imageshas demonstrated that thebrain's perceptual systems actively and pre-consciously attempt to make sense of their input.[4]There is still active debate about the extent to which perception is an active process ofhypothesistesting, analogous toscience, or whether realistic sensory information is rich enough to make this process unnecessary.[4]
Theperceptual systemsof the brain enable individuals to see the world around them as stable, even though the sensory information is typically incomplete and rapidly varying. Human and other animal brains are structured in amodular way, with different areas processing different kinds of sensory information. Some of these modules take the form ofsensory maps, mapping some aspect of the world across part of the brain's surface. These different modules are interconnected and influence each other. For instance,tasteis strongly influenced by smell.[7]
The process of perception begins with an object in the real world, known as thedistalstimulusordistal object.[3]By means of light, sound, or another physical process, the object stimulates the body's sensory organs. These sensory organs transform the input energy into neural activity—a process calledtransduction.[3][8]This raw pattern of neural activity is called theproximal stimulus.[3]These neural signals are then transmitted to the brain and processed.[3]The resulting mental re-creation of the distal stimulus is thepercept.
To explain the process of perception, an example could be an ordinary shoe. The shoe itself is the distal stimulus. When light from the shoe enters a person's eye and stimulates the retina, that stimulation is the proximal stimulus.[9]The image of the shoe reconstructed by the brain of the person is the percept. Another example could be a ringing telephone. The ringing of the phone is the distal stimulus. The sound stimulating a person's auditory receptors is the proximal stimulus. The brain's interpretation of this as the "ringing of a telephone" is the percept.
The different kinds of sensation (such as warmth, sound, and taste) are calledsensory modalitiesorstimulus modalities.[8][10]
PsychologistJerome Brunerdeveloped a model of perception, in which people put "together the information contained in" a target and a situation to form "perceptions of ourselves and others based on social categories."[11][12]This model is composed of three states:
According to Alan Saks and Gary Johns, there are three components to perception:[13]
Stimuli are not necessarily translated into a percept and rarely does a single stimulus translate into a percept. An ambiguous stimulus may sometimes be transduced into one or more percepts, experienced randomly, one at a time, in a process termedmultistable perception. The same stimuli, or absence of them, may result in different percepts depending on subject's culture and previous experiences.[14]
Ambiguous figures demonstrate that a single stimulus can result in more than one percept. For example, theRubin vasecan be interpreted either as a vase or as two faces. The percept can bind sensations from multiple senses into a whole. A picture of a talking person on a television screen, for example, is bound to the sound of speech from speakers to form a percept of a talking person.
In many ways, vision is the primary human sense. Light is taken in through each eye and focused in a way which sorts it on the retina according to direction of origin. A dense surface of photosensitive cells, including rods, cones, andintrinsically photosensitive retinal ganglion cellscaptures information about the intensity, color, and position of incoming light. Some processing of texture and movement occurs within the neurons on the retina before the information is sent to the brain. In total, about 15 differing types of information are then forwarded to the brain proper via the optic nerve.[15]
The timing of perception of a visual event, at points along the visual circuit, have been measured. A sudden alteration of light at a spot in the environment first alters photoreceptor cells in theretina, which send a signal to theretina bipolar celllayer which, in turn, can activate a retinal ganglion neuron cell. A retinal ganglion cell is a bridging neuron that connects visual retinal input to the visual processing centers within the central nervous system.[16]Light-altered neuron activation occurs within about 5–20 milliseconds in a rabbit retinal ganglion,[17]although in a mouse retinal ganglion cell the initial spike takes between 40 and 240 milliseconds before the initial activation.[18]The initial activation can be detected by anaction potentialspike, a sudden spike in neuron membrane electric voltage.
A perceptual visual event measured in humans was the presentation to individuals of an anomalous word. If these individuals are shown a sentence, presented as a sequence of single words on a computer screen, with a puzzling word out of place in the sequence, the perception of the puzzling word can register on an electroencephalogram (EEG). In an experiment, human readers wore an elastic cap with 64 embedded electrodes distributed over their scalp surface.[19]Within 230 milliseconds of encountering the anomalous word, the human readers generated an event-related electrical potential alteration of their EEG at the left occipital-temporal channel, over the left occipital lobe and temporal lobe.
Hearing(oraudition) is the ability to perceivesoundby detectingvibrations(i.e.,sonicdetection). Frequencies capable of being heard by humans are calledaudiooraudiblefrequencies, the range of which is typically considered to be between 20Hzand 20,000 Hz.[20]Frequencies higher than audio are referred to asultrasonic, while frequencies below audio are referred to asinfrasonic.
Theauditory systemincludes theouter ears, which collect and filter sound waves; themiddle ear, which transforms the sound pressure (impedance matching); and theinner ear, which produces neural signals in response to the sound. By the ascendingauditory pathwaythese are led to theprimary auditory cortexwithin thetemporal lobeof the human brain, from where the auditory information then goes to thecerebral cortexfor further processing.
Sound does not usually come from a single source: in real situations, sounds from multiple sources and directions aresuperimposedas they arrive at the ears. Hearing involves the computationally complex task of separating out sources of interest, identifying them and often estimating their distance and direction.[21]
The process of recognizing objects through touch is known ashaptic perception. It involves a combination ofsomatosensoryperception of patterns on the skin surface (e.g., edges, curvature, and texture) andproprioceptionof hand position and conformation. People can rapidly and accurately identify three-dimensional objects by touch.[22]This involves exploratory procedures, such as moving the fingers over the outer surface of the object or holding the entire object in the hand.[23]Haptic perception relies on the forces experienced during touch.[24]
Professor Gibson defined the haptic system as "the sensibility of the individual to the world adjacent to his body by use of his body."[25] Gibson and others emphasized the close link between body movement and haptic perception, where the latter is active exploration.
The concept of haptic perception is related to the concept ofextended physiological proprioceptionaccording to which, when using a tool such as a stick, perceptual experience is transparently transferred to the end of the tool.
Taste (formally known asgustation) is the ability to perceive theflavorof substances, including, but not limited to,food. Humans receive tastes through sensory organs concentrated on the upper surface of thetongue, calledtaste budsorgustatory calyculi.[26]The human tongue has 100 to 150 taste receptor cells on each of its roughly-ten thousand taste buds.[27]
Traditionally, there have been four primary tastes:sweetness,bitterness,sourness, andsaltiness. The recognition and awareness ofumami, which is considered the fifth primary taste, is a relatively recent development inWestern cuisine.[28][29]Other tastes can be mimicked by combining these basic tastes,[27][30]all of which contribute only partially to the sensation andflavorof food in the mouth. Other factors includesmell, which is detected by theolfactory epitheliumof the nose;[7]texture, which is detected through a variety ofmechanoreceptors, muscle nerves, etc.;[30][31]and temperature, which is detected bythermoreceptors.[30]All basic tastes are classified as eitherappetitiveoraversive, depending upon whether the things they sense are harmful or beneficial.[32]
Smell is the process of absorbing molecules througholfactory organs, which are absorbed by humans through thenose. These molecules diffuse through a thick layer ofmucus; come into contact with one of thousands ofciliathat are projected from sensory neurons; and are then absorbed into a receptor (one of 347 or so).[33]It is this process that causes humans to understand the concept of smell from a physical standpoint.
Smell is also a very interactive sense as scientists have begun to observe that olfaction comes into contact with the other sense in unexpected ways.[34]It is also the most primal of the senses, as it is known to be the first indicator of safety or danger, therefore being the sense that drives the most basic of human survival skills. As such, it can be a catalyst for human behavior on asubconsciousandinstinctivelevel.[35]
Social perceptionis the part of perception that allows people to understand the individuals and groups of their social world. Thus, it is an element ofsocial cognition.[36]
Speech perceptionis the process by whichspoken languageis heard, interpreted and understood. Research in this field seeks to understand how human listeners recognize the sound of speech (orphonetics) and use such information to understand spoken language.
Listeners manage to perceive words across a wide range of conditions, as the sound of a word can vary widely according to words that surround it and thetempoof the speech, as well as the physical characteristics,accent,tone, and mood of the speaker.Reverberation, signifying the persistence of sound after the sound is produced, can also have a considerable impact on perception. Experiments have shown that people automatically compensate for this effect when hearing speech.[21][37]
The process of perceiving speech begins at the level of the sound within the auditory signal and the process ofaudition. The initial auditory signal is compared with visual information—primarily lip movement—to extract acoustic cues and phonetic information. It is possible other sensory modalities are integrated at this stage as well.[38]This speech information can then be used for higher-level language processes, such asword recognition.
Speech perception is not necessarily uni-directional. Higher-level language processes connected withmorphology,syntax, and/orsemanticsmay also interact with basic speech perception processes to aid in recognition of speech sounds.[39]It may be the case that it is not necessary (maybe not even possible) for a listener to recognizephonemesbefore recognizing higher units, such as words. In an experiment, professor Richard M. Warren replaced one phoneme of a word with a cough-like sound. His subjects restored the missing speech sound perceptually without any difficulty. Moreover, they were not able to accurately identify which phoneme had even been disturbed.[40]
Facial perception refers to cognitive processes specialized in handling human faces (including perceiving the identity of an individual) and facial expressions (such as emotional cues).[citation needed]
The somatosensory cortex is a part of the brain that receives and encodes sensory information from receptors of the entire body.[41]
Affective touch is a type of sensory information that elicits an emotional reaction and is usually social in nature. Such information is actually coded differently than other sensory information. Though the intensity of affective touch is still encoded in the primary somatosensory cortex, the feeling of pleasantness associated with affective touch is activated more in the anterior cingulate cortex. Increased blood oxygen level-dependent (BOLD) contrast imaging, identified during functional magnetic resonance imaging (fMRI), shows that signals in the anterior cingulate cortex, as well as the prefrontal cortex, are highly correlated with pleasantness scores of affective touch. Inhibitory transcranial magnetic stimulation (TMS) of the primary somatosensory cortex inhibits the perception of affective touch intensity, but not affective touch pleasantness. Therefore, the S1 is not directly involved in processing socially affective touch pleasantness, but still plays a role in discriminating touch location and intensity.[42]
Multi-modal perception refers to concurrent stimulation in more than one sensory modality and the effect such stimulation has on the perception of events and objects in the world.[43]
Chronoception refers to how the passage of time is perceived and experienced. Although the sense of time is not associated with a specific sensory system, the work of psychologists and neuroscientists indicates that human brains do have a system governing the perception of time,[44][45] composed of a highly distributed system involving the cerebral cortex, cerebellum, and basal ganglia. One particular component of the brain, the suprachiasmatic nucleus, is responsible for the circadian rhythm (commonly known as one's "internal clock"), while other cell clusters appear to be capable of shorter-range timekeeping, known as an ultradian rhythm.
One or more dopaminergic pathways in the central nervous system appear to have a strong modulatory influence on mental chronometry, particularly interval timing.[46]
Sense of agency refers to the subjective feeling of having chosen a particular action. Some conditions, such as schizophrenia, can cause a loss of this sense, which may lead a person into delusions, such as feeling like a machine or like an outside source is controlling them. An opposite extreme can also occur, where people experience everything in their environment as though they had decided that it would happen.[47]
Even in non-pathological cases, there is a measurable difference between the making of a decision and the feeling of agency. Through methods such as the Libet experiment, a gap of half a second or more can be detected between the time when there are detectable neurological signs of a decision having been made and the time when the subject actually becomes conscious of the decision.
There are also experiments in which an illusion of agency is induced in psychologically normal subjects. In 1999, psychologists Wegner and Wheatley gave subjects instructions to move a mouse around a scene and point to an image about once every thirty seconds. However, a second person—acting as a test subject but actually a confederate—had their hand on the mouse at the same time, and controlled some of the movement. Experimenters were able to arrange for subjects to perceive certain "forced stops" as if they were their own choice.[48][49]
Recognition memory is sometimes divided by neuroscientists into two functions: familiarity and recollection.[50] A strong sense of familiarity can occur without any recollection, for example in cases of déjà vu.
The temporal lobe (specifically the perirhinal cortex) responds differently to stimuli that feel novel compared to stimuli that feel familiar. Firing rates in the perirhinal cortex are connected with the sense of familiarity in humans and other mammals. In tests, stimulating this area at 10–15 Hz caused animals to treat even novel images as familiar, and stimulation at 30–40 Hz caused novel images to be partially treated as familiar.[51] In particular, stimulation at 30–40 Hz led to animals looking at a familiar image for longer periods, as they would for an unfamiliar one, though it did not lead to the same exploration behavior normally associated with novelty.
Recent studies on lesions in the area concluded that rats with a damaged perirhinal cortex were still more interested in exploring when novel objects were present, but seemed unable to tell novel objects from familiar ones—they examined both equally. Thus, other brain regions are involved with noticing unfamiliarity, while the perirhinal cortex is needed to associate the feeling with a specific source.[52]
Sexual stimulation is any stimulus (including bodily contact) that leads to, enhances, and maintains sexual arousal, possibly even leading to orgasm. Distinct from the general sense of touch, sexual stimulation is strongly tied to hormonal activity and chemical triggers in the body. Although sexual arousal may arise without physical stimulation, achieving orgasm usually requires physical sexual stimulation (stimulation of the Krause-Finger corpuscles[53] found in erogenous zones of the body).
Other senses enable perception of body balance (the vestibular sense[54]); acceleration, including gravity; and the position of body parts (proprioception[1]). They can also enable perception of internal senses (interoception[55]), such as temperature, pain, suffocation, the gag reflex, abdominal distension, fullness of the rectum and urinary bladder, and sensations felt in the throat and lungs.
In the case of visual perception, some people can see the percept shift in their mind's eye.[56] Others, who are not picture thinkers, may not necessarily perceive the 'shape-shifting' as their world changes. This esemplastic nature has been demonstrated by an experiment that showed that ambiguous images have multiple interpretations on the perceptual level.
The confusing ambiguity of perception is exploited in human technologies such as camouflage, and in biological mimicry. For example, the wings of European peacock butterflies bear eyespots that birds respond to as though they were the eyes of a dangerous predator.
There is also evidence that the brain in some ways operates on a slight "delay" in order to allow nerve impulses from distant parts of the body to be integrated into simultaneous signals.[57]
Perception is one of the oldest fields in psychology. The oldest quantitative laws in psychology are Weber's law, which states that the smallest noticeable difference in stimulus intensity is proportional to the intensity of the reference, and Fechner's law, which quantifies the relationship between the intensity of the physical stimulus and its perceptual counterpart (e.g., testing how much darker a computer screen can get before the viewer actually notices). The study of perception gave rise to the Gestalt School of Psychology, with its emphasis on a holistic approach.
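Both laws can be sketched numerically. This is a minimal illustration, not a psychophysical model; the Weber fraction `k = 0.02` and the scaling constants below are illustrative values chosen for the example, not measured constants:

```python
import math

def weber_jnd(reference, k=0.02):
    """Weber's law: the just-noticeable difference is proportional
    to the reference intensity, delta_I = k * I."""
    return k * reference

def fechner_sensation(intensity, threshold=1.0, k=1.0):
    """Fechner's law: perceived magnitude grows logarithmically,
    S = k * ln(I / I0), where I0 is the detection threshold."""
    return k * math.log(intensity / threshold)

# The JND scales with the reference: a 2% change at any level.
assert weber_jnd(100.0) == 2.0
assert weber_jnd(1000.0) == 20.0

# Equal intensity *ratios* yield equal sensation *increments*.
step_low = fechner_sensation(20.0) - fechner_sensation(10.0)
step_high = fechner_sensation(200.0) - fechner_sensation(100.0)
assert abs(step_low - step_high) < 1e-12
```

The second pair of assertions captures why Fechner's law is often summarized as "sensation grows with the logarithm of the stimulus": doubling the intensity adds the same perceptual increment regardless of the starting level.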
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception. Commonly recognized sensory systems are those for vision, hearing, somatic sensation (touch), taste, and olfaction (smell), as listed above. It has been suggested that the immune system is an overlooked sensory modality.[58] In short, senses are transducers from the physical world to the realm of the mind.
The receptive field is the specific part of the world to which a receptor organ and receptor cells respond. For instance, the part of the world an eye can see is its receptive field; the light that each rod or cone can see is its receptive field.[59] Receptive fields have so far been identified for the visual system, auditory system, and somatosensory system. Research attention is currently focused not only on external perception processes, but also on "interoception", considered as the process of receiving, accessing, and appraising internal bodily signals. Maintaining desired physiological states is critical for an organism's well-being and survival. Interoception is an iterative process, requiring the interplay between perception of body states and awareness of these states to generate proper self-regulation. Afferent sensory signals continuously interact with higher-order cognitive representations of goals, history, and environment, shaping emotional experience and motivating regulatory behavior.[60]
Perceptual constancy is the ability of perceptual systems to recognize the same object from widely varying sensory inputs.[5]: 118–120 [61] For example, individual people can be recognized from views, such as frontal and profile, which form very different shapes on the retina. A coin looked at face-on makes a circular image on the retina, but when held at an angle it makes an elliptical image.[21] In normal perception these are recognized as a single three-dimensional object. Without this correction process, an animal approaching from the distance would appear to gain in size.[62][63] One kind of perceptual constancy is color constancy: for example, a white piece of paper can be recognized as such under different colors and intensities of light.[63] Another example is roughness constancy: when a hand is drawn quickly across a surface, the touch nerves are stimulated more intensely. The brain compensates for this, so the speed of contact does not affect the perceived roughness.[63] Other constancies include melody, odor, brightness, and words.[64] These constancies are not always total, but the variation in the percept is much less than the variation in the physical stimulus.[63] The perceptual systems of the brain achieve perceptual constancy in a variety of ways, each specialized for the kind of information being processed,[65] with phonemic restoration as a notable example from hearing.
The principles of grouping (or Gestalt laws of grouping) are a set of principles in psychology, first proposed by Gestalt psychologists, to explain how humans naturally perceive objects as organized patterns. Gestalt psychologists argued that these principles exist because the mind has an innate disposition to perceive patterns in the stimulus based on certain rules. These principles are organized into six categories: proximity, similarity, closure, good continuation, common fate, and good form.
Later research has identified additional grouping principles.[70]
A common finding across many different kinds of perception is that the perceived qualities of an object can be affected by the qualities of context. If one object is extreme on some dimension, then neighboring objects are perceived as further away from that extreme.
"Simultaneous contrast effect" is the term used when stimuli are presented at the same time, whereas successive contrast applies when stimuli are presented one after another.[71]
The contrast effect was noted by the 17th-century philosopher John Locke, who observed that lukewarm water can feel hot or cold depending on whether the hand touching it was previously in hot or cold water.[72] In the early 20th century, Wilhelm Wundt identified contrast as a fundamental principle of perception, and since then the effect has been confirmed in many different areas.[72] These effects shape not only visual qualities like color and brightness, but other kinds of perception, including how heavy an object feels.[73] One experiment found that thinking of the name "Hitler" led to subjects rating a person as more hostile.[74] Whether a piece of music is perceived as good or bad can depend on whether the music heard before it was pleasant or unpleasant.[75] For the effect to work, the objects being compared need to be similar to each other: a television reporter can seem smaller when interviewing a tall basketball player, but not when standing next to a tall building.[73] In the brain, brightness contrast exerts effects on both neuronal firing rates and neuronal synchrony.[76]
Cognitive theories of perception assume there is a poverty of the stimulus. This is the claim that sensations, by themselves, are unable to provide a unique description of the world.[77] Sensations require 'enriching', which is the role of the mental model.
The perceptual ecology approach was introduced by professor James J. Gibson, who rejected the assumption of a poverty of stimulus and the idea that perception is based upon sensations. Instead, Gibson investigated what information is actually presented to the perceptual systems. His theory "assumes the existence of stable, unbounded, and permanent stimulus-information in the ambient optic array. And it supposes that the visual system can explore and detect this information. The theory is information-based, not sensation-based."[78] He and the psychologists who work within this paradigm detailed how the world could be specified to a mobile, exploring organism via the lawful projection of information about the world into energy arrays.[79] "Specification" would be a 1:1 mapping of some aspect of the world into a perceptual array. Given such a mapping, no enrichment is required and perception is direct.[80]
From Gibson's early work derived an ecological understanding of perception known as perception-in-action, which argues that perception is a requisite property of animate action. It posits that, without perception, action would be unguided, and without action, perception would serve no purpose. Animate actions require both perception and motion, which can be described as "two sides of the same coin, the coin is action." Gibson works from the assumption that singular entities, which he calls invariants, already exist in the real world and that all the perception process does is home in upon them.
The constructivist view, held by such philosophers as Ernst von Glasersfeld, regards the continual adjustment of perception and action to the external input as precisely what constitutes the "entity," which is therefore far from being invariant.[81] Glasersfeld considers an invariant as a target to be homed in upon, and a pragmatic necessity to allow an initial measure of understanding to be established prior to the updating that a statement aims to achieve. The invariant does not, and need not, represent an actuality. Glasersfeld describes it as extremely unlikely that what is desired or feared by an organism will never suffer change as time goes on. This social constructionist theory thus allows for a needful evolutionary adjustment.[82]
A mathematical theory of perception-in-action has been devised and investigated in many forms of controlled movement, and has been described in many different species of organism using the General Tau Theory. According to this theory, "tau information", or time-to-goal information, is the fundamental percept in perception.
Many philosophers, such as Jerry Fodor, write that the purpose of perception is knowledge. However, evolutionary psychologists hold that the primary purpose of perception is to guide action.[83] They give the example of depth perception, which seems to have evolved not to aid in knowing the distances to other objects but rather to aid movement.[83] Evolutionary psychologists argue that animals ranging from fiddler crabs to humans use eyesight for collision avoidance, suggesting that vision is basically for directing action, not providing knowledge.[83] Neuropsychologists have shown that perception systems evolved along with the specifics of animals' activities. This explains why bats and worms perceive different frequency ranges of auditory and visual stimuli than, for example, humans do.
Building and maintaining sense organs is metabolically expensive. More than half the brain is devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources. Thus, such organs evolve only when they provide exceptional benefits to an organism's fitness.[83]
Scientists who study perception and sensation have long understood the human senses as adaptations.[83] Depth perception consists of processing over half a dozen visual cues, each of which is based on a regularity of the physical world.[83] Vision evolved to respond to the narrow range of electromagnetic energy that is plentiful and that does not pass through objects.[83] Sound waves provide useful information about the sources of and distances to objects, with larger animals making and hearing lower-frequency sounds and smaller animals making and hearing higher-frequency sounds.[83] Taste and smell respond to chemicals in the environment that were significant for fitness in the environment of evolutionary adaptedness.[83] The sense of touch is actually many senses, including pressure, heat, cold, tickle, and pain.[83] Pain, while unpleasant, is adaptive.[83] An important adaptation for senses is range shifting, by which the organism becomes temporarily more or less sensitive to sensation.[83] For example, one's eyes automatically adjust to dim or bright ambient light.[83] Sensory abilities of different organisms often co-evolve, as is the case with the hearing of echolocating bats and that of the moths that have evolved to respond to the sounds that the bats make.[83]
Evolutionary psychologists claim that perception demonstrates the principle of modularity, with specialized mechanisms handling particular perception tasks.[83] For example, people with damage to a particular part of the brain are not able to recognize faces (prosopagnosia).[83] Evolutionary psychology suggests that this indicates a so-called face-reading module.[83]
The theory of closed-loop perception proposes a dynamic motor-sensory closed-loop process in which information flows through the environment and the brain in continuous loops.[84][85][86][87] Closed-loop perception appears consistent with anatomy and with the fact that perception is typically an incremental process. Repeated encounters with an object, whether conscious or not, enable an animal to refine its impressions of that object. This can be achieved more easily with a circular closed-loop system than with a linear open-loop one. Closed-loop perception can explain many of the phenomena that open-loop perception struggles to account for, largely because closed-loop perception considers motion to be an integral part of perception, and not an interfering component that must be corrected for. Furthermore, an environment perceived via sensor motion, and not despite sensor motion, need not be further stabilized by internal processes.[87]
Anne Treisman's feature integration theory (FIT) attempts to explain how characteristics of a stimulus such as physical location in space, motion, color, and shape are merged to form one percept despite each of these characteristics activating separate areas of the cortex. FIT explains this through a two-part system of perception involving the preattentive and focused attention stages.[88][89][90][91][92]
The preattentive stage of perception is largely unconscious, and analyzes an object by breaking it down into its basic features, such as specific color, geometric shape, motion, depth, and individual lines, among others.[88] Studies have shown that, when small groups of objects with different features (e.g., red triangle, blue circle) are briefly flashed in front of human participants, many individuals later report seeing shapes made up of the combined features of two different stimuli, referred to as illusory conjunctions.[88][91]
The unconnected features described in the preattentive stage are combined into the objects one normally sees during the focused attention stage.[88] The focused attention stage is based heavily on the idea of attention in perception and 'binds' the features together onto specific objects at specific spatial locations (see the binding problem).[88][92]
A fundamentally different approach to understanding the perception of objects relies upon the essential role of shared intentionality.[93] Cognitive psychologist Michael Tomasello hypothesized that social bonds between children and caregivers gradually increase through the essential motive force of shared intentionality beginning from birth.[94] The notion of shared intentionality, introduced by Tomasello, was developed by later researchers, who tended to explain this collaborative interaction from different perspectives, e.g., psychophysiology[95][96][97] and neurobiology.[98] The shared intentionality approach considers perception to occur at an earlier stage of an organism's development than other theories do, even before the emergence of intentionality. Because many theories build their knowledge about perception on its main features of the organization, identification, and interpretation of sensory information to represent a holistic picture of the environment, intentionality is the central issue in perception development. Currently, only one hypothesis attempts to explain shared intentionality in all its integral complexity, from the level of interpersonal dynamics to interaction at the neuronal level. Introduced by Latvian professor Igor Val Danilov, this hypothesis of the neurobiological processes occurring during shared intentionality[99] holds that, at the beginning of cognition, very young organisms cannot distinguish relevant sensory stimuli independently. Because the environment is a cacophony of stimuli (electromagnetic waves, chemical interactions, and pressure fluctuations), their sensation is too limited by noise to solve the cue problem: the relevant stimulus cannot overcome the noise magnitude if it passes through the senses alone. Therefore, intentionality is a difficult problem for them, since it requires a representation of the environment already categorized into objects (see also the binding problem).
The perception of objects is also problematic, since it cannot appear without intentionality. From the perspective of this hypothesis, shared intentionality is collaborative interaction in which participants share the essential sensory stimulus of the actual cognitive problem. This social bond enables ecological training of the young, immature organism, starting at the reflex stage of development, in processing the organization, identification, and interpretation of sensory information in developing perception.[100] On this account, perception emerges due to shared intentionality in the embryonic stage of development, i.e., even before birth.[101]
With experience, organisms can learn to make finer perceptual distinctions, and learn new kinds of categorization. Wine-tasting, the reading of X-ray images, and music appreciation are applications of this process in the human sphere. Research has focused on the relation of this to other kinds of learning, and whether it takes place in peripheral sensory systems or in the brain's processing of sense information.[102] Empirical research shows that specific practices (such as yoga, mindfulness, Tai Chi, meditation, Daoshi, and other mind-body disciplines) can modify human perceptual modality. Specifically, these practices enable perception skills to switch from the external (exteroceptive field) towards a higher ability to focus on internal signals (proprioception). Also, when asked to provide verticality judgments, highly self-transcendent yoga practitioners were significantly less influenced by a misleading visual context. Increasing self-transcendence may enable yoga practitioners to optimize verticality judgment tasks by relying more on internal (vestibular and proprioceptive) signals coming from their own body, rather than on exteroceptive, visual cues.[103]
Past actions and events that transpire right before an encounter or any form of stimulation have a strong degree of influence on how sensory stimuli are processed and perceived. On a basic level, the information our senses receive is often ambiguous and incomplete; however, it is grouped together so that we are able to understand the physical world around us. It is these various forms of stimulation, combined with our previous knowledge and experience, that allow us to create our overall perception. For example, when engaging in conversation, we attempt to understand the speaker's message and words not only by paying attention to what we hear through our ears but also by drawing on the shapes we have previously seen their mouth make. As another example, if a similar topic came up in another conversation, we would use our previous knowledge to guess the direction the conversation is headed in.[104]
A perceptual set (also called perceptual expectancy or simply set) is a predisposition to perceive things in a certain way.[105] It is an example of how perception can be shaped by "top-down" processes such as drives and expectations.[106] Perceptual sets occur in all the different senses.[62] They can be long-term, such as a special sensitivity to hearing one's own name in a crowded room, or short-term, as in the ease with which hungry people notice the smell of food.[107] A simple demonstration of the effect involved very brief presentations of non-words such as "sael". Subjects who were told to expect words about animals read it as "seal", but others who were expecting boat-related words read it as "sail".[107]
Sets can be created by motivation, and so can result in people interpreting ambiguous figures so that they see what they want to see.[106] For instance, how someone perceives what unfolds during a sports game can be biased if they strongly support one of the teams.[108] In one experiment, students were allocated to pleasant or unpleasant tasks by a computer. They were told that either a number or a letter would flash on the screen to say whether they were going to taste an orange juice drink or an unpleasant-tasting health drink. In fact, an ambiguous figure was flashed on screen, which could be read either as the letter B or the number 13. When the letters were associated with the pleasant task, subjects were more likely to perceive the letter B, and when letters were associated with the unpleasant task, they tended to perceive the number 13.[105]
Perceptual set has been demonstrated in many social contexts. When someone has a reputation for being funny, an audience is more likely to find them amusing.[107] Individuals' perceptual sets reflect their own personality traits. For example, people with an aggressive personality are quicker to correctly identify aggressive words or situations.[107] In general, perceptual speed as a mental ability is positively correlated with personality traits such as conscientiousness, emotional stability, and agreeableness, suggesting its evolutionary role in preserving homeostasis.[109]
One classic psychological experiment showed slower reaction times and less accurate answers when a deck of playing cards reversed the color of the suit symbol for some cards (e.g., red spades and black hearts).[110]
Philosopher Andy Clark explains that perception, although it occurs quickly, is not simply a bottom-up process (where minute details are put together to form larger wholes). Instead, our brains use what he calls predictive coding. It starts with very broad constraints and expectations for the state of the world, and as expectations are met, it makes more detailed predictions (errors lead to new predictions, or learning processes). Clark says this research has various implications; not only can there be no completely "unbiased, unfiltered" perception, but there is also a great deal of feedback between perception and expectation (perceptual experiences often shape our beliefs, but those perceptions were based on existing beliefs).[111] Indeed, predictive coding provides an account in which this type of feedback assists in stabilizing our inference-making process about the physical world, as in the examples of perceptual constancy.
Embodied cognition challenges the idea of perception as internal representations resulting from a passive reception of (incomplete) sensory inputs coming from the outside world. According to O'Regan (1992), the major issue with this perspective is that it leaves the subjective character of perception unexplained.[112] Thus, perception is understood as an active process conducted by perceiving and engaged agents (perceivers). Furthermore, perception is influenced by agents' motives and expectations, their bodily states, and the interaction between the agent's body and the environment around it.[113]
Perception is an important part of the theories of many philosophers; it has been famously addressed by René Descartes, George Berkeley, and Immanuel Kant, to name a few. In his Meditations, Descartes begins by doubting all of his perceptions, proves his existence with the famous phrase "I think, therefore I am", and then works to the conclusion that perceptions are God-given.[114] Berkeley took the stance that all things we see have a reality to them, and that our perceptions are sufficient to know and understand those things, because our perceptions are capable of responding to a true reality.[115] Kant meets the rationalists and the empiricists almost halfway: his theory posits the noumenon, the actual object that cannot be understood, and the phenomenon, the human understanding of that noumenon as interpreted through the lens of the mind.[116]
https://en.wikipedia.org/wiki/Perception
The sone (/ˈsoʊn/) is a unit of loudness, the subjective perception of sound pressure. The study of perceived loudness is included in the topic of psychoacoustics and employs methods of psychophysics. Doubling the perceived loudness doubles the sone value. Proposed by Stanley Smith Stevens in 1936, it is not an SI unit.
According to Stevens' definition, a loudness of 1 sone is equivalent to 40 phons (a 1 kHz tone at 40 dB SPL).[1] The phon scale aligns with dB, not with loudness, so the sone and phon scales are not proportional. Rather, the loudness in sones is, at least very nearly, a power-law function of the signal intensity, with an exponent of 0.3.[2][3] With this exponent, each 10-phon increase (or 10 dB at 1 kHz) produces almost exactly a doubling of the loudness in sones.[4]
At frequencies other than 1 kHz, the loudness level in phons is calibrated according to the frequency response of human hearing, via a set of equal-loudness contours, and then the loudness level in phons is mapped to loudness in sones via the same power law.
Loudness N in sones (for L_N > 40 phon):[5]
N = 2^((L_N − 40) / 10)
or loudness level L_N in phons (for N > 1 sone):
L_N = 40 + 10 · log2(N)
Corrections are needed at lower levels, near the threshold of hearing.
These formulas are for single-frequency sine waves or narrowband signals. For multi-component or broadband signals, a more elaborate loudness model is required, accounting for critical bands.
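The phon/sone mapping above can be checked numerically. This is a minimal sketch of the conversion for the region above 40 phon only; it is not a full loudness model and ignores the low-level corrections and critical-band effects just mentioned:

```python
import math

def phons_to_sones(L_N):
    """Loudness in sones from loudness level in phons (valid for L_N > 40)."""
    return 2 ** ((L_N - 40) / 10)

def sones_to_phons(N):
    """Loudness level in phons from loudness in sones (valid for N > 1)."""
    return 40 + 10 * math.log2(N)

assert phons_to_sones(40) == 1.0   # definition: 40 phon = 1 sone
assert phons_to_sones(50) == 2.0   # each +10 phon doubles the loudness
assert sones_to_phons(4) == 60.0   # the inverse mapping

# Consistency with the power law: a 10 dB step is a 10x intensity ratio,
# and 10 ** 0.3 ≈ 1.995, i.e., almost exactly a doubling in sones.
assert abs(10 ** 0.3 - 2) < 0.01
```

The last assertion shows why the exponent 0.3 and the "doubling per 10 phon" rule are nearly, but not exactly, the same statement.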
To be fully precise, a measurement in sones must be specified in terms of the optional suffix G, which means that the loudness value is calculated from frequency groups, and by one of the two suffixes D (for direct field or free field) or R (for room field or diffuse field).
https://en.wikipedia.org/wiki/Sone
In mathematics, iterated function systems (IFSs) are a method of constructing fractals; the resulting fractals are often self-similar. IFS fractals are more related to set theory than fractal geometry.[1] They were introduced in 1981.
IFS fractals, as they are normally called, can be of any number of dimensions, but are commonly computed and drawn in 2D. The fractal is made up of the union of several copies of itself, each copy being transformed by a function (hence "function system"). The canonical example is the Sierpiński triangle. The functions are normally contractive, which means they bring points closer together and make shapes smaller. Hence, the shape of an IFS fractal is made up of several possibly-overlapping smaller copies of itself, each of which is also made up of copies of itself, ad infinitum. This is the source of its self-similar fractal nature.
Formally, an iterated function system is a finite set of contraction mappings on a complete metric space.[2] Symbolically,
is an iterated function system if eachfi{\displaystyle f_{i}}is a contraction on the complete metric spaceX{\displaystyle X}.
Hutchinson showed that, for the metric space ℝ^n, or more generally, for a complete metric space X, such a system of functions has a unique nonempty compact (closed and bounded) fixed set S.[3] One way of constructing a fixed set is to start with an initial nonempty closed and bounded set S_0 and iterate the actions of the f_i, taking S_{n+1} to be the union of the images of S_n under the f_i, then taking S to be the closure of the limit lim_{n→∞} S_n. Symbolically, the unique fixed (nonempty compact) set S ⊆ X has the property
S = ⋃_{i=1}^{N} f_i(S).
The set S is thus the fixed set of the Hutchinson operator F : 2^X → 2^X defined for A ⊆ X via
F(A) = ⋃_{i=1}^{N} f_i(A).
The existence and uniqueness of S is a consequence of the contraction mapping principle, as is the fact that
lim_{n→∞} F^{∘n}(A) = S
for any nonempty compact set A in X. (For contractive IFS this convergence takes place even for any nonempty closed bounded set A.) Random elements arbitrarily close to S may be obtained by the "chaos game," described below.
Recently, it was shown that IFSs of non-contractive type (i.e., composed of maps that are not contractions with respect to any topologically equivalent metric in X) can yield attractors.
These arise naturally in projective spaces, though classical irrational rotation on the circle can be adapted too.[4]
The collection of functions f_i generates a monoid under composition. If there are only two such functions, the monoid can be visualized as a binary tree, where, at each node of the tree, one may compose with the one or the other function (i.e., take the left or the right branch). In general, if there are k functions, then one may visualize the monoid as a full k-ary tree, also known as a Cayley tree.
Sometimes each function f_i is required to be a linear, or more generally an affine, transformation, and hence represented by a matrix. However, IFSs may also be built from non-linear functions, including projective transformations and Möbius transformations. The fractal flame is an example of an IFS with nonlinear functions.
The most common algorithm to compute IFS fractals is called the "chaos game". It consists of picking a random point in the plane, then iteratively applying one of the functions chosen at random from the function system to transform the point to get a next point. An alternative algorithm is to generate each possible sequence of functions up to a given maximum length, and then to plot the results of applying each of these sequences of functions to an initial point or shape.
Each of these algorithms provides a global construction which generates points distributed across the whole fractal. If a small area of the fractal is being drawn, many of these points will fall outside of the screen boundaries. This makes zooming into an IFS construction drawn in this manner impractical.
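The chaos game described above can be sketched in a few lines for the Sierpiński triangle, whose three contractions each move the current point halfway toward one vertex. The vertex coordinates and the length of the discarded transient are illustrative choices:

```python
import random

# Attractor vertices: each of the three IFS maps is
# f_i(p) = (p + v_i) / 2, a contraction with ratio 1/2.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n_points: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()          # arbitrary starting point
    points = []
    for i in range(n_points + 20):
        vx, vy = rng.choice(VERTICES)          # pick one map at random
        x, y = (x + vx) / 2.0, (y + vy) / 2.0  # apply the contraction
        if i >= 20:                            # drop the initial transient
            points.append((x, y))
    return points

points = chaos_game(5000)
```

Plotting the returned points (e.g. as a scatter plot) reveals the Sierpiński triangle; the first few iterates are discarded because the starting point is generally not on the attractor.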
Although the theory of IFS requires each function to be contractive, in practice software that implements IFS only requires that the whole system be contractive on average.[5]
PIFS (partitioned iterated function systems), also called local iterated function systems,[6]give surprisingly good image compression, even for photographs that don't seem to have the kinds of self-similar structure shown by simple IFS fractals.[7]
Very fast algorithms exist to generate an image from a set of IFS or PIFS parameters. It is faster and requires much less storage space to store a description of how it was created, transmit that description to a destination device, and regenerate that image anew on the destination device, than to store and transmit the color of each pixel in the image.[6]
Theinverse problemis more difficult: given some original arbitrary digital image such as a digital photograph, try to find a set of IFS parameters which, when evaluated by iteration, produces another image visually similar to the original.
In 1989, Arnaud Jacquin presented a solution to a restricted form of the inverse problem using only PIFS; the general form of the inverse problem remains unsolved.[8][9][6]
As of 1995, allfractal compressionsoftware is based on Jacquin's approach.[9]
The diagram shows the construction of an IFS from two affine functions. The functions are represented by their effect on the bi-unit square (each function transforms the outlined square into the shaded square). The combination of the two functions forms the Hutchinson operator. Three iterations of the operator are shown, and the final image is of the fixed point, the final fractal.
Early examples of fractals which may be generated by an IFS include the Cantor set, first described in 1884, and de Rham curves, a type of self-similar curve described by Georges de Rham in 1957.
IFSs were conceived in their present form by John E. Hutchinson in 1981[3] and popularized by Michael Barnsley's book Fractals Everywhere.
IFSs provide models for certain plants, leaves, and ferns, by virtue of the self-similarity which often occurs in branching structures in nature.
|
https://en.wikipedia.org/wiki/Iterated_function_system
|
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space.
Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology, physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form
∂_t q = D ∇²q + R(q),
where q(x, t) represents the unknown vector function, D is a diagonal matrix of diffusion coefficients, and R accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structures like dissipative solitons. Such patterns have been dubbed "Turing patterns".[1] Each function for which a reaction–diffusion differential equation holds represents in fact a concentration variable.
The simplest reaction–diffusion equation, in one spatial dimension in plane geometry,
∂_t u = D ∂_x² u + R(u),
is also referred to as the Kolmogorov–Petrovsky–Piskunov equation.[2] If the reaction term vanishes, then the equation represents a pure diffusion process; the corresponding equation is Fick's second law. The choice R(u) = u(1 − u) yields Fisher's equation, which was originally used to describe the spreading of biological populations;[3] the Newell–Whitehead–Segel equation with R(u) = u(1 − u²) describes Rayleigh–Bénard convection;[4][5] the more general Zeldovich–Frank-Kamenetskii equation with R(u) = u(1 − u)e^(−β(1 − u)) and 0 < β < ∞ (Zeldovich number) arises in combustion theory;[6] and its particular degenerate case with R(u) = u² − u³ is sometimes referred to as the Zeldovich equation as well.[7]
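As an illustration, Fisher's equation with R(u) = u(1 − u) can be integrated by a simple explicit finite-difference scheme. The grid size, time step, and step initial condition below are illustrative choices; the explicit scheme requires dt ≤ dx²/(2D) for stability:

```python
def fisher_step(u, dx, dt, D=1.0):
    """One explicit Euler step of u_t = D u_xx + u(1 - u)
    with zero-flux (Neumann) boundaries."""
    n = len(u)
    new = list(u)
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        lap = (left - 2.0 * u[i] + right) / dx**2
        new[i] = u[i] + dt * (D * lap + u[i] * (1.0 - u[i]))
    return new

# A step initial condition develops into a travelling front
# that invades the unstable state u = 0.
nx, dx, dt = 200, 0.5, 0.05
u = [1.0 if i < nx // 4 else 0.0 for i in range(nx)]
for _ in range(200):
    u = fisher_step(u, dx, dt)
```

After a transient, the profile approaches a front of roughly constant shape moving at the minimal speed 2√(RD) discussed in the travelling-wave analysis below.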
The dynamics of one-component systems is subject to certain restrictions, as the evolution equation can also be written in the variational form
∂_t u = −δ𝔏/δu
and therefore describes a permanent decrease of the "free energy" 𝔏 given by the functional
𝔏 = ∫_{−∞}^{∞} [ (D/2)(∂_x u)² − V(u) ] dx
with a potential V(u) such that R(u) = dV(u)/du.
In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form u(x, t) = û(ξ) with ξ = x − ct, where c is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonic stationary solutions (e.g. localized domains composed of a front–antifront pair) are unstable. For c = 0, there is a simple proof for this statement:[8] if u_0(x) is a stationary solution and u = u_0(x) + ũ(x, t) is an infinitesimally perturbed solution, linear stability analysis yields the equation
∂_t ũ = D ∂_x² ũ + R′(u_0(x)) ũ.
With the ansatz ũ = ψ(x) exp(−λt) we arrive at the eigenvalue problem
−D ∂_x² ψ − R′(u_0(x)) ψ = λψ
of Schrödinger type, where negative eigenvalues result in the instability of the solution. Due to translational invariance ψ = ∂_x u_0(x) is a neutral eigenfunction with the eigenvalue λ = 0, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros. The eigenfunction ψ = ∂_x u_0(x) should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue λ = 0 cannot be the lowest one, thereby implying instability.
To determine the velocity c of a moving front, one may go to a moving coordinate system and look at stationary solutions:
D ∂_ξ² û(ξ) + c ∂_ξ û(ξ) + R(û(ξ)) = 0.
This equation has a nice mechanical analogue as the motion of a mass D with position û in the course of the "time" ξ under the force R with the damping coefficient c, which allows for a rather illustrative access to the construction of different types of solutions and the determination of c.
When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability.[9]
Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion.[10]
A linear stability analysis, however, shows that when linearizing the general two-component system
∂_t q = D ∂_x² q + R(q), with q = (u, v)ᵀ and D = diag(d_u, d_v),
a plane wave perturbation
q̃_k(x, t) = q̃_0 e^(ikx + λt)
of the stationary homogeneous solution will satisfy
λ q̃_k = −k² D q̃_k + R′ q̃_k.
Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian R′ of the reaction function. In particular, if a finite wave vector k is supposed to be the most unstable one, the Jacobian must have the signs
( + − )
( + − )
This class of systems is named activator–inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation
∂_t u = d_u² ∇²u + f(u) − σv,
τ ∂_t v = d_v² ∇²v + u − v,
with f(u) = λu − u³ − κ, which describes how an action potential travels through a nerve.[11][12] Here, d_u, d_v, τ, σ and λ are positive constants.
When an activator–inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number k = 0 or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns.
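The onset of a Turing instability can be checked numerically: the growth rate of a plane-wave perturbation with wavenumber k is the largest eigenvalue of R′ − k²D, where R′ is the reaction Jacobian and D the diffusion matrix. The numbers below are hypothetical illustrative values with the activator–inhibitor sign structure and a fast-diffusing inhibitor, chosen so that the well-stirred system is stable but diffusion destabilizes a band of finite wavenumbers:

```python
import math

# Hypothetical Jacobian with activator-inhibitor signs (+ -, + -)
# and a much faster-diffusing inhibitor.
J = [[1.0, -2.0],
     [3.0, -4.0]]
D = (1.0, 20.0)

def growth_rate(k: float) -> float:
    """Largest real part among the eigenvalues of J - k^2 * diag(D)."""
    a = J[0][0] - k * k * D[0]
    d = J[1][1] - k * k * D[1]
    b, c = J[0][1], J[1][0]
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        return 0.5 * (tr + math.sqrt(disc))
    return 0.5 * tr  # complex pair: growth set by the real part
```

Here growth_rate(0.0) is negative (the homogeneous mode decays) while growth_rate(0.63) is positive and large k decays again, so only a finite-wavelength band grows, which is precisely Turing's mechanism.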
For the FitzHugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcations are given by
If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle.[13]
For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. for the Belousov–Zhabotinsky reaction,[14] for blood clotting,[15] fission waves[16] or planar gas discharge systems.[17]
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback).[18] An introduction and systematic overview of the possible phenomena in dependence on the properties of the underlying system is given in [19].
In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation.[20] The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology[21][22] and may even be related to animal coats and skin pigmentation.[23][24] Other applications of reaction–diffusion equations include ecological invasions,[25] spread of epidemics,[26] tumour growth,[27][28][29] dynamics of fission waves,[30] wound healing[31] and visual hallucinations.[32] Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment.[8][9][33][34][35][20]
Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors[36] or filled capillary tubes[37] may be used. Second, temperature pulses on catalytic surfaces have been investigated.[38][39] Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems.[11][40]
Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas[41] or semiconductors[42] can be described in a reaction–diffusion approach. For these systems, various experiments on pattern formation have been carried out.
A reaction–diffusion system can be solved by using methods of numerical mathematics. There exist several numerical treatments in the research literature.[43][20][44] Numerical solution methods for complex geometries have also been proposed.[45][46] Reaction–diffusion systems are described to the highest degree of detail with particle-based simulation tools like SRSim or ReaDDy,[47] which employ, among others, reversible interacting-particle reaction dynamics.[48]
|
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_system
|
The Algorithmic Beauty of Plants is a book by Przemyslaw Prusinkiewicz and Aristid Lindenmayer. It is notable as the first comprehensive volume on the computer simulation of certain patterns in nature found in plant development (L-systems).
The book is no longer in print but is available free online.[1]
The book has eight chapters:
George Klir, reviewing the book in the International Journal of General Systems, writes that "This book, full of beautiful pictures of plants of great variety, is a testimony of the genius of Aristid Lindenmayer, who invented in 1968 systems that are now named by him – Lindenmayer systems or L-systems. It is also a testimony of the power of current computer technology. The pictures in the book are not photographs of real plants. They are all generated on the computer by relatively simple algorithms based upon the idea of L-systems."[2] Klir goes on to explain the mathematics of L-systems, involving replacement of strings of symbols with further strings according to production rules, adding that "high computer power is essential since the generation of realistic forms requires tremendous numbers of replacements and the geometric interpretation of the generated strings requires a highly sophisticated computer graphics".[2]
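The string-rewriting process Klir describes can be sketched in a few lines; the rules below are Lindenmayer's original algae model (A → AB, B → A):

```python
def lsystem(axiom: str, rules: dict, steps: int) -> str:
    """Rewrite every symbol in parallel, once per step."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A.
algae = lsystem("A", {"A": "AB", "B": "A"}, 5)
```

The string lengths follow the Fibonacci numbers (1, 2, 3, 5, 8, 13, ...); the book's plant images come from interpreting the symbols of such strings as turtle-graphics drawing commands.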
Adrian Bell, reviewing the book in New Phytologist, writes that it demands respect for three reasons: it is the first book to explain the algorithms behind virtual plants, it "unashamedly" connects art and science, and it is unusual in being a real book on a computer-based subject. Each chapter, writes Bell, is an introductory manual to the simulation of an aspect of plant form, resulting "eventually" in a 3-D image of a plant architecture.[3]
Peter Antonelli, reviewing the book in SIAM Review, writes that it presents a "beautifully designed 'coffee-table-book'" summary of Lindenmayer's school of thought, explaining how Algorithmic Language Theory, like Noam Chomsky's theory of grammar, can describe how repeated structural units can arrange themselves. Antonelli suggests that Goethe would have disapproved of having the barrier of mathematics between the observer and the observed.[4]
Karl Niklas, reviewing the book in The Quarterly Review of Biology, writes that the book, intended for many different audiences, is "unequally successful" in reaching them. Niklas suggests that those who wonder how graphic artists create "the magnificent cyber-floras that sway and grow so realistically in the movies", and those who admire plant symmetry, will enjoy the book. He is more skeptical about its claim to serious science, as the book "fails to educate its readers" about the challenge of understanding plant form in terms of developmental biology. Therefore, he believes the book falls short, the dazzling beauty of fractals not proving their relevance to biology.[5]
|
https://en.wikipedia.org/wiki/The_Algorithmic_Beauty_of_Plants
|
The following is a partial list of linguistic example sentences illustrating various linguistic phenomena.
Different types of ambiguity which are possible in language.
Demonstrations of words which have multiple meanings dependent on context.
Demonstrations of ambiguity between alternative syntactic structures underlying a sentence.
Demonstrations of how incremental and (at least partially) local syntactic parsing leads to infelicitous constructions and interpretations.
Punctuation can be used to introduce ambiguity or misunderstandings where none needed to exist. One well-known example,[18] used for comedic effect, is from A Midsummer Night's Dream by William Shakespeare (ignoring the punctuation provides the alternate reading).
Some prescriptive grammar prohibits "preposition stranding": ending sentences with prepositions.[19]
Sentences with unexpected endings.
Comparative illusion:
Demonstrations of sentences which are unlikely to have ever been said, although the combinatorial complexity of the linguistic system makes them possible.
Demonstrations of sentences where the semantic interpretation is bound to context or knowledge of the world.
Conditionals where the prejacent ("if" clause) is not strictly required for the consequent to be true.
A famous example of lexical ambiguity in Persian is the following sentence:[43]
بخشش لازم نیست اعدامش کنید
It can be read either as:
which means "Forgiveness! no need to execute him/her"
Or as:
which means "Forgiveness not needed! execute her/him"
|
https://en.wikipedia.org/wiki/List_of_linguistic_example_sentences
|
"Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" is a grammatically correct sentence in English that is often presented as an example of how homonyms and homophones can be used to create complicated linguistic constructs through lexical ambiguity. It has been discussed in literature in various forms since 1967, when it appeared in Dmitri Borgmann's Beyond Language: Adventures in Word and Thought.
The sentence employs three distinct meanings of the wordbuffalo:
A semantically equivalent form preserving the original word order is: "Buffalonian bison whom other Buffalonian bison bully also bully Buffalonian bison."
The sentence is unpunctuated and uses three different readings of the word "buffalo". In order of their first use, these are:
The sentence is syntactically ambiguous; one possible parse (marking each "buffalo" with its part of speech as shown above) is as follows:
Buffalo(a) buffalo(n) Buffalo(a) buffalo(n) buffalo(v) buffalo(v) Buffalo(a) buffalo(n).
When grouped syntactically, this is equivalent to: [(Buffalonian bison) (Buffalonian bison intimidate)] intimidate (Buffalonian bison).
Because the sentence has a restrictive clause, there can be no commas. The relative pronouns "which" or "that" could appear between the second and third words of the sentence, as in Buffalo buffalo that Buffalo buffalo buffalo buffalo Buffalo buffalo; when this pronoun is omitted, the relative clause becomes a reduced relative clause.
An expanded form of the sentence that preserves the original word order is:
"Buffalo bison that other Buffalo bison bully also bully Buffalo bison."
Thus, the parsed sentence claims that bison who are intimidated or bullied by bison do themselves intimidate or bully bison (at least in the city of Buffalo – implicitly, Buffalo, New York):
Thomas Tymoczko has pointed out that there is nothing special about eight "buffalos";[3] any sentence consisting solely of the word "buffalo" repeated any number of times is grammatically correct. The shortest is "Buffalo!", which can be taken as a verbal imperative instruction to bully someone ("[You,] buffalo!") with the implied subject "you" removed,[4]: 99–100, 104 or as a noun exclamation, expressing e.g. that a buffalo has been sighted, or as an adjectival exclamation, e.g. as a response to the question "where are you from?" Tymoczko uses the sentence as an example illustrating rewrite rules in linguistics.[4]: 104–105
The idea that one can construct a grammatically correct sentence consisting of nothing but repetitions of "buffalo" was independently discovered several times in the 20th century. The earliest known written example, "Buffalo buffalo buffalo buffalo", appears in the original manuscript for Dmitri Borgmann's 1965 book Language on Vacation, though the chapter containing it was omitted from the published version.[5] Borgmann recycled some of the material from this chapter, including the "buffalo" sentence, in his 1967 book, Beyond Language: Adventures in Word and Thought.[6]: 290 In 1972, William J. Rapaport, then a graduate student at Indiana University, came up with versions containing five and ten instances of "buffalo".[7] He later used both versions in his teaching, and in 1992 posted them to the LINGUIST List.[7][8] A sentence with eight consecutive buffalos is featured in Steven Pinker's 1994 book The Language Instinct as an example of a sentence that is "seemingly nonsensical" but grammatical. Pinker names his student, Ann Senghas, as the inventor of the sentence.[9]: 210
Neither Rapaport, Pinker, nor Senghas was initially aware of the earlier coinages.[7] Pinker learned of Rapaport's earlier example only in 1994, and Rapaport was not informed of Borgmann's sentence until 2006.[7]
Versions of this linguistic oddity can be constructed with other words which similarly simultaneously serve as collective noun, adjective, and verb, some of which need no capitalization (such as "police").[10]
General:
Other linguistically complex sentences:
|
https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffalo_buffalo_buffalo_Buffalo_buffalo
|
"James while John had had had had had had had had had had had a better effect on the teacher" is an English sentence used to demonstrate lexical ambiguity and the necessity of punctuation,[1] which serves as a substitute for the intonation,[2] stress, and pauses found in speech.[3] In human information processing research, the sentence has been used to show how readers depend on punctuation to give sentences meaning, especially in the context of scanning across lines of text.[4] The sentence is sometimes presented as a puzzle, where the solver must add the punctuation.
The sentence refers to two students, James and John, who are required by an English teacher to describe a man who had suffered from a cold in the past. John writes "The man had a cold", which the teacher marks incorrect, while James writes the correct "The man had had a cold". James's answer, being more grammatical, resulted in a better impression on the teacher.[5]
The sentence is easier to understand with added punctuation and emphasis:
James, while John had had "had", had had "had had"; "had had" had had a better effect on the teacher.[6]
In each of the five pairs of "had" in the above sentence, the first "had" in the pair is in the past perfect form. The italicized instances denote emphasis of intonation, focusing on the differences in the students' answers, then finally identifying the correct one.
Alternatively, the sentence can also be read as John's answer being better than James', simply by placing the same punctuation in a different arrangement through the sentence:
James, while John had had "had had", had had "had"; "had had" had had a better effect on the teacher.
The sentence can be given as a grammatical puzzle[7][8][9] or an item on a test,[1][2] for which one must find the proper punctuation to give it meaning. Hans Reichenbach used a similar sentence ("John where Jack had...") in his 1947 book Elements of Symbolic Logic as an exercise for the reader, to illustrate the different levels of language, namely object language and metalanguage. The intention was for the reader to add the needed punctuation for the sentence to make grammatical sense.[10]
In research showing how people make sense of information in their environment, this sentence was used to demonstrate how seemingly arbitrary decisions can drastically change the meaning, analogous to how changes in the punctuation and quotes in the sentence show that the teacher alternately prefers James's work and John's work (e.g., compare: 'James, while John had had "had", had...' vs. 'James, while John had had "had had",...').[11]
The sentence is also used to show the semantic vagueness of the word "had", as well as to demonstrate the difference between using a word and mentioning a word.[12]
It has also been used as an example of the complexities of language, its interpretation, and its effects on a person's perceptions.[13]
For the syntactic structure to be clear to a reader, this sentence requires, at a minimum, that the two phrases be separated by a semicolon, period, en-dash or em-dash. Still, Jasper Fforde's novel The Well of Lost Plots employs a variation of the phrase to illustrate the confusion that may arise even from well-punctuated writing:[14]
"Okay," said the Bellman, whose head was in danger of falling apart like a chocolate orange, "let me get this straight: David Copperfield, unlike Pilgrim's Progress, which had had 'had', had had 'had had'. Had 'had had' had TGC's approval?"
|
https://en.wikipedia.org/wiki/James_while_John_had_had_had_had_had_had_had_had_had_had_had_a_better_effect_on_the_teacher
|
A pseudoword is a unit of speech or text that appears to be an actual word in a certain language, while in fact it has no meaning. It is a specific type of nonce word, or even more narrowly a nonsense word, composed of a combination of phonemes which nevertheless conform to the language's phonotactic rules.[1] It is thus a kind of vocable: utterable but meaningless.
Such words lacking a meaning in a certain language or absent in any text corpus or dictionary can be the result of (the interpretation of) a truly random signal, but there will often be an underlying deterministic source, as is the case for examples like jabberwocky and galumph (both coined in a nonsense poem by Lewis Carroll), dord (a ghost word published due to a mistake), ciphers, and typos.
A string of nonsensical words may be described as gibberish. Word salad, in contrast, may contain legible and intelligible words but without semantic or syntactic correlation or coherence.
Within linguistics, a pseudoword is defined specifically as respecting the phonotactic restrictions of a language.[2] That is, it does not include sounds or series of sounds that do not exist in that language: it is easily pronounceable for speakers of the language. When reading pseudowords, some cite the need to reflect on the real words that are "friendly" and "unfriendly".[3] For instance, "tave" can be read easily due to the number of its friendly words, such as cave, pave, and wave. Also, when written down, a pseudoword does not include strings of characters that are not permissible in the spelling of the target language. "Vonk" is a pseudoword in English, while "dfhnxd" is not. The latter is an example of a nonword. Nonwords are contrasted with pseudowords in that they are not pronounceable and their spelling could not be the spelling of a real word.
Pseudowords are created in one of two ways. The first method involves changing at least one letter in a word. The second method takes various bigrams and trigrams and combines them. Both methods evaluate certain criteria to compare the pseudoword to another real word. The more that a given pseudoword matches a word in terms of criteria, the stronger the word is.[4]
Pseudowords are also sometimes called wug words in the context of psycholinguistic experiments. This is because wug [wʌg] was one such pseudoword used by Jean Berko Gleason in her 1958 wug test experiments.[5] Words like wug, which could have been a perfectly acceptable word in English but is not due to an accidental gap, were presented to children. The experimenter would then prompt the children to create a plural for wug, which was almost invariably wugs [wʌgz]. The experiments were designed to see if English morphophonemics would be applied by children to novel words. They revealed that even at a very young age, children have already internalized many of the complex features of their language.
A logatome is a short pseudoword or just a syllable which is used in acoustic experiments to examine speech recognition.
Experiments involving pseudowords have led to the discovery of the pseudoword effect, a phenomenon where non-words that are orthographically similar to real words give rise to more confusion, or "hits and false alarms," than other real words which are also similar in orthography. The reasoning behind this is focused on semantic meaning. Semantics help us more quickly differentiate between words that look similar, leading to the conclusion that the pseudoword effect is caused by a familiarity-based process.[6]
Pseudowords are also often used in studies involving aphasia and other cognitive deficits. Broca's aphasia in particular has been associated with difficulties in processing pseudowords. In aphasia studies, they are often used to measure syllable frequency by having patients attempt to pronounce them.[7] Also, patients with left hemisphere damage (LHD) tend to have significantly greater difficulty writing pseudowords than those with right hemisphere damage.[8] This specific deficit is known as the lexicality effect. It occurs in the presence of perisylvian, rather than extrasylvian, damage in the left hemisphere.[9]
In testing the ability of beginner readers, pseudowords are used due to their characteristics as pronounceable non-words.[10] Those with reading disabilities have a more difficult time pronouncing pseudowords. Because pseudowords are made using common syllables, trouble in pronouncing them is connected to trouble pronouncing real words. From these findings, nonsense word fluency is now considered a basic early literacy indicator.
A standardized test for beginning readers, Dynamic Indicators of Basic Early Literacy Skills (DIBELS), shows high scores in pseudoword pronunciation to be correlated with high scores in the reading of authentic words.[11] Due to these findings, pseudowords are often used to train early readers to strengthen their morphological knowledge.
There is evidence that higher scores on such tests, like the Word-Pseudoword Reading Competence Test, are highly correlated with other more general standardized tests, such as the Test for School Achievement and its subtests. Pseudoword pronunciation and spelling are associated with general reading comprehension and, more importantly, general, education-based achievement.[12]
A logatome or nonsense syllable is a short pseudoword, usually consisting of just one syllable, which has no meaning of its own. Examples of English logatomes are the nonsense words snarp or bluck.
Like other pseudowords, logatomes obey all the phonotactic rules of a specific language.
Logatomes are used in particular in acoustic experiments.[13] They are also used as a way to examine speech recognition[14] and in experimental psychology, especially the psychology of learning and memory.
Nonsense syllables were first introduced by Hermann Ebbinghaus[15] in his experiments on the learning of lists. His intention was that they would form a standard stimulus so that experiments would be reproducible. However, with increasing use it became apparent that different nonsense syllables were learned at very different rates, even when they had the same superficial structure. Glaze[16] introduced the concept of association value to describe these differences, which turned out to be reliable between people and situations. Since Glaze's time, experiments using nonsense syllables typically control association value in order to reduce variability in results between stimuli.
Nonsense syllables can vary in structure. The most used are the so-called CVC syllables, composed of a consonant, a vowel, and a consonant. These have the advantage that nearly all are pronounceable, that is, they fit the phonotactics of any language that uses closed syllables, such as English and German. They are often described as "CVC trigrams", reflecting their three-letter structure. Many other structures are possible and can be described on the same principles, e.g. VC, VCV, CVCV. But the CVC trigrams have been studied most intensively; for example, Glaze determined association values for 2019 of them.[16]
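The CVC scheme is simple enough to enumerate mechanically. The following sketch is illustrative only: the consonant and vowel inventories are simplified English letter sets chosen here for demonstration, not the stimulus set of any actual experiment.

```python
import itertools
import random

# Simplified English letter inventories (an assumption for illustration).
CONSONANTS = "bcdfghjklmnpqrstvwxz"
VOWELS = "aeiou"

def cvc_trigrams():
    """Yield every consonant-vowel-consonant three-letter string."""
    for c1, v, c2 in itertools.product(CONSONANTS, VOWELS, CONSONANTS):
        yield c1 + v + c2

all_cvc = list(cvc_trigrams())
print(len(all_cvc))          # 20 consonants * 5 vowels * 20 consonants = 2000
print(random.choice(all_cvc))
```

With these inventories the enumeration yields 2,000 candidate strings, on the order of the 2,019 CVC trigrams for which Glaze reported association values; a real stimulus set would additionally be screened for accidental real words.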
The term nonsense syllable is widely used to describe non-lexical vocables used in music, most notably in scat singing but also in many other forms of vocal music. Although such usages do not invoke the technical issues about structure and associability that are of concern in psychology, the essential meaning of the term is the same.
|
https://en.wikipedia.org/wiki/Pseudoword
|
In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning.[1] Specific topics include scope,[2][3] binding,[2] and lexical semantic properties such as verbal aspect and nominal individuation,[4][5][6][7][8] semantic macroroles,[8] and unaccusativity.[4]
The interface is conceived of very differently in formalist and functionalist approaches. While functionalists tend to look to semantics and pragmatics for explanations of syntactic phenomena, formalists try to limit such explanations to syntax itself.[9] Aside from syntax, other aspects of grammar have been studied in terms of how they interact with semantics, as can be seen in the existence of terms such as morphosyntax–semantics interface.[3]
Within functionalist approaches, research on the syntax–semantics interface has been aimed at disproving the formalist argument of the autonomy of syntax, by finding instances of semantically determined syntactic structures.[4][10]
Levin and Rappaport Hovav, in their 1995 monograph, reiterated that some aspects of verb meaning are relevant to syntax and others are not, as previously noted by Steven Pinker.[11][12] Levin and Rappaport Hovav isolated such aspects by focusing on the phenomenon of unaccusativity, which is "semantically determined and syntactically encoded".[13]
Van Valin and LaPolla, in their 1997 monographic study, found that the more semantically motivated or driven a syntactic phenomenon is, the more it tends to be typologically universal, that is, to show less cross-linguistic variation.[14]
In formal semantics, semantic interpretation is viewed as a mapping from syntactic structures to denotations. There are several formal views of the syntax–semantics interface which differ in what they take to be the inputs and outputs of this mapping. In the Heim and Kratzer model commonly adopted within generative linguistics, the input is taken to be a special level of syntactic representation called logical form. At logical form, semantic relationships such as scope and binding are represented unambiguously, having been determined by syntactic operations such as quantifier raising. Other formal frameworks take the opposite approach, assuming that such relationships are established by the rules of semantic interpretation themselves. In such systems, the rules include mechanisms such as type shifting and dynamic binding.[1][15][16][2]
Before the 1950s, there was no discussion of a syntax–semantics interface in American linguistics, since neither syntax nor semantics was an active area of research.[17] This neglect was due in part to the influence of logical positivism and of behaviorism in psychology, which viewed hypotheses about linguistic meaning as untestable.[17][18]
By the 1960s, syntax had become a major area of study, and some researchers began examining semantics as well. In this period, the most prominent view of the interface was the Katz–Postal hypothesis, according to which deep structure was the level of syntactic representation which underwent semantic interpretation. This assumption was upended by data involving quantifiers, which showed that syntactic transformations can affect meaning. During the linguistics wars, a variety of competing notions of the interface were developed, many of which live on in present-day work.[17][2]
|
https://en.wikipedia.org/wiki/Syntax%E2%80%90semantics_interface
|
In linguistics, a comparative illusion (CI) or Escher sentence[a] is a comparative sentence which initially seems to be acceptable but upon closer reflection has no well-formed, sensical meaning. The typical example sentence used to typify this phenomenon is More people have been to Russia than I have.[4][b] The effect has also been observed in other languages. Some studies have suggested that, at least in English, the effect is stronger for sentences whose predicate is repeatable. The effect has also been found to be stronger in some cases when there is a plural subject in the second clause.
Escher sentences are ungrammatical because a matrix clause subject like more people makes a comparison between two sets of individuals, but there is no such set of individuals in the second clause.[5] For the sentence to be grammatical, the subject of the second clause must be a bare plural.[6] Linguists have remarked that it is "striking" that, despite these sentences' grammar admitting no meaningful interpretation, people so often report that they sound acceptable,[7] and that it is "remarkable" that people seldom notice any error.[5]
Mario Montalbetti's 1984 Massachusetts Institute of Technology dissertation has been credited as the first to note these sorts of sentences;[5] in his prologue he acknowledges Hermann Schultze "for uttering the most amazing */? sentence I've ever heard: More people have been to Berlin than I have",[9] although the dissertation itself does not discuss such sentences.[10] Parallel examples with Russia instead of Berlin were briefly discussed in psycholinguistic work in the 1990s and 2000s by Thomas Bever and colleagues.[11]
Geoffrey K. Pullum wrote about this phenomenon in a 2004 post on Language Log after Jim McCloskey brought it to his attention.[12] In a post the following day, Mark Liberman gave the name "Escher sentences" to such sentences in reference to M. C. Escher's 1960 lithograph Ascending and Descending.[13] He wrote:[14]
All these stimuli [i.e., these sentences, Penrose stairs, and the Shepard tone] involve familiar and coherent local cues whose global integration is contradictory or impossible. These stimuli also all seem OK in the absence of scrutiny. Casual, unreflective uptake has no real problem with them; you need to pay attention and think about them a bit before you notice that something is going seriously wrong.
Although rare, instances of this construction have appeared in natural text. Language Log has noted examples such as:
Another attested example is the following tweet from Dan Rather:[17]
Experiments on the acceptability of comparative illusion sentences have found results which are "highly variable both within and across studies".[18] While the illusion of acceptability for comparative illusions has also been informally reported for speakers of Faroese, German,[c] Icelandic, Polish, and Swedish,[20] systematic investigation has mostly centered on English, although Aarhus University neurolinguist Ken Ramshøj Christensen has run several experiments on comparative illusions in Danish.[21]
When Danish (da) and Swedish (sv) speakers were asked what (1) means, their responses fell into one of the following categories:[22]
Flere folk har været i Paris end jeg har.
More people have been in Paris than I have.
'More people have been to Paris than I have.'
Paraphrase (d) is in fact the only possible interpretation of (1); it is available due to the lexical ambiguity of har "have" between an auxiliary verb and a lexical verb, just as with the English have. However, the majority of participants (da: 78.9%; sv: 56%) gave a paraphrase which does not follow from the grammar.[23] Another study, in which Danish participants had to pick from a set of paraphrases, say the sentence meant something else, or say it was meaningless, found that people selected "It does not make sense" for comparative illusions 63% of the time and said it meant something 37% of the time.[24]
The first study examining what affects the acceptability of these sentences was presented at the 2004 CUNY Conference on Human Sentence Processing.[25] Scott Fults and Colin Phillips found that Escher sentences with ellipsis (a) were more acceptable than the same sentences without ellipsis (b).[26]
Responses to this study noted that it only compared elided material to nothing, and that even in grammatical comparatives, ellipsis of repeated phrases is preferred.[27] In order to control for the awkwardness of identical predicates, Alexis Wellwood and colleagues compared comparative illusions with ellipsis to those with a different predicate.[28]
They found that both CI-type and control sentences were slightly more acceptable with ellipsis, which led them to reject the hypothesis that ellipsis is responsible for the acceptability of CIs. Rather, it is possible that people simply prefer shorter sentences in general.[29] Patrick Kelley's Michigan State University dissertation found similar results.[30]
Alexis Wellwood and colleagues have found in experiments that the illusion of grammaticality is greater when the sentence's predicate denotes a repeatable event.[31] For instance, (a) is experimentally found to be more acceptable than (b).[32]
The comparative must be in the subject position for the illusion to work; sentences like (a) which also have verb phrase ellipsis are viewed as unacceptable without any illusion of acceptability:[33]
A pilot study by Iria de Dios-Flores also found that repeatability of the predicate had an effect on the acceptability of CIs in English.[34]However, Christensen's study on comparative illusions in Danish did not find a significant difference in acceptability for sentences with repeatable predicates (a) and those without (b).[35]
Flere mænd har spist kød end kvinder har ifølge rapporten.
More men have eaten meat than women have according.to report-the
'More men have eaten meat than women have according to the report.'
Flere drenge har mistet hørelsen end piger har i Danmark.
More boys have lost hearing-the than girls have in Denmark
'More boys have lost the sense of hearing than girls have in Denmark.'
The lexical ambiguity of the English quantifier more has led to a hypothesis that the acceptability of CIs is due to people reinterpreting a "comparative" more as an "additive" more. As fewer does not have such an ambiguity, Wellwood and colleagues tested whether there was any difference in acceptability judgements depending on whether the sentences used fewer or more. In general, their study found significantly higher acceptability for sentences with more than with fewer, but the difference did not disproportionately affect the comparative illusion sentences compared to the controls.[29]
Christensen found no significant difference in acceptability for Danish CIs with flere ("more") compared to those with færre ("fewer").[35]
Experiments have also investigated the effects that different kinds of subjects in the than-clause have on CIs' acceptability. Wellwood and colleagues found sentences with the first-person singular pronoun I to be more acceptable than those with the third-person singular pronoun he, though they note this might be due to discourse effects and the lack of a prior antecedent for he. They found no significant difference between sentences with a singular third-person pronoun (he) and those with a singular definite description (the boy). There was no difference in number for the first-person pronominal subject (I vs. we), but plural definite descriptions (the boys) were significantly more acceptable than singular definite descriptions (the boy).[36] Christensen found that plural subjects (kvinder, "women") in the than-clause led to significantly higher acceptability ratings than singular subjects (frisøren, "the hairdresser").[35]
De Dios-Flores examined whether there was an effect depending on whether the than-clause subject could be a subset of the matrix subject, as in (a), compared to cases where it could not be due to a gender mismatch, as in (b). No significant differences were found.[37]
In a study of Danish speakers, CIs with prepositional sentential adverbials like om aftenen ("in the evening") were found to be less acceptable than those without.[38]
Comparatives in Bulgarian can optionally include the degree operator колкото (kolkoto); sentences with this morpheme (a) are immediately found unacceptable, but those without it (b) produce the same illusion of acceptability.[39]
Повече хора са били в Русия от-колкото аз.
Poveche hora sa bili v Rusiya ot-kolkoto az.
more people are been in Russia from-how.many I
'More people have been to Russia than me.'
Повече хора са били в Русия от мен.
Poveche hora sa bili v Rusiya ot men.
more people are been in Russia than me
'More people have been to Russia than me.'
A neuroimaging study of Danish speakers found less activation in the left inferior frontal gyrus, left premotor cortex (BA 4, 6), and left posterior temporal cortex (BA 21, 22) when processing CIs like (a) than when processing grammatical clausal comparatives like (b). Christensen has suggested this shows that CIs are easy to process, but because they are nonsensical, processing is "shallow". Low LIFG activation levels also suggest that people do not perceive CIs as semantically anomalous.[40]
Flere mænd har boet i telt end Marie har.
More men have lived in tent than Mary has.
'More men have lived in a tent than Mary has.'
Flere mænd har boet i telt end på hotel.
More men have lived in tent than on hotel.
'More men have lived in a tent than in a hotel.'
Townsend and Bever have posited that Escher sentences get perceived as acceptable because they are an apparent blend of two grammatical templates.[41]
Wellwood and colleagues have noted in response that the possibility of each clause being grammatical in a different sentence (a, b) does not guarantee a blend (c) would be acceptable.[42]
Wellwood and colleagues also interpret Townsend and Bever's theory as requiring a shared lexical element in each template. If this version is right, they predict (c) would be viewed as less acceptable due to the ungrammaticality of (b):[42]
Wellwood and colleagues, based on their experimental results, have rejected Townsend and Bever's hypothesis and instead support their event comparison hypothesis, which states that comparative illusions are due to speakers reinterpreting these sentences as discussing a comparison of events.[18]
The term "comparative illusion" has sometimes been used as an umbrella term which also encompasses "depth charge" sentences like "No head injury is too trivial to be ignored."[43] This example, first discussed by Peter Cathcart Wason and Shuli Reich in 1979, is very often initially perceived as having the meaning "No head injury should be ignored—even if it's trivial", even though upon careful consideration the sentence actually says "All head injuries should be ignored—even trivial ones." The authors illustrate their point by comparing the sentence to "No missile is too small to be banned."[44]
Phillips and colleagues have discussed other grammatical illusions with respect to attraction, case in German, binding, and negative polarity items; speakers initially find such sentences acceptable, but later realize they are ungrammatical.[45] It has also been compared to the "missing VP illusion".[46]
|
https://en.wikipedia.org/wiki/Comparative_illusion
|
The AI effect is the discounting of the behavior of an artificial intelligence program as not "real" intelligence.[1]
The author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2]
Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]
"The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1] Edward Geist credits John McCarthy with coining the term "AI effect" to describe this phenomenon.[4]
McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[5] It is an example of moving the goalposts.[6]
Tesler's Theorem is:
AI is whatever hasn't been done yet.
Douglas Hofstadter quotes this,[7] as do many other commentators.[8]
When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other part by a human. This formalisation is referred to as a human-assisted Turing machine.[9]
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI. This underappreciation is known from fields as diverse as computer chess,[10] marketing,[11] agricultural automation,[8] hospitality[12] and optical character recognition.[13]
Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[14]
According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[11]
Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[15]
Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[16]
The AI effect on decision-making in supply chain risk management is a severely understudied area.[17]
To avoid the AI effect problem, the editors of a special issue of IEEE Software on AI and software engineering recommend not overselling – not hyping – the real achievable results to start with.[18]
The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat.[4] They point out that it obscures the fact that applications of AI had already found their way into both the US and Soviet militaries during the Cold War.[4] AI tools to advise humans regarding weapons deployment were developed by both sides and received very limited usage during that time.[4] They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats in the present day.[4]
Some experts think that the AI effect will continue, with advances in AI continually producing objections and redefinitions of public expectations.[19][20][21]Some also believe that the AI effect will expand to include the dismissal of specialised artificial intelligences.[21]
In the early 1990s, during the second "AI winter", many AI researchers found that they could get more funding and sell more software if they avoided the bad name of "artificial intelligence" and instead pretended their work had nothing to do with intelligence.[citation needed]
Patty Tascarella wrote in 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[22]
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[23] By discounting artificial intelligence, people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.[citation needed]
A related effect has been noted in the history of animal cognition and in consciousness studies, where every time a capacity formerly thought of as uniquely human is discovered in animals (e.g. the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.[citation needed]
Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[24]
Mueller (1987) proposed comparing AI to human intelligence, coining the standard of Human-Level Machine Intelligence.[25] This nonetheless suffers from the AI effect when different humans are used as the standard.[25]
When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, public perception of chess playing shifted from a difficult mental task to a routine operation.[26]
The public complained that Deep Blue had only used "brute force methods" and that it wasn't real intelligence.[10] Notably, John McCarthy, an AI pioneer who coined the term "artificial intelligence", was disappointed by Deep Blue. He described it as a mere brute-force machine that did not have any deep understanding of the game. McCarthy would also criticize how widespread the AI effect is ("As soon as it works, no one calls it AI anymore"[27][28]: 12), but in this case did not think that Deep Blue was a good example.[27]
On the other side, Fred A. Reed writes:[29]
A problem that proponents of AI regularly face is this: When we know how a machine does something "intelligent", it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright.
|
https://en.wikipedia.org/wiki/AI_effect
|
ALPAC (Automatic Language Processing Advisory Committee) was a committee of seven scientists led by John R. Pierce, established in 1964 by the United States government in order to evaluate the progress in computational linguistics in general and machine translation in particular. Its report, issued in 1966, gained notoriety for being very skeptical of research done in machine translation so far and for emphasizing the need for basic research in computational linguistics; this eventually caused the U.S. government to reduce its funding of the topic dramatically. This marked the beginning of the first AI winter.
The ALPAC was set up in April 1964 with John R. Pierce as the chairman.
The committee consisted of:
Testimony was heard from:
ALPAC's final recommendations (p. 34) were, therefore, that research should be supported on:
|
https://en.wikipedia.org/wiki/ALPAC
|
Speech Application Language Tags (SALT) is an XML-based markup language that is used in HTML and XHTML pages to add voice recognition capabilities to web-based applications.
Speech Application Language Tags enables multimodal and telephony-enabled access to information, applications, and Web services from PCs, telephones, tablet PCs, and wireless personal digital assistants (PDAs). The Speech Application Language Tags extend existing markup languages such as HTML, XHTML, and XML. Multimodal access will enable users to interact with an application in a variety of ways: they will be able to input data using speech, a keyboard, keypad, mouse and/or stylus, and produce data as synthesized speech, audio, plain text, motion video, and/or graphics.
SALT was developed as a competitor to VoiceXML and was supported by the SALT Forum. The SALT Forum was founded on October 15, 2001, by Microsoft, along with Cisco Systems, Comverse, Intel, Philips Consumer Electronics, and ScanSoft.[1] The SALT 1.0 specification was submitted to the W3C (World Wide Web Consortium) for review in August 2002.[2] However, the W3C continued developing its VoiceXML 2.0 standard, which reached the final "Recommendation" stage in March 2004.[3]
By 2006, Microsoft realized Speech Server had to support the W3C VoiceXML standard to remain competitive. Microsoft joined the VoiceXML Forum as a Promoter in April of that year.[4] Speech Server 2007 supports VoiceXML 2.0 and 2.1 in addition to SALT. In 2007, Microsoft purchased Tellme, one of the largest VoiceXML service providers.
By that point nearly every other SALT Forum company had committed to VoiceXML.[5]The last press release posted to the SALT Forum website was in 2003, while the VoiceXML Forum is quite active. "SALT [Speech Application Language Tags] is a direct competitor but has not reached the level of maturity of VoiceXML in the standards process," said Bill Meisel, principal at TMA Associates, a speech technology research firm.[3]
The Microsoft Speech Server 2004 product supports SALT, while Microsoft Speech Server 2007 supports SALT in addition to VoiceXML 2.0 and 2.1. There is also a speech add-in for Internet Explorer that interprets SALT tags on web pages, available as part of the Microsoft Speech Application SDK.
|
https://en.wikipedia.org/wiki/Speech_Application_Language_Tags
|
Articulatory speech recognition means the recovery of speech (in the form of phonemes, syllables or words) from acoustic signals with the help of articulatory modeling or an extra input of articulatory movement data.[1] Speech recognition (or automatic speech recognition, acoustic speech recognition) means the recovery of speech from acoustics (the sound wave) only. Articulatory information is extremely helpful when the acoustic input is of low quality, perhaps because of noise or missing data.
Measurable information from the articulatory system (e.g. tongue, jaw movements) can supplement acoustic signals to improve phone recognition accuracy by 2%. However, attempts to estimate articulatory data from acoustic signals alone have not significantly enhanced recognition performance.[2]
|
https://en.wikipedia.org/wiki/Articulatory_speech_recognition
|
Audio mining is a technique by which the content of an audio signal can be automatically analyzed and searched. It is most commonly used in the field of automatic speech recognition, where the analysis tries to identify any speech within the audio. The term "audio mining" is sometimes used interchangeably with audio indexing, phonetic searching, phonetic indexing, speech indexing, audio analytics, speech analytics, word spotting, and information retrieval. Audio indexing, however, mostly describes the pre-processing step of audio mining, in which the audio file is broken down into a searchable index of words.
Academic research on audio mining began in the late 1970s in schools like Carnegie Mellon University, Columbia University, the Georgia Institute of Technology, and the University of Texas.[1]Audio data indexing and retrieval began to receive attention and demand in the early 1990s, when multimedia content started to develop and the volume of audio content significantly increased.[2]Before audio mining became the mainstream method, written transcripts of audio content were created and manually analyzed.[3]
Audio mining is typically split into four components: audio indexing, speech processing and recognition systems, feature extraction and audio classification.[4] The audio will typically be processed by a speech recognition system in order to identify word or phoneme units that are likely to occur in the spoken content. This information may either be used immediately in pre-defined searches for keywords or phrases (a real-time "word spotting" system), or the output of the speech recognizer may be stored in an index file. One or more audio mining index files can then be loaded at a later date in order to run searches for keywords or phrases.
The results of a search will normally be in terms of hits, which are regions within files that are good matches for the chosen keywords. The user may then be able to listen to the audio corresponding to these hits in order to verify if a correct match was found.
Audio presents the central problem of information retrieval: locating the documents that contain the search key. Unlike humans, a computer cannot directly distinguish between different types of audio, such as speed, mood, noise, music or human speech, so an effective searching method is needed. Audio indexing therefore allows efficient search for information by analyzing an entire file using speech recognition. An index of content is then produced, recording words and their locations, through content-based audio retrieval focused on extracted audio features.
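The indexing idea can be illustrated with a minimal sketch. The (word, start-time) pairs below are hypothetical recognizer output, not produced by any real recognition engine; an inverted index maps each word to the locations where it occurs, so a keyword search returns hits without rescanning the audio.

```python
from collections import defaultdict

def build_index(recognized):
    """Map each recognized word to the start times where it occurs."""
    index = defaultdict(list)
    for word, start_time in recognized:
        index[word.lower()].append(start_time)
    return index

def search(index, keyword):
    """Return the list of hit locations for a keyword (empty if absent)."""
    return index.get(keyword.lower(), [])

# Hypothetical word-level recognizer output: (word, start time in seconds).
transcript = [("the", 0.0), ("quarterly", 0.4), ("report", 1.1),
              ("shows", 1.6), ("the", 1.9), ("report", 2.3)]
idx = build_index(transcript)
print(search(idx, "report"))   # [1.1, 2.3]
```

Each hit is a region of the file a user can then listen to in order to verify the match, as described below.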
This is done mainly through two methods: Large Vocabulary Continuous Speech Recognition (LVCSR) and Phonetic-based Indexing.
In text-based indexing or large vocabulary continuous speech recognition (LVCSR), the audio file is first broken down into recognizable phonemes. It is then run through adictionarythat can contain several hundred thousand entries and matched with words and phrases to produce a full text transcript. A user can then simply search for a desired term, and the relevant portion of the audio content will be returned.
If the text or word cannot be found in the dictionary, the system will choose the next most similar entry it can find. The system uses a language understanding model to create a confidence level for its matches. If the confidence level is below 100 percent, the system will provide options of all the matches it found.[5]
The main draw of LVCSR is its high accuracy and high search speed. In LVCSR,statistical methodsare used to predict the likelihood of different word sequences, so its accuracy is much higher than the single-word lookup of a phonetic search. If the word can be found, the probability that it was the word spoken is very high.[6]Meanwhile, although the initial processing of the audio takes a fair amount of time, searching is quick, as only simple text-to-text matching is needed.
On the other hand, LVCSR is susceptible to the common issues ofspeech recognition. The inherently variable nature of audio and interference from external noise both reduce the accuracy of text-based indexing.
Another problem with LVCSR is its heavy reliance on its dictionary database. LVCSR only recognizes words that are found in its dictionary database, and such dictionaries are unable to keep up with the constant emergence of newterminology, names and words. Should the dictionary not contain a word, there is no way for the system to identify or predict it. This reduces the accuracy and reliability of the system, and is named the out-of-vocabulary (OOV) problem. Audio mining systems try to cope with OOV by continuously updating the dictionary and language model used, but the problem remains significant and has prompted a search for alternatives.[7]
Additionally, due to the need to constantly update and maintain task-based knowledge and large training databases to cope with the OOV problem, high computational costs are incurred. This makes LVCSR an expensive approach to audio mining.
Phonetic-based indexing also breaks the audio file into recognizable phonemes, but instead of converting them to a text index, they are kept as they are and analyzed to create a phonetic-based index.
The process of phonetic-based indexing can be split into two phases. The first phase is indexing. It begins by converting the input media into a standard audio representation format (PCM). Then, an acoustic model is applied to the speech. This acoustic model represents characteristics of both an acoustic channel (an environment in which the speech was uttered and a transducer through which it was recorded) and a natural language (in which human beings expressed the input speech). This produces a corresponding phonetic search track, or phonetic audio track (PAT), a highly compressed representation of the phonetic content of the input media.
The second phase is searching. The user's search query term is parsed into a possible phoneme string using a phonetic dictionary. Then, multiple PAT files can be scanned at high speed during a single search for likely phonetic sequences that closely match corresponding strings of phonemes in the query term.[8][9]
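The two-phase search over a phonetic track can be sketched with invented pieces: a tiny phonetic dictionary (the symbols loosely follow ARPABET), a plain list of phonemes standing in for the highly compressed PAT representation, and an edit-distance scan for near matches. None of this reflects a real PAT file format.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance,
    kept to a single rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

# Toy phonetic dictionary entry (illustrative only).
PHONETIC_DICT = {"data": ["D", "EY", "T", "AH"]}

def phonetic_search(track, query, max_dist=1):
    """Scan a phoneme track for windows within max_dist edits of
    the query's phoneme string; return their offsets."""
    target = PHONETIC_DICT[query]
    n = len(target)
    return [i for i in range(len(track) - n + 1)
            if edit_distance(track[i:i + n], target) <= max_dist]

track = ["S", "OW", "D", "EY", "D", "AH", "IH", "Z"]  # a stored "PAT"
print(phonetic_search(track, "data"))   # → [2]
```

Because the match is approximate rather than exact, unclear utterances still produce candidate hits, which is the open-vocabulary advantage discussed below.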
Phonetic indexing is most attractive as it is largely unaffected by linguistic issues such as unrecognized words and spelling errors. Phonetic preprocessing maintains an open vocabulary that does not require updating. That makes it particularly useful for searching specialized terminology or words in foreign languages that do not commonly appear in dictionaries. It is also more effective for searching audio files with disruptive background noise and/or unclear utterances as it can compile results based on the sounds it can discern, and should the user wish to, they can search through the options until they find the desired item.[10]
Furthermore, in contrast to LVCSR, it can process audio files very quickly, as there are relatively few unique phonemes across languages. However, phonemes cannot be indexed as effectively as entire words, so searching on a phonetic-based system is slow.[11]
An issue with phonetic indexing is its low accuracy. Phoneme-based searches result in more false matches than text-based indexing. This is especially prevalent for short search terms, which have a stronger likelihood of sounding similar to other words or being part of bigger words. It could also return irrelevant results from other languages. Unless the system recognizes exactly the entire word, or understands phonetic sequences of languages, it is difficult for phonetic-based indexing to return accurate findings.[12]
Deemed the most critical and complex component of audio mining, speech recognition requires knowledge of the human speech production system and how to model it.
To mirror the human speech production system, an electrical speech production system is developed, consisting of:
The electrical speech production system converts the acoustic signal into a corresponding representation of the spoken words through the acoustic models in its software, in which all phonemes are represented. A statisticallanguage modelaids the process by identifying how likely words are to follow one another in a given language. Combined with a complex probability analysis, the speech recognition system is capable of taking an unknown speech signal and transcribing it into words based on the program's dictionary.[13][14]
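The interplay of acoustic model and statistical language model can be sketched as a re-scoring step: each candidate transcription carries an acoustic score, a bigram model scores how plausible the word sequence is, and the decoder keeps the best combined score. All probabilities and candidates below are invented for illustration.

```python
import math

# Toy bigram language model: probability of the second word
# following the first (invented numbers).
BIGRAM = {("recognize", "speech"): 0.6, ("wreck", "a"): 0.2,
          ("a", "nice"): 0.3, ("nice", "beach"): 0.4}

def lm_log_prob(words, floor=1e-4):
    """Log probability of a word sequence under the toy bigram model."""
    return sum(math.log(BIGRAM.get(pair, floor))
               for pair in zip(words, words[1:]))

def best_candidate(candidates):
    """candidates: list of (words, acoustic_log_score) pairs.
    Pick the sequence maximizing acoustic + language-model score."""
    return max(candidates, key=lambda c: c[1] + lm_log_prob(c[0]))[0]

candidates = [
    (["recognize", "speech"], -2.0),
    (["wreck", "a", "nice", "beach"], -1.9),  # slightly better acoustics
]
print(best_candidate(candidates))   # → ['recognize', 'speech']
```

Even though the second candidate sounds marginally better acoustically, the language model's preference for the common word sequence decides the outcome, which is the effect described above.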
An ASR (automatic speech recognition) system includes:
Applications of speech processing include speech recognition, speech coding, speaker authentication, speech enhancement and speech synthesis.
As a prerequisite to the entire speech recognition process, feature extraction must first be established within the system. Audio files must be processed from start to end so that no important information is lost.
Sound sources are differentiated through pitch, timbral features, rhythmic features, inharmonicity, autocorrelation and other features based on the signal's predictability, statistical pattern and dynamic characteristics.
Standardization within feature extraction is enforced through the internationalMPEG-7 standard features, which fix the features used for audio or speech signal classification in terms of the techniques used to analyze and represent raw data as particular features.
Standard speech extraction techniques:
However, the three techniques are not ideal as non-stationary signals are ignored. Non-stationary signals can be analyzed usingFourierandshort-time Fourier, while time-varying signals are analyzed usingWaveletandDiscrete wavelet transform (DWT).
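To make frame-based feature extraction concrete, the sketch below splits a signal into fixed-size frames and computes two simple low-level features, short-time energy and zero-crossing rate, that are often used alongside the spectral techniques discussed above. The signal values and frame sizes are made up for the example.

```python
def frames(signal, size, hop):
    """Split a sample list into overlapping or abutting frames."""
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, hop)]

def short_time_energy(frame):
    """Mean squared amplitude of one frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

signal = [0.0, 0.5, -0.5, 0.5, -0.5, 0.5, 0.1, 0.1]
for f in frames(signal, size=4, hop=4):
    print(round(short_time_energy(f), 4), round(zero_crossing_rate(f), 4))
```

High zero-crossing rate with low energy tends to indicate noise-like or unvoiced content, while low zero-crossing rate with high energy suggests voiced speech or tonal material, which is one way such features help differentiate sound sources.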
Audio classification is a form ofsupervised learning, and involves the analysis of audio recordings. It is split into several categories- acoustic data classification, environmental sound classification, musical classification, and natural language utterance classification.[15]The features often used for this process arepitch,timbral features, rhythmic features,inharmonicity, and audio correlation, although other features may also be used. There are several methods to audio classification using existing classifiers, such as thek-Nearest Neighbors, or thenaïve Bayes classifier. Using annotated audio data, machines learn to identify and classify the sounds.
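A minimal sketch of the k-nearest-neighbours classifier mentioned above: each annotated clip is reduced to a feature vector (here hand-made 2-D vectors) and a new clip takes the majority label among its k closest examples. The feature values and labels are invented; a real system would use the pitch, timbral, and rhythmic features described in the text.

```python
from collections import Counter
import math

def knn_classify(train, vector, k=3):
    """train: list of (feature_vector, label) pairs.
    Return the majority label among the k nearest neighbours."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], vector))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.9), "speech"), ((0.2, 0.8), "speech"),
         ((0.9, 0.2), "music"), ((0.8, 0.1), "music"),
         ((0.15, 0.85), "speech")]
print(knn_classify(train, (0.12, 0.88)))   # → speech
```

The annotated `train` list plays the role of the labelled audio data from which the machine "learns" to classify new sounds.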
There has also been research into usingdeep neural networksfor speech recognition and audio classification, due to their effectiveness in other fields such as image classification.[16]One method of using DNNs is by converting audio files into image files, by way ofspectrogramsin order to perform classification.[citation needed]
Audio mining is used in areas such as musical audio mining (also known asmusic information retrieval), which relates to the identification of perceptually important characteristics of a piece of music such as melodic, harmonic or rhythmic structure. Searches can then be carried out to find pieces of music that are similar in terms of their melodic, harmonic and/or rhythmic characteristics.
Within the field oflinguistics, audio mining has been used for phonetic processing and semantic analysis.[17]The efficiency of audio mining in processing audio-visual data lends aid in speaker identification and segmentation, as well as text transcription. Through this process, speech can be categorized in order to identify information, or to extract information through keywords spoken in the audio. In particular, this has been used forspeech analytics. Call centers have used the technology to conduct real time analysis by identifying changes in tone, sentiment or pitch, amongst others, which is then processed by a decision engine or artificial intelligence to take further action.[18]Further use has been seen in areas of speech recognition and text-to-speech applications.
It has also been used in conjunction with video mining, in projects such as mining movie data.
Sen, Soumya; Dutta, Anjan; Dey, Nilanjan (2019).Audio Processing and Speech Recognition. Springer.ISBN978-981-13-6098-5.
|
https://en.wikipedia.org/wiki/Audio_mining
|
Audio visual speech recognition(AVSR) is a technique that usesimage processingcapabilities inlip readingto aidspeech recognitionsystems in recognizing undeterministicphonesor giving preponderance among near probability decisions.
Each system oflip readingandspeech recognitionworks separately, and their results are then combined at the stage offeature fusion. As the name suggests, the approach has two parts: an audio part and a visual part. In the audio part, features such as the log mel spectrogram and MFCCs are extracted from the raw audio samples, and a model is built to produce a feature vector from them. For the visual part, a variant of a convolutional neural network is generally used to compress the image into a feature vector. The two vectors (audio and visual) are then concatenated to predict the target object.
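The feature-fusion step reduces to concatenating the modality-specific vectors into one joint vector before classification. The vectors below and their sizes are invented purely for illustration.

```python
def fuse(audio_features, visual_features):
    """Feature fusion: concatenate modality-specific vectors."""
    return list(audio_features) + list(visual_features)

audio_vec = [0.12, 0.55, 0.33]   # e.g. from a log-mel/MFCC front end
visual_vec = [0.81, 0.07]        # e.g. from a CNN over mouth images
joint = fuse(audio_vec, visual_vec)
print(joint)   # → [0.12, 0.55, 0.33, 0.81, 0.07]
```

The downstream classifier then operates on `joint`, so ambiguous audio evidence (say, /b/ versus /v/) can be disambiguated by the visual component.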
|
https://en.wikipedia.org/wiki/Audio-visual_speech_recognition
|
IBM'sAutomatic Language Translatorwas amachine translationsystem that convertedRussiandocuments intoEnglish. It used anoptical discthat stored 170,000 word-for-word and statement-for-statement translations and a custom computer to look them up at high speed. Built for the US Air Force's Foreign Technology Division, theAN/GSQ-16(orXW-2), as it was known to the Air Force, was primarily used to convert Soviet technical documents for distribution to western scientists. The translator was installed in 1959, dramatically upgraded in 1964, and was eventually replaced by amainframerunningSYSTRANin 1970.
The translator began in a June 1953 contract from the US Navy to theInternational Telemeter Corporation(ITC) of Los Angeles. This was not for a translation system, but a pure research and development contract for a high-performance photographic online storage medium consisting of small black rectangles embedded in a plastic disk. When the initial contract ran out, what was then theRome Air Development Center(RADC) took up further funding in 1954 and onwards.[1]
The system was developed by Gilbert King, chief of engineering at ITC, along with a team that includedLouis Ridenour. It evolved into a 16-inch plastic disk with data recorded as a series of microscopic black rectangles or clear spots. Only the outermost 4 inches of the disk were used for storage, which increased the linear speed of the portion being accessed. When the disk spun at 2,400 RPM it had an access speed of about 1 Mbit/sec. In total, the system stored 30 Mbits, making it the highest density online system of its era.[1][a]
In 1954 IBM gave an influential demonstration of machine translation, known today as the "Georgetown–IBM experiment". Run on anIBM 701mainframe, the translation system knew only 250 words of Russian limited to the field of organic chemistry, and only 6 grammar rules for combining them. Nevertheless, the results were extremely promising, and widely reported in the press.[2]
At the time, most researchers in the nascent machine translation field felt that the major challenge to providing reasonable translations was building a large library, as storage devices of the era were both too small and too slow to be useful in this role.[3]King felt that the photoscopic store was a natural solution to the problem, and pitched the idea of an automated translation system based on the photostore to the Air Force. RADC proved interested, and provided a research grant in May 1956. At the time, the Air Force also provided a grant to researchers at theUniversity of Washingtonwho were working on the problem of producing an optimal translation dictionary for the project.
King advocated a simple word-for-word approach to translations. He thought that the natural redundancies in language would allow even a poor translation to be understood, and that local context was alone enough to provide reasonable guesses when faced with ambiguous terms. He stated that "the success of the human in achieving a probability of .50 in anticipating the words in a sentence is largely due to his experience and the real meanings of the words already discovered."[4]In other words, simply translating the words alone would allow a human to effectively read a document, because they would be able to reason out the proper meaning from the context provided by earlier words.
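King's word-for-word approach can be sketched as a plain dictionary lookup that leaves any remaining ambiguity to the human reader, as he argued context would allow. The transliterated entries below are invented for the example, not drawn from the actual 170,000-entry store.

```python
# Toy word-for-word store (illustrative, transliterated entries).
RU_EN = {"skorost": "speed", "sveta": "of-light", "ravna": "equals"}

def word_for_word(sentence):
    """Replace each word by its stored equivalent; bracket any
    word the store does not contain."""
    return " ".join(RU_EN.get(w, "[" + w + "]")
                    for w in sentence.split())

print(word_for_word("skorost sveta ravna c"))
# → speed of-light equals [c]
```

The rough output is not fluent English, but per King's argument a reader can recover the meaning from the translated words around the gaps.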
In 1958 King moved to IBM'sThomas J. Watson Research Center, and continued development of the photostore-based translator. Over time, King changed the approach from a pure word-for-word translator to one that stored "stems and endings", which broke words into parts that could be combined back together to form complete words again.[4]
The first machine, "Mark I", was demonstrated in July 1959 and consisted of a 65,000 word dictionary and a custom tube-based computer to do the lookups.[3]Texts were hand-copied ontopunched cardsusing custom Cyrillic terminals, and then input into the machine for translation. The results were less than impressive, but were enough to suggest that a larger and faster machine would be a reasonable development. In the meantime, the Mark I was applied to translations of the Soviet newspaper,Pravda. The results continued to be questionable, but King declared it a success, stating inScientific Americanthat the system was "...found, in an operational evaluation, to be quite useful by the Government."[3]
On 4 October 1957 theUSSRlaunchedSputnik 1, the first artificial satellite. This caused a wave of concern in the US, whose ownProject Vanguardwas caught flat-footed and then proved to repeatedly fail in spectacular fashion. This embarrassing turn of events led to a huge investment in US science and technology, including the formation ofDARPA,NASAand a variety of intelligence efforts that would attempt to avoid being surprised in this fashion again.
After a short period, the intelligence efforts centralized at theWright-Patterson Air Force Baseas the Foreign Technology Division (FTD, now known as theNational Air and Space Intelligence Center), run by the Air Force with input from theDIAand other organizations. FTD was tasked with the translation of Soviet and otherWarsaw Bloctechnical and scientific journals so researchers in the "west" could keep up to date on developments behind theIron Curtain. Most of these documents were publicly available, but FTD also made a number of one-off translations of other materials upon request.
Assuming there was a shortage of qualified translators, the FTD became extremely interested in King's efforts at IBM. Funding for an upgraded machine was soon forthcoming, and work began on a "Mark II" system based around a transistorized computer with a faster, higher-capacity 10-inch glass-based optical disc spinning at 2,400 RPM. Another addition was anoptical character readerprovided by a third party, which they hoped would eliminate the time-consuming process of copying the Russian text onto machine-readable cards.[3]
In 1960 the Washington team also joined IBM, bringing their dictionary efforts with them. The dictionary continued to expand as additional storage was made available, reaching 170,000 words and terms by the time it was installed at the FTD. A major software update was also incorporated in the Mark II, which King referred to as "dictionary stuffing". Stuffing was an attempt to deal with the problems of ambiguous words by "stuffing" prefixes onto them from earlier words in the text.[3]These modified words would match with similarly stuffed words in the dictionary, reducing the number of false positives.
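The "dictionary stuffing" idea can be sketched as follows: an ambiguous word is looked up with the preceding word "stuffed" onto it, so that a context-specific dictionary entry wins over the ambiguous base entry. All entries here are invented for illustration and do not reproduce the Mark II's actual dictionary.

```python
# Toy store with one ambiguous word and two "stuffed" entries
# (all invented, transliterated examples).
DICTIONARY = {
    "luk": "onion/bow",        # ambiguous on its own
    "strelyat+luk": "bow",     # stuffed: preceded by "strelyat" (shoot)
    "yest+luk": "onion",       # stuffed: preceded by "yest" (eat)
}

def translate(words):
    """Prefer a stuffed (context-specific) entry over the bare one."""
    out, prev = [], None
    for w in words:
        stuffed = (prev + "+" + w) if prev else w
        out.append(DICTIONARY.get(stuffed,
                                  DICTIONARY.get(w, "[" + w + "]")))
        prev = w
    return out

print(translate(["strelyat", "luk"]))   # stuffed entry picks "bow"
```

Matching on the stuffed form reduces false positives exactly as described: the ambiguous `luk` is resolved by the word before it.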
In 1962 King left IBM forItek, a military contractor in the process of rapidly acquiring new technologies. Development at IBM continued, and the system went fully operational at FTD in February 1964. The system was demonstrated at the1964 New York World's Fair. The version at the Fair included a 150,000 word dictionary, with about 1/3 of the words in phrases. About 3,500 of these were stored incore memoryto improve performance, and an average speed of 20 words per minute was claimed. The results of the carefully selected input text was quite impressive.[5]After its return to the FTD, it was used continually until 1970, when it was replaced by a machine runningSYSTRAN.[6]
In 1964 theUnited States Department of Defensecommissioned the United StatesNational Academy of Sciences(NAS) to prepare a report on the state of machine translation. The NAS formed the "Automatic Language Processing Advisory Committee", orALPAC, and published their findings in 1966. The report,Language and Machines: Computers in Translation and Linguistics, was highly critical of the existing efforts, demonstrating that the systems were no faster than human translations, while also demonstrating that the supposed lack of translators was in fact a surplus, and as a result ofsupply and demandissues, human translation was relatively inexpensive – about $6 per 1,000 words. Worse, the FTD was slower as well; tests using physics papers as input demonstrated that the translator was "10 percent less accurate, 21 percent slower, and had a comprehension level 29 percent lower than when he used human translation."[7]
The ALPAC report was as influential as the Georgetown experiment had been a decade earlier; in the immediate aftermath of its publication, the US government suspended almost all funding for machine translation research.[8]Ongoing work at IBM and Itek had ended by 1966, leaving the field to the Europeans, who continued development of systems like SYSTRAN and Logos.
|
https://en.wikipedia.org/wiki/Automatic_Language_Translator
|
Anautomotive head unit, sometimes called theinfotainment system,[1]is avehicle audiocomponent providing a unified hardware interface for the system, including screens, buttons and system controls for numerous integrated information and entertainment functions.
Other names for automotive head units include car stereo, carreceiver, deck, in-dash stereo, and dash stereo.
Central to a vehicle's sound and information systems, head units are located prominently in the center of thedashboardor console, and provide an integrated electronic package.
The head unit provides a user interface for the vehicle's information and entertainment media components:AM/FM radio,satellite radio,DVDs/CDs,cassette tapes(although these are now uncommon), USBMP3,dashcams,GNSS navigation,Bluetooth,Wi-Fi, and sometimes vehicle systems status. Moreover, it may provide control of audio functions including volume, band, frequency, speaker balance, speaker fade, bass, treble,equalization, and so on.[2]With the advent of dashcams, GNSS navigation, andDVDs, head units with video screens are widely available, integratingvoice controlandgesture recognition.
The original standard head unit sizes are defined byISO 7736, developed by theDeutsches Institut für Normung (DIN):
Single DIN(180 mm × 50 mm or 7.09 in × 1.97 in) in Europe, South America, and Australasia
Double DIN(180 mm × 100 mm or 7.09 in × 3.94 in) in Japan, the UK, and North America.
For both single and double DIN units, ISO 10487 is the connectors standard for connecting the head unit to the car's electrical system.[4]
Manufacturers offering DIN head units with standard connectors (called universal head units) includePioneer,Sony,Alpine,Kenwood,Eclipse,JVC, Peach Auto (Hong Kong), Boyo,Dual,Visteon,AdventandBlaupunkt.
|
https://en.wikipedia.org/wiki/Automotive_head_unit
|
Brainais avirtual assistant[1][2]and speech-to-text dictation[3]application forMicrosoft Windowsdeveloped by Brainasoft.[4]Braina usesnatural language interface,[5]speech synthesis, andspeech recognitiontechnology[6]to interact with its users and allows them to use natural language sentences to perform various tasks on a computer. The name Braina is a short form of "Brain Artificial".[7][8]
Braina is marketed as aMicrosoft Copilotalternative.[9]It provides a voice interface for several locally run[10]and cloudlarge language models, including the latest LLMs from providers such as OpenAI, Anthropic, Google, Grok, Meta, Mistral, etc., while improving data privacy.[7]Braina also allows responses from its in-house large language models like Braina Swift and Braina Pinnacle.[11]It has an "Artificial Brain"[7]feature that provides persistent memory support for supported LLMs.[12]
Braina is able to carry out various tasks on a computer, including automation.[13][14]Braina can take commands inputted through typing or through dictation[3][15][13][16]to store reminders, find information online, perform mathematical operations, open files,generate images from text, transcribe speech, and control open windows or programs.[17][18][4][19]Braina adapts to user behavior over time with the goal of better anticipating needs.[13]
Braina Pro can type spoken words into an active window at the location of a user's cursor.[15][13][16]Itsspeech recognitiontechnology supports more than 100 languages and dialects[2][7][20][13]and is able to isolate the recognition of a user's voice from disturbing environmental factors such as background noise,[21]other human voices, or external devices. Braina can also be taught to dictate uncommon legal, medical, and scientific terms.[13][22]Users can also teach Braina uncommon names and vocabulary.[16]Users can edit or correct dictated text without using a keyboard or mouse by giving built-in voice commands.[13]
Braina can read aloud selected texts, such as e-books.[4][13]
Braina can automate computer tasks.[14]It lets users create custom voice commands to perform tasks such as opening files, programs, websites, or emails, as well as executing keyboard or mouse macros.[4][23][24][13][25]
Braina can transcribe media file formats such asWAV,MP3, andMP4into text.[26]
Braina can store and recall notes and reminders. These can include scheduled or unscheduled commands, checklist items, alarms, chat conversations, memos, website snippets, bookmarks, contacts.[13][4][27]
Brainasoft states that Braina can generate images from text usingtext-to-image modelsincludingStable DiffusionandDALL-E.[28]
In addition to the desktop version for Windows operating systems,[28]Braina is also available for the iOS and Android operating systems.[29][3][30]
The mobile version of Braina has a feature allowing remote management of a Windows PC connected viaWi-Fi.[31]
Braina is distributed in multiple modes. These include Braina Lite, a freeware version with limitations,[3]and premium versions Braina Pro,[13]Pro Plus, and Pro Ultra.[32]
Some additional features in the Pro version include dictation, custom vocabulary,[21]video transcription, automation,[3]custom voice commands, and persistent LLM memory.
TechRadarhas consistently listed Braina as one of the best dictation and virtual assistant apps between 2015 and 2024.[4][33][34][35]
|
https://en.wikipedia.org/wiki/Braina
|
Dragon NaturallySpeaking(also known asDragon for PC,orDNS)[1]is aspeech recognitionsoftware package developed by Dragon Systems ofNewton, Massachusetts, which was acquired in turn byLernout & HauspieSpeech Products,Nuance Communications, andMicrosoft. It runs onWindowspersonal computers. Version 15 (Professional Individual and Legal Individual),[2]which supports 32-bit and 64-bit editions ofWindows 7,8and10, was released in August 2016.[3][4]
Dragon NaturallySpeaking uses a minimal user interface. As an example, dictated words appear in a floatingtooltipas they are spoken (though there is an option to suppress this display to increase speed), and when the speaker pauses, the programtranscribesthe words into theactive windowat the location of the cursor. (Dragon does not support dictating to background windows.) The software has three primary areas of functionality: voice recognition in dictation with speech transcribed as written text, recognition of spoken commands, andtext-to-speech: speaking text content of a document. Voice profiles can be accessed by different computers in a networked environment, although the audio hardware and configuration must be identical to those of the machine generating the configuration. The Professional version allows creation of custom commands to control programs or functions not built into NaturallySpeaking.
Dr. James Bakerlaid out the description of a speech understanding system called DRAGON in 1975.[5]In 1982 he and Dr.Janet M. Baker, his wife, founded Dragon Systems to release products centered around their voice recognition prototype.[6]He was President of the company and she was CEO.
DragonDictatewas first released forDOS, and utilizedhidden Markov models, a probabilistic method for temporalpattern recognition. At the time, the hardware was not powerful enough to address the problem ofword segmentation, and DragonDictate was unable to determine the boundaries of words during continuous speech input. Users were forced to enunciate one word at a time, clearly separated by a small pause after each word. DragonDictate was based on atrigrammodel, and is known as a discrete utterance speech recognition engine.[7]
Dragon Systems released NaturallySpeaking 1.0 as their first continuous dictation product in 1997.[8]
The company was then purchased in June 2000 byLernout & Hauspie, a Belgium-based corporation that was subsequently found to have been perpetrating financial fraud.[9]Following the all-share deal advised byGoldman Sachs, Lernout & Hauspie declared bankruptcy in November 2000. The deal was not originally supposed to be all stock and the unavailability of the Goldman Sachs team to advise concerning the change in terms was one of the grounds of the Bakers' subsequent lawsuit. The Bakers had received stock worth hundreds of millions of US dollars, but were only able to sell a few million dollars' worth before the stock lost all its value as a result of the accounting fraud. The Bakers sued Goldman Sachs for negligence, intentional misrepresentation and breach of fiduciary duty, which in January 2013 led to a 23-day trial in Boston. The jury cleared Goldman Sachs of all charges.[10]Following the bankruptcy of Lernout & Hauspie, the rights to the Dragon product line were acquired byScanSoftofBurlington, Massachusetts, also a Goldman Sachs client. In 2005 ScanSoft launched ade factoacquisition ofNuance Communications, and rebranded itself asNuance.[11]
As of 2012,LG Smart TVsincluded voice recognition feature powered by the same speech engine as Dragon NaturallySpeaking.[12]In 2014, following the discontinuation ofDragonDictateforMac, a product dating back to Nuance's 2010 purchase ofMacSpeech Dictate, NaturallySpeaking gained Mac compatibility, though Mac support was later terminated in 2018.[13]
In 2021,Microsoftannounced plans to acquire Nuance, and therefore Dragon NaturallySpeaking.[14]The acquisition completed in March 2022.[15][16]
Dragon NaturallySpeaking 12 is available in the following languages; UK English, US English, French, German, Italian, Spanish, Dutch, and Japanese (aka "Dragon Speech 11" in Japan).
|
https://en.wikipedia.org/wiki/Dragon_NaturallySpeaking
|
Fluency Voice Technologywas a company that developed and sold packagedspeech recognitionsolutions for use incall centers. Fluency's speech recognition solutions were used by call centers worldwide to improve customer service and reduce costs, and were available both on-premises and hosted.
1998 – Fluency was created as a spin-off from the Voice Research & Development team of a company called netdecisions. ThisR&Doperation was established in Cambridge, UK. The focus of the development was speech recognition systems based on theVXMLstandard.
2001 – Fluency became a separate entity in May 2001. Fluency began the creation of a software development platform specifically aimed at automating call center activities. This platform became Fluency's VoiceRunner.
2002 to 2004 – Fluency accomplished many successful deployments at customer sites such as National Express and Barclaycard.
2003 – Fluency expanded into the USA. Fluency also acquiredVocalisof Cambridge, UK in August 2003.
2004 – Fluency received a £6 million investment from leading European venture capitalists, established a globalOEMpartnership withAvaya, and acquired SRC Telecom.
2008 – Fluency was acquired by Syntellect Ltd.
Call Centers around the world use Fluency to improve service and reduce costs. They includeTravelodge,Standard Life Bank,Sutton and East Surrey Water,Pizza Hut,CWT,Barclays,Powergen,First Choice,OutRight,J D Williams,Capital Blue Cross,Chelsea Building Society,EDF,bss,TV LicensingandCapita Software Services.
|
https://en.wikipedia.org/wiki/Fluency_Voice_Technology
|
Google Voice SearchorSearch by Voiceis aGoogleproduct that allows users to useGoogle Searchbyspeakingon amobile phoneor computer, i.e. to have the device perform a search based on spoken input rather than typed text.
Initially named Voice Action, it allowed one to give speech commands to anAndroidphone. Once available only for the U.S. English locale, commands were later recognized and replied to in American, British, and Indian English, as well as Filipino, French, Italian, German, and Spanish.[1]
In Android 4.1+ (Jelly Bean), it was merged withGoogle Now.
In August 2014, a new feature was added to Google Voice Search, allowing users to choose up to five languages and the app will automatically understand the spoken language.[2]
On June 14, 2011,Googleannounced at its Inside Google Search event that it would start to roll out Voice Search on Google.com during the coming days.[3][4]
Google rolled out the support, but only for theGoogle Chromebrowser.
Google Voice Search was a tool fromGoogle Labsthat allowed someone to use their phone to make aGooglequery. After the user called (650) 623-6706, the number of Google Voice's search system, they would wait for the wordsSay your Search Keywordsand then say the keywords. Next, they would either wait to have the page updated, or click on a link to bring up the search page the user requested. Both the demo of this service and the page have since been shut down. Since the introduction of the service, products from Google, such asGOOG-411,Google Mapsand Google Mobile App, have been developed to usespeech recognitiontechnology in various ways.
On October 30, 2012, Google released a new Google Search app foriOS, which featured an enhanced Google Voice Search function, similar to that of theVoice Search function found in Google's Android Jelly Beanand aimed to compete with Apple's ownSirivoice assistant.[5]The new app has been compared favorably by reviewers to Siri and The Unofficial Apple Weblog's side-by-side comparison said that Google's Voice Search on iOS is "amazingly quick and relevant, and has more depth [than Siri]".[6]Of note is that as of May 2016 20% of search queries onmobile deviceswere done through voice with the number expected to grow.[7]
The following languages and variants are partially supported in Google Voice Search:[8]
In the summer of 2008, Google added voice search to the BlackBerry Pearl version of Google Maps for mobile, allowing Pearl users to speak their searches in addition to typing them. See http://www.google.com/mobile/blackberry/maps.html for more information.
The Google Mobile app for BlackBerry and Nokia (Symbian) phones allows users to search Google by speaking their queries at the touch of a button. See http://www.google.com/mobile/apple/app.html for more information. Google also introduced voice search to all "Google Experience" Android phones with the 1.1 platform update, which built the functionality into the Google Search widget.
In November 2008, Google added voice search to the Google Mobile App on iPhone. With a later update, Google announced Voice Search for the iPod touch, which requires a third-party microphone.
On August 5, 2009, T-Mobile launched theMyTouch 3Gwith Google, which features one-touch Google Voice Search.
Since March 2010, a beta derivative of Google Voice Search has been used on YouTube to provide optional automatic caption annotations for videos that lack them. The feature is aimed at the hearing-impaired and, at present, is available only to English-speaking users.[23]
|
https://en.wikipedia.org/wiki/Google_Voice_Search
|
IBM ViaVoice was a range of language-specific continuous speech recognition software products offered by IBM. The current version is designed primarily for use in embedded devices. The latest stable version of ViaVoice was 9.0, which was able to transfer text directly into Microsoft Word.
The most important process for correct use of the software is 'quick training', or 'enrollment': the user reads many specific words and sentences so that the software adapts itself to the user's voice and intonation. It takes an hour or more and can be divided into several sessions. Users can improve decoding accuracy by reading prepared texts of a few hundred sentences; the recorded data is used to tune the acoustic model to that specific user. In addition, user-specific text files can be parsed to tune the language model, and correction of misrecognized words further improves subsequent decoding accuracy.
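The idea behind enrollment, nudging a generic acoustic model toward one speaker's voice, can be illustrated with a maximum a posteriori (MAP) style update of a Gaussian mean. This is a textbook adaptation technique shown as a minimal sketch; IBM's actual (unpublished) algorithm, the function name, and the relevance factor here are illustrative, not ViaVoice's real internals:

```python
def map_adapt_mean(prior_mean, samples, tau=16.0):
    """Shift one Gaussian mean of an acoustic model toward speaker data.

    prior_mean : speaker-independent mean for one acoustic state
    samples    : feature values observed for that state during enrollment
    tau        : relevance factor; larger values trust the prior more
    """
    n = len(samples)
    if n == 0:
        return prior_mean  # no enrollment data: keep the generic model
    sample_mean = sum(samples) / n
    # Weighted compromise between the generic model and the new speaker:
    # with little data the prior dominates, with much data the speaker does.
    return (tau * prior_mean + n * sample_mean) / (tau + n)

# Plenty of enrollment data pulls the mean most of the way to the speaker.
print(map_adapt_mean(0.0, [1.0] * 160))  # ≈ 0.909
# Very little data leaves it near the generic prior.
print(map_adapt_mean(0.0, [1.0] * 4))    # = 0.2
```

This is why a longer enrollment session (more sentences read) yields better accuracy: the per-speaker statistics increasingly outweigh the speaker-independent prior.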
Individual language editions may have different features, specifications, technical support, and microphone support. Some of the products or editions available are:
IBM ViaVoice 98 was available in Home, Office and Executive editions in the following languages: Chinese, French, German, Italian, Japanese, Spanish, UK English, and US English. The Executive Edition allowed users to dictate into most Windows applications and control them by voice.
Designed for Windows 95, 98 and NT 4.0, it reportedly also works well on Windows 7.
The Executive package includes:
Prior to the development of ViaVoice, IBM launched a product in 1993 named theIBM Personal Dictation System(later renamed toVoiceType)[8]which ran on Windows,AIX, andOS/2.[9]In 1997, ViaVoice was first introduced to the general public. Two years later, in 1999, IBM released a free of charge version of ViaVoice.[10]
In 2003, IBM awarded ScanSoft, which owned the competitive productDragon NaturallySpeaking, exclusive global distribution rights to ViaVoice Desktop products for Windows andMac OS X.[11]Two years later,Nuancemerged with ScanSoft.[12]
|
https://en.wikipedia.org/wiki/IBM_ViaVoice
|
Keyword spotting(or more simply,word spotting) is a problem that was historically first defined in the context ofspeech processing.[1][2]In speech processing, keyword spotting deals with the identification ofkeywordsinutterances.
Keyword spotting is also defined as a separate but related problem in the context of document image processing.[1] There, keyword spotting is the problem of finding all instances of a query word in a scanned document image without performing full text recognition on the document.
The first works in keyword spotting appeared in the late 1980s.[2]
A special case of keyword spotting is wake word (also called hot word) detection, used by personal digital assistants such as Alexa or Siri to "wake up" the dormant device when its name is spoken.
In the United States, theNational Security Agencyhas made use of keyword spotting since at least 2006.[3]This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of suspicious keywords. Recordings can be indexed and analysts can run queries over the database to find conversations of interest.IARPAfunded research into keyword spotting in theBabel program.
Some algorithms used for this task are:
Keyword spotting in document image processing can be seen as an instance of the more generic problem ofcontent-based image retrieval(CBIR).
Given a query, the goal is to retrieve the most relevant instances of words in a collection of scanned documents.[1]The query may be a text string (query-by-string keyword spotting) or a word image (query-by-example keyword spotting).
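Query-by-example spotting is commonly illustrated with dynamic time warping (DTW), which scores how well the query's feature sequence aligns with each candidate segment despite differences in length. A minimal sketch over 1-D feature sequences, hedged accordingly: real systems operate on multidimensional features (e.g. MFCCs for speech, column profiles for word images), and the toy sequences below are invented for illustration:

```python
def dtw_distance(a, b):
    """Dynamic time warping cost between two feature sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]

def spot(query, candidates):
    """Rank candidate segments by DTW distance to the query, best first."""
    return sorted(candidates, key=lambda c: dtw_distance(query, c))

query = [1, 3, 4, 3, 1]
segments = [[1, 3, 3, 4, 3, 1], [5, 5, 5], [0, 1, 2]]
print(spot(query, segments)[0])  # [1, 3, 3, 4, 3, 1] — the stretched query
```

The time-stretched copy of the query ranks first even though it has a different length, which is exactly the invariance keyword spotting needs when the same word is spoken (or written) at different speeds or sizes.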
|
https://en.wikipedia.org/wiki/Keyword_spotting
|
Kinectis a discontinued line ofmotion sensinginput devicesproduced byMicrosoftand first released in 2010. The devices generally containRGBcameras, andinfraredprojectors and detectors that map depth through eitherstructured lightortime of flightcalculations, which can in turn be used to perform real-timegesture recognitionand body skeletal detection, among other capabilities. They also contain microphones that can be used forspeech recognitionandvoice control.
Kinect was originally developed as amotion controllerperipheral forXboxvideo game consoles, distinguished from competitors (such as Nintendo'sWii Remoteand Sony'sPlayStation Move) by not requiring physical controllers. The first-generation Kinect was based on technology from Israeli companyPrimeSense, and unveiled atE32009 as a peripheral forXbox 360codenamed "Project Natal". It was first released on November 4, 2010, and would go on to sell eight million units in its first 60 days of availability. The majority of the games developed for Kinect werecasual, family-oriented titles, which helped to attract new audiences to Xbox 360, but did not result in wide adoption by the console's existing, overall userbase.
As part of the 2013 unveiling of Xbox 360's successor,Xbox One, Microsoft unveiled a second-generation version of Kinect with improved tracking capabilities. Microsoft also announced that Kinect would be a required component of the console, and that it would not function unless the peripheral was connected. The requirement proved controversial among users and critics due to privacy concerns, prompting Microsoft to backtrack on the decision. However, Microsoft still bundled the new Kinect with Xbox One consoles upon their launch in November 2013. A market for Kinect-based games still did not emerge after the Xbox One's launch; Microsoft would later offer Xbox One hardware bundles without Kinect included, and later revisions of the console removed the dedicated ports used to connect it (requiring a powered USB adapter instead). Microsoft ended production of Kinect for Xbox One in October 2017.
Kinect has also been used as part of non-game applications in academic and commercial environments, as it was cheaper and more robust than other depth-sensing technologies at the time. While Microsoft initially objected to such applications, it later releasedsoftware development kits(SDKs) for the development ofMicrosoft Windowsapplications that use Kinect. In 2020, Microsoft releasedAzure Kinectas a continuation of the technology integrated with theMicrosoft Azurecloud computing platform. Part of the Kinect technology was also used within Microsoft'sHoloLensproject. Microsoft discontinued the Azure Kinect developer kits in October 2023.[12][13]
The origins of the Kinect date to around 2005, when technology vendors were starting to develop depth-sensing cameras. Microsoft had been interested in a 3D camera for the Xbox line earlier but, because the technology had not yet been refined, had placed it in the "Boneyard", a collection of promising technology it could not immediately work on.[14]
In 2005, PrimeSense was founded by mathematicians and engineers from Israel to develop the "next big thing" for video games, incorporating cameras capable of mapping a human body in front of them and sensing hand motions. They showed off their system at the 2006 Game Developers Conference, where Microsoft's Alex Kipman, the general manager of hardware incubation, saw the potential of PrimeSense's technology for the Xbox system. Microsoft began discussions with PrimeSense about what would need to be done to make their product more consumer-friendly: not only improvements in the capabilities of depth-sensing cameras, but a reduction in size and cost, and a means to manufacture the units at scale. PrimeSense spent the next few years working on these improvements.[14]
Nintendoreleased theWiiin November 2006. The Wii's central feature was theWii Remote, a handheld device that was detected by the Wii through a motion sensor bar mounted onto a television screen to enablemotion controlled games. Microsoft felt pressure from the Wii, and began looking into depth-sensing in more detail with PrimeSense's hardware, but could not get to the level of motion tracking they desired. While they could determine hand gestures, and sense the general shape of a body, they could not do skeletal tracking. A separate path within Microsoft looked to create an equivalent of the Wii Remote, considering that this type of unit may become standardized similar to how two-thumbstick controllers became a standard feature.[14]However, it was still ultimately Microsoft's goal to remove any device between the player and the Xbox.[14]
Kudo Tsunoda and Darren Bennett joined Microsoft in 2008, and began working with Kipman on a new approach to depth-sensing aided by machine learning to improve skeletal tracking. They internally demonstrated this and projected where they believed the technology could be in a few years, which generated strong interest in funding further development. This also occurred at a time when Microsoft executives wanted to abandon the Wii-like motion-tracking approach and favored the depth-sensing solution, to present a product that went beyond the Wii's capabilities. The project was greenlit by late 2008, with work starting in 2009.[14]
The project was codenamed "Project Natal" after the Brazilian cityNatal, Kipman's birthplace. Additionally, Kipman recognized theLatinorigins of the word "natal" to mean "to be born", reflecting the new types of audiences they hoped to draw with the technology.[15]Much of the initial work was related toethnographicresearch to see how video game players' home environments were laid out, lit, and how those with Wiis used the system to plan how Kinect units would be used. The Microsoft team discovered from this research that the up-and-down angle of the depth-sensing camera would either need to be adjusted manually, or would require an expensive motor to move automatically. Upper management at Microsoft opted to include the motor despite the increased cost to avoid breaking game immersion. Kinect project work also involved packaging the system for mass production and optimizing its performance. Hardware development took around 22 months.[14]
During hardware development, Microsoft engaged with software developers to use Kinect. Microsoft wanted to make games that would be playable by families since Kinect could sense multiple bodies in front of it. One of the first internal titles developed for the device was the pack-in gameKinect Adventuresdeveloped by Good Science Studio that was part ofMicrosoft Studios. One of the game modes ofKinect Adventureswas "Reflex Ridge", based on the JapaneseBrain Wallgame where players attempt to contort their bodies in a short time to match cutouts of a wall moving at them. This type of game was a key example of the type of interactivity they wanted with Kinect, and its development helped feed into the hardware improvements.[14]
Nearing the planned release, Microsoft faced the problem of testing Kinect widely across various room types and on different bodies, accounting for age, gender, and race among other factors, while keeping the details of the unit confidential. Microsoft ran a company-wide program inviting employees to take Kinect units home to test them. It also brought in non-gaming divisions, including its Microsoft Research, Microsoft Windows, and Bing teams, to help complete the system, and established its own large-scale manufacturing facility to mass-produce and test Kinect units.[14]
Kinect was first announced to the public as "Project Natal" on June 1, 2009, during Microsoft's press conference atE3 2009; film directorSteven Spielbergjoined Microsoft'sDon Mattrickto introduce the technology and its potential.[14][16]Three demos were presented during the conference—Microsoft'sRicochetandPaint Party, andLionhead Studios'Milo & Katecreated byPeter Molyneux—while a Project Natal-enabled version ofCriterion Games'Burnout Paradisewas shown during the E3 exhibition.[17][18]By E3 2009, the skeletal mapping technology was capable of simultaneously tracking four people,[19][20][21][22]with a feature extraction of 48skeletalpoints on a human body at 30 Hz.[22][23]Microsoft had not committed to a release date for Project Natal at E3 2009, but affirmed it would be after 2009, and likely in 2010 to stay competitive with the Wii and thePlayStation Move(Sony Interactive Entertainment's own motion-sensing system using hand-held devices).[24]
In the months following E3 2009, rumors emerged of a new Xbox 360 console associated with Project Natal, either a retail configuration that incorporated the peripheral,[25][26] or a hardware revision or upgrade to support it.[27][28] Microsoft publicly dismissed the reports and repeatedly emphasized that Project Natal would be fully compatible with all Xbox 360 consoles. Microsoft indicated that it considered Project Natal a significant initiative, as fundamental to the Xbox brand as Xbox Live,[24] with a planned launch akin to that of a new Xbox console platform.[29] Microsoft's vice president Shane Kim said the company did not expect Project Natal to extend the anticipated lifetime of the Xbox 360, which had been planned to last ten years through 2015, nor to delay the launch of its successor.[20][30]
Following the E3 2009 show and through 2010, the Project Natal team experimentally adapted numerous games to Kinect-based control schemes to help evaluate usability. Among these games were Beautiful Katamari and Space Invaders Extreme, which were demonstrated at Tokyo Game Show in September 2009.[31] According to Tsunoda, adding Project Natal-based control to pre-existing games involved significant code alterations, making it unlikely that existing games could be patched through software updates to support the unit.[32] Microsoft also expanded its outreach to third-party developers to encourage them to develop Project Natal games. Companies like Harmonix and Double Fine quickly took to Project Natal, saw its potential, and committed to developing games for the unit, such as the launch title Dance Central from Harmonix.[14]
Although its sensor unit was originally planned to contain a microprocessor that would perform operations such as the system's skeletal mapping, Microsoft reported in January 2010 that the sensor would no longer feature a dedicated processor; instead, processing would be handled by one of the processor cores of the Xbox 360's Xenon CPU.[33] Around this time, Kipman estimated that Kinect would take only about 10 to 15% of the Xbox 360's processing power.[34] While this was a small fraction of the Xbox 360's capabilities, industry observers believed it further pointed to difficulties in adapting pre-existing games to use Kinect, as the motion tracking would add to a game's already high computational load and could exceed the Xbox 360's capabilities. These observers believed the industry would instead develop games specific to Kinect's features.[33]
During Microsoft'sE3 2010press conference, it was announced that Project Natal would be officially branded as Kinect, and be released in North America on November 4, 2010.[35]Xbox Live directorStephen Toulousestated that the name was aportmanteauof the words "kinetic" and "connection", key aspects of the Kinect initiative.[36][37]Microsoft and third-party studios exhibited Kinect-compatible games during the E3 exhibition.[38]A newslim revision of the Xbox 360was also unveiled to coincide with Kinect's launch, which added a dedicated port for attaching the peripheral;[39]Kinect would be sold at launch as a standalone accessory for existing Xbox 360 owners, and as part of bundles with the new slim Xbox 360. All units includedKinect Adventuresas apack-in game.[40][41]
Microsoft continued to refine the Kinect technology in the months leading up to its November 2010 launch. By launch, Kipman reported that Kinect's use of the Xbox 360's processor had been reduced from the 10–15% reported in January 2010 to a "single-digit percentage".[42]
Xbox product director Aaron Greenberg stated that Microsoft's marketing campaign for Kinect would carry a similar scale to a console launch;[41]the company was reported to have budgeted $500 million on advertising for the peripheral, such as television and print ads, campaigns withBurger King[43]andPepsi,[44]and a launch event inNew York City'sTimes Squareon November 3 featuring a performance byNe-Yo.[45]Kinect was launched in North America on November 4, 2010;[2]in Europe on November 10, 2010;[1]in Australia, New Zealand, and Singapore on November 18, 2010;[4][46][47]and in Japan on November 20, 2010.[48]
The Kinect release for the Xbox 360 was estimated to have sold eight million units in the first sixty days of release, earning the hardware the Guinness World Record for the "Fastest-Selling Consumer Electronics Device".[14] Over 10 million had been sold by March 2011.[14] While seemingly successful, its launch titles were primarily family-oriented games (which could be designed around Kinect's functionality and limitations), which may have drawn new audiences but did not have the selling power of major franchises like Battlefield and Call of Duty, which were primarily designed around the Xbox 360 controller. Only an estimated 20% of the 55 million Xbox 360 owners had purchased the Kinect.[14] The Kinect team recognized some of the downsides of pairing more traditional games with Kinect, and continued developing the unit toward a second generation, for example by reducing the latency of motion detection and improving speech recognition. Microsoft provided news of these changes to third-party developers to help them anticipate how the improvements could be integrated into their games.[14]
Concurrent with the Kinect improvements, Microsoft's Xbox hardware team had started planning for theXbox Onearound mid-2011. Part of early Xbox One specifications was that the new Kinect hardware would be automatically included with the console, so that developers would know that Kinect hardware would be available for any Xbox One, and hoping to encourage developers to take advantage of that.[14]The Xbox One was first formally announced on May 23, 2013, and shown in more detail atE3 2013in June. Microsoft stated at these events that the Xbox One would include the updated Kinect hardware and it would be required to be plugged in at all times for the Xbox One to function. This raised concerns across the video game media: privacy advocates argued that Kinect sensor data could be used fortargeted advertising, and to perform unauthorizedsurveillanceon users. In response to these claims, Microsoft reiterated that Kinect voice recognition and motion tracking can be disabled by users, that Kinect data cannot be used for advertising per itsprivacy policy, and that the console would not redistribute user-generated content without permission.[49][50][51][52][53][54]Several other issues with the Xbox One's original feature set had also come up, such as the requirement to be always connected to the Internet, and created a wave of consumer backlash against Microsoft.[14]
Microsoft announced in August 2013 that it had made several changes to the planned Xbox One release in response to the backlash. Among these was that the system would no longer require a Kinect unit to be plugged in to work, though Microsoft still planned to package the Kinect with all Xbox One systems. However, this also required Microsoft to set a US$500 price point for the Xbox One/Kinect system at its November 2013 launch, US$100 more than the competing PlayStation 4 launched in the same time frame, which did not include any motion-sensing hardware.[14] In the months after the Xbox One release, Microsoft decided to launch a Kinect-less Xbox One system in March 2014 at the same price as the PlayStation 4, after concluding that the Kinect for Xbox One had not gotten the expected developer support and that sales of the Xbox One were lagging due to the higher price tag of the Kinect-bundled system. Richard Irving, a program group manager who oversaw Kinect, said that Microsoft had felt it was more important to give developers and consumers the option of developing for or purchasing the Kinect rather than forcing the unit on them.[14]
The removal of Kinect from the Xbox One retail package was the start of the unit's rapid decline and phase-out within Microsoft. Developers like Harmonix that had originally been targeting Kinect games for the Xbox One put those games on hold until they knew there was a large enough Kinect install base to justify release, which resulted in a lack of games for the Kinect and reduced any consumer incentive to buy the separate unit.[14] Microsoft became bearish on the Kinect, making no mention of the unit at E3 2015 and announcing at E3 2016 that the upcoming Xbox One hardware revision, the Xbox One S, would not have a dedicated Kinect port; Microsoft offered a USB adapter for the Kinect, provided free during an initial promotional period after the console's launch.[55] The more powerful Xbox One X also lacked the Kinect port and required this adapter.[56] Even though developers still released Kinect-enabled games for the Xbox One, Microsoft's lack of statements about the Kinect during this period led to claims that it was a dead project at Microsoft.[57][58]
Microsoft formally announced it would stop manufacturing Kinect for Xbox One on October 25, 2017.[10]Microsoft eventually discontinued the adapter in January 2018, stating that they were shifting to manufacture other accessories for the Xbox One and personal computers that were more in demand. This is considered by the media to be the point where Microsoft ceased work on the Kinect for the Xbox platform.[14][56]
While the Kinect unit for the Xbox platform had petered out, the Kinect was being used in academia and other applications since around 2011. The functionality of the unit along with its lowUS$150cost was seen to be an inexpensive means to add depth-sensing to existing applications, offsetting the high cost and unreliability of other 3D camera options at the time. Inrobotics, Kinect's depth-sensing would enable robots to determine the shape and approximate distances to obstacles and maneuver around them.[59]Within the medical field, the Kinect could be used to monitor the shape and posture of a body in a quantifiable manner to enable improved health-care decisions.[60]
Around November 2010, after Kinect's launch, scientists, engineers, and hobbyists hacked into the Kinect to determine what hardware and internal software it used, and found how to connect and operate the Kinect with Microsoft Windows and OS X over USB, whose unsecured data streams from the various camera elements could be read. This led to prototype demos of other possible applications, such as a gesture-based user interface for the operating system similar to that shown in the film Minority Report, as well as pornographic applications.[61][62] This mirrored similar work to hack the Wii Remote a few years earlier to use its low-cost hardware for more advanced applications beyond gameplay.[63]
Adafruit Industries, having envisioned some of the possible applications of the Kinect outside of gaming, issued a security challenge related to the Kinect, offering prize money for the successful development of an open source software development kit (SDK) and hardware drivers for the Kinect, which came to be known as Open Kinect.[64] Adafruit named the winner, Héctor Martín, by November 10, 2010,[65][66] who had produced a Linux driver that allows the use of both the RGB camera and depth-sensing functions of the device.[67][68] It was later discovered that Johnny Lee, a core member of Microsoft's Kinect development team, had secretly approached Adafruit with the idea of a driver development contest and had personally financed it.[69] Lee had said of the efforts to open the Kinect that "This is showing us the future...This is happening today, and this is happening tomorrow," and had engaged Adafruit with the contest as he had been frustrated with trying to convince Microsoft's executives to explore the non-gaming avenue for the Kinect.[70]
Microsoft initially took issue with users hacking into the Kinect, stating it would incorporate additional safeguards into future iterations of the unit to prevent such hacks.[61] However, by the end of November 2010, Microsoft had reversed its original position and embraced the external efforts to develop the SDK.[71] Kipman, in an interview with NPR, said:
The first thing to talk about is, Kinect was not actually hacked. Hacking would mean that someone got to our algorithms that sit inside of the Xbox and was able to actually use them, which hasn't happened. Or, it means that you put a device between the sensor and the Xbox for means of cheating, which also has not happened. That's what we call hacking, and that's what we have put a ton of work and effort to make sure doesn't actually occur. What has happened is someone wrote an open-source driver for PCs that essentially opens the USB connection, which we didn't protect, by design, and reads the inputs from the sensor. The sensor, again, as I talked earlier, has eyes and ears, and that's a whole bunch of noise that someone needs to take and turn into signal.
In November 2010, PrimeSense, along with robotics firm Willow Garage and game developer Side-Kick, launched OpenNI, a not-for-profit group to develop portable drivers for the Kinect and other natural interface (NI) devices. Its first set of drivers, named NITE, was released in December 2010.[73][74] PrimeSense also worked with Asus to develop a motion-sensing device competing with the Kinect for personal computers; the resulting product, the Wavi Xtion, was released in China in October 2011.[75][76]
Microsoft announced in February 2011 that it was planning to release its own SDK for the Kinect within a few months; it was officially released on June 16, 2011, limited to non-commercial uses.[77][78] The SDK enabled users to access the skeletal motion recognition system for up to two persons and the Kinect microphone array, features that had not been part of the prior Open Kinect SDK.[79] Commercial interest in Kinect remained strong, with David Dennis, a product manager at Microsoft, stating "There are hundreds of organizations we are working with to help them determine what's possible with the tech".[80] Microsoft launched its Kinect for Windows program on October 31, 2011, releasing a new SDK to a small number of companies, including Toyota, Houghton Mifflin, and Razorfish, to explore what was possible.[80] At the 2012 Consumer Electronics Show in January, Microsoft announced that it would release a dedicated Kinect for Windows unit along with the commercial SDK on February 1, 2012. The device included some hardware improvements, including support for "near mode" to recognize objects about 50 centimetres (20 in) in front of the cameras. The Kinect for Windows device was listed at US$250, US$100 more than the original Kinect, since Microsoft considered the Xbox 360 Kinect to be subsidized through game purchases, Xbox Live subscriptions, and other costs.[70] At the launch, Microsoft stated that more than 300 companies from over 25 countries were working on Kinect-ready apps with the new unit.[81]
With the original announcement of the revised Kinect for Xbox One in 2013, Microsoft also confirmed it would have a second generation of Kinect for Windows based on the updated Kinect technology by 2014.[82] The new Kinect 2 for Windows was launched on July 15, 2014, at a US$200 price.[83] Microsoft opted to discontinue the original Kinect for Windows by the end of 2014.[84] However, in April 2015, Microsoft announced it was also discontinuing the Kinect 2 for Windows, instead directing commercial users to the Kinect for Xbox One, which Microsoft said "perform[s] identically". Microsoft stated that demand for the Kinect 2 for Windows was high and difficult to meet while also fulfilling Kinect for Xbox One orders, and that it had found commercial developers successfully using the Kinect for Xbox One in their applications without issue.[85]
With Microsoft's waning focus on Kinect, PrimeSense was bought byApple, Inc.in 2013, which incorporated parts of the technology into itsFace IDsystem foriOSdevices.[86][87]
Though Kinect had been cancelled, the ideas of it helped to spur Microsoft into looking more intoaccessibilityfor Xbox and its games. According toPhil Spencer, the head of Xbox at Microsoft, they received positive comments from parents of disabled and impaired children who were happy that Kinect allowed their children to play video games. These efforts led to the development of theXbox Adaptive Controller, released in 2018, as one of Microsoft's efforts in this area.[88]
Microsoft had abandoned the idea of Kinect for video games, but still explored the potential of Kinect beyond that. Microsoft's Director of Communications Greg Sullivan stated in 2018 that "I think one of the things that is beginning to be understood is that Kinect was never really just the gaming peripheral...It was always more."[89]Part of Kinect technology was integrated into Microsoft'sHoloLens, first released in 2016.[90]
In May 2018, Microsoft announced that it was working on a new Kinect hardware model for non-game applications that would integrate with its Azure cloud computing services. Microsoft envisioned that offloading some of Kinect's computational work to the cloud, together with more powerful Azure features such as artificial intelligence, would improve the accuracy of depth sensing, reduce power demand, and allow more compact units.[91] The Azure Kinect device was released on June 27, 2019, at a price of US$400; the SDK for the unit had been released in February 2019.[92]
Sky UK announced a new line of Sky Glass television sets, launched in 2022, that incorporate Kinect technology in partnership with Microsoft. Using the Kinect features, viewers can control the television through motion and voice commands, and the sets support social features such as social viewing.[93]
In October 2023, Microsoft announced that the Azure Kinect hardware kit would be discontinued, referring users to third-party suppliers for spare parts.[94]
The motion-sensing technology at the core of the Kinect is enabled through its depth sensing. The original Kinect for Xbox 360 used structured light for this: the unit projected a near-infrared pattern across the space in front of it, while an infrared sensor captured the reflected light pattern. The pattern is deformed by the relative depth of the objects in front of the unit, and that depth can be estimated mathematically from several factors related to the hardware layout of the Kinect. While other structured-light depth-sensing technologies used multiple light patterns, Kinect used as few as one in order to achieve a high depth-sensing rate of 30 frames per second. Kinect for Xbox One switched to time-of-flight measurements. The infrared projector on the Kinect sends out modulated infrared light, which is then captured by the sensor. Infrared light reflecting off closer objects has a shorter time of flight than light reflecting off more distant ones, so the infrared sensor measures, pixel by pixel, how much the modulation pattern has been shifted by the time of flight. Time-of-flight depth measurements can be more accurate and calculated in a shorter amount of time, allowing more frames per second to be detected.[95]
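The time-of-flight principle described above can be sketched in a few lines: for continuous-wave modulated light, the measured phase shift of the returning signal is proportional to the round-trip distance. This is an illustrative sketch only, not Microsoft's implementation; the 80 MHz modulation frequency is an assumed example value.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift of continuous-wave modulated light.

    The light travels to the object and back, so the measured phase
    shift corresponds to twice the object distance.
    """
    wavelength = C / mod_freq_hz
    round_trip = (phase_shift_rad / (2 * math.pi)) * wavelength
    return round_trip / 2

# A phase shift of pi at an assumed 80 MHz modulation frequency places
# the object a quarter of the modulation wavelength away:
d = tof_depth(math.pi, 80e6)  # ≈ 0.937 m
```

Note that the phase wraps around every 2π, so a single modulation frequency only gives depth unambiguously within half a wavelength; practical sensors resolve this ambiguity, for example by using multiple modulation frequencies.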
Once Kinect has a pixel-by-pixel depth image, it uses a form of edge detection to delineate closer objects from the background of the shot, incorporating input from the regular visible-light camera. The unit then attempts to track any moving objects, on the assumption that only people will be moving around in the image, and isolates the human shapes. The unit's software, aided by artificial intelligence, segments the shapes to identify specific body parts, such as the head, arms, and hands, and tracks those segments individually. The segments are used to construct a 20-point skeleton of the human body, which game or other software can then use to determine what actions the person has performed.[96]
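A minimal sketch of the first stage of this pipeline, separating candidate "player" pixels from the background of a depth image by thresholding on a depth band. This is a toy illustration, not the actual Kinect algorithm, and the near/far limits are assumed example values in millimetres.

```python
def foreground_mask(depth_mm, near=800, far=2500):
    """Return a binary mask of pixels within the assumed player depth band.

    depth_mm is a 2D list of per-pixel depths in millimetres, as a
    time-of-flight or structured-light sensor might report them.
    """
    return [[1 if near <= d <= far else 0 for d in row] for row in depth_mm]

# A tiny 2x3 "depth frame": the middle column is a person standing
# about 1.2-1.5 m away; the rest is a distant wall at 4 m.
frame = [
    [4000, 1200, 4000],
    [4000, 1500, 4000],
]
mask = foreground_mask(frame)  # [[0, 1, 0], [0, 1, 0]]
```

Real systems would follow this with connected-component labelling and per-pixel body-part classification to build the skeleton described above.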
Kinect for Xbox 360 was a combination of Microsoft-built software and hardware. The hardware included range camera technology by Israeli developer PrimeSense, which developed a system consisting of an infrared projector and camera and a special microchip that generates a grid from which the location of a nearby object in three dimensions can be ascertained.[97][98][99] This 3D scanner system, called Light Coding,[100] employs a variant of image-based 3D reconstruction.[101][102]
The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot and is designed to be positioned lengthwise above or below the video display. The device features an "RGB camera, depth sensor and microphone array running proprietary software",[103] which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. At launch, voice recognition was only made available in Japan, the United Kingdom, Canada and the United States; mainland Europe received the feature in spring 2011.[104] Voice recognition is supported in Australia, Canada, France, Germany, Ireland, Italy, Japan, Mexico, New Zealand, the United Kingdom and the United States. The Kinect sensor's microphone array enables the Xbox 360 to conduct acoustic source localization and ambient noise suppression, allowing for things such as headset-free party chat over Xbox Live.[105]
The depth sensor consists of aninfraredlaserprojector combined with a monochromeCMOS sensor, which captures video data in 3D under anyambient lightconditions.[105][19]The sensing range of the depth sensor is adjustable, and Kinect software is capable of automatically calibrating the sensor based on gameplay and the player's physical environment, accommodating for the presence of furniture or other obstacles.[23]
Described by Microsoft personnel as the primary innovation of Kinect,[20][106][107]the software technology enables advancedgesture recognition, facial recognition and voice recognition.[21]According to information supplied to retailers, Kinect is capable of simultaneously tracking up to six people, including two active players formotion analysiswith afeature extractionof 20 joints per player.[108]However, PrimeSense has stated that the number of people the device can "see" (but not process as players) is only limited by how many will fit in the field-of-view of the camera.[109]
Reverse engineering[110] has determined that the Kinect's various sensors output video at a frame rate of ≈9 Hz to 30 Hz depending on resolution. The default RGB video stream uses 8-bit VGA resolution (640 × 480 pixels) with a Bayer color filter, but the hardware is capable of resolutions up to 1280 × 1024 (at a lower frame rate) and other color formats such as UYVY. The monochrome depth-sensing video stream is in VGA resolution (640 × 480 pixels) with 11-bit depth, which provides 2,048 levels of sensitivity. The Kinect can also stream the view from its IR camera directly (i.e., before it has been converted into a depth map) as 640 × 480 video, or 1280 × 1024 at a lower frame rate. The Kinect sensor has a practical ranging limit of 1.2–3.5 m (3.9–11.5 ft) when used with the Xbox software. The area required to play Kinect is roughly 6 m², although the sensor can maintain tracking through an extended range of approximately 0.7–6 m (2.3–19.7 ft). The sensor has an angular field of view of 57° horizontally and 43° vertically, while the motorized pivot is capable of tilting the sensor up to 27° either up or down. The horizontal field of the Kinect sensor at the minimum viewing distance of ≈0.8 m (2.6 ft) is therefore ≈87 cm (34 in), and the vertical field is ≈63 cm (25 in), resulting in a resolution of just over 1.3 mm (0.051 in) per pixel. The microphone array features four microphone capsules[111] and operates with each channel processing 16-bit audio at a sampling rate of 16 kHz.[108]
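The field-of-view figures above follow from simple trigonometry: the width of the visible field at a given distance is 2 · distance · tan(fov / 2). A quick check of the quoted numbers at the minimum viewing distance of ≈0.8 m:

```python
import math

def field_width(distance_m: float, fov_deg: float) -> float:
    """Width of the field of view at a given distance from the sensor."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

h = field_width(0.8, 57)        # ≈ 0.869 m -> the quoted ≈87 cm horizontal field
v = field_width(0.8, 43)        # ≈ 0.630 m -> the quoted ≈63 cm vertical field
mm_per_pixel = h * 1000 / 640   # ≈ 1.36 mm across 640 depth-image columns
```

This matches the article's "just over 1.3 mm per pixel" at the closest usable distance; at the far end of the 3.5 m range the same geometry gives roughly 6 mm per pixel.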
Because the Kinect sensor's motorized tilt mechanism requires more power than the Xbox 360'sUSBports can supply,[112]the device makes use of a proprietary connector combining USB communication with additional power. RedesignedXbox 360 Smodels include a special AUX port for accommodating the connector,[113]while older models require a special power supply cable (included with the sensor)[111]that splits the connection into separate USB and power connections; power is supplied from themainsby way of an AC adapter.[112]
Kinect for Windows is a modified version of the Xbox 360 unit which was first released on February 1, 2012, alongside the SDK for commercial use.[70][114]The hardware included better components to eliminate noise along the USB and other cabling paths, and improvements in the depth-sensing camera system for detection of objects at close range, as close as 50 centimetres (20 in), in the new "Near Mode".[70]
The SDK includedWindows 7compatiblePCdrivers for Kinect device. It provided Kinect capabilities to developers to build applications withC++,C#, orVisual Basicby usingMicrosoft Visual Studio 2010and included the following features:
In March 2012, Craig Eisler, the general manager of Kinect for Windows, said that almost 350 companies are working with Microsoft on custom Kinect applications for Microsoft Windows.[116]
In March 2012, Microsoft announced that the next version of the Kinect for Windows SDK would be available in May 2012. Kinect for Windows 1.5 was released on May 21, 2012; it added new features, support for many new languages, and availability in 19 more countries.[117][118]
Kinect for Windows SDK for the first-generation sensor was updated a few more times, with version 1.6 released October 8, 2012,[122]version 1.7 released March 18, 2013,[123]and version 1.8 released September 17, 2013.[124]
An upgraded iteration of Kinect was released on November 22, 2013, forXbox One. It uses a wide-angletime-of-flight camera, and processes 2 gigabits of data per second to read its environment. The new Kinect has greater accuracy with three times the fidelity over its predecessor and can track without visible light by using an activeIRsensor. It has a 60% wider field of vision with a minimum working distance of 0.91 metres (3.0 ft) away from the sensor, compared to 1.83 metres (6.0 ft) for the original Kinect,[125]and can track up to 6 skeletons at once. It can also detect a player'sheart rate, facial expression, the position and orientation of 25 individual joints (including thumbs), the weight put on each limb, speed of player movements, and track gestures performed with a standard controller. The color camera captures 1080p video that can be displayed in the same resolution as the viewing screen, allowing for a broad range of scenarios. In addition to improving video communications and video analytics applications, this provides a stable input on which to build interactive applications. Kinect's microphone is used to provide voice commands for actions such as navigation, starting games, and waking the console fromsleep mode.[126][127]The recommended player's height is at least 40 inches, which roughly corresponds to children of4+1⁄2years old and up.[128][129]
All Xbox One consoles were initially shipped with Kinect included.[54]In June 2014, bundles without Kinect were made available,[130]along with an updated Xbox One SDK allowing game developers to explicitly disable Kinect skeletal tracking, freeing up system resources that were previously reserved for Kinect even if it was disabled or unplugged.[130][131]As interest in Kinect waned in 2014, later revisions of the Xbox One hardware, including theXbox One SandXbox One X, dropped the dedicated Kinect port, requiring users to purchase a USB 3.0 and AC adapter to use the Kinect for Xbox One.[132][133]
A standalone Kinect for Xbox One, bundled with a digital copy ofDance Central Spotlight, was released on October 7, 2014.[134]
Considered a market failure compared to the Kinect for Xbox 360, the Kinect for Xbox One product was discontinued by October 25, 2017. Production of the adapter cord also ended by January 2018.[9]
Released on July 15, 2014, Kinect 2 for Windows is based on the Kinect for Xbox One and considered a replacement for the original Kinect for Windows. It was also repackaged as "Kinect for Windows v2". It is nearly identical apart from the removal of Xbox branding, and included a USB 3.0/AC adapter. It was released alongside version 2.0 of the Windows SDK for the platform, at an MSRP of US$199.[83][8][135][85] Microsoft considers the Kinect 2 for Windows equivalent in performance to the Xbox One version.
In April 2015, with Microsoft having difficulty keeping up with manufacturing demand for the Kinect for Xbox One, this edition was discontinued. Microsoft directed commercial users to the Xbox One version with a USB adapter instead.[85][136][8][135][137]
On May 7, 2018, Microsoft announced a new iteration of Kinect technology designed primarily for enterprise software andartificial intelligenceusage. It is designed around theMicrosoft Azurecloud platform, and is meant to "leverage the richness of Azure AI to dramatically improve insights and operations".[138][139]It has a smaller form factor than the Xbox iterations of Kinect, and features a 12-megapixel camera, a time-of-flight depth sensor also used on theHoloLens 2, and seven microphones. A development kit was announced in February 2019.[140][141]
Requiring at least 190 MB of available storage space,[142]Kinect system software allows users to operateXbox 360 Dashboard console user interfacethrough voice commands and hand gestures. Techniques such as voice recognition and facial recognition are employed to automatically identify users. Among the applications for Kinect is Video Kinect, which enablesvoice chatorvideo chatwith other Xbox 360 users or users ofWindows Live Messenger. The application can use Kinect's tracking functionality and Kinect sensor's motorized pivot to keep users in frame even as they move around. Other applications with Kinect support includeESPN,Zune Marketplace,[142]Netflix,Hulu Plus[143]andLast.fm.[144]Microsoft later confirmed that all forthcoming applications would be required to have Kinect functionality for certification.[145]
The Xbox One originally shipped in bundles with the Kinect; the originalXbox One user interface softwarehad similar support for Kinect features as the Xbox 360 software, such as voice commands, user identification via skeletal or vocal recognition, and gesture-driven commands, though these features could be fully disabled due to privacy concerns.[146]However, this had left the more traditional navigation using a controller haphazard. In May 2014, when Microsoft announced it would be releasing Xbox One systems without a Kinect, the company also announced plans to alter the Xbox One system software to remove Kinect features.[147]Kinect support in the software was fully removed by November 2015.[148]
Xbox 360 games that require Kinect are packaged in special purple cases (as opposed to the green cases used by all other Xbox 360 games), and contain a prominent "Requires Kinect Sensor" logo on their front cover. Games that include features utilizing Kinect, but do not require it for standard gameplay, have "Better with Kinect Sensor" branding on their front covers.[149]
Kinect launched on November 4, 2010, with 17 titles.[150]Third-party publishers of available and announced Kinect games include, among others,Ubisoft,Electronic Arts,LucasArts,THQ,Activision,Konami,Sega,Capcom,Namco BandaiandMTV Games. Along with retail games, there are also selectXbox Live Arcadetitles which require the peripheral.
KinectShare.com was a website where players could upload video game pictures, videos, and achievements from their Xbox 360.[151] It was released alongside the Kinect in November 2010. A blog was added to the website in October 2011, showcasing official Kinect news, but was discontinued after July 2012.[152] The site was used by multiple Kinect games, including Dance Central 2, Kinect Adventures!, Kinect Fun Labs, Kinect Rush: A Disney–Pixar Adventure, Kinect Sports, and Kinect Sports: Season Two.[153] The website was shut down in June 2017, a few months prior to the discontinuation of the Kinect, and now redirects to Xbox.com.[151] The KinectShare feature on the Xbox 360 was shut down on July 28, 2017.[citation needed]
At E3 2011, Microsoft announced Kinect Fun Labs: a collection of various gadgets and minigames that are accessible from the Xbox 360 Dashboard. These gadgets include Build A Buddy, Air Band, Kinect Googly Eyes, Kinect Me, Bobblehead, Kinect Sparkler, Junk Fu[154] and Avatar Kinect.[155][156][157]
Numerous developers are researching possible applications of Kinect that go beyond the system's intended purpose of playing games, further enabled by the release of the Kinect SDK by Microsoft.[158]
For example, Philipp Robbel ofMITcombined Kinect withiRobot Createto map a room in 3D and have the robot respond to human gestures,[159]while an MIT Media Lab team is working on a JavaScript extension forGoogle Chromecalled depthJS that allows users to control the browser with hand gestures.[160]Other programmers, including Robot Locomotion Group at MIT, are using the drivers to develop a motion-controller user interface similar to the one envisioned inMinority Report.[161]The developers ofMRPThave integrated open source drivers into their libraries and provided examples of live 3D rendering and basic 3D visualSLAM.[162]Another team has shown an application that allows Kinect users to play a virtual piano by tapping their fingers on an empty desk.[163]Oliver Kreylos, a researcher atUniversity of California, Davis, adopted the technology to improve live 3-dimensionalvideoconferencing, whichNASAhas shown interest in.[164]
Alexandre Alahi fromEPFLpresented a video surveillance system that combines multiple Kinect devices to track groups of people even in complete darkness.[165]Companies So touch and Evoluce have developed presentation software for Kinect that can be controlled by hand gestures; among its features is a multi-touch zoom mode.[166]In December 2010, the free public beta ofHTPCsoftwareKinEmotewas launched; it allows navigation ofBoxeeandXBMCmenus using a Kinect sensor.[167]Soroush Falahati wrote an application that can be used to createstereoscopic3D images with a Kinect sensor.[168]
In human motion tracking, Kinect may suffer from occlusion, in which some body joints are hidden from the sensor and cannot be tracked accurately by Kinect's skeletal model.[169] Fusing its data with other sensors can therefore provide more robust tracking of the skeletal model. For instance, in one study, an unscented Kalman filter (UKF) was used to fuse Kinect 3D position data for the shoulder, elbow, and wrist joints with data from two inertial measurement units (IMUs) placed on the upper and lower arm of a person.[170] The results showed an improvement of up to 50% in the accuracy of joint position tracking. In addition to mitigating the occlusion problem, because the sampling frequency of the IMUs was 100 Hz (compared to ~30 Hz for Kinect), the improvement in skeletal position was most evident during fast and dynamic movements.
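The study cited above used a full unscented Kalman filter; as a much-simplified illustration of the underlying fusion idea, the sketch below combines two noisy scalar estimates of the same joint position, one from Kinect and one from an IMU chain, weighting each by the inverse of its variance. All numbers are assumed example values.

```python
def fuse(z_kinect, var_kinect, z_imu, var_imu):
    """Minimum-variance fusion of two independent scalar measurements.

    The less noisy measurement (smaller variance) receives the larger
    weight, and the fused variance is smaller than either input's.
    """
    w = var_imu / (var_kinect + var_imu)   # weight on the Kinect reading
    fused = w * z_kinect + (1 - w) * z_imu
    fused_var = (var_kinect * var_imu) / (var_kinect + var_imu)
    return fused, fused_var

# Kinect says the wrist is at 1.00 m (noisy during fast motion); the
# IMU chain says 1.10 m with lower variance. The fused estimate leans
# toward the IMU:
pos, var = fuse(1.00, 0.04, 1.10, 0.01)  # pos = 1.08, var = 0.008
```

A Kalman filter extends this one-shot fusion over time, alternating between predicting the joint's motion and correcting the prediction with each new Kinect or IMU sample; the unscented variant handles the nonlinear geometry of the arm's joint chain.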
Kinect also shows compelling potential for use in medicine. Researchers at the University of Minnesota have used Kinect to measure a range of disorder symptoms in children, creating new means of objective evaluation to detect conditions such as autism, attention-deficit disorder and obsessive-compulsive disorder.[171] Several groups have reported using Kinect for intraoperative review of medical imaging, allowing the surgeon to access the information without contamination.[172][173] This technique is already in use at Sunnybrook Health Sciences Centre in Toronto, where doctors use it to guide imaging during cancer surgery.[174] At least one company, GestSure Technologies, is pursuing the commercialization of such a system.[175]
NASA'sJet Propulsion Laboratory(JPL) signed up for the Kinect for Windows Developer program in November 2013 to use the new Kinect to manipulate a robotic arm in combination with anOculus Riftvirtual realityheadset, creating "the most immersive interface" the unit had built to date.[176]
Upon its release, the Kinect garnered generally positive opinions from reviewers and critics.IGNgave the device 7.5 out of 10, saying that "Kinect can be a tremendous amount of fun for casual players, and the creative, controller-free concept is undeniably appealing", though adding that for "$149.99, a motion-tracking camera add-on for Xbox 360 is a tough sell, especially considering that the entry level variation of Xbox 360 itself is only $199.99".[179]Game Informerrated Kinect 8 out of 10, praising the technology but noting that the experience takes a while to get used to and that the spatial requirement may pose a barrier.[178]Computer and Video Gamescalled the device a technological gem and applauded the gesture and voice controls, while criticizing the launch lineup and Kinect Hub.[177]
CNET's review pointed out how Kinect keeps players active with its full-body motion sensing but criticized the learning curve, the additional power supply needed for older Xbox 360 consoles and the space requirements.[180]Engadget, too, listed the large space requirements as a negative, along with Kinect's launch lineup and the slowness of the hand gesture UI. The review praised the system's powerful technology and the potential of its yoga and dance games.[181]Kotakuconsidered the device revolutionary upon first use but noted that games were sometimes unable to recognize gestures or had slow responses, concluding that Kinect is "not must-own yet, more like must-eventually own."[188]TechRadarpraised the voice control and saw a great deal of potential in the device whose lag and space requirements were identified as issues.[183]Gizmodoalso noted Kinect's potential and expressed curiosity in how more mainstream titles would utilize the technology.[189]Ars Technica'sreview expressed concern that the core feature of Kinect, its lack of a controller, would hamper development of games beyond those that have either stationary players or control the player's movement automatically.[190]
The mainstream press also reviewed Kinect. USA Today compared it to the futuristic control scheme seen in Minority Report, stating that "playing games feels great" and giving the device 3.5 out of 4 stars.[182] David Pogue of The New York Times predicted players would feel a "crazy, magical, omigosh rush the first time you try the Kinect." Despite calling the motion tracking less precise than the Wii's implementation, Pogue concluded that "Kinect's astonishing technology creates a completely new activity that's social, age-spanning and even athletic."[191] The Globe and Mail described Kinect as setting a "new standard for motion control." The slight input lag between making a physical movement and Kinect registering it was not considered a major issue with most games, and the review called Kinect "a good and innovative product," rating it 3.5 out of 4 stars.[192]
Although featuring improved performance over the original Kinect, its successor has been subject to mixed responses. In its Xbox One review,Engadgetpraised Xbox One's Kinect functionality, such as face recognition login and improved motion tracking, but said that while the device was "magical", "every false positive or unrecognized [voice] command had us reaching for the controller."[193]The Kinect's inability to understand some accents in English was criticized.[194]Writing forTime, Matt Peckham described the device as being "chunky" in appearance, but that the facial recognition login feature was "creepy but equally sci-fi-future cool", and that the new voice recognition system was a "powerful, addictive way to navigate the console, and save for a few exceptions that seem to be smoothing out with use". However, its accuracy was found to be affected by background noise, and Peckham further noted that launching games using voice recognition required that the full title of the game be given rather than an abbreviated name that the console "ought to semantically understand", such asForza Motorsport 5rather than "Forza 5".[195]
Prior to Xbox One's launch,privacyconcerns were raised over the new Kinect; critics showed concerns the device could be used forsurveillance, stemming from the originally announced requirements that Xbox One's Kinect be plugged in at all times, plus the initialalways-on DRMsystem that required the console to be connected to the internet to ensure continued functionality. Privacy advocates contended that the increased amount of data which could be collected with the new Kinect (such as a person's eye movements, heart rate, and mood) could be used fortargeted advertising. Reports also surfaced regarding recent Microsoftpatentsinvolving Kinect, such as a DRM system based on detecting the number of viewers in a room, and tracking viewing habits by awardingachievementsfor watching television programs andadvertising. While Microsoft stated that itsprivacy policy"prohibit[s] the collection, storage, or use of Kinect data for the purpose of advertising", critics did not rule out the possibility that these policies could be changed prior to the release of the console. Concerns were also raised that the device could also record conversations, as its microphone remains active at all times. In response to the criticism, a Microsoft spokesperson stated that users are "in control of when Kinect sensing is On, Off or Paused", will be provided with key privacy information and settings during the console's initial setup, and that user-generated content such as photos and videos "will not leave your Xbox One without your explicit permission."[49][50][51][52]Microsoft ultimately decided to reverse its decision to require Kinect usage on Xbox One, but the console still shipped with the device upon its launch in November 2013.[54]
While announcing Kinect's discontinuation in an interview withFast Co. Designon October 25, 2017, Microsoft stated that 35 million units had been sold since its release.[10]24 million units of Kinect had been shipped by February 2013.[196]Having sold 8 million units in its first 60 days on the market, Kinect claimed theGuinness World Recordof being the "fastest selling consumer electronics device".[197][198][199][200]According to Wedbush analyst Michael Pachter, Kinect bundles accounted for about half of all Xbox 360 console sales in December 2010 and for more than two-thirds in February 2011.[201][202]More than 750,000 Kinect units were sold during the week of Black Friday 2011.[203][204]
Kinect competed with severalmotion controllerson other home consoles, such asWii Remote,Wii Remote PlusandWii Balance Boardfor theWiiandWii U,PlayStation MoveandPlayStation Eyefor thePlayStation 3, andPlayStation Camerafor thePlayStation 4.
While the Xbox 360 Kinect's controller-less nature enabled it to offer a motion-controlled experience different from the wand-based controls of the Wii and PlayStation Move, this has occasionally hindered developers from developing certain motion-controlled games that could target all three seventh-generation consoles and still provide the same experience regardless of console. Examples of seventh-generation motion-controlled games that were released on Wii and PlayStation 3, but had a version for Xbox 360 cancelled or ruled out from the start, due to issues with translating wand controls to the camera-based movement of the Kinect, includeDead Space: Extraction,[205]The Lord of the Rings: Aragorn's Quest[206]andPhineas and Ferb: Across the 2nd Dimension.[207]
https://en.wikipedia.org/wiki/Kinect
A mondegreen (/ˈmɒndɪˌɡriːn/) is a mishearing or misinterpretation of a phrase in a way that gives it a new meaning.[1] Mondegreens are most often created by a person listening to a poem or a song; the listener, being unable to hear a lyric clearly, substitutes words that sound similar and make some kind of sense.[2][3] The American writer Sylvia Wright coined the term in 1954, recalling a childhood memory of her mother reading the Scottish ballad "The Bonnie Earl o' Moray", and mishearing the words "laid him on the green" as "Lady Mondegreen".
"Mondegreen" was included in the 2000 edition of theRandom House Webster's College Dictionary, and in theOxford English Dictionaryin 2002.Merriam-Webster'sCollegiate Dictionaryadded the word in 2008.[4][5]
In a 1954 essay inHarper's Magazine, Sylvia Wright described how, as a young girl, she misheard the last line of the first stanza from theballad"The Bonnie Earl o' Moray" (fromThomas Percy's 1765 bookReliques of Ancient English Poetry). She wrote:
When I was a child, my mother used to read aloud to me fromPercy'sReliques, and one of my favorite poems began, as I remember:
Ye Highlands and ye Lowlands,Oh, where hae ye been?They hae slain the Earl Amurray,AndLady Mondegreen.[6]
The correct lines are, "They hae slain the Earl o' Moray / Andlaid him on the green." Wright explained the need for a new term:
The point about what I shall hereafter call mondegreens, since no one else has thought up a word for them, is that they are better than the original.[6]
People are more likely to notice what they expect rather than things that are not part of their everyday experiences; this is known asconfirmation bias. A person may mistake an unfamiliar stimulus for a familiar and more plausible version. For example, to consider a well-known mondegreen in the song "Purple Haze", one may be more likely to hearJimi Hendrixsinging that he is about tokiss this guythan that he is about tokiss the sky.[7]Similarly, if a lyric uses words or phrases that the listener is unfamiliar with, or in an uncommon sentence structure, they may be misheard as using more familiar terms.
The creation of mondegreens may be driven in part bycognitive dissonance; the listener finds it psychologically uncomfortable to listen to a song and not make out the words.Steven Connorsuggests that mondegreens are the result of the brain's constant attempts to make sense of the world by making assumptions to fill in the gaps when it cannot clearly determine what it is hearing. Connor sees mondegreens as the "wrenchings of nonsense into sense".[a]This dissonance will be most acute when the lyrics are in a language in which the listener is fluent.[8]
On the other hand,Steven Pinkerhas observed that mondegreen mishearings tend to belessplausible than the original lyrics, and that once a listener has "locked in" to a particular misheard interpretation of a song's lyrics, it can remain unquestioned, even when that plausibility becomes strained (seemumpsimus). Pinker gives the example of a student "stubbornly" mishearing the chorus to "Venus" ("I'm yourVenus") as "I'm your penis", and being surprised that the song was allowed on the radio.[9]The phenomenon may, in some cases, be triggered by people hearing "what they want to hear", as in the case of the song "Louie Louie": parents heard obscenities in theKingsmenrecording where none existed.[10]
James Gleickstates that the mondegreen is a distinctly modern phenomenon. Without the improved communication and language standardization brought about by radio, he argues that there would have been no way to recognize and discuss this shared experience.[11]Just as mondegreens transform songs based on experience, afolk songlearned by repetition often istransformedover time when sung by people in a region where some of the song's references have become obscure. A classic example is "The Golden Vanity",[12]which contains the line "As she sailed upon the lowland sea". British immigrants carried the song to Appalachia, where later generations of singers, not knowing what the termlowland searefers to, transformed it over generations from "lowland" to "lonesome".[13][b]
The national anthem of the United States is highly susceptible to the creation of mondegreens, with two in the first line. Francis Scott Key's "The Star-Spangled Banner" begins with the line "O say can you see, by the dawn's early light".[14] This has been misinterpreted (both accidentally and deliberately) as "José, can you see", another example of the Hobson-Jobson effect, countless times.[15][16] The second half of the line has been misheard as well, as "by the donzerly light",[17] or other variants. This has led many people to believe that "donzerly" is an actual word.[18]
Religious songs, learned by ear (and often by children), are another common source of mondegreens. The most-cited example is "Gladly, the cross-eyed bear"[6][19](from the line in the hymn "Keep Thou My Way" byFanny Crosbyand Theodore E. Perkins: "Kept by Thy tender care, gladly the cross I'll bear").[20]Jon Carrolland many others quote it as "Gladly the crossI'dbear";[3]note that the confusion may be heightened by the unusualobject-subject-verb (OSV)word order of the phrase. The song "I Was on a Boat That Day" byOld Dominionfeatures a reference to this mondegreen.[21]
Mondegreens expanded as a phenomenon with radio, and, especially, the growth of rock and roll[22](and even more so with rap[23]). Among the most-reported examples are:[24][3]
Both Creedence'sJohn Fogertyand Hendrix eventually acknowledged these mishearings by deliberately singing the "mondegreen" versions of their songs in concert.[29][30][31]
"Blinded by the Light", a cover of aBruce Springsteensong byManfred Mann's Earth Band, contains what has been called "probably the most misheard lyric of all time".[32]The phrase "revved up like a deuce", altered from Springsteen's original "cut loose like a deuce", both lyrics referring to thehot roddersslangdeuce(short fordeuce coupé) for a 1932 Ford coupé, is frequently misheard as "wrapped up like adouche".[32][33]Springsteen himself has joked about the phenomenon, claiming that it was not until Manfred Mann rewrote the song to be about a "feminine hygiene product" that the song became popular.[34][c]
Another commonly cited example of a song susceptible to mondegreens isNirvana's "Smells Like Teen Spirit", with the line "here we are now, entertain us" variously being misinterpreted as "here we are now,in containers",[35][36]and "here we are now,hot potatoes",[37]among other renditions.
In the 2014 song "Blank Space" byTaylor Swift, listeners widely misheard the line "got a long list of ex-lovers" as "all the lonelyStarbuckslovers".[38]
Rap and hip hop lyrics may be particularly susceptible to being misheard because they do not necessarily follow standard pronunciations. The delivery of rap lyrics relies heavily upon an often regional pronunciation[39]or non-traditional accenting (seeAfrican-American Vernacular English) of words and theirphonemesto adhere to the artist's stylizations and the lyrics' written structure. This issue is exemplified in controversies over alleged transcription errors inYale University Press's 2010Anthology of Rap.[40]
Sometimes, the modified version of a lyric becomes standard, as is the case with "The Twelve Days of Christmas". The original has "four colly birds"[41](collymeansblack; compareA Midsummer Night's Dream: "Brief as the lightning in the collied night"[42]); by the turn of the twentieth century, these had been replaced bycallingbirds,[43]which is the lyric used in the now-standard 1909Frederic Austinversion.[44]Another example is found inELO's song "Don't Bring Me Down". The original recorded lyric was "don't bring me down, Gruss!", but fans misheard it as "don't bring me down, Bruce!". Eventually, ELO began playing the song with the mondegreen lyric.[45]
The song "Sea Lion Woman", recorded in 1939 by Christine and Katherine Shipp, was performed byNina Simoneunder the title "See Line Woman". According to the liner notes from the compilationA Treasury of Library of Congress Field Recordings, the correct title of this playground song might also be "See [the] Lyin' Woman" or "C-Line Woman".[46]Jack Lawrence's misinterpretation of the French phrase "pauvre Jean" ("poor John") as the identically pronounced "pauvres gens" ("poor people") led to the translation ofLa Goualante du pauvre Jean("The Ballad of Poor John") as "The Poor People of Paris", a hit song in 1956.[47]
A Monk Swimmingby authorMalachy McCourtis so titled because of a childhood mishearing of a phrase from the Catholic rosary prayer, Hail Mary. "Amongst women" became "a monk swimmin'".[48]
The title and plot of the short science fiction story "Come You Nigh: Kay Shuns" ("Com-mu-ni-ca-tions") by Lawrence A. Perkins, inAnalog Science Fiction and Factmagazine (April 1970), deals withsecuringinterplanetary radio communications by encoding them with mondegreens.[49]
Olive, the Other Reindeeris a 1997 children's book byVivian Walsh, which borrows its title from a mondegreen of the line "all of the other reindeer" in the song "Rudolph the Red-Nosed Reindeer". The book was adapted into ananimated Christmas specialin 1999.
The travel guide book seriesLonely Planetis named after the misheard phrase "lovely planet" sung byJoe CockerinMatthew Moore's song "Space Captain".[50]
A monologue of mondegreens appears in the 1971 filmCarnal Knowledge. The camera focuses on actressCandice Bergenlaughing as she recounts various phrases that fooled her as a child, including "Round John Virgin" (instead of "'Round yon virgin...") and "Gladly, the cross-eyed bear" (instead of "Gladly the cross I'd bear").[51]The title of the 2013 filmAin't Them Bodies Saintsis a misheard lyric from a folk song; director David Lowery decided to use it because it evoked the "classical, regional" feel of 1970s rural Texas.[52]
In the 1994 filmThe Santa Clause, a child identifies a ladder that Santa uses to get to the roof from its label: The Rose Suchak Ladder Company. He states that this is "just like the poem", misinterpreting "out on the lawn there arose such a clatter" fromA Visit from St. Nicholasas "Out on the lawn, there's a Rose Suchak ladder".[53]
Mondegreens have been used in many television advertising campaigns, including:
The video gameSuper Mario 64involved a mishearing duringMario's encounters withBowser.Charles Martinet, the voice actor for Mario, explained the line was "So long, King-a Bowser";[60][61]however, it was misheard as "So long, gay Bowser". The misinterpreted line became ameme,[62]in part popularized by the line's removal in some updated rereleases of the game.[63][64]
Other games in the Mario series, like Mario Party and Mario Kart 64, also involve a mondegreen. Whenever the character Wario loses a minigame or a race, respectively, he says something along the lines of "D'oh! I missed!" However, since he was originally designed to be German and his original voice actor, Thomas Spindler, was German, many people have heard this voice line as the German phrase "So ein Mist!", which means "oh, crap" in English. In a 2016 interview, Spindler said that this was indeed the line he recorded.[65] In 2020, Charles Martinet, who is Wario's voice actor, said that the voice line he recorded for the game was "D'oh! I missed!"[66]
In the video game Final Fantasy XIV, the lyrics for the boss theme "Ultima" are "Beat, the heart of Sabik", but the English-speaking audience heard the voice lines as "big fat tacos" instead. This resulted in fan video remixes with the misunderstood lyrics.[67][better source needed] Developer Square Enix acknowledged the misunderstanding and embraced the joke,[68] and made tacos a major plot point in the expansion Dawntrail.[69]
The traditional gameChinese whispers("Telephone" or "Gossip"in North America) involves mishearing a whispered sentence to produce successive mondegreens that gradually distort the original sentence as it is repeated by successive listeners.
Among schoolchildren in the US, daily rote recitation of thePledge of Allegiancehas long provided opportunities for the genesis of mondegreens.[3][70][71]
Speech-to-text functionality in modern smartphone messaging apps and search or assist functions may be hampered by faultyspeech recognition. It has been noted that in text messaging, users often leave uncorrected mondegreens as a joke or puzzle for the recipient to solve. This wealth of mondegreens has proven to be a fertile ground for study by speech scientists and psychologists.[72]
The classicist andlinguistSteve Reece has collected examples of English mondegreens in song lyrics, religiouscreedsand liturgies, commercials and advertisements, and jokes and riddles. He has used this collection to shed light on the process of "junctural metanalysis" during theoral transmissionof the ancient Greek epics, theIliadandOdyssey.[73]
Areverse mondegreenis the intentional production, in speech or writing, of words or phrases that seem to be gibberish but disguise meaning.[74]A prominent example isMairzy Doats, a 1943novelty songby Milton Drake,Al Hoffman, andJerry Livingston.[75]The lyrics are a reverse mondegreen, made up ofsame-sounding wordsor phrases (sometimes also referred to as "oronyms"),[76]so pronounced (and written) as to challenge the listener (or reader) to interpret them:
The clue to the meaning is contained in the bridge of the song:
That makes it clear that the last line is "A kid'll eat ivy, too; wouldn't you?"[77]
Two authors have written books of supposed foreign-language poetry that are actually mondegreens of nursery rhymes in English. Luis van Rooten's pseudo-French Mots D'Heures: Gousses, Rames includes critical, historical, and interpretive apparatus, as does John Hulme's Mörder Guss Reims, attributed to a fictitious German poet. Both titles sound like the phrase "Mother Goose Rhymes". Both works can also be considered soramimi, which produces different meanings when interpreted in another language. The genre of animutation is based on deliberate mondegreens.
Wolfgang Amadeus Mozartproduced a similar effect in his canon "Difficile Lectu" (Difficult to Read), which, though ostensibly in Latin, is actually an opportunity for scatological humor in both German and Italian.[78]
Some performers and writers have used deliberate mondegreens to createdouble entendres. The phrase "if you see Kay" (F-U-C-K) has been employed many times, notably as a line fromJames Joyce's 1922 novelUlysses.[79]
"Mondegreen" is a song byYeasayeron their 2010 album,Odd Blood. The lyrics are intentionally obscure (for instance, "Everybody sugar in my bed" and "Perhaps the pollen in the air turns us into a stapler") and spoken hastily to encourage the mondegreen effect.[80]
Anguish Languishis an ersatz language created byHoward L. Chace. A play on the words "English Language", it is based onhomophonic transformationsof English words and consists entirely of deliberate mondegreens that seem nonsensical in print but are more easily understood when spoken aloud. A notable example is the story "Ladle Rat Rotten Hut" ("Little Red Riding Hood"), which appears in his collection of stories and poems,Anguish Languish(Prentice-Hall, 1956).
Lady Gaga's 2008 hit "Poker Face" allegedly makes a play on this phenomenon, with every second repetition of the phrase "poker face" replaced with "fuck her face". The only known radio station to censor the lyrics has beenKIIS FM.[81]
Closely related categories areHobson-Jobson, where a word from a foreign language ishomophonically translatedinto one's own language, e.g. "cockroach" from Spanishcucaracha,[82][83]andsoramimi, a Japanese term for deliberate homophonic misinterpretation of words for humor.
An unintentionally incorrect use of similar-sounding words or phrases, resulting in a changed meaning, is amalapropism. If there is a connection in meaning, it may be called aneggcorn. If a person stubbornly continues to mispronounce a word or phrase after being corrected, that person has committed amumpsimus.[84]
Related phenomena include:
Queen's song "Another One Bites the Dust" has a long-standing history as a mondegreen in Bosnian, Croatian and Serbian, misheard as "Radovan baca daske" and "Радован баца даске", which means "Radovanthrows planks".[85]
In the Czech anthem,Kde domov můj, the sentencebory šumí po skalinách("midst the rocks sigh fragrant pine groves") is sometimes misheard asBoryš umí po skalinách("Boryš is good at mountaineering").[86]
Another popular Czech mondegreen is in the lyrics ofNinaby singer-songwriterTomáš Klus, where the sentence...když padnou mi na rety slzy múz("When the tears ofmusesfall on my lips") is often misheard as...když padnou minarety, slzy múz("When theminaretsfall, tears of muses"). The mondegreen is caused by the singer using an uncommon declension of the wordret("lip"); the more common form would bertyinstead ofrety.[87]
The Czech radio stationRadio Kiss[cs]has a programme calledHej šašo, nemáš džus?, where listeners can send their mondegreens. The show is named after a mondegreen from the songHighway to Hell, in which the lyric"hey Satan, payin' my dues"was misheard as"Hej šašo, nemáš džus?"("Hey clown, do you have juice?").[88]
In Dutch, mondegreens are popularly referred to asMama appelsap("Mommy applejuice"), from theMichael JacksonsongWanna Be Startin' Somethin'which features the lyricsMama-se mama-sa ma-ma-coo-sa, and was once misheard asMama say mama sa mam[a]appelsap. The Dutch radio station3FMshowSuperrradio(originallyTimur Open Radio), run by Timur Perlin and Ramon, featured an item in which listeners were encouraged to send in mondegreens under the name "Mama appelsap". The segment was popular for years.[89]
In French, the phenomenon is also known ashallucination auditive, especially when referring to pop songs.
The title of the filmLa Vie en Rose("Life In Pink" literally; "Life Through Rose-Coloured Glasses" more broadly), depicting the life ofÉdith Piaf, can be mistaken forL'Avion Rose("The Pink Airplane").[90][91]
The title of the 1983 French novelLe Thé au harem d'Archi Ahmed("Tea in the Harem of Archi Ahmed") byMehdi Charef(and the 1985 movie of the same name) is based on the main character mishearingle théorème d'Archimède("the theorem of Archimedes") in his mathematics class.
A classic example in French is similar to the "Lady Mondegreen" anecdote: in his 1962 collection of children's quotesLa Foire aux cancres, the humorist Jean-Charles[92][better source needed]refers to a misunderstood lyric of "La Marseillaise" (the French national anthem):Entendez-vous ... mugir ces féroces soldats("Do you hear those savage soldiers roar?") is misheard as...Séféro, ce soldat("that soldier Séféro").
Mondegreens are a well-known phenomenon in German, especially where non-German songs are concerned. They are sometimes called, after a well-known example,Agathe Bauer-songs ("I got the power", a song bySnap!, misinterpreted as a German female name).[93][94]Journalist Axel Hacke published a series of books about them, beginning withDer weiße Neger Wumbaba("The White Negro Wumbaba", a mishearing of the lineder weiße Nebel wunderbarfrom "Der Mond ist aufgegangen").[95]
In urban legend, children's paintings of nativity scenes occasionally include, next to the Child, Mary, Joseph, and so on, an additional, laughing creature known as the Owi. The reason is to be found in the line Gottes Sohn! O wie lacht / Lieb' aus Deinem göttlichen Mund ("God's Son! Oh, how does love laugh out of Thy divine mouth!") from the song "Silent Night". The subject is Lieb, a poetic contraction of die Liebe leaving off the final -e and the definite article, so that the phrase might be misunderstood as being about a person named Owi laughing "in a loveable manner".[96][97] Owi lacht has been used as the title of at least one book about Christmas and Christmas songs.[98]
Ghil'ad Zuckermannmentions the examplemukhrakhím liyót saméakh(מוכרחים להיות שמח, which means "we must be happy", with a grammatical error) as a mondegreen[99]of the originalúru 'akhím belév saméakh(עורו אחים בלב שמח, which means "wake up, brothers, with a happy heart").[99]Although this line is taken from the extremely well-known song "Háva Nagíla" ("Let's be happy"),[99]given the Hebrew high-register ofúru(עורו "wake up!"),[99]Israelis often mishear it.
An Israeli site dedicated to Hebrew mondegreens has coined the termavatiach(אבטיח, Hebrew for "watermelon") for "mondegreen", named for a common mishearing ofShlomo Artzi's award-winning 1970 song "Ahavtia" ("I loved her", using a form uncommon in spoken Hebrew).[100]
One of the most well-known Hungarian mondegreens is connected to the 1984 song "Live Is Life" by the Austrian bandOpus. The gibberishlabadab dab dabphrase in the song was commonly misunderstood by Hungarians aslevelet kaptam(Hungarian for "I have received mail"), which was later immortalized by the cult movieMoscow Squaredepicting the life of teenagers in the late 1980s.[101]
The word "mendengarku" ("hear me") in Ghea Indrawari's song, "Teramini", is misheard as "mantan aku" ("my ex") or "makananku" ("my food").[102]
Caramelldansen, a Swedish song which gained popularity in Japan during the early 21st century, contains the lyric "Dansa med oss, klappa era händer" ("Dance with us, clap your hands"), which was sometimes misinterpreted as "バルサミコ酢やっぱいらへんで" ("barusamiko-su yappa irahen de"), which translates to "I don't want anybalsamic vinegarafter all".[103]This was then included in the official Japanese translation of the song.[104]
A paper inphonologycites memoirs of the poetAntoni Słonimski, who confessed that in the recited poemKonrad Wallenrodhe used to hearzwierz Alpuhary("a beast ofAlpujarras") rather thanz wież Alpuhary("from the towers of Alpujarras").[105]
In 1875Fyodor Dostoyevskycited a line fromFyodor Glinka's song "Troika" (1825), колокольчик, дар Валдая ("the bell, gift of Valday"), stating that it is usually understood as колокольчик, дарвалдая ("the belldarvaldaying"—supposedly anonomatopoeiaof ringing sounds).[106]
In Slovakia, the lyric God found good people staying for brother from the song Survive by Laurent Wolf and Andrew Roachford was often misheard as Kaufland kúpil Zdeno z Popradu ("Zdeno from Poprad bought the Kaufland"). The mondegreen became so popular that a radio station, Fun rádio, created a broadcast called Hity Zdena z Popradu ("Hits of Zdeno from Poprad") where listeners can send mondegreens and overheard lyrics.[107][108]
TheMexican national anthemcontains the verseMas si osare un extraño enemigo("If, however, a foreign enemy would dare") usingmasandosare, archaic poetic forms.
Thus, the verse has sometimes been misunderstood asMasiosare, un extraño enemigo("Masiosare, a strange enemy") withMasiosare, an otherwise unused word, as the name of the enemy.
"Masiosare" has been used in Mexico as a first name for real and fictional people and as a common name (masiosareor the homophonemaciosare) for the anthem itself or for a threat against the country.[109]
The expressionבאָבע־מעשׂה(bobe-mayse, "grandmother's tale") was originally a misunderstanding ofבָּבָא־מעשׂה(bovo-mayse, "Bovo story"), a story from theBovo-Bukh.[110]
|
https://en.wikipedia.org/wiki/Mondegreen
|
Phonetic Search Technology(PST) is a method ofspeech recognition.[1]An audio signal of speech is broken down into series ofphonemes, which can be used to identify words.
A string of six phonemes, for example "_B _IY _T _UW _B _IY", represents the acronym "B2B".[citation needed]
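As a sketch of the idea, the decoded phoneme stream can be searched directly as a string. The phoneme symbols follow the "_B _IY" style of the example above, and the tiny "decoder output" below is an illustrative assumption, not any particular product's format; real systems typically search lattices of phoneme hypotheses with fuzzy matching rather than exact substring search.

```python
def to_phoneme_string(phonemes):
    """Join a phoneme sequence into a single searchable string."""
    return " ".join(phonemes)

def phonetic_search(indexed_phonemes, query_phonemes):
    """Return the phoneme offset where the query starts, or -1 if absent."""
    haystack = to_phoneme_string(indexed_phonemes)
    needle = to_phoneme_string(query_phonemes)
    pos = haystack.find(needle)
    if pos == -1:
        return -1
    # Convert the character offset back into a phoneme index.
    return haystack[:pos].count(" ")

# "B2B" spoken as its six phonemes, embedded in a longer (made-up) utterance.
audio = ["_S", "_EH", "_L", "_B", "_IY", "_T", "_UW", "_B", "_IY", "_N", "_AW"]
query = ["_B", "_IY", "_T", "_UW", "_B", "_IY"]
print(phonetic_search(audio, query))  # 3 — the query begins at the 4th phoneme
```

Because the index stores phonemes rather than words, queries for terms outside any fixed vocabulary can still be matched, which is the main appeal of the approach.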
|
https://en.wikipedia.org/wiki/Phonetic_search_technology
|
Speaker diarisation(ordiarization) is the process of partitioning an audio stream containing human speech into homogeneous segments according to the identity of each speaker.[1]It can enhance the readability of anautomatic speech transcriptionby structuring the audio stream into speaker turns and, when used together withspeaker recognitionsystems, by providing the speaker’s true identity.[2]It is used to answer the question "who spoke when?"[3]Speaker diarisation is a combination of speaker segmentation and speaker clustering. The first aims at finding speaker change points in an audio stream. The second aims at grouping together speech segments on the basis of speaker characteristics.
With the increasing number of broadcasts, meeting recordings and voice mail collected every year, speaker diarisation has received much attention from the speech community, as is manifested by the specific evaluations devoted to it under the auspices of the National Institute of Standards and Technology for telephone speech, broadcast news and meetings.[4] A curated list of speaker diarization research resources is maintained in Quan Wang's GitHub repository.[5]
In speaker diarisation, one of the most popular methods is to use a Gaussian mixture model to model each of the speakers, and assign the corresponding frames for each speaker with the help of a hidden Markov model. There are two main kinds of clustering strategies. The first is by far the most popular and is called bottom-up: the algorithm starts by splitting the full audio content into a succession of clusters and progressively tries to merge the redundant clusters in order to reach a situation where each cluster corresponds to a real speaker. The second clustering strategy is called top-down: it starts with one single cluster for all the audio data and tries to split it iteratively until reaching a number of clusters equal to the number of speakers.
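The bottom-up strategy can be sketched as plain agglomerative clustering. This is a minimal illustration only: it assumes each speech segment has already been reduced to a fixed-length embedding (the toy 2-D points below stand in for real speaker representations such as GMM statistics or neural embeddings), and it merges the closest pair of clusters until no pair is similar enough.

```python
import math

def centroid(cluster):
    """Mean embedding of all segments currently assigned to one cluster."""
    dim = len(cluster[0])
    return [sum(seg[i] for seg in cluster) / len(cluster) for i in range(dim)]

def bottom_up_diarisation(segments, stop_distance):
    """Start with one cluster per segment; repeatedly merge the closest pair."""
    clusters = [[seg] for seg in segments]
    while len(clusters) > 1:
        # Find the pair of clusters whose centroids are closest.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = math.dist(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > stop_distance:  # no sufficiently similar pair left to merge
            break
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Two "speakers": segment embeddings near (0, 0) and near (5, 5).
segments = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
clusters = bottom_up_diarisation(segments, stop_distance=1.0)
print(len(clusters))  # 2 clusters remain, one per speaker
```

The stopping threshold plays the role that model-selection criteria (e.g. a likelihood-ratio test between cluster models) play in full systems; the top-down strategy would instead begin with one cluster and split it recursively.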
A 2010 review can be found at [1].
More recently, speaker diarisation is performed vianeural networksleveraging large-scaleGPUcomputing and methodological developments indeep learning.[6]
There are some open source initiatives for speaker diarisation (in alphabetical order):
|
https://en.wikipedia.org/wiki/Speaker_diarisation
|
Speaker recognitionis the identification of a person from characteristics of voices.[1]It is used to answer the question "Who is speaking?" The termvoice recognition[2][3][4][5][6]can refer tospeaker recognitionorspeech recognition.Speaker verification(also calledspeaker authentication) contrasts with identification, andspeaker recognitiondiffers fromspeaker diarisation(recognizing when the same speaker is speaking).
Recognizing the speaker can simplify the task oftranslating speechin systems that have been trained on specific voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. Speaker recognition has a history dating back some four decades as of 2019 and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect bothanatomyand learned behavioral patterns.
There are two major applications of speaker recognition technologies and methodologies. If the speaker claims to be of a certain identity and the voice is used to verify this claim, this is calledverificationorauthentication. On the other hand, identification is the task of determining an unknown speaker's identity. In a sense, speaker verification is a 1:1 match where one speaker's voice is matched to a particular template whereas speaker identification is a 1:N match where the voice is compared against multiple templates.
From a security perspective, identification is different from verification. Speaker verification is usually employed as a "gatekeeper" in order to provide access to a secure system. These systems operate with the users' knowledge and typically require their cooperation. Speaker identification systems can also be implemented covertly without the user's knowledge to identify talkers in a discussion, alert automated systems of speaker changes, check if a user is already enrolled in a system, etc.
In forensic applications, it is common to first perform a speaker identification process to create a list of "best matches" and then perform a series of verification processes to determine a conclusive match. Working to match the samples from the speaker to the list of best matches helps figure out if they are the same person based on the amount of similarities or differences. The prosecution and defense use this as evidence to determine if the suspect is actually the offender.[7]
One of the earliest training technologies to commercialize was implemented in Worlds of Wonder's 1987 Julie doll. At that point, speaker independence was an intended breakthrough, and systems required a training period. A 1987 ad for the doll carried the tagline "Finally, the doll that understands you", despite the fact that it was described as a product "which children could train to respond to their voice".[8] The term voice recognition, even a decade later, referred to speaker independence.[9][clarification needed]
Each speaker recognition system has two phases: enrollment and verification. During enrollment, the speaker's voice is recorded and typically a number of features are extracted to form a voice print, template, or model. In the verification phase, a speech sample or "utterance" is compared against a previously created voice print. For identification systems, the utterance is compared against multiple voice prints in order to determine the best match(es) while verification systems compare an utterance against a single voice print. Because of the process involved, verification is faster than identification.
Speaker recognition systems fall into two categories: text-dependent and text-independent.[10]Text-dependent recognition requires the text to be the same for both enrollment and verification.[11]In a text-dependent system, prompts can either be common across all speakers (e.g. a common pass phrase) or unique. In addition, the use of shared-secrets (e.g.: passwords and PINs) or knowledge-based information can be employed in order to create amulti-factor authenticationscenario. Conversely, text-independent systems do not require the use of a specific text. They are most often used for speaker identification as they require very little if any cooperation by the speaker. In this case the text during enrollment and test is different. In fact, the enrollment may happen without the user's knowledge, as in the case for many forensic applications. As text-independent technologies do not compare what was said at enrollment and verification, verification applications tend to also employspeech recognitionto determine what the user is saying at the point of authentication.[citation needed]In text independent systems bothacousticsandspeech analysistechniques are used.[12]
Speaker recognition is apattern recognitionproblem. The various technologies used to process and store voice prints includefrequency estimation,hidden Markov models,Gaussian mixture models,pattern matchingalgorithms,neural networks,matrix representation, vector quantization anddecision trees. For comparing utterances against voice prints, more basic methods likecosine similarityare traditionally used for their simplicity and performance. Some systems also use "anti-speaker" techniques such ascohort modelsand world models. Spectral features are predominantly used in representing speaker characteristics.[13]Linear predictive coding(LPC) is aspeech codingmethod used in speaker recognition andspeech verification.[citation needed]
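The 1:1 verification and 1:N identification tasks described above can be sketched with the cosine-similarity comparison mentioned here. The voice-print vectors, speaker names, and acceptance threshold below are illustrative assumptions; real systems derive the vectors from acoustic features and tune the threshold on evaluation data.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two fixed-length voice-print vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(utterance, voice_print, threshold=0.8):
    """1:1 verification: accept the claimed identity above a threshold."""
    return cosine_similarity(utterance, voice_print) >= threshold

def identify(utterance, enrolled):
    """1:N identification: return the best-matching enrolled speaker."""
    return max(enrolled, key=lambda name: cosine_similarity(utterance, enrolled[name]))

# Voice prints created at enrollment (made-up 3-D embeddings).
enrolled = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.1, 0.8, 0.5],
}
utterance = [0.85, 0.15, 0.35]               # embedding of the test utterance
print(identify(utterance, enrolled))         # alice
print(verify(utterance, enrolled["alice"]))  # True
```

The example makes the speed difference concrete: verification compares against a single template, while identification must score the utterance against every enrolled voice print.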
Ambient noise levels can impede both collection of the initial and subsequent voice samples. Noise reduction algorithms can be employed to improve accuracy, but incorrect application can have the opposite effect. Performance degradation can result from changes in behavioural attributes of the voice and from enrollment using one telephone and verification on another telephone. Integration with two-factor authentication products is expected to increase. Voice changes due to ageing may impact system performance over time. Some systems adapt the speaker models after each successful verification to capture such long-term changes in the voice, though there is debate regarding the overall security impact imposed by automated adaptation.[citation needed]
Due to the introduction of legislation like theGeneral Data Protection Regulationin theEuropean Unionand theCalifornia Consumer Privacy Actin the United States, there has been much discussion about the use of speaker recognition in the work place. In September 2019 Irish speech recognition developer Soapbox Labs warned about the legal implications that may be involved.[14]
The first international patent was filed in 1983, coming from the telecommunication research inCSELT[15](Italy) by Michele Cavazza andAlberto Ciaramellaas a basis for both future telco services to final customers and to improve the noise-reduction techniques across the network.
Between 1996 and 1998, speaker recognition technology was used at theScobey–Coronach Border Crossingto enable enrolled local residents with nothing to declare to cross theCanada–United States borderwhen the inspection stations were closed for the night.[16]The system was developed for the U.S.Immigration and Naturalization Serviceby Voice Strategies of Warren, Michigan.[citation needed]
In 2013 Barclays Wealth, the private banking division of Barclays, became the first financial services firm to deploy voice biometrics as the primary means of identifying customers to their call centers. The system used passive speaker recognition to verify the identity of telephone customers within 30 seconds of normal conversation.[17] It was developed by voice recognition company Nuance (which in 2011 acquired Loquendo, CSELT's own speech-technology spin-off), the company behind Apple's Siri technology. 93% of customers gave the system a "9 out of 10" for speed, ease of use and security.[18]
Speaker recognition may also be used in criminal investigations, such as those of the 2014 executions of, amongst others,James FoleyandSteven Sotloff.[19]
In February 2016 UK high-street bankHSBCand its internet-based retail bankFirst Directannounced that it would offer 15 million customers its biometric banking software to access online and phone accounts using their fingerprint or voice.[20]
In 2023Vice NewsandThe Guardianseparately demonstrated they could defeat standard financial speaker-authentication systems usingAI-generated voicesgenerated from about five minutes of the target's voice samples.[21][22]
|
https://en.wikipedia.org/wiki/Speaker_recognition
|
Speech analyticsis the process of analyzing recorded calls to gather customer information to improve communication and future interaction. The process is primarily used by customer contact centers to extract information buried in client interactions with an enterprise.[1]Although speech analytics includes elements ofautomatic speech recognition, it is known for analyzing the topic being discussed, which is weighed against the emotional character of the speech and the amount and locations of speech versus non-speech during the interaction. Speech analytics in contact centers can be used to mine recorded customer interactions to surface the intelligence essential for building effective cost containment and customer service strategies. The technology can pinpoint cost drivers, trend analysis, identify strengths and weaknesses with processes and products, and help understand how the marketplace perceives offerings.[2]
Speech analytics provides a complete analysis of recorded phone conversations between a company and its customers.[3] It provides advanced functionality and valuable intelligence from customer calls. This information can be used to discover information relating to strategy, product, process, operational issues and contact center agent performance.[4] In addition, speech analytics can automatically identify areas in which contact center agents may need additional training or coaching,[5] and can automatically monitor the customer service provided on calls.[6]
The process can isolate the words and phrases used most frequently within a given time period, as well as indicate whether usage is trending up or down. This information is useful for supervisors, analysts, and others in an organization to spot changes in consumer behavior and take action to reduce call volumes—and increase customer satisfaction. It allows insight into a customer's thought process, which in turn creates an opportunity for companies to make adjustments.[7]
Speech analytics applications can spot spoken keywords or phrases, either as real-time alerts on live audio or as a post-processing step on recorded speech. This technique is also known asaudio mining. Other uses include categorization of speech in the contact center environment to identify calls from unsatisfied customers.[8]
Measures such asPrecision and recall, commonly used in the field ofInformation retrieval, are typical ways of quantifying the response of a speech analytics search system.[9]Precision measures the proportion of search results that are relevant to the query. Recall measures the proportion of the total number of relevant items that were returned by the search results. Where a standardised test set has been used, measures such as precision and recall can be used to directly compare the search performance of different speech analytics systems.
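Under these definitions, precision and recall for a single query reduce to set arithmetic over the returned results and the ground-truth relevant items. The call identifiers below are made up for illustration.

```python
def precision_recall(returned, relevant):
    """Precision and recall of a search result set against ground truth."""
    returned, relevant = set(returned), set(relevant)
    hits = returned & relevant                  # relevant items actually found
    precision = len(hits) / len(returned)       # fraction of results that are relevant
    recall = len(hits) / len(relevant)          # fraction of relevant items found
    return precision, recall

# 4 calls returned by a keyword search; 5 calls actually contain the phrase.
returned = {"call_1", "call_2", "call_3", "call_9"}
relevant = {"call_1", "call_2", "call_3", "call_4", "call_5"}
p, r = precision_recall(returned, relevant)
print(p, r)  # 0.75 0.6
```

Note the trade-off implicit in the numbers: returning more results can raise recall while lowering precision, which is why both measures are reported together when comparing systems on a standardised test set.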
Making a meaningful comparison of the accuracy of different speech analytics systems can be difficult. The output of LVCSR systems can be scored against reference word-level transcriptions to produce a value for the word error rate (WER), but because phonetic systems use phones as the basic recognition unit, rather than words, comparisons using this measure cannot be made. When speech analytics systems are used to search for spoken words or phrases, what matters to the user is the accuracy of the search results that are returned. Because the impact of individual recognition errors on these search results can vary greatly, measures such as word error rate are not always helpful in determining overall search accuracy from the user perspective.
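The word error rate mentioned above is conventionally computed as the Levenshtein (edit) distance between the reference and hypothesis word sequences, divided by the reference length. A minimal sketch, using a mondegreen from earlier in this article as the misrecognised hypothesis:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution and one deletion against a 6-word reference: WER = 2/6.
print(word_error_rate("gladly the cross i would bear",
                      "gladly the cross eyed bear"))
```

As the surrounding text notes, this word-level score is only defined against a word transcription, so it cannot be computed for systems whose output is a phoneme stream, and two systems with the same WER can still differ sharply in search accuracy depending on which words they get wrong.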
According to the US Government Accountability Office,[10] "data reliability refers to the accuracy and completeness of computer-processed data, given the uses they are intended for." In the realm of speech recognition and analytics, "completeness" is measured by the "detection rate", and usually as accuracy goes up, the detection rate goes down.[11]
Some speech analytics vendors use the "engine" of a third party, while others develop proprietary engines. The technology mainly uses three approaches. The phonetic approach is the fastest for processing, mostly because the size of the grammar is very small, with a phoneme as the basic recognition unit. There are only a few tens of unique phonemes in most languages, and the output of this recognition is a stream (text) of phonemes, which can then be searched. Large-vocabulary continuous speech recognition (LVCSR, more commonly known as speech-to-text, full transcription or ASR, automatic speech recognition) uses a set of words (bi-grams, tri-grams etc.) as the basic unit. This approach requires hundreds of thousands of words to match the audio against. It can surface new business issues, the queries are much faster, and the accuracy is higher than with the phonetic approach.[12]
Extendedspeech emotion recognitionand prediction is based on three main classifiers: kNN, C4.5 and SVM RBF Kernel. This set achieves better performance than each basic classifier taken separately. It is compared with two other sets of classifiers: one-against-all (OAA) multiclass SVM with Hybrid kernels and the set of classifiers which consists of the following two basic classifiers: C5.0 and Neural Network. The proposed variant achieves better performance than the other two sets of classifiers.[13]
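Combining several classifiers as described above can be sketched with simple majority voting. The three "classifiers" below are stand-in threshold rules over a single feature, not real trained kNN, C4.5, or SVM models, and the labels are invented.

```python
from collections import Counter

def clf_a(x):  # stand-in for kNN
    return "angry" if x > 0.6 else "neutral"

def clf_b(x):  # stand-in for a decision tree (C4.5)
    return "angry" if x > 0.5 else "neutral"

def clf_c(x):  # stand-in for an SVM with RBF kernel
    return "angry" if x > 0.8 else "neutral"

def ensemble_predict(x):
    """Majority vote over the three basic classifiers."""
    votes = Counter(clf(x) for clf in (clf_a, clf_b, clf_c))
    return votes.most_common(1)[0][0]

print(ensemble_predict(0.7))  # "angry": two of three classifiers agree
```

The intuition is the one the paragraph states: the set can outvote any single classifier's mistake, so it performs at least as well as its weakest member on cases where the others agree.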
Market research indicates that speech analytics is projected to become a billion-dollar industry by 2020, with North America having the largest market share.[14] The growth rate is attributed to rising requirements for compliance and risk management, as well as an increase in industry competition through market intelligence.[15] The telecommunications, IT, and outsourcing segments of the industry are considered to hold the largest market share, with growth expected from the travel and hospitality segments.[14]
|
https://en.wikipedia.org/wiki/Speech_analytics
|
A speech interface guideline is a guideline for guiding decisions and criteria when designing interfaces operated by the human voice. A speech interface system has many advantages, such as consistent service and cost savings. For users, however, listening is a difficult task, and it can become impossible when too many options are provided at once, leaving the user unable to reach a decision intuitively. To avoid this problem, developers usually limit the options and present a few clear choices. The guideline suggests solutions that can satisfy the users (customers). Its goal is to make an automated transaction at least as attractive and efficient as interacting with an attendant.
The following guidelines are given by the Lucent Technologies (now Alcatel-Lucent USA) CONVERSANT System Version 6.0 Application Design Guidelines.[1]
|
https://en.wikipedia.org/wiki/Speech_interface_guideline
|
As of the early 2000s, several speech recognition (SR) software packages exist for Linux. Some of them are free and open-source software and others are proprietary software. Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control may refer to software used for communicating operational commands to a computer.
In the late 1990s, a Linux version of ViaVoice, created by IBM, was made available to users for no charge. In 2002, the free software development kit (SDK) was removed by the developer.
In the early 2000s, there was a push to get a high-quality Linux-native speech recognition engine developed. As a result, several projects dedicated to creating Linux speech recognition programs were begun, such as Mycroft, which is similar to Microsoft Cortana but open-source.
It is essential to compile a speech corpus to produce acoustic models for speech recognition projects. VoxForge is a free speech corpus and acoustic model repository that was built to collect transcribed speech for use in speech recognition projects. VoxForge accepts crowdsourced speech samples and corrections of recognized speech sequences. It is licensed under the GNU General Public License (GPL).
The first step is to begin recording an audio stream on a computer. The user then has two main processing options: recognizing the speech locally on the machine, or sending the audio to a remote server for recognition.
Remote recognition was formerly used by smartphones because they lacked sufficient performance, working memory, or storage to process speech recognition within the phone. These limits have largely been overcome, although server-based SR on mobile devices remains widespread.
Discrete speech recognition can be performed within aweb browserand works well with supported browsers. Remote SR does not require installing software on a desktop computer or mobile device as it is mainly a server-based system with the inherent security issues noted above.
The following is a list of projects dedicated to implementing speech recognition in Linux, and major native solutions. These are not end-user applications. These are programminglibrariesthat may be used to develop end-user applications.
Voice control typically requires a much smaller vocabulary than general speech recognition and is thus much easier to implement.
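The small-vocabulary advantage can be sketched as fuzzy matching of an utterance against a short command list. The commands, the mapped actions, and the similarity cutoff below are invented for illustration.

```python
import difflib

# Hypothetical command vocabulary mapping utterances to actions.
COMMANDS = {
    "open terminal": "launch-terminal",
    "close window": "wmctrl-close",
    "volume up": "volume+5",
    "volume down": "volume-5",
}

def dispatch(heard, cutoff=0.6):
    """Map a (possibly misrecognized) utterance to the closest known
    command, or return None if nothing is close enough."""
    match = difflib.get_close_matches(heard, COMMANDS, n=1, cutoff=cutoff)
    return COMMANDS[match[0]] if match else None

print(dispatch("open termnal"))  # "launch-terminal" despite the misrecognition
```

With only a handful of candidates to choose between, even an imperfect recognizer can pick the intended command reliably, which is why voice control is so much easier than full dictation.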
Simple software combined with keyboard shortcuts has the earliest potential for practically accurate voice control in Linux.
It is possible to use programs such as Dragon NaturallySpeaking in Linux by using Wine, though some problems may arise, depending on which version is used.[3]
It is also possible to use Windows speech recognition software under Linux. Using no-cost virtualization software, it is possible to run Windows and NaturallySpeaking under Linux. VMware Server or VirtualBox support copy and paste to/from a virtual machine, making dictated text easily transferable to/from the virtual machine.
|
https://en.wikipedia.org/wiki/Speech_recognition_software_for_Linux
|