Semasiography ('writing with signs', from Greek semasia 'signification' + graphia 'writing') is the use of symbols, called semasiographs, to "communicate information without the necessary intercession of forms of speech". This non-phonetic technique is studied in semasiology within the field of linguistics. Semasiography predates the advent of language-based writing. Contemporary systems like musical and mathematical notation, computer icons, and emoji have also been characterized as semasiographies.[1]
https://en.wikipedia.org/wiki/Semasiography
In computer science, syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language "sweeter" for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer. Syntactic sugar is usually shorthand for a common operation that could also be expressed in an alternate, more verbose form: the programmer has a choice of whether to use the shorter form or the longer form, but will usually use the shorter form since it is shorter and easier to type and read. For example, many programming languages provide special syntax for referencing and updating array elements. Abstractly, an array reference is a procedure of two arguments, an array and a subscript vector, which could be expressed as get_array(Array, vector(i,j)). Instead, many languages provide syntax such as Array[i,j]. Similarly, an array element update is a procedure of three arguments, for example set_array(Array, vector(i,j), value), but many languages also provide syntax such as Array[i,j] = value. A construct in a language is syntactic sugar if it can be removed from the language without any effect on what the language can do: functionality and expressive power will remain the same. Language processors, including compilers and static analyzers, often expand sugared constructs into their more verbose equivalents before processing, a process sometimes called "desugaring". The term syntactic sugar was coined by Peter J. Landin in 1964 to describe the surface syntax of a simple ALGOL-like programming language which was defined semantically in terms of the applicative expressions of lambda calculus,[1][2] centered on lexically replacing λ with "where".
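The array example can be made concrete in code. In the Go sketch below, getArray and setArray are illustrative names for the desugared, procedure-style forms that the bracket syntax abbreviates; they are not part of any language:

```go
package main

import "fmt"

// Desugared, procedure-style forms of array access. In a language where
// brackets are pure sugar, a compiler could rewrite a[i] into calls like
// these (the names getArray/setArray are illustrative only).
func getArray(a []int, i int) int    { return a[i] }
func setArray(a []int, i int, v int) { a[i] = v }

func main() {
	a := []int{10, 20, 30}
	setArray(a, 1, 99)          // verbose form of the sugared a[1] = 99
	fmt.Println(getArray(a, 1)) // verbose form of the sugared a[1]; prints 99
}
```

Either spelling leaves the program's behavior unchanged, which is exactly the property that makes the bracket form "sugar".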
Later programming languages, such as CLU, ML and Scheme, extended the term to refer to syntax within a language which could be defined in terms of a language core of essential constructs; the convenient, higher-level features could be "desugared" and decomposed into that subset.[3] This is, in fact, the usual mathematical practice of building up from primitives. Building on Landin's distinction between essential language constructs and syntactic sugar, in 1991 Matthias Felleisen proposed a codification of "expressive power" to align with "widely held beliefs" in the literature. He defined "more expressive" to mean that without the language constructs in question, a program would have to be completely reorganized.[4] Some programmers feel that these syntax usability features are either unimportant or outright frivolous. Notably, special syntactic forms make a language less uniform and its specification more complex, and may cause problems as programs become large and complex. This view is particularly widespread in the Lisp community, as Lisp has very simple and regular syntax, and its surface syntax can easily be modified.[12] For example, Alan Perlis once quipped in "Epigrams on Programming", in a reference to bracket-delimited languages, that "syntactic sugar causes cancer of the semicolons".[13] The metaphor has been extended by coining the term syntactic salt, which indicates a feature designed to make it harder to write bad code.[14] Specifically, syntactic salt is a hoop that programmers must jump through just to prove that they know what is going on, rather than to express a program action.
In C#, when hiding an inherited class member, a compiler warning is issued unless the new keyword is used to specify that the hiding is intentional.[15] To avoid potential bugs owing to the similarity of its switch statement syntax to that of C or C++, C# requires a break for each non-empty case label of a switch (unless goto, return, or throw is used), even though it does not allow implicit fall-through.[16] (Using goto and specifying the subsequent label produces a C/C++-like fall-through.) Syntactic salt may defeat its purpose by making the code unreadable and thus worsen its quality; in extreme cases, the essential part of the code may be shorter than the overhead introduced to satisfy language requirements. An alternative to syntactic salt is generating compiler warnings when there is a high probability that the code is the result of a mistake, a practice common in modern C/C++ compilers. Other extensions are syntactic saccharin and syntactic syrup, meaning gratuitous syntax that does not make programming any easier.[17][18][19][20] Data types with core syntactic support are said to be "sugared types".[21][22][23] Common examples include quote-delimited strings, curly braces for object and record types, and square brackets for arrays.
https://en.wikipedia.org/wiki/Syntactic_sugar
In mathematics and physics, vector notation is a commonly used notation for representing vectors,[1][2] which may be Euclidean vectors, or more generally, members of a vector space. For denoting a vector, the common typographic convention is lower-case, upright boldface type, as in v. The International Organization for Standardization (ISO) recommends either bold italic serif, as in v, or non-bold italic serif accented by a right arrow, as in \vec{v}.[3] In advanced mathematics, vectors are often represented in a simple italic type, like any variable.[citation needed] Vector representations include Cartesian, polar, cylindrical, and spherical coordinates. In 1835, Giusto Bellavitis introduced the idea of equipollent directed line segments, AB ≏ CD, which resulted in the concept of a vector as an equivalence class of such segments.[4] The term vector was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: Sq = a, for the scalar part of q, and Vq = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (·), the quaternion product of two vectors p and q can be written pq = −p·q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis.[5] In 1891, Oliver Heaviside argued for Clarendon type to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell.[6] In 1912, J. B.
Shaw contributed his "Comparative Notation for Vector Expressions" to the Bulletin of the Quaternion Society.[7] Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication.[8] Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians were not taken with quaternions as much as were English-speaking mathematicians. When Felix Klein was organizing the German mathematical encyclopedia, he assigned Arnold Sommerfeld to standardize vector notation.[9] In 1950, when Academic Press published G. Kuerti's translation of the second edition of volume 2 of Lectures on Theoretical Physics by Sommerfeld, vector notation was the subject of a footnote: "In the original German text, vectors and their components are printed in the same Gothic types. The more usual way of making a typographical distinction between the two has been adopted for this translation."[10] Felix Klein commented on differences in notation of vectors and their operations in 1925, through a Mr. Seyfarth who prepared a supplement to Elementary Mathematics from an Advanced Standpoint: Geometry after "repeated conferences" with him.[11]: vi The terms line-segment, plane-segment, plane magnitude, inner and outer product come from Grassmann, while the words scalar, vector, scalar product, and vector product came from Hamilton. The disciples of Grassmann, in other ways so orthodox, replaced in part the appropriate expressions of the master by others. The existing terminologies were merged or modified, and the symbols which indicate the separate operations have been used with the greatest arbitrariness.
On these accounts even for the expert, a great lack of clearness has crept into this field, which is mathematically so simple.[11]: 53 Efforts to unify the various notational terms through committees of the International Congress of Mathematicians were described as follows: The Committee which was set up in Rome for the unification of vector notation did not have the slightest success, as was to have been expected. At the following Congress in Cambridge (1912), they had to explain that they had not finished their task, and to request that their time be extended to the meeting of the next Congress, which was to have taken place in Stockholm in 1916, but which was omitted because of the war. The committee on units and symbols met a similar fate. It published in 1921 a proposed notation for vector quantities, which aroused at once and from many sides the most violent opposition.[11]: 52 Given a Cartesian coordinate system, a vector may be specified by its Cartesian coordinates. A vector v in n-dimensional real coordinate space can be specified using a tuple (ordered list) of coordinates, v = (v1, v2, …, vn). Sometimes angle brackets ⟨…⟩ are used instead of parentheses.[12] A vector in ℝ^n can also be specified as a row or column matrix containing the ordered set of components. A vector specified as a row matrix is known as a row vector; one specified as a column matrix is known as a column vector. Thus an n-dimensional vector v can be specified either as the 1 × n row matrix [v1 v2 … vn] or as its n × 1 column transpose, where v1, v2, …, vn−1, vn are the components of v. In some advanced contexts, a row and a column vector have different meanings; see covariance and contravariance of vectors for more.
A vector in ℝ^3 (or fewer dimensions, such as ℝ^2, where vz below is zero) can be specified as the sum of the scalar multiples of the components of the vector with the members of the standard basis in ℝ^3. The basis is represented with the unit vectors î = (1, 0, 0), ĵ = (0, 1, 0), and k̂ = (0, 0, 1). A three-dimensional vector v can be specified in the following form, using unit vector notation: v = vx î + vy ĵ + vz k̂, where vx, vy, and vz are the scalar components of v. Scalar components may be positive or negative; the absolute value of a scalar component is its magnitude. The two polar coordinates of a point in a plane may be considered as a two-dimensional vector. Such a vector consists of a magnitude (or length) and a direction (or angle). The magnitude, typically represented as r, is the distance from a starting point, the origin, to the point which is represented. The angle, typically represented as θ (the Greek letter theta), is the angle, usually measured counterclockwise, between a fixed direction, typically that of the positive x-axis, and the direction from the origin to the point. The angle is typically reduced to lie within the range 0 ≤ θ < 2π radians or 0° ≤ θ < 360°. Vectors can be specified using either ordered pair notation (a subset of ordered set notation using only two components) or matrix notation, as with rectangular coordinates. In these forms, the first component of the vector is r (instead of v1), and the second component is θ (instead of v2).
To differentiate polar coordinates from rectangular coordinates, the angle may be prefixed with the angle symbol, ∠. Two-dimensional polar coordinates for v can thus be represented, in either ordered pair or matrix notation, as (r, ∠θ), where r is the magnitude, θ is the angle, and the angle symbol (∠) is optional. Vectors can also be specified using simplified autonomous equations that define r and θ explicitly. This can be unwieldy, but is useful for avoiding the confusion with two-dimensional rectangular vectors that arises from using ordered pair or matrix notation. A two-dimensional vector whose magnitude is 5 units and whose direction is π/9 radians (20°) can be specified, for example, as (5, ∠20°), or by the equations r = 5, θ = 20°. A cylindrical vector is an extension of the concept of polar coordinates into three dimensions. It is akin to an arrow in the cylindrical coordinate system. A cylindrical vector is specified by a distance in the xy-plane, an angle, and a distance from the xy-plane (a height). The first distance, usually represented as r or ρ (the Greek letter rho), is the magnitude of the projection of the vector onto the xy-plane. The angle, usually represented as θ or φ (the Greek letter phi), is measured as the offset from the line collinear with the x-axis in the positive direction; the angle is typically reduced to lie within the range 0 ≤ θ < 2π. The second distance, usually represented as h or z, is the distance from the xy-plane to the endpoint of the vector. Cylindrical vectors use polar coordinates, where the second distance component is concatenated as a third component to form ordered triplets (again, a subset of ordered set notation) and matrices. The angle may be prefixed with the angle symbol (∠); the distance-angle-distance combination distinguishes cylindrical vectors in this notation from spherical vectors in similar notation.
A three-dimensional cylindrical vector v can be represented, in either ordered triplet or matrix notation, as (r, ∠θ, h), where r is the magnitude of the projection of v onto the xy-plane, θ is the angle between the positive x-axis and v, and h is the height from the xy-plane to the endpoint of v. Again, the angle symbol (∠) is optional. A cylindrical vector can also be specified directly, using simplified autonomous equations that define r (or ρ), θ (or φ), and h (or z). Consistency should be used when choosing the names to use for the variables; ρ should not be mixed with θ, and so on. A three-dimensional vector, the magnitude of whose projection onto the xy-plane is 5 units, whose angle from the positive x-axis is π/9 radians (20°), and whose height from the xy-plane is 3 units, can be specified, for example, as (5, ∠20°, 3). A spherical vector is another method for extending the concept of polar vectors into three dimensions. It is akin to an arrow in the spherical coordinate system. A spherical vector is specified by a magnitude, an azimuth angle, and a zenith angle. The magnitude is usually represented as ρ. The azimuth angle, usually represented as θ, is the (counterclockwise) offset from the positive x-axis. The zenith angle, usually represented as φ, is the offset from the positive z-axis. Both angles are typically reduced to lie within the range from zero (inclusive) to 2π (exclusive). Spherical vectors are specified like polar vectors, where the zenith angle is concatenated as a third component to form ordered triplets and matrices. The azimuth and zenith angles may both be prefixed with the angle symbol (∠); the prefix should be used consistently to produce the distance-angle-angle combination that distinguishes spherical vectors from cylindrical ones.
A three-dimensional spherical vector v can be represented, in either ordered triplet or matrix notation, as (ρ, ∠θ, ∠φ), where ρ is the magnitude, θ is the azimuth angle, and φ is the zenith angle. Like polar and cylindrical vectors, spherical vectors can be specified using simplified autonomous equations, in this case for ρ, θ, and φ. A three-dimensional vector whose magnitude is 5 units, whose azimuth angle is π/9 radians (20°), and whose zenith angle is π/4 radians (45°) can be specified, for example, as (5, ∠20°, ∠45°). In any given vector space, the operations of vector addition and scalar multiplication are defined. Normed vector spaces also define an operation known as the norm (or determination of magnitude). Inner product spaces also define an operation known as the inner product. In ℝ^n, the inner product is known as the dot product. In ℝ^3 and ℝ^7, an additional operation known as the cross product is also defined. Vector addition is represented with the plus sign used as an operator between two vectors. The sum of two vectors u and v would be represented as u + v. Scalar multiplication is represented in the same manner as algebraic multiplication. A scalar beside a vector (either or both of which may be in parentheses) implies scalar multiplication. The two common operators, a dot and a rotated cross, are also acceptable (although the rotated cross is almost never used), but they risk confusion with dot products and cross products, which operate on two vectors. The product of a scalar k with a vector v can thus be represented as kv, k(v), or k ⋅ v. Using the algebraic properties of subtraction and division, along with scalar multiplication, it is also possible to "subtract" two vectors and "divide" a vector by a scalar. Vector subtraction is performed by adding the scalar multiple of −1 with the second vector operand to the first vector operand.
This can be represented by the use of the minus sign as an operator. The difference between two vectors u and v can be represented as u − v. Scalar division is performed by multiplying the vector operand with the multiplicative inverse of the scalar operand. This can be represented by the use of the fraction bar or division signs as operators; the quotient of a vector v and a scalar c can be represented as v/c. The norm of a vector is represented with double bars on both sides of the vector. The norm of a vector v can be represented as ‖v‖. The norm is also sometimes represented with single bars, like |v|, but this can be confused with absolute value (which is a type of norm). The inner product of two vectors (also known as the scalar product, not to be confused with scalar multiplication) is represented as an ordered pair enclosed in angle brackets. The inner product of two vectors u and v would be represented as ⟨u, v⟩. In ℝ^n, the inner product is also known as the dot product. In addition to the standard inner product notation, the dot product notation (using the dot as an operator) can also be used (and is more common). The dot product of two vectors u and v can be represented as u ⋅ v. In some older literature, the dot product is implied between two vectors written side by side. This notation can be confused with the dyadic product between two vectors. The cross product of two vectors (in ℝ^3) is represented using the rotated cross as an operator. The cross product of two vectors u and v would be represented as u × v. By some conventions (e.g.
in France and in some areas of higher mathematics), this is also denoted by a wedge,[13] which avoids confusion with the wedge product since the two are functionally equivalent in three dimensions: u ∧ v. In some older literature, the following notation is used for the cross product between u and v: [u, v]. Vector notation is used with calculus through the nabla operator, ∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z. With a scalar function f, the gradient is written as ∇f; with a vector field F, the divergence is written as ∇ ⋅ F; and with a vector field F, the curl is written as ∇ × F.
https://en.wikipedia.org/wiki/Vector_notation
A binary-safe function is one that treats its input as a raw stream of bytes and ignores every textual aspect it may have. The term is mainly used in the PHP programming language to describe expected behaviour when passing binary data into functions whose main responsibility is text and string manipulation, and is used widely in the official PHP documentation.[1] While all textual data can be represented in binary form, it must be done so through character encoding. In addition to this, how newlines are represented may vary depending on the platform: Windows uses the two-byte sequence CR LF, Unix-like systems such as Linux and modern macOS use a single LF, and classic Mac OS used a single CR. This means that reading a file as binary data, parsing it as text, and then writing it back to disk (thus reconverting it back to binary form) may result in a different binary representation than the one originally used. Most programming languages let the programmer decide whether to parse the contents of a file as text or read them as binary data. To convey this intent, special flags or different functions exist when reading or writing files to disk. For example, in the PHP, C, and C++ programming languages, developers have to use fopen($filename, "rb") instead of fopen($filename, "r") to read the file as a binary stream instead of interpreting the textual data as such. This may also be referred to as reading in 'binary safe' mode.
https://en.wikipedia.org/wiki/Binary-safe
A bit array (also known as a bitmask,[1] bit map, bit set, bit string, or bit vector) is an array data structure that compactly stores bits. It can be used to implement a simple set data structure. A bit array is effective at exploiting bit-level parallelism in hardware to perform operations quickly. A typical bit array stores kw bits, where w is the number of bits in the unit of storage, such as a byte or word, and k is some nonnegative integer. If w does not divide the number of bits to be stored, some space is wasted due to internal fragmentation. A bit array is a mapping from some domain (almost always a range of integers) to values in the set {0, 1}. The values can be interpreted as dark/light, absent/present, locked/unlocked, valid/invalid, et cetera. The point is that there are only two possible values, so they can be stored in one bit. As with other arrays, the access to a single bit can be managed by applying an index to the array. Assuming its size (or length) to be n bits, the array can be used to specify a subset of the domain (e.g. {0, 1, 2, …, n−1}), where a 1-bit indicates the presence and a 0-bit the absence of a number in the set. This set data structure uses about n/w words of space, where w is the number of bits in each machine word. Whether the least significant bit (of the word) or the most significant bit indicates the smallest-index number is largely irrelevant, but the former tends to be preferred (on little-endian machines). A finite binary relation may be represented by a bit array called a logical matrix. In the calculus of relations, these arrays are composed with matrix multiplication where the arithmetic is Boolean, and such a composition represents composition of relations.[2] Although most machines are not able to address individual bits in memory, nor have instructions to manipulate single bits, each bit in a word can be singled out and manipulated using bitwise operations.
In particular: OR can be used to set a bit to one; AND (with a mask whose bit is zero) to set a bit to zero; AND together with zero-testing to determine whether a bit is set; XOR to invert or toggle a bit; and NOT to invert all bits. To obtain the bit mask needed for these operations, we can use a bit shift operator to shift the number 1 to the left by the appropriate number of places, as well as bitwise negation if necessary. Given two bit arrays of the same size representing sets, we can compute their union, intersection, and set-theoretic difference using n/w simple bit operations each (2n/w for difference), as well as the complement of either. If we wish to iterate through the bits of a bit array, we can do this efficiently using a doubly nested loop that loops through each word, one at a time; only n/w memory accesses are required. Such word-at-a-time loops exhibit ideal locality of reference and subsequently receive a large performance boost from a data cache: if a cache line is k words, only about n/(wk) cache misses will occur. As with character strings, it is straightforward to define length, substring, lexicographical compare, concatenation, and reverse operations. The implementation of some of these operations is sensitive to endianness. If we wish to find the number of 1 bits in a bit array, sometimes called the population count or Hamming weight, there are efficient branch-free algorithms that can compute the number of bits in a word using a series of simple bit operations. We simply run such an algorithm on each word and keep a running total. Counting zeros is similar. See the Hamming weight article for examples of an efficient implementation. Vertical flipping of a one-bit-per-pixel image, or some FFT algorithms, requires flipping the bits of individual words (so b31 b30 … b0 becomes b0 … b30 b31).
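The single-bit and whole-array operations just described can be sketched in Go, packing bits into 64-bit words; the BitArray type and its method names are illustrative, not a standard API:

```go
package main

import (
	"fmt"
	"math/bits"
)

const w = 64 // bits per storage word

// BitArray packs n bits into ceil(n/w) uint64 words.
type BitArray []uint64

func New(n int) BitArray { return make(BitArray, (n+w-1)/w) }

func (b BitArray) Set(i int)       { b[i/w] |= 1 << (i % w) }            // OR sets a bit to one
func (b BitArray) Clear(i int)     { b[i/w] &^= 1 << (i % w) }           // AND NOT sets a bit to zero
func (b BitArray) Toggle(i int)    { b[i/w] ^= 1 << (i % w) }            // XOR inverts a bit
func (b BitArray) Test(i int) bool { return b[i/w]&(1<<(i%w)) != 0 }     // AND plus zero-test

// Union and Intersect take n/w word-at-a-time operations each.
func (b BitArray) Union(c BitArray) {
	for k := range b {
		b[k] |= c[k]
	}
}
func (b BitArray) Intersect(c BitArray) {
	for k := range b {
		b[k] &= c[k]
	}
}

// Count is the population count (Hamming weight), one word at a time.
func (b BitArray) Count() int {
	total := 0
	for _, word := range b {
		total += bits.OnesCount64(word)
	}
	return total
}

func main() {
	b := New(128)
	b.Set(0)
	b.Set(3)
	b.Set(100)
	b.Clear(3)
	fmt.Println(b.Test(0), b.Test(3), b.Count()) // true false 2
}
```

The word-at-a-time loops in Union, Intersect, and Count are exactly the n/w-operation, cache-friendly pattern the text describes.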
When a bit-reversal operation is not available on the processor, it is still possible to proceed by successive passes, in this example on 32 bits: first exchanging adjacent single bits, then bit pairs, then nibbles, and so on. The find first set or find first one operation identifies the index or position of the 1-bit with the smallest index in an array, and has widespread hardware support (for arrays not larger than a word) and efficient algorithms for its computation. When a priority queue is stored in a bit array, find first one can be used to identify the highest-priority element in the queue. To expand a word-size find first one to longer arrays, one can find the first nonzero word and then run find first one on that word. The related operations find first zero, count leading zeros, count leading ones, count trailing zeros, count trailing ones, and log base 2 (see find first set) can also be extended to a bit array in a straightforward manner. A bit array is the densest storage for "random" bits, that is, where each bit is equally likely to be 0 or 1, and each one is independent. But most data are not random, so it may be possible to store them more compactly. For example, the data of a typical fax image is not random and can be compressed. Run-length encoding is commonly used to compress these long streams. However, most compressed data formats are not so easy to access randomly; also, by compressing bit arrays too aggressively we run the risk of losing the benefits due to bit-level parallelism (vectorization). Thus, instead of compressing bit arrays as streams of bits, we might compress them as streams of bytes or words (see Bitmap index (compression)). Bit arrays, despite their simplicity, have a number of marked advantages over other data structures for the same problems; however, they are not the solution to everything. Because of their compactness, bit arrays have a number of applications in areas where space or efficiency is at a premium.
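Such a bit-reversal by successive passes, for 32-bit words, can be written as five shift-and-mask steps (Go's math/bits package already provides this as bits.Reverse32, and the find-first-set family as bits.TrailingZeros32 and friends; the hand-written version is shown for illustration):

```go
package main

import "fmt"

// reverse32 flips the bit order of a word in five passes, swapping
// groups of 1, 2, 4, 8 and finally 16 bits.
func reverse32(x uint32) uint32 {
	x = (x&0x55555555)<<1 | (x>>1)&0x55555555 // swap adjacent bits
	x = (x&0x33333333)<<2 | (x>>2)&0x33333333 // swap bit pairs
	x = (x&0x0F0F0F0F)<<4 | (x>>4)&0x0F0F0F0F // swap nibbles
	x = (x&0x00FF00FF)<<8 | (x>>8)&0x00FF00FF // swap bytes
	return x<<16 | x>>16                      // swap halfwords
}

func main() {
	fmt.Printf("%032b\n", reverse32(1)) // bit 0 moves to bit 31
}
```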
Most commonly, bit arrays are used to represent a simple group of Boolean flags or an ordered sequence of Boolean values. Bit arrays are used for priority queues, where the bit at index k is set if and only if k is in the queue; this data structure is used, for example, by the Linux kernel, and benefits strongly from a find-first-zero operation in hardware. Bit arrays can be used for the allocation of memory pages, inodes, disk sectors, etc. In such cases, the term bitmap may be used. However, this term is frequently used to refer to raster images, which may use multiple bits per pixel. Another application of bit arrays is the Bloom filter, a probabilistic set data structure that can store large sets in a small space in exchange for a small probability of error. It is also possible to build probabilistic hash tables based on bit arrays that accept either false positives or false negatives. Bit arrays and the operations on them are also important for constructing succinct data structures, which use close to the minimum possible space. In this context, operations like finding the nth 1 bit or counting the number of 1 bits up to a certain position become important. Bit arrays are also a useful abstraction for examining streams of compressed data, which often contain elements that occupy portions of bytes or are not byte-aligned. For example, the compressed Huffman coding representation of a single 8-bit character can be anywhere from 1 to 255 bits long. In information retrieval, bit arrays are a good representation for the posting lists of very frequent terms. If we compute the gaps between adjacent values in a list of strictly increasing integers and encode them using unary coding, the result is a bit array with a 1 bit in the nth position if and only if n is in the list. The implied probability of a gap of n is 1/2^n.
This is also the special case of Golomb coding where the parameter M is 1; this parameter is only normally selected when −log(2 − p) / log(1 − p) ≤ 1, or roughly when the term occurs in at least 38% of documents. Given a big file of IPv4 addresses (more than 100 GB), suppose we need to count the unique addresses. If we use a generic map[string]bool, we will need more than 64 GB of RAM, so a bit map is a better fit. The APL programming language fully supports bit arrays of arbitrary shape and size as a Boolean datatype distinct from integers. All major implementations (Dyalog APL, APL2, APL Next, NARS2000, GNU APL, etc.) pack the bits densely into whatever size the machine word is. Bits may be accessed individually via the usual indexing notation (A[3]) as well as through all of the usual primitive functions and operators, where they are often operated on using a special-case algorithm, such as summing the bits via a table lookup of bytes. The C programming language's bit fields, pseudo-objects found in structs with size equal to some number of bits, are in fact small bit arrays; they are limited in that they cannot span words. Although they give a convenient syntax, the bits are still accessed using bytewise operators on most machines, and they can only be defined statically (like C's static arrays, their sizes are fixed at compile time). It is also a common idiom for C programmers to use words as small bit arrays and access bits of them using bit operators. A widely available header file included in the X11 system, xtrapbits.h, is "a portable way for systems to define bit field manipulation of arrays of bits." A more explanatory description of the aforementioned approach can be found in the comp.lang.c FAQ. In C++, although individual bools typically occupy the same space as a byte or an integer, the STL type vector<bool> is a partial template specialization in which bits are packed as a space-efficiency optimization.
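Returning to the IPv4-counting example above: a minimal Go sketch of the idea uses one bit per possible IPv4 address, so that memory use is a constant 2^32 bits = 512 MiB regardless of input size. The function name countUnique is invented, and a real version would presumably stream addresses from the file rather than take a slice:

```go
package main

import (
	"fmt"
	"net"
)

// countUnique counts distinct IPv4 addresses using a bit map with one
// bit per possible address: 2^32 bits = 512 MiB, however large the input.
func countUnique(addrs []string) int {
	seen := make([]uint8, 1<<29) // 2^32 bits / 8 bits per byte
	count := 0
	for _, s := range addrs {
		ip := net.ParseIP(s).To4()
		if ip == nil {
			continue // skip malformed addresses
		}
		// Pack the four octets into a 32-bit index.
		n := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
		if seen[n>>3]&(1<<(n&7)) == 0 { // not seen before?
			seen[n>>3] |= 1 << (n & 7)
			count++
		}
	}
	return count
}

func main() {
	fmt.Println(countUnique([]string{"10.0.0.1", "10.0.0.2", "10.0.0.1"})) // 2
}
```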
Since bytes (and not bits) are the smallest addressable unit in C++, the [] operator of vector<bool> does not return a reference to an element, but instead returns a proxy reference. This might seem a minor point, but it means that vector<bool> is not a standard STL container, which is why its use is generally discouraged. Another unique STL class, bitset,[3] creates a vector of bits fixed at a particular size at compile time, and in its interface and syntax more resembles the idiomatic use of words as bit sets by C programmers. It also has some additional power, such as the ability to efficiently count the number of bits that are set. The Boost C++ Libraries provide a dynamic_bitset class[4] whose size is specified at run time. The D programming language provides bit arrays in its standard library, Phobos, in std.bitmanip. As in C++, the [] operator does not return a reference, since individual bits are not directly addressable on most hardware; it instead returns a bool. In Java, the class BitSet creates a bit array that is then manipulated with functions named after bitwise operators familiar to C programmers. Unlike the bitset in C++, the Java BitSet does not have a "size" state (it has an effectively infinite size, initialized with 0 bits); a bit can be set or tested at any index. In addition, there is a class EnumSet, which represents a set of values of an enumerated type internally as a bit vector, as a safer alternative to bit fields. The .NET Framework supplies a BitArray collection class. It stores bits using an array of type int (each element in the array usually represents 32 bits).[5] The class supports random access and bitwise operators, can be iterated over, and its Length property can be changed to grow or truncate it. Although Standard ML has no support for bit arrays, Standard ML of New Jersey has an extension, the BitArray structure, in its SML/NJ Library. It is not fixed in size and supports set operations and bit operations, including, unusually, shift operations.
Haskelllikewise currently lacks standard support for bitwise operations, but bothGHCand Hugs provide aData.Bitsmodule with assorted bitwise functions and operators, including shift and rotate operations and an "unboxed" array over Boolean values may be used to model a Bit array, although this lacks support from the former module. InPerl, strings can be used as expandable bit arrays. They can be manipulated using the usual bitwise operators (~ | & ^),[6]and individual bits can be tested and set using thevecfunction.[7] InRuby, you can access (but not set) a bit of an integer (FixnumorBignum) using the bracket operator ([]), as if it were an array of bits. Apple'sCore Foundationlibrary containsCFBitVectorandCFMutableBitVectorstructures. PL/Isupports arrays ofbit stringsof arbitrary length, which may be either fixed-length or varying. The array elements may bealigned— each element begins on a byte or word boundary— orunaligned— elements immediately follow each other with no padding. PL/pgSQLand PostgreSQL's SQL supportbit stringsas native type. There are two SQL bit types:bit(n)andbit varying(n), wherenis a positive integer.[8] Hardware description languages such asVHDL,Verilog, andSystemVerilognatively support bit vectors as these are used to model storage elements likeflip-flops, hardware busses and hardware signals in general. In hardware verification languages such as OpenVera,eandSystemVerilog, bit vectors are used to sample values from the hardware models, and to represent data that is transferred to hardware during simulations. Common Lispprovides multi-dimensional bit arrays. A one-dimensionalbit-vectorimplementation is provided as a special case of the built-inarray, acting in a dual capacity as a class and a type specifier.[9]Bit arrays (and thus bit vectors) relies on the generalmake-arrayfunction to be configured with an element type ofbit, which optionally permits a bit vector to be designated as dynamically resizable. 
Thebit-vector, however, is not infinite in extent. A more restrictedsimple-bit-vectortype exists, which explicitly excludes the dynamic characteristics.[10]Bit vectors are represented as, and can be constructed in a more concise fashion by, thereader macro#*bits.[11]In addition to the general functions applicable to all arrays, dedicated operations exist for bit arrays. Single bits may be accessed and modified using thebitandsbitfunctions[12]and an extensive number of logical operations is supported.[13]
https://en.wikipedia.org/wiki/Bit_array
TheC programming languagehas a set of functions implementing operations onstrings(character strings and byte strings) in itsstandard library. Various operations, such as copying,concatenation,tokenizationand searching are supported. For character strings, the standard library uses the convention that strings arenull-terminated: a string ofncharacters is represented as anarrayofn+ 1elements, the last of which is a "NULcharacter" with numeric value 0. The only support for strings in the programming language proper is that the compiler translates quotedstring constantsinto null-terminated strings. A string is defined as a contiguous sequence ofcode unitsterminated by the first zero code unit (often called theNULcode unit).[1]This means a string cannot contain the zero code unit, as the first one seen marks the end of the string. Thelengthof a string is the number of code units before the zero code unit.[1]The memory occupied by a string is always one more code unit than the length, as space is needed to store the zero terminator. Generally, the termstringmeans a string where the code unit is of typechar, which is exactly 8 bits on all modern machines.C90defineswide strings[1]which use a code unit of typewchar_t, which is 16 or 32 bits on modern machines. This was intended forUnicodebut it is increasingly common to useUTF-8in normal strings for Unicode instead. Strings are passed to functions by passing a pointer to the first code unit. Sincechar *andwchar_t *are different types, the functions that process wide strings are different than the ones processing normal strings and have different names. String literals("text"in the C source code) are converted to arrays during compilation.[2]The result is an array of code units containing all the characters plus a trailing zero code unit. In C90L"text"produces a wide string. A string literal can contain the zero code unit (one way is to put\0into the source), but this will cause the string to end at that point. 
The rest of the literal will be placed in memory (with another zero code unit added to the end) but it is impossible to know those code units were translated from the string literal, therefore such source code isnota string literal.[3] Each string ends at the first occurrence of the zero code unit of the appropriate kind (charorwchar_t). Consequently, a byte string (char*) can contain non-NULcharacters inASCIIor anyASCII extension, but not characters in encodings such asUTF-16(even though a 16-bit code unit might be nonzero, its high or low byte might be zero). The encodings that can be stored in wide strings are defined by the width ofwchar_t. In most implementations,wchar_tis at least 16 bits, and so all 16-bit encodings, such asUCS-2, can be stored. Ifwchar_tis 32-bits, then 32-bit encodings, such asUTF-32, can be stored. (The standard requires a "type that holds any wide character", which on Windows no longer holds true since the UCS-2 to UTF-16 shift. This was recognized as a defect in the standard and fixed in C++.)[4]C++11 andC11add two types with explicit widthschar16_tandchar32_t.[5] Variable-width encodingscan be used in both byte strings and wide strings. String length and offsets are measured in bytes orwchar_t, not in "characters", which can be confusing to beginning programmers.UTF-8andShift JISare often used in C byte strings, whileUTF-16is often used in C wide strings whenwchar_tis 16 bits. Truncating strings with variable-width characters using functions likestrncpycan produce invalid sequences at the end of the string. This can be unsafe if the truncated parts are interpreted by code that assumes the input is valid. Support for Unicode literals such ascharfoo[512]="φωωβαρ";(UTF-8) orwchar_tfoo[512]=L"φωωβαρ";(UTF-16 or UTF-32, depends onwchar_t) is implementation defined,[6]and may require that the source code be in the same encoding, especially forcharwhere compilers might just copy whatever is between the quotes. 
Some compilers or editors will require entering all non-ASCII characters as \xNN sequences for each byte of UTF-8, and/or \uNNNN for each word of UTF-16. Since C11 (and C++11), a new literal prefix u8 is available that guarantees UTF-8 for a byte-string literal, as in char foo[512] = u8"φωωβαρ";.[7] Since C++20 and C23, a char8_t type was added that is meant to store UTF-8 characters, and the types of u8-prefixed character and string literals were changed to char8_t and char8_t[] respectively.

In historical documentation the term "character" was often used instead of "byte" for C strings, which leads many[who?] to believe that these functions somehow do not work for UTF-8. In fact all lengths are defined in bytes, and this is true in all implementations; these functions work as well with UTF-8 as with single-byte encodings. The BSD documentation has been fixed to make this clear, but POSIX, Linux, and Windows documentation still uses "character" in many places where "byte" or "wchar_t" is the correct term.

Functions for handling memory buffers can process sequences of bytes that include a null byte as part of the data. Names of these functions typically start with mem, as opposed to the str prefix.

Most of the functions that operate on C strings are declared in the string.h header (cstring in C++), while functions that operate on C wide strings are declared in the wchar.h header (cwchar in C++). These headers also contain declarations of functions used for handling memory buffers; the name is thus something of a misnomer.

Functions declared in string.h are extremely popular since, as a part of the C standard library, they are guaranteed to work on any platform which supports C. However, some security issues exist with these functions, such as potential buffer overflows when not used carefully and properly, leading programmers to prefer safer and possibly less portable variants, some popular ones of which are listed below.
Some of these functions also violate const-correctness by accepting a const string pointer and returning a non-const pointer within the string. To correct this, some have been separated into two overloaded functions in the C++ version of the standard library.

These functions all need an mbstate_t object, originally held in static memory (making the functions not thread-safe) and, in later additions, maintained by the caller. This was originally intended to track shift states in the mb encodings, but modern ones such as UTF-8 do not need this. However, these functions were designed on the assumption that the wc encoding is not a variable-width encoding, and are thus designed to deal with exactly one wchar_t at a time, passing it by value rather than using a string pointer. As UTF-16 is a variable-width encoding, the mbstate_t has been reused to keep track of surrogate pairs in the wide encoding, though the caller must still detect them and call mbtowc twice for a single character.[80][81][82] Later additions to the standard admit that the only conversion programmers are interested in is between UTF-8 and UTF-16, and provide it directly.

The C standard library contains several functions for numeric conversions. The functions that deal with byte strings are defined in the stdlib.h header (cstdlib header in C++). The functions that deal with wide strings are defined in the wchar.h header (cwchar header in C++).

The functions strchr, bsearch, strpbrk, strrchr, strstr, memchr and their wide counterparts are not const-correct, since they accept a const string pointer and return a non-const pointer within the string. This has been fixed in C23.[95]

Also, since Normative Amendment 1 (C95), the atoxx functions are considered subsumed by the strtoxxx functions, for which reason neither C95 nor any later standard provides wide-character versions of these functions.
The argument against the atoxx functions is that they do not differentiate between an error and a 0.[96]

Despite the well-established need to replace strcat[22] and strcpy[18] with functions that do not allow buffer overflows, no accepted standard has arisen. This is partly due to the mistaken belief by many C programmers that strncat and strncpy have the desired behavior; however, neither function was designed for this (they were intended to manipulate null-padded fixed-size string buffers, a data format less commonly used in modern software), and the behavior and arguments are non-intuitive and often written incorrectly even by expert programmers.[108]

The most popular[a] replacements are the strlcat[111] and strlcpy[112] functions, which appeared in OpenBSD 2.4 in December 1998.[108] These functions always write one NUL to the destination buffer, truncating the result if necessary, and return the size of buffer that would be needed, which allows detection of the truncation and provides a size for creating a new buffer that will not truncate. For a long time they were not included in the GNU C library (used by software on Linux), on the basis of allegedly being inefficient,[113] encouraging the use of C strings (instead of some superior alternative form of string),[114][115] and hiding other potential errors.[116][117] Even while glibc lacked support, strlcat and strlcpy were implemented in a number of other C libraries, including ones for OpenBSD, FreeBSD, NetBSD, Solaris, OS X, and QNX, as well as in alternative C libraries for Linux, such as libbsd, introduced in 2008,[118] and musl, introduced in 2011,[119][120] and the source code was added directly to other projects such as SDL, GLib, ffmpeg, rsync, and even internally in the Linux kernel.
This changed in 2024: the glibc FAQ notes that as of glibc 2.38, the code has been committed[121] and the functions thereby added.[122] These functions were standardized as part of POSIX.1-2024;[123] the Austin Group Defect Tracker ID 986 tracked some discussion about such plans for POSIX.

Sometimes memcpy[53] or memmove[55] are used, as they may be more efficient than strcpy because they do not repeatedly check for NUL (this is less true on modern processors). Since they take a buffer length as a parameter, correct setting of this parameter can avoid buffer overflows.

As part of its 2004 Security Development Lifecycle, Microsoft introduced a family of "secure" functions including strcpy_s and strcat_s (along with many others).[124] These functions were standardized with some minor changes as part of the optional C11 (Annex K) proposed by ISO/IEC WDTR 24731.[125] These functions perform various checks, including whether the string is too long to fit in the buffer. If the checks fail, a user-specified "runtime-constraint handler" function is called,[126] which usually aborts the program.[127][128] These functions attracted considerable criticism because initially they were implemented only on Windows, and at the same time warning messages started to be produced by Microsoft Visual C++ suggesting the use of these functions instead of standard ones. This has been speculated by some to be an attempt by Microsoft to lock developers into its platform.[129] Experience with these functions has shown significant problems with their adoption and errors in usage, so the removal of Annex K was proposed for the next revision of the C standard.[130] Usage of memset_s has been suggested as a way to avoid unwanted compiler optimizations.[131][132]
https://en.wikipedia.org/wiki/C_string_handling
TheC++programming language has support forstring handling, mostly implemented in itsstandard library. The language standard specifies several string types, some inherited fromC, some designed to make use of the language's features, such as classes andRAII. The most-used of these isstd::string. Since the initial versions of C++ had only the "low-level"C string handlingfunctionality and conventions, multiple incompatible designs for string handling classes have been designed over the years and are still used instead ofstd::string, and C++ programmers may need to handle multiple conventions in a single application. Thestd::stringtype is the main string datatype in standard C++ since 1998, but it was not always part of C++. From C, C++ inherited the convention of usingnull-terminated stringsthat are handled by apointerto their first element, and a library of functions that manipulate such strings. In modern standard C++, a string literal such as"hello"still denotes a NUL-terminated array of characters.[1] Using C++ classes to implement a string type offers several benefits of automatedmemory managementand a reduced risk of out-of-bounds accesses,[2]and more intuitive syntax for string comparison and concatenation. Therefore, it was strongly tempting to create such a class. 
Over the years, C++ application, library and framework developers produced their own, incompatible string representations, such as the one in AT&T's Standard Components library (the first such implementation, 1983)[3] or the CString type in Microsoft's MFC.[4] While std::string standardized strings, legacy applications still commonly contain such custom string types and libraries may expect C-style strings, making it "virtually impossible" to avoid using multiple string types in C++ programs[1] and requiring programmers to decide on the desired string representation ahead of starting a project.[4]

In a 1991 retrospective on the history of C++, its inventor Bjarne Stroustrup called the lack of a standard string type (and some other standard types) in C++ 1.0 the worst mistake he made in its development; "the absence of those led to everybody re-inventing the wheel and to an unnecessary diversity in the most fundamental classes".[3]

The various vendors' string types have different implementation strategies and performance characteristics. In particular, some string types use a copy-on-write strategy, where an operation such as the assignment b = a does not actually copy the content of a to b; instead, both strings share their contents and a reference count on the content is incremented. The actual copying is postponed until a mutating operation, such as appending a character to either string, makes the strings' contents differ. Copy-on-write can make major performance changes to code using strings (making some operations much faster and some much slower). Though std::string no longer uses it, many (perhaps most) alternative string libraries still implement copy-on-write strings.
Some string implementations store 16-bit or 32-bit code points instead of bytes; this was intended to facilitate processing of Unicode text.[5] However, it means that conversion to these types from std::string or from arrays of bytes is dependent on the "locale" and can throw exceptions.[6] Any processing advantage of 16-bit code units vanished when the variable-width UTF-16 encoding was introduced (though advantages remain when communicating with a 16-bit API such as Windows). Qt's QString is an example.[5]

Third-party string implementations also differed considerably in the syntax to extract or compare substrings, or to perform searches in the text.

The std::string class is the standard representation for a text string since C++98. The class provides some typical string operations like comparison, concatenation, find and replace, and a function for obtaining substrings. An std::string can be constructed from a C-style string, and a C-style string can also be obtained from one.[7]

The individual units making up the string are of type char, at least (and almost always) 8 bits each. In modern usage these are often not "characters", but parts of a multibyte character encoding such as UTF-8.

The copy-on-write strategy was deliberately allowed by the initial C++ standard for std::string because it was deemed a useful optimization, and it was used by nearly all implementations.[7] However, there were mistakes; in particular, operator[] returned a non-const reference in order to make it easy to port C in-place string manipulations (such code often assumed one byte per character, so this may not have been a good idea).
This allowed the following code, which shows that a copy must be made even though the result is almost always used only to examine the string and not modify it:[8][9]

This caused implementations, first MSVC and later GCC, to move away from copy-on-write.[10] It was also discovered that the overhead in multi-threaded applications due to the locking needed to examine or change the reference count was greater than the overhead of copying small strings on modern processors[11] (especially for strings smaller than the size of a pointer). The optimization was finally disallowed in C++11,[8] with the result that even passing a std::string as an argument to a function, for example void function_name(std::string s);, must be expected to perform a full copy of the string into newly allocated memory. The common idiom to avoid such copying is to pass it as a const reference.[12]

The C++17 standard added a new string_view class,[13] which is only a pointer and length to read-only data and makes passing arguments far faster than either of the above examples:

std::string is a typedef for a particular instantiation of the std::basic_string template class.[14] Its definition is found in the <string> header: Thus string provides basic_string functionality for strings having elements of type char. There is a similar class std::wstring, which consists of wchar_t, and is most often used to store UTF-16 text on Windows and UTF-32 on most Unix-like platforms.
The C++ standard, however, does not impose any interpretation asUnicodecode points or code units on these types and does not even guarantee that awchar_tholds more bits than achar.[15]To resolve some of the incompatibilities resulting fromwchar_t's properties,C++11added two new classes:std::u16stringandstd::u32string(made up of the new typeschar16_tandchar32_t), which are the given number of bits per code unit on all platforms.[16]C++11 also added newstring literalsof 16-bit and 32-bit "characters" and syntax for putting Unicode code points into null-terminated (C-style) strings.[17] Abasic_stringis guaranteed to be specializable for any type with achar_traitsstruct to accompany it. As of C++11, onlychar,wchar_t,char16_tandchar32_tspecializations are required to be implemented.[18] Abasic_stringis also aStandard Library container, and thus theStandard Library algorithmscan be applied to the code units in strings. The design ofstd::stringhas been held up as an example of monolithic design byHerb Sutter, who reckons that of the 103 member functions on the class in C++98, 71 could have beendecoupledwithout loss of implementation efficiency.[19]
https://en.wikipedia.org/wiki/C%2B%2B_string_handling
String functions are used in computer programming languages to manipulate a string or query information about a string (some do both). Most programming languages that have a string datatype will have some string functions, although there may be other low-level ways within each language to handle strings directly. In object-oriented languages, string functions are often implemented as properties and methods of string objects. In functional and list-based languages a string is represented as a list (of character codes), so all list-manipulation procedures could be considered string functions. However, such languages may implement a subset of explicit string-specific functions as well.

For functions that manipulate strings, modern object-oriented languages like C# and Java have immutable strings and return a copy (in newly allocated dynamic memory), while others, like C, manipulate the original string unless the programmer copies the data to a new string. See for example Concatenation below.

The most basic example of a string function is the length(string) function, which returns the length of a string literal. Other languages may have string functions with similar or exactly the same syntax, parameters or outcomes; for example, in many languages the length function is represented as len(string). The list of common functions below, including the different names used, aims to help programmers find the equivalent function in a language. Note that string concatenation and regular expressions are handled in separate pages. Statements in guillemets (« … ») are optional.

Tests if two strings are equal. See also #Compare. Note that doing equality checks via a generic Compare-with-integer-result is not only confusing for the programmer but is often a significantly more expensive operation; this is especially true when using "C-strings".
Examples

^a Given a set of characters, SCAN returns the position of the first character found,[19] while VERIFY returns the position of the first character that does not belong to the set.[20]

Tests if two strings are not equal. See also #Equality.

[The per-language comparison table is not reproduced here; its entries cross-reference #Find, #rfind, #length ("«FUNCTION» BYTE-LENGTH(string)"), #substring, string.split(limit, separator), #Format, #trim, and #Compare (integer result).]

trim or strip is used to remove whitespace from the beginning, end, or both beginning and end, of a string.

Other languages

In languages without a built-in trim function, it is usually simple to create a custom function which accomplishes the same task.

APL can use regular expressions directly: Alternatively, a functional approach combining Boolean masks that filter away leading and trailing spaces: Or reverse and remove leading spaces, twice:

In AWK, one can use regular expressions to trim: or:

There is no standard trim function in C or C++. Most of the available string libraries[55] for C contain code which implements trimming, or functions that significantly ease an efficient implementation. The function has also often been called EatWhitespace in some non-standard C libraries. In C, programmers often combine an ltrim and an rtrim to implement trim:

The open-source C++ library Boost has several trim variants, including a standard one:[56] With Boost's function named simply trim, the input sequence is modified in place, and no result is returned. Another open-source C++ library, Qt, has several trim variants, including a standard one:[57]

The Linux kernel also includes a strip function, strstrip(), since 2.6.18-rc1, which trims the string "in place". Since 2.6.33-rc1, the kernel uses strim() instead of strstrip() to avoid false warnings.[58]

A trim algorithm in Haskell: may be interpreted as follows: f drops the preceding whitespace and reverses the string; f is then again applied to its own output. Note that the type signature (the second line) is optional.
The trim algorithm inJis afunctionaldescription: That is: filter (#~) for non-space characters (' '&~:) between leading (+./\) and (*.) trailing (+./\.) spaces. There is a built-in trim function in JavaScript 1.8.1 (Firefox 3.5 and later), and the ECMAScript 5 standard. In earlier versions it can be added to the String object's prototype as follows: Perl 5 has no built-in trim function. However, the functionality is commonly achieved usingregular expressions. Example: or: These examples modify the value of the original variable$string. Also available for Perl isStripLTSpaceinString::StripfromCPAN. There are, however, two functions that are commonly used to strip whitespace from the end of strings,chompandchop: InRaku, the upcoming sister language of Perl, strings have atrimmethod. Example: TheTclstringcommand has three relevant subcommands:trim,trimrightandtrimleft. For each of those commands, an additional argument may be specified: a string that represents a set of characters to remove—the default is whitespace (space, tab, newline, carriage return). Example of trimming vowels: XSLTincludes the functionnormalize-space(string)which strips leading and trailing whitespace, in addition to replacing any whitespace sequence (including line breaks) with a single space. Example: XSLT 2.0 includes regular expressions, providing another mechanism to perform string trimming. Another XSLT technique for trimming is to utilize the XPath 2.0substring()function.
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(string_functions)
In computing, aconnection stringis astringthat specifies information about a data source and the means of connecting to it. It is passed in code to an underlyingdriveror provider in order to initiate the connection. Whilst commonly used for adatabase connection, the data source could also be aspreadsheetor text file. The connection string may include attributes such as the name of the driver,serveranddatabase, as well as security information such as user name and password. This example shows aPostgresconnection string for connecting to wikipedia.com with SSL and a connection timeout of 180 seconds: Users ofOracle databasescan specify connection strings: Thiscomputer sciencearticle is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Connection_string
Informal language theory, theempty string, orempty word, is the uniquestringof length zero. Formally, a string is a finite, ordered sequence ofcharacterssuch as letters, digits or spaces. The empty string is the special case where the sequence has length zero, so there are no symbols in the string. There is only one empty string, because two strings are only different if they have different lengths or a different sequence of symbols. In formal treatments,[1]the empty string is denoted withεor sometimesΛorλ. The empty string should not be confused with the empty language∅, which is aformal language(i.e. a set of strings) that contains no strings, not even the empty string. The empty string has several properties: Incontext-free grammars, aproduction rulethat allows asymbolto produce the empty string is known as an ε-production, and the symbol is said to be "nullable". In mostprogramming languages, strings are adata type. Strings are typically stored at distinctmemory addresses(locations). Thus, the same string (e.g., the empty string) may be stored in two or more places in memory. In this way, there could be multiple empty strings in memory, in contrast with the formal theory definition, for which there is only one possible empty string. However, a string comparison function would indicate that all of these empty strings are equal to each other. Even a string of length zero can require memory to store it, depending on the format being used. In most programming languages, the empty string is distinct from anull reference(or null pointer) because a null reference points to no string at all, not even the empty string. The empty string is a legitimate string, upon which most string operations should work. Some languages treat some or all of the following in similar ways: empty strings, null references, the integer 0, the floating point number 0, the Boolean valuefalse, theASCIIcharacterNUL, or other such values. 
The empty string is usually represented similarly to other strings. In implementations with a string-terminating character (null-terminated strings or plain-text lines), the empty string is indicated by the immediate use of this terminating character. Different functions, methods, macros, or idioms exist for checking whether a string is empty in different languages.[example needed]

The empty string is a syntactically valid representation of zero in positional notation (in any base), which does not contain leading zeros. Since the empty string does not have a standard visual representation outside of formal language theory, the number zero is traditionally represented by a single decimal digit 0 instead.

A zero-filled memory area, interpreted as a null-terminated string, is an empty string.

Empty lines of text show the empty string. This can occur from two consecutive EOLs, as often occurs in text files. This is sometimes used in text processing to separate paragraphs, e.g. in MediaWiki.
https://en.wikipedia.org/wiki/Empty_string
An incompressible string is a string with Kolmogorov complexity equal to its length, so that it has no shorter encodings.[1] The pigeonhole principle can be used to prove that for any lossless compression algorithm, there must exist many incompressible strings.

Suppose we have the string 12349999123499991234, and we are using a compression method that works by putting a special character into the string (say @) followed by a value that points to an entry in a lookup table (or dictionary) of repeating values. Imagine an algorithm that examines the string in 4-character chunks. Looking at our string, it might pick out the values 1234 and 9999 to place into its dictionary, with 1234 as entry 0 and 9999 as entry 1. Now the string can become: This string is much shorter, although storing the dictionary itself will cost some space. However, the more repeats there are in the string, the better the compression will be.

Our algorithm can do better, though, if it can view the string in chunks larger than 4 characters. Then it can put 12349999 and 1234 into the dictionary, giving us: This string is even shorter.

Now consider another string: This string is incompressible by our algorithm. The only repeats that occur are 88 and 99. If we were to store 88 and 99 in our dictionary, we would produce: This is just as long as the original string, because our placeholders for items in the dictionary are 2 characters long, and the items they replace are the same length. Hence, this string is incompressible by our algorithm.
https://en.wikipedia.org/wiki/Incompressible_string
Incomputer programming, arope, orcord, is adata structurecomposed of smallerstringsthat is used to efficiently store and manipulate longer strings or entire texts. For example, atext editingprogram may use a rope to represent the text being edited, so that operations such as insertion, deletion, and random access can be done efficiently.[1] A rope is a type ofbinary treewhere each leaf (end node) holds a string of manageable size and length (also known as aweight), and each node further up the tree holds the sum of the lengths of all the leaves in its leftsubtree. A node with two children thus divides the whole string into two parts: the left subtree stores the first part of the string, the right subtree stores the second part of the string, and a node's weight is the length of the first part. For rope operations, the strings stored in nodes are assumed to be constantimmutable objectsin the typical nondestructive case, allowing for somecopy-on-writebehavior. Leaf nodes are usually implemented asbasic fixed-length stringswith areference countattached for deallocation when no longer needed, although othergarbage collectionmethods can be used as well. In the following definitions,Nis the length of the rope, that is, the weight of the root node. This operation can be done by aSplit()and twoConcat()operations. The cost is the sum of the three. To retrieve thei-th character, we begin arecursivesearch from the root node: For example, to find the character ati=10in Figure 2.1 shown on the right, start at the root node (A), find that 22 is greater than 10 and there is a left child, so go to the left child (B). 9 is less than 10, so subtract 9 from 10 (leavingi=1) and go to the right child (D). Then because 6 is greater than 1 and there's a left child, go to the left child (G). 2 is greater than 1 and there's a left child, so go to the left child again (J). 
Finally, 2 is greater than 1 but there is no left child, so the character at index 1 of the short string "na" (i.e. "n") is the answer (using 1-based indexing). A concatenation can be performed simply by creating a new root node with left = S1 and right = S2, which is constant time. The weight of the parent node is set to the length of the left child S1, which would take O(log N) time, if the tree is balanced. As most rope operations require balanced trees, the tree may need to be re-balanced after concatenation. There are two cases that must be dealt with: The second case reduces to the first by splitting the string at the split point to create two new leaf nodes, then creating a new node that is the parent of the two component strings. For example, to split the 22-character rope pictured in Figure 2.3 into two equal component ropes of length 11, query the 12th character to locate the node K at the bottom level. Remove the link between K and G. Go to the parent of G and subtract the weight of K from the weight of D. Travel up the tree and remove any right links to subtrees covering characters past position 11, subtracting the weight of K from their parent nodes (only nodes D and A, in this case). Finally, build up the newly orphaned nodes K and H by concatenating them together and creating a new parent P with weight equal to the length of the left node K. As most rope operations require balanced trees, the tree may need to be re-balanced after splitting. This operation can be done by two Split() and one Concat() operation. First, split the rope in three, divided by the i-th and (i+j)-th characters respectively, which extracts the string to delete in a separate node. Then concatenate the other two nodes. To report the string Ci, …, Ci+j−1, find the node u that contains Ci and weight(u) >= j, then output Ci, …, Ci+j−1 by doing an in-order traversal of T starting at node u.
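The index lookup and constant-time concatenation described above can be sketched as follows. This is a minimal illustration under stated assumptions: the class layout, names, and 0-based indexing are choices made for the example, not a canonical implementation, and no rebalancing is performed.

```python
# Minimal rope sketch: an internal node's weight is the length of its left
# subtree, and index() walks down the tree exactly as the text describes.
class Rope:
    def __init__(self, text=None, left=None, right=None):
        self.left, self.right = left, right
        if text is not None:           # leaf: weight is the string's own length
            self.text, self.weight = text, len(text)
        else:                          # internal: weight is the left part's length
            self.text, self.weight = None, left.length()

    def length(self):
        if self.text is not None:
            return self.weight
        return self.weight + (self.right.length() if self.right else 0)

    def index(self, i):                # 0-based character lookup, O(depth)
        if self.text is not None:
            return self.text[i]
        if i < self.weight:            # character lies in the left subtree
            return self.left.index(i)
        return self.right.index(i - self.weight)

def concat(s1, s2):                    # O(1), before any rebalancing
    return Rope(left=s1, right=s2)

r = concat(concat(Rope("hello_"), Rope("my_new_")), Rope("world"))
# r.index(0) is 'h', r.index(6) is 'm', and r.length() is 18.
```

Note how concat never copies the underlying strings; it only allocates one new node, which is why the operation is constant time apart from rebalancing.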
This table compares the algorithmic traits of string and rope implementations, not their raw speed. Array-based strings have smaller overhead, so (for example) concatenation and split operations are faster on small datasets. However, when array-based strings are used for longer strings, time complexity and memory use for inserting and deleting characters become unacceptably large. In contrast, a rope data structure has stable performance regardless of data size. Further, the space complexity for ropes and arrays is O(n) in both cases. In summary, ropes are preferable when the data is large and modified often.
https://en.wikipedia.org/wiki/Rope_(data_structure)
In cognitive psychology, chunking is a process by which small individual pieces of a set of information are bound together to create a meaningful whole later on in memory.[1] The chunks, by which the information is grouped, are meant to improve short-term retention of the material, thus bypassing the limited capacity of working memory and allowing the working memory to be more efficient.[2][3][4] A chunk is a collection of basic units that are strongly associated with one another, and have been grouped together and stored in a person's memory. These chunks can be retrieved easily due to their coherent grouping.[5] It is believed that individuals create higher-order cognitive representations of the items within the chunk. The items are more easily remembered as a group than as the individual items themselves. These chunks can be highly subjective because they rely on an individual's perceptions and past experiences, which are linked to the information set. The size of the chunks generally ranges from two to six items but often differs based on language and culture.[6] According to Johnson (1970), there are four main concepts associated with the memory process of chunking: chunk, memory code, decode, and recode.[7] The chunk, as mentioned prior, is a sequence of to-be-remembered information that can be composed of adjacent terms. These items or information sets are to be stored in the same memory code. The process of recoding is where one learns the code for a chunk, and decoding is when the code is translated into the information that it represents. The phenomenon of chunking as a memory mechanism is easily observed in the way individuals group numbers and information in day-to-day life. For example, when recalling a number such as 12101946, if the numbers are grouped as 12, 10, and 1946, a mnemonic is created for this number as a month, day, and year. It would be stored as December 10, 1946, instead of a string of numbers.
Similarly, another illustration of the limited capacity of working memory as suggested by George Miller can be seen in the following example: while recalling a mobile phone number such as 9849523450, we might break this into 98 495 234 50. Thus, instead of remembering 10 separate digits that are beyond the putative "seven plus-or-minus two" memory span, we are remembering four groups of numbers.[8] An entire chunk can also be remembered simply by storing the beginnings of a chunk in the working memory, resulting in the long-term memory recovering the remainder of the chunk.[4] A modality effect is present in chunking. That is, the mechanism used to convey the list of items to the individual affects how much "chunking" occurs. Experimentally, it has been found that auditory presentation results in a larger amount of grouping in the responses of individuals than visual presentation does. Previous literature, such as George Miller's The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information (1956), has shown that the probability of recall of information is greater when the chunking strategy is used.[8] As stated above, the grouping of the responses occurs as individuals place them into categories according to their inter-relatedness based on semantic and perceptual properties. Lindley (1966) showed that since the groups produced have meaning to the participant, this strategy makes it easier for an individual to recall and maintain information in memory during studies and testing.[9] Therefore, when "chunking" is used as a strategy, one can expect a higher proportion of correct recalls. Various kinds of memory training systems and mnemonics include training and drills in specially designed recoding or chunking schemes.[10] Such systems existed before Miller's paper, but there was no convenient term to describe the general strategy and no substantive and reliable research. The term "chunking" is now often used in reference to these systems.
As an illustration, patients with Alzheimer's disease typically experience working memory deficits; chunking is an effective method to improve patients' verbal working memory performance.[11] Patients with schizophrenia also experience working memory deficits which influence executive function; memory training procedures positively influence cognitive and rehabilitative outcomes.[12] Chunking has been proven to decrease the load on the working memory in many ways. As well as remembering chunked information more easily, a person can also recall other non-chunked memories more easily due to the benefits chunking has on the working memory.[4] For instance, in one study, participants with more specialized knowledge could reconstruct sequences of chess moves because they had larger chunks of procedural knowledge, which means that the level of expertise and the sorting order of the information retrieved is essential in the influence of procedural knowledge chunks retained in short-term memory.[13] Chunking has also been shown to have an influence in linguistics, such as boundary perception.[14] While the typical size range of chunks was already familiar, Dirlam (1972) conducted a mathematical analysis to discover the most efficient chunk size; the findings showed that three or four items per chunk is optimal.[15] The word chunking comes from a famous 1956 paper by George A. Miller, "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information".[16] At a time when information theory was beginning to be applied in psychology, Miller observed that some human cognitive tasks fit the model of a "channel capacity" characterized by a roughly constant capacity in bits, but short-term memory did not. A variety of studies could be summarized by saying that short-term memory had a capacity of about "seven plus-or-minus two" chunks.
Miller (1956) wrote, "With binary items, the span is about nine and, although it drops to about five with monosyllabic English words, the difference is far less than the hypothesis of constant information would require (see also memory span). The span of immediate memory seems to be almost independent of the number of bits per chunk, at least over the range that has been examined to date." Miller acknowledged that "we are not very definite about what constitutes a chunk of information."[8] Miller (1956) noted that according to this theory, it should be possible to increase short-term memory for low-information-content items effectively by mentally recoding them into a smaller number of high-information-content items. He imagined this process being useful in scenarios such as "a man just beginning to learn radio-telegraphic code hears each dit and dah as a separate chunk. Soon he is able to organize these sounds into letters and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases." Thus, a telegrapher can effectively "remember" several dozen dits and dahs as a single phrase. Naïve subjects can remember a maximum of only nine binary items, but Miller reports a 1954 experiment in which people were trained to listen to a string of binary digits and (in one case) mentally group them into groups of five, recode each group into a name (for example, "twenty-one" for 10101), and remember the names. With sufficient practice, people found it possible to remember as many as forty binary digits. Miller wrote: It is a little dramatic to watch a person get 40 binary digits in a row and then repeat them back without error. However, if you think of this merely as a mnemonic trick for extending the memory span, you will miss the more important point that is implicit in nearly all such mnemonic devices.
The point is that recoding is an extremely powerful weapon for increasing the amount of information that we can deal with.[8] Studies have shown that people have better memories when they are trying to remember items with which they are familiar. Similarly, people tend to create familiar chunks. This familiarity allows one to remember more individual pieces of content, and also more chunks as a whole. One well-known chunking study was conducted by Chase and Ericsson, who worked with an undergraduate student, SF, for over two years.[17] They wanted to see if a person's digit span memory could be improved with practice. SF began the experiment with a normal span of 7 digits. SF was a long-distance runner, and chunking strings of digits into race times increased his digit span. By the end of the experiment, his digit span had grown to 80 numbers. A later description of the research in The Brain-Targeted Teaching Model for 21st Century Schools states that SF later expanded his strategy by incorporating ages and years, but his chunks were always familiar, which allowed him to recall them more easily.[18] Someone who does not have knowledge in the expert domain (e.g. being familiar with mile/marathon times) would have difficulty chunking with race times and ultimately be unable to memorize as many numbers using this method. The idea that a person who does not have knowledge in the expert domain would have difficulty chunking could also be seen in an experiment with novice and expert hikers to see if they could remember different mountain scenes.
From this study, it was found that the expert hikers had better recall and recognition of structured stimuli.[19] Another example can be seen with expert musicians, who are able to chunk and recall encoded material that best meets the demands they are presented with at any given moment during a performance.[20]

Chunking and memory in chess revisited

Previous research has shown that chunking is an effective tool for enhancing memory capacity due to the nature of grouping individual pieces into larger, more meaningful groups that are easier to remember. Chunking is a popular tool for people who play chess, specifically masters.[21] Chase and Simon (1973a) discovered that the skill levels of chess players are attributed to long-term memory storage and the ability to copy and recollect thousands of chunks. The process helps acquire knowledge at a faster pace. Since it is an excellent tool for enhancing memory, a chess player who utilizes chunking has a higher chance of success. According to Chase and Simon's re-examination (1973b), an expert chess master is able to access information in long-term memory storage quickly due to the ability to recall chunks. Chunks stored in long-term memory are related to decisions about the movement of board pieces due to obvious patterns.

Chunking models for education

Many years of research have concluded that chunking is a reliable process for gaining knowledge and organizing information. Chunking provides an explanation for the behavior of experts, such as teachers. A teacher can utilize chunking in the classroom as a way to teach the curriculum. Gobet (2005) proposed that teachers can use chunking as a method to segment the curriculum into natural components. A student learns better when focusing on key features of material, so it is important to create the segments to highlight the important information.
By understanding the process of how an expert is formed, it is possible to find general mechanisms for learning that can be implemented in classrooms.[22] Chunking is a method of learning that can be applied in a number of contexts and is not limited to learning verbal material.[23] Karl Lashley, in his classic paper on serial order, argued that the sequential responses that appear to be organized in a linear and flat fashion conceal an underlying hierarchical structure.[24] This was then demonstrated in motor control by Rosenbaum et al. in 1983.[25] Thus sequences can consist of sub-sequences and these can, in turn, consist of sub-sub-sequences. Hierarchical representations of sequences have an advantage over linear representations: they combine efficient local action at low hierarchical levels while maintaining the guidance of an overall structure. While the representation of a linear sequence is simple from a storage point of view, there can be potential problems during retrieval. For instance, if there is a break in the sequence chain, subsequent elements will become inaccessible. On the other hand, a hierarchical representation would have multiple levels of representation. A break in the link between lower-level nodes does not render any part of the sequence inaccessible, since the control nodes (chunk nodes) at the higher level would still be able to facilitate access to the lower-level nodes. Chunks in motor learning are identified by pauses between successive actions in Terrace (2001).[26] It is also suggested that during the sequence performance stage (after learning), participants download list items as chunks during pauses. He also argued for an operational definition of chunks, suggesting a distinction between the notions of input and output chunks from the ideas of short-term and long-term memory.
Input chunks reflect the limitation of working memory during the encoding of new information (how new information is stored in long-term memory), and how it is retrieved during subsequent recall. Output chunks reflect the organization of over-learned motor programs that are generated on-line in working memory. Sakai et al. (2003) showed that participants spontaneously organize a sequence into a number of chunks across a few sets, and that these chunks were distinct among participants tested on the same sequence.[27] They also demonstrated that performance of a shuffled sequence was poorer when the chunk patterns were disrupted than when the chunk patterns were preserved. Chunking patterns also seem to depend on the effectors used. Perlman found in his series of experiments that tasks that are larger in size and broken down into smaller sections had faster respondents than the task as a large whole. The study suggests that chunking a larger task into smaller, more manageable tasks can produce a better outcome. The research also found that completing the task in a coherent order, rather than swapping from one task to another, can also produce a better outcome.[28] Chunking is used by adults in different ways, which can include low-level perceptual features, category membership, semantic relatedness, and statistical co-occurrences between items.[29] Recent studies suggest that infants also use chunking. They use different types of knowledge to help them with chunking, such as conceptual knowledge, spatiotemporal cue knowledge, and knowledge of their social domain. There have been studies that use different chunking models, such as PARSER and the Bayesian model.
PARSER is a chunking model designed to account for human behavior by implementing psychologically plausible processes of attention, memory, and associative learning.[30] In a recent study, it was determined that chunking models like PARSER fit infants' behavior better than Bayesian models. PARSER fits better because it is typically endowed with the ability to process up to three chunks simultaneously.[30] When it comes to infants using their social knowledge, they need to use abstract knowledge and subtle cues because they cannot create a perception of their social group on their own. Infants can form chunks using shared features or spatial proximity between objects.[31] Previous research shows that the mechanism of chunking is available in seven-month-old infants.[32] This means that chunking can occur even before the working memory capacity has completely developed. Knowing that working memory has a very limited capacity, it can be beneficial to utilize chunking. In infants, whose working memory capacity is not completely developed, it can be even more helpful to chunk memories. These studies were done using the violation-of-expectation method and recording the amount of time the infants watched the objects in front of them. Although the experiments showed that infants can use chunking, researchers also concluded that an infant's ability to chunk memories continues to develop over the next year of their lives.[32] Working memory appears to store no more than three objects at a time in newborns and early toddlers. A study conducted in 2014, Infants use temporal regularities to chunk objects in memory,[33] provided new information on this question.
This research showed that 14-month-old infants, like adults, can chunk using their knowledge of object categories: they remembered four total objects when an array contained two tokens of two different types (e.g., two cats and two cars), but not when the array contained four tokens of the same type (e.g., four different cats).[33] It demonstrates that infants may employ spatial closeness to tie representations of particular items into chunks, benefiting memory performance as a result.[34] Despite the fact that infants' working memory capacity is restricted, they may employ numerous forms of information to tie representations of individual things into chunks, enhancing memory efficiency.[34] This usage derives from Miller's (1956) idea of chunking as grouping, but the emphasis is now on long-term memory rather than only on short-term memory. A chunk can then be defined as "a collection of elements having strong associations with one another, but weak associations with elements within other chunks".[35] The emphasis of chunking on long-term memory is supported by the idea that chunking only exists in long-term memory, but it assists with reintegration, which is involved in the recall of information in short-term memory. It may be easier to recall information in short-term memory if the information has been represented through chunking in long-term memory. Norris and Kalm (2021) argued that "reintegration can be achieved by treating recall from memory as a process of Bayesian inference whereby representations of chunks in LTM (long-term memory) provide the priors that can be used to interpret a degraded representation in STM (short-term memory)".[36] In Bayesian inference, priors refer to the initial beliefs regarding the relative frequency of an event occurring instead of other plausible events occurring.
When one who holds the initial beliefs receives more information, one will determine the likelihood of each of the plausible events that could happen and thus predict the specific event that will occur. Chunks in long-term memory are involved in forming the priors, and they assist with determining the likelihood and prediction of the recall of information in short-term memory. For example, if an acronym and its full meaning already exist in long-term memory, the recall of information regarding that acronym will be easier in short-term memory.[36] Chase and Simon in 1973, and later Gobet, Retschitzki, and de Voogt in 2004, showed that chunking could explain several phenomena linked to expertise in chess.[35][37] Following a brief exposure to pieces on a chessboard, skilled chess players were able to encode and recall much larger chunks than novice chess players. However, this effect is mediated by specific knowledge of the rules of chess; when pieces were distributed randomly (including scenarios that were not common or allowed in real games), the difference in chunk size between skilled and novice chess players was significantly reduced. Several successful computational models of learning and expertise have been developed using this idea, such as EPAM (Elementary Perceiver and Memorizer) and CHREST (Chunk Hierarchy and Retrieval Structures). Chunking may be demonstrated in the acquisition of a memory skill, as shown by S. F., an undergraduate student with average memory and intelligence, who increased his digit span from seven to almost 80 within 20 months, or after at least 230 hours of practice.[38] S. F. was able to improve his digit span partly through mnemonic associations, which is a form of chunking. S. F. associated digits, which were unfamiliar information to him, with running times, ages, and dates, which were familiar information to him. Ericsson et al. (1980) initially hypothesized that S. F.'s
increased digit span was due to an increase in his short-term memory capacity. However, they rejected this hypothesis when they found that his short-term memory capacity remained the same, considering that he "chunked" only three to four digits at once. Furthermore, he never rehearsed more than six digits at once, nor rehearsed more than four groups in a supergroup. Lastly, if his short-term memory capacity had increased, then he would have shown a greater capacity for letters of the alphabet; he did not.[38] Based on these contradictions, Ericsson et al. (1980) later concluded that S. F. was able to increase his digit span due to "the use of mnemonic associations in long-term memory," which further supports that chunking may exist in long-term memory rather than short-term memory. Chunking has also been used with models of language acquisition.[39] The use of chunk-based learning in language has been shown to be helpful. Understanding a group of basic words and then giving different categories of associated words to build on comprehension has been shown to be an effective way to teach reading and language to children.[40] Research studies have found that adults and infants were able to parse the words of a made-up language when they were exposed to a continuous auditory sequence of words arranged in random order.[41] One explanation is that they may parse the words using small chunks that correspond to the made-up language. Subsequent studies have supported that when learning involves statistical probabilities (e.g., transitional probabilities in language), it may be better explained via chunking models.
Franco and Destrebecqz (2012) further studied chunking in language acquisition and found that the presentation of a temporal cue was associated with a reliable prediction of the chunking model regarding learning, but the absence of the cue was associated with increased sensitivity to the strength of transitional probabilities.[41] Their findings suggest that the chunking model can only explain certain aspects of learning, specifically language acquisition. Norris conducted a study in 2020 of chunking and short-term memory recollection, finding that when a chunk is given, it is stored as a single item despite representing a relatively large amount of information. This finding suggests that chunks should be less susceptible to decay or interference when they are recalled. The study used visual stimuli where all the items were presented simultaneously. Groups of two and three items were found to be recalled more easily than singles, and more singles were recalled when in a group with threes.[42] Chunking can be a form of data compression that allows more information to be stored in short-term memory. Rather than verbal short-term memory being measured by the number of items stored, Miller (1956) suggested that verbal short-term memories are stored as chunks. Later studies were done to determine whether chunking is a form of data compression when there is limited space for memory. Chunking works as data compression when it comes to redundant information, and it allows more information to be stored in short-term memory. However, memory capacity may vary.[36] An experiment was done to see how chunking could be beneficial to patients who had Alzheimer's disease. This study was based on how chunking was used to improve working memory in normal young people. Working memory is impaired in the early stages of Alzheimer's disease, which affects the ability to do everyday tasks. It also affects executive control of working memory.
It was found that participants who had mild Alzheimer's disease were able to use working memory strategies to enhance verbal and spatial working memory performance.[43] It has long been thought that chunking can improve working memory. A study was done to see how chunking can improve working memory when it comes to symbolic sequences and gating mechanisms. This was done by having 25 participants learn 16 sequences through trial and error. The target was presented alongside a distractor, and participants were to identify the target by using the right or left buttons on a computer mouse. The final analysis was done on only 19 participants. The results showed that chunking does improve symbolic sequence performance by decreasing cognitive load and supporting real-time strategy.[44] Chunking has proved to be effective in reducing the load of adding items into working memory. Chunking allows more items to be encoded into working memory, with more available to transfer into long-term memory.[45] Chekaf, Cowan, and Mathy (2016)[46] looked at how immediate memory relates to the formation of chunks. For immediate memory, they came up with a two-factor theory of the formation of chunks. These factors are compressibility and the order of the information. Compressibility refers to making information more compact and condensed: the material is transformed from something complex into something more simplified. Thus, compressibility relates to chunking due to the predictability factor. As for the second factor, the sequence of the information can impact what is discovered. So the order, along with the process of compressing the material, may increase the probability that chunking occurs. These two factors interact with one another and matter in the concept of chunking. Chekaf, Cowan, and Mathy (2016)[46] gave an example where the material "1, 2, 3, 4" can be compressed to "numbers one through four."
However, if the material is presented as "1, 3, 2, 4", it cannot be compressed in the same way because the order in which it is presented is different. Therefore, compressibility and order play an important role in chunking.
https://en.wikipedia.org/wiki/Chunking_(psychology)
A creole language,[2][3][4] or simply creole, is a stable form of contact language that develops from the process of different languages simplifying and mixing into a new form (often a pidgin), and then that form expanding and elaborating into a full-fledged language with native speakers, all within a fairly brief period.[5] While the concept is similar to that of a mixed or hybrid language, creoles are often characterized by a tendency to systematize their inherited grammar (e.g., by eliminating irregularities or regularizing the conjugation of otherwise irregular verbs). Like any language, creoles are characterized by a consistent system of grammar, possess large stable vocabularies, and are acquired by children as their native language.[6] These three features distinguish a creole language from a pidgin.[7] Creolistics, or creology, is the study of creole languages and, as such, is a subfield of linguistics. Someone who engages in this study is called a creolist. The precise number of creole languages is not known, particularly as many are poorly attested or documented. About one hundred creole languages have arisen since 1500. These are predominantly based on European languages such as English and French,[8] due to the European Age of Discovery and the Atlantic slave trade that arose at that time.[9] With the improvements in ship-building and navigation, traders had to learn to communicate with people around the world, and the quickest way to do this was to develop a pidgin; in turn, full creole languages developed from these pidgins. In addition to creoles that have European languages as their base, there are, for example, creoles based on Arabic, Chinese, and Malay. The lexicon of a creole language is largely supplied by the parent languages, particularly that of the most dominant group in the social context of the creole's construction. However, there are often clear phonetic and semantic shifts.
On the other hand, the grammar that has evolved often has new or unique features that differ substantially from those of the parent languages.[10] A creole is believed to arise when a pidgin, developed by adults for use as a second language, becomes the native and primary language of their children – a process known as nativization.[11] The pidgin-creole life cycle was studied by American linguist Robert Hall in the 1960s.[12] Some linguists, such as Derek Bickerton, posit that creoles share more grammatical similarities with each other than with the languages from which they are phylogenetically derived.[13] However, there is no widely accepted theory that would account for those perceived similarities.[14] Moreover, no grammatical feature has been shown to be specific to creoles.[15][16][17][18][19][20] Many of the creoles known today arose in the last 500 years, as a result of the worldwide expansion of European maritime power and trade in the Age of Discovery, which led to extensive European colonial empires. Like most non-official and minority languages, creoles have generally been regarded in popular opinion as degenerate variants or dialects of their parent languages. Because of that prejudice, many of the creoles that arose in the European colonies, having been stigmatized, have become extinct. However, political and academic changes in recent decades have improved the status of creoles, both as living languages and as objects of linguistic study.[21][22] Some creoles have even been granted the status of official or semi-official languages of particular political territories. Other scholars, such as Salikoko Mufwene, argue that pidgins and creoles arise independently under different circumstances, and that a pidgin need not always precede a creole, nor a creole evolve from a pidgin. Pidgins, according to Mufwene, emerged in trade colonies among "users who preserved their native vernaculars for their day-to-day interactions".
Creoles, meanwhile, developed in settlement colonies in which speakers of a European language, often indentured servants whose language would be far from the standard in the first place, interacted extensively with non-European slaves, absorbing certain words and features from the slaves' non-European native languages, resulting in a heavily basilectalized version of the original language. These servants and slaves would come to use the creole as an everyday vernacular, rather than merely in situations in which contact with a speaker of the superstrate was necessary.[23]

The English term creole comes from French créole, which is cognate with the Spanish term criollo and Portuguese crioulo, all descending from the verb criar ('to breed' or 'to raise'), all coming from Latin creare 'to produce, create'.[24] The specific sense of the term was coined in the 16th and 17th centuries, during the great expansion in European maritime power and trade that led to the establishment of European colonies in other continents.

The terms criollo and crioulo were originally qualifiers used throughout the Spanish and Portuguese colonies to distinguish the members of an ethnic group who were born and raised locally from those who immigrated as adults. They were most commonly applied to nationals of the colonial power, e.g. to distinguish españoles criollos (people born in the colonies from Spanish ancestors) from españoles peninsulares (those born in the Iberian Peninsula, i.e. Spain). However, in Brazil the term was also used to distinguish between negros crioulos (blacks born in Brazil from African slave ancestors) and negros africanos (born in Africa). Over time, the term and its derivatives (Creole, Kréol, Kreyol, Kreyòl, Kriol, Krio, etc.) lost the generic meaning and became the proper name of many distinct ethnic groups that developed locally from immigrant communities. Originally, therefore, the term "creole language" meant the speech of any of those creole peoples.
As a consequence of colonial European trade patterns, most of the known European-based creole languages arose in coastal areas in the equatorial belt around the world, including the Americas, western Africa, Goa along the west of India, and along Southeast Asia up to Indonesia, Singapore, Macau, Hong Kong, the Philippines, Malaysia, Mauritius, Réunion, Seychelles and Oceania.[25]

Many of those creoles are now extinct, but others still survive in the Caribbean, the north and east coasts of South America (The Guyanas), western Africa, Australia (see Australian Kriol language), the Philippines (see Chavacano), and island countries in the Indian Ocean such as Mauritius and Seychelles.

Atlantic Creole languages are based on European languages with elements from African and possibly Amerindian languages. Indian Ocean Creole languages are based on European languages with elements from Malagasy and possibly other Asian languages. There are, however, creoles like Nubi and Sango that are derived solely from non-European languages.

Because of the generally low status of the Creole peoples in the eyes of prior European colonial powers, creole languages have generally been regarded as "degenerate" languages, or at best as rudimentary "dialects" of the politically dominant parent languages. Because of this, the word "creole" was generally used by linguists in opposition to "language", rather than as a qualifier for it.[26]

Another factor that may have contributed to the relative neglect of creole languages in linguistics is that they do not fit the 19th-century neogrammarian "tree model" for the evolution of languages, with its postulated regularity of sound changes; among the critics of that model were the earliest advocates of the wave model, Johannes Schmidt and Hugo Schuchardt, the forerunners of modern sociolinguistics.
This controversy of the late 19th century profoundly shaped modern approaches to the comparative method in historical linguistics and in creolistics.[21][26][27]

Because of social, political, and academic changes brought on by decolonization in the second half of the 20th century, creole languages have experienced revivals in the past few decades. They are increasingly being used in print and film, and in many cases, their community prestige has improved dramatically. In fact, some have been standardized, and are used in local schools and universities around the world.[21][22][28] At the same time, linguists have come to recognize that creole languages are in no way inferior to other languages. They now use the term "creole" or "creole language" for any language suspected to have undergone creolization, terms that now imply no geographic restriction or ethnic prejudice.

There is controversy about the extent to which creolization influenced the evolution of African-American Vernacular English (AAVE). The use of the word ebonics to refer to AAVE in the American education system mirrors the historical negative connotation of the word creole.[29]

According to their external history, four types of creoles have been distinguished: plantation creoles, fort creoles, maroon creoles, and creolized pidgins.[30] By the very nature of a creole language, the phylogenetic classification of a particular creole usually is a matter of dispute, especially when the pidgin precursor and its parent tongues (which may have been other creoles or pidgins) have disappeared before they could be documented.

Phylogenetic classification traditionally relies on inheritance of the lexicon, especially of "core" terms, and of the grammar structure. However, in creoles, the core lexicon often has mixed origin, and the grammar is largely original.
For these reasons, the issue of which language is the parent of a creole – that is, whether a language should be classified as a "French creole", "Portuguese creole" or "English creole", etc. – often has no definitive answer, and can become the topic of long-lasting controversies, where social prejudices and political considerations may interfere with scientific discussion.[21][22][27]

The terms substrate and superstrate are often used when two languages interact. However, the meaning of these terms is reasonably well-defined only in second language acquisition or language replacement events, when the native speakers of a certain source language (the substrate) are somehow compelled to abandon it for another target language (the superstrate).[31] The outcome of such an event is that erstwhile speakers of the substrate will use some version of the superstrate, at least in more formal contexts. The substrate may survive as a second language for informal conversation. As demonstrated by the fate of many replaced European languages (such as Etruscan, Breton, and Venetian), the influence of the substrate on the official speech is often limited to pronunciation and a modest number of loanwords.
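The lexicon-based reasoning behind such classifications can be illustrated with a toy calculation: given etymological attributions for a creole's "core" vocabulary, compute each candidate parent's share. The word list and attributions below are invented placeholders for illustration, not real data about any creole.

```python
# Toy lexicostatistics sketch: estimate each candidate parent language's
# share of a creole's "core" vocabulary. All attributions are invented.

def parent_shares(core_vocab_sources):
    """Given a mapping {core word: source language}, return the
    fraction of core items attributed to each source language."""
    total = len(core_vocab_sources)
    shares = {}
    for source in core_vocab_sources.values():
        shares[source] = shares.get(source, 0) + 1
    return {lang: count / total for lang, count in shares.items()}

# Hypothetical etymological attributions for ten core items:
attributions = {
    "water": "French", "eat": "French", "sun": "French",
    "child": "French", "go": "French", "big": "French",
    "house": "Wolof", "speak": "Wolof",
    "fish": "Portuguese", "night": "Portuguese",
}

shares = parent_shares(attributions)
print(shares)  # French dominates the core lexicon in this toy example
```

Even where such counts clearly favour one lexifier, the article's point stands: a mostly French core lexicon would not by itself settle whether the creole's grammar is "French" in any phylogenetic sense.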
The substrate might even disappear altogether without leaving any trace.[31]

However, there is dispute over the extent to which the terms "substrate" and "superstrate" are applicable to the genesis or the description of creole languages.[32] The language replacement model may not be appropriate in creole formation contexts, where the emerging language is derived from multiple languages without any one of them being imposed as a replacement for any other.[33][34] The substratum–superstratum distinction becomes awkward when multiple superstrata must be assumed (such as in Papiamento), when the substratum cannot be identified, or when the presence or the survival of substratal evidence is inferred from mere typological analogies.[18] On the other hand, the distinction may be meaningful when the contributions of each parent language to the resulting creole can be shown to be very unequal, in a scientifically meaningful way.[35] In the literature on Atlantic Creoles, "superstrate" usually means European and "substrate" non-European or African.[36]

Since creole languages rarely attain official status, the speakers of a fully formed creole may eventually feel compelled to conform their speech to one of the parent languages. This decreolization process typically brings about a post-creole speech continuum characterized by large-scale variation and hypercorrection in the language.[21]

It is generally acknowledged that creoles have a simpler grammar and more internal variability than older, more established languages.[37] However, these notions are occasionally challenged.[38] (See also language complexity.)

Phylogenetic or typological comparisons of creole languages have led to divergent conclusions.
Similarities are usually higher among creoles derived from related languages, such as the languages of Europe, than among broader groups that also include creoles based on non-Indo-European languages (like Nubi or Sango). French-based creole languages in turn are more similar to each other (and to varieties of French) than to other European-based creoles. It was observed, in particular, that definite articles are mostly prenominal in English-based creole languages and English, whereas they are generally postnominal in French creoles and in the variety of French that was exported to what is now Quebec in the 17th and 18th centuries.[39] Moreover, the European languages which gave rise to the creole languages of European colonies all belong to the same subgroup of Western Indo-European and have highly convergent grammars, to the point that Whorf joined them into a single Standard Average European language group.[40] French and English are particularly close, since English, through extensive borrowing, is typologically closer to French than to other Germanic languages.[41] Thus the claimed similarities between creoles may be mere consequences of similar parentage, rather than characteristic features of all creoles.

There are a variety of theories on the origin of creole languages, all of which attempt to explain the similarities among them. Arends, Muysken & Smith (1995) outline a fourfold classification of explanations regarding creole genesis:

In addition to the precise mechanism of creole genesis, a more general debate has developed whether creole languages are characterized by different mechanisms than traditional languages (which is McWhorter's 2018 main point)[42] or whether in that regard creole languages develop by the same mechanisms as any other languages (e.g.
DeGraff 2001).[43]

The monogenetic theory of pidgins and creoles hypothesizes that all Atlantic creoles derived from a single Mediterranean Lingua Franca, via a West African Pidgin Portuguese of the seventeenth century, relexified in the so-called "slave factories"[further explanation needed] of Western Africa that were the source of the Atlantic slave trade. This theory was originally formulated by Hugo Schuchardt in the late nineteenth century and popularized in the late 1950s and early 1960s by Taylor,[44] Whinnom,[45] Thompson,[46] and Stewart.[47] However, this hypothesis is no longer widely accepted: it requires all creole-speaking slave populations to have started from the same Portuguese-based creole, even though many of those populations had little or no historical exposure to Portuguese; there is no strong direct evidence for the claim; Portuguese left almost no trace in the lexicon of most of these creoles; and the similarities in grammar can be explained by analogous processes of loss of inflection and of grammatical forms not common to European and West African languages. For example, Bickerton (1977) points out that relexification postulates too many improbabilities and that it is unlikely that a language "could be disseminated round the entire tropical zone, to peoples of widely differing language background, and still preserve a virtually complete identity in its grammatical structure wherever it took root, despite considerable changes in its phonology and virtually complete changes in its lexicon".[48]

Proposed by Hancock (1985) for the origin of English-based creoles of the West Indies, the domestic origin hypothesis argues that, towards the end of the 16th century, English-speaking traders began to settle along the Gambia and Sierra Leone rivers as well as in neighboring areas such as the Bullom and Sherbro coasts. These settlers intermarried with the local population, leading to mixed populations; as a result of this intermarriage, an English pidgin was created.
This pidgin was learned by slaves in slave depots, who later took it to the West Indies, where it formed one component of the emerging English creoles.

The French creoles are the foremost candidates for being the outcome of "normal" linguistic change, their creoleness being sociohistoric in nature and relative to their colonial origin.[49] Within this theoretical framework, a French creole is a language phylogenetically based on French, more specifically on a 17th-century koiné French extant in Paris, the French Atlantic harbors, and the nascent French colonies. Supporters of this hypothesis suggest that the non-Creole French dialects still spoken in many parts of the Americas share mutual descent from this single koiné. These dialects are found in Canada (mostly in Québec and in Acadian communities), Louisiana, Saint-Barthélemy and as isolates in other parts of the Americas.[50] Approaches under this hypothesis are compatible with gradualism in change and with models of imperfect language transmission in koiné genesis.

The Foreigner Talk (FT) hypothesis argues that a pidgin or creole language forms when native speakers attempt to simplify their language in order to address speakers who do not know their language at all. Because of the similarities between this type of speech and speech directed at a small child, it is also sometimes called baby talk.[51]

Arends, Muysken & Smith (1995) suggest that four different processes are involved in creating Foreigner Talk:

This could explain why creole languages have much in common, while avoiding a monogenetic model. However, Hinnenkamp (1984), in analyzing German Foreigner Talk, claims that it is too inconsistent and unpredictable to provide any model for language learning.
While the simplification of input was supposed to account for creoles' simple grammar, commentators have raised a number of criticisms of this explanation:[52]

Another problem with the FT explanation is its potential circularity. Bloomfield (1933) points out that FT is often based on the imitation of the incorrect speech of the non-natives, that is, the pidgin. Therefore, one may be mistaken in assuming that the former gave rise to the latter.

The imperfect L2 (second language) learning hypothesis claims that pidgins are primarily the result of the imperfect L2 learning of the dominant lexifier language by the slaves. Research on naturalistic L2 processes has revealed a number of features of "interlanguage systems" that are also seen in pidgins and creoles:

Imperfect L2 learning is compatible with other approaches, notably the European dialect origin hypothesis and the universalist models of language transmission.[53]

Theories focusing on the substrate, or non-European, languages attribute similarities amongst creoles to the similarities of African substrate languages.
These features are often assumed to be transferred from the substrate language to the creole, or to be preserved invariant from the substrate language in the creole, through a process of relexification: the substrate language replaces the native lexical items with lexical material from the superstrate language while retaining the native grammatical categories.[54] The problem with this explanation is that the postulated substrate languages differ amongst themselves and from creoles in meaningful ways. Bickerton (1981) argues that the number and diversity of African languages and the paucity of a historical record on creole genesis make determining lexical correspondences a matter of chance. Dillard (1970) coined the term "cafeteria principle" to refer to the practice of arbitrarily attributing features of creoles to the influence of substrate African languages or assorted substandard dialects of European languages. For a representative debate on this issue, see the contributions to Mufwene (1993); for a more recent view, Parkvall (2000).

Because of the sociohistoric similarities amongst many (but by no means all) of the creoles, the Atlantic slave trade and the plantation system of the European colonies have been emphasized as factors by linguists such as McWhorter (1999).

One class of creoles might start as pidgins, rudimentary second languages improvised for use between speakers of two or more non-intelligible native languages. Keith Whinnom (in Hymes (1971)) suggests that pidgins need three languages to form, with one (the superstrate) being clearly dominant over the others. The lexicon of a pidgin is usually small and drawn from the vocabularies of its speakers, in varying proportions. Morphological details like word inflections, which usually take years to learn, are omitted; the syntax is kept very simple, usually based on strict word order.
In this initial stage, all aspects of the speech – syntax, lexicon, and pronunciation – tend to be quite variable, especially with regard to the speaker's background.

If a pidgin manages to be learned by the children of a community as a native language, it may become fixed and acquire a more complex grammar, with fixed phonology, syntax, morphology, and syntactic embedding. Pidgins can become full languages in only a single generation. "Creolization" is this second stage where the pidgin language develops into a fully developed native language. The vocabulary, too, will develop to contain more and more items according to a rationale of lexical enrichment.[55]

Universalist models stress the intervention of specific general processes during the transmission of language from generation to generation and from speaker to speaker. The process invoked varies: a general tendency towards semantic transparency, first-language learning driven by universal process, or a general process of discourse organization. Bickerton's language bioprogram theory, proposed in the 1980s, remains the main universalist theory.[56] Bickerton claims that creoles are inventions of the children growing up on newly founded plantations. Around them, they only heard pidgins spoken, without enough structure to function as natural languages; and the children used their own innate linguistic capacities to transform the pidgin input into a full-fledged language. The alleged common features of all creoles would then stem from those innate abilities being universal.

The last decades have seen the emergence of some new questions about the nature of creoles: in particular, the question of how complex creoles are and the question of whether creoles are indeed "exceptional" languages. Some features that distinguish creole languages from noncreoles have been proposed (by Bickerton,[57] for example).
John McWhorter[58] has proposed the following list of features as defining the creole prototype, that is, any language born recently of a pidgin: little or no inflection, little or no tone, and transparent derivation.

McWhorter argues that the absence of these three features is predictable in languages that were born recently of a pidgin, since learning them would constitute a distinct challenge to the non-native speaker. Over the course of generations, however, such features would be expected to gradually (re-)appear, and therefore "many creoles would harbor departures from the Prototype identifiable as having happened after the creole was born" (McWhorter 2018). As one example, McWhorter (2013) notes that the creole Sranan, which has existed for centuries in a diglossic relationship with Dutch, has borrowed some Dutch verbs containing the ver- prefix (fer- in Sranan) and whose meaning is not analyzable; for instance the pair morsu 'to soil', fermorsu 'to squander'.

McWhorter claims that these three properties characterize any language that was born recently as a pidgin, and states "At this writing, in twenty years I have encountered not a single counterexample" (McWhorter 2018).
Nevertheless, the existence of a creole prototype has been disputed by others:

Building on this discussion, McWhorter proposed that "the world's simplest grammars are Creole grammars", claiming that every noncreole language's grammar is at least as complex as any creole language's grammar.[60][61] Gil has replied that Riau Indonesian has a simpler grammar than Saramaccan, the language McWhorter uses as a showcase for his theory.[17] The same objections were raised by Wittmann in his 1999 debate with McWhorter.[62]

The lack of progress made in defining creoles in terms of their morphology and syntax has led scholars such as Robert Chaudenson, Salikoko Mufwene, Michel DeGraff, and Henri Wittmann to question the value of creole as a typological class; they argue that creoles are structurally no different from any other language, and that creole is a sociohistoric concept – not a linguistic one – encompassing displaced populations and slavery.[63]

Thomason & Kaufman (1988) spell out the idea of creole exceptionalism, claiming that creole languages are an instance of nongenetic language change due to language shift with abnormal transmission. Gradualists question the abnormal transmission of languages in a creole setting and argue that the processes which created today's creole languages are no different from universal patterns of language change.

Given these objections to creole as a concept, DeGraff and others question the idea that creoles are exceptional in any meaningful way.[20][64] Additionally, Mufwene (2002) argues that some Romance languages are potential creoles but that they are not considered as such by linguists because of a historical bias against such a view.

Creolistics investigates the relative creoleness of languages suspected to be creoles, what Schneider (1990) calls "the cline of creoleness".
No consensus exists among creolists as to whether the nature of creoleness is prototypical or merely evidence indicative of a set of recognizable phenomena seen in association, with little inherent unity and no underlying single cause.

Creoleness is at the heart of the controversy, with John McWhorter[65] and Mikael Parkvall[66] opposing Henri Wittmann (1999) and Michel DeGraff.[67] In McWhorter's definition, creoleness is a matter of degree, in that prototypical creoles exhibit all three of the traits he proposes to diagnose creoleness: little or no inflection, little or no tone, and transparent derivation. In McWhorter's view, less prototypical creoles depart somewhat from this prototype. Along these lines, McWhorter defines Haitian Creole, exhibiting all three traits, as "the most creole of creoles".[68] A creole like Palenquero, on the other hand, would be less prototypical, given the presence of inflection to mark plural, past, gerund, and participle forms.[69] Objections to the McWhorter-Parkvall hypotheses point out that these typological parameters of creoleness can be found in languages such as Manding, Soninke, and Magoua French, which are not considered creoles.
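McWhorter's three diagnostic traits amount to a simple checklist, which can be sketched as a toy scoring function. The trait assignments below are illustrative placeholders, not attested descriptions of any particular language.

```python
# Toy "creole prototype" checklist after McWhorter's three traits:
# little or no inflection, little or no tone, transparent derivation.
# Trait values below are illustrative placeholders, not attested data.

TRAITS = ("no_inflection", "no_tone", "transparent_derivation")

def prototypicality(features):
    """Count how many of the three prototype traits a language shows."""
    return sum(features[t] for t in TRAITS)

languages = {
    "Language A": {"no_inflection": True,  "no_tone": True,
                   "transparent_derivation": True},
    "Language B": {"no_inflection": False, "no_tone": True,
                   "transparent_derivation": True},
}

for name, feats in languages.items():
    print(name, prototypicality(feats), "of 3 prototype traits")
```

The objection recorded above applies directly to such a checklist: some languages that are not considered creoles would also score three of three, so the score alone cannot diagnose creoleness.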
Wittmann and DeGraff come to the conclusion that efforts to conceive a yardstick for measuring creoleness in any scientifically meaningful way have failed so far.[70][71] Gil (2001) comes to the same conclusion for Riau Indonesian. Muysken & Law (2001) have adduced evidence as to creole languages which respond unexpectedly to one of McWhorter's three features (for example, inflectional morphology in Berbice Creole Dutch, tone in Papiamentu). Mufwene (2000) and Wittmann (2001) have argued further that Creole languages are structurally no different from any other language, and that Creole is in fact a sociohistoric concept (and not a linguistic one), encompassing displaced population and slavery. DeGraff & Walicek (2005) discuss creolistics in relation to colonialist ideologies, rejecting the notion that Creoles can be responsibly defined in terms of specific grammatical characteristics. They discuss the history of linguistics and nineteenth-century work that argues for the consideration of the sociohistorical contexts in which Creole languages emerged.

On the other hand, McWhorter points out that in languages such as Bambara, essentially a dialect of Manding, there is ample non-transparent derivation, and that there is no reason to suppose that this would be absent in close relatives such as Mandinka itself.[72] Moreover, he also observes that Soninke has what all linguists would analyze as inflections, and that current lexicography of Soninke is too elementary for it to be stated with authority that it does not have non-transparent derivation.[73] Meanwhile, Magoua French, as described by Henri Wittmann, retains some indication of grammatical gender, which qualifies as inflection, and it also retains non-transparent derivation.[74] Michel DeGraff's argument has been that Haitian Creole retains non-transparent derivation from French.

Ansaldo, Matthews & Lim (2007) critically assess the proposal that creole languages exist as a homogeneous structural type with shared and/or peculiar origins.
Arends, Muysken & Smith (1995) group creole genesis theories into four categories:

The authors also treat pidgins and mixed languages in separate chapters outside this scheme, whether or not relexification comes into the picture.
https://en.wikipedia.org/wiki/Creole_language
Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language.[1][2] Evolutionary linguists consider linguistics as a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language.[3] Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics.[4]

A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill in gaps of knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals.[5]

For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct;[6] or that it depends on a single mutation[7] which has caused a language organ to appear in the human brain.[8] This is hypothesized to result in a crystalline[9] grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing.[10] Others, yet, liken languages to living organisms.[11] Languages are considered analogous to a parasite[12] or populations of mind-viruses.
There is so far little scientific evidence for any of these claims, and some of them have been labelled as pseudoscience.[13][14]

Although pre-Darwinian theorists had compared languages to living organisms as a metaphor, the comparison was first taken literally in 1863 by the historical linguist August Schleicher, who was inspired by Charles Darwin's On the Origin of Species.[15] At the time there was not enough evidence to prove that Darwin's theory of natural selection was correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution of species.[16] A review of Schleicher's book Darwinism as Tested by the Science of Language appeared in the first issue of the journal Nature in 1870.[17] Darwin reiterated Schleicher's proposition in his 1871 book The Descent of Man, claiming that languages are comparable to species, and that language change occurs through natural selection as words 'struggle for life'. Darwin believed that languages had evolved from animal mating calls.[18] Darwinists considered the concept of language creation unscientific.[19]

August Schleicher and his friend Ernst Haeckel were keen gardeners and regarded the study of cultures as a type of botany, with different species competing for the same living space.[20][16] Similar ideas were later advocated by politicians who wanted to appeal to working-class voters, not least by the national socialists, who subsequently included the concept of struggle for living space in their agenda.[21] Highly influential until the end of World War II, social Darwinism was eventually banished from the human sciences, leading to a strict separation of natural and sociocultural studies.[16]

This gave rise to the dominance of structural linguistics in Europe.
There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in the human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centrepoint of humanistic thinking.

In the United States, structuralism was however fended off by the advocates of behavioural psychology, a linguistics framework nicknamed 'American structuralism'. It was eventually replaced by the approach of Noam Chomsky, who published a modification of Louis Hjelmslev's formal structuralist theory, claiming that syntactic structures are innate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power at MIT following Spring 1968.[22]

Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted the post-structuralists in the Science Wars of the late 1990s.[23] The turn of the century saw a new academic funding policy in which interdisciplinary research became favoured, effectively directing research funds to the biological humanities.[24] The decline of structuralism was evident by 2015, with the Sorbonne having lost its former spirit.[25]

Chomsky eventually claimed that syntactic structures are caused by a random mutation in the human genome,[7] proposing a similar explanation for other human faculties such as ethics.[22] But Steven Pinker argued in 1990 that they are the outcome of evolutionary adaptations.[26]

At the same time as the Chomskyan paradigm of biological determinism defeated humanism, it was losing its own clout within sociobiology.
It was likewise reported in 2015 that generative grammar was under fire in applied linguistics and in the process of being replaced with usage-based linguistics,[27] a derivative of Richard Dawkins's memetics[28] that treats linguistic units as replicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestseller The Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky's Universal Grammar, grouped under different brands, including a framework called Cognitive Linguistics (with capitalised initials) and 'functional' (adaptational) linguistics (not to be confused with functional linguistics), to confront both Chomsky and the humanists.[4] The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics and linguistic typology, while the generative approach has maintained its position in general linguistics, especially syntax, and in computational linguistics.

Evolutionary linguistics is part of the wider framework of Universal Darwinism. In this view, linguistics is seen as an ecological environment for research traditions struggling for the same resources.[4] According to David Hull, these traditions correspond to species in biology. Relationships between research traditions can be symbiotic, competitive or parasitic. An adaptation of Hull's theory in linguistics is proposed by William Croft.[3] He argues that the Darwinian method is more advantageous than linguistic models based on physics, structuralist sociology, or hermeneutics.[4]

Evolutionary linguistics is often divided into functionalism and formalism,[29] concepts which are not to be confused with functionalism and formalism in the humanistic sense.[30] Functional evolutionary linguistics considers languages as adaptations to the human mind.
The formalist view regards them as crystallised or non-adaptational.[29] The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated.[31]It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning which is a 'metaphorical' version of image-based reasoning.[32]Language is not considered a separate area ofcognition, but one coinciding with general cognitive capacities such asperception,attention,motor skills, and spatial andvisual processing. It is argued to function according to the same principles as these.[33][34] It is thought that the brain links action schemes to form–meaning pairs, calledconstructions.[35]Cognitive linguistic approaches to syntax are calledcognitiveandconstruction grammar.[33]Also deriving from memetics and other cultural replicator theories,[3]these can study the natural orsocial selectionand adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language a population of linguistic units. The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been given.[36]What corresponds to replicators or mind-viruses in memetics is calledlinguemesin Croft'stheory of Utterance Selection(TUS),[37]likewise linguemes or constructions in construction grammar andusage-based linguistics,[38][39]andmetaphors,[40]frames[41]orschemas[42]in cognitive and construction grammar. The terminology of memetics has largely been replaced with that of aComplex Adaptive System.[43]In current linguistics, this term covers a wide range of evolutionary notions while maintaining theNeo-Darwinianconcepts of replication and replicator population.[44] Functional evolutionary linguistics is not to be confused withfunctional humanistic linguistics. 
Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th-century advances incrystallography, Schleicher argued that different types of languages are like plants, animals and crystals.[45]The idea of linguistic structures as frozen drops was revived intagmemics,[46]an approach to linguistics with the goal of uncovering divine symmetries underlying all languages, as if caused bythe Creation.[47] In modernbiolinguistics,the X-bar treeis argued to be like natural systems such asferromagnetic dropletsand botanic forms.[48]Generative grammar considers syntactic structures similar tosnowflakes.[9]It is hypothesised that such patterns are caused by amutationin humans.[7] The formal–structural evolutionary aspect of linguistics is not to be confused withstructural linguistics. There was some hope of a breakthrough with the discovery of theFOXP2gene.[49][50]There is little support, however, for the idea thatFOXP2is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech.[51]The idea that people have a language instinct is disputed.[52][53]Memetics is sometimes discredited aspseudoscience[14]and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience.[13]All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and brain structures are shaped by genes.[54][55] Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics.Ferdinand de Saussurecommented on 19th-century evolutionary linguistics: "Language was considered a specific sphere, a fourth natural kingdom; this led to methods of reasoning which would have caused astonishment in other sciences. 
Today one cannot read a dozen lines written at that time without being struck by absurdities of reasoning and by the terminology used to justify these absurdities."[56] Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics a positive development.Esa Itkonennonetheless deems the revival of Darwinism a hopeless enterprise: "There is ... an application of intelligence in linguistic change which is absent in biological evolution; and this suffices to make the two domains totally disanalogous ... [Grammaticalisation depends on] cognitive processes, ultimately serving the goal of problem solving, which intelligent entities like humans must perform all the time, but which biological entities like genes cannot perform. Trying to eliminate this basic difference leads to confusion."[57] Itkonen also points out that the principles of natural selection are not applicable because language innovation and acceptance have the same source, which is the speech community. In biological evolution, mutation and selection have different sources. This makes it possible for people to change their languages, but not theirgenotype.[58]
https://en.wikipedia.org/wiki/Evolutionary_linguistics
Evolutionary psychology of languageis the study of the evolutionary history of language as a psychological faculty within the discipline ofevolutionary psychology. It makes the assumption that language is the result of aDarwinian adaptation. There are many competing theories of how language might have evolved, if indeed it is an evolutionary adaptation. They stem from the belief that language development could result from anadaptation, anexaptation, or a by-product.Geneticsalso influences the study of the evolution of language. It has been speculated that theFOXP2gene may be what giveshumansthe ability to developgrammarandsyntax. In the debate surrounding the evolutionary psychology of language, three sides emerge: those who believe in language as anadaptation, those who believe it is aby-productof another adaptation, and those who believe it is anexaptation. Cognitive scientistsSteven PinkerandPaul Bloomargue that language as a mental faculty shares many likenesses with the complexorgansof the body, which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop. The complexity of the mechanisms, the faculty of language and the ability to learn language provides a comparative resource between thepsychological evolvedtraits and thephysical evolvedtraits.[1] Pinker mostly agrees withNoam Chomsky, alinguistandcognitive scientist, in arguing that the fact that children can learn any human language with no explicit instruction suggests that language, including most of grammar, is basically innate and only needs to be activated by interaction. Pinker and Bloom argue, however, that theorganicnature of language strongly suggests that it has an adaptational origin.[2] Noam Chomsky spearheaded the debate on the faculty of language as a cognitive by-product, or spandrel. 
As a linguist, rather than an evolutionary biologist, his theoretical emphasis was on the infinite capacity of speech and speaking: there is a fixed number of words, but an infinite number of ways to combine them.[3]From this, he argues that the ability of our cognition to perceive, or create, infinite possibilities helped give way to the extreme complexity found in our language.[3]Both Chomsky and Gould argue that the complexity of the brain is in itself an adaptation, and that language arises from such complexity.[3]On the issue of whether language is best seen as having evolved as an adaptation or as a by-product,evolutionary biologistW. Tecumseh Fitch, followingStephen J. Gould, argues that it is unwarranted to assume that every aspect of language is an adaptation, or that language as a whole is an adaptation.[4]He criticizes some strands of evolutionary psychology for suggesting a pan-adaptationist view of evolution, and dismisses Pinker and Bloom's question of whether "language has evolved as an adaptation" as misleading.[4]He argues instead that, from a biological viewpoint, the evolutionary origins of language are best conceptualized as the probable result of a convergence of many separate adaptations into a complex system. A similar argument is made byTerrence Deacon, who inThe Symbolic Speciesargues that the different features of language have co-evolved with the evolution of the mind and that the ability to usesymbolic communicationis integrated into all othercognitive processes.[5] Exaptations, like adaptations, are fitness-enhancing characteristics, but, according to Stephen Jay Gould, their purposes were appropriated as the species evolved. 
This can be for one of two reasons: either the trait's original function was no longer necessary, so the trait took on a new purpose, or the trait did not arise for a certain purpose but later became important.[6]Typically, exaptations have a specific shape and design which becomes the space for a new function.[6]The foundation of this argument comes from the low-lying position of thelarynxin humans.[7]Other mammals have this same positioning of the larynx, but no other species has acquired language. This leads exaptationists to see an evolved modification away from its original purpose.[7] Research has shown that "genetic constraints" on language evolution could have caused a "specialized" and "species-specific language module".[8]It is through this module that there are many specified "domain-specific linguistic properties", such as syntax and agreement.[8]Adaptationists believe that language genes "coevolved with human language itself for the purpose of communication".[8]This view suggests that the genes involved with language would only have coevolved in a very stable linguistic environment. This implies that language could not have evolved in a rapidly changing environment, because that type of environment would not have been stable enough for natural selection. Without natural selection, the genes would not have coevolved with the ability for language, and would instead have come from "cultural conventions".[8]The adaptationist belief that genes coevolved with language also suggests that there are no "arbitrary properties of language", because such properties would have coevolved with language through natural selection.[8] TheBaldwin effectprovides a possible explanation for how language characteristics that are learned over time could become encoded in genes. 
Baldwin suggested, like Darwin, that organisms that can adapt a trait faster have a "selective advantage".[8]As generations pass, fewer environmental stimuli are needed for organisms of the species to develop that trait. Eventually no environmental stimuli are needed, and at this point the trait has become "genetically encoded".[8] The genetic and cognitive components of language have long been under speculation; only recently have linguists pointed to a gene that may help explain how language works.[9]Evolutionary psychologists hold that the FOXP2genemay well be associated with the evolution of human language. In the 1980s, psycholinguistMyrna Gopnikidentified a dominant gene that causes language impairment in theKE familyofBritain. The KE family has a mutation in FOXP2 that causes aspeechandlanguage disorder. It has been argued that FOXP2 is the grammar gene, which allows humans the ability to form proper syntax and makes communication of higher quality. Children that grow up in a stable environment develop highly proficient language without any instruction. Individuals with a mutation in their FOXP2 gene have trouble mastering complex sentences and show signs ofdevelopmental verbal dyspraxia.[9] This gene most likely evolved in thehomininline after the hominin and chimpanzee lines split; this would account for the fact that only humans can learn and understand grammar.[10]Humans have a uniquealleleof this gene, which has otherwise been closely conserved through most of mammalian evolutionary history. This unique allele seems to have first appeared between 100 and 200 thousand years ago, and it is now all but universal in humans.[10]This suggests that speech evolved late in the overall spectrum of human evolution. By some classifications, nearly 7000 languages exist worldwide, with a great amount of variation thought to have evolved throughcultural differentiation. 
There are four factors that are thought to explain why language variation exists between cultures:founder effects,drift,hybridizationand adaptation. With vast amounts of land available, different tribes branched out to claim territory, which would require new place names, as well as names for new activities (such as terms for new fishing techniques required in streams by a people who had previously only fished from the ocean). Groups who lived far apart had little or no communication, even if they originally spoke the same language, allowing their languages to drift apart.[11]Hybridization also played a significant role in language evolution. One group would come in contact with another tribe, and the two groups would pick up words and sounds from each other, eventually leading to the formation of a new language. Finally, adaptation had an impact on language differentiation. Natural environments and cultural contexts would change over time; therefore the groups had to adapt to the environment, and their language had to adapt with them. For example, the introduction of bronze-making in an area would prompt the introduction or creation of terms related to bronze.[11] Atkinson theorized that language may have originated inAfrica, sinceAfrican languageshave a greater variation of speech sounds than other languages. Those sounds are seen as the root of the other languages that exist across the world.[12] Research indicates that nonhuman animals (e.g., apes, dolphins, and songbirds) show evidence of language-like abilities. Comparative studies of the sensory-motor system reveal that speech is not special to humans: nonhuman primates can discriminate between two different spoken languages.[13]Anatomical features of humans, particularly the descended larynx, have been believed to be unique prerequisites for humans' capacity to speak. 
However, further research revealed that several other mammals besides humans have a descended larynx, which indicates that a descended larynx cannot be the only anatomical feature needed for speech production.[13]Nor is vocal imitation uniquely human.[13]Songbirds seem to acquire species-specific songs by imitation.[14][15]Because nonhuman primates do not have a descended larynx, they lack vocal imitative capacity, which is why studies involving these primates have taught them nonverbal means of communication, e.g., sign language.[13] KokoandNim Chimpskyare two apes that successfully learned to use sign language, though not to the extent that a human being can. Nim was a chimpanzee that was taken in by a family in the 1970s and raised as if he were a human child. Nim mastered 150 signs, which were limited but useful. Koko was a gorilla taken in by a Stanford student. She mastered 1,000 signs for generative communication.[13]
https://en.wikipedia.org/wiki/Evolutionary_psychology_of_language
Thefis phenomenonis a phenomenon during a child'slanguage acquisitionthat demonstrates that perception ofphonemesoccurs earlier than a child's ability to produce the appropriateallophone. It is also illustrative of a larger theme in child language acquisition: that skills inlinguistic comprehensiongenerally precede corresponding skills inlinguistic production. The name comes from an incident reported in 1960 byJ. BerkoandR. Brown, in which a child referred to his inflatable plastic fish as afis. However, when adults asked him, "Is this yourfis?" he rejected the statement. When he was asked, "Is this your fish?" he responded, "Yes, my fis." This shows that although the child could not produce the phoneme /ʃ/, he could perceive it as being different from the phoneme /s/. In some cases, the sounds produced by the child are actually acoustically different, but not significantly enough for others to distinguish, since the language in question does not make such contrasts. Researchers[1]observed the fis phenomenon while doing research on Juli, a toddler acquiring ASL. The observers saw it occur in her use of the sign for 'water' while Juli was outside holding rocks in her hand and signing to the observer through a window. Juli produced the sign [ix-rock water ix-loc(hose)] with an erroneous "20" handshape for "water". The observer saw what Juli had signed, thought that Juli wanted to eat the rocks in her hand, and commented to Juli that she could not eat the rocks. As soon as Juli saw that the observer had interpreted her attempted sign for 'water' as the sign for 'food/eat', she corrected herself and repeated her sign with the 'W' handshape, which signals 'water'. 
The adult then realized what Juli meant and asked if she meant that she wanted to use the water hose, and Juli nodded.[2] All languages can be deconstructed into smaller elements, considered levels of language: the Phonological System, the Reference System, the Morphological System, and the Syntactic System.[3]The Phonological System correlates with the different stages in which a child acquires language. The acquisition process begins at birth: the brain begins to specialize in the sounds heard around the child and to produce vowel-like sounds. This is the cooing stage. The babbling stage, six to 11 months, is when consonants like /m/ and /b/ are combined with vowels, as in ma-ma-ma and ba-ba-ba. The next two stages are the one-word stage, 12 to 18 months, and the two-word stage, 18 months to two years of age. At around two years, the child enters the telegraphic stage, where they learn to put multiple words together. Note that researchers can only estimate the ages at which a child transitions through the stages. This is due to many varying factors in how the child acquires language, such as mental capability, the right language environment, exposure to language, and more.[4] The Phonological System is broken up into two different categories, perception and production. As the child goes through the stages of acquiring the language, perception and production are being developed in the brain. The fis phenomenon occurs due to a lack of production ability in the child, even though the child perceives the sound correctly. The relation between perception, production and the fis phenomenon is discussed below. The phonological performance of children is predominantly consistent and predictable, leading to the generally accepted notion that their performance is governed by a set of rules rather than being a result of random deviations. These rules are used to navigate from the surface form (adult pronunciation) to child pronunciation. 
This idea might help explain the occurrence of things such as the fis phenomenon. There is evidence to support the idea that a child manipulates isomorphic adult representations of language. This evidence stems from three areas: 1) a child's ability to recognize disparities in the adult form that the child is unable to produce, 2) the child's understanding of their own speech, and 3) their grammatical and morphological tendencies. The role of perception in the phonological performance of children is that their lexical representation of the adult form is first passed through the child's perceptual filter. This means that the adult pronunciation, or surface form, is not necessarily the form that is being affected by the child's phonological rules. There is a clear difference between the adult form and the child's mental representation. Barton (1976)[5]tested this hypothesis and the results largely supported it, though there were later requests for a "perceptual explanation". The most notable example, shown below, illustrates the perception of consonant clusters compared to the child's output. Clusters consisting of [+nasal] followed by a [+voice] or [-voice] consonant are perceived differently by children.

mend → mɛn
meant → mɛt[6]

The nasal before a voiced consonant is long and notable (mend). The nasal before an unvoiced consonant is indistinct, leaving the following consonant as the most notable of the cluster. The child's mental representation is then converted by a small set of rules called Realization Rules, which are used to reach the final form, the child's pronunciation. 
An example of the implementation of Realization Rules is informally illustrated in the sample derivation below, where a child consistently producedsquatas[gɔp]:

/skwɒt/ → [skwɔp] (harmonizing a coronal to a preceding labialized sequence /kw/)
[skwɔp] → [kwɔp] (deleting pre-consonantal /s/)
[kwɔp] → [kɔp] (deleting post-consonantal sonorants)
[kɔp] → [gɔp] (neutralizing the voicing distinction)[6]

Although children seem to be able to recognize the correct pronunciation of "fish", they can only produce an /s/, meaning that they are left saying "fis" instead. Since the problem does not seem to be speech perception, experts believe that it is associated mostly with the coordination of the speech muscles, leading them to think that these children's speech muscles need practice.[7]One way that experts encourage the practice of speech production in children is word and phrase repetition. In this case, it is helpful to practice words that contain the /ʃ/ sound of "fish".[8] Scoobie et al. (1996) looked into child perception and speech acquisition, as well as how children contrast minimal pairs. The study used children with phonological disorders and focused on their acquisition of /s/-initial stop clusters.[9] A 1941 study by Roman Jakobson hypothesized that children who speak English follow a basic phonological order when acquiring their language's feature distinctions and, more strongly, that some elements of this order are sequential with respect to others,i.e., that for some distinctions, a child cannot fully acquire that distinction unless the child has already learned one or more specific other distinctions. In a 1948 study, Schvachkin hypothesized that Russian-speaking children develop phonetic distinctions in an invariant order. 
In Schvachkin's proposed order of acquisition, the "hushing" vs. "hissing" sibilant distinction comes second to last.[10]Given that children are still refining their phonetic production skills, it might be the case that these features are not produced accurately because earlier ones still need work. Juliette Blevins claimed that children can perceive both their own uses of minimal pairs and the adult usages. The child may therefore believe that the subtle differences in their use of a minimal pair can be perceived by the adult, because the child themselves can recognize the differences.[11]
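The sample derivation for squat above amounts to applying a small set of rewrite rules in a fixed order. As a rough sketch (the rule set and transcriptions simply mirror that one derivation; this is an illustration of ordered rule application, not a full phonology):

```python
# A minimal sketch of ordered Realization Rules as string rewrites.
# The rules mirror the sample derivation of "squat" /skwɒt/ -> [gɔp];
# this is illustrative only, not a general phonological model.

rules = [
    ("skwɒt", "skwɔp"),  # harmonize the coronal /t/ to the labialized /kw/
    ("skw",   "kw"),     # delete pre-consonantal /s/
    ("kw",    "k"),      # delete the post-consonantal sonorant /w/
    ("k",     "g"),      # neutralize the voicing distinction
]

def derive(form, rules):
    """Apply each rewrite rule once, in order, returning every stage."""
    stages = [form]
    for old, new in rules:
        form = form.replace(old, new, 1)
        stages.append(form)
    return stages

print(" → ".join(derive("skwɒt", rules)))
# skwɒt → skwɔp → kwɔp → kɔp → gɔp
```

Because the rules are ordered, each stage feeds the next, which is why the derivation reaches [gɔp] rather than some other output.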
https://en.wikipedia.org/wiki/Fis_phenomenon
Forkhead box protein P2(FOXP2) is aproteinthat, in humans, is encoded by theFOXP2gene. FOXP2 is a member of theforkhead boxfamily oftranscription factors, proteins thatregulate gene expressionbybinding to DNA. It is expressed in the brain, heart, lungs and digestive system.[5][6] FOXP2is found in manyvertebrates, where it plays an important role in mimicry in birds (such asbirdsong) andecholocationin bats.FOXP2is also required for the proper development of speech and language in humans.[7]In humans, mutations inFOXP2cause the severe speech and language disorderdevelopmental verbal dyspraxia.[7][8]Studies of the gene in mice and songbirds indicate that it is necessary for vocal imitation and the related motor learning.[9][10][11]Outside the brain,FOXP2has also been implicated in the development of other tissues such as the lung and digestive system.[12] Initially identified in 1998 as the genetic cause of aspeech disorderin a British family designated theKE family,FOXP2was the first gene discovered to be associated with speech and language[13]and was subsequently dubbed "the language gene".[14]However, other genes are necessary for human language development, and a 2018 analysis confirmed that there was no evidence of recent positiveevolutionary selectionofFOXP2in humans.[15][16] As aFOX protein, FOXP2 contains a forkhead-box domain. In addition, it contains apolyglutamine tract, azinc fingerand aleucine zipper. Through the forkhead-box domain, the protein binds to the DNA of other genes and controls their activity. Only a few target genes have been identified; however, researchers believe that FOXP2 may target up to hundreds of other genes. 
The forkhead box P2 protein is active in the brain and other tissues before and after birth, and many studies show that it is paramount for the growth of nerve cells and transmission between them. The FOXP2 gene is also involved in synaptic plasticity, making it imperative for learning and memory.[17] FOXP2is required for proper brain and lung development.Knockout micewith only one functional copy of theFOXP2gene have significantly reduced vocalizations as pups.[18]Knockout mice with no functional copies ofFOXP2are runted, display abnormalities in brain regions such as thePurkinje layer, and die an average of 21 days after birth from inadequate lung development.[12] FOXP2is expressed in many areas of the brain,[19]including thebasal gangliaand inferiorfrontal cortex, where it is essential for brain maturation and speech and language development.[20]In mice, the gene was found to be twice as highly expressed in male pups as in female pups, which correlated with male pups making almost twice as many vocalisations when separated from their mothers. Conversely, in human children aged 4–5, the gene was found to be 30% more expressed in theBroca's areaof female children. The researchers suggested that the gene is more active in "the more communicative sex".[21][22] The expression ofFOXP2is subject topost-transcriptional regulation, particularly bymicroRNA(miRNA) binding to the FOXP23' untranslated region, causing repression.[23] Three amino acid substitutions distinguish the humanFOXP2protein from that found in mice, while two amino acid substitutions distinguish the humanFOXP2protein from that found in chimpanzees,[19]but only one of these changes is unique to humans.[12]Evidence from genetically manipulated mice[24]and human neuronal cell models[25]suggests that these changes affect the neural functions ofFOXP2. The FOXP2 gene has been implicated in several cognitive functions, including general brain development, language, and synaptic plasticity. 
The FOXP2 gene encodes the forkhead box P2 protein, which acts as a transcription factor. Transcription factors affect other genes, and the forkhead box P2 protein has been suggested to act as a transcription factor for hundreds of them. This prolific involvement opens the possibility that FOXP2's role is much more extensive than originally thought.[17]Other transcription targets have been researched without finding a correlation to FOXP2. Specifically, FOXP2 has been investigated in connection with autism and dyslexia; however, no mutation was discovered as the cause.[26][8]One well-identified target is language.[27]Although some research disagrees with this correlation,[28]the majority of research shows that a mutated FOXP2 causes the observed production deficiency.[17][27][29][26][30][31] There is some evidence that the linguistic impairments associated with a mutation of theFOXP2gene are not simply the result of a fundamental deficit in motor control. Brain imaging of affected individuals indicates functional abnormalities in language-related cortical and basal ganglia regions, demonstrating that the problems extend beyond the motor system.[32] Mutations in FOXP2 are among several (26 genes plus 2 intergenic) loci which correlate withADHDdiagnosis in adults – clinical ADHD is an umbrella label for a heterogeneous group of genetic and neurological phenomena which may result from FOXP2 mutations or other causes.[33] A 2020genome-wide association study(GWAS) implicatessingle-nucleotide polymorphisms(SNPs) of FOXP2 in susceptibility tocannabis use disorder.[34] It is theorized that translocation of the 7q31.2 region of the FOXP2 gene causes a severe language impairment calleddevelopmental verbal dyspraxia(DVD)[27]or childhood apraxia of speech (CAS).[35]So far this type of mutation has only been discovered in three families across the world, including the original KE family.[31]A missense mutation causing an arginine-to-histidine substitution (R553H) in the 
DNA-binding domain is thought to be the abnormality in KE.[36]This would cause a normally basic residue to be fairly acidic and highly reactive at the body's pH. A heterozygous nonsense mutation, R328X variant, produces a truncated protein involved in speech and language difficulties in one KE individual and two of their close family members. R553H and R328X mutations also affected nuclear localization, DNA-binding, and the transactivation (increased gene expression) properties of FOXP2.[8] These individuals present with deletions, translocations, and missense mutations. When tasked with repetition and verb generation, these individuals with DVD/CAS had decreased activation in the putamen and Broca's area in fMRI studies. These areas are commonly known as areas of language function.[37]This is one of the primary reasons that FOXP2 is known as a language gene. They have delayed onset of speech, difficulty with articulation including slurred speech, stuttering, and poor pronunciation, as well as dyspraxia.[31]It is believed that a major part of this speech deficit comes from an inability to coordinate the movements necessary to produce normal speech including mouth and tongue shaping.[27]Additionally, there are more general impairments with the processing of the grammatical and linguistic aspects of speech.[8]These findings suggest that the effects of FOXP2 are not limited to motor control, as they include comprehension among other cognitive language functions. 
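The variant shorthand used above (R553H, R328X) encodes the reference residue, its position, and the substituted residue in one-letter amino-acid code. A small hypothetical parser illustrating the notation (the parser and the name table are my own illustration, not part of any bioinformatics library or of the studies cited):

```python
import re

# Minimal parser for one-letter amino-acid substitution shorthand,
# e.g. "R553H" = arginine at position 553 replaced by histidine.
# "X" conventionally marks a nonsense (stop) variant, as in R328X.
# Illustrative sketch only; real tools follow HGVS nomenclature.

AA = {"R": "arginine", "H": "histidine", "T": "threonine",
      "N": "asparagine", "S": "serine", "X": "stop (nonsense)"}

def parse_substitution(code):
    """Split a code like 'R553H' into (reference, position, substitute)."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", code)
    if not m:
        raise ValueError(f"not a substitution code: {code!r}")
    ref, pos, alt = m.groups()
    return AA[ref], int(pos), AA[alt]

print(parse_substitution("R553H"))  # ('arginine', 553, 'histidine')
print(parse_substitution("R328X"))  # ('arginine', 328, 'stop (nonsense)')
```

The same shorthand appears later in the article for the human-specific changes T303N and N325S.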
General mild motor and cognitive deficits are noted across the board.[29]Clinically these patients can also have difficulty coughing, sneezing, or clearing their throats.[27] While FOXP2 has been proposed to play a critical role in the development of speech and language, this view has been challenged by the fact that the gene is also expressed in other mammals as well as birds and fish that do not speak.[38]It has also been proposed that the FOXP2 transcription-factor is not so much a hypothetical 'language gene' but rather part of a regulatory machinery related to externalization of speech.[39] TheFOXP2gene is highly conserved inmammals.[19]The human gene differs from that innon-human primatesby the substitution of two amino acids, athreoninetoasparaginesubstitution at position 303 (T303N) and an asparagine toserinesubstitution at position 325 (N325S).[36]In mice it differs from that of humans by three substitutions, and inzebra finchby seven amino acids.[19][40][41]One of the two amino acid differences between human and chimps also arose independently in carnivores and bats.[12][42]SimilarFOXP2proteins can be found insongbirds,fish, andreptilessuch asalligators.[43][44] DNA sampling fromHomo neanderthalensisbones indicates that theirFOXP2gene is a little different though largely similar to those ofHomo sapiens(i.e. humans).[45][46]Previous genetic analysis had suggested that theH. sapiensFOXP2 gene became fixed in the population around 125,000 years ago.[47]Some researchers consider the Neanderthal findings to indicate that the gene instead swept through the population over 260,000 years ago, before our most recent common ancestor with the Neanderthals.[47]Other researchers offer alternative explanations for how theH. 
sapiensversion would have appeared in Neanderthals living 43,000 years ago.[47] According to a 2002 study, theFOXP2gene showed indications of recentpositive selection.[19][48]Some researchers have speculated that positive selection is crucial for theevolution of language in humans.[19]Others, however, were unable to find a clear association between species with learned vocalizations and similar mutations inFOXP2.[43][44]A 2018 analysis of a large sample of globally distributed genomes confirmed there was no evidence of positive selection, suggesting that the original signal of positive selection may be driven by sample composition.[15][16]Insertion of both humanmutationsinto mice, whose version ofFOXP2otherwise differs from the human andchimpanzeeversions in only one additional base pair, causes changes in vocalizations as well as other behavioral changes, such as a reduction in exploratory tendencies and a decrease in maze learning time. A reduction in dopamine levels and changes in the morphology of certain nerve cells are also observed.[24] FOXP2 is known to regulateCNTNAP2,CTBP1,[49]SRPX2andSCN3A.[50][20][51] FOXP2 downregulatesCNTNAP2, a member of theneurexinfamily found in neurons.CNTNAP2is associated with common forms of language impairment.[52] FOXP2 also downregulatesSRPX2, the 'Sushi Repeat-containing Protein X-linked 2'.[53][54]FOXP2 directly reduces SRPX2 expression by binding to the gene'spromoter. SRPX2 is involved inglutamatergicsynapse formationin thecerebral cortexand is more highly expressed in childhood. SRPX2 appears to specifically increase the number of glutamatergic synapses in the brain, while leaving inhibitoryGABAergicsynapses unchanged and not affectingdendritic spinelength or shape.
On the other hand, FOXP2's activity does reduce dendritic spine length and alter spine shape, in addition to number, indicating it has other regulatory roles in dendritic morphology.[53] In chimpanzees, FOXP2 differs from the human version by two amino acids.[55]A study in Germany sequenced FOXP2's complementary DNA in chimps and other species to compare it with human complementary DNA in order to find the specific changes in the sequence.[19]FOXP2 was found to be functionally different in humans compared to chimps. Since FOXP2 also affects other genes, these downstream effects are also being studied.[56]Researchers suggested that these studies could also have clinical applications for illnesses that affect human language ability.[25] In mouseFOXP2gene knockouts, loss of both copies of the gene causes severe motor impairment related to cerebellar abnormalities and lack ofultrasonicvocalisationsnormally elicited when pups are removed from their mothers.[18]These vocalizations have important communicative roles in mother–offspring interactions. Loss of one copy was associated with impairment of ultrasonic vocalisations and a modest developmental delay. Male mice, on encountering female mice, produce complex ultrasonic vocalisations that have characteristics of song.[57]Mice that have the R552H point mutation carried by the KE family show cerebellar reduction and abnormalsynaptic plasticityin striatal andcerebellarcircuits.[9] Humanized FOXP2 mice display alteredcortico-basal gangliacircuits. The human allele of the FOXP2 gene was transferred into the mouse embryos throughhomologous recombinationto create humanized FOXP2 mice. The human variant of FOXP2 also had an effect on the exploratory behavior of the mice.
In comparison to knockout mice with one non-functional copy ofFOXP2, the humanized mouse model showed opposite effects on dopamine levels, synaptic plasticity, expression patterns in the striatum, and exploratory behavior.[24] When FOXP2 expression was altered in mice, it affected many different processes, including the learning of motor skills and the plasticity of synapses. Additionally, FOXP2 is found more in thesixth layerof the cortex than in thefifth, and this is consistent with it having greater roles in sensory integration. FOXP2 was also found in themedial geniculate nucleusof the mouse brain, which is the processing area that auditory inputs must go through in the thalamus. It was found that its mutations play a role in delaying the development of language learning. It was also found to be highly expressed in the Purkinje cells and cerebellar nuclei of the cortico-cerebellar circuits. High FOXP2 expression has also been shown in the spiny neurons that expresstype 1 dopamine receptorsin the striatum,substantia nigra,subthalamic nucleusandventral tegmental area. The negative effects of the mutations of FOXP2 in these brain regions on motor abilities were shown in mice through tasks in lab studies. When analyzing the brain circuitry in these cases, scientists found greater levels of dopamine and decreased lengths of dendrites, causing defects inlong-term depression, which is implicated in motor function learning and maintenance. ThroughEEGstudies, it was also found that these mice had increased levels of activity in their striatum, which contributed to these results.
There is further evidence that mutations in targets of the FOXP2 gene have roles inschizophrenia,epilepsy,autism,bipolar disorderand intellectual disabilities.[58] FOXP2has been implicated in the development ofbatecholocation.[36][42][59]In contrast to apes and mice,FOXP2is extremely diverse inecholocating bats.[42]Twenty-two sequences of non-bateutherianmammals revealed a total of 20 nonsynonymous mutations, whereas half that number of bat sequences showed 44 nonsynonymous mutations.[42]Allcetaceansshare three amino acid substitutions, but no differences were found between echolocatingtoothed whalesand non-echolocatingbaleen cetaceans.[42]Within bats, however, amino acid variation correlated with different echolocating types.[42] Insongbirds,FOXP2most likely regulates genes involved inneuroplasticity.[10][60]Gene knockdownofFOXP2in area X of thebasal gangliain songbirds results in incomplete and inaccurate song imitation.[10]Overexpression ofFOXP2was accomplished through injection ofadeno-associated virusserotype 1 (AAV1) into area X of the brain. This overexpression produced similar effects to that of knockdown; juvenile zebra finch birds were unable to accurately imitate their tutors.[61]Similarly, in adult canaries, higherFOXP2levels also correlate with song changes.[41] Levels ofFOXP2in adult zebra finches are significantly higher when males direct their song to females than when they sing in other contexts.[60]"Directed" singing refers to when a male is singing to a female, usually for a courtship display. "Undirected" singing occurs when, for example, a male sings while other males are present or while alone.[62]Studies have found that FoxP2 levels vary depending on the social context. When the birds were singing undirected song, there was a decrease of FoxP2 expression in Area X.
This downregulation was not observed and FoxP2 levels remained stable in birds singing directed song.[60] Differences between song-learning and non-song-learning birds have been shown to be caused by differences inFOXP2gene expression, rather than differences in the amino acid sequence of theFOXP2protein. Inzebrafish, FOXP2 is expressed in the ventral anddorsal thalamus,telencephalon, anddiencephalon, where it likely plays a role in nervous system development. The zebrafish FOXP2 gene has an 85% similarity to the human FOXP2 ortholog.[63] FOXP2and its gene were discovered as a result of investigations of an English family known as theKE family, half of whom (15 individuals across three generations) had a speech and language disorder calleddevelopmental verbal dyspraxia. Their case was studied at theInstitute of Child Health of University College London.[64]In 1990,Myrna Gopnik, Professor of Linguistics atMcGill University, reported that the disorder-affected members of the KE family had a severe speech impediment with incomprehensible talk, largely characterized by grammatical deficits.[65]She hypothesized that the basis was not a learning or cognitive disability, but genetic factors affecting mainly grammatical ability.[66](Her hypothesis led to the popularised notion of a "grammar gene" and a controversial notion of grammar-specific disorder.[67][68]) In 1995, theUniversity of Oxfordand the Institute of Child Health researchers found that the disorder was purely genetic.[69]Remarkably, the inheritance of the disorder from one generation to the next was consistent withautosomal dominantinheritance, i.e., mutation of only a single gene on anautosome(non-sex chromosome) acting in a dominant fashion. This is one of the few known examples ofMendelian(monogenic) inheritance for a disorder affecting speech and language skills, which typically have a complex basis involving multiple genetic risk factors.[70] In 1998, Oxford University geneticistsSimon Fisher,Anthony Monaco, Cecilia S.
L. Lai, Jane A. Hurst, andFaraneh Vargha-Khademlocalized the autosomal dominant monogenic inheritance to a small region ofchromosome 7, using DNA samples taken from the affected and unaffected members.[5]The chromosomal region (locus) contained 70 genes.[71]The locus was given the official name "SPCH1" (for speech-and-language-disorder-1) by the Human Genome Nomenclature committee. Mapping and sequencing of the chromosomal region was performed with the aid ofbacterial artificial chromosomeclones.[6]Around this time, the researchers identified an individual who was unrelated to the KE family but had a similar type of speech and language disorder. In this case, the child, known as CS, carried a chromosomal rearrangement (atranslocation) in which part of chromosome 7 had become exchanged with part of chromosome 5. The site of breakage of chromosome 7 was located within the SPCH1 region.[6] In 2001, the team identified that the mutation in CS lay in the middle of a protein-coding gene.[7]Using a combination ofbioinformaticsandRNAanalyses, they discovered that the gene codes for a novel protein belonging to theforkhead-box(FOX) group oftranscription factors. As such, it was assigned the official name FOXP2. When the researchers sequenced theFOXP2gene in the KE family, they found aheterozygouspoint mutationshared by all the affected individuals but absent in unaffected members of the family and other people.[7]This mutation causes an amino-acid substitution that inhibits the DNA-binding domain of theFOXP2protein.[72]Further screening of the gene identified multiple additional cases ofFOXP2disruption, including different point mutations[8]and chromosomal rearrangements,[73]providing evidence that damage to one copy of this gene is sufficient to derail speech and language development.
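The variants above (T303N, N325S, R553H, R328X) use standard one-letter substitution notation: reference residue, 1-based position, variant residue. As a minimal illustrative sketch (not part of any published FOXP2 analysis), such notation can be parsed and applied to a toy sequence:

```python
import re

def apply_substitutions(seq, subs):
    """Apply substitutions written as e.g. 'T3N' (reference residue,
    1-based position, variant residue) to a protein sequence string."""
    residues = list(seq)
    for sub in subs:
        ref, pos, alt = re.fullmatch(r"([A-Z])(\d+)([A-Z])", sub).groups()
        i = int(pos) - 1
        # Sanity check: the stated reference residue must match the sequence.
        if residues[i] != ref:
            raise ValueError(f"expected {ref} at position {pos}, found {residues[i]}")
        residues[i] = alt
    return "".join(residues)

# Toy 5-residue sequence (hypothetical, not the real FOXP2 protein);
# 'T3N' and 'N5S' mimic the style of the T303N and N325S substitutions.
print(apply_substitutions("MATAN", ["T3N", "N5S"]))  # → MANAS
```

A nonsense variant such as R328X (X denoting a stop codon) would instead truncate the sequence at that position, which this sketch does not model.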
https://en.wikipedia.org/wiki/FOXP2
Gestures in language acquisitionare a form ofnon-verbal communicationinvolving movements of the hands, arms, and/or other parts of the body. Children can usegestureto communicate before they have the ability to use spoken words and phrases. In this way gestures can prepare children to learn a spoken language, creating a bridge from pre-verbal communication to speech.[1][2]The onset of gesture has also been shown to predict and facilitate children's spokenlanguage acquisition.[3][4]Once children begin to use spoken words, their gestures can be used in conjunction with these words to form phrases and eventually to express thoughts and complement vocalized ideas.[4] Gestures not only complement language development but also enhance the child’s ability to communicate. Gestures allow the child to convey a message or thought that they would not be able to easily express using their limited vocabulary. Children's gestures are classified into different categories occurring in different stages of development. The categories of children's gesture include deictic and representational gestures.[5] Gestures are distinct from manual signs in that they do not belong to a complete language system.[6]For example, pointing, the extension of a body part (especially the index finger) to indicate interest in an object, is a widely used gesture that is understood by many cultures.[7]On the other hand, manual signs are conventionalized—they are gestures that have become a lexical element in a language. A good example of manual signing isAmerican Sign Language(ASL)–when individuals communicate via ASL, their signs have meanings that are equivalent to words (e.g., two people communicating using ASL both understand that forming a fist with the right hand and rotating this fist using clockwise motions on the chest carries the lexical meaning of the word "sorry").[8] Typically, the first gestures children show around 10 to 12 months of age are deictic gestures.
These gestures are also known aspointing, where children extend their index finger (although any other body part could also be used) to single out an object of interest.[5]Deictic gestures occur across cultures and indicate that infants are aware of what other people pay attention to. Pre-verbal children use pointing for many different reasons, such as responding to or answering questions and/or sharing their interests and knowledge with others.[9] Infants' pointing has three main functions. The existence of deictic gestures that are declarative andepistemicin nature reflects another important part of children's development, the development ofjoint visual attention. Joint visual attention occurs when a child and an adult are both paying attention to the same object.[11]Joint attention through the use of pointing is considered a precursor to speech development because it reveals that children want to communicate with another person.[5]Furthermore, the amount of pointing at 12 months old predicts speech production and comprehension rates at 24 months old.[11]In children withautism spectrumdisorder, the use of right-handed gestures—particularly deictic gestures—reliably predicts their expressive vocabularies 1 year later, a pattern also observed in typically-developing children.[12] Once children can produce spoken words, they often use deictic gestures to create sentence-like phrases. These phrases occur when a child, for example, says the word "eat" and then points to a cookie.
The incidence of these gesture-word combinations predicts the transition from one-word to two-word speech.[4]This shows that gesture can maximize the communicative opportunities that children can have before their speech is fully developed, facilitating their entrance into lexical and syntactic development.[11][13] A representational gesture refers to an object, person, location, or event through hand movement, body movement, orfacial expression.[14]Representational gestures can be divided into iconic and conventional gestures. Unlike deictic gestures, representational gestures communicate a specific meaning.[14][15]Children start to produce representational gestures at 10 to 24 months of age.[16]Young American children will produce more deictic gestures than representational gestures,[14]but Italian children will produce almost equal amounts of representational and deictic gestures.[15] Iconic gestures have a visually similar relationship to the action, object, or attribute they portray.[7]There is an increase in iconic gesturing after the two-word utterance stage at 26 months.[16]Children are able to create novel iconic gestures when attempting to inform the listener of information they think the listener does not know.
Iconic gestures aided language development after the two-word utterance stage, whereas deictic gestures did not.[17]Iconic gestures are the most common form of representational gesture in Italian children.[7][17]Children will copy the iconic gestures they see their parents using;[15]therefore, including iconic gestures when measuring representational vocabularies increases Italian children's vocabularies.[7][17]Even though the Italian children produced more iconic gestures, the two-word utterance stage did not arrive earlier than in American children, who produce fewer iconic gestures.[7] Conventional gestures are culture-bound emblems that do not translate across different cultures.[14]Culture-specific gestures such as shaking your head "no" or waving "goodbye" are considered conventional gestures.[14]Although American children do not typically produce many representational gestures in general, conventional gestures are the most frequently used in the representational gesture category.[7][17] As with most developmental timelines, it is important to consider that no two children develop at the same pace.
Infantgestureis thought to be an important part of the prelinguistic period and prepares a child for the emergence of language.[18]It has been suggested thatlanguageand gesture develop in interaction with one another.[19]It is believed that gestures are easier to produce for both infants and adults;[19]this is supported by the fact that infants begin to communicate with gestures before they can produce words.[18]The first type of gestures that appear in infants are deictic gestures.[18]Deictic gestures includepointing, which is often the most common gesture produced atten monthsof age.[16][20]Ateleven monthsof age, children can produce a sequence of two gestures, usually a deictic gesture with a conventional or representational gesture.[21]Bytwelve monthsof age, children can begin to produce three-gesture sequences, usually a representational or conventional gesture preceded and followed by a deictic gesture.[21]Aroundtwelve monthsof age, infants begin to use representational gestures.[18]In relation to language acquisition, representational gestures appear around the same time as first words.[20]At age18 months, children produce more deictic gestures than representational gestures.[22]Between thefirst and second year of life, children begin to learn more words and use gestures less.[20]At26 months of age, there is an increase in iconic gesture use and comprehension.[21]Gestures become more complex as children get older.
Betweenage 4-6children can use whole body gestures when describing a route.[21]A whole body gesture occurs in three-dimensional space and is used when the speaker is describing a route as if they are on it.[21]Atages 5–6, children also describe a route from abird's eye viewand use representational gestures from this point of view.[21]The ways in which gestures are used are an indication of the developmental or conceptual ability of children.[23] Not only do gestures play an important role in the natural development of spoken language, but they also are a major factor inaugmentative and alternative communication(AAC). AAC refers to the methods, tools, and theories to use non-standard linguistic forms of communication by and with individuals without or with limited functional speech.[6]Means used to communicate in AAC can span from high-tech computer-based communication devices, to low-tech means such as one-message switches, to non-tech means such as picture cards, manual signs, and gestures.[6]It is only within the last two decades that the importance of gestures in the cognitive and linguistic development processes has been examined, and in particular the gesture's functionality for individuals with communication disorders, especially AAC users.
https://en.wikipedia.org/wiki/Gestures_in_language_acquisition
Language teaching, like other educational activities, may employ specializedvocabularyandword use. This list is aglossaryforEnglish language learning and teachingusing thecommunicative approach.
https://en.wikipedia.org/wiki/Glossary_of_language_education_terms
Inlanguage learningresearch,identityrefers to the personal orientation to time, space, and society, and the manner in which it develops together with, and because of, speech development.[1] Languageis a largely social practice, and this socialization is reliant on, and develops concurrently with, one's understanding of personal relationships and position in the world; those who learn a second language are influenced both by the language itself and by the languages' interrelations with each other. For this reason, every time language learners interact in the second language, whether in the oral or written mode, they are engaged in identity construction and negotiation. However, structural conditions and social contexts are not entirely determining. Through human agency, language learners who struggle to speak from one identity position may be able to reframe their relationship with their interlocutors and claim alternative, more powerful identities from which to speak, thereby enabling learning to take place. The relationship between identity and language learning is of interest to scholars in the fields ofsecond language acquisition(SLA),language education,sociolinguistics, andapplied linguistics.[2]It is best understood in the context of a shift in the field from a predominantly psycholinguistic approach to SLA to include a greater focus on sociological and cultural dimensions of language learning,[3][4][5]or what has been called the “social turn” in SLA.[6]Thus, while much research on language learning in the 1970s and 1980s was directed toward investigating the personalities, learning styles, and motivations of individual learners, contemporary researchers of identity are centrally concerned with the diverse social, historical, and cultural contexts in which language learning takes place, and how learners negotiate and sometimes resist the diverse positions those contexts offer them.
Further, identity theorists question the view that learners can be defined in binary terms as motivated or unmotivated, introverted or extroverted, without considering that such affective factors are frequently socially constructed in inequitable relations of power, changing across time and space, and possibly coexisting in contradictory ways within a single individual. Many scholars[7][8][9][10][11][12]cite educational theoristBonny Norton’s conceptualization of identity (Norton Peirce, 1995; Norton, 1997; Norton, 2000/2013) as foundational in language learning research. Her theorization highlights how learners participate in diverse learning contexts where they position themselves and are positioned in different ways. Drawing from poststructuralist Christine Weedon's (1987) notion of subjectivity and sociologist Pierre Bourdieu's (1991) power to impose reception, Norton demonstrated how learners construct and negotiate multiple identities through language, reframing relationships so that they may claim their position as legitimate speakers. Language and identity are often treated as fixed dictionary definitions, but many people understand the terms differently. The essays of James Baldwin offer one such perspective, treating language and identity as closely intertwined. In his essay “If Black English Isn’t a Language, Then Tell Me, What Is?”, Baldwin discussed the way he understood language and his conviction that language and identity are linked.
In the essay, Baldwin argued that language is the most crucial key to identity,[13]suggesting that we would not be who we are without language. The claim also bears on his central point about Black English, which at the time was not accorded the recognition it has today. Language, incontestably, reveals the speaker.[14]Baldwin consistently stressed that the way one uses language reveals who the speaker is, which shows how important it is for a person to embrace their language so that their identity can be seen positively by others. His linking of language and identity invites readers to reflect on how language has helped form their own identities. Since Norton's conception of identity in the 1990s, it has become a central construct in language learning research foregrounded by scholars such as David Block, Aneta Pavlenko, Kelleen Toohey, Margaret Early, Peter De Costa and Christina Higgins. A number of researchers have explored how identity categories of race, gender, class and sexual orientation may impact the language learning process. Identity now features in most encyclopedias and handbooks of language learning and teaching, and work has extended to the broader field of applied linguistics to include identity and pragmatics, sociolinguistics, and discourse. In 2015, the theme of the American Association of Applied Linguistics (AAAL) conference held in Toronto was identity, and the journalAnnual Review of Applied Linguisticsin the same year focused on issues of identity, with prominent scholars discussing the construct in relation to a number of topics. These included translanguaging (Angela Creese and Adrian Blackledge), transnationalism and multilingualism (Patricia Duff), technology (Steven Thorne), and migration (Ruth Wodak). Closely linked to identity is Norton's construct ofinvestment, which complements theories ofmotivationin SLA.
Norton argues that a learner may be a highly motivated language learner, but may nevertheless have little investment in the language practices of a given classroom or community, which may, for example, be racist, sexist, elitist, or homophobic. Thus, while motivation can be seen as a primarily psychological construct,[15]investment is framed within a sociological framework and seeks to make a meaningful connection between a learner’s desire and commitment to learn a language, and their complex identity. The construct of investment has sparked considerable interest and research in the field.[16][17][18][19][20][21][22]Darvin and Norton's (2015) model of investment in language learning locates investment at the intersection of identity, capital, and ideology. Responding to conditions of mobility and fluidity that characterize the 21st century, the model highlights how learners are able to move across online and offline spaces, performing multiple identities while negotiating different forms of capital.[23] An extension of interest in identity and investment concerns theimagined communitiesthat language learners may aspire to join when they learn a new language. The term “imagined community”, originally coined byBenedict Anderson(1991), was introduced to the language learning community by Norton (2001), who argued that in many language classrooms, the targeted community may be, to some extent, a reconstruction of past communities and historically constituted relationships, but also a community of the imagination, a desired community that offers possibilities for an enhanced range of identity options in the future.
These innovative ideas, inspired also byJean LaveandEtienne Wenger(1991) and Wenger (1998), are more fully developed in Kanno and Norton (2003), and Pavlenko and Norton (2007), and have proved generative in diverse research sites.[24][25][26][27]An imagined community assumes an imagined identity, and a learner’s investment in the second language can be understood within this context. When it comes to writing, students often become more focused on following a rubric, and that is when they lose their sense of self and identity in their writing. In college essays, students are asked to write about themselves while still following a specific rubric. Colleges still expect their students to provide proper grammar, tone, punctuation, syntax, etc., but there is no way of sensing a student's individuality through that, making it counterproductive (Davila 163). This tends to be perpetuated in writing classes whose rubrics mostly consist of what is known as “white talk” (Davila 158). While some students have grown up speaking and even writing this way, many have not; they do not identify with it and are then held to a standard they do not understand. According to Bethany Davila, “[English] then, is a standard language variety that is associated with and defined by white people and that affords unearned racial privilege all while seeming like commonsense or a social norm” (Davila 155). Students who come from differing backgrounds are put at a disadvantage and struggle to write or even connect with the material being presented to them. This type of change begins in the classroom. Students learn best from each other, which is why classroom discourse allows students to question their own identities and beliefs. In the text,Exploring Values in a Changing Society: A Writing Assignment for Freshman EnglishMartha K.
Smith mentions how, when students utilize “their own life experiences, they seem able to find the voices to engage in critical self-analysis” (Smith 3). This is why teachers have been able to create new assignments that allow students to self-reflect on their values, religious beliefs, cultural beliefs, and more (Smith 2). When students find their voices they are able to better critically analyze their own experiences (Smith 3). By doing this, students are able to exercise different parts of their writing identities and are learning different skills that will help them outside of the classroom as well. In the text,Re-examining Constructions of Basic Writers’ Identities: Graduate Teaching, New Developments in the Contextual Model, and the Future of the Disciplineby Laura Gray-Rosendale, Barbara Bird states that there are three different types of identities that students must develop: “1) autobiographical writer identity: generating personally meaningful, unique ideas, 2) discoursal identity: making clear claims and connecting evidence to claims, and 3) authorial writer identity: performing intellectual work, specifically through elaboration and critical thinking” (71) (Gray-Rosendale 93). By learning to engage these identities, students are able to practice academic writing while still preserving their sense of self and searching for how their identities impact their writing. There is now a wealth of research that explores the relationship between identity, language learning, and language teaching.[28]Themes on identity include race, gender, class, sexual orientation, and disability. Further, the award-winningJournal of Language, Identity, and Education, launched in 2002, ensures that issues of identity and language learning will remain at the forefront of research on language education, applied linguistics, and SLA in the future. Issues of identity are seen to be relevant not only to language learners, but to language teachers, teacher educators, and researchers.
There is an increasing interest in the ways in which advances in technology have impacted both language learner and teacher identity and the ways in which the forces of globalization are implicated in identity construction. Many established journals in the field welcome research on identity and language learning, including:Applied Linguistics, Critical Inquiry in Language Studies, Language Learning, Language and Education, Linguistics and Education, Modern Language Journal, andTESOL Quarterly. Block, D.(2007).Second language identities. London/New York: Continuum In this monograph, Block insightfully traces research interest in second language identities from the 1960s to the present. He draws on a wide range of social theories and brings a fresh analysis to studies of adult migrants, foreign language learners, and study-abroad students. Burck, C.(2005/7).Multilingual living. Explorations of language and subjectivity. Basingstoke, England and New York: Palgrave Macmillan. This book presents a discursive and narrative analysis of speakers' own accounts of the challenges and advantages of living in several languages at individual, family, and societal levels, which gives weight to ideas on hybridity and postmodern multiplicity. Norton, B.(2013).Identity and language learning: Extending the conversation.Bristol: Multilingual Matters. In this second edition of a highly cited study of immigrant language learners, Norton draws on poststructuralist theory to argue for a conception of the learner identity as multiple, a site of struggle, and subject to change. She also develops the construct of “investment” to better understand the relationship between language learners and the target language. The second edition includes an insightful Afterword by Claire Kramsch. Pavlenko, A. and Blackledge, A.(Eds). (2004).Negotiation of identities in multilingual contexts. Clevedon: Multilingual Matters. 
The authors in this comprehensive collection examine the ways in which identities are negotiated in diverse multilingual settings. They analyze the discourses of education, autobiography, politics, and youth culture, demonstrating the ways in which languages may be sites of resistance, empowerment, or discrimination. Toohey, K. (2000). Learning English at school: Identity, social relations, and classroom practice. Clevedon, UK: Multilingual Matters. Drawing on an exemplary ethnography of young English language learners, Toohey investigates the ways in which classroom practices are implicated in the range of identity options available to language learners. She draws on sociocultural and poststructural theory to better understand the classroom community as a site of identity negotiation. Davila, Bethany. (2017). Standard English and colorblindness in composition studies: Rhetorical constructions of racial and linguistic neutrality. WPA: Writing Program Administration 40.2, 154–173. Given, Michael; Jean A. Wagner; Leisa Belleau; Martha Smith. (2007). 'Who, me?' Four pedagogical approaches to exploring student identity through composition, literature, and rhetoric. Writing Instructor Beta 04.0. Gray-Rosendale, Laura. (1997). Everyday exigencies: Constructing student identity. In Penrod, Diane (Ed.), Miss Grundy doesn't teach here anymore: Popular culture and the composition classroom (pp. 147–159). Portsmouth, NH: Boynton/Cook.
https://en.wikipedia.org/wiki/Identity_and_language_learning
The KE family is the medical designation for a British family, about half of whom exhibit a severe speech disorder called developmental verbal dyspraxia.[1] It is the first family with a speech disorder to be investigated using genetic analyses, through which the speech impairment was found to be due to a genetic mutation, and from which the gene FOXP2, often dubbed the "language gene", was discovered. Their condition is also the first human speech and language disorder known to exhibit strict Mendelian inheritance.[2] Brought to medical attention through the family's schoolchildren in the late 1980s, the case of the KE family was taken up at the UCL Institute of Child Health in London in 1990. Initial reports suggested that the family was affected by a genetic disorder. Canadian linguist Myrna Gopnik suggested that the disorder was characterized primarily by grammatical deficiency, supporting the controversial notion of a "grammar gene". Geneticists at the University of Oxford determined that the condition was indeed genetic, with complex physical and physiological effects, and in 1998 they identified the actual gene, eventually named FOXP2. Contrary to the grammar-gene notion, FOXP2 does not control any specific grammar or language output. The discovery directly led to broader knowledge of human evolution, as the gene is directly implicated in the origin of language.[3] Two family members, a boy and a girl, were featured in the National Geographic documentary film Human Ape.[4] The individual identities of the KE family are kept confidential. The family's children attended Elizabeth Augur's special educational needs unit at the Lionel Primary School in Brentford, West London. Towards the end of the 1980s, seven children of the family attended there.[5] Augur came to learn that the family had had a speech disorder for three generations.
Of the 30 members, about half have a severe disability, some are mildly affected, and a few are unaffected.[6] Their faces show rigidity in the lower half, and most cannot pronounce a word completely. Many of them have severe stuttering and limited vocabulary. In particular, they have difficulty with consonants and omit them, producing, for example, "boon" for "spoon", "able" for "table", and "bu" for "blue". Linguistic deficiency is also noted in written language, both in reading and writing. They are characterized by lower nonverbal IQ.[7] When the first study on the KE family was published in 1990, the exact identity of the family was withheld; it was simply indicated as living in West London.[8] The first genetic study, reported in 1995, revealed that they were 30 members across four generations, with the designation "KE family".[7] In 2009, American psychologist Elena L. Grigorenko of Yale University wrote a review paper on the genetics of developmental disorders in which she specifically described a case of speech disorder in a "three-generation pedigree of Pakistani origin from the United Kingdom (referred to as KE)."[9] When a team of researchers from Germany, led by Arndt Wilcke of Leipzig University, reported in 2011 on the effects of FOXP2 mutation in the brain, they mentioned the family as "a large Pakistani family with severe speech and language disorder."[10] The British-Pakistani description of the family became widely used.[11][12][13][14] However, British geneticist and neuroscientist Simon E. Fisher at the Max Planck Institute for Psycholinguistics pointed out the error in Wilcke's paper, after which the German team published a corrigendum stating that the KE family were not of Pakistani descent, but "a large English Caucasian family."[15] Augur convinced the family to undergo medical examinations and approached geneticist Michael Baraitser at the Institute of Child Health.
With colleagues Marcus Pembrey and Jane Hurst at the Hospital for Sick Children (Great Ormond Street Hospital), they started taking blood samples for analysis in 1987. Their first report, in 1990, showed that 16 family members were affected by a severe abnormality, characterised by difficulty in speaking effectively and understanding complex sentences and an inability to learn sign language, and that the condition was genetically inherited (autosomal dominant). Their conclusion runs: Of the 16 affected children, none had significant feeding difficulties as infants and there were few neonatal problems. Hearing and intelligence of all affected members were within the normal range. The speech problem in this family has been classified as developmental verbal dyspraxia.[8] Upon the news, the BBC began preparing a documentary on the case for the scientific serial Antenna. By this time, a Canadian linguist from McGill University, Myrna Gopnik, was visiting her son in Oxford and delivered an invited lecture at the university, where she noticed the flyer for the BBC programme. She contacted the medical geneticists, interviewed KE family members, and returned to Montreal, Quebec. She was convinced that the genetic defect was largely centred on grammatical ability, and wrote letters to Nature in 1990.[16][17] Her reports promulgated the notion of a "grammar gene" and the controversial concept of a grammar-specific disorder.[18][19] Faraneh Vargha-Khadem, a neuroscientist and language expert at the Institute of Child Health, began to investigate, teaming up with linguists at the University of Oxford and the University of Reading.
In 1995, from a comparison of 13 affected and 8 unaffected control individuals, they found, contrary to Gopnik's hypothesis, that the genetic disorder was a complex impairment not only of linguistic ability but also of intellectual and anatomical features, thereby disproving the "grammar gene" notion.[7] Using positron emission tomography (PET) and magnetic resonance imaging (MRI), they found that, compared with people without the condition, some brain regions in the KE family members were underactive (relative to baseline levels) and some were overactive. The underactive regions included motor neurons that control the face and mouth regions. The overactive areas included Broca's area, the speech centre.[20] With Oxford geneticists Kate Watkins, Simon Fisher and Anthony Monaco, they identified the exact location of the gene on the long arm of chromosome 7 (7q31) in 1998.[21] The chromosomal region (locus) was named SPCH1 (for speech-and-language-disorder-1), and it contains 70 genes.[22] Using the known gene location of a speech disorder in a boy, designated CS, from an unrelated family, they discovered in 2001 that the main gene responsible for the speech impairment in both the KE family and CS was FOXP2.[23] Mutations in the gene result in speech and language problems.[24][25][26]
https://en.wikipedia.org/wiki/KE_family
Language attrition is the process of decreasing proficiency in or losing a language. For first or native language attrition, this process is generally caused by both isolation from speakers of the first language ("L1") and the acquisition and use of a second language ("L2"), which interferes with the correct production and comprehension of the first. Such interference from a second language is likely experienced to some extent by all bilinguals, but is most evident among speakers for whom a language other than their first has started to play an important, if not dominant, role in everyday life; these speakers are more likely to experience language attrition.[1] It is common among immigrants who move to countries where languages foreign to them are used. Second language attrition can result from poor learning, practice, and retention of the language after time has passed since learning. This often occurs with bilingual speakers who do not frequently engage with their L2. Several factors affect language attrition. Frequent exposure to and use of a particular language is often assumed to be adequate to keep the native language system intact. However, research has often failed to confirm this prediction.[2] A person's age can predict the likelihood of attrition; children are demonstrably more likely to lose their first language than adults.[3][4][5] The process of learning a language and the methods used to teach it can also affect attrition.[6] A positive attitude towards the potentially attriting language or its speech community and motivation to retain the language are other factors that may reduce attrition, although these factors are difficult to confirm through research.[7] These factors are similar to those that affect second-language acquisition, and the two processes are sometimes compared. However, the overall impact of these factors is far less than that for second language acquisition. Language attrition results in a decrease of language proficiency.
The current consensus is that it manifests itself first and most noticeably in speakers' vocabulary (in their lexical access and their mental lexicon),[8][9] while grammatical and especially phonological representations appear more stable among speakers who emigrated after puberty.[10] The study of language attrition became a subfield of linguistics with a 1980 conference at the University of Pennsylvania called "Loss of Language Skills".[11] The aim of the conference was to discuss areas of second language attrition and ideas for possible future research. The conference revealed that attrition is a broad topic, involving numerous factors and taking many forms. Decades later, the field of first language attrition gained new momentum with two conferences held in Amsterdam in 2002 and 2005, as well as a series of graduate workshops and panels at international conferences, such as the International Symposium on Bilingualism (2007, 2009), the annual conferences of the European Second Language Association, and the AILA World Congress (2008). The outcomes of some of these meetings were later published in edited volumes.[12][1] The term first language attrition (FLA) refers to the gradual decline in native language proficiency. As speakers use their L2 frequently and become proficient (or even dominant) in it, some aspects of the L1 can deteriorate or become subject to L2 influence. Research on L2 attrition is comparatively sparse, as most research has focused on L1 attrition; only during the 1970s and early 1980s did research on L2 attrition and memory start to appear.
However, there are many overlaps between L1 attrition and L2 attrition.[6] To study the process of language attrition, researchers initially looked at neighboring areas of linguistics to identify which parts of the L1 system attrite first; lacking years of direct experimental data, linguists studied language contact, creolization, L2 acquisition, and aphasia, and applied their findings to language attrition.[12] Language loss caused by aging, brain injuries, or neurological disorders is not considered part of language attrition.[6] One issue faced when researching attrition is distinguishing between normal L2 influence on the L1 and actual attrition of the L1. Since all bilinguals experience some degree of cross-linguistic influence (CLI), in which the L2 interferes with the retrieval of the speaker's L1, it is difficult to determine whether delays and/or mistakes in the L1 are due to attrition or caused by CLI.[13] Also, simultaneous bilinguals may not have any language that is indistinguishable from a native speaker's; their knowledge of either language may be less extensive than a native speaker's, making it difficult to test for attrition.[9] L1 attrition is the partial or complete loss of one's first, often native, language. It can result from immigration to an L2-dominant region, daily activities in L2-dominant environments, or motivational factors. L2 attrition is the loss of one's second language, which can result from cross-interference from the L1 or even from an additional, third learned language ("L3"). Unlike L1 learning and attrition, L2 learning and attrition are not linear phenomena and can begin in multiple ways: vocabulary loss, weakened syntax, simplified phonetic rules, etc.[6] In Hansen and Reetz-Kurashige (1999), Hansen cites her own research on L2 Hindi and Urdu attrition in young children.
As young pre-school children in India and Pakistan, the subjects of her study were often judged to be native speakers of Hindi or Urdu; their mother was far less proficient. On return visits to their home country, the United States, both children appeared to lose all their L2 while the mother noticed no decline in her own L2 abilities. Twenty years later, those same young children as adults comprehend not a word from recordings of their own animated conversations in Hindi-Urdu; the mother still understands much of them. Yamamoto (2001) found a link between age and bilinguality. In fact, a number of factors are at play in bilingual families. In her study, bicultural families that maintained only one language, the minority language, in the household, were able to raise bilingual, bicultural children without fail. Families that adopted the one parent – one language policy were able to raise bilingual children at first but when the children joined the dominant language school system, there was a 50% chance that children would lose their minority language abilities. In families that had more than one child, the older child was most likely to retain two languages, if it was at all possible. Younger siblings in families with more than two other brothers and sisters had little chance of maintaining or ever becoming bilingual. The first linguistic system to be affected by first language attrition is the lexicon.[14]The lexical-semantic relationship usually starts to deteriorate first and most quickly, driven by Cross Linguistic Interference (CLI) from the speaker's L2, and it is believed to be exacerbated by continued exposure to, and frequent use of, the L2.[15]Evidence for such interlanguage effects can be seen in a study by Pavlenko (2003, 2004) which shows that there was some semantic extension from the L2, which was English, into the L1 Russian speakers' lexicons. 
To test for lexical attrition, researchers have used tasks such as picture naming, in which a picture of an item is placed in front of the participant, who is asked to name it, or have measured lexical diversity in the speaker's spontaneous speech (speech that is unprompted and improvised). In both cases, attriters performed worse than non-attriters.[8][16][17][18] One hypothesis suggests that when a speaker tries to access a lexical item from their L1, they are also competing with the translation equivalents of their L2, and that there is either a problem with activating the L1 due to infrequent use or with the inhibition of the competing L2.[15] Grammatical attrition can be defined as "the disintegration of the structure of a first language (L1) in contact situations with a second language (L2)".[19] In a study of bilingual Swedes raised outside of Sweden who, in their late twenties, returned to their home country for schooling, the participants demonstrated both language attrition and complete retention of the underlying syntactic structure of their L1. Notably, they exhibited the V2 (verb-second) word order present in most Germanic languages other than English. This rule requires the tense-marked verb of a main clause to occur in the second position of the sentence, even if that means it comes before the subject (e.g. when an adverb begins the sentence). These speakers' ability to form sentences with V2 word order was compared against that of L2 learners, who often overproduce the rigid SVO word order rather than applying the V2 rule. Although the study did not show evidence for attrition of the syntax of the participants' L1, there was evidence for attrition in the expatriates' morphology, especially in terms of agreement. They found that the bilinguals would choose the unmarked morphemes in place of the marked ones when having to differentiate between gender and plurality, and that they tended to overgeneralize where certain morphemes can be used.
For example, they may use the suffix /-a/, which is used to express an indefinite plural, and overextend this morpheme to also represent the indefinite singular.[20]There is little evidence to support the view that there is a complete restructuring of the language systems. That is, even under language attrition the syntax is largely unaffected and any variability observed is thought to be due to interference from another language, rather than attrition.[21][22] L1 attriters, like L2 learners, may use language differently from native speakers. In particular, they can have variability on certain rules which native speakers apply deterministically.[23][21]In the context of attrition, however, there is strong evidence that this optionality is not indicative of any underlying representational deficits: the same individuals do not appear to encounter recurring problems with the same kinds of grammatical phenomena in different speech situations or on different tasks.[10]This suggests that problems of L1 attriters are due to momentary conflicts between the two linguistic systems and not indicative of a structural change to underlying linguistic knowledge (that is, to an emerging representational deficit of any kind). This assumption is in line with a range of investigations of L1 attrition which argue that this process may affect interface phenomena (e.g. the distribution of overt and null subjects in pro-drop languages) but will not touch the narrow syntax.[21][24][25] Phonological attrition is a form of language loss that affects the speaker's ability to produce their native language with their native accent. 
A study of five native speakers of American English who moved to Brazil and learned Portuguese as their L2 demonstrates the possibility that one could lose one's L1 accent and acquire in its place an accent directly influenced by the L2.[26] It is thought that phonological loss can occur in those who are closer to native-like fluency in the L2, especially in terms of phonological production, and in those who have immersed themselves in and built a connection to the culture of the country of the L2.[citation needed] A sociolinguistic interpretation of this phenomenon is that the acquisition of a native-like L2 accent, and the subsequent loss of one's native accent, is influenced by the societal norms of the country and by the speakers' attempt to adapt in order to feel part of the culture they are trying to assimilate into.[27] This type of attrition is not to be confused with contact-induced change, since that would mean speech production changing due to increased use of another language, not due to less frequent use of the L1.[28] Lambert and Moore[29] attempted to define numerous hypotheses regarding the nature of language loss, crossed with various aspects of language. They envisioned a test to be given to American State Department employees that would include four linguistic categories (syntax, morphology, lexicon, and phonology) and three skill areas (reading, listening, and speaking). A translation component would feature in a sub-section of each skill area tested. The test was to include the linguistic features that, according to teachers, are the most difficult for students to master. Such a test may confound testing what was not acquired with what was lost. Lambert, in personal communication with Köpke and Schmid,[4] described the results as 'not substantial enough to help much in the development of the new field of language skill attrition'.
The use of translation tests to study language loss is problematic for a number of reasons: it is questionable what such tests measure; there is too much variation; the difference between attriters and bilinguals is complex; and activating two languages at once may cause interference. Yoshitomi[30] attempted to define a model of language attrition related to neurological and psychological aspects of language learning and unlearning, discussing four possible hypotheses and five key aspects related to acquisition and attrition. According to Yoshitomi,[30] the five key aspects related to attrition are neuroplasticity, consolidation, permastore/savings, decreased accessibility, and receptive versus productive abilities. Given that exposure to an L2 at a younger age typically leads to stronger attrition of the L1 than L2 exposure at later ages, there may be a relationship between language attrition and the critical period hypothesis. The critical period hypothesis for language claims that there is an optimal time period for humans to acquire language, and that after this time language acquisition is more difficult (though not impossible). Language attrition also seems to have a time period; before around age 12, a first language is most susceptible to attrition if there is reduced exposure to that language.[3][5][35] Research shows that the complete attrition of a language occurs before the critical period ends.[4] All available evidence on the age effect for L1 attrition therefore indicates that the development of susceptibility displays a curved, not a linear, function. This suggests that in native language learning there is indeed a critical period effect, and that full development of native language capacities requires exposure to L1 input for the entire duration of this critical period.
The regression hypothesis, first formulated by Roman Jakobson in 1941 on the basis of the phonology of Slavic languages,[36] goes back to the beginnings of psychology and psychoanalysis. It states that what was learned first will be retained last, both in 'normal' processes of forgetting and in pathological conditions such as aphasia or dementia.[36] As a template for language attrition, the regression hypothesis has long seemed an attractive paradigm. However, regression is not in itself a theoretical or explanatory framework.[36][37] Both the order of acquisition and the order of attrition need to be put into the larger context of linguistic theory in order to gain explanatory adequacy.[38] Keijzer (2007) conducted a study on the attrition of Dutch in Anglophone Canada. She found some evidence that later-learned rules, such as diminutive and plural formation, do indeed erode before earlier-learned grammatical rules.[37] However, there is also considerable interaction between the first and second language, so a straightforward 'regression pattern' cannot be observed.[37] Also, parallels in noun and verb phrase morphology could be present because of the nature of the tests or because of avoidance by the participants.[37] In a follow-up 2010 article, Keijzer suggests that the regression hypothesis may be more applicable to morphology than to syntax.[38] Citing the studies on the regression hypothesis that have been done, Yukawa[33] says that the results have been contradictory. It is possible that attrition is a case-by-case situation depending on a number of variables (age, proficiency, literacy, the similarities between the L1 and L2, and whether the L1 or the L2 is attriting).
The threshold hypothesis, proposed by Jim Cummins in 1979 and expanded on since then, claims that there are language fluency thresholds that one must reach in both one's L1 and L2 in order for bilingualism to function properly and be beneficial to the individual.[39] To maintain a low threshold, regular vocabulary and grammar usage is needed. Otherwise, an L2 that has fallen into disuse will have a higher threshold for each language item, requiring a greater number of neural impulses to activate that item's representation in the brain. Items that are used regularly require fewer neural impulses to trigger their representation in the brain, making that language more stable and less susceptible to attrition. Under this hypothesis, language attrition is believed to affect lexical words first and then grammar rules, rather than grammar rules eroding first as in the regression hypothesis. Recalling a word also requires a higher activation threshold than recognizing it, and recognition alone does not indicate fluency.[6] Children are more susceptible to (first) language attrition than adults.[3][4][5] Research shows an age effect around the ages of 8 through 13.[5] Before this time period, a first language can attrite under certain circumstances, the most prominent being a sudden decline in exposure to the first language. Various case studies show that children who emigrate before puberty and have little to no exposure to their first language end up losing it.
In 2009, a study compared two groups of Swedish speakers: native Swedish speakers and Korean international adoptees who were at risk of losing their Korean.[3][35] Of the Korean adoptees, those who were adopted earliest had essentially lost their Korean, while those adopted later still retained some of it, although it was primarily their comprehension of Korean that was spared.[35] A 2007 study looked at Korean adoptees in France and found that they performed on a par with native French speakers both in French proficiency and in Korean.[40] Attrition of a first language does not guarantee an advantage in learning a second language.[35] Attriters are outperformed by native speakers of the second language in proficiency.[35] A 2009 study tested the Swedish proficiency of Swedish speakers who had attrited knowledge of Spanish. These participants showed almost, but not quite, native-like proficiency when compared with native Swedish speakers, and they did not show an advantage when compared with bilingual Swedish-Spanish speakers.[35] On the other hand, L1 attrition may also occur if the overall effort to maintain the first language is insufficient while the speaker is exposed to a dominant L2 environment. Another recent investigation, focusing on the development of language in late bilinguals (i.e. adults past puberty), claims that maintenance of the mother tongue in an L1 environment requires little to no effort for individuals, whereas those in an L2 environment face the additive requirement of maintaining the L1 while developing the L2 (Opitz, 2013).[41] There have been cases in which adults have undergone first language attrition. A 2011 study tested adult monolingual English speakers, adult monolingual Russian speakers, and adult bilingual English-Russian speakers on naming various liquid containers (cup, glass, mug, etc.)
in both English and Russian.[42] The results showed that the bilinguals had attrited Russian vocabulary, in that they did not label these liquid containers in the same way as the monolingual Russian speakers. When grouped according to age of acquisition (AoA) of English, the bilinguals showed an effect of AoA (or perhaps of the length of exposure to the L2): bilinguals with earlier AoA (mean AoA 3.4 years) exhibited much stronger attrition than bilinguals with later AoA (mean AoA 22.8 years). That is, the individuals with earlier AoA differed more from monolingual Russian speakers in their labeling and categorization of drinking vessels than did those with later AoA. However, even the late-AoA bilinguals exhibited some degree of attrition, in that they labeled the drinking vessels differently from native monolingual Russian-speaking adults. There are few principled and systematic investigations of FLA specifically investigating the impact of AoA. However, converging evidence suggests an age effect on FLA that is much stronger and more clearly delineated than the effects found in SLA research. Two studies that consider prepuberty and postpuberty migrants (Ammerlaan, 1996, AoA 0–29 yrs; Pelc, 2001, AoA 8–32 yrs) find that AoA is one of the most important predictors of ultimate proficiency, while a number of studies that investigate the impact of age among postpuberty migrants fail to find any effect at all (Köpke, 1999, AoA 14–36 yrs; Schmid, 2002, AoA 12–29 yrs; Schmid, 2007, AoA 17–51 yrs). A range of studies conducted by Montrul on Spanish heritage speakers in the US, as well as on Spanish-English bilinguals with varying levels of AoA, also suggests that the L1 system of early bilinguals may be similar to that of L2 speakers, while later learners pattern with monolinguals in their L1 (e.g. Montrul, 2008; Montrul, 2009).
These findings therefore indicate strongly that early (prepuberty) and late (postpuberty) exposure to an L2 environment have a different impact on possible fossilization and/or deterioration of the linguistic system. Frequency of use has been shown to be an important factor in language attrition.[43]Decline in use of a given language leads to gradual loss of that language.[44][45] In the face of much evidence to the contrary, one study is often cited to suggest that frequency of use does not correlate strongly with language attrition.[46]Their methodology, however, can be called into question, especially concerning the small sample size and the reliance on self reported data.[47]The researchers themselves state that their findings may be inaccurate.[46]The overall evidence suggests that frequency of use is a strong indicator of language attrition.[43][44][45][47] Motivation could be defined as the willingness and desire to learn a second language, or, in the case of attrition, the incentive to maintain a language.[48]Motivation can be split into four categories,[49]but it is often simply split into two distinct forms: the instrumental and the integrative.[48][49]Instrumental motivation, in the case of attrition, is the desire to maintain a language in order to complete a specific goal, i.e. maintaining a language to maintain a job. Integrative motivation, however, is motivation that comes from a desire to fit in or maintain one's cultural ties.[49]These inferences can be drawn, as strategies for knowledge maintenance will, by definition, precisely oppose actions that lead to forgetting.[50] There are differences in attrition related to motivation depending on the type at hand. 
Instrumental motivation is often less potent than integrative motivation, but, given sufficient incentives, it can be equally powerful.[48] A 1972 study by Gardner and Lambert emphasized the importance of integrative motivation in particular with regard to factors relating to language acquisition and, by extension, language attrition.[51] A study published in 2021 examined what language attrition looks like neurologically by studying the EEGs (electroencephalograms) of students learning a foreign language. The study involved 26 of 30 initial participants, all native Dutch (L1) speakers who had little to no prior knowledge of Italian (L3) and were proficient in English (L2) as their second language. The experiment had all participants learn 70 non-cognate Italian words over two days, with no EEG taken. On the third day, an EEG was recorded for the entire session while participants attempted to retrieve half of their learned Italian words in English, and then took a recall test twice on all 70 learned Italian words. Incorrectness, partial correctness, and total correctness were used as the scoring guideline for these tests. This experiment tested attrition of the participants' L3 compared with their L2. When analyzing the participants' EEGs, the experimenters observed an enhanced early anterior negative deflection (N2), a peak on the EEG often observed during language switching, for items that took longer to recall in Italian. These peaks are interpreted as representing interfering responses, possibly a result of interference between English and Italian. Another peak, the late positive component (LPC), which is often interpreted as an indicator of interference, was reduced for interfered items compared with non-interfered items.
Lastly, theta bands on an EEG, which have previously been associated with semantic interference and active retrieval efforts, appeared more prominently when participants were asked to recognize words that they had retrieved in both English and Italian. While these results must be studied further, they give clues to what occurs in the brain during language interference and how that affects attrition of a foreign language.[52] The factors above all affect the likelihood of language attrition in individuals, but an additional factor is the method of language learning. Strategies in the classroom and in any other learning environment therefore become an important part of preventing language attrition. Many researchers believe that language production skills, specifically writing and speaking, are significantly more susceptible to attrition than receptive skills such as listening and reading. Under this view, one method of prevention is to focus on literacy and receptive learning in the classroom rather than teaching students primarily to speak and write; this protects against attrition by solidifying receptive skills. Another method is to assign homework and practice that is not mechanical but engaging and opportunistic, making the greatest use of high-frequency items. Rote repetition and the learning of low-frequency patterns and items are more susceptible to attrition, as students cannot practice them as opportunities arise; the language is then not learned in a meaningful way that reinforces cognitive understanding. Conversational-style homework and classroom settings, along with a focus on receptive skills, could make one's fluency less susceptible to attrition. A further potential method of prevention is to alter the duration of instruction for a new language.
According to Bardovi-Harlig and Stringer,[53] a few months of intensive, engaging learning may do more to prevent attrition than years of traditional, mechanical learning. However, the initial stage of learning is argued to be important regardless of the duration of instruction.[6]
https://en.wikipedia.org/wiki/Language_attrition
Language transfer is the application of linguistic features from one language to another by a bilingual or multilingual speaker. Language transfer may occur across both languages in the acquisition of a simultaneous bilingual. It may also occur from a mature speaker's first language (L1) to a second language (L2) they are acquiring, or from an L2 back to the L1.[1] Language transfer (also known as L1 interference, linguistic interference, and crosslinguistic influence) is most commonly discussed in the context of English language learning and teaching, but it can occur in any situation when someone does not have a native-level command of a language, as when translating into a second language. Language transfer is also a common topic in bilingual child language acquisition, as it occurs frequently in bilingual children, especially when one language is dominant.[2] When the relevant unit or structure of both languages is the same, linguistic interference can result in correct language production called positive transfer: here, the "correct" meaning is in line with most native speakers' notions of acceptability.[3] An example is the use of cognates. However, language interference is most often discussed as a source of errors known as negative transfer, which can occur when speakers and writers transfer items and structures that are not the same in both languages. Within the theory of contrastive analysis, the systematic study of a pair of languages with a view to identifying their structural differences and similarities, the greater the differences between the two languages, the more negative transfer can be expected.[4] For example, in English, a preposition is used before a day of the week: "I'm going to the beach on Friday." In Spanish, the definite article is used instead of a preposition: "Voy a la playa el viernes." Novice Spanish students who are native English speakers may produce a transfer error and use a preposition when it is not necessary because of their reliance on English.
According to Whitley, it is natural for students to make such errors based on how the English words are used.[5] Another typical example of negative transfer concerns German students trying to learn English, even though German and English belong to the same Germanic language family. Since the German noun "Information" can also be used in the plural – "Informationen" – German students will almost invariably use "informations" in English too, which breaks the rules of uncountable nouns.[6] From a more general standpoint, Brown mentions that "all new learning involves transfer based on previous learning".[7] That could also explain why initial learning of the L1 will impact L2 acquisition.[8] The results of positive transfer go largely unnoticed and so are less often discussed. Nonetheless, such results can have an observable effect. Generally speaking, the more similar the two languages are and the more the learner is aware of the relation between them, the more positive transfer will occur. For example, an Anglophone learner of German may correctly guess an item of German vocabulary from its English counterpart, but word order, phonetics, connotations, collocation, and other language features are more likely to differ. That is why such an approach has the disadvantage of making the learner more subject to the influence of "false friends", words that seem similar between languages but differ significantly in meaning. This influence is especially common among learners who misjudge the relation between languages or mainly rely on visual learning.[9] In addition to positive transfer potentially resulting in correct language production and negative transfer resulting in errors, there is some evidence that any transfer from the first language can result in a kind of technical, or analytical, advantage over native (monolingual) speakers of a language.
For example, L2 speakers of English whose first language is Korean have been found to be more accurate in their perception of unreleased stops in English than native English speakers who are functionally monolingual, because unreleased stops have a different status in Korean than in English.[10] That "native-language transfer benefit" appears to depend on an alignment of properties in the first and second languages that favors the linguistic biases of the first language, rather than simply on the perceived similarities between the two languages. Language transfer may be conscious or unconscious.[11] Consciously, learners or unskilled translators may sometimes guess when producing speech or text in a second language because they have not learned or have forgotten its proper usage. Unconsciously, they may not realize that the structures and internal rules of the languages in question are different. Such users could also be aware of both the structures and internal rules, yet be insufficiently skilled to put them into practice, and consequently often fall back on their first language. The unconscious aspect of language transfer can be demonstrated in the case of the so-called "transfer-to-nowhere" principle put forward by Eric Kellerman, which addressed language based on its conceptual organization instead of its syntactic features.
Here, language determines how the speaker conceptualizes experience, with the principle describing the process as an unconscious assumption that is subject to between-language variation.[12] Kellerman explained that it is difficult for learners to acquire the construal patterns of a new language because "learners may not look for the perspectives peculiar to the [target/L2] language; instead they may seek the linguistic tools which will permit them to maintain their L1 perspective."[13] The conscious transfer of language, on the other hand, can be illustrated by the principle developed by Roger Andersen called "transfer-to-somewhere," which holds that "a language structure will be susceptible to transfer only if it is compatible with natural acquisitional principles or is perceived to have a similar counterpart (a somewhere to transfer to) in the recipient language."[14] This is interpreted as a heuristic designed to make sense of the target-language input by assuming a form of awareness on the part of the learner that maps the L1 onto the L2.[15] An analogy that can describe the difference between Kellerman's and Andersen's principles is that the former is concerned with the conceptualization that fuels the drive towards discovering the means of linguistic expression, whereas the latter is concerned with the acquisition of those means.[15] The theories of acceleration and deceleration are bilingual child language acquisition theories based on the known norms of monolingual acquisition. These theories come from comparisons of bilingual children's acquisition with that of their monolingual peers of similar backgrounds.
Acceleration is a process similar to that of bootstrapping, where a child acquiring a language uses knowledge and skills from one language to aid in, and speed up, the acquisition of the other language.[16] Deceleration is a process in which a child experiences negative effects (more mistakes and slower language learning) on their language acquisition due to interference from their other language. Language transfer is often referred to as cross-language transfer: the ability to use skills acquired in one language to facilitate the learning of a new language.[17] Cross-language transfer has been researched and analyzed by many scholars over the years, but the focus on cross-language transfer in literacy research expanded in the 1990s.[18] It is a topic that has been gaining considerable interest from scholars due to the increasing number of bilingual and multilingual people, especially students, around the world. In the US alone, English Language Learners (ELLs) account for over 10% of the students enrolled in public schools.[19] The linguistic interdependence hypothesis claims that language transfer can occur from the L1 (first language) to the L2 (second language), but that there must first be a level of proficiency in L1 literacy skills for those skills to transfer to the L2.[20] In other words, there must be some prior knowledge of literacy skills in the L1 to assist with acquiring literacy skills in the L2.
The acquisition of L2 literacy skills can be facilitated and gained with greater ease when the learner has more time, access, and experience with L1 literacy skills.[21] Over time, through formal exposure to and practice with literacy skills, L2 learners have been able to catch up with their monolingual peers.[22] However, literacy skills acquired in the L2 can also be used to assist with literacy skills in the L1, because cross-language transfer is bidirectional.[23] Most studies have indicated that literacy cross-language transfer can occur regardless of the L1 and L2 languages, but Chung et al. (2012)[24] state that cross-language transfer is less likely to occur when the languages do not share similar orthographic systems. For example, literacy skills acquired in English may be accessed and used with more ease in Spanish because English and Spanish have similar orthographies (both use letters), whereas using literacy skills acquired in English to facilitate the learning of Korean would be more difficult, because those languages do not share a similar orthographic system (English uses the English alphabet, and Korean uses the Korean alphabet). Cross-language transfer can also occur in deaf bilinguals who use sign language and read written words.[25] People may think that American Sign Language (ASL) and English are the same language, but they are not. According to the National Institute on Deafness and Other Communication Disorders, "ASL is a language completely separate and distinct from English. It contains all the fundamental features of language, with its own rules for pronunciation, word formation, and word order".[26] Because sign languages are considered languages in their own right, most deaf people are considered bilingual: they communicate in one language (a sign language) and read in another (English, Spanish, Arabic, etc.). It should also be noted that not all sign languages are the same.
Sign languages include American Sign Language (ASL), Mexican Sign Language (LSM), British Sign Language (BSL), Spanish Sign Language (LSE), and many more. Transfer can also occur in polyglot individuals when comprehending verbal utterances or written language. For instance, German and English both have relative clauses with a noun-noun-verb (= NNV) order, but these are interpreted differently in the two languages. German example: Das Mädchen, das die Frau küsst, ist blond. If translated word for word with the word order maintained, this German relative clause is equivalent to the English example: The girl that (or whom) the woman is kissing is blonde. The German and English examples differ in that in German the subject role can be taken by das Mädchen (the girl) or die Frau (the woman), while in the English example only the second noun phrase (the woman) can be the subject. In short, because German singular feminine and neuter articles exhibit the same inflected form for the accusative as for the nominative case, the German example is syntactically ambiguous in that either the girl or the woman may be doing the kissing. In the English example, both word-order rules and the test of substituting a relative pronoun with different nominative and accusative case markings (e.g., whom/who*) reveal that only the woman can be doing the kissing. The ambiguity of the German NNV relative clause structure becomes obvious in cases where the assignment of the subject and object roles is disambiguated. This can be because of case marking if one of the nouns is grammatically masculine, as in Der Mann, den die Frau küsst... (The man that the woman is kissing...) vs. Der Mann, der die Frau küsst... (The man that is kissing the woman...), because in German the masculine definite article marks the accusative case. The syntactic ambiguity of the German example also becomes obvious in the case of semantic disambiguation. For instance, in Das Eis, das die Frau isst... (The ice cream that the woman is eating...)
and Die Frau, die das Eis isst... (The woman that is eating the ice cream...), only die Frau (the woman) is a plausible subject. Because in English relative clauses with a noun-noun-verb structure (as in the example above) the first noun can only be the object, native speakers of English who speak German as a second language are likelier to interpret ambiguous German NNV relative clauses as object relative clauses (= object-subject-verb order) than German native speakers, who prefer an interpretation in which the first noun phrase is the subject (subject-object-verb order).[27] This is because they have transferred their parsing preference from their first language, English, to their second language, German. With sustained or intense contact between native and non-native speakers, the results of language transfer in the non-native speakers can extend to and affect the speech production of the native-speaking community. For example, in North America, speakers of English whose first language is Spanish or French may have a certain influence on native English speakers' use of language when the native speakers are in the minority. Locations where this phenomenon occurs frequently include Québec, Canada, and predominantly Spanish-speaking regions in the US. For details on the latter, see the map of the Hispanophone world and the list of U.S. communities with Hispanic majority populations. The process of translation can also lead to the so-called hybrid text, which is the mixing of language either at the level of linguistic codes or at the level of cultural or historical references.[28]
https://en.wikipedia.org/wiki/Language_transfer
A child speech corpus is a speech corpus documenting first-language acquisition. Such databases are used in the development of computer-assisted language learning systems and in the characterization of children's speech at different ages.[1] Children's speech varies not only by language but also by region within a language. It can also differ for specific groups such as autistic children, especially when emotion is considered. Thus, different databases are needed for different populations. Corpora are available for American and British English as well as for many other European languages.[1][2][3] In the table below, the age range may be described in terms of school grades. "K" denotes "kindergarten" while "G" denotes "grade". For example, an age range of "K - G10" refers to speakers ranging from kindergarten age to grade 10. This table is based on a paper from the Interspeech conference, 2016.[4] This online article is intended to provide an interactive table for readers and a place where information about children's speech corpora can be updated continuously by the speech research community.
https://en.wikipedia.org/wiki/List_of_children%27s_speech_corpora
Below are some notable researchers in language acquisition, listed by intellectual orientation and research topic: Nativists; Empiricists; Generative Language Acquisition; Second language acquisition researchers; Complex Dynamic Systems Theory approach.
https://en.wikipedia.org/wiki/List_of_language_acquisition_researchers
Metalinguistic awareness, also known as metalinguistic ability, refers to the ability to consciously reflect on the nature of language and to use metalanguage to describe it. The concept of metalinguistic awareness is helpful in explaining the execution and transfer of linguistic knowledge across languages (e.g., code-switching as well as translation among bilinguals). Metalinguistics expresses itself in ways such as: Metalinguistic awareness is therefore distinct from engaging in normal language operations; it concerns the process of language use and the exercise of the relevant control. Currently, the most commonly held conception of metalinguistic awareness suggests that its development is constituted by cognitive control (i.e., selecting and coordinating the relevant pieces of information needed to comprehend the language manipulation) and analysed knowledge (i.e., recognising the meaning and structure of the manipulated language).[1] There are a number of explanations as to where metalinguistic abilities may come from.
One such explanation holds that metalinguistic ability develops in tandem with language acquisition, specifically spoken language.[2] The development of mechanisms that allow an individual to detect errors as they speak is, on this account, a manifestation of metalinguistic ability.[2] Another possible account suggests that metalinguistic awareness and metalinguistic ability are distinct from other sorts of linguistic development, with these metalinguistic skills entirely separate from the development and acquisition of basic speaking and listening skills.[2] On this account, metalinguistic abilities necessarily differ from linguistic proficiency.[2] A third possible account suggests that metalinguistic awareness results from language education in schools – on this account, it is the process of learning to read that nurtures metalinguistic ability.[2] Today, the most widely accepted notion of the development of metalinguistic awareness is a framework that suggests it is achieved through the development of two dimensions: analysed knowledge and cognitive control.[1] As opposed to knowing that is intuitive, analysed knowledge refers to "knowing that is explicit and objective".[1] Cognitive control involves "the selection and coordination of information, usually within time constraint".[1] In a given proposition, for instance a sentence with wordplay, metalinguistic awareness plays out in several steps. One has to control the selection and coordination of the relevant information in that proposition, and then analyse the information as it is represented in order to decipher it.
Bialystok and Ryan argue that achieving metalinguistic awareness is the ability to manipulate both dimensions at an arbitrarily "high" level.[1] In the study of metalinguistic ability in children, the proportional growth of these two dimensions suggests that there may be no fixed age of onset at which to trace or measure metalinguistic ability, but rather an emerging proficiency that follows increasingly difficult metalinguistic issues.[1] There are four major categories of metalinguistic awareness in which metalinguistic ability may manifest: phonological awareness, word awareness, syntactic awareness and pragmatic awareness.[2] Phonological awareness and word awareness work in tandem to allow the language user to process, understand, and utilize the constituent parts of the language being used. These forms of metalinguistic awareness are of particular relevance to the process of learning how to read. Phonological awareness may be assessed through phonemic segmentation tasks, though tests using nondigraph, nonword syllables appear to provide more accurate results.[2] Syntactic awareness is engaged when an individual performs mental operations concerning the structural aspects of language. This involves the application of inferential and pragmatic rules. It may be measured through correction tasks for sentences that contain word-order violations.[2] Pragmatic awareness refers to awareness of the relationships between sentences and their contextual/relational quality.[3] This may include the epistemic context, knowledge of the situation, or any other details surrounding the utterance. It may be measured by assessing the ability to detect inconsistencies between sentences.[2] Past research has attempted to find correlations between the attainment of metalinguistic ability and other language abilities such as literacy and bilingualism.
However, the paradigm shifted with the idea that metalinguistic ability instead had to be measured through its essential underlying skills (i.e., analysed knowledge and cognitive control). This framework – analysing an ability by comparing it with skills rather than with other abilities – came to be applied to other linguistic abilities that required similar skills. The process of learning to read depends heavily on analysed knowledge of the functions and features of reading,[4] control over the knowledge required,[5] and control over the formal aspects of the language in order to extract its meaning.[6] Research has shown that weakness in any one of these aspects is reflected in poorer literacy.[7][8] This suggests a relationship between literacy and metalinguistic awareness. Separate studies also suggest that the process of learning how to read is strongly influenced by aptitude with metalinguistic factors. In fact, older, literate children often prove to be more adept with metalinguistic skills. It has been suggested, though, that the relation may be reversed: improved metalinguistic skill may lead to an improved ability to read, rather than reading precipitating an improvement in metalinguistic ability.[2] Studies have generally supported the hypothesis that bilingual children possess greater cognitive control than their monolingual counterparts. These studies are conducted with the caveat that the monolingual and bilingual children being assessed have, as a baseline, equal competency in the languages that they speak. This would suggest that any differences in performance are due to a difference in metalinguistic ability rather than differences in linguistic proficiency.
When assessing ability, bilingual children showed an advanced awareness of the arbitrary relationship between words and their meanings, as well as between structures and meanings.[9][10][11][12][13][14] This advanced awareness could manifest in the transfer, across languages, of the idea that language is malleable.[15] Interestingly, studies seemed to show that bilingual children had higher proficiency in their metalinguistic skills than in the languages themselves.[15]
https://en.wikipedia.org/wiki/Metalinguistic_awareness
A non-native speech database is a speech database of non-native pronunciations of English. Such databases are used in the development of multilingual automatic speech recognition systems, text-to-speech systems, pronunciation trainers, and second-language learning systems.[1] The actual table with information about the different databases is shown in Table 2. In the table of non-native databases, some abbreviations for language names are used; they are listed in Table 1. Table 2 gives the following information about each corpus: the name of the corpus; the institution where the corpus can be obtained, or at least where further information should be available; the language actually spoken by the speakers; the number of speakers; the native language of the speakers; the total number of non-native utterances the corpus contains; the duration in hours of the non-native part; the date of the first public reference to the corpus; some free text highlighting special aspects of the database; and a reference to another publication. The reference in the last field is in most cases to the paper especially devoted to describing the corpus by its original collectors. In some cases it was not possible to identify such a paper; in those cases, a paper that uses the corpus is referenced instead. Some entries are left blank and others are marked as unknown. The difference is that blank entries refer to attributes whose value is simply not known, whereas unknown entries indicate that no information about the attribute is available in the database itself. As an example, the Jupiter weather database[46] gives no information about the origin of the speakers; its data would therefore be less useful for verifying accent detection or similar issues. Where possible, the name is the standard name of the corpus; for some of the smaller corpora, however, there was no established name, and an identifier had to be created.
In such cases, a combination of the institution and the collector of the database is used. In cases where a database contains both native and non-native speech, only the attributes of the non-native part of the corpus are listed. Most of the corpora are collections of read speech. If a corpus instead consists either partly or completely of spontaneous utterances, this is mentioned in the Specials column.
https://en.wikipedia.org/wiki/Non-native_speech_database
A passive speaker (also referred to as a receptive bilingual or passive bilingual) is a category of speaker who has had enough exposure to a language in childhood to have a native-like comprehension of it, but has little or no active command of it.[1] Passive fluency is often brought about by being raised in one language (which becomes the person's passive language) and being schooled in another (which becomes the person's native language).[2][3] Such speakers are especially common in language shift communities where speakers of a declining language do not acquire active competence. For example, around 10% of the Ainu people who speak the language are considered passive speakers. Passive speakers are often targeted in language revival efforts to increase the number of speakers of a language quickly, as they are likely to gain active and near-native speaking skills more quickly than those with no knowledge of the language. They are also found in areas where people grow up hearing another language outside their family without formal education in it. A more common term for the phenomenon is "passive bilingualism". François Grosjean argues that there has been a monolingual bias regarding who is considered a "bilingual", in which people who do not have equal competence in all their languages are judged as not speaking properly. "Balanced bilinguals" are, in fact, very rare. One's fluency as a bilingual in a language is domain-specific: it depends on what each language is used for.[4] This means that speakers may not admit to fluency in their passive language, although social (extralinguistic) factors underlie their different competencies.
https://en.wikipedia.org/wiki/Passive_speaker_(language)
Second-language attrition refers to the atrophy of second-language skills. It is commonly found in individuals who live in environments in which the presence of the attrited language is limited. It is common for people who have learned a foreign language to gradually forget much of the acquired language skills once the period of formal instruction is over.[1] Thus, second-language attrition refers to non-pathological declines in second-language skills.[2] Beginning in the 1970s, a new and especially young field in the area of second-language acquisition developed, one that cuts across different research areas. Language attrition, in general, is concerned with what is lost (linguistic focus), how it is lost (psycholinguistic and neurolinguistic focus) and why it is lost (sociolinguistic, sociological and anthropological focus) (Hansen 1999). For over 25 years, research has concentrated on studying the attrition of second languages. The first studies dealing with the topic of language loss or language attrition were published in the late 1970s (de Bot & Weltens 1989: 127). In 1980, the University of Pennsylvania hosted the conference "The Loss of Language Skills", and language attrition was recognized as a field within the research on second-language acquisition. Since then, various scientific research papers – mainly within America – have been published. Later, several studies in Europe – especially the Netherlands – followed. In other countries, however, language attrition research received hardly any attention (de Bot & Weltens 1995). Compared to the field of second-language acquisition, language attrition research is still relatively young, so much is still unknown. The purpose of language attrition research, in general, is to discover how, why and what is lost when a language is forgotten.
The aim of foreign- or second-language attrition research, more specifically, is to find out why, after an active learning process, language competence changes or even stops developing (Gleason 1982). Further, results from research in this area could, as Van Els and Weltens (1989) contend, contribute to the understanding of the relations between acquisition and attrition (van Els 1989). L2/FL attrition research is particularly important because it provides results for foreign-language instruction. De Bot and Weltens state, "[r]esearch on language attrition can also have a considerable impact on curriculum planning or foreign language teaching" (1995: 152). The theoretical grounding of language attrition research derives primarily from cognitive and psychological theories. Research in the area of language attrition concentrates generally on the loss of the L1 and the L2. The first distinction that can be made is between pathological and natural language attrition. The former concentrates on language loss caused by brain damage, injury, age or illness. This topic will not be investigated further here, because in these cases the attrition is not caused by natural circumstances. Weltens (1987: 24) states another possible distinction, between inter- and intragenerational language attrition. Intragenerational language attrition is concerned with attrition within individuals, whereas intergenerational language attrition concentrates on attrition across different generations. Van Els (1986) distinguishes types of attrition in terms of which language is lost and in which environment it is lost. Therefore, he classifies: It is not exactly known how different languages are stored in the mind. The researcher Vivian Cook proposes that the languages are separated into distinct compartments. This is termed the separation model. An L2 speaker will speak one of the languages, but no connection is made between them in the mind (Cook 2003: 7).
Another proposed model is the integration model, which suggests that rather than having two separate mental lexicons, an L2 speaker has one lexicon in which words from one language are stored alongside words from the other. Regarding phonology, it has been found that L2 speakers sometimes have one merged system for producing speech, not distinguished by L1 or L2. The integration model focuses on the balance between the unique elements of both languages and how they form one system. Though these two models offer different perspectives, total separation is impossible because both languages exist in the same mind, and total integration is impossible because we are able to keep the languages apart in our minds (Cook 2003: 7). Another proposed model is the link language model. This model illustrates the idea that two languages within the same mind are able to influence and interact with one another. Further, the partial integration model illustrates the idea of partial overlap between two languages in one mind. It does not differentiate between the languages in the overlap, but shows how they function as a single, conjoined system. These models illustrate the point that vocabulary, syntax, and other aspects of language knowledge can be shared or overlapped between different languages within one mind (Cook 2003: 8). Finally, all of the models together form the integration continuum, an illustration that shows the possible relationships in "multi-competence" (Cook 2003: 9). The L1 can be enhanced by the use of an L2 - Cook mentions that "extensive research into bilingual development shows overall that L2 user children have more precocious metalinguistic skills than their monolingual peers" (Cook 2003: 13). The L1 can be harmed by the use of an L2 - Cook also brings up the risk of L1 attrition from the L2: when one language is used less and less, certain abilities are lost from inactivity.
The L1 is different from the L2, without being better or worse - oftentimes, the effects of the L2 on the L1 cause no difference in language knowledge or ability. Differences will undoubtedly exist in first-language elements because of different linguistic organization. Different characteristics, such as phonological properties, show noticeable differences in a speaker transitioning from L1 to L2. For example, Cook brings up the possibility of differences in "the first language of L2 users for plosive consonants such as /p/ and /b/ or /k/ and /g/ across pairs of languages such as Spanish/English, French/English, and Hebrew/English, which are essentially undetectable in normal language use" (Cook 2003: 13). The researchers Levy, McVeigh, Marful, and Anderson studied the idea of a newly acquired language inhibiting the first, native language. They discussed how "travelers immersed in a new language often experience a surprising phenomenon: Words in their native tongue grow more difficult to recall over time" (Levy 2007: 29). They suggest that these lapses in native-language words can possibly be attributed to "an adaptive role of inhibitory control in hastening second-language acquisition" (Levy 2007: 29). First-language attrition is often worse during second-language immersion, when the native language is practiced infrequently. The attrition can be attributed to disuse of the native language and to the processes of forgetting that occur in the mind. They raise the idea that first-language attrition can be related to "retrieval-induced forgetting". This is supported by how novice foreign-language speakers immediately access native-language vocabulary for things even when the foreign word is wanted. The researchers conducted studies on retrieval-induced forgetting and examined "whether inhibitory control mechanisms resolve interference from one's native language during foreign-language production" (Levy 2007: 30).
The results of their experiments provided evidence for a role of inhibition in first-language attrition. The experiments showed that "the more often novice Spanish speakers named objects in Spanish, the worse their later production of the corresponding English names became", that "subjects who were least fluent with the Spanish vocabulary [they] test showed the largest phonological inhibition of English words", and that the inhibition effect was isolated to phonology (Levy 2007: 33). To answer the question of how second-language attrition happens, it is necessary to glance at the findings of memory research. Since its establishment by Ebbinghaus in the late 19th century, empirical research on learning has played an important role in the modern study of memory. Hermann Ebbinghaus contributed a great deal to research on memory. He made the first empirical study of the function of memory with respect to the storage and forgetting of information. His major finding was that the amount of learned knowledge depends on the amount of time invested; further, the more time passes, the more repetitions are necessary. From Ebbinghaus's findings, the first theory of forgetting was established: the decay theory. It says that when something new is learned, a memory trace is formed. If not used, this trace decays in the course of time, and through the decay of this trace, forgetting occurs (Weltens 1987). The interference theory can be seen as one of the most important theories of forgetting. It holds that prior, subsequent, or new information competes with already existing information, and forgetting therefore occurs. This inhibition can be divided into two types: retroactive inhibition, in which information acquired at a later point in time blocks information that was acquired earlier, and proactive inhibition, in which information acquired in the past interferes with new information.
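Decay theory as described above is often summarized with an exponential forgetting curve. The following sketch is purely illustrative: the exponential form, the `stability` parameter, and the assumption that each repetition doubles stability are modeling choices for exposition, not Ebbinghaus's own measured fit.

```python
import math

def retention(t_hours, stability):
    """Illustrative decay-theory model: the strength of a memory trace
    decays exponentially with time since learning. `stability` (in hours)
    is an assumed parameter controlling how quickly the trace decays."""
    return math.exp(-t_hours / stability)

def stability_after(repetitions, base=1.0):
    """Hypothetical repetition effect: each review is assumed to double
    the trace's stability, so reviewed material decays more slowly."""
    return base * (2 ** repetitions)

# Retention after one day, unreviewed vs. reviewed three times:
unreviewed = retention(24, stability_after(0))
reviewed = retention(24, stability_after(3))
```

Under these assumptions the model reproduces the two findings in the text: the unused trace decays with time, and repetition slows the decay, so more elapsed time demands more repetitions to hold retention constant.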
Hence, a blocking can occur that inhibits the acquisition of the new target item (Ecke 2004: 325). Today, the retrieval-failure hypothesis concerning the function of memory is more widely accepted and popularized (Schöpper-Grabe 1998: 237). It says that the storage of information happens on different levels; information or memory is therefore not deleted, but rather access to the relevant level is blocked, so the information is not available. Hansen quotes Loftus & Loftus (1976) to describe forgetting: "[…] much like being unable to find something that we have misplaced somewhere" (1999: 10). Cohen (1986) states that evidence that a learner is unable to "find" something is the use of so-called progressive retrieval: the learner is unable to express something that is in his mind, consequently uses an incorrect form, and only eventually remembers the correct one (Cohen 1986; Olshtain 1989). Time is considered the decisive factor for measuring how far attrition has already proceeded (de Bot & Weltens, 1995). To better understand language attrition, it is necessary to examine the various hypotheses that attempt to explain how language memory changes over time. The regression hypothesis can be named as the first established theory of language loss. Its tradition goes back further than that of any other theory. The first researcher to propose it was Ribot in 1880. Later, Freud took Ribot's idea up again and related it to aphasia (Weltens & Schmid 2004: 211). In 1940, Roman Jakobson embedded it in a linguistic framework and claimed that language attrition is the mirror image of language acquisition (Weltens & Cohen 1989: 130). Even though only a few studies have tested this hypothesis, it is quite attractive to many researchers. As Weltens and Schmid (2004: 212) state, children acquire language in stages.
It was then suggested that language competence, in general, is built up in different layers and that attrition, as the mirror image of acquisition, will therefore proceed from the top layer to the bottom. From the regression hypothesis, two similar approaches developed. Cohen conducted several studies of his own to determine "whether the last things learned are, in fact, the first things to be forgotten, and whether forgetting entails unlearning in reverse order from the original learning process" (Cohen 1975: 128). He observed the attrition of Spanish as a second language among school children during the summer vacation. Cohen's results supported the regression hypothesis and his last-learned-first-forgotten thesis: some things that are learned last are the first to be forgotten when the learner no longer receives any input in the target language. Another variation of the regression hypothesis is the best-learned-last-forgotten hypothesis, which emphasizes the intensity and quality of the acquired knowledge rather than the order in which it is learned. The better something is learned, the longer it will remain: because a language component is repeated again and again, it becomes automatized, which increases the probability that it will last in memory (Schöpper-Grabe 1998: 241). The linguistic-feature hypothesis was introduced by Andersen (Andersen 1982). He claims that second or foreign languages that differ more from the respective mother tongue than they resemble it are more likely to be forgotten than those similar to the L1. A further point is the attrition of components that are less "functional", "marked" or "frequent" compared to other elements (Weltens & Cohen 1989: 130).
This hypothesis is more differentiated and complex than the regression hypothesis because it considers aspects from first- and second-language acquisition research, language contact and aphasia research, and the survey of pidgin and creole languages (Müller 1995). By means of this hypothesis, research tries to detect which aspects of language are the first to be forgotten. To define the process of language attrition, it is necessary to consider that there are different theories as to how the stages of language attrition occur. Gardner (1982: 519–520) believes that the process of second-language attrition is divided into three points in time: the start of language learning, the end of language learning, and the moment at which attrition is measured. The period between times 1 and 2 is termed the acquisition period, and the period between times 2 and 3 the incubation period (1982: 520). Further, he states that it is not enough to consider only the time that has passed between times 2 and 3 to make statements about attrition; it is also necessary to consider the duration, relative success, and nature of the acquisition period, and the duration and content of the incubation phase (Gardner 1982: 520). The acquisition period is the time in which language learning or language experience occurs, mainly from the first to the last lesson. During the incubation period, no language training or language usage occurs, and forgetting may begin. Gardner says that once language learning is no longer active, a study of language attrition can be conducted (Gardner 1982a: 2). The forgetting curve follows the typical forgetting curve of Ebbinghaus: forgetting sets in immediately after learning, proceeds rapidly at first, and then levels off. Bahrick conducted a study in which he tested 773 persons with Spanish as their L2. His subjects had varying acquisition and incubation periods, with up to 50 years of non-active learning. He discovered heavy attrition within the first 5 years, after which retention stabilized for the next 20 years (Weltens & Cohen 1989: 130).
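Bahrick's pattern – rapid loss over roughly the first five years followed by a long stable plateau – can be sketched as exponential decay towards a retained floor. The functional form and every parameter value below are illustrative assumptions for exposition, not Bahrick's measurements:

```python
import math

def retention_with_permastore(years, floor=0.6, tau=2.0):
    """Toy model of Bahrick's findings: knowledge decays quickly at
    first but levels off at `floor`, the fraction assumed to enter
    permanent storage ("permastore"). `tau` (in years) sets how fast
    the early attrition proceeds. All values are illustrative."""
    return floor + (1 - floor) * math.exp(-years / tau)

# Heavy loss across the first 5 years, then near-constant retention:
early = [retention_with_permastore(t) for t in (0, 1, 5)]
late = [retention_with_permastore(t) for t in (10, 25, 50)]
```

In this sketch the curve drops steeply within the first few years and is essentially flat from year 10 onward, mirroring the stabilization Bahrick observed over the subsequent decades.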
According to Bahrick, the knowledge that remains after 5 years is stored in the permastore. Neisser (1984) prefers a different term, the critical threshold: a level that has to be reached, beyond which knowledge will resist decay. Contrary to these findings, Weltens & Cohen (1989: 130) report studies in which different results were found. According to those findings, the forgetting curve begins with an initial plateau, a period in which language competence is not affected at all; only then does attrition set in. Weltens explains these results by the high proficiency of the subjects (bilinguals and immersion students). However, it is still unknown whether the curve that follows this plateau is exactly like the "normal" forgetting curve of language learners with a lower proficiency level (Weltens & Cohen 1989: 130). Another phenomenon is relearning. Some studies show that, despite the end of learning and the absence of language input, residual learning can happen. Weltens (1989), who studied foreign-language learners, identified an increase in reading and listening comprehension, which he attributes to a process of maturation. Schöpper-Grabe determines that contact with the target language and its intensity cannot be the only variables causing language attrition (Schöpper-Grabe 1998). In the literature, several factors are named to explain why language competence decreases. Many researchers, however, regard the learner's level of competence as essential for attrition: it is said that the higher the level of competence, the less attrition will occur. Thus a connection to the theory of the critical threshold can be drawn. In line with this theory, conducted studies suggest that the higher the learner's level of competence at the end of the acquisition period, the less will be lost.
Therefore, the duration, success, and intensity of the language instruction, or of language input in general, are vitally important. Weltens (1987) divides the factors influencing language attrition into three categories: characteristics of the acquisition process (method of instruction, length of exposure, proficiency before attrition, relationship between the L1 and the FL), characteristics of the attrition period ('post exposure' and length of the attrition period), and learner characteristics. Sociopsychological factors also play a role, such as the attitude towards the target language and culture, together with the motivation for acquiring the language. Factors situated in the language environment should be considered as well; the status and prestige of the language are meaningful, too. Another frequently occurring factor is age, a variable that seems quite important, especially when observing language attrition in children. Even though children are regarded as the better foreign-language learners, their cognitive development is less advanced than that of adults. Further, they usually have not yet learned to read or write in any language, least of all in the second language; their literacy skills in the L2 are therefore very limited, if present at all. Cohen (1989) conducted a study observing young children and found that the attrition in an 8-year-old boy was stronger than that in his 12-year-old sister. Tomiyama suggested, on the basis of her findings, that such children might not lose their knowledge of the L2 completely; rather, access to that information is blocked and may vanish as time passes. At the beginning of the 1980s another, thus far unnoticed, factor was introduced into the research field: socioaffective factors such as attitude, orientation, and motivation were now taken into account. Gardner accordingly established a socio-educational model of language acquisition.
In this model, motivation and attitude influence the effort an individual invests in maintaining their language competence. Further, individuals who have positive attitudes towards the target language seek out possibilities and opportunities during the incubation period to retain their language competence (Gardner 1987: 521). However, the factor of motivation is hardly considered when examining language attrition. Especially during the last 10–15 years it has become more and more acknowledged in the field of language acquisition rather than attrition; only Gardner considered motivation as a possible factor influencing attrition. Even today it is hardly recognised as an influencing factor, and therefore only a few studies exist on motivation and its effects. Feuerhake (2004: 7) reports that the studies that have been conducted show all four competence areas to be affected, though some seem more likely to be affected than others; e.g. grammar and lexical knowledge are more likely to undergo a strong attrition process. As for loss of speaking competence, the first evidence is that speech tempo decreases; longer and more frequent speech pauses, from which fluency suffers, are observable as well (Gardner 1987). Olshtain (1986) observed "[…] reduced accessibility in vocabulary retrieval in all situations of attrition where there is a reduction of language loss over longer periods of time" (1986: 163). Further, gaps in grammatical knowledge, especially in the tenses and conjugation of verbs, occur quite frequently. Nevertheless, it can be said that productive skills are more affected than receptive ones, which mainly remain stable (Cohen 1989), and if the learner already shows signs of language attrition, it is more likely that transfer from the L1 will happen (Berman & Olshtain 1983). In his studies, Cohen examined several strategies that learners apply to compensate for the lack of adequate speaking skills, e.g.
one strategy is code-switching, to uphold communication. Another observable phenomenon is a kind of "mixed language": Müller (1995) states that on many levels of speech the learner falls back on a mixture of different languages. Still, it is important to mention that, as with almost every study conducted in the various sub-fields of second-language acquisition research, several problems arise. There are longitudinal vs. cross-sectional studies and differences in the variables used, and above all the terms and conditions of the acquisition and incubation periods are not standardised, particularly the length of the incubation period (Feuerhake 2004: 8). That means that some studies observe language attrition only after language programs, others look at attrition during breaks between language programs, and still others examine attrition after a change of environment with regard to language and living conditions (Cohen 1975, Olshtain 1989). Finally, the studies reviewed here show that attrition follows a certain order; e.g. productive skills are more affected than receptive skills. A loss of fluency, mainly due to difficulties in lexical retrieval, seems to be the first sign of language attrition, followed by attrition in morphology and syntax. Further observation of language attrition is necessary to give a better understanding of how the human mind deals with language (Hansen 1999: 78). The following section attempts to explain motivation and its influence on language attrition. Until 1990 the sociopsychological model of Gardner dominated research on motivation. Gardner and Lambert emphasise the importance of attitude towards the language, the target country, and the language community (Feuerhake 2004). According to Gardner and Lambert (1972), a learner is instrumentally oriented if learning a foreign language has a function, e.g. for success in career terms.
The language thereby becomes an instrument to achieve a higher purpose, and foreign-language learning concentrates on fulfilling the learner's aim (Feuerhake 2004: 9). The integrative orientation, by contrast, pursues the aim of acculturating with the target language and country, as well as integration into the target-language community. Since instrumental and integrative orientation are not enough to cover all aspects of the term motivation, the notions of intrinsic and extrinsic motivation were added to the model. The term intrinsic refers to behaviour that results from the reward of the activity itself: the learner acts because he enjoys the activity or because it satisfies his curiosity. Such learning is mainly self-determined, and the learner is eager to learn a foreign language because he wants to achieve a certain level of competence; he enjoys learning, and the acquisition of a foreign language is a challenge. Extrinsically motivated learners are oriented towards external stimuli, e.g. positive feedback or the expectations of others. In general, four different types of extrinsic motivation can be distinguished (Bahar 2005). Bahar (2005: 66) quotes Pintrich & Schunk (1996), who state that "[…] motivation involves various mental processes that lead to the initiation and maintenance of action […]". Hence, motivation is a dynamic process that changes over time, and a learner's motivation may likewise change during the learning process; it therefore cannot be seen as an isolated factor. Moreover, several other factors, situated within the learner as well as in the environment, influence motivation and are responsible for its intensity and variability. Gardner, Lalonde, & Moorcroft (1987) investigated the nature of the attrition of L2 French skills by L1 English grade-12 students during the summer vacation, and the role played by attitudes and motivation in promoting language achievement and language maintenance.
Students who finished the L2 class highly proficient were more likely to retain what they knew. Yet high achievers in the classroom situation are no more likely to make efforts to use the L2 outside the classroom unless they have positive attitudes and high levels of motivation. The authors write: "an underlying determinant of both acquisition and use is motivation" (p. 44). In fact, the nature of language acquisition is still so complex, and so much is still unknown, that not all students will have the same experiences during the incubation period. It is possible that some students will appear to attrite in some areas while others attrite in other areas; some students will appear to maintain the level they had previously achieved, and still others will appear to improve. Murtagh (2003) investigated the retention and attrition of L2 Irish in Ireland among second-level school students. At Time 1, she found that most participants were instrumentally motivated, yet the immersion students were the most likely to be integratively motivated and had the most positive attitudes towards learning Irish. Immersion-school students were also more likely to have opportunities to use Irish outside the classroom/school environment. Self-reports correlated with ability. She concludes that the educational setting (immersion schools, for example) and the use of the language outside the classroom were the best predictors of L2 Irish acquisition. Eighteen months later, Murtagh found that the majority of groups 1 and 2 believed their Irish ability had attrited, the immersion group less so. The results of the tests, however, do not show any overall attrition: time as a factor did not exert any overall significant change on the sample's proficiency in Irish (Murtagh, 2003: 159). Fujita (2002), in a study evaluating attrition among bilingual Japanese children, says that a number of factors are seen as necessary to maintain the two languages in the returnee child.
Those factors include: age on arrival in the L2 environment, length of residence in the L2 environment, and proficiency level in the L1. Furthermore, she found that L2 attrition was closely related to another factor: the age of the child on returning to the L1 environment. Children returning at around age 9 or before were more likely to attrite than those returning later. Upon returning from overseas, pressure from society, their family, their peers, and themselves pushes returnee children to switch channels back to the L1, and they quickly make efforts to attain the native-like L1 proficiency of their peers. At the same time, lack of L2 support, in the schools in particular and in society in general, results in an overall L2 loss.
https://en.wikipedia.org/wiki/Second-language_attrition
A spoken language is a form of communication produced through articulate sounds or, in some cases, through manual gestures, as opposed to written language. Oral or vocal languages are those produced using the vocal tract, whereas sign languages are produced with the body and hands. The term "spoken language" is sometimes used to mean only oral languages, especially by linguists, excluding sign languages and making the terms 'spoken', 'oral', and 'vocal language' synonymous. Others refer to sign language as "spoken", especially in contrast to written transcriptions of signs.[1][2][3] The relationship between spoken language and written language is complex. Within the field of linguistics, the current consensus is that speech is an innate human capability, and written language is a cultural invention.[4] However, some linguists, such as those of the Prague school, argue that written and spoken language possess distinct qualities which would argue against written language being dependent on spoken language for its existence.[5] Hearing children acquire as their first language the language that is used around them, whether vocal, cued (if they are sighted), or signed. Deaf children can do the same with Cued Speech or sign language if either visual communication system is used around them. Vocal languages are traditionally taught to them in the same way that written language must be taught to hearing children. (See oralism.)[6][7] Teachers place particular emphasis on spoken language with children who speak a different primary language outside of the school. For the child it is considered important, socially and educationally, to have the opportunity to understand multiple languages.[8]
https://en.wikipedia.org/wiki/Spoken_language
Abiogenesis is the natural process by which life arises from non-living matter, such as simple organic compounds. The prevailing scientific hypothesis is that the transition from non-living to living entities on Earth was not a single event, but a process of increasing complexity involving the formation of a habitable planet, the prebiotic synthesis of organic molecules, molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes. The transition from non-life to life has never been observed experimentally, but many proposals have been made for different stages of the process. The study of abiogenesis aims to determine how pre-life chemical reactions gave rise to life under conditions strikingly different from those on Earth today. It primarily uses tools from biology and chemistry, with more recent approaches attempting a synthesis of many sciences. Life functions through the specialized chemistry of carbon and water, and builds largely upon four key families of chemicals: lipids for cell membranes, carbohydrates such as sugars, amino acids for protein metabolism, and the nucleic acids DNA and RNA for the mechanisms of heredity. Any successful theory of abiogenesis must explain the origins and interactions of these classes of molecules. Many approaches to abiogenesis investigate how self-replicating molecules, or their components, came into existence. Researchers generally think that current life descends from an RNA world, although other self-replicating and self-catalyzing molecules may have preceded RNA. Other approaches ("metabolism-first" hypotheses) focus on understanding how catalysis in chemical systems on the early Earth might have provided the precursor molecules necessary for self-replication. The classic 1952 Miller–Urey experiment demonstrated that most amino acids, the chemical constituents of proteins, can be synthesized from inorganic compounds under conditions intended to replicate those of the early Earth.
External sources of energy may have triggered these reactions, including lightning, radiation, atmospheric entries of micro-meteorites, and implosion of bubbles in sea and ocean waves. More recent research has found amino acids in meteorites, comets, asteroids, and star-forming regions of space. While the last universal common ancestor of all modern organisms (LUCA) is thought to have existed long after the origin of life, investigations into LUCA can guide research into early universal characteristics. A genomics approach has sought to characterize LUCA by identifying the genes shared by Archaea and Bacteria, members of the two major branches of life (with Eukaryotes included in the archaean branch in the two-domain system). It appears there are 60 proteins common to all life and 355 prokaryotic genes that trace to LUCA; their functions imply that the LUCA was anaerobic with the Wood–Ljungdahl pathway, deriving energy by chemiosmosis, and maintaining its hereditary material with DNA, the genetic code, and ribosomes. Although the LUCA lived over 4 billion years ago (4 Gya), researchers believe it was far from the first form of life. Most evidence suggests that earlier cells might have had a leaky membrane and been powered by a naturally occurring proton gradient near a deep-sea white smoker hydrothermal vent; however, other evidence suggests instead that life may have originated inside the continental crust or in water at Earth's surface. Earth remains the only place in the universe known to harbor life. Geochemical and fossil evidence from the Earth informs most studies of abiogenesis. The Earth was formed 4.54 Gya, and the earliest evidence of life on Earth dates from at least 3.8 Gya, from Western Australia. Some studies have suggested that fossil micro-organisms may have lived within hydrothermal vent precipitates dated 3.77 to 4.28 Gya from Quebec, soon after ocean formation 4.4 Gya during the Hadean.
Life consists of reproduction with (heritable) variations.[3] NASA defines life as "a self-sustaining chemical system capable of Darwinian [i.e., biological] evolution."[4] Such a system is complex; the last universal common ancestor (LUCA), presumably a single-celled organism which lived some 4 billion years ago, already had hundreds of genes encoded in the DNA genetic code that is universal today. That in turn implies a suite of cellular machinery including messenger RNA, transfer RNA, and ribosomes to translate the code into proteins. Those proteins included enzymes to operate its anaerobic respiration via the Wood–Ljungdahl metabolic pathway, and a DNA polymerase to replicate its genetic material.[5][6] The challenge for abiogenesis (origin of life)[7][8][9] researchers is to explain how such a complex and tightly interlinked system could develop by evolutionary steps, as at first sight all its parts are necessary to enable it to function. For example, a cell, whether the LUCA or in a modern organism, copies its DNA with the DNA polymerase enzyme, which is itself produced by translating the DNA polymerase gene in the DNA. Neither the enzyme nor the DNA can be produced without the other.[10] The likely answer to this challenge is that the evolutionary process could have involved molecular self-replication, self-assembly such as of cell membranes, and autocatalysis via RNA ribozymes in an RNA world environment.[5][6][11] Nonetheless, the transition of non-life to life has never been observed experimentally, nor has there been a satisfactory chemical explanation.[12] The preconditions to the development of a living cell like the LUCA are outlined, though disputed in their details: a habitable world is formed with a supply of minerals and liquid water. Prebiotic synthesis creates a range of simple organic compounds, which are assembled into polymers such as proteins and RNA.
On the other hand, the process after the LUCA is readily understood: biological evolution caused the development of a wide range of species with varied forms and biochemical capabilities. However, the derivation of living things such as LUCA from simple components is far from understood.[1] Although Earth remains the only place where life is known,[13][14] the science of astrobiology seeks evidence of life on other planets. The 2015 NASA strategy on the origin of life aimed to solve the puzzle by identifying interactions, intermediary structures and functions, energy sources, and environmental factors that contributed to the diversity, selection, and replication of evolvable macromolecular systems,[2] and mapping the chemical landscape of potential primordial informational polymers. The advent of polymers that could replicate, store genetic information, and exhibit properties subject to selection was, it suggested, most likely a critical step in the emergence of prebiotic chemical evolution.[2] Those polymers derived, in turn, from simple organic compounds such as nucleobases, amino acids, and sugars that could have been formed by reactions in the environment.[15][8][16][17] A successful theory of the origin of life must explain how all these chemicals came into being.[18] One ancient view of the origin of life, from Aristotle until the 19th century, is of spontaneous generation.[19] This held that "lower" animals such as insects were generated by decaying organic substances, and that life arose by chance.[20][21] This was questioned from the 17th century, in works like Thomas Browne's Pseudodoxia Epidemica.[22][23] In 1665, Robert Hooke published the first drawings of a microorganism.
In 1676,Antonie van Leeuwenhoekdrew and described microorganisms, probablyprotozoaandbacteria.[24]Van Leeuwenhoek disagreed with spontaneous generation, and by the 1680s convinced himself, using experiments ranging from sealed and open meat incubation and the close study of insect reproduction, that the theory was incorrect.[25]In 1668Francesco Redishowed that nomaggotsappeared in meat when flies were prevented from laying eggs.[26]By the middle of the 19th century, spontaneous generation was considered disproven.[27][28] Dating back toAnaxagorasin the 5th century BC,panspermia[29]is the idea thatlifeoriginated elsewhere in theuniverseand came to Earth. The modern version of panspermia holds that life may have been distributed to Earth bymeteoroids,asteroids,comets[30]orplanetoids.[31]It does not attempt to explain how life originated, but shifts the origin of life to another heavenly body. The advantage is that life is not required to have formed on each planet it occurs on, but rather in a more limited set of locations, or even a single location, and then spread about thegalaxyto other star systems via cometary or meteorite impact.[32]There is some interest in the possibility thatlife originated on Marsand later transferred to Earth.[33] The idea that life originated from non-living matter in slow stages appeared inHerbert Spencer's 1864–1867 bookPrinciples of Biology, and inWilliam Turner Thiselton-Dyer's 1879 paper "On spontaneous generation and evolution". On 1 February 1871Charles Darwinwrote about these publications toJoseph Hooker, and set out his own speculation, suggesting that the original spark of life may have begun in a "warm little pond, with all sorts ofammoniaand phosphoricsalts,—light, heat, electricity &c present, that a protein compound was chemically formed". 
Darwin went on to explain that "at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed."[34][35][36] Alexander Oparinin 1924 andJ. B. S. Haldanein 1929 proposed that the first molecules constituting the earliest cells slowly self-organized from aprimordial soup, and this theory is called theOparin–Haldane hypothesis.[37][38]Haldane suggested that the Earth's prebiotic oceans consisted of a "hot dilute soup" in which organic compounds could have formed.[21][39]J. D. Bernalshowed that such mechanisms could form most of the necessary molecules for life from inorganic precursors.[40]In 1967, he suggested three "stages": the origin of biologicalmonomers; the origin of biologicalpolymers; and the evolution from molecules to cells.[41][42] In 1952,Stanley MillerandHarold Ureycarried out a chemical experiment to demonstrate how organic molecules could have formed spontaneously from inorganic precursors underprebiotic conditionslike those posited by the Oparin–Haldane hypothesis. It used a highlyreducing(lacking oxygen) mixture of gases—methane,ammonia, andhydrogen, as well aswater vapor—to form simple organic monomers such asamino acids.[43][44]Bernal said of the Miller–Urey experiment that "it is not enough to explain the formation of such molecules, what is necessary, is a physical-chemical explanation of the origins of these molecules that suggests the presence of suitable sources and sinks for free energy."[45]However, current scientific consensus describes the primitive atmosphere as weakly reducing or neutral,[46][47]diminishing the amount and variety of amino acids that could be produced. 
The addition ofironandcarbonateminerals, present in early oceans, however, produces a diverse array of amino acids.[46]Later work has focused on two other potential reducing environments:outer spaceand deep-sea hydrothermal vents.[48][49][50] Soon after theBig Bang, which occurred roughly 14 Gya, the only chemical elements present in the universe werehydrogen,helium, andlithium, the three lightest atoms in the periodic table. These elements gradually accreted and began orbiting in disks of gas and dust. Gravitational accretion of material at the hot and dense centers of theseprotoplanetary disksformed stars by the fusion of hydrogen.[51]Early stars were massive and short-lived, producing all the heavier elements throughstellar nucleosynthesis. Element formation through stellar nucleosynthesis proceeds up to iron-56, its most stable product. Heavier elements were formed during supernovae at the end of a star's lifecycle.Carbon, currently thefourth most abundant chemical elementin the universe (after hydrogen, helium, andoxygen), was formed mainly inwhite dwarf stars, particularly those bigger than twice the mass of the sun.[52]As these stars reached the end of theirlifecycles, they ejected these heavier elements, among them carbon and oxygen, throughout the universe. These heavier elements allowed for the formation of new objects, including rocky planets and other bodies.[53]According to thenebular hypothesis, the formation and evolution of theSolar Systembegan 4.6 Gya with thegravitational collapseof a small part of a giantmolecular cloud.
Most of the collapsing mass collected in the center, forming theSun, while the rest flattened into aprotoplanetary diskout of which theplanets,moons,asteroids, and other small Solar System bodies formed.[54] The age of theEarthis 4.54 billion years, as found by radiometric dating ofcalcium-aluminium-rich inclusionsincarbonaceous chondritemeteorites, the oldest material in the Solar System.[55][56]Earth, during theHadeaneon (from its formation until 4.031 Gya), was at first inhospitable to any living organisms. During its formation, the Earth lost a significant part of its initial mass, and consequently lacked thegravityto hold molecular hydrogen and the bulk of the original inert gases.[57]Soon after initial accretion of Earth at 4.48 Ga, its collision withTheia, a hypothesised impactor, is thought to have created the ejected debris that would eventually form the Moon.[58]This impact would have removed the Earth's primary atmosphere, leaving behind clouds of viscous silicates and carbon dioxide. This unstable atmosphere was short-lived and condensed shortly after to form the bulk silicate Earth, leaving behind an atmosphere largely consisting of water vapor,nitrogen, andcarbon dioxide, with smaller amounts ofcarbon monoxide, hydrogen, andsulfurcompounds.[59][60]The solution of carbon dioxide in water is thought to have made the seas slightlyacidic, with apHof about 5.5.[61] Condensation to form liquidoceansis theorised to have occurred as early as the Moon-forming impact.[62][63]This scenario has found support from the dating of 4.404 Gyazirconcrystals with highδ18Ovalues from metamorphosedquartziteofMount Narryerin Western Australia.[64][65]The Hadean atmosphere has been characterized as a "gigantic, productive outdoor chemical laboratory," similar to volcanic gases today which still support some abiotic chemistry. Despite the likely increased volcanism from early plate tectonics, the Earth may have been a predominantly water world between 4.4 and 4.3 Gya.
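The slightly acidic early ocean follows from simple carbonate equilibrium: dissolved CO2 forms carbonic acid, which releases protons. A minimal sketch — the function name, the Henry's-law constant, and the 25 °C Ka1 value are illustrative assumptions, not figures from the cited sources, and real seawater adds mineral buffering this ignores:

```python
import math

def ocean_surface_ph(p_co2_atm, kh=3.4e-2, ka1=4.45e-7):
    """Estimate the pH of pure water in equilibrium with CO2 gas.

    Assumes Henry's law, [CO2(aq)] = KH * pCO2, and that protons come
    only from the first dissociation of carbonic acid, so that
    [H+] = sqrt(Ka1 * [CO2(aq)]).  KH (mol/L/atm) and Ka1 are
    illustrative 25 degC values.
    """
    co2_aq = kh * p_co2_atm            # dissolved CO2, mol/L
    h_plus = math.sqrt(ka1 * co2_aq)   # proton concentration, mol/L
    return -math.log10(h_plus)

# A modern-level CO2 partial pressure (~400 ppm) already gives a mildly
# acidic pH near 5.6; a denser Hadean CO2 atmosphere drives it lower.
print(round(ocean_surface_ph(4e-4), 2))
print(round(ocean_surface_ph(0.1), 2))
```

Under higher assumed Hadean CO2 pressures this idealized calculation falls below the quoted pH of about 5.5, a gap that dissolved minerals would have partly buffered.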
It is debated whether crust was exposed above this ocean, given uncertainties about what early plate tectonics looked like. For early life to have developed, it is generally thought that a land setting is required, so this question is essential to determining when in Earth's history life evolved.[66]Immediately after the Moon-forming impact, Earth likely had little if any continental crust, a turbulent atmosphere, and ahydrospheresubject to intenseultravioletlight from aT Tauri stage Sun. It was also affected bycosmic radiation, and continued asteroid andcometimpacts.[67]Despite all this, niche environments conducive to life likely existed on Earth in the late Hadean to early Archaean. TheLate Heavy Bombardmenthypothesis posits that a period of intense impact occurred at 4.1 to 3.8 Gya during the Hadean and earlyArcheaneons.[68][69]Originally it was thought that the Late Heavy Bombardment was a single cataclysmic impact event occurring at 3.9 Gya; this would have had the potential to sterilise all life on Earth by volatilising liquid oceans and blocking the Sun needed for photosynthesising primary producers, pushing back the earliest possible emergence of life to after the Late Heavy Bombardment.[70]However, more recent research questioned both the intensity of the Late Heavy Bombardment as well as its potential for sterilisation.
Uncertainties as to whether the Late Heavy Bombardment was one giant impact or a period of greater impact rates greatly changed the implications of its destructive power.[71][72]The 3.9 Gya date arose from dating ofApollo mission sample returnscollected mostly near theImbrium Basin, biasing the age of recorded impacts.[73]Impact modelling of the lunar surface reveals that rather than a cataclysmic event at 3.9 Gya, multiple small-scale, short-lived periods of bombardment likely occurred.[74]Terrestrial data support this idea by showing multiple periods of ejecta in the rock record both before and after the 3.9 Gya marker, suggesting that the early Earth was subject to continuous impacts that would not have been as great a threat to life as previously thought.[75]If the Late Heavy Bombardment was not a single cataclysmic event, the emergence of life could have taken place far before 3.9 Gya. If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from late impacts and the then high levels of ultraviolet radiation from the sun. Geothermally heated oceanic crust could have yielded far more organic compounds through deephydrothermal ventsthan theMiller–Urey experimentsindicated.[76]The available energy is maximized at 100–150 °C, the temperatures at whichhyperthermophilicbacteria andthermoacidophilicarchaealive.[77] The timing at which life emerged on Earth is most likely between 3.48 and 4.32 Gya. Minimum age estimates are based on evidence from thegeologic record. In 2017, the earliest physical evidence of life was reported to consist ofmicrobialitesin theNuvvuagittuq Greenstone Beltof Northern Quebec, inbanded iron formationrocks at least 3.77 and possibly as old as 4.32 Gya. The micro-organisms could have lived within hydrothermal vent precipitates, soon after the 4.4 Gyaformation of oceansduring the Hadean.
The microbes resemble modern hydrothermal vent bacteria, supporting the view that abiogenesis began in such an environment.[78]However, later research disputed this interpretation of the data, stating that the observations may be better explained by abiotic processes in silica-rich waters,[79]"chemical gardens,"[80]circulating hydrothermal fluids,[81]or volcanic ejecta.[82] Biogenicgraphitehas been found in 3.7 Gya metasedimentary rocks from southwesternGreenland[83]and inmicrobial matfossils from 3.49 Gyachertsin thePilbararegion ofWestern Australia.[84]Evidence of early life in rocks fromAkiliaIsland, near theIsua supracrustal beltin southwestern Greenland, dating to 3.7 Gya, has shown biogeniccarbon isotopes.[85]In other parts of the Isua supracrustal belt, graphite inclusions trapped withingarnetcrystals are connected to the other elements of life: oxygen, nitrogen, and possibly phosphorus in the form ofphosphate, providing further evidence for life 3.7 Gya.[86]In thePilbararegion of Western Australia, compelling evidence of early life was found inpyrite-bearing sandstone in a fossilized beach, with rounded tubular cells that oxidized sulfur by photosynthesis in the absence of oxygen.[87][88]Carbon isotope ratios on graphite inclusions from the Jack Hills zircons suggest that life could have existed on Earth from 4.1 Gya.[89] The Pilbara region of Western Australia contains theDresser Formationwith rocks dated to 3.48 Gya, including layered structures calledstromatolites. Their modern counterparts are created by photosynthetic micro-organisms includingcyanobacteria.[90]These lie within undeformed hydrothermal-sedimentary strata; their texture indicates a biogenic origin. Parts of the Dresser formation preservehot springson land, but other regions seem to have been shallow seas.[91]A molecular clock analysis suggests the LUCA emerged prior to 3.9 Gya.[92] Allchemical elementsderive from stellar nucleosynthesis except for hydrogen and some helium and lithium.
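The molecular clock reasoning behind such LUCA dates infers a divergence time from the number of sequence differences between lineages and an assumed substitution rate. A toy sketch with purely hypothetical numbers — the function and both input values are illustrative, not taken from the cited analysis:

```python
def divergence_time_years(distance, rate):
    """Molecular-clock estimate: two lineages separated for time t
    accumulate d = 2 * r * t substitutions per site between them
    (each lineage contributes r * t), so t = d / (2 * r)."""
    return distance / (2.0 * rate)

# Hypothetical example: 0.8 substitutions per site observed between two
# deep-branching lineages, at an assumed clock rate of 1e-10
# substitutions per site per year, places their common ancestor
# about four billion years in the past.
t = divergence_time_years(0.8, 1e-10)
print(f"common ancestor ~{t:.2e} years ago")
```

Real analyses calibrate the rate against fossils or geologic events and correct for multiple substitutions at the same site; rates also vary between lineages, which is why such deep dates carry wide uncertainties.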
Basic chemical ingredients of life – thecarbon-hydrogen molecule(CH), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+) – can be produced byultraviolet lightfrom stars.[93]Complex molecules, including organic molecules, form naturally both in space and on planets.[94]Organic molecules on the early Earth could have had either terrestrial origins, with organic molecule synthesis driven by impact shocks or by other energy sources, such as ultraviolet light,redoxcoupling, or electrical discharges; or extraterrestrial origins (pseudo-panspermia), with organic molecules formed ininterstellar dust cloudsraining down on to the planet.[95][96] An organic compound is a chemical whose molecules contain carbon. Carbon is abundant in the Sun, stars, comets, and in theatmospheresof most planets of the Solar System.[97]Organic compounds are relatively common in space, formed by "factories of complex molecular synthesis" which occur in molecular clouds andcircumstellar envelopes, and chemically evolve after reactions are initiated mostly byionizing radiation.[94][98][99]Purineandpyrimidinenucleobases includingguanine,adenine,cytosine,uracil, andthyminehave been found inmeteorites. These could have provided the materials for DNA andRNAto form on theearly Earth.[100]The amino acidglycinewas found in material ejected from cometWild 2; it had earlier been detected in meteorites.[101]Comets are encrusted with dark material, thought to be atar-like organic substance formed from simple carbon compounds under ionizing radiation. 
A rain of material from comets could have brought such complex organic molecules to Earth.[102][103][60]It is estimated that during the Late Heavy Bombardment, meteorites may have delivered up to five milliontonsof organic prebiotic elements to Earth per year.[60]Currently 40,000 tons of cosmic dust falls to Earth each year.[104] Polycyclic aromatic hydrocarbons(PAH) are the most common and abundant polyatomic molecules in theobservable universe, and are a major store of carbon.[97][105][106][107]They seem to have formed shortly after the Big Bang,[108][106][107]and are associated withnew starsandexoplanets.[97]They are a likely constituent of Earth's primordial sea.[108][106][107]PAHs have been detected innebulae,[109]and in theinterstellar medium, in comets, and in meteorites.[97] A star, HH 46-IR, resembling the sun early in its life, is surrounded by a disk of material which contains molecules including cyanide compounds,hydrocarbons, and carbon monoxide. PAHs in the interstellar medium can be transformed throughhydrogenation,oxygenation, andhydroxylationto more complex organic compounds used in living cells.[110] Organic compounds introduced on Earth byinterstellar dust particlescan help to form complex molecules, thanks to their peculiarsurface-catalyticactivities.[111][112]The RNA component uracil and related molecules, includingxanthine, in theMurchison meteoritewere likely formed extraterrestrially, as suggested by studies of12C/13Cisotopic ratios.[113]NASA studies of meteorites suggest that all four DNA nucleobases (adenine, guanine and related organic molecules) have been formed in outer space.[111][114][115]Thecosmic dustpermeating the universe contains complex organics ("amorphous organic solids with a mixedaromatic–aliphaticstructure") that could be created rapidly by stars.[116]Glycolaldehyde, a sugar molecule and RNA precursor, has been detected in regions of space including aroundprotostarsand on meteorites.[117][118] As early as the 1860s, 
experiments demonstrated that biologically relevant molecules can be produced from interaction of simple carbon sources with abundant inorganic catalysts. The spontaneous formation of complex polymers from abiotically generated monomers under the conditions posited by the "soup" theory is not straightforward. Besides the necessary basic organic monomers, compounds that would have prohibited the formation of polymers were also formed in high concentration during theMiller–Urey experimentandJoan Oróexperiments.[119]Biology uses essentially 20 amino acids for its coded protein enzymes, representing a very small subset of the structurally possible products. Since life tends to use whatever is available, an explanation is needed for why the set used is so small.[120]Formamide is attractive as a medium that potentially provided a source of amino acid derivatives from simple aldehyde and nitrile feedstocks.[121] Alexander Butlerovshowed in 1861 that theformose reactioncreates sugars including tetroses, pentoses, and hexoses whenformaldehydeis heated under basic conditions with divalent metal ions like calcium. In 1959, R. Breslow proposed that the reaction is autocatalytic.[122] Nucleobases, such as guanine and adenine, can be synthesized from simple carbon and nitrogen sources, such ashydrogen cyanide(HCN) and ammonia.[123]Formamideproduces all four ribonucleotides when warmed with terrestrial minerals. Formamide is ubiquitous in the Universe, produced by the reaction of water and HCN. It can be concentrated by the evaporation of water.[124][125]HCN is poisonous only toaerobic organisms(eukaryotesand aerobic bacteria), which did not yet exist. It can play roles in other chemical processes such as the synthesis of the amino acid glycine.[60] DNA and RNA components including uracil, cytosine and thymine can be synthesized under outer space conditions, using starting chemicals such as pyrimidine found in meteorites.
Pyrimidine may have been formed inred giantstars or in interstellar dust and gas clouds.[126]All four RNA-bases may be synthesized from formamide in high-energy density events like extraterrestrial impacts.[127]Several ribonucleotides for RNA formation have also been synthesized in a laboratory environment which replicatesprebiotic conditionsviaautocatalytic formose reaction.[128] Other pathways for synthesizing bases from inorganic materials have been reported.[129]Freezing temperatures are advantageous for the synthesis of purines, due to the concentrating effect for key precursors such as hydrogen cyanide.[130]However, while adenine and guanine require freezing conditions for synthesis, cytosine and uracil may require boiling temperatures.[131]Seven amino acids and eleven types of nucleobases formed in ice when ammonia andcyanidewere left in a freezer for 25 years.[132][133]S-triazines(alternative nucleobases), pyrimidines including cytosine and uracil, and adenine can be synthesized by subjecting a urea solution to freeze-thaw cycles under a reductive atmosphere, with spark discharges as an energy source.[134]The explanation given for the unusual speed of these reactions at such a low temperature iseutectic freezing, which crowds impurities in microscopic pockets of liquid within the ice, causing the molecules to collide more often.[135] Prebiotic peptide synthesis is proposed to have occurred through a number of possible routes. 
Some center on high temperature/concentration conditions in which condensation becomes energetically favorable, while others focus on the availability of plausible prebiotic condensing agents.[136][further explanation needed] Experimental evidence for the formation of peptides in uniquely concentrated environments is bolstered by work suggesting that wet-dry cycles and the presence of specific salts can greatly increase spontaneous condensation of glycine into poly-glycine chains.[137]Other work suggests that while mineral surfaces, such as those of pyrite, calcite, and rutile catalyze peptide condensation, they also catalyze their hydrolysis. The authors suggest that additional chemical activation or coupling would be necessary to produce peptides at sufficient concentrations. Thus, mineral surface catalysis, while important, is not sufficient alone for peptide synthesis.[138] Many prebiotically plausible condensing/activating agents have been identified, including the following: cyanamide, dicyanamide, dicyandiamide, diaminomaleonitrile, urea, trimetaphosphate, NaCl, CuCl2, (Ni,Fe)S, CO, carbonyl sulfide (COS), carbon disulfide (CS2), SO2, and diammonium phosphate (DAP).[136] An experiment reported in 2024 used a sapphire substrate with a web of thin cracks under a heat flow, similar to the environment ofdeep-ocean vents, as a mechanism to separate and concentrate prebiotically relevant building blocks from a dilute mixture, increasing their concentration by up to three orders of magnitude. The authors propose this as a plausible model for the origin of complex biopolymers.[139]This presents another physical process that allows for concentrated peptide precursors to combine in the right conditions.
A similar role of increasing amino acid concentration has been suggested for clays as well.[140] While all of these scenarios involve the condensation of amino acids, the prebiotic synthesis of peptides from simpler molecules such as CO, NH3and C, skipping the step of amino acid formation, is very efficient.[141][142] The largest unanswered question in evolution is how simple protocells first arose and differed in reproductive contribution to the following generation, thus initiating the evolution of life. Thelipid worldtheory postulates that the first self-replicating object waslipid-like.[143][144]Phospholipids formlipid bilayersin water while under agitation—the same structure as in cell membranes. These molecules were not present on early Earth, but otheramphiphiliclong-chain molecules also form membranes. These bodies may expand by insertion of additional lipids, and may spontaneously split into twooffspringof similar size and composition. Lipid bodies may have provided sheltering envelopes for information storage, allowing the evolution and preservation of polymers like RNA that store information. Only one or two types of amphiphiles have been studied which might have led to the development of vesicles.[145]There is an enormous number of possible arrangements of lipid bilayer membranes, and those with the best reproductive characteristics would have converged toward a hypercycle reaction,[146][147]a positivefeedbackcomposed of two mutual catalysts represented by a membrane site and a specific compound trapped in the vesicle. 
Such site/compound pairs are transmissible to the daughter vesicles leading to the emergence of distinctlineagesof vesicles, which would have allowednatural selection.[148] Aprotocellis a self-organized, self-ordered, spherical collection of lipids proposed as a stepping-stone to the origin of life.[145]A functional protocell has (as of 2014) not yet been achieved in a laboratory setting.[149][150][151]Self-assembledvesiclesare essential components of primitive cells.[145]The theory of classical irreversible thermodynamics treats self-assembly under a generalized chemical potential within the framework ofdissipative systems.[152][153][154]Thesecond law of thermodynamicsrequires that overallentropyincreases, yet life is distinguished by its great degree of organization. Therefore, a boundary is needed to separate orderedlife processesfrom chaotic non-living matter.[155] Irene Chen andJack W. Szostaksuggest that elementary protocells can give rise to cellular behaviors including primitive forms of differential reproduction, competition, and energy storage.[150]Competition for membrane molecules would favor stabilized membranes, suggesting a selective advantage for the evolution of cross-linked fatty acids and even thephospholipidsof today.[150]Suchmicro-encapsulationwould allow for metabolism within the membrane and the exchange of small molecules, while retaining large biomolecules inside. 
Such a membrane is needed for a cell to create its ownelectrochemical gradientto store energy by pumping ions across the membrane.[156][157]Fatty acid vesicles in conditions relevant to alkaline hydrothermal vents can be stabilized by isoprenoids which are synthesized by the formose reaction; the advantages and disadvantages of isoprenoids incorporated within the lipid bilayer in different microenvironments might have led to the divergence of the membranes of archaea and bacteria.[158] Laboratory experiments have shown that vesicles can undergo an evolutionary process under pressure cycling conditions.[159]Simulating the systemic environment in tectonicfault zoneswithin theEarth's crust, pressure cycling leads to the periodic formation of vesicles.[160]Under the same conditions, random peptide chains are formed, which are continuously selected for their ability to integrate into the vesicle membrane. A further selection of the vesicles for their stability potentially leads to the development of functional peptide structures,[161][162][163]associated with an increase in the survival rate of the vesicles. Life requires a loss of entropy, or disorder, as molecules organize themselves into living matter. At the same time, the emergence of life is associated with the formation of structures beyond a certain threshold ofcomplexity.[164]The emergence of life with increasing order and complexity does not contradict the second law of thermodynamics, which states that overall entropy never decreases, since a living organism creates order in some places (e.g. its living body) at the expense of an increase of entropy elsewhere (e.g. heat and waste production).[165][166][167] Multiple sources of energy were available for chemical reactions on the early Earth. Heat fromgeothermalprocesses is a standard energy source for chemistry.
Other examples include sunlight, lightning,[60]atmospheric entries of micro-meteorites,[168]and implosion of bubbles in sea and ocean waves.[169]This has been confirmed by experiments[170][171]and simulations.[172]Unfavorable reactions can be driven by highly favorable ones, as in the case of iron-sulfur chemistry. For example, this was probably important forcarbon fixation.[a]Carbon fixation by reaction of CO2with H2S via iron-sulfur chemistry is favorable, and occurs at neutral pH and 100 °C. Iron-sulfur surfaces, which are abundant near hydrothermal vents, can drive the production of small amounts of amino acids and other biomolecules.[60] In 1961,Peter Mitchellproposedchemiosmosisas a cell's primary system of energy conversion. The mechanism, now ubiquitous in living cells, powers energy conversion in micro-organisms and in themitochondriaof eukaryotes, making it a likely candidate for early life.[173][174]Mitochondria produceadenosine triphosphate(ATP), the energy currency of the cell used to drive cellular processes such as chemical syntheses. The mechanism of ATP synthesis involves a closed membrane in which theATP synthaseenzyme is embedded. The energy required to release strongly bound ATP has its origin inprotonsthat move across the membrane.[175]In modern cells, those proton movements are caused by the pumping of ions across the membrane, maintaining an electrochemical gradient. 
In the first organisms, the gradient could have been provided by the difference in chemical composition between the flow from a hydrothermal vent and the surrounding seawater,[157]or, if life had a terrestrial origin, perhaps by meteoritic quinones that could have supported chemiosmotic energy conversion across lipid membranes.[176] ThePAH world hypothesisis a speculativehypothesisthat proposes thatpolycyclic aromatic hydrocarbons(PAHs), known to be abundant in theuniverse,[108][106][177]including in comets,[178]and assumed to be abundant in theprimordial soupof the earlyEarth, played a major role in theorigin of lifeby mediating the synthesis ofRNAmolecules, leading into theRNA world. However, as yet, the hypothesis is untested.[179][180] TheRNA worldhypothesis describes an early Earth with self-replicating and catalytic RNA but no DNA or proteins.[181]Many researchers concur that an RNA world must have preceded the DNA-based life that now dominates.[182]However, RNA-based life may not have been the first to exist.[183][184]Another model echoes Darwin's "warm little pond" with cycles of wetting and drying.[185]Some have proposed a timeline of more than 30 potentially significant chemical events, involving RNA alone, spanning from the pre-RNA world to just before the LUCA. The timeline does not include metabolism-related events (e.g. origins of ATP, glycolysis, the Krebs cycle, the electron transport chain, etc.).[186] RNA is central to the translation process. Small RNAs can catalyze all the chemical groups and information transfers required for life.[184][187]RNA both expresses and maintains genetic information in modern organisms; and the chemical components of RNA are easily synthesized under the conditions that approximated the early Earth, which were very different from those that prevail today.
The structure of theribosomehas been called the "smoking gun", with a central core of RNA and no amino acid side chains within 18Åof theactive sitethat catalyzes peptide bond formation.[188][183][189] The concept of the RNA world was proposed in 1962 byAlexander Rich,[190]and the term was coined byWalter Gilbertin 1986.[184][191]There were initial difficulties in the explanation of the abiotic synthesis of the nucleotides cytosine and uracil.[192]Subsequent research has shown possible routes of synthesis; for example, formamide produces all fourribonucleotidesand other biological molecules when warmed in the presence of various terrestrial minerals.[124][125] RNA replicasecan function as both code and catalyst for further RNA replication, i.e. it can be autocatalytic.Jack Szostakhas shown that certain catalytic RNAs can join smaller RNA sequences together, creating the potential for self-replication. The RNA replication systems, which include two ribozymes that catalyze each other's synthesis, showed a doubling time of the product of about one hour, and were subject to natural selection under the experimental conditions.[193][194][183]If such conditions were present on early Earth, then natural selection would favor the proliferation of suchautocatalytic sets, to which further functionalities could be added.[195][196][197]Self-assembly of RNA may occur spontaneously in hydrothermal vents.[198][199][200]A preliminary form of tRNA could have assembled into such a replicator molecule.[201]When such anRNAmolecule began to replicate, it may have been capable of the three mechanisms of Darwinian selection:heritability, variation of type, and differential reproductive output.
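The roughly one-hour doubling time reported for the cross-replicating ribozyme system implies very rapid amplification under idealized assumptions (unlimited substrate and no decay — simplifications of this sketch, not claims of the experiments):

```python
def replicator_count(n0, hours, doubling_time_h=1.0):
    """Idealized exponential growth of a self-replicating molecule:
    N(t) = N0 * 2**(t / Td), ignoring substrate depletion and decay."""
    return n0 * 2 ** (hours / doubling_time_h)

# With a 1 h doubling time, a single replicator exceeds a million
# copies in under a day: 2**24 copies after 24 hours.
print(int(replicator_count(1, 24)))  # 16777216
```

In practice growth saturates as substrates are consumed, and that scarcity is precisely what creates the competition needed for natural selection among replicator variants.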
The fitness of such an RNA replicator (its per capita rate of increase) would likely have been a function of its intrinsic adaptive capabilities determined by itsnucleotide sequence, and the availability of resources.[202][203]The three primary adaptive capabilities may have been: (1) replication with moderate fidelity, giving rise to heritability while allowing variation of type, (2) resistance to decay, and (3) acquisition of, and ability to process, resources.[202][203]These capabilities would have functioned by means of the folded configurations of the RNA replicators resulting from their nucleotide sequences. Possible precursors to protein synthesis include the synthesis of short peptide cofactors or the self-catalysing duplication of RNA. It is likely that the ancestral ribosome was composed entirely of RNA, although some roles have since been taken over by proteins. Major remaining questions on this topic include identifying the selective force for the evolution of the ribosome and determining how the genetic code arose.[204] Eugene Kooninhas argued that "no compelling scenarios currently exist for the origin of replication and translation, the key processes that together comprise the core of biological systems and the apparent pre-requisite of biological evolution. The RNA World concept might offer the best chance for the resolution of this conundrum but so far cannot adequately account for the emergence of an efficient RNA replicase or the translation system."[205] In line with the RNA world hypothesis, much of modern biology's templated protein biosynthesis is done by RNA molecules—namely tRNAs and the ribosome (consisting of both protein and rRNA components). 
The most central reaction of peptide bond synthesis is understood to be carried out by base catalysis by the 23S rRNA domain V.[206] Experimental evidence has demonstrated successful di- and tripeptide synthesis with a system consisting of only aminoacyl phosphate adaptors and RNA guides, which could be a possible stepping stone between an RNA world and modern protein synthesis.[206][207] Aminoacylation ribozymes that can charge tRNAs with their cognate amino acids have also been selected in in vitro experimentation.[208] The authors also extensively mapped fitness landscapes within their selection, finding that chance emergence of active sequences was more important than sequence optimization.[208]

The first proteins would have had to arise without a fully fledged system of protein biosynthesis. As discussed above, numerous mechanisms for the prebiotic synthesis of polypeptides exist. However, these random sequence peptides would not likely have had biological function. Thus, significant study has gone into exploring how early functional proteins could have arisen from random sequences. First, some evidence on hydrolysis rates shows that abiotically plausible peptides likely contained significant "nearest-neighbor" biases.[209] This could have had some effect on early protein sequence diversity. In other work by Anthony Keefe and Jack Szostak, mRNA display selection on a library of 6×10¹² 80-mers was used to search for sequences with ATP binding activity. They concluded that approximately 1 in 10¹¹ random sequences had ATP binding function.[210] While this is a single example of functional frequency in the random sequence space, the methodology can serve as a powerful simulation tool for understanding early protein evolution.[211]

Starting with the work of Carl Woese from 1977, genomics studies have placed the last universal common ancestor (LUCA) of all modern life-forms between Bacteria and a clade formed by Archaea and Eukaryota in the phylogenetic tree of life.
The LUCA lived over 4 Gya.[212][213] A minority of studies have placed the LUCA in Bacteria, proposing that Archaea and Eukaryota are evolutionarily derived from within Eubacteria;[214] Thomas Cavalier-Smith suggested in 2006 that the phenotypically diverse bacterial phylum Chloroflexota contained the LUCA.[215]

In 2016, a set of 355 genes likely present in the LUCA was identified. A total of 6.1 million prokaryotic genes from Bacteria and Archaea were sequenced, identifying 355 protein clusters from among 286,514 protein clusters that were probably common to the LUCA. The results suggest that the LUCA was anaerobic with a Wood–Ljungdahl (reductive acetyl-CoA) pathway, nitrogen- and carbon-fixing, and thermophilic. Its cofactors suggest dependence upon an environment rich in hydrogen, carbon dioxide, iron, and transition metals. Its genetic material was probably DNA, requiring the 4-nucleotide genetic code, messenger RNA, transfer RNA, and ribosomes to translate the code into proteins such as enzymes. LUCA likely inhabited an anaerobic hydrothermal vent setting in a geochemically active environment. It was evidently already a complex organism, and must have had precursors; it was not the first living thing.[10][216] The physiology of LUCA has been in dispute.[217][218][219] Previous research identified 60 proteins common to all life.[220]

Leslie Orgel argued that early translation machinery for the genetic code would be susceptible to error catastrophe. Geoffrey Hoffmann, however, showed that such machinery can be stable in function against "Orgel's paradox".[221][222][223] Metabolic reactions that have also been inferred in LUCA are the incomplete reverse Krebs cycle, gluconeogenesis, the pentose phosphate pathway, glycolysis, reductive amination, and transamination.[224][225]

A variety of geologic and environmental settings have been proposed for an origin of life.
These theories are often in competition with one another, as there are many differing views of prebiotic compound availability, geophysical setting, and early life characteristics. The first organism on Earth likely looked different from LUCA. Between the first appearance of life and the point where all modern phylogenies began branching, an unknown amount of time passed, with unknown gene transfers, extinctions, and evolutionary adaptation to various environmental niches.[226] One major shift is believed to be from the RNA world to an RNA-DNA-protein world. Modern phylogenies provide more pertinent genetic evidence about LUCA than about its precursors.[227]

The most popular hypotheses for settings for the origin of life are deep sea hydrothermal vents and surface bodies of water. Surface waters can be classified into hot springs, moderate temperature lakes and ponds, and cold settings. Early micro-fossils may have come from a hot world of gases such as methane, ammonia, carbon dioxide, and hydrogen sulfide, toxic to much current life.[228] Analysis of the tree of life places thermophilic and hyperthermophilic bacteria and archaea closest to the root, suggesting that life may have evolved in a hot environment.[229] The deep sea or alkaline hydrothermal vent theory posits that life began at submarine hydrothermal vents.[230][231] William Martin and Michael Russell have suggested that life evolved in structured iron monosulphide precipitates in a seepage site hydrothermal mound, at a redox, pH, and temperature gradient between sulphide-rich hydrothermal fluid and iron(II)-containing waters of the Hadean ocean floor. The naturally arising, three-dimensional compartmentation observed within fossilized seepage-site metal sulphide precipitates indicates that these inorganic compartments were the precursors of cell walls and membranes found in free-living prokaryotes.
The known capability of FeS and NiS to catalyze the synthesis of acetyl-methylsulphide from carbon monoxide and methylsulphide, constituents of hydrothermal fluid, indicates that pre-biotic syntheses occurred at the inner surfaces of these metal-sulphide-walled compartments.[232]

These form where hydrogen-rich fluids emerge from below the sea floor, as a result of serpentinization of ultramafic olivine with seawater, and a pH interface with carbon dioxide-rich ocean water. The vents form a sustained chemical energy source derived from redox reactions, in which electron donors (molecular hydrogen) react with electron acceptors (carbon dioxide); see iron–sulfur world theory. These are exothermic reactions.[230][b]

Russell demonstrated that alkaline vents created an abiogenic proton motive force chemiosmotic gradient,[232] ideal for abiogenesis. Their microscopic compartments, composed of iron-sulfur minerals such as mackinawite, "provide a natural means of concentrating organic molecules" and endowed these mineral cells with the catalytic properties envisaged by Günter Wächtershäuser.[233] This movement of ions across the membrane depends on a combination of two factors: a diffusion force caused by the concentration gradient of protons, and an electrostatic force caused by the difference in electrical potential. These two gradients taken together can be expressed as an electrochemical gradient, providing energy for abiogenic synthesis.
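The electrochemical gradient described here can be made quantitative with the standard proton-motive-force relation, Δp = Δψ − 2.303(RT/F)·ΔpH. The sketch below uses illustrative pH values for an alkaline vent interior against a mildly acidic Hadean ocean; the specific numbers are assumptions for demonstration, not measurements from the text.

```python
# Illustrative proton motive force calculation:
#   Δp = Δψ − 2.303·(R·T/F)·ΔpH,  with  ΔpH = pH_in − pH_out
R = 8.314    # gas constant, J/(mol·K)
F = 96485.0  # Faraday constant, C/mol
T = 298.15   # temperature, K (25 °C, an assumption for the example)

def proton_motive_force(delta_psi_mV, pH_in, pH_out):
    """Return Δp in millivolts for a given electrical and pH gradient."""
    z_factor = 2.303 * R * T / F * 1000  # ≈59.2 mV per pH unit at 25 °C
    return delta_psi_mV - z_factor * (pH_in - pH_out)

# Hypothetical alkaline-vent scenario: interior pH 9, exterior ocean pH 6,
# no electrical component — the 3-unit pH gradient alone contributes
print(round(proton_motive_force(0.0, 9.0, 6.0), 1))  # ≈ -177.5 mV
```

A cell (or mineral compartment) can draw on either term: a pure pH difference and a pure membrane voltage are energetically interchangeable in this expression.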
The proton motive force can be described as the measure of the potential energy stored as a combination of proton and voltage gradients across a membrane (differences in proton concentration and electrical potential).[157]

The surfaces of mineral particles inside deep-ocean hydrothermal vents have catalytic properties similar to those of enzymes and can create simple organic molecules, such as methanol (CH₃OH) and formic, acetic, and pyruvic acids, out of the dissolved CO₂ in the water, if driven by an applied voltage or by reaction with H₂ or H₂S.[234][235]

Starting in 1985, researchers proposed that life arose at hydrothermal vents,[236][237] that spontaneous chemistry in the Earth's crust driven by rock–water interactions at thermodynamic disequilibrium underpinned life's origin,[238][239] and that the founding lineages of the archaea and bacteria were H₂-dependent autotrophs that used CO₂ as their terminal acceptor in energy metabolism.[240] In 2016, Martin suggested, based upon this evidence, that the LUCA "may have depended heavily on the geothermal energy of the vent to survive".[10] Pores at deep sea hydrothermal vents are suggested to have been occupied by membrane-bound compartments which promoted biochemical reactions.[241][242] Metabolic intermediates in the Krebs cycle, gluconeogenesis, amino acid biosynthetic pathways, glycolysis, and the pentose phosphate pathway, including sugars like ribose and lipid precursors, can occur non-enzymatically at conditions relevant to deep-sea alkaline hydrothermal vents.[243]

If the deep marine hydrothermal setting was the site for the origin of life, then abiogenesis could have happened as early as 4.0–4.2 Gya. If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from impacts and the then high levels of ultraviolet radiation from the sun.
The available energy in hydrothermal vents is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live.[244][245] Arguments against a hydrothermal origin of life state that hyperthermophily was a result of convergent evolution in bacteria and archaea, and that a mesophilic environment would have been more likely.[246][247] This hypothesis, suggested in 1999 by Galtier, was proposed one year before the discovery of the Lost City Hydrothermal Field, where white-smoker hydrothermal vents average ≈45–90 °C.[248] Moderate temperatures and alkaline seawater such as that at Lost City are now the favoured hydrothermal vent setting, in contrast to acidic, high temperature (≈350 °C) black-smokers.

Production of prebiotic organic compounds at hydrothermal vents is estimated to be 10⁸ kg/yr.[249] While a large amount of key prebiotic compounds, such as methane, are found at vents, they are in far lower concentrations than estimates of a Miller–Urey experiment environment. In the case of methane, the production rate at vents is around 2–4 orders of magnitude lower than predicted amounts in a Miller–Urey experiment surface atmosphere.[249][250]

Other arguments against an oceanic vent setting for the origin of life include the inability to concentrate prebiotic materials due to strong dilution from seawater. This open system cycles compounds through the minerals that make up the vents, leaving little residence time for them to accumulate.[251] All modern cells rely on phosphates and potassium for nucleotide backbone and protein formation respectively, making it likely that the first life forms also shared these functions. These elements were not available in high quantities in the Archaean oceans, as both primarily come from the weathering of continental rocks on land, far from vent settings.
Submarine hydrothermal vents are not conducive to the condensation reactions needed for polymerisation to form macromolecules.[252][253]

An older argument was that key polymers were encapsulated in vesicles after condensation, which supposedly would not happen in saltwater because of the high concentrations of ions. However, while it is true that salinity inhibits vesicle formation from low-diversity mixtures of fatty acids,[254] vesicle formation from a broader, more realistic mix of fatty-acid and 1-alkanol species is more resilient.[255][254]

Surface bodies of water provide environments able to dry out and be rewetted. Continued wet-dry cycles allow the concentration of prebiotic compounds and the condensation reactions needed to polymerise macromolecules. Moreover, lakes and ponds on land allow for detrital input from the weathering of continental rocks, which contain apatite, the most common source of the phosphates needed for nucleotide backbones. The amount of exposed continental crust in the Hadean is unknown, but models of early ocean depths and rates of ocean island and continental crust growth make it plausible that there was exposed land.[256] Another line of evidence for a surface start to life is the requirement for UV for organism function. UV is necessary for the formation of the U+C nucleotide base pair by partial hydrolysis and nucleobase loss.[257] Simultaneously, UV can be harmful and sterilising to life, especially for simple early lifeforms with little ability to repair radiation damage. Radiation levels from a young Sun were likely greater, and, with no ozone layer, harmful shortwave UV rays would reach the surface of Earth. For life to begin, a shielded environment with influx from UV-exposed sources is necessary to both benefit and protect from UV. Shielding under ice, liquid water, mineral surfaces (e.g. clay) or regolith is possible in a range of surface water settings.
While deep sea vents may have input from the raining down of surface-exposed materials, the likelihood of concentration is lessened by the ocean's open system.[258]

Most branching phylogenies are thermophilic or hyperthermophilic, making it possible that the last universal common ancestor (LUCA) and preceding lifeforms were similarly thermophilic. Hot springs are formed from the heating of groundwater by geothermal activity. This intersection allows for influxes of material from deep penetrating waters and from surface runoff that transports eroded continental sediments. Interconnected groundwater systems create a mechanism for the distribution of life to a wider area.[259]

Mulkidjanian and co-authors argue that marine environments did not provide the ionic balance and composition universally found in cells, or the ions required by essential proteins and ribozymes, especially with respect to high K⁺/Na⁺ ratio, Mn²⁺, Zn²⁺ and phosphate concentrations. They argue that the only environments on Earth that mimic the needed conditions are hot springs similar to those at Kamchatka.[260] Mineral deposits in these environments under an anoxic atmosphere would have suitable pH (while current pools in an oxygenated atmosphere would not), and would contain precipitates of photocatalytic sulfide minerals that absorb harmful ultraviolet radiation. They would also have wet-dry cycles that concentrate substrate solutions to concentrations amenable to the spontaneous formation of biopolymers,[261][262] created both by chemical reactions in the hydrothermal environment and by exposure to UV light during transport from vents to adjacent pools, which would promote the formation of biomolecules.[263] The hypothesized pre-biotic environments are similar to hydrothermal vents, with additional components that help explain peculiarities of the LUCA.[260][176]

A phylogenomic and geochemical analysis of proteins plausibly traced to the LUCA shows that the ionic composition of its intracellular fluid is identical to that of hot springs.
The LUCA likely was dependent upon synthesized organic matter for its growth.[260] Experiments show that RNA-like polymers can be synthesized in wet-dry cycling and UV light exposure. These polymers were encapsulated in vesicles after condensation.[254] Potential sources of organics at hot springs might have been transport by interplanetary dust particles, extraterrestrial projectiles, or atmospheric or geochemical synthesis. Hot springs could have been abundant in volcanic landmasses during the Hadean.[176]

The hypothesis of a mesophilic start in surface bodies of water evolved from Darwin's concept of a 'warm little pond' and the Oparin-Haldane hypothesis. Freshwater bodies under temperate climates can accumulate prebiotic materials while providing suitable environmental conditions conducive to simple life forms. The climate during the Archaean is still a highly debated topic, as there is uncertainty about what continents, oceans, and the atmosphere looked like then. Atmospheric reconstructions of the Archaean from geochemical proxies and models state that sufficient greenhouse gases were present to maintain surface temperatures between 0–40 °C. Under this assumption, there is a greater abundance of moderate temperature niches in which life could have begun.[264]

Strong lines of evidence for mesophily from biomolecular studies include Galtier's G+C nucleotide thermometer.
G+C pairs are more abundant in thermophiles due to the added stability of an additional hydrogen bond not present between A+T nucleotides. rRNA sequencing on a diverse range of modern lifeforms shows that LUCA's reconstructed G+C content was likely representative of moderate temperatures.[247]

Although most modern phylogenies are thermophilic or hyperthermophilic, it is possible that their widespread diversity today is a product of convergent evolution and horizontal gene transfer rather than an inherited trait from LUCA.[265] The reverse gyrase topoisomerase is found exclusively in thermophiles and hyperthermophiles, as it allows for coiling of DNA.[266] The reverse gyrase enzyme requires ATP to function, both of which are complex biomolecules. If an origin of life is hypothesised to involve a simple organism that had not yet evolved a membrane, let alone ATP, this would make the existence of reverse gyrase improbable. Moreover, phylogenetic studies show that reverse gyrase had an archaeal origin, and that it was transferred to bacteria by horizontal gene transfer. This implies that reverse gyrase was not present in the LUCA.[267]

Cold-start origin of life theories stem from the idea that there may have been regions on the early Earth cold enough for large ice cover to be found. Stellar evolution models predict that the Sun's luminosity was ≈25% weaker than it is today. Feulner states that although this significant decrease in solar energy would have formed an icy planet, there is strong evidence that liquid water was present, possibly driven by a greenhouse effect. This would create an early Earth with both liquid oceans and icy poles.[268]

Melts from ice sheets or glaciers create freshwater pools, another niche capable of experiencing wet-dry cycles.
While these pools on the surface would be exposed to intense UV radiation, bodies of water within and under ice are sufficiently shielded while remaining connected to UV-exposed areas through ice cracks. Impact melting of ice could also have paired freshwater with meteoritic input, a popular vessel for prebiotic components.[269] Near-seawater levels of sodium chloride are found to destabilize fatty acid membrane self-assembly, making freshwater settings appealing for early membranous life.[270]

Icy environments would trade the faster reaction rates that occur in warm environments for increased stability and accumulation of larger polymers.[271] Experiments simulating Europa-like conditions of ≈−20 °C have synthesised amino acids and adenine, showing that Miller–Urey type syntheses can still occur at cold temperatures.[133] In an RNA world, the ribozyme would have had even more functions than in a later DNA-RNA-protein world. For RNA to function, it must be able to fold, a process hindered by temperatures above 30 °C. While RNA folding in psychrophilic organisms is slower, the process is more successful as hydrolysis is also slower. Shorter nucleotides would not suffer from higher temperatures.[272][273]

An alternative geological environment has been proposed by the geologist Ulrich Schreiber and the physical chemist Christian Mayer: the continental crust.[274] Tectonic fault zones could present a stable and well-protected environment for long-term prebiotic evolution. Inside these systems of cracks and cavities, water and carbon dioxide present the bulk solvents. Their phase state would depend on the local temperature and pressure conditions, and could vary between liquid, gaseous and supercritical. When forming two separate phases (e.g., liquid water and supercritical carbon dioxide at depths of little more than 1 km), the system provides optimal conditions for phase transfer reactions.
Concurrently, the contents of the tectonic fault zones are supplied by a multitude of inorganic educts (e.g., carbon monoxide, hydrogen, ammonia, hydrogen cyanide, nitrogen, and even phosphate from dissolved apatite) and simple organic molecules formed by hydrothermal chemistry (e.g. amino acids, long-chain amines, fatty acids, long-chain aldehydes).[275][276] Finally, the abundant mineral surfaces provide a rich choice of catalytic activity.

An especially interesting section of the tectonic fault zones is located at a depth of approximately 1000 m. For the carbon dioxide part of the bulk solvent, it provides temperature and pressure conditions near the phase transition point between the supercritical and the gaseous state. This leads to a natural accumulation zone for lipophilic organic molecules that dissolve well in supercritical CO₂ but not in its gaseous state, leading to their local precipitation.[277] Periodic pressure variations, such as those caused by geyser activity or tidal influences, result in periodic phase transitions, keeping the local reaction environment in a constant non-equilibrium state. In the presence of amphiphilic compounds (such as the long-chain amines and fatty acids mentioned above), subsequent generations of vesicles are formed[160] that are constantly and efficiently selected for their stability.[159] The resulting structures could provide hydrothermal vents as well as hot springs with raw material for further development.

Homochirality is the geometric uniformity of materials composed of chiral (non-mirror-symmetric) units. Living organisms use molecules that have the same chirality (handedness): with almost no exceptions,[279] amino acids are left-handed while nucleotides and sugars are right-handed. Chiral molecules can be synthesized, but in the absence of a chiral source or a chiral catalyst, they are formed in a 50/50 (racemic) mixture of both forms.
Known mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction; asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation; statistical fluctuations during racemic synthesis;[278] and spontaneous symmetry breaking.[280][281][282]

Once established, chirality would be selected for.[283] A small bias (enantiomeric excess) in the population can be amplified into a large one by asymmetric autocatalysis, such as in the Soai reaction.[284] In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalyzing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.[285]

Homochirality may have started in outer space: on the Murchison meteorite, the left-handed amino acid L-alanine is more than twice as frequent as its right-handed D form, and L-glutamic acid is more than three times as abundant as its D counterpart.[286][287] Amino acids from meteorites show a left-handed bias, whereas sugars show a predominantly right-handed bias: this is the same preference found in living organisms, suggesting an abiogenic origin of these compounds.[288]

In a 2010 experiment by Robert Root-Bernstein, "two D-RNA-oligonucleotides having inverse base sequences (D-CGUA and D-AUGC) and their corresponding L-RNA-oligonucleotides (L-CGUA and L-AUGC) were synthesized and their affinity determined for Gly and eleven pairs of L- and D-amino acids". The results suggest that homochirality, including codon directionality, might have "emerged as a function of the origin of the genetic code".[289]
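The enantiomeric excesses implied by the meteorite ratios above follow from the standard definition ee = (major − minor)/(major + minor). A minimal sketch, treating "more than twice" and "more than three times" as exact 2:1 and 3:1 ratios purely for illustration:

```python
# Enantiomeric excess (ee) for L/D amino-acid ratios, expressed as a percentage.
def enantiomeric_excess(major, minor):
    """ee = (major - minor) / (major + minor) * 100; 0 for a racemic mixture."""
    return (major - minor) / (major + minor) * 100

print(round(enantiomeric_excess(2, 1), 1))  # L-alanine at ~2:1     -> 33.3
print(round(enantiomeric_excess(3, 1), 1))  # L-glutamic acid ~3:1  -> 50.0
print(round(enantiomeric_excess(1, 1), 1))  # racemic mixture       -> 0.0
```

Asymmetric autocatalysis such as the Soai reaction can then amplify even a small initial ee toward homochirality.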
https://en.wikipedia.org/wiki/Abiogenesis
Biolinguistics can be defined as the biological and evolutionary study of language. It is highly interdisciplinary, as it draws from various fields such as sociobiology, linguistics, psychology, anthropology, mathematics, and neurolinguistics to elucidate the formation of language. It seeks to yield a framework by which one can understand the fundamentals of the faculty of language. The field was first introduced by Massimo Piattelli-Palmarini, professor of Linguistics and Cognitive Science at the University of Arizona, in 1971, at an international meeting at the Massachusetts Institute of Technology (MIT).

Biolinguistics, also called the biolinguistic enterprise or the biolinguistic approach, is believed to have its origins in Noam Chomsky's and Eric Lenneberg's work on language acquisition that began in the 1950s as a reaction to the then-dominant behaviorist paradigm. Fundamentally, biolinguistics challenges the view of human language acquisition as a behavior based on stimulus-response interactions and associations.[1] Chomsky and Lenneberg argued against this view by positing innate knowledge of language. In the 1960s, Chomsky proposed the Language Acquisition Device (LAD) as a hypothetical tool for language acquisition that only humans are born with. Similarly, Lenneberg (1967)[2] formulated the Critical Period Hypothesis, the main idea of which is that language acquisition is biologically constrained. These works were regarded as pioneering in the shaping of biolinguistic thought, in what was the beginning of a change in paradigm in the study of language.[3]

The investigation of the biological foundations of language is associated with two historical periods, namely the 19th century (primarily via Darwinian evolutionary theory) and the 20th century (primarily via the integration of mathematical linguistics, in the form of Chomskyan generative grammar, with neuroscience).
Darwinism inspired many researchers to study language, in particular the evolution of language, through the lens of biology. Darwin's theory regarding the origin of language attempts to answer three important questions:[4]

Dating back to 1821, the German linguist August Schleicher was the representative pioneer of biolinguistics, discussing the evolution of language based on Darwin's theory of evolution. Since linguistics had been believed to be a form of historical science under the influence of the Société de Linguistique de Paris, speculations on the origin of language were not permitted.[5] As a result, hardly any prominent linguist wrote about the origin of language, apart from the German linguist Hugo Schuchardt. Darwinism addressed the arguments of other researchers and scholars, such as Max Müller, by arguing that language use, while requiring a certain mental capacity, also stimulates brain development, enabling long trains of thought and strengthening mental power. Darwin drew an extended analogy between the evolution of languages and species, noting in each domain the presence of rudiments, of crossing and blending, and of variation, and remarking on how each developed gradually through a process of struggle.[6]

The first phase in the development of biolinguistics runs through the late 1960s, with the publication of Lenneberg's Biological Foundations of Language (1967). During this first phase, the greatest progress was made in coming to a better understanding of the defining properties of human language as a system of cognition. Three landmark events shaped the modern field of biolinguistics: two important conferences were convened in the 1970s, and a retrospective article was published in 1997 by Lyle Jenkins.

The second phase began in the late 1970s. In 1976, Chomsky formulated the fundamental questions of biolinguistics as follows: i) function, ii) structure, iii) physical basis, iv) development in the individual, v) evolutionary development.
In the late 1980s, a great deal of progress was made in answering questions about the development of language. This prompted further questions about language design, function, and the evolution of language. In 1998, Juan Uriagereka, a graduate student of Howard Lasnik, wrote Rhyme and Reason, an introduction to minimalist syntax. Their work renewed interest in biolinguistics, catalysing many linguists to look into biolinguistics with their colleagues in adjacent scientific disciplines.[10] Both Jenkins and Uriagereka stressed the importance of addressing the emergence of the language faculty in humans. At around the same time, geneticists discovered a link between the language deficit manifested by KE family members and the gene FOXP2. Although FOXP2 is not the gene responsible for language,[11] this discovery brought many linguists and scientists together to interpret this data, renewing interest in biolinguistics.

Although many linguists have differing opinions when it comes to the history of biolinguistics, Chomsky believes that its history was simply that of transformational grammar. Professor Anna Maria Di Sciullo, by contrast, claims that the interdisciplinary research of biology and linguistics in the 1950s-1960s led to the rise of biolinguistics. Furthermore, Jenkins believes that biolinguistics was the outcome of transformational grammarians studying human linguistic and biological mechanisms. On the other hand, the linguists Martin Nowak and Charles Yang argue that biolinguistics, originating in the 1970s, is distinct from transformational grammar; rather, it is a new branch of the linguistics-biology research paradigm initiated by transformational grammar.[12]

In Aspects of the Theory of Syntax, Chomsky proposed that languages are the product of a biologically determined capacity present in all humans, located in the brain.
He addresses three core questions of biolinguistics: what constitutes knowledge of language, how is that knowledge acquired, and how is that knowledge put to use? A great deal of our knowledge of language, he argues, must be innate, supporting his claim with the fact that speakers are capable of producing and understanding novel sentences without explicit instruction. Chomsky proposed that the form of the grammar may emerge from the mental structure afforded by the human brain, and argued that formal grammatical categories such as nouns, verbs, and adjectives do not exist. The linguistic theory of generative grammar thereby proposes that sentences are generated by a subconscious set of procedures which are part of an individual's cognitive ability. These procedures are modeled through a set of formal grammatical rules which are thought to generate sentences in a language.[13]

Chomsky focuses on the mind of the language learner or user and proposes that internal properties of the language faculty are closely linked to the physical biology of humans. He further introduced the idea of a Universal Grammar (UG), theorized to be inherent to all human beings. From the view of the biolinguistic approach, the process of language acquisition is fast and smooth because humans naturally obtain fundamental perceptions of Universal Grammar, in contrast to the usage-based approach.[14] UG refers to the initial state of the faculty of language: a biologically innate organ that helps the learner make sense of the data and build up an internal grammar.[15] The theory suggests that all human languages are subject to universal principles or parameters that allow for different choices (values).
It also contends that humans possess generative grammar, which is hard-wired into the human brain in some way and makes possible young children's rapid and universal acquisition of speech.[16] Elements of linguistic variation then determine the growth of language in the individual, and variation is the result of experience, given the genetic endowment and independent principles reducing complexity. Chomsky's work is often recognized as the weak perspective of biolinguistics, as it does not draw from fields of study outside of linguistics.[17]

According to Chomsky, the human brain consists of various sections which possess their own individual functions, such as the language faculty and visual recognition.[14]

The acquisition of language is a universal feat, and it is believed we are all born with an innate structure, initially proposed by Chomsky in the 1960s. The Language Acquisition Device (LAD) was presented as an innate structure in humans which enabled language learning. Individuals are thought to be "wired" with universal grammar rules enabling them to understand and evaluate complex syntactic structures. Proponents of the LAD often cite the poverty of the stimulus argument, suggesting that children rely on the LAD to develop their knowledge of a language despite not being exposed to a rich linguistic environment. Later, Chomsky exchanged this notion for that of Universal Grammar, providing evidence for a biological basis of language.

The Minimalist Program (MP) was introduced by Chomsky in 1993, and it focuses on the parallel between language and the design of natural concepts. Those invested in the Minimalist Program are interested in the physics and mathematics of language and its parallels with our natural world. For example, Piattelli-Palmarini[18] studied the isomorphic relationship between the Minimalist Program and Quantum Field Theory.
The Minimalist Program aims to figure out how much of the Principles and Parameters model can be taken as a result of the hypothetical optimal and computationally efficient design of the human language faculty; more developed versions of the Principles and Parameters approach in turn provide technical principles from which the Minimalist Program can be seen to follow.[19] The program further aims to develop ideas involving the economy of derivation and economy of representation, which had started to become an independent theory in the early 1990s, but were then still considered peripheral to transformational grammar.[20] The Merge operation is used by Chomsky to explain the structure of syntax trees within the Minimalist Program. Merge itself is a process which provides the basis of phrasal formation by taking two elements within a phrase and combining them.[21] In A. M. Di Sciullo and D. Isac's The Asymmetry of Merge (2008), they highlight Chomsky's two key bases of Merge. To understand this, take the following sentence: Emma dislikes the pie. This phrase can be broken down into its lexical items: [VP [DP Emma] [V' [V dislikes] [DP [D the] [NP pie]]]] The above phrasal representation allows for an understanding of each lexical item. To build a tree using Merge, working bottom-up, the two final elements of the phrase are selected and then combined to form a new element on the tree. In image a), the determiner the and the noun phrase pie are both selected. Through the process of Merge, the newly formed element on the tree is the determiner phrase (DP) the pie, visible in b).
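The bottom-up Merge derivation of "Emma dislikes the pie" described above can be sketched in a few lines of code. This is an illustrative model only, assuming a simple nested-tuple representation of syntactic objects; the variable names are chosen for the demonstration and are not part of any cited formalism.

```python
def merge(alpha, beta):
    """Merge combines exactly two syntactic objects into a new one."""
    return (alpha, beta)

# Bottom-up derivation of "Emma dislikes the pie":
dp = merge("the", "pie")        # [D the] + [NP pie] -> DP "the pie"
v_bar = merge("dislikes", dp)   # [V dislikes] + DP -> V'
vp = merge("Emma", v_bar)       # [DP Emma] + V' -> VP

print(vp)  # ('Emma', ('dislikes', ('the', 'pie')))
```

Because Merge always takes exactly two arguments, every phrase it builds is binary-branching, mirroring the tree formation the text describes.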
In a minimalist approach, three core components of the language faculty are proposed: the sensory-motor system (SM), the conceptual-intentional system (CI), and narrow syntax (NS).[22] SM includes the biological requisites for language production and perception, such as articulatory organs, while CI meets the biological requirements related to inference, interpretation, and reasoning, those involved in other cognitive functions. As SM and CI are finite, the main function of NS is to make it possible to produce infinite numbers of sound–meaning pairs. It is possible that the core principles of the faculty of language are correlated to natural laws (such as, for example, the Fibonacci sequence, an array of numbers where each consecutive number is the sum of the two that precede it; see for example the discussion in Uriagereka 1997 and Carnie and Medeiros 2005).[23] According to the hypothesis being developed, the essential properties of language arise from nature itself: the efficient growth requirement appears everywhere, from the pattern of petals in flowers, leaf arrangements in trees and the spirals of a seashell to the structure of DNA and the proportions of the human head and body. Natural Law in this case would provide insight on concepts such as binary branching in syntactic trees as well as the Merge operation. This would translate to thinking of Merge in terms of taking two elements on a syntax tree such that their sum yields another element that falls below them on the given syntax tree (refer to the trees above in Minimalist Program). This sum of the two preceding elements provides support for binary structures. Furthermore, the possibility of ternary branching would deviate from the Fibonacci sequence and consequently would not hold as strong support for the relevance of Natural Law in syntax.[24] As mentioned above, biolinguistics challenges the idea that the acquisition of language is a result of behavior-based learning.
This alternative approach, which biolinguistics challenges, is known as the usage-based (UB) approach. UB supports the idea that knowledge of human language is acquired via exposure and usage.[25] One of the primary issues highlighted when arguing against the usage-based approach is that UB fails to address the issue of poverty of stimulus,[26] whereas biolinguistics addresses this by way of the Language Acquisition Device.[27] Another major contributor to the field is Eric Lenneberg. In his book Biological Foundations of Language,[2] Lenneberg (1967) suggests that various aspects of human biology putatively contribute to language more than genes alone. This integration of other fields to explain language is recognized as the strong view in biolinguistics.[28] While genes are obviously essential, and while genomes are associated with specific organisms, genes do not store traits (or "faculties") in the way that linguists—including Chomskyans—sometimes seem to imply. Contrary to the concept of the existence of a language faculty as suggested by Chomsky, Lenneberg argues that while there are specific regions and networks crucially involved in the production of language, there is no single region to which language capacity is confined, and that speech, as well as language, is not confined to the cerebral cortex. Lenneberg considered language a species-specific mental organ with significant biological properties. He suggested that this organ grows in the mind/brain of a child in the same way that other biological organs grow, showing that the child's path to language displays the hallmarks of biological growth.
According to Lenneberg, genetic mechanisms play an important role in the development of an individual's behavior and are characterized by two aspects. On this basis, Lenneberg goes on to claim that no kind of functional principle could be stored in an individual's genes, rejecting the idea that there exist genes for specific traits, including language—in other words, the idea that genes can contain traits. He then proposed that genes influence the general patterns of structure and function by means of their action upon ontogenesis, rather than acting as causal agents individually, directly, and uniquely responsible for a specific phenotype, criticizing a prior hypothesis by Charles Goodwin.[29] In biolinguistics, language is recognised as being based on a recursive generative procedure that retrieves words from the lexicon and applies them repeatedly to output phrases.[30][31] This generative procedure was hypothesised to be the result of a minor brain mutation, given evidence that word ordering is limited to externalisation and plays no role in core syntax or semantics. Thus, different lines of inquiry to explain this were explored. The most commonly accepted is Noam Chomsky's minimalist approach to syntactic representations. In 2016, Chomsky and Berwick defined the minimalist program under the Strong Minimalist Thesis in their book Why Only Us, saying that language is mandated by efficient computations and thus keeps to the simplest recursive operations.[11] The main basic operation in the minimalist program is Merge. Under Merge there are two ways in which larger expressions can be constructed: externally and internally. Lexical items that are merged externally build argument representations with disjoint constituents. Internal Merge creates constituent structures in which one is a part of another. This induces displacement, the capacity to pronounce phrases in one position but interpret them elsewhere.
Recent investigations of displacement point to a slight rewiring in cortical brain regions that could have occurred historically and perpetuated generative grammar. Following this line of thought, in 2009, Ramus and Fisher speculated that a single gene could create a signalling molecule to facilitate new brain connections or a new area of the brain altogether via prenatally defined brain regions. This would result in information processing greatly important to language as we know it. The spread of this advantageous trait could be responsible for secondary externalisation and the interaction we engage in.[11] If this holds, then the objective of biolinguistics is to find out as much as we can about the principles underlying mental recursion. Compared to other topics in linguistics, where evidence can be displayed cross-linguistically, biolinguistics applies to the entirety of linguistics rather than just a specific subsection, so examining other species can assist in providing data. Although animals do not have the same linguistic competencies as humans, it is assumed that they can provide evidence for some linguistic competence. The relatively new science of evo-devo, which suggests that all organisms descend from a single tree, has opened pathways into gene and biochemical study. One way in which this has manifested within biolinguistics is through the suggestion of a common language gene, namely FOXP2. Though this gene is subject to debate, there have been interesting recent discoveries made concerning it and the part it plays in the secondary externalisation process. Recent studies of birds and mice have resulted in an emerging consensus that FOXP2 is not a blueprint for internal syntax or the narrow faculty of language, but rather makes up part of the regulatory machinery pertaining to the process of externalisation.
It has been found to assist in sequencing sounds or gestures one after the next, implying that FOXP2 helps transfer knowledge from declarative to procedural memory. FOXP2 thus appears to aid in forming a linguistic input–output system that runs smoothly.[11] According to the Integration Hypothesis, human language is the combination of the Expressive (E) component and the Lexical (L) component. At the level of words, the L component contains the concept and meaning that we want to convey. The E component contains grammatical information and inflection. For phrases, we often see an alternation between the two components. In sentences, the E component is responsible for providing the shape and structure of the base-level lexical words, while these lexical items and their corresponding meanings found in the lexicon make up the L component.[32] This has consequences for our understanding of: (i) the origins of the E and L components found in bird and monkey communication systems; (ii) the rapid emergence of human language as related to words; (iii) evidence of hierarchical structure within compound words; (iv) the role of phrases in the detection of the structure-building operation Merge; and (v) the application of E and L components to sentences. In this way, we see that the Integration Hypothesis can be applied to all levels of language: the word, phrasal, and sentence levels. Through the application of the Integration Hypothesis, it can be seen that the interaction between the E and L components enables language structure (the E component) and lexical items (the L component) to operate simultaneously within one form of complex communication: human language. However, these two components are thought to have emerged from two pre-existing, separate communication systems in the animal world.[32] The communication systems of birds[33] and monkeys[34] have been found to be antecedents of human language.
The bird song communication system is made up entirely of the E component, while the alarm call system used by monkeys is made up of the L component. Human language is thought to be the byproduct of these two separate systems, owing to parallels between human communication and these two animal communication systems. The communication system of songbirds is commonly described as one based on syntactic operations. Specifically, bird song enables the systematic combination of sound elements in order to string together a song. Likewise, human languages also operate syntactically through the combination of words, which are calculated systematically. While the mechanics of bird song relies on syntax, the notes, syllables, and motifs that are combined to elicit the different songs may not necessarily contain any meaning.[35] The communication system of songbirds also lacks a lexicon[36] containing any sort of meaning-to-referent pairs. Essentially, this means that an individual sound produced by a songbird does not have meaning associated with it, the way a word does in human language. Bird song is capable of being structured, but it is not capable of carrying meaning. In this way, the prominence of syntax and the absence of lexical meaning present bird song as a strong candidate for a simplified antecedent of the E component found in human language, as this component also lacks lexical information. While birds that use bird song can rely on just this E component to communicate, human utterances require lexical meaning in addition to the structural operations of the E component, as human language is unable to operate with syntactic structure or structural function words alone. This is evident as human communication does in fact consist of a lexicon, and humans produce combined sequences of words that are meaningful, best known as sentences.
This suggests that part of human language must have been adapted from another animal's communication system in order for the L component to arise. A well-known study by Seyfarth et al.[34] investigated the referential nature of the alarm calls of vervet monkeys. These monkeys have three set alarm calls, each call directly mapping onto one of the following referents: a leopard, an eagle, or a snake. Each call is used to warn other monkeys about the presence of one of these three predators in their immediate surroundings. The main idea is that the alarm call contains lexical information that can be used to represent the referent being referred to. Essentially, the entire communication system used by the monkeys is made up of the L system, such that only these lexical-based calls are needed to communicate effectively. This is similar to the L component found in human language, in which content words are used to refer to a referent in the real world, containing the relevant lexical information. The L component in human language is, however, a much more complex variant of the L component found in vervet monkey communication: humans use many more than just three word-forms to communicate. While vervet monkeys are capable of communicating solely with the L component, humans are not, as communication with content words alone does not output well-formed grammatical sentences. It is for this reason that the L component is combined with the E component, responsible for syntactic structure, in order to output human language.
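The contrast drawn above can be made concrete with a toy model: a pure L-component system like the vervet alarm calls is just a finite mapping from calls to referents, with no combinatory syntax. This is an illustrative sketch only; the call labels below are hypothetical placeholders, not the monkeys' actual vocalizations.

```python
# A pure L-component system as a finite meaning-to-referent mapping.
# Placeholder call labels; calls never combine with one another (no syntax).
ALARM_CALLS = {
    "call_leopard": "leopard",
    "call_eagle": "eagle",
    "call_snake": "snake",
}

def interpret(call):
    """Each call maps directly onto a single referent."""
    return ALARM_CALLS.get(call, "unrecognized")

print(interpret("call_eagle"))  # eagle
```

The point of the sketch is what it lacks: there is no operation for combining two calls into a larger structured signal, which is precisely the E-component machinery that human language adds.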
As traces of the E and L components have been found in nature, the Integration Hypothesis asserts that these two systems existed before human language, and that it was the combination of these two pre-existing systems that rapidly led to the emergence of human language.[32] The Integration Hypothesis posits that it was the grammatical operator, Merge, that triggered the combination of the E and L systems to create human language.[37] In this view, language emerged rapidly and fully formed, already containing syntactic structure. This is in contrast to the Gradualist Approach, in which early forms of language are thought to have lacked syntax. Instead, supporters of the Gradualist Approach believe language slowly progressed through a series of stages as a result of a simple combinatory operator that generated flat structures. Beginning with a one-word stage, then a two-word stage, then a three-word stage, and so on, language is thought to have developed hierarchy in later stages.[37] In the article The precedence of syntax in the rapid emergence of human language in evolution as defined by the integration hypothesis,[37] Nóbrega and Miyagawa outline the Integration Hypothesis as it applies to words. To explain the Integration Hypothesis as it relates to words, the definition of a 'word' must first be agreed upon. While this seems fairly straightforward in English, it is not the case for other languages. To allow for cross-linguistic discussion, the notion of a "root" is used instead, where a "root" encapsulates a concept at the most basic level. To differentiate between "roots" and "words", it must be noted that "roots" are completely devoid of any information relating to grammatical category or inflection. Therefore, "roots" form the lexical component of the Integration Hypothesis, while grammatical category (noun, verb, adjective) and inflectional properties (e.g. case, number, tense) form the expressive component.
Thus, at the most basic level, for the formation of a "word" in human language there must be a combination of the L component with the E component. When we know a "word" in a language, we must know both components: the concept that it relates to as well as its grammatical category and inflection. The former is the L component; the latter is the E component. The Integration Hypothesis suggests that it was the grammatical operator Merge that triggered this combination, occurring when one linguistic object (the L layer) satisfies the grammatical feature of another linguistic object (the E layer). This means that L components are not expected to combine directly with each other. On this analysis, human language is believed to have emerged in a single step. Before this rapid emergence, the L-component "roots" existed individually, lacked grammatical features, and were not combined with each other. Once they were combined with the E component, however, human language emerged with all its necessary characteristics. Hierarchical structures of syntax are already present within words because of the integration of these two layers. This pattern continues when words are combined with each other to make phrases, as well as when phrases are combined into sentences. Therefore, the Integration Hypothesis posits that once these two systems were integrated, human language appeared fully formed and did not require additional stages. Compound words are a special point of interest for the Integration Hypothesis, as they are further evidence that words contain internal structure. The Integration Hypothesis analyzes compound words differently from previous gradualist theories of language development. As previously mentioned, in the Gradualist Approach, compound words are thought of as part of a proto-syntax stage of human language.
In this proposal of a lexical protolanguage, compounds are developed in the second stage through a combination of single words by a rudimentary recursive n-ary operation that generates flat structures.[38] However, the Integration Hypothesis challenges this belief, claiming that there is evidence that words are internally complex. In English, for example, the word 'unlockable' is ambiguous because of the two possible structures within it. It can either mean something that is able to be unlocked (unlock-able), or it can mean something that is not lockable (un-lockable). This ambiguity points to two possible hierarchical structures within the word: it cannot have the flat structure posited by the Gradualist Approach. With this evidence, supporters of the Integration Hypothesis argue that these hierarchical structures in words are formed by Merge, where the L component and E component are combined. Thus, Merge is responsible for the formation of compound words and phrases. This leads to the hypothesis that words, compounds, and all linguistic objects of human language are derived from this integration system, and provides evidence against the theory of an existing protolanguage.[37] Regarding the view of compounds as "living fossils", Jackendoff[39] alleges that the basic structure of compounds does not provide enough information to offer semantic interpretation. Hence, the semantic interpretation must come from pragmatics. However, Nóbrega and Miyagawa[37] note that this claimed dependency on pragmatics is not a property of compound words demonstrated in all languages. The example provided by Nóbrega and Miyagawa is the comparison between English (a Germanic language) and Brazilian Portuguese (a Romance language). English compound nouns can offer a variety of semantic interpretations.
For example, the compound noun "car man" can have several possible understandings, such as: a man who sells cars, a man who is passionate about cars, a man who repairs cars, a man who drives cars, and so on. In comparison, the Brazilian Portuguese compound noun "peixe-espada", translated as "swordfish", has only one understanding: a fish that resembles a sword.[37] Consequently, when looking at the semantic interpretations available for compound words in Germanic languages and Romance languages, the Romance languages have highly restrictive meanings. This finding presents evidence that compounds contain more sophisticated internal structures than previously thought. Moreover, Nóbrega and Miyagawa provide further evidence against the claim of a protolanguage through examining exocentric VN compounds. As defined, one of the key properties of Merge is recursion. Therefore, observing recursion within exocentric VN compounds of Romance languages shows that there must be an internal hierarchical structure that Merge is responsible for combining. In the data collected by Nóbrega and Miyagawa,[37] recursion is observed on several occasions in different languages. It happens in Catalan, Italian, and Brazilian Portuguese, where a new VN compound is created when a nominal exocentric VN compound is the complement of a verb. For example, in the Catalan translation of "windshield wipers", [neteja [para-brises]] lit. clean-stop-breeze, we can identify recursion because [para-brises] is the complement of [neteja]. Additionally, we can also note the occurrence of recursion when the noun of a VN compound contains a list of complements. For example, in the Italian translation of "rings, earrings, or small jewels holder", [porta [anelli, orecchini o piccoli monili]] lit.
carry-rings-earrings-or-small-jewels, there is recursion because the string of complements [anelli, orecchini o piccoli monili] serves as the nominal complement of the verb [porta]. The common claim that compounds are fossils of language often complements the argument that they have a flat, linear structure.[40] However, Di Sciullo has provided experimental evidence to dispute this.[40] With the knowledge that there is asymmetry in the internal structure of exocentric compounds, she uses experimental results to show that hierarchical complexity effects are observed in the processing of NV compounds in English. In her experiment, sentences containing object-verb compounds and sentences containing adjunct-verb compounds were presented to English speakers, who then assessed the acceptability of these sentences. Di Sciullo notes that previous work has determined adjunct-verb compounds to have a more complex structure than object-verb compounds, because adjunct-verb compounds require Merge to apply several times.[40] In her experiment, 10 English-speaking participants evaluated 60 English sentences. The results revealed that the adjunct-verb compounds had a lower acceptability rate and the object-verb compounds a higher acceptability rate. In other words, sentences containing adjunct-verb compounds were viewed as more "ill-formed" than sentences containing object-verb compounds. The findings demonstrate that the human brain is sensitive to the internal structures that these compounds contain. Since adjunct-verb compounds contain complex hierarchical structures from the recursive application of Merge, these words are more difficult to decipher and analyze than object-verb compounds, which encompass simpler hierarchical structures. This is evidence that compounds could not have been fossils of a protolanguage without syntax, given their complex internal hierarchical structures.
As previously mentioned, human language is interesting in that it necessarily requires elements from both the E and L systems: neither can stand alone. Lexical items, or what the Integration Hypothesis refers to as 'roots', are necessary because they refer to things in the world around us. Expression items, which convey information about category or inflection (number, tense, case, etc.), are also required to shape the meanings of the roots. It becomes clearer that neither of these two systems can exist alone with regard to human language when we look at the phenomenon of 'labeling'. This phenomenon refers to how we classify the grammatical category of phrases, where the grammatical category of the phrase depends on the grammatical category of one of the words within the phrase, called the head. For example, in the phrase "buy the books", the verb "buy" is the head, and we call the entire phrase a verb phrase. There is also a smaller phrase within this verb phrase, a determiner phrase, "the books", because of the determiner "the". What makes this phenomenon interesting is that it allows for hierarchical structure within phrases. This has implications for how we combine words to form phrases and eventually sentences.[41] This labeling phenomenon has limitations, however. Some labels can combine and others cannot. For example, two lexical-structure labels cannot combine directly. The two nouns "Lucy" and "dress" cannot be combined directly. Likewise, the noun "pencil" cannot be merged with the adjective "short", nor can the verbs "want" and "drink" be merged without anything in between. As represented by the schematic below, all of these are impossible lexical structures. This shows that there is a limitation whereby lexical categories can only be one layer deep. However, these limitations can be overcome with the insertion of an expression layer in between.
For example, to combine "John" and "book", adding a determiner such as "-'s" makes this a possible combination.[41] Another limitation regards the recursive nature of the expressive layer. While it is true that CP and TP can come together to form hierarchical structure, this CP-TP structure cannot repeat on top of itself: it is only a single layer deep. This restriction is common to the expressive layer both in humans and in birdsong. This similarity strengthens the tie between the pre-existing E system posited to have originated in birdsong and the E layers found in human language.[41] Given these limitations in each system, where both lexical and expressive categories can only be one layer deep, the recursive and unbounded hierarchical structure of human language is surprising. The Integration Hypothesis posits that it is the combination of these two types of layers that results in such a rich hierarchical structure. The alternation between L layers and E layers is what allows human language to reach an arbitrary depth of layers. For example, in the phrase "Eat the cake that Mary baked", the tree structure shows an alternation between L and E layers. This can easily be described by two phrase rules: (i) LP → L EP and (ii) EP → E LP. The recursion that is possible is plainly seen by transforming these phrase rules into bracket notation. The LP in (i) can be written as [L EP]. Then, adding an E layer to this LP to create an EP results in [E [L EP]]. A more complex LP can then be obtained by adding an L layer to the EP, resulting in [L [E [L EP]]]. This can continue indefinitely and results in the recognizable deep structures found in human language.[41] The E and L components can be used to explain the syntactic structures that make up sentences in human languages. The first component, the L component, contains content words.[41] This component is responsible for carrying the lexical information that relays the underlying meaning behind a sentence.
However, combinations consisting solely of L-component content words do not result in grammatical sentences. This issue is resolved through the interaction of the L component with the E component. The E component is made up of function words: words that are responsible for inserting syntactic information about the syntactic categories of L-component words, as well as morphosyntactic information about clause-typing, question, number, case, and focus.[37] Since these added elements complement the content words in the L component, the E component can be thought of as being applied to the L component. Considering that the L component is composed solely of lexical information and the E component solely of syntactic information, they exist as two independent systems. However, for the rise of a system as complex as human language, the two systems are necessarily reliant on each other. This aligns with Chomsky's proposal of the duality of semantics, which suggests that human language is composed of these two distinct components.[42] In this way, it is logical why the convergence of these two components was necessary in order to enable the functionality of human language as we know it today. Looking at the following example taken from the article The integration hypothesis of human language evolution and the nature of contemporary languages by Miyagawa et al.,[32] each word can be identified as either an L component or an E component in the sentence: Did John eat pizza? The L-component words of this sentence are the content words John, eat, and pizza. Each word contains only lexical information that directly contributes to the meaning of the sentence. The L component is often referred to as the base or inner component, due to the inward positioning of this constituent in a phrase structure tree.
It is evident that the string of words 'John eat pizza' does not form a grammatically well-formed sentence in English, which suggests that E-component words are necessary to syntactically shape and structure this string of words. The E component is typically referred to as the outer component that shapes the inner L component, as these elements originate in a position that orbits around the L component in a phrase structure tree. In this example, the E-component function word that is implemented is did. By inserting this word, two types of structure are added to the expression: tense and clause typing. The word did is used to inquire about something that happened in the past, meaning that it adds the structure of the past tense to this expression. In this example, this does not explicitly change the form of the verb, as the verb eat in the past tense still surfaces as eat without any additional tense markers in this particular environment. Instead, the tense slot can be thought of as being filled by a null symbol (∅), as this past-tense form does not have any phonological content. Although covert, this null tense marker is an important contribution of the E-component word did. Tense aside, clause typing is also conveyed through the E component. Notably, this function word did surfaces in the sentence-initial position because, in English, this indicates that the string of words will manifest as a question. The word did determines that the clause type for this sentence will be an interrogative, specifically a yes–no question. Overall, the integration of the E component with the L component forms the well-formed sentence Did John eat pizza?, and accounts for all other utterances found in human languages.
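The alternation of L and E layers captured earlier by the phrase rules LP → L EP and EP → E LP can be sketched as a small pair of recursive procedures that generate bracketed structures of arbitrary depth. This is an illustrative sketch only; the function names and the bare-[L] base case are assumptions made for the demonstration, not part of the cited formalism.

```python
def lp(depth):
    """LP -> L EP; recursion bottoms out in a bare lexical layer [L]."""
    if depth == 0:
        return "[L]"
    return f"[L {ep(depth)}]"

def ep(depth):
    """EP -> E LP"""
    return f"[E {lp(depth - 1)}]"

# Each extra level wraps one more E-L alternation around the structure:
print(lp(1))  # [L [E [L]]]
print(lp(2))  # [L [E [L [E [L]]]]]
```

Because each rule calls the other, neither layer ever stacks directly on itself, yet together they produce unbounded depth, which is exactly the point the Integration Hypothesis makes about the two single-layer systems.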
Alternative Theoretical Approaches
Stemming from the usage-based approach, the Competition Model, developed by Elizabeth Bates and Brian MacWhinney, views language acquisition as consisting of a series of competitive cognitive processes that act upon a linguistic signal. This suggests that language development depends on learning and detecting linguistic cues with the use of competing general cognitive mechanisms rather than innate, language-specific mechanisms. From the side of biosemiotics, there has been a recent claim that meaning-making began far before the emergence of human language. This meaning-making consists of internal and external cognitive processes. Thus, it holds that such process organisation could not have given rise to language alone. According to this perspective, all living things possess these processes, however wide the variation, as opposed to their being species-specific.[43]
Over-Emphasised Weak Stream Focus
When talking about biolinguistics, there are two senses in which the term is adopted: strong and weak biolinguistics. The weak stream is founded on theoretical linguistics that is generativist in persuasion. The strong stream, on the other hand, goes beyond commonly explored theoretical linguistics and is oriented towards biology as well as other relevant fields of study. From the early emergence of biolinguistics to the present day, the focus has been mainly on the weak stream, seeing little difference between the inquiry into generative linguistics and the biological nature of language, and relying heavily on the Chomskyan origin of the term.[44] As expressed by research professor and linguist Cedric Boeckx, it is a prevalent opinion that biolinguistics needs to focus on biology so as to give substance to the linguistic theorizing the field has engaged in.
Particular criticisms include a lack of distinction between generative linguistics and biolinguistics, a lack of discoveries pertaining to the properties of grammar in the context of biology, and a lack of recognition of the importance of broader mechanisms, such as biological non-linguistic properties. After all, labelling the propensity for language as biological is only an advantage if such insight is put to use in research.[44] David Poeppel, a neuroscientist and linguist, has additionally noted that if neuroscience and linguistics are done wrong, there is a risk of "inter-disciplinary cross-sterilization", arguing that there is a Granularity Mismatch Problem: the different levels of representation used in linguistics and neuroscience lead to vague metaphors linking brain structures to linguistic components. Poeppel and Embick also introduce the Ontological Incommensurability Problem, whereby computational processes described in linguistic theory cannot be reduced to neural computational processes.[45] A recent critique of biolinguistics and 'biologism' in the language sciences in general has been developed by Prakash Mondal, who shows that there are inconsistencies and categorical mismatches in any putative bridging constraints that purport to relate neurobiological structures and processes to the logical structures of language, which have a cognitive-representational character.[46][47]
https://en.wikipedia.org/wiki/Biolinguistics
The bouba–kiki effect (/ˈbuːbəˈkiːkiː/) or takete–maluma phenomenon[1][2][3] is a non-arbitrary mental association between certain speech sounds and certain visual shapes. The most typical research finding is that people, when presented with nonsense words, tend to associate some (like bouba and maluma) with a rounded shape and others (like kiki and takete) with a spiky shape. Its discovery dates back to the 1920s, when psychologists documented experimental participants connecting nonsense words to shapes in consistent ways. There is a strong general tendency towards the effect worldwide; it has been robustly confirmed across a majority of the cultures and languages in which it has been researched,[4] including among English-speaking American university students, Tamil speakers in India, speakers of certain languages with no writing system, young children, infants, and (though to a much lesser degree) the congenitally blind.[4] It has also been shown to occur with familiar names. The bouba–kiki effect is one form of sound symbolism.[5] The effect was first observed by the Georgian psychologist Dimitri Uznadze in a 1924 paper.[6][non-primary source needed] He conducted an experiment in which 10 participants were given a list of 42 nonsense words, shown six drawings for five seconds each, and then instructed to pick a name for each drawing from the list. He describes the different "strategies" participants developed to match words to drawings and quotes their reasoning. He also describes situations in which participants described very specific forms that they associated with a nonsense word, without reference to the drawings shown, and develops a theory of four factors that influence how names for objects are decided. For one particular drawing, 45% of participants picked the same word; for three others, the figure was 40%. 
Uznadze points out that this is significantly more overlap than one could expect given the high number of possible words, and speculates that there must therefore be certain regularities "which the human soul follows in the process of name-giving". The German American psychologist Wolfgang Köhler referred to Uznadze's experiment in a 1929 book[7] which showed two forms and asked readers which shape was called "takete" and which was called "maluma". Although he does not say so outright, Köhler implies that there is a strong preference to pair the jagged shape with "takete" and the rounded shape with "maluma".[8] In 2001, V. S. Ramachandran and Edward Hubbard repeated Köhler's experiment, introducing the words "kiki" and "bouba", and asked American college undergraduates and Tamil speakers in India, "Which of these shapes is bouba and which is kiki?" In both groups, 95% to 98% selected the curvy shape as "bouba" and the jagged one as "kiki", suggesting that the human brain consistently attaches abstract meanings to shapes and sounds.[9][failed verification – see discussion] A 2022 experiment found evidence that the bouba/kiki effect is a cross-cultural phenomenon: 917 participants speaking 25 different languages, with 10 different writing systems, showed higher-than-chance consistency in bouba/kiki identification, intuitively associating "bouba" with a rounded shape and "kiki" with a sharp, pointed shape regardless of their native language, though the effect is stronger in some languages than others. The study also supports the idea that Roman orthography is a factor that could enhance the bouba/kiki effect. 
However, this biasing effect of orthography is rather weak, since participants who speak languages with Roman orthography are only marginally more likely to show the bouba/kiki effect.[clarification needed][4] Daphne Maurer and colleagues showed that even children as young as 2½ years old may show this preference.[10] More recent work by Ozge Ozturk and colleagues in 2013 showed that even 4-month-old infants have the same sound–shape mapping biases as adults and toddlers.[11] Infants are able to differentiate between congruent trials (pairing an angular shape with "kiki" or a curvy shape with "bubu") and incongruent trials (pairing a curvy shape with "kiki" or an angular shape with "bubu"), and they looked longer at incongruent pairings than at congruent ones. Infants' mapping was based on the combination of consonants and vowels in the words; neither consonants nor vowels alone sufficed. These results suggest that some sound–shape mappings precede language learning, and may in fact aid it by establishing a basis for matching labels to referents and narrowing the hypothesis space for young infants. Adults in this study, like infants, used a combination of consonant and vowel information to match the labels they heard with the shapes they saw. However, this was not their only available strategy: adults, unlike infants, were also able to use consonant information alone or vowel information alone to match the labels to the shapes, albeit less frequently than the consonant–vowel combination. When vowels and consonants were put in conflict, adults relied on consonants more often than vowels. 
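The "higher than chance" comparisons running through these studies can be made concrete with a simple binomial computation. The sketch below is purely illustrative (it is not the analysis any cited study actually used, and the sample numbers are made up for demonstration): under a 50/50 chance model, near-unanimous agreement among participants is essentially impossible by luck alone.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    participants pick the same pairing under random 50/50 guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A single participant "agrees" with any given pairing half the time ...
print(binom_tail(1, 1))     # 0.5
# ... but 95-of-100 agreement under pure chance is astronomically unlikely
# (on the order of 1e-23), which is what "higher than chance" rules out.
print(binom_tail(100, 95))
```

This is only the chance baseline; the cited studies used their own statistical procedures over real per-language data.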
The effect has also been shown to emerge in other contexts, such as when words are paired with evaluative meanings (with "bouba" words associated with positive concepts and "kiki" words with negative concepts)[12] or when the words to be paired are existing first names, suggesting that some familiarity with the linguistic stimuli does not eliminate the effect. A study showed that individuals will pair names such as "Molly" with round silhouettes and names such as "Kate" with sharp silhouettes. Moreover, individuals will associate different personality traits with either group of names (e.g., easygoingness with "round" names; determination with "sharp" names). This may hint at a role of abstract concepts in the effect.[13] Other research suggests that the effect does not occur in all communities,[14] and it appears to break down if the sounds do not make licit words in the language.[15] The bouba–kiki effect seems to depend on a long sensitive period, with high visual capacities in childhood being necessary for its typical development. Although the congenitally blind have been reported to show a bouba–kiki effect, they show a much smaller one for touched shapes than sighted individuals do for visual shapes.[16][17] A major 2021 study showed that certain languages, namely Mandarin Chinese, Turkish, Romanian, and Albanian, on average showed lower-than-50% matches both for associating bouba with roundedness and for associating kiki with jaggedness. However, the authors consider their analysis conservative and not clear enough to confirm whether these four languages definitively lack the bouba–kiki phenomenon; for example, the phonetic structures of these languages or their participants' cultural associations with sound and shape could have led to the weaker correlations observed.[4] Further research is being conducted to verify the correlation between low-effect languages and the bouba–kiki phenomenon. 
In 2019, Nathan Peiffer-Smadja and Laurent Cohen published the first study using fMRI to explore the bouba–kiki effect.[18] They found that prefrontal activation is stronger to mismatching stimuli (bouba with a spiky shape) than to matching stimuli (bouba with a round shape). A subsequent study by Kelly McCormick and colleagues reported a similar pattern of greater activation for mismatched word–shape stimuli, but with most activity in parietal regions including the intraparietal sulcus and supramarginal gyrus, regions known to play a role in sensory association and perceptual-motor processing.[19] Peiffer-Smadja and Cohen also found that sound–shape matching influences activations in the auditory and visual cortices, suggesting an effect of matching at an early stage in sensory processing.[18] Ramachandran and Hubbard suggest that the kiki/bouba effect has implications for the evolution of language, because it suggests that the naming of objects is not completely arbitrary.[9]: 17 The rounded shape may most commonly be named "bouba" because the mouth makes a more rounded shape to produce that sound, while a more taut, angular mouth shape is needed to make the sounds in "kiki".[20] Alternatively, the distinction may be between coronal or dorsal consonants like /k/ and labial consonants like /b/,[21] or, as Fort and Schwartz suggest, the difference may be attributed to the noise a "bouba" shape makes when bounced (lower frequency and more continuous) in comparison to a spiked object.[22] Additionally, it has been shown that not only different consonants (e.g., voiceless versus voiced) and different vowel qualities (e.g., /a/ versus /i/) play a role in the effect, but also vowel quantity (long versus short vowels). 
In one study, participants judged words containing long vowels to refer to longer objects and words containing short vowels to shorter objects, at least for languages that make a vowel length distinction.[23] The presence of these "synesthesia-like mappings" suggests that the effect may be the neurological basis for sound symbolism, in which sounds are non-arbitrarily mapped to objects and events in the world.[citation needed] Research has also indicated that the effect may be a case of ideasthesia,[24] a phenomenon in which activations of concepts (inducers) evoke perception-like experiences (concurrents). The name comes from the Greek idea and aisthesis, meaning "sensing concepts" or "sensing ideas", and was introduced by Danko Nikolić.[25]
https://en.wikipedia.org/wiki/Bouba/kiki_effect
A bow-wow theory (or cuckoo theory) is any of the theories by various scholars, including Jean-Jacques Rousseau and Johann Gottfried Herder, on the speculative origins of human language.[1][2] According to bow-wow theories, the first human languages developed from onomatopoeia, that is, imitations of natural sounds.[3] The term "bow-wow theory" was introduced into English-language literature by the German philologist Max Müller, who was critical of the idea.[4] Despite its simplicity, the theory highlights the human tendency to mimic natural sounds.[5] Bow-wow theories have been widely discredited as an explanation for the origin of language. However, some contemporary theories suggest that general imitative abilities may have played an important role in the evolution of language.[6] In his humorous typology of what he considered to be fanciful theories on the origin of languages, Max Müller contrasted the bow-wow theory with the pooh-pooh theory, which holds that the original language consisted of interjections, and with the ding-dong theory, which posits that humans were originally a kind of improved bell capable of making all sounds.[7] However, Müller was at one time attracted to the ho-hiss theory, which held that grunts were also the origin of singing.[8]
https://en.wikipedia.org/wiki/Bow-wow_theory
Essay on the Origin of Languages (French: Essai sur l'origine des langues) is an essay by Jean-Jacques Rousseau published posthumously in 1781.[1] Rousseau had meant to publish the essay in a short volume that was also to include the essays On Theatrical Imitation and The Levite of Ephraim. In the preface to this would-be volume, Rousseau wrote that the Essay was originally meant to be included in the Discourse on Inequality, but was omitted because it "was too long and out of place".[2] The essay was mentioned in Rousseau's 1762 book, Emile, or On Education. In this text, Rousseau lays out a narrative of the beginnings of language, using a literary form similar to that of the Second Discourse. Rousseau writes that language (as well as the human race) developed in southern warm climates and then migrated northwards to colder climates. In its inception, language was musical and had emotional power as opposed to rational persuasion. The colder climates of the north, however, stripped language of its passionate characteristic, distorting it into its present rational form. In the later chapters, this relation is also discussed in terms of music, in ways that resonate with observations Rousseau makes in his 1753 Letter on French Music. Chapter Nine of the Essay is an explication of the development of humankind, eventually inventing language. As this format closely adheres to that of the Second Discourse, some have discussed whether one account ought to be read as more authoritative than the other. As the text was initially written in 1754 and sent to the publisher in 1763, it appears safe to argue that the tensions between the Essay and the Second Discourse were intentional. The third chapter of Jacques Derrida's Of Grammatology critiques and analyzes Rousseau's essay.
https://en.wikipedia.org/wiki/Essay_on_the_Origin_of_Languages
Forkhead box protein P2 (FOXP2) is a protein that, in humans, is encoded by the FOXP2 gene. FOXP2 is a member of the forkhead box family of transcription factors, proteins that regulate gene expression by binding to DNA. It is expressed in the brain, heart, lungs and digestive system.[5][6] FOXP2 is found in many vertebrates, where it plays an important role in mimicry in birds (such as birdsong) and echolocation in bats. FOXP2 is also required for the proper development of speech and language in humans.[7] In humans, mutations in FOXP2 cause the severe speech and language disorder developmental verbal dyspraxia.[7][8] Studies of the gene in mice and songbirds indicate that it is necessary for vocal imitation and the related motor learning.[9][10][11] Outside the brain, FOXP2 has also been implicated in the development of other tissues such as the lung and digestive system.[12] Initially identified in 1998 as the genetic cause of a speech disorder in a British family designated the KE family, FOXP2 was the first gene discovered to be associated with speech and language[13] and was subsequently dubbed "the language gene".[14] However, other genes are necessary for human language development, and a 2018 analysis confirmed that there was no evidence of recent positive evolutionary selection of FOXP2 in humans.[15][16] As a FOX protein, FOXP2 contains a forkhead-box domain. In addition, it contains a polyglutamine tract, a zinc finger and a leucine zipper. The protein binds to the DNA of other genes and controls their activity through the forkhead-box domain. Only a few target genes have been identified, but researchers believe that there could be hundreds of other genes targeted by FOXP2. 
The forkhead box P2 protein is active in the brain and other tissues before and after birth, and many studies show that it is paramount for the growth of nerve cells and transmission between them. The FOXP2 gene is also involved in synaptic plasticity, making it important for learning and memory.[17] FOXP2 is required for proper brain and lung development. Knockout mice with only one functional copy of the FOXP2 gene have significantly reduced vocalizations as pups.[18] Knockout mice with no functional copies of FOXP2 are runted, display abnormalities in brain regions such as the Purkinje layer, and die an average of 21 days after birth from inadequate lung development.[12] FOXP2 is expressed in many areas of the brain,[19] including the basal ganglia and inferior frontal cortex, where it is essential for brain maturation and speech and language development.[20] In mice, the gene was found to be expressed twice as highly in male pups as in female pups, which correlated with a near doubling of the number of vocalisations the male pups made when separated from their mothers. Conversely, in human children aged 4–5, the gene was found to be 30% more expressed in the Broca's area of female children. The researchers suggested that the gene is more active in "the more communicative sex".[21][22] The expression of FOXP2 is subject to post-transcriptional regulation, particularly by microRNA (miRNA), causing repression of the FOXP2 3' untranslated region.[23] Three amino acid substitutions distinguish the human FOXP2 protein from that found in mice, while two amino acid substitutions distinguish the human FOXP2 protein from that found in chimpanzees,[19] but only one of these changes is unique to humans.[12] Evidence from genetically manipulated mice[24] and human neuronal cell models[25] suggests that these changes affect the neural functions of FOXP2. The FOXP2 gene has been implicated in several cognitive functions, including general brain development, language, and synaptic plasticity. 
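Comparisons like "three substitutions from mouse, two from chimpanzee" come down to counting the positions at which aligned protein sequences differ. A minimal sketch of that positional comparison, using short made-up fragments rather than real FOXP2 sequence data:

```python
def count_substitutions(seq_a, seq_b):
    """Count positions where two equal-length aligned sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Hypothetical aligned fragments (illustrative only, not FOXP2 data)
human = "MTNSSA"
mouse = "MSNTSA"
print(count_substitutions(human, mouse))  # 2 differing positions
```

Real comparisons first require aligning the full sequences; this sketch assumes the alignment has already been done.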
The FOXP2 gene region acts as a transcription factor for the forkhead box P2 protein. Transcription factors affect other genes, and the forkhead box P2 protein has been suggested to act as a transcription factor for hundreds of genes. This prolific involvement opens the possibility that FOXP2's influence is much more extensive than originally thought.[17] Other transcriptional targets have been researched without a correlation to FOXP2 being found: specifically, FOXP2 has been investigated in connection with autism and dyslexia, but no mutation was discovered as the cause.[26][8] One well-identified target is language.[27] Although some research disagrees with this correlation,[28] the majority of research shows that a mutated FOXP2 causes the observed production deficiency.[17][27][29][26][30][31] There is some evidence that the linguistic impairments associated with a mutation of the FOXP2 gene are not simply the result of a fundamental deficit in motor control. Brain imaging of affected individuals indicates functional abnormalities in language-related cortical and basal ganglia regions, demonstrating that the problems extend beyond the motor system.[32] Mutations in FOXP2 are among several (26 genes plus 2 intergenic) loci which correlate with ADHD diagnosis in adults – clinical ADHD is an umbrella label for a heterogeneous group of genetic and neurological phenomena which may result from FOXP2 mutations or other causes.[33] A 2020 genome-wide association study (GWAS) implicates single-nucleotide polymorphisms (SNPs) of FOXP2 in susceptibility to cannabis use disorder.[34] It is theorized that a translocation involving the 7q31.2 region, where FOXP2 is located, causes a severe language impairment called developmental verbal dyspraxia (DVD)[27] or childhood apraxia of speech (CAS).[35] So far this type of mutation has only been discovered in three families across the world, including the original KE family.[31] A missense mutation causing an arginine-to-histidine substitution (R553H) in the
DNA-binding domain is thought to be the abnormality in the KE family.[36] This substitution would make a normally basic residue fairly acidic and highly reactive at the body's pH. A heterozygous nonsense mutation, the R328X variant, produces a truncated protein and is involved in speech and language difficulties in one KE individual and two of their close family members. The R553H and R328X mutations also affected the nuclear localization, DNA-binding, and transactivation (increased gene expression) properties of FOXP2.[8] Affected individuals present with deletions, translocations, and missense mutations. When tasked with repetition and verb generation, individuals with DVD/CAS showed decreased activation in the putamen and Broca's area in fMRI studies. These areas are commonly known as areas of language function.[37] This is one of the primary reasons that FOXP2 is known as a language gene. These individuals have delayed onset of speech, difficulty with articulation including slurred speech, stuttering, and poor pronunciation, as well as dyspraxia.[31] It is believed that a major part of this speech deficit comes from an inability to coordinate the movements necessary to produce normal speech, including mouth and tongue shaping.[27] Additionally, there are more general impairments with the processing of the grammatical and linguistic aspects of speech.[8] These findings suggest that the effects of FOXP2 are not limited to motor control, as they include comprehension among other cognitive language functions. 
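Variant names like R553H and R328X follow a common shorthand: the reference amino acid, its position in the protein, then the substituted residue (with X here denoting the premature stop of a nonsense mutation, which truncates the protein). Splitting that shorthand apart is a simple pattern match; a small illustrative sketch, not part of any cited study's tooling:

```python
import re

# One uppercase reference residue, a numeric position, one substitute symbol.
MUTATION_RE = re.compile(r"^([A-Z])(\d+)([A-Z*])$")

def parse_mutation(code):
    """Split shorthand like 'R553H' into (reference, position, substitute)."""
    m = MUTATION_RE.match(code)
    if not m:
        raise ValueError(f"not a point-mutation code: {code!r}")
    ref, pos, alt = m.groups()
    return ref, int(pos), alt

print(parse_mutation("R553H"))  # ('R', 553, 'H')
print(parse_mutation("R328X"))  # ('R', 328, 'X')
```

Formal variant databases use the fuller HGVS notation (e.g. p.Arg553His); the compact single-letter form above is the one used in this article.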
General mild motor and cognitive deficits are noted across the board.[29] Clinically, these patients can also have difficulty coughing, sneezing, or clearing their throats.[27] While FOXP2 has been proposed to play a critical role in the development of speech and language, this view has been challenged by the fact that the gene is also expressed in other mammals as well as in birds and fish that do not speak.[38] It has also been proposed that the FOXP2 transcription factor is not so much a hypothetical 'language gene' but rather part of a regulatory machinery related to the externalization of speech.[39] The FOXP2 gene is highly conserved in mammals.[19] The human gene differs from that of non-human primates by two amino acid substitutions, a threonine to asparagine substitution at position 303 (T303N) and an asparagine to serine substitution at position 325 (N325S).[36] In mice it differs from that of humans by three substitutions, and in the zebra finch by seven amino acids.[19][40][41] One of the two amino acid differences between humans and chimps also arose independently in carnivores and bats.[12][42] Similar FOXP2 proteins can be found in songbirds, fish, and reptiles such as alligators.[43][44] DNA sampling from Homo neanderthalensis bones indicates that their FOXP2 gene is a little different, though largely similar, to that of Homo sapiens (i.e. humans).[45][46] Previous genetic analysis had suggested that the H. sapiens FOXP2 gene became fixed in the population around 125,000 years ago.[47] Some researchers consider the Neanderthal findings to indicate that the gene instead swept through the population over 260,000 years ago, before our most recent common ancestor with the Neanderthals.[47] Other researchers offer alternative explanations for how the H.
sapiens version would have appeared in Neanderthals living 43,000 years ago.[47] According to a 2002 study, the FOXP2 gene showed indications of recent positive selection.[19][48] Some researchers have speculated that positive selection is crucial for the evolution of language in humans.[19] Others, however, were unable to find a clear association between species with learned vocalizations and similar mutations in FOXP2.[43][44] A 2018 analysis of a large sample of globally distributed genomes confirmed there was no evidence of positive selection, suggesting that the original signal of positive selection may have been driven by sample composition.[15][16] Insertion of both human mutations into mice, whose version of FOXP2 otherwise differs from the human and chimpanzee versions in only one additional base pair, causes changes in vocalizations as well as other behavioral changes, such as a reduction in exploratory tendencies and a decrease in maze learning time. A reduction in dopamine levels and changes in the morphology of certain nerve cells are also observed.[24] FOXP2 is known to regulate CNTNAP2, CTBP1,[49] SRPX2 and SCN3A.[50][20][51] FOXP2 downregulates CNTNAP2, a member of the neurexin family found in neurons. CNTNAP2 is associated with common forms of language impairment.[52] FOXP2 also downregulates SRPX2, the 'Sushi Repeat-containing Protein X-linked 2',[53][54] directly reducing its expression by binding to its gene's promoter. SRPX2 is involved in glutamatergic synapse formation in the cerebral cortex and is more highly expressed in childhood. SRPX2 appears to specifically increase the number of glutamatergic synapses in the brain, while leaving inhibitory GABAergic synapses unchanged and not affecting dendritic spine length or shape. 
On the other hand, FOXP2's activity does reduce dendritic spine length and shape, in addition to number, indicating that it has other regulatory roles in dendritic morphology.[53] In chimpanzees, FOXP2 differs from the human version by two amino acids.[55] A study in Germany sequenced FOXP2's complementary DNA in chimps and other species to compare it with human complementary DNA and identify the specific changes in the sequence.[19] FOXP2 was found to be functionally different in humans compared to chimps. Since FOXP2 affects other genes, those downstream effects are also being studied.[56] Researchers suggested that such studies could have further clinical applications with regard to illnesses that affect human language ability.[25] In mouse FOXP2 gene knockouts, loss of both copies of the gene causes severe motor impairment related to cerebellar abnormalities and a lack of the ultrasonic vocalisations normally elicited when pups are removed from their mothers.[18] These vocalizations have important communicative roles in mother–offspring interactions. Loss of one copy was associated with impairment of ultrasonic vocalisations and a modest developmental delay. Male mice, on encountering female mice, produce complex ultrasonic vocalisations that have characteristics of song.[57] Mice that have the R552H point mutation carried by the KE family show cerebellar reduction and abnormal synaptic plasticity in striatal and cerebellar circuits.[9] Humanized FOXP2 mice display altered cortico-basal ganglia circuits. The human allele of the FOXP2 gene was transferred into mouse embryos through homologous recombination to create humanized FOXP2 mice. The human variant of FOXP2 also had an effect on the exploratory behavior of the mice. 
In comparison to knockout mice with one non-functional copy of FOXP2, the humanized mouse model showed opposite effects on dopamine levels, synaptic plasticity, patterns of expression in the striatum, and exploratory behavior.[24] When FOXP2 expression was altered in mice, it affected many different processes, including the learning of motor skills and synaptic plasticity. Additionally, FOXP2 is found more in the sixth layer of the cortex than in the fifth, which is consistent with it having greater roles in sensory integration. FOXP2 was also found in the medial geniculate nucleus of the mouse brain, the processing area that auditory inputs must pass through in the thalamus. Its mutations were found to play a role in delaying the development of language learning. It was also found to be highly expressed in the Purkinje cells and cerebellar nuclei of the cortico-cerebellar circuits. High FOXP2 expression has also been shown in the spiny neurons that express type 1 dopamine receptors in the striatum, substantia nigra, subthalamic nucleus and ventral tegmental area. The negative effects of FOXP2 mutations in these brain regions on motor abilities were shown in mice through laboratory tasks. When analyzing the brain circuitry in these cases, scientists found greater levels of dopamine and decreased dendrite lengths, which caused defects in long-term depression, a process implicated in motor function learning and maintenance. Through EEG studies, it was also found that these mice had increased levels of activity in their striatum, which contributed to these results. 
There is further evidence that mutations in targets of the FOXP2 gene have roles in schizophrenia, epilepsy, autism, bipolar disorder and intellectual disabilities.[58] FOXP2 has implications in the development of bat echolocation.[36][42][59] In contrast to apes and mice, FOXP2 is extremely diverse in echolocating bats.[42] Twenty-two sequences of non-bat eutherian mammals revealed a total of 20 nonsynonymous mutations, whereas half that number of bat sequences showed 44 nonsynonymous mutations.[42] All cetaceans share three amino acid substitutions, but no differences were found between echolocating toothed whales and non-echolocating baleen cetaceans.[42] Within bats, however, amino acid variation correlated with different echolocating types.[42] In songbirds, FOXP2 most likely regulates genes involved in neuroplasticity.[10][60] Gene knockdown of FOXP2 in Area X of the basal ganglia in songbirds results in incomplete and inaccurate song imitation.[10] Overexpression of FOXP2 was accomplished through injection of adeno-associated virus serotype 1 (AAV1) into Area X of the brain. This overexpression produced effects similar to those of knockdown: juvenile zebra finches were unable to accurately imitate their tutors.[61] Similarly, in adult canaries, higher FOXP2 levels also correlate with song changes.[41] Levels of FOXP2 in adult zebra finches are significantly higher when males direct their song to females than when they sing in other contexts.[60] "Directed" singing refers to when a male sings to a female, usually as a courtship display; "undirected" singing occurs when, for example, a male sings while other males are present or while alone.[62] Studies have found that FoxP2 levels vary depending on the social context: when the birds were singing undirected song, there was a decrease of FoxP2 expression in Area X. 
This downregulation was not observed, and FoxP2 levels remained stable, in birds singing directed song.[60] Differences between song-learning and non-song-learning birds have been shown to be caused by differences in FOXP2 gene expression, rather than by differences in the amino acid sequence of the FOXP2 protein. In zebrafish, FOXP2 is expressed in the ventral and dorsal thalamus, the telencephalon, and the diencephalon, where it likely plays a role in nervous system development. The zebrafish FOXP2 gene has an 85% similarity to the human FOXP2 ortholog.[63] FOXP2 and its gene were discovered as a result of investigations of an English family known as the KE family, half of whom (15 individuals across three generations) had a speech and language disorder called developmental verbal dyspraxia. Their case was studied at the Institute of Child Health of University College London.[64] In 1990, Myrna Gopnik, Professor of Linguistics at McGill University, reported that the affected members of the KE family had a severe speech impediment with largely incomprehensible talk, characterized by grammatical deficits.[65] She hypothesized that the basis was not a learning or cognitive disability, but genetic factors affecting mainly grammatical ability.[66] (Her hypothesis led to the popularised notion of a "grammar gene" and a controversial notion of a grammar-specific disorder.[67][68]) In 1995, researchers at the University of Oxford and the Institute of Child Health found that the disorder was purely genetic.[69] Remarkably, the inheritance of the disorder from one generation to the next was consistent with autosomal dominant inheritance, i.e., mutation of only a single gene on an autosome (non-sex chromosome) acting in a dominant fashion. This is one of the few known examples of Mendelian (monogenic) inheritance for a disorder affecting speech and language skills, which typically have a complex basis involving multiple genetic risk factors.[70] In 1998, the Oxford University geneticists Simon Fisher, Anthony Monaco, Cecilia S. 
L. Lai, Jane A. Hurst, and Faraneh Vargha-Khadem identified an autosomal dominant monogenic inheritance localized to a small region of chromosome 7, using DNA samples taken from affected and unaffected members.[5] The chromosomal region (locus) contained 70 genes.[71] The locus was given the official name "SPCH1" (for speech-and-language-disorder-1) by the Human Genome Nomenclature committee. Mapping and sequencing of the chromosomal region were performed with the aid of bacterial artificial chromosome clones.[6] Around this time, the researchers identified an individual who was unrelated to the KE family but had a similar type of speech and language disorder. In this case, the child, known as CS, carried a chromosomal rearrangement (a translocation) in which part of chromosome 7 had become exchanged with part of chromosome 5. The site of breakage of chromosome 7 was located within the SPCH1 region.[6]

In 2001, the team identified that CS's mutation lay in the middle of a protein-coding gene.[7] Using a combination of bioinformatics and RNA analyses, they discovered that the gene codes for a novel protein belonging to the forkhead-box (FOX) group of transcription factors. As such, it was assigned the official name FOXP2. When the researchers sequenced the FOXP2 gene in the KE family, they found a heterozygous point mutation shared by all the affected individuals but absent in unaffected members of the family and in other people.[7] The mutation causes an amino-acid substitution that inhibits the DNA-binding domain of the FOXP2 protein.[72] Further screening of the gene identified multiple additional cases of FOXP2 disruption, including different point mutations[8] and chromosomal rearrangements,[73] providing evidence that damage to one copy of this gene is sufficient to derail speech and language development.
https://en.wikipedia.org/wiki/FOXP2_and_human_evolution
Generative anthropology is a field of study based on the hypothesis that the origin of human language happened in a singular event. The discipline centers upon this original event, which Eric Gans calls the Originary Scene. This scene is a kind of origin story that hypothesizes the specific event in which language originated. The Originary Scene is powerful because any human ability (our ability to do science, to be ironic, to love, to think, to dominate, etc.) can be carefully explained first by reference to this scene of origin. Because the Originary Scene was the origin of all things human, generative anthropology attempts to understand all cultural phenomena in the simplest terms possible: all things human can be traced back to this hypothetical single origin point.

Generative anthropology originated with Professor Eric Gans of UCLA, who developed his ideas in a series of books and articles beginning with The Origin of Language: A Formal Theory of Representation (1981), which builds on the ideas of René Girard, notably that of mimetic desire. In establishing the theory of generative anthropology, Gans departs from and goes beyond Girard's work in many ways. Generative anthropology is therefore an independent and original way of understanding the human species, its origin, culture, history, and development.

Gans founded (and edits) the web-based journal Anthropoetics: The Journal of Generative Anthropology as a scholarly forum for research into human culture and origins based on his theories of generative anthropology and the closely related theories of fundamental anthropology developed by René Girard. In his online Chronicles of Love and Resentment, Gans applies the principles of generative anthropology to a wide variety of fields, including popular culture, film, post-modernism, economics, contemporary politics, the Holocaust, philosophy, religion, and paleo-anthropology.
The central hypothesis of generative anthropology is that theorigin of languagewas a singular event. Human language is radically different from animal communication systems. It possessessyntax, allowing for unlimited new combinations and content; it is symbolic, and it possesses a capacity for history. Thus it is hypothesized that the origin of language must have been a singular event, and the principle ofparsimonyrequires that it originated only once. Language makes possible new forms of social organization radically different from animal "pecking order" hierarchies dominated by analpha male. Thus, the development of language allowed for a new stage in humanevolution– the beginning of culture, including religion, art, desire, and the sacred. As language provides memory and history via a record of its own history, language itself can be defined via a hypothesis of its origin based on our knowledge of human culture. As with any scientific hypothesis, its value is in its ability to account for the known facts of human history and culture. Mimetic (imitatory) behaviour connects proto-hominid species with humans. Imitation is an adaptive learning behavior, a form of intelligence favored by natural selection. Imitation, as René Girard observes, leads to conflict when two individuals imitate each other in their attempt to appropriate a desired object. The problem is to explain the transition from one form ofmimesis, imitation, to another, representation. Although many anthropologists have hypothesized that language evolved to help humans describe their world, this ignores the fact that intra-species violence, not the environment, poses the greatest threat to human existence.[citation needed]Human representation, according to Gans, is not merely a "natural" evolutionary development of animal communication systems, but is a radical departure from it. The signifier implies a symbolic dimension that is not reducible to empirical referents. 
At the event of the origin of language, there was a proto-humanhominidspecies which had gradually become more mimetic, presumably in response to environmental pressures including climate changes and competition for limited resources. Higher primates have dominance hierarchies which serve to limit and prevent destructive conflict within the social group. As individuals within the proto-human group became more mimetic, the dominance system broke down and became inadequate to control the threat of violence posed by conflictual mimesis.[citation needed] Gans posits an "originary event" along the following lines: A group of hominids have surrounded a food object, e.g. the body of a large mammal following a hunt. The attraction of the object exceeds the limits of simple appetite due to the operation of group mimesis, essentially an expression of competition or rivalry. The object becomes more attractive simply because each member of the group finds it attractive: each individual in the group observes the attention that his rivals give the object. Actual appetite is artificially inflated through this mutual reinforcement. The power of appetitive mimesis in conjunction with the threat of violence is such that the central object begins to assume a sacred aura – infinitely desirable and infinitely dangerous. Mimesis thus gives rise to a pragmatic paradox: the double imperative to take the desired object for personal gain, and to refrain from taking it to avoid conflict. In other words, imitating the rival means not imitating the rival, because imitation leads to conflict, the attempt to destroy rather than imitate (Gans,Signs of Paradox18). Generative Anthropology theorizes that when this mimetic instinct becomes so powerful that it seems to possess a sacred force endangering the survival of the group, the resultant intra-species pressure favours the emergence of the sign. 
No member of the group is able to take the sacred object, and at least one member of the group intends this aborted gesture as a sign designating the central object. This meaning is successfully communicated to the group, who follow suit by reading their aborted gestures as signs also. The sign focuses attention on the sacred power of the central object, which is conceived as the source of its own power. The object which compels attention yet prohibits consumption can only be represented. The basic advantage of the sign over the object is that "The sign is an economical substitute for its inaccessible referent. Things are scarce and consequently objects of potential contention; signs are abundant because they can be reproduced at will" (Gans,Originary Thinking9). The desire for the object is mediated by the sign, which paradoxically both creates desire, by attributing significance to the object, yet also defers desire, by designating the object as sacred ortaboo. The mimetic impulse is sublimated, expressed in a different form, as the act of representation. Individual self-consciousness is also born at this moment, in the recognition of alienation from the sacred center. The primary value/function of the sign in this scenario is ethical, as the deferral of violence, but the sign is also referential. What the sign refers to, strictly speaking, is not the physical object, but rather the mediated object of desire as realized in the imagination of each individual. The emergence of the sign is only a temporary deferral of violence. It is immediately followed by thesparagmos, the discharge of the mimetic tension created by the sign in the violent dismemberment and consumption of the worldly incarnation of the sign, the central appetitive object. The violence of the sparagmos is mediated by the sign and thus directed towards the central object rather than the other members of the group. 
By including the sparagmos in the originary hypothesis, Gans intends to incorporate Girard's insights intoscapegoatingand the sacrificial (seeSigns of Paradox131–151). The "scene of representation" is fundamentally social or interpersonal. The act of representation always implies the presence of another or others. The use of a sign evokes the communal scene of representation, structured by a sacred center and a human periphery. The significance of the sign seems to emerge from the sacred center (in its resistance to appropriation), but the pragmatic significance of the sign is realized in the peace brokered amongst the humans on the periphery. All signs point to the sacred, that which is significant to the community. The sacred cannot be signified directly, since it is essentially an imaginary or ideal construction of mimetic desire. The significance is realized in the human relationships as mediated by the sign. When an individual refers to an object or idea, the reference is fundamentally to the significance of that object or idea for the human community. Language attempts to reproduce the non-violent presence of the community to itself, even though it may attempt to do so sacrificially, by designating ascapegoatvictim. Generative Anthropology is so called because human culture is understood as a "genetic" development of the originary event. The scene of representation is a true cultural universal, but it must be analyzed in terms of its dialectical development. The conditions for the generation of significance are subject to historical evolution, so that the formal articulation of the sign always includes a dialogical relationship to past forms. The Generative Anthropology Society and Conference (GASC) is a scholarly association formed for the purpose of facilitating intellectual exchange amongst those interested in fundamental reflection on the human, originary thinking, andGenerative Anthropology, including support for regular conferences. 
GASC was formally organized on June 24, 2010 at Westminster College, Salt Lake City during the 4th Annual Generative Anthropology Summer Conference. Since 2007, GASC has held an annual summer conference on generative anthropology.

2007 - Kwantlen University College of University of British Columbia (Vancouver, British Columbia)
2008 - Chapman University (Orange, California)
2009 - University of Ottawa (Ottawa, Ontario)
2010 - Westminster College (Utah) (Salt Lake City) and Brigham Young University (Provo, Utah)
2011 - High Point University (High Point, North Carolina)
2012 - International Christian University (Tokyo, Japan)
2013 - University of California, Los Angeles
2014 - University of Victoria (Greater Victoria, British Columbia), Canada
2015 - High Point University (High Point, North Carolina)
2016 - Kinjo Gakuin University (Nagoya, Japan)

The Origin of Language: A Formal Theory of Representation. University of California Press, 1981.
The End of Culture: Toward a Generative Anthropology. University of California Press, 1985.
Science and Faith: The Anthropology of Revelation. Savage, Md.: Rowman & Littlefield, 1990.
Originary Thinking: Elements of Generative Anthropology. Stanford University Press, 1993.
Signs of Paradox: Irony, Resentment, and Other Mimetic Structures. Stanford University Press, 1997.
The Scenic Imagination: Originary Thinking from Hobbes to the Present Day. Stanford University Press, 2007.
A New Way of Thinking: Generative Anthropology in Religion, Philosophy, and Art. Davies Group, 2011.
https://en.wikipedia.org/wiki/Generative_anthropology
Historical linguistics, also known as diachronic linguistics, is the scientific study of how languages change over time.[1] It seeks to understand the nature and causes of linguistic change and to trace the evolution of languages. Historical linguistics involves several key areas of study, including the reconstruction of ancestral languages, the classification of languages into families (comparative linguistics), and the analysis of the cultural and social influences on language development.[2][3]

This field is grounded in the uniformitarian principle, which posits that the processes of language change observed today were also at work in the past, unless there is clear evidence to suggest otherwise.[4][not verified in body] Historical linguists aim to describe and explain changes in individual languages, explore the history of speech communities, and study the origins and meanings of words (etymology).[4]

Modern historical linguistics dates to the late 18th century, having originally grown out of the earlier discipline of philology,[5] the study of ancient texts and documents dating back to antiquity. Initially, historical linguistics served as the cornerstone of comparative linguistics, primarily as a tool for linguistic reconstruction.[6] Scholars were concerned chiefly with establishing language families and reconstructing unrecorded proto-languages, using the comparative method and internal reconstruction.[6] The focus was initially on the well-known Indo-European languages, many of which had long written histories; scholars also studied the Uralic languages, another Eurasian language family for which less early written material exists. Since then, there has been significant comparative linguistic work expanding outside of European languages as well, such as on the Austronesian languages and on various families of Native American languages, among many others. Comparative linguistics became only a part of a more broadly conceived discipline of historical linguistics.
For the Indo-European languages, comparative study is now a highly specialized field. Some scholars have undertaken studies attempting to establish super-families, linking, for example, Indo-European, Uralic, and other families into Nostratic. These attempts have not met with wide acceptance. The information necessary to establish relatedness becomes less available as the time-depth increases. The time-depth of linguistic methods is limited due to chance word resemblances and variations between language groups, but a limit of around 10,000 years is often assumed.[7] Several methods are used to date proto-languages, but the process is generally difficult and its results are inherently approximate.

In linguistics, a synchronic analysis is one that views linguistic phenomena only at a given time, usually the present, though a synchronic analysis of a historical language form is also possible. It may be distinguished from a diachronic analysis, which regards a phenomenon in terms of developments through time. Diachronic analysis is the main concern of historical linguistics; most other branches of linguistics, however, are concerned with some form of synchronic analysis. The study of language change offers a valuable insight into the state of linguistic representation, and because all synchronic forms are the result of historically evolving diachronic changes, the ability to explain linguistic constructions necessitates a focus on diachronic processes.[8]

Initially, all of modern linguistics was historical in orientation. Even the study of modern dialects involved looking at their origins. Ferdinand de Saussure's distinction between synchronic and diachronic linguistics is fundamental to the present-day organization of the discipline. Primacy is accorded to synchronic linguistics, and diachronic linguistics is defined as the study of successive synchronic stages. Saussure's clear demarcation, however, has had both defenders and critics.
In practice, a purely-synchronic linguistics is not possible for any period before the invention of thegramophone, as written records always lag behind speech in reflecting linguistic developments. Written records are difficult to date accurately before the development of the moderntitle page. Often, dating must rely on contextual historical evidence such as inscriptions, or modern technology, such ascarbon dating, can be used to ascertain dates of varying accuracy. Also, the work ofsociolinguistson linguistic variation has shown synchronic states are not uniform: the speech habits of older and younger speakers differ in ways that point to language change. Synchronic variation is linguistic change in progress. Synchronic and diachronic approaches can reach quite different conclusions. For example, aGermanic strong verb(e.g. Englishsing↔sang↔sung) isirregularwhen it is viewed synchronically: thenative speaker's brain processesthem as learned forms, but the derived forms of regular verbs are processed quite differently, by the application of productive rules (for example, adding-edto the basic form of a verb as inwalk→walked). That is an insight ofpsycholinguistics, which is relevant also forlanguage didactics, both of which are synchronic disciplines. However, a diachronic analysis shows that the strong verb is the remnant of a fully regular system of internal vowel changes, in this case theIndo-European ablaut; historical linguistics seldom uses the category "irregular verb". The principal tools of research in diachronic linguistics are thecomparative methodand the method ofinternal reconstruction. Less-standard techniques, such asmass lexical comparison, are used by some linguists to overcome the limitations of the comparative method, but most linguists regard them as unreliable. The findings of historical linguistics are often used as a basis for hypotheses about the groupings and movements of peoples, particularly in the prehistoric period. 
In practice, however, it is often unclear how to integrate the linguistic evidence with the archaeological or genetic evidence. For example, there are numerous theories concerning the homeland and early movements of the Proto-Indo-Europeans, each with its own interpretation of the archaeological record.

Comparative linguistics, originally comparative philology, is a branch of historical linguistics that is concerned with comparing languages in order to establish their historical relatedness. Languages may be related by convergence through borrowing or by genetic descent; thus languages can both change and cross-relate. Genetic relatedness implies a common origin among languages. Comparative linguists construct language families, reconstruct proto-languages, and analyze the historical changes that have resulted in the documented languages' divergences.

Etymology studies the history of words: when they entered a language, from what source, and how their form and meaning have changed over time. Words may enter a language in several ways, including being borrowed as loanwords from another language, being derived by combining pre-existing elements in the language, or by a hybrid of these two processes known as phono-semantic matching. In languages with a long and detailed history, etymology makes use of philology, the study of how words change from culture to culture over time. Etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information (such as writing) to be known. By analysis of related languages by the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In that way, word roots that can be traced all the way back to the origin of, for instance, the Indo-European language family have been found.
Although originating in the philological tradition, much current etymological research is done in language families for which little or no early documentation is available, such as Uralic and Austronesian.

Dialectology is the scientific study of linguistic dialect, the varieties of a language that are characteristic of particular groups, based primarily on geographic distribution and their associated features. This is in contrast to variations based on social factors, which are studied in sociolinguistics, or variations based on time, which are studied in historical linguistics. Dialectology treats such topics as the divergence of two local dialects from a common ancestor and synchronic variation.[9]

Dialectologists are concerned with grammatical features that correspond to regional areas. Thus, they are usually dealing with populations living in specific locales for generations without moving, but also with immigrant groups bringing their languages to new settlements. Immigrant groups often bring their linguistic practices to new settlements, leading to distinct linguistic varieties within those communities. Dialectologists analyze these immigrant dialects to understand how languages develop and diversify in response to migration and cultural interactions.

Phonology is a sub-field of linguistics which studies the sound system of a specific language or set of languages. Whereas phonetics is about the physical production and perception of the sounds of speech, phonology describes the way sounds function within a given language or across languages. Phonology studies when sounds are or are not treated as distinct within a language. For example, the p in pin is aspirated, but the p in spin is not. In English these two sounds occur in complementary distribution and are not used to differentiate words, so they are considered allophones of the same phoneme.
In some other languages, like Thai and Quechua, the same difference of aspiration or non-aspiration differentiates words, and so the two sounds, or phones, are considered to be distinct phonemes. In addition to the minimal meaningful sounds (the phonemes), phonology studies how sounds alternate, such as the /p/ in English, and topics such as syllable structure, stress, accent, and intonation. Principles of phonology have also been applied to the analysis of sign languages, even though the phonological units do not consist of sounds. The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones.

Morphology is the study of patterns of word formation within a language. It attempts to formulate rules that model the knowledge of speakers. In the context of historical linguistics, the formal means of expression change over time. Words as units in the lexicon are the subject matter of lexicology. Along with clitics, words are generally accepted to be the smallest units of syntax; however, it is clear in most languages that words may be related to one another by rules. These rules are understood by the speaker and reflect specific patterns in how word formation interacts with speech.

Syntax is the study of the principles and rules for constructing sentences in natural languages. Syntax directly concerns the rules and principles that govern sentence structure in individual languages. Researchers attempt to describe languages in terms of these rules. Many historical linguists attempt to compare changes in sentence structure between related languages, or to find universal grammar rules that natural languages follow regardless of when and where they are spoken.[10]

In terms of evolutionary theory, historical linguistics (as opposed to research into the origin of language) studies Lamarckian acquired characteristics of languages.
This perspective explores how languages adapt and change over time in response to cultural, societal, and environmental factors. Language evolution within the framework of historical linguistics is akin to Lamarckism in the sense that linguistic traits acquired during an individual's lifetime can potentially influence subsequent generations of speakers.[11] Historical linguists often use the termsconservativeandinnovativeto describe the extent of change within a language variety relative to that of comparable varieties. Conservative languages change less over time when compared to innovative languages.
https://en.wikipedia.org/wiki/Historical_linguistics
Language has a long evolutionary history and is closely related to the brain, but what makes the human brain uniquely adapted to language is unclear. The regions of the brain that are involved in language in humans have similar analogues in apes and monkeys, and yet they do not use language. There may also be a genetic component: mutations in the FOXP2 gene prevent humans from constructing complete sentences.[1]

Regions such as Broca's and Wernicke's areas are where language is located in the brain – everything from speech to reading and writing.[2] Language itself is based on symbols used to represent concepts in the world, and this system appears to be housed in these areas. The language regions in human brains highly resemble similar regions in other primates, even though humans are the only species that use language.[3]

The brain structures of chimpanzees are very similar to those of humans. Both contain Broca's and Wernicke's homologues that are involved in communication. Broca's area is largely used for planning and producing vocalizations in both chimpanzees and humans. Wernicke's area appears to be where linguistic representations and symbols are mapped to specific concepts. This functionality is present in both chimpanzees and humans; the chimpanzee Wernicke's area is much more similar to its human counterpart than is the Broca's area, suggesting that Wernicke's area is more evolutionarily ancient than Broca's.[4]

In order to speak, the breathing system must be voluntarily repurposed to produce vocal sounds,[3] which allows the breathing mechanisms to be temporarily deactivated in favor of song or speech production.
The humanvocal tracthas evolved to be more suited to speaking, with a lowerlarynx, 90° turn in the windpipe, and large, round tongue.[5]Motor neuronsin birds and humans bypass the unconscious systems in the brainstem to give direct control of the larynx to the brain.[6] The earliest language was strictly vocal; reading and writing came later.[3]New research suggests that the combination ofgesturesand vocalizations may have led to the development of more complicated language in protohumans. Chimpanzees that produce attention-getting sounds show activation in areas of the brain that are highly similar to Broca's area in humans.[7][8]Even hand and mouth movements with no vocalizations cause very similar activation patterns in the Broca's area of both humans and monkeys.[4]When monkeys view other monkeys gesturing,mirror neuronsin the Broca's homologue activate. Groups of mirror neurons are specialized to respond only to one kind of viewed action, and it is currently believed that these may be an evolutionary origin to the neurons that are adapted for speech processing and production.[9] Thelanguage bioprogram hypothesisproposes that humans have an innate,cognitivegrammaticalstructure allowing them to develop and understand language. According to this theory, this system is embedded in human genetics and underpins the basic grammar of all languages.[4]Some evidence suggests that at least some of our linguistic capacities may be genetically controlled. Mutations in theFOXP2gene prevent people from combining words and phrases into sentences.[1]However, these genes are present in the heart, lungs, and brain, and their role is not entirely clear.[1] It is possible that the human capacity for grammar evolved from non-semantic behavior like singing.[10]Birds have the ability to produce, process, and learn complex vocalizations, but the units of a birdsong, when removed from the larger meaning and context of the birdsong as a whole, have no inherent meaning. 
Early hominids may have evolved capacities for similar, non-semantic purposes, that were later modified forsymbolic language.[6]
https://en.wikipedia.org/wiki/Neurobiological_origins_of_language
Theorigins of society— the evolutionary emergence of distinctively human social organization — is an important topic within evolutionary biology, anthropology, prehistory and palaeolithic archaeology.[1][2]While little is known for certain, debates since Hobbes[3]and Rousseau[4]have returned again and again to the philosophical, moral and evolutionary questions posed. Arguably the most influential theory of human social origins is that ofThomas Hobbes, who in hisLeviathan[5]argued that without strong government, society would collapse intoBellum omnium contra omnes— "the war of all against all": In such condition, there is no place for industry; because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving, and removing, such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short. Hobbes' innovation was to attribute the establishment of society to a founding 'social contract', in which the Crown's subjects surrender some part of their freedom in return for security. If Hobbes' idea is accepted, it follows that society could not have emerged prior to the state. This school of thought has remained influential to this day.[6]Prominent in this respect is British archaeologistColin Renfrew(Baron Renfrew of Kaimsthorn), who points out that the state did not emerge until long after the evolution ofHomo sapiens. The earliest representatives of our species, according to Renfrew, may well have beenanatomicallymodern, but they were not yetcognitivelyorbehaviourallymodern. For example, they lacked political leadership, large-scale cooperation, food production, organised religion, law or symbolic artefacts. 
Humans were simply hunter-gatherers, who — much like extant apes — ate whatever food they could find in the vicinity. Renfrew controversially suggests that hunter-gatherers to this day think and socialise along lines not radically different from those of their nonhuman primate counterparts. In particular, he says that they do not "ascribe symbolic meaning to material objects" and for that reason "lack fully developed 'mind.'"[citation needed] However, hunter-gatherer ethnographers emphasise that extant foraging peoples certainly do have social institutions — notably institutionalised rights and duties codified in formal systems of kinship.[7]Elaborate rituals such as initiation ceremonies serve to cement contracts and commitments, quite independently of the state.[8]Other scholars would add that insofar as we can speak of "human revolutions" — "major transitions" in human evolution[9]— the first was not the Neolithic Revolution but the rise of symbolic culture that occurred toward the end of the Middle Stone Age.[10][11] Arguing the exact opposite of Hobbes's position, anarchist anthropologistPierre Clastresviews the state and society as mutually incompatible: genuine society is always struggling to surviveagainstthe state.[12] Like Hobbes,Jean-Jacques Rousseauargued that society was born in a social contract. In Rousseau's case, however, sovereignty is vested in the entire populace, who enter into the contract directly with one another. "The problem", he explained, "is to find a form of association which will defend and protect with the whole common force the person and goods of each associate, and in which each, while uniting himself with all, may still obey himself alone, and remain as free as before." This is the fundamental problem of which the Social Contract provides the solution. The contract's clauses, Rousseau continued, may be reduced to one — "the total alienation of each associate, together with all his rights, to the whole community. 
Each man, in giving himself to all, gives himself to nobody; and as there is no associate over whom he does not acquire the same right as he yields others over himself, he gains an equivalent for everything he loses, and an increase of force for the preservation of what he has". In other words: "Each of us puts his person and all his power in common under the supreme direction of the general will, and, in our corporate capacity, we receive each member as an indivisible part of the whole." At once, in place of the individual personality of each contracting party, this act of association creates a moral and collective body, composed of as many members as the assembly contains votes, and receiving from this act its unity, its common identity, its life and its will.[13]By this means, each member of the community acquires not only the capacities of the whole but also, for the first time, rational mentality: The passage from the state of nature to the civil state produces a very remarkable change in man, by substituting justice for instinct in his conduct, and giving his actions the morality they had formerly lacked. Then only, when the voice of duty takes the place of physical impulses and right of appetite, does man, who so far had considered only himself, find that he is forced to act on different principles, and to consult his reason before listening to his inclinations. In his influential book,Ancient Law(1861), Henry Maine argued that in early times, the basic unit of human social organisation was the patriarchal family: The effect of the evidence derived from comparative jurisprudence is to establish the view of the primeval condition of the human race which is known as the Patriarchal Theory. Maine was hostile to French revolutionary and other radical social ideas, and his motives were partly political.
He sought to undermine the legacy of Rousseau and other advocates of man's natural rights by asserting that originally, no one had any rights at all – ‘every man, living during the greater part of his life under the patriarchal despotism, was practically controlled in all his actions by a regimen not of law but of caprice’.[14]Not only were the patriarch's children subject to what Maine calls his ‘despotism’: his wife and his slaves were equally affected. The very notion of kinship, according to Maine, was simply a way of categorizing those who were forcibly subjected to the despot's arbitrary rule. Maine later added a Darwinian strand to this argument. In hisThe Descent of Man,Darwin had cited reports that a wild-living male gorilla would monopolise for itself as large a harem of females as it could violently defend. Maine endorsed Darwin's speculation that ‘primeval man’ probably 'lived in small communities, each with as many wives as he could support and obtain, whom he would have jealously guarded against all other men’.[15]Under pressure to spell out exactly what he meant by the term 'patriarchy', Maine clarified that ‘sexual jealousy, indulged through power, might serve as a definition of the Patriarchal Family’.[16] In his influential book,Ancient Society(1877), its title echoing Maine'sAncient Law,Lewis Henry Morganproposed a very different theory. Morgan insisted that throughout the earlier periods of human history, neither the state nor the family existed. It may be here premised that all forms of government are reducible to two general plans, using the word plan in its scientific sense. In their bases the two are fundamentally distinct. The first, in the order of time, is founded upon persons, and upon relations purely personal, and may be distinguished as a society(societas). 
The gens is the unit of this organization; giving as the successive stages of integration, in the archaic period, the gens, the phratry, the tribe, and the confederacy of tribes, which constituted a people or nation (populus). At a later period a coalescence of tribes in the same area into a nation took the place of a confederacy of tribes occupying independent areas. Such, through prolonged ages, after the gens appeared, was the substantially universal organization of ancient society; and it remained among the Greeks and Romans after civilization supervened. The second is founded upon territory and upon property, and may be distinguished as a state(civitas). In place of both family and state, according to Morgan, was thegens— nowadays termed the 'clan' — based initially on matrilocal residence and matrilineal descent. This aspect of Morgan's theory, later endorsed by Karl Marx and Friedrich Engels, is nowadays widely considered discredited (but for a critical survey of the current consensus, see Knight 2008, 'Early Human Kinship Was Matrilineal'[17]). Friedrich Engelsbuilt on Morgan's ideas in his 1884 essay,The Origin of the Family, Private Property and the State in the light of the researches of Lewis Henry Morgan.His primary interest was the position of women in early society, and — in particular — Morgan's insistence that the matrilineal clan preceded the family as society's fundamental unit. 'The mother-right gens', wrote Engels in his survey of contemporary historical materialist scholarship, 'has become the pivot around which the entire science turns...' Engels argued that the matrilineal clan represented a principle of self-organization so vibrant and effective that it allowed no room for patriarchal dominance or the territorial state. The first class antagonism which appears in human history coincides with the development of the antagonism between man and woman in monogamian marriage, and the first class oppression with that of the female sex by the male.
Emile Durkheimconsidered that in order to exist, any human social system must counteract the natural tendency for the sexes to promiscuously conjoin. He argued that social order presupposes sexual morality, which is expressed in prohibitions against sex with certain people or during certain periods — in traditional societies particularly during menstruation. One first fact is certain: that is, that the entire system of prohibitions must strictly conform to the ideas that primitive man had about menstruation and about menstrual blood. For all these taboos start only with the onset of puberty: and it is only when the first signs of blood appear that they reach their maximum rigour. The incest taboo, wrote Durkheim in 1898, is no more than a particular example of something more basic and universal - the ritualistic setting apart of 'the sacred' from 'the profane'. This begins as the segregation of the sexes, each of which - at least on important occasions - is 'sacred' or 'set apart' from the other. 'The two sexes', as Durkheim explains, 'must avoid each other with the same care as the profane flees from the sacred and the sacred from the profane.' Women as sisters act out the role of 'sacred' beings invested 'with an isolating power of some sort, a power which holds the masculine population at a distance.' Their menstrual blood in particular sets them in a category apart, exercising a 'type of repulsing action which keeps the other sex far from them'. 
In this way, the earliest ritual structure emerges — establishing morally regulated 'society' for the first time.[18] Charles Darwinpictured early human society as resembling that of apes, with one or more dominant males jealously guarding a harem of females.[19]In his myth of the 'Primal Horde',Sigmund Freudlater took all this as his starting point but then postulated an insurrection mounted by the tyrant's own sons: All that we find there is a violent and jealous father who keeps all the females for himself and drives away his sons as they grow up…. One day the brothers who had been driven out came together, killed and devoured their father and so made an end of the patriarchal horde. Following this, the band of brothers were about to take sexual possession of their mothers and sisters when suddenly they were overcome with remorse. In their contradictory emotional state, their dead father now became stronger than the living one had been. In memory of him, the brothers revoked their deed by forbidding the killing and eating of the 'totem' (as their father had now become) and renouncing their claim to the women who had just been set free. In this way, the two fundamental taboos ofprimitive society– not to eat the totem and not to marry one's sisters – were established for the first time. A related but less dramatic version of Freud's 'sexual revolution' idea was proposed in 1960 by American social anthropologistMarshall Sahlins.[20]Somehow, he writes, the world of primate brute competition and sexual dominance was turned upside-down: The decisive battle between early culture and human nature must have been waged on the field ofprimate sexuality…. Among subhuman primates sex had organized society; the customs of hunters and gatherers testify eloquently that now society was to organize sex…. 
In selective adaptation to the perils of the Stone Age, human society overcame or subordinated such primate propensities as selfishness, indiscriminate sexuality, dominance and brute competition. It substituted kinship and co-operation for conflict, placed solidarity over sex, morality over might. In its earliest days it accomplished the greatest reform in history, the overthrow of human primate nature, and thereby secured the evolutionary future of the species. Once a prehistoric hunting band institutionalized a successful and decisive rebellion, and did away with the alpha-male role permanently... it is easy to see how this institution would have spread. If we accept Rousseau's line of reasoning, no single dominant individual is needed to embody society, to guarantee security, or to enforce social contracts. The people themselves can do these things, combining to enforce the general will. A modern origins theory along these lines is that of evolutionary anthropologistChristopher Boehm. Boehm argues that ape social organisation tends to be despotic, typically with one or more dominant males monopolising access to the locally available females. But wherever there is dominance, we can also expect resistance. In the human case, resistance to being personally dominated intensified as humans used their social intelligence to form coalitions. Eventually, a point was reached when the costs of attempting to impose dominance became so high that the strategy was no longer evolutionarily stable, whereupon social life tipped over into 'reverse dominance' — defined as a situation in which only the entire community, on guard against primate-style individual dominance, is permitted to use force to suppress deviant behaviour.[21] Human beings, writes social anthropologist Ernest Gellner, are not genetically programmed to be members of this or that social order. You can take a human infant and place it into any kind of social order and it will function acceptably. 
What makes human society so distinctive is the fabulous range of quite different forms it takes across the world. Yet in any given society, the range of permitted behaviours is quite narrowly constrained. This is not owing to the existence of any externally imposed system of rewards and punishments. The constraints come from within — from certain compulsive moral concepts which members of the social order have internalised. The society installs these concepts in each individual's psyche in the manner first identified by Emile Durkheim, namely, by means of collective rituals such as initiation rites. Therefore, the problem of the origins of society boils down to the problem of the origins of collective ritual. How is a society established, and a series of societies diversified, whilst each of them is restrained from chaotically exploiting that wide diversity of possible human behaviour? A theory is available concerning how this may be done and it is one of the basic theories of social anthropology. The way in which you restrain people from doing a wide variety of things, not compatible with the social order of which they are members, is that you subject them to ritual. The process is simple: you make them dance around a totem pole until they are wild with excitement, and become jellies in the hysteria of collective frenzy; you enhance their emotional state by any device, by all the locally available audio-visual aids, drugs, music and so on; and once they are really high, you stamp upon their minds the type of concept or notion to which they subsequently become enslaved. Feminist scholars — among them palaeoanthropologists Leslie Aiello and Camilla Power — take similar arguments a step further, arguing that any reform or revolution which overthrew male dominance must have been led by women. 
Evolving human females, Power and Aiello suggest, actively separated themselves from males on a periodic basis, using their own blood (and/or pigments such as red ochre) to mark themselves as fertile and defiant: The sexual division of labor entails differentiation of roles in food procurement, with logistic hunting of large game by males, co-operation and exchange of products. Our hypothesis is that symbolism arose in this context. To minimize energetic costs of travel, coalitions of women began to invest in home bases. To secure this strategy, women would have to use their attractive, collective signal of impending fertility in a wholly new way: by signalling refusal of sexual access except to males who returned "home" with provisions. Menstruation — real or artificial — while biologically the wrong time for fertile sex, is psychologically the right moment for focusing men's minds on imminent hunting, since it offers the prospect of fertile sex in the near future. In similar vein, anthropologist Chris Knight argues that Boehm's idea of a 'coalition of everyone' is hard to envisage, unless — along the lines of a modern industrial picket line — it was formed to co-ordinate 'sex-strike' action against badly behaving males: ....male dominance had to be overthrown because the unending prioritising of male short-term sexual interests could lead only to the permanence and institutionalisation of behavioural conflict between the sexes, between the generations and also between rival males. If the symbolic, cultural domain was to emerge, what was needed was a political collectivity — an alliance — capable of transcending such conflicts. ... Only the consistent defence and self-defence of mothers with their offspring could produce a collectivity embodying interests of a sufficiently broad, universalistic kind. 
In virtually all hunter-gatherer ethnographies, according to Knight, a persistent theme is that 'women like meat',[22]and that they determinedly use their collective bargaining power to motivate men to hunt for them and bring home their kills — on pain of exclusion from sex.[23][24]Arguments about women's crucial role in domesticating males — motivating them to cooperate — have also been advanced by anthropologists Kristen Hawkes,[25]Sarah Hrdy[26]and Bruce Knauft[27]among others. Meanwhile, other evolutionary scientists continue to envisage uninterrupted male dominance, continuity with primate social systems and the emergence of society on a gradualist basis without revolutionary leaps.[28] In the words of psychologist Steven Pinker: 'I consider Trivers one of the great thinkers in the history of Western thought. It would not be too much of an exaggeration to say that he has provided a scientific explanation for the human condition: the intricately complicated and endlessly fascinating relationships that bind us to one another.' In his 1985 book,Social Evolution,[29]Robert Triversoutlines the theoretical framework used today by most evolutionary biologists to understand how and why societies are established. Trivers sets out from the fundamental fact that genes survive beyond the death of the bodies they inhabit, because copies of the same gene may be replicated in multiple different bodies.
From this, it follows that a creature should behave altruistically to the extent that those benefiting carry the same genes — 'inclusive fitness', as this source of cooperation in nature is termed.[30]Where animals are unrelated, cooperation should be limited to 'reciprocal altruism' or 'tit-for-tat'.[31]Whereas biologists had previously taken parent-offspring cooperation for granted, Trivers predicted on theoretical grounds both cooperation and conflict — as when a mother needs to wean an existing baby (even against its will) in order to make way for another.[32]Previously, biologists had interpreted male infanticidal behaviour as aberrant and inexplicable or, alternatively, as a necessary strategy for culling excess population.[33]Trivers was able to show that such behaviour was a logical strategy by males to enhance their own reproductive success at the expense of conspecifics including rival males. Ape or monkey females whose babies are threatened have directly opposed interests, often forming coalitions to defend themselves and their offspring against infanticidal males.[34] Human society, according to Trivers, is unusual in that it involves the male of the species investing parental care in his own offspring — a rare pattern for a primate. Where such cooperation occurs, it is not enough to take it for granted: in Trivers' view we need toexplainit using an overarching theoretical framework applicable to humans and nonhumans alike.[35] Everybody has a social life. All living creatures reproduce and reproduction is a social event, since at its bare minimum it involves the genetic and material construction of one individual by another. In turn, differences between individuals in the number of their surviving offspring (natural selection) is the driving force behind organic evolution. Life is intrinsically social and it evolves through a process of natural selection which is itself social.
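The two mechanisms named above have compact formal statements. Hamilton's rule holds that an altruistic act is favoured by selection when relatedness times benefit exceeds cost (r × b > c), while 'tit-for-tat' among non-relatives simply means cooperating on the first encounter and thereafter copying the partner's previous move. A minimal Python sketch (the numeric values are illustrative only):

```python
# Hamilton's rule: an altruistic act is favoured when r * b > c,
# where r is the genetic relatedness between actor and recipient,
# b the fitness benefit to the recipient, and c the cost to the actor.
def altruism_favoured(r, b, c):
    return r * b > c

# Full siblings: r = 0.5, so a cost of 1 is repaid by a benefit above 2.
assert altruism_favoured(r=0.5, b=3.0, c=1.0)
# First cousins: r = 0.125, and the same benefit no longer suffices.
assert not altruism_favoured(r=0.125, b=3.0, c=1.0)

# 'Tit-for-tat' among unrelated individuals: cooperate ('C') on the
# first move, thereafter mirror whatever the partner did last.
def tit_for_tat(partner_history):
    return 'C' if not partner_history else partner_history[-1]
```

The asymmetry Trivers identified follows directly from the rule: a mother is related to each offspring by r = 0.5, but an offspring values itself at r = 1, so the benefit threshold at which weaning pays differs for the two parties, generating predictable conflict.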
For these reasons social evolution refers not only to the evolution of social relationships between individuals but also to deeper themes of biological organization stretching from gene to community. Robin Dunbaroriginally studied gelada baboons in the wild in Ethiopia, and has done much to synthesise modern primatological knowledge with Darwinian theory into a comprehensive overall picture. The components of primate social systems 'are essentially alliances of a political nature aimed at enabling the animals concerned to achieve more effective solutions to particular problems of survival and reproduction'.[36]Primate societies are in essence 'multi-layered sets of coalitions'.[37]Although physical fights are ultimately decisive, the social mobilisation of allies usually decides matters and requires skills that go beyond mere fighting ability. The manipulation and use of coalitions demands sophisticated social — more preciselypolitical— intelligence. Usually but not always, males exercise dominance over females. Even where male despotism prevails, females typically gang up with one another to pursue agendas of their own. When a male gelada baboon attacks a previously dominant rival so as to take over his harem, the females concerned may insist on their own say in the outcome. At various stages during the fighting, the females may 'vote' among themselves on whether to accept the provisional outcome. Rejection is signalled by refusing to groom the challenger; acceptance is signalled by going up to him and grooming him. According to Dunbar, the ultimate outcome of an inter-male 'sexual fight' always depends on the female 'vote'.[38] Dunbar points out that in a primate social system, lower-ranking females will typically suffer the most intense harassment. Consequently, they will be the first to form coalitions in self-defence. But maintaining commitment from coalition allies involves much time-consuming manual grooming, putting pressure on time-budgets. 
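Dunbar's time-budget argument can be made concrete with a toy model. Assuming, purely for illustration, that each coalition ally must be groomed for a fixed number of minutes per day and that grooming cannot exceed a fixed fraction of the waking day (the per-ally cost, day length and budget cap below are hypothetical round numbers, not Dunbar's published figures):

```python
# Toy model of the grooming time-budget problem: the cost of servicing
# allies grows linearly with coalition size, while available grooming
# time is capped. All constants are hypothetical round numbers chosen
# only to show the shape of the argument.
def grooming_budget_ok(group_size, minutes_per_ally=10,
                       waking_minutes=720, budget_fraction=0.2):
    required = group_size * minutes_per_ally      # linear in group size
    available = waking_minutes * budget_fraction  # fixed cap (144 min here)
    return required <= available

assert grooming_budget_ok(10)       # 100 minutes fits within the cap
assert not grooming_budget_ok(20)   # 200 minutes exceeds it
```

On these toy numbers the budget is exhausted between 14 and 15 allies, which is the logic behind looking for a cheaper bonding channel, such as the 'vocal grooming' discussed next, once groups grow large.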
In the case of evolving humans, who were living in increasingly large groups, the costs would soon have outweighed the benefits — unless some more efficient way of maintaining relationships could be found. Dunbar argues that 'vocal grooming' — using the voice to signal commitment — was the time-saving solution adopted, and that this led eventually to speech. Dunbar goes on to suggest (citing evolutionary anthropologist Chris Knight[39][40]) thatdistinctively humansociety may have evolved under pressure from female ritual and 'gossiping' coalitions established to dissuade males from fighting one another and instead cooperate in hunting for the benefit of the whole camp: If females formed the core of these early groups, and language evolved to bond these groups, it naturally follows that the early human females were the first to speak. This reinforces the suggestion that language was first used to create a sense of emotional solidarity between allies. Chris Knight has argued a passionate case for the idea that language first evolved to allow the females in these early groups to band together to force males to invest in them and their offspring, principally by hunting for meat. This would be consistent with the fact that, among modern humans, women are generally better at verbal skills than men, as well as being more skilful in the social domain. Dunbar stresses that this is currently a minority theory among specialists in human origins — most still support the 'bison-down-at-the-lake' theory attributing early language and cooperation to the imperatives of men's activities such as hunting.
Despite this, he argues that 'female bonding may have been a more powerful force in human evolution than is sometimes supposed'.[41]Although still controversial, the idea that female coalitions may have played a decisive role has subsequently received strong support from a number of anthropologists including Sarah Hrdy,[42]Camilla Power,[43]Ian Watts[44]and Jerome Lewis.[45]It is also consistent with recent studies by population geneticists (see Verdu et al. 2013[46]for Central African Pygmies; Schlebusch 2010[47]for Khoisan) showing a deep-time tendency to matrilocality among African hunter-gatherers.
https://en.wikipedia.org/wiki/Origins_of_society
Theorigin of speechdiffers from theorigin of languagebecause language is not necessarily spoken; it could equally bewrittenorsigned. Speech is a fundamental aspect of human communication and plays a vital role in the everyday lives of humans. It allows them to convey thoughts, emotions, and ideas, and provides the ability to connect with others and shape collective reality.[1][2] Many attempts have been made to explain scientifically how speech emerged in humans, although to date no theory has generated agreement. Non-human primates, like many other animals, have evolved specialized mechanisms for producing sounds for purposes of social communication.[3]On the other hand, no monkey or ape uses itstonguefor such purposes.[4][5]The human species' unprecedented use of the tongue, lips and other moveable parts seems to place speech in a quite separate category, making its evolutionary emergence an intriguing theoretical challenge in the eyes of many scholars.[6] The termmodalitymeans the chosen representational format for encoding and transmitting information. A striking feature of language is that it ismodality-independent.Should a child be prevented from hearing or producing sound, its innate capacity to master a language may equally find expression in signing.Sign languagesof the deaf are independently invented and have all the major properties of spoken language except for the modality of transmission.[7][8][9][10]From this it appears that thelanguage centresof the human brain must have evolved to function optimally, irrespective of the selected modality. The detachment from modality-specific inputs may represent a substantial change in neural organization, one that affects not only imitation but also communication; only humans can lose one modality (e.g. hearing) and make up for this deficit by communicating with complete competence in a different modality (i.e. signing).
Animal communication systems routinely combine visible with audible properties and effects, but none is modality-independent. For example, no vocally-impaired whale, dolphin, or songbird could express its song repertoire equally in visual display. Indeed, in the case of animal communication, message and modality are not capable of being disentangled. Whatever message is being conveyed stems from the intrinsic properties of the signal. Modality independence should not be confused with the ordinary phenomenon ofmultimodality. Monkeys and apes rely on a repertoire of species-specific "gesture-calls" – emotionally-expressive vocalisations inseparable from the visual displays which accompany them.[12][13]Humans also have species-specific gesture-calls – laughs, cries, sobs, etc. – together with involuntary gestures accompanying speech.[14][15][16]Many animal displays are polymodal in that each appears designed to exploit multiple channels simultaneously. The human linguistic property of modality independence is conceptually distinct from polymodality. It allows the speaker to encode the informational content of a message in a single channel whilst switching between channels as necessary. Modern city-dwellers switch effortlessly between the spoken word and writing in its various forms – handwriting, typing,email, etc. Whichever modality is chosen, it can reliably transmit the full message content without external assistance of any kind. When talking on thetelephone, for example, any accompanying facial or manual gestures, however natural to the speaker, are not strictly necessary. When typing or manually signing, conversely, there is no need to add sounds. 
In manyAustralian Aboriginal cultures, a section of the population – perhaps women observing a ritualtaboo– traditionally restrict themselves for extended periods to a silent (manually-signed) version of their language.[17]Then, when released from the taboo, these same individuals resume narrating stories by the fireside or in the dark, switching to pure sound without sacrifice of informational content. Speaking is the default modality for language in all cultures. Humans' first recourse is to encode their thoughts in sound – a method which depends on sophisticated capacities for controlling the lips, tongue and other components of the vocal apparatus. The speech organs evolved in the first instance not for speech but for more basic bodily functions such as feeding and breathing. Nonhuman primates have broadly similar organs, but with different neural controls.[6]Non-human apes use their highly-flexible, maneuverable tongues for eating but not for vocalizing. When an ape is not eating, fine motor control over its tongue is deactivated.[4][5]Eitherit is performing gymnastics with its tongueorit is vocalising; it cannot perform both activities simultaneously. Since this applies tomammalsin general,Homo sapiensare exceptional in harnessing mechanisms designed forrespirationandingestionfor the radically different requirements of articulate speech.[18] The word "language" derives from the Latinlingua,"tongue".Phoneticiansagree that the tongue is the most important speech articulator, followed by the lips. Anatural languagecan be viewed as a particular way of using the tongue to express thought. The human tongue has an unusual shape. In most mammals, it is a long, flat structure contained largely within the mouth. It is attached at the rear to thehyoid bone, situated below the oral level in thepharynx. 
In humans, the tongue has an almost circularsagittal(midline) contour, much of it lying vertically down an extendedpharynx, where it is attached to a hyoid bone in a lowered position. Partly as a result of this, the horizontal (inside-the-mouth) and vertical (down-the-throat) tubes forming the supralaryngeal vocal tract (SVT) are almost equal in length (whereas in other species, the vertical section is shorter). As humans move their jaws up and down, the tongue can vary the cross-sectional area of each tube independently by about 10:1, altering formant frequencies accordingly. That the tubes are joined at a right angle permits pronunciation of thevowels[i], [u]and[a], which nonhuman primates cannot do.[19]Even when not performed particularly accurately, in humans the articulatory gymnastics needed to distinguish these vowels yield consistent, distinctive acoustic results, illustrating the quantal[clarification needed]nature of human speech sounds.[20]It may not be coincidental that[i], [u]and[a]are the most common vowels in the world's languages.[21]Human tongues are much shorter and thinner than those of other mammals and are composed of a large number of muscles, which helps them shape a variety of sounds within the oral cavity. The diversity of sound production is also increased by humans' ability to open and close the airway, allowing varying amounts of air to exit through the nose. The fine motor movements associated with the tongue and the airway make humans more capable of producing a wide range of intricate shapes in order to produce sounds at different rates and intensities.[22] In humans, the lips are important for the production ofstopsandfricatives, in addition tovowels. Nothing, however, suggests that the lips evolved for those reasons. Duringprimate evolution, a shift fromnocturnaltodiurnalactivity intarsiers, monkeys and apes (thehaplorhines) brought with it an increased reliance on vision at the expense ofolfaction.
As a result, the snout became reduced and therhinariumor "wet nose" was lost. The muscles of the face and lips consequently became less constrained, enabling their co-option to serve purposes of facial expression. The lips also became thicker, and the oral cavity hidden behind became smaller.[22]Hence, according to Ann MacLarnon, "the evolution of mobile, muscular lips, so important to human speech, was the exaptive result of the evolution of diurnality and visual communication in the common ancestor of haplorhines".[23]It is unclear whether human lips have undergone a more recent adaptation to the specific requirements of speech. Compared with nonhuman primates, humans have significantly enhanced control of breathing, enabling exhalations to be extended and inhalations shortened as we speak. Whilst we are speaking,intercostalandinterior abdominal musclesare recruited to expand thethoraxand draw air into the lungs, and subsequently to control the release of air as the lungs deflate. The muscles concerned are markedly moreinnervatedin humans than in nonhuman primates.[24]Evidence from fossil hominins suggests that the necessary enlargement of thevertebral canal, and thereforespinal corddimensions, may not have occurred inAustralopithecusorHomo erectusbut was present in theNeanderthalsandearly modern humans.[25][26] Thelarynxor voice box is an organ in the neck housing thevocal folds, which are responsible forphonation. In humans, the larynx is descended: it is positioned lower than in other primates. This is because the evolution of humans to an upright position shifted the head directly above the spinal cord, forcing everything else downward. The repositioning of the larynx resulted in a longer cavity called the pharynx, which is responsible for increasing the range and clarity of the sound being produced.
Other primates have almost no pharynx; therefore, their vocal power is significantly lower.[22]Humans are not unique in this respect: goats, dogs, pigs andtamarinslower the larynx temporarily, to emit loud calls.[27]Several deer species have a permanently lowered larynx, which may be lowered still further by males during theirroaring displays.[28]Lions, jaguars, cheetahs and domestic cats also do this.[29]However, laryngeal descent in nonhumans (according toPhilip Lieberman) is not accompanied by descent of the hyoid; hence the tongue remains horizontal in the oral cavity, preventing it from acting as a pharyngeal articulator.[30] Despite all this, scholars remain divided as to how "special" the human vocal tract really is. It has been shown that the larynx does descend to some extent during development in chimpanzees, followed by hyoidal descent.[31]As against this, Philip Lieberman points out that only humans have evolved permanent and substantial laryngeal descent in association with hyoidal descent, resulting in a curved tongue and two-tube vocal tract with 1:1 proportions.[citation needed]Uniquely in the human case, simple contact between theepiglottisandvelumis no longer possible, disrupting the normal mammalian separation of the respiratory and digestive tracts during swallowing. Since this entails substantial costs – increasing the risk of choking whilst swallowing food – we are forced to ask what benefits might have outweighed those costs. Some claim the clear benefit must have been speech, but others contest this.
One objection is that humans are in fact not seriously at risk of choking on food: medical statistics indicate that accidents of this kind are extremely rare.[32]Another objection is that in the view of most scholars, speech as we know it emerged relatively late in human evolution, roughly contemporaneously with the emergence ofHomo sapiens.[33]A development as complex as the reconfiguration of the human vocal tract would have required much more time, implying an early date of origin. This discrepancy in timescales undermines the idea that human vocal flexibility was initially driven by selection pressures for speech. At least one orangutan has demonstrated the ability to control the voice box.[34] To lower the larynx is to increase the length of the vocal tract, in turn loweringformantfrequencies so that the voice sounds "deeper" – giving an impression of greater size.John Ohalaargued that the function of the lowered larynx in humans, especially males, is probably to enhance threat displays rather than speech itself.[35]Ohala pointed out that if the lowered larynx were an adaptation for speech, we would expect adult human males to be better adapted in this respect than adult females, whose larynx is considerably less low. In fact, females invariably outperform males in verbal tests, falsifying this whole line of reasoning.[citation needed]William Tecumseh Fitchlikewise argues that this was the original selective advantage of laryngeal lowering in humans. Although, according to Fitch, the initial lowering of the larynx in humans had nothing to do with speech, the increased range of possible formant patterns was subsequently co-opted for speech. Size exaggeration remains the sole function of the extreme laryngeal descent observed in male deer. Consistent with the size exaggeration hypothesis, a second descent of the larynx occurs at puberty in humans, although only in males. 
In response to the objection that the larynx is descended in human females, Fitch suggests that mothers vocalising to protect their infants would also have benefited from this ability.[36] Most specialists credit the Neanderthals with speech abilities not radically different from those of modernHomo sapiens. An indirect line of argument is that theirtoolmakingandhunting tacticswould have been difficult to learn or execute without some kind of speech.[37]A recent extraction ofDNAfrom Neanderthal bones indicates that Neanderthals had the same version of theFOXP2gene as modern humans. This gene, mistakenly described as the "grammar gene", plays a role in controlling the orofacial movements which (in modern humans) are involved in speech.[38] During the 1970s, it was widely believed that the Neanderthals lacked modern speech capacities.[39]It was claimed that they possessed a hyoid bone so high up in the vocal tract as to preclude the possibility of producing certain vowel sounds. The hyoid bone is present in many mammals. It allows a wide range of tongue, pharyngeal and laryngeal movements by bracing these structures alongside each other in order to produce variation.[40]It is now realised that its lowered position is not unique toHomo sapiens, whilst its relevance to vocal flexibility may have been overstated: although men have a lower larynx, they do not produce a wider range of sounds than women or two-year-old babies. 
There is no evidence that the larynx position of the Neanderthals impeded the range of vowel sounds they could produce.[41]The discovery of a modern-looking hyoid bone of a Neanderthal man in theKebara CaveinIsraelled its discoverers to argue that the Neanderthals had a descendedlarynx, and thus human-likespeechcapabilities.[42][43]However, other researchers have claimed that themorphologyof the hyoid is not indicative of the larynx's position.[6]It is necessary to take into consideration theskull base, themandible, thecervical vertebraeand a cranial reference plane.[44][45] The morphology of theouterandmiddle earofMiddle Pleistocenehominins fromAtapuerca, Spain, believed to be proto-Neanderthal, suggests they had an auditory sensitivity similar to modern humans and very different from chimpanzees. They were probably able to differentiate between many different speech sounds.[46] Thehypoglossal nerveplays an important role in controlling movements of the tongue. In 1998, a research team used the size of thehypoglossal canalin the base of fossil skulls in an attempt to estimate the relative number ofnerve fibres, claiming on this basis that Middle Pleistocene hominins and Neanderthals had more fine-tuned tongue control than eitherAustralopithecinesor apes.[47]Subsequently, however, it was demonstrated that hypoglossal canal size and nerve sizes are not correlated,[48]and it is now accepted that such evidence is uninformative about the timing of human speech evolution.[49] According to one influential school,[50][51]the human vocal apparatus is intrinsically digital on the model of a keyboard or digital computer[clarification needed](see below). Nothing about a chimpanzee's vocal apparatus suggests a digital keyboard[clarification needed], notwithstanding the anatomical and physiological similarities. 
This poses the question as to when and how, during the course of human evolution, the transition from analog to digital structure and function occurred. The human supralaryngeal tract is said to be digital in the sense that it is an arrangement of moveable toggles or switches, each of which, at any one time, must be in one state or another. The vocal cords, for example, are either vibrating (producing a sound) or not vibrating (in silent mode). By virtue of simple physics, the correspondingdistinctive feature– in this case, "voicing" – cannot be somewhere in between. The options are limited to "off" and "on". Equally digital is the feature known as "nasalisation". At any given moment thesoft palateor velum either allows or does not allow sound to resonate in thenasal chamber. In the case of lip and tongue positions, more than two digital states may be allowed. The theory that speech sounds are composite entities constituted by complexes of binary phonetic features was first advanced in 1938 by the Russian linguistRoman Jakobson.[52]A prominent early supporter of this approach wasNoam Chomsky, who went on to extend it from phonology to language more generally, in particular to the study ofsyntaxandsemantics.[53][54][55]In his 1965 book,Aspects of the Theory of Syntax,[56]Chomsky treated semantic concepts as combinations of binary-digital atomic elements explicitly on the model of distinctive features theory. The lexical item "bachelor", on this basis, would be expressed as [+ Human], [+ Male], [- Married]. Supporters of this approach view the vowels and consonants recognised by speakers of a particular language ordialectat a particular time as cultural entities of little scientific interest. From a natural science standpoint, the units which matter are those common toHomo sapiensby virtue of biological nature. 
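The either/or character attributed to distinctive features can be sketched as data. The following toy illustration (simplified, hypothetical feature names; not a serious phonological analysis) represents segments and lexical items as bundles of binary features, so that minimal contrasts fall out as single on/off switches:

```python
# Toy sketch of binary distinctive features (illustrative feature names only,
# not a serious phonological analysis).

def contrast(a, b):
    """Return the features whose on/off values differ between two bundles."""
    return {f for f in a.keys() & b.keys() if a[f] != b[f]}

# Speech segments as bundles of binary features such as voicing and nasality:
p = {"voiced": False, "nasal": False}
b = {"voiced": True,  "nasal": False}
m = {"voiced": True,  "nasal": True}

assert contrast(p, b) == {"voiced"}  # /p/ vs /b/: a single voicing switch
assert contrast(b, m) == {"nasal"}   # /b/ vs /m/: a single nasality switch

# Chomsky's semantic analogue: "bachelor" as [+ Human], [+ Male], [- Married]
bachelor = {"human": True, "male": True, "married": False}
```

On this picture, the inventory of possible sounds is simply the set of combinations of feature values, which is what the next paragraph describes.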
By combining the atomic elements or "features" with which all humans are innately equipped, anyone may in principle generate the entire range of vowels and consonants to be found in any of the world's languages, whether past, present or future. The distinctive features are in this sense atomic components of a universal language. In recent years, the notion of an innate "universal grammar" underlying phonological variation has been called into question. In amonographon speech sounds,The Sounds of the World's Languages,Peter LadefogedandIan Maddiesonfound virtually no basis for the postulation of some small number of fixed, discrete, universal phonetic features.[21]Examining 305 languages, for example, they encountered vowels that were positioned basically everywhere along the articulatory and acoustic continuum. Ladefoged concluded that phonological features are not determined by human nature: "Phonological features are best regarded as artifacts that linguists have devised in order to describe linguistic systems".[57] Self-organisationcharacterises systems where macroscopic structures are spontaneously formed out of local interactions between the many components of the system.[58]In self-organised systems, global organisational properties are not to be found at the local level. In colloquial terms, self-organisation is roughly captured by the idea of "bottom-up" (as opposed to "top-down") organisation. Examples of self-organised systems range from ice crystals to galaxy spirals in the inorganic world. According to many phoneticians, the sounds of language arrange and re-arrange themselves through self-organisation.[58][59][60]Speech sounds have both perceptual (how one hears them) and articulatory (how one produces them) properties, all with continuous values. Speakers tend to minimise effort, favouring ease of articulation over clarity. Listeners do the opposite, favouring sounds that are easy to distinguish even if difficult to pronounce. 
Since speakers and listeners are constantly switching roles, the syllable systems actually found in the world's languages turn out to be a compromise between acoustic distinctiveness on the one hand, and articulatory ease on the other. Agent-based computer modelstake the perspective of self-organisation at the level of the speech community or population. The two main paradigms are (1) the iterated learning model and (2) the language game model. Iterated learning focuses on transmission from generation to generation, typically with just one agent in each generation.[61]In the language game model, a whole population of agents simultaneously produce, perceive and learn language, inventing novel forms when the need arises.[62][63] Several models have shown how relatively simple peer-to-peer vocal interactions, such as imitation, can spontaneously self-organise a system of sounds shared by the whole population, and different in different populations. For example, models elaborated by Berrah et al. (1996)[64]and de Boer (2000),[65]and recently reformulated using Bayesian theory,[66]showed how a group of individuals playing imitation games can self-organise repertoires of vowel sounds which share substantial properties with human vowel systems. For example, in de Boer's model, initially vowels are generated randomly, but agents learn from each other as they interact repeatedly over time. Agent A chooses a vowel from her repertoire and produces it, inevitably with some noise. Agent B hears this vowel and chooses the closest equivalent from her own repertoire. To check whether this truly matches the original, B produces the vowel she thinks she has heard, whereupon A refers once again to her own repertoire to find the closest equivalent. If this matches the one she initially selected, the game is successful; otherwise, it has failed. 
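The logic of such an imitation game can be caricatured in a few lines. The sketch below is a deliberately minimal, one-dimensional version (de Boer's actual model works in a realistic articulatory–acoustic space; the noise level, nudge factor and repertoire sizes here are arbitrary):

```python
import random

random.seed(1)
NOISE = 0.05  # articulatory noise on every production (arbitrary value)

def produce(vowel):
    """Produce a vowel (a point on a 0..1 acoustic axis) with some noise."""
    return vowel + random.gauss(0, NOISE)

def closest(repertoire, sound):
    """Return the repertoire vowel nearest to a heard sound."""
    return min(repertoire, key=lambda v: abs(v - sound))

def imitation_game(a_rep, b_rep):
    """One round: A produces, B echoes, A checks; B nudges on success."""
    original = random.choice(a_rep)
    sound = produce(original)
    i = min(range(len(b_rep)), key=lambda j: abs(b_rep[j] - sound))
    echo = produce(b_rep[i])
    success = closest(a_rep, echo) == original
    if success:
        # B shifts its vowel toward what it heard, so repertoires converge.
        b_rep[i] += 0.1 * (sound - b_rep[i])
    return success

agent_a = [random.random() for _ in range(5)]
agent_b = [random.random() for _ in range(5)]
successes = sum(imitation_game(agent_a, agent_b) for _ in range(1000))
print(f"success rate: {successes / 1000:.2f}")
```

Even this stripped-down version shows the key mechanism: success feedback pulls the two private repertoires toward a shared system without any global coordination.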
"Through repeated interactions", according to de Boer, "vowel systems emerge that are very much like the ones found in human languages".[67] In a different model, the phoneticianBjörn Lindblom[68]was able to predict, on self-organisational grounds, the favoured choices of vowel systems ranging from three to nine vowels on the basis of a principle of optimal perceptual differentiation. Further models studied the role of self-organisation in the origins of phonemic coding and combinatoriality, which is the existence ofphonemesand their systematic reuse to build structured syllables.Pierre-Yves Oudeyerdeveloped models which showed that basic neural equipment for adaptive holistic vocal imitation, coupling directly motor and perceptual representations in the brain, can generate spontaneously shared combinatorial systems of vocalisations, including phonotactic patterns, in a society of babbling individuals.[58][69]These models also characterised how morphological and physiological innate constraints can interact with these self-organised mechanisms to account for both the formation of statistical regularities and diversity in vocalisation systems. The gestural theory states that speech was a relatively late development, evolving by degrees from a system that was originally gestural. Human ancestors were unable to control their vocalisation at the time when gestures were used to communicate; however, as they slowly began to control their vocalisations, spoken language began to evolve. Three types of evidence support this theory: Research has found strong support for the idea that spoken language and signing depend on similar neural structures. 
Patients who used sign language, and who suffered from a left-hemispherelesion, showed the same disorders with their sign language as vocal patients did with their oral language.[71]Other researchers found that the same left-hemisphere brain regions were active during sign language as during the use of vocal or written language.[72] Humans spontaneously use hand and facial gestures when formulating ideas to be conveyed in speech.[73][74]There are also, of course, manysign languagesin existence, commonly associated withdeafcommunities; as noted above, these are equal in complexity, sophistication, and expressive power, to any oral language. The main difference is that the "phonemes" are produced on the outside of the body, articulated with hands, body, and facial expression, rather than inside the body articulated with tongue, teeth, lips, and breathing. Many psychologists and scientists have looked into the mirror system in the brain to test this theory as well as other behavioural theories. Evidence to support mirror neurons as a factor in the evolution of speech includes mirror neurons in primates, the success of teaching apes to communicate gesturally, and pointing/gesturing to teach young children language. Fogassi and Ferrari (2014)[citation needed]monitored motor cortex activity in monkeys, specifically area F5, regarded as a homologue of Broca's area, where mirror neurons are located. They observed changes in electrical activity in this area when the monkey executed or observed different hand actions performed by someone else. Broca's area is a region in the frontal lobe responsible for language production and processing. The discovery of mirror neurons in this region, which fire when an action is done or observed specifically with the hand, strongly supports the belief that communication was once accomplished with gestures. The same is true when teaching young children language. 
When one points at a specific object or location, mirror neurons in the child fire as though they were doing the action, which results in long-term learning.[75] Critics note that for mammals in general, sound turns out to be the best medium in which to encode information for transmission over distances at speed. Given the probability that this applied also to early humans, it is hard to see why they should have abandoned this efficient method in favour of more costly and cumbersome systems of visual gesturing – only to return to sound at a later stage.[76] By way of explanation, it has been proposed that at a relatively late stage in human evolution, hands became so much in demand for making and using tools that the competing demands of manual gesturing became a hindrance. The transition to spoken language is said to have occurred only at that point.[77]Since humans throughout evolution have been making and using tools, however, most scholars remain unconvinced by this argument. (For a different approach to this issue – one setting out from considerations of signal reliability and trust – see "from pantomime to speech" below). Recent insights in human evolution – more specifically, human Pleistocene littoral evolution[78]– may help understand how human speech evolved. One controversial suggestion is that certain pre-adaptations for spoken language evolved during a time when ancestral hominins lived close to river banks and lake shores rich in fatty acids and other brain-specific nutrients. Occasional wading or swimming may also have led to enhanced breath-control (breath-hold diving). 
Independent lines of evidence suggest that "archaic"Homospread intercontinentally along theIndian Oceanshores (they even reached overseas islands such asFlores) where they regularly dived forlittoralfoods such as shell- andcrayfish,[79]which are extremely rich in brain-specific nutrients, explaining Homo's brain enlargement.[80]Shallow divingfor seafoods requires voluntary airway control, a prerequisite for spoken language. Seafood such as shellfish generally does not require biting and chewing, butstone tool useand suction feeding. This finer control of the oral apparatus was arguably another biological pre-adaptation to human speech, especially for the production of consonants.[81] Little is known about the timing of language's emergence in the human species. Unlike writing, speech leaves no material trace, making it archaeologically invisible. Lacking direct linguistic evidence, specialists in human origins have resorted to the study of anatomical features and genes arguably associated with speech production. Whilst such studies may provide information as to whether pre-modernHomospecies had speechcapacities, it is still unknown whether they actually spoke. Whilst they may have communicated vocally, the anatomical and genetic data lack the resolution necessary to differentiate proto-language from speech. Using statistical methods to estimate the time required to achieve the current spread and diversity in modern languages today,Johanna Nichols– a linguist at the University of California, Berkeley – argued in 1998 that vocal languages must have begun diversifying at least 100,000 years ago.[82] In 2012, anthropologists Charles Perreault and Sarah Mathew used phonemic diversity to suggest a date consistent with this.[83]"Phonemic diversity" denotes the number of perceptually distinct units of sound – consonants, vowels and tones – in a language. 
The current worldwide pattern of phonemic diversity potentially contains the statistical signal of the expansion of modernHomo sapiensout of Africa, beginning around 60-70 thousand years ago. Some scholars argue that phonemic diversity evolves slowly and can be used as a clock to calculate how long the oldest African languages would have to have been around in order to accumulate the number of phonemes they possess today. As human populations left Africa and expanded into the rest of the world, they underwent a series of bottlenecks – points at which only a very small population survived to colonise a new continent or region. Allegedly such a population crash led to a corresponding reduction in genetic, phenotypic and phonemic diversity.African languagestoday have some of the largest phonemic inventories in the world, whilst the smallest inventories are found in South America and Oceania, some of the last regions of the globe to be colonised. For example,Rotokas, a language of New Guinea, andPirahã, spoken in South America, both have just 11 phonemes,[84][85]whilst!Xun, a language spoken in Southern Africa has 141 phonemes. The authors use a natural experiment – the colonization of mainland Southeast Asia on the one hand, the long-isolatedAndaman Islandson the other – to estimate the rate at which phonemic diversity increases through time. Using this rate, they estimate that the world's languages date back to theMiddle Stone Agein Africa, sometime between 350 thousand and 150 thousand years ago. This corresponds to the speciation event which gave rise toHomo sapiens. 
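The serial-founder-effect logic behind this phonemic "clock" can be caricatured in a short simulation. The sketch below is purely illustrative (the retention rate and number of bottlenecks are arbitrary, and this is not Perreault and Mathew's actual statistical model): each colonisation event carries away only a subset of the parent inventory, so inventories shrink along the expansion route:

```python
import random

random.seed(0)

def colonise(inventory, retain=0.8):
    """A founding group carries away a random subset of the parent
    inventory, modelling phoneme loss at a population bottleneck
    (the retention rate of 0.8 is an arbitrary illustrative value)."""
    k = max(1, int(len(inventory) * retain))
    return random.sample(inventory, k)

# Start from a large ancestral inventory and apply successive bottlenecks,
# as in an out-of-Africa chain of colonisations.
inventory = list(range(141))          # e.g. a !Xun-sized inventory
sizes = [len(inventory)]
for _ in range(12):                   # twelve successive founder events
    inventory = colonise(inventory)
    sizes.append(len(inventory))

print(sizes)  # inventory size shrinks monotonically along the route
```

Run forwards, the model predicts the observed cline from Africa outwards; run backwards with an empirically estimated rate of phoneme gain, it yields the age estimates described above.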
These and similar studies have however been criticised by linguists who argue that they are based on a flawed analogy between genes and phonemes, since phonemes, unlike genes, are frequently transferred laterally between languages, and on a flawed sampling of the world's languages, since both Oceania and the Americas also contain languages with very high numbers of phonemes, and Africa contains languages with very few. They argue that the actual distribution of phonemic diversity in the world reflects recent language contact and not deep language history, since it is well demonstrated that languages can lose or gain many phonemes over very short periods. In other words, there is no valid linguistic reason to expect genetic founder effects to influence phonemic diversity.[86][87]
https://en.wikipedia.org/wiki/Origin_of_speech
In thetree modelofhistorical linguistics, aproto-languageis a postulated ancestral language from which a number ofattested languagesare believed to have descended by evolution, forming alanguage family. Proto-languages are usually unattested, or partially attested at best. They are reconstructed by way of thecomparative method.[1] In the family tree metaphor, a proto-language can be called a mother language. Occasionally, the German termUrsprache(pronounced[ˈuːɐ̯ʃpʁaːxə]ⓘ; fromur-'primordial', 'original' +Sprache'language') is used instead. It is also sometimes called thecommonorprimitiveform of a language (e.g.Common Germanic,Primitive Norse).[1] In the strict sense, a proto-language is the most recent common ancestor of a language family, immediately before the family started to diverge into the attesteddaughter languages. It is therefore equivalent with theancestral languageorparental languageof a language family.[2] Moreover, a group oflectsthat are not considered separate languages, such as the members of adialect cluster, may also be described as descending from a unitary proto-language. Typically, the proto-language is not known directly. It is by definition alinguistic reconstructionformulated by applying thecomparative methodto a group of languages featuring similar characteristics.[3]The tree is a statement of similarity and a hypothesis that the similarity results from descent from a common language. The comparative method, a process ofdeduction, begins from a set of characteristics, or characters, found in the attested languages. If the entire set can be accounted for by descent from the proto-language, which must contain the proto-forms of them all, the tree, or phylogeny, is regarded as a complete explanation and byOccam's razor, is given credibility. More recently, such a tree has been termed "perfect" and the characters labelled "compatible". 
No trees but the smallest branches are ever found to be perfect, in part because languages also evolve through horizontal transfer with their neighbours. Typically, credibility is given to the hypotheses of highest compatibility. The differences in compatibility must be explained by various applications of thewave model. The level of completeness of the reconstruction achieved varies, depending on how complete the evidence is from the descendant languages and on the formulation of the characters by the linguists working on it. Not all characters are suitable for the comparative method. For example, lexical items that are loans from a different language do not reflect the phylogeny to be tested, and, if used, will detract from the compatibility. Getting the right dataset for the comparative method is a major task in historical linguistics. Some universally accepted proto-languages areProto-Afroasiatic,Proto-Indo-European,Proto-Uralic, andProto-Dravidian. In a few fortuitous instances, which have been used to verify the method and the model (and probably ultimately inspired it[citation needed]), a literary history exists from as early as a few millennia ago, allowing the descent to be traced in detail. The early daughter languages, and even the proto-language itself, may beattestedin surviving texts. For example,Latinis the proto-language of theRomancelanguage family, which includes such modern languages as French, Italian, Portuguese, Romanian, Catalan and Spanish. Likewise,Proto-Norse, the ancestor of the modernScandinavian languages, is attested, albeit in fragmentary form, in theElder Futhark. Although there are no very earlyIndo-Aryaninscriptions, the Indo-Aryan languages of modern India all go back toVedic Sanskrit(or dialects very closely related to it), which has been preserved in texts accurately handed down by parallel oral and written traditions for many centuries. 
The first person to offer systematic reconstructions of an unattested proto-language wasAugust Schleicher; he did so forProto-Indo-Europeanin 1861.[4] Normally, the term "Proto-X" refers to the last common ancestor of a group of languages, occasionally attested but most commonly reconstructed through thecomparative method, as withProto-Indo-EuropeanandProto-Germanic. An earlier stage of a single language X, reconstructed through the method ofinternal reconstruction, is termed "Pre-X", as in Pre–Old Japanese.[5]It is also possible to apply internal reconstruction to a proto-language, obtaining a pre-proto-language, such as Pre-Proto-Indo-European.[6] Both prefixes are sometimes used for an unattested stage of a language without reference to comparative or internal reconstruction. "Pre-X" is sometimes also used for a postulatedsubstratum, as in thePre-Indo-European languagesbelieved to have been spoken in Europe and South Asia before the arrival there of Indo-European languages. When multiple historical stages of a single language exist, the oldest attested stage is normally termed "Old X" (e.g.Old EnglishandOld Japanese). In other cases, such asOld IrishandOld Norse, the term refers to the language of the oldest known significant texts. Each of these languages has an older stage (Primitive IrishandProto-Norserespectively) that is attested only fragmentarily. There are no objective criteria for the evaluation of different reconstruction systems yielding different proto-languages. Many researchers concerned with linguistic reconstruction agree that the traditionalcomparative methodis an "intuitive undertaking."[7] The bias of the researchers regarding the accumulated implicit knowledge can also lead to erroneous assumptions and excessive generalization.Kortlandt (1993)offers several examples where such general assumptions concerning "the nature of language" hindered research in historical linguistics. 
Linguists make personal judgements on what they consider "natural" for a language to change, and "[as] a result, our reconstructions tend to have a strong bias toward the average language type known to the investigator." Such an investigator finds themselves blinkered by their own linguisticframe of reference. The advent of thewave modelraised new issues in the domain of linguistic reconstruction, causing the reevaluation of old reconstruction systems and depriving the proto-language of its "uniform character." This is evident inKarl Brugmann's skepticism that the reconstruction systems could ever reflect a linguistic reality.[8]Ferdinand de Saussurewould even express a more certain opinion, completely rejecting a positive specification of the sound values of reconstruction systems.[9] In general, the issue of the nature of proto-language remains unresolved, with linguists generally taking either therealistor theabstractionistposition. Even the widely studied proto-languages, such asProto-Indo-European, have drawn criticism for being outliers typologically with respect to the reconstructedphonemic inventory. The alternatives such asglottalic theory, despite representing a typologically less rare system, have not gained wider acceptance, and some researchers even suggest the use of indexes to represent the disputed series of plosives. On the other end of the spectrum,Pulgram (1959:424) suggests that Proto-Indo-European reconstructions are just "a set of reconstructed formulae" and "not representative of any reality". 
In the same vein,Julius Pokornyin his study onIndo-European, claims that the linguistic termIE parent languageis merely an abstraction, which does not exist in reality and should be understood as consisting of dialects possibly dating back to thePaleolithicera in which those dialects formed the linguistic structure of the IE language group.[10]In his view, Indo-European is solely a system ofisoglosseswhich bound together dialects which were operationalized byvarious tribes, from which the historically attested Indo-European languages emerged.[10] Proto-languages evidently remain unattested. AsNicholas Kazanas[de]puts it:
https://en.wikipedia.org/wiki/Proto-language
Theory of languageis a topic inphilosophy of languageandtheoretical linguistics.[1]It has the goal of answering the questions "What is language?";[2][3]"Why do languages have the properties they do?";[4]or "What is theorigin of language?". In addition to these fundamental questions, the theory of language also seeks to understand how language is acquired and used by individuals and communities. This involves investigating thecognitive and neural processes involved in languageprocessing and production, as well as the social and cultural factors that shape linguistic behavior.[5] Even though much of the research inlinguisticsisdescriptiveorprescriptive, there exists an underlying assumption that terminological and methodological choices reflect the researcher's opinion of language. These choices often stem from the theoretical framework a linguist subscribes to, shaping their interpretation of linguistic phenomena. For instance, within thegenerative grammarframework, linguists might focus on underlying syntactic structures, whilecognitive linguistsmight emphasize the role of conceptual metaphor.[6][7]Linguists are divided into different schools of thinking, with thenature–nurture debateas the main divide.[8]Some linguisticsconferencesandjournalsare focussed on a specific theory of language, while others disseminate a variety of views.[9] Like in otherhumanandsocial sciences, theories in linguistics can be divided intohumanisticandsociobiologicalapproaches.[10]The same terms, for example 'rationalism', 'functionalism', 'formalism' and 'constructionism', are used with different meanings in different contexts.[11] Humanistic theories consider people as having an agentive role in thesocial constructionof language. Language is primarily seen as a sociocultural phenomenon. This tradition emphasises culture, nurture, creativity and diversity.[8]A classicalrationalistapproach to language stems from the philosophy of theAge of Enlightenment. 
Rationalist philosophers argued that people had created language in a step-by-step process to serve their need to communicate with each other. Thus, language is thought of as a rational humaninvention.[12] Many philosophers of language, sincePlatoandAristotle, have considered language as a manmade tool for making statements orpropositionsabout the world on the basis of a predicate-argument structure. Especially in the classical tradition, the purpose of the sentence was considered to be topredicateabout thesubject. Aristotle's example is "Man is a rational animal", whereManis the subject andis a rational animalis the predicate, which attributes a property to the subject.[13][14]In the twentieth century, classicallogical grammarwas defended by Edmund Husserl's "pure logical grammar". Husserl argues, in the spirit of seventeenth-centuryrationalgrammar, that the structures ofconsciousnessarecompositionaland organized into subject-predicate structures. These give rise to the structures ofsemanticsandsyntaxcross-linguistically.[15]Categorial grammaris another example of logical grammar in the modern context. More recently, inDonald Davidson's event semantics, for example, theverbserves as the predicate. Like in modernpredicate logic, subject and object areargumentsof thetransitivepredicate. A similar solution is found in formal semantics.[16]Many modern philosophers continue to consider language as a logically based tool for expressing the structures of reality by means of predicate-argument structure. Examples includeBertrand Russell,Ludwig Wittgenstein,Winfrid Sellars,Hilary Putnam, andJohn Searle. 
During the 19th century, when sociological questions remained underpsychology,[17]languages andlanguage changewere thought of as arising from human psychology and the collectiveunconscious mindof the community, shaped by its history, as argued byMoritz Lazarus,Heymann SteinthalandWilhelm Wundt.[18]Advocates ofVölkerpsychologie('folk psychology') regarded language asVolksgeist; a social phenomenon conceived as the 'spirit of the nation'. Wundt claimed that the human mind becomes organised according to the principles ofsyllogisticreasoning with social progress and education. He argued for abinary-branchingmodel for the description of the mind, andsyntax.[19]Folk psychology was imported to North American linguistics byFranz Boas[20]andLeonard Bloomfieldwho were the founders of a school of thought which was later nicknamed 'American structuralism'.[21][22] Folk psychology became associated with Germannationalism,[23]and afterWorld War IBloomfield apparently replaced Wundt'sstructural psychologywithAlbert Paul Weiss'sbehavioral psychology;[24]although Wundtian notions remained elementary for his linguistic analysis.[25]The Bloomfieldian school of linguistics was eventually reformed as a sociobiological approach byNoam Chomsky(see 'generative grammar' below).[21][26] Since generative grammar's popularity began to wane towards the end of the 20th century, there has been a new wave of cultural anthropological approaches to the language question sparking a modern debate on the relationship of language and culture. Participants includeDaniel Everett,Jesse Prinz,Nicholas EvansandStephen Levinson.[27] The study of culture and language developed in a different direction in Europe whereÉmile Durkheimsuccessfully separated sociology from psychology, thus establishing it as an autonomous science.[28]Ferdinand de Saussurelikewise argued for the autonomy of linguistics from psychology. 
He created a semiotic theory which would eventually give rise to the movement in the human sciences known as structuralism, followed by functionalism (or functional structuralism), post-structuralism and other similar tendencies.[29] The names structuralism and functionalism are derived from Durkheim's modification of Herbert Spencer's organicism, which draws an analogy between social structures and the organs of an organism, each necessitated by its function.[30][28] Saussure approaches the essence of language from two sides. On the one hand, he borrows ideas from Steinthal[31] and Durkheim, concluding that language is a 'social fact'. On the other, he creates a theory of language as a system in and for itself, which arises from the association of concepts and words or expressions. Thus, language is a dual system of interactive sub-systems: a conceptual system and a system of linguistic forms. Neither of these can exist without the other because, in Saussure's notion, there are no (proper) expressions without meaning, but also no (organised) meaning without words or expressions. Language as a system does not arise from the physical world, but from the contrast between the concepts and the contrast between the linguistic forms.[32] There was a shift of focus in sociology in the 1920s, from structural to functional explanation, or the adaptation of the social 'organism' to its environment. Post-Saussurean linguists, led by the Prague linguistic circle, began to study the functional value of the linguistic structure, with communication taken as the primary function of language in the sense of 'task' or 'purpose'. These notions translated into an increased interest in pragmatics, with a discourse perspective (the analysis of full texts) added to the multilayered interactive model of structural linguistics. This gave rise to functional linguistics.[33] Some of its main concepts include information structure and economy.
The structural and formal linguist Louis Hjelmslev considered the systemic organisation of the bilateral linguistic system to be fully mathematical, rejecting the psychological and sociological aspects of linguistics altogether. He considered linguistics the comparison of the structures of all languages using formal grammars – semantic and discourse structures included.[34] Hjelmslev's idea is sometimes referred to as 'formalism'.[33] Although generally considered a structuralist,[35] Lucien Tesnière regarded meaning as giving rise to expression, but not vice versa, at least as regards the relationship between semantics and syntax. He considered the semantic plane to be psychological, but syntax as based on the necessity of breaking the two-dimensional semantic representation into linear form.[36] The Saussurean idea of language as an interaction of the conceptual system and the expressive system was elaborated in philosophy, anthropology and other fields of the human sciences by Claude Lévi-Strauss, Roland Barthes, Michel Foucault, Jacques Derrida, Julia Kristeva and many others. This movement was interested in the Durkheimian concept of language as a social fact or a rule-based code of conduct, but eventually rejected the structuralist idea that the individual cannot change the norm. Post-structuralists study how language affects our understanding of reality, thus serving as a tool for shaping society.[37][38] While the humanistic tradition stemming from 19th-century Völkerpsychologie emphasises the unconscious nature of the social construction of language, some perspectives of post-structuralism and social constructionism regard human languages as man-made rather than natural.
At this end of the spectrum, the structural linguist Eugenio Coșeriu laid emphasis on the intentional construction of language.[18] Daniel Everett has likewise approached the question of language construction from the point of view of intentionality and free will.[27] There were also some contacts between structural linguists and the creators of constructed languages. For example, Saussure's brother René de Saussure was an Esperanto activist, and the French functionalist André Martinet served as director of the International Auxiliary Language Association. Otto Jespersen created and proposed the international auxiliary language Novial. In contrast to humanistic linguistics, sociobiological approaches consider language a biological phenomenon. Approaches to language as part of cultural evolution can be roughly divided into two main groups: genetic determinism, which argues that languages stem from the human genome; and social Darwinism, as envisioned by August Schleicher and Max Müller, which applies the principles and methods of evolutionary biology to linguistics. Because sociobiological theories have been labelled as chauvinistic in the past, modern approaches, including dual inheritance theory and memetics, aim to provide more sustainable solutions to the study of biology's role in language.[39] The role of genes in language formation has been discussed and studied extensively. Proposing generative grammar, Noam Chomsky argues that language is fully caused by a random genetic mutation, and that linguistics is the study of universal grammar, or the structure in question.[40] Others, including Ray Jackendoff, point out that the innate language component could be the result of a series of evolutionary adaptations;[41] Steven Pinker argues that, because of these, people are born with a language instinct.
The random and the adaptational approaches are sometimes referred to as formalism (or structuralism) and functionalism (or adaptationism), respectively, as a parallel to debates between advocates of structural and functional explanation in biology.[42] Also known as biolinguistics, the study of linguistic structures is parallelised with that of natural formations such as ferromagnetic droplets and botanic forms.[43] This approach became highly controversial at the end of the 20th century due to a lack of empirical support for genetics as an explanation of linguistic structures.[44][45] More recent anthropological research aims to avoid genetic determinism. Behavioural ecology and dual inheritance theory, the study of gene–culture co-evolution, emphasise the role of culture as a human invention in shaping the genes, rather than vice versa.[39] Some former generative grammarians argue that genes may nonetheless have an indirect effect on abstract features of language. This makes up yet another approach referred to as 'functionalism', which makes a weaker claim with respect to genetics. Instead of arguing for a specific innate structure, it is suggested that human physiology and neurological organisation may give rise to linguistic phenomena in a more abstract way.[42] Based on a comparison of structures from multiple languages, John A. Hawkins suggests that the brain, as a syntactic parser, may find it easier to process some word orders than others, thus explaining their prevalence. This theory remains to be confirmed by psycholinguistic studies.[46] Conceptual metaphor theory from George Lakoff's cognitive linguistics hypothesises that people have inherited from lower animals the ability for deductive reasoning based on visual thinking, which explains why languages make so much use of visual metaphors.[47][48] It was thought in early evolutionary biology that languages and species can be studied according to the same principles and methods.
The idea of languages and cultures as fighting for living space became highly controversial, as it was accused of being a pseudoscience that caused two world wars, and social Darwinism was banished from the humanities by 1945. In the concepts of Schleicher and Müller, both endorsed by Charles Darwin, languages could be either organisms or populations.[49] A neo-Darwinian version of this idea was introduced as memetics by Richard Dawkins in 1976. In this thinking, ideas and cultural units, including words, are compared to viruses or replicators. Although meant as a softer alternative to genetic determinism, memetics has been widely discredited as pseudoscience,[39] and it has failed to establish itself as a recognised field of scientific research.[50] The language–species analogy nonetheless continues to enjoy popularity in linguistics and other human sciences.[51] Since the 1990s there have been numerous attempts to revive it in various guises. As Jamin Pelkey explains:

Theorists who explore such analogies usually feel obliged to pin language to some specific sub-domain of biotic growth. William James selects "zoölogical evolution", William Croft prefers botanical evolution, but most theorists zoom in to more microbiotic levels – some claiming that linguistic phenomena are analogous to the cellular level and others arguing for the genetic level of biotic growth. For others, language is a parasite; for others still, language is a virus ... The disagreements over grounding analogies do not stop here.[52]

Like many other approaches to linguistics, these, too, are collectively called 'functionalism'. They include various frameworks of usage-based linguistics,[53] language as a complex adaptive system,[54] construction grammar,[55][56] emergent linguistics,[57][58] and others.
https://en.wikipedia.org/wiki/Theory_of_language
Alternative semantics (or Hamblin semantics) is a framework in formal semantics and logic. In alternative semantics, expressions denote alternative sets, understood as sets of objects of the same semantic type. For instance, while the word "Lena" might denote Lena herself in a classical semantics, it would denote the singleton set containing Lena in alternative semantics. The framework was introduced by Charles Leonard Hamblin in 1973 as a way of extending Montague grammar to provide an analysis for questions. In this framework, a question denotes the set of its possible answers. Thus, if P and Q are propositions, then {P, Q} is the denotation of the question of whether P or Q is true. Since the 1970s, it has been extended and adapted to analyze phenomena including focus,[1] scope, disjunction,[2] NPIs,[3][4] presupposition, and implicature.[5][6]
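Hamblin's core idea can be sketched in a toy model: declaratives denote singleton alternative sets, questions denote their answer sets, and composition proceeds pointwise. All names, worlds and facts below are invented for illustration, and the encoding is a sketch rather than the official formalism:

```python
# Toy model of alternative semantics. A denotation is a set of
# alternatives of the same semantic type; classical propositions
# are modelled as (frozen) sets of possible worlds.

facts = {"lena": frozenset({"w1"}), "max": frozenset({"w2"})}

def sleeps(x):
    # One-place predicate: maps an individual to the proposition
    # "x sleeps" (the set of worlds where x sleeps).
    return facts[x]

def pointwise_apply(preds, args):
    # Hamblin-style pointwise application: apply every predicate
    # alternative to every argument alternative.
    return {p(a) for p in preds for a in args}

lena = {"lena"}           # "Lena": the singleton set containing Lena
who = {"lena", "max"}     # "who": the set of all individuals

# "Lena sleeps" denotes a single alternative, as in a classical
# semantics; "Who sleeps?" denotes the set of its possible answers.
statement = pointwise_apply({sleeps}, lena)
question = pointwise_apply({sleeps}, who)
```

Here `statement` contains one alternative while `question` contains two, mirroring the {P, Q} answer-set denotation described above.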
https://en.wikipedia.org/wiki/Alternative_semantics
In semantics, mathematical logic and related disciplines, the principle of compositionality is the principle that the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them. The principle is also called Frege's principle, because Gottlob Frege is widely credited with its first modern formulation. However, the principle was never explicitly stated by Frege,[1] and arguably it was already assumed by George Boole[2] decades before Frege's work. The principle of compositionality (also known as semantic compositionalism) is highly debated in linguistics. Among its most challenging problems are the issues of contextuality, the non-compositionality of idiomatic expressions, and the non-compositionality of quotations.[3] Discussion of compositionality started to appear at the beginning of the 19th century, during which it was debated whether what was most fundamental in language was compositionality or contextuality; compositionality was usually preferred.[4] Gottlob Frege never adhered to the principle of compositionality as it is known today (he endorsed the context principle instead), and the first to explicitly formulate it was Rudolf Carnap in 1947.[4] A common formulation[4] of the principle of compositionality comes from Barbara Partee: "The meaning of a compound expression is a function of the meanings of its parts and of the way they are syntactically combined."[5] It is possible to distinguish different levels of compositionality. Strong compositionality refers to compound expressions whose meaning is determined by the meaning of their immediate parts and a top-level syntactic function that describes their combination. Weak compositionality refers to compound expressions whose meaning is determined by the meaning of their parts as well as their complete syntactic combination.[6][7] However, there can also be further gradations between these two extremes.
This is possible if one allows not only the meaning of immediate parts but also the meaning of the second-highest parts (third-highest parts, fourth-highest parts, etc.), together with functions that describe their respective combinations.[7] On the sentence level, the principle claims that what remains if one removes the lexical parts of a meaningful sentence are the rules of composition. The sentence "Socrates was a man", for example, becomes "S was a M" once the meaningful lexical items, "Socrates" and "man", are taken away. The task of finding the rules of composition then becomes a matter of describing what the connection between S and M is. Among the most prominent linguistic problems that challenge the principle of compositionality are the issues of contextuality, the non-compositionality of idiomatic expressions, and the non-compositionality of quotations.[3] The principle is frequently taken to mean that every operation of the syntax should be associated with an operation of the semantics that acts on the meanings of the constituents combined by the syntactic operation. As a guideline for constructing semantic theories, this is generally taken, as in the influential work on the philosophy of language by Donald Davidson, to mean that every construct of the syntax should be associated by a clause of the T-schema with an operator in the semantics that specifies how the meaning of the whole expression is built from the constituents combined by the syntactic rule. In some general mathematical theories (especially those in the tradition of Montague grammar), this guideline is taken to mean that the interpretation of a language is essentially given by a homomorphism between an algebra of syntactic representations and an algebra of semantic objects. The principle of compositionality also exists in a similar form in the compositionality of programming languages. The principle of compositionality has been the subject of intense debate.
Indeed, there is no general agreement as to how the principle is to be interpreted, although there have been several attempts to provide formal definitions of it.[8] Scholars are also divided as to whether the principle should be regarded as a factual claim, open to empirical testing; an analytic truth, obvious from the nature of language and meaning; or a methodological principle to guide the development of theories of syntax and semantics. The principle of compositionality has been attacked in all three spheres, although so far none of the criticisms brought against it have been generally regarded as compelling.[citation needed] Most proponents of the principle, however, make certain exceptions for idiomatic expressions in natural language.[8] The principle of compositionality usually holds when only syntactic factors play a role in the increased complexity of sentence processing, while it becomes more problematic and questionable when the increase in complexity is due to sentence or discourse context, semantic memory, or sensory cues.[9] Among the phenomena problematic for traditional theories of compositionality is that of logical metonymy, which has been studied at least since the mid-1990s by the linguists James Pustejovsky and Ray Jackendoff.[10][11][12] Logical metonymies are sentences like John began the book, where the verb to begin requires (subcategorizes for) an event as its argument, but in a logical metonymy an object (i.e. the book) is found instead, and this forces the hearer to interpret the sentence by inferring an implicit event ("reading", "writing", or another prototypical action performed on a book).[10] The problem for compositionality is that the meaning of reading or writing is not present in the words of the sentence: neither in "begin" nor in "book". Further, in the context of the philosophy of language, the principle of compositionality does not explain all of meaning.
For example, you cannot infer sarcasm purely on the basis of words and their composition, yet a phrase used sarcastically means something completely different from the same phrase uttered straightforwardly. Thus, some theorists argue that the principle has to be revised to take into account linguistic and extralinguistic context, which includes the tone of voice used, common ground between the speakers, the intentions of the speaker, and so on.[8]
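The sentence-level idea discussed earlier, a composition rule such as "S was a M" combined with the meanings of the lexical items, can be sketched as a recursive meaning function. The mini-grammar and lexicon below are invented for the example:

```python
# Minimal sketch of compositional interpretation: the meaning of a
# complex expression is computed from the meanings of its parts plus
# one semantic rule per syntactic combination.

lexicon = {
    "Socrates": "socrates",                        # a name denotes an individual
    "a man": lambda x: x in {"socrates", "plato"}  # a predicate denotes a test
}

def meaning(tree):
    # A tree is either a lexical item or a tuple (rule, subtree, ...).
    if isinstance(tree, str):
        return lexicon[tree]
    rule, *parts = tree
    if rule == "predication":      # "S was a M": apply M's meaning to S's
        subject, predicate = [meaning(p) for p in parts]
        return predicate(subject)
    if rule == "negation":         # "it is not the case that ..."
        return not meaning(parts[0])
    raise ValueError(f"no composition rule named {rule!r}")

# "Socrates was a man" is interpreted by the predication rule alone;
# swapping in other lexical items reuses exactly the same rule.
result = meaning(("predication", "Socrates", "a man"))
```

The point of the sketch is that `meaning` never inspects the sentence as a whole: each syntactic construction contributes one semantic operation, which is the homomorphism picture described above in miniature.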
https://en.wikipedia.org/wiki/Compositionality
Computational semantics is the study of how to automate the process of constructing and reasoning with meaning representations of natural language expressions.[1] It consequently plays an important role in natural-language processing and computational linguistics. Some traditional topics of interest are: construction of meaning representations, semantic underspecification, anaphora resolution,[2] presupposition projection, and quantifier scope resolution. Methods employed usually draw from formal semantics or statistical semantics. Computational semantics has points of contact with the areas of lexical semantics (word-sense disambiguation and semantic role labeling), discourse semantics, knowledge representation and automated reasoning (in particular, automated theorem proving). Since 1999 there has been an ACL special interest group on computational semantics, SIGSEM.
https://en.wikipedia.org/wiki/Computational_semantics
In formal linguistics, discourse representation theory (DRT) is a framework for exploring meaning under a formal semantics approach. One of the main differences between DRT-style approaches and traditional Montagovian approaches is that DRT includes a level of abstract mental representations (discourse representation structures, DRS) within its formalism, which gives it an intrinsic ability to handle meaning across sentence boundaries. DRT was created by Hans Kamp in 1981.[1] A very similar theory was developed independently by Irene Heim in 1982, under the name of File Change Semantics (FCS).[2] Discourse representation theories have been used to implement semantic parsers[3] and natural language understanding systems.[4][5][6] DRT uses discourse representation structures (DRS) to represent a hearer's mental representation of a discourse as it unfolds over time. There are two critical components to a DRS: a set of discourse referents and a set of conditions on those referents. Consider sentence (1) below:

(1) A farmer owns a donkey.

The DRS of (1) can be notated as (2) below:

(2) [x, y : farmer(x), donkey(y), owns(x, y)]

What (2) says is that there are two discourse referents, x and y, and three discourse conditions, farmer, donkey, and owns, such that the condition farmer holds of x, donkey holds of y, and owns holds of the pair x and y. Informally, the DRS in (2) is true in a given model of evaluation if and only if there are entities in that model that satisfy the conditions. So, if a model contains two individuals, and one is a farmer, the other is a donkey, and the first owns the second, the DRS in (2) is true in that model. Uttering subsequent sentences results in the existing DRS being updated. Uttering (3) after (1):

(3) He beats it.

results in the DRS in (2) being updated as (4) below (assuming a way to disambiguate which pronoun refers to which individual):

(4) [x, y : farmer(x), donkey(y), owns(x, y), beats(x, y)]

Successive utterances of sentences work in a similar way, although the process is somewhat more complicated for more complex sentences, such as sentences containing negation and conditionals.
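The informal truth conditions just described (a DRS is true in a model if some choice of entities satisfies all of its conditions) can be sketched as follows. The dictionary-based encoding and the entity names are invented for the example, not part of Kamp's formalism:

```python
# Sketch of a DRS evaluated against a model: a DRS is true iff some
# assignment of entities to its discourse referents satisfies every
# condition (existential closure over the referents).

from itertools import product

# The DRS for "A farmer owns a donkey": referents x, y and three
# conditions, each a (predicate, referent-tuple) pair.
drs = {
    "referents": ["x", "y"],
    "conditions": [("farmer", ("x",)), ("donkey", ("y",)),
                   ("owns", ("x", "y"))],
}

# A model: a domain of entities plus the extension of each predicate.
model = {
    "domain": {"giles", "eeyore"},
    "farmer": {("giles",)},
    "donkey": {("eeyore",)},
    "owns": {("giles", "eeyore")},
}

def drs_true(drs, model):
    # Try every assignment of domain entities to the referents.
    for values in product(model["domain"], repeat=len(drs["referents"])):
        g = dict(zip(drs["referents"], values))
        if all(tuple(g[r] for r in args) in model[pred]
               for pred, args in drs["conditions"]):
            return True
    return False

result = drs_true(drs, model)  # True: giles is a farmer who owns eeyore
```

Removing the donkey from the model's `donkey` extension makes `drs_true` return False, matching the informal truth conditions stated above.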
In one sense, DRT offers a variation of first-order predicate calculus: its forms are pairs of first-order formulae and the free variables that occur in them. In traditional natural language semantics, only individual sentences are examined, but the context of a dialogue plays a role in meaning as well. For example, anaphoric pronouns such as he and she rely upon previously introduced individual constants in order to have meaning. DRT uses variables for every individual constant in order to account for this problem. A discourse is represented in a discourse representation structure (DRS), a box with variables at the top and the sentences in the formal language below, in the order of the original discourse. Sub-DRSs can be used for different types of sentences. One of the major advantages of DRT is its ability to account for donkey sentences (Geach 1962) in a principled fashion. Sentence (5):

(5) Every farmer who owns a donkey beats it.

can be paraphrased as follows: Every farmer who owns a donkey beats the donkey that he/she owns. Under a Montagovian approach, the indefinite a donkey, which is assumed to be inherently an existential quantifier, ends up becoming a universal quantifier, an unwelcome result because the change in quantificational force cannot be accounted for in any principled way. DRT avoids this problem by assuming that indefinites introduce discourse referents (DRs), which are stored in the mental representation and are accessible (or not, depending on the conditions) to expressions like pronouns and other anaphoric elements. Furthermore, they are inherently non-quantificational, and pick up quantificational force depending upon the context. On the other hand, genuine quantifiers (e.g., 'every professor') bear scope. An 'every'-NP triggers the introduction of a complex condition of the form K1 → K2, where K1 and K2 are sub-DRSs representing the restriction and the scope of the quantification respectively.
Unlike true quantifiers, indefinite noun phrases just contribute a new DR (together with some descriptive material in terms of conditions on the DR), which is placed in a larger structure. This larger structure can be the top-level DRS or some sub-DRS according to the sentence-internal environment of the analyzed noun phrase—in other words, a level that is accessible to an anaphor that comes later.
https://en.wikipedia.org/wiki/Discourse_representation_theory
Frame semantics is a theory of linguistic meaning developed by Charles J. Fillmore[1] that extends his earlier case grammar. It relates linguistic semantics to encyclopedic knowledge. The basic idea is that one cannot understand the meaning of a single word without access to all the essential knowledge that relates to that word. For example, one would not be able to understand the word "sell" without knowing anything about the situation of commercial transfer, which also involves, among other things, a seller, a buyer, goods, money, the relation between the money and the goods, the relations between the seller and the goods and the money, the relation between the buyer and the goods and the money, and so on. Thus, a word activates, or evokes, a frame of semantic knowledge relating to the specific concept to which it refers (or highlights, in frame-semantic terminology). The idea of the encyclopedic organisation of knowledge itself is old and was discussed by Age of Enlightenment philosophers such as Denis Diderot[2] and Giambattista Vico.[3] Fillmore and other evolutionary and cognitive linguists like John Haiman and Adele Goldberg, however, make an argument against generative grammar and truth-conditional semantics.
As is elementary for Lakoffian–Langackerian cognitive linguistics, it is claimed that knowledge of language is no different from other types of knowledge; therefore there is no grammar in the traditional sense, and language is not an independent cognitive function.[4] Instead, the spreading and survival of linguistic units is directly comparable to that of other types of units of cultural evolution, as in memetics and other cultural replicator theories.[5][6][7] The theory applies the notion of a semantic frame, also used in artificial intelligence, which is a collection of facts that specify "characteristic features, attributes, and functions of a denotatum, and its characteristic interactions with things necessarily or typically associated with it."[8] A semantic frame can also be defined as a coherent structure of related concepts such that, without knowledge of all of them, one does not have complete knowledge of any one; they are in that sense a type of gestalt. Frames are based on recurring experiences; the commercial transaction frame, for example, is based on recurring experiences of commercial transactions. Words not only highlight individual concepts, but also specify a certain perspective from which the frame is viewed. For example, "sell" views the situation from the perspective of the seller and "buy" from the perspective of the buyer. This, according to Fillmore, explains the observed asymmetries in many lexical relations. While originally only applied to lexemes, frame semantics has now been expanded to grammatical constructions and other larger and more complex linguistic units, and has more or less been integrated into construction grammar as the main semantic principle. Semantic frames are also becoming used in information modeling, for example in Gellish, especially in the form of 'definition models' and 'knowledge models'. Frame semantics has much in common with the semantic principle of profiling from Ronald W.
Langacker's cognitive grammar.[9] The concept of frames has been considered several times in philosophy and psycholinguistics, notably by Lawrence W. Barsalou,[10] and more recently by Sebastian Löbner.[11] Frames are viewed as a cognitive representation of the real world. From a computational linguistics viewpoint, there are semantic models of a sentence. Google originally started a frame-semantic parser project that aims to parse the information on Wikipedia and transfer it into Wikidata by coming up with relevant relations using artificial intelligence.[12]
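The commercial transaction frame and the sell/buy perspective difference described above can be sketched as a small data structure. The encoding is invented purely for illustration:

```python
# Illustrative sketch of a semantic frame: "sell" and "buy" evoke the
# same commercial transaction frame but profile different participants.

commercial_transaction = {
    "elements": ["buyer", "seller", "goods", "money"],
    "relations": [
        ("seller", "transfers", "goods", "to", "buyer"),
        ("buyer", "transfers", "money", "to", "seller"),
    ],
}

# Each verb entry points at the whole frame and marks the perspective
# (the profiled participant) from which the frame is viewed.
lexicon = {
    "sell": {"frame": commercial_transaction, "perspective": "seller"},
    "buy":  {"frame": commercial_transaction, "perspective": "buyer"},
}

# Understanding either verb gives access to every frame element:
evoked = lexicon["buy"]["frame"]["elements"]
```

The single shared `frame` object is the point: the two verbs differ only in perspective, which is Fillmore's explanation of asymmetries in lexical relations such as buy/sell.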
https://en.wikipedia.org/wiki/Frame_semantics_(linguistics)
Inquisitive semantics is a framework in logic and natural language semantics. In inquisitive semantics, the semantic content of a sentence captures both the information that the sentence conveys and the issue that it raises. The framework provides a foundation for the linguistic analysis of statements and questions.[1][2] It was originally developed by Ivano Ciardelli, Jeroen Groenendijk, Salvador Mascarenhas, and Floris Roelofsen.[3][4][5][6][7] The essential notion in inquisitive semantics is that of an inquisitive proposition. Inquisitive propositions encode informational content via the region of logical space that their information states cover. For instance, the inquisitive proposition {{w}, ∅} encodes the information that w is the actual world. The inquisitive proposition {{w}, {v}, ∅} encodes that the actual world is either w or v. An inquisitive proposition encodes inquisitive content via its maximal elements, known as alternatives. For instance, the inquisitive proposition {{w}, {v}, ∅} has two alternatives, namely {w} and {v}. Thus, it raises the issue of whether the actual world is w or v while conveying the information that it must be one or the other. The inquisitive proposition {{w, v}, {w}, {v}, ∅} encodes the same information but does not raise an issue, since it contains only one alternative. The informational content of an inquisitive proposition can be isolated by pooling its constituent information states. Inquisitive propositions can be used to provide a semantics for the connectives of propositional logic, since they form a Heyting algebra when ordered by the subset relation.
For instance, for every proposition P there exists a relative pseudocomplement P*, which amounts to {s ⊆ W ∣ s ∩ t = ∅ for all t ∈ P}. Similarly, any two propositions P and Q have a meet and a join, which amount to P ∩ Q and P ∪ Q respectively. Thus inquisitive propositions can be assigned to the formulas of a propositional language L, given a model M = ⟨W, V⟩ where W is a set of possible worlds and V is a valuation function. The operators ! and ? are used as abbreviations. Conceptually, the !-operator can be thought of as cancelling the issues raised by whatever it applies to while leaving its informational content untouched. For any formula φ, the inquisitive proposition [[!φ]] expresses the same information as [[φ]], but it may differ in that it raises no nontrivial issues. For example, if [[φ]] is the two-alternative proposition {{w}, {v}, ∅} discussed above, then [[!φ]] is {{w, v}, {w}, {v}, ∅}. The ?-operator trivializes the information expressed by whatever it applies to, while converting information states that would establish that its issues are unresolvable into states that resolve it. This is very abstract, so consider another example. Imagine that logical space consists of four possible worlds, w1, w2, w3, and w4, and consider a formula φ such that [[φ]] contains {w1}, {w2}, and of course ∅. This proposition conveys that the actual world is either w1 or w2 and raises the issue of which of those worlds it actually is. Therefore, the issue it raises would not be resolved if we learned that the actual world is in the information state {w3, w4}.
Rather, learning this would show that the issue raised by our toy proposition is unresolvable. As a result, the proposition [[?φ]] contains all the states of [[φ]], along with {w3, w4} and all of its subsets.
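The notions above (downward-closed propositions, alternatives as maximal information states, informational content, and the issue-cancelling !-operator) can be prototyped in a few lines. The encoding is a sketch, not the official formalism:

```python
# Toy implementation of inquisitive propositions as downward-closed
# sets of information states, where a state is a frozenset of worlds.

from itertools import chain, combinations

def downward_close(states):
    # An inquisitive proposition: close the given states under subsets.
    out = set()
    for s in states:
        s = frozenset(s)
        out.update(frozenset(c) for r in range(len(s) + 1)
                   for c in combinations(s, r))
    return out

def alternatives(P):
    # The alternatives of P: its maximal information states.
    return {s for s in P if not any(s < t for t in P)}

def info(P):
    # Informational content: pool (union) all states in P.
    return frozenset(chain.from_iterable(P))

def bang(P):
    # !P: keep the information, cancel the issue.
    return downward_close([info(P)])

# "Is the actual world w1 or w2?": two alternatives, one issue.
P = downward_close([{"w1"}, {"w2"}])
```

Here `alternatives(P)` yields the two singleton states, while `bang(P)` has the single alternative {w1, w2}: the same informational content with the issue cancelled, as described for [[!φ]] above.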
https://en.wikipedia.org/wiki/Inquisitive_semantics
Philosophy of language refers to the philosophical study of the nature of language. It investigates the relationship between language, language users, and the world.[1] Investigations may include inquiry into the nature of meaning, intentionality, reference, the constitution of sentences, concepts, learning, and thought. Gottlob Frege and Bertrand Russell were pivotal figures in analytic philosophy's "linguistic turn". These writers were followed by Ludwig Wittgenstein (Tractatus Logico-Philosophicus), the Vienna Circle, the logical positivists, and Willard Van Orman Quine.[2] In the West, inquiry into language stretches back to the 5th century BC, with philosophers such as Socrates, Plato, Aristotle, and the Stoics.[3] Linguistic speculation predated systematic descriptions of grammar, which emerged c. the 5th century BC in India and c. the 3rd century BC in Greece. In the dialogue Cratylus, Plato considered the question of whether the names of things were determined by convention or by nature. He criticized conventionalism because it led to the bizarre consequence that anything can be conventionally denominated by any name. Hence, it cannot account for the correct or incorrect application of a name. He claimed that there was a natural correctness to names. To do this, he pointed out that compound words and phrases have a range of correctness. He also argued that primitive names had a natural correctness, because each phoneme represented basic ideas or sentiments. For example, for Plato the letter l and its sound represented the idea of softness. However, by the end of Cratylus, he had admitted that some social conventions were also involved, and that there were faults in the idea that phonemes had individual meanings.[4] Plato is often considered a proponent of extreme realism. Aristotle interested himself in issues of logic, categories, and the creation of meaning. He separated all things into categories of species and genus.
He thought that the meaning of a predicate was established through an abstraction of the similarities between various individual things. This theory later came to be called nominalism.[5] However, since Aristotle took these similarities to be constituted by a real commonality of form, he is more often considered a proponent of moderate realism. The Stoics made important contributions to the analysis of grammar, distinguishing five parts of speech: nouns, verbs, appellatives (names or epithets), conjunctions and articles. They also developed a sophisticated doctrine of the lektón associated with each sign of a language, but distinct from both the sign itself and the thing to which it refers. This lektón was the meaning or sense of every term. The complete lektón of a sentence is what we would now call its proposition.[6] Only propositions were considered truth-bearing (meaning they could be considered true or false), while sentences were simply their vehicles of expression. Different lektá could also express things besides propositions, such as commands, questions and exclamations.[7] Medieval philosophers were greatly interested in the subtleties of language and its usage. For many scholastics, this interest was provoked by the necessity of translating Greek texts into Latin. There were several noteworthy philosophers of language in the medieval period. According to Peter J. King (though this has been disputed), Peter Abelard anticipated the modern theories of reference.[8] Also, William of Ockham's Summa Logicae brought forward one of the first serious proposals for codifying a mental language.[9] The scholastics of the high medieval period, such as Ockham and John Duns Scotus, considered logic to be a scientia sermocinalis (science of language). The result of their studies was the elaboration of linguistic-philosophical notions whose complexity and subtlety has only recently come to be appreciated.
Many of the most interesting problems of modern philosophy of language were anticipated by medieval thinkers. The phenomena of vagueness and ambiguity were analyzed intensely, and this led to an increasing interest in problems related to the use ofsyncategorematicwords, such asand,or,not,if, andevery. The study ofcategorematicwords (orterms) and their properties was also developed greatly.[10]One of the major developments of the scholastics in this area was the doctrine of thesuppositio.[11]Thesuppositioof a term is the interpretation that is given of it in a specific context. It can beproperorimproper(as when it is used inmetaphor,metonym, and other figures of speech). A propersuppositio, in turn, can be eitherformalormaterialaccording to whether it refers to its usual non-linguistic referent (as in "Charles is a man"), or to itself as a linguistic entity (as in "'Charles' has seven letters"). Such a classification scheme is the precursor of modern distinctions betweenuse and mention, and between language and metalanguage.[11] There is a tradition called speculative grammar which existed from the 11th to the 13th century. Leading scholars includedMartin of DaciaandThomas of Erfurt(seeModistae). Linguists of theRenaissanceandBaroqueperiods such asJohannes Goropius Becanus,Athanasius KircherandJohn Wilkinswere infatuated with the idea of aphilosophical languagereversing theconfusion of tongues, influenced by the gradual discovery ofChinese charactersandEgyptian hieroglyphs(Hieroglyphica). This thought parallels the idea that there might be a universal language of music. European scholarship began to absorb theIndian linguistic traditiononly from the mid-18th century, pioneered byJean François PonsandHenry Thomas Colebrooke(theeditio princepsofVaradarāja, a 17th-centurySanskritgrammarian, dating to 1849). In the early 19th century, the Danish philosopherSøren Kierkegaardinsisted that language should play a larger role inWestern philosophy. 
He argued that philosophy has not sufficiently focused on the role language plays in cognition and that future philosophy ought to proceed with a conscious focus on language: If the claim of philosophers to be unbiased were all it pretends to be, it would also have to take account of language and its whole significance in relation to speculative philosophy ... Language is partly something originally given, partly that which develops freely. And just as the individual can never reach the point at which he becomes absolutely independent ... so too with language.[12] The phrase "linguistic turn" was used to describe the noteworthy emphasis that contemporary philosophers put upon language. Language began to play a central role in Western philosophy in the early 20th century. One of the central figures involved in this development was the German philosopherGottlob Frege, whose work on philosophical logic and the philosophy of language in the late 19th century influenced the work of 20th-centuryanalytic philosophersBertrand RussellandLudwig Wittgenstein. The philosophy of language became so pervasive that for a time, inanalytic philosophycircles, philosophy as a whole was understood to be a matter of philosophy of language. Incontinental philosophy, the foundational work in the field wasFerdinand de Saussure'sCours de linguistique générale,[13]published posthumously in 1916. The topic that has received the most attention in the philosophy of language has been thenatureof meaning, to explain what "meaning" is, and what we mean when we talk about meaning. Within this area, issues include: the nature ofsynonymy, the origins of meaning itself, our apprehension of meaning, and the nature of composition (the question of how meaningful units of language are composed of smaller meaningful parts, and how the meaning of the whole is derived from the meaning of its parts). There have been several distinctive explanations of what alinguistic "meaning"is. 
Each has been associated with its own body of literature. Investigations into how language interacts with the world are calledtheories of reference.Gottlob Fregewas an advocate of amediated reference theory. Frege divided the semantic content of every expression, including sentences, into two components:sense and reference. The sense of a sentence is the thought that it expresses. Such a thought is abstract, universal and objective. The sense of any sub-sentential expression consists in its contribution to the thought that its embedding sentence expresses. Senses determine reference and are also the modes of presentation of the objects to which expressions refer.Referentsare the objects in the world that words pick out. The senses of sentences are thoughts, while their referents aretruth values(true or false). The referents of sentences embedded inpropositional attitudeascriptions and other opaque contexts are their usual senses.[26] Bertrand Russell, in his later writings and for reasons related to his theory of acquaintance inepistemology, held that the only directly referential expressions are what he called "logically proper names". Logically proper names are such terms asI,now,hereand otherindexicals.[27][28]He viewed proper names of the sort described above as "abbreviateddefinite descriptions" (seeTheory of descriptions). HenceJoseph R. Bidenmay be an abbreviation for "a past President of the United States and husband of Jill Biden". Definite descriptions are denoting phrases (see "On Denoting") which are analyzed by Russell into existentially quantified logical constructions. Such phrases denote in the sense that there is an object that satisfies the description. However, such objects are not to be considered meaningful on their own, but have meaning only in thepropositionexpressed by the sentences of which they are a part. 
Hence, they are not directly referential in the same way as logically proper names, for Russell.[29][30] On Frege's account, anyreferring expressionhas a sense as well as a referent. Such a "mediated reference" view has certain theoretical advantages over Mill's view. For example, co-referential names, such asSamuel ClemensandMark Twain, cause problems for a directly referential view because it is possible for someone to hear "Mark Twain is Samuel Clemens" and be surprised – thus, their cognitive content seems different. Despite the differences between the views of Frege and Russell, they are generally lumped together asdescriptivistsabout proper names. Such descriptivism was criticized inSaul Kripke'sNaming and Necessity. Kripke put forth what has come to be known as "the modal argument" (or "argument from rigidity"). Consider the nameAristotleand the descriptions "the greatest student of Plato", "the founder of logic" and "the teacher of Alexander".Aristotleobviously satisfies all of the descriptions (and many of the others we commonly associate with him), but it is notnecessarily truethat if Aristotle existed then Aristotle was any one, or all, of these descriptions. Aristotle may well have existed without doing any single one of the things for which he is known to posterity. He may have existed and not have become known to posterity at all or he may have died in infancy. Suppose that Aristotle is associated by Mary with the description "the last great philosopher of antiquity" and (the actual) Aristotle died in infancy. Then Mary's description would seem to refer to Plato. But this is deeply counterintuitive. Hence, names arerigid designators, according to Kripke. That is, they refer to the same individual in every possible world in which that individual exists. In the same work, Kripke articulated several other arguments against "Frege–Russell" descriptivism[22](see also Kripke'scausal theory of reference). 
The whole philosophical enterprise of studying reference has been critiqued by linguistNoam Chomskyin various works.[31][32] It has long been known that there are differentparts of speech. One part of the common sentence is thelexical word, which is composed ofnouns, verbs, and adjectives. A major question in the field – perhaps the single most important question forformalistandstructuralistthinkers – is how the meaning of a sentence emerges from its parts. Many aspects of the problem of the composition of sentences are addressed in the field of linguistics ofsyntax. Philosophical semantics tends to focus on theprinciple of compositionalityto explain the relationship between meaningful parts and whole sentences. The principle of compositionality asserts that a sentence can be understood on the basis of the meaning of thepartsof the sentence (i.e., words, morphemes) along with an understanding of itsstructure(i.e., syntax, logic).[33]Further, syntactic propositions are arranged intodiscourseornarrativestructures, which also encode meanings throughpragmaticslike temporal relations and pronominals.[34] It is possible to use the concept offunctionsto describe more than just how lexical meanings work: they can also be used to describe the meaning of a sentence. In the sentence "The horse is red", "the horse" can be considered to be the product of apropositional function. A propositional function is an operation of language that takes an entity (in this case, the horse) as an input and outputs asemantic fact(i.e., the proposition that is represented by "The horse is red"). In other words, a propositional function is like an algorithm. 
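The propositional-function idea can be sketched in code. The toy model and names below are illustrative assumptions, not anything drawn from the philosophical literature:

```python
# A propositional function maps an entity to a semantic fact.
# Here "red" is modeled as a function from entities to truth values,
# evaluated against a toy model of the world (assumed for illustration).
world_red_things = {"the horse", "the apple"}

def red(entity):
    """Propositional function for the predicate 'is red'."""
    return entity in world_red_things

# Applying the function to an entity yields the semantic fact
# expressed by the corresponding sentence:
red("the horse")  # True  ("The horse is red")
red("the snow")   # False ("The snow is red")
```

Composing the predicate with its argument in this way mirrors how, on the compositional view, the meaning of the whole sentence is built from the meanings of its parts.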
The meaning of "red" in this case is whatever takes the entity "the horse" and turns it into the statement, "The horse is red."[35] Linguists have developed at least two general methods of understanding the relationship between the parts of a linguistic string and how it is put together: syntactic and semantic trees.Syntactictrees draw upon the words of a sentence with thegrammarof the sentence in mind;semantictrees focus upon the role of themeaningof the words and how those meanings combine to provide insight into the genesis of semantic facts. Some of the major issues at the intersection of philosophy of language and philosophy of mind are also dealt with in modernpsycholinguistics. Important questions concern how much of language is innate, whether language acquisition is a special faculty of the mind, and what the connection is between thought and language. There are three general perspectives on the issue of language learning. The first is thebehavioristperspective, which holds that not only is the solid bulk of language learned, but that it is learned via conditioning. The second is thehypothesis testing perspective, which understands the child's learning of syntactic rules and meanings to involve the postulation and testing of hypotheses, through the use of the general faculty of intelligence. 
The final candidate for explanation is theinnatistperspective, which states that at least some of the syntactic settings are innate and hardwired, based on certain modules of the mind.[36][37] There are varying notions of the structure of the brain when it comes to language.Connectionistmodels emphasize the idea that a person's lexicon and their thoughts operate in a kind of distributed,associativenetwork.[38]Nativist modelsassert that there arespecialized devicesin the brain that are dedicated to language acquisition.[37]Computationmodels emphasize the notion of a representationallanguage of thoughtand the logic-like, computational processing that the mind performs over them.[39]Emergentistmodels focus on the notion that natural faculties are complex systems that emerge from simpler biological parts.Reductionistmodels attempt to explain higher-level mental processes in terms of the basic low-level neurophysiological activity.[40] Firstly, this field of study seeks to better understand what speakers and listeners do with language incommunication, and how it is used socially. Specific interests include the topics oflanguage learning, language creation, andspeech acts. Secondly, the question of how language relates to the minds of both the speaker and theinterpreteris investigated. Of specific interest are the grounds for successfultranslationof words and concepts into their equivalents in another language. An important problem which touches both philosophy of language andphilosophy of mindis to what extent language influences thought and vice versa. There have been a number of different perspectives on this issue, each offering a number of insights and suggestions. LinguistsSapir and Whorfsuggested that language limited the extent to which members of a "linguistic community" can think about certain subjects (a hypothesis paralleled inGeorge Orwell's novelNineteen Eighty-Four).[41]In other words, language was analytically prior to thought. 
PhilosopherMichael Dummettis also a proponent of the "language-first" viewpoint.[42] The stark opposite to the Sapir–Whorf position is the notion that thought (or, more broadly, mental content) has priority over language. The "knowledge-first" position can be found, for instance, in the work ofPaul Grice.[42]Further, this view is closely associated withJerry Fodorand hislanguage of thoughthypothesis. According to his argument, spoken and written language derive their intentionality and meaning from an internal language encoded in the mind.[43]The main argument in favor of such a view is that the structure of thoughts and the structure of language seem to share a compositional, systematic character. Another argument is that it is difficult to explain how signs and symbols on paper can represent anything meaningful unless some sort of meaning is infused into them by the contents of the mind. One of the main arguments against is that such levels of language can lead to an infinite regress.[43]In any case, many philosophers of mind and language, such asRuth Millikan,Fred Dretskeand Fodor, have recently turned their attention to explaining the meanings of mental contents and states directly. Another tradition of philosophers has attempted to show that language and thought are coextensive – that there is no way of explaining one without the other. Donald Davidson, in his essay "Thought and Talk", argued that the notion of belief could only arise as a product of public linguistic interaction.Daniel Dennettholds a similarinterpretationistview ofpropositional attitudes.[44]To an extent, the theoretical underpinnings tocognitive semantics(including the notion of semanticframing) suggest the influence of language upon thought.[45]However, the same tradition views meaning and grammar as a function of conceptualization, making it difficult to assess in any straightforward way. 
Some thinkers, like the ancient sophistGorgias, have questioned whether or not language was capable of capturing thought at all. ...speech can never exactly represent perceptibles, since it is different from them, and perceptibles are apprehended each by the one kind of organ, speech by another. Hence, since the objects of sight cannot be presented to any other organ but sight, and the different sense-organs cannot give their information to one another, similarly speech cannot give any information about perceptibles. Therefore, if anything exists and is comprehended, it is incommunicable.[46] Studies suggest that languages shape how people understand causality. Some of them were performed byLera Boroditsky. For example, English speakers tend to say things like "John broke the vase" even for accidents. However,SpanishorJapanesespeakers would be more likely to say "the vase broke itself". In studies conducted by Caitlin Fausey atStanford University, speakers of English, Spanish and Japanese watched videos of two people popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later everyone was asked whether they could remember who did what. Spanish and Japanese speakers did not remember the agents of accidental events as well as did English speakers.[47] Russianspeakers, who make an extra distinction between light and dark blue in their language, are better able to visually discriminate shades of blue. ThePiraha, a tribe inBrazil, whose language has only terms like few and many instead of numerals, are not able to keep track of exact quantities.[48] In one study, German and Spanish speakers were asked to describe objects having oppositegenderassignment in those two languages. The descriptions they gave differed in a way predicted bygrammatical gender. 
For example, when asked to describe a "key"—a word that is masculine in German and feminine in Spanish—theGermanspeakers were more likely to use words like "hard", "heavy", "jagged", "metal", "serrated" and "useful" whereas Spanish speakers were more likely to say "golden", "intricate", "little", "lovely", "shiny" and "tiny". To describe a "bridge", which is feminine in German and masculine in Spanish, the German speakers said "beautiful", "elegant", "fragile", "peaceful", "pretty" and "slender", and the Spanish speakers said "big", "dangerous", "long", "strong", "sturdy" and "towering". This was the case even though all testing was done in English, a language without grammatical gender.[49] In a series of studies conducted by Gary Lupyan, people were asked to look at a series of images of imaginary aliens.[50]Whether each alien was friendly or hostile was determined by certain subtle features but participants were not told what these were. They had to guess whether each alien was friendly or hostile, and after each response they were told if they were correct or not, helping them learn the subtle cues that distinguished friend from foe. A quarter of the participants were told in advance that the friendly aliens were called "leebish" and the hostile ones "grecious", while another quarter were told the opposite. For the rest, the aliens remained nameless. It was found that participants who were given names for the aliens learned to categorize the aliens far more quickly, reaching 80 per cent accuracy in less than half the time taken by those not told the names. By the end of the test, those told the names could correctly categorize 88 per cent of aliens, compared to just 80 per cent for the rest. It was concluded that naming objects helps us categorize and memorize them. In another series of experiments,[51]a group of people was asked to view furniture from anIKEAcatalog. 
Half the time they were asked to label the object – whether it was a chair or lamp, for example – while the rest of the time they had to say whether or not they liked it. It was found that when asked to label items, people were later less likely to recall the specific details of products, such as whether a chair had arms or not. It was concluded that labeling objects helps our minds build a prototype of the typical object in the group at the expense of individual features.[52] A common claim is that language is governed by social conventions. Questions inevitably arise on surrounding topics. One question regards what exactly a convention is and how it is studied; a second regards the extent to which conventions matter in the study of language.David Kellogg Lewisproposed an influential reply to the first question by expounding the view that a convention is a "rationally self-perpetuating regularity in behavior". However, this view seems to compete to some extent with the Gricean view of speaker's meaning, requiring either one (or both) to be weakened if both are to be taken as true.[42] Some have questioned whether or not conventions are relevant to the study of meaning at all.Noam Chomskyproposed that the study of language could be done in terms of the I-Language, or internal language of persons. If this is so, then it undermines the pursuit of explanations in terms of conventions, and relegates such explanations to the domain ofmetasemantics.Metasemanticsis a term used by philosopher of languageRobert Staintonto describe all those fields that attempt to explain how semantic facts arise.[35]One fruitful source of research involves investigation into the social conditions that give rise to, or are associated with, meanings and languages.Etymology(the study of the origins of words) andstylistics(philosophical argumentation over what makes "good grammar", relative to a particular language) are two other examples of fields that are taken to be metasemantic. 
Many separate (but related) fields have investigated the topic of linguistic convention within their own research paradigms. The presumptions that prop up each theoretical view are of interest to the philosopher of language. For instance, one of the major fields of sociology,symbolic interactionism, is based on the insight that human social organization is based almost entirely on the use of meanings.[53]In consequence, any explanation of asocial structure(like aninstitution) would need to account for the shared meanings which create and sustain the structure. Rhetoricis the study of the particular words that people use to achieve the proper emotional and rational effect in the listener, be it to persuade, provoke, endear, or teach. Some relevant applications of the field include the examination ofpropagandaanddidacticism, the examination of the purposes ofswearingandpejoratives(especially how it influences the behaviors of others, and defines relationships), or the effects of gendered language. It can also be used to studylinguistic transparency(or speaking in an accessible manner), as well asperformativeutterances and the various tasks that language can perform (called "speech acts"). It also has applications to the study and interpretation of law, and helps give insight to the logical concept of thedomain of discourse. Literary theoryis a discipline that some literary theorists claim overlaps with the philosophy of language. It emphasizes the methods that readers and critics use in understanding a text. This field, an outgrowth of the study of how to properly interpret messages, is closely tied to the ancient discipline ofhermeneutics. Finally, philosophers of language investigate how language and meaning relate totruthandthe reality being referred to. They tend to be less interested in which sentences areactually true, and more inwhat kinds of meanings can be true or false. 
A truth-oriented philosopher of language might wonder whether or not a meaningless sentence can be true or false, or whether or not sentences can express propositions about things that do not exist, rather than the way sentences are used.[citation needed] In the philosophical tradition stemming from the Ancient Greeks, such as Plato and Aristotle, language is seen as a tool for making statements about the reality by means ofpredication; e.g. "Man is a rational animal", whereManis thesubjectandis a rational animalis thepredicate, which expresses a property of the subject. Such structures also constitute the syntactic basis ofsyllogism, which remained the standard model of formal logic until the early 20th century, when it was replaced withpredicate logic. In linguistics and philosophy of language, the classical model survived in the Middle Ages, and the link between Aristotelian philosophy of science and linguistics was elaborated by Thomas of Erfurt'sModistaegrammar (c.1305), which gives an example of the analysis of thetransitivesentence: "Plato strikes Socrates", whereSocratesis theobjectand part of the predicate.[54][55] The social and evolutionary aspects of language were discussed during the classical and mediaeval periods. Plato's dialogueCratylusinvestigates theiconicityof words, arguing that words are made by "wordsmiths" and selected by those who need the words, and that the study of language is external to the philosophical objective of studyingideas.[56]Age-of-Enlightenment thinkers accommodated the classical model with a Christian worldview, arguing that God created Man social and rational, and, out of these properties, Man created his own cultural habits including language.[57]In this tradition, the logic of the subject-predicate structure forms a general, or 'universal' grammar, which governs thinking and underpins all languages. 
Variation between languages was investigated in thePort-Royal Grammarof Arnauld and Lancelot, among others, who described it as accidental and separate from the logical requirements of thought and language.[58] The classical view was overturned in the early 19th century by the advocates ofGerman romanticism.Humboldtand his contemporaries questioned the existence of a universalinner form of thought. They argued that, since thinking is verbal, language must be the prerequisite for thought. Therefore, every nation has its own unique way of thinking, aworldview, which has evolved with the linguistic history of the nation.[59]Diversity became emphasized with a focus on the uncontrollable sociohistorical construction of language. Influential romantic accounts includeGrimm'ssound lawsof linguistic evolution,Schleicher's "Darwinian" species-language analogy, theVölkerpsychologieaccounts of language bySteinthalandWundt, andSaussure'ssemiology, a dyadic model ofsemiotics, i.e., language as asignsystem with its own inner logic, separated from physical reality.[60] In the early 20th century,logical grammarwas defended byFregeandHusserl. Husserl's 'pure logical grammar' draws from 17th-century rational universal grammar, proposing a formal semantics that links the structures of physical reality (e.g., "This paper is white") with the structures of the mind, meaning, and the surface form of natural languages. Husserl's treatise was, however, rejected in general linguistics.[61]Instead, linguists opted forChomsky's theory ofuniversal grammaras an innate biological structure that generates syntax in aformalisticfashion, i.e., irrespective of meaning.[54] Many philosophers continue to hold the view that language is a logically based tool of expressing the structures of reality by means of predicate-argument structure. Proponents include, with different nuances,Russell,Wittgenstein,Sellars,Davidson,Putnam, andSearle. 
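The predicate-argument view can be illustrated with the article's earlier example "Plato strikes Socrates", rendered as strikes(Plato, Socrates). The toy extension below is an assumption for illustration, not a claim from the source:

```python
# "Plato strikes Socrates" in predicate-argument form: strikes(plato, socrates).
# On this view the sentence is true just in case the ordered pair of
# arguments is in the extension of the predicate (a toy extension,
# assumed here for illustration).
strikes_extension = {("Plato", "Socrates")}

def strikes(agent, patient):
    """Two-place predicate: true iff the agent strikes the patient in the model."""
    return (agent, patient) in strikes_extension

strikes("Plato", "Socrates")  # True
strikes("Socrates", "Plato")  # False (argument order matters)
```

Representing the predicate as a relation over ordered pairs is what lets predicate logic distinguish subject and object, which the older subject-predicate analysis of syllogistic logic could not do cleanly.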
Attempts to revive logical formal semantics as a basis of linguistics followed, e.g., theMontague grammar. Despite resistance from linguists including Chomsky andLakoff,formal semanticswas established in the late twentieth century. However, its influence has been mostly limited tocomputational linguistics, with little impact on general linguistics.[62] The incompatibility withgeneticsandneuropsychologyof Chomsky's innate grammar gave rise to new psychologically and biologically oriented theories of language in the 1980s, and these have gained influence in linguistics andcognitive sciencein the 21st century. Examples include Lakoff'sconceptual metaphor, which argues that language arises automatically from visual and other sensory input, and different models inspired byDawkins'smemetics,[63]aneo-Darwinianmodel of linguistic units as the units ofnatural selection. These includecognitive grammar,construction grammar, andusage-based linguistics.[64] One debate that has captured the interest of many philosophers is the debate over the meaning ofuniversals. It might be asked, for example, why when people say the wordrocks, what it is that the word represents. Two different answers have emerged to this question. Some have said that the expression stands for some real, abstract universal out in the world called "rocks". Others have said that the word stands for some collection of particular, individual rocks that are associated with merely a nomenclature. The former position has been calledphilosophical realism, and the latternominalism.[65] The issue here can be explicated in examination of the proposition "Socrates is a man". From the realist's perspective, the connection between S and M is a connection between two abstract entities. There is an entity, "man", and an entity, "Socrates". These two things connect in some way or overlap. 
From a nominalist's perspective, the connection between S and M is the connection between a particular entity (Socrates) and a vast collection of particular things (men). To say that Socrates is a man is to say that Socrates is a part of the class of "men". Another perspective is to consider "man" to be apropertyof the entity, "Socrates". There is a third way, between nominalism and(extreme) realism, usually called "moderate realism" and attributed to Aristotle and Thomas Aquinas. Moderate realists hold that "man" refers to a real essence or form that is really present and identical in Socrates and all other men, but "man" does not exist as a separate and distinct entity. This is a realist position, because "man" is real, insofar as it really exists in all men; but it is a moderate realism, because "man" is not an entity separate from the men it informs. Another of the questions that has divided philosophers of language is the extent to which formal logic can be used as an effective tool in the analysis and understanding of natural languages. While most philosophers, includingGottlob Frege,Alfred TarskiandRudolf Carnap, have been more or less skeptical about formalizing natural languages, many of them developed formal languages for use in the sciences or formalizedpartsof natural language for investigation. Some of the most prominent members of this tradition offormal semanticsinclude Tarski, Carnap,Richard MontagueandDonald Davidson.[66] On the other side of the divide, and especially prominent in the 1950s and '60s, were the so-called "ordinary language philosophers". Philosophers such asP. F. Strawson,John Langshaw AustinandGilbert Rylestressed the importance of studying natural language without regard to the truth-conditions of sentences and the references of terms. They did not believe that the social and practical dimensions of linguistic meaning could be captured by any attempts at formalization using the tools of logic. 
Logic is one thing and language is something entirely different. What is important is not expressions themselves but what people use them to do in communication.[67] Hence, Austin developed a theory ofspeech acts, which described the kinds of things which can be done with a sentence (assertion, command, inquiry, exclamation) in different contexts of use on different occasions.[68]Strawson argued that the truth-table semantics of the logical connectives (e.g., ∧, ∨ and →) do not capture the meanings of their natural language counterparts ("and", "or" and "if-then").[69]While the "ordinary language" movement basically died out in the 1970s, its influence was crucial to the development of the fields of speech-act theory and the study ofpragmatics. Many of its ideas have been absorbed by theorists such asKent Bach,Robert Brandom,Paul HorwichandStephen Neale.[19]In recent work, the division between semantics and pragmatics has become a lively topic of discussion at the interface of philosophy and linguistics, for instance in work by Sperber and Wilson, Carston and Levinson.[70][71][72] While keeping these traditions in mind, the question of whether there are any grounds for conflict between the formal and informal approaches is far from decided. Some theorists, likePaul Grice, have been skeptical of any claims that there is a substantial conflict between logic and natural language.[73] Game theory has been suggested as a tool to study the evolution of language. Some researchers who have developed game theoretical approaches to philosophy of language areDavid K. Lewis, Schuhmacher, and Rubinstein.[74] Translation and interpretation are two other problems that philosophers of language have attempted to confront. In the 1950s,W.V. Quineargued for the indeterminacy of meaning and reference based on the principle ofradical translation. 
InWord and Object, Quine asks readers to imagine a situation in which they are confronted with a previously undocumented group of indigenous people and must attempt to make sense of the utterances and gestures that its members make. This is the situation of radical translation.[75] He claimed that, in such a situation, it is impossiblein principleto be absolutely certain of the meaning or reference that a speaker of the indigenous people's language attaches to an utterance. For example, if a speaker sees a rabbit and says "gavagai", is she referring to the whole rabbit, to the rabbit's tail, or to a temporal part of the rabbit? All that can be done is to examine the utterance as a part of the overall linguistic behaviour of the individual, and then use these observations to interpret the meaning of all other utterances. From this basis, one can form a manual of translation. But, since reference is indeterminate, there will be many such manuals, no one of which is more correct than the others. For Quine, as for Wittgenstein and Austin, meaning is not something that is associated with a single word or sentence, but is rather something that, if it can be attributed at all, can only be attributed to a whole language.[75]The resulting view is calledsemantic holism. Inspired by Quine's discussion,Donald Davidsonextended the idea of radical translation to the interpretation of utterances and behavior within a single linguistic community. He dubbed this notionradical interpretation. He suggested that the meaning that any individual ascribed to a sentence could only be determined by attributing meanings to many, perhaps all, of the individual's assertions, as well as their mental states and attitudes.[17] One issue that has troubled philosophers of language and logic is the problem of thevaguenessof words. 
The specific instances of vagueness that most interest philosophers of language are those where the existence of "borderline cases" makes it seemingly impossible to say whether a predicate is true or false. Classic examples are "is tall" or "is bald", where it cannot be said that some borderline case (some given person) is tall or not-tall. In consequence, vagueness gives rise to the paradox of the heap. Many theorists have attempted to solve the paradox by way of n-valued logics, such as fuzzy logic, which have radically departed from classical two-valued logics.[76]
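One way to see how an n-valued logic handles borderline cases is a small fuzzy-logic sketch, in which "is tall" receives a degree of truth between 0 and 1. A minimal sketch; the height thresholds below are invented for illustration, not drawn from any particular theory:

```python
def tall(height_cm):
    """Degree to which 'is tall' holds, in [0, 1] (thresholds are assumptions)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

# Standard fuzzy connectives: conjunction is min, negation is 1 - x.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_not(a):
    return 1.0 - a

# A borderline case: 175 cm is tall to degree 0.5, so even the classical
# contradiction "tall and not tall" holds to degree 0.5 rather than 0.
borderline = tall(175)
contradiction_degree = fuzzy_and(borderline, fuzzy_not(borderline))
```

In a two-valued logic "x is tall and x is not tall" is always false; in the fuzzy sketch it holds to degree 0.5 for a borderline individual, which is precisely the departure from classical logic that such theories exploit.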
https://en.wikipedia.org/wiki/Philosophy_of_language
In linguistics and related fields, pragmatics is the study of how context contributes to meaning. The field of study evaluates how human language is utilized in social interactions, as well as the relationship between the interpreter and the interpreted.[1] Linguists who specialize in pragmatics are called pragmaticians. The field has been represented since 1986 by the International Pragmatics Association (IPrA). Pragmatics encompasses phenomena including implicature, speech acts, relevance and conversation,[2] as well as nonverbal communication. Theories of pragmatics go hand-in-hand with theories of semantics, which studies aspects of meaning, and syntax, which examines sentence structures, principles, and relationships. The ability to understand another speaker's intended meaning is called pragmatic competence.[3][4][5] In 1938, Charles Morris first distinguished pragmatics as an independent subfield within semiotics, alongside syntax and semantics.[6] Pragmatics emerged as its own subfield in the 1950s after the pioneering work of J. L. Austin and Paul Grice.[7][8] Pragmatics was a reaction to structuralist linguistics as outlined by Ferdinand de Saussure. In many cases, it expanded upon his idea that language has an analyzable structure, composed of parts that can be defined in relation to others. Pragmatics first engaged only in synchronic study, as opposed to examining the historical development of language. However, it rejected the notion that all meaning comes from signs existing purely in the abstract space of langue. Meanwhile, historical pragmatics has also come into being. The field did not gain linguists' attention until the 1970s, when two different schools emerged: Anglo-American pragmatic thought and European continental pragmatic thought (also called the perspective view).[9] Ambiguity refers to cases where it is difficult to infer meaning without knowing the context, the identity of the speaker, or the speaker's intent.
For example, the sentence "You have a green light" is ambiguous, as without knowing the context, one could reasonably interpret it as meaning: Another example of an ambiguous sentence is, "I went to the bank." This is an example of lexical ambiguity, as the word bank can either be in reference to a place where money is kept, or the edge of a river. To understand what the speaker is truly saying, it is a matter of context, which is why it is pragmatically ambiguous as well.[15] Similarly, the sentence "Sherlock saw the man with binoculars" could mean that Sherlock observed the man by using binoculars, or it could mean that Sherlock observed a man who was holding binoculars (syntactic ambiguity).[16]The meaning of the sentence depends on an understanding of the context and the speaker's intent. As defined in linguistics, a sentence is an abstract entity: a string of words divorced from non-linguistic context, as opposed to anutterance, which is a concrete example of aspeech actin a specific context. The more closely conscious subjects stick to common words, idioms, phrasings, and topics, the more easily others can surmise their meaning; the further they stray from common expressions and topics, the wider the variations in interpretations. That suggests that sentences do not have intrinsic meaning, that there is no meaning associated with a sentence or word, and that either can represent an idea only symbolically.The cat sat on the matis a sentence in English. If someone were to say to someone else, "The cat sat on the mat", the act is itself an utterance. That implies that a sentence, term, expression or word cannot symbolically represent a single true meaning; such meaning is underspecified (which cat sat on which mat?) and potentially ambiguous. By contrast, the meaning of an utterance can be inferred through knowledge of both its linguistic and non-linguistic contexts (which may or may not be sufficient to resolve ambiguity). 
In mathematics, with Berry's paradox, there arises a similar systematic ambiguity with the word "definable". The referential uses of language are how signs are used to refer to certain items. A sign is the link or relationship between a signified and a signifier, as defined by de Saussure. The signified is some entity or concept in the world, and the signifier represents the signified. The relationship between the two gives the sign meaning. The relationship can be explained further by considering what is meant by "meaning". In pragmatics, there are two different types of meaning to consider: semantic-referential meaning and indexical meaning.[17] Semantic-referential meaning refers to the aspect of meaning which describes events in the world that are independent of the circumstance in which they are uttered. An example would be a proposition such as "Santa Claus eats cookies." In this case, the proposition describes Santa Claus eating cookies. The meaning of the proposition does not rely on whether or not Santa Claus is eating cookies at the time of its utterance. Santa Claus could be eating cookies at any time and the meaning of the proposition would remain the same. The meaning is simply describing something that is the case in the world. In contrast, the proposition "Santa Claus is eating a cookie right now" describes events that are happening at the time the proposition is uttered. Semantic-referential meaning is also present in meta-semantical statements such as "A tiger is a carnivorous animal" or "A tiger is a mammal." If someone were to say that a tiger is a carnivorous animal in one context and a mammal in another, the definition of "tiger" would still be the same. The meaning of the sign tiger is describing some animal in the world, which does not change in either circumstance. Indexical meaning, on the other hand, is dependent on the context of the utterance and has rules of use. By rules of use, it is meant that indexicals can tell you when they are used, but not what they actually mean.
Whom "I" refers to, depends on the context and the person uttering it. As mentioned, these meanings are brought about through the relationship between the signified and the signifier. One way to define the relationship is by placing signs in two categories:referential indexical signs,also called "shifters", andpure indexical signs. Referential indexical signs are signs where the meaning shifts depending on the context hence the nickname "shifters." 'I' would be considered a referential indexical sign. The referential aspect of its meaning would be '1st person singular' while the indexical aspect would be the person who is speaking (refer above for definitions of semantic-referential and indexical meaning). Another example would be: A pure indexical sign does not contribute to the meaning of the propositions at all. It is an example of a "non-referential use of language." A second way to define the signified and signifier relationship isC.S. Peirce'sPeircean Trichotomy. The components of the trichotomy are the following: These relationships allow signs to be used to convey intended meaning. If two people were in a room and one of them wanted to refer to a characteristic of a chair in the room he would say "this chair has four legs" instead of "a chair has four legs." The former relies on context (indexical and referential meaning) by referring to a chair specifically in the room at that moment while the latter is independent of the context (semantico-referential meaning), meaning the concept chair.[18] Referringto things and people is a common feature of conversation, and conversants do socollaboratively. Individuals engaging indiscourseutilize pragmatics.[19]In addition, individuals within the scope of discourse cannot help but avoid intuitive use of certain utterances or word choices in an effort to create communicative success.[19]The study of referential language is heavily focused upondefinite descriptionsand referent accessibility. 
Theories have been presented for why direct referent descriptions occur in discourse[20] (in layman's terms: why the names of certain people, places, or individuals involved in, or forming a topic of, the conversation at hand are repeated more often than one would think necessary). Four factors are widely accepted for the use of referent language: (i) competition with a possible referent, (ii) salience of the referent in the context of discussion, (iii) an effort for unity of the parties involved, and (iv) a blatant presence of distance from the last referent.[19] Referential expressions are a form of anaphora.[20] They are also a means of connecting past and present thoughts together to create context for information at hand. Analyzing the context of a sentence and determining whether the use of a referent expression is necessary is highly reliant upon the author's or speaker's discretion, and correlates strongly with the use of pragmatic competency.[20][19] Michael Silverstein has argued that "nonreferential" or "pure" indices do not contribute to an utterance's referential meaning but instead "signal some particular value of one or more contextual variables."[21] Although nonreferential indexes are devoid of semantico-referential meaning, they do encode "pragmatic" meaning. The sorts of contexts that such indexes can mark are varied. In all of these cases, the semantico-referential meaning of the utterances is unchanged from that of the other possible (but often impermissible) forms, but the pragmatic meaning is vastly different. J. L. Austin introduced the concept of the performative, contrasted in his writing with "constative" (i.e. descriptive) utterances. According to Austin's original formulation, a performative is a type of utterance characterized by two distinctive features: it is not truth-evaluable (i.e. it is neither true nor false), and its uttering performs an action rather than merely describing one. Classic Austinian examples include "I name this ship the Queen Elizabeth" and "I bet you sixpence it will rain tomorrow." To be performative, an utterance must conform to various conditions involving what Austin calls felicity.
These deal with things like appropriate context and the speaker's authority. For instance, when a couple has been arguing and the husband says to his wife that he accepts her apology even though she has offered nothing approaching an apology, his assertion is infelicitous: because she has made neither an expression of regret nor a request for forgiveness, there exists none to accept, and thus no act of accepting can possibly happen. Roman Jakobson, expanding on the work of Karl Bühler, described six "constitutive factors" of a speech event, each of which represents the privileging of a corresponding function, and only one of which is the referential (which corresponds to the context of the speech event). The six constitutive factors and their corresponding functions are: the context (referential function), the addresser (emotive function), the addressee (conative function), the contact (phatic function), the code (metalingual function), and the message (poetic function). There is considerable overlap between pragmatics and sociolinguistics, since both share an interest in linguistic meaning as determined by usage in a speech community. However, sociolinguists tend to be more interested in variations in language within such communities. Influences of philosophy and politics are also present in the field of pragmatics, as the dynamics of societies and oppression are expressed through language.[24] Pragmatics helps anthropologists relate elements of language to broader social phenomena; it thus pervades the field of linguistic anthropology. Because pragmatics describes generally the forces in play for a given utterance, it includes the study of power, gender, race, identity, and their interactions with individual speech acts. For example, the study of code switching directly relates to pragmatics, since a switch in code effects a shift in pragmatic force.[23] According to Charles W.
Morris, pragmatics tries to understand the relationship between signs and their users, while semantics tends to focus on the actual objects or ideas to which a word refers, and syntax (or "syntactics") examines relationships among signs or symbols. Semantics is the literal meaning of an idea, whereas pragmatics is the implied meaning of the given idea. Speech act theory, pioneered by J. L. Austin and further developed by John Searle, centers around the idea of the performative, a type of utterance that performs the very action it describes. Speech act theory's examination of illocutionary acts has many of the same goals as pragmatics, as outlined above. Computational pragmatics, as defined by Victoria Fromkin, concerns how humans can communicate their intentions to computers with as little ambiguity as possible.[25] That process, integral to the science of natural language processing (seen as a sub-discipline of artificial intelligence), involves providing a computer system with some database of knowledge related to a topic and a series of algorithms which control how the system responds to incoming data, using contextual knowledge to more accurately approximate natural human language and information-processing abilities. Reference resolution, how a computer determines when two objects are different or not, is one of the most important tasks of computational pragmatics. There has been a great amount of discussion on the boundary between semantics and pragmatics,[26] and there are many different formalizations of aspects of pragmatics linked to context dependence.
Particularly interesting cases are the discussions on the semantics of indexicals and the problem of referential descriptions, a topic developed after the theories of Keith Donnellan.[27] A proper logical theory of formal pragmatics has been developed by Carlo Dalla Pozza, according to which it is possible to connect classical semantics (treating propositional contents as true or false) and intuitionistic semantics (dealing with illocutionary forces). The presentation of a formal treatment of pragmatics appears to be a development of the Fregean idea of the assertion sign as a formal sign of the act of assertion. Over the past decade, many probabilistic and Bayesian methods have become popular in the modelling of pragmatics, of which the most successful framework has been the Rational Speech Act framework[28] developed by Noah Goodman and Michael C. Frank, which has already seen much use in the analysis of metaphor,[29] hyperbole[30] and politeness.[31] In the Rational Speech Act framework, listeners and speakers both reason about the other's reasoning concerning the literal meaning of the utterances; as such, the resulting interpretation depends on, but is not necessarily determined by, the literal truth-conditional meaning of an utterance, and so the framework uses recursive reasoning to pursue a broadly Gricean cooperative ideal.
In the most basic form of the Rational Speech Act model, there are three levels of inference. Beginning from the highest level, the pragmatic listener L1 reasons about the pragmatic speaker S1, and infers the likely world state s taking into account that S1 has deliberately chosen to produce utterance u; S1, in turn, chooses utterance u by reasoning about how the literal listener L0 will understand the literal meaning of u, and so attempts to maximise the chances that L0 will correctly infer the world state s. As such, a simple schema of the Rational Speech Act reasoning hierarchy can be formulated for use in a reference game:[32]

L1: P_{L1}(s|u) ∝ P_{S1}(u|s) · P(s)
S1: P_{S1}(u|s) ∝ exp(α · U_{S1}(u; s))
L0: P_{L0}(s|u) ∝ [[u]](s) · P(s)

Pragmatics (more specifically, speech act theory's notion of the performative) underpins Judith Butler's theory of gender performativity. In Gender Trouble, they claim that gender and sex are not natural categories, but socially constructed roles produced by "reiterative acting." In Excitable Speech they extend their theory of performativity to hate speech and censorship, arguing that censorship necessarily strengthens any discourse it tries to suppress, and that therefore, since the state has sole power to define hate speech legally, it is the state that makes hate speech performative. Jacques Derrida remarked that some work done under pragmatics aligned well with the program he outlined in his book Of Grammatology. Émile Benveniste argued that the pronouns "I" and "you" are fundamentally distinct from other pronouns because of their role in creating the subject.
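The three-level hierarchy can be turned into a short runnable sketch of a toy reference game. This is a minimal sketch: the three objects, four one-word utterances, uniform prior, and informativeness utility U(u; s) = log L0(s|u) are standard illustrative assumptions, not details from the text.

```python
import math

# Toy reference game: three objects, four one-word utterances (assumptions).
STATES = ["blue_square", "blue_circle", "green_square"]
UTTERANCES = ["blue", "green", "square", "circle"]

def literal_meaning(u, s):
    """[[u]](s): 1.0 if utterance u is literally true of state s, else 0.0."""
    color, shape = s.split("_")
    return 1.0 if u in (color, shape) else 0.0

def normalize(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def L0(u):
    """Literal listener: P(s|u) ∝ [[u]](s) · P(s), with a uniform prior."""
    return normalize({s: literal_meaning(u, s) for s in STATES})

def S1(s, alpha=1.0):
    """Pragmatic speaker: P(u|s) ∝ exp(α · U(u; s)), with U(u; s) = log L0(s|u)."""
    scores = {}
    for u in UTTERANCES:
        p = L0(u).get(s, 0.0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def L1(u):
    """Pragmatic listener: P(s|u) ∝ S1(u|s) · P(s)."""
    return normalize({s: S1(s)[u] for s in STATES})
```

Hearing "blue", the literal listener is undecided between the two blue objects, but the pragmatic listener favours the blue square (0.6 versus 0.4): a speaker who meant the circle could have chosen the uniquely identifying "circle", so the choice of "blue" is evidence against that referent.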
Gilles Deleuze and Félix Guattari discuss linguistic pragmatics in the fourth chapter of A Thousand Plateaus ("November 20, 1923: Postulates of Linguistics"). They draw three conclusions from Austin: (1) a performative utterance does not communicate information about an act second-hand, but it is the act; (2) every aspect of language ("semantics, syntactics, or even phonematics") functionally interacts with pragmatics; (3) there is no distinction between language and speech. This last conclusion attempts to refute Saussure's division between langue and parole and Chomsky's distinction between deep structure and surface structure simultaneously.[33]
https://en.wikipedia.org/wiki/Pragmatics
In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning.[1] Specific topics include scope,[2][3] binding,[2] and lexical semantic properties such as verbal aspect and nominal individuation,[4][5][6][7][8] semantic macroroles,[8] and unaccusativity.[4] The interface is conceived of very differently in formalist and functionalist approaches. While functionalists tend to look to semantics and pragmatics for explanations of syntactic phenomena, formalists try to keep such explanations within syntax itself.[9] Aside from syntax, other aspects of grammar have been studied in terms of how they interact with semantics, as can be observed in the existence of terms such as morphosyntax–semantics interface.[3] Within functionalist approaches, research on the syntax–semantics interface has aimed to disprove the formalist argument of the autonomy of syntax by finding instances of semantically determined syntactic structures.[4][10] Levin and Rappaport Hovav, in their 1995 monograph, reiterated that there are some aspects of verb meaning that are relevant to syntax and others that are not, as previously noted by Steven Pinker.[11][12] Levin and Rappaport Hovav isolated such aspects by focusing on the phenomenon of unaccusativity, which is "semantically determined and syntactically encoded".[13] Van Valin and LaPolla, in their 1997 monographic study, found that the more semantically motivated or driven a syntactic phenomenon is, the more it tends to be typologically universal, that is, to show less cross-linguistic variation.[14] In formal semantics, semantic interpretation is viewed as a mapping from syntactic structures to denotations. There are several formal views of the syntax–semantics interface, which differ in what they take to be the inputs and outputs of this mapping.
In the Heim and Kratzer model commonly adopted within generative linguistics, the input is taken to be a special level of syntactic representation called logical form. At logical form, semantic relationships such as scope and binding are represented unambiguously, having been determined by syntactic operations such as quantifier raising. Other formal frameworks take the opposite approach, assuming that such relationships are established by the rules of semantic interpretation themselves. In such systems, the rules include mechanisms such as type shifting and dynamic binding.[1][15][16][2] Before the 1950s, there was no discussion of a syntax–semantics interface in American linguistics, since neither syntax nor semantics was an active area of research.[17] This neglect was due in part to the influence of logical positivism and behaviorism in psychology, which viewed hypotheses about linguistic meaning as untestable.[17][18] By the 1960s, syntax had become a major area of study, and some researchers began examining semantics as well. In this period, the most prominent view of the interface was the Katz–Postal hypothesis, according to which deep structure was the level of syntactic representation which underwent semantic interpretation. This assumption was upended by data involving quantifiers, which showed that syntactic transformations can affect meaning. During the linguistics wars, a variety of competing notions of the interface were developed, many of which live on in present-day work.[17][2]
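The idea of semantic interpretation as a mapping from syntactic structures to denotations can be illustrated with a toy extensional model in which denotations are functions and composition is function application. This is a minimal sketch in textbook style: the three-entity domain, the predicates, and the generalized-quantifier treatment of "every"/"some" are invented for illustration, not a reproduction of Heim and Kratzer's actual system.

```python
# A tiny model: entities are strings, predicates are functions (e -> bool),
# and determiners denote generalized quantifiers (e->bool) -> ((e->bool) -> bool).
DOMAIN = {"ann", "bob", "carol"}

def student(x):
    return x in {"ann", "bob"}

def sleeps(x):
    return x in {"ann", "carol"}

def every(restrictor):
    # [[every]]: true of a scope property iff it holds of all restrictor members.
    return lambda scope: all(scope(x) for x in DOMAIN if restrictor(x))

def some(restrictor):
    # [[some]]: true of a scope property iff it holds of at least one member.
    return lambda scope: any(scope(x) for x in DOMAIN if restrictor(x))

# Interpreting the tree [S [NP every student] [VP sleeps]] by function application:
every_student_sleeps = every(student)(sleeps)  # bob is a non-sleeping student
some_student_sleeps = some(student)(sleeps)    # ann is a sleeping student
```

Each node's denotation is computed from its daughters by function application alone, which is the sense in which the interface maps syntactic structure to meaning.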
https://en.wikipedia.org/wiki/Syntax%E2%80%93semantics_interface
Musical languages are constructed languages based on musical sounds, which tend to incorporate articulation. Whistled languages are dependent on an underlying spoken language and are used in various cultures as a means of communication over distance, or as secret codes. The mystical concept of a language of the birds tries to connect the two categories, since some authors[who?] of musical a priori languages have speculated about a mystical or primeval origin of the whistled languages.[citation needed] As of now, there are only a few musical language families, such as the Solresol, Moss, and Nibuzigu language families. The Solresol family is a family of a posteriori languages (usually based on English) in which sequences of the 7 notes of the Western C-major scale, or of the 12-tone chromatic scale, are used as phonemes. Kobaïan is a language constructed by Christian Vander of the band Magma, which uses elements of Slavic and Germanic languages,[3] but is based primarily on "sonorities, not on applied meanings".[4]
https://en.wikipedia.org/wiki/Musical_language
In Abrahamic and European mythology, medieval literature and occultism, the language of the birds is postulated as a mystical, perfect divine language, Adamic language, Enochian or angelic language, or a mythical or magical language used by birds to communicate with the initiated. In Indo-European religion, the behavior of birds has long been used for the purposes of divination by augurs. According to a suggestion by Walter Burkert, these customs may have their roots in the Paleolithic when, during the Ice Age, early humans looked for carrion by observing scavenging birds.[1] There are also examples of contemporary bird–human communication and symbiosis. In North America, ravens have been known to lead wolves (and native hunters) to prey they otherwise would be unable to consume.[2][3] In Africa, the greater honeyguide is known to guide humans to beehives in the hope that the hive will be incapacitated and opened for them. Dating to the Renaissance, birdsong was the inspiration for some magical engineered languages, in particular musical languages. Whistled languages based on spoken natural languages are also sometimes referred to as the language of the birds. Some language games are also referred to as the language of birds, such as in the Oromo and Amharic languages of Ethiopia.[4] In Norse mythology, the power to understand the language of the birds was a sign of great wisdom. The god Odin had two ravens, called Hugin and Munin, who flew around the world and told Odin what happened among mortal men. The legendary king of Sweden Dag the Wise was so wise that he could understand what birds said. He had a tame house sparrow which flew around and brought back news to him. Once, a farmer in Reidgotaland killed Dag's sparrow, which brought on a terrible retribution from the Swedes. In the Rígsþula, Konr was able to understand the speech of birds. When Konr was riding through the forest hunting and snaring birds, a crow spoke to him and suggested he would win more if he stopped hunting mere birds and rode to battle against foemen.
The ability could also be acquired by tasting dragon blood. According to the Poetic Edda and the Völsunga saga, Sigurd accidentally tasted dragon blood while roasting the heart of Fafnir. This gave him the ability to understand the language of birds, and his life was saved as the birds were discussing Regin's plans to kill Sigurd. Through the same ability Áslaug, Sigurd's daughter, found out about the betrothal of her husband Ragnar Lodbrok to another woman. The 11th-century Ramsund carving in Sweden depicts how Sigurd learnt the language of birds, as told in the Poetic Edda and the Völsunga saga. In an eddic poem loosely connected with the Sigurd tradition, Helgakviða Hjörvarðssonar, the reason why a man named Atli once had the ability is not explained. Atli's lord's son Helgi would marry what was presumably Sigurd's aunt, the valkyrie Sváfa. According to Apollonius Rhodius, the figurehead of Jason's ship, the Argo, was built of oak from the sacred grove at Dodona and could speak the language of birds. Tiresias was also said to have been given the ability to understand the language of the birds by Athena. The language of birds in Greek mythology may be attained by magical means. Democritus, Anaximander, Apollonius of Tyana, Melampus, and Aesop were all said to have understood the birds. The "birds" are also mentioned in Homer's Odyssey: "[...] although I am no prophet really, and I do not know much about the meaning of birds. I tell you he will not long be absent from his dear native land, not if chains of iron hold him fast.
He will find a way to get back, for he is never at a loss."[5] In the Quran, Suleiman (Solomon) and David are said to have been taught the language of the birds.[6] Within Sufism, the language of birds is a mystical divine language. The Conference of the Birds is a mystical poem of 4647 verses by the 12th-century Persian poet Attar of Nishapur.[7] In the Jerusalem Talmud, Solomon's proverbial wisdom was due to his being granted understanding of the language of birds by God.[8] The concept is also known from many folk tales (including Welsh, Russian, German, Estonian, Greek and Romany ones), where usually the protagonist is granted the gift of understanding the language of the birds either by some magical transformation or as a boon by the king of birds. The birds then inform or warn the hero about some danger or hidden treasure. According to the Aarne–Thompson–Uther Index, the understanding of the language of birds can appear in several tale types. In Kabbalah, Renaissance magic, and alchemy, the language of the birds was considered a secret and perfect language and the key to perfect knowledge, sometimes also called the langue verte, or green language.[9][10] The Elizabethan English occultist John Dee likened the magical Enochian language he received from communications with angels to the traditional notion of a language of birds.[citation needed] Compare also the rather comical and satirical Birds of Aristophanes, Chaucer's Parliament of Fowls, and William Baldwin's Beware the Cat. In medieval France, the language of the birds (la langue des oiseaux) was a secret language of the Troubadours, connected with the Tarot, allegedly based on puns and symbolism drawn from homophony, e.g. an inn called au lion d'or ("the Golden Lion") is allegedly "code" for au lit on dort ("in the bed one sleeps").[11] René Guénon has written an article about the symbolism of the language of the birds.[12]
https://en.wikipedia.org/wiki/Language_of_the_birds
Solresol (solfège: Sol-Re-Sol), originally called Langue universelle and then Langue musicale universelle, is a musical constructed language devised by François Sudre, beginning in 1817. His major book on it, Langue Musicale Universelle, was published after his death in 1866,[1] though he had already been publicizing it for some years. Solresol enjoyed a brief spell of popularity, reaching its pinnacle with Boleslas Gajewski's 1902 publication of Grammaire du Solresol. Today, there exist small communities of Solresol enthusiasts scattered across the world.[2] There are multiple versions of Solresol, each with minor differences. Currently, there are three small variations on the language, each of which mostly alters vocabulary and a small amount of the grammar. Sudre created the language, and his version is thus the original version of Solresol. Vincent Gajewski popularised the language as the president of the Central Committee for the study and advancement of Solresol, founded by Madame Sudre. Boleslas Gajewski, the son of Vincent, published the Grammar of Solresol.[3] This is the most publicised version of Solresol, thanks to Stephen L. Rice's 1997 translation into English,[3][4] with a substantial portion of the vocabulary changed from the original, as well as some of the grammar. One example is the word fasol, defined as "here" in Sudre's dictionary, but as "why?" in Gajewski's. The third is an unofficial version developed over time by the community, dubbed "Modern Solresol".
It uses Sudre's version as a base, with tweaks to the grammar and vocabulary, such as changing the definitions of sisol and sila from "Sir" and "Young man" to an honorifics system inspired by the one used in Japanese; both are gender-neutral titles, one respectful and one affectionate.[5] Gajewski's publication brought various additions that do not conflict with the original version of the language, such as new methods of communication, including a set of symbols, the use of the seven colours of the rainbow, the use of tonic sol-fa to sign the language, and more.[3]: 16 Solresol can be communicated using any seven distinct items, with a maximum of five per word. The main method of communication is the seven solfège syllables (a form of solmization), which may be accented, lengthened or repeated. The simplest way to use these syllables is to speak them as if they were regular syllables. Because Solresol predates the IPA, there are no specific pronunciation rules beyond the standard readings of the solfège. Since each syllable is fairly distinct, the syllables may be pronounced in almost any way the reader prefers. Although the seventh note is nowadays more commonly pronounced "Ti" in many countries, "Si" is still generally preferred within the Solresol community.[citation needed] Sudre outlined a way of transcribing the phonetics of French (and thus of many other languages) into Solresol, primarily used for proper nouns.[1]: 32 Using common pronunciations as given by the likes of Wiktionary, it is possible to reconstruct a table of sounds using the modern IPA. Due to the paucity of syllables, it is necessary to leave a brief pause between words so that each word remains clearly separate.
As noted by Boleslas Gajewski, "one should take great care to pause after every word; this slight pause is necessary to separate the words, so that the listener does not become confused".[4]: Reversed meanings

In Solresol morphology, each word is divided into categories of either meaning or function, where longer words are generally more specific. Words are differentiated by three main characteristics: the initial syllable, word length, and whether the word has a pair of repeated syllables. Words of one or two syllables are used for pronouns and common particles, and those with repeated syllables are tenses. Words of three syllables are devoted to words used frequently (at the time of Solresol's creation). The ones which include repeating syllables are reserved for "numbers, the months of the year, the days of the week, and temperature [weather conditions]", e.g. redodo "one", remimi "two" (according to Gajewski). Words of four syllables fall into various themed categories. For example, words beginning with 'sol' which include no repeating syllables have meanings related to the arts or sciences (e.g. soldoredo, "art"; solmiredo, "acoustic").[1]: 22.V However, if words of four syllables have a pair of repeated syllables, their meanings relate to sickness or medicine (e.g. solsolredo, "migraine"; solreresol, "smallpox").[1]: 23.VI

More specifically, the classes without repeating syllables are:

1. 'do': man, his body and spirit, intellectual faculties, qualities and nourishment
2. 're': clothing, the house, housekeeping and the family
3. 'mi': man's actions and his flaws
4. 'fa': the countryside, travel, war, the sea
5. 'sol': fine arts and sciences
6. 'la': industry and commerce
7. 'si': the city, government and administration

With repeating syllables, the same syllables yield:

1. 'do': religion
2. 're': construction and various trades
3. 'mi': prepositions, adverbial phrases and isolated adverbs
4. 'fa': sickness
5. 'sol': sickness (cont.)
6. 'la': industry and commerce (as in the non-repeating type)
7. 'si': justice, the magistracy, and the courts

Finally, combinations of five syllables designate animals, plants and minerals. By default, all animate nouns and pronouns imply that they are of male sex. To differentiate the female sex, a bar, hyphen or macron is added to the final syllable of the corresponding article or the word itself. In speech, this is indicated by repeating the vowel of the syllable, with a glottal stop separating the repeated vowel from the rest of the word.[1]: 24 However, in modern translations, pronouns do not change depending on gender; instead, they are simply translated into English as the neutral pronouns "it" and "they". A unique feature of Solresol is that meanings can be inverted by reversing the syllables in words. For instance fala means good or tasty, and lafa means bad. Interruptions in the logical order of words in each category are usually caused by these reversible words.[1]: 31.XXI However, not all words are reversible in this sense: dorefare means neck, and refaredo means wardrobe, which are obviously not opposites. The following table shows the words of up to two syllables from Gajewski's dictionary: The definite article has different forms for the nominative, genitive and dative cases, or, in other words, for "the", "to the", and "of the": 'la', 'fa' and 'la si', respectively.[1]: 23-24.VII-VIII Apart from stress and length, Solresol words are not inflected. To keep sentences clear, especially with the possibility of information loss while communicating, certain parts of speech follow a strict word order. To make a word plural, an acute accent is added above the last syllable, which in speech is pronounced by lengthening the last letter of said syllable.[1]: 24.IX This only affects the first word in a noun phrase; that is, it only affects a noun when the noun is alone, as above.
If the word is accompanied by a grammatical particle (la, fa or lasi), the particle will take the gender and/or number marking instead: Parts of speech (as well as more specific definitions for certain words) are derived from verbs by placing acircumflexabove one of the syllables in writing, and by pronouncing said syllable withrinforzando(sudden emphasis orcrescendo). With the accent placed on the first syllable, the word becomes a noun. In four-syllable words, accentuating the second syllable creates an agent noun. Thepenultimatesyllable produces an adjective, and the last creates an adverb.[1]: 25.XIFor example, On computers using keyboard layouts without the circumflex accent, the syllable may be printed either in capital letters or with acaretplaced between the letters of a syllable or after a syllable. Due to the grammar and word order of Solresol, distinguishing parts of speech is not usually required to understand the sentence. The varioustense-and-moodparticles are the double syllables, as given in the vocabulary above. In addition, according to Gajewski, passive verbs are formed withfaremibetween this particle and the verb. The subjunctive is formed withmirebefore the pronoun. The negativedoonly appears once in the clause, before the word it negates. The wordfasibefore a noun or adjective isaugmentative; after it issuperlative.Sifais the opposite (diminutive):[1]: 21.III Questions in Solresol are not given much attention in the original documentation, nor do they have many examples. Sudre's publication includes three examples of interrogative sentences:[1]: 127 To make this an affirmative statement, the personal pronoun is added afterwards: Gajewski instead places the subject of the sentence after the verb rather than before it, a construction common in European languages.
Some examples are:[4]: Interrogation and Negation In all versions of the language, the four-syllable, repeated-"Mi" section of the dictionary includes some common questions, such as:[1]: 109.123 Each "note" of Solresol is represented as a symbol; for example, "Do" is a circle. Words of Solresol are formed by connecting the symbols in the order they appear in the word. Double notes are represented by crossing the symbol. Using thetonic sol-fasystem byJohn Curwen, Solresol can also besigned. Another way of using Solresol is calledses, and was developed byGeorge Boeree.[citation needed]The notes are given a representative consonant and vowel (or diphthong). The most basic words use the vowel alone; all others use a more complex syllable structure. In this way, one can write or pronounce words such as this one: Because the plural and feminine forms of words in Solresol are indicated by stress or length of sounds, ses usespau(some) orfai(many) to indicate the plural, andmu(well) to indicate the feminine when necessary. AnISO 639-3language code was requested on 28 July 2017,[6]but was rejected on 1 February 2018.[7] Solresol has been assigned the codesqsoandart-x-solresolin theConLang Code Registry.[8] The seven basic symbols have been proposed to be registered in theConScript Unicode Registry.[9] Article 1 of theUniversal Declaration of Human Rightsin Solresol: Article 1 of theUniversal Declaration of Human Rightsin ses: Article 1 of theUniversal Declaration of Human Rightsin English:
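The syllable-reversal rule described above, by which fala "good" becomes lafa "bad", is mechanical enough to sketch in a few lines of Python. This is a hypothetical illustration: the helper names are invented, and only example words cited in this article are used.

```python
# Sketch of Solresol's syllable-reversal rule (fala "good" -> lafa "bad").
# Helper names are hypothetical; only words cited in this article are used.

def split_syllables(word):
    """Greedily split a Solresol word into its solfege syllables."""
    out, i = [], 0
    while i < len(word):
        # Try "sol" before "si" so the three-letter syllable wins.
        for syl in ("sol", "do", "re", "mi", "fa", "la", "si"):
            if word.startswith(syl, i):
                out.append(syl)
                i += len(syl)
                break
        else:
            raise ValueError(f"not a Solresol word: {word!r}")
    return out

def reverse_word(word):
    """Reverse the syllable order, which often yields the opposite meaning."""
    return "".join(reversed(split_syllables(word)))

print(reverse_word("fala"))      # -> lafa ("good" -> "bad")
print(reverse_word("dorefare"))  # -> refaredo (an exception: "neck" vs "wardrobe")
```

As the article notes, not every pair produced this way consists of true opposites, so the sketch only mechanises the spelling rule, not the semantics.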
https://en.wikipedia.org/wiki/Solresol
Kickapoo whistled speechis a means of communication among theKickapoo Traditional Tribe of Texas, aKickapootribe in Texas and Mexico.Whistled speechis a system of whistled communication that allows subjects to transmit and exchange a potentially unlimited set of messages over long distances.[1] Whistled language occurs among the Kickapoo Indian tribe living in theMexicanstate ofCoahuila. It is a substitute for spokenKickapoo, in which thepitchand length of vowels andvowel clustersare represented, while vowel qualities andconsonantsare not.[2]The system ofwhistlingwas employed around 1915 by young members of the Kickapoo tribe, who wanted to be able to communicate without their parents' understanding.[3]To produce whistled speech, users cup their hands together to form a chamber. Next, they blow into the chamber with their lips placed against the knuckles of their thumbs. To alter the pitch of their whistle, the Kickapoo Indians lift their fingers from the back of the chamber.[2]Among the Kickapoo Indian tribe, whistled speech is employed primarily forcourtshippurposes. Young men and women rendezvous using whistled speech each evening as a culturaltradition.[2]The whistling can be heard from dusk to as late as midnight. Messages mostly consist of phrases such as, "I'm thinking of you" and "Come on."[3]
https://en.wikipedia.org/wiki/Kickapoo_whistled_speech
Sweepis a Britishpuppetand television character popular in the United Kingdom, United States, Canada, Australia, Ireland, New Zealand and other countries. Sweep is a grey glove puppet dog with long black ears who joinedThe Sooty Showin 1957, as a friend to fellow puppetSooty.[1]He is a dim-witted dog with a penchant forbonesandsausages.[2][3]Sweep is notable for his method of communication[4]which consists of a loud high-pitched squeak that gains its inflection from normal speech and its rhythm from the syllables in each word. The rest of the cast, namelySooand the presenter, could understand Sweep perfectly, and would (albeit indirectly) translate for the viewer.[5][6]The sound of Sweep's voice was achieved using "something similar to asaxophonereed".[7]Versions of the puppet later sold as toys had an integralsqueakerconnected to an air bulb that was squeezed by hand. Sweep's family first appeared on theSooty Showin an episode called "Sweep's Family". He has a mother and father; a twin brother, Swoop; two cousins, Swipe and Swap[8]and another seven brothers in the litter (all of whom look exactly like him, and wear different coloured collars to tell each other apart). Swipe and Swap are described as Sweep's brothers in theSooty & Co.episode "Sweep's Family" and theSooty Heightsepisode "The Hounds of Music". This article about a television comedy character is astub. You can help Wikipedia byexpanding it. Thispuppet-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Sweep_(puppet)
Clangers(usually referred to asThe Clangers)[2]is a Britishstop-motionanimatedchildren's television series, consisting of short films about a family ofmouse-likecreatures who live on, and inside, a small moon-like planet. They speak only in awhistled language, and eat green soup (supplied by the Soup Dragon) and blue string pudding. The programmes were originally broadcast onBBC1between 1969 and 1972, followed by a special episode which was broadcast in 1974. The series was made bySmallfilms, the company set up byOliver Postgate(who was the show's writer, animator and narrator) andPeter Firmin(who was its modelmaker and illustrator). Firmin designed the characters, and Joan Firmin, his wife, knitted and "dressed" them. The music, often part of the story, was provided byVernon Elliott. A third series, narrated byMonty PythonactorMichael Palin, was broadcast in the UK in June 2015 on theBBC'sCBeebiesTV channel, gaining hugely successful viewing figures, following on from a short special broadcast by the BBC earlier that year. The new programmes are still made using stop-motion animation (instead of thecomputer-generated imagerywhich had replaced the original stop-motion animation in revivals of other children's shows such asFireman Sam,Thomas & FriendsandThe Wombles). Further new series were made in 2017 and 2019.[3] Clangerswon theBritish Academy Children's Award for Pre-School Animationin 2015.[4] The Clangers originated in a series of children's books developed from anotherSmallfilmsproduction,Noggin the Nog. Publishers Kay and Ward created a series of books based on theNoggin the Nogtelevision episodes, which was subsequently expanded into a series calledNoggin First Reader, aimed at teaching children to read. In one of these, calledNoggin and the Moonmouse, published in 1967, a newhorse-troughwas put up in the middle of the town in the North-Lands. 
Aspacecrafthurtled down and splash-landed in it: the top unscrewed, and out came a largish, mouse-like creature in aduffel coat, who wanted fuel for his spaceship. He showed Nooka and the children that what he needed was vinegar and soap-flakes, so they filled up the fuel tank of the little spherical ship, which then "took off in a dreadful cloud smelling of vinegar and soap-flakes, covering the town with bubbles".[5] In 1969 (the year of NASA's first crewed landing on the Moon), theBBCaskedSmallfilmsto produce a new series forcolour television, but without specifying a storyline. Postgate concluded that asspace explorationwas topical the new series should take place in space (and, inspired by the real Moon landing, Peter Firmin designed a set which strongly resembled the Moon). Postgate adapted the Moonmouse from the 1967 story by simply removing its tail ("because it kept getting into thesoup").[5]Hence the Clangers looked similar to mice (and, from their pink colour, pigs). They wore clothes reminiscent of Roman armour, "against the space debris that kept falling onto the planet, lost from other places, such as television sets and bits of an Iron Chicken",[5]and they spoke inwhistled language. The Clangerswas described by Postgate as a family in space. They were small creatures living in peace and harmony on – and inside – a small, hollow planet, far, far away: nourished by Blue String Pudding, and by Green Soup harvested from the planet's volcanic soup wells by the Soup Dragon. The word "Clanger" is said to derive from the sound made by opening the metal cover of one of the creatures' crater-like burrows, each of which was covered with an old metal dustbin lid, to protect againstmeteoriteimpacts (and space debris). In each episode there would be some problem to solve, typically concerning something invented or discovered, or some new visitor to meet.
Music Trees, with note-shaped fruit, grew on the planet's surface, and music would often be an integral feature in the simple but amusing plots. In theFishingepisode, one of the Cheese Trees provided a cylindrical five-linestafffor notes taken from the Music Trees. Postgate provided the narration, for the most part in a soft, melodic voice, describing and accounting for the curious antics of the little blue planet's knitted pink inhabitants, and providing a "translation", as it were, for much of their whistled dialogue. Postgate claimed that in reality when the Clangers were whistling, they were "swearing their little heads off".[6] The first of the 26 episodes (aired as two series of 13 programmes each) was broadcast onBBC1from 16 November 1969. The last edition of the second series was transmitted on 10 November 1972. However, there was also one final programme, a seven-minute election special entitledVote for Froglet, broadcast on 10 October 1974 (the day of the General Election).Oliver Postgatesaid in a 2005 interview that he was not sure whether the 1974 special still existed,[5]and it has been referred to as a "missing episode".[7]In fact, the whole episode is available from the British Film Institute.[8] The original Mother Clanger puppet was stolen in 1972.[9]Today, Major Clanger and the second Mother Clanger are on display at theRupert Bear Museum.[10] The Clangers grew in size between the first and last episodes, to allow Firmin to use anAction Manmodel figure in the episode "The Rock Collector".[5] BBC'sCBeebieschannel and the American pre-school channel Sprout produced a new series for broadcasting in their 2015 schedules,[11]withMichael Palinnarrating in place of the lateOliver Postgate.[12]The American pre-school channelSproutwas a major funder and co-producer, having commissioned the series in tandem with theBBC; its American transmissions were narrated byWilliam Shatner.[13] In November 2015,The Clangerswon the Best Pre-school Animation award at the BAFTAs.[14] The principal
characters are the Clangers themselves, the females wearing tabards and the males brass armour: Other characters appeared in only one or two episodes each: One of the most noted aspects was the use ofsound effects, with a score composed byVernon Elliottunder instructions from Postgate. Although the episodes were scripted, most of the music used in the two series was written in translation by Postgate in the form of "musical sketches" or graphs that he drew for Elliott, who converted the drawings into a musical score. The music was then recorded by the two, along with other musicians – dubbed theClangers Ensemble– in a village hall, where they would often leave the windows open, leading to the sounds of birds outside being heard on some recordings. Much of the score was performed on Elliott's bassoon, and also included harp, clarinet, glockenspiel and bells. The distinctive whistles made by the Clangers, performed onswanee whistles, have become as identifiable as the characters themselves, much imitated by viewers. The series creators have said that the Clangers, living invacuum, did not communicate by sound, but rather by a type ofnuclear magnetic resonance, which was translated to audible whistles for the human audience. These whistles followed the rhythm and intonation of a script in English. The action was also narrated by a voice-over from Postgate. However, when the series was shown without narration to a group of overseas students, many of them felt that the Clangers were speaking their particular language. Postgate recounted: "When the BBC got the script, [they] rang me up and said "At the beginning of episode three, where the doors get stuck, Major Clanger says 'sod it, the bloody thing’s stuck again'. Well, darling, you can't say that on children's television, you know, I mean you just can't". I said "It's not going to be said, it's going to be whistled", but [they] just said "But people will know!" 
I said no, that if they had nice minds, they'd think "Oh dear, the silly thing's not working properly". If you watch the episode, the one where the rocket goes up and shoots down the Iron Chicken, Major Clanger kicks the door to make it work and his first words are "Sod it, the bloody thing's stuck again". Years later, when the merchandising took off, the Golden Bear company wanted a Clanger and a Clanger phrase for it to make when you squeezed it, they got "Sod it, the bloody thing's stuck again"!"[5] The series 1 episode "The Visitor" features two brief extracts from the song "No Smokes" (1967) by the GlasgowpsychedelicbandOne in a Million.[16] John Du Prez, who wrote some of the music forMonty Python(another show in which Michael Palin appeared), composed the score for the 2015 series.[17] The first series was transmitted on BBC1 at 5:55pm, except for the episode "Chicken" which went out at 5:50pm because there was aChildren in Needappeal at 6:00pm. The second series episodes were also transmitted weekly on BBC1, but in a wide variety of differing timeslots. Episodes 1 and 2 were seen at 4:50pm; episodes 3, 5 and 6 at 5:05pm; episodes 4 and 8 at 5:00pm; episode 7 at 4:40pm; episode 9 at 5:30pm; and episodes 10, 11, 12 and 13 (which followed episode 9 after a gap of more than a year) at 4:00pm. The first of these was an election special, produced in 1974, entitled "Vote for Froglet". Inspired by what Postgate referred to as the "Winter of Discontent" (a phrase from Shakespeare's playRichard III, usually employed to refer to thewinter of 1978–79, but Postgate was referring to the miners' strike in the winter of 1973–74), and partly by his recollections of post-war Germany,[5]it was broadcast on the night of the October 1974 General Election.[18]The narrator explains the democratic process, and demonstrates it by asking the Clangers to vote between the Soup Dragon and a Froglet.
The Soup Dragon wins the election on a policy of "No Soup for Froglets", but the Clangers are dissatisfied with the result.[5]This special was believed to be a lost episode for many years,[5]but it was released in full for free by the British Film Institute to coincide with the2017 UK General Election.[19] Episodes 1–26 were first broadcast at 5:30pm, while episodes 27–52 were at 6:00pm on CBeebies. Following the March 2015 special, a full series was commissioned for the summer of that year. The series was narrated by Michael Palin, and co-produced by Smallfilms with the involvement of Peter Firmin and Oliver Postgate's son, Dan. The series was directed by Chris Tichborne and Mole Hill,[30]with music composed by John Du Prez. 52 11-minute episodes were commissioned.[1]The voices of the Iron Chicken, the Soup Dragon, and the Baby Soup Dragon were provided by Dan Postgate. The first episode of the new series aired on 15 June 2015.[20]It turned out to be a massive hit for CBeebies. The BBC News Entertainment and Arts magazine revealed that 65% of the episode's viewing audience of 484,000 were adults, and that it was CBeebies' most watched programme of 2015 up to that date. The rating was more than double the previous record, set that year by episodes ofAlphablocks,Numberjacks,Waybuloo,Fimbles,Charlie and Lola,Teletubbies,The Lingo ShowandThe Octonauts, as well as other CBeebies favourites since the station's launch in 2002, although an episode ofNumberjackspeaked at over 1 million back in 2009. The same year,William Shatnerwas chosen to be the American narrator for the series airing on the cable networkSprout. A second series of the revival, and the fourth series overall, was released on 11 September 2017 with 26 more episodes.[31] A third series of the revival, and the fifth series overall, was broadcast on CBeebies on 17 July 2019 with another 26 episodes.
Although not quite as popular asBagpuss(which in 1999 was voted in a British television poll the best children's television programme ever made), since the death of Postgate in December 2008 interest has been revived in his work, which is considered to have had a notable influence on British culture throughout the 1960s, 1970s and 1980s. In 2007, Postgate and Firmin were jointly presented with the Action for Children's Arts J. M. Barrie Award "for a lifetime's achievement in delighting children".[32] The Soup Dragons, aScottishalternative rockbandof the late 1980s and early 1990s, took their name from the Clangers character.[33] In the 1972Doctor Whoserial "The Sea Devils",The Masteris seen watching the episode "The Rock Collector".[34]He states that they are fascinating creatures and even mimics their language. He is told that they are just television characters. The Master rolls his eyes. A Clanger (as a glove-puppet rather than a stop-motion puppet) appears as a member of the "Puppet Government" inThe GoodiesTV episode "The Goodies Rule – O.K.?". From its start until its discontinuation, the UK'sNick Jr. Classicsblock airedClangersepisodes specifically for parents who remembered the show.[35] Tiny Clanger (also as a glove-puppet) appeared onSprout's Sunny Side Up Showin honour of the U.S. premiere ofClangers. The series was not widely broadcast outside the UK in the 1970s, mainly because it did not require additional money from sales abroad to finance its production.[1]However, theNorwegian Broadcasting Corporationshowed the series in 1970 and 1982, entitledRomlingane. It was narrated byIngebrigt Davik, a popular author of children's books. It was shown on Swedish television in the late 1960s and 1970s, entitledRymdlarna. The first 13 episodes were also shown onCzechoslovak Televisionin August 1972, entitledRámusíci[36]as a part of the children's evening program slotVečerníček.
The revived version in 2015 has received funding fromSprout, a subsidiary ofNBCUniversal, and has been pre-sold to other foreign broadcasters including theAustralian Broadcasting Corporation.[1]The American transmissions are narrated by William Shatner.[13] As of 2018, it is also broadcast on the Belgian channelKetnet. As of 2023, the 2015 UK version is available on the American video-on-demand siteBentkey. In 2001, a selection of the music and sound effects was compiled byJonny Trunkfrom 128 musical cues held by Postgate, who contributed act one, "The Iron Chicken and the Music Trees", ofA Clangers Opera, with alibrettothat he had compiled. In the early 1990s, threeVHScassettes of theClangerswere released byBBC EnterprisesLtd. Later, another six cassettes were released by Universal Pictures. A number ofDVDshave also been released byUniversal Pictures(original series) andSignature Entertainment(revived series).
https://en.wikipedia.org/wiki/Clangers
Sibilants(fromLatin:sibilans'hissing') arefricativeconsonants of higheramplitudeandpitch, made bydirectinga stream of air with the tongue towards theteeth.[1]Examples of sibilants are the consonants at the beginning of theEnglishwordssip,zip,ship, andgenre. The symbols in theInternational Phonetic Alphabetused to denote the sibilant sounds in these words are, respectively,[s][z][ʃ][ʒ]. Sibilants have a characteristically intense sound, which accounts for theirparalinguisticuse in getting one's attention (e.g. calling someone using "psst!" or quieting someone using "shhhh!"). In thealveolarhissingsibilants[s]and[z], the back of the tongue forms a narrow channel (isgrooved) to focus the stream of air more intensely, resulting in a high pitch. With thehushingsibilants (occasionally termedshibilants), such as English[ʃ],[tʃ],[ʒ], and[dʒ], the tongue is flatter, and the resulting pitch lower.[2][3] A broader category isstridents, which also includes non-sibilant fricatives such asuvulars. Sibilants are a higher-pitched subset of the stridents. The English sibilants are: while the English stridents are: as/f/and/v/are stridents but not sibilants because they are lower in pitch.[4][5] Some linguists use the terms "stridents" and "sibilants" interchangeably to refer to the greateramplitudeandpitchcompared to other fricatives.[6] "Stridency" refers to theperceptualintensityof the sound of a sibilant consonant, while the termobstacle fricativeoraffricaterefers to the critical role of the teeth in producing the sound as an obstacle to the airstream. Non-sibilant fricatives and affricates produce their characteristic sound directly with the tongue or lips etc. and the place of contact in the mouth, without secondary involvement of the teeth.[citation needed] The characteristic intensity of sibilants means that small variations in tongue shape and position are perceivable, with the result that there are many sibilant types that contrast in various languages.
Sibilants are louder than their non-sibilant counterparts, and most of their acoustic energy occurs at higher frequencies than non-sibilant fricatives—usually around 8,000 Hz.[7] All sibilants arecoronal consonants(made with the tip or front part of the tongue). However, there is a great deal of variety among sibilants as to tongue shape, point of contact on the tongue, and point of contact on the upper side of the mouth. The following variables affect sibilant sound quality, and, along with their possible values, are ordered from sharpest (highest-pitched) to dullest (lowest-pitched): Generally, the values of the different variables co-occur so as to produce an overall sharper or duller sound. For example, a laminal denti-alveolar grooved sibilant occurs inPolish, and a subapical palatal retroflex sibilant occurs inToda. The main distinction is the shape of the tongue. Most sibilants have agrooverunning down the centerline of the tongue that helps focus the airstream, but it is not known how widespread this is. In addition, the following tongue shapes are described, from sharpest and highest-pitched to dullest and lowest-pitched: The latter three post-alveolar types of sounds are often known as "hushing" sounds because of their quality, as opposed to the "hissing" alveolar sounds. The alveolar sounds in fact occur in several varieties, in addition to the normal sound of Englishs: Speaking non-technically, the retroflex consonant[ʂ]sounds somewhat like a mixture between the regular English[ʃ]of "ship" and a strong American "r"; while the alveolo-palatal consonant[ɕ]sounds somewhat like a mixture of English[ʃ]of "ship" and the[sj]in the middle of "miss you". Sibilants can be made at anycoronalarticulation[citation needed], i.e. the tongue can contact the upper side of the mouth anywhere from the upper teeth (dental) to thehard palate(palatal), with the in-between articulations beingdenti-alveolar,alveolarandpostalveolar. 
The tongue can contact the upper side of the mouth with the very tip of the tongue (anapicalarticulation, e.g.[ʃ̺]); with the surface just behind the tip, called thebladeof the tongue (alaminalarticulation, e.g.[ʃ̻]); or with the underside of the tip (asubapicalarticulation). Apical and subapical articulations are alwaystongue-up, with the tip of the tongue above the teeth, while laminal articulations can be either tongue-up ortongue-down, with the tip of the tongue behind the lower teeth. This distinction is particularly important forretroflexsibilants, because all three varieties can occur, with noticeably different sound qualities. For tongue-down laminal articulations, an additional distinction can be made depending on where exactly behind the lower teeth the tongue tip is placed. A little way back from the lower teeth is a hollow area (or pit) in the lower surface of the mouth. When the tongue tip rests in this hollow area, there is an empty space below the tongue (asublingual cavity), which results in a relatively duller sound. When the tip of the tongue rests against the lower teeth, there is no sublingual cavity, resulting in a sharper sound. Usually, the position of the tip of the tongue correlates with the grooved vs. hushing tongue shape so as to maximize the differences. However, the palato-alveolar sibilants in theNorthwest Caucasian languagessuch asUbykhare an exception. These sounds have the tongue tip resting directly against the lower teeth, which gives the sounds a quality that Catford describes as "hissing-hushing". Ladefoged and Maddieson[1]term this a "closedlaminal postalveolar" articulation, and transcribe them (following Catford) as[ŝ,ẑ], although this is not an IPA notation. The following table shows the types of sibilant fricatives defined in theInternational Phonetic Alphabet: Diacritics can be used for finer detail.
For example, apical and laminal alveolars can be specified as[s̺]vs[s̻]; adental(or more likelydenti-alveolar) sibilant as[s̪]; a palatalized alveolar as[sʲ]; and a generic "retracted sibilant" as[s̠], a transcription frequently used for the sharper-quality types of retroflex consonants (e.g. the laminal "flat" type and the "apico-alveolar" type). There is no diacritic to denote the laminal "closed" articulation of palato-alveolars in theNorthwest Caucasian languages, but they are sometimes provisionally transcribed as[ŝẑ]. The attested possibilities, with exemplar languages, are as follows. Note that the IPA diacritics are simplified; some articulations would require two diacritics to be fully specified, but only one is used in order to keep the results legible without the need forOpenTypeIPA fonts. Also,Ladefogedhas resurrected an obsolete IPA symbol, the under dot, to indicateapical postalveolar(normally included in the category ofretroflex consonants), and that notation is used here. (Note that the notations̠,ṣis sometimes reversed; either may also be called 'retroflex' and writtenʂ.) ^1⟨ŝẑ⟩is an ad-hoc transcription. The old IPA letters⟨ʆʓ⟩are also available. ^2These sounds are usually just transcribed⟨ʂʐ⟩. Apical postalveolar and subapical palatal sibilants do not contrast in any language, but if necessary, apical postalveolars can be transcribed with an apical diacritic, as⟨s̠̺z̠̺⟩or⟨ʂ̺ʐ̺⟩. Ladefoged resurrects the old retroflex sub-dot for apical retroflexes,⟨ṣẓ⟩. Also seen in the literature on e.g. Hindi and Norwegian is⟨ᶘᶚ⟩– the domed articulation of[ʃʒ]precludes a subapical realization. Whistled sibilants occur phonemically in several southern Bantu languages, the best known beingShona. However, they also occur in speech pathology and may be caused by dental prostheses or orthodontics.
The whistled sibilants of Shona have been variously described—aslabializedbut not velarized, as retroflex, etc., but none of these features are required for the sounds.[10]Using theExtended IPA, Shonasvandzvmay be transcribed⟨s͎⟩and⟨z͎⟩. Other transcriptions seen include purely labialized⟨s̫⟩and⟨z̫⟩(Ladefoged and Maddieson 1996) and labially co-articulated⟨sᶲ⟩and⟨zᵝ⟩(or⟨s͡ɸ⟩and⟨z͜β⟩). In the otherwise IPA transcription of Shona in Doke (1967), the whistled sibilants are transcribed with the non-IPA letters⟨ȿɀ⟩and⟨tȿdɀ⟩. Besides Shona, whistled sibilants have been reported as phonemes inKalanga,Tsonga,Changana,Tswa—all of which are Southern African languages—andTabasaran. The articulation of whistled sibilants may differ between languages. In Shona, the lips arecompressedthroughout, and the sibilant may be followed by normal labialization upon release. (That is, there is a contrast amongs, sw, ȿ, ȿw.) In Tsonga, the whistling effect is weak; the lips are narrowed but also the tongue isretroflex. Tswa may be similar. In Changana, the lips are rounded (protruded), but so is /s/ in the sequence /usu/, so there is evidently some distinct phonetic phenomenon occurring here that has yet to be formally identified and described.[11] Not including differences inmanner of articulationorsecondary articulation, some languages have as many as four different types of sibilants. For example,Northern QiangandSouthern Qianghave a four-way distinction among sibilant affricates/ts//tʂ//tʃ//tɕ/, with one for each of the four tongue shapes.[citation needed]Todaalso has a four-way sibilant distinction, with one alveolar, one palato-alveolar, and two retroflex (apical postalveolar and subapical palatal).[citation needed] The now-extinctUbykh languagewas particularly complex, with a total of 27 sibilant consonants. 
Not only were all four tongue shapes represented (with the palato-alveolar appearing in the laminal "closed" variation), but both the palato-alveolars and the alveolo-palatals could additionally appear labialized. In addition, there was a five-way manner distinction among voiceless and voiced fricatives, voiceless and voiced affricates, and ejective affricates. (The three labialized palato-alveolar affricates were missing, which is why the total was 27, not 30.)[citation needed] The Bzyp dialect of the related Abkhaz language also has a similar inventory.[citation needed] Some languages have four types when palatalization is considered. Polish is one example, with both palatalized and non-palatalized laminal denti-alveolars, laminal postalveolar (or "flat retroflex"), and alveolo-palatal ([s̪ z̪] [s̪ʲ z̪ʲ] [s̠ z̠] [ɕ ʑ]).[citation needed] Russian has the same surface contrasts, but the alveolo-palatals are arguably not phonemic: they occur only geminate, and the retroflex consonants never occur geminate, which suggests that both are allophones of the same phoneme.[citation needed] Somewhat more common are languages with three sibilant types, including one hissing and two hushing. As with Polish and Russian, the two hushing types are usually postalveolar and alveolo-palatal, since these are the two most distinct from each other. Mandarin Chinese is an example of such a language.[citation needed] However, other possibilities exist. Serbo-Croatian has alveolar, flat postalveolar, and alveolo-palatal affricates, whereas Basque has palato-alveolar and laminal and apical alveolar (apico-alveolar) fricatives and affricates (late Medieval peninsular Spanish and Portuguese had the same distinctions among fricatives). Many languages, such as English or Arabic, have two sibilant types, one hissing and one hushing. A wide variety of languages across the world have this pattern; perhaps most common, as in English and Arabic, is the pairing of alveolar and palato-alveolar sibilants. 
Modern northern peninsular Spanish has a single apico-alveolar sibilant fricative [s̠], as well as a single palato-alveolar sibilant affricate [tʃ]. However, there are also languages with alveolar and apical retroflex sibilants (such as Standard Vietnamese) and with alveolar and alveolo-palatal postalveolars (e.g. alveolar and laminal palatalized [ʃ ʒ tʃ dʒ], i.e. [ʃʲ ʒʲ tʃʲ dʒʲ], in Catalan and Brazilian Portuguese, the latter probably through Amerindian influence,[12] and alveolar and dorsal, i.e. [ɕ ʑ tɕ dʑ] proper, in Japanese).[13] Only a few languages with sibilants lack the hissing type. Middle Vietnamese is normally reconstructed with two sibilant fricatives, both hushing (one retroflex, one alveolo-palatal). Some languages have only a single hushing sibilant and no hissing sibilant. That occurs in southern Peninsular Spanish dialects of the "ceceo" type, which have replaced the former hissing fricative with [θ], leaving only [tʃ]. Languages with no sibilants are fairly rare. Most have no fricatives at all, or only the fricative /h/. Examples include most Australian languages, Rotokas, and what is generally reconstructed for Proto-Bantu. Languages with fricatives but no sibilants do occur, however, such as Ukue in Nigeria, which has only the fricatives /f, v, h/. Also, almost all Eastern Polynesian languages have no sibilants but do have the fricatives /v/ and/or /f/: Māori, Hawaiian, Tahitian, Rapa Nui, most Cook Islands Māori dialects, Marquesan, and Tuamotuan. Tamil has the sibilant /ʂ/ and the fricative /f/ only in loanwords, and they are frequently replaced by native sounds. The sibilants [s, ɕ] exist as allophones of /t͡ɕ/, and the fricative [h] as an allophone of /k/. Authors including Chomsky and Halle group [f] and [v] as sibilants. However, they do not have the grooved articulation and high frequencies of other sibilants, and most phoneticians[1] continue to group them together with bilabial [ɸ], [β] and (inter)dental [θ], [ð] as non-sibilant anterior fricatives. For a grouping of sibilants and [f, v], the term strident is more common. 
Some researchers judge [f] to be non-strident in English, based on measurements of its comparative amplitude, but to be strident in other languages (for example, in the African language Ewe, where it contrasts with non-strident [ɸ]). The nature of sibilants as so-called 'obstacle fricatives' is complicated – there is a continuum of possibilities relating to the angle at which the jet of air may strike an obstacle. The grooving often considered necessary for classification as a sibilant has been observed in ultrasound studies of the tongue for the supposedly non-sibilant voiceless alveolar fricative [θ̠] of English.[14]
https://en.wikipedia.org/wiki/Whistled_fricative
This article is a list of things named afterAndrey Markov, an influential Russian mathematician.
https://en.wikipedia.org/wiki/List_of_things_named_after_Andrey_Markov
In mathematical analysis, the Chebyshev–Markov–Stieltjes inequalities are inequalities related to the problem of moments that were formulated in the 1880s by Pafnuty Chebyshev and proved independently by Andrey Markov and (somewhat later) by Thomas Jan Stieltjes.[1] Informally, they provide sharp bounds on a measure from above and from below in terms of its first moments. Given m0, ..., m2m−1 ∈ R, consider the collection C of measures μ on R such that for k = 0, 1, ..., 2m − 1 (and in particular the integral is defined and finite). Let P0, P1, ..., Pm be the first m + 1 orthogonal polynomials[clarification needed] with respect to μ ∈ C, and let ξ1, ..., ξm be the zeros of Pm. It is not hard to see that the polynomials P0, P1, ..., Pm−1 and the numbers ξ1, ..., ξm are the same for every μ ∈ C, and are therefore determined uniquely by m0, ..., m2m−1. Denote Theorem. For j = 1, 2, ..., m, and any μ ∈ C,
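As a concrete check, take μ to be the Gaussian measure with density e^(−x²)/√π (an example measure chosen here for illustration, not from the source); its orthogonal polynomials are the Hermite polynomials, and the quadrature weights at the zeros ξ₁ < … < ξ_m bracket the measure of (−∞, ξ_j) exactly as the theorem states:

```python
import math
import numpy as np

# mu = Gaussian measure with density exp(-x^2)/sqrt(pi); its orthogonal
# polynomials are the (physicists') Hermite polynomials.  numpy returns
# their zeros xi_k and the Gauss-Hermite quadrature weights w_k.
m = 6
nodes, weights = np.polynomial.hermite.hermgauss(m)
weights /= math.sqrt(math.pi)          # normalize so that mu(R) = 1

def mu_cdf(x):
    """mu((-inf, x)) for the normalized Gaussian weight exp(-x^2)/sqrt(pi)."""
    return 0.5 * (1.0 + math.erf(x))

# Chebyshev-Markov-Stieltjes bounds: for each zero xi_j (ascending order),
#   sum_{k < j} w_k  <=  mu((-inf, xi_j))  <=  sum_{k <= j} w_k
for j in range(m):
    lower = weights[:j].sum()
    upper = weights[:j + 1].sum()
    assert lower <= mu_cdf(nodes[j]) <= upper
```

The same check works for any moment-determinate measure whose orthogonal polynomials are available; the Gauss quadrature weights play the role of the extremal masses in the theorem.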
https://en.wikipedia.org/wiki/Chebyshev%E2%80%93Markov%E2%80%93Stieltjes_inequalities
Gauss–Markov stochastic processes(named afterCarl Friedrich GaussandAndrey Markov) arestochastic processesthat satisfy the requirements for bothGaussian processesandMarkov processes.[1][2]A stationary Gauss–Markov process is unique[citation needed]up to rescaling; such a process is also known as anOrnstein–Uhlenbeck process. Gauss–Markov processes obeyLangevin equations.[3] Every Gauss–Markov processX(t) possesses the three following properties:[4] Property (3) means that every non-degenerate mean-square continuous Gauss–Markov process can be synthesized from the standard Wiener process (SWP). A stationary Gauss–Markov process withvarianceE(X2(t))=σ2{\displaystyle {\textbf {E}}(X^{2}(t))=\sigma ^{2}}andtime constantβ−1{\displaystyle \beta ^{-1}}has the following properties. There are also some trivial exceptions to all of the above.[clarification needed]
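Property (3) can be illustrated by simulating the stationary Gauss–Markov (Ornstein–Uhlenbeck) process via its exact one-step transition; the function name and parameter choices below are illustrative assumptions, not from the source:

```python
import numpy as np

def simulate_ou(beta, sigma, dt, n_steps, x0=0.0, rng=None):
    """Exact-discretization sample path of a stationary Gauss-Markov
    (Ornstein-Uhlenbeck) process with time constant 1/beta and
    stationary variance sigma**2."""
    rng = np.random.default_rng(rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    decay = np.exp(-beta * dt)                 # one-step mean decay e^{-beta dt}
    noise_sd = sigma * np.sqrt(1.0 - decay**2) # exact transition noise sd
    for t in range(n_steps):
        x[t + 1] = decay * x[t] + noise_sd * rng.standard_normal()
    return x

# For a long path, the empirical variance approaches sigma**2 and the
# lag-1 autocorrelation approaches exp(-beta * dt), as the Gaussian and
# Markov properties predict.
path = simulate_ou(beta=1.0, sigma=2.0, dt=0.1, n_steps=200_000, rng=0)
```

Because the transition kernel here is the exact Gaussian conditional law, the discretization introduces no time-step bias, unlike a naive Euler scheme for the Langevin equation.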
https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_process
In statistics and machine learning, when one wants to infer a random variable from a set of variables, usually a subset is enough, and the other variables are useless. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that it cannot drop any variable without losing information, it is called a Markov boundary. Identifying a Markov blanket or a Markov boundary helps to extract useful features. The terms Markov blanket and Markov boundary were coined by Judea Pearl in 1988.[1] A Markov blanket can be constituted by a set of Markov chains. A Markov blanket of a random variable Y{\displaystyle Y} in a random variable set S={X1,…,Xn}{\displaystyle {\mathcal {S}}=\{X_{1},\ldots ,X_{n}\}} is any subset S1{\displaystyle {\mathcal {S}}_{1}} of S{\displaystyle {\mathcal {S}}}, conditioned on which the other variables are independent of Y{\displaystyle Y}: Y⊥⊥S∖S1∣S1.{\displaystyle Y\perp \!\!\!\perp {\mathcal {S}}\backslash {\mathcal {S}}_{1}\mid {\mathcal {S}}_{1}.} It means that S1{\displaystyle {\mathcal {S}}_{1}} contains at least all the information one needs to infer Y{\displaystyle Y}; the variables in S∖S1{\displaystyle {\mathcal {S}}\backslash {\mathcal {S}}_{1}} are redundant. In general, a given Markov blanket is not unique. Any set in S{\displaystyle {\mathcal {S}}} that contains a Markov blanket is also a Markov blanket itself. Specifically, S{\displaystyle {\mathcal {S}}} is a Markov blanket of Y{\displaystyle Y} in S{\displaystyle {\mathcal {S}}}. A Markov boundary of Y{\displaystyle Y} in S{\displaystyle {\mathcal {S}}} is a subset S2{\displaystyle {\mathcal {S}}_{2}} of S{\displaystyle {\mathcal {S}}}, such that S2{\displaystyle {\mathcal {S}}_{2}} itself is a Markov blanket of Y{\displaystyle Y}, but any proper subset of S2{\displaystyle {\mathcal {S}}_{2}} is not a Markov blanket of Y{\displaystyle Y}. In other words, a Markov boundary is a minimal Markov blanket. 
The Markov boundary of anodeA{\displaystyle A}in aBayesian networkis the set of nodes composed ofA{\displaystyle A}'s parents,A{\displaystyle A}'s children, andA{\displaystyle A}'s children's other parents. In aMarkov random field, the Markov boundary for a node is the set of its neighboring nodes. In adependency network, the Markov boundary for a node is the set of its parents. The Markov boundary always exists. Under some mild conditions, the Markov boundary is unique. However, for most practical and theoretical scenarios multiple Markov boundaries may provide alternative solutions.[2]When there are multiple Markov boundaries, quantities measuring causal effect could fail.[3]
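The Bayesian-network rule above (parents, children, and children's other parents) can be sketched directly; the helper name and the toy sprinkler network below are illustrative, not from the source:

```python
def markov_boundary(node, parents):
    """Markov boundary of `node` in a Bayesian network given as a dict
    mapping each node to the set of its parents: the union of the node's
    parents, its children, and its children's other parents (co-parents)."""
    children = {v for v, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents.get(node, set()) | children | co_parents

# Classic sprinkler network: Rain -> Sprinkler, Rain -> Wet, Sprinkler -> Wet.
net = {"Rain": set(), "Sprinkler": {"Rain"}, "Wet": {"Sprinkler", "Rain"}}
boundary = markov_boundary("Sprinkler", net)
```

Here the boundary of "Sprinkler" is {"Rain", "Wet"}: "Rain" enters both as a parent and as the co-parent of the child "Wet".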
https://en.wikipedia.org/wiki/Markov_blanket
Markov decision process(MDP), also called astochastic dynamic programor stochastic control problem, is a model forsequential decision makingwhenoutcomesare uncertain.[1] Originating fromoperations researchin the 1950s,[2][3]MDPs have since gained recognition in a variety of fields, includingecology,economics,healthcare,telecommunicationsandreinforcement learning.[4]Reinforcement learning utilizes the MDP framework to model the interaction between a learning agent and its environment. In this framework, the interaction is characterized by states, actions, and rewards. The MDP framework is designed to provide a simplified representation of key elements ofartificial intelligencechallenges. These elements encompass the understanding ofcause and effect, the management of uncertainty and nondeterminism, and the pursuit of explicit goals.[4] The name comes from its connection toMarkov chains, a concept developed by the Russian mathematicianAndrey Markov. The "Markov" in "Markov decision process" refers to the underlying structure ofstate transitionsthat still follow theMarkov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty. A Markov decision process is a 4-tuple(S,A,Pa,Ra){\displaystyle (S,A,P_{a},R_{a})}, where: A policy functionπ{\displaystyle \pi }is a (potentially probabilistic) mapping from state space (S{\displaystyle S}) to action space (A{\displaystyle A}). The goal in a Markov decision process is to find a good "policy" for the decision maker: a functionπ{\displaystyle \pi }that specifies the actionπ(s){\displaystyle \pi (s)}that the decision maker will choose when in states{\displaystyle s}. 
Once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like aMarkov chain(since the action chosen in states{\displaystyle s}is completely determined byπ(s){\displaystyle \pi (s)}). The objective is to choose a policyπ{\displaystyle \pi }that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon: whereγ{\displaystyle \ \gamma \ }is the discount factor satisfying0≤γ≤1{\displaystyle 0\leq \ \gamma \ \leq \ 1}, which is usually close to1{\displaystyle 1}(for example,γ=1/(1+r){\displaystyle \gamma =1/(1+r)}for some discount rater{\displaystyle r}). A lower discount factor motivates the decision maker to favor taking actions early, rather than postpone them indefinitely. Another possible, but strictly related, objective that is commonly used is theH−{\displaystyle H-}step return. This time, instead of using a discount factorγ{\displaystyle \ \gamma \ }, the agent is interested only in the firstH{\displaystyle H}steps of the process, with each reward having the same weight. whereH{\displaystyle \ H\ }is the time horizon. Compared to the previous objective, the latter one is more used inLearning Theory. A policy that maximizes the function above is called anoptimal policyand is usually denotedπ∗{\displaystyle \pi ^{*}}. A particular MDP may have multiple distinct optimal policies. Because of theMarkov property, it can be shown that the optimal policy is a function of the current state, as assumed above. In many cases, it is difficult to represent the transition probability distributions,Pa(s,s′){\displaystyle P_{a}(s,s')}, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. 
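The expected discounted sum above, truncated to a finite reward sequence, can be evaluated with a small helper (the function name is an illustrative assumption):

```python
def discounted_return(rewards, gamma):
    """sum_t gamma**t * r_t for a finite reward sequence; with gamma < 1
    this is the truncation of the infinite-horizon discounted objective."""
    total = 0.0
    for r in reversed(rewards):   # Horner-style evaluation, right to left
        total = r + gamma * total
    return total

# Three unit rewards discounted at gamma = 0.9: 1 + 0.9 + 0.81 = 2.71.
g = discounted_return([1.0, 1.0, 1.0], gamma=0.9)
```

Setting gamma = 1 recovers the undiscounted H-step return mentioned below, where each of the H rewards carries the same weight.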
One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often calledepisodesmay be produced. Another form of simulator is agenerative model, a single step simulator that can generate samples of the next state and reward given any state and action.[5](Note that this is a different meaning from the termgenerative modelin the context of statistical classification.) Inalgorithmsthat are expressed usingpseudocode,G{\displaystyle G}is often used to represent a generative model. For example, the expressions′,r←G(s,a){\displaystyle s',r\gets G(s,a)}might denote the action of sampling from the generative model wheres{\displaystyle s}anda{\displaystyle a}are the current state and action, ands′{\displaystyle s'}andr{\displaystyle r}are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory. These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models throughregression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, thedynamic programmingalgorithms described in the next section require an explicit model, andMonte Carlo tree searchrequires a generative model (or an episodic simulator that can be copied at any state), whereas mostreinforcement learningalgorithms require only an episodic simulator. An example of MDP is the Pole-Balancing model, which comes from classic control theory. 
In this example, we have Solutions for MDPs with finite state and action spaces may be found through a variety of methods such asdynamic programming. The algorithms in this section apply to MDPs with finite state and action spaces and explicitly given transition probabilities and reward functions, but the basic concepts may be extended to handle other problem classes, for example usingfunction approximation. Also, some processes with countably infinite state and action spaces can beexactlyreduced to ones with finite state and action spaces.[6] The standard family of algorithms to calculate optimal policies for finite state and action MDPs requires storage for two arrays indexed by state:valueV{\displaystyle V}, which contains real values, andpolicyπ{\displaystyle \pi }, which contains actions. At the end of the algorithm,π{\displaystyle \pi }will contain the solution andV(s){\displaystyle V(s)}will contain the discounted sum of the rewards to be earned (on average) by following that solution from states{\displaystyle s}. The algorithm has two steps, (1) a value update and (2) a policy update, which are repeated in some order for all the states until no further changes take place. Both recursively update a new estimation of the optimal policy and state value using an older estimation of those values. Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.[7] In value iteration (Bellman 1957), which is also calledbackward induction, theπ{\displaystyle \pi }function is not used; instead, the value ofπ(s){\displaystyle \pi (s)}is calculated withinV(s){\displaystyle V(s)}whenever it is needed. 
Substituting the calculation of π(s){\displaystyle \pi (s)} into the calculation of V(s){\displaystyle V(s)} gives the combined step[further explanation needed]: where i{\displaystyle i} is the iteration number. Value iteration starts at i=0{\displaystyle i=0} and V0{\displaystyle V_{0}} as a guess of the value function. It then iterates, repeatedly computing Vi+1{\displaystyle V_{i+1}} for all states s{\displaystyle s}, until V{\displaystyle V} converges with the left-hand side equal to the right-hand side (which is the "Bellman equation" for this problem[clarification needed]). Lloyd Shapley's 1953 paper on stochastic games included as a special case the value iteration method for MDPs,[8] but this was recognized only later on.[9] In policy iteration (Howard 1960), step one is performed once, and then step two is performed once, then both are repeated until the policy converges. Then step one is again performed once, and so on. (Policy iteration was invented by Howard to optimize Sears catalogue mailing, which he had been optimizing using value iteration.[10]) Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations. These equations are merely obtained by making s=s′{\displaystyle s=s'} in the step two equation.[clarification needed] Thus, repeating step two to convergence can be interpreted as solving the linear equations by relaxation. This variant has the advantage that there is a definite stopping condition: when the array π{\displaystyle \pi } does not change in the course of applying step 1 to all states, the algorithm is completed. Policy iteration is usually slower than value iteration for a large number of possible states. In modified policy iteration (van Nunen 1976; Puterman & Shin 1978), step one is performed once, and then step two is repeated several times.[11][12] Then step one is again performed once, and so on. 
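The combined value-iteration step can be sketched for a finite MDP as follows; the two-state example MDP is an illustration invented here, not from the source:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P[a, s, s'] : transition probabilities, R[a, s] : expected reward for
    taking action a in state s.  Returns (optimal values V, greedy policy)."""
    V = np.zeros(R.shape[1])
    while True:
        # Combined step: Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # maximize over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy MDP: action 1 moves state 0 to the rewarding state 1; state 1 absorbs.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],    # action 0: stay put
              [[0.0, 1.0], [0.0, 1.0]]])   # action 1: go to state 1
R = np.array([[0.0, 1.0],                  # rewards for action 0
              [0.0, 1.0]])                 # rewards for action 1
V, policy = value_iteration(P, R)
```

With gamma = 0.9 the absorbing state is worth 1/(1 − 0.9) = 10, state 0 is worth 0.9 × 10 = 9, and the greedy policy takes action 1 in state 0, matching the fixed point of the Bellman equation.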
In this variant, the steps are preferentially applied to states which are in some way important – whether based on the algorithm (there were large changes inV{\displaystyle V}orπ{\displaystyle \pi }around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm). Algorithms for finding optimal policies withtime complexitypolynomial in the size of the problem representation exist for finite MDPs. Thus,decision problemsbased on MDPs are in computationalcomplexity classP.[13]However, due to thecurse of dimensionality, the size of the problem representation is often exponential in the number of state and action variables, limiting exact solution techniques to problems that have a compact representation. In practice, online planning techniques such asMonte Carlo tree searchcan find useful solutions in larger problems, and, in theory, it is possible to construct online planning algorithms that can find an arbitrarily near-optimal policy with no computational complexity dependence on the size of the state space.[14] A Markov decision process is astochastic gamewith only one player. The solution above assumes that the states{\displaystyle s}is known when action is to be taken; otherwiseπ(s){\displaystyle \pi (s)}cannot be calculated. When this assumption is not true, the problem is called a partially observable Markov decision process or POMDP. Constrained Markov decision processes (CMDPS) are extensions to Markov decision process (MDPs). There are three fundamental differences between MDPs and CMDPs.[15] The method of Lagrange multipliers applies to CMDPs. Many Lagrangian-based algorithms have been developed. There are a number of applications for CMDPs. It has recently been used inmotion planningscenarios in robotics.[17] In discrete-time Markov Decision Processes, decisions are made at discrete time intervals. 
However, for continuous-time Markov decision processes, decisions can be made at any time the decision maker chooses. In comparison to discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., system dynamics defined by ordinary differential equations (ODEs). Such applications arise in queueing systems, epidemic processes, and population processes. As in discrete-time Markov decision processes, in continuous-time Markov decision processes the agent aims at finding the optimal policy that maximizes the expected cumulative reward. The only difference from the standard case is that, due to the continuous nature of the time variable, the sum is replaced by an integral: where 0≤γ<1.{\displaystyle 0\leq \gamma <1.} If the state space and action space are finite, we can use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking multiple actions. It is better to take an action only at the time when the system is transitioning from the current state to another state. Under some conditions,[18] if our optimal value function V∗{\displaystyle V^{*}} is independent of state i{\displaystyle i}, we will have the following inequality: If there exists a function h{\displaystyle h}, then V¯∗{\displaystyle {\bar {V}}^{*}} will be the smallest g{\displaystyle g} satisfying the above equation. 
In order to find V¯∗{\displaystyle {\bar {V}}^{*}}, we can use the following linear programming model: y(i,a){\displaystyle y(i,a)} is a feasible solution to the D-LP if y(i,a){\displaystyle y(i,a)} is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution y∗(i,a){\displaystyle y^{*}(i,a)} to the D-LP is said to be an optimal solution if for all feasible solutions y(i,a){\displaystyle y(i,a)} to the D-LP. Once we have found the optimal solution y∗(i,a){\displaystyle y^{*}(i,a)}, we can use it to establish the optimal policies. In continuous-time MDPs, if the state space and action space are continuous, the optimal criterion can be found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem. D(⋅){\displaystyle D(\cdot )} is the terminal reward function, s(t){\displaystyle s(t)} is the system state vector, and a(t){\displaystyle a(t)} is the system control vector we try to find. f(⋅){\displaystyle f(\cdot )} shows how the state vector changes over time. The Hamilton–Jacobi–Bellman equation is as follows: We can solve the equation to find the optimal control a(t){\displaystyle a(t)}, which gives us the optimal value function V∗{\displaystyle V^{*}}. Reinforcement learning is an interdisciplinary area of machine learning and optimal control that has, as its main objective, finding an approximately optimal policy for MDPs where transition probabilities and rewards are unknown.[19] Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities, which are instead needed to perform policy iteration. In this setting, transition probabilities and rewards must be learned from experience, i.e. by letting an agent interact with the MDP for a given number of steps. Both on a theoretical and on a practical level, effort is put into maximizing the sample efficiency, i.e. 
minimizing the number of samples needed to learn a policy whose performance is ε−{\displaystyle \varepsilon -}close to the optimal one (due to the stochastic nature of the process, learning the optimal policy with a finite number of samples is, in general, impossible). For the purpose of this section, it is useful to define a further function, which corresponds to taking the action a{\displaystyle a} and then continuing optimally (or according to whatever policy one currently has): While this function is also unknown, experience during learning is based on (s,a){\displaystyle (s,a)} pairs (together with the outcome s′{\displaystyle s'}; that is, "I was in state s{\displaystyle s} and I tried doing a{\displaystyle a}, and s′{\displaystyle s'} happened"). Thus, one has an array Q{\displaystyle Q} and uses experience to update it directly. This is known as Q-learning. Another application of the MDP framework in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. The first detailed learning automata paper is surveyed by Narendra and Thathachar (1974), where learning automata were originally described explicitly as finite-state automata.[20] Similar to reinforcement learning, a learning automata algorithm also has the advantage of solving the problem when the probabilities or rewards are unknown. The difference between learning automata and Q-learning is that the former technique omits the memory of Q-values, but updates the action probability directly to find the learning result. 
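A minimal tabular Q-learning sketch matching the description above; the stand-in environment and the hyperparameter values are assumptions for illustration, not from the source:

```python
import random

def q_learning(step, n_states, n_actions, episodes=2000, horizon=50,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: `step(s, a)` samples (s', r) from the unknown
    MDP; the Q array is updated directly from (s, a, s', r) experience."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda act: Q[s][act])
            s2, r = step(s, a)
            # move Q(s, a) toward the sample target r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Deterministic stand-in environment: action 1 moves 0 -> 1; state 1 absorbs
# and pays reward 1 on every step spent there.
def step(s, a):
    s2 = 1 if (a == 1 or s == 1) else 0
    return s2, float(s2 == 1)

random.seed(0)   # fixed seed so the run is reproducible
Q = q_learning(step, n_states=2, n_actions=2)
```

On this environment the learned values approach the same fixed point as value iteration: Q(1, ·) ≈ 10 and Q(0, 1) ≈ 10 > Q(0, 0), even though the transition probabilities were never given to the learner.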
Learning automata is a learning scheme with a rigorous proof of convergence.[21] In learning automata theory,a stochastic automatonconsists of: The states of such an automaton correspond to the states of a "discrete-state discrete-parameterMarkov process".[22]At each time stept= 0,1,2,3,..., the automaton reads an input from its environment, updates P(t) to P(t+ 1) byA, randomly chooses a successor state according to the probabilities P(t+ 1) and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton.[21] Other than the rewards, a Markov decision process(S,A,P){\displaystyle (S,A,P)}can be understood in terms ofCategory theory. Namely, letA{\displaystyle {\mathcal {A}}}denote thefree monoidwith generating setA. LetDistdenote theKleisli categoryof theGiry monad. Then a functorA→Dist{\displaystyle {\mathcal {A}}\to \mathbf {Dist} }encodes both the setSof states and the probability functionP. In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result(C,F:C→Dist){\displaystyle ({\mathcal {C}},F:{\mathcal {C}}\to \mathbf {Dist} )}acontext-dependent Markov decision process, because moving from one object to another inC{\displaystyle {\mathcal {C}}}changes the set of available actions and the set of possible states.[citation needed] The terminology and notation for MDPs are not entirely settled. There are two main streams — one focuses on maximization problems from contexts like economics, using the terms action, reward, value, and calling the discount factorβorγ, while the other focuses on minimization problems from engineering and navigation[citation needed], using the terms control, cost, cost-to-go, and calling the discount factorα. In addition, the notation for the transition probability varies. 
In addition, transition probability is sometimes writtenPr(s,a,s′){\displaystyle \Pr(s,a,s')},Pr(s′∣s,a){\displaystyle \Pr(s'\mid s,a)}or, rarely,ps′s(a).{\displaystyle p_{s's}(a).}
https://en.wikipedia.org/wiki/Markov_decision_process
Inprobability theory,Markov's inequalitygives anupper boundon theprobabilitythat anon-negativerandom variableis greater than or equal to some positiveconstant. Markov's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality.[1] It is named after the Russian mathematicianAndrey Markov, although it appeared earlier in the work ofPafnuty Chebyshev(Markov's teacher), and many sources, especially inanalysis, refer to it as Chebyshev's inequality (sometimes, calling it the first Chebyshev inequality, while referring toChebyshev's inequalityas the second Chebyshev inequality) orBienaymé's inequality. Markov's inequality (and other similar inequalities) relate probabilities toexpectations, and provide (frequently loose but still useful) bounds for thecumulative distribution functionof a random variable. Markov's inequality can also be used to upper bound the expectation of a non-negative random variable in terms of its distribution function. 
IfXis a nonnegative random variable anda> 0, then the probability thatXis at leastais at most the expectation ofXdivided bya:[1] WhenE⁡(X)>0{\displaystyle \operatorname {E} (X)>0}, we can takea=a~⋅E⁡(X){\displaystyle a={\tilde {a}}\cdot \operatorname {E} (X)}fora~>0{\displaystyle {\tilde {a}}>0}to rewrite the previous inequality as In the language ofmeasure theory, Markov's inequality states that if(X, Σ,μ)is ameasure space,f{\displaystyle f}is ameasurableextended real-valued function, andε> 0, then This measure-theoretic definition is sometimes referred to asChebyshev's inequality.[2] Ifφis a nondecreasing nonnegative function,Xis a (not necessarily nonnegative) random variable, andφ(a) > 0, then[3] An immediate corollary, using higher moments ofXsupported on values larger than 0, is IfXis a nonnegative random variable anda> 0, andUis a uniformly distributed random variable on[0,1]{\displaystyle [0,1]}that is independent ofX, then[4] SinceUis almost surely smaller than one, this bound is strictly stronger than Markov's inequality. Remarkably,Ucannot be replaced by any constant smaller than one, meaning that deterministic improvements to Markov's inequality cannot exist in general. While Markov's inequality holds with equality for distributions supported on{0,a}{\displaystyle \{0,a\}}, the above randomized variant holds with equality for any distribution that is bounded on[0,a]{\displaystyle [0,a]}. We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader. 
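The bound P(X ≥ a) ≤ E(X)/a can be checked empirically, e.g. against an exponential distribution (a distribution chosen here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)   # nonnegative, E[X] = 2

for a in (1.0, 2.0, 5.0, 10.0):
    empirical = (x >= a).mean()     # estimated P(X >= a)
    markov_bound = x.mean() / a     # E[X] / a
    # The exact tail for Exp(mean 2) is exp(-a/2): Markov's bound is
    # loose here, but it always holds.
    assert empirical <= markov_bound
```

For a = 10 the true tail is e^(−5) ≈ 0.0067 while the bound is 0.2, illustrating that Markov's inequality is frequently loose but still valid; equality requires a distribution supported on {0, a}.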
E⁡(X)=P⁡(X<a)⋅E⁡(X|X<a)+P⁡(X≥a)⋅E⁡(X|X≥a){\displaystyle \operatorname {E} (X)=\operatorname {P} (X<a)\cdot \operatorname {E} (X|X<a)+\operatorname {P} (X\geq a)\cdot \operatorname {E} (X|X\geq a)}, where E⁡(X|X<a){\displaystyle \operatorname {E} (X|X<a)} is larger than or equal to 0, as the random variable X{\displaystyle X} is non-negative, and E⁡(X|X≥a){\displaystyle \operatorname {E} (X|X\geq a)} is larger than or equal to a{\displaystyle a}, because the conditional expectation only takes into account values larger than or equal to a{\displaystyle a} which the r.v. X{\displaystyle X} can take. Property 1: P⁡(X<a)⋅E⁡(X∣X<a)≥0{\displaystyle \operatorname {P} (X<a)\cdot \operatorname {E} (X\mid X<a)\geq 0} Given a non-negative random variable X{\displaystyle X}, the conditional expectation E⁡(X∣X<a)≥0{\displaystyle \operatorname {E} (X\mid X<a)\geq 0} because X≥0{\displaystyle X\geq 0}. Also, probabilities are always non-negative, i.e., P⁡(X<a)≥0{\displaystyle \operatorname {P} (X<a)\geq 0}. Thus, the product: P⁡(X<a)⋅E⁡(X∣X<a)≥0{\displaystyle \operatorname {P} (X<a)\cdot \operatorname {E} (X\mid X<a)\geq 0}. This is intuitive since conditioning on X<a{\displaystyle X<a} still results in non-negative values, ensuring the product remains non-negative. Property 2: P⁡(X≥a)⋅E⁡(X∣X≥a)≥a⋅P⁡(X≥a){\displaystyle \operatorname {P} (X\geq a)\cdot \operatorname {E} (X\mid X\geq a)\geq a\cdot \operatorname {P} (X\geq a)} For X≥a{\displaystyle X\geq a}, the expected value given X≥a{\displaystyle X\geq a} is at least a{\displaystyle a}: E⁡(X∣X≥a)≥a{\displaystyle \operatorname {E} (X\mid X\geq a)\geq a}. Multiplying both sides by P⁡(X≥a){\displaystyle \operatorname {P} (X\geq a)}, we get: P⁡(X≥a)⋅E⁡(X∣X≥a)≥a⋅P⁡(X≥a){\displaystyle \operatorname {P} (X\geq a)\cdot \operatorname {E} (X\mid X\geq a)\geq a\cdot \operatorname {P} (X\geq a)}. This is intuitive since all values considered are at least a{\displaystyle a}, making their average also greater than or equal to a{\displaystyle a}. 
Hence intuitively,E⁡(X)≥P⁡(X≥a)⋅E⁡(X|X≥a)≥a⋅P⁡(X≥a){\displaystyle \operatorname {E} (X)\geq \operatorname {P} (X\geq a)\cdot \operatorname {E} (X|X\geq a)\geq a\cdot \operatorname {P} (X\geq a)}, which directly leads toP⁡(X≥a)≤E⁡(X)a{\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} (X)}{a}}}. Method 1:From the definition of expectation: However, X is a non-negative random variable thus, From this we can derive, From here, dividing through bya{\displaystyle a}allows us to see that Method 2:For any eventE{\displaystyle E}, letIE{\displaystyle I_{E}}be the indicator random variable ofE{\displaystyle E}, that is,IE=1{\displaystyle I_{E}=1}ifE{\displaystyle E}occurs andIE=0{\displaystyle I_{E}=0}otherwise. Using this notation, we haveI(X≥a)=1{\displaystyle I_{(X\geq a)}=1}if the eventX≥a{\displaystyle X\geq a}occurs, andI(X≥a)=0{\displaystyle I_{(X\geq a)}=0}ifX<a{\displaystyle X<a}. Then, givena>0{\displaystyle a>0}, which is clear if we consider the two possible values ofX≥a{\displaystyle X\geq a}. IfX<a{\displaystyle X<a}, thenI(X≥a)=0{\displaystyle I_{(X\geq a)}=0}, and soaI(X≥a)=0≤X{\displaystyle aI_{(X\geq a)}=0\leq X}. Otherwise, we haveX≥a{\displaystyle X\geq a}, for whichIX≥a=1{\displaystyle I_{X\geq a}=1}and soaIX≥a=a≤X{\displaystyle aI_{X\geq a}=a\leq X}. SinceE{\displaystyle \operatorname {E} }is a monotonically increasing function, taking expectation of both sides of an inequality cannot reverse it. Therefore, Now, using linearity of expectations, the left side of this inequality is the same as Thus we have and sincea> 0, we can divide both sides bya. We may assume that the functionf{\displaystyle f}is non-negative, since only its absolute value enters in the equation. Now, consider the real-valued functionsonXgiven by Then0≤s(x)≤f(x){\displaystyle 0\leq s(x)\leq f(x)}. 
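Method 2 above can be illustrated numerically: the pointwise bound aI(X ≥ a) ≤ X is checked on every sample, and averaging both sides yields the inequality. The uniform distribution on [0, 5] is an arbitrary nonnegative choice for illustration.

```python
import random

# Illustration of Method 2: a * I(X >= a) <= X holds pointwise for a
# nonnegative variable, so taking expectations preserves the inequality.
random.seed(1)
a = 2.0
xs = [random.uniform(0, 5) for _ in range(10_000)]

# pointwise: a * indicator(x >= a) never exceeds x
assert all(a * (x >= a) <= x for x in xs)

lhs = a * sum(x >= a for x in xs) / len(xs)  # a * P(X >= a)
rhs = sum(xs) / len(xs)                      # E[X]
assert lhs <= rhs
```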
By the definition of theLebesgue integral and sinceε>0{\displaystyle \varepsilon >0}, both sides can be divided byε{\displaystyle \varepsilon }, obtaining We now provide a proof for the special case whenX{\displaystyle X}is a discrete random variable which only takes on non-negative integer values. Leta{\displaystyle a}be a positive integer. By definitionaPr⁡(X>a){\displaystyle a\operatorname {Pr} (X>a)}=aPr⁡(X=a+1)+aPr⁡(X=a+2)+aPr⁡(X=a+3)+...{\displaystyle =a\operatorname {Pr} (X=a+1)+a\operatorname {Pr} (X=a+2)+a\operatorname {Pr} (X=a+3)+...}≤aPr⁡(X=a)+(a+1)Pr⁡(X=a+1)+(a+2)Pr⁡(X=a+2)+...{\displaystyle \leq a\operatorname {Pr} (X=a)+(a+1)\operatorname {Pr} (X=a+1)+(a+2)\operatorname {Pr} (X=a+2)+...}≤Pr⁡(X=1)+2Pr⁡(X=2)+3Pr⁡(X=3)+...{\displaystyle \leq \operatorname {Pr} (X=1)+2\operatorname {Pr} (X=2)+3\operatorname {Pr} (X=3)+...}+aPr⁡(X=a)+(a+1)Pr⁡(X=a+1)+(a+2)Pr⁡(X=a+2)+...{\displaystyle +a\operatorname {Pr} (X=a)+(a+1)\operatorname {Pr} (X=a+1)+(a+2)\operatorname {Pr} (X=a+2)+...}=E⁡(X){\displaystyle =\operatorname {E} (X)} Dividing bya{\displaystyle a}yields the desired result. Chebyshev's inequalityuses thevarianceto bound the probability that a random variable deviates far from the mean. Specifically, for anya> 0.[3]HereVar(X)is thevarianceof X, defined as: Chebyshev's inequality follows from Markov's inequality by considering the random variable and the constanta2,{\displaystyle a^{2},}for which Markov's inequality reads This argument can be summarized (where "MI" indicates use of Markov's inequality): Assuming no income is negative, Markov's inequality shows that no more than 10% (1/10) of the population can have more than 10 times the average income.[6] Another simple example is as follows: Andrew makes 4 mistakes on average on his Statistics course tests. 
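The derivation of Chebyshev's inequality from Markov's inequality can be checked numerically by applying Markov's bound to the nonnegative variable (X − μ)² with threshold a². The standard normal samples below are an illustrative choice.

```python
import random

# Chebyshev's inequality P(|X - mu| >= a) <= Var(X)/a^2, obtained by
# applying Markov's inequality to (X - mu)^2 with threshold a^2.
random.seed(2)
xs = [random.gauss(0, 1) for _ in range(100_000)]
mu = sum(xs) / len(xs)
var = sum((x - mu) ** 2 for x in xs) / len(xs)

a = 1.5
dev_sq = [(x - mu) ** 2 for x in xs]
markov_bound = (sum(dev_sq) / len(dev_sq)) / a ** 2    # = Var(X)/a^2
tail = sum(d >= a ** 2 for d in dev_sq) / len(dev_sq)  # = P(|X - mu| >= a)
assert tail <= markov_bound
```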
The best upper bound on the probability that Andrew will make at least 10 mistakes is 0.4, sinceP⁡(X≥10)≤E⁡(X)a=410.{\displaystyle \operatorname {P} (X\geq 10)\leq {\frac {\operatorname {E} (X)}{a}}={\frac {4}{10}}.}The bound is tight: Andrew might make exactly 10 mistakes with probability 0.4 and no mistakes with probability 0.6, so that the expectation is exactly 4 mistakes.
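The arithmetic of this example, and the two-point distribution attaining the bound, can be checked directly:

```python
# E[X] = 4 mistakes and threshold a = 10 give the bound P(X >= 10) <= 0.4.
mean_mistakes = 4
a = 10
bound = mean_mistakes / a
assert bound == 0.4

# The two-point distribution P(X = 10) = 0.4, P(X = 0) = 0.6 attains the
# bound while keeping the expectation at exactly 4 mistakes.
p10, p0 = 0.4, 0.6
assert abs(p10 * 10 + p0 * 0 - mean_mistakes) < 1e-12
assert p10 == bound
```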
https://en.wikipedia.org/wiki/Markov%27s_inequality
Inmathematics, theMarkov brothers' inequalityis aninequality,provedin the 1890s by brothersAndrey MarkovandVladimir Markov, two Russian mathematicians. This inequalityboundsthe maximum of thederivativesof apolynomialon anintervalin terms of the maximum of the polynomial.[1]Fork= 1 it was proved by Andrey Markov,[2]and fork= 2, 3, ... by his brother Vladimir Markov.[3] LetPbe a polynomial ofdegree≤n. Then for all nonnegativeintegersk{\displaystyle k} This inequality is tight, as equality is attained forChebyshev polynomialsof the first kind. Markov's inequality is used to obtain lower bounds incomputational complexity theoryvia the so-called "polynomial method".[4]
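The k = 1 case, max|P′| ≤ n² max|P| on [−1, 1], can be spot-checked on a grid with the Chebyshev polynomial T₃(x) = 4x³ − 3x, for which equality is attained:

```python
# Chebyshev polynomial T_3(x) = 4x^3 - 3x and its derivative.
def T3(x):
    return 4 * x ** 3 - 3 * x

def dT3(x):
    return 12 * x ** 2 - 3

# maxima over a fine grid of [-1, 1]
grid = [i / 10_000 for i in range(-10_000, 10_001)]
max_p = max(abs(T3(x)) for x in grid)
max_dp = max(abs(dT3(x)) for x in grid)

n = 3  # degree
assert max_dp <= n ** 2 * max_p + 1e-9  # Markov brothers' bound, k = 1
assert abs(max_dp - 9.0) < 1e-9         # equality: max |T_3'| = n^2 = 9
```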
https://en.wikipedia.org/wiki/Markov_brothers%27_inequality
Inmathematics, aMarkov information source, or simply, aMarkov source, is aninformation sourcewhose underlying dynamics are given by a stationary finiteMarkov chain. Aninformation sourceis a sequence ofrandom variablesranging over a finite alphabetΓ{\displaystyle \Gamma }, having astationary distribution. A Markov information source is then a (stationary) Markov chainM{\displaystyle M}, together with afunction that maps statesS{\displaystyle S}in the Markov chain to letters in the alphabetΓ{\displaystyle \Gamma }. Aunifilar Markov sourceis a Markov source for which the valuesf(sk){\displaystyle f(s_{k})}are distinct whenever each of the statessk{\displaystyle s_{k}}are reachable, in one step, from a common prior state. Unifilar sources are notable in that many of their properties are far more easily analyzed, as compared to the general case. Markov sources are commonly used incommunication theory, as a model of atransmitter. Markov sources also occur innatural language processing, where they are used to represent hidden meaning in a text. Given the output of a Markov source, whose underlying Markov chain is unknown, the task of solving for the underlying chain is undertaken by the techniques ofhidden Markov models, such as theViterbi algorithm. Thisprobability-related article is astub. You can help Wikipedia byexpanding it.
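A Markov information source can be sketched as a transition matrix together with a state-to-letter map f; the three-state chain and two-letter alphabet below are illustrative assumptions, not taken from any standard example.

```python
import random

# A Markov source: a chain on states {0, 1, 2} plus a map f from states
# to the alphabet {'a', 'b'}. Transition matrix and labeling are
# illustrative assumptions.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]]
f = {0: 'a', 1: 'b', 2: 'a'}

def emit(n, state=0, seed=3):
    """Emit n letters by walking the chain and applying f."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        out.append(f[state])
        state = rng.choices([0, 1, 2], weights=P[state])[0]
    return ''.join(out)

text = emit(20)
assert set(text) <= {'a', 'b'}
```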
https://en.wikipedia.org/wiki/Markov_information_source
In the domain ofphysicsandprobability, aMarkov random field(MRF),Markov networkorundirectedgraphical modelis a set ofrandom variableshaving aMarkov propertydescribed by anundirected graph. In other words, arandom fieldis said to be aMarkovrandom field if it satisfies Markov properties. The concept originates from theSherrington–Kirkpatrick model.[1] A Markov network or MRF is similar to aBayesian networkin its representation of dependencies; the differences being that Bayesian networks aredirected and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies[further explanation needed]); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies[further explanation needed]). The underlying graph of a Markov random field may be finite or infinite. When thejoint probability densityof the random variables is strictly positive, it is also referred to as aGibbs random field, because, according to theHammersley–Clifford theorem, it can then be represented by aGibbs measurefor an appropriate (locally defined) energy function. 
The prototypical Markov random field is theIsing model; indeed, the Markov random field was introduced as the general setting for the Ising model.[2]In the domain ofartificial intelligence, a Markov random field is used to model various low- to mid-level tasks inimage processingandcomputer vision.[3] Given an undirected graphG=(V,E){\displaystyle G=(V,E)}, a set of random variablesX=(Xv)v∈V{\displaystyle X=(X_{v})_{v\in V}}indexed byV{\displaystyle V}form a Markov random field with respect toG{\displaystyle G}if they satisfy the local Markov properties: The Global Markov property is stronger than the Local Markov property, which in turn is stronger than the Pairwise one.[4]However, the above three Markov properties are equivalent for positive distributions[5](those that assign only nonzero probabilities to the associated variables). The relation between the three Markov properties is particularly clear in the following formulation: As the Markov property of an arbitrary probability distribution can be difficult to establish, a commonly used class of Markov random fields are those that can be factorized according to thecliquesof the graph. Given a set of random variablesX=(Xv)v∈V{\displaystyle X=(X_{v})_{v\in V}}, letP(X=x){\displaystyle P(X=x)}be theprobabilityof a particular field configurationx{\displaystyle x}inX{\displaystyle X}—that is,P(X=x){\displaystyle P(X=x)}is the probability of finding that the random variablesX{\displaystyle X}take on the particular valuex{\displaystyle x}. BecauseX{\displaystyle X}is a set, the probability ofx{\displaystyle x}should be understood to be taken with respect to ajoint distributionof theXv{\displaystyle X_{v}}. If this joint density can be factorized over the cliques ofG{\displaystyle G}as thenX{\displaystyle X}forms a Markov random field with respect toG{\displaystyle G}. Here,cl⁡(G){\displaystyle \operatorname {cl} (G)}is the set of cliques ofG{\displaystyle G}. 
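Clique factorization can be illustrated on a three-node path 1–2–3 with binary variables, where the cliques are the two edges: P(x) ∝ φ₁₂(x₁, x₂)·φ₂₃(x₂, x₃). The potential tables are arbitrary positive numbers chosen for illustration, and the script also verifies the resulting conditional independence of X₁ and X₃ given X₂:

```python
from itertools import product

# Joint distribution from clique (edge) potentials on the path 1-2-3.
phi12 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi23 = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}

weights = {x: phi12[x[0], x[1]] * phi23[x[1], x[2]]
           for x in product((0, 1), repeat=3)}
Z = sum(weights.values())              # partition function
P = {x: w / Z for x, w in weights.items()}

# Markov property of the field: X1 and X3 are independent given X2.
for x2 in (0, 1):
    p2 = sum(p for x, p in P.items() if x[1] == x2)
    for x1, x3 in product((0, 1), repeat=2):
        joint = P[x1, x2, x3] / p2
        m1 = sum(p for x, p in P.items() if x[0] == x1 and x[1] == x2) / p2
        m3 = sum(p for x, p in P.items() if x[2] == x3 and x[1] == x2) / p2
        assert abs(joint - m1 * m3) < 1e-12
```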
The definition is equivalent if only maximal cliques are used. The functionsφC{\displaystyle \varphi _{C}}are sometimes referred to asfactor potentialsorclique potentials. Note, however, that conflicting terminology is in use: the wordpotentialis often applied to the logarithm ofφC{\displaystyle \varphi _{C}}. This is because, instatistical mechanics,log⁡(φC){\displaystyle \log(\varphi _{C})}has a direct interpretation as thepotential energyof aconfigurationxC{\displaystyle x_{C}}. Some MRFs do not factorize: a simple example can be constructed on a cycle of 4 nodes with some infinite energies, i.e. configurations of zero probabilities,[6]even if one, more appropriately, allows the infinite energies to act on the complete graph onV{\displaystyle V}.[7] MRFs factorize if at least one of the following conditions is fulfilled: When such a factorization does exist, it is possible to construct afactor graphfor the network. Any positive Markov random field can be written as an exponential family in canonical form with feature functionsfk{\displaystyle f_{k}}such that the full-joint distribution can be written as where the notation is simply adot productover field configurations, andZis thepartition function: Here,X{\displaystyle {\mathcal {X}}}denotes the set of all possible assignments of values to all the network's random variables. Usually, the feature functionsfk,i{\displaystyle f_{k,i}}are defined such that they areindicatorsof the clique's configuration,i.e.fk,i(x{k})=1{\displaystyle f_{k,i}(x_{\{k\}})=1}ifx{k}{\displaystyle x_{\{k\}}}corresponds to thei-th possible configuration of thek-th clique and 0 otherwise.
This model is equivalent to the clique factorization model given above, ifNk=|dom⁡(Ck)|{\displaystyle N_{k}=|\operatorname {dom} (C_{k})|}is the cardinality of the clique, and the weight of a featurefk,i{\displaystyle f_{k,i}}corresponds to the logarithm of the corresponding clique factor,i.e.wk,i=log⁡φ(ck,i){\displaystyle w_{k,i}=\log \varphi (c_{k,i})}, whereck,i{\displaystyle c_{k,i}}is thei-th possible configuration of thek-th clique,i.e.thei-th value in the domain of the cliqueCk{\displaystyle C_{k}}. The probabilityPis often called the Gibbs measure. This expression of a Markov field as a logistic model is only possible if all clique factors are non-zero,i.e.if none of the elements ofX{\displaystyle {\mathcal {X}}}are assigned a probability of 0. This allows techniques from matrix algebra to be applied,e.g.the identity that the log of thedeterminantof a matrix equals thetraceof its matrix logarithm, with the matrix representation of a graph arising from the graph'sincidence matrix. The importance of the partition functionZis that many concepts fromstatistical mechanics, such asentropy, directly generalize to the case of Markov networks, and anintuitiveunderstanding can thereby be gained. In addition, the partition function allowsvariational methodsto be applied to the solution of the problem: one can attach a driving force to one or more of the random variables, and explore the reaction of the network in response to thisperturbation. Thus, for example, one may add a driving termJv, for each vertexvof the graph, to the partition function to get: Formally differentiating with respect toJvgives theexpectation valueof the random variableXvassociated with the vertexv: Correlation functionsare computed likewise; the two-point correlation is: Unfortunately, though the likelihood of a logistic Markov network is convex, evaluating the likelihood or gradient of the likelihood of a model requires inference in the model, which is generally computationally infeasible (see'Inference'below).
Amultivariate normal distributionforms a Markov random field with respect to a graphG=(V,E){\displaystyle G=(V,E)}if the missing edges correspond to zeros on theprecision matrix(the inversecovariance matrix): such that As in aBayesian network, one may calculate theconditional distributionof a set of nodesV′={v1,…,vi}{\displaystyle V'=\{v_{1},\ldots ,v_{i}\}}given values to another set of nodesW′={w1,…,wj}{\displaystyle W'=\{w_{1},\ldots ,w_{j}\}}in the Markov random field by summing over all possible assignments tou∉V′,W′{\displaystyle u\notin V',W'}; this is calledexact inference. However, exact inference is a#P-completeproblem, and thus computationally intractable in the general case. Approximation techniques such asMarkov chain Monte Carloand loopybelief propagationare often more feasible in practice. Some particular subclasses of MRFs, such as trees (seeChow–Liu tree), have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs that permit efficientMAP, or most likely assignment, inference; examples of these include associative networks.[9][10]Another interesting sub-class is the one of decomposable models (when the graph ischordal): having a closed-form for theMLE, it is possible to discover a consistent structure for hundreds of variables.[11] One notable variant of a Markov random field is aconditional random field, in which each random variable may also be conditioned upon a set of global observationso{\displaystyle o}. In this model, each functionφk{\displaystyle \varphi _{k}}is a mapping from all assignments to both thecliquekand the observationso{\displaystyle o}to the nonnegative real numbers. This form of the Markov network may be more appropriate for producingdiscriminative classifiers, which do not model the distribution over the observations. CRFs were proposed byJohn D. Lafferty,Andrew McCallumandFernando C.N. 
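Exact inference by enumeration can be sketched on a toy binary MRF: condition on evidence and sum out the remaining variables. The potentials below are illustrative; the brute-force sum is exponential in the number of variables, which is why exact inference is intractable in general.

```python
# Brute-force conditional inference P(X1 | X3 = 1) on a 3-variable
# binary MRF with edge potentials phi12, phi23 (illustrative values).
phi12 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi23 = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}

def weight(x1, x2, x3):
    return phi12[x1, x2] * phi23[x2, x3]

# sum out the hidden variable X2 with the evidence X3 = 1 fixed
score = {x1: sum(weight(x1, x2, 1) for x2 in (0, 1)) for x1 in (0, 1)}
total = sum(score.values())
posterior = {x1: s / total for x1, s in score.items()}
assert abs(sum(posterior.values()) - 1.0) < 1e-12
```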
Pereira in 2001.[12] Markov random fields find application in a variety of fields, ranging fromcomputer graphicsto computer vision,[13]machine learningorcomputational biology,[2][14]andinformation retrieval.[15]MRFs are used in image processing to generate textures, since they provide flexible and stochastic image models. In image modelling, the task is to find a suitable intensity distribution of a given image, where suitability depends on the kind of task, and MRFs are flexible enough to be used for image and texture synthesis,image compressionand restoration,image segmentation, 3D image inference from 2D images,image registration,texture synthesis,super-resolution,stereo matchingandinformation retrieval. They can be used to solve various computer vision problems which can be posed as energy minimization problems, or problems where different regions have to be distinguished using a set of discriminating features within a Markov random field framework, to predict the category of the region.[16]Markov random fields were a generalization of the Ising model and have, since then, been used widely in combinatorial optimization and networks.
https://en.wikipedia.org/wiki/Markov_random_field
AMarkov numberorMarkoff numberis a positiveintegerx,yorzthat is part of a solution to the MarkovDiophantine equation studied byAndrey Markoff(1879,1880). The first few Markov numbers are appearing as coordinates of the Markov triples There are infinitely many Markov numbers and Markov triples. There are two simple ways to obtain a new Markov triple from an old one (x,y,z). First, one maypermutethe 3 numbersx,y,z, so in particular one can normalize the triples so thatx≤y≤z. Second, if (x,y,z) is a Markov triple then so is (x,y, 3xy−z). Applying this operation twice returns the same triple one started with. Joining each normalized Markov triple to the 1, 2, or 3 normalized triples one can obtain from this gives a graph starting from (1,1,1) as in the diagram. This graph isconnected; in other words every Markov triple can be connected to(1,1,1)by a sequence of these operations.[1]If one starts, as an example, with(1, 5, 13)we get its threeneighbors(5, 13, 194),(1, 13, 34)and(1, 2, 5)in the Markov tree ifzis set to 1, 5 and 13, respectively. For instance, starting with(1, 1, 2)and tradingyandzbefore each iteration of the transform lists Markov triples withFibonacci numbers. Starting with that same triplet and tradingxandzbefore each iteration gives the triples withPell numbers. All the Markov numbers on the regions adjacent to 2's region areodd-indexed Pell numbers (or numbersnsuch that 2n2− 1 is asquare,OEIS:A001653), and all the Markov numbers on the regions adjacent to 1's region are odd-indexed Fibonacci numbers (OEIS:A001519). Thus, there are infinitely many Markov triples of the form whereFkis thekthFibonacci number. 
Likewise, there are infinitely many Markov triples of the form wherePkis thekthPell number.[2] Aside from the two smallestsingulartriples (1, 1, 1) and (1, 1, 2), every Markov triple consists of three distinct integers.[3] Theunicity conjecture, as remarked byFrobeniusin 1913,[4]states that for a given Markov numberc, there is exactly one normalized solution havingcas its largest element:proofsof thisconjecturehave been claimed but none seems to be correct.[5]Martin Aigner[6]examines several weaker variants of the unicity conjecture. His fixed numerator conjecture was proved by Rabideau and Schiffler in 2020,[7]while the fixed denominator conjecture and fixed sum conjecture were proved by Lee, Li, Rabideau and Schiffler in 2023.[8] None of the prime divisors of a Markov number is congruent to 3 modulo 4, which implies that an odd Markov number is 1 more than a multiple of 4.[9]Furthermore, ifm{\displaystyle m}is a Markov number then none of the prime divisors of9m2−4{\displaystyle 9m^{2}-4}is congruent to 3 modulo 4. AnevenMarkov number is 2 more than a multiple of 32.[10] In his 1982 paper,Don Zagierconjectured that thenth Markov number is asymptotically given by The error term iso(1)=(log⁡(3mn)/C)2−n{\displaystyle o(1)=(\log(3m_{n})/C)^{2}-n}. Moreover, he pointed out thatx2+y2+z2=3xyz+4/9{\displaystyle x^{2}+y^{2}+z^{2}=3xyz+4/9}, an approximation of the original Diophantine equation, is equivalent tof(x)+f(y)=f(z){\displaystyle f(x)+f(y)=f(z)}withf(t) =arcosh(3t/2).[11]The conjecture was proved[disputed–discuss]byGreg McShaneandIgor Rivinin 1995 using techniques fromhyperbolic geometry.[12] ThenthLagrange numbercan be calculated from thenth Markov number with the formula The Markov numbers are sums of (non-unique) pairs of squares.
Markoff (1879,1880) showed that if is anindefinitebinary quadratic formwithrealcoefficients anddiscriminantD=b2−4ac{\displaystyle D=b^{2}-4ac}, then there are integersx,yfor whichftakes a nonzero value ofabsolute valueat most unlessfis aMarkov form:[13]a constant times a form such that where (p,q,r) is a Markov triple. Let tr denote thetracefunction overmatrices. IfXandYare inSL2(C{\displaystyle \mathbb {C} }), then so that iftr⁡(XYX−1Y−1)=−2{\textstyle \operatorname {tr} (XYX^{-1}Y^{-1})=-2}then In particular ifXandYalso have integer entries then tr(X)/3, tr(Y)/3, and tr(XY)/3 are a Markov triple. IfX⋅Y⋅Z=Ithen tr(XtY) = tr(Z), so more symmetrically ifX,Y, andZare in SL2(Z{\displaystyle \mathbb {Z} }) withX⋅Y⋅Z= I and thecommutatorof two of them has trace −2, then their traces/3 are a Markov triple.[14]
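The tree construction described earlier (permuting a triple and replacing one entry z by 3xy − z) can be used to enumerate all Markov triples up to a bound:

```python
# Enumerate Markov triples up to a bound, starting from (1, 1, 1).
def neighbors(t):
    x, y, z = t
    return [tuple(sorted((y, z, 3 * y * z - x))),
            tuple(sorted((x, z, 3 * x * z - y))),
            tuple(sorted((x, y, 3 * x * y - z)))]

def markov_triples(limit):
    seen, frontier = set(), [(1, 1, 1)]
    while frontier:
        t = frontier.pop()
        if t in seen or max(t) > limit:
            continue
        seen.add(t)
        frontier.extend(neighbors(t))
    return sorted(seen)

triples = markov_triples(1000)
# every generated triple solves x^2 + y^2 + z^2 = 3xyz
assert all(x * x + y * y + z * z == 3 * x * y * z for x, y, z in triples)
```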
https://en.wikipedia.org/wiki/Markov_number
Inprobability theoryandstatistics, the termMarkov propertyrefers to thememorylessproperty of astochastic process, which means that, conditional on its present state, its future evolution is independent of its history. It is named after theRussianmathematicianAndrey Markov. The termstrong Markov propertyis similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as astopping time. The termMarkov assumptionis used to describe a model where the Markov property is assumed to hold, such as ahidden Markov model. AMarkov random fieldextends this property to two or more dimensions or to random variables defined for an interconnected network of items.[1]An example of a model for such a field is theIsing model. A discrete-time stochastic process satisfying the Markov property is known as aMarkov chain. A stochastic process has the Markov property if theconditional probability distributionof future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to beMarkovorMarkovianand known as aMarkov process. Two famous classes of Markov process are theMarkov chainandBrownian motion. Note a subtle but important point that is often missed in the plain-English statement of the definition: the state space of the process is constant through time. The conditional description involves a fixed "bandwidth". For example, without this restriction we could augment any process to one which includes the complete history from a given initial condition, and it would be made to be Markovian. But the state space would be of increasing dimensionality over time and does not meet the definition.
Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability spacewith afiltration(Fs,s∈I){\displaystyle ({\mathcal {F}}_{s},\ s\in I)}, for some (totally ordered) index setI{\displaystyle I}; and let(S,S){\displaystyle (S,{\mathcal {S}})}be ameasurable space. A(S,S){\displaystyle (S,{\mathcal {S}})}-valued stochastic processX={Xt:Ω→S}t∈I{\displaystyle X=\{X_{t}:\Omega \to S\}_{t\in I}}adapted to the filtrationis said to possess theMarkov propertyif, for eachA∈S{\displaystyle A\in {\mathcal {S}}}and eachs,t∈I{\displaystyle s,t\in I}withs<t{\displaystyle s<t}, In the case whereS{\displaystyle S}is a discrete set with thediscrete sigma algebraandI=N{\displaystyle I=\mathbb {N} }, this can be reformulated as follows: Alternatively, the Markov property can be formulated as follows. for allt≥s≥0{\displaystyle t\geq s\geq 0}andf:S→R{\displaystyle f:S\rightarrow \mathbb {R} }bounded and measurable.[3] Suppose thatX=(Xt:t≥0){\displaystyle X=(X_{t}:t\geq 0)}is astochastic processon aprobability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}withnatural filtration{Ft}t≥0{\displaystyle \{{\mathcal {F}}_{t}\}_{t\geq 0}}. Then for anystopping timeτ{\displaystyle \tau }onΩ{\displaystyle \Omega }, we can define ThenX{\displaystyle X}is said to have the strong Markov property if, for eachstopping timeτ{\displaystyle \tau }, conditional on the event{τ<∞}{\displaystyle \{\tau <\infty \}}, we have that for eacht≥0{\displaystyle t\geq 0},Xτ+t{\displaystyle X_{\tau +t}}is independent ofFτ{\displaystyle {\mathcal {F}}_{\tau }}givenXτ{\displaystyle X_{\tau }}. 
The strong Markov property implies the ordinary Markov property since by taking the stopping timeτ=t{\displaystyle \tau =t}, the ordinary Markov property can be deduced.[4] In the fields ofpredictive modellingandprobabilistic forecasting, the Markov property is considered desirable since it may enable the reasoning and resolution of the problem that otherwise would not be possible to be resolved because of itsintractability. Such a model is known as aMarkov model. Assume that an urn contains two red balls and one green ball. One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow. All of the draws are "without replacement". Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2. That's because the only two remaining outcomes for this random experiment are: On the other hand, if you know that both today and yesterday's balls were red, then you are guaranteed to get a green ball tomorrow. This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value, but is also affected by information about the past. This stochastic process of observed colors doesn't have the Markov property. Using the same experiment above, if sampling "without replacement" is changed to sampling "with replacement," the process of observed colors will have the Markov property.[5] An application of the Markov property in a generalized form is inMarkov chain Monte Carlocomputations in the context ofBayesian statistics.
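The urn example above can be verified by enumerating the 3! equally likely draw orders of {R, R, G}:

```python
from itertools import permutations

# Enumerate the 6 equally likely draw orders of {R, R, G}.
orders = list(permutations(['R', 'R', 'G']))

def prob(event, given):
    sel = [o for o in orders if given(o)]
    return sum(event(o) for o in sel) / len(sel)

# P(tomorrow red | today red) = 1/2
p_today = prob(lambda o: o[2] == 'R', lambda o: o[1] == 'R')
# P(tomorrow red | today and yesterday red) = 0
p_both = prob(lambda o: o[2] == 'R',
              lambda o: o[0] == 'R' and o[1] == 'R')

assert p_today == 0.5
assert p_both == 0.0  # extra knowledge of the past changes the answer
```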
https://en.wikipedia.org/wiki/Markov_property
In probability theory and statistics, aMarkov chainorMarkov processis astochastic processdescribing asequenceof possible events in which theprobabilityof each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairsnow." Acountably infinitesequence, in which the chain moves state at discrete time steps, gives adiscrete-time Markov chain(DTMC). Acontinuous-timeprocess is called acontinuous-time Markov chain(CTMC). Markov processes are named in honor of theRussianmathematicianAndrey Markov. Markov chains have many applications asstatistical modelsof real-world processes.[1]They provide the basis for general stochastic simulation methods known asMarkov chain Monte Carlo, which are used for simulating sampling from complexprobability distributions, and have found application in areas includingBayesian statistics,biology,chemistry,economics,finance,information theory,physics,signal processing, andspeech processing.[1][2][3] The adjectivesMarkovianandMarkovare used to describe something that is related to a Markov process.[4] A Markov process is astochastic processthat satisfies theMarkov property(sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.[5]In other words,conditionalon the present state of the system, its future and past states areindependent. 
A Markov chain is a type of Markov process that has either a discretestate spaceor a discrete index set (often representing time), but the precise definition of a Markov chain varies.[6]For example, it is common to define a Markov chain as a Markov process in eitherdiscrete or continuous timewith a countable state space (thus regardless of the nature of time),[7][8][9][10]but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[6] The system'sstate spaceand time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, adiscrete-time Markov chain (DTMC),[11]but a few authors use the term "Markov process" to refer to acontinuous-time Markov chain (CTMC)without explicit mention.[12][13][14]In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (seeMarkov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. 
While the time parameter is usually discrete, thestate spaceof a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[15]However, many applications of Markov chains employ finite orcountably infinitestate spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (seeVariations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, atransition matrixdescribing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are theintegersornatural numbers, and the random process is a mapping of these to states. The Markov property states that theconditional probability distributionfor the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. 
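A prediction of this statistical kind can be sketched with a two-state chain; the transition probabilities below are illustrative. Iterating the transition matrix converges to the stationary distribution π satisfying π = πP.

```python
# Two-state chain: 'S' (sunny) and 'R' (rainy); illustrative probabilities.
P = {('S', 'S'): 0.9, ('S', 'R'): 0.1,
     ('R', 'S'): 0.5, ('R', 'R'): 0.5}

dist = {'S': 1.0, 'R': 0.0}  # initial distribution
for _ in range(200):         # repeated transitions
    dist = {j: sum(dist[i] * P[i, j] for i in 'SR') for j in 'SR'}

# the limit is the stationary distribution pi = (5/6, 1/6)
assert abs(dist['S'] - 5 / 6) < 1e-9
assert abs(dist['R'] - 1 / 6) < 1e-9
```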
In many applications, it is these statistical properties that are important. Andrey Markovstudied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.[16][17][18]Markov Processes in continuous time were discovered long before his work in the early 20th century in the form of thePoisson process.[19][20][21]Markov was interested in studying an extension of independent random sequences, motivated by a disagreement withPavel Nekrasovwho claimed independence was necessary for theweak law of large numbersto hold.[22]In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[16][17][18]which had been commonly regarded as a requirement for such mathematical laws to hold.[18]Markov later used Markov chains to study the distribution of vowels inEugene Onegin, written byAlexander Pushkin, and proved acentral limit theoremfor such chains.[16] In 1912Henri Poincaréstudied Markov chains onfinite groupswith an aim to study card shuffling. 
Other early uses of Markov chains include a diffusion model, introduced byPaulandTatyana Ehrenfestin 1907, and a branching process, introduced byFrancis GaltonandHenry William Watsonin 1873, preceding the work of Markov.[16][17]After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier byIrénée-Jules Bienaymé.[23]Starting in 1928,Maurice Fréchetbecame interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[16][24] Andrey Kolmogorovdeveloped in a 1931 paper a large part of the early theory of continuous-time Markov processes.[25][26]Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well asNorbert Wiener's work on Einstein's model of Brownian movement.[25][27]He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[25][28]Independent of Kolmogorov's work,Sydney Chapmanderived in a 1928 paper an equation, now called theChapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[29]The differential equations are now called the Kolmogorov equations[30]or the Kolmogorov–Chapman equations.[31]Other mathematicians who contributed significantly to the foundations of Markov processes includeWilliam Feller, starting in 1930s, and then laterEugene Dynkin, starting in the 1950s.[26] Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢ and five coins worth 5¢, and one by one, coins are randomly drawn from the purse and are set on a table. IfXn{\displaystyle X_{n}}represents the total value of the coins set on the table afterndraws, withX0=0{\displaystyle X_{0}=0}, then the sequence{Xn:n∈N}{\displaystyle \{X_{n}:n\in \mathbb {N} \}}isnota Markov process. 
To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn, so that X_6 = $0.50. If we know not just X_6 but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X_7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X_6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X_7 are affected by our knowledge of values prior to X_6. However, it is possible to model this scenario as a Markov process. Instead of defining X_n to represent the total value of the coins on the table, we could define X_n to represent the count of each coin type on the table. For instance, X_6 = 1,0,5 could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by 6 × 6 × 6 = 216 possible states, where each state records the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state X_1 = 0,1,0. The probability of achieving X_2 now depends on X_1; for example, the state X_2 = 1,0,1 is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario).
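The non-Markov nature of the total-value process can be checked by simulation: conditioning on X_6 = $0.50, the distribution of the next coin drawn depends on which of the two possible histories occurred. A minimal pure-Python sketch (coin values in cents; all names are illustrative):

```python
import random
from collections import Counter

COINS = [25] * 5 + [10] * 5 + [5] * 5  # values in cents

random.seed(0)
by_history = {}
for _ in range(200_000):
    seq = COINS[:]
    random.shuffle(seq)
    first6 = tuple(sorted(seq[:6]))
    if sum(first6) == 50:  # condition on X_6 = $0.50
        by_history.setdefault(first6, Counter())[seq[6]] += 1

# Exactly two histories reach 50 cents in six draws:
# (25,5,5,5,5,5), after which a nickel can never follow, and (10,10,10,10,5,5).
for hist, nxt in sorted(by_history.items()):
    total = sum(nxt.values())
    print(hist, {c: round(cnt / total, 3) for c, cnt in sorted(nxt.items())})
```

The two conditional next-coin distributions printed are different, which is exactly why X_n (total value) fails the Markov property while the coin-count state does not.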
In this way, the likelihood of the state X_n = i,j,k depends exclusively on the outcome of the state X_{n−1} = ℓ,m,p. A discrete-time Markov chain is a sequence of random variables X_1, X_2, X_3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n), provided both conditional probabilities are well defined. The possible values of X_i form a countable set S called the state space of the chain. A continuous-time Markov chain (X_t)_{t ≥ 0} is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements q_ij are non-negative and describe the rate at which the process transitions from state i to state j. The elements q_ii are chosen such that each row of the transition rate matrix sums to zero, whereas the row sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process.[40] Let X_t be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. Then, knowing X_t = i, X_{t+h} = j is independent of previous values (X_s : s < t), and as h → 0 for all j and for all t, Pr(X(t+h) = j | X(t) = i) = δ_ij + q_ij h + o(h), where δ_ij is the Kronecker delta, using the little-o notation. The q_ij can be seen as measuring how quickly the transition from i to j happens. Define a discrete-time Markov chain Y_n to describe the nth jump of the process and variables S_1, S_2, S_3, ... to describe holding times in each of the states, where S_i follows the exponential distribution with rate parameter −q_{Y_i Y_i}. For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t_0, t_1, t_2, ...
and all states recorded at these times i_0, i_1, i_2, i_3, ... it holds that Pr(X_{t_{n+1}} = i_{n+1} | X_{t_0} = i_0, ..., X_{t_n} = i_n) = p_{i_n i_{n+1}}(t_{n+1} − t_n), where p_ij is the solution of the forward equation (a first-order differential equation) P′(t) = P(t)Q, with initial condition P(0) the identity matrix. If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to p_ij = Pr(X_{n+1} = j | X_n = i). Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. A stationary distribution π is a (row) vector, whose entries are non-negative and sum to 1, that is unchanged by the operation of the transition matrix P on it, and so is defined by π = πP. By comparing this definition with that of an eigenvector we see that the two concepts are related and that π is a normalized (∑_i π_i = 1) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector, then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution. The values of a stationary distribution π_i are associated with the state space of P, and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as ∑_i 1·π_i = 1, we see that the dot product of π with a vector whose components are all 1 is unity, and that π lies on a simplex. If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, P^k.
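Since the k-step behaviour is just the k-th power of P, a stationary distribution can be approximated by repeatedly applying P to any starting distribution. A minimal pure-Python sketch on a hypothetical two-state chain:

```python
def step(dist, P):
    """One application of a row-stochastic matrix: dist' = dist P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=1000):
    """Power iteration: repeatedly apply P to an arbitrary start distribution."""
    dist = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        dist = step(dist, P)
    return dist

# Hypothetical 2-state chain: stay in state 0 with prob 0.9, in state 1 with 0.8.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = stationary(P)
print(pi)  # close to [2/3, 1/3]
# Check the defining property pi = pi P within floating-point tolerance.
assert all(abs(a - b) < 1e-9 for a, b in zip(pi, step(pi, P)))
```

This only converges for aperiodic chains, which is exactly the caveat discussed next.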
If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π.[41] Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π: lim_{k→∞} P^k = 1π, where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, lim_{k→∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P, the limit lim_{k→∞} P^k does not exist while the stationary distribution does, as shown by this example: the matrix P = [[0, 1], [1, 0]] has stationary distribution (1/2, 1/2), but P^{2k} is the identity and P^{2k+1} = P, so the powers oscillate. (This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define Q = lim_{k→∞} P^k. It is always true that QP = Q. Subtracting Q from both sides and factoring then yields Q(P − I_n) = 0_{n,n}, where I_n is the identity matrix of size n, and 0_{n,n} is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I_n)]^{−1} exists, then Q = f(0_{n,n}) [f(P − I_n)]^{−1}.[42][41] One thing to notice is that if P has an element P_{i,i} on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P^k. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P. As stated earlier, from the equation π = πP (if it exists), the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then, assuming that P is diagonalizable, or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a somewhat more involved set of arguments in a similar way.[43]) Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1), where each column is a left eigenvector of P, and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ_1, λ_2, λ_3, ..., λ_n). Then by eigendecomposition P = UΣU^{−1}. Let the eigenvalues be enumerated such that 1 = |λ_1| > |λ_2| ≥ |λ_3| ≥ ⋯ ≥ |λ_n|. Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector are unique too (because there is no other π which solves the stationary distribution equation above). Let u_i be the i-th column of U, that is, u_i is the left eigenvector of P corresponding to λ_i. Also let x be a length-n row vector that represents a valid probability distribution; since the eigenvectors u_i span ℝ^n, we can write x = ∑_{i=1}^n a_i u_i for some constants a_i. If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = a_1 u_1 ← xPP...P = xP^k as k → ∞.
That means π^(k) = xP^k = xUΣ^kU^{−1} = a_1 λ_1^k u_1 + a_2 λ_2^k u_2 + ⋯ + a_n λ_n^k u_n. Since π is parallel to u_1 (normalized by L2 norm) and π^(k) is a probability vector, π^(k) approaches a_1 u_1 = π as k → ∞, with a speed on the order of λ_2/λ_1 exponentially. This follows because |λ_2| ≥ ⋯ ≥ |λ_n|, hence λ_2/λ_1 is the dominant term. The smaller the ratio is, the faster the convergence is.[44] Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[45] Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space. "Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See for instance Interaction of Markov Processes[46] or.[47] Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is one communicating class, the state space. A state i has period k if k is the greatest common divisor of the numbers of transitions by which i can be reached, starting from i. That is: k = gcd{n > 0 : Pr(X_n = i | X_0 = i) > 0}. The state is periodic if k > 1; otherwise k = 1 and the state is aperiodic. A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i.
It is called recurrent (or persistent) otherwise.[48] For a recurrent state i, the mean hitting time is defined as M_i = E[T_i], the expected number of steps until the chain first returns to i. State i is positive recurrent if M_i is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property.[49] A state i is called absorbing if there are no outgoing transitions from the state. Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.[50] If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by π_i = 1/E[T_i]. A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer k such that all entries of M^k are positive. It can be shown that a finite-state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps less than or equal to N. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1. A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.
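The period of a state can be computed straight from its definition, as the gcd of the lengths of positive-probability return paths. A pure-Python sketch using boolean matrix powers (chains here are small illustrative examples):

```python
from math import gcd

def period(adj, i, max_len=50):
    """gcd of lengths n <= max_len of positive-probability return paths i -> i."""
    n = len(adj)
    one_step = [[a > 0 for a in row] for row in adj]
    cur = [row[:] for row in one_step]  # cur = n-step reachability, starting at n=1
    g = 0
    for length in range(1, max_len + 1):
        if cur[i][i]:
            g = gcd(g, length)
        # boolean matrix "multiplication": extend every path by one step
        cur = [[any(cur[r][k] and one_step[k][c] for k in range(n))
                for c in range(n)] for r in range(n)]
    return g

# Two-state flip chain 0 <-> 1: every return to 0 takes an even number of steps.
print(period([[0, 1], [1, 0]], 0))  # 2

# Adding a self-loop at state 0 makes it aperiodic.
print(period([[1, 1], [1, 0]], 0))  # 1
```

Because periodicity is a class property, in an irreducible chain it is enough to compute the period of a single state.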
Some authors call any irreducible, positive recurrent Markov chain ergodic, even periodic ones.[51] In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory.[52] Some authors call a matrix primitive if there exists some integer k such that all entries of M^k are positive.[53] Some authors call it regular.[54] The index of primitivity, or exponent, of a regular matrix is the smallest k such that all entries of M^k are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of M is zero or positive, and therefore can be found on a directed graph with sign(M) as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states; for example, if n is the number of states, the exponent is at most (n − 1)² + 1.[55] If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: let the probability space be Ω = Σ^ℕ, where Σ is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution and the Markov chain transition. Let T : Ω → Ω be the shift operator: T(X_0, X_1, ...) = (X_1, ...). Similarly we can construct such a dynamical system with Ω = Σ^ℤ instead.[57] Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains.
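The index of primitivity can be found by the same boolean-power technique, since it only depends on the sign pattern of the matrix. A pure-Python sketch, checked against the standard extremal example (a directed cycle plus one chord) whose exponent attains the (n − 1)² + 1 bound:

```python
def exponent_of_primitivity(adj, limit=200):
    """Smallest k with all entries of adj^k positive, or None if not found by `limit`."""
    n = len(adj)
    pos = [[a > 0 for a in row] for row in adj]
    cur = [row[:] for row in pos]
    for k in range(1, limit + 1):
        if all(all(row) for row in cur):
            return k
        cur = [[any(cur[r][m] and pos[m][c] for m in range(n))
                for c in range(n)] for r in range(n)]
    return None  # e.g. periodic sign patterns are never primitive

# Extremal example on n = 3 states: cycle 0 -> 1 -> 2 -> 0 plus the chord 2 -> 1.
W = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 0]]
print(exponent_of_primitivity(W))  # 5 = (3 - 1)^2 + 1
```

A periodic chain such as the two-state flip matrix has no such k, so the helper returns None for it.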
In ergodic theory, a measure-preserving dynamical system is called ergodic if any measurable subset S such that T^{−1}(S) = S implies S = ∅ or Ω (up to a null set). The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible if and only if its corresponding measure-preserving dynamical system is ergodic.[52] In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time interval of states of X; mathematically, Y(t) = {X(s) : s ∈ [a(t), b(t)]}. If Y has the Markov property, then it is a Markovian representation of X. An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.[58] The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase-type distribution. The simplest such distribution is that of a single exponentially distributed transition. For a subset of states A ⊆ S, the vector k^A of hitting times (where element k_i^A represents the expected value, starting in state i, of the time until the chain enters one of the states in the set A) is the minimal non-negative solution to[59] k_i^A = 0 for i ∈ A, and −∑_{j∈S} q_ij k_j^A = 1 for i ∉ A. For a CTMC X_t, the time-reversed process is defined to be X̂_t = X_{T−t}. By Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_ij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by s_ij = q_ij / ∑_{k≠i} q_ik for i ≠ j, and s_ii = 0. From this, S may be written as S = I − (diag(Q))^{−1} Q, where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero. To find the stationary probability distribution vector, we must next find φ such that φS = φ, with φ being a row vector such that all elements in φ are greater than 0 and ‖φ‖₁ = 1. From this, π may be found by normalizing −φ(diag(Q))^{−1} so that its elements sum to 1. (S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.) Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton — the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton. Markov models are used to model changing systems.
There are four main types of models, generalizing Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent even of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process. Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme;[60] thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example. When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type.[60] A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems.[60] Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications.
They have been used for forecasting in several areas: for example, price trends,[61] wind power,[62] stochastic terrorism,[63][64] and solar irradiance.[65] The Markov chain forecasting models utilize a variety of settings, from discretizing the time series[62] to hidden Markov models combined with wavelets[61] and the Markov chain mixture distribution model (MCM).[65] Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant and that no relevant history need be considered which is not already included in the state description.[66][67] For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, the Markov chain Monte Carlo method can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects.[67] Markov chains are used in lattice QCD simulations.[68] A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous-time Markov chain, with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.[69] Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability that a given molecule is in that state.
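The enzyme picture above (many independent two-state molecules) can be checked with a small simulation; the transition probabilities and molecule count here are hypothetical:

```python
import random

def simulate_two_state(n_mol, p_ab, p_ba, steps, seed=3):
    """n_mol independent molecules, each a 2-state chain A <-> B:
    per time step, A -> B with prob p_ab and B -> A with prob p_ba."""
    rng = random.Random(seed)
    in_a = n_mol  # all molecules start in state A
    for _ in range(steps):
        a_to_b = sum(rng.random() < p_ab for _ in range(in_a))
        b_to_a = sum(rng.random() < p_ba for _ in range(n_mol - in_a))
        in_a += b_to_a - a_to_b
    return in_a

n = 100_000
count_a = simulate_two_state(n, p_ab=0.02, p_ba=0.01, steps=500)
# Single-molecule stationary probability of A: p_ba / (p_ab + p_ba) = 1/3,
# so the fraction of molecules in A should settle near 1/3.
print(count_a / n)
```

The observed fraction tracks the single-molecule probability, which is the independence argument in the text made concrete.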
The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis–Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.[70] An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products.[71] As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.[72] Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.[73] Markov chains are used in various areas of biology. Notable examples include: Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets — samples — as a replacement for exhaustive testing.[citation needed] Solar irradiance variability assessments are useful for solar power applications.
Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains,[76][77][78][79] including modeling the two states of clear and cloudy as a two-state Markov chain.[80][81] Hidden Markov models have been used in automatic speech recognition systems.[82] Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.[83] Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning. Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection[84]). The LZMA lossless data compression algorithm combines Markov chains with Lempel–Ziv compression to achieve very high compression ratios.
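The entropy of a stationary ergodic Markov source, as used by Shannon, has a closed form: the entropy rate H = −∑_i π_i ∑_j p_ij log₂ p_ij, where π is the stationary distribution. A short sketch over a hypothetical two-symbol source:

```python
from math import log2

def entropy_rate(P, pi):
    """Entropy rate (bits per symbol) of a stationary Markov source:
    H = -sum_i pi_i * sum_j P[i][j] * log2(P[i][j])."""
    return -sum(pi[i] * sum(p * log2(p) for p in row if p > 0)
                for i, row in enumerate(P))

# Hypothetical sticky two-symbol source; its stationary distribution is (2/3, 1/3).
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2 / 3, 1 / 3]
print(round(entropy_rate(P, pi), 4))  # well below the 1 bit/symbol of a fair coin
```

The persistence of the source drives the rate well below 1 bit per symbol, which is exactly the redundancy that entropy coders such as arithmetic coding exploit.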
Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917.[85] This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[86] Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue. The PageRank of a webpage as used by Google is defined by a Markov chain.[87][88][89] It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i outgoing links, then the chain transitions from page i with probability α/k_i + (1 − α)/N to each page that i links to, and with probability (1 − α)/N to each page that it does not link to. The teleportation probability 1 − α is taken to be about 0.15.[90] Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.[citation needed] Markov chain methods have also become very important for generating sequences of random numbers that accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC).
In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.[citation needed] In 1971 a Naval Postgraduate School master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws.[91] In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."[92] Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953.[93] Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.[94] Louis Bachelier was the first to observe that stock prices followed a random walk.[95] The random walk was later seen as evidence in favor of the efficient-market hypothesis, and random walk models were popular in the literature of the 1960s.[96] Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions).[97] A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[98][99] It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.
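The MCMC idea mentioned above can be sketched with a random-walk Metropolis sampler, a Markov chain whose stationary distribution is the target density; the target, tuning values, and names here are all illustrative:

```python
import random
import math

def metropolis(log_p, steps=50_000, x0=0.0, scale=1.0, seed=1):
    """Random-walk Metropolis: propose x + N(0, scale), accept with
    probability min(1, p(proposal)/p(current)). A minimal sketch."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        delta = log_p(prop) - log_p(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = prop
        samples.append(x)
    return samples

# Target: standard normal density, known only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x)
burned = samples[5000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(mean, var)  # near 0 and 1
```

Because the chain is ergodic with the target as its stationary distribution, long-run sample averages approximate expectations under the target, which is what makes numerically simulated posteriors practical.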
Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.[100] Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.[101] Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as the size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from an authoritarian to a democratic regime.[102] Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.[103] A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally.
These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system.[104] Markov chains can be used structurally, as in Xenakis's Analogique A and B.[105] Markov chains are also used in systems which use a Markov model to react interactively to music input.[106] Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[107] Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).[citation needed] Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team.[108] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf.[109] Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[110] Mark V. Shaney,[111][112] and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
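The text-generation idea above can be sketched with a short program. This is a minimal illustration, not any particular library's implementation; the tiny corpus, the function names, and the choice of word-level states are all assumptions made for the example:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Build a Markov transition table mapping each state
    (a tuple of `order` consecutive words) to its observed successors."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Walk the chain, sampling each next word in proportion to how
    often it followed the current state in the training text."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        successors = chain.get(state)
        if not successors:      # dead end: state never seen mid-text
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

chain = build_chain("the cat sat on the mat the cat ran")
print(generate(chain, length=8, seed=1))
```

Because duplicated successors stay in the list, sampling uniformly from it reproduces the empirical transition probabilities without building an explicit matrix.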
https://en.wikipedia.org/wiki/Markov_process
In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability.[1][2]: 10 It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix. The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics. There are several different definitions and types of stochastic matrices. In the same vein, one may define a probability vector as a vector whose elements are nonnegative real numbers which sum to 1. Thus, each row of a right stochastic matrix (or column of a left stochastic matrix) is a probability vector. Right stochastic matrices act upon row vectors of probabilities by multiplication from the right (hence their name), and the matrix entry in the i-th row and j-th column is the probability of transition from state i to state j. Left stochastic matrices act upon column vectors of probabilities by multiplication from the left (hence their name), and the matrix entry in the i-th row and j-th column is the probability of transition from state j to state i. This article uses the right/row stochastic matrix convention. The stochastic matrix was developed alongside the Markov chain by Andrey Markov, a Russian mathematician and professor at St. Petersburg University who first published on the topic in 1906.[3] His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields.[3][4] Stochastic matrices were further developed by scholars such as Andrey Kolmogorov, who expanded their possibilities by allowing for continuous-time Markov processes.[5] By the 1950s, articles using stochastic matrices had appeared in the fields of econometrics[6] and circuit theory.[7] In the 1960s, stochastic matrices appeared in an even wider variety of scientific works, from behavioral science[8] to geology[9][10] to residential planning.[11] In addition, much mathematical work was also done through these decades to improve the range of uses and functionality of the stochastic matrix and Markovian processes more generally. From the 1970s to present, stochastic matrices have found use in almost every field that requires formal analysis, from structural science[12] to medical diagnosis[13] to personnel management.[14] In addition, stochastic matrices have found wide use in land change modeling, usually under the term Markov matrix.[15] A stochastic matrix describes a Markov chain Xt over a finite state space S with cardinality α. 
If the probability of moving from i to j in one time step is Pr(j|i) = Pi,j, the stochastic matrix P is given by using Pi,j as the i-th row and j-th column element, e.g., P=[P1,1P1,2…P1,j…P1,αP2,1P2,2…P2,j…P2,α⋮⋮⋱⋮⋱⋮Pi,1Pi,2…Pi,j…Pi,α⋮⋮⋱⋮⋱⋮Pα,1Pα,2…Pα,j…Pα,α].{\displaystyle P=\left[{\begin{matrix}P_{1,1}&P_{1,2}&\dots &P_{1,j}&\dots &P_{1,\alpha }\\P_{2,1}&P_{2,2}&\dots &P_{2,j}&\dots &P_{2,\alpha }\\\vdots &\vdots &\ddots &\vdots &\ddots &\vdots \\P_{i,1}&P_{i,2}&\dots &P_{i,j}&\dots &P_{i,\alpha }\\\vdots &\vdots &\ddots &\vdots &\ddots &\vdots \\P_{\alpha ,1}&P_{\alpha ,2}&\dots &P_{\alpha ,j}&\dots &P_{\alpha ,\alpha }\\\end{matrix}}\right].} Since the total transition probability from a state i to all other states must be 1, ∀i∈{1,…,α},∑j=1αPi,j=1;{\displaystyle \forall i\in \{1,\ldots ,\alpha \},\quad \sum _{j=1}^{\alpha }P_{i,j}=1;\,} thus this matrix is a right stochastic matrix. The above elementwise sum across each row i of P may be more concisely written as P1 = 1, where 1 is the α-dimensional column vector of all ones. Using this, it can be seen that the product of two right stochastic matrices P′ and P′′ is also right stochastic: P′P′′1 = P′(P′′1) = P′1 = 1. In general, the k-th power Pk of a right stochastic matrix P is also right stochastic. The probability of transitioning from i to j in two steps is then given by the (i, j)-th element of the square of P: (P2)i,j.{\displaystyle \left(P^{2}\right)_{i,j}.} In general, the probability of transitioning from any state to another state in a finite Markov chain given by the matrix P in k steps is given by Pk. An initial probability distribution of states, specifying where the system might be initially and with what probabilities, is given as a row vector. 
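The row-sum and matrix-power properties above can be checked numerically. The 3-state matrix below is a hypothetical example chosen for the illustration, not one taken from the article:

```python
import numpy as np

# A hypothetical 3-state right stochastic matrix: each row sums to 1.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# P @ 1 = 1: every row is a probability vector.
assert np.allclose(P.sum(axis=1), 1.0)

# The product of right stochastic matrices is right stochastic,
# and (P^2)[i, j] is the two-step transition probability i -> j.
P2 = P @ P
assert np.allclose(P2.sum(axis=1), 1.0)

# Two-step probability 0 -> 1: sum over the intermediate state.
assert np.isclose(P2[0, 1], 0.9 * 0.1 + 0.1 * 0.7 + 0.0 * 0.3)
```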
A stationary probability vector π is defined as a distribution, written as a row vector, that does not change under application of the transition matrix; that is, it is defined as a probability distribution on the set {1, …, n} which is also a left eigenvector of the probability matrix, associated with eigenvalue 1: πP=π.{\displaystyle {\boldsymbol {\pi }}P={\boldsymbol {\pi }}.} It can be shown that the spectral radius of any stochastic matrix is one. By the Gershgorin circle theorem, all of the eigenvalues of a stochastic matrix have absolute values less than or equal to one. More precisely, the eigenvalues of n{\displaystyle n}-by-n{\displaystyle n} stochastic matrices are restricted to lie within a subset of the complex unit disk, known as Karpelevič regions.[16] This result was originally obtained by Fridrikh Karpelevich,[17] following a question originally posed by Kolmogorov[18] and partially addressed by Nikolay Dmitriyev and Eugene Dynkin.[19] Additionally, every right stochastic matrix has an "obvious" column eigenvector associated to the eigenvalue 1: the vector 1 used above, whose coordinates are all equal to 1. As the left and right eigenvalues of a square matrix are the same, every stochastic matrix has, at least, a left eigenvector associated to the eigenvalue 1, and the largest absolute value of all its eigenvalues is also 1. Finally, the Brouwer fixed point theorem (applied to the compact convex set of all probability distributions of the finite set {1, ..., n}) implies that there is some left eigenvector which is also a stationary probability vector. The Perron–Frobenius theorem also ensures that every irreducible stochastic matrix has such a stationary vector, and that the largest absolute value of an eigenvalue is always 1; however, this theorem cannot be applied directly to arbitrary stochastic matrices because they need not be irreducible. In general, there may be several such vectors. 
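The stationary condition πP = π can be found numerically as a left eigenvector of P for eigenvalue 1, i.e. an ordinary eigenvector of the transpose. A minimal sketch with a hypothetical two-state chain:

```python
import numpy as np

# A hypothetical two-state right stochastic matrix.
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# pi P = pi  <=>  P^T pi^T = pi^T: take the eigenvector of P.T
# whose eigenvalue is (closest to) 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()        # normalize (also fixes an overall sign)

assert np.allclose(pi @ P, pi)    # stationarity
print(pi)                         # the unique stationary distribution
```

For this matrix the stationary vector works out to (2/7, 5/7), and the rows of P^k approach it as k grows, which can be checked with `np.linalg.matrix_power(P, 50)`.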
However, for a matrix with strictly positive entries (or, more generally, for an irreducible aperiodic stochastic matrix), this vector is unique and can be computed by observing that for any i we have the following limit, limk→∞(Pk)i,j=πj,{\displaystyle \lim _{k\rightarrow \infty }\left(P^{k}\right)_{i,j}={\boldsymbol {\pi }}_{j},} where πj is the j-th element of the row vector π. Among other things, this says that the long-term probability of being in a state j is independent of the initial state i. That both of these computations give the same stationary vector is a form of an ergodic theorem, which is generally true in a wide variety of dissipative dynamical systems: the system evolves, over time, to a stationary state. Intuitively, a stochastic matrix represents a Markov chain; the application of the stochastic matrix to a probability distribution redistributes the probability mass of the original distribution while preserving its total mass. If this process is applied repeatedly, the distribution converges to a stationary distribution for the Markov chain.[2]: 14–17 [20]: 116 Stochastic matrices and their products form a category, which is a subcategory both of the category of matrices and of that of Markov kernels. Suppose there is a timer and a row of five adjacent boxes. At time zero, a cat is in the first box, and a mouse is in the fifth box. The cat and the mouse both jump to a random adjacent box when the timer advances. For example, if the cat is in the second box and the mouse is in the fourth, the probability that the cat will be in the first box and the mouse in the fifth after the timer advances is one fourth. If the cat is in the first box and the mouse is in the fifth, the probability that the cat will be in box two and the mouse will be in box four after the timer advances is one. The cat eats the mouse if both end up in the same box, at which time the game ends. Let the random variable K be the time the mouse stays in the game. 
The Markov chain that represents this game contains the following five states specified by the combination of positions (cat, mouse). Note that while a naive enumeration of states would list 25 states, many are impossible either because the mouse can never have a lower index than the cat (as that would mean the mouse occupied the cat's box and survived to move past it), or because the sum of the two indices will always have even parity. In addition, the three possible states that lead to the mouse's death are combined into one: We use a stochastic matrix, P{\displaystyle P} (below), to represent the transition probabilities of this system (rows and columns in this matrix are indexed by the possible states listed above, with the pre-transition state as the row and the post-transition state as the column). For instance, starting from state 1 – 1st row – it is impossible for the system to stay in this state, so P11=0{\displaystyle P_{11}=0}; the system also cannot transition to state 2 – because the cat would have stayed in the same box – so P12=0{\displaystyle P_{12}=0}, and by a similar argument for the mouse, P14=0{\displaystyle P_{14}=0}. Transitions to states 3 or 5 are allowed, and thus P13,P15≠0{\displaystyle P_{13},P_{15}\neq 0}. P=[001/201/2001001/41/401/41/4001/201/200001].{\displaystyle P={\begin{bmatrix}0&0&1/2&0&1/2\\0&0&1&0&0\\1/4&1/4&0&1/4&1/4\\0&0&1/2&0&1/2\\0&0&0&0&1\end{bmatrix}}.} No matter what the initial state, the cat will eventually catch the mouse (with probability 1) and a stationary state π = (0,0,0,0,1) is approached as a limit. To compute the long-term average or expected value of a stochastic variable Y{\displaystyle Y}, for each state Sj{\displaystyle S_{j}} and time tk{\displaystyle t_{k}} there is a contribution of Yj,k⋅P(S=Sj,t=tk){\displaystyle Y_{j,k}\cdot P(S=S_{j},t=t_{k})}. Survival can be treated as a binary variable with Y=1{\displaystyle Y=1} for a surviving state and Y=0{\displaystyle Y=0} for the terminated state. 
The states with Y=0{\displaystyle Y=0} do not contribute to the long-term average. As state 5 is an absorbing state, the distribution of time to absorption is discrete phase-type distributed. Suppose the system starts in state 2, represented by the vector [0,1,0,0,0]{\displaystyle [0,1,0,0,0]}. The states where the mouse has perished do not contribute to the survival average, so state five can be ignored. The initial state and transition matrix can be reduced to τ=[0,1,0,0],T=[001200010141401400120],{\displaystyle {\boldsymbol {\tau }}=[0,1,0,0],\qquad T={\begin{bmatrix}0&0&{\frac {1}{2}}&0\\0&0&1&0\\{\frac {1}{4}}&{\frac {1}{4}}&0&{\frac {1}{4}}\\0&0&{\frac {1}{2}}&0\end{bmatrix}},} and (I−T)−11=[2.754.53.52.75],{\displaystyle (I-T)^{-1}{\boldsymbol {1}}={\begin{bmatrix}2.75\\4.5\\3.5\\2.75\end{bmatrix}},} where I{\displaystyle I} is the identity matrix, and 1{\displaystyle \mathbf {1} } represents a column matrix of all ones that acts as a sum over states. Since each state is occupied for one step of time, the expected time of the mouse's survival is just the sum of the probability of occupation over all surviving states and steps in time, E[K]=τ(I+T+T2+⋯)1=τ(I−T)−11=4.5.{\displaystyle E[K]={\boldsymbol {\tau }}\left(I+T+T^{2}+\cdots \right){\boldsymbol {1}}={\boldsymbol {\tau }}(I-T)^{-1}{\boldsymbol {1}}=4.5.} Higher-order moments are given by E[K(K−1)…(K−n+1)]=n!τ(I−T)−nTn−11.{\displaystyle E[K(K-1)\dots (K-n+1)]=n!{\boldsymbol {\tau }}(I-{T})^{-n}{T}^{n-1}\mathbf {1} \,.}
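The expected survival time derived above can be reproduced in a few lines from the reduced transition matrix T; only numbers already given in the example are used:

```python
import numpy as np

# Transient part T of the cat-and-mouse chain (absorbing state 5 removed).
T = np.array([[0.0,  0.0,  0.5,  0.0],
              [0.0,  0.0,  1.0,  0.0],
              [0.25, 0.25, 0.0,  0.25],
              [0.0,  0.0,  0.5,  0.0]])
tau = np.array([0.0, 1.0, 0.0, 0.0])   # start in state 2

# (I - T)^{-1} 1: expected number of steps spent in transient states,
# one entry per possible starting state.
ones = np.ones(4)
N1 = np.linalg.solve(np.eye(4) - T, ones)

print(N1)         # [2.75, 4.5, 3.5, 2.75], matching the article
print(tau @ N1)   # E[K] = 4.5 when starting in state 2
```

Using `np.linalg.solve` on (I − T) avoids forming the inverse explicitly, which is the standard numerically preferred route.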
https://en.wikipedia.org/wiki/Stochastic_matrix
Subjunctive possibility (also called alethic possibility) is a form of modality studied in modal logic. Subjunctive possibilities are the sorts of possibilities considered when conceiving counterfactual situations; subjunctive modalities are modalities that bear on whether a statement might have been or could be true—such as might, could, must, possibly, necessarily, contingently, essentially, accidentally, and so on. Subjunctive possibilities include logical possibility, metaphysical possibility, nomological possibility, and temporal possibility. Subjunctive possibility is contrasted with (among other things) epistemic possibility (which deals with how the world may be, for all we know) and deontic possibility (which deals with how the world ought to be). The contrast with epistemic possibility is especially important to draw, since in ordinary language the same phrases ("it's possible", "it can't be", "it must be") are often used to express either sort of possibility. But they are not the same. We do not know whether Goldbach's conjecture is true or not (no one has come up with a proof yet); so it is (epistemically) possible that it is true and it is (epistemically) possible that it is false. But if it is, in fact, provably true (as it may be, for all we know), then it would have to be (subjunctively) necessarily true; what being provable means is that it would not be (logically) possible for it to be false. Similarly, it might not be at all (epistemically) possible that it is raining outside—we might know beyond a shadow of a doubt that it is not—but that would hardly mean that it is (subjunctively) impossible for it to rain outside. 
This point is also made by Norman Swartz and Raymond Bradley.[1] There is some overlap in language between subjunctive possibilities and deontic possibilities: for example, we sometimes use the statement "You can/cannot do that" to express (i) what it is or is not subjunctively possible for you to do, and we sometimes use it to express (ii) what it would or would not be right for you to do. The two are less likely to be confused in ordinary language than subjunctive and epistemic possibility, as there are some important differences in the logic of subjunctive modalities and deontic modalities. In particular, subjunctive necessity entails truth: if people logically must do such-and-such, then you can infer that they actually do it. But deontic necessity does not entail truth: in this non-ideal world, the fact that people morally must do such-and-such does not guarantee that they actually do it. There are several different types of subjunctive modality, which can be classified as broader or more narrow than one another depending on how restrictive the rules for what counts as "possible" are. Some of the most commonly discussed are: Similarly, David Lewis could have taken a degree in Economics but not in, say, Aviation (because it was not taught at Harvard) or Cognitive Neuroscience (because the so-called 'conceptual space' for such a major did not exist). There is some debate whether this final type of possibility in fact constitutes a type of possibility distinct from temporal possibility; it is sometimes called historical possibility by thinkers like Ian Hacking.
https://en.wikipedia.org/wiki/Subjunctive_possibility
Credibility theory is a branch of actuarial mathematics concerned with determining risk premiums.[1] To achieve this, it uses mathematical models in an effort to forecast the (expected) number of insurance claims based on past observations. Technically speaking, the problem is to find the best linear approximation to the mean of the Bayesian predictive density, which is why credibility theory has many results in common with linear filtering as well as Bayesian statistics more broadly.[2][3] For example, in group health insurance an insurer is interested in calculating the risk premium, RP{\displaystyle RP} (i.e. the theoretical expected claims amount), for a particular employer in the coming year. The insurer will likely have an estimate of historical overall claims experience, x{\displaystyle x}, as well as a more specific estimate for the employer in question, y{\displaystyle y}. Assigning a credibility factor, z{\displaystyle z}, to the overall claims experience (and the reciprocal to employer experience) allows the insurer to get a more accurate estimate of the risk premium in the following manner: RP=xz+y(1−z).{\displaystyle RP=xz+y(1-z).} The credibility factor is derived by calculating the maximum likelihood estimate which would minimise the error of estimate. Assuming the variances of x{\displaystyle x} and y{\displaystyle y} are known quantities taking on the values u{\displaystyle u} and v{\displaystyle v} respectively, it can be shown that z{\displaystyle z} should be equal to: z=v/(u+v).{\displaystyle z=v/(u+v).} Therefore, the more uncertainty an estimate has, the lower its credibility. In Bayesian credibility, we separate each class (B) and assign them a probability (probability of B). Then we find how likely our experience (A) is within each class (probability of A given B). Next, we find how likely our experience was over all classes (probability of A). Finally, we can find the probability of our class given our experience. 
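Before turning to Bayesian and Bühlmann credibility, the premium formula RP = xz + y(1 − z) with z = v/(u + v) can be sketched directly; the figures below are hypothetical, chosen only to illustrate the weighting:

```python
def credibility_premium(x, y, u, v):
    """Blend the overall claims experience x (variance u) with the
    employer-specific experience y (variance v) using z = v / (u + v),
    as in the formula above."""
    z = v / (u + v)
    return x * z + y * (1 - z)

# Hypothetical figures: the employer estimate y is noisy (v = 3 vs. u = 1),
# so z = 0.75 and most of the weight goes to the overall experience x.
print(credibility_premium(x=100.0, y=80.0, u=1.0, v=3.0))  # 95.0
```

Note the direction of the weighting: a large v (an uncertain employer estimate) pushes z toward 1 and thus toward the overall experience x, matching the remark that more uncertain estimates receive less credibility.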
So going back to each class, we weight each statistic with the probability of the particular class given the experience. Bühlmann credibility works by looking at the variance across the population. More specifically, it looks to see how much of the total variance is attributed to the variance of the expected values of each class (Variance of the Hypothetical Mean), and how much is attributed to the expected variance over all classes (Expected Value of the Process Variance). Say we have a basketball team with a high number of points per game. Sometimes they get 128 and other times they get 130, but always one of the two. Compared to all basketball teams this is a relatively low variance, meaning that they will contribute very little to the Expected Value of the Process Variance. Also, their unusually high point totals greatly increase the variance of the population, meaning that if the league booted them out, they'd have a much more predictable point total for each team (lower variance). So, this team is definitely unique (they contribute greatly to the Variance of the Hypothetical Mean). So we can rate this team's experience with a fairly high credibility. They often/always score a lot (low Expected Value of Process Variance) and not many teams score as much as them (high Variance of Hypothetical Mean). Suppose there are two coins in a box. One has heads on both sides and the other is a normal coin with 50:50 likelihood of heads or tails. You need to place a wager on the outcome after one is randomly drawn and flipped. The probability of heads is .5 × 1 + .5 × .5 = .75. This is because there is a .5 chance of selecting the heads-only coin with a 100% chance of heads and a .5 chance of the fair coin with a 50% chance. Now the same coin is reused and you are asked to bet on the outcome again. If the first flip was tails, there is a 100% chance you are dealing with a fair coin, so the next flip has a 50% chance of heads and a 50% chance of tails. 
If the first flip was heads, we must calculate the conditional probability that the chosen coin was heads-only as well as the conditional probability that the coin was fair, after which we can calculate the conditional probability of heads on the next flip. The probability that it came from a heads-only coin given that the first flip was heads is the probability of selecting a heads-only coin times the probability of heads for that coin divided by the initial probability of heads on the first flip, or .5 × 1 / .75 = 2/3. The probability that it came from a fair coin given that the first flip was heads is the probability of selecting a fair coin times the probability of heads for that coin divided by the initial probability of heads on the first flip, or .5 × .5 / .75 = 1/3. Finally, the conditional probability of heads on the next flip given that the first flip was heads is the conditional probability of a heads-only coin times the probability of heads for a heads-only coin plus the conditional probability of a fair coin times the probability of heads for a fair coin, or 2/3 × 1 + 1/3 × .5 = 5/6 ≈ .8333. Actuarial credibility describes an approach used by actuaries to improve statistical estimates. Although the approach can be formulated in either a frequentist or Bayesian statistical setting, the latter is often preferred because of the ease of recognizing more than one source of randomness through both "sampling" and "prior" information. In a typical application, the actuary has an estimate X based on a small set of data, and an estimate M based on a larger but less relevant set of data. The credibility estimate is ZX + (1−Z)M,[4] where Z is a number between 0 and 1 (called the "credibility weight" or "credibility factor") calculated to balance the sampling error of X against the possible lack of relevance (and therefore modeling error) of M. When an insurance company calculates the premium it will charge, it divides the policy holders into groups. 
For example, it might divide motorists by age, sex, and type of car; a young man driving a fast car being considered a high risk, and an old woman driving a small car being considered a low risk. The division is made balancing the two requirements that the risks in each group are sufficiently similar and the group sufficiently large that a meaningful statistical analysis of the claims experience can be done to calculate the premium. This compromise means that none of the groups contains only identical risks. The problem is then to devise a way of combining the experience of the group with the experience of the individual risk to calculate the premium better. Credibility theory provides a solution to this problem. For actuaries, it is important to know credibility theory in order to calculate a premium for a group of insurance contracts. The goal is to set up an experience rating system to determine next year's premium, taking into account not only the individual experience with the group, but also the collective experience. There are two extreme positions. One is to charge everyone the same premium estimated by the overall mean X¯{\displaystyle {\overline {X}}} of the data. This makes sense only if the portfolio is homogeneous, which means that all risk cells have identical mean claims. However, if the portfolio is heterogeneous, it is not a good idea to charge a premium in this way (overcharging "good" people and undercharging "bad" risk people), since the "good" risks will take their business elsewhere, leaving the insurer with only "bad" risks. This is an example of adverse selection. The other extreme is to charge group j{\displaystyle j} its own average claims, Xj¯{\displaystyle {\overline {X_{j}}}}, as the premium charged to the insured. These methods are used if the portfolio is heterogeneous, provided there is a fairly large claims experience. 
To compromise between these two extreme positions, we take the weighted average of the two extremes: zj{\displaystyle z_{j}} has the following intuitive meaning: it expresses how "credible" (acceptable) the experience of cell j{\displaystyle j} is. If it is high, then a higher zj{\displaystyle z_{j}} is used to attach a larger weight to charging Xj¯{\displaystyle {\overline {X_{j}}}}; in this case, zj{\displaystyle z_{j}} is called a credibility factor, and such a premium charged is called a credibility premium. If the group were completely homogeneous then it would be reasonable to set zj=0{\displaystyle z_{j}=0}, while if the group were completely heterogeneous then it would be reasonable to set zj=1{\displaystyle z_{j}=1}. Using intermediate values is reasonable to the extent that both individual and group history is useful in inferring future individual behavior. For example, an actuary has accident and payroll historical data for a shoe factory suggesting a rate of 3.1 accidents per million dollars of payroll. She has industry statistics (based on all shoe factories) suggesting that the rate is 7.4 accidents per million. With a credibility, Z, of 30%, she would estimate the rate for the factory as 30%(3.1) + 70%(7.4) = 6.1 accidents per million.
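The two-coin example earlier in this section is a direct application of Bayes' rule, and its arithmetic can be checked in a few lines (the dictionary names are just for illustration):

```python
# Prior over the two coins in the box and each coin's chance of heads.
priors = {"two_headed": 0.5, "fair": 0.5}
p_heads = {"two_headed": 1.0, "fair": 0.5}

# Probability of heads on the first flip: average over the prior.
first = sum(priors[c] * p_heads[c] for c in priors)
assert first == 0.75

# Posterior over the coins given that the first flip was heads.
posterior = {c: priors[c] * p_heads[c] / first for c in priors}
assert abs(posterior["two_headed"] - 2 / 3) < 1e-12

# Predictive probability of heads on the second flip: average over
# the posterior instead of the prior.
second = sum(posterior[c] * p_heads[c] for c in posterior)
print(second)  # 5/6 ≈ 0.8333
```

The structure mirrors the Bayesian credibility recipe described above: weight each class's statistic by the probability of that class given the observed experience.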
https://en.wikipedia.org/wiki/Credibility_theory
Epistemology is the branch of philosophy that examines the nature, origin, and limits of knowledge. Also called "the theory of knowledge", it explores different types of knowledge, such as propositional knowledge about facts, practical knowledge in the form of skills, and knowledge by acquaintance as a familiarity through experience. Epistemologists study the concepts of belief, truth, and justification to understand the nature of knowledge. To discover how knowledge arises, they investigate sources of justification, such as perception, introspection, memory, reason, and testimony. The school of skepticism questions the human ability to attain knowledge, while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism debate whether justification is determined solely by mental states or also by external circumstances. Separate branches of epistemology focus on knowledge in specific fields, like scientific, mathematical, moral, and religious knowledge. Naturalized epistemology relies on empirical methods and discoveries, whereas formal epistemology uses formal tools from logic. Social epistemology investigates the communal aspect of knowledge, and historical epistemology examines its historical conditions. Epistemology is closely related to psychology, which describes the beliefs people hold, while epistemology studies the norms governing the evaluation of beliefs. It also intersects with fields such as decision theory, education, and anthropology. Early reflections on the nature, sources, and scope of knowledge are found in ancient Greek, Indian, and Chinese philosophy. The relation between reason and faith was a central topic in the medieval period. 
The modern era was characterized by the contrasting perspectives of empiricism and rationalism. Epistemologists in the 20th century examined the components, structure, and value of knowledge while integrating insights from the natural sciences and linguistics. Epistemology is the philosophical study of knowledge and related concepts, such as justification. Also called theory of knowledge,[a] it examines the nature and types of knowledge. It further investigates the sources of knowledge, like perception, inference, and testimony, to understand how knowledge is created. Another set of questions concerns the extent and limits of knowledge, addressing what people can and cannot know.[2] Central concepts in epistemology include belief, truth, evidence, and reason.[3] As one of the main branches of philosophy, epistemology stands alongside fields like ethics, logic, and metaphysics.[4] The term can also refer to specific positions of philosophers within this branch, as in Plato's epistemology and Immanuel Kant's epistemology.[5] Epistemology explores how people should acquire beliefs. It determines which beliefs or forms of belief acquisition meet the standards or epistemic goals of knowledge and which ones fail, thereby providing an evaluation of beliefs. The fields of psychology and cognitive sociology are also interested in beliefs and related cognitive processes, but examine them from a different perspective. 
Unlike epistemology, they study the beliefs people actually have and how people acquire them instead of examining the evaluative norms of these processes.[6] In this regard, epistemology is a normative discipline,[b] whereas psychology and cognitive sociology are descriptive disciplines.[8][c] Epistemology is relevant to many descriptive and normative disciplines, such as the other branches of philosophy and the sciences, by exploring the principles of how they may arrive at knowledge.[11] The word epistemology comes from the ancient Greek terms ἐπιστήμη (episteme, meaning knowledge or understanding) and λόγος (logos, meaning study of or reason); literally, the study of knowledge. Despite its ancient roots, the word itself was only coined in the 19th century to designate this field as a distinct branch of philosophy.[12][d] Epistemologists examine several foundational concepts to understand their essences and rely on them to formulate theories. Various epistemological disagreements have their roots in disputes about the nature and function of these concepts, like the controversies surrounding the definition of knowledge and the role of justification in it.[17] Knowledge is an awareness, familiarity, understanding, or skill. Its various forms all involve a cognitive success through which a person establishes epistemic contact with reality.[18] Epistemologists typically understand knowledge as an aspect of individuals, generally as a cognitive mental state that helps them understand, interpret, and interact with the world. While this core sense is of particular interest to epistemologists, the term also has other meanings. For example, the epistemology of groups examines knowledge as a characteristic of a group of people who share ideas.[19] The term can also refer to information stored in documents and computers.[20] Knowledge contrasts with ignorance, often simply defined as the absence of knowledge. 
Knowledge is usually accompanied by ignorance because people rarely have complete knowledge of a field, forcing them to rely on incomplete or uncertain information when making decisions.[21] Even though many forms of ignorance can be mitigated through education and research, certain limits to human understanding result in inevitable ignorance.[22] Some limitations are inherent in the human cognitive faculties themselves, such as the inability to know facts too complex for the human mind to conceive.[23] Others depend on external circumstances when no access to the relevant information exists.[24] Epistemologists disagree on how much people know, for example, whether fallible beliefs can amount to knowledge or whether absolute certainty is required. The most stringent position is taken by radical skeptics, who argue that there is no knowledge at all.[25] Epistemologists distinguish between different types of knowledge.[27] Their primary interest is in knowledge of facts, called propositional knowledge.[28] It is theoretical knowledge that can be expressed in declarative sentences using a that-clause, like "Ravi knows that kangaroos hop". For this reason, it is also called knowledge-that.[29][e] Epistemologists often understand it as a relation between a knower and a known proposition, in the case above between the person Ravi and the proposition "kangaroos hop".[30] It is use-independent since it is not tied to one specific purpose, unlike practical knowledge. 
It is a mental representation that embodies concepts and ideas to reflect reality.[31] Because of its theoretical nature, it is typically held that only creatures with highly developed minds, such as humans, possess propositional knowledge.[32] Propositional knowledge contrasts with non-propositional knowledge in the form of knowledge-how and knowledge by acquaintance.[33] Knowledge-how is a practical ability or skill, like knowing how to read or how to prepare lasagna.[34] It is usually tied to a specific goal and not mastered in the abstract without concrete practice.[35] To know something by acquaintance means to have an immediate familiarity with or awareness of it, usually as a result of direct experiential contact. Examples are "familiarity with the city of Perth", "knowing the taste of tsampa", and "knowing Marta Vieira da Silva personally".[36] Another influential distinction in epistemology is between a posteriori and a priori knowledge.[38][f] A posteriori knowledge is knowledge of empirical facts based on sensory experience, like "seeing that the sun is shining" and "smelling that a piece of meat has gone bad".[40] This type of knowledge is associated with the empirical sciences and everyday affairs. A priori knowledge, by contrast, pertains to non-empirical facts and does not depend on evidence from sensory experience, like knowing that 2+2=4{\displaystyle 2+2=4}. It belongs to fields such as mathematics and logic.[41] The distinction between a posteriori and a priori knowledge is central to the debate between empiricists and rationalists regarding whether all knowledge depends on sensory experience.[42] A closely related contrast is between analytic and synthetic truths. A sentence is analytically true if its truth depends only on the meanings of the words it uses. For instance, the sentence "all bachelors are unmarried" is analytically true because the word "bachelor" already includes the meaning "unmarried". A sentence is synthetically true if its truth depends on additional facts. 
For example, the sentence "snow is white" is synthetically true because its truth depends on the color of snow in addition to the meanings of the words snow and white. A priori knowledge is primarily associated with analytic sentences, whereas a posteriori knowledge is primarily associated with synthetic sentences. However, it is controversial whether this is true for all cases. Some philosophers, such as Willard Van Orman Quine, reject the distinction, saying that there are no analytic truths.[43]

The analysis of knowledge is the attempt to identify the essential components or conditions of all and only propositional knowledge states. According to the so-called traditional analysis,[g] knowledge has three components: it is a belief that is justified and true.[45] In the second half of the 20th century, this view was challenged by a series of thought experiments aiming to show that some justified true beliefs do not amount to knowledge.[46] In one of them, a person is unaware of all the fake barns in their area.
By coincidence, they stop in front of the only real barn and form a justified true belief that it is a real barn.[47] Many epistemologists agree that this is not knowledge because the justification is not directly relevant to the truth.[48] More specifically, this and similar counterexamples involve some form of epistemic luck, that is, a cognitive success that results from fortuitous circumstances rather than competence.[49]

Following these thought experiments, philosophers proposed various alternative definitions of knowledge by modifying or expanding the traditional analysis.[50] According to one view, the known fact has to cause the belief in the right way.[51] Another theory states that the belief is the product of a reliable belief formation process.[52] Further approaches require that the person would not have the belief if it was false,[53] that the belief is not inferred from a falsehood,[54] that the justification cannot be undermined,[55] or that the belief is infallible.[56] There is no consensus on which of the proposed modifications and reconceptualizations is correct.[57] Some philosophers, such as Timothy Williamson, reject the basic assumption underlying the analysis of knowledge by arguing that propositional knowledge is a unique state that cannot be dissected into simpler components.[58]

The value of knowledge is the worth it holds by expanding understanding and guiding action. Knowledge can have instrumental value by helping a person achieve their goals.[59] For example, knowledge of a disease helps a doctor cure their patient.[60] The usefulness of a known fact depends on the circumstances. Knowledge of some facts may have little to no use, like memorizing random phone numbers from an outdated phone book.[61] Being able to assess the value of knowledge matters in choosing what information to acquire and share.
It affects decisions like which subjects to teach at school and how to allocate funds to research projects.[62]

Epistemologists are particularly interested in whether knowledge is more valuable than a mere true opinion.[63] Knowledge and true opinion often have a similar usefulness since both accurately represent reality. For example, if a person wants to go to Larissa, a true opinion about the directions can guide them as effectively as knowledge.[64] Considering this problem, Plato proposed that knowledge is better because it is more stable.[65] Another suggestion focuses on practical reasoning, arguing that people put more trust in knowledge than in mere true opinions when drawing conclusions and deciding what to do.[66] A different response says that knowledge has intrinsic value in addition to instrumental value. This view asserts that knowledge is always valuable, whereas true opinion is only valuable in circumstances where it is useful.[67]

Beliefs are mental states about what is the case, like believing that snow is white or that God exists.[68] In epistemology, they are often understood as subjective attitudes that affirm or deny a proposition, which can be expressed in a declarative sentence. For instance, to believe that snow is white is to affirm the proposition "snow is white". According to this view, beliefs are representations of what the universe is like. They are stored in memory and retrieved when actively thinking about reality or deciding how to act.[69] A different view understands beliefs as behavioral patterns or dispositions to act rather than as representational items stored in the mind. According to this perspective, to believe that there is mineral water in the fridge is nothing more than a group of dispositions related to mineral water and the fridge.
Examples are the dispositions to answer questions about the presence of mineral water affirmatively and to go to the fridge when thirsty.[70] Some theorists deny the existence of beliefs, saying that this concept borrowed from folk psychology oversimplifies much more complex psychological or neurological processes.[71] Beliefs are central to various epistemological debates, which cover their status as a component of propositional knowledge, the question of whether people have control over and responsibility for their beliefs, and the issue of whether beliefs have degrees, called credences.[72]

As propositional attitudes, beliefs are true or false depending on whether they affirm a true or a false proposition.[73] According to the correspondence theory of truth, to be true means to stand in the right relation to the world by accurately describing what it is like. This means that truth is objective: a belief is true if it corresponds to a fact.[74] The coherence theory of truth says that a belief is true if it belongs to a coherent system of beliefs. A result of this view is that truth is relative since it depends on other beliefs.[75] Further theories of truth include pragmatist, semantic, pluralist, and deflationary theories.[76] Truth plays a central role in epistemology as a goal of cognitive processes and an attribute of propositional knowledge.[77]

In epistemology, justification is a property of beliefs that meet certain norms about what a person should believe.[78] According to a common view, this means that the person has sufficient reasons for holding this belief because they have information that supports it.[78] Another view states that a belief is justified if it is formed by a reliable belief formation process, such as perception.[79] The terms reasonable, warranted, and supported are sometimes used as synonyms of the word justified.[80] Justification distinguishes well-founded beliefs from superstition and lucky guesses.[81] However, it does not guarantee truth.
For example, a person with strong but misleading evidence may form a justified belief that is false.[82]

Epistemologists often identify justification as a key component of knowledge.[83] Usually, they are not only interested in whether a person has a sufficient reason to hold a belief, known as propositional justification, but also in whether the person holds the belief because of or based on[h] this reason, known as doxastic justification. For example, if a person has sufficient reason to believe that a neighborhood is dangerous but forms this belief based on superstition, then they have propositional justification but lack doxastic justification.[85]

Sources of justification are ways or cognitive capacities through which people acquire justification. Often-discussed sources include perception, introspection, memory, reason, and testimony, but there is no universal agreement on the extent to which they all provide valid justification.[86] Perception relies on sensory organs to gain empirical information. Distinct forms of perception correspond to different physical stimuli, such as visual, auditory, haptic, olfactory, and gustatory perception.[87] Perception is not merely the reception of sense impressions but an active process that selects, organizes, and interprets sensory signals.[88] Introspection is a closely related process focused on internal mental states rather than external physical objects. For example, seeing a bus at a bus station belongs to perception while feeling tired belongs to introspection.[89]

Rationalists understand reason as a source of justification for non-empirical facts, explaining how people can know about mathematical, logical, and conceptual truths.
Reason is also responsible for inferential knowledge, in which one or more beliefs serve as premises to support another belief.[90] Memory depends on information provided by other sources, which it retains and recalls, like remembering a phone number perceived earlier.[91] Justification by testimony relies on information one person communicates to another person. This can happen by talking to each other but can also occur in other forms, like a letter, a newspaper, or a blog.[92]

Rationality is closely related to justification, and the terms rational belief and justified belief are sometimes used interchangeably. However, rationality has a wider scope that encompasses both a theoretical side, covering beliefs, and a practical side, covering decisions, intentions, and actions.[93] There are different conceptions of what it means for something to be rational. According to one view, a mental state is rational if it is based on or responsive to good reasons. Another view emphasizes the role of coherence, stating that rationality requires that the different mental states of a person are consistent and support each other.[94] A slightly different approach holds that rationality is about achieving certain goals. Two goals of theoretical rationality are accuracy and comprehensiveness, meaning that a person has as few false beliefs and as many true beliefs as possible.[95]

Epistemologists rely on the concept of epistemic norms as criteria to assess the cognitive quality of beliefs, like their justification and rationality. They distinguish between deontic norms, which prescribe what people should believe, and axiological norms, which identify the goals and values of beliefs.[96] Epistemic norms are closely linked to intellectual or epistemic virtues, which are character traits like open-mindedness and conscientiousness. Epistemic virtues help individuals form true beliefs and acquire knowledge.
They contrast with epistemic vices and act as foundational concepts of virtue epistemology.[97][i]

Epistemologists understand evidence for a belief as information that favors or supports it. They conceptualize evidence primarily in terms of mental states, such as sensory impressions or other known propositions. But in a wider sense, it can also include physical objects, like bloodstains examined by forensic analysts or financial records studied by investigative journalists.[99] Evidence is often understood in terms of probability: evidence for a belief makes it more likely that the belief is true.[100] A defeater is evidence against a belief or evidence that undermines another piece of evidence. For instance, witness testimony linking a suspect to a crime is evidence of their guilt, while an alibi is a defeater.[101] Evidentialists analyze justification in terms of evidence by asserting that for a belief to be justified, it needs to rest on adequate evidence.[102]

The presence of evidence usually affects doubt and certainty, which are subjective attitudes toward propositions that differ regarding their level of confidence. Doubt involves questioning the validity or truth of a proposition. Certainty, by contrast, is a strong affirmative conviction, indicating an absence of doubt about the proposition's truth. Doubt and certainty are central to ancient Greek skepticism and its goal of establishing that no belief is immune to doubt. They are also crucial in attempts to find a secure foundation of all knowledge, such as René Descartes' foundationalist epistemology.[103]

While propositional knowledge is the main topic in epistemology, some theorists focus on understanding instead. Understanding is a more holistic notion that involves a wider grasp of a subject. To understand something, a person requires awareness of how different things are connected and why they are the way they are. For example, knowledge of isolated facts memorized from a textbook does not amount to understanding.
According to one view, understanding is a unique epistemic good that, unlike propositional knowledge, is always intrinsically valuable.[104] Wisdom is similar in this regard and is sometimes considered the highest epistemic good. It encompasses a reflective understanding with practical applications, helping people grasp and evaluate complex situations and lead a good life.[105]

In epistemology, knowledge ascription is the act of attributing knowledge to someone, expressed in sentences like "Sarah knows that it will rain today".[106] According to invariantism, knowledge ascriptions have fixed standards across different contexts. Contextualists, by contrast, argue that knowledge ascriptions are context-dependent. From this perspective, Sarah may know about the weather in the context of an everyday conversation even though she is not sufficiently informed to know it in the context of a rigorous meteorological debate.[107] Contrastivism, another view, argues that knowledge ascriptions are comparative, meaning that to know something involves distinguishing it from relevant alternatives. For example, if a person spots a bird in the garden, they may know that it is a sparrow rather than an eagle, but they may not know that it is a sparrow rather than an indistinguishable sparrow hologram.[108]

Philosophical skepticism questions the human ability to attain knowledge by challenging the foundations upon which knowledge claims rest. Some skeptics limit their criticism to specific domains of knowledge. For example, religious skeptics say that it is impossible to know about the existence of deities or the truth of other religious doctrines.
Similarly, moral skeptics challenge the existence of moral knowledge, and metaphysical skeptics say that humans cannot know ultimate reality.[109] External world skepticism questions knowledge of external facts,[110] whereas skepticism about other minds doubts knowledge of the mental states of others.[111]

Global skepticism is the broadest form of skepticism, asserting that there is no knowledge in any domain.[112] In ancient philosophy, this view was embraced by academic skeptics, whereas Pyrrhonian skeptics recommended the suspension of belief to attain tranquility.[113] Few epistemologists have explicitly defended global skepticism. The influence of this position stems from attempts by other philosophers to show that their theory overcomes the challenge of skepticism. For example, René Descartes used methodological doubt to find facts that cannot be doubted.[114]

One consideration in favor of global skepticism is the dream argument. It starts from the observation that, while people are dreaming, they are usually unaware of this. This inability to distinguish between dream and regular experience is used to argue that there is no certain knowledge since a person can never be sure that they are not dreaming.[115][j] Some critics assert that global skepticism is self-refuting because denying the existence of knowledge is itself a knowledge claim. Another objection says that the abstract reasoning leading to skepticism is not convincing enough to overrule common sense.[117]

Fallibilism is another response to skepticism.[118] Fallibilists agree with skeptics that absolute certainty is impossible.
They reject the assumption that knowledge requires absolute certainty, leading them to the conclusion that fallible knowledge exists.[119] They emphasize the need to keep an open and inquisitive mind, acknowledging that doubt can never be fully excluded, even for well-established knowledge claims like thoroughly tested scientific theories.[120]

Epistemic relativism is related to skepticism but differs in that it does not question the existence of knowledge in general. Instead, epistemic relativists only reject the notion of universal epistemic standards or absolute principles that apply equally to everyone. This means that what a person knows depends on subjective criteria or social conventions used to assess epistemic status.[121]

The debate between empiricism and rationalism centers on the origins of human knowledge. Empiricism emphasizes that sense experience is the primary source of all knowledge. Some empiricists illustrate this view by describing the mind as a blank slate that only develops ideas about the external world through the sense data received from the sensory organs. According to them, the mind can attain various additional insights by comparing impressions, combining them, generalizing to form more abstract ideas, and deducing new conclusions from them. Empiricists say that all these mental operations depend on sensory material and do not function on their own.[123]

Even though rationalists usually accept sense experience as one source of knowledge,[k] they argue that certain forms of knowledge are directly accessed through reason without sense experience,[125] like knowledge of mathematical and logical truths.[126] Some forms of rationalism state that the mind possesses inborn ideas, accessible without sensory assistance.
Others assert that there is an additional cognitive faculty, sometimes called rational intuition, through which people acquire nonempirical knowledge.[127] Some rationalists limit their discussion to the origin of concepts, saying that the mind relies on inborn categories to understand the world and organize experience.[125]

Foundationalists and coherentists disagree about the structure of knowledge.[129][l] Foundationalism distinguishes between basic and non-basic beliefs. A belief is basic if it is justified directly, meaning that its validity does not depend on the support of other beliefs.[m] A belief is non-basic if it is justified by another belief.[133] For example, the belief that it rained last night is a non-basic belief if it is inferred from the observation that the street is wet.[134] According to foundationalism, basic beliefs are the foundation on which all other knowledge is built, while non-basic beliefs act as the superstructure resting on this foundation.[133]

Coherentists reject the distinction between basic and non-basic beliefs, saying that the justification of any belief depends on other beliefs. They assert that a belief must align with other beliefs to amount to knowledge. This occurs when beliefs are consistent and support each other. According to coherentism, justification is a holistic aspect determined by the whole system of beliefs, which resembles an interconnected web.[135]

Foundherentism is an intermediary position combining elements of both foundationalism and coherentism. It accepts the distinction between basic and non-basic beliefs while asserting that the justification of non-basic beliefs depends on coherence with other beliefs.[136]

Infinitism presents a less common alternative perspective on the structure of knowledge. It agrees with coherentism that there are no basic beliefs while rejecting the view that beliefs can support each other in a circular manner.
Instead, it argues that beliefs form infinite justification chains, in which each link of the chain supports the belief following it and is supported by the belief preceding it.[137]

The disagreement between internalism and externalism is about the sources of justification.[139][n] Internalists say that justification depends only on factors within the individual, such as perceptual experience, memories, and other beliefs. This view emphasizes the importance of the cognitive perspective of the individual in the form of their mental states. It is commonly associated with the idea that the relevant factors are accessible, meaning that the individual can become aware of their reasons for holding a justified belief through introspection and reflection.[141]

Evidentialism is an influential internalist view, asserting that justification depends on the possession of evidence.[142] In this context, evidence for a belief is any information in the individual's mind that supports the belief. For example, the perceptual experience of rain is evidence for the belief that it is raining. Evidentialists suggest various other forms of evidence, including memories, intuitions, and other beliefs.[143] According to evidentialism, a belief is justified if the individual's evidence supports it and they hold the belief on the basis of this evidence.[144]

Externalism, by contrast, asserts that at least some relevant factors of knowledge are external to the individual.[141] For instance, when considering the belief that a cup of coffee stands on the table, externalists are not primarily interested in the subjective perceptual experience that led to this belief. Instead, they focus on objective factors, like the quality of the person's eyesight, their ability to differentiate coffee from other beverages, and the circumstances under which they observed the cup.[145] A key motivation of many forms of externalism is that justification makes it more likely that a belief is true.
Based on this view, justification is external to the extent that some factors contributing to this likelihood are not part of the believer's cognitive perspective.[141]

Reliabilism is an externalist theory asserting that a reliable connection between belief and truth is required for justification.[146] Some reliabilists explain this in terms of reliable processes. According to this view, a belief is justified if it is produced by a reliable process, like perception. A belief-formation process is deemed reliable if most of the beliefs it generates are true. An alternative view focuses on beliefs rather than belief-formation processes, saying that a belief is justified if it is a reliable indicator of the fact it presents. This means that the belief tracks the fact: the person believes it because it is true but would not believe it otherwise.[147]

Virtue epistemology, another type of externalism, asserts that a belief is justified if it manifests intellectual virtues. Intellectual virtues are capacities or traits that perform cognitive functions and help people form true beliefs.
Suggested examples include faculties, like vision, memory, and introspection, and character traits, like open-mindedness.[148]

Some branches of epistemology are characterized by their research methods. Formal epistemology employs formal tools from logic and mathematics to investigate the nature of knowledge.[149][o] For example, Bayesian epistemology represents beliefs as degrees of certainty and uses probability theory to formally define norms of rationality governing how certain people should be.[151] Experimental epistemologists base their research on empirical evidence about common knowledge practices.[152] Applied epistemology focuses on the practical application of epistemological principles to diverse real-world problems, like the reliability of knowledge claims on the internet, how to assess sexual assault allegations, and how racism may lead to epistemic injustice.[153][p] Metaepistemologists study the nature, goals, and research methods of epistemology. As a metatheory, it does not directly advocate for specific epistemological theories but examines their fundamental concepts and background assumptions.[155][q]

Particularism and generalism disagree about the right method of conducting epistemological research. Particularists start their inquiry by looking at specific cases. For example, to find a definition of knowledge, they rely on their intuitions about concrete instances of knowledge and particular thought experiments. They use these observations as methodological constraints that any theory of general principles needs to follow. Generalists proceed in the opposite direction.
They prioritize general epistemic principles, saying that it is not possible to accurately identify and describe specific cases without a grasp of these principles.[157] Other methods in contemporary epistemology aim to extract philosophical insights from ordinary language or look at the role of knowledge in making assertions and guiding actions.[158]

Phenomenological epistemology emphasizes the importance of first-person experience. It distinguishes between the natural and the phenomenological attitudes. The natural attitude focuses on objects belonging to common sense and natural science. The phenomenological attitude focuses on the experience of objects and aims to provide a presuppositionless description of how objects appear to the observer.[159]

Naturalized epistemology is closely associated with the natural sciences, relying on their methods and theories to examine knowledge. Arguing that epistemological theories should rest on empirical observation, it is critical of a priori reasoning.[160] Evolutionary epistemology is a naturalistic approach that understands cognition as a product of evolution, examining knowledge and the cognitive faculties responsible for it through the lens of natural selection.[161] Social epistemology focuses on the social dimension of knowledge. While traditional epistemology is mainly interested in the knowledge possessed by individuals, social epistemology covers knowledge acquisition, transmission, and evaluation within groups, with specific emphasis on how people rely on each other when seeking knowledge.[162]

Pragmatist epistemology is a form of fallibilism that emphasizes the close relation between knowing and acting. It sees the pursuit of knowledge as an ongoing process guided by common sense and experience while always open to revision.
This approach reinterprets some core epistemological notions, for example, by conceptualizing beliefs as habits that shape actions rather than representations that mirror the world.[163] Motivated by pragmatic considerations, epistemic conservatism is a view about belief revision. It prioritizes pre-existing beliefs, asserting that a person should only change their beliefs if they have a good reason to. One argument for epistemic conservatism rests on the recognition that the cognitive resources of humans are limited, making it impractical to constantly reexamine every belief.[164]

Postmodern epistemology critiques the conditions of knowledge in advanced societies. This concerns in particular the metanarrative of a constant progress of scientific knowledge leading to a universal and foundational understanding of reality.[166] Similarly, feminist epistemology adopts a critical perspective, focusing on the effect of gender on knowledge. Among other topics, it explores how preconceptions about gender influence who has access to knowledge, how knowledge is produced, and which types of knowledge are valued in society.[167] Some postmodern and feminist thinkers adopt a constructivist approach, arguing that the way people view the world is not a simple reflection of external reality but a social construction. This view emphasizes the creative role of interpretation while undermining objectivity since social constructions can vary across societies.[168] Another critical approach, found in decolonial scholarship, opposes the global influence of Western knowledge systems. It seeks to undermine Western hegemony and decolonize knowledge.[169]

The decolonial outlook is also present in African epistemology. Grounded in African ontology, it emphasizes the interconnectedness of reality as a continuum between knowing subject and known object.
It understands knowledge as a holistic phenomenon that includes sensory, emotional, intuitive, and rational aspects, extending beyond the limits of the physical domain.[170]

Another epistemological tradition is found in ancient Indian philosophy. Its diverse schools of thought examine different sources of knowledge, called pramāṇa. Perception, inference, and testimony are sources discussed by most schools. Other sources only considered by some schools are non-perception, which leads to knowledge of absences, and presumption.[171][r] Buddhist epistemology focuses on immediate experience, understood as the presentation of unique particulars without secondary cognitive processes, like thought and desire.[173] Nyāya epistemology is a causal theory of knowledge, understanding sources of knowledge as reliable processes that cause episodes of truthful awareness. It sees perception as the primary source of knowledge and emphasizes its importance for successful action.[174] Mīmāṃsā epistemology considers the holy scriptures known as the Vedas a key source of knowledge, addressing the problem of their right interpretation.[175] Jain epistemology states that reality is many-sided, meaning that no single viewpoint can capture the entirety of truth.[176]

Historical epistemology examines how the understanding of knowledge and related concepts has changed over time. It asks whether the main issues in epistemology are perennial and to what extent past epistemological theories are relevant to contemporary debates. It is particularly concerned with scientific knowledge and practices associated with it.[177] It contrasts with the history of epistemology, which presents, reconstructs, and evaluates epistemological theories of philosophers in the past.[178][s]

Some branches of epistemology focus on knowledge within specific academic disciplines.
The epistemology of science examines how scientific knowledge is generated and what problems arise in the process of validating, justifying, and interpreting scientific claims. A key issue concerns the problem of how individual observations can support universal scientific laws. Other topics include the nature of scientific evidence and the aims of science.[180] The epistemology of mathematics studies the origin of mathematical knowledge. In exploring how mathematical theories are justified, it investigates the role of proofs and whether there are empirical sources of mathematical knowledge.[181]

Distinct areas of epistemology are dedicated to specific sources of knowledge. Examples are the epistemology of perception,[182] the epistemology of memory,[183] and the epistemology of testimony.[184] In the epistemology of perception, direct and indirect realists debate the connection between the perceiver and the perceived object. Direct realists say that this connection is direct, meaning that there is no difference between the object present in perceptual experience and the physical object causing this experience. According to indirect realism, the connection is indirect, involving mental entities, like ideas or sense data, that mediate between the perceiver and the external world. The contrast between direct and indirect realism is important for explaining the nature of illusions.[185]

Epistemological issues are found in most areas of philosophy. The epistemology of logic examines how people know that an argument is valid. For example, it explores how logicians justify that modus ponens is a correct rule of inference or that all contradictions are false.[186] Epistemologists of metaphysics investigate whether knowledge of the basic structure of reality is possible and what sources this knowledge could have.[187] Knowledge of moral statements, like the claim that lying is wrong, belongs to the epistemology of ethics.
It studies the role of ethical intuitions, coherence among moral beliefs, and the problem of moral disagreement.[188] The ethics of belief is a closely related field exploring the intersection of epistemology and ethics. It examines the norms governing belief formation and asks whether violating them is morally wrong.[189] Religious epistemology studies the role of knowledge and justification for religious doctrines and practices. It evaluates the reliability of evidence from religious experience and holy scriptures while also asking whether the norms of reason should be applied to religious faith.[190]

Epistemologists of language explore the nature of linguistic knowledge. One of their topics is the role of tacit knowledge, for example, when native speakers have mastered the rules of grammar but are unable to explicitly articulate them.[191] Epistemologists of modality examine knowledge about what is possible and necessary.[192] Epistemic problems that arise when two people have diverging opinions on a topic are covered by the epistemology of disagreement.[193] Epistemologists of ignorance are interested in epistemic faults and gaps in knowledge.[194]

Epistemology and psychology were not defined as distinct fields until the 19th century; earlier investigations about knowledge often do not fit neatly into today's academic categories.[195] Both contemporary disciplines study beliefs and the mental processes responsible for their formation and change. One key contrast is that psychology describes what beliefs people have and how they acquire them, thereby explaining why someone has a specific belief.
The focus of epistemology is on evaluating beliefs, leading to a judgment about whether a belief is justified and rational in a particular case.[196]Epistemology also shares a close connection withcognitive science, which understands mental events as processes that transforminformation.[197]Artificial intelligencerelies on the insights of epistemology and cognitive science to implement concrete solutions to problems associated withknowledge representationandautomatic reasoning.[198] Logicis the study of correct reasoning. For epistemology, it is relevant to inferential knowledge, which arises when a person reasons from one known fact to another.[199]This is the case, for example, when inferring that it rained based on the observation that the streets are wet.[200]Whether an inferential belief amounts to knowledge depends on the form ofreasoningused, in particular, that the process does not violate thelaws of logic.[201]Another overlap between the two fields is found in the epistemic approach tofallacies.[202]Fallacies are faulty arguments based on incorrect reasoning.[203]The epistemic approach to fallacies explains why they are faulty, stating that arguments aim to expand knowledge. According to this view, an argument is a fallacy if it fails to do so.[202]A further intersection is found inepistemic logic, which uses formal logical devices to study epistemological concepts likeknowledgeandbelief.[204] Bothdecision theoryand epistemology are interested in the foundations of rational thought and the role of beliefs. Unlike many approaches in epistemology, the main focus of decision theory lies less in the theoretical and more in the practical side, exploring how beliefs are translated into action.[205]Decision theorists examine the reasoning involved in decision-making and the standards of good decisions,[206]identifying beliefs as a central aspect of decision-making. 
One of their innovations is to distinguish between weaker and stronger beliefs, which helps them consider the effects of uncertainty on decisions.[207] Epistemology andeducationhave a shared interest in knowledge, with one difference being that education focuses on the transmission of knowledge, exploring the roles of both learner and teacher.[208]Learning theoryexamines how people acquire knowledge.[209]Behaviorallearning theories explain the process in terms of behavior changes, for example, byassociating a certain response with a particular stimulus.[210]Cognitivelearning theories study how the cognitive processes that affect knowledge acquisition transform information.[211]Pedagogylooks at the transmission of knowledge from the teacher's perspective, exploring theteaching methodsthey may employ.[212]In teacher-centered methods, the teacher serves as the main authority delivering knowledge and guiding the learning process. Instudent-centered methods, the teacher primarily supports and facilitates the learning process, allowing students to take a more active role.[213]The beliefs students have about knowledge, calledpersonal epistemology, influence their intellectual development and learning success.[214] Theanthropologyof knowledge examines how knowledge is acquired, stored, retrieved, and communicated. It studies the social and cultural circumstances that affect how knowledge is reproduced and changes, covering the role of institutions like university departments and scientific journals as well as face-to-face discussions and online communications. This field has a broad concept of knowledge, encompassing various forms of understanding and culture, including practical skills. Unlike epistemology, it is not interested in whether a belief is true or justified but in how understanding is reproduced in society.[215]A closely related field, thesociology of knowledgehas a similar conception of knowledge. 
It explores how physical, demographic, economic, and sociocultural factors impact knowledge. This field examines in what sociohistorical contexts knowledge emerges and the effects it has on people, for example, how socioeconomic conditions are related to thedominant ideologyin a society.[216] Early reflections on the nature and sources of knowledge are found in ancient history. Inancient Greek philosophy,Plato(427–347 BCE) studiedwhat knowledge is, examining how it differs from trueopinionby being based on good reasons.[217]He proposed that learning isa form of recollectionin which the soul remembers what it already knew but had forgotten.[218][t]Plato's studentAristotle(384–322 BCE) was particularly interested in scientific knowledge, exploring the role of sensory experience and the process of making inferences from general principles.[219]Aristotle's ideas influenced theHellenistic schools of philosophy, which began to arise in the 4th century BCE and includedEpicureanism,Stoicism, andskepticism. 
The Epicureans had anempiricistoutlook, stating that sensations are always accurate and act as the supreme standard of judgments.[220]The Stoics defended a similar position but confined their trust to lucid and specific sensations, which they regarded as true.[221]The skeptics questioned that knowledge is possible, recommending insteadsuspension of judgmentto attain astate of tranquility.[222]Emerging in the 3rd century CE and inspired by Plato's philosophy,[223]Neoplatonismdistinguished knowledge from true belief, arguing that knowledge is infallible and limited to the realm of immaterial forms.[224] TheUpanishads, philosophical scriptures composed inancient Indiabetween 700 and 300 BCE, examined how people acquire knowledge, including the role of introspection, comparison, and deduction.[226]In the 6th century BCE, the school ofAjñanadeveloped a radical skepticism questioning the possibility and usefulness of knowledge.[227]By contrast, the school ofNyaya, which emerged in the 2nd century BCE, asserted that knowledge is possible. It provided a systematic treatment of how people acquire knowledge, distinguishing between valid and invalid sources.[228]WhenBuddhist philosophersbecame interested in epistemology, they relied on concepts developed in Nyaya and other traditions.[229]Buddhist philosopherDharmakirti(6th or 7th century CE)[230]analyzed the process of knowing as a series of causally related events.[225] AncientChinese philosophersunderstood knowledge as an interconnected phenomenon fundamentally linked to ethical behavior and social involvement. Many saw wisdom as the goal of attaining knowledge.[231]Mozi(470–391 BCE) proposed a pragmatic approach to knowledge using historical records, sensory evidence, and practical outcomes to validate beliefs.[232]Mencius(c.372–289 BCE) explored analogical reasoning as a source of knowledge and employed this method to criticize Mozi.[233]Xunzi(c.310–220 BCE) aimed to combine empirical observation and rational inquiry. 
He emphasized the importance of clarity and standards of reasoning without excluding the role of feeling and emotion.[234] The relation betweenreasonandfaithwas a central topic in themedieval period.[235]InArabic–Persian philosophy,al-Farabi(c.870–950) andAverroes(1126–1198) discussed how philosophy andtheologyinteract, debating which one is a better vehicle to truth.[236]Al-Ghazali(c.1056–1111)criticized many core teachingsof previous Islamic philosophers, saying that they relied on unproven assumptions that did not amount to knowledge.[237]Similarly in Western philosophy,Anselm of Canterbury(1033–1109) proposed that theological teaching and philosophical inquiry are in harmony and complement each other.[238]Formulating a more critical approach,Peter Abelard(1079–1142) argued against unquestioned theological authorities and said that all things are open to rational doubt.[239]Influenced by Aristotle,Thomas Aquinas(1225–1274) developed an empiricist theory, stating that "nothing is in the intellect unless it first appeared in the senses".[240]According to an early form ofdirect realismproposed byWilliam of Ockham(c.1285–1349), perception of mind-independent objects happens directly without intermediaries.[241]Meanwhile, in 14th-century India,Gaṅgeśadeveloped a reliabilist theory of knowledge and considered the problems of testimony and fallacies.[242]In China,Wang Yangming(1472–1529) explored the unity of knowledge and action, holding that moral knowledge is inborn and can be attained by overcoming self-interest.[243] The course ofmodern philosophywas shaped byRené Descartes(1596–1650), who stated that philosophy must begin from a position of indubitable knowledge of first principles. Inspired by skepticism, he aimed to find absolutely certain knowledge by encountering truths that cannot be doubted. 
He thought that this is the case for the assertion "I think, therefore I am", from which he constructed the rest of his philosophical system.[245]Descartes, together withBaruch Spinoza(1632–1677) andGottfried Wilhelm Leibniz(1646–1716), belonged to the school ofrationalism, which asserts that the mind possessesinnate ideasindependent of experience.[246]John Locke(1632–1704) rejected this view in favor of an empiricism according to which the mind is ablank slate. This means that all ideas depend on experience, either as "ideas of sense", which are directly presented through the senses, or as "ideas of reflection", which the mind creates by reflecting on its own activities.[247]David Hume(1711–1776) used this idea to explore the limits of what people can know. He said that knowledge of facts is never certain, adding that knowledge of relations between ideas, like mathematical truths, can be certain but contains no information about the world.[248]Immanuel Kant(1724–1804) sought a middle ground between rationalism and empiricism by identifying a type of knowledge overlooked by Hume. For Kant, this knowledge pertains to principles that underlie and structure all experience, such as spatial and temporal relations and fundamentalcategories of understanding.[249] In the 19th century and influenced by Kant's philosophy,Georg Wilhelm Friedrich Hegel(1770–1831) rejected empiricism by arguing that sensory impressions alone cannot amount to knowledge since all knowledge is actively structured by the knowing subject.[250]John Stuart Mill(1806–1873), by contrast, defended a wide-sweeping form of empiricism and explained knowledge of general truths throughinductive reasoning.[251]Charles Peirce(1839–1914) thought that all knowledge isfallible, emphasizing that knowledge seekers should remain open to revising their beliefs in light of newevidence. 
He used this idea to argue against Cartesian foundationalism, which seeks absolutely certain truths.[252] In the 20th century, fallibilism was further explored byJ. L. Austin(1911–1960) andKarl Popper(1902–1994).[253]Incontinental philosophy,Edmund Husserl(1859–1938) applied the skeptical idea of suspending judgment to thestudy of experience. By not judging whether an experience is accurate, he tried to describe its internal structure instead.[254]Influenced by earlier empiricists,logical positivists, likeA. J. Ayer(1910–1989), said that all knowledge is either empirical or analytic, rejecting any form of metaphysical knowledge.[255]Bertrand Russell(1872–1970) developed an empiricist sense-datum theory, distinguishing between directknowledge by acquaintanceof sense data and indirect knowledge by description, which is inferred from knowledge by acquaintance.[256]Common sensehad a central place inG. E. Moore's (1873–1958) epistemology. He used trivial observations, like the fact that he has two hands, to argue against abstract philosophical theories that deviate from common sense.[257]Ordinary language philosophy, as practiced by the lateLudwig Wittgenstein(1889–1951), is a similar approach that tries to extract epistemological insights from how ordinary language is used.[258] Edmund Gettier(1927–2021) conceivedcounterexamplesagainst the idea that knowledge is justified true belief. 
These counterexamples prompted many philosophers to suggest alternativedefinitions of knowledge.[259]Developed by philosophers such asAlvin Goldman(1938–2024),reliabilismemerged as one of the alternatives, asserting that knowledge requires reliable sources and shifting the focus away from justification.[260]Virtue epistemologists, such asErnest Sosa(1940–present) andLinda Zagzebski(1946–present), analyse belief formation in terms of the intellectual virtues or cognitive competencies involved in the process.[261]Naturalized epistemology, as conceived byWillard Van Orman Quine(1908–2000), employs concepts and ideas from the natural sciences to formulate its theories.[262]Other developments in late 20th-century epistemology were the emergence ofsocial,feminist, andhistorical epistemology.[263]
https://en.wikipedia.org/wiki/Epistemology
The free energy principle is a mathematical principle of information physics. In its application to fMRI brain imaging data as a theoretical framework, it suggests that the brain reduces surprise or uncertainty by making predictions based on internal models and uses sensory input to update its models so as to improve the accuracy of its predictions. This principle approximates an integration of Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. From it, wide-ranging inferences have been made about brain function, perception, and action.[1] Its applicability to living systems has been questioned.[2][3][4]

In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled.[5] It establishes that the dynamics of physical systems minimise a quantity known as surprisal (the negative log probability of some outcome), or equivalently its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also in some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception–action loops in neuroscience.[6]

The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems are known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system).
The free energy principle is based on the Bayesian idea of the brain as an "inference engine". Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sense and associated perception. This difference is quantified by variational free energy and can be minimized in two ways: by continuously correcting the system's model of the world, or by acting on the world to bring it closer to the system's predictions. Friston assumes this to be the principle of all biological reaction.[7] Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.[7]

The free energy principle is a mathematical principle of information physics: much like the principle of maximum entropy or the principle of least action, it is true on mathematical grounds. To attempt to falsify the free energy principle is a category mistake, akin to trying to falsify calculus by making empirical observations. (One cannot invalidate a mathematical theory in this way; instead, one would need to derive a formal contradiction from the theory.) In a 2018 interview, Friston explained what it entails for the free energy principle not to be subject to falsification:[8]

I think it is useful to make a fundamental distinction at this point—that we can appeal to later. The distinction is between a state and process theory; i.e., the difference between a normative principle that things may or may not conform to, and a process theory or hypothesis about how that principle is realized. Under this distinction, the free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis.
This is because the free energy principle is what it is — aprinciple. LikeHamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there's not much you can do with it, unless you ask whether measurable systems conform to the principle. On the other hand, hypotheses that the brain performs some form of Bayesian inference or predictive coding are what they are—hypotheses. These hypotheses may or may not be supported by empirical evidence. There are many examples of these hypotheses being supported by empirical evidence.[9] The notion thatself-organisingbiological systems – like a cell or brain – can be understood as minimising variational free energy is based uponHelmholtz's work onunconscious inference[10]and subsequent treatments in psychology[11]and machine learning.[12]Variational free energy is a function of observations and a probability density over their hidden causes. Thisvariationaldensity is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation toBayesian model evidence.[13]Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world. However, free energy is also an upper bound on theself-informationof outcomes, where the long-term average ofsurpriseis entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples.[14][15] Active inference is closely related to thegood regulator theorem[16]and related accounts ofself-organisation,[17][18]such asself-assembly,pattern formation,autopoiesis[19]andpractopoiesis.[20]It addresses the themes considered incybernetics,synergetics[21]andembodied cognition. 
Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to themaximum entropy principle.[22]Finally, because the time average of energy is action, the principle of minimum variational free energy is aprinciple of least action. Active inference allowing for scale invariance has also been applied to other theories and domains. For instance, it has been applied to sociology,[23][24][25][26]linguistics and communication,[27][28][29]semiotics,[30][31]and epidemiology[32]among others. Negative free energy is formally equivalent to theevidence lower bound, which is commonly used inmachine learningto traingenerative models, such asvariational autoencoders. Active inference applies the techniques ofapproximate Bayesian inferenceto infer the causes of sensory data from a'generative' modelof how that data is caused and then uses these inferences to guide action.Bayes' rulecharacterizes the probabilistically optimal inversion of such a causal model, but applying it is typically computationally intractable, leading to the use of approximate methods. In active inference, the leading class of such approximate methods arevariational methods, for both practical and theoretical reasons: practical, as they often lead to simple inference procedures; and theoretical, because they are related to fundamental physical principles, as discussed above. These variational methods proceed by minimizing an upper bound on the divergence between the Bayes-optimal inference (or 'posterior') and its approximation according to the method. This upper bound is known as thefree energy, and we can accordingly characterize perception as the minimization of the free energy with respect to inbound sensory information, and action as the minimization of the same free energy with respect to outbound action information. 
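The equivalence between negative free energy and the evidence lower bound can be illustrated numerically. The following is a minimal sketch with an assumed two-state discrete model (the probabilities are illustrative, not taken from the article): the ELBO never exceeds the log evidence, and equals it exactly when the variational density is the true posterior.

```python
import numpy as np

# Toy discrete model: latent state z in {0, 1}, one fixed observation s = 1.
# All numbers below are illustrative assumptions.
p_z = np.array([0.7, 0.3])            # prior p(z)
p_s_given_z = np.array([0.2, 0.9])    # likelihood p(s=1 | z)

p_joint = p_z * p_s_given_z           # joint p(z, s=1)
log_evidence = np.log(p_joint.sum())  # log p(s=1)

def elbo(q):
    """Evidence lower bound = negative variational free energy of q."""
    return np.sum(q * (np.log(p_joint) - np.log(q)))

q_approx = np.array([0.5, 0.5])       # an arbitrary variational density
q_post = p_joint / p_joint.sum()      # exact posterior p(z | s=1)

assert elbo(q_approx) <= log_evidence          # the bound holds for any q
assert np.isclose(elbo(q_post), log_evidence)  # and is tight at the posterior
```

Maximising the ELBO with respect to `q` (equivalently, minimising free energy) therefore drives the variational density toward the exact posterior, which is the sense in which training a variational autoencoder performs approximate Bayesian inference.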
This holistic dual optimization is characteristic of active inference, and the free energy principle is the hypothesis that all systems which perceive and act can be characterized in this way. In order to exemplify the mechanics of active inference via the free energy principle, a generative model must be specified, and this typically involves a collection ofprobability density functionswhich together characterize the causal model. One such specification is as follows. The system is modelled as inhabiting a state spaceX{\displaystyle X}, in the sense that its states form the points of this space. The state space is then factorized according toX=Ψ×S×A×R{\displaystyle X=\Psi \times S\times A\times R}, whereΨ{\displaystyle \Psi }is the space of 'external' states that are 'hidden' from the agent (in the sense of not being directly perceived or accessible),S{\displaystyle S}is the space of sensory states that are directly perceived by the agent,A{\displaystyle A}is the space of the agent's possible actions, andR{\displaystyle R}is a space of 'internal' states that are private to the agent. Keeping with the Figure 1, note that in the following theψ˙,ψ,s,a{\displaystyle {\dot {\psi }},\psi ,s,a}andμ{\displaystyle \mu }are functions of (continuous) timet{\displaystyle t}. The generative model is the specification of the following density functions: These density functions determine the factors of a "joint model", which represents the complete specification of the generative model, and which can be written as Bayes' rule then determines the "posterior density"pBayes(ψ˙|s,a,μ,ψ){\displaystyle p_{\text{Bayes}}({\dot {\psi }}|s,a,\mu ,\psi )}, which expresses a probabilistically optimal belief about the external stateψ˙{\displaystyle {\dot {\psi }}}given the preceding stateψ{\displaystyle \psi }and the agent's actions, sensory signals, and internal states. 
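The density functions and the joint model referred to above are not displayed in the text; a minimal specification consistent with the notation used here (an assumed standard form, not a quotation of the original) comprises a likelihood factor and a dynamics factor:

```latex
% Assumed minimal specification of the generative model:
p(s \mid \dot{\psi}, \psi) \quad \text{(likelihood: how external states generate sensations)},
\qquad
p(\dot{\psi} \mid \psi, a, \mu) \quad \text{(dynamics: how external states evolve)},
% which together factorize the joint model
p(\dot{\psi}, s \mid a, \mu, \psi)
  = p(s \mid \dot{\psi}, \psi)\, p(\dot{\psi} \mid \psi, a, \mu).
% Bayes' rule then yields the posterior density named in the text:
p_{\text{Bayes}}(\dot{\psi} \mid s, a, \mu, \psi)
  = \frac{p(\dot{\psi}, s \mid a, \mu, \psi)}
         {\int p(\dot{\psi}, s \mid a, \mu, \psi)\, d\dot{\psi}}.
```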
Since computingpBayes{\displaystyle p_{\text{Bayes}}}is computationally intractable, the free energy principle asserts the existence of a "variational density"q(ψ˙|s,a,μ,ψ){\displaystyle q({\dot {\psi }}|s,a,\mu ,\psi )}, whereq{\displaystyle q}is an approximation topBayes{\displaystyle p_{\text{Bayes}}}. One then defines the free energy as and defines action and perception as the joint optimization problem where the internal statesμ{\displaystyle \mu }are typically taken to encode the parameters of the 'variational' densityq{\displaystyle q}and hence the agent's "best guess" about the posterior belief overΨ{\displaystyle \Psi }. Note that the free energy is also an upper bound on a measure of the agent's (marginal, or average) sensorysurprise, and hence free energy minimization is often motivated by the minimization of surprise. Free energy minimisation has been proposed as a hallmark of self-organising systems when cast asrandom dynamical systems.[33]This formulation rests on aMarkov blanket(comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states: This is because – underergodicassumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with thesecond law of thermodynamicsand thefluctuation theorem. However, formulating a unifying principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems with the risk of obscuring all features that make biological systems interesting kinds of self-organizing systems.[34][2][3][4] All Bayesian inference can be cast in terms of free energy minimisation[35][failed verification]. 
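The free energy and the joint optimization of action and perception introduced above can be written, in a standard variational form stated here as an assumption consistent with the surrounding notation, as:

```latex
F(s, a, \mu, \psi)
  = \mathbb{E}_{q(\dot{\psi})}\!\left[
      \ln q(\dot{\psi} \mid s, a, \mu, \psi)
      - \ln p(\dot{\psi}, s \mid a, \mu, \psi)\right]
  = D_{\mathrm{KL}}\!\left[\, q \,\Vert\, p_{\text{Bayes}} \,\right]
      - \ln p(s \mid a, \mu, \psi)
  \;\geq\; -\ln p(s \mid a, \mu, \psi),
\qquad
(a^{*}, \mu^{*}) = \operatorname*{arg\,min}_{a,\,\mu} F(s, a, \mu, \psi).
```

Because the Kullback–Leibler term is non-negative, the free energy upper-bounds the surprise $-\ln p(s \mid a, \mu, \psi)$, which is the bound the text appeals to.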
When free energy is minimised with respect to internal states, theKullback–Leibler divergencebetween the variational and posterior density over hidden states is minimised. This corresponds to approximateBayesian inference– when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g.,Kalman filtering). It is also used in Bayesianmodel selection, where free energy can be usefully decomposed into complexity and accuracy: Models with minimum free energy provide an accurate explanation of data, under complexity costs; cf.Occam's razorand more formal treatments of computational costs.[36]Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data). Variational free energy is an information-theoretic functional and is distinct from thermodynamic (Helmholtz)free energy.[37]However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy (under the assumption the system is thermodynamically closed but not isolated). This is because if sensory perturbations are suspended (for a suitably long period of time), complexity is minimised (because accuracy can be neglected). At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by theprinciple of minimum energy.[38] Free energy minimisation is equivalent to maximising themutual informationbetween sensory states and internal states that parameterise the variational density (for a fixed entropy variational density). 
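The complexity–accuracy decomposition used in model selection above can be made explicit. Writing $p(\dot{\psi} \mid \psi, a, \mu)$ for the prior (dynamics) factor and $p(s \mid \dot{\psi}, \psi)$ for the likelihood factor of the generative model (an assumed factorization, consistent with standard treatments):

```latex
F
  = \underbrace{D_{\mathrm{KL}}\!\left[\, q(\dot{\psi}) \,\Vert\,
      p(\dot{\psi} \mid \psi, a, \mu) \,\right]}_{\text{complexity}}
  \;-\;
    \underbrace{\mathbb{E}_{q(\dot{\psi})}\!\left[
      \ln p(s \mid \dot{\psi}, \psi)\right]}_{\text{accuracy}}.
```

Minimising $F$ thus rewards accurate explanations of the data while penalising variational densities that depart from prior beliefs, which is the Occam's-razor-like trade-off described in the text.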
This relates free energy minimization to the principle of minimum redundancy.[39][15] Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty[40]and therefore subscribes to theBayesian brainhypothesis.[41]The neuronal processes described by free energy minimisation depend on the nature of hidden states:Ψ=X×Θ×Π{\displaystyle \Psi =X\times \Theta \times \Pi }that can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising variables, parameters, and precision correspond to inference, learning, and the encoding of uncertainty, respectively. Free energy minimisation formalises the notion ofunconscious inferencein perception[10][12]and provides a normative (Bayesian) theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds togeneralised Bayesian filtering(where ~ denotes a variable in generalised coordinates of motion andD{\displaystyle D}is a derivative matrix operator):[42] Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering includeKalman filtering, which is formally equivalent topredictive coding[43]– a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions[44]that is consistent with the anatomy and physiology of sensory[45]and motor systems.[46] In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative orHebbian plasticityand is associated withsynaptic plasticityin the brain. 
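The gradient-descent scheme underlying predictive coding can be illustrated with a minimal one-dimensional sketch. This is not the generalised-filtering scheme itself; it is a simplified Gaussian case with illustrative numbers, in which perception updates an internal estimate by descending precision-weighted prediction errors.

```python
# Minimal predictive-coding sketch (illustrative assumptions throughout):
# infer a latent cause mu of a sensory sample s by gradient descent on
# F = 0.5 * (pi_sens * eps_s**2 + pi_prior * eps_p**2) + const.
g = lambda mu: 2.0 * mu               # assumed generative (prediction) mapping
prior_mean, pi_prior, pi_sens = 0.0, 1.0, 4.0   # precisions = inverse variances
s = 3.0                               # observed sensory sample

mu = 0.0
for _ in range(1000):
    eps_s = s - g(mu)                 # sensory prediction error
    eps_p = mu - prior_mean           # prior prediction error
    grad = -pi_sens * eps_s * 2.0 + pi_prior * eps_p   # 2.0 = g'(mu)
    mu -= 0.01 * grad                 # descend the free energy gradient

# The fixed point matches the exact Gaussian posterior mean:
exact = (pi_sens * 2.0 * s + pi_prior * prior_mean) / (pi_sens * 4.0 + pi_prior)
assert abs(mu - exact) < 1e-6
```

The descent converges to the precision-weighted compromise between prior belief and sensory evidence, which is the sense in which minimising free energy by gradient descent implements (approximate) Bayesian filtering.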
Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (cf., Kalman gain). In neuronally plausible implementations of predictive coding,[44]this corresponds to optimizing the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain.[47] With regard to the top-down vs. bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular nature of the interplay between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down position. The model takes into account the transmission of prediction errors to the same level or a level above, in order to minimise the energy function that indicates the difference between the data and its cause, or, in other words, between the generative model and the posterior. To increase validity, they also incorporated neural competition between stimuli into their model. 
A notable feature of this model is the reformulation of the free energy function only in terms of prediction errors during task performance:

{\displaystyle {\dfrac {\partial E^{total}(Y^{VP},X^{SN},x^{CN},y^{KN})}{\partial y_{mn}^{SN}}}=x_{mn}^{CN}-b^{CN}\varepsilon _{nm}^{CN}+b^{CN}\sum _{k}(\varepsilon _{knm}^{KN})}

where {\displaystyle E^{total}} is the total energy function of the neural networks and {\displaystyle \varepsilon _{knm}^{KN}} is the prediction error between the generative model (prior) and the posterior, changing over time.[48] Comparing the two models reveals a notable similarity between their results while also highlighting a discrepancy: the standard version of the SAIM relies mainly on excitatory connections, whereas the PE-SAIM leverages inhibitory connections to make an inference. The model has also proved able to predict EEG and fMRI data drawn from human experiments with high accuracy. In the same vein, Yahya et al. applied the free energy principle to propose a computational model for template matching in covert selective visual attention that mostly relies on SAIM.[49] According to this study, the total free energy of the whole state space is reached by inserting top-down signals into the original neural networks, yielding a dynamical system comprising both feed-forward and backward prediction errors. When gradient descent is applied to action, {\displaystyle {\dot {a}}=-\partial _{a}F(s,{\tilde {\mu }})}, motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem[50] – to movement trajectories.
Active inference is related tooptimal controlby replacing value or cost-to-go functions with prior beliefs about state transitions or flow.[51]This exploits the close connection between Bayesian filtering and the solution to theBellman equation. However, active inference starts with (priors over) flowf=Γ⋅∇V+∇×W{\displaystyle f=\Gamma \cdot \nabla V+\nabla \times W}that are specified with scalarV(x){\displaystyle V(x)}and vectorW(x){\displaystyle W(x)}value functions of state space (cf., theHelmholtz decomposition). Here,Γ{\displaystyle \Gamma }is the amplitude of random fluctuations and cost isc(x)=f⋅∇V+∇⋅Γ⋅V{\displaystyle c(x)=f\cdot \nabla V+\nabla \cdot \Gamma \cdot V}. The priors over flowp(x~∣m){\displaystyle p({\tilde {x}}\mid m)}induce a prior over statesp(x∣m)=exp⁡(V(x)){\displaystyle p(x\mid m)=\exp(V(x))}that is the solution to the appropriate forwardKolmogorov equations.[52]In contrast, optimal control optimises the flow, given a cost function, under the assumption thatW=0{\displaystyle W=0}(i.e., the flow is curl free or has detailed balance). Usually, this entails solving backwardKolmogorov equations.[53] Optimal decisionproblems (usually formulated aspartially observable Markov decision processes) are treated within active inference by absorbingutility functionsinto prior beliefs. In this setting, states that have a high utility (low cost) are states an agent expects to occupy. 
By equipping the generative model with hidden states that model control, policies (control sequences) that minimise variational free energy lead to high utility states.[54] Neurobiologically, neuromodulators such as dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error.[55] This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se[56] and related computational accounts.[57] Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including action observation,[58] mirror neurons,[59] saccades and visual search,[60][61] eye movements,[62] sleep,[63] illusions,[64] attention,[47] action selection,[55] consciousness,[65][66] hysteria[67] and psychosis.[68] Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' that it cannot update, leading to actions that cause these predictions to come true.[69]
https://en.wikipedia.org/wiki/Free_energy_principle
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world. There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Inference establishes new facts from data. Its basis is Bayes' theorem. Information describing the world is written in a language. For example, a simple mathematical language of propositions may be chosen. Sentences may be written down in this language as strings of characters. But in the computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. This internal language implicitly represents probabilities of statements. Occam's razor says the "simplest theory, consistent with the data, is most likely to be correct". The "simplest theory" is interpreted as the representation of the theory written in this internal language. The theory with the shortest encoding in this internal language is most likely to be correct. Historically, probability and statistics focused on probability distributions and tests of significance. Probability was formal, well defined, but limited in scope. In particular, its application was limited to situations that could be defined as an experiment or trial, with a well defined population. Bayes' theorem is named after Rev. Thomas Bayes (1701–1761). Bayesian inference broadened the application of probability to many situations where a population was not well defined. But Bayes' theorem always depended on prior probabilities to generate new probabilities. It was unclear where these prior probabilities should come from.
Ray Solomonoff developed algorithmic probability circa 1964, which gave an explanation for what randomness is and how patterns in the data may be represented by computer programs that give shorter representations of the data. Chris Wallace and D. M. Boulton developed minimum message length circa 1968. Later Jorma Rissanen developed the minimum description length circa 1978. These methods allow information theory to be related to probability, in a way that can be compared to the application of Bayes' theorem, but which gives a source and explanation for the role of prior probabilities. Marcus Hutter combined decision theory with the work of Ray Solomonoff and Andrey Kolmogorov to give a theory for the Pareto optimal behavior for an intelligent agent, circa 1998. The program with the shortest length that matches the data is the most likely to predict future data. This is the thesis behind the minimum message length[1] and minimum description length[2] methods. At first sight Bayes' theorem appears different from the minimum message/description length principle. On closer inspection it turns out to be the same. Bayes' theorem is about conditional probabilities, and states the probability that event B happens if firstly event A happens: {\displaystyle P(B|A)=P(A\land B)/P(A),} which becomes, in terms of message length L, {\displaystyle L(B|A)=L(A\land B)-L(A).} This means that if all the information is given describing an event then the length of the information may be used to give the raw probability of the event. So if the information describing the occurrence of A is given, along with the information describing B given A, then all the information describing A and B has been given.[3][4] Overfitting occurs when the model matches the random noise and not the pattern in the data. For example, take the situation where a curve is fitted to a set of points. If a polynomial with many terms is fitted then it can more closely represent the data. Then the fit will be better, and the information needed to describe the deviations from the fitted curve will be smaller.
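The equivalence between Bayes' theorem and message lengths can be checked numerically: multiplying probabilities corresponds to adding code lengths. A minimal sketch in Python; the probabilities are invented for illustration:

```python
import math

def length(p):
    """Message length in bits for an event of probability p (Shannon code)."""
    return -math.log2(p)

# Hypothetical probabilities for two events A and B.
p_a, p_b_given_a = 0.25, 0.5
p_ab = p_a * p_b_given_a          # P(A and B) = P(A) P(B|A)

# In message-length form the product becomes a sum:
#   L(A and B) = L(A) + L(B|A)
assert abs(length(p_ab) - (length(p_a) + length(p_b_given_a))) < 1e-12

# Bayes in length form: L(B|A) = L(A and B) - L(A)
l_b_given_a = length(p_ab) - length(p_a)
print(l_b_given_a)   # 1.0 bit, i.e. P(B|A) = 0.5
```

So receiving the description of A followed by the description of B given A conveys exactly the information describing A and B together.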
Smaller information length means higher probability. However, the information needed to describe the curve must also be considered. The total information for a curve with many terms may be greater than for a curve with fewer terms that has a worse fit but needs less information to describe the polynomial. Solomonoff's theory of inductive inference is a form of inductive inference. A bit string x is observed. Then consider all programs that generate strings starting with x. Cast in the form of inductive inference, the programs are theories that imply the observation of the bit string x. The method used here to give probabilities for inductive inference is based on Solomonoff's theory of inductive inference. If all the bits are 1, then people infer that there is a bias in the coin and that it is more likely that the next bit is 1 also. This is described as learning from, or detecting a pattern in, the data. Such a pattern may be represented by a computer program. A short computer program may be written that produces a series of bits which are all 1. If the length of the program K is {\displaystyle L(K)} bits then its prior probability is {\displaystyle 2^{-L(K)}.} The length of the shortest program that represents the string of bits is called the Kolmogorov complexity. Kolmogorov complexity is not computable. This is related to the halting problem. When searching for the shortest program some programs may go into an infinite loop. The Greek philosopher Epicurus is quoted as saying "If more than one theory is consistent with the observations, keep all theories".[5] As in a crime novel all theories must be considered in determining the likely murderer, so with inductive probability all programs must be considered in determining the likely future bits arising from the stream of bits. Programs that are already longer than n have no predictive power. The raw (or prior) probability that the pattern of bits is random (has no pattern) is {\displaystyle 2^{-n}}.
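The trade-off between model cost and fit cost can be illustrated with a two-part description length: a theory is preferred when L(theory) + L(data given theory) is smaller than encoding the data literally. A sketch in Python; the observed bits and the 6-bit cost of stating the bias parameter are hypothetical:

```python
import math

# F is a hypothetical observed bit string; the theory H says the bits
# are i.i.d. with P(1) = q, estimated from the data.
F = "11111111111111111011"
n, ones = len(F), F.count("1")

def code_length(q):
    """Bits needed to encode F under a Bernoulli(q) model (Shannon code)."""
    return sum(-math.log2(q if b == "1" else 1 - q) for b in F)

L_literal = float(n)       # no theory: one bit per observed bit
L_model = 6.0              # assumed cost of stating q (illustrative)
L_biased = L_model + code_length(ones / n)

# The biased-coin theory compresses the data, so it is the better
# explanation; the saving in bits is the evidence for it.
assert L_biased < L_literal
print(L_literal - L_biased)
```

A theory whose own description cost exceeds the bits it saves on the data would be rejected, which is how this framework penalises overfitting.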
Each program that produces the sequence of bits, but is shorter than n, is a theory/pattern about the bits with a probability of {\displaystyle 2^{-k}} where k is the length of the program. The probability of receiving a sequence of bits y after receiving a series of bits x is then the conditional probability of receiving y given x, which is the probability of x with y appended, divided by the probability of x.[6][7][8] The programming language affects the predictions of the next bit in the string. The language acts as a prior probability. This is particularly a problem where the programming language codes for numbers and other data types. Intuitively we think that 0 and 1 are simple numbers, and that prime numbers are somehow more complex than numbers that may be composite. Using the Kolmogorov complexity gives an unbiased estimate (a universal prior) of the prior probability of a number. As a thought experiment, an intelligent agent may be fitted with a data input device giving a series of numbers, after applying some transformation function to the raw numbers. Another agent might have the same input device with a different transformation function. The agents do not see or know about these transformation functions. Then there appears no rational basis for preferring one function over another. A universal prior ensures that although two agents may have different initial probability distributions for the data input, the difference will be bounded by a constant. So universal priors do not eliminate an initial bias, but they reduce and limit it. Whenever we describe an event in a language, either using a natural language or other, the language has encoded in it our prior expectations. So some reliance on prior probabilities is inevitable. A problem arises where an intelligent agent's prior expectations interact with the environment to form a self-reinforcing feedback loop. This is the problem of bias or prejudice. Universal priors reduce but do not eliminate this problem.
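The rule P(y after x) = P(x with y appended) / P(x) can be sketched with a toy mixture of "programs", each weighted 2^-k by its assumed length k. The program names and lengths below are hypothetical, standing in for real program enumeration, which is uncomputable:

```python
# Toy "programs": each hypothetical generator emits an infinite bit
# string; its prior weight is 2^-k for an assumed program length k.
programs = {
    "all_ones":  (3, lambda i: 1),             # short program
    "alternate": (5, lambda i: (i + 1) % 2),   # 1,0,1,0,... slightly longer
    "all_zeros": (3, lambda i: 0),
}

def p_string(x):
    """Mixture probability that the source starts with bit string x."""
    total = 0.0
    for k, gen in programs.values():
        if all(gen(i) == b for i, b in enumerate(x)):
            total += 2.0 ** -k
    return total

x = [1]
# Conditional probability of the next bit being 1: P(x + [1]) / P(x).
p_next_one = p_string(x + [1]) / p_string(x)
print(p_next_one)   # 0.8: "all_ones" outweighs "alternate"
```

After observing a single 1, both the all-ones and alternating theories survive, but the shorter program dominates the prediction, exactly as the 2^-k weighting dictates.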
The theory of universal artificial intelligence applies decision theory to inductive probabilities. The theory shows how the best actions to optimize a reward function may be chosen. The result is a theoretical model of intelligence.[9] It is a fundamental theory of intelligence, which optimizes the agent's behavior in, In general no agent will always provide the best actions in all situations. A particular choice made by an agent may be wrong, and the environment may provide no way for the agent to recover from an initial bad choice. However, the agent is Pareto optimal in the sense that no other agent will do better than this agent in this environment, without doing worse in another environment. No other agent may, in this sense, be said to be better. At present the theory is limited by incomputability (the halting problem). Approximations may be used to avoid this. Processing speed and combinatorial explosion remain the primary limiting factors for artificial intelligence. Probability is the representation of uncertain or partial knowledge about the truth of statements. Probabilities are subjective and personal estimates of likely outcomes based on past experience and inferences made from the data. This description of probability may seem strange at first. In natural language we refer to "the probability" that the sun will rise tomorrow. We do not refer to "your probability" that the sun will rise. But in order for inference to be correctly modeled, probability must be personal, and the act of inference generates new posterior probabilities from prior probabilities. Probabilities are personal because they are conditional on the knowledge of the individual. Probabilities are subjective because they always depend, to some extent, on prior probabilities assigned by the individual. Subjective should not be taken here to mean vague or undefined. The term intelligent agent is used to refer to the holder of the probabilities. The intelligent agent may be a human or a machine.
If the intelligent agent does not interact with the environment then the probability will converge over time to the frequency of the event. If however the agent uses the probability to interact with the environment, there may be a feedback, so that two agents in the identical environment starting with only slightly different priors end up with completely different probabilities. In this case optimal decision theory, as in Marcus Hutter's Universal Artificial Intelligence, will give Pareto optimal performance for the agent. This means that no other intelligent agent could do better in one environment without doing worse in another environment. In deductive probability theories, probabilities are absolutes, independent of the individual making the assessment. But deductive probabilities are based on, For example, in a trial the participants are aware of the outcomes of all the previous trials. They also assume that each outcome is equally probable. Together this allows a single unconditional value of probability to be defined. But in reality each individual does not have the same information. And in general the probability of each outcome is not equal. The dice may be loaded, and this loading needs to be inferred from the data. The principle of indifference has played a key role in probability theory. It says that if N statements are symmetric so that one condition cannot be preferred over another then all statements are equally probable.[10] Taken seriously, in evaluating probability this principle leads to contradictions. Suppose there are 3 bags of gold in the distance and one is asked to select one. Then because of the distance one cannot see the bag sizes. Using the principle of indifference one estimates that each bag has an equal amount of gold: each bag has one third of the gold. Now, while one is not looking, another person takes one of the bags and divides it into 3 bags. Now there are 5 bags of gold.
The principle of indifference now says each bag has one fifth of the gold. A bag that was estimated to have one third of the gold is now estimated to have one fifth of the gold. Taken as a value associated with the bag, the values are different and therefore contradictory. But taken as an estimate given under a particular scenario, both values are separate estimates given under different circumstances and there is no reason to believe they are equal. Estimates of prior probabilities are particularly suspect. Estimates will be constructed that do not follow any consistent frequency distribution. For this reason prior probabilities are considered as estimates of probabilities rather than probabilities. A full theoretical treatment would associate with each probability, Inductive probability combines two different approaches to probability. Each approach gives a slightly different viewpoint. Information theory is used in relating probabilities to quantities of information. This approach is often used in giving estimates of prior probabilities. Frequentist probability defines probabilities as objective statements about how often an event occurs. This approach may be stretched by defining the trials to be over possible worlds. Statements about possible worlds define events. Whereas logic represents only two values, true and false, as the values of a statement, probability associates a number in [0,1] to each statement. If the probability of a statement is 0, the statement is false. If the probability of a statement is 1, the statement is true. In considering some data as a string of bits, the prior probabilities of 1 and 0 are equal. Therefore, each extra bit halves the probability of a sequence of bits. This leads to the conclusion that {\displaystyle P(x)=2^{-L(x)},} where {\displaystyle P(x)} is the probability of the string of bits {\displaystyle x} and {\displaystyle L(x)} is its length.
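The halving rule P(x) = 2^(-L(x)) can be checked directly: each additional bit halves the prior probability, and the 2^n strings of a fixed length n share a total probability of 1. A short sketch:

```python
# Prior probability of a bit string: each extra bit halves it.
def prior(x):
    """P(x) = 2^(-L(x)) for a bit string x."""
    return 2.0 ** -len(x)

# One extra bit halves the probability.
assert prior("1") == 2 * prior("10")

# The 2^n strings of length n together have probability 1.
n = 4
total = sum(prior(format(i, "04b")) for i in range(2 ** n))
print(total)   # 1.0
```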
The prior probability of any statement is calculated from the number of bits needed to state it. See also information theory. Two statements {\displaystyle A} and {\displaystyle B} may be represented by two separate encodings. Then the length of the encoding is {\displaystyle L(A\land B)=L(A)+L(B),} or in terms of probability, {\displaystyle P(A\land B)=P(A)\,P(B).} But this law is not always true, because there may be a shorter method of encoding {\displaystyle B} if we assume {\displaystyle A}. So the above probability law applies only if {\displaystyle A} and {\displaystyle B} are "independent". The primary use of the information approach to probability is to provide estimates of the complexity of statements. Recall that Occam's razor states that "All things being equal, the simplest theory is the most likely to be correct". In order to apply this rule, first there needs to be a definition of what "simplest" means. Information theory defines simplest to mean having the shortest encoding. Knowledge is represented as statements. Each statement is a Boolean expression. Expressions are encoded by a function that takes a description (as against the value) of the expression and encodes it as a bit string. The length of the encoding of a statement gives an estimate of the probability of a statement. This probability estimate will often be used as the prior probability of a statement. Technically this estimate is not a probability, because it is not constructed from a frequency distribution. The probability estimates given by it do not always obey the law of total probability. Applying the law of total probability to various scenarios will usually give a more accurate estimate of the prior probability than the estimate from the length of the statement. An expression is constructed from sub-expressions, A Huffman code must distinguish the 3 cases. The length of each code is based on the frequency of each type of sub-expression. Initially constants are all assigned the same length/probability.
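The caveat about independence can be made concrete: when B depends on A, encoding B given A is shorter than encoding the two statements separately. A sketch with an invented joint distribution:

```python
import math

def L(p):
    """Message length in bits for probability p."""
    return -math.log2(p)

# Hypothetical joint distribution over two binary statements A and B.
# They are dependent: B is usually true when A is true.
p = {(True, True): 0.4, (True, False): 0.1,
     (False, True): 0.1, (False, False): 0.4}

p_a = p[(True, True)] + p[(True, False)]    # marginal P(A) = 0.5
p_b = p[(True, True)] + p[(False, True)]    # marginal P(B) = 0.5

# Separate encodings implicitly assume independence: L(A) + L(B).
separate = L(p_a) + L(p_b)                  # 2 bits
# A joint encoding exploits the dependence and is shorter.
joint = L(p[(True, True)])                  # L(A and B)
assert joint < separate
print(separate - joint)   # bits saved by encoding B assuming A
```

For a truly independent pair the two quantities would coincide, which is exactly the condition under which L(A∧B) = L(A) + L(B) holds.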
Later, constants may be assigned a probability using the Huffman code based on the number of uses of the function identifier in all expressions recorded so far. In using a Huffman code the goal is to estimate probabilities, not to compress the data. The length of a function application is the length of the function identifier constant plus the sum of the sizes of the expressions for each parameter. The length of a quantifier is the length of the expression being quantified over. No explicit representation of natural numbers is given. However, natural numbers may be constructed by applying the successor function to 0, and then applying other arithmetic functions. A distribution of natural numbers is implied by this, based on the complexity of constructing each number. Rational numbers are constructed by the division of natural numbers. The simplest representation has no common factors between the numerator and the denominator. This allows the probability distribution of natural numbers to be extended to rational numbers. The probability of an event may be interpreted as the frequency of outcomes where the statement is true divided by the total number of outcomes. If the outcomes form a continuum the frequency may need to be replaced with a measure. Events are sets of outcomes. Statements may be related to events. A Boolean statement B about outcomes defines a set of outcomes b, Each probability is always associated with the state of knowledge at a particular point in the argument. Probabilities before an inference are known as prior probabilities, and probabilities after are known as posterior probabilities. Probability depends on the facts known. The truth of a fact limits the domain of outcomes to the outcomes consistent with the fact. Prior probabilities are the probabilities before a fact is known. Posterior probabilities are after a fact is known. The posterior probabilities are said to be conditional on the fact.
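The use of a Huffman code to turn frequencies of sub-expression kinds into lengths, and hence probability estimates, can be sketched as follows. The three categories and their frequencies are hypothetical:

```python
import heapq
import itertools

def huffman_lengths(freqs):
    """Code length in bits per symbol for a Huffman code built from freqs."""
    counter = itertools.count()   # tie-breaker so the heap never compares dicts
    heap = [(f, next(counter), {sym: 0}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every symbol inside them.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(counter), merged))
    return heap[0][2]

# Hypothetical frequencies of sub-expression kinds in recorded expressions.
freqs = {"constant": 8, "application": 4, "quantifier": 2}
lengths = huffman_lengths(freqs)
# Shorter code implies higher estimated probability: P ≈ 2^-length.
probs = {s: 2.0 ** -l for s, l in lengths.items()}
print(lengths, probs)
```

Here the goal is the length-to-probability mapping, not compression: the most frequent kind gets the shortest code and therefore the highest prior probability estimate.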
The probability that {\displaystyle B} is true given that {\displaystyle A} is true is written as {\displaystyle P(B|A).} All probabilities are in some sense conditional. The prior probability of {\displaystyle B} is, In the frequentist approach, probabilities are defined as the ratio of the number of outcomes within an event to the total number of outcomes. In the possible world model each possible world is an outcome, and statements about possible worlds define events. The probability of a statement being true is the number of possible worlds where the statement is true divided by the total number of possible worlds. The probability of a statement {\displaystyle A} being true about possible worlds is then, For a conditional probability, then Using symmetry this equation may be written out as Bayes' law. This law describes the relationship between prior and posterior probabilities when new facts are learnt. Written as quantities of information, Bayes' theorem becomes, Two statements A and B are said to be independent if knowing the truth of A does not change the probability of B. Mathematically this is, then Bayes' theorem reduces to, For a set of mutually exclusive possibilities {\displaystyle A_{i}}, the sum of the posterior probabilities must be 1. Substituting using Bayes' theorem gives the law of total probability. This result is used to give the extended form of Bayes' theorem, This is the usual form of Bayes' theorem used in practice, because it guarantees the sum of all the posterior probabilities for {\displaystyle A_{i}} is 1. For mutually exclusive possibilities, the probabilities add. Using Then the alternatives are all mutually exclusive. Also, so, putting it all together, As, then Implication is related to conditional probability by the following equation, Derivation, Bayes' theorem may be used to estimate the probability of a hypothesis or theory H, given some facts F.
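The extended form of Bayes' theorem can be sketched numerically: dividing each likelihood-weighted prior by their sum (the law of total probability) forces the posteriors over mutually exclusive alternatives to sum to 1. The coin scenario and numbers are invented for illustration:

```python
# Extended form of Bayes' theorem over mutually exclusive hypotheses:
#   P(A_i | B) = P(B | A_i) P(A_i) / sum_j P(B | A_j) P(A_j)
# Hypothetical scenario: three coin types, observation B = "heads".
priors      = {"fair": 0.5, "two_headed": 0.1, "biased": 0.4}
likelihoods = {"fair": 0.5, "two_headed": 1.0, "biased": 0.7}

# Denominator: law of total probability.
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posterior = {h: likelihoods[h] * priors[h] / evidence for h in priors}

# The normalisation guarantees the posteriors sum to 1.
assert abs(sum(posterior.values()) - 1.0) < 1e-12
print(posterior)
```

Observing heads shifts belief toward the hypotheses that predicted it most strongly, while the denominator keeps the distribution proper.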
The posterior probability of H is then or in terms of information, By assuming the hypothesis is true, a simpler representation of the statement F may be given. The length of the encoding of this simpler representation is {\displaystyle L(F|H).} {\displaystyle L(H)+L(F|H)} represents the amount of information needed to represent the facts F, if H is true. {\displaystyle L(F)} is the amount of information needed to represent F without the hypothesis H. The difference is how much the representation of the facts has been compressed by assuming that H is true. This is the evidence that the hypothesis H is true. If {\displaystyle L(F)} is estimated from encoding length then the probability obtained will not be between 0 and 1. The value obtained is proportional to the probability, without being a good probability estimate. The number obtained is sometimes referred to as a relative probability, being how much more probable the theory is than not holding the theory. If a full set of mutually exclusive hypotheses that provide evidence is known, a proper estimate may be given for the prior probability {\displaystyle P(F)}. Probabilities may be calculated from the extended form of Bayes' theorem. Given all mutually exclusive hypotheses {\displaystyle H_{i}} which give evidence, such that, and also the hypothesis R, that none of the hypotheses is true, then, In terms of information, In most situations it is a good approximation to assume that {\displaystyle F} is independent of {\displaystyle R}, which means {\displaystyle P(F|R)=P(F)}, giving, Abductive inference[11][12][13][14] starts with a set of facts F which is a statement (Boolean expression). Abductive reasoning is of the form, The theory T, also called an explanation of the condition F, is an answer to the ubiquitous factual "why" question. For example, for the condition F "apples fall", the question is "Why do apples fall?".
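The evidence L(F) − (L(H) + L(F|H)) and the resulting relative probabilities can be sketched numerically. All the lengths below are invented, and for simplicity the normalisation runs only over the listed theories, omitting the catch-all hypothesis R:

```python
# Relative probability of hypotheses from encoding lengths (in bits):
#   evidence(H) = L(F) - (L(H) + L(F|H)),  relative P ∝ 2^evidence
L_F = 100.0                         # hypothetical literal cost of the facts F
theories = {"H1": (10.0, 60.0),     # (L(H), L(F|H)): strong compression
            "H2": (25.0, 70.0)}     # weaker compression

relative = {h: 2.0 ** (L_F - (l_h + l_fh))
            for h, (l_h, l_fh) in theories.items()}

# Only theories with L(H) + L(F|H) < L(F) carry supporting evidence.
assert all(l_h + l_fh < L_F for l_h, l_fh in theories.values())

# Normalising over the set of theories gives proper probabilities.
total = sum(relative.values())
posterior = {h: r / total for h, r in relative.items()}
print(posterior)
```

The raw 2^evidence values are only relative probabilities; it is the normalisation over a mutually exclusive set that turns them into a proper distribution, as the text notes.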
The answer is a theory T that implies that apples fall; Inductive inference is of the form, In terms of abductive inference, all objects in a class C or set have a property P is a theory that implies the observed condition, all observed objects in a class C have a property P. So inductive inference is a general case of abductive inference. In common usage the term inductive inference is often used to refer to both abductive and inductive inference. Inductive inference is related to generalization. Generalizations may be formed from statements by replacing a specific value with membership of a category, or by replacing membership of a category with membership of a broader category. In deductive logic, generalization is a powerful method of generating new theories that may be true. In inductive inference generalization generates theories that have a probability of being true. The opposite of generalization is specialization. Specialization is used in applying a general rule to a specific case. Specializations are created from generalizations by replacing membership of a category by a specific value, or by replacing a category with a subcategory. The Linnaean classification of living things and objects forms the basis for generalization and specification. The ability to identify, recognize and classify is the basis for generalization. Perceiving the world as a collection of objects appears to be a key aspect of human intelligence. It is the object oriented model, in the non-computer science sense. The object oriented model is constructed from our perception. In particular, vision is based on the ability to compare two images and calculate how much information is needed to morph or map one image into another. Computer vision uses this mapping to construct 3D images from stereo image pairs. Inductive logic programming is a means of constructing a theory that implies a condition.
Plotkin's[15][16] "relative least general generalization (rlgg)" approach constructs the simplest generalization consistent with the condition. Isaac Newton used inductive arguments in constructing his law of universal gravitation.[17] Starting with the statement, Generalizing by replacing apple for object, and Earth for object gives, in a two body system, The theory explains all objects falling, so there is strong evidence for it. The second observation, After some complicated mathematical calculus, it can be seen that if the acceleration follows the inverse square law then objects will follow an ellipse. So induction gives evidence for the inverse square law. Using Galileo's observation that all objects drop with the same speed, where {\displaystyle i_{1}} and {\displaystyle i_{2}} are vectors towards the center of the other object. Then using Newton's third law {\displaystyle F_{1}=-F_{2}}, Implication determines conditional probability as, So, This result may be used in the probabilities given for Bayesian hypothesis testing. For a single theory, H = T and, or in terms of information, the relative probability is, Note that this estimate for P(T|F) is not a true probability. If {\displaystyle L(T_{i})<L(F)} then the theory has evidence to support it. Then for a set of theories {\displaystyle T_{i}=H_{i}}, such that {\displaystyle L(T_{i})<L(F)}, giving, Make a list of all the shortest programs {\displaystyle K_{i}} that each produce a distinct infinite string of bits and satisfy the relation, where {\displaystyle R(K_{i})} is the result of running the program {\displaystyle K_{i}} and {\displaystyle T_{n}} truncates the string after n bits. The problem is to calculate the probability that the source is produced by program {\displaystyle K_{i},} given that the truncated source after n bits is x. This is represented by the conditional probability, Using the extended form of Bayes' theorem, The extended form relies on the law of total probability.
This means that the {\displaystyle s=R(K_{i})} must be distinct possibilities, which is given by the condition that each {\displaystyle K_{i}} produce a different infinite string. Also one of the conditions {\displaystyle s=R(K_{i})} must be true. This must be true, as in the limit as {\displaystyle n\to \infty ,} there is always at least one program that produces {\displaystyle T_{n}(s)}. As {\displaystyle K_{i}} are chosen so that {\displaystyle T_{n}(R(K_{i}))=x,} then, The a priori probability of the string being produced from the program, given no information about the string, is based on the size of the program, giving, Programs that are the same length as or longer than the length of x provide no predictive power. Separate them out, giving, Then identify the two probabilities as, But the prior probability that x is a random set of bits is {\displaystyle 2^{-n}}. So, The probability that the source is random, or unpredictable, is, A model of how worlds are constructed is used in determining the probabilities of theories, If w is the bit string then the world is created such that {\displaystyle R(w)} is true. An intelligent agent has some facts about the world, represented by the bit string c, which gives the condition, The set of bit strings identical with any condition x is {\displaystyle E(x)}. A theory is a simpler condition that explains (or implies) C. The set of all such theories is called T, The extended form of Bayes' theorem may be applied, where, To apply Bayes' theorem the following must hold: {\displaystyle A_{i}} is a partition of the event space. For {\displaystyle T(C)} to be a partition, no bit string n may belong to two theories. To prove this, assume they can and derive a contradiction, Secondly, prove that T includes all outcomes consistent with the condition. As all theories consistent with C are included, {\displaystyle R(w)} must be in this set.
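The split between "patterned" and "random" explanations of an observed string mirrors the derivation above: the 2^-k weights of the short programs compete against the 2^-n weight of the literal, patternless explanation. A numerical sketch with hypothetical program lengths:

```python
# Probability that a bit string x of length n is "random" (patternless),
# given hypothetical short programs of lengths k_i that reproduce x.
n = 16                        # number of observed bits
program_lengths = [5, 9]      # assumed lengths of programs producing x

patterned = sum(2.0 ** -k for k in program_lengths)
random_weight = 2.0 ** -n     # prior weight of the literal explanation

p_random = random_weight / (patterned + random_weight)
p_programs = patterned / (patterned + random_weight)

# The two cases are exhaustive, so their probabilities sum to 1.
assert abs(p_random + p_programs - 1.0) < 1e-12
print(p_random)   # tiny: the short programs dominate the explanation
```

If no program shorter than n existed, `patterned` would be 0 and the string would be judged random with probability 1.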
So Bayes' theorem may be applied as specified, giving, Using the implication and condition probability law, the definition of {\displaystyle T(C)} implies, The probability of each theory in T is given by, so, Finally the probabilities of the events may be identified with the probabilities of the condition which the outcomes in the event satisfy, giving This is the probability of the theory t after observing that the condition C holds. Theories that are less probable than the condition C have no predictive power. Separate them out, giving, The probability of the theories without predictive power on C is the same as the probability of C. So, So the probability and the probability of no prediction for C, written as {\displaystyle \operatorname {random} (C)}, The probability of a condition was given as, Bit strings for theories that are more complex than the bit string given to the agent as input have no predictive power. Their probabilities are better included in the random case. To implement this a new definition is given as F in, Using F, an improved version of the abductive probabilities is,
https://en.wikipedia.org/wiki/Inductive_probability
Information field theory (IFT) is a Bayesian statistical field theory relating to signal reconstruction, cosmography, and other related areas.[1][2] IFT summarizes the information available on a physical field using Bayesian probabilities. It uses computational techniques developed for quantum field theory and statistical field theory to handle the infinite number of degrees of freedom of a field and to derive algorithms for the calculation of field expectation values. For example, the posterior expectation value of a field generated by a known Gaussian process and measured by a linear device with known Gaussian noise statistics is given by a generalized Wiener filter applied to the measured data. IFT extends such known filter formulas to situations with nonlinear physics, nonlinear devices, non-Gaussian field or noise statistics, dependence of the noise statistics on the field values, and partly unknown parameters of measurement. For this it uses Feynman diagrams, renormalisation flow equations, and other methods from mathematical physics.[3] Fields play an important role in science, technology, and economy. They describe the spatial variations of a quantity, like the air temperature, as a function of position. Knowing the configuration of a field can be of great value. Measurements of fields, however, can never provide the precise field configuration with certainty. Physical fields have an infinite number of degrees of freedom, but the data generated by any measurement device is always finite, providing only a finite number of constraints on the field. Thus, an unambiguous deduction of such a field from measurement data alone is impossible and only probabilistic inference remains as a means to make statements about the field. Fortunately, physical fields exhibit correlations and often follow known physical laws. Such information is best fused into the field inference in order to overcome the mismatch of field degrees of freedom to measurement points.
To handle this, an information theory for fields is needed, and that is what information field theory is. {\displaystyle s(x)} is a field value at a location {\displaystyle x\in \Omega } in a space {\displaystyle \Omega }. The prior knowledge about the unknown signal field {\displaystyle s} is encoded in the probability distribution {\displaystyle {\mathcal {P}}(s)}. The data {\displaystyle d} provides additional information on {\displaystyle s} via the likelihood {\displaystyle {\mathcal {P}}(d|s)} that gets incorporated into the posterior probability {\displaystyle {\mathcal {P}}(s|d)={\frac {{\mathcal {P}}(d|s)\,{\mathcal {P}}(s)}{{\mathcal {P}}(d)}}} according to Bayes' theorem.[4] In IFT Bayes' theorem is usually rewritten in the language of a statistical field theory, {\displaystyle {\mathcal {P}}(s|d)={\frac {{\mathcal {P}}(d,s)}{{\mathcal {P}}(d)}}\equiv {\frac {e^{-{\mathcal {H}}(d,s)}}{{\mathcal {Z}}(d)}},} with the information Hamiltonian defined as {\displaystyle {\mathcal {H}}(d,s)\equiv -\ln {\mathcal {P}}(d,s)=-\ln {\mathcal {P}}(d|s)-\ln {\mathcal {P}}(s)\equiv {\mathcal {H}}(d|s)+{\mathcal {H}}(s),} the negative logarithm of the joint probability of data and signal, and with the partition function being {\displaystyle {\mathcal {Z}}(d)\equiv {\mathcal {P}}(d)=\int {\mathcal {D}}s\,{\mathcal {P}}(d,s).} This reformulation of Bayes' theorem permits the usage of methods of mathematical physics developed for the treatment of statistical field theories and quantum field theories. As fields have an infinite number of degrees of freedom, the definition of probabilities over spaces of field configurations has subtleties. Identifying physical fields as elements of function spaces provides the problem that no Lebesgue measure is defined over the latter, and therefore probability densities cannot be defined there.
However, physical fields have much more regularity than most elements of function spaces, as they are continuous and smooth at most of their locations. Therefore, less general, but sufficiently flexible constructions can be used to handle the infinite number of degrees of freedom of a field. A pragmatic approach is to regard the field as discretized in terms of pixels. Each pixel carries a single field value that is assumed to be constant within the pixel volume. All statements about the continuous field then have to be cast into this pixel representation. This way, one deals with finite-dimensional field spaces, over which probability densities are well definable. In order for this description to be a proper field theory, it is further required that the pixel resolution Δx{\displaystyle \Delta x} can always be refined, while expectation values of the discretized field sΔx{\displaystyle s_{\Delta x}} converge to finite values: ⟨f(s)⟩(s|d)≡limΔx→0∫dsΔxf(sΔx)P(sΔx).{\displaystyle \langle f(s)\rangle _{(s|d)}\equiv \lim _{\Delta x\rightarrow 0}\int ds_{\Delta x}f(s_{\Delta x})\,{\mathcal {P}}(s_{\Delta x}).} If this limit exists, one can talk about the field configuration space integral or path integral ⟨f(s)⟩(s|d)≡∫Dsf(s)P(s),{\displaystyle \langle f(s)\rangle _{(s|d)}\equiv \int {\mathcal {D}}s\,f(s)\,{\mathcal {P}}(s),} irrespective of the resolution at which it might be evaluated numerically.
The simplest prior for a field is that of a zero meanGaussian probability distributionP(s)=G(s,S)≡1|2πS|e−12s†S−1s.{\displaystyle {\mathcal {P}}(s)={\mathcal {G}}(s,S)\equiv {\frac {1}{\sqrt {|2\pi S|}}}e^{-{\frac {1}{2}}\,s^{\dagger }S^{-1}\,s}.}The determinant in the denominator might be ill-defined in thecontinuum limitΔx→0{\displaystyle \Delta x\rightarrow 0}, however, all what is necessary for IFT to be consistent is that this determinant can be estimated for any finite resolution field representation withΔx>0{\displaystyle \Delta x>0}and that this permits the calculation of convergent expectation values. A Gaussian probability distribution requires the specification of the field two point correlation functionS≡⟨ss†⟩(s){\displaystyle S\equiv \langle s\,s^{\dagger }\rangle _{(s)}}with coefficientsSxy≡⟨s(x)s(y)¯⟩(s){\displaystyle S_{xy}\equiv \langle s(x)\,{\overline {s(y)}}\rangle _{(s)}}and a scalar product for continuous fieldsa†b≡∫Ωdxa(x)¯b(x),{\displaystyle a^{\dagger }b\equiv \int _{\Omega }dx\,{\overline {a(x)}}\,b(x),}with respect to which the inverse signal field covarianceS−1{\displaystyle S^{-1}}is constructed,i.e.(S−1S)xy≡∫Ωdz(S−1)xzSzy=1xy≡δ(x−y).{\displaystyle (S^{-1}S)_{xy}\equiv \int _{\Omega }dz\,(S^{-1})_{xz}S_{zy}=\mathbb {1} _{xy}\equiv \delta (x-y).} The corresponding prior information Hamiltonian readsH(s)=−ln⁡G(s,S)=12s†S−1s+12ln⁡|2πS|.{\displaystyle {\mathcal {H}}(s)=-\ln {\mathcal {G}}(s,S)={\frac {1}{2}}\,s^{\dagger }S^{-1}\,s+{\frac {1}{2}}\,\ln |2\pi S|.} The measurement datad{\displaystyle d}was generated with the likelihoodP(d|s){\displaystyle {\mathcal {P}}(d|s)}. In case the instrument was linear, a measurement equation of the formd=Rs+n{\displaystyle d=R\,s+n}can be given, in whichR{\displaystyle R}is the instrument response, which describes how the data on average reacts to the signal, andn{\displaystyle n}is the noise, simply the difference between datad{\displaystyle d}and linear signal responseRs{\displaystyle R\,s}. 
The response translates the infinite dimensional signal vector into the finite dimensional data space. In components this readsdi=∫ΩdxRixsx+ni,{\displaystyle d_{i}=\int _{\Omega }dx\,R_{ix}\,s_{x}+n_{i},} where a vector component notation was also introduced for signal and data vectors. If the noise follows a signal independent zero mean Gaussian statistics with covarianceN{\displaystyle N},P(n|s)=G(n,N),{\displaystyle {\mathcal {P}}(n|s)={\mathcal {G}}(n,N),}then the likelihood is Gaussian as well,P(d|s)=G(d−Rs,N),{\displaystyle {\mathcal {P}}(d|s)={\mathcal {G}}(d-R\,s,N),}and the likelihood information Hamiltonian isH(d|s)=−ln⁡G(d−Rs,N)=12(d−Rs)†N−1(d−Rs)+12ln⁡|2πN|.{\displaystyle {\mathcal {H}}(d|s)=-\ln {\mathcal {G}}(d-R\,s,N)={\frac {1}{2}}\,(d-R\,s)^{\dagger }N^{-1}\,(d-R\,s)+{\frac {1}{2}}\,\ln |2\pi N|.}A linear measurement of a Gaussian signal, subject to Gaussian and signal-independent noise leads to a free IFT. The joint information Hamiltonian of the Gaussian scenario described above isH(d,s)=H(d|s)+H(s)=^12(d−Rs)†N−1(d−Rs)+12s†S−1s=^12[s†(S−1+R†N−1R)⏟D−1s−s†R†N−1d⏟j−d†N−1R⏟j†s]≡12[s†D−1s−s†j−j†s]=12[s†D−1s−s†D−1Dj⏟m−j†D⏟m†D−1s]=^12(s−m)†D−1(s−m),{\displaystyle {\begin{aligned}{\mathcal {H}}(d,s)&={\mathcal {H}}(d|s)+{\mathcal {H}}(s)\\&{\widehat {=}}{\frac {1}{2}}\,(d-R\,s)^{\dagger }N^{-1}\,(d-R\,s)+{\frac {1}{2}}\,s^{\dagger }S^{-1}\,s\\&{\widehat {=}}{\frac {1}{2}}\,\left[s^{\dagger }\underbrace {(S^{-1}+R^{\dagger }N^{-1}R)} _{D^{-1}}\,s-s^{\dagger }\underbrace {R^{\dagger }N^{-1}d} _{j}-\underbrace {d^{\dagger }N^{-1}R} _{j^{\dagger }}\,s\right]\\&\equiv {\frac {1}{2}}\,\left[s^{\dagger }D^{-1}s-s^{\dagger }j-j^{\dagger }s\right]\\&={\frac {1}{2}}\,\left[s^{\dagger }D^{-1}s-s^{\dagger }D^{-1}\underbrace {D\,j} _{m}-\underbrace {j^{\dagger }D} _{m^{\dagger }}\,D^{-1}s\right]\\&{\widehat {=}}{\frac {1}{2}}\,(s-m)^{\dagger }D^{-1}(s-m),\end{aligned}}}where=^{\displaystyle {\widehat {=}}}denotes equality up to irrelevant constants, which, in this 
case, means expressions that are independent ofs{\displaystyle s}. From this it is clear, that the posterior must be a Gaussian with meanm{\displaystyle m}and varianceD{\displaystyle D},P(s|d)∝e−H(d,s)∝e−12(s−m)†D−1(s−m)∝G(s−m,D){\displaystyle {\mathcal {P}}(s|d)\propto e^{-{\mathcal {H}}(d,s)}\propto e^{-{\frac {1}{2}}\,(s-m)^{\dagger }D^{-1}(s-m)}\propto {\mathcal {G}}(s-m,D)}where equality between the right and left hand sides holds as both distributions are normalized,∫DsP(s|d)=1=∫DsG(s−m,D){\displaystyle \int {\mathcal {D}}s\,{\mathcal {P}}(s|d)=1=\int {\mathcal {D}}s\,{\mathcal {G}}(s-m,D)}. The posterior meanm=Dj=(S−1+R†N−1R)−1R†N−1d{\displaystyle m=D\,j=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}R^{\dagger }N^{-1}d}is also known as the generalizedWiener filtersolution and the uncertainty covarianceD=(S−1+R†N−1R)−1{\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}as the Wiener variance. In IFT,j=R†N−1d{\displaystyle j=R^{\dagger }N^{-1}d}is called the information source, as it acts as a source term to excite the field (knowledge), andD{\displaystyle D}the information propagator, as it propagates information from one location to another inmx=∫ΩdyDxyjy.{\displaystyle m_{x}=\int _{\Omega }dy\,D_{xy}j_{y}.} If any of the assumptions that lead to the free theory is violated, IFT becomes an interacting theory, with terms that are of higher than quadratic order in the signal field. This happens when the signal or the noise are not following Gaussian statistics, when the response is non-linear, when the noise depends on the signal, or when response or covariances are uncertain. 
In this case, the information Hamiltonian might be expandable in aTaylor-Fréchetseries, H(d,s)=12s†D−1s−j†s+H0⏟=Hfree(d,s)+∑n=3∞1n!Λx1...xn(n)sx1...sxn⏟=Hint(d,s),{\displaystyle {\mathcal {H}}(d,\,s)=\underbrace {{\frac {1}{2}}s^{\dagger }D^{-1}s-j^{\dagger }s+{\mathcal {H}}_{0}} _{={\mathcal {H}}_{\text{free}}(d,\,s)}+\underbrace {\sum _{n=3}^{\infty }{\frac {1}{n!}}\Lambda _{x_{1}...x_{n}}^{(n)}s_{x_{1}}...s_{x_{n}}} _{={\mathcal {H}}_{\text{int}}(d,\,s)},}whereHfree(d,s){\displaystyle {\mathcal {H}}_{\text{free}}(d,\,s)}is the free Hamiltonian, which alone would lead to a Gaussian posterior, andHint(d,s){\displaystyle {\mathcal {H}}_{\text{int}}(d,\,s)}is the interacting Hamiltonian, which encodes non-Gaussian corrections. The first and second order Taylor coefficients are often identified with the (negative) information source−j{\displaystyle -j}and information propagatorD{\displaystyle D}, respectively. The higher coefficientsΛx1...xn(n){\displaystyle \Lambda _{x_{1}...x_{n}}^{(n)}}are associated with non-linear self-interactions. The classical fieldscl{\displaystyle s_{\text{cl}}}minimizes the information Hamiltonian,∂H(d,s)∂s|s=scl=0,{\displaystyle \left.{\frac {\partial {\mathcal {H}}(d,s)}{\partial s}}\right|_{s=s_{\text{cl}}}=0,}and therefore maximizes the posterior:∂P(s|d)∂s|s=scl=∂∂se−H(d,s)Z(d)|s=scl=−P(d,s)∂H(d,s)∂s|s=scl⏟=0=0{\displaystyle \left.{\frac {\partial {\mathcal {P}}(s|d)}{\partial s}}\right|_{s=s_{\text{cl}}}=\left.{\frac {\partial }{\partial s}}\,{\frac {e^{-{\mathcal {H}}(d,s)}}{{\mathcal {Z}}(d)}}\right|_{s=s_{\text{cl}}}=-{\mathcal {P}}(d,s)\,\underbrace {\left.{\frac {\partial {\mathcal {H}}(d,s)}{\partial s}}\right|_{s=s_{\text{cl}}}} _{=0}=0}The classical fieldscl{\displaystyle s_{\text{cl}}}is therefore themaximum a posteriori estimatorof the field inference problem. The Wiener filter problem requires the two point correlationS≡⟨ss†⟩(s){\displaystyle S\equiv \langle s\,s^{\dagger }\rangle _{(s)}}of a field to be known. 
If it is unknown, it has to be inferred along with the field itself. This requires the specification of ahyperpriorP(S){\displaystyle {\mathcal {P}}(S)}. Often, statistical homogeneity (translation invariance) can be assumed, implying thatS{\displaystyle S}is diagonal inFourier space(forΩ=Ru{\displaystyle \Omega =\mathbb {R} ^{u}}being au{\displaystyle u}dimensionalCartesian space). In this case, only the Fourier space power spectrumPs(k→){\displaystyle P_{s}({\vec {k}})}needs to be inferred. Given a further assumption of statistical isotropy, this spectrum depends only on the lengthk=|k→|{\displaystyle k=|{\vec {k}}|}of the Fourier vectork→{\displaystyle {\vec {k}}}and only a one dimensional spectrumPs(k){\displaystyle P_{s}(k)}has to be determined. The prior field covariance reads then in Fourier space coordinatesSk→q→=(2π)uδ(k→−q→)Ps(k){\displaystyle S_{{\vec {k}}{\vec {q}}}=(2\pi )^{u}\delta ({\vec {k}}-{\vec {q}})\,P_{s}(k)}. If the prior onPs(k){\displaystyle P_{s}(k)}is flat, the joint probability of data and spectrum isP(d,Ps)=∫DsP(d,s,Ps)=∫DsP(d|s,Ps)P(s|Ps)P(Ps)∝∫DsG(d−Rs,N)G(s,S)∝1|S|12∫Dsexp⁡[−12(s†D−1s−j†s−s†j)]∝|D|12|S|12exp⁡[12j†Dj],{\displaystyle {\begin{aligned}{\mathcal {P}}(d,P_{s})&=\int {\mathcal {D}}s\,{\mathcal {P}}(d,s,P_{s})\\&=\int {\mathcal {D}}s\,{\mathcal {P}}(d|s,P_{s})\,{\mathcal {P}}(s|P_{s})\,{\mathcal {P}}(P_{s})\\&\propto \int {\mathcal {D}}s\,{\mathcal {G}}(d-Rs,N)\,{\mathcal {G}}(s,S)\\&\propto {\frac {1}{|S|^{\frac {1}{2}}}}\int {\mathcal {D}}s\,\exp \left[-{\frac {1}{2}}\left(s^{\dagger }D^{-1}s-j^{\dagger }s-s^{\dagger }j\right)\right]\\&\propto {\frac {|D|^{\frac {1}{2}}}{|S|^{\frac {1}{2}}}}\exp \left[{\frac {1}{2}}j^{\dagger }D\,j\right],\end{aligned}}}where the notation of the information propagatorD=(S−1+R†N−1R)−1{\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}and sourcej=R†N−1d{\displaystyle j=R^{\dagger }N^{-1}d}of the Wiener filter problem was used again. 
The corresponding information Hamiltonian isH(d,Ps)=^12[ln⁡|SD−1|−j†Dj]=12Tr[ln⁡(SD−1)−jj†D],{\displaystyle {\mathcal {H}}(d,P_{s})\;{\widehat {=}}\;{\frac {1}{2}}\left[\ln |S\,D^{-1}|-j^{\dagger }D\,j\right]={\frac {1}{2}}\mathrm {Tr} \left[\ln \left(S\,D^{-1}\right)-j\,j^{\dagger }D\right],}where=^{\displaystyle {\widehat {=}}}denotes equality up to irrelevant constants (here: constant with respect toPs{\displaystyle P_{s}}). Minimizing this with respect toPs{\displaystyle P_{s}}, in order to get its maximum a posteriori power spectrum estimator, yields∂H(d,Ps)∂Ps(k)=12Tr[DS−1∂(SD−1)∂Ps(k)−jj†∂D∂Ps(k)]=12Tr[DS−1∂(1+SR†N−1R)∂Ps(k)+jj†D∂D−1∂Ps(k)D]=12Tr[DS−1∂S∂Ps(k)R†N−1R+mm†∂S−1∂Ps(k)]=12Tr[(R†N−1RDS−1−S−1mm†S−1)∂S∂Ps(k)]=12∫(dq2π)u∫(dq′2π)u((D−1−S−1)DS−1−S−1mm†S−1)q→q→′∂(2π)uδ(q→−q→′)Ps(q)∂Ps(k)=12∫(dq2π)u(S−1−S−1DS−1−S−1mm†S−1)q→q→δ(k−q)=12Tr{S−1[S−(D+mm†)]S−1Pk}=Tr[Pk]2Ps(k)−Tr[(D+mm†)Pk]2[Ps(k)]2=0,{\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {H}}(d,P_{s})}{\partial P_{s}(k)}}&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial \left(S\,D^{-1}\right)}{\partial P_{s}(k)}}-j\,j^{\dagger }{\frac {\partial D}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial \left(1+S\,R^{\dagger }N^{-1}R\right)}{\partial P_{s}(k)}}+j\,j^{\dagger }D\,{\frac {\partial D^{-1}}{\partial P_{s}(k)}}\,D\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial S}{\partial P_{s}(k)}}R^{\dagger }N^{-1}R+m\,m^{\dagger }\,{\frac {\partial S^{-1}}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[\left(R^{\dagger }N^{-1}R\,D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)\,{\frac {\partial S}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\int \left({\frac {dq}{2\pi }}\right)^{u}\int \left({\frac {dq'}{2\pi }}\right)^{u}\left(\left(D^{-1}-S^{-1}\right)\,D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)_{{\vec {q}}{\vec {q}}'}\,{\frac {\partial (2\pi )^{u}\delta ({\vec {q}}-{\vec {q}}')\,P_{s}(q)}{\partial 
P_{s}(k)}}\\&={\frac {1}{2}}\int \left({\frac {dq}{2\pi }}\right)^{u}\left(S^{-1}-S^{-1}D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)_{{\vec {q}}{\vec {q}}}\,\delta (k-q)\\&={\frac {1}{2}}\mathrm {Tr} \left\{S^{-1}\left[S-\left(D+m\,m^{\dagger }\right)\right]\,S^{-1}\mathbb {P} _{k}\right\}\\&={\frac {\mathrm {Tr} \left[\mathbb {P} _{k}\right]}{2\,P_{s}(k)}}-{\frac {\mathrm {Tr} \left[\left(D+m\,m^{\dagger }\right)\,\mathbb {P} _{k}\right]}{2\,\left[P_{s}(k)\right]^{2}}}=0,\end{aligned}}}where the Wiener filter meanm=Dj{\displaystyle m=D\,j}and the spectral band projector(Pk)q→q→′≡(2π)uδ(q→−q→′)δ(|q→|−k){\displaystyle (\mathbb {P} _{k})_{{\vec {q}}{\vec {q}}'}\equiv (2\pi )^{u}\delta ({\vec {q}}-{\vec {q}}')\,\delta (|{\vec {q}}|-k)}were introduced. The latter commutes withS−1{\displaystyle S^{-1}}, since(S−1)k→q→=(2π)uδ(k→−q→)[Ps(k)]−1{\displaystyle (S^{-1})_{{\vec {k}}{\vec {q}}}=(2\pi )^{u}\delta ({\vec {k}}-{\vec {q}})\,[P_{s}(k)]^{-1}}is diagonal in Fourier space. The maximum a posteriori estimator for the power spectrum is thereforePs(k)=Tr[(mm†+D)Pk]Tr[Pk].{\displaystyle P_{s}(k)={\frac {\mathrm {Tr} \left[\left(m\,m^{\dagger }+D\right)\,\mathbb {P} _{k}\right]}{\mathrm {Tr} \left[\mathbb {P} _{k}\right]}}.}It has to be calculated iteratively, asm=Dj{\displaystyle m=D\,j}andD=(S−1+R†N−1R)−1{\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}depend both onPs{\displaystyle P_{s}}themselves. In anempirical Bayesapproach, the estimatedPs{\displaystyle P_{s}}would be taken as given. As a consequence, the posterior mean estimate for the signal field is the correspondingm{\displaystyle m}and its uncertainty the correspondingD{\displaystyle D}in the empirical Bayes approximation. 
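This iterative scheme can be sketched in a per-mode toy model (an illustrative reduction with made-up numbers, assuming a unit response R = 1 and white noise, so that the propagator is diagonal in Fourier space and each band contains a single mode). Each iteration alternates a Wiener filter step with the spectrum update above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes = 128
k = np.arange(1, n_modes + 1)
P_true = 100.0 / k**3                  # power-law signal spectrum (toy choice)
N_k = 0.25                             # per-mode noise power

s_k = rng.standard_normal(n_modes) * np.sqrt(P_true)      # signal realization
d_k = s_k + rng.standard_normal(n_modes) * np.sqrt(N_k)   # data, R = 1

P = np.ones(n_modes)                   # initial spectrum guess
for _ in range(500):
    D = 1.0 / (1.0 / P + 1.0 / N_k)    # information propagator (diagonal)
    m = D * d_k / N_k                  # Wiener filter mean m = D j
    P = m**2 + D                       # P_s(k) = Tr[(m m† + D) P_k] / Tr[P_k]
```

With one mode per band, this fixed-point iteration converges to P = max(d² − N, 0): modes whose data variance exceeds the noise level retain a nonzero reconstruction, while the rest are switched off, which is the perception behaviour of the critical filter discussed next.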
The resulting non-linear filter is called the critical filter.[5] The generalization of the power spectrum estimation formula as Ps(k)=Tr[(mm†+δD)Pk]Tr[Pk]{\displaystyle P_{s}(k)={\frac {\mathrm {Tr} \left[\left(m\,m^{\dagger }+\delta \,D\right)\,\mathbb {P} _{k}\right]}{\mathrm {Tr} \left[\mathbb {P} _{k}\right]}}} exhibits a perception threshold for δ<1{\displaystyle \delta <1}, meaning that the data variance in a Fourier band has to exceed the expected noise level by a certain threshold before the signal reconstruction m{\displaystyle m} becomes non-zero for this band. Whenever the data variance exceeds this threshold only slightly, the signal reconstruction jumps to a finite excitation level, similar to a first-order phase transition in thermodynamic systems. For filters with δ=1{\displaystyle \delta =1}, perception of the signal starts continuously as soon as the data variance exceeds the noise level. The disappearance of the discontinuous perception at δ=1{\displaystyle \delta =1} is similar to a thermodynamic system passing through a critical point, hence the name critical filter. The critical filter, extensions thereof to non-linear measurements, and the inclusion of non-flat spectrum priors permitted the application of IFT to real-world signal inference problems, for which the signal covariance is usually unknown a priori. The generalized Wiener filter, which emerges in free IFT, is in broad use in signal processing. Algorithms explicitly based on IFT were derived for a number of applications. Many of them are implemented using the Numerical Information Field Theory (NIFTy) library. Many techniques from quantum field theory can be used to tackle IFT problems, like Feynman diagrams, effective actions, and the field operator formalism.
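The generalized Wiener filter of free IFT can be written out concretely for a finite-dimensional toy problem. The following sketch (dense matrices, a hypothetical smooth prior kernel, and arbitrary dimensions; not NIFTy code) builds the information source j = R†N⁻¹d and propagator D = (S⁻¹ + R†N⁻¹R)⁻¹, and computes the posterior mean m = Dj:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_data = 64, 16

# prior signal covariance S: smooth stationary kernel (illustrative choice)
x = np.arange(n_pix)
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2) + 1e-6 * np.eye(n_pix)
R = rng.standard_normal((n_data, n_pix))   # linear response R
N = 0.1 * np.eye(n_data)                   # noise covariance N

# generate data according to the measurement equation d = R s + n
s = np.linalg.cholesky(S) @ rng.standard_normal(n_pix)
d = R @ s + np.linalg.cholesky(N) @ rng.standard_normal(n_data)

j = R.T @ np.linalg.solve(N, d)                                    # information source
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.solve(N, R))  # information propagator
m = D @ j                                                          # Wiener filter mean
```

The same mean can be written in data space as m = S R†(R S R† + N)⁻¹ d, which avoids inverting S and is often preferable when the data dimension is much smaller than the pixel count.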
In case the interaction coefficientsΛ(n){\displaystyle \Lambda ^{(n)}}in aTaylor-Fréchetexpansion of the information HamiltonianH(d,s)=12s†D−1s−j†s+H0⏟=Hfree(d,s)+∑n=3∞1n!Λx1...xn(n)sx1...sxn⏟=Hint(d,s),{\displaystyle {\mathcal {H}}(d,\,s)=\underbrace {{\frac {1}{2}}s^{\dagger }D^{-1}s-j^{\dagger }s+{\mathcal {H}}_{0}} _{={\mathcal {H}}_{\text{free}}(d,\,s)}+\underbrace {\sum _{n=3}^{\infty }{\frac {1}{n!}}\Lambda _{x_{1}...x_{n}}^{(n)}s_{x_{1}}...s_{x_{n}}} _{={\mathcal {H}}_{\text{int}}(d,\,s)},}are small, the log partition function, orHelmholtz free energy,ln⁡Z(d)=ln⁡∫Dse−H(d,s)=∑c∈Cc{\displaystyle \ln {\mathcal {Z}}(d)=\ln \int {\mathcal {D}}s\,e^{-{\mathcal {H}}(d,s)}=\sum _{c\in C}c}can be expanded asymptotically in terms of these coefficients. The free Hamiltonian specifies the meanm=Dj{\displaystyle m=D\,j}and varianceD{\displaystyle D}of the Gaussian distributionG(s−m,D){\displaystyle {\mathcal {G}}(s-m,D)}over which the expansion is integrated. This leads to a sum over the setC{\displaystyle C}of all connectedFeynman diagrams. From the Helmholtz free energy, any connected moment of the field can be calculated via⟨sx1…sxn⟩(s|d)c=∂nln⁡Z∂jx1…∂jxn.{\displaystyle \langle s_{x_{1}}\ldots s_{x_{n}}\rangle _{(s|d)}^{\text{c}}={\frac {\partial ^{n}\ln {\mathcal {Z}}}{\partial j_{x_{1}}\ldots \partial j_{x_{n}}}}.}Situations where small expansion parameters exist that are needed for such a diagrammatic expansion to converge are given by nearly Gaussian signal fields, where the non-Gaussianity of the field statistics leads to small interaction coefficientsΛ(n){\displaystyle \Lambda ^{(n)}}. For example, the statistics of theCosmic Microwave Backgroundis nearly Gaussian, with small amounts of non-Gaussianities believed to be seeded during theinflationary epochin theEarly Universe. In order to have a stable numerics for IFT problems, a field functional that if minimized provides the posterior mean field is needed. 
Such is given by the effective action orGibbs free energyof a field. The Gibbs free energyG{\displaystyle G}can be constructed from the Helmholtz free energy via aLegendre transformation. In IFT, it is given by the difference of the internal information energyU=⟨H(d,s)⟩P′(s|d′){\displaystyle U=\langle {\mathcal {H}}(d,s)\rangle _{{\mathcal {P}}'(s|d')}}and theShannon entropyS=−∫DsP′(s|d′)ln⁡P′(s|d′){\displaystyle {\mathcal {S}}=-\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\mathcal {P}}'(s|d')}for temperatureT=1{\displaystyle T=1}, where a Gaussian posterior approximationP′(s|d′)=G(s−m,D){\displaystyle {\mathcal {P}}'(s|d')={\mathcal {G}}(s-m,D)}is used with the approximate datad′=(m,D){\displaystyle d'=(m,D)}containing the mean and the dispersion of the field.[6] The Gibbs free energy is thenG(m,D)=U(m,D)−TS(m,D)=⟨H(d,s)+ln⁡P′(s|d′)⟩P′(s|d′)=∫DsP′(s|d′)ln⁡P′(s|d′)P(d,s)=∫DsP′(s|d′)ln⁡P′(s|d′)P(s|d)P(d)=∫DsP′(s|d′)ln⁡P′(s|d′)P(s|d)−lnP(d)=KL(P′(s|d′)||P(s|d))−ln⁡Z(d),{\displaystyle {\begin{aligned}G(m,D)&=U(m,D)-T\,{\mathcal {S}}(m,D)\\&=\langle {\mathcal {H}}(d,s)+\ln {\mathcal {P}}'(s|d')\rangle _{{\mathcal {P}}'(s|d')}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(d,s)}}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(s|d)\,{\mathcal {P}}(d)}}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(s|d)}}-\ln \,{\mathcal {P}}(d)\\&={\text{KL}}({\mathcal {P}}'(s|d')||{\mathcal {P}}(s|d))-\ln {\mathcal {Z}}(d),\end{aligned}}}theKullback-Leibler divergenceKL(P′,P){\displaystyle {\text{KL}}({\mathcal {P}}',{\mathcal {P}})}between approximative and exact posterior plus the Helmholtz free energy. As the latter does not depend on the approximate datad′=(m,D){\displaystyle d'=(m,D)}, minimizing the Gibbs free energy is equivalent to minimizing the Kullback-Leibler divergence between approximate and exact posterior. 
Thus, the effective action approach of IFT is equivalent to the variational Bayesian methods, which also minimize the Kullback-Leibler divergence between approximate and exact posteriors. Minimizing the Gibbs free energy approximately provides the posterior mean field ⟨s⟩(s|d)=∫DssP(s|d),{\displaystyle \langle s\rangle _{(s|d)}=\int {\mathcal {D}}s\,s\,{\mathcal {P}}(s|d),} whereas minimizing the information Hamiltonian provides the maximum a posteriori field. As the latter is known to over-fit noise, the former is usually the better field estimator. The calculation of the Gibbs free energy requires the evaluation of Gaussian integrals over an information Hamiltonian, since the internal information energy is U(m,D)=⟨H(d,s)⟩P′(s|d′)=∫DsH(d,s)G(s−m,D).{\displaystyle U(m,D)=\langle {\mathcal {H}}(d,s)\rangle _{{\mathcal {P}}'(s|d')}=\int {\mathcal {D}}s\,{\mathcal {H}}(d,s)\,{\mathcal {G}}(s-m,D).} Such integrals can be calculated via a field operator formalism,[7] in which Om=m+Dddm{\displaystyle O_{m}=m+D\,{\frac {\mathrm {d} }{\mathrm {d} m}}} is the field operator.
This generates the field expressions{\displaystyle s}within the integral if applied to the Gaussian distribution function,OmG(s−m,D)=(m+Dddm)1|2πD|12exp⁡[−12(s−m)†D−1(s−m)]=(m+DD−1(s−m))1|2πD|12exp⁡[−12(s−m)†D−1(s−m)]=sG(s−m,D),{\displaystyle {\begin{aligned}O_{m}\,{\mathcal {G}}(s-m,D)&=(m+D\,{\frac {\mathrm {d} }{\mathrm {d} m}})\,{\frac {1}{|2\pi D|^{\frac {1}{2}}}}\,\exp \left[-{\frac {1}{2}}(s-m)^{\dagger }D^{-1}(s-m)\right]\\&=(m+D\,D^{-1}(s-m))\,{\frac {1}{|2\pi D|^{\frac {1}{2}}}}\,\exp \left[-{\frac {1}{2}}(s-m)^{\dagger }D^{-1}(s-m)\right]\\&=s\,{\mathcal {G}}(s-m,D),\end{aligned}}}and any higher power of the field if applied several times,(Om)nG(s−m,D)=snG(s−m,D).{\displaystyle {\begin{aligned}(O_{m})^{n}\,{\mathcal {G}}(s-m,D)&=s^{n}\,{\mathcal {G}}(s-m,D).\end{aligned}}}If the information Hamiltonian is analytical, all its terms can be generated via the field operatorH(d,Om)G(s−m,D)=H(d,s)G(s−m,D).{\displaystyle {\mathcal {H}}(d,O_{m})\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,s)\,{\mathcal {G}}(s-m,D).}As the field operator does not depend on the fields{\displaystyle s}itself, it can be pulled out of the path integral of the internal information energy construction,U(m,D)=∫DsH(d,Om)G(s−m,D)=H(d,Om)∫DsG(s−m,D)=H(d,Om)1m,{\displaystyle U(m,D)=\int {\mathcal {D}}s\,{\mathcal {H}}(d,O_{m})\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,O_{m})\int {\mathcal {D}}s\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,O_{m})\,1_{m},}where1m=1{\displaystyle 1_{m}=1}should be regarded as a functional that always returns the value1{\displaystyle 1}irrespective the value of its inputm{\displaystyle m}. The resulting expression can be calculated by commuting the mean field annihilatorDddm{\displaystyle D\,{\frac {\mathrm {d} }{\mathrm {d} m}}}to the right of the expression, where they vanish sinceddm1m=0{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} m}}\,1_{m}=0}. 
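For a single degree of freedom, the repeated application of the field operator can be checked directly (a minimal sketch with made-up numbers): applying O_m = m + D d/dm to the constant functional once, twice, and three times yields m, m² + D, and m³ + 3mD, which must equal the corresponding Gaussian moments ⟨sⁿ⟩.

```python
import numpy as np

m, D = 0.7, 0.3   # mean and variance of a single-pixel "field" (illustrative values)

# Gaussian moments <s^n> = ∫ ds s^n G(s - m, D) by brute-force quadrature
s = np.linspace(m - 12 * np.sqrt(D), m + 12 * np.sqrt(D), 200001)
G = np.exp(-0.5 * (s - m) ** 2 / D) / np.sqrt(2 * np.pi * D)
ds = s[1] - s[0]
moments = [float(np.sum(s**n * G) * ds) for n in (1, 2, 3)]

# the same moments from repeated application of O_m = m + D d/dm to 1_m
operator_moments = [m, m**2 + D, m**3 + 3 * m * D]
```

The agreement of the two lists is exactly the statement (O_m)ⁿ applied to the Gaussian generates sⁿ inside the integral.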
The mean field annihilator Dddm{\displaystyle D\,{\frac {\mathrm {d} }{\mathrm {d} m}}} commutes with the mean field as [Dddm,m]=Dddmm−mDddm=D+mDddm−mDddm=D.{\displaystyle \left[D\,{\frac {\mathrm {d} }{\mathrm {d} m}},m\right]=D\,{\frac {\mathrm {d} }{\mathrm {d} m}}\,m-m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}=D+m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}-m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}=D.} Using the field operator formalism, the Gibbs free energy can be calculated, which permits the (approximate) inference of the posterior mean field via a numerically robust functional minimization. The book of Norbert Wiener[8] might be regarded as one of the first works on field inference. The use of path integrals for field inference was proposed by a number of authors, e.g. Edmund Bertschinger[9] or William Bialek and A. Zee.[10] The connection of field theory and Bayesian reasoning was made explicit by Jörg Lemm.[11] The term information field theory was coined by Torsten Enßlin.[12] See the latter reference for more information on the history of IFT.
https://en.wikipedia.org/wiki/Information_field_theory
Probabilistic programming (PP) is a programming paradigm based on the declarative specification of probabilistic models, for which inference is performed automatically.[1] Probabilistic programming attempts to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable.[2][3] It can be used to create systems that help make decisions in the face of uncertainty. Programming languages following the probabilistic programming paradigm are referred to as "probabilistic programming languages" (PPLs). Probabilistic reasoning has been used for a wide variety of tasks, such as predicting stock prices, recommending movies, diagnosing computers, detecting cyber intrusions and image detection.[4] However, until recently (partially due to limited computing power), probabilistic programming was limited in scope, and most inference algorithms had to be written manually for each task. Nevertheless, in 2015, a 50-line probabilistic computer vision program was used to generate 3D models of human faces based on 2D images of those faces. The program used inverse graphics as the basis of its inference method, and was built using the Picture package in Julia.[4] This made possible "in 50 lines of code what used to take thousands".[5][6] The Gen probabilistic programming library (also written in Julia) has been applied to vision and robotics tasks.[7] More recently, the probabilistic programming system Turing.jl has been applied in various pharmaceutical[8] and economics applications.[9] Probabilistic programming in Julia has also been combined with differentiable programming by combining the Julia package Zygote.jl with Turing.jl.[10] Probabilistic programming languages are also commonly used in Bayesian cognitive science to develop and evaluate models of cognition.[11] PPLs often extend from a basic language.
For instance, Turing.jl[12] is based on Julia, Infer.NET is based on .NET Framework,[13] while PRISM extends from Prolog.[14] However, some PPLs, such as WinBUGS, offer a self-contained language that maps closely to the mathematical representation of the statistical models, with no obvious origin in another programming language.[15][16] The language for WinBUGS was implemented to perform Bayesian computation using Gibbs sampling and related algorithms. Although implemented in a relatively unknown programming language (Component Pascal), this language permits Bayesian inference for a wide variety of statistical models using a flexible computational approach. The same BUGS language may be used to specify Bayesian models for inference via different computational choices ("samplers") and conventions or defaults, using the standalone program WinBUGS (or related R packages, rbugs and r2winbugs) and JAGS (Just Another Gibbs Sampler, another standalone program with related R packages including rjags, R2jags, and runjags). More recently, other languages supporting Bayesian model specification and inference allow different or more efficient choices for the underlying Bayesian computation, and are accessible from the R data analysis and programming environment, e.g.: Stan, NIMBLE and NUTS. The influence of the BUGS language is evident in these later languages, which even use the same syntax for some aspects of model specification. Several PPLs are in active development, including some in beta test. Two popular tools are Stan and PyMC.[17] A probabilistic relational programming language (PRPL) is a PPL specially designed to describe and infer with probabilistic relational models (PRMs). A PRM is usually developed with a set of algorithms for reducing, making inference about, and discovering the distributions of concern, which are embedded into the corresponding PRPL. Probabilistic logic programming is a programming paradigm that extends logic programming with probabilities.
Most approaches to probabilistic logic programming are based on the distribution semantics, which splits a program into a set of probabilistic facts and a logic program. It defines a probability distribution on interpretations of the Herbrand universe of the program.[18] This list summarises the variety of PPLs that are currently available, and clarifies their origins.
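The distribution semantics can be sketched in a few lines of Python (a self-contained toy interpreter with made-up facts and clauses, not the API of any particular PPL): probabilistic facts carry probabilities, the clauses form the logic program, and the probability of a query is the total weight of all total choices (possible worlds) in which the query holds.

```python
from itertools import product

# probabilistic facts: name -> probability of being true (the random part)
facts = {"burglary": 0.4, "earthquake": 0.2}

# definite clauses (the logic program): head :- body
clauses = [("alarm", ["burglary"]), ("alarm", ["earthquake"])]

def consequences(true_facts):
    """Least model of the clauses, given a total choice of the facts."""
    model = set(true_facts)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(atom in model for atom in body):
                model.add(head)
                changed = True
    return model

def prob(query):
    """Distribution semantics: sum the weight of every world entailing the query."""
    names = list(facts)
    total = 0.0
    for choice in product((True, False), repeat=len(names)):
        weight = 1.0
        for name, on in zip(names, choice):
            weight *= facts[name] if on else 1.0 - facts[name]
        world = [name for name, on in zip(names, choice) if on]
        if query in consequences(world):
            total += weight
    return total
```

Here `prob("alarm")` sums the three worlds in which the alarm is derivable, giving 1 − (1 − 0.4)(1 − 0.2) = 0.52.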
https://en.wikipedia.org/wiki/Probabilistic_programming
This list of statisticians lists people who have made notable contributions to the theories or application of statistics, or to the related fields of probability or machine learning. It includes the founders of statistics and others. It includes some 17th- and 18th-century mathematicians and polymaths whose work is regarded as influential in shaping the later discipline of statistics. Also included are various actuaries, economists, and demographers known for providing leadership in applying statistics to their fields.
https://en.wikipedia.org/wiki/List_of_statisticians
The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution can be represented as events generated by a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields.[1] It states that a probability distribution that has a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (or complete subgraphs) of the graph. The relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin[2] and Frank Spitzer[3] in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971.[4][5] Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett,[6] Preston[7] and Sherman[8] in 1973, with a further proof by Julian Besag in 1974.[9] It is a trivial matter to show that a Gibbs random field satisfies every Markov property. As an example of this fact, see the following: In the image to the right, a Gibbs random field over the provided graph has the form Pr(A,B,C,D,E,F)∝f1(A,B,D)f2(A,C,D)f3(C,D,F)f4(C,E,F){\displaystyle \Pr(A,B,C,D,E,F)\propto f_{1}(A,B,D)f_{2}(A,C,D)f_{3}(C,D,F)f_{4}(C,E,F)}. If variables C{\displaystyle C} and D{\displaystyle D} are fixed, then the global Markov property requires that: A,B⊥E,F|C,D{\displaystyle A,B\perp E,F|C,D} (see conditional independence), since C,D{\displaystyle C,D} forms a barrier between A,B{\displaystyle A,B} and E,F{\displaystyle E,F}.
WithC{\displaystyle C}andD{\displaystyle D}constant,Pr(A,B,E,F|C=c,D=d)∝[f1(A,B,d)f2(A,c,d)]⋅[f3(c,d,F)f4(c,E,F)]=g1(A,B)g2(E,F){\displaystyle \Pr(A,B,E,F|C=c,D=d)\propto [f_{1}(A,B,d)f_{2}(A,c,d)]\cdot [f_{3}(c,d,F)f_{4}(c,E,F)]=g_{1}(A,B)g_{2}(E,F)}whereg1(A,B)=f1(A,B,d)f2(A,c,d){\displaystyle g_{1}(A,B)=f_{1}(A,B,d)f_{2}(A,c,d)}andg2(E,F)=f3(c,d,F)f4(c,E,F){\displaystyle g_{2}(E,F)=f_{3}(c,d,F)f_{4}(c,E,F)}. This implies thatA,B⊥E,F|C,D{\displaystyle A,B\perp E,F|C,D}. To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proved: Lemma 1 LetU{\displaystyle U}denote the set of all random variables under consideration, and letΘ,Φ1,Φ2,…,Φn⊆U{\displaystyle \Theta ,\Phi _{1},\Phi _{2},\dots ,\Phi _{n}\subseteq U}andΨ1,Ψ2,…,Ψm⊆U{\displaystyle \Psi _{1},\Psi _{2},\dots ,\Psi _{m}\subseteq U}denote arbitrary sets of variables. (Here, given an arbitrary set of variablesX{\displaystyle X},X{\displaystyle X}will also denote an arbitrary assignment to the variables fromX{\displaystyle X}.) If Pr(U)=f(Θ)∏i=1ngi(Φi)=∏j=1mhj(Ψj){\displaystyle \Pr(U)=f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i})=\prod _{j=1}^{m}h_{j}(\Psi _{j})} for functionsf,g1,g2,…gn{\displaystyle f,g_{1},g_{2},\dots g_{n}}andh1,h2,…,hm{\displaystyle h_{1},h_{2},\dots ,h_{m}}, then there exist functionsh1′,h2′,…,hm′{\displaystyle h'_{1},h'_{2},\dots ,h'_{m}}andg1′,g2′,…,gn′{\displaystyle g'_{1},g'_{2},\dots ,g'_{n}}such that Pr(U)=(∏j=1mhj′(Θ∩Ψj))(∏i=1ngi′(Φi)){\displaystyle \Pr(U)={\bigg (}\prod _{j=1}^{m}h'_{j}(\Theta \cap \Psi _{j}){\bigg )}{\bigg (}\prod _{i=1}^{n}g'_{i}(\Phi _{i}){\bigg )}} In other words,∏j=1mhj(Ψj){\displaystyle \prod _{j=1}^{m}h_{j}(\Psi _{j})}provides a template for further factorization off(Θ){\displaystyle f(\Theta )}. 
In order to use∏j=1mhj(Ψj){\displaystyle \prod _{j=1}^{m}h_{j}(\Psi _{j})}as a template to further factorizef(Θ){\displaystyle f(\Theta )}, all variables outside ofΘ{\displaystyle \Theta }need to be fixed. To this end, letθ¯{\displaystyle {\bar {\theta }}}be an arbitrary fixed assignment to the variables fromU∖Θ{\displaystyle U\setminus \Theta }(the variables not inΘ{\displaystyle \Theta }). For an arbitrary set of variablesX{\displaystyle X}, letθ¯[X]{\displaystyle {\bar {\theta }}[X]}denote the assignmentθ¯{\displaystyle {\bar {\theta }}}restricted to the variables fromX∖Θ{\displaystyle X\setminus \Theta }(the variables fromX{\displaystyle X}, excluding the variables fromΘ{\displaystyle \Theta }). Moreover, to factorize onlyf(Θ){\displaystyle f(\Theta )}, the other factorsg1(Φ1),g2(Φ2),...,gn(Φn){\displaystyle g_{1}(\Phi _{1}),g_{2}(\Phi _{2}),...,g_{n}(\Phi _{n})}need to be rendered moot for the variables fromΘ{\displaystyle \Theta }. To do this, the factorization Pr(U)=f(Θ)∏i=1ngi(Φi){\displaystyle \Pr(U)=f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i})} will be re-expressed as Pr(U)=(f(Θ)∏i=1ngi(Φi∩Θ,θ¯[Φi]))(∏i=1ngi(Φi)gi(Φi∩Θ,θ¯[Φi])){\displaystyle \Pr(U)={\bigg (}f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}]){\bigg )}{\bigg (}\prod _{i=1}^{n}{\frac {g_{i}(\Phi _{i})}{g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}}{\bigg )}} For eachi=1,2,...,n{\displaystyle i=1,2,...,n}:gi(Φi∩Θ,θ¯[Φi]){\displaystyle g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}isgi(Φi){\displaystyle g_{i}(\Phi _{i})}where all variables outside ofΘ{\displaystyle \Theta }have been fixed to the values prescribed byθ¯{\displaystyle {\bar {\theta }}}. 
Letf′(Θ)=f(Θ)∏i=1ngi(Φi∩Θ,θ¯[Φi]){\displaystyle f'(\Theta )=f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}andgi′(Φi)=gi(Φi)gi(Φi∩Θ,θ¯[Φi]){\displaystyle g'_{i}(\Phi _{i})={\frac {g_{i}(\Phi _{i})}{g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}}}for eachi=1,2,…,n{\displaystyle i=1,2,\dots ,n}so Pr(U)=f′(Θ)∏i=1ngi′(Φi)=∏j=1mhj(Ψj){\displaystyle \Pr(U)=f'(\Theta )\prod _{i=1}^{n}g'_{i}(\Phi _{i})=\prod _{j=1}^{m}h_{j}(\Psi _{j})} What is most important is thatgi′(Φi)=gi(Φi)gi(Φi∩Θ,θ¯[Φi])=1{\displaystyle g'_{i}(\Phi _{i})={\frac {g_{i}(\Phi _{i})}{g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}}=1}when the values assigned toΦi{\displaystyle \Phi _{i}}do not conflict with the values prescribed byθ¯{\displaystyle {\bar {\theta }}}, makinggi′(Φi){\displaystyle g'_{i}(\Phi _{i})}"disappear" when all variables not inΘ{\displaystyle \Theta }are fixed to the values fromθ¯{\displaystyle {\bar {\theta }}}. Fixing all variables not inΘ{\displaystyle \Theta }to the values fromθ¯{\displaystyle {\bar {\theta }}}gives Pr(Θ,θ¯)=f′(Θ)∏i=1ngi′(Φi∩Θ,θ¯[Φi])=∏j=1mhj(Ψj∩Θ,θ¯[Ψj]){\displaystyle \Pr(\Theta ,{\bar {\theta }})=f'(\Theta )\prod _{i=1}^{n}g'_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])=\prod _{j=1}^{m}h_{j}(\Psi _{j}\cap \Theta ,{\bar {\theta }}[\Psi _{j}])} Sincegi′(Φi∩Θ,θ¯[Φi])=1{\displaystyle g'_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])=1}, f′(Θ)=∏j=1mhj(Ψj∩Θ,θ¯[Ψj]){\displaystyle f'(\Theta )=\prod _{j=1}^{m}h_{j}(\Psi _{j}\cap \Theta ,{\bar {\theta }}[\Psi _{j}])} Lettinghj′(Θ∩Ψj)=hj(Ψj∩Θ,θ¯[Ψj]){\displaystyle h'_{j}(\Theta \cap \Psi _{j})=h_{j}(\Psi _{j}\cap \Theta ,{\bar {\theta }}[\Psi _{j}])}gives: f′(Θ)=∏j=1mhj′(Θ∩Ψj){\displaystyle f'(\Theta )=\prod _{j=1}^{m}h'_{j}(\Theta \cap \Psi _{j})}which finally gives: Pr(U)=(∏j=1mhj′(Θ∩Ψj))(∏i=1ngi′(Φi)){\displaystyle \Pr(U)={\bigg (}\prod _{j=1}^{m}h'_{j}(\Theta \cap \Psi _{j}){\bigg )}{\bigg (}\prod _{i=1}^{n}g'_{i}(\Phi _{i}){\bigg )}} Lemma 1 
provides a means of combining two different factorizations of Pr(U){\displaystyle \Pr(U)}. The local Markov property implies that for any random variable x∈U{\displaystyle x\in U} there exist factors fx{\displaystyle f_{x}} and f−x{\displaystyle f_{-x}} such that: Pr(U)=fx(x,∂x)f−x(U∖{x}){\displaystyle \Pr(U)=f_{x}(x,\partial x)f_{-x}(U\setminus \{x\})} where ∂x{\displaystyle \partial x} denotes the neighbors of node x{\displaystyle x}. Applying Lemma 1 repeatedly eventually factors Pr(U){\displaystyle \Pr(U)} into a product of clique potentials (see the image on the right). End of Proof
https://en.wikipedia.org/wiki/Hammersley%E2%80%93Clifford_theorem
Instatistics, amaximum-entropy Markov model(MEMM), orconditional Markov model(CMM), is agraphical modelforsequence labelingthat combines features ofhidden Markov models(HMMs) andmaximum entropy(MaxEnt) models. An MEMM is adiscriminative modelthat extends a standardmaximum entropy classifierby assuming that the unknown values to be learnt are connected in aMarkov chainrather than beingconditionally independentof each other. MEMMs find applications innatural language processing, specifically inpart-of-speech tagging[1]andinformation extraction.[2] Suppose we have a sequence of observationsO1,…,On{\displaystyle O_{1},\dots ,O_{n}}that we seek to tag with the labelsS1,…,Sn{\displaystyle S_{1},\dots ,S_{n}}that maximize the conditional probabilityP(S1,…,Sn∣O1,…,On){\displaystyle P(S_{1},\dots ,S_{n}\mid O_{1},\dots ,O_{n})}. In a MEMM, this probability is factored into Markov transition probabilities, where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's label[citation needed]: Each of these transition probabilities comes from the same general distributionP(s∣s′,o){\displaystyle P(s\mid s',o)}. For each possible label value of the previous labels′{\displaystyle s'}, the probability of a certain labels{\displaystyle s}is modeled in the same way as amaximum entropy classifier:[3] Here, thefa(o,s){\displaystyle f_{a}(o,s)}are real-valued or categorical feature-functions, andZ(o,s′){\displaystyle Z(o,s')}is a normalization term ensuring that the distribution sums to one. 
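As a sketch of the transition model P(s | s′, o) described above, the following fragment implements the per-source-state maximum entropy (softmax) form. The feature functions and weights are invented for illustration; a real MEMM would learn the weights λ_a from data:

```python
import math

LABELS = ["NOUN", "VERB"]

def features(o, s):
    """Binary feature functions f_a(o, s); hypothetical examples."""
    return {
        ("capitalized", s): 1.0 if o[0].isupper() else 0.0,
        ("ends_ing", s): 1.0 if o.endswith("ing") else 0.0,
    }

def transition_prob(s, s_prev, o, weights):
    """P(s | s', o) = exp(sum_a lambda_a f_a(o, s)) / Z(o, s')."""
    def score(label):
        return sum(weights.get((s_prev, k), 0.0) * v
                   for k, v in features(o, label).items())
    z = sum(math.exp(score(label)) for label in LABELS)  # Z(o, s')
    return math.exp(score(s)) / z

# Illustrative weights: after a NOUN, an "-ing" word is likely a VERB.
weights = {("NOUN", ("ends_ing", "VERB")): 2.0,
           ("NOUN", ("capitalized", "NOUN")): 1.5}
```

Note that the normalization Z(o, s′) is computed separately for each previous label s′, which is exactly what makes each source state's transition distribution a self-contained maximum entropy classifier.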
This form for the distribution corresponds to themaximum entropy probability distributionsatisfying the constraint that the empirical expectation for the feature is equal to the expectation given the model: The parametersλa{\displaystyle \lambda _{a}}can be estimated usinggeneralized iterative scaling.[4]Furthermore, a variant of theBaum–Welch algorithm, which is used for training HMMs, can be used to estimate parameters when training data hasincomplete or missing labels.[2] The optimal state sequenceS1,…,Sn{\displaystyle S_{1},\dots ,S_{n}}can be found using a very similarViterbi algorithmto the one used for HMMs. The dynamic program uses the forward probability: An advantage of MEMMs rather than HMMs for sequence tagging is that they offer increased freedom in choosing features to represent observations. In sequence tagging situations, it is useful to use domain knowledge to design special-purpose features. In the original paper introducing MEMMs, the authors write that "when trying to extract previously unseen company names from a newswire article, the identity of a word alone is not very predictive; however, knowing that the word is capitalized, that is a noun, that it is used in an appositive, and that it appears near the top of the article would all be quite predictive (in conjunction with the context provided by the state-transition structure)."[2]Useful sequence tagging features, such as these, are often non-independent. Maximum entropy models do not assume independence between features, but generative observation models used in HMMs do.[2]Therefore, MEMMs allow the user to specify many correlated, but informative features. Another advantage of MEMMs versus HMMs andconditional random fields(CRFs) is that training can be considerably more efficient. In HMMs and CRFs, one needs to use some version of theforward–backward algorithmas an inner loop in training[citation needed]. 
However, in MEMMs, estimating the parameters of the maximum-entropy distributions used for the transition probabilities can be done for each transition distribution in isolation. A drawback of MEMMs is that they potentially suffer from the "label bias problem," where states with low-entropy transition distributions "effectively ignore their observations." Conditional random fields were designed to overcome this weakness,[5]which had already been recognised in the context of neural network-based Markov models in the early 1990s.[5][6]Another source of label bias is that training is always done with respect to known previous tags, so the model struggles at test time when there is uncertainty in the previous tag.
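The Viterbi-style decoding mentioned earlier, adapted to MEMM transition probabilities P(s | s′, o), can be sketched as follows; the probability functions passed in are assumptions supplied by the caller, not a trained model:

```python
def viterbi(observations, labels, p_trans, p_init):
    """Best label sequence under p_trans(s, s_prev, o) = P(s | s', o)
    and p_init(s, o) = P(s | o) for the first position."""
    # delta[s] = probability of the best partial sequence ending in s
    delta = {s: p_init(s, observations[0]) for s in labels}
    back = []
    for o in observations[1:]:
        new_delta, pointers = {}, {}
        for s in labels:
            best_prev = max(labels, key=lambda sp: delta[sp] * p_trans(s, sp, o))
            new_delta[s] = delta[best_prev] * p_trans(s, best_prev, o)
            pointers[s] = best_prev
        delta, back = new_delta, back + [pointers]
    # Trace back the best path from the most probable final label.
    last = max(labels, key=lambda s: delta[s])
    path = [last]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```

With "sticky" toy probabilities that favor the label matching the observation, the decoder simply echoes the observation sequence, as expected.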
https://en.wikipedia.org/wiki/Maximum_entropy_Markov_model
Instatistics, theGauss–Markov theorem(or simplyGauss theoremfor some authors)[1]states that theordinary least squares(OLS) estimator has the lowestsampling variancewithin theclassoflinearunbiasedestimators, if theerrorsin thelinear regression modelareuncorrelated, haveequal variancesand expectation value of zero.[2]The errors do not need to benormal, nor do they need to beindependent and identically distributed(onlyuncorrelatedwith mean zero andhomoscedasticwith finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, theJames–Stein estimator(which also drops linearity),ridge regression, or simply anydegenerateestimator. The theorem was named afterCarl Friedrich GaussandAndrey Markov, although Gauss' work significantly predates Markov's.[3]But while Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above.[4]A further generalization tonon-spherical errorswas given byAlexander Aitken.[5] Suppose we are given two random variable vectors,X,Y∈Rk{\displaystyle X{\text{, }}Y\in \mathbb {R} ^{k}}and that we want to find the best linear estimator ofY{\displaystyle Y}givenX{\displaystyle X}, using the best linear estimatorY^=αX+μ{\displaystyle {\hat {Y}}=\alpha X+\mu }Where the parametersα{\displaystyle \alpha }andμ{\displaystyle \mu }are both real numbers. Such an estimatorY^{\displaystyle {\hat {Y}}}would have the same mean and standard deviation asY{\displaystyle Y}, that is,μY^=μY,σY^=σY{\displaystyle \mu _{\hat {Y}}=\mu _{Y},\sigma _{\hat {Y}}=\sigma _{Y}}. 
Therefore, if the vectorX{\displaystyle X}has respective mean and standard deviationμx,σx{\displaystyle \mu _{x},\sigma _{x}}, the best linear estimator would be Y^=σy(X−μx)σx+μy{\displaystyle {\hat {Y}}=\sigma _{y}{\frac {(X-\mu _{x})}{\sigma _{x}}}+\mu _{y}} sinceY^{\displaystyle {\hat {Y}}}has the same mean and standard deviation asY{\displaystyle Y}. Suppose we have, in matrix notation, the linear relationship expanding to, whereβj{\displaystyle \beta _{j}}are non-random butunobservable parameters,Xij{\displaystyle X_{ij}}are non-random and observable (called the "explanatory variables"),εi{\displaystyle \varepsilon _{i}}are random, and soyi{\displaystyle y_{i}}are random. The random variablesεi{\displaystyle \varepsilon _{i}}are called the "disturbance", "noise" or simply "error" (will be contrasted with "residual" later in the article; seeerrors and residuals in statistics). Note that to include a constant in the model above, one can choose to introduce the constant as a variableβK+1{\displaystyle \beta _{K+1}}with a newly introduced last column of X being unity i.e.,Xi(K+1)=1{\displaystyle X_{i(K+1)}=1}for alli{\displaystyle i}. Note that thoughyi,{\displaystyle y_{i},}as sample responses, are observable, the following statements and arguments including assumptions, proofs and the others assume under theonlycondition of knowingXij,{\displaystyle X_{ij},}but notyi.{\displaystyle y_{i}.} TheGauss–Markovassumptions concern the set of error random variables,εi{\displaystyle \varepsilon _{i}}: Alinear estimatorofβj{\displaystyle \beta _{j}}is a linear combination in which the coefficientscij{\displaystyle c_{ij}}are not allowed to depend on the underlying coefficientsβj{\displaystyle \beta _{j}}, since those are not observable, but are allowed to depend on the valuesXij{\displaystyle X_{ij}}, since these data are observable. 
(The dependence of the coefficients on eachXij{\displaystyle X_{ij}}is typically nonlinear; the estimator is linear in eachyi{\displaystyle y_{i}}and hence in each randomε,{\displaystyle \varepsilon ,}which is why this is"linear" regression.) The estimator is said to beunbiasedif and only if regardless of the values ofXij{\displaystyle X_{ij}}. Now, let∑j=1Kλjβj{\textstyle \sum _{j=1}^{K}\lambda _{j}\beta _{j}}be some linear combination of the coefficients. Then themean squared errorof the corresponding estimation is in other words, it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated. (Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.) Thebest linear unbiased estimator(BLUE) of the vectorβ{\displaystyle \beta }of parametersβj{\displaystyle \beta _{j}}is one with the smallest mean squared error for every vectorλ{\displaystyle \lambda }of linear combination parameters. This is equivalent to the condition that is a positive semi-definite matrix for every other linear unbiased estimatorβ~{\displaystyle {\widetilde {\beta }}}. Theordinary least squares estimator (OLS)is the function ofy{\displaystyle y}andX{\displaystyle X}(whereXT{\displaystyle X^{\operatorname {T} }}denotes thetransposeofX{\displaystyle X}) that minimizes thesum of squares ofresiduals(misprediction amounts): The theorem now states that the OLS estimator is a best linear unbiased estimator (BLUE). The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combinationa1y1+⋯+anyn{\displaystyle a_{1}y_{1}+\cdots +a_{n}y_{n}}whose coefficients do not depend upon the unobservableβ{\displaystyle \beta }but whose expected value is always zero. 
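Returning to the scalar warm-up example, the estimator Ŷ = σ_y(X − μ_x)/σ_x + μ_y matches the mean and standard deviation of Y by construction, which a quick numeric check confirms (the target moments below are illustrative values):

```python
import math, random

random.seed(0)
x = [random.gauss(2.0, 3.0) for _ in range(10_000)]

def mean(v):
    return sum(v) / len(v)

def std(v):
    m = mean(v)
    return math.sqrt(sum((t - m) ** 2 for t in v) / len(v))

mu_x, sigma_x = mean(x), std(x)
mu_y, sigma_y = 5.0, 0.5  # illustrative target moments for Y

# Standardize X, then rescale and shift to the target moments.
y_hat = [sigma_y * (t - mu_x) / sigma_x + mu_y for t in x]
```

The match is exact up to floating-point error, since the transformation is affine: the sample mean maps to μ_y and the sample standard deviation to σ_y.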
Proof that the OLS indeedminimizesthe sum of squares of residuals may proceed as follows with a calculation of theHessian matrixand showing that it is positive definite. The MSE function we want to minimize isf(β0,β1,…,βp)=∑i=1n(yi−β0−β1xi1−⋯−βpxip)2{\displaystyle f(\beta _{0},\beta _{1},\dots ,\beta _{p})=\sum _{i=1}^{n}(y_{i}-\beta _{0}-\beta _{1}x_{i1}-\dots -\beta _{p}x_{ip})^{2}}for a multiple regression model withpvariables. The first derivative isddβf=−2XT(y−Xβ)=−2[∑i=1n(yi−⋯−βpxip)∑i=1nxi1(yi−⋯−βpxip)⋮∑i=1nxip(yi−⋯−βpxip)]=0p+1,{\displaystyle {\begin{aligned}{\frac {d}{d{\boldsymbol {\beta }}}}f&=-2X^{\operatorname {T} }\left(\mathbf {y} -X{\boldsymbol {\beta }}\right)\\&=-2{\begin{bmatrix}\sum _{i=1}^{n}(y_{i}-\dots -\beta _{p}x_{ip})\\\sum _{i=1}^{n}x_{i1}(y_{i}-\dots -\beta _{p}x_{ip})\\\vdots \\\sum _{i=1}^{n}x_{ip}(y_{i}-\dots -\beta _{p}x_{ip})\end{bmatrix}}\\&=\mathbf {0} _{p+1},\end{aligned}}}whereXT{\displaystyle X^{\operatorname {T} }}is the design matrixX=[1x11⋯x1p1x21⋯x2p⋮1xn1⋯xnp]∈Rn×(p+1);n≥p+1{\displaystyle X={\begin{bmatrix}1&x_{11}&\cdots &x_{1p}\\1&x_{21}&\cdots &x_{2p}\\&&\vdots \\1&x_{n1}&\cdots &x_{np}\end{bmatrix}}\in \mathbb {R} ^{n\times (p+1)};\qquad n\geq p+1} TheHessian matrixof second derivatives isH=2[n∑i=1nxi1⋯∑i=1nxip∑i=1nxi1∑i=1nxi12⋯∑i=1nxi1xip⋮⋮⋱⋮∑i=1nxip∑i=1nxipxi1⋯∑i=1nxip2]=2XTX{\displaystyle {\mathcal {H}}=2{\begin{bmatrix}n&\sum _{i=1}^{n}x_{i1}&\cdots &\sum _{i=1}^{n}x_{ip}\\\sum _{i=1}^{n}x_{i1}&\sum _{i=1}^{n}x_{i1}^{2}&\cdots &\sum _{i=1}^{n}x_{i1}x_{ip}\\\vdots &\vdots &\ddots &\vdots \\\sum _{i=1}^{n}x_{ip}&\sum _{i=1}^{n}x_{ip}x_{i1}&\cdots &\sum _{i=1}^{n}x_{ip}^{2}\end{bmatrix}}=2X^{\operatorname {T} }X} Assuming the columns ofX{\displaystyle X}are linearly independent so thatXTX{\displaystyle X^{\operatorname {T} }X}is invertible, letX=[v1v2⋯vp+1]{\displaystyle X={\begin{bmatrix}\mathbf {v_{1}} &\mathbf {v_{2}} &\cdots &\mathbf {v} _{p+1}\end{bmatrix}}}, thenk1v1+⋯+kp+1vp+1=0⟺k1=⋯=kp+1=0{\displaystyle 
k_{1}\mathbf {v_{1}} +\dots +k_{p+1}\mathbf {v} _{p+1}=\mathbf {0} \iff k_{1}=\dots =k_{p+1}=0} Now letk=(k1,…,kp+1)T∈R(p+1)×1{\displaystyle \mathbf {k} =(k_{1},\dots ,k_{p+1})^{T}\in \mathbb {R} ^{(p+1)\times 1}}be an eigenvector ofH{\displaystyle {\mathcal {H}}}. k≠0⟹(k1v1+⋯+kp+1vp+1)2>0{\displaystyle \mathbf {k} \neq \mathbf {0} \implies \left(k_{1}\mathbf {v_{1}} +\dots +k_{p+1}\mathbf {v} _{p+1}\right)^{2}>0} In terms of vector multiplication, this means[k1⋯kp+1][v1⋮vp+1][v1⋯vp+1][k1⋮kp+1]=kTHk=λkTk>0{\displaystyle {\begin{bmatrix}k_{1}&\cdots &k_{p+1}\end{bmatrix}}{\begin{bmatrix}\mathbf {v_{1}} \\\vdots \\\mathbf {v} _{p+1}\end{bmatrix}}{\begin{bmatrix}\mathbf {v_{1}} &\cdots &\mathbf {v} _{p+1}\end{bmatrix}}{\begin{bmatrix}k_{1}\\\vdots \\k_{p+1}\end{bmatrix}}=\mathbf {k} ^{\operatorname {T} }{\mathcal {H}}\mathbf {k} =\lambda \mathbf {k} ^{\operatorname {T} }\mathbf {k} >0}whereλ{\displaystyle \lambda }is the eigenvalue corresponding tok{\displaystyle \mathbf {k} }. Moreover,kTk=∑i=1p+1ki2>0⟹λ>0{\displaystyle \mathbf {k} ^{\operatorname {T} }\mathbf {k} =\sum _{i=1}^{p+1}k_{i}^{2}>0\implies \lambda >0} Finally, as eigenvectork{\displaystyle \mathbf {k} }was arbitrary, it means all eigenvalues ofH{\displaystyle {\mathcal {H}}}are positive, thereforeH{\displaystyle {\mathcal {H}}}is positive definite. Thus,β=(XTX)−1XTY{\displaystyle {\boldsymbol {\beta }}=\left(X^{\operatorname {T} }X\right)^{-1}X^{\operatorname {T} }Y}is indeed a global minimum. Or, just see that for all vectorsv,vTXTXv=‖Xv‖2≥0{\displaystyle \mathbf {v} ,\mathbf {v} ^{\operatorname {T} }X^{\operatorname {T} }X\mathbf {v} =\|\mathbf {X} \mathbf {v} \|^{2}\geq 0}. So the Hessian is positive definite if full rank. Letβ~=Cy{\displaystyle {\tilde {\beta }}=Cy}be another linear estimator ofβ{\displaystyle \beta }withC=(XTX)−1XT+D{\displaystyle C=(X^{\operatorname {T} }X)^{-1}X^{\operatorname {T} }+D}whereD{\displaystyle D}is aK×n{\displaystyle K\times n}non-zero matrix. 
As we're restricting tounbiasedestimators, minimum mean squared error implies minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that ofβ^,{\displaystyle {\widehat {\beta }},}the OLS estimator. We calculate: Therefore, sinceβ{\displaystyle \beta }isunobservable,β~{\displaystyle {\tilde {\beta }}}is unbiased if and only ifDX=0{\displaystyle DX=0}. Then: SinceDDT{\displaystyle DD^{\operatorname {T} }}is a positive semidefinite matrix,Var⁡(β~){\displaystyle \operatorname {Var} \left({\tilde {\beta }}\right)}exceedsVar⁡(β^){\displaystyle \operatorname {Var} \left({\widehat {\beta }}\right)}by a positive semidefinite matrix. As it has been stated before, the condition ofVar⁡(β~)−Var⁡(β^){\displaystyle \operatorname {Var} \left({\tilde {\beta }}\right)-\operatorname {Var} \left({\widehat {\beta }}\right)}is a positive semidefinite matrix is equivalent to the property that the best linear unbiased estimator ofℓTβ{\displaystyle \ell ^{\operatorname {T} }\beta }isℓTβ^{\displaystyle \ell ^{\operatorname {T} }{\widehat {\beta }}}(best in the sense that it has minimum variance). To see this, letℓTβ~{\displaystyle \ell ^{\operatorname {T} }{\tilde {\beta }}}another linear unbiased estimator ofℓTβ{\displaystyle \ell ^{\operatorname {T} }\beta }. Moreover, equality holds if and only ifDTℓ=0{\displaystyle D^{\operatorname {T} }\ell =0}. We calculate This proves that the equality holds if and only ifℓTβ~=ℓTβ^{\displaystyle \ell ^{\operatorname {T} }{\tilde {\beta }}=\ell ^{\operatorname {T} }{\widehat {\beta }}}which gives the uniqueness of the OLS estimator as a BLUE. Thegeneralized least squares(GLS), developed byAitken,[5]extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix.[6]The Aitken estimator is also a BLUE. In most treatments of OLS, the regressors (parameters of interest) in thedesign matrixX{\displaystyle \mathbf {X} }are assumed to be fixed in repeated samples. 
This assumption is considered inappropriate for a predominantly nonexperimental science likeeconometrics.[7]Instead, the assumptions of the Gauss–Markov theorem are stated conditional onX{\displaystyle \mathbf {X} }. The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equationy=β0+β1x2,{\displaystyle y=\beta _{0}+\beta _{1}x^{2},}qualifies as linear whiley=β0+β12x{\displaystyle y=\beta _{0}+\beta _{1}^{2}x}can be transformed to be linear by replacingβ12{\displaystyle \beta _{1}^{2}}by another parameter, sayγ{\displaystyle \gamma }. An equation with a parameter dependent on an independent variable does not qualify as linear, for exampley=β0+β1(x)⋅x{\displaystyle y=\beta _{0}+\beta _{1}(x)\cdot x}, whereβ1(x){\displaystyle \beta _{1}(x)}is a function ofx{\displaystyle x}. Data transformationsare often used to convert an equation into a linear form. For example, theCobb–Douglas function—often used in economics—is nonlinear: But it can be expressed in linear form by taking thenatural logarithmof both sides:[8] This assumption also covers specification issues: assuming that the proper functional form has been selected and there are noomitted variables. One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation. 
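As a sketch of the data-transformation point above: taking logarithms of a Cobb–Douglas-style relationship y = A·x^β makes it linear in the parameters, and OLS on the logged data recovers β. The toy data here are noiseless, so recovery is exact:

```python
import math

# Noiseless Cobb-Douglas-style data y = A * x**beta (illustrative values).
A, beta = 2.0, 0.7
xs = [1.0, 2.0, 4.0, 8.0]
ys = [A * x ** beta for x in xs]

# Log-linearize: ln y = ln A + beta * ln x, then fit by simple OLS.
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b_hat = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
a_hat = math.exp(my - b_hat * mx)
```

As the article cautions, this fit minimizes residuals of the logged equation, not of the original one; with noise the two objectives generally differ.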
For alln{\displaystyle n}observations, the expectation—conditional on the regressors—of the error term is zero:[9] wherexi=[xi1xi2⋯xik]T{\displaystyle \mathbf {x} _{i}={\begin{bmatrix}x_{i1}&x_{i2}&\cdots &x_{ik}\end{bmatrix}}^{\operatorname {T} }}is the data vector of regressors for theith observation, and consequentlyX=[x1Tx2T⋯xnT]T{\displaystyle \mathbf {X} ={\begin{bmatrix}\mathbf {x} _{1}^{\operatorname {T} }&\mathbf {x} _{2}^{\operatorname {T} }&\cdots &\mathbf {x} _{n}^{\operatorname {T} }\end{bmatrix}}^{\operatorname {T} }}is the data matrix or design matrix. Geometrically, this assumption implies thatxi{\displaystyle \mathbf {x} _{i}}andεi{\displaystyle \varepsilon _{i}}areorthogonalto each other, so that theirinner product(i.e., their cross moment) is zero. This assumption is violated if the explanatory variables aremeasured with error, or areendogenous.[10]Endogeneity can be the result ofsimultaneity, where causality flows back and forth between both the dependent and independent variable.Instrumental variabletechniques are commonly used to address this problem. The sample data matrixX{\displaystyle \mathbf {X} }must have full columnrank. OtherwiseXTX{\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {X} }is not invertible and the OLS estimator cannot be computed. A violation of this assumption isperfect multicollinearity, i.e. some explanatory variables are linearly dependent. One scenario in which this will occur is called "dummy variable trap," when a base dummy variable is not omitted resulting in perfect correlation between the dummy variables and the constant term.[11] Multicollinearity (as long as it is not "perfect") can be present resulting in a less efficient, but still unbiased estimate. The estimates will be less precise and highly sensitive to particular sets of data.[12]Multicollinearity can be detected fromcondition numberor thevariance inflation factor, among other tests. Theouter productof the error vector must be spherical. 
This implies the error term has uniform variance (homoscedasticity) and no serial correlation.[13] If this assumption is violated, OLS is still unbiased, but inefficient. The term "spherical errors" describes the multivariate normal distribution: if Var⁡[ε∣X]=σ2I{\displaystyle \operatorname {Var} [\,{\boldsymbol {\varepsilon }}\mid \mathbf {X} ]=\sigma ^{2}\mathbf {I} } in the multivariate normal density, then the equation f(ε)=c{\displaystyle f(\varepsilon )=c} is the formula for a ball centered at μ with radius σ in n-dimensional space.[14] Heteroskedasticity occurs when the amount of error is correlated with an independent variable. For example, in a regression on food expenditure and income, the error is correlated with income: low-income people generally spend a similar amount on food, while high-income people may spend a very large amount or as little as low-income people spend. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time. This assumption is violated when there is autocorrelation. Autocorrelation can be visualized on a data plot when a given observation is more likely to lie above a fitted line if adjacent observations also lie above the fitted regression line. Autocorrelation is common in time series data, where a data series may experience "inertia", as when a dependent variable takes a while to fully absorb a shock. Spatial autocorrelation can also occur when nearby geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form; in these cases, correcting the specification is one possible way to deal with autocorrelation. When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE.[6]
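When the error covariance is a known diagonal Ω = diag(σ_i²), the generalized least squares estimator mentioned above reduces to weighted least squares, β_GLS = (XᵀΩ⁻¹X)⁻¹XᵀΩ⁻¹y. A minimal sketch for an intercept-plus-slope model; the data and error variances are illustrative:

```python
# Toy heteroskedastic data: roughly y = 2x, with known error variances.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.9]
var = [0.5, 1.0, 2.0, 4.0]      # diagonal entries of Omega

w = [1.0 / v for v in var]       # entries of Omega^{-1}
# Weighted normal equations for the design X = [[1, x_i]].
sw = sum(w)
swx = sum(wi * x for wi, x in zip(w, xs))
swxx = sum(wi * x * x for wi, x in zip(w, xs))
swy = sum(wi * y for wi, y in zip(w, ys))
swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
det = sw * swxx - swx * swx     # det(X' Omega^{-1} X)
b0 = (swxx * swy - swx * swxy) / det
b1 = (sw * swxy - swx * swy) / det

# First-order condition: weighted residuals are orthogonal to X's columns.
res = [y - b0 - b1 * x for x, y in zip(xs, ys)]
g0 = sum(wi * r for wi, r in zip(w, res))
g1 = sum(wi * x * r for wi, x, r in zip(w, xs, res))
```

Setting all variances equal recovers ordinary least squares, since a constant weight cancels out of the normal equations.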
https://en.wikipedia.org/wiki/Best_linear_unbiased_estimator
Instatistics,completenessis a property of astatisticcomputed on asample datasetin relation to a parametric model of the dataset. It is opposed to the concept of anancillary statistic. While an ancillary statistic contains no information about the model parameters, a complete statistic contains only information about the parameters, and no ancillary information. It is closely related to the concept of asufficient statisticwhich contains all of the information that the dataset provides about the parameters.[1] Consider arandom variableXwhose probability distribution belongs to aparametric modelPθparametrized byθ. SayTis astatistic; that is, the composition of ameasurable functionwith a random sampleX1,...,Xn. The statisticTis said to becompletefor the distribution ofXif, for every measurable functiong,[1] The statisticTis said to beboundedly completefor the distribution ofXif this implication holds for every measurable functiongthat is also bounded. The Bernoulli model admits a complete statistic.[1]LetXbe arandom sampleof sizensuch that eachXihas the sameBernoulli distributionwith parameterp. LetTbe the number of 1s observed in the sample, i.e.T=∑i=1nXi{\displaystyle \textstyle T=\sum _{i=1}^{n}X_{i}}.Tis a statistic ofXwhich has abinomial distributionwith parameters (n,p). If the parameter space forpis (0,1), thenTis a complete statistic. To see this, note that Observe also that neitherpnor 1 −pcan be 0. HenceEp(g(T))=0{\displaystyle E_{p}(g(T))=0}if and only if: On denotingp/(1 −p) byr, one gets: First, observe that the range ofris thepositive reals. Also, E(g(T)) is apolynomialinrand, therefore, can only be identical to 0 if all coefficients are 0, that is,g(t) = 0 for allt. It is important to notice that the result that all coefficients must be 0 was obtained because of the range ofr. 
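The argument above turns E_p[g(T)] into a polynomial whose coefficients must all vanish. A small exact computation illustrates both directions: a nonzero g can make E_p[g(T)] vanish at one particular p, but not at every p in (0, 1). The particular g used is an assumption chosen for illustration:

```python
from math import comb

def expect_g(g, n, p):
    """Exact E_p[g(T)] for T ~ Binomial(n, p)."""
    return sum(g(t) * comb(n, t) * p ** t * (1 - p) ** (n - t)
               for t in range(n + 1))

# Illustrative nonzero g: g(0) = 1, g(1) = -1.
g = lambda t: 1.0 if t == 0 else -1.0
```

With n = 1, E_p[g(T)] = (1 − p) − p, which is zero at p = 0.5 but nonzero elsewhere; so T is complete on the full parameter space (0, 1) yet fails to be complete on the restricted space {0.5}.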
Had the parameter space been finite and with a number of elements less than or equal ton, it might be possible to solve the linear equations ing(t) obtained by substituting the values ofrand get solutions different from 0. For example, ifn= 1 and the parameter space is {0.5}, a single observation and a single parameter value,Tis not complete. Observe that, with the definition: then, E(g(T)) = 0 althoughg(t) is not 0 fort= 0 nor fort= 1. This example will show that, in a sampleX1,X2of size 2 from anormal distributionwith known variance, the statisticX1+X2is complete and sufficient. SupposeX1,X2areindependent, identically distributed random variables,normally distributedwith expectationθand variance 1. The sum is acomplete statisticforθ. To show this, it is sufficient to demonstrate that there is no non-zero functiong{\displaystyle g}such that the expectation of remains zero regardless of the value ofθ. That fact may be seen as follows. The probability distribution ofX1+X2is normal with expectation 2θand variance 2. Its probability density function inx{\displaystyle x}is therefore proportional to The expectation ofgabove would therefore be a constant times A bit of algebra reduces this to wherek(θ) is nowhere zero and As a function ofθthis is a two-sidedLaplace transformofh, and cannot be identically zero unlesshis zero almost everywhere.[2]The exponential is not zero, so this can only happen ifgis zero almost everywhere. By contrast, the statistic(X1,X2){\textstyle (X_{1},X_{2})}is sufficient but not complete. It admits a non-zero unbiased estimator of zero, namelyX1−X2{\textstyle X_{1}-X_{2}}. Most parametric models have asufficient statisticwhich is not complete. This is important because theLehmann–Scheffé theoremcannot be applied to such models. Galili and Meilijson 2016[3]propose the following didactic example. Considern{\displaystyle n}independent samples from the uniform distribution: k{\displaystyle k}is a known design parameter. 
This model is a scale family (a specific case of a location-scale family): scaling the samples by a multiplier c{\displaystyle c} multiplies the parameter θ{\displaystyle \theta }. Galili and Meilijson show that the minimum and maximum of the samples are together a sufficient statistic: X(1),X(n){\displaystyle X_{(1)},X_{(n)}} (using the usual notation for order statistics). Indeed, conditional on these two values, the distribution of the rest of the sample is simply uniform on the range they define: [X(1),X(n)]{\displaystyle \left[X_{(1)},X_{(n)}\right]}. However, their ratio has a distribution which does not depend on θ{\displaystyle \theta }. This follows from the fact that this is a scale family: any change of scale impacts both variables identically. Subtracting the mean m{\displaystyle m} of the ratio's distribution from the ratio itself, we obtain g(X(1),X(n))=X(1)/X(n)−m{\displaystyle g\left(X_{(1)},X_{(n)}\right)=X_{(1)}/X_{(n)}-m}, which has expectation 0 for every θ. We have thus shown that there exists a function g(X(1),X(n)){\displaystyle g\left(X_{(1)},X_{(n)}\right)} which is not 0{\displaystyle 0} everywhere but which has expectation 0{\displaystyle 0}. The pair is thus not complete. The notion of completeness has many applications in statistics, particularly in the following theorems of mathematical statistics. Completeness occurs in the Lehmann–Scheffé theorem,[1] which states that if a statistic is unbiased, complete and sufficient for some parameter θ, then it is the best mean-unbiased estimator for θ. In other words, this statistic has a smaller expected loss for any convex loss function; in many practical applications with the squared loss function, it has a smaller mean squared error than any other estimator with the same expected value. Examples exist in which the minimal sufficient statistic is not complete and several alternative statistics are available for unbiased estimation of θ, some of them with lower variance than others.[3] See also minimum-variance unbiased estimator.
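The scale-invariance argument above can be checked directly: multiplying every sample by c > 0 leaves the ratio X_(1)/X_(n) unchanged. A sketch assuming, as in the Galili–Meilijson example, samples uniform on [θ, kθ]; the particular k, θ, and c values are illustrative:

```python
import random

random.seed(1)
k, theta, c = 3.0, 2.0, 7.5
# Samples uniform on [theta, k*theta]; rescaling by c gives the same
# model with parameter c*theta.
sample = [random.uniform(theta, k * theta) for _ in range(50)]
scaled = [c * x for x in sample]

ratio = min(sample) / max(sample)
ratio_scaled = min(scaled) / max(scaled)
```

The two ratios agree (up to floating-point rounding), which is exactly why the distribution of X_(1)/X_(n) carries no information about θ.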
Bounded completeness occurs in Basu's theorem,[1] which states that a statistic that is both boundedly complete and sufficient is independent of any ancillary statistic. Bounded completeness also occurs in Bahadur's theorem: in the case where there exists at least one minimal sufficient statistic, a statistic which is sufficient and boundedly complete is necessarily minimal sufficient.[4]
https://en.wikipedia.org/wiki/Completeness_(statistics)
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of electronics, signal recovery is the separation of such patterns from a disguising background.[1] According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g. fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion; however, they might also be more likely to treat innocuous stimuli as a threat. Much of the early work in detection theory was done by radar researchers.[2] By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox[3] and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954.[4] Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics.[5] Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases.[6] Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology.
The concept is similar to thesignal-to-noise ratioused in the sciences andconfusion matricesused inartificial intelligence. It is also usable inalarm management, where it is important to separate important events frombackground noise. Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or duringeyewitness identification.[7][8]SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog. Since the brightness of the object, such as a traffic light, is used by the brain to discriminate the distance of an object, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see alsodecision theory). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not based on their perceived level of familiarity with the suspect. To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories: hits (the stimulus was present and the observer judged it present), misses (present, but judged absent), false alarms (absent, but judged present), and correct rejections (absent, and judged absent). Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like thesensitivity indexd'and A',[9]and response bias can be estimated with statistics like c and β.[9]β is the measure of response bias.[10] Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list.
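The sensitivity and bias statistics above can be sketched numerically. The following is a minimal illustration, assuming the standard equal-variance Gaussian model; the trial counts are invented for the example:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT estimates of d' and criterion c."""
    z = NormalDist().inv_cdf
    # Proportions of 'present' responses on signal and noise trials.
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)        # sensitivity index d'
    c = -0.5 * (z(hit_rate) + z(fa_rate))     # response bias (criterion)
    return d_prime, c

# Illustrative counts: 75% hit rate, 25% false-alarm rate.
d, c = sdt_measures(hits=75, misses=25, false_alarms=25, correct_rejections=75)
```

With these symmetric counts the criterion c comes out near zero (no overall bias) and d' is about 1.35; extreme rates of 0 or 1 would need a correction (e.g. a log-linear adjustment) before applying the inverse normal transform.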
On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called Targets, and new items are called Distractors. Saying 'Yes' to a target constitutes a Hit, while saying 'Yes' to a distractor constitutes a False Alarm. Signal Detection Theory has wide application, both in humans andanimals. Topics includememory, stimulus characteristics of schedules of reinforcement, etc. Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present from background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-calledsensitivity indexord'. There are alsonon-parametricmeasures, such as the area under theROC-curve.[6] Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often (crying wolf) may make people less likely to respond, a problem that can be reduced by a conservative response bias. Another field which is closely related to signal detection theory is calledcompressed sensing(or compressive sensing). 
The objective of compressed sensing is to recover high-dimensional but low-complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is in the recovery of high dimensional signals which are known to be sparse (or nearly sparse) with only a few linear measurements. The number of measurements needed in the recovery of signals is far smaller than what theNyquist sampling theoremrequires, provided that the signal is sparse, meaning that it only contains a few non-zero elements. There are different methods of signal recovery in compressed sensing includingbasis pursuit, theexpander recovery algorithm,[11]CoSaMP[12]and also afastnon-iterative algorithm.[13]In all of the recovery methods mentioned above, choosing an appropriate measurement matrix using probabilistic constructions or deterministic constructions is of great importance. In other words, measurement matrices must satisfy certain specific conditions such asRIP(Restricted Isometry Property) orNull-Space propertyin order to achieve robust sparse recovery. In the case of making a decision between twohypotheses,H1, absent, andH2, present, in the event of a particularobservation,y, a classical approach is to chooseH1whenp(H1|y) > p(H2|y)andH2in the reverse case.[14]In the event that the twoa posterioriprobabilitiesare equal, one might choose to default to a single choice (either always chooseH1or always chooseH2), or might randomly select eitherH1orH2. Thea prioriprobabilities ofH1andH2can guide this choice, e.g. by always choosing the hypothesis with the highera prioriprobability. When taking this approach, usually what one knows are the conditional probabilities,p(y|H1)andp(y|H2), and thea prioriprobabilitiesp(H1)=π1{\displaystyle p(H1)=\pi _{1}}andp(H2)=π2{\displaystyle p(H2)=\pi _{2}}.
In this case, p(H1|y)=p(y|H1)⋅π1p(y){\displaystyle p(H1|y)={\frac {p(y|H1)\cdot \pi _{1}}{p(y)}}}, p(H2|y)=p(y|H2)⋅π2p(y){\displaystyle p(H2|y)={\frac {p(y|H2)\cdot \pi _{2}}{p(y)}}} wherep(y)is the total probability of eventy, p(y|H1)⋅π1+p(y|H2)⋅π2{\displaystyle p(y|H1)\cdot \pi _{1}+p(y|H2)\cdot \pi _{2}}. H2is chosen in case p(y|H2)⋅π2p(y|H1)⋅π1+p(y|H2)⋅π2≥p(y|H1)⋅π1p(y|H1)⋅π1+p(y|H2)⋅π2{\displaystyle {\frac {p(y|H2)\cdot \pi _{2}}{p(y|H1)\cdot \pi _{1}+p(y|H2)\cdot \pi _{2}}}\geq {\frac {p(y|H1)\cdot \pi _{1}}{p(y|H1)\cdot \pi _{1}+p(y|H2)\cdot \pi _{2}}}} ⇒p(y|H2)p(y|H1)≥π1π2{\displaystyle \Rightarrow {\frac {p(y|H2)}{p(y|H1)}}\geq {\frac {\pi _{1}}{\pi _{2}}}} andH1otherwise. Often, the ratioπ1π2{\displaystyle {\frac {\pi _{1}}{\pi _{2}}}}is calledτMAP{\displaystyle \tau _{MAP}}andp(y|H2)p(y|H1){\displaystyle {\frac {p(y|H2)}{p(y|H1)}}}is calledL(y){\displaystyle L(y)}, thelikelihood ratio. Using this terminology,H2is chosen in caseL(y)≥τMAP{\displaystyle L(y)\geq \tau _{MAP}}. This is called MAP testing, where MAP stands for "maximuma posteriori". Taking this approach minimizes the expected number of errors one will make. In some cases, it is far more important to respond appropriately toH1than it is to respond appropriately toH2. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying anuclear weapon), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect afalse alarm(i.e., H1 = FALSE, H2 = TRUE) (assuming a large supply of fighter squadrons). TheBayescriterion is an approach suitable for such cases.[14] Here autilityis associated with each of four situations: a utilityUij{\displaystyle U_{ij}}is assigned to decidingHiwhenHjis in fact true. As is shown below, what is important are the differences,U11−U21{\displaystyle U_{11}-U_{21}}andU22−U12{\displaystyle U_{22}-U_{12}}. Similarly, there are four probabilities,P11{\displaystyle P_{11}},P12{\displaystyle P_{12}}, etc., for each of the cases (which are dependent on one's decision strategy).
The Bayes criterion approach is to maximize the expected utility: E{U}=P11⋅U11+P21⋅U21+P12⋅U12+P22⋅U22{\displaystyle E\{U\}=P_{11}\cdot U_{11}+P_{21}\cdot U_{21}+P_{12}\cdot U_{12}+P_{22}\cdot U_{22}} E{U}=P11⋅U11+(1−P11)⋅U21+P12⋅U12+(1−P12)⋅U22{\displaystyle E\{U\}=P_{11}\cdot U_{11}+(1-P_{11})\cdot U_{21}+P_{12}\cdot U_{12}+(1-P_{12})\cdot U_{22}} E{U}=U21+U22+P11⋅(U11−U21)−P12⋅(U22−U12){\displaystyle E\{U\}=U_{21}+U_{22}+P_{11}\cdot (U_{11}-U_{21})-P_{12}\cdot (U_{22}-U_{12})} Effectively, one may maximize the sum, U′=P11⋅(U11−U21)−P12⋅(U22−U12){\displaystyle U'=P_{11}\cdot (U_{11}-U_{21})-P_{12}\cdot (U_{22}-U_{12})}, and make the following substitutions: P11=π1⋅∫R1p(y|H1)dy{\displaystyle P_{11}=\pi _{1}\cdot \int _{R_{1}}p(y|H1)\,dy} P12=π2⋅∫R1p(y|H2)dy{\displaystyle P_{12}=\pi _{2}\cdot \int _{R_{1}}p(y|H2)\,dy} whereπ1{\displaystyle \pi _{1}}andπ2{\displaystyle \pi _{2}}are thea prioriprobabilities,P(H1){\displaystyle P(H1)}andP(H2){\displaystyle P(H2)}, andR1{\displaystyle R_{1}}is the region of observation events,y, that are responded to as thoughH1is true. 
⇒U′=∫R1{π1⋅(U11−U21)⋅p(y|H1)−π2⋅(U22−U12)⋅p(y|H2)}dy{\displaystyle \Rightarrow U'=\int _{R_{1}}\left\{\pi _{1}\cdot (U_{11}-U_{21})\cdot p(y|H1)-\pi _{2}\cdot (U_{22}-U_{12})\cdot p(y|H2)\right\}\,dy} U′{\displaystyle U'}and thusU{\displaystyle U}are maximized by extendingR1{\displaystyle R_{1}}over the region where π1⋅(U11−U21)⋅p(y|H1)−π2⋅(U22−U12)⋅p(y|H2)>0{\displaystyle \pi _{1}\cdot (U_{11}-U_{21})\cdot p(y|H1)-\pi _{2}\cdot (U_{22}-U_{12})\cdot p(y|H2)>0} This is accomplished by deciding H2 in case π2⋅(U22−U12)⋅p(y|H2)≥π1⋅(U11−U21)⋅p(y|H1){\displaystyle \pi _{2}\cdot (U_{22}-U_{12})\cdot p(y|H2)\geq \pi _{1}\cdot (U_{11}-U_{21})\cdot p(y|H1)} ⇒L(y)≡p(y|H2)p(y|H1)≥π1⋅(U11−U21)π2⋅(U22−U12)≡τB{\displaystyle \Rightarrow L(y)\equiv {\frac {p(y|H2)}{p(y|H1)}}\geq {\frac {\pi _{1}\cdot (U_{11}-U_{21})}{\pi _{2}\cdot (U_{22}-U_{12})}}\equiv \tau _{B}} and H1 otherwise, whereL(y)is the so-definedlikelihood ratio. Das and Geisler[15]extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate andconfusion matrixforideal observersand non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.
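The MAP and Bayes decision rules above can be sketched numerically. In this illustration the Gaussian observation models (unit variance, means 0 and 1) and all prior and utility values are invented for the example:

```python
from math import exp, pi, sqrt

# Likelihood-ratio decisions for H1 (absent) vs H2 (present), assuming for
# illustration that y ~ N(0,1) under H1 and y ~ N(1,1) under H2. The MAP
# threshold is pi1/pi2; the Bayes threshold also weights the utilities.
def likelihood_ratio(y):
    p_h1 = exp(-0.5 * y ** 2) / sqrt(2 * pi)           # p(y|H1)
    p_h2 = exp(-0.5 * (y - 1.0) ** 2) / sqrt(2 * pi)   # p(y|H2)
    return p_h2 / p_h1

def decide(y, tau):
    return 'H2' if likelihood_ratio(y) >= tau else 'H1'

pi1, pi2 = 0.99, 0.01
tau_map = pi1 / pi2                                    # = 99

# Bayes threshold with a correct detection (U22) valued far above the rest.
U11, U21, U22, U12 = 1.0, 0.0, 1000.0, 0.0
tau_bayes = (pi1 * (U11 - U21)) / (pi2 * (U22 - U12))  # = 0.099

y = 2.0                               # a moderately signal-like observation
map_choice = decide(y, tau_map)       # 'H1': rare events need strong evidence
bayes_choice = decide(y, tau_bayes)   # 'H2': missing a signal is too costly
```

The same observation is classified differently by the two rules: the MAP test only minimizes the expected error count, while the large utility on correct detections pulls the Bayes threshold far below the MAP threshold.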
https://en.wikipedia.org/wiki/Detection_theory
In statistics,efficiencyis a measure of quality of anestimator, of an experimental design,[1]or of ahypothesis testingprocedure.[2]Essentially, a more efficient estimator needs fewer input data or observations than a less efficient one to achieve theCramér–Rao bound. Anefficient estimatoris characterized by having the smallest possiblevariance, indicating that there is a smalldeviancebetween the estimated value and the "true" value in theL2 normsense.[1] Therelative efficiencyof two procedures is the ratio of their efficiencies, although often this concept is used where the comparison is made between a given procedure and a notional "best possible" procedure. The efficiencies and the relative efficiency of two procedures theoretically depend on the sample size available for the given procedure, but it is often possible to use theasymptotic relative efficiency(defined as the limit of the relative efficiencies as the sample size grows) as the principal comparison measure. The efficiency of anunbiasedestimator,T, of aparameterθis defined as[3] e(T)=1/I(θ)var⁡(T){\displaystyle e(T)={\frac {1/{\mathcal {I}}(\theta )}{\operatorname {var} (T)}}} whereI(θ){\displaystyle {\mathcal {I}}(\theta )}is theFisher informationof the sample. Thuse(T) is the minimum possible variance for an unbiased estimator divided by its actual variance. TheCramér–Rao boundcan be used to prove thate(T) ≤ 1. Anefficient estimatoris anestimatorthat estimates the quantity of interest in some “best possible” manner. The notion of “best possible” relies upon the choice of a particularloss function— the function which quantifies the relative degree of undesirability of estimation errors of different magnitudes. The most common choice of the loss function isquadratic, resulting in themean squared errorcriterion of optimality.[4] In general, the spread of an estimator around the parameter θ is a measure of estimator efficiency and performance. This performance can be calculated by finding the mean squared error. More formally, letTbe an estimator for the parameterθ.
The mean squared error ofTis the valueMSE⁡(T)=E[(T−θ)2]{\displaystyle \operatorname {MSE} (T)=E[(T-\theta )^{2}]}, which can be decomposed as the sum of its variance and the square of its bias: MSE⁡(T)=var⁡(T)+(E⁡[T]−θ)2{\displaystyle \operatorname {MSE} (T)=\operatorname {var} (T)+(\operatorname {E} [T]-\theta )^{2}} An estimatorT1performs better than an estimatorT2ifMSE⁡(T1)<MSE⁡(T2){\displaystyle \operatorname {MSE} (T_{1})<\operatorname {MSE} (T_{2})}.[5]For a more specific case, ifT1andT2are two unbiased estimators for the same parameter θ, then the variance can be compared to determine performance. In this case,T2ismore efficientthanT1if the variance ofT2issmallerthan the variance ofT1, i.e.var⁡(T1)>var⁡(T2){\displaystyle \operatorname {var} (T_{1})>\operatorname {var} (T_{2})}for all values ofθ. This relationship can be determined by simplifying the more general case above for mean squared error; since the expected value of an unbiased estimator is equal to the parameter value,E⁡[T]=θ{\displaystyle \operatorname {E} [T]=\theta }. Therefore, for an unbiased estimator,MSE⁡(T)=var⁡(T){\displaystyle \operatorname {MSE} (T)=\operatorname {var} (T)}, as the(E⁡[T]−θ)2{\displaystyle (\operatorname {E} [T]-\theta )^{2}}term drops out for being equal to 0.[5] If anunbiasedestimatorof a parameterθattainse(T)=1{\displaystyle e(T)=1}for all values of the parameter, then the estimator is called efficient.[3] Equivalently, the estimator achieves equality in theCramér–Rao inequalityfor allθ. TheCramér–Rao lower boundis a lower bound of the variance of an unbiased estimator, representing the "best" an unbiased estimator can be. An efficient estimator is also theminimum variance unbiased estimator(MVUE). This is because an efficient estimator maintains equality on the Cramér–Rao inequality for all parameter values, which means it attains the minimum variance for all parameters (the definition of the MVUE). The MVUE estimator, even if it exists, is not necessarily efficient, because "minimum" does not mean equality holds on the Cramér–Rao inequality.
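The decomposition of the mean squared error into variance plus squared bias can be checked numerically. This sketch uses an intentionally biased estimator (a shrunken sample mean); the distribution, shrinkage factor and sample sizes are invented for the example:

```python
import random
from statistics import mean, pvariance

# Numerical check of MSE(T) = var(T) + (E[T] - theta)^2 using a
# deliberately biased estimator: the sample mean shrunk toward zero
# by a factor of 0.8.
random.seed(0)
theta, n, trials = 5.0, 20, 4000

estimates = []
for _ in range(trials):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    estimates.append(0.8 * mean(sample))      # biased estimator T

mse = mean((t - theta) ** 2 for t in estimates)
var_plus_bias_sq = pvariance(estimates) + (mean(estimates) - theta) ** 2
# The two quantities agree (the identity is exact for sample moments);
# here the squared bias (0.8*5 - 5)^2 = 1.0 dominates the variance
# 0.8^2 * 1/20 = 0.032, so the shrunken mean is a poor estimator.
```

The agreement is exact up to floating-point rounding, since the identity holds for the empirical moments themselves, not just in expectation.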
Thus an efficient estimator need not exist, but if it does, it is the MVUE. Suppose{Pθ|θ∈ Θ} is aparametric modelandX= (X1, …,Xn)are the data sampled from this model. LetT=T(X)be anestimatorfor the parameterθ. If this estimator isunbiased(that is,E[T] =θ), then theCramér–Rao inequalitystates thevarianceof this estimator is bounded from below: Var⁡[T]≥Iθ−1{\displaystyle \operatorname {Var} [T]\geq {\mathcal {I}}_{\theta }^{-1}} whereIθ{\displaystyle \scriptstyle {\mathcal {I}}_{\theta }}is theFisher information matrixof the model at pointθ. Generally, the variance measures the degree of dispersion of a random variable around its mean. Thus estimators with small variances are more concentrated; they estimate the parameters more precisely. We say that the estimator is afinite-sample efficient estimator(in the class of unbiased estimators) if it reaches the lower bound in the Cramér–Rao inequality above, for allθ∈ Θ. Efficient estimators are alwaysminimum variance unbiased estimators. However the converse is false: There exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient.[6] Historically, finite-sample efficiency was an early optimality criterion. However this criterion has some limitations. As an example, among the models encountered in practice, efficient estimators exist for: the meanμof thenormal distribution(but not the varianceσ2), parameterλof thePoisson distribution, the probabilitypin thebinomialormultinomial distribution. Consider the model of anormal distributionwith unknown mean but known variance:{Pθ=N(θ,σ2) |θ∈R}.The data consists ofnindependent and identically distributedobservations from this model:X= (x1, …,xn). We estimate the parameterθusing thesample meanof all observations: θ^=1n∑i=1nxi{\displaystyle {\hat {\theta }}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} This estimator has meanθand variance ofσ2/n, which is equal to the reciprocal of theFisher informationfrom the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution.
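The finite-sample efficiency of the sample mean can be illustrated with a seeded simulation; the parameter values are invented for the example. The observed sampling variance of the mean should sit close to the Cramér–Rao bound σ²/n:

```python
import random
from statistics import mean, pvariance

# Seeded check that the sample mean attains the Cramér–Rao bound for
# N(theta, sigma^2) with known variance: its sampling variance should be
# close to sigma^2/n, the reciprocal of the sample Fisher information
# n/sigma^2.
random.seed(2)
theta, sigma, n, trials = 3.0, 2.0, 25, 5000

sample_means = [mean(random.gauss(theta, sigma) for _ in range(n))
                for _ in range(trials)]

observed_var = pvariance(sample_means)
cramer_rao_bound = sigma ** 2 / n        # = 1 / (n / sigma^2) = 0.16
```

With 5000 replications the Monte Carlo estimate of the variance typically lands within a few percent of the bound.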
Asymptotic efficiency requiresconsistency, an asymptotically normal distribution of the estimator, and an asymptotic variance-covariance matrix no worse than that of any other estimator.[9] Consider a sample of sizeN{\displaystyle N}drawn from anormal distributionof meanμ{\displaystyle \mu }and unitvariance, i.e.,Xn∼N(μ,1).{\displaystyle X_{n}\sim {\mathcal {N}}(\mu ,1).} Thesample mean,X¯{\displaystyle {\overline {X}}}, of the sampleX1,X2,…,XN{\displaystyle X_{1},X_{2},\ldots ,X_{N}}, is defined as X¯=1N∑n=1NXn{\displaystyle {\overline {X}}={\frac {1}{N}}\sum _{n=1}^{N}X_{n}} The variance of the mean, 1/N(the square of thestandard error) is equal to the reciprocal of theFisher informationfrom the sample and thus, by theCramér–Rao inequality, the sample mean is efficient in the sense that its efficiency is unity (100%). Now consider thesample median,X~{\displaystyle {\widetilde {X}}}. This is anunbiasedandconsistentestimator forμ{\displaystyle \mu }. For largeN{\displaystyle N}the sample median is approximatelynormally distributedwith meanμ{\displaystyle \mu }and varianceπ/2N,{\displaystyle {\pi }/{2N},}[10] The efficiency of the median for largeN{\displaystyle N}is thus e(X~)=(1/N)/(π/(2N))=2/π≈0.637{\displaystyle e({\widetilde {X}})={\frac {1/N}{\pi /(2N)}}={\frac {2}{\pi }}\approx 0.637} In other words, the relative variance of the median will beπ/2≈1.57{\displaystyle \pi /2\approx 1.57}, or 57% greater than the variance of the mean – the standard error of the median will be 25% greater than that of the mean.[11] Note that this is theasymptoticefficiency — that is, the efficiency in the limit as sample sizeN{\displaystyle N}tends to infinity. For finite values ofN,{\displaystyle N,}the efficiency is higher than this (for example, a sample size of 3 gives an efficiency of about 74%).[citation needed] The sample mean is thus more efficient than the sample median in this example. However, there may be measures by which the median performs better. For example, the median is far more robust tooutliers, so that if the Gaussian model is questionable or approximate, there may be advantages to using the median (seeRobust statistics).
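The asymptotic efficiency of the median relative to the mean can be approximated with a seeded Monte Carlo sketch; sample size and trial counts are invented for the example, and the estimated ratio should sit near 2/π ≈ 0.637:

```python
import random
from statistics import mean, median, pvariance

# Seeded Monte Carlo estimate of the median's efficiency relative to the
# mean for samples from N(mu, 1): var(mean)/var(median) should approach
# 2/pi as the sample size grows.
random.seed(1)
mu, n, trials = 0.0, 101, 3000

means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    means.append(mean(sample))
    medians.append(median(sample))

efficiency = pvariance(means) / pvariance(medians)
```

An odd sample size is used so the median is a single order statistic; with 3000 replications the estimate carries a few percent of Monte Carlo noise around 0.637.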
IfT1{\displaystyle T_{1}}andT2{\displaystyle T_{2}}are estimators for the parameterθ{\displaystyle \theta }, thenT1{\displaystyle T_{1}}is said todominateT2{\displaystyle T_{2}}if its mean squared error is nowhere larger and somewhere strictly smaller. Formally,T1{\displaystyle T_{1}}dominatesT2{\displaystyle T_{2}}if E⁡[(T1−θ)2]≤E⁡[(T2−θ)2]{\displaystyle \operatorname {E} [(T_{1}-\theta )^{2}]\leq \operatorname {E} [(T_{2}-\theta )^{2}]} holds for allθ{\displaystyle \theta }, with strict inequality holding somewhere. The relative efficiency of two unbiased estimators is defined as[12] e(T1,T2)=var⁡(T2)var⁡(T1){\displaystyle e(T_{1},T_{2})={\frac {\operatorname {var} (T_{2})}{\operatorname {var} (T_{1})}}} Althoughe{\displaystyle e}is in general a function ofθ{\displaystyle \theta }, in many cases the dependence drops out; if this is so,e{\displaystyle e}being greater than one would indicate thatT1{\displaystyle T_{1}}is preferable, regardless of the true value ofθ{\displaystyle \theta }. An alternative to relative efficiency for comparing estimators is thePitman closeness criterion. This replaces the comparison of mean-squared-errors with comparing how often one estimator produces estimates closer to the true value than another estimator. In estimating the mean of uncorrelated, identically distributed variables we can take advantage of the fact thatthe variance of the sum is the sum of the variances. In this case efficiency can be defined as the square of thecoefficient of variation, i.e.,[13] e≡(σμ)2{\displaystyle e\equiv \left({\frac {\sigma }{\mu }}\right)^{2}} Relative efficiency of two such estimators can thus be interpreted as the relative sample size of one required to achieve the certainty of the other. Proof: e1e2=s12s22{\displaystyle {\frac {e_{1}}{e_{2}}}={\frac {s_{1}^{2}}{s_{2}^{2}}}}. Now becauses12=n1σ2,s22=n2σ2{\displaystyle s_{1}^{2}=n_{1}\sigma ^{2},\,s_{2}^{2}=n_{2}\sigma ^{2}}we havee1e2=n1n2{\displaystyle {\frac {e_{1}}{e_{2}}}={\frac {n_{1}}{n_{2}}}}, so the relative efficiency expresses the relative sample size of the first estimator needed to match the variance of the second. Efficiency of an estimator may change significantly if the distribution changes, often dropping.
This is one of the motivations ofrobust statistics– an estimator such as the sample mean is an efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of amixture distributionof two normal distributions with the same mean and different variances. For example, if a distribution is a combination of 98%N(μ,σ) and 2%N(μ,10σ), the presence of extreme values from the latter distribution (often "contaminating outliers") significantly reduces the efficiency of the sample mean as an estimator ofμ.By contrast, thetrimmed meanis less efficient for a normal distribution, but is more robust to changes in the distribution (i.e., less affected by them), and thus may be more efficient for a mixture distribution. Similarly, theshape of a distribution, such asskewnessorheavy tails, can significantly reduce the efficiency of estimators that assume a symmetric distribution or thin tails. While efficiency is a desirable quality of an estimator, it must be weighed against other considerations, and an estimator that is efficient for certain distributions may well be inefficient for other distributions. Most significantly, estimators that are efficient for clean data from a simple distribution, such as the normal distribution (which is symmetric, unimodal, and has thin tails) may not be robust to contamination by outliers, and may be inefficient for more complicated distributions. Inrobust statistics, more importance is placed on robustness and applicability to a wide variety of distributions, rather than efficiency on a single distribution.M-estimatorsare a general class of estimators motivated by these concerns. They can be designed to yield both robustness and high relative efficiency, though possibly lower efficiency than traditional estimators for some cases. They can be very computationally complicated, however.
A more traditional alternative isL-estimators, which are very simple statistics that are easy to compute and interpret, in many cases robust, and often sufficiently efficient for initial estimates. Seeapplications of L-estimatorsfor further discussion. Efficiency in statistics is important because it allows one to compare the performance of various estimators. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gather around a number closer to the true value. Thus, estimator performance can be predicted easily by comparing their mean squared errors or variances. For comparingsignificance tests, a meaningful measure of efficiency can be defined based on the sample size required for the test to achieve a givenpower.[14] Pitman efficiency[15]andBahadur efficiency(orHodges–Lehmann efficiency)[16][17][18]relate to the comparison of the performance ofstatistical hypothesis testingprocedures. For experimental designs, efficiency relates to the ability of a design to achieve the objective of the study with minimal expenditure of resources such as time and money. In simple cases, the relative efficiency of designs can be expressed as the ratio of the sample sizes required to achieve a given objective.[19]
https://en.wikipedia.org/wiki/Efficiency_(statistics)
Insignal processing, the output of thematched filteris given bycorrelatinga known delayedsignal, ortemplate, with an unknown signal to detect the presence of the template in the unknown signal.[1][2]This is equivalent toconvolvingthe unknown signal with aconjugatedtime-reversed version of the template. The matched filter is the optimallinear filterfor maximizing thesignal-to-noise ratio(SNR) in the presence of additivestochasticnoise. Matched filters are commonly used inradar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal.Pulse compressionis an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used inimage processing, e.g., to improve the SNR of X-ray observations. Additional applications of note are inseismologyandgravitational-wave astronomy. Matched filtering is a demodulation technique withLTI (linear time invariant) filtersto maximize SNR.[3]It was originally also known as aNorth filter.[4] The following section derives the matched filter for adiscrete-time system. The derivation for acontinuous-time systemis similar, with summations replaced with integrals. The matched filter is the linear filter,h{\displaystyle h}, that maximizes the outputsignal-to-noise ratio, y[n]=∑k=−∞∞h[n−k]x[k]{\displaystyle y[n]=\sum _{k=-\infty }^{\infty }h[n-k]x[k]} wherex[k]{\displaystyle x[k]}is the input as a function of the independent variablek{\displaystyle k}, andy[n]{\displaystyle y[n]}is the filtered output. Though we most often express filters as theimpulse responseof convolution systems, as above (seeLTI system theory), it is easiest to think of the matched filter in the context of theinner product, which we will see shortly. We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument.
The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise. Let us formally define the problem. We seek a filter,h{\displaystyle h}, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signalx{\displaystyle x}. Our observed signal consists of the desirable signals{\displaystyle s}and additive noisev{\displaystyle v}: x=s+v{\displaystyle x=s+v\,} Let us define theauto-correlation matrixof the noise, reminding ourselves that this matrix hasHermitian symmetry, a property that will become useful in the derivation: Rv=E{vvH}{\displaystyle R_{v}=E\{vv^{\mathrm {H} }\}\,} wherevH{\displaystyle v^{\mathrm {H} }}denotes theconjugate transposeofv{\displaystyle v}, andE{\displaystyle E}denotesexpectation(note that in case the noisev{\displaystyle v}has zero-mean, its auto-correlation matrixRv{\displaystyle R_{v}}is equal to itscovariance matrix). Let us call our output,y{\displaystyle y}, the inner product of our filter and the observed signal such that y=hHx{\displaystyle y=h^{\mathrm {H} }x\,} We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise: SNR=|hHs|2E{|hHv|2}{\displaystyle \mathrm {SNR} ={\frac {|h^{\mathrm {H} }s|^{2}}{E\{|h^{\mathrm {H} }v|^{2}\}}}} We rewrite the above: SNR=(hHs)(sHh)E{(hHv)(vHh)}{\displaystyle \mathrm {SNR} ={\frac {(h^{\mathrm {H} }s)(s^{\mathrm {H} }h)}{E\{(h^{\mathrm {H} }v)(v^{\mathrm {H} }h)\}}}} We wish to maximize this quantity by choosingh{\displaystyle h}. Expanding the denominator of our objective function, we have E{(hHv)(vHh)}=hHE{vvH}h=hHRvh{\displaystyle E\{(h^{\mathrm {H} }v)(v^{\mathrm {H} }h)\}=h^{\mathrm {H} }E\{vv^{\mathrm {H} }\}h=h^{\mathrm {H} }R_{v}h} Now, ourSNR{\displaystyle \mathrm {SNR} }becomes SNR=hHssHhhHRvh{\displaystyle \mathrm {SNR} ={\frac {h^{\mathrm {H} }ss^{\mathrm {H} }h}{h^{\mathrm {H} }R_{v}h}}} We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the auto-correlation matrixRv{\displaystyle R_{v}}, we can write SNR=|(Rv1/2h)H(Rv−1/2s)|2(Rv1/2h)H(Rv1/2h){\displaystyle \mathrm {SNR} ={\frac {|(R_{v}^{1/2}h)^{\mathrm {H} }(R_{v}^{-1/2}s)|^{2}}{(R_{v}^{1/2}h)^{\mathrm {H} }(R_{v}^{1/2}h)}}} We would like to find an upper bound on this expression.
To do so, we first recognize a form of theCauchy–Schwarz inequality: |aHb|2≤(aHa)(bHb){\displaystyle |a^{\mathrm {H} }b|^{2}\leq (a^{\mathrm {H} }a)(b^{\mathrm {H} }b)\,} which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectorsa{\displaystyle a}andb{\displaystyle b}are parallel. We resume our derivation by expressing the upper bound on ourSNR{\displaystyle \mathrm {SNR} }in light of the geometric inequality above: SNR=|(Rv1/2h)H(Rv−1/2s)|2(Rv1/2h)H(Rv1/2h)≤[(Rv1/2h)H(Rv1/2h)][(Rv−1/2s)H(Rv−1/2s)](Rv1/2h)H(Rv1/2h){\displaystyle \mathrm {SNR} ={\frac {|(R_{v}^{1/2}h)^{\mathrm {H} }(R_{v}^{-1/2}s)|^{2}}{(R_{v}^{1/2}h)^{\mathrm {H} }(R_{v}^{1/2}h)}}\leq {\frac {\left[(R_{v}^{1/2}h)^{\mathrm {H} }(R_{v}^{1/2}h)\right]\left[(R_{v}^{-1/2}s)^{\mathrm {H} }(R_{v}^{-1/2}s)\right]}{(R_{v}^{1/2}h)^{\mathrm {H} }(R_{v}^{1/2}h)}}} Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified: SNR≤sHRv−1s{\displaystyle \mathrm {SNR} \leq s^{\mathrm {H} }R_{v}^{-1}s\,} We can achieve this upper bound if we choose h=αRv−1s{\displaystyle h=\alpha R_{v}^{-1}s\,} whereα{\displaystyle \alpha }is an arbitrary real number. To verify this, we plug into our expression for the outputSNR{\displaystyle \mathrm {SNR} }: SNR=α2(sHRv−1s)2α2sHRv−1s=sHRv−1s{\displaystyle \mathrm {SNR} ={\frac {\alpha ^{2}(s^{\mathrm {H} }R_{v}^{-1}s)^{2}}{\alpha ^{2}s^{\mathrm {H} }R_{v}^{-1}s}}=s^{\mathrm {H} }R_{v}^{-1}s} Thus, our optimal matched filter is h=αRv−1s{\displaystyle h=\alpha R_{v}^{-1}s\,} We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain E{|hHv|2}=1{\displaystyle E\{|h^{\mathrm {H} }v|^{2}\}=1\,} This constraint implies a value ofα{\displaystyle \alpha }, for which we can solve: α2sHRv−1s=1{\displaystyle \alpha ^{2}s^{\mathrm {H} }R_{v}^{-1}s=1} yielding α=1sHRv−1s{\displaystyle \alpha ={\frac {1}{\sqrt {s^{\mathrm {H} }R_{v}^{-1}s}}}} giving us our normalized filter, h=1sHRv−1sRv−1s{\displaystyle h={\frac {1}{\sqrt {s^{\mathrm {H} }R_{v}^{-1}s}}}R_{v}^{-1}s} If we care to write the impulse responseh{\displaystyle h}of the filter for the convolution system, it is simply thecomplex conjugatetime reversal of the inputs{\displaystyle s}. Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replaceRv{\displaystyle R_{v}}with the continuous-timeautocorrelationfunction of the noise, assuming a continuous signals(t){\displaystyle s(t)}, continuous noisev(t){\displaystyle v(t)}, and a continuous filterh(t){\displaystyle h(t)}. Alternatively, we may solve for the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio (SNR{\displaystyle \mathrm {SNR} }) of a filtered deterministic signal in stochastic additive noise.
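For white noise, R_v = σ²I and the optimal filter reduces to a scalar multiple of s itself. The following sketch evaluates the deterministic SNR expression for a matched and a mismatched filter (the signal values and noise power are invented for the example), confirming that only the matched choice attains the bound sᴴRv⁻¹s = sᴴs/σ²:

```python
# White-noise case: R_v = sigma2 * I, so the optimal filter h is just a
# scalar multiple of the signal s. The output SNR |h.s|^2 / (sigma2 * h.h)
# is evaluated exactly here, with no simulation; values are illustrative.
sigma2 = 0.5
s = [1.0, 2.0, -1.0, 0.5]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def snr(h):
    # |h^H s|^2 / (h^H R_v h) with R_v = sigma2 * I (real-valued vectors)
    return dot(h, s) ** 2 / (sigma2 * dot(h, h))

matched = [2.0 * x for x in s]       # any alpha * s achieves the bound
mismatched = [1.0, 1.0, 1.0, 1.0]
bound = dot(s, s) / sigma2           # s^H R_v^{-1} s = s^H s / sigma2
```

Note that the scale factor α = 2 cancels in the SNR, as the derivation predicts: only the direction of the filter matters.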
The observed sequence, again, is x=s+v{\displaystyle x=s+v\,} with the noise auto-correlation matrix, Rv=E{vvH}{\displaystyle R_{v}=E\{vv^{\mathrm {H} }\}\,} The signal-to-noise ratio is SNR=|ys|2E{|yv|2}{\displaystyle \mathrm {SNR} ={\frac {|y_{s}|^{2}}{E\{|y_{v}|^{2}\}}}} whereys=hHs{\displaystyle y_{s}=h^{\mathrm {H} }s}andyv=hHv{\displaystyle y_{v}=h^{\mathrm {H} }v}. Evaluating the expression in the numerator, we have |ys|2=hHssHh{\displaystyle |y_{s}|^{2}=h^{\mathrm {H} }ss^{\mathrm {H} }h} and in the denominator, E{|yv|2}=hHRvh{\displaystyle E\{|y_{v}|^{2}\}=h^{\mathrm {H} }R_{v}h} The signal-to-noise ratio becomes SNR=hHssHhhHRvh{\displaystyle \mathrm {SNR} ={\frac {h^{\mathrm {H} }ss^{\mathrm {H} }h}{h^{\mathrm {H} }R_{v}h}}} If we now constrain the denominator to be 1, the problem of maximizingSNR{\displaystyle \mathrm {SNR} }is reduced to maximizing the numerator. We can then formulate the problem using aLagrange multiplier: L=hHssHh+λ(1−hHRvh){\displaystyle {\mathcal {L}}=h^{\mathrm {H} }ss^{\mathrm {H} }h+\lambda (1-h^{\mathrm {H} }R_{v}h)} Setting the gradient with respect toh{\displaystyle h}to zero yields ssHh=λRvh{\displaystyle ss^{\mathrm {H} }h=\lambda R_{v}h} which we recognize as ageneralized eigenvalue problem SincessH{\displaystyle ss^{\mathrm {H} }}is of unit rank, it has only one nonzero eigenvalue. It can be shown that this eigenvalue equals λ=sHRv−1s{\displaystyle \lambda =s^{\mathrm {H} }R_{v}^{-1}s\,} yielding the following optimal matched filter h=1sHRv−1sRv−1s{\displaystyle h={\frac {1}{\sqrt {s^{\mathrm {H} }R_{v}^{-1}s}}}R_{v}^{-1}s} This is the same result found in the previous subsection. Matched filtering can also be interpreted as aleast-squares estimatorfor the optimal location and scaling of a given model or template. Once again, let the observed sequence be defined as xk=sk+vk{\displaystyle x_{k}=s_{k}+v_{k}\,} wherevk{\displaystyle v_{k}}is uncorrelated zero mean noise. The signalsk{\displaystyle s_{k}}is assumed to be a scaled and shifted version of a known model sequencefk{\displaystyle f_{k}}: sk=μ0fk−j0{\displaystyle s_{k}=\mu _{0}f_{k-j_{0}}\,} We want to find optimal estimatesj∗{\displaystyle j^{*}}andμ∗{\displaystyle \mu ^{*}}for the unknown shiftj0{\displaystyle j_{0}}and scalingμ0{\displaystyle \mu _{0}}by minimizing the least-squares residual between the observed sequencexk{\displaystyle x_{k}}and a "probing sequence"hj−k{\displaystyle h_{j-k}}: j∗,μ∗=arg⁡minj,μ∑k(xk−μhj−k)2{\displaystyle j^{*},\mu ^{*}=\arg \min _{j,\mu }\sum _{k}\left(x_{k}-\mu h_{j-k}\right)^{2}} The appropriatehj−k{\displaystyle h_{j-k}}will later turn out to be the matched filter, but is as yet unspecified. Expandingxk{\displaystyle x_{k}}and the square within the sum yields ∑k(xk−μhj−k)2=[∑kxk2]+μ2∑khj−k2−2μ∑kskhj−k−2μ∑kvkhj−k{\displaystyle \sum _{k}(x_{k}-\mu h_{j-k})^{2}=\left[\sum _{k}x_{k}^{2}\right]+\mu ^{2}\sum _{k}h_{j-k}^{2}-2\mu \sum _{k}s_{k}h_{j-k}-2\mu \sum _{k}v_{k}h_{j-k}} The first term in brackets is a constant (since the observed signal is given) and has no influence on the optimal solution. The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization.
After reversing the sign, we obtain the equivalent optimization problem maxj,μ(2μ∑kskhj−k−μ2∑khj−k2){\displaystyle \max _{j,\mu }\left(2\mu \sum _{k}s_{k}h_{j-k}-\mu ^{2}\sum _{k}h_{j-k}^{2}\right)} Setting the derivative w.r.t.μ{\displaystyle \mu }to zero gives an analytic solution forμ∗{\displaystyle \mu ^{*}}: μ∗=∑kskhj−k∑khj−k2{\displaystyle \mu ^{*}={\frac {\sum _{k}s_{k}h_{j-k}}{\sum _{k}h_{j-k}^{2}}}} Inserting this into our objective function yields a reduced maximization problem for justj∗{\displaystyle j^{*}}: j∗=arg⁡maxj(∑kskhj−k)2∑khj−k2{\displaystyle j^{*}=\arg \max _{j}{\frac {\left(\sum _{k}s_{k}h_{j-k}\right)^{2}}{\sum _{k}h_{j-k}^{2}}}} The numerator can be upper-bounded by means of theCauchy–Schwarz inequality: (∑kskhj−k)2∑khj−k2≤∑ksk2=μ02∑kfk2{\displaystyle {\frac {\left(\sum _{k}s_{k}h_{j-k}\right)^{2}}{\sum _{k}h_{j-k}^{2}}}\leq \sum _{k}s_{k}^{2}=\mu _{0}^{2}\sum _{k}f_{k}^{2}} The optimization problem assumes its maximum when equality holds in this expression. According to the properties of the Cauchy–Schwarz inequality, this is only possible when hj−k=ν⋅sk=κ⋅fk−j0{\displaystyle h_{j-k}=\nu \cdot s_{k}=\kappa \cdot f_{k-j_{0}}\,} for arbitrary non-zero constantsν{\displaystyle \nu }orκ{\displaystyle \kappa }, and the optimal solution is obtained atj∗=j0{\displaystyle j^{*}=j_{0}}as desired. Thus, our "probing sequence"hj−k{\displaystyle h_{j-k}}must be proportional to the signal modelfk−j0{\displaystyle f_{k-j_{0}}}, and the convenient choiceκ=1{\displaystyle \kappa =1}yields the matched filter hk=f−k{\displaystyle h_{k}=f_{-k}\,} Note that the filter is the mirrored signal model. This ensures that the operation∑kxkhj−k{\displaystyle \sum _{k}x_{k}h_{j-k}}to be applied in order to find the optimum is indeed the convolution between the observed sequencexk{\displaystyle x_{k}}and the matched filterhk{\displaystyle h_{k}}. The filtered sequence assumes its maximum at the position where the observed sequencexk{\displaystyle x_{k}}best matches (in a least-squares sense) the signal modelfk{\displaystyle f_{k}}. The matched filter may be derived in a variety of ways,[2]but as a special case of aleast-squares procedureit may also be interpreted as amaximum likelihoodmethod in the context of a (coloured)Gaussian noisemodel and the associatedWhittle likelihood.[5]If the transmitted signal possessednounknown parameters (like time-of-arrival, amplitude,...), then the matched filter would, according to theNeyman–Pearson lemma, minimize the error probability.
However, since the exact signal generally is determined by unknown parameters that effectively are estimated (orfitted) in the filtering process, the matched filter constitutes ageneralized maximum likelihood(test-) statistic.[6]The filtered time series may then be interpreted as (proportional to) theprofile likelihood, the maximized conditional likelihood as a function of the ("arrival") time parameter.[7]This implies in particular that theerror probability(in the sense of Neyman and Pearson, i.e., concerning maximization of the detection probability for a given false-alarm probability[8]) is not necessarily optimal. What is commonly referred to as theSignal-to-noise ratio (SNR), which is supposed to be maximized by a matched filter, in this context corresponds to2log⁡(L){\displaystyle {\sqrt {2\log({\mathcal {L}})}}}, whereL{\displaystyle {\mathcal {L}}}is the (conditionally) maximized likelihood ratio.[7][nb 1] The construction of the matched filter is based on aknownnoise spectrum. In practice, however, the noise spectrum is usuallyestimatedfrom data and hence only known up to a limited precision. For the case of an uncertain spectrum, the matched filter may be generalized to a more robust iterative procedure with favourable properties also in non-Gaussian noise.[7] When viewed in the frequency domain, it is evident that the matched filter applies the greatest weighting to spectral components exhibiting the greatest signal-to-noise ratio (i.e., large weight where noise is relatively low, and vice versa). In general this requires a non-flat frequency response, but the associated "distortion" is no cause for concern in situations such asradaranddigital communications, where the original waveform is known and the objective is the detection of this signal against the background noise. 
On the technical side, the matched filter is a weighted least-squares method based on the (heteroscedastic) frequency-domain data (where the "weights" are determined via the noise spectrum, see also previous section), or equivalently, a least-squares method applied to the whitened data. Matched filters are often used in signal detection.[1] As an example, suppose that we wish to judge the distance of an object by reflecting a signal off it. We may choose to transmit a pure-tone sinusoid at 1 Hz. We assume that our received signal is an attenuated and phase-shifted form of the transmitted signal with added noise. To judge the distance of the object, we correlate the received signal with a matched filter, which, in the case of white (uncorrelated) noise, is another pure-tone 1-Hz sinusoid. When the output of the matched filter system exceeds a certain threshold, we conclude with high probability that the received signal has been reflected off the object. Using the speed of propagation and the time that we first observe the reflected signal, we can estimate the distance of the object. If we change the shape of the pulse in a specially designed way, the signal-to-noise ratio and the distance resolution can be further improved after matched filtering: this is a technique known as pulse compression. Additionally, matched filters can be used in parameter estimation problems (see estimation theory). To return to our previous example, we may desire to estimate the speed of the object, in addition to its position. To exploit the Doppler effect, we would like to estimate the frequency of the received signal. To do so, we may correlate the received signal with several matched filters of sinusoids at varying frequencies. The matched filter with the highest output will reveal, with high probability, the frequency of the reflected signal and help us determine the radial velocity of the object, i.e., the relative speed either directly towards or away from the observer.
This method is, in fact, a simple version of the discrete Fourier transform (DFT). The DFT takes an N{\displaystyle N}-valued complex input and correlates it with N{\displaystyle N}matched filters, corresponding to complex exponentials at N{\displaystyle N}different frequencies, to yield N{\displaystyle N}complex-valued numbers corresponding to the relative amplitudes and phases of the sinusoidal components (see Moving target indication). The matched filter is also used in communications. In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal. Imagine we want to send the sequence "0101100100" coded in non-polar non-return-to-zero (NRZ) through a certain channel. Mathematically, a sequence in NRZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and by −1 if the bit is "0". Formally, the scaling factor for the kth{\displaystyle k^{\mathrm {th} }}bit is, We can represent our message, M(t){\displaystyle M(t)}, as the sum of shifted unit pulses: where T{\displaystyle T}is the time length of one bit and Π(x){\displaystyle \Pi (x)}is the rectangular function. Thus, the signal to be sent by the transmitter is If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal. At the receiver end, for a signal-to-noise ratio of 3 dB, this may look like: A first glance will not reveal the original transmitted sequence. The noise power is high relative to that of the desired signal (i.e., the signal-to-noise ratio is low). If the receiver were to sample this signal at the correct moments, the resulting binary message could be incorrect. To increase our signal-to-noise ratio, we pass the received signal through a matched filter.
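The bank-of-matched-filters view of the DFT can be demonstrated directly. Below (frequency bin, noise level, and length all chosen arbitrarily for the demo), a noisy complex exponential is correlated against N complex exponentials; the result reproduces `np.fft.fft`, and the largest output identifies the frequency:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 64
k_true = 9                           # true integer frequency bin (assumed for the demo)
n = np.arange(N)
x = np.exp(2j * np.pi * k_true * n / N) \
    + 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Correlate with N complex-exponential "matched filters": exactly the DFT
dft = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
k_hat = int(np.argmax(np.abs(dft)))

assert k_hat == k_true               # the best-matching filter reveals the frequency
assert np.allclose(dft, np.fft.fft(x))
```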
In this case, the filter should be matched to an NRZ pulse (equivalent to a "1" coded in NRZ code). More precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise, should be a time-reversed complex-conjugated scaled version of the signal that we are seeking. We choose In this case, due to symmetry, the time-reversed complex conjugate of h(t){\displaystyle h(t)}is in fact h(t){\displaystyle h(t)}, allowing us to call h(t){\displaystyle h(t)}the impulse response of our matched filter convolution system. After convolving with the correct matched filter, the resulting signal, Mfiltered(t){\displaystyle M_{\mathrm {filtered} }(t)}, is where ∗{\displaystyle *}denotes convolution. The filtered signal can now be safely sampled by the receiver at the correct sampling instants and compared to an appropriate threshold, resulting in a correct interpretation of the binary message. Matched filters play a central role in gravitational-wave astronomy.[9] The first observation of gravitational waves was based on large-scale filtering of each detector's output for signals resembling the expected shape, followed by subsequent screening for coincident and coherent triggers between both instruments.[10] False-alarm rates, and with that, the statistical significance of the detection, were then assessed using resampling methods.[11][12] Inference on the astrophysical source parameters was completed using Bayesian methods based on parameterized theoretical models for the signal waveform and (again) on the Whittle likelihood.[13][14] Matched filters find use in seismology to detect similar earthquake or other seismic signals, often using multicomponent and/or multichannel empirically determined templates.[15] Matched filtering applications in seismology include the generation of large event catalogues to study earthquake seismicity[16] and volcanic activity,[17][18] and in the global detection of nuclear explosions.[19] Animals living in relatively static environments would have relatively fixed
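The NRZ receiver described above can be sketched end to end. In the demo below, the pulse length is arbitrary, and the noise level is set for roughly the 3 dB per-sample SNR mentioned earlier; the matched filter for a rectangular pulse is itself rectangular (its own time reverse):

```python
import numpy as np

rng = np.random.default_rng(3)

bits = [0, 1, 0, 1, 1, 0, 0, 1, 0, 0]   # the example message "0101100100"
T = 20                                   # samples per bit (arbitrary for the demo)
symbols = np.repeat([1.0 if b else -1.0 for b in bits], T)   # NRZ waveform

# AWGN channel; noise power 0.5 against unit signal power gives SNR = 3 dB
received = symbols + rng.normal(scale=np.sqrt(0.5), size=symbols.size)

# Matched filter for a rectangular pulse: a rectangular impulse response,
# normalized by the pulse length
h = np.ones(T) / T
filtered = np.convolve(received, h)

# Sample at the end of each bit interval and threshold at zero
samples = filtered[np.arange(1, len(bits) + 1) * T - 1]
decoded = [1 if s > 0 else 0 for s in samples]
assert decoded == bits
```

Averaging over the 20 samples of each bit shrinks the noise standard deviation by a factor of √20, which is why thresholding the filtered signal succeeds where thresholding raw samples might not.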
features of the environment to perceive. This allows the evolution of filters that match the expected signal with the highest signal-to-noise ratio, the matched filter.[20]Sensors that perceive the world "through such a 'matched filter' severely limits the amount of information the brain can pick up from the outside world, but it frees the brain from the need to perform more intricate computations to extract the information finally needed for fulfilling a particular task."[21]
https://en.wikipedia.org/wiki/Matched_filter
Maximum entropy spectral estimationis a method ofspectral density estimation. The goal is to improve thespectralquality based on theprinciple of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whoseautocorrelationfunction agrees with the known values. This assumption, which corresponds to the concept of maximum entropy as used in bothstatistical mechanicsandinformation theory, is maximally non-committal with regard to the unknown values of the autocorrelation function of the time series. It is simply the application of maximum entropy modeling to any type of spectrum and is used in all fields where data is presented in spectral form. The usefulness of the technique varies based on the source of the spectral data since it is dependent on the amount of assumed knowledge about the spectrum that can be applied to the model. In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type ofstatistical inferenceabout the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy. In theperiodogramapproach to calculating the power spectra, the sample autocorrelation function is multiplied by some window function and thenFourier transformed. The window is applied to provide statistical stability as well as to avoid leakage from other parts of the spectrum. However, the window limits the spectral resolution. 
The maximum entropy method attempts to improve the spectral resolution by extrapolating the correlation function beyond the maximum lag, in such a way that the entropy of the corresponding probability density function is maximized in each step of the extrapolation. The maximum entropy rate stochastic process that satisfies the given empirical autocorrelation and variance constraints is an autoregressive model with independent and identically distributed zero-mean Gaussian input. Therefore, the maximum entropy method is equivalent to least-squares fitting the available time series data to an autoregressive model where the ϵk{\displaystyle \epsilon _{k}}are independent and identically distributed as N(0,σ2){\displaystyle N(0,\sigma ^{2})}. The unknown coefficients αk{\displaystyle \alpha _{k}}are found using the least-squares method. Once the autoregressive coefficients have been determined, the spectrum of the time series data is estimated by evaluating the power spectral density function of the fitted autoregressive model where Ts{\displaystyle T_{s}}is the sampling period and i=−1{\displaystyle i={\sqrt {-1}}}is the imaginary unit.
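A sketch of this procedure with NumPy (the synthetic data, the two sinusoid frequencies, and the model order are assumptions for the demo): fit the AR coefficients by least squares, then evaluate the AR power spectral density and locate its peaks:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic series: two sinusoids (0.1 and 0.3 cycles/sample) in white noise
n = np.arange(400)
x = np.sin(2 * np.pi * 0.1 * n) + 0.5 * np.sin(2 * np.pi * 0.3 * n) \
    + 0.2 * rng.normal(size=400)

# Least-squares fit of an AR(p) model  x_k = sum_m a_m x_{k-m} + eps_k
p = 12
X = np.column_stack([x[p - m:len(x) - m] for m in range(1, p + 1)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
sigma2 = np.mean((x[p:] - X @ a) ** 2)       # innovation variance estimate

# Evaluate the AR power spectral density on a frequency grid (T_s = 1)
f = np.linspace(0, 0.5, 501)
denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1))) @ a) ** 2
psd = sigma2 / denom

# The two strongest local maxima sit near the true frequencies
loc = [i for i in range(1, len(f) - 1) if psd[i - 1] < psd[i] > psd[i + 1]]
top2 = sorted(sorted(loc, key=lambda i: psd[i])[-2:])
f1, f2 = f[top2[0]], f[top2[1]]
assert abs(f1 - 0.1) < 0.02 and abs(f2 - 0.3) < 0.02
```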
https://en.wikipedia.org/wiki/Maximum_entropy_spectral_estimation
In statistics, a nuisance parameter is any parameter which is unspecified[1] but which must be accounted for in the hypothesis testing of the parameters which are of interest. The classic example of a nuisance parameter comes from the normal distribution, a member of the location–scale family. In many such problems the variance, σ2, is neither specified nor known, yet the hypotheses of interest concern the mean. Another example might be linear regression with unknown variance in the explanatory variable (the independent variable): its variance is a nuisance parameter that must be accounted for to derive an accurate interval estimate of the regression slope, to calculate p-values, or to test hypotheses about the slope's value; see regression dilution. Nuisance parameters are often scale parameters, but not always; for example in errors-in-variables models, the unknown true location of each observation is a nuisance parameter. A parameter may also cease to be a "nuisance" if it becomes the object of study, is estimated from data, or becomes known. The general treatment of nuisance parameters can be broadly similar between frequentist and Bayesian approaches to theoretical statistics. It relies on an attempt to partition the likelihood function into components representing information about the parameters of interest and information about the other (nuisance) parameters. This can involve ideas about sufficient statistics and ancillary statistics. When this partition can be achieved, it may be possible to complete a Bayesian analysis for the parameters of interest by determining their joint posterior distribution algebraically. The partition allows frequentist theory to develop general estimation approaches in the presence of nuisance parameters. If the partition cannot be achieved, it may still be possible to make use of an approximate partition. In some special cases, it is possible to formulate methods that circumvent the presence of nuisance parameters.
The t-test provides a practically useful test because its test statistic, which depends on the data only through the sample mean and sample variance, has a distribution that does not depend on the unknown population variance. It is a case where use can be made of a pivotal quantity. However, in other cases no such circumvention is known. Practical approaches to statistical analysis treat nuisance parameters somewhat differently in frequentist and Bayesian methodologies. A general approach in a frequentist analysis can be based on maximum likelihood-ratio tests. These provide both significance tests and confidence intervals for the parameters of interest which are approximately valid for moderate to large sample sizes and which take account of the presence of nuisance parameters. See Basu (1977) for some general discussion and Spall and Garner (1990) for some discussion relative to the identification of parameters in linear dynamic (i.e., state space representation) models. In Bayesian analysis, a generally applicable approach creates random samples from the joint posterior distribution of all the parameters: see Markov chain Monte Carlo. Given these, the joint distribution of only the parameters of interest can be readily found by marginalizing over the nuisance parameters. However, this approach may not always be computationally efficient if some or all of the nuisance parameters can be eliminated on a theoretical basis.
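The pivotal nature of the t statistic can be checked by simulation: its null distribution is the same whatever the nuisance variance happens to be (sample size, variances, and replication count below are arbitrary demo choices):

```python
import numpy as np

rng = np.random.default_rng(5)

def t_stat(sample, mu0):
    # (xbar - mu0) / (s / sqrt(n)): depends on sigma only through the sample
    n = len(sample)
    return (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))

# Simulate the t statistic under H0 (mu = 0) for two very different nuisance
# variances; its distribution is the same in both cases.
n, reps = 10, 20000
t_small = np.array([t_stat(rng.normal(0, 1.0, n), 0) for _ in range(reps)])
t_large = np.array([t_stat(rng.normal(0, 50.0, n), 0) for _ in range(reps)])

# Both empirical 95th percentiles approximate the t_9 critical value 1.833
q_small = np.quantile(t_small, 0.95)
q_large = np.quantile(t_large, 0.95)
assert abs(q_small - 1.833) < 0.1
assert abs(q_large - 1.833) < 0.1
```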
https://en.wikipedia.org/wiki/Nuisance_parameter
Inmathematics, aparametric equationexpresses several quantities, such as thecoordinatesof apoint, asfunctionsof one or severalvariablescalledparameters.[1] In the case of a single parameter, parametric equations are commonly used to express thetrajectoryof a moving point, in which case, the parameter is often, but not necessarily, time, and the point describes acurve, called aparametric curve. In the case of two parameters, the point describes asurface, called aparametric surface. In all cases, the equations are collectively called aparametric representation,[2]orparametric system,[3]orparameterization(also spelledparametrization,parametrisation) of the object.[1][4][5] For example, the equationsx=cos⁡ty=sin⁡t{\displaystyle {\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}}form a parametric representation of theunit circle, wheretis the parameter: A point(x,y)is on the unit circleif and only ifthere is a value oftsuch that these two equations generate that point. Sometimes the parametric equations for the individualscalaroutput variables are combined into a single parametric equation invectors: (x,y)=(cos⁡t,sin⁡t).{\displaystyle (x,y)=(\cos t,\sin t).} Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations.[1] In addition to curves and surfaces, parametric equations can describemanifoldsandalgebraic varietiesof higherdimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension isoneandoneparameter is used, for surfaces dimensiontwoandtwoparameters, etc.). Parametric equations are commonly used inkinematics, where thetrajectoryof an object is represented by equations depending on time as the parameter. 
Because of this application, a single parameter is often labeled t; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.[6] Converting a set of parametric equations to a single implicit equation involves eliminating the variable t from the simultaneous equations x=f(t),y=g(t).{\displaystyle x=f(t),\ y=g(t).}This process is called implicitization. If one of these equations can be solved for t, the expression obtained can be substituted into the other equation to obtain an equation involving x and y only: Solving y=g(t){\displaystyle y=g(t)}to obtain t=g−1(y){\displaystyle t=g^{-1}(y)}and using this in x=f(t){\displaystyle x=f(t)}gives the explicit equation x=f(g−1(y)),{\displaystyle x=f(g^{-1}(y)),}while more complicated cases will give an implicit equation of the form h(x,y)=0.{\displaystyle h(x,y)=0.} If the parametrization is given by rational functions x=p(t)r(t),y=q(t)r(t),{\displaystyle x={\frac {p(t)}{r(t)}},\qquad y={\frac {q(t)}{r(t)}},} where p, q, and r are set-wise coprime polynomials, a resultant computation allows one to implicitize. More precisely, the implicit equation is the resultant with respect to t of xr(t) – p(t) and yr(t) – q(t). In higher dimensions (either more than two coordinates or more than one parameter), the implicitization of rational parametric equations may be done with Gröbner basis computation; see Gröbner basis § Implicitization in higher dimension. To take the example of the circle of radius a, the parametric equations x=acos⁡(t)y=asin⁡(t){\displaystyle {\begin{aligned}x&=a\cos(t)\\y&=a\sin(t)\end{aligned}}} can be implicitized in terms of x and y by way of the Pythagorean trigonometric identity.
With xa=cos⁡(t)ya=sin⁡(t){\displaystyle {\begin{aligned}{\frac {x}{a}}&=\cos(t)\\{\frac {y}{a}}&=\sin(t)\\\end{aligned}}}andcos⁡(t)2+sin⁡(t)2=1,{\displaystyle \cos(t)^{2}+\sin(t)^{2}=1,}we get(xa)2+(ya)2=1,{\displaystyle \left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{a}}\right)^{2}=1,}and thusx2+y2=a2,{\displaystyle x^{2}+y^{2}=a^{2},} which is the standard equation of a circle centered at the origin. The simplest equation for aparabola,y=x2{\displaystyle y=x^{2}} can be (trivially) parameterized by using a free parametert, and settingx=t,y=t2for−∞<t<∞.{\displaystyle x=t,y=t^{2}\quad \mathrm {for} -\infty <t<\infty .} More generally, any curve given by an explicit equationy=f(x){\displaystyle y=f(x)} can be (trivially) parameterized by using a free parametert, and settingx=t,y=f(t)for−∞<t<∞.{\displaystyle x=t,y=f(t)\quad \mathrm {for} -\infty <t<\infty .} A more sophisticated example is the following. Consider the unit circle which is described by the ordinary (Cartesian) equationx2+y2=1.{\displaystyle x^{2}+y^{2}=1.} This equation can be parameterized as follows:(x,y)=(cos⁡(t),sin⁡(t))for0≤t<2π.{\displaystyle (x,y)=(\cos(t),\;\sin(t))\quad \mathrm {for} \ 0\leq t<2\pi .} With the Cartesian equation it is easier to check whether a point lies on the circle or not. With the parametric version it is easier to obtain points on a plot. In some contexts, parametric equations involving onlyrational functions(that is fractions of twopolynomials) are preferred, if they exist. In the case of the circle, such arational parameterizationisx=1−t21+t2y=2t1+t2.{\displaystyle {\begin{aligned}x&={\frac {1-t^{2}}{1+t^{2}}}\\y&={\frac {2t}{1+t^{2}}}\,.\end{aligned}}} With this pair of parametric equations, the point(−1, 0)is not represented by arealvalue oft, but by thelimitofxandywhenttends toinfinity. 
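Both circle parametrizations above can be verified numerically (a NumPy sketch; the radius is an arbitrary example value):

```python
import numpy as np

a = 2.0
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# Trigonometric parametrization satisfies the implicit equation x^2 + y^2 = a^2
x, y = a * np.cos(t), a * np.sin(t)
assert np.allclose(x**2 + y**2, a**2)

# Rational parametrization of the unit circle: every finite u gives a point on
# the circle, while (-1, 0) is only reached in the limit u -> infinity
u = np.linspace(-50, 50, 1001)
xr, yr = (1 - u**2) / (1 + u**2), 2 * u / (1 + u**2)
assert np.allclose(xr**2 + yr**2, 1.0)
assert xr.min() > -1.0        # the point (-1, 0) itself is never attained
```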
Anellipsein canonical position (center at origin, major axis along thex-axis) with semi-axesaandbcan be represented parametrically asx=acos⁡ty=bsin⁡t.{\displaystyle {\begin{aligned}x&=a\,\cos t\\y&=b\,\sin t\,.\end{aligned}}} An ellipse in general position can be expressed asx=Xc+acos⁡tcos⁡φ−bsin⁡tsin⁡φy=Yc+acos⁡tsin⁡φ+bsin⁡tcos⁡φ{\displaystyle {\begin{alignedat}{4}x={}&&X_{\mathrm {c} }&+a\,\cos t\,\cos \varphi {}&&-b\,\sin t\,\sin \varphi \\y={}&&Y_{\mathrm {c} }&+a\,\cos t\,\sin \varphi {}&&+b\,\sin t\,\cos \varphi \end{alignedat}}} as the parametertvaries from0to2π. Here(Xc,Yc)is the center of the ellipse, andφis the angle between thex-axis and the major axis of the ellipse. Both parameterizations may be maderationalby using thetangent half-angle formulaand settingtan⁡t2=u.{\textstyle \tan {\frac {t}{2}}=u\,.} ALissajous curveis similar to an ellipse, but thexandysinusoidsare not in phase. In canonical position, a Lissajous curve is given byx=acos⁡(kxt)y=bsin⁡(kyt){\displaystyle {\begin{aligned}x&=a\,\cos(k_{x}t)\\y&=b\,\sin(k_{y}t)\end{aligned}}}wherekxandkyare constants describing the number of lobes of the figure. An east-west openinghyperbolacan be represented parametrically by x=asec⁡t+hy=btan⁡t+k,{\displaystyle {\begin{aligned}x&=a\sec t+h\\y&=b\tan t+k\,,\end{aligned}}} or,rationally x=a1+t21−t2+hy=b2t1−t2+k.{\displaystyle {\begin{aligned}x&=a{\frac {1+t^{2}}{1-t^{2}}}+h\\y&=b{\frac {2t}{1-t^{2}}}+k\,.\end{aligned}}} A north-south opening hyperbola can be represented parametrically as x=btan⁡t+hy=asec⁡t+k,{\displaystyle {\begin{aligned}x&=b\tan t+h\\y&=a\sec t+k\,,\end{aligned}}} or, rationally x=b2t1−t2+hy=a1+t21−t2+k.{\displaystyle {\begin{aligned}x&=b{\frac {2t}{1-t^{2}}}+h\\y&=a{\frac {1+t^{2}}{1-t^{2}}}+k\,.\end{aligned}}} In all these formulae(h,k)are the center coordinates of the hyperbola,ais the length of the semi-major axis, andbis the length of the semi-minor axis. 
Note that in the rational forms of these formulae, the points(−a, 0)and(0 ,−a), respectively, are not represented by a real value oft, but are the limit ofxandyasttends to infinity. Ahypotrochoidis a curve traced by a point attached to a circle of radiusrrolling around the inside of a fixed circle of radiusR, where the point is at a distancedfrom the center of the interior circle. The parametric equations for the hypotrochoids are: x(θ)=(R−r)cos⁡θ+dcos⁡(R−rrθ)y(θ)=(R−r)sin⁡θ−dsin⁡(R−rrθ).{\displaystyle {\begin{aligned}x(\theta )&=(R-r)\cos \theta +d\cos \left({R-r \over r}\theta \right)\\y(\theta )&=(R-r)\sin \theta -d\sin \left({R-r \over r}\theta \right)\,.\end{aligned}}} Some examples: Parametric equations are convenient for describingcurvesin higher-dimensional spaces. For example: x=acos⁡(t)y=asin⁡(t)z=bt{\displaystyle {\begin{aligned}x&=a\cos(t)\\y&=a\sin(t)\\z&=bt\,\end{aligned}}} describes a three-dimensional curve, thehelix, with a radius ofaand rising by2πbunits per turn. The equations are identical in theplaneto those for a circle. Such expressions as the one above are commonly written as r(t)=(x(t),y(t),z(t))=(acos⁡(t),asin⁡(t),bt),{\displaystyle {\begin{aligned}\mathbf {r} (t)&=(x(t),y(t),z(t))\\&=(a\cos(t),a\sin(t),bt)\,,\end{aligned}}} whereris a three-dimensional vector. Atoruswith major radiusRand minor radiusrmay be defined parametrically as x=cos⁡(t)(R+rcos⁡(u)),y=sin⁡(t)(R+rcos⁡(u)),z=rsin⁡(u).{\displaystyle {\begin{aligned}x&=\cos(t)\left(R+r\cos(u)\right),\\y&=\sin(t)\left(R+r\cos(u)\right),\\z&=r\sin(u)\,.\end{aligned}}} where the two parameterstanduboth vary between0and2π. Asuvaries from0to2πthe point on the surface moves about a short circle passing through the hole in the torus. Astvaries from0to2πthe point on the surface moves about a long circle around the hole in the torus. 
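As a quick check of the torus parametrization, every generated point satisfies the torus's implicit equation (√(x² + y²) − R)² + z² = r² (radii below are example values with R > r):

```python
import numpy as np

R, r = 3.0, 1.0        # major and minor radii (arbitrary example values)
t, u = np.meshgrid(np.linspace(0, 2 * np.pi, 50),
                   np.linspace(0, 2 * np.pi, 50))

x = np.cos(t) * (R + r * np.cos(u))
y = np.sin(t) * (R + r * np.cos(u))
z = r * np.sin(u)

# Distance from the z-axis is R + r*cos(u), so the implicit equation holds
assert np.allclose((np.sqrt(x**2 + y**2) - R) ** 2 + z**2, r**2)
```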
The parametric equation of the line through the point(x0,y0,z0){\displaystyle \left(x_{0},y_{0},z_{0}\right)}and parallel to the vectorai^+bj^+ck^{\displaystyle a{\hat {\mathbf {i} }}+b{\hat {\mathbf {j} }}+c{\hat {\mathbf {k} }}}is[7] x=x0+aty=y0+btz=z0+ct{\displaystyle {\begin{aligned}x&=x_{0}+at\\y&=y_{0}+bt\\z&=z_{0}+ct\end{aligned}}} Inkinematics, objects' paths through space are commonly described as parametric curves, with each spatial coordinate depending explicitly on an independent parameter (usually time). Used in this way, the set of parametric equations for the object's coordinates collectively constitute avector-valued functionfor position. Such parametric curves can then beintegratedanddifferentiatedtermwise. Thus, if a particle's position is described parametrically asr(t)=(x(t),y(t),z(t)),{\displaystyle \mathbf {r} (t)=(x(t),y(t),z(t))\,,} then itsvelocitycan be found asv(t)=r′(t)=(x′(t),y′(t),z′(t)),{\displaystyle {\begin{aligned}\mathbf {v} (t)&=\mathbf {r} '(t)\\&=(x'(t),y'(t),z'(t))\,,\end{aligned}}} and itsaccelerationasa(t)=v′(t)=r″(t)=(x″(t),y″(t),z″(t)).{\displaystyle {\begin{aligned}\mathbf {a} (t)&=\mathbf {v} '(t)=\mathbf {r} ''(t)\\&=(x''(t),y''(t),z''(t))\,.\end{aligned}}} Another important use of parametric equations is in the field ofcomputer-aided design(CAD).[8]For example, consider the following three representations, all of which are commonly used to describeplanar curves. Each representation has advantages and drawbacks for CAD applications. The explicit representation may be very complicated, or even may not exist. Moreover, it does not behave well undergeometric transformations, and in particular underrotations. On the other hand, as a parametric equation and an implicit equation may easily be deduced from an explicit representation, when a simple explicit representation exists, it has the advantages of both other representations. 
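Termwise differentiation of a parametric path can be illustrated on the helix: a numerical derivative of r(t) matches the analytic velocity (−a sin t, a cos t, b). The radius and pitch below are arbitrary example values:

```python
import numpy as np

a, b = 2.0, 0.5                       # helix radius and pitch parameter (example)
t = np.linspace(0, 4 * np.pi, 100001)
r = np.stack([a * np.cos(t), a * np.sin(t), b * t])      # position r(t)

# Differentiate each coordinate numerically and compare with the analytic velocity
v_numeric = np.gradient(r, t, axis=1)
v_analytic = np.stack([-a * np.sin(t), a * np.cos(t), np.full_like(t, b)])
assert np.allclose(v_numeric, v_analytic, atol=1e-3)
```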
Implicit representations may make it difficult to generate points on the curve, and even to decide whether there are real points. On the other hand, they are well suited for deciding whether a given point is on a curve, or whether it is inside or outside of a closed curve. Such decisions may be difficult with a parametric representation, but parametric representations are best suited for generating points on a curve, and for plotting it.[9] Numerous problems ininteger geometrycan be solved using parametric equations. A classical such solution isEuclid's parametrization ofright trianglessuch that the lengths of their sidesa,band their hypotenusecarecoprime integers. Asaandbare not both even (otherwisea,bandcwould not be coprime), one may exchange them to haveaeven, and the parameterization is then a=2mnb=m2−n2c=m2+n2,{\displaystyle {\begin{aligned}a&=2mn\\b&=m^{2}-n^{2}\\c&=m^{2}+n^{2}\,,\end{aligned}}} where the parametersmandnare positive coprime integers that are not both odd. By multiplyinga,bandcby an arbitrary positive integer, one gets a parametrization of all right triangles whose three sides have integer lengths. Asystem ofmlinear equationsinnunknowns isunderdeterminedif it has more than one solution. This occurs when thematrixof the system and itsaugmented matrixhave the samerankrandr<n. In this case, one can selectn−runknowns as parameters and represent all solutions as a parametric equation where all unknowns are expressed aslinear combinationsof the selected ones. 
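Euclid's parametrization can be exercised directly; each admissible pair (m, n) — coprime, not both odd — yields a primitive Pythagorean triple:

```python
from math import gcd

# Generate primitive Pythagorean triples from Euclid's parametrization
triples = []
for m in range(2, 8):
    for n in range(1, m):
        if gcd(m, n) == 1 and (m - n) % 2 == 1:   # coprime, opposite parity
            a, b, c = 2 * m * n, m * m - n * n, m * m + n * n
            triples.append((a, b, c))

for a, b, c in triples:
    assert a * a + b * b == c * c        # a right triangle
    assert gcd(gcd(a, b), c) == 1        # with coprime integer sides
assert (4, 3, 5) in triples              # m = 2, n = 1
```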
That is, if the unknowns arex1,…,xn,{\displaystyle x_{1},\ldots ,x_{n},}one can reorder them for expressing the solutions as[10] x1=β1+∑j=r+1nα1,jxj⋮xr=βr+∑j=r+1nαr,jxjxr+1=xr+1⋮xn=xn.{\displaystyle {\begin{aligned}x_{1}&=\beta _{1}+\sum _{j=r+1}^{n}\alpha _{1,j}x_{j}\\\vdots \\x_{r}&=\beta _{r}+\sum _{j=r+1}^{n}\alpha _{r,j}x_{j}\\x_{r+1}&=x_{r+1}\\\vdots \\x_{n}&=x_{n}.\end{aligned}}} Such a parametric equation is called aparametric formof the solution of the system.[10] The standard method for computing a parametric form of the solution is to useGaussian eliminationfor computing areduced row echelon formof the augmented matrix. Then the unknowns that can be used as parameters are the ones that correspond to columns not containing anyleading entry(that is the left most non zero entry in a row or the matrix), and the parametric form can be straightforwardly deduced.[10]
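The parametric form of an underdetermined system can also be computed numerically. The sketch below (an invented 2×3 system of rank 2, so one free parameter) uses an SVD null-space basis in place of hand row reduction; every point x_p + t·null solves the system:

```python
import numpy as np

# Underdetermined system: 2 equations, 3 unknowns (rank 2, so n - r = 1 parameter)
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, -1.0]])
b = np.array([6.0, 0.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
_, sv, Vt = np.linalg.svd(A)
null = Vt[-1]                                # null-space basis vector (for sv = 0)

# Parametric form of the solution set: x(t) = x_p + t * null
for t in np.linspace(-5, 5, 11):
    assert np.allclose(A @ (x_p + t * null), b)
```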
https://en.wikipedia.org/wiki/Parametric_equation
In statistical analysis, the rule of three states that if a certain event did not occur in a sample with n subjects, the interval from 0 to 3/n is a 95% confidence interval for the rate of occurrences in the population. When n is greater than 30, this is a good approximation of results from more sensitive tests. For example, a pain-relief drug is tested on 1500 human subjects, and no adverse event is recorded. From the rule of three, it can be concluded with 95% confidence that fewer than 1 person in 500 (or 3/1500) will experience an adverse event. By symmetry, in the case of only successes, the 95% confidence interval is [1 − 3/n, 1]. The rule is useful in the interpretation of clinical trials generally, particularly in phase II and phase III trials, where there are often limitations in duration or statistical power. The rule of three applies well beyond medical research, to any trial done n times. If 300 parachutes are randomly tested and all open successfully, then it is concluded with 95% confidence that fewer than 1 in 100 parachutes with the same characteristics (3/300) will fail.[1] A 95% confidence interval is sought for the probability p of an event occurring for any randomly selected single individual in a population, given that it has not been observed to occur in n Bernoulli trials. Denoting the number of events by X, we therefore wish to find the values of the parameter p of a binomial distribution that give Pr(X = 0) ≤ 0.05. The rule can then be derived[2] either from the Poisson approximation to the binomial distribution, or from the formula (1 − p)^n for the probability of zero events in the binomial distribution. In the latter case, the edge of the confidence interval is given by Pr(X = 0) = 0.05, and hence (1 − p)^n = 0.05, so n ln(1 − p) = ln 0.05 ≈ −2.996. Rounding the latter to −3 and using the approximation, for p close to 0, that ln(1 − p) ≈ −p (Taylor's formula), we obtain the interval's boundary 3/n.
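The derivation can be checked numerically: the exact 95% bound 1 − 0.05^(1/n) is always slightly below 3/n and very close to it for moderate n (sample sizes below are example values):

```python
# Exact 95% upper bound p satisfying (1 - p)^n = 0.05, versus the 3/n rule
for n in [30, 100, 1500]:
    p_exact = 1 - 0.05 ** (1 / n)
    p_rule = 3 / n
    assert p_exact < p_rule                        # the rule errs on the safe side
    assert (p_rule - p_exact) / p_rule < 0.06      # within a few percent even at n = 30

# The drug example: with 0 events in 1500 subjects, the bound is below 1 in 500
assert 1 - 0.05 ** (1 / 1500) < 1 / 500
```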
By a similar argument, the numerator values of 3.51, 4.61, and 5.3 may be used for the 97%, 99%, and 99.5% confidence intervals, respectively, and in general the upper end of the confidence interval can be given as−ln⁡(α)n{\displaystyle {\frac {-\ln(\alpha )}{n}}}, where1−α{\displaystyle 1-\alpha }is the desired confidence level. TheVysochanskij–Petunin inequalityshows that the rule of three holds forunimodaldistributions with finitevariancebeyond just the binomial distribution, and gives a way to change the factor 3 if a different confidence is desired[citation needed].Chebyshev's inequalityremoves the assumption of unimodality at the price of a higher multiplier (about 4.5 for 95% confidence)[citation needed].Cantelli's inequalityis the one-tailed version of Chebyshev's inequality. A century and a half ago Charles Darwin said he had "no Faith in anything short of actual measurement and theRule of Three," by which he appeared to mean the peak of arithmetical accomplishment in a nineteenth-century gentleman, solving forxin "6 is to 3 as 9 is tox." Some decades later, in the early 1900s, Karl Pearson shifted the meaning of the rule of three – "take 3σ [three standard deviations] as definitely significant" – and claimed it for his new journal of significance testing,Biometrika. Even Darwin late in life seems to have fallen into the confusion. (Ziliak and McCloskey, 2008, p. 26; parenthetic gloss in original)
https://en.wikipedia.org/wiki/Rule_of_three_(statistics)
In control theory, a state observer, state estimator, or Luenberger observer is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications. Knowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using the state observer. Linear, delayed, sliding mode, high gain, Tau, homogeneity-based, extended and cubic observers are among several observer structures used for state estimation of linear and nonlinear systems. A linear observer structure is described in the following sections. The state of a linear, time-invariant discrete-time system is assumed to satisfy where, at time k{\displaystyle k}, x(k){\displaystyle x(k)}is the plant's state, u(k){\displaystyle u(k)}is its input vector, and y(k){\displaystyle y(k)}is its output vector. These equations simply say that the plant's current outputs and its future state are both determined solely by its current state and the current inputs. (Although these equations are expressed in terms of discrete time steps, very similar equations hold for continuous systems.) If this system is observable, then the output of the plant, y(k){\displaystyle y(k)}, can be used to steer the state of the state observer. The observer model of the physical system is then typically derived from the above equations.
Additional terms may be included in order to ensure that, on receiving successive measured values of the plant's inputs and outputs, the model's state converges to that of the plant. In particular, the output of the observer may be subtracted from the output of the plant and then multiplied by a matrixL{\displaystyle L}; this is then added to the equations for the state of the observer to produce a so-calledLuenbergerobserver, defined by the equations below. Note that the variables of a state observer are commonly denoted by a "hat":x^(k){\displaystyle {\hat {x}}(k)}andy^(k){\displaystyle {\hat {y}}(k)}to distinguish them from the variables of the equations satisfied by the physical system. The observer is called asymptotically stable if the observer errore(k)=x^(k)−x(k){\displaystyle e(k)={\hat {x}}(k)-x(k)}converges to zero whenk→∞{\displaystyle k\to \infty }. For a Luenberger observer, the observer error satisfiese(k+1)=(A−LC)e(k){\displaystyle e(k+1)=(A-LC)e(k)}. The Luenberger observer for this discrete-time system is therefore asymptotically stable when the matrixA−LC{\displaystyle A-LC}has all the eigenvalues inside the unit circle. For control purposes the output of the observer system is fed back to the input of both the observer and the plant through the gains matrixK{\displaystyle K}. The observer equations then become: or, more simply, Due to theseparation principlewe know that we can chooseK{\displaystyle K}andL{\displaystyle L}independently without harm to the overall stability of the systems. As a rule of thumb, the poles of the observerA−LC{\displaystyle A-LC}are usually chosen to converge 10 times faster than the poles of the systemA−BK{\displaystyle A-BK}. The previous example was for an observer implemented in a discrete-time LTI system. 
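As a sketch of these ideas (with hypothetical matrices), the gain L below is hand-placed so that both eigenvalues of A − LC sit at zero, a so-called deadbeat observer; the error e(k+1) = (A − LC)e(k) then vanishes in at most two steps:

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

# Hand-placed "deadbeat" gain: both eigenvalues of A - L C at zero,
# well inside the unit circle, so the observer is asymptotically stable.
L = np.array([[1.9], [8.1]])
spec_radius = float(max(abs(np.linalg.eigvals(A - L @ C))))

x = np.array([[1.0], [-0.5]])   # true state (unknown to the observer)
xh = np.zeros((2, 1))           # observer starts from zero
u = np.array([[0.0]])
for k in range(3):
    y = C @ x                                   # measured plant output
    xh = A @ xh + B @ u + L @ (y - C @ xh)      # Luenberger update
    x = A @ x + B @ u                           # plant update
err = float(np.linalg.norm(x - xh))
```

After two steps the estimation error is zero up to floating-point noise, regardless of the initial mismatch.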
However, the process is similar for the continuous-time case; the observer gainsL{\displaystyle L}are chosen to make the continuous-time error dynamics converge to zero asymptotically (i.e., whenA−LC{\displaystyle A-LC}is aHurwitz matrix). For a continuous-time linear system wherex∈Rn,u∈Rm,y∈Rr{\displaystyle x\in \mathbb {R} ^{n},u\in \mathbb {R} ^{m},y\in \mathbb {R} ^{r}}, the observer looks similar to the discrete-time case described above: The observer errore=x−x^{\displaystyle e=x-{\hat {x}}}satisfies the equation The eigenvalues of the matrixA−LC{\displaystyle A-LC}can be chosen arbitrarily by appropriate choice of the observer gainL{\displaystyle L}when the pair[A,C]{\displaystyle [A,C]}is observable, i.e. theobservabilitycondition holds. In particular, it can be made Hurwitz, so the observer errore(t)→0{\displaystyle e(t)\to 0}whent→∞{\displaystyle t\to \infty }. When the observer gainL{\displaystyle L}is high, the linear Luenberger observer converges to the system states very quickly. However, high observer gain leads to a peaking phenomenon in which the initial estimator error can be prohibitively large (i.e., impractical or unsafe to use).[1]As a consequence, nonlinear high-gain observer methods are available that converge quickly without the peaking phenomenon. For example,sliding mode controlcan be used to design an observer that brings one estimated state's error to zero in finite time even in the presence of measurement error; the other states have error that behaves similarly to the error in a Luenberger observer after peaking has subsided. Sliding mode observers also have attractive noise resilience properties that are similar to aKalman filter.[2][3]Another approach is to apply a multi-observer, which significantly improves transients and reduces observer overshoot. The multi-observer can be adapted to every system where a high-gain observer is applicable.[4] High gain, sliding mode and extended observers are the most common observers for nonlinear systems.
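A minimal numerical sketch of the continuous-time error dynamics, using illustrative matrices and crude forward-Euler integration:

```python
import numpy as np

# Illustrative continuous-time system: the error obeys de/dt = (A - L C) e,
# and e(t) -> 0 whenever A - L C is Hurwitz.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[4.0], [5.0]])   # assumed gain, not from any design recipe

M = A - L @ C                  # error-dynamics matrix
hurwitz = bool(all(ev.real < 0 for ev in np.linalg.eigvals(M)))

dt = 1e-3
e = np.array([[1.0], [1.0]])   # initial estimation error
for _ in range(10000):         # integrate 10 seconds
    e = e + dt * (M @ e)
final_err = float(np.linalg.norm(e))
```

Here the eigenvalues of A − LC have real part −3.5, so the error decays roughly like e^(−3.5t) and is negligible after 10 s.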
To illustrate the application of sliding mode observers for nonlinear systems, first consider the no-input non-linear system: wherex∈Rn{\displaystyle x\in \mathbb {R} ^{n}}. Also assume that there is a measurable outputy∈R{\displaystyle y\in \mathbb {R} }given by There are several non-approximate approaches for designing an observer. The two observers given below also apply to the case when the system has an input. That is, One suggestion by Krener and Isidori[5]and Krener and Respondek[6]can be applied in a situation when there exists a linearizing transformation (i.e., adiffeomorphism, like the one used infeedback linearization)z=Φ(x){\displaystyle z=\Phi (x)}such that in new variables the system equations read The Luenberger observer is then designed as The observer error for the transformed variablee=z^−z{\displaystyle e={\hat {z}}-z}satisfies the same equation as in classical linear case. As shown by Gauthier, Hammouri, and Othman[7]and Hammouri and Kinnaert,[8]if there exists transformationz=Φ(x){\displaystyle z=\Phi (x)}such that the system can be transformed into the form then the observer is designed as whereL(t){\displaystyle L(t)}is a time-varying observer gain. Ciccarella, Dalla Mora, and Germani[9]obtained more advanced and general results, removing the need for a nonlinear transform and proving global asymptotic convergence of the estimated state to the true state using only simple assumptions on regularity. As discussed for the linear case above, the peaking phenomenon present in Luenberger observers justifies the use of switched observers. A switched observer encompasses a relay or binary switch that acts upon detecting minute changes in the measured output. 
Some common types of switched observers include the sliding mode observer, nonlinear extended state observer,[10]fixed time observer,[11]switched high gain observer[12]and uniting observer.[13]Thesliding mode observeruses non-linear high-gain feedback to drive estimated states to ahypersurfacewhere there is no difference between the estimated output and the measured output. The non-linear gain used in the observer is typically implemented with a scaled switching function, like thesignum(i.e., sgn) of the estimated – measured output error. Hence, due to this high-gain feedback, the vector field of the observer has a crease in it so that observer trajectoriesslide alonga curve where the estimated output matches the measured output exactly. So, if the system isobservablefrom its output, the observer states will all be driven to the actual system states. Additionally, by using the sign of the error to drive the sliding mode observer, the observer trajectories become insensitive to many forms of noise. Hence, some sliding mode observers have attractive properties similar to theKalman filterbut with simpler implementation.[2][3] As suggested by Drakunov,[14]asliding mode observercan also be designed for a class of non-linear systems. Such an observer can be written in terms of original variable estimatex^{\displaystyle {\hat {x}}}and has the form where: The idea can be briefly explained as follows. According to the theory of sliding modes, in order to describe the system behavior, once sliding mode starts, the functionsgn⁡(vi(t)−hi(x^(t))){\displaystyle \operatorname {sgn}(v_{i}(t)\!-\!h_{i}({\hat {x}}(t)))}should be replaced by equivalent values (seeequivalent controlin the theory ofsliding modes). In practice, it switches (chatters) with high frequency with slow component being equal to the equivalent value. 
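The signum injection can be sketched for an assumed second-order system (the plant dynamics, gains and step size below are all illustrative, with crude forward-Euler integration):

```python
# Sliding-mode observer sketch for the assumed system
#   x1' = x2,  x2' = -2 x1 - x2,  y = x1,
# using a scaled switching function of the output error.

def sgn(s):
    return 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)

dt, m1, m2 = 1e-4, 2.0, 2.0   # step size and switching gains (illustrative)
x1, x2 = 1.0, 0.5             # true state (unknown to the observer)
xh1, xh2 = 0.0, 0.0           # observer state
for _ in range(100000):       # 10 s of simulated time
    e = x1 - xh1                                   # measured-output error y - yhat
    xh1_new = xh1 + dt * (xh2 + m1 * sgn(e))       # copy of dynamics + injection
    xh2 += dt * (-2.0 * xh1 - xh2 + m2 * sgn(e))
    xh1 = xh1_new
    x1_new = x1 + dt * x2                          # true plant update
    x2 += dt * (-2.0 * x1 - x2)
    x1 = x1_new
```

Once the trajectory reaches the sliding surface (estimated output equal to measured output), the remaining state error decays and both estimates track the true states up to a small chattering residue of order m·dt.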
Applying an appropriate lowpass filter to remove the high-frequency component, one can obtain the value of the equivalent control, which contains more information about the state of the estimated system. The observer described above uses this method several times to obtain the state of the nonlinear system ideally in finite time. The modified observation error can be written in the transformed statese=H(x)−H(x^){\displaystyle e=H(x)-H({\hat {x}})}. In particular, and so So: So, for sufficiently largemi{\displaystyle m_{i}}gains, all observer estimated states reach the actual states in finite time. In fact, increasingmi{\displaystyle m_{i}}allows for convergence in any desired finite time so long as each|hi(x(0))|{\displaystyle |h_{i}(x(0))|}function can be bounded with certainty. Hence, the requirement that the mapH:Rn→Rn{\displaystyle H:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}is adiffeomorphism(i.e., that itsJacobian linearizationis invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition. In the case of the sliding mode observer for the system with the input, additional conditions are needed for the observation error to be independent of the input. For example, that does not depend on time. The observer is then

Multi-observer extends the high-gain observer structure from a single to a multi-observer, with many models working simultaneously. This has two layers: the first consists of multiple high-gain observers with different estimation states, and the second determines the importance weights of the first-layer observers. The algorithm is simple to implement and does not contain any risky operations like differentiation.[4]The idea of multiple models was previously applied to obtain information in adaptive control.[15] Assume that the number of high-gain observers equalsn+1{\displaystyle n+1}, wherek=1,…,n+1{\displaystyle k=1,\dots ,n+1}is the observer index.
The first-layer observers share the same gainL{\displaystyle L}but differ in their initial statesxk(0){\displaystyle x_{k}(0)}. In the second layer, allxk(t){\displaystyle x_{k}(t)}fromk=1...n+1{\displaystyle k=1...n+1}observers are combined into one to obtain a single state-vector estimate, whereαk∈R{\displaystyle \alpha _{k}\in \mathbb {R} }are weight factors. These factors are adjusted to provide the estimation in the second layer and to improve the observation process. Assume that and whereξk∈Rn×1{\displaystyle \xi _{k}\in \mathbb {R} ^{n\times 1}}is some vector that depends on thekth{\displaystyle kth}observer errorek(t){\displaystyle e_{k}(t)}. Some transformation yields the linear regression problem This formula makes it possible to estimateαk(t){\displaystyle \alpha _{k}(t)}. To construct the manifold, we need a mappingm:Rn→Rn{\displaystyle m:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}satisfyingξk(t)=m(ek(t)){\displaystyle \xi _{k}(t)=m(e_{k}(t))}and the assurance thatξk(t){\displaystyle \xi _{k}(t)}is calculable relying on measurable signals. The first step is to eliminate the peaking phenomenon forαk(t){\displaystyle \alpha _{k}(t)}from the observer error Calculating then{\displaystyle n}th derivative ofηk(t)=y^k(t)−y(t){\displaystyle \eta _{k}(t)={\hat {y}}_{k}(t)-y(t)}to find the mappingmleads toξk(t){\displaystyle \xi _{k}(t)}defined as wheretd>0{\displaystyle t_{d}>0}is some time constant. Note thatξk(t){\displaystyle \xi _{k}(t)}relies on bothηk(t){\displaystyle \eta _{k}(t)}and its integrals, hence it is easily available in the control system. Furthermore,αk(t){\displaystyle \alpha _{k}(t)}is specified by the estimation law, which proves that the manifold is measurable. In the second layer,α^k(t){\displaystyle {\hat {\alpha }}_{k}(t)}fork=1…n+1{\displaystyle k=1\dots n+1}are introduced as estimates of theαk(t){\displaystyle \alpha _{k}(t)}coefficients.
The mapping error is specified as whereeξ(t)∈Rn×1,α^k(t)∈R{\displaystyle e_{\xi }(t)\in \mathbb {R} ^{n\times 1},{\hat {\alpha }}_{k}(t)\in \mathbb {R} }. If the coefficientsα^(t){\displaystyle {\hat {\alpha }}(t)}are equal toαk(t){\displaystyle \alpha _{k}(t)}, then the mapping erroreξ(t)=0{\displaystyle e_{\xi }(t)=0}. Now it is possible to calculatex^{\displaystyle {\hat {x}}}from the above equation, and hence the peaking phenomenon is reduced thanks to the properties of the manifold. The created mapping gives a lot of flexibility in the estimation process. It is even possible to estimate the value ofx(t){\displaystyle x(t)}in the second layer and to calculate the statex{\displaystyle x}.[4]

Bounding[16]or interval observers[17][18]constitute a class of observers that provide two estimations of the state simultaneously: one of the estimations provides an upper bound on the real value of the state, whereas the second one provides a lower bound. The real value of the state is then known to be always within these two estimations. These bounds are very important in practical applications,[19][20]as they make it possible to know at each time the precision of the estimation. Mathematically, two Luenberger observers can be used, ifL{\displaystyle L}is properly selected, using, for example,positive systemsproperties:[21]one for the upper boundx^U(k){\displaystyle {\hat {x}}_{U}(k)}(that ensures thate(k)=x^U(k)−x(k){\displaystyle e(k)={\hat {x}}_{U}(k)-x(k)}converges to zero from above whenk→∞{\displaystyle k\to \infty }, in the absence of noise anduncertainty), and a lower boundx^L(k){\displaystyle {\hat {x}}_{L}(k)}(that ensures thate(k)=x^L(k)−x(k){\displaystyle e(k)={\hat {x}}_{L}(k)-x(k)}converges to zero from below). That is, alwaysx^U(k)≥x(k)≥x^L(k){\displaystyle {\hat {x}}_{U}(k)\geq x(k)\geq {\hat {x}}_{L}(k)}.
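A toy sketch of the bracketing idea for a scalar positive system (the plant pole, observer gain and initial bounds are all illustrative; with full-state measurement the construction is deliberately trivial):

```python
# Scalar positive plant x(k+1) = a x(k), measured output y = x, with 0 < a < 1.
# Two observers with gain l such that a - l > 0: positivity of a - l preserves
# the ordering xL <= x <= xU at every step, so the two runs bracket the state.
a, l = 0.9, 0.5
x, xU, xL = 1.0, 2.0, 0.5     # true state between the initial bounds
bracketed = True
for k in range(50):
    y = x
    xU = a * xU + l * (y - xU)   # upper observer
    xL = a * xL + l * (y - xL)   # lower observer
    x = a * x                    # plant
    bracketed = bracketed and (xL <= x <= xU)
gap = xU - xL                    # shrinks like (a - l)**k
```

Each step multiplies the bound errors by a − l = 0.4 > 0, so the sandwich property holds throughout while the interval collapses onto the true state.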
https://en.wikipedia.org/wiki/State_estimator
Signal processingis anelectrical engineeringsubfield that focuses on analyzing, modifying and synthesizingsignals, such assound,images,potential fields,seismic signals,altimetry processing, andscientific measurements.[1]Signal processing techniques are used to optimize transmissions, improvedigital storageefficiency, correct distorted signals, improvesubjective video quality, and detect or pinpoint components of interest in a measured signal.[2] According toAlan V. OppenheimandRonald W. Schafer, the principles of signal processing can be found in the classicalnumerical analysistechniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digitalcontrol systemsof the 1940s and 1950s.[3] In 1948,Claude Shannonwrote the influential paper "A Mathematical Theory of Communication" which was published in theBell System Technical Journal.[4]The paper laid the groundwork for later development of information communication systems and the processing of signals for transmission.[5] Signal processing matured and flourished in the 1960s and 1970s, anddigital signal processingbecame widely used with specializeddigital signal processorchips in the 1980s.[5] A signal is afunctionx(t){\displaystyle x(t)}, where this function is either[6] Analog signal processing is for signals that have not been digitized, as in most 20th-centuryradio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance,passive filters,active filters,additive mixers,integrators, anddelay lines. Nonlinear circuits includecompandors, multipliers (frequency mixers,voltage-controlled amplifiers),voltage-controlled filters,voltage-controlled oscillators, andphase-locked loops. Continuous-time signalprocessing is for signals that vary with the change of continuous domain (without considering some individual interrupted points).
The methods of signal processing includetime domain,frequency domain, andcomplex frequency domain. This technology mainly discusses the modeling of alinear time-invariantcontinuous system, integral of the system's zero-state response, setting up system function and the continuous time filtering of deterministic signals. For example, in time domain, a continuous-time signalx(t){\displaystyle x(t)}passing through alinear time-invariantfilter/system denoted ash(t){\displaystyle h(t)}, can be expressed at the output as y(t)=∫−∞∞h(τ)x(t−τ)dτ{\displaystyle y(t)=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )\,d\tau } In some contexts,h(t){\displaystyle h(t)}is referred to as the impulse response of the system. The aboveconvolutionoperation is conducted between the input and the system. Discrete-time signalprocessing is for sampled signals, defined only at discrete points in time, and as such are quantized in time, but not in magnitude. Analog discrete-time signal processingis a technology based on electronic devices such assample and holdcircuits, analog time-divisionmultiplexers,analog delay linesandanalog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals.[7] The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without takingquantization errorinto consideration. Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purposecomputersor by digital circuits such asASICs,field-programmable gate arraysor specializeddigital signal processors. Typical arithmetical operations includefixed-pointandfloating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware arecircular buffersandlookup tables. 
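The convolution integral above can be approximated numerically as a Riemann sum; the sketch below uses an assumed first-order low-pass impulse response and a unit-step input:

```python
import numpy as np

# Discrete approximation of y(t) = integral of h(tau) x(t - tau) dtau.
dt = 0.01
t = np.arange(0, 1, dt)
h = np.exp(-5 * t)                      # assumed impulse response h(t) = e^(-5t)
x = np.ones_like(t)                     # unit-step input
y = np.convolve(x, h)[:len(t)] * dt     # Riemann sum via discrete convolution
final = float(y[-1])                    # step response approaches the DC gain ~ 1/5
```

The step response settles near the system's DC gain, the integral of h, which is (1 − e⁻⁵)/5 ≈ 0.2 over this window.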
Examples of algorithms are thefast Fourier transform(FFT),finite impulse response(FIR) filter,Infinite impulse response(IIR) filter, andadaptive filterssuch as theWienerandKalman filters. Nonlinear signal processing involves the analysis and processing of signals produced fromnonlinear systemsand can be in the time,frequency, or spatiotemporal domains.[8][9]Nonlinear systems can produce highly complex behaviors includingbifurcations,chaos,harmonics, andsubharmonicswhich cannot be produced or analyzed using linear methods. Polynomial signal processing is a type of non-linear signal processing, wherepolynomialsystems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case.[10] Statistical signal processingis an approach which treats signals asstochastic processes, utilizing theirstatisticalproperties to perform signal processing tasks.[11]Statistical techniques are widely used in signal processing applications. For example, one can model theprobability distributionof noise incurred when photographing an image, and construct techniques based on this model toreduce the noisein the resulting image. Graph signal processinggeneralizes signal processing tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph.[12]Graph signal processing presents several key topics, such as signal sampling techniques,[13]recovery techniques[14]and time-varying techniques.[15]Graph signal processing has been applied with success in the fields of image processing, computer vision[16][17][18]and sound anomaly detection.[19] In communication systems, signal processing may occur at:[citation needed]
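As a small illustration of the FFT mentioned above, the spectrum of a sampled sinusoid locates its frequency (the sample rate and tone below are arbitrary choices):

```python
import numpy as np

fs, f0 = 128, 8                       # sample rate (Hz) and tone (Hz), illustrative
t = np.arange(fs) / fs                # one second of samples
signal = np.sin(2 * np.pi * f0 * t)
spectrum = np.abs(np.fft.rfft(signal))
peak_hz = int(np.argmax(spectrum))    # with a 1 s window, bin index = frequency in Hz
```

Because the tone frequency is an integer number of cycles per window, all of its energy lands in a single FFT bin.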
https://en.wikipedia.org/wiki/Statistical_signal_processing
Instatistics,sufficiencyis a property of astatisticcomputed on asample datasetin relation to a parametric model of the dataset. A sufficient statistic contains all of the information that the dataset provides about the model parameters. It is closely related to the concepts of anancillary statisticwhich contains no information about the model parameters, and of acomplete statisticwhich only contains information about the parameters and no ancillary information. A related concept is that oflinear sufficiency, which is weaker thansufficiencybut can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators.[1]TheKolmogorov structure functiondeals with individual finite data; the related notion there is the algorithmic sufficient statistic. The concept is due toSir Ronald Fisherin 1920.[2]Stephen Stiglernoted in 1973 that the concept of sufficiency had fallen out of favor indescriptive statisticsbecause of the strong dependence on an assumption of the distributional form (seePitman–Koopman–Darmois theorembelow), but remained very important in theoretical work.[3] Roughly, given a setX{\displaystyle \mathbf {X} }ofindependent identically distributeddata conditioned on an unknown parameterθ{\displaystyle \theta }, a sufficient statistic is a functionT(X){\displaystyle T(\mathbf {X} )}whose value contains all the information needed to compute any estimate of the parameter (e.g. amaximum likelihoodestimate). Due to the factorization theorem (see below), for a sufficient statisticT(X){\displaystyle T(\mathbf {X} )}, the probability density can be written asfX(x;θ)=h(x)g(θ,T(x)){\displaystyle f_{\mathbf {X} }(x;\theta )=h(x)\,g(\theta ,T(x))}. From this factorization, it can easily be seen that the maximum likelihood estimate ofθ{\displaystyle \theta }will interact withX{\displaystyle \mathbf {X} }only throughT(X){\displaystyle T(\mathbf {X} )}. Typically, the sufficient statistic is a simple function of the data, e.g. 
the sum of all the data points. More generally, the "unknown parameter" may represent avectorof unknown quantities or may represent everything about the model that is unknown or not fully specified. In such a case, the sufficient statistic may be a set of functions, called ajointly sufficient statistic. Typically, there are as many functions as there are parameters. For example, for aGaussian distributionwith unknownmeanandvariance, the jointly sufficient statistic, from which maximum likelihood estimates of both parameters can be estimated, consists of two functions, the sum of all data points and the sum of all squared data points (or equivalently, thesample meanandsample variance). In other words,thejoint probability distributionof the data is conditionally independent of the parameter given the value of the sufficient statistic for the parameter. Both the statistic and the underlying parameter can be vectors. A statistict=T(X) issufficient for underlying parameterθprecisely if theconditional probability distributionof the dataX, given the statistict=T(X), does not depend on the parameterθ.[4] Alternatively, one can say the statisticT(X) is sufficient forθif, for all prior distributions onθ, themutual informationbetweenθandT(X)equals the mutual information betweenθandX.[5]In other words, thedata processing inequalitybecomes an equality: As an example, the sample mean is sufficient for the (unknown) meanμof anormal distributionwith known variance. Once the sample mean is known, no further information aboutμcan be obtained from the sample itself. On the other hand, for an arbitrary distribution themedianis not sufficient for the mean: even if the median of the sample is known, knowing the sample itself would provide further information about the population mean. 
For example, if the observations that are less than the median are only slightly less, but observations exceeding the median exceed it by a large amount, then this would have a bearing on one's inference about the population mean. Fisher'sfactorization theoremorfactorization criterionprovides a convenientcharacterizationof a sufficient statistic. If theprobability density functionis ƒθ(x), thenTis sufficient forθif and only ifnonnegative functionsgandhcan be found such that i.e., the density ƒ can be factored into a product such that one factor,h, does not depend onθand the other factor, which does depend onθ, depends onxonly throughT(x). A general proof of this was given by Halmos and Savage[6]and the theorem is sometimes referred to as the Halmos–Savage factorization theorem.[7]The proofs below handle special cases, but an alternative general proof along the same lines can be given.[8]In many simple cases the probability density function is fully specified byθ{\displaystyle \theta }andT(x){\displaystyle T(x)}, andh(x)=1{\displaystyle h(x)=1}(seeExamples). It is easy to see that ifF(t) is a one-to-one function andTis a sufficient statistic, thenF(T) is a sufficient statistic. In particular we can multiply a sufficient statistic by a nonzero constant and get another sufficient statistic. An implication of the theorem is that when using likelihood-based inference, two sets of data yielding the same value for the sufficient statisticT(X) will always yield the same inferences aboutθ. By the factorization criterion, the likelihood's dependence onθis only in conjunction withT(X). As this is the same in both cases, the dependence onθwill be the same as well, leading to identical inferences. Due to Hogg and Craig.[9]LetX1,X2,…,Xn{\displaystyle X_{1},X_{2},\ldots ,X_{n}}, denote a random sample from a distribution having thepdff(x,θ) forι<θ<δ. LetY1=u1(X1,X2, ...,Xn) be a statistic whose pdf isg1(y1;θ). 
What we want to prove is thatY1=u1(X1,X2, ...,Xn) is a sufficient statistic forθif and only if, for some functionH, First, suppose that We shall make the transformationyi=ui(x1,x2, ...,xn), fori= 1, ...,n, having inverse functionsxi=wi(y1,y2, ...,yn), fori= 1, ...,n, andJacobianJ=[wi/yj]{\displaystyle J=\left[w_{i}/y_{j}\right]}. Thus, The left-hand member is the joint pdfg(y1,y2, ...,yn; θ) ofY1=u1(X1, ...,Xn), ...,Yn=un(X1, ...,Xn). In the right-hand member,g1(y1;θ){\displaystyle g_{1}(y_{1};\theta )}is the pdf ofY1{\displaystyle Y_{1}}, so thatH[w1,…,wn]|J|{\displaystyle H[w_{1},\dots ,w_{n}]|J|}is the quotient ofg(y1,…,yn;θ){\displaystyle g(y_{1},\dots ,y_{n};\theta )}andg1(y1;θ){\displaystyle g_{1}(y_{1};\theta )}; that is, it is the conditional pdfh(y2,…,yn∣y1;θ){\displaystyle h(y_{2},\dots ,y_{n}\mid y_{1};\theta )}ofY2,…,Yn{\displaystyle Y_{2},\dots ,Y_{n}}givenY1=y1{\displaystyle Y_{1}=y_{1}}. ButH(x1,x2,…,xn){\displaystyle H(x_{1},x_{2},\dots ,x_{n})}, and thusH[w1(y1,…,yn),…,wn(y1,…,yn))]{\displaystyle H\left[w_{1}(y_{1},\dots ,y_{n}),\dots ,w_{n}(y_{1},\dots ,y_{n}))\right]}, was given not to depend uponθ{\displaystyle \theta }. Sinceθ{\displaystyle \theta }was not introduced in the transformation and accordingly not in the JacobianJ{\displaystyle J}, it follows thath(y2,…,yn∣y1;θ){\displaystyle h(y_{2},\dots ,y_{n}\mid y_{1};\theta )}does not depend uponθ{\displaystyle \theta }and thatY1{\displaystyle Y_{1}}is a sufficient statistic forθ{\displaystyle \theta }. The converse is proven by taking: whereh(y2,…,yn∣y1){\displaystyle h(y_{2},\dots ,y_{n}\mid y_{1})}does not depend uponθ{\displaystyle \theta }becauseY2...Yn{\displaystyle Y_{2}...Y_{n}}depend only uponX1...Xn{\displaystyle X_{1}...X_{n}}, which are independent ofΘ{\displaystyle \Theta }when conditioned onY1{\displaystyle Y_{1}}, a sufficient statistic by hypothesis.
Now divide both members by the absolute value of the non-vanishing JacobianJ{\displaystyle J}, and replacey1,…,yn{\displaystyle y_{1},\dots ,y_{n}}by the functionsu1(x1,…,xn),…,un(x1,…,xn){\displaystyle u_{1}(x_{1},\dots ,x_{n}),\dots ,u_{n}(x_{1},\dots ,x_{n})}inx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}. This yields whereJ∗{\displaystyle J^{*}}is the Jacobian withy1,…,yn{\displaystyle y_{1},\dots ,y_{n}}replaced by their value in termsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}. The left-hand member is necessarily the joint pdff(x1;θ)⋯f(xn;θ){\displaystyle f(x_{1};\theta )\cdots f(x_{n};\theta )}ofX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}. Sinceh(y2,…,yn∣y1){\displaystyle h(y_{2},\dots ,y_{n}\mid y_{1})}, and thush(u2,…,un∣u1){\displaystyle h(u_{2},\dots ,u_{n}\mid u_{1})}, does not depend uponθ{\displaystyle \theta }, then is a function that does not depend uponθ{\displaystyle \theta }. A simpler more illustrative proof is as follows, although it applies only in the discrete case. We use the shorthand notation to denote the joint probability density of(X,T(X)){\displaystyle (X,T(X))}byfθ(x,t){\displaystyle f_{\theta }(x,t)}. SinceT{\displaystyle T}is a deterministic function ofX{\displaystyle X}, we havefθ(x,t)=fθ(x){\displaystyle f_{\theta }(x,t)=f_{\theta }(x)}, as long ast=T(x){\displaystyle t=T(x)}and zero otherwise. Therefore: with the last equality being true by the definition of sufficient statistics. Thusfθ(x)=a(x)bθ(t){\displaystyle f_{\theta }(x)=a(x)b_{\theta }(t)}witha(x)=fX∣t(x){\displaystyle a(x)=f_{X\mid t}(x)}andbθ(t)=fθ(t){\displaystyle b_{\theta }(t)=f_{\theta }(t)}. Conversely, iffθ(x)=a(x)bθ(t){\displaystyle f_{\theta }(x)=a(x)b_{\theta }(t)}, we have With the first equality by thedefinition of pdf for multiple variables, the second by the remark above, the third by hypothesis, and the fourth because the summation is not overt{\displaystyle t}. 
LetfX∣t(x){\displaystyle f_{X\mid t}(x)}denote the conditional probability density ofX{\displaystyle X}givenT(X){\displaystyle T(X)}. Then we can derive an explicit expression for this: With the first equality by definition of conditional probability density, the second by the remark above, the third by the equality proven above, and the fourth by simplification. This expression does not depend onθ{\displaystyle \theta }and thusT{\displaystyle T}is a sufficient statistic.[10] A sufficient statistic isminimal sufficientif it can be represented as a function of any other sufficient statistic. In other words,S(X) isminimal sufficientif and only if[11] Intuitively, a minimal sufficient statisticmost efficientlycaptures all possible information about the parameterθ. A useful characterization of minimal sufficiency is that when the densityfθexists,S(X) isminimal sufficientif and only if[citation needed] This follows as a consequence fromFisher's factorization theoremstated above. A case in which there is no minimal sufficient statistic was shown by Bahadur, 1954.[12]However, under mild conditions, a minimal sufficient statistic does always exist. In particular, in Euclidean space, these conditions always hold if the random variables (associated withPθ{\displaystyle P_{\theta }}) are all discrete or are all continuous. If there exists a minimal sufficient statistic, and this is usually the case, then everycompletesufficient statistic is necessarily minimal sufficient[13](note that this statement does not exclude a pathological case in which a complete sufficient statistic exists while there is no minimal sufficient statistic). While it is hard to find cases in which a minimal sufficient statistic does not exist, it is not so hard to find cases in which there is no complete statistic.
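The defining property of sufficiency, that the conditional law of the data given T is free of the parameter, can be checked numerically for a Bernoulli sample (the sample x and the p values below are arbitrary):

```python
from math import comb

def cond_prob(x, p):
    """P(X = x | T = t) for i.i.d. Bernoulli(p): joint pmf over marginal pmf of T."""
    n, t = len(x), sum(x)
    joint = p ** t * (1 - p) ** (n - t)                     # P(X = x)
    marginal = comb(n, t) * p ** t * (1 - p) ** (n - t)     # P(T = t), T binomial
    return joint / marginal

x = (1, 0, 1, 1, 0)
probs = [cond_prob(x, p) for p in (0.2, 0.5, 0.9)]   # same value for every p
```

The parameter-dependent factors cancel, leaving 1/C(n, t) = 1/10 regardless of p: given the number of successes, every arrangement of the sample is equally likely.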
The collection of likelihood ratios{L(X∣θi)L(X∣θ0)}{\displaystyle \left\{{\frac {L(X\mid \theta _{i})}{L(X\mid \theta _{0})}}\right\}}fori=1,...,k{\displaystyle i=1,...,k}, is a minimal sufficient statistic if the parameter space is discrete{θ0,...,θk}{\displaystyle \left\{\theta _{0},...,\theta _{k}\right\}}. IfX1, ....,Xnare independentBernoulli-distributedrandom variables with expected valuep, then the sumT(X) =X1+ ... +Xnis a sufficient statistic forp(here 'success' corresponds toXi= 1 and 'failure' toXi= 0; soTis the total number of successes) This is seen by considering the joint probability distribution: Because the observations are independent, this can be written as and, collecting powers ofpand 1 −p, gives which satisfies the factorization criterion, withh(x) = 1 being just a constant. Note the crucial feature: the unknown parameterpinteracts with the dataxonly via the statisticT(x) = Σxi. As a concrete application, this gives a procedure for distinguishing afair coin from a biased coin. IfX1, ....,Xnare independent anduniformly distributedon the interval [0,θ], thenT(X) = max(X1, ...,Xn) is sufficient for θ — thesample maximumis a sufficient statistic for the population maximum. To see this, consider the jointprobability density functionofX(X1,...,Xn). Because the observations are independent, the pdf can be written as a product of individual densities where1{...}is theindicator function. Thus the density takes form required by the Fisher–Neyman factorization theorem, whereh(x) =1{min{xi}≥0}, and the rest of the expression is a function of onlyθandT(x) = max{xi}. In fact, theminimum-variance unbiased estimator(MVUE) forθis This is the sample maximum, scaled to correct for thebias, and is MVUE by theLehmann–Scheffé theorem. Unscaled sample maximumT(X) is themaximum likelihood estimatorforθ. 
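The bias correction in the uniform example can be illustrated by a small Monte-Carlo experiment (sample size, replication count and seed are arbitrary):

```python
import numpy as np

# Uniform[0, theta]: the sample maximum is sufficient but biased low,
# E[max] = n/(n+1) * theta; scaling by (n+1)/n gives the MVUE.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 50, 2000
samples = rng.uniform(0, theta, size=(reps, n))
mle = samples.max(axis=1)            # maximum likelihood estimator
mvue = (n + 1) / n * mle             # bias-corrected estimator
bias_mle = float(mle.mean() - theta)
bias_mvue = float(mvue.mean() - theta)
```

Over the replications the raw maximum underestimates θ by roughly θ/(n+1) ≈ 0.02, while the scaled maximum is unbiased up to Monte-Carlo noise.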
IfX1,...,Xn{\displaystyle X_{1},...,X_{n}}are independent anduniformly distributedon the interval[α,β]{\displaystyle [\alpha ,\beta ]}(whereα{\displaystyle \alpha }andβ{\displaystyle \beta }are unknown parameters), thenT(X1n)=(min1≤i≤nXi,max1≤i≤nXi){\displaystyle T(X_{1}^{n})=\left(\min _{1\leq i\leq n}X_{i},\max _{1\leq i\leq n}X_{i}\right)}is a two-dimensional sufficient statistic for(α,β){\displaystyle (\alpha \,,\,\beta )}. To see this, consider the jointprobability density functionofX1n=(X1,…,Xn){\displaystyle X_{1}^{n}=(X_{1},\ldots ,X_{n})}. Because the observations are independent, the pdf can be written as a product of individual densities, i.e. The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting Sinceh(x1n){\displaystyle h(x_{1}^{n})}does not depend on the parameter(α,β){\displaystyle (\alpha ,\beta )}andg(α,β)(x1n){\displaystyle g_{(\alpha \,,\,\beta )}(x_{1}^{n})}depends only onx1n{\displaystyle x_{1}^{n}}through the functionT(X1n)=(min1≤i≤nXi,max1≤i≤nXi),{\displaystyle T(X_{1}^{n})=\left(\min _{1\leq i\leq n}X_{i},\max _{1\leq i\leq n}X_{i}\right),} the Fisher–Neyman factorization theorem impliesT(X1n)=(min1≤i≤nXi,max1≤i≤nXi){\displaystyle T(X_{1}^{n})=\left(\min _{1\leq i\leq n}X_{i},\max _{1\leq i\leq n}X_{i}\right)}is a sufficient statistic for(α,β){\displaystyle (\alpha \,,\,\beta )}. IfX1, ....,Xnare independent and have aPoisson distributionwith parameterλ, then the sumT(X) =X1+ ... +Xnis a sufficient statistic forλ. To see this, consider the joint probability distribution: Because the observations are independent, this can be written as which may be written as which shows that the factorization criterion is satisfied, whereh(x) is the reciprocal of the product of the factorials. Note the parameter λ interacts with the data only through its sumT(X). 
If X1, ..., Xn are independent and normally distributed with expected value θ (a parameter) and known finite variance σ², then

\[ T(X_1^n) = \overline{x} = \frac{1}{n} \sum_{i=1}^n X_i \]

is a sufficient statistic for θ.

To see this, consider the joint probability density function of X1n = (X1, ..., Xn). Because the observations are independent, the pdf can be written as a product of individual densities, i.e.

\[ f_\theta(x_1^n) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \theta)^2}{2\sigma^2} \right) = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{\sum_{i=1}^n (x_i - \theta)^2}{2\sigma^2} \right). \]

Then, since \(\sum_{i=1}^n (x_i - \theta)^2 = \sum_{i=1}^n (x_i - \overline{x})^2 + n(\overline{x} - \theta)^2\), the joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting

\[ h(x_1^n) = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{\sum_{i=1}^n (x_i - \overline{x})^2}{2\sigma^2} \right), \qquad g_\theta(x_1^n) = \exp\left( -\frac{n(\overline{x} - \theta)^2}{2\sigma^2} \right). \]

Since h(x1n) does not depend on the parameter θ and gθ(x1n) depends on x1n only through the function

\[ T(X_1^n) = \overline{x} = \frac{1}{n} \sum_{i=1}^n X_i, \]

the Fisher–Neyman factorization theorem implies T(X1n) is a sufficient statistic for θ.

If σ² is unknown and since \(s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2\), the above likelihood can be rewritten as

\[ f_{(\theta,\sigma^2)}(x_1^n) = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{(n-1)s^2}{2\sigma^2} - \frac{n(\overline{x} - \theta)^2}{2\sigma^2} \right). \]

The Fisher–Neyman factorization theorem still holds and implies that (x̄, s²) is a joint sufficient statistic for (θ, σ²).

If X1, ..., Xn are independent and exponentially distributed with expected value θ (an unknown real-valued positive parameter), then T(X1n) = Σ Xi is a sufficient statistic for θ.

To see this, consider the joint probability density function of X1n = (X1, ..., Xn). Because the observations are independent, the pdf can be written as a product of individual densities, i.e.

\[ f_\theta(x_1^n) = \prod_{i=1}^n \frac{1}{\theta} e^{-x_i/\theta} = \frac{1}{\theta^n} e^{-\frac{1}{\theta} \sum_{i=1}^n x_i}. \]
The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting

\[ h(x_1^n) = 1, \qquad g_\theta(x_1^n) = \frac{1}{\theta^n} e^{-\frac{1}{\theta} \sum_{i=1}^n x_i}. \]

Since h(x1n) does not depend on the parameter θ and gθ(x1n) depends on x1n only through the function T(X1n) = Σ Xi, the Fisher–Neyman factorization theorem implies T(X1n) = Σ Xi is a sufficient statistic for θ.

If X1, ..., Xn are independent and distributed as a Γ(α, β), where α and β are unknown parameters of a Gamma distribution, then T(X1n) = (∏ Xi, Σ Xi) is a two-dimensional sufficient statistic for (α, β).

To see this, consider the joint probability density function of X1n = (X1, ..., Xn). Because the observations are independent, the pdf can be written as a product of individual densities, i.e.

\[ f_{(\alpha,\beta)}(x_1^n) = \prod_{i=1}^n \frac{1}{\Gamma(\alpha)\beta^\alpha} x_i^{\alpha-1} e^{-x_i/\beta} = \left( \frac{1}{\Gamma(\alpha)\beta^\alpha} \right)^n \left( \prod_{i=1}^n x_i \right)^{\alpha-1} e^{-\frac{1}{\beta} \sum_{i=1}^n x_i}. \]
The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting

\[ h(x_1^n) = 1, \qquad g_{(\alpha,\beta)}(x_1^n) = \left( \frac{1}{\Gamma(\alpha)\beta^\alpha} \right)^n \left( \prod_{i=1}^n x_i \right)^{\alpha-1} e^{-\frac{1}{\beta} \sum_{i=1}^n x_i}. \]

Since h(x1n) does not depend on the parameter (α, β) and g(α,β)(x1n) depends on x1n only through the function

\[ T(x_1^n) = \left( \prod_{i=1}^n x_i, \; \sum_{i=1}^n x_i \right), \]

the Fisher–Neyman factorization theorem implies T(X1n) = (∏ Xi, Σ Xi) is a sufficient statistic for (α, β).

Sufficiency finds a useful application in the Rao–Blackwell theorem, which states that if g(X) is any kind of estimator of θ, then typically the conditional expectation of g(X) given a sufficient statistic T(X) is a better (in the sense of having lower variance) estimator of θ, and is never worse. Sometimes one can very easily construct a very crude estimator g(X), and then evaluate that conditional expected value to get an estimator that is in various senses optimal.

According to the Pitman–Koopman–Darmois theorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a sufficient statistic whose dimension remains bounded as sample size increases. Intuitively, this states that nonexponential families of distributions on the real line require nonparametric statistics to fully capture the information in the data.
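The Rao–Blackwell improvement can be illustrated by simulation. For a Poisson sample, the crude unbiased estimator g(X) = X1 has conditional expectation E[X1 ∣ T] = T/n, the sample mean, which has much lower variance. A sketch (the parameter value, sample size, and replication count are illustrative):

```python
import math
import random

random.seed(0)

def poisson_draw(lam):
    """Sample one Poisson(lam) variate via Knuth's multiplicative algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

crude, improved = [], []
for _ in range(20000):
    xs = [poisson_draw(3.0) for _ in range(10)]
    crude.append(xs[0])              # crude unbiased estimator g(X) = X_1
    improved.append(sum(xs) / 10)    # E[g(X) | T] = T/n, its Rao-Blackwellization

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)
```

Both estimators are unbiased for λ = 3, but the Rao–Blackwellized version has variance λ/n instead of λ, a tenfold reduction here.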
Less tersely, suppose Xn, n = 1, 2, 3, ... are independent identically distributed real random variables whose distribution is known to be in some family of probability distributions, parametrized by θ, satisfying certain technical regularity conditions. Then that family is an exponential family if and only if there is an \(\mathbb{R}^m\)-valued sufficient statistic T(X1, ..., Xn) whose number of scalar components m does not increase as the sample size n increases.[14]

This theorem shows that the existence of a finite-dimensional, real-vector-valued sufficient statistic sharply restricts the possible forms of a family of distributions on the real line. When the parameters or the random variables are no longer real-valued, the situation is more complex.[15]

An alternative formulation of the condition that a statistic be sufficient, set in a Bayesian context, involves the posterior distributions obtained by using the full data-set and by using only a statistic. Thus the requirement is that, for almost every x,

\[ \Pr(\theta \mid X = x) = \Pr(\theta \mid T(X) = T(x)). \]

More generally, without assuming a parametric model, we can say that the statistic T is predictive sufficient if

\[ \Pr(X' = x' \mid X = x) = \Pr(X' = x' \mid T(X) = T(x)), \]

where X' denotes a future observation. It turns out that this "Bayesian sufficiency" is a consequence of the formulation above,[16] however they are not directly equivalent in the infinite-dimensional case.[17] A range of theoretical results for sufficiency in a Bayesian context is available.[18]

A concept called "linear sufficiency" can be formulated in a Bayesian context,[19] and more generally.[20] First define the best linear predictor of a vector Y based on X as \({\hat{E}}[Y \mid X]\). Then a linear statistic T(x) is linear sufficient[21] if

\[ {\hat{E}}[\theta \mid X] = {\hat{E}}[\theta \mid T(X)]. \]
https://en.wikipedia.org/wiki/Sufficiency_(statistics)
This list of sequence alignment software is a compilation of software tools and web portals used in pairwise sequence alignment and multiple sequence alignment. See structural alignment software for structural alignment of proteins.

*Sequence type: protein or nucleotide
**Alignment type: local or global

Please see List of alignment visualization software.
https://en.wikipedia.org/wiki/Sequence_alignment_software
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence, that is, the prediction of its secondary and tertiary structure from primary structure. Structure prediction is different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by computational biology and addresses Levinthal's paradox. Accurate structure prediction has important applications in medicine (for example, in drug design) and biotechnology (for example, in novel enzyme design). Starting in 1994, the performance of current methods has been assessed every two years in the Critical Assessment of Structure Prediction (CASP) experiment. A continuous evaluation of protein structure prediction web servers is performed by the community project Continuous Automated Model EvaluatiOn (CAMEO3D).

Proteins are chains of amino acids joined together by peptide bonds. Many conformations of this chain are possible due to the rotation of the main chain about the two torsion angles φ and ψ at the Cα atom (see figure). This conformational flexibility is responsible for differences in the three-dimensional structure of proteins. The peptide bonds in the chain are polar, i.e. they have separated positive and negative charges (partial charges): in the carbonyl group, which can act as a hydrogen bond acceptor, and in the NH group, which can act as a hydrogen bond donor. These groups can therefore interact in the protein structure. Proteins consist mostly of 20 different types of L-α-amino acids (the proteinogenic amino acids).
These can be classified according to the chemistry of the side chain, which also plays an important structural role. Glycine takes on a special position, as it has the smallest side chain, only one hydrogen atom, and therefore can increase the local flexibility in the protein structure. Cysteine, in contrast, can react with another cysteine residue to form a cystine and thereby form a cross-link stabilizing the whole structure.

The protein structure can be considered as a sequence of secondary structure elements, such as α helices and β sheets. In these secondary structures, regular patterns of H-bonds are formed between the main chain NH and CO groups of spatially neighboring amino acids, and the amino acids have similar Φ and ψ angles.[1]

The formation of these secondary structures efficiently satisfies the hydrogen bonding capacities of the peptide bonds. The secondary structures can be tightly packed in the protein core in a hydrophobic environment, but they can also be present at the polar protein surface. Each amino acid side chain has a limited volume to occupy and a limited number of possible interactions with other nearby side chains, a situation that must be taken into account in molecular modeling and alignments.[2][3]

The α-helix is the most abundant type of secondary structure in proteins. The α-helix has 3.6 amino acids per turn with an H-bond formed between every fourth residue; the average length is 10 amino acids (3 turns) or 10 Å but varies from 5 to 40 (1.5 to 11 turns). The alignment of the H-bonds creates a dipole moment for the helix with a resulting partial positive charge at the amino end of the helix. Because this region has free NH2 groups, it will interact with negatively charged groups such as phosphates. The most common location of α-helices is at the surface of protein cores, where they provide an interface with the aqueous environment.
The inner-facing side of the helix tends to have hydrophobic amino acids and the outer-facing side hydrophilic amino acids. Thus, every third or fourth amino acid along the chain will tend to be hydrophobic, a pattern that can be quite readily detected. In the leucine zipper motif, a repeating pattern of leucines on the facing sides of two adjacent helices is highly predictive of the motif. A helical-wheel plot can be used to show this repeated pattern. Other α-helices buried in the protein core or in cellular membranes have a higher and more regular distribution of hydrophobic amino acids, and are highly predictive of such structures. Helices exposed on the surface have a lower proportion of hydrophobic amino acids. Amino acid content can be predictive of an α-helical region. Regions richer in alanine (A), glutamic acid (E), leucine (L), and methionine (M) and poorer in proline (P), glycine (G), tyrosine (Y), and serine (S) tend to form an α-helix. Proline destabilizes or breaks an α-helix but can be present in longer helices, forming a bend.

β-sheets are formed by H-bonds between an average of 5–10 consecutive amino acids in one portion of the chain and another 5–10 farther down the chain. The interacting regions may be adjacent, with a short loop in between, or far apart, with other structures in between. Every chain may run in the same direction to form a parallel sheet, every other chain may run in the reverse chemical direction to form an antiparallel sheet, or the chains may be parallel and antiparallel to form a mixed sheet. The pattern of H-bonding is different in the parallel and antiparallel configurations. Each amino acid in the interior strands of the sheet forms two H-bonds with neighboring amino acids, whereas each amino acid on the outside strands forms only one bond with an interior strand. Looking across the sheet at right angles to the strands, more distant strands are rotated slightly counterclockwise to form a left-handed twist.
The Cα atoms alternate above and below the sheet in a pleated structure, and the R side groups of the amino acids alternate above and below the pleats. The Φ and Ψ angles of the amino acids in sheets vary considerably in one region of the Ramachandran plot. It is more difficult to predict the location of β-sheets than of α-helices. The situation improves somewhat when the amino acid variation in multiple sequence alignments is taken into account.

Some parts of the protein have fixed three-dimensional structure, but do not form any regular structures. They should not be confused with disordered or unfolded segments of proteins or random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. These parts are frequently called "loops" because they connect β-sheets and α-helices. Loops are usually located at the protein surface, and therefore mutations of their residues are more easily tolerated. Having more substitutions, insertions, and deletions in a certain region of a sequence alignment may be an indication of a loop. The positions of introns in genomic DNA may correlate with the locations of loops in the encoded protein[citation needed]. Loops also tend to have charged and polar amino acids and are frequently a component of active sites.

Proteins may be classified according to both structural and sequential similarity. For structural classification, the sizes and spatial arrangements of secondary structures described in the above paragraph are compared in known three-dimensional structures. Classification based on sequence similarity was historically the first to be used. Initially, similarity based on alignments of whole sequences was performed. Later, proteins were classified on the basis of the occurrence of conserved amino acid patterns. Databases that classify proteins by one or more of these schemes are available. In considering protein classification schemes, it is important to keep several observations in mind.
First, two entirely different protein sequences from different evolutionary origins may fold into a similar structure. Conversely, the sequence of an ancient gene for a given structure may have diverged considerably in different species while at the same time maintaining the same basic structural features. Recognizing any remaining sequence similarity in such cases may be a very difficult task. Second, two proteins that share a significant degree of sequence similarity either with each other or with a third sequence also share an evolutionary origin and should share some structural features also. However, gene duplication and genetic rearrangements during evolution may give rise to new gene copies, which can then evolve into proteins with new function and structure.[2] The more commonly used terms for evolutionary and structural relationships among proteins are listed below. Many additional terms are used for various kinds of structural features found in proteins. Descriptions of such terms may be found at the CATH Web site, theStructural Classification of Proteins(SCOP) Web site, and aGlaxo Wellcometutorial on the Swiss bioinformatics Expasy Web site.[citation needed] Secondary structure predictionis a set of techniques inbioinformaticsthat aim to predict the localsecondary structuresofproteinsbased only on knowledge of theiramino acidsequence. For proteins, a prediction consists of assigning regions of the amino acid sequence as likelyalpha helices,beta strands(often termedextendedconformations), orturns. The success of a prediction is determined by comparing it to the results of theDSSPalgorithm (or similar e.g.STRIDE) applied to thecrystal structureof the protein. 
Specialized algorithms have been developed for the detection of specific well-defined patterns such astransmembrane helicesandcoiled coilsin proteins.[2] The best modern methods of secondary structure prediction in proteins were claimed to reach 80% accuracy after using machine learning andsequence alignments;[5]this high accuracy allows the use of the predictions as feature improvingfold recognitionandab initioprotein structure prediction, classification ofstructural motifs, and refinement ofsequence alignments. The accuracy of current protein secondary structure prediction methods is assessed in weeklybenchmarkssuch asLiveBenchandEVA. Early methods of secondary structure prediction, introduced in the 1960s and early 1970s,[6][7][8][9][10]focused on identifying likely alpha helices and were based mainly onhelix-coil transition models.[11]Significantly more accurate predictions that included beta sheets were introduced in the 1970s and relied on statistical assessments based on probability parameters derived from known solved structures. These methods, applied to a single sequence, are typically at most about 60–65% accurate, and often underpredict beta sheets.[2]Since the 1980s,artificial neural networkshave been applied to the prediction of protein structures.[12][13]Theevolutionaryconservationof secondary structures can be exploited by simultaneously assessing manyhomologous sequencesin amultiple sequence alignment, by calculating the net secondary structure propensity of an aligned column of amino acids. 
In concert with larger databases of known protein structures and modern machine learning methods such as neural nets and support vector machines, these methods can achieve up to 80% overall accuracy in globular proteins.[14] The theoretical upper limit of accuracy is around 90%,[14] partly due to idiosyncrasies in DSSP assignment near the ends of secondary structures, where local conformations vary under native conditions but may be forced to assume a single conformation in crystals due to packing constraints. Moreover, the typical secondary structure prediction methods do not account for the influence of tertiary structure on formation of secondary structure; for example, a sequence predicted as a likely helix may still be able to adopt a beta-strand conformation if it is located within a beta-sheet region of the protein and its side chains pack well with their neighbors. Dramatic conformational changes related to the protein's function or environment can also alter local secondary structure.

To date, over 20 different secondary structure prediction methods have been developed. One of the first algorithms was the Chou–Fasman method, which relies predominantly on probability parameters determined from relative frequencies of each amino acid's appearance in each type of secondary structure.[15] The original Chou–Fasman parameters, determined from the small sample of structures solved in the mid-1970s, produce poor results compared to modern methods, though the parameterization has been updated since it was first published. The Chou–Fasman method is roughly 50–60% accurate in predicting secondary structures.[2]

The next notable program was the GOR method, an information theory-based method.
It uses the more powerful probabilistic technique of Bayesian inference.[16] The GOR method takes into account not only the probability of each amino acid having a particular secondary structure, but also the conditional probability of the amino acid assuming each structure given the contributions of its neighbors (it does not assume that the neighbors have that same structure). The approach is both more sensitive and more accurate than that of Chou and Fasman because amino acid structural propensities are only strong for a small number of amino acids such as proline and glycine. Weak contributions from each of many neighbors can add up to strong effects overall. The original GOR method was roughly 65% accurate and is dramatically more successful in predicting alpha helices than beta sheets, which it frequently mispredicted as loops or disorganized regions.[2]

Another big step forward was the use of machine learning methods. Artificial neural network methods were the first to be applied. These use solved structures as training sets to identify common sequence motifs associated with particular arrangements of secondary structures. These methods are over 70% accurate in their predictions, although beta strands are still often underpredicted due to the lack of three-dimensional structural information that would allow assessment of hydrogen bonding patterns that can promote formation of the extended conformation required for the presence of a complete beta sheet.[2] PSIPRED and JPRED are some of the best-known programs based on neural networks for protein secondary structure prediction. Next, support vector machines have proven particularly useful for predicting the locations of turns, which are difficult to identify with statistical methods.[17][18]

Extensions of machine learning techniques attempt to predict more fine-grained local properties of proteins, such as backbone dihedral angles in unassigned regions.
Both SVMs[19] and neural networks[20] have been applied to this problem.[17] More recently, real-value torsion angles can be accurately predicted by SPINE-X and successfully employed for ab initio structure prediction.[21]

It is reported that in addition to the protein sequence, secondary structure formation depends on other factors. For example, it is reported that secondary structure tendencies depend also on local environment,[22] solvent accessibility of residues,[23] protein structural class,[24] and even the organism from which the proteins are obtained.[25] Based on such observations, some studies have shown that secondary structure prediction can be improved by the addition of information about protein structural class,[26] residue accessible surface area[27][28] and also contact number information.[29]

The practical role of protein structure prediction is now more important than ever.[30] Massive amounts of protein sequence data are produced by modern large-scale DNA sequencing efforts such as the Human Genome Project. Despite community-wide efforts in structural genomics, the output of experimentally determined protein structures (typically obtained by time-consuming and relatively expensive X-ray crystallography or NMR spectroscopy) is lagging far behind the output of protein sequences.

Protein structure prediction remains an extremely difficult and unresolved undertaking. The two main problems are the calculation of protein free energy and finding the global minimum of this energy. A protein structure prediction method must explore the space of possible protein structures, which is astronomically large. These problems can be partially bypassed in "comparative" or homology modeling and fold recognition methods, in which the search space is pruned by the assumption that the protein in question adopts a structure that is close to the experimentally determined structure of another homologous protein.
In contrast, the de novo protein structure prediction methods must explicitly resolve these problems. The progress and challenges in protein structure prediction have been reviewed by Zhang.[31]

Most tertiary structure modelling methods, such as Rosetta, are optimized for modelling the tertiary structure of single protein domains. A step called domain parsing, or domain boundary prediction, is usually done first to split a protein into potential structural domains. As with the rest of tertiary structure prediction, this can be done comparatively from known structures[32] or ab initio with the sequence only (usually by machine learning, assisted by covariation).[33] The structures for individual domains are docked together in a process called domain assembly to form the final tertiary structure.[34][35]

Ab initio (or de novo) protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures. There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins will require better algorithms and larger computational resources like those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing (such as Folding@home, the Human Proteome Folding Project and Rosetta@Home).
Although these computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) make ab initio structure prediction an active research field.[31]

As of 2009, a 50-residue protein could be simulated atom-by-atom on a supercomputer for 1 millisecond.[36] As of 2012, comparable stable-state sampling could be done on a standard desktop with a new graphics card and more sophisticated algorithms.[37] Much larger simulation timescales can be achieved using coarse-grained modeling.[38][39]

As sequencing became more commonplace in the 1990s, several groups used protein sequence alignments to predict correlated mutations, and it was hoped that these coevolved residues could be used to predict tertiary structure (using the analogy to distance constraints from experimental procedures such as NMR). The assumption is that when single-residue mutations are slightly deleterious, compensatory mutations may occur to restabilize residue–residue interactions. This early work used what are known as local methods to calculate correlated mutations from protein sequences, but suffered from indirect false correlations which result from treating each pair of residues as independent of all other pairs.[40][41][42]

In 2011, a different, and this time global, statistical approach demonstrated that predicted coevolved residues were sufficient to predict the 3D fold of a protein, provided there are enough sequences available (>1,000 homologous sequences are needed).[43] The method, EVfold, uses no homology modeling, threading or 3D structure fragments and can be run on a standard personal computer even for proteins with hundreds of residues.
The accuracy of the contacts predicted using this and related approaches has now been demonstrated on many known structures and contact maps,[44][45][46] including the prediction of experimentally unsolved transmembrane proteins.[47]

Comparative protein modeling uses previously solved structures as starting points, or templates. This is effective because it appears that although the number of actual proteins is vast, there is a limited set of tertiary structural motifs to which most proteins belong. It has been suggested that there are only around 2,000 distinct protein folds in nature, though there are many millions of different proteins. Comparative protein modeling can be combined with evolutionary covariation in structure prediction.[48]

These methods may also be split into two groups:[31]

Accurate packing of the amino acid side chains represents a separate problem in protein structure prediction. Methods that specifically address the problem of predicting side-chain geometry include dead-end elimination and the self-consistent mean field methods. The side chain conformations with low energy are usually determined on the rigid polypeptide backbone and using a set of discrete side chain conformations known as "rotamers". The methods attempt to identify the set of rotamers that minimize the model's overall energy.

These methods use rotamer libraries, which are collections of favorable conformations for each residue type in proteins. Rotamer libraries may contain information about the conformation, its frequency, and the standard deviations about mean dihedral angles, which can be used in sampling.[51] Rotamer libraries are derived from structural bioinformatics or other statistical analysis of side-chain conformations in known experimental structures of proteins, such as by clustering the observed conformations for tetrahedral carbons near the staggered (60°, 180°, −60°) values.
Rotamer libraries can be backbone-independent, secondary-structure-dependent, or backbone-dependent. Backbone-independent rotamer libraries make no reference to backbone conformation, and are calculated from all available side chains of a certain type (for instance, the first example of a rotamer library, done by Ponder andRichardsat Yale in 1987).[52]Secondary-structure-dependent libraries present different dihedral angles and/or rotamer frequencies forα{\displaystyle \alpha }-helix,β{\displaystyle \beta }-sheet, or coil secondary structures.[53]Backbone-dependent rotamer librariespresent conformations and/or frequencies dependent on the local backbone conformation as defined by the backbone dihedral anglesϕ{\displaystyle \phi }andψ{\displaystyle \psi }, regardless of secondary structure.[54] The modern versions of these libraries as used in most software are presented as multidimensional distributions of probability or frequency, where the peaks correspond to the dihedral-angle conformations considered as individual rotamers in the lists. Some versions are based on very carefully curated data and are used primarily for structure validation,[55]while others emphasize relative frequencies in much larger data sets and are the form used primarily for structure prediction, such as theDunbrack rotamer libraries.[56] Side-chain packing methods are most useful for analyzing the protein'shydrophobiccore, where side chains are more closely packed; they have more difficulty addressing the looser constraints and higher flexibility of surface residues, which often occupy multiple rotamer conformations rather than just one.[57][58] In the case ofcomplexes of two or more proteins, where the structures of the proteins are known or can be predicted with high accuracy,protein–protein dockingmethods can be used to predict the structure of the complex. 
Information on the effect of mutations at specific sites on the affinity of the complex helps to understand the complex structure and to guide docking methods.

A great number of software tools for protein structure prediction exist. Approaches include homology modeling, protein threading, ab initio methods, secondary structure prediction, and transmembrane helix and signal peptide prediction. In particular, deep learning based on long short-term memory has been used for this purpose since 2007, when it was successfully applied to protein homology detection[59] and to predict subcellular localization of proteins.[60] Some recent successful methods based on the CASP experiments include I-TASSER, HHpred and AlphaFold. In 2021, AlphaFold was reported to perform best.[61]

Knowing the structure of a protein often allows functional prediction as well. For instance, collagen is folded into a long, extended fiber-like chain, which makes it a fibrous protein. Recently, several techniques have been developed to predict protein folding and thus protein structure, for example, I-TASSER and AlphaFold.

AlphaFold was one of the first AIs to predict protein structures. It was introduced by Google's DeepMind in the 13th CASP competition, which was held in 2018.[61] AlphaFold relies on a neural network approach, which directly predicts the 3D coordinates of all non-hydrogen atoms for a given protein using the amino acid sequence and aligned homologous sequences.
The AlphaFold network consists of a trunk which processes the inputs through repeated layers, and a structure module which introduces an explicit 3D structure.[61] Earlier neural networks for protein structure prediction used LSTM.[59][60]

Since AlphaFold outputs protein coordinates directly, AlphaFold produces predictions in graphics processing unit (GPU) minutes to GPU hours, depending on the length of the protein sequence.[61]

The European Bioinformatics Institute together with DeepMind have constructed the AlphaFold-EBI database[62] for predicted protein structures.[63]

AlphaFold2 was introduced in CASP14 and is capable of predicting protein structures to near experimental accuracy.[64] AlphaFold was swiftly followed by RoseTTAFold[65] and later by OmegaFold and the ESM Metagenomic Atlas.[66]

In a study, Sommer et al. 2022 demonstrated the application of protein structure prediction in genome annotation, specifically in identifying functional protein isoforms, using computationally predicted structures, available at https://www.isoform.io.[67] This study highlights the promise of protein structure prediction as a genome annotation tool and presents a practical, structure-guided approach that can be used to enhance the annotation of any genome.

In 2024, David Baker and Demis Hassabis were awarded the Nobel Prize in Chemistry[68] for their contributions to computational protein modeling, including the development of AlphaFold2, an AI-based model for protein structure prediction.

AlphaFold2's accuracy has been evaluated against experimentally determined protein structures using metrics such as root-mean-square deviation (RMSD).[69] The median RMSD between different experimental structures of the same protein is approximately 0.6 Å, while the median RMSD between AlphaFold2 predictions and experimental structures is around 1 Å. For regions where AlphaFold2 assigns high confidence, the median RMSD is about 0.6 Å, comparable to the variability observed between different experimental structures.
However, in low-confidence regions, the RMSD can exceed 2 Å, indicating greater deviations. In proteins with multiple domains connected by flexible linkers, AlphaFold2 predicts individual domain structures accurately but may assign random relative positions to these domains. Additionally, AlphaFold2 does not account for structural constraints such as the membrane plane, sometimes placing protein domains in positions that would physically clash with the membrane.[70] CASP, which stands for Critical Assessment of Techniques for Protein Structure Prediction, is a community-wide experiment for protein structure prediction taking place every two years since 1994. CASP provides an opportunity to assess the quality of available human, non-automated methodology (human category) and of automatic servers for protein structure prediction (server category, introduced in CASP7).[71] The CAMEO3D (Continuous Automated Model EvaluatiOn) server evaluates automated protein structure prediction servers on a weekly basis using blind predictions for newly released protein structures. CAMEO publishes the results on its website.
https://en.wikipedia.org/wiki/Protein_structure_prediction
A position weight matrix (PWM), also known as a position-specific weight matrix (PSWM) or position-specific scoring matrix (PSSM), is a commonly used representation of motifs (patterns) in biological sequences. PWMs are often derived from a set of aligned sequences that are thought to be functionally related and have become an important part of many software tools for computational motif discovery.

A PWM has one row for each symbol of the alphabet (4 rows for nucleotides in DNA sequences or 20 rows for amino acids in protein sequences) and one column for each position in the pattern. In the first step in constructing a PWM, a basic position frequency matrix (PFM) is created by counting the occurrences of each nucleotide at each position. From the PFM, a position probability matrix (PPM) can then be created by dividing the nucleotide count at each position by the number of sequences, thereby normalising the values. Formally, given a set X of N aligned sequences of length l, the elements of the PPM M are calculated as

M_{k,j} = (1/N) · Σ_{i=1}^{N} I(X_{i,j} = k)

where i ∈ (1, ..., N), j ∈ (1, ..., l), k is a symbol in the alphabet and I(a = k) is an indicator function that is 1 if a = k and 0 otherwise.

For example, given the following DNA sequences:

GAGGTAAAC
TCCGTAAGT
CAGGTTGGA
ACAGTCAGT
TAGGTCATT
TAGGTACTG
ATGGTAACT
CAGGTATAC
TGTGTGAGT
AAGGTAAGT

The corresponding PFM is:

A: 3  6  1  0  0  6  7  2  1
C: 2  2  1  0  0  2  1  1  2
G: 1  1  7 10  0  1  1  5  1
T: 4  1  1  0 10  1  1  2  6

Therefore, the resulting PPM is:[1]

A: 0.3 0.6 0.1 0.0 0.0 0.6 0.7 0.2 0.1
C: 0.2 0.2 0.1 0.0 0.0 0.2 0.1 0.1 0.2
G: 0.1 0.1 0.7 1.0 0.0 0.1 0.1 0.5 0.1
T: 0.4 0.1 0.1 0.0 1.0 0.1 0.1 0.2 0.6

Both PPMs and PWMs assume statistical independence between positions in the pattern, as the probabilities for each position are calculated independently of other positions. From the definition above, it follows that the sum of values for a particular position (that is, summing over all symbols) is 1. Each column can therefore be regarded as an independent multinomial distribution. This makes it easy to calculate the probability of a sequence given a PPM, by multiplying the relevant probabilities at each position.
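The PFM and PPM construction described above can be sketched in a few lines of Python. The helper names are illustrative, and the ten sequences are the example set given above.

```python
from collections import Counter

sequences = [
    "GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA", "ACAGTCAGT", "TAGGTCATT",
    "TAGGTACTG", "ATGGTAACT", "CAGGTATAC", "TGTGTGAGT", "AAGGTAAGT",
]

def position_frequency_matrix(seqs, alphabet="ACGT"):
    """Count occurrences of each symbol at each column (the PFM)."""
    length = len(seqs[0])
    counts = [Counter(seq[j] for seq in seqs) for j in range(length)]
    return {k: [counts[j][k] for j in range(length)] for k in alphabet}

def position_probability_matrix(pfm, n):
    """Normalise counts by the number of sequences (the PPM)."""
    return {k: [c / n for c in row] for k, row in pfm.items()}

pfm = position_frequency_matrix(sequences)
ppm = position_probability_matrix(pfm, len(sequences))
print(pfm["G"])  # [1, 1, 7, 10, 0, 1, 1, 5, 1]
print(ppm["A"])  # [0.3, 0.6, 0.1, 0.0, 0.0, 0.6, 0.7, 0.2, 0.1]
```

Because each column of the PPM is a probability distribution over the alphabet, its entries sum to 1.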
For example, the probability of the sequence S = GAGGTAAAC given the above PPM M can be calculated:

p(S | M) = 0.1 × 0.6 × 0.7 × 1.0 × 1.0 × 0.6 × 0.7 × 0.2 × 0.2 = 0.0007056

Pseudocounts (or Laplace estimators) are often applied when calculating PPMs based on a small dataset, in order to avoid matrix entries having a value of 0.[2] This is equivalent to multiplying each column of the PPM by a Dirichlet distribution and allows the probability to be calculated for new sequences (that is, sequences which were not part of the original dataset). In the example above, without pseudocounts, any sequence which did not have a G in the 4th position or a T in the 5th position would have a probability of 0, regardless of the other positions.

Most often the elements in PWMs are calculated as log odds. That is, the elements of a PPM are transformed using a background model b, so that

M_{k,j} = log2( p_{k,j} / b_k )

gives an element of the PWM. The simplest background model assumes that each letter appears equally frequently in the dataset; that is, b_k = 1/|k| for all symbols in the alphabet (0.25 for nucleotides and 0.05 for amino acids). Applying this transformation to the PPM M from above (with no pseudocounts added) gives:

A:  0.26  1.26 −1.32   −∞    −∞   1.26  1.49 −0.32 −1.32
C: −0.32 −0.32 −1.32   −∞    −∞  −0.32 −1.32 −1.32 −0.32
G: −1.32 −1.32  1.49  2.00   −∞  −1.32 −1.32  1.00 −1.32
T:  0.68 −1.32 −1.32   −∞   2.00 −1.32 −1.32 −0.32  1.26

The −∞ entries in the matrix make clear the advantage of adding pseudocounts, especially when using small datasets to construct M. The background model need not have equal values for each symbol: for example, when studying organisms with a high GC-content, the values for C and G may be increased with a corresponding decrease for the A and T values.

When the PWM elements are calculated using log likelihoods, the score of a sequence can be calculated by adding (rather than multiplying) the relevant values at each position in the PWM. The sequence score gives an indication of how different the sequence is from a random sequence. The score is 0 if the sequence has the same probability of being a functional site and of being a random site.
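A sketch of the log-odds transformation and additive scoring, using the PPM from the worked DNA example and a uniform background; the function names are illustrative.

```python
import math

# PPM for the ten example sequences (rows A, C, G, T; nine columns).
ppm = {
    "A": [0.3, 0.6, 0.1, 0.0, 0.0, 0.6, 0.7, 0.2, 0.1],
    "C": [0.2, 0.2, 0.1, 0.0, 0.0, 0.2, 0.1, 0.1, 0.2],
    "G": [0.1, 0.1, 0.7, 1.0, 0.0, 0.1, 0.1, 0.5, 0.1],
    "T": [0.4, 0.1, 0.1, 0.0, 1.0, 0.1, 0.1, 0.2, 0.6],
}
BACKGROUND = 0.25  # uniform background model for nucleotides

def log_odds(p, b=BACKGROUND):
    """PWM element: log2 of motif probability over background."""
    return math.log2(p / b) if p > 0 else float("-inf")

pwm = {k: [log_odds(p) for p in row] for k, row in ppm.items()}

def score(seq, pwm):
    """Additive log-odds score of a sequence against the PWM."""
    return sum(pwm[base][j] for j, base in enumerate(seq))

print(round(score("GAGGTAAAC", pwm), 2))  # 7.53
```

A positive score, as here, indicates the sequence is more likely under the motif model than under the background; any sequence hitting a zero-probability cell scores −∞, illustrating why pseudocounts are useful.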
The score is greater than 0 if it is more likely to be a functional site than a random site, and less than 0 if it is more likely to be a random site than a functional site.[1] The sequence score can also be interpreted in a physical framework as the binding energy for that sequence.

The information content (IC) of a PWM is sometimes of interest, as it says something about how different a given PWM is from a uniform distribution. The self-information of observing a particular symbol at a particular position of the motif is:

−log p_{k,j}

The expected (average) self-information of a particular element in the PWM is then:

−p_{k,j} · log p_{k,j}

Finally, the IC of the PWM is the sum of the expected self-information of every element:

IC = −Σ_j Σ_k p_{k,j} · log p_{k,j}

Often, it is more useful to calculate the information content with the background letter frequencies of the sequences being studied rather than assuming equal probabilities of each letter (e.g., the GC-content of DNA of thermophilic bacteria ranges from 65.3% to 70.8%;[3] thus a motif of ATAT would contain much more information than a motif of CCGG). The equation for information content thus becomes

IC = Σ_j Σ_k p_{k,j} · log( p_{k,j} / b_k )

where b_k is the background frequency for letter k. This corresponds to the Kullback–Leibler divergence or relative entropy. However, it has been shown that when using PSSMs to search genomic sequences (see below) this uniform correction can lead to overestimation of the importance of the different bases in a motif, due to the uneven distribution of n-mers in real genomes, leading to a significantly larger number of false positives.[4]

There are various algorithms to scan for hits of PWMs in sequences. One example is the MATCH algorithm,[5] which has been implemented in ModuleMaster.[6] More sophisticated algorithms for fast database searching with nucleotide as well as amino acid PWMs/PSSMs are implemented in the PoSSuMsearch software.[7]

The basic PWM/PSSM is unable to deal with insertions and deletions.
A PSSM with additional probabilities for insertion and deletion at each position can be interpreted as a hidden Markov model. This is the approach used by Pfam.[8][9]
https://en.wikipedia.org/wiki/Position-specific_scoring_matrix
Multiple sequence alignment (MSA) is the process or the result of sequence alignment of three or more biological sequences, generally protein, DNA, or RNA. These alignments are used to infer evolutionary relationships via phylogenetic analysis and can highlight homologous features between sequences. Alignments highlight mutation events such as point mutations (single amino acid or nucleotide changes), insertion mutations and deletion mutations, and alignments are used to assess sequence conservation and infer the presence and activity of protein domains, tertiary structures, secondary structures, and individual amino acids or nucleotides.

Multiple sequence alignments require more sophisticated methodologies than pairwise alignments, as they are more computationally complex. Most multiple sequence alignment programs use heuristic methods rather than global optimization because identifying the optimal alignment between more than a few sequences of moderate length is prohibitively computationally expensive. However, heuristic methods generally cannot guarantee high-quality solutions and have been shown to fail to yield near-optimal solutions on benchmark test cases.[1][2][3]

Given m sequences S_i, i = 1, ..., m, of the form below:

S_1 = (S_{11}, S_{12}, ..., S_{1n_1})
S_2 = (S_{21}, S_{22}, ..., S_{2n_2})
⋮
S_m = (S_{m1}, S_{m2}, ..., S_{mn_m})

a multiple sequence alignment of this set S is obtained by inserting any number of gaps into each of the S_i sequences of S until the modified sequences, S'_i, all have length L ≥ max{n_i | i = 1, ..., m} and no column of the modified sequences consists only of gaps.
The mathematical form of an MSA of the above sequence set is shown below:

S'_1 = (S'_{11}, S'_{12}, ..., S'_{1L})
S'_2 = (S'_{21}, S'_{22}, ..., S'_{2L})
⋮
S'_m = (S'_{m1}, S'_{m2}, ..., S'_{mL})

To return from each particular sequence S'_i to S_i, remove all gaps.

A general approach when calculating multiple sequence alignments is to use graphs to identify all of the different alignments. When finding alignments via graphs, a complete alignment is created in a weighted graph that contains a set of vertices and a set of edges. Each of the graph edges has a weight based on a certain heuristic that helps to score each alignment or subset of the original graph. When determining the best-suited alignments for each MSA, a trace is usually generated. A trace is a set of realized, or corresponding and aligned, vertices that has a specific weight based on the edges that are selected between corresponding vertices. When choosing traces for a set of sequences it is necessary to choose a trace with a maximum weight to get the best alignment of the sequences.

There are various alignment methods used within multiple sequence alignment to maximize scores and correctness of alignments. Each is usually based on a certain heuristic with an insight into the evolutionary process. Most try to replicate evolution to get the most realistic alignment possible to best predict relations between sequences. A direct method for producing an MSA uses the dynamic programming technique to identify the globally optimal alignment solution.
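The defining properties above (a common length L, gap removal recovering each S_i, and no all-gap column) can be checked programmatically. This is a minimal sketch with an illustrative helper name.

```python
def is_valid_msa(originals, aligned, gap="-"):
    """Check the defining properties of an MSA: all aligned rows share
    one length L, removing gaps recovers each original sequence, and
    no column consists only of gaps."""
    lengths = {len(row) for row in aligned}
    if len(lengths) != 1:
        return False
    if any(row.replace(gap, "") != orig
           for row, orig in zip(aligned, originals)):
        return False
    return all(any(row[j] != gap for row in aligned)
               for j in range(lengths.pop()))

originals = ["ACGT", "AGT", "ACT"]
aligned   = ["ACGT", "A-GT", "AC-T"]
print(is_valid_msa(originals, aligned))  # True
```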
For proteins, this method usually involves two sets of parameters: a gap penalty and a substitution matrix assigning scores or probabilities to the alignment of each possible pair of amino acids based on the similarity of the amino acids' chemical properties and the evolutionary probability of the mutation. For nucleotide sequences, a similar gap penalty is used, but a much simpler substitution matrix, wherein only identical matches and mismatches are considered, is typical. The scores in the substitution matrix may be either all positive or a mix of positive and negative in the case of a global alignment, but must be both positive and negative in the case of a local alignment.[4]

For n individual sequences, the naive method requires constructing the n-dimensional equivalent of the matrix formed in standard pairwise sequence alignment. The search space thus increases exponentially with increasing n and is also strongly dependent on sequence length. Expressed with the big O notation commonly used to measure computational complexity, a naïve MSA takes O(Length^Nseqs) time to produce. To find the global optimum for n sequences this way has been shown to be an NP-complete problem.[5][6][7] In 1989, based on the Carrillo–Lipman algorithm,[8] Altschul introduced a practical method that uses pairwise alignments to constrain the n-dimensional search space.[9] In this approach pairwise dynamic programming alignments are performed on each pair of sequences in the query set, and only the space near the n-dimensional intersection of these alignments is searched for the n-way alignment.
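The pairwise dynamic programming at the heart of these methods can be sketched with a simple match/mismatch scheme and a linear gap penalty. The parameter values here are illustrative toys, not a real substitution matrix such as BLOSUM.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score by dynamic programming, using a
    linear gap penalty and a match/mismatch substitution scheme."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # (mis)match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[n][m]

print(needleman_wunsch("ACGT", "AGT"))  # 1  (three matches, one gap)
```

A naïve n-sequence generalisation would replace this 2D table with an n-dimensional one, which is what makes exact MSA so expensive.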
The MSA program optimizes the sum of all of the pairs of characters at each position in the alignment (the so-called sum-of-pairs score) and has been implemented in a software program for constructing multiple sequence alignments.[10] In 2019, Hosseininasab and van Hoeve showed that by using decision diagrams, MSA may be modeled in polynomial space complexity.[3]

The most widely used approach to multiple sequence alignment uses a heuristic search known as the progressive technique (also known as the hierarchical or tree method), developed by Da-Fei Feng and Doolittle in 1987.[11] Progressive alignment builds up a final MSA by combining pairwise alignments beginning with the most similar pair and progressing to the most distantly related. All progressive alignment methods require two stages: a first stage in which the relationships between the sequences are represented as a phylogenetic tree, called a guide tree, and a second stage in which the MSA is built by adding the sequences sequentially to the growing MSA according to the guide tree. The initial guide tree is determined by an efficient clustering method such as neighbor-joining or the unweighted pair group method with arithmetic mean (UPGMA), and may use distances based on the number of identical two-letter sub-sequences (as in FASTA) rather than on a dynamic programming alignment.[12]

Progressive alignments are not guaranteed to be globally optimal. The primary problem is that when errors are made at any stage in growing the MSA, these errors are then propagated through to the final result. Performance is also particularly bad when all of the sequences in the set are rather distantly related. Most modern progressive methods modify their scoring function with a secondary weighting function that assigns scaling factors to individual members of the query set in a nonlinear fashion based on their phylogenetic distance from their nearest neighbors.
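A minimal sketch of the sum-of-pairs score mentioned above, assuming a simple match/mismatch scheme, a linear gap penalty, and the common convention that gap–gap pairs score 0. The names and parameter values are illustrative.

```python
def sum_of_pairs(column, match=1, mismatch=-1, gap_penalty=-1, gap="-"):
    """Score one alignment column as the sum over all unordered pairs
    of rows; gap-gap pairs conventionally score 0 here."""
    total = 0
    for i in range(len(column)):
        for j in range(i + 1, len(column)):
            a, b = column[i], column[j]
            if a == gap and b == gap:
                continue
            elif a == gap or b == gap:
                total += gap_penalty
            else:
                total += match if a == b else mismatch
    return total

def sp_score(alignment, **kw):
    """Sum-of-pairs score of a whole alignment (rows of equal length)."""
    return sum(sum_of_pairs(list(col), **kw) for col in zip(*alignment))

print(sp_score(["ACGT", "A-GT", "AC-T"]))  # 4
```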
This corrects for non-random selection of the sequences given to the alignment program.[12]

Progressive alignment methods are efficient enough to implement on a large scale for many (hundreds to thousands of) sequences. A popular progressive alignment method has been the Clustal family.[13][14] ClustalW is used extensively for phylogenetic tree construction, in spite of the author's explicit warnings that unedited alignments should not be used in such studies, and as input for protein structure prediction by homology modeling. The European Bioinformatics Institute (EMBL-EBI) announced that ClustalW2 would be retired in August 2015. It recommends Clustal Omega, which performs alignments based on seeded guide trees and HMM profile–profile techniques for proteins. An alternative tool for progressive DNA alignments is multiple alignment using fast Fourier transform (MAFFT).[15]

Another common progressive alignment method, named T-Coffee,[16] is slower than Clustal and its derivatives but generally produces more accurate alignments for distantly related sequence sets. T-Coffee calculates pairwise alignments by combining the direct alignment of the pair with indirect alignments that align each sequence of the pair to a third sequence. It uses the output from Clustal as well as another local alignment program, LALIGN, which finds multiple regions of local alignment between two sequences. The resulting alignment and phylogenetic tree are used as a guide to produce new and more accurate weighting factors.

Because progressive methods are heuristics that are not guaranteed to converge to a global optimum, alignment quality can be difficult to evaluate and their true biological significance can be obscure.
A semi-progressive method that improves alignment quality and does not use a lossy heuristic while running in polynomial time has been implemented in the program PSAlign.[17]

A set of methods to produce MSAs while reducing the errors inherent in progressive methods are classified as "iterative" because they work similarly to progressive methods but repeatedly realign the initial sequences as well as adding new sequences to the growing MSA. One reason progressive methods are so strongly dependent on a high-quality initial alignment is the fact that these alignments are always incorporated into the final result; that is, once a sequence has been aligned into the MSA, its alignment is not considered further. This approximation improves efficiency at the cost of accuracy. By contrast, iterative methods can return to previously calculated pairwise alignments or sub-MSAs incorporating subsets of the query sequences as a means of optimizing a general objective function such as finding a high-quality alignment score.[12]

A variety of subtly different iteration methods have been implemented and made available in software packages; reviews and comparisons have been useful but generally refrain from choosing a "best" technique.[18] The software package PRRN/PRRP uses a hill-climbing algorithm to optimize its MSA alignment score[19] and iteratively corrects both alignment weights and locally divergent or "gappy" regions of the growing MSA.[12] PRRP performs best when refining an alignment previously constructed by a faster method.[12]

Another iterative program, DIALIGN, takes an unusual approach of focusing narrowly on local alignments between sub-segments or sequence motifs without introducing a gap penalty.[20] The alignment of individual motifs is then achieved with a matrix representation similar to a dot-matrix plot in a pairwise alignment.
An alternative method that uses fast local alignments as anchor points or seeds for a slower global-alignment procedure is implemented in the CHAOS/DIALIGN suite.[20]

A third popular iteration-based method, named MUSCLE (multiple sequence comparison by log-expectation), improves on progressive methods with a more accurate distance measure to assess the relatedness of two sequences.[21] The distance measure is updated between iteration stages (although, in its original form, MUSCLE contained only 2–3 iterations depending on whether refinement was enabled).

Consensus methods attempt to find the optimal multiple sequence alignment given multiple different alignments of the same set of sequences. There are two commonly used consensus methods, M-COFFEE and MergeAlign.[22] M-COFFEE uses multiple sequence alignments generated by seven different methods to generate consensus alignments. MergeAlign is capable of generating consensus alignments from any number of input alignments generated using different models of sequence evolution or different methods of multiple sequence alignment. The default option for MergeAlign is to infer a consensus alignment using alignments generated with 91 different models of protein sequence evolution.

A hidden Markov model (HMM) is a probabilistic model that can assign likelihoods to all possible combinations of gaps, matches, and mismatches to determine the most likely MSA or set of possible MSAs. HMMs can produce a single highest-scoring output but can also generate a family of possible alignments that can then be evaluated for biological significance. HMMs can produce both global and local alignments.
Although HMM-based methods have been developed relatively recently, they offer significant improvements in computational speed, especially for sequences that contain overlapping regions.[12]

Typical HMM-based methods work by representing an MSA as a form of directed acyclic graph known as a partial-order graph, which consists of a series of nodes representing possible entries in the columns of an MSA. In this representation, a column that is absolutely conserved (that is, one in which all the sequences in the MSA share a particular character at a particular position) is coded as a single node with as many outgoing connections as there are possible characters in the next column of the alignment. In the terms of a typical hidden Markov model, the observed states are the individual alignment columns and the "hidden" states represent the presumed ancestral sequence from which the sequences in the query set are hypothesized to have descended. An efficient search variant of the dynamic programming method, named the Viterbi algorithm, is generally used to successively align the growing MSA to the next sequence in the query set to produce a new MSA.[23] This is distinct from progressive alignment methods because the alignment of prior sequences is updated at each new sequence addition. However, like progressive methods, this technique can be influenced by the order in which the sequences in the query set are integrated into the alignment, especially when the sequences are distantly related.[12]

Several software programs are available in which variants of HMM-based methods have been implemented and which are noted for their scalability and efficiency, although properly using an HMM method is more complex than using more common progressive methods.
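The Viterbi dynamic programming step mentioned above can be illustrated with a generic decoder on a toy two-state model. This is not a full profile-HMM aligner; all state names and parameters below are illustrative.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence,
    computed in log space by dynamic programming."""
    log = lambda p: math.log(p) if p > 0 else float("-inf")
    # V[t][s] = (best log-probability ending in state s, path so far)
    V = [{s: (log(start_p[s]) + log(emit_p[s][obs[0]]), [s])
          for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prev, sc = max(((p, V[-2][p][0] + log(trans_p[p][s]))
                            for p in states), key=lambda x: x[1])
            V[-1][s] = (sc + log(emit_p[s][o]), V[-2][prev][1] + [s])
    return max(V[-1].values(), key=lambda x: x[0])[1]

# Toy two-state model: "M" (match) strongly prefers the conserved
# symbol 'A'; "I" (insert) emits uniformly. Parameters are invented.
states = ["M", "I"]
start = {"M": 0.8, "I": 0.2}
trans = {"M": {"M": 0.7, "I": 0.3}, "I": {"M": 0.4, "I": 0.6}}
emit = {"M": {"A": 0.9, "C": 0.1}, "I": {"A": 0.5, "C": 0.5}}
print(viterbi("AAC", states, start, trans, emit))  # ['M', 'M', 'I']
```

Profile-HMM aligners apply the same recurrence with match, insert, and delete states per alignment column.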
Among the simplest of these is Partial-Order Alignment (POA),[24] and a similar, more general method is implemented in the Sequence Alignment and Modeling System (SAM) software package[25] and in HMMER.[26] SAM has been used as a source of alignments for protein structure prediction in the Critical Assessment of Structure Prediction (CASP) experiment and to develop a database of predicted proteins in the yeast species S. cerevisiae. HHsearch[27] is a software package for the detection of remotely related protein sequences based on the pairwise comparison of HMMs. A server running HHsearch (HHpred) was the fastest of 10 automatic structure prediction servers in the CASP7 and CASP8 structure prediction competitions.[28]

Most multiple sequence alignment methods try to minimize the number of insertions/deletions (gaps) and, as a consequence, produce compact alignments. This causes several problems if the sequences to be aligned contain non-homologous regions, or if gaps are informative in a phylogeny analysis. These problems are common in newly produced sequences that are poorly annotated and may contain frame-shifts, wrong domains or non-homologous spliced exons. The first such phylogeny-aware method was developed in 2005 by Löytynoja and Goldman.[29] The same authors released a software package called PRANK in 2008.[30] PRANK improves alignments when insertions are present. Nevertheless, it runs slowly compared to progressive and/or iterative methods which have been developed for several years.

In 2012, two new phylogeny-aware tools appeared. One is called PAGAN and was developed by the same team as PRANK.[31] The other is ProGraphMSA, developed by Szalkowski.[32] Both software packages were developed independently but share common features, notably the use of graph algorithms to improve the recognition of non-homologous regions, and an improvement in code making these programs faster than PRANK.
Motif finding, also known as profile analysis, is a method of locating sequence motifs in global MSAs that is both a means of producing a better MSA and a means of producing a scoring matrix for use in searching other sequences for similar motifs. A variety of methods for isolating the motifs have been developed, but all are based on identifying short, highly conserved patterns within the larger alignment and constructing a matrix similar to a substitution matrix that reflects the amino acid or nucleotide composition of each position in the putative motif. The alignment can then be refined using these matrices. In standard profile analysis, the matrix includes entries for each possible character as well as entries for gaps.[12] Alternatively, statistical pattern-finding algorithms can identify motifs as a precursor to an MSA rather than as a derivation. In many cases when the query set contains only a small number of sequences or contains only highly related sequences, pseudocounts are added to normalize the distribution reflected in the scoring matrix. In particular, this corrects zero-probability entries in the matrix to values that are small but nonzero.

Blocks analysis is a method of motif finding that restricts motifs to ungapped regions in the alignment. Blocks can be generated from an MSA or they can be extracted from unaligned sequences using a precalculated set of common motifs previously generated from known gene families.[33] Block scoring generally relies on the spacing of high-frequency characters rather than on the calculation of an explicit substitution matrix. Statistical pattern-matching has been implemented using both the expectation-maximization algorithm and the Gibbs sampler.
One of the most common motif-finding tools, named Multiple EM for Motif Elicitation (MEME), uses expectation maximization and hidden Markov methods to generate motifs that are then used as search tools by its companion MAST in the combined suite MEME/MAST.[34][35]

Non-coding DNA regions, especially transcription factor binding sites (TFBSs), are conserved but not necessarily evolutionarily related, and may have converged from non-common ancestors. Thus, the assumptions used to align protein sequences and DNA coding regions are inherently different from those that hold for TFBS sequences. Although it is meaningful to align DNA coding regions for homologous sequences using mutation operators, alignment of binding site sequences for the same transcription factor cannot rely on evolutionarily related mutation operations. Similarly, the evolutionary operator of point mutations can be used to define an edit distance for coding sequences, but this has little meaning for TFBS sequences because any sequence variation has to maintain a certain level of specificity for the binding site to function. This becomes particularly important when trying to align known TFBS sequences to build supervised models to predict unknown locations of the same TFBS. Hence, multiple sequence alignment methods need to adjust the underlying evolutionary hypothesis and the operators used, as in published work incorporating neighbouring base thermodynamic information[36] to align binding sites by searching for the lowest-thermodynamic-energy alignment that conserves the specificity of the binding site.

Standard optimization techniques in computer science, which were inspired by, but do not directly reproduce, physical processes, have also been used in an attempt to more efficiently produce quality MSAs. One such technique, genetic algorithms, has been used for MSA production in an attempt to broadly simulate the hypothesized evolutionary process that gave rise to the divergence in the query set.
The method works by breaking a series of possible MSAs into fragments and repeatedly rearranging those fragments with the introduction of gaps at varying positions. A general objective function is optimized during the simulation, most generally the "sum of pairs" maximization function introduced in dynamic programming-based MSA methods. A technique for protein sequences has been implemented in the software program SAGA (Sequence Alignment by Genetic Algorithm)[37] and its equivalent in RNA is called RAGA.[38]

In the technique of simulated annealing, an existing MSA produced by another method is refined by a series of rearrangements designed to find better regions of alignment space than the one the input alignment already occupies. Like the genetic algorithm method, simulated annealing maximizes an objective function like the sum-of-pairs function. Simulated annealing uses a metaphorical "temperature factor" that determines the rate at which rearrangements proceed and the likelihood of each rearrangement; typical usage alternates periods of high rearrangement rates with relatively low likelihood (to explore more distant regions of alignment space) with periods of lower rates and higher likelihoods to more thoroughly explore local minima near the newly "colonized" regions. This approach has been implemented in the program MSASA (Multiple Sequence Alignment by Simulated Annealing).[39]

Mathematical programming and in particular mixed integer programming models are another approach to solve MSA problems. The advantage of such optimization models is that they can be used to find the optimal MSA solution more efficiently than the traditional DP approach. This is due, in part, to the applicability of decomposition techniques for mathematical programs, where the MSA model is decomposed into smaller parts and iteratively solved until the optimal solution is found.
Example algorithms used to solve mixed integer programming models of MSA include branch and price[40] and Benders decomposition.[3] Although exact approaches are computationally slow compared to heuristic algorithms for MSA, they are guaranteed to reach the optimal solution eventually, even for large-size problems.

In January 2017, D-Wave Systems announced that its qbsolv open-source quantum computing software had been successfully used to find a faster solution to the MSA problem.[41]

The necessary use of heuristics for multiple alignment means that for an arbitrary set of proteins, there is always a good chance that an alignment will contain errors. For example, an evaluation of several leading alignment programs using the BAliBASE benchmark found that at least 24% of all pairs of aligned amino acids were incorrectly aligned.[2] These errors can arise because of unique insertions into one or more regions of sequences, or through some more complex evolutionary process leading to proteins that do not align easily by sequence alone. As the number of sequences and their divergence increase, many more errors will be made simply because of the heuristic nature of MSA algorithms. Multiple sequence alignment viewers enable alignments to be visually reviewed, often by inspecting the quality of alignment for annotated functional sites on two or more sequences. Many also enable the alignment to be edited to correct these (usually minor) errors, in order to obtain an optimal 'curated' alignment suitable for use in phylogenetic analysis or comparative modeling.[42]

However, as the number of sequences increases, and especially in genome-wide studies that involve many MSAs, it is impossible to manually curate all alignments. Furthermore, manual curation is subjective. And finally, even the best expert cannot confidently align the more ambiguous cases of highly diverged sequences. In such cases it is common practice to use automatic procedures to exclude unreliably aligned regions from the MSA.
For the purpose of phylogeny reconstruction (see below), the Gblocks program is widely used to remove alignment blocks suspected of low quality, according to various cutoffs on the number of gapped sequences in alignment columns.[43] However, these criteria may excessively filter out regions with insertion/deletion events that may still be aligned reliably, and these regions might be desirable for other purposes such as detection of positive selection. A few alignment algorithms output site-specific scores that allow the selection of high-confidence regions. Such a service was first offered by the SOAP program,[44] which tests the robustness of each column to perturbation in the parameters of the popular alignment program ClustalW. The T-Coffee program[45] uses a library of alignments in the construction of the final MSA, and its output MSA is colored according to confidence scores that reflect the agreement between different alignments in the library regarding each aligned residue. Its extension, Transitive Consistency Score (TCS), uses T-Coffee libraries of pairwise alignments to evaluate any third-party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy.[46][47] Another alignment program that can output an MSA with confidence scores is FSA,[48] which uses a statistical model that allows calculation of the uncertainty in the alignment. The HoT (Heads-Or-Tails) score can be used as a measure of site-specific alignment uncertainty due to the existence of multiple co-optimal solutions.[49] The GUIDANCE program[50] calculates a similar site-specific confidence measure based on the robustness of the alignment to uncertainty in the guide tree that is used in progressive alignment programs. An alternative, more statistically justified approach to assessing alignment uncertainty is the use of probabilistic evolutionary models for joint estimation of phylogeny and alignment.
A Bayesian approach allows calculation of posterior probabilities of the estimated phylogeny and alignment, which is a measure of the confidence in these estimates. In this case, a posterior probability can be calculated for each site in the alignment. Such an approach was implemented in the program BAli-Phy.[51]

Free programs are available for visualization of multiple sequence alignments, for example Jalview and UGENE.

Multiple sequence alignments can be used to create a phylogenetic tree.[52] This is possible for two reasons. The first is that functional domains known in annotated sequences can be used for alignment in non-annotated sequences. The second is that conserved regions known to be functionally important can be found. This makes it possible for multiple sequence alignments to be used to analyze and find evolutionary relationships through homology between sequences. Point mutations and insertion or deletion events (called indels) can be detected.

Multiple sequence alignments can also be used to identify functionally important sites, such as binding sites, active sites, or sites corresponding to other key functions, by locating conserved domains. When looking at multiple sequence alignments, it is useful to consider several aspects of the sequences being compared: identity, similarity, and homology. Identity means that the sequences have identical residues at their respective positions. Similarity means that the sequences being compared have quantitatively similar residues; for example, in nucleotide sequences, pyrimidines are considered similar to each other, as are purines. Similarity ultimately bears on homology: the more similar two sequences are, the closer they are to being homologous, and this similarity can help to establish common ancestry.[52]
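The distinction between identity and similarity can be made concrete with a small sketch. This is illustrative only: it scores two already-aligned nucleotide sequences, treating residues as "similar" exactly when they match or are both purines or both pyrimidines, as described above; the function name and the simple gap handling are assumptions for the example.

```python
# Illustrative: percent identity vs. a crude purine/pyrimidine
# similarity between two aligned nucleotide sequences.
PURINES, PYRIMIDINES = set("AG"), set("CT")

def identity_and_similarity(a, b, gap_char="-"):
    assert len(a) == len(b), "sequences must come from the same alignment"
    # Ignore columns where either sequence has a gap.
    pairs = [(x, y) for x, y in zip(a, b) if gap_char not in (x, y)]
    ident = sum(x == y for x, y in pairs)
    simil = sum(
        x == y
        or (x in PURINES and y in PURINES)
        or (x in PYRIMIDINES and y in PYRIMIDINES)
        for x, y in pairs
    )
    n = len(pairs)
    return ident / n, simil / n

# G/A and A/G differ but are both purines: identity 60%, similarity 100%.
print(identity_and_similarity("ACGT-A", "ACAT-G"))  # → (0.6, 1.0)
```

For amino acid sequences, the same idea is applied with substitution matrices (e.g. grouping chemically similar residues) rather than the two-class nucleotide grouping used here.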
https://en.wikipedia.org/wiki/Multiple_sequence_alignment
Critical Assessment of Structure Prediction (CASP), sometimes called Critical Assessment of Protein Structure Prediction, is a community-wide, worldwide experiment for protein structure prediction that has taken place every two years since 1994.[1][2] CASP provides research groups with an opportunity to objectively test their structure prediction methods and delivers an independent assessment of the state of the art in protein structure modeling to the research community and software users. Even though the primary goal of CASP is to help advance the methods of identifying a protein's three-dimensional structure from its amino acid sequence, many view the experiment more as a "world championship" in this field of science. More than 100 research groups from all over the world participate in CASP on a regular basis, and it is not uncommon for entire groups to suspend their other research for months while they focus on getting their servers ready for the experiment and on performing the detailed predictions.

In order to ensure that no predictor can have prior information about a protein's structure that would put them at an advantage, it is important that the experiment be conducted in a double-blind fashion: neither predictors nor the organizers and assessors know the structures of the target proteins at the time when predictions are made. Targets for structure prediction are either structures soon to be solved by X-ray crystallography or NMR spectroscopy, or structures that have just been solved (mainly by one of the structural genomics centers) and are kept on hold by the Protein Data Bank. If the given sequence is found to be related by common descent to a protein sequence of known structure (called a template), comparative protein modeling may be used to predict the tertiary structure. Templates can be found using sequence alignment methods (e.g. BLAST or HHsearch) or protein threading methods, which are better at finding distantly related templates.
Otherwise, de novo protein structure prediction must be applied (e.g. Rosetta), which is much less reliable but can sometimes yield models with the correct fold (usually, for proteins of less than 100-150 amino acids). Truly new folds are becoming quite rare among the targets,[3][4] making that category smaller than desirable.

The primary method of evaluation[5] is a comparison of the predicted model α-carbon positions with those in the target structure. The comparison is shown visually by cumulative plots of distances between pairs of equivalent α-carbons in the alignment of the model and the structure, such as shown in the figure (a perfect model would stay at zero all the way across), and is assigned a numerical score, GDT-TS (Global Distance Test Total Score), describing the percentage of well-modeled residues in the model with respect to the target.[6] Free modeling (template-free, or de novo) is also evaluated visually by the assessors, since the numerical scores do not work as well for finding loose resemblances in the most difficult cases.[7] High-accuracy template-based predictions were evaluated in CASP7 by whether they worked for molecular-replacement phasing of the target crystal structure,[8] with successes followed up later,[9] and by full-model (not just α-carbon) model quality and full-model match to the target in CASP8.[10]

Evaluation of the results is carried out in several prediction categories, with tertiary structure prediction further subdivided into subcategories. Starting with CASP7, categories have been redefined to reflect developments in methods. The 'template based modeling' category includes all former comparative modeling, homologous fold based models and some analogous fold based models. The 'template free modeling (FM)' category includes models of proteins with previously unseen folds and hard analogous fold based models. Due to the limited number of template-free targets (they are quite rare), the so-called CASP ROLL was introduced in 2011.
This continuous (rolling) CASP experiment aims at a more rigorous evaluation of template-free prediction methods through assessment of a larger number of targets outside of the regular CASP prediction season. Unlike LiveBench and EVA, this experiment is in the blind-prediction spirit of CASP, i.e. all the predictions are made on yet unknown structures.[11]

The CASP results are published in special supplement issues of the scientific journal Proteins, all of which are accessible through the CASP website.[12] A lead article in each of these supplements describes specifics of the experiment,[13][14] while a closing article evaluates progress in the field.[15][16]

In December 2018, CASP13 made headlines when it was won by AlphaFold, an artificial intelligence program created by DeepMind.[17] In November 2020, an improved version 2 of AlphaFold won CASP14.[18] According to CASP co-founder John Moult, AlphaFold scored around 90 on a 100-point scale of prediction accuracy for moderately difficult protein targets.[19] AlphaFold was made open source in 2021, and in CASP15 in 2022, while DeepMind did not enter, virtually all of the high-ranking teams used AlphaFold or modifications of it.[20]

Automated assessments for CASP15 (2022)
Automated assessments for CASP14 (2020)
Automated assessments for CASP13 (2018)
Automated assessments for CASP12 (2016)
Automated assessments for CASP11 (2014)
Automated assessments for CASP10 (2012)
Automated assessments for CASP9 (2010)
Automated assessments for CASP8 (2008)
Automated assessments for CASP7 (2006)
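The GDT-TS evaluation described above can be sketched numerically. GDT-TS is the average, over distance cutoffs of 1, 2, 4 and 8 Å, of the percentage of model α-carbons lying within that cutoff of the corresponding target α-carbon. The sketch below assumes the model has already been optimally superposed on the target and that per-residue Cα distances are given; the real evaluation searches over superpositions and sequence-dependent alignments, so this is only the final scoring step.

```python
# Sketch of the GDT-TS scoring step: average over four distance
# cutoffs of the fraction of residues whose model Cα lies within the
# cutoff of the target Cα (superposition assumed already done).
def gdt_ts(ca_distances):
    """ca_distances: per-residue model-to-target Cα distances in Å."""
    n = len(ca_distances)
    fractions = [
        sum(d <= cutoff for d in ca_distances) / n
        for cutoff in (1.0, 2.0, 4.0, 8.0)
    ]
    return 100.0 * sum(fractions) / len(fractions)

# Four residues at 0.5, 1.5, 3.0 and 9.0 Å: fractions 25%, 50%, 75%, 75%.
print(gdt_ts([0.5, 1.5, 3.0, 9.0]))  # → 56.25
```

A perfect model scores 100; scores around 90, as reported for AlphaFold 2 above, mean nearly all residues are modeled within tight distance cutoffs of the experimental structure.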
https://en.wikipedia.org/wiki/CASP