In mathematics and computer science, optimal radix choice is the problem of choosing the base, or radix, that is best suited for representing numbers. Various proposals have been made to quantify the relative costs of using different radices in representing numbers, especially in computer systems. One formula is the number of digits needed to express a number in that base, multiplied by the base (the number of possible values each digit could have). This expression also arises in questions regarding organizational structure, networking, and other fields.
The cost of representing a number N in a given base b can be defined as

E(b, N) = b \lfloor \log_b(N) + 1 \rfloor,

where we use the floor function ⌊ ⌋ and the base-b logarithm log_b.
If both b and N are positive integers, then the quantity E(b, N) is equal to the number of digits needed to express the number N in base b, multiplied by base b.[1] This quantity thus measures the cost of storing or processing the number N in base b if the cost of each "digit" is proportional to b. A base with a lower average E(b, N) is therefore, in some senses, more efficient than a base with a higher average value.
For example, 100 in decimal has three digits, so its cost of representation is 10 × 3 = 30, while its binary representation has seven digits (1100100₂), so the analogous calculation gives 2 × 7 = 14. Likewise, in base 3 its representation has five digits (10201₃), for a value of 3 × 5 = 15, and in base 36 (2S₃₆) one finds 36 × 2 = 72.
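These per-base costs are easy to compute directly; a minimal Python sketch (the helper name radix_cost is ours, not standard terminology):

```python
def radix_cost(b: int, n: int) -> int:
    """Radix economy E(b, N): the base b times the number of base-b digits of N."""
    digits = 0
    while n > 0:
        n //= b          # repeated integer division avoids floating-point log errors
        digits += 1
    return b * digits

for base in (10, 2, 3, 36):
    print(base, radix_cost(base, 100))   # 30, 14, 15, 72, as in the worked example
```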
If the number is imagined to be represented by a combination lock or a tally counter, in which each wheel has b digit faces, labelled 0, 1, ..., b − 1, and there are ⌊log_b(N) + 1⌋ wheels, then E(b, N) is the total number of digit faces needed to represent any integer from 0 to N inclusive.
The quantity E(b, N) for large N can be approximated as follows:

E(b, N) \approx b \log_b N = \frac{b}{\ln b} \ln N.

The asymptotically best value is obtained for base 3, since b / ln(b) attains its minimum over the positive integers at b = 3:

\frac{2}{\ln 2} \approx 2.89, \qquad \frac{3}{\ln 3} \approx 2.73, \qquad \frac{4}{\ln 4} \approx 2.89.

For base 10, we have:

\frac{10}{\ln 10} \approx 4.34.
The closely related continuous optimization problem of finding the maximum of the function f(x) = x^{1/x}, or equivalently, on taking logs and inverting, minimizing x / ln x for continuous rather than integer values of x, was posed and solved by Jakob Steiner in 1850.[2] The solution is Euler's number e ≈ 2.71828, the base of the natural logarithm, for which e / ln e = e ≈ 2.71828. Translating this solution back to Steiner's formulation, e^{1/e} ≈ 1.44467 is the unique maximum of f(x) = x^{1/x}.[3]
This analysis has sometimes been used to argue that, in some sense, "base e is the most economical base for the representation and storage of numbers", despite the difficulty in understanding what that might mean in practice.[4]
This topic appears in Underwood Dudley's Mathematical Cranks. One of the eccentrics discussed in the book argues that e is the best base, based on a muddled understanding of Steiner's calculus problem and with a greatly exaggerated sense of how important the choice of radix is.[5]
The values of E(b, N) for bases b1 and b2 may be compared for a large value of N:

\frac{E(b_1, N)}{E(b_2, N)} \approx \frac{b_1 \ln b_2}{b_2 \ln b_1}.

Choosing e for b2 gives

\frac{E(b, N)}{E(e, N)} \approx \frac{b}{e \ln b}.
The average E(b, N) of various bases up to several arbitrary numbers (avoiding proximity to powers of 2 through 12 and e) is given in the table below. Also shown are the values relative to that of base e. E(1, N) of any number N is just N, making unary the most economical for the first few integers, but this no longer holds as N climbs to infinity.
(Table: average E(b, N) for N = 1 to 6, N = 1 to 43, N = 1 to 182, and N = 1 to 5329, for various bases; the numeric entries were not preserved in this extract.)
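Averages like those in the table can be recomputed directly; a sketch that also reports which integer base from 2 to 12 minimizes the average for each cutoff (helper names are ours):

```python
def radix_cost(b: int, n: int) -> int:
    """E(b, N): base times digit count, as defined above."""
    digits = 0
    while n > 0:
        n //= b
        digits += 1
    return b * digits

def average_cost(b: int, n_max: int) -> float:
    """Mean of E(b, N) over N = 1 .. n_max."""
    return sum(radix_cost(b, n) for n in range(1, n_max + 1)) / n_max

for n_max in (6, 43, 182, 5329):
    best = min(range(2, 13), key=lambda b: average_cost(b, n_max))
    print(n_max, best, round(average_cost(best, n_max), 2))
```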
One result of the relative economy of base 3 is that ternary search trees offer an efficient strategy for retrieving elements of a database.[6] A similar analysis suggests that the optimum design of a large telephone menu system to minimise the number of menu choices that the average customer must listen to (i.e. the product of the number of choices per menu and the number of menu levels) is to have three choices per menu.[1]
In a d-ary heap, a priority queue data structure based on d-ary trees, the worst-case number of comparisons per operation in a heap containing n elements is d log_d n (up to lower-order terms), the same formula as used above. It has been suggested that choosing d = 3 or d = 4 may offer optimal performance in practice.[7]
Brian Hayes suggests that E(b, N) may be the appropriate measure for the complexity of an interactive voice response menu: in a tree-structured phone menu with n outcomes and r choices per step, the time to traverse the menu is proportional to the product of r (the time to present the choices at each step) with log_r n (the number of choices that need to be made to determine the outcome). From this analysis, the optimal number of choices per step in such a menu is three.[1]
The 1950 reference High-Speed Computing Devices describes a particular situation using contemporary technology. Each digit of a number would be stored as the state of a ring counter composed of several triodes. Whether vacuum tubes or thyratrons, the triodes were the most expensive part of a counter. For small radices r less than about 7, a single digit required r triodes.[8] (Larger radices required 2r triodes arranged as r flip-flops, as in ENIAC's decimal counters.)[9]
So the number of triodes in a numerical register with n digits was rn. To represent numbers up to 10⁶, the following numbers of tubes were needed (computed as r × ⌈log_r 10⁶⌉): radix 2, 2 × 20 = 40; radix 3, 3 × 13 = 39; radix 4, 4 × 10 = 40; radix 5, 5 × 9 = 45; radix 10, 10 × 6 = 60 (assuming, optimistically, 10 triodes per decimal ring).
The authors conclude,
Under these assumptions, the radix 3, on the average, is the most economical choice, closely followed by radices 2 and 4. These assumptions are, of course, only approximately valid, and the choice of 2 as a radix is frequently justified on more complete analysis. Even with the optimistic assumption that 10 triodes will yield a decimal ring, radix 10 leads to about one and one-half times the complexity of radix 2, 3, or 4. This is probably significant despite the shallow nature of the argument used here.[10]
Source: https://en.wikipedia.org/wiki/Radix_economy
In computer science, radix sort is a non-comparative sorting algorithm. It avoids comparison by creating and distributing elements into buckets according to their radix. For elements with more than one significant digit, this bucketing process is repeated for each digit, while preserving the ordering of the prior step, until all digits have been considered. For this reason, radix sort has also been called bucket sort and digital sort.
Radix sort can be applied to data that can be sorted lexicographically, be they integers, words, punch cards, playing cards, or the mail.
Radix sort dates back as far as 1887 to the work of Herman Hollerith on tabulating machines.[1] Radix sorting algorithms came into common use as a way to sort punched cards as early as 1923.[2]
The first memory-efficient computer algorithm for this sorting method was developed in 1954 at MIT by Harold H. Seward. Computerized radix sorts had previously been dismissed as impractical because of the perceived need for variable allocation of buckets of unknown size. Seward's innovation was to use a linear scan to determine the required bucket sizes and offsets beforehand, allowing for a single static allocation of auxiliary memory. The linear scan is closely related to Seward's other algorithm, counting sort.
In the modern era, radix sorts are most commonly applied to collections of binary strings and integers. Radix sort has been shown in some benchmarks to be faster than other, more general-purpose sorting algorithms, sometimes 50% to three times faster.[3][4][5]
Radix sorts can be implemented to start at either the most significant digit (MSD) or least significant digit (LSD). For example, with 1234, one could start with 1 (MSD) or 4 (LSD).
LSD radix sorts typically use the following sorting order: short keys come before longer keys, and then keys of the same length are sorted lexicographically. This coincides with the normal order of integer representations, like the sequence [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. LSD sorts are generally stable sorts.
MSD radix sorts are most suitable for sorting strings or fixed-length integer representations. A sequence like [b, c, e, d, f, g, ba] would be sorted as [b, ba, c, d, e, f, g]. If lexicographic ordering is used to sort variable-length integers in base 10, then numbers from 1 to 10 would be output as [1, 10, 2, 3, 4, 5, 6, 7, 8, 9], as if the shorter keys were left-justified and padded on the right with blank characters to make them as long as the longest key. MSD sorts are not necessarily stable if the original ordering of duplicate keys must always be maintained.
Other than the traversal order, MSD and LSD sorts differ in their handling of variable-length input.
LSD sorts can group by length, radix sort each group, then concatenate the groups in size order. MSD sorts must effectively 'extend' all shorter keys to the size of the largest key and sort them accordingly, which can be more complicated than the grouping required by LSD.
However, MSD sorts are more amenable to subdivision and recursion. Each bucket created by an MSD step can itself be radix sorted using the next most significant digit, without reference to any other buckets created in the previous step. Once the last digit is reached, concatenating the buckets is all that is required to complete the sort.
Input list: [170, 45, 75, 90, 2, 802, 2, 66]

Starting from the rightmost (last) digit, sort the numbers based on that digit: [170, 90, 2, 802, 2, 45, 75, 66]

Sorting by the next left digit: [2, 802, 2, 45, 66, 170, 75, 90]

And finally by the leftmost digit (treating absent digits as 0): [2, 2, 45, 66, 75, 90, 170, 802]
Each step requires just a single pass over the data, since each item can be placed in its bucket without comparison with any other element.
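A compact Python sketch of the LSD procedure just illustrated, using decimal digits for readability (production implementations usually bucket on binary digits):

```python
def lsd_radix_sort(keys: list[int]) -> list[int]:
    """Sort non-negative integers by repeated stable bucketing on each
    decimal digit, least significant digit first."""
    if not keys:
        return keys
    place = 1
    while place <= max(keys):
        buckets = [[] for _ in range(10)]
        for k in keys:                               # single pass: no comparisons,
            buckets[(k // place) % 10].append(k)     # just digit extraction
        keys = [k for b in buckets for k in b]       # concatenate, preserving order
        place *= 10
    return keys

print(lsd_radix_sort([170, 45, 75, 90, 2, 802, 2, 66]))
# [2, 2, 45, 66, 75, 90, 170, 802]
```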
Some radix sort implementations allocate space for buckets by first counting the number of keys that belong in each bucket before moving keys into those buckets. The number of times that each digit occurs is stored in an array.
Although it is always possible to pre-determine the bucket boundaries using counts, some implementations opt to use dynamic memory allocation instead.
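A sketch of a single counting-based pass as just described: digit counts are turned into starting offsets by a prefix sum, so keys can be scattered into one pre-allocated output array (function name is ours):

```python
def counting_pass(keys: list[int], place: int) -> list[int]:
    """One stable pass of LSD radix sort using counts and prefix sums
    instead of dynamically growing buckets."""
    counts = [0] * 10
    for k in keys:                         # 1. histogram of the current digit
        counts[(k // place) % 10] += 1
    offsets = [0] * 10
    for d in range(1, 10):                 # 2. exclusive prefix sum: where
        offsets[d] = offsets[d - 1] + counts[d - 1]   #    each digit's run begins
    out = [0] * len(keys)
    for k in keys:                         # 3. scatter each key to its slot
        d = (k // place) % 10
        out[offsets[d]] = k
        offsets[d] += 1
    return out
```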
Input list, fixed-width numeric strings with leading zeros: [170, 045, 075, 090, 002, 802, 002, 066]

First digit, with brackets indicating buckets: [{045, 075, 090, 002, 002, 066}, {170}, {802}]

Next digit: [{002, 002}, {045}, {066}, {075}, {090}], [{170}], [{802}]

Final digit: [{002}, {002}, {045}, {066}, {075}, {090}], [{170}], [{802}]

All that remains is concatenation: [002, 002, 045, 066, 075, 090, 170, 802]
Radix sort operates in O(n·w) time, where n is the number of keys and w is the key length. LSD variants can achieve a lower bound for w of 'average key length' when splitting variable-length keys into groups, as discussed above.
Optimized radix sorts can be very fast when working in a domain that suits them.[6] They are constrained to lexicographic data, but for many practical applications this is not a limitation. Large key sizes can hinder LSD implementations when the induced number of passes becomes the bottleneck.[2]
Binary MSD radix sort, also called binary quicksort, can be implemented in-place by splitting the input array into two bins: the 0s bin and the 1s bin. The 0s bin is grown from the beginning of the array, whereas the 1s bin is grown from the end of the array. The 0s bin boundary is placed before the first array element. The 1s bin boundary is placed after the last array element. The most significant bit of the first array element is examined. If this bit is a 1, then the first element is swapped with the element in front of the 1s bin boundary (the last element of the array), and the 1s bin is grown by one element by decrementing the 1s boundary array index. If this bit is a 0, then the first element remains at its current location, and the 0s bin is grown by one element. The next array element examined is the one in front of the 0s bin boundary (i.e. the first element that is not in the 0s bin or the 1s bin). This process continues until the 0s bin and the 1s bin reach each other. The 0s bin and the 1s bin are then sorted recursively based on the next bit of each array element. Recursive processing continues until the least significant bit has been used for sorting.[7][8] Handling signed two's complement integers requires treating the most significant bit with the opposite sense, followed by unsigned treatment of the rest of the bits.
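A minimal Python sketch of this in-place binary partition-and-recurse scheme, assuming unsigned integer keys:

```python
def binary_msd_sort(a: list[int], lo: int, hi: int, bit: int) -> None:
    """In-place binary MSD radix sort ("binary quicksort") of a[lo:hi],
    partitioning on the given bit and recursing on the two bins."""
    if hi - lo <= 1 or bit < 0:
        return
    i, j = lo, hi                     # i: end of the 0s bin; j: start of the 1s bin
    while i < j:
        if (a[i] >> bit) & 1:         # a 1-bit: swap to the back, grow the 1s bin
            j -= 1
            a[i], a[j] = a[j], a[i]
        else:                         # a 0-bit: leave in place, grow the 0s bin
            i += 1
    binary_msd_sort(a, lo, i, bit - 1)   # recurse on each bin with the next bit
    binary_msd_sort(a, j, hi, bit - 1)

data = [170, 45, 75, 90, 2, 802, 2, 66]
binary_msd_sort(data, 0, len(data), 9)   # bit 9 suffices for values below 1024
print(data)                              # [2, 2, 45, 66, 75, 90, 170, 802]
```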
In-place MSD binary-radix sort can be extended to a larger radix and retain the in-place capability. Counting sort is used to determine the size of each bin and their starting index. Swapping is used to place the current element into its bin, followed by expanding the bin boundary. As the array elements are scanned the bins are skipped over and only elements between bins are processed, until the entire array has been processed and all elements end up in their respective bins. The number of bins is the same as the radix used, e.g. 16 bins for 16-radix. Each pass is based on a single digit (e.g. 4 bits per digit in the case of 16-radix), starting from the most significant digit. Each bin is then processed recursively using the next digit, until all digits have been used for sorting.[9][10]
Neither in-place binary-radix sort nor the n-bit-radix sort discussed in the paragraphs above is a stable algorithm.
MSD radix sort can be implemented as a stable algorithm, but requires the use of a memory buffer of the same size as the input array. This extra memory allows the input buffer to be scanned from the first array element to the last, and the array elements to be moved to the destination bins in the same order. Thus, equal elements will be placed in the memory buffer in the same order they were in the input array. The MSD-based algorithm uses the extra memory buffer as the output on the first level of recursion, but swaps the input and output on the next level of recursion, to avoid the overhead of copying the output result back to the input buffer. Each of the bins is recursively processed, as is done for the in-place MSD radix sort. After the sort by the last digit has been completed, the output buffer is checked to see if it is the original input array, and if it is not, a single copy is performed. If the digit size is chosen such that the key size divided by the digit size is an even number, the copy at the end is avoided.[11]
Radix sort, such as the two-pass method where counting sort is used during the first pass of each level of recursion, has a large constant overhead. Thus, when the bins get small, other sorting algorithms should be used, such as insertion sort. A good implementation of insertion sort is fast for small arrays, stable, in-place, and can significantly speed up radix sort.
This recursive sorting algorithm has particular application to parallel computing, as each of the bins can be sorted independently. In this case, each bin is passed to the next available processor. A single processor would be used at the start (the most significant digit). By the second or third digit, all available processors would likely be engaged. Ideally, as each subdivision is fully sorted, fewer and fewer processors would be utilized. In the worst case, all of the keys will be identical or nearly identical to each other, with the result that there will be little to no advantage to using parallel computing to sort the keys.
In the top level of recursion, the opportunity for parallelism is in the counting sort portion of the algorithm. Counting is highly parallel, amenable to the parallel_reduce pattern, and splits the work well across multiple cores until reaching the memory bandwidth limit. This portion of the algorithm has data-independent parallelism. Processing each bin in subsequent recursion levels is data-dependent, however. For example, if all keys were of the same value, then there would be only a single bin with any elements in it, and no parallelism would be available. For random inputs all bins would be nearly equally populated and a large amount of parallelism opportunity would be available.[12]
There are faster parallel sorting algorithms available; for example, the sorts of optimal complexity O(log(n)) are those of the Three Hungarians and Richard Cole,[13][14] and Batcher's bitonic merge sort has an algorithmic complexity of O(log²(n)), all of which have a lower algorithmic time complexity than radix sort on a CREW-PRAM. The fastest known PRAM sorts were described in 1991 by David M. W. Powers with a parallelized quicksort that can operate in O(log(n)) time on a CRCW-PRAM with n processors by performing partitioning implicitly, as well as a radix sort that operates using the same trick in O(k), where k is the maximum key length.[15] However, neither the PRAM architecture nor a single sequential processor can actually be built in a way that will scale without the number of constant fan-out gate delays per cycle increasing as O(log(n)), so that in effect a pipelined version of Batcher's bitonic mergesort and the O(log(n)) PRAM sorts are all O(log²(n)) in terms of clock cycles, with Powers acknowledging that Batcher's would have a lower constant in terms of gate delays than his parallel quicksort and radix sort, or Cole's merge sort, for a keylength-independent sorting network of O(n log²(n)).[16]
Radix sorting can also be accomplished by building a tree (or radix tree) from the input set and doing a pre-order traversal. This is similar to the relationship between heapsort and the heap data structure. This can be useful for certain data types; see burstsort.
Source: https://en.wikipedia.org/wiki/Radix_sort
Non-standard positional numeral systems here designates numeral systems that may loosely be described as positional systems, but that do not entirely comply with the following description of standard positional systems:
This article summarizes facts on some non-standard positional numeral systems. In most cases, the polynomial form in the description of standard systems still applies.
Some historical numeral systems may be described as non-standard positional numeral systems. E.g., the sexagesimal Babylonian notation and the Chinese rod numerals, which can be classified as standard systems of base 60 and 10, respectively, counting the space representing zero as a numeral, can also be classified as non-standard systems, more specifically, mixed-base systems with unary components, considering the primitive repeated glyphs making up the numerals.
However, most of the non-standard systems listed below have never been intended for general use, but were devised by mathematicians or engineers for special academic or technical use.
A bijective numeral system with base b uses b different numerals to represent all non-negative integers. However, the numerals have values 1, 2, 3, etc. up to and including b, whereas zero is represented by an empty digit string. For example, it is possible to have decimal without a zero.
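A sketch of conversion to bijective base-b notation (function name and digit alphabet are ours; this alphabet only covers b ≤ 10):

```python
DIGITS = "0123456789A"  # the digit value 10 rendered as 'A' in bijective decimal

def to_bijective(n: int, b: int = 10) -> str:
    """Bijective base-b string for a non-negative integer: digit values
    run 1..b, and zero is the empty string."""
    out = []
    while n > 0:
        n, r = divmod(n - 1, b)      # shift by 1 so the remainder lies in 0..b-1
        out.append(DIGITS[r + 1])    # store the digit value r + 1, i.e. 1..b
    return "".join(reversed(out))

print([to_bijective(n) for n in range(1, 12)])
# ['1', '2', ..., '9', 'A', '11']: ten is a single digit, eleven is '11'
```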
Unary is the bijective numeral system with base b = 1. In unary, one numeral is used to represent all positive integers. The value of the digit string pqrs given by the polynomial form can be simplified into p + q + r + s since bⁿ = 1 for all n. Non-standard features of this system include:
In some systems, while the base is a positive integer, negative digits are allowed. Non-adjacent form is a particular system where the base is b = 2. In the balanced ternary system, the base is b = 3, and the numerals have the values −1, 0 and +1 (rather than 0, 1 and 2 as in the standard ternary system, or 1, 2 and 3 as in the bijective ternary system).
The reflected binary code, also known as the Gray code, is closely related to binary numbers, but some bits are inverted, depending on the parity of the higher-order bits.
Cistercian numerals are a decimal positional numeral system, but the positions are not aligned as in common decimal notation; instead, they are attached to the top-right, top-left, bottom-right and bottom-left of a vertical stem, respectively, and thus limited to four in number (so only integers from 0 to 9999 can be represented). The system has close similarities to standard positional numeral systems, but may also be compared to e.g. Greek numerals, where different sets of symbols (in fact, Greek letters) are used for the ones, tens, hundreds and thousands, likewise giving an upper limit on the numbers that can be represented.
Similarly, in computers, e.g. the long integer format is a standard binary system (apart from the sign bit), but it has a limited number of positions, and the physical locations for the representations of the digits may not be aligned. In an analog odometer and in an abacus, the decimal digits are aligned but limited in number.
A few positional systems have been suggested in which the base b is not a positive integer.
Negative-base systems include negabinary, negaternary and negadecimal, with bases −2, −3, and −10 respectively; in base −b the number of different numerals used is b. Due to the properties of negative numbers raised to powers, all integers, positive and negative, can be represented without a sign.
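A sketch of conversion to a negative base; the only change from ordinary base conversion is forcing each remainder to be non-negative (function name is ours):

```python
def to_negabase(n: int, b: int = -2) -> str:
    """Digits of n in negative base b (b <= -2); works for any integer n,
    positive or negative, with no sign character needed."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, b)
        if r < 0:            # force the remainder into 0..|b|-1;
            r -= b           # r -= b adds |b| because b is negative,
            n += 1           # and the quotient is bumped to compensate
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negabase(6))    # '11010' in base -2: 16 - 8 - 2 = 6
```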
In a purely imaginary base bi system, where b is an integer larger than 1 and i the imaginary unit, the standard set of digits consists of the b² numbers from 0 to b² − 1. It can be generalized to other complex bases, giving rise to the complex-base systems.
In non-integer bases, the number of different numerals used clearly cannot be b. Instead, the numerals 0 to ⌊b⌋ are used. For example, golden ratio base (phinary) uses the 2 different numerals 0 and 1.
It is sometimes convenient to consider positional numeral systems where the weights associated with the positions do not form a geometric sequence 1, b, b², b³, etc., starting from the least significant position, as given in the polynomial form. Examples include:
Sequences where each weight is not an integer multiple of the previous weight may also be used, but then not every integer has a unique representation. For example, Fibonacci coding uses the digits 0 and 1, weighted according to the Fibonacci sequence (1, 2, 3, 5, 8, ...); a unique representation of all non-negative integers may be ensured by forbidding consecutive 1s. Binary-coded decimal (BCD) is a mixed-base system where bits (binary digits) are used to express decimal digits. E.g., in 1001 0011, each group of four bits may represent a decimal digit (in this example 9 and 3, so the eight bits combined represent decimal 93). The weights associated with these 8 positions are 80, 40, 20, 10, 8, 4, 2 and 1. Uniqueness is ensured by requiring that, in each group of four bits, if the first bit is 1, the next two must be 00.
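A sketch of the Fibonacci-weighted representation via the greedy (Zeckendorf) algorithm, which by construction never produces two adjacent 1s (function name is ours; digits are printed most significant first):

```python
def zeckendorf_digits(n: int) -> str:
    """Greedy Zeckendorf representation of n > 0 over the weights
    1, 2, 3, 5, 8, ...; the greedy choice guarantees no adjacent 1s."""
    fibs = [1, 2]
    while fibs[-1] <= n:              # build the weights up to n
        fibs.append(fibs[-1] + fibs[-2])
    out = []
    for f in reversed(fibs):
        if f <= n:
            out.append("1")
            n -= f
        elif out:                     # skip leading zeros
            out.append("0")
    return "".join(out)

print(zeckendorf_digits(10))   # '10010' = 8 + 2 over weights 8, 5, 3, 2, 1
```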
Asymmetric numeral systems are systems used in computer science where each digit can have a different base, usually a non-integer one. In these systems, not only do the bases differ from digit to digit, they can also be non-uniform and altered in an asymmetric way to encode information more efficiently. They are optimized for chosen non-uniform probability distributions of symbols, using on average approximately the Shannon entropy bits per symbol.[1]
Source: https://en.wikipedia.org/wiki/Non-standard_positional_numeral_systems
There are many different numeral systems, that is, writing systems for expressing numbers.
"Abaseis a natural number B whosepowers(B multiplied by itself some number of times) are specially designated within a numerical system."[1]: 38The term is not equivalent toradix, as it applies to all numerical notation systems (not just positional ones with a radix) and most systems of spoken numbers.[1]Some systems have two bases, a smaller (subbase) and a larger (base); an example is Roman numerals, which are organized by fives (V=5, L=50, D=500, the subbase) and tens (X=10, C=100, M=1,000, the base).
Chinese 零一二三四五六七八九十百千萬億 (Traditional Chinese); 〇一二三四五六七八九十百千万亿 (Simplified Chinese)
Bengali ০ ১ ২ ৩ ৪ ৫ ৬ ৭ ৮ ৯
Devanagari ० १ २ ३ ४ ५ ६ ७ ८ ९
Gujarati ૦ ૧ ૨ ૩ ૪ ૫ ૬ ૭ ૮ ૯
Kannada ೦ ೧ ೨ ೩ ೪ ೫ ೬ ೭ ೮ ೯
Malayalam ൦ ൧ ൨ ൩ ൪ ൫ ൬ ൭ ൮ ൯
Odia ୦ ୧ ୨ ୩ ୪ ୫ ୬ ୭ ୮ ୯
Punjabi ੦ ੧ ੨ ੩ ੪ ੫ ੬ ੭ ੮ ੯
Tamil ௦ ௧ ௨ ௩ ௪ ௫ ௬ ௭ ௮ ௯
Telugu ౦ ౧ ౨ ౩ ౪ ౫ ౬ ౭ ౮ ౯
Tibetan ༠ ༡ ༢ ༣ ༤ ༥ ༦ ༧ ༨ ༩
Urdu ۰ ۱ ۲ ۳ ۴ ۵ ۶ ۷ ۸ ۹
Numeral systems are classified here as to whether they use positional notation (also known as place-value notation), and further categorized by radix or base.
The common names are derived somewhat arbitrarily from a mix of Latin and Greek, in some cases including roots from both languages within a single name.[27] There have been some proposals for standardisation.[28]
Some email spam filters tag messages with a number of asterisks in an e-mail header such as X-Spam-Bar or X-SPAM-LEVEL. The larger the number, the more likely the email is considered spam.
All known numeral systems developed before the Babylonian numerals are non-positional,[65] as are many developed later, such as the Roman numerals. The French Cistercian monks created their own numeral system.
Source: https://en.wikipedia.org/wiki/List_of_numeral_systems
In statistics, the hypergeometric distribution is the discrete probability distribution generated by picking colored balls at random from an urn without replacement.
Various generalizations to this distribution exist for cases where the picking of colored balls is biased, so that balls of one color are more likely to be picked than balls of another color.
This can be illustrated by the following example. Assume that an opinion poll is conducted by calling random telephone numbers. Unemployed people are more likely to be home and answer the phone than employed people are. Therefore, unemployed respondents are likely to be over-represented in the sample. The probability distribution of employed versus unemployed respondents in a sample of n respondents can be described as a noncentral hypergeometric distribution.
The description of biased urn models is complicated by the fact that there is more than one noncentral hypergeometric distribution. Which distribution one gets depends on whether items (e.g., colored balls) are sampled one by one in a manner in which there is competition between the items, or they are sampled independently of one another. The name noncentral hypergeometric distribution has been used for both of these cases. The use of the same name for two different distributions came about because they were studied by two different groups of scientists with hardly any contact with each other.
Agner Fog (2007, 2008) suggested that the best way to avoid confusion is to use the name Wallenius' noncentral hypergeometric distribution for the distribution of a biased urn model in which a predetermined number of items are drawn one by one in a competitive manner, and to use the name Fisher's noncentral hypergeometric distribution for one in which items are drawn independently of each other, so that the total number of items drawn is known only after the experiment. The names refer to Kenneth Ted Wallenius and R. A. Fisher, who were the first to describe the respective distributions.
Fisher's noncentral hypergeometric distribution had previously been given the name extended hypergeometric distribution, but this name is rarely used in the scientific literature, except in handbooks that need to distinguish between the two distributions.
Wallenius' distribution can be explained as follows.
Assume that an urn contains m₁ red balls and m₂ white balls, totalling N = m₁ + m₂ balls. n balls are drawn at random from the urn, one by one, without replacement. Each red ball has the weight ω₁, and each white ball has the weight ω₂. We assume that the probability of taking a particular ball is proportional to its weight. The physical property that determines the odds may be something other than weight, such as size or slipperiness or some other factor, but it is convenient to use the word weight for the odds parameter.
The probability that the first ball picked is red is equal to the weight fraction of red balls:

\frac{m_1 \omega_1}{m_1 \omega_1 + m_2 \omega_2}.
The probability that the second ball picked is red depends on whether the first ball was red or white. If the first ball was red then the above formula is used with m₁ reduced by one. If the first ball was white then the above formula is used with m₂ reduced by one.
The important fact that distinguishes Wallenius' distribution is that there is competition between the balls. The probability that a particular ball is taken in a particular draw depends not only on its own weight, but also on the total weight of the competing balls that remain in the urn at that moment. And the weight of the competing balls depends on the outcomes of all preceding draws.
A multivariate version of Wallenius' distribution is used if there are more than two different colors.
The distribution of the balls that are not drawn is a complementary Wallenius' noncentral hypergeometric distribution.
In the Fisher model, the fates of the balls are independent and there is no dependence between draws. One may as well take all n balls at the same time. Each ball has no "knowledge" of what happens to the other balls. For the same reason, it is impossible to know the value of n before the experiment. If we tried to fix the value of n then we would have no way of preventing ball number n + 1 from being taken without violating the principle of independence between balls. n is therefore a random variable, and the Fisher distribution is a conditional distribution which can only be determined after the experiment, when n is observed. The unconditional distribution is two independent binomials, one for each color.
Fisher's distribution can simply be defined as the conditional distribution of two or more independent binomial variates dependent upon their sum. A multivariate version of Fisher's distribution is used if there are more than two colors of balls.
Wallenius' and Fisher's distributions are approximately equal when the odds ratio ω = ω₁/ω₂ is near 1, and n is low compared to the total number of balls, N. The difference between the two distributions becomes larger when the odds ratio is far from one and n is near N. The two distributions approximate each other better when they have the same mean than when they have the same odds (ω = 1).
Both distributions degenerate into the hypergeometric distribution when the odds ratio is 1, or into the binomial distribution when n = 1.
To understand why the two distributions are different, we may consider the following extreme example: an urn contains one red ball with the weight 1000, and a thousand white balls each with the weight 1. We want to calculate the probability that the red ball is not taken.
First we consider the Wallenius model. The probability that the red ball is not taken in the first draw is 1000/2000 = 1/2. The probability that the red ball is not taken in the second draw, under the condition that it was not taken in the first draw, is 999/1999 ≈ 1/2. The probability that the red ball is not taken in the third draw, under the condition that it was not taken in the first two draws, is 998/1998 ≈ 1/2. Continuing in this way, we can calculate that the probability of not taking the red ball in n draws is approximately 2⁻ⁿ as long as n is small compared to N. In other words, the probability of not taking a very heavy ball in n draws falls almost exponentially with n in Wallenius' model. The exponential function arises because the probabilities for each draw are all multiplied together.
This is not the case in Fisher's model, where balls are taken independently, and possibly simultaneously. Here the draws are independent and the probabilities are therefore not multiplied together. The probability of not taking the heavy red ball in Fisher's model is approximately 1/(n + 1). The two distributions are therefore very different in this extreme case, even though they are quite similar in less extreme cases.
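A quick Monte Carlo sketch of the Wallenius side of this example (our own illustration, using the weights above) confirms the roughly 2⁻ⁿ decay:

```python
import random

def wallenius_red_not_taken(n_draws: int, trials: int = 100_000) -> float:
    """Estimate the probability that the single heavy red ball (weight 1000)
    is never taken when n_draws balls are drawn one by one, each draw
    picking a ball with probability proportional to its weight."""
    misses = 0
    for _ in range(trials):
        white_count = 1000           # 1000 white balls of weight 1 each
        taken = False
        for _ in range(n_draws):
            total_weight = 1000.0 + white_count
            if random.random() < 1000.0 / total_weight:
                taken = True         # the red ball was drawn
                break
            white_count -= 1         # a white ball leaves the urn
        misses += not taken
    return misses / trials

random.seed(1)
for n in (1, 2, 3, 5):
    print(n, wallenius_red_not_taken(n), 2.0 ** -n)   # estimate vs 2^-n
```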
The following conditions must be fulfilled for Wallenius' distribution to be applicable:
The following conditions must be fulfilled for Fisher's distribution to be applicable:
The following examples illustrate which distribution applies in different situations.
You are catching fish in a small lake that contains a limited number of fish. There are different kinds of fish with different weights. The probability of catching a particular fish at a particular moment is proportional to its weight.
You are catching the fish one by one with a fishing rod. You have decided to catch n fish. You are determined to catch exactly n fish regardless of how long it may take. You will stop after you have caught n fish even if you can see more fish that are tempting.
This scenario will give a distribution of the types of fish caught that is equal to Wallenius' noncentral hypergeometric distribution.
You are catching fish as in example 1, but using a big net. You set up the net one day and come back the next day to remove the net. You count how many fish you have caught and then you go home regardless of how many fish you have caught. Each fish has a probability of being ensnared that is proportional to its weight but independent of what happens to the other fish.
The total number of fish that will be caught in this scenario is not known in advance. The expected number of fish caught is therefore described by multiple binomial distributions, one for each kind of fish.
After the fish have been counted, the total number n of fish is known. The probability distribution when n is known (but the number of each type is not yet known) is Fisher's noncentral hypergeometric distribution.
You are catching fish with a small net. It is possible that more than one fish can be caught in the net at the same time. You will use the net repeatedly until you have got at least n fish.
This scenario gives a distribution that lies between Wallenius' and Fisher's distributions. The total number of fish caught can vary if you are getting too many fish in the last catch. You may put the excess fish back into the lake, but this still does not give Wallenius' distribution, because you are catching multiple fish at the same time. The condition that each catch depends on all previous catches does not hold for fish that are caught simultaneously or in the same operation.
The resulting distribution will be close to Wallenius' distribution if there are few fish in the net in each catch and many casts of the net. The resulting distribution will be close to Fisher's distribution if there are many fish in the net in each catch and few casts.
You are catching fish with a big net. Fish swim into the net randomly in a situation that resembles a Poisson process. You watch the net and take it up as soon as you have caught exactly n fish.
The resulting distribution will be close to Fisher's distribution because the fish arrive in the net independently of each other. But the fates of the fish are not completely independent, because a particular fish can be saved from being caught if n other fish happen to arrive in the net before this particular fish. This is more likely to happen if the other fish are heavy than if they are light.
You are catching fish one by one with a fishing rod as in example 1. You need a particular amount of fish in order to feed your family. You will stop when the total weight of the fish caught reaches this predetermined limit. The resulting distribution will be close to Wallenius' distribution, but not exactly equal to it, because the decision to stop depends on the weight of the fish caught so far. n is therefore not known before the fishing trip.
These examples show that the distribution of the types of fish caught depends on the way they are caught. Many situations will give a distribution that lies somewhere between Wallenius' and Fisher's noncentral hypergeometric distributions.
A consequence of the difference between these two distributions is that one will catch more of the heavy fish, on average, by catching n fish one by one than by catching all n at the same time. In general, we can say that, in biased sampling, the odds parameter has a stronger effect in Wallenius' distribution than in Fisher's distribution, especially when n/N is high.
Johnson, N. L.; Kemp, A. W.; Kotz, S. (2005), Univariate Discrete Distributions, Hoboken, New Jersey: Wiley and Sons.
McCullagh, P.; Nelder, J. A. (1983), Generalized Linear Models, London: Chapman and Hall.
Fog, Agner (2007), Random number theory.
Fog, Agner (2008), "Calculation Methods for Wallenius' Noncentral Hypergeometric Distribution", Communications in Statistics – Simulation and Computation, vol. 37, no. 2, pp. 258–273, doi:10.1080/03610910701790269, S2CID 9040568.
Source: https://en.wikipedia.org/wiki/Noncentral_hypergeometric_distributions
Parameters: N ∈ {0, 1, 2, ...}, the total number of elements; K ∈ {0, 1, 2, ..., N}, the total number of 'success' elements.
In probability theory and statistics, the negative hypergeometric distribution describes the probabilities that arise when sampling from a finite population without replacement in which each sample can be classified into two mutually exclusive categories, like Pass/Fail or Employed/Unemployed. As random selections are made from the population, each subsequent draw decreases the population, causing the probability of success to change with each draw. Unlike the standard hypergeometric distribution, which describes the number of successes in a fixed sample size, in the negative hypergeometric distribution samples are drawn until r failures have been found, and the distribution describes the probability of finding k successes in such a sample. In other words, the negative hypergeometric distribution describes the likelihood of k successes in a sample with exactly r failures.
There are N elements, of which K are defined as "successes" and the rest are "failures".
Elements are drawn one after the other, without replacement, until r failures are encountered. Then the drawing stops and the number k of successes is counted. The negative hypergeometric distribution, NHG_{N,K,r}(k), is the discrete distribution of this k.[1]
The negative hypergeometric distribution is a special case of the beta-binomial distribution[2] with parameters α = r and β = N − K − r + 1, both being integers (and n = K).
The outcome requires that we observe k successes in (k + r − 1) draws and that the (k + r)-th draw is a failure. The probability of the former can be found by direct application of the hypergeometric distribution, HG_{N,K,k+r−1}(k), and the probability of the latter is simply the number of failures remaining, N − K − (r − 1), divided by the size of the remaining population, N − (k + r − 1). The probability of having exactly k successes up to the r-th failure (i.e. the drawing stops as soon as the sample includes the predefined number of r failures) is then the product of these two probabilities:
Therefore, a random variable X follows the negative hypergeometric distribution if its probability mass function (pmf) is given by

\Pr(X = k) = \frac{\binom{k + r - 1}{k} \binom{N - r - k}{K - k}}{\binom{N}{K}} \quad \text{for } k = 0, 1, \ldots, K,

where N is the total number of elements, K the number of successes among them, and r the predefined number of failures at which drawing stops.
By design the probabilities sum up to 1. However, if we want to show this explicitly we have:

\sum_{k=0}^{K} \frac{\binom{k+r-1}{k}\binom{N-r-k}{K-k}}{\binom{N}{K}} = \frac{1}{\binom{N}{K}} \sum_{k=0}^{K} \binom{k+r-1}{k}\binom{N-r-k}{K-k} = \frac{\binom{N}{K}}{\binom{N}{K}} = 1,

where we have used that

\sum_{j=0}^{k} \binom{j+m}{j}\binom{n-m-j}{k-j} = \binom{n+1}{k}

(here with m = r − 1, n = N − 1 and k = K), which can be derived using the binomial identity

\binom{n}{k} = (-1)^k \binom{k-n-1}{k}

and the Chu–Vandermonde identity,

\sum_{j=0}^{k} \binom{m}{j}\binom{n}{k-j} = \binom{m+n}{k},

which holds for any complex values m and n and any non-negative integer k.
When counting the number k of successes before r failures, the expected number of successes is rK/(N − K + 1), which can be derived as follows.
\begin{aligned}
E[X] &= \sum_{k=0}^{K} k \Pr(X=k) = \sum_{k=0}^{K} k \, \frac{\binom{k+r-1}{k}\binom{N-r-k}{K-k}}{\binom{N}{K}} = \frac{r}{\binom{N}{K}} \left[ \sum_{k=0}^{K} \frac{k+r}{r} \binom{k+r-1}{r-1} \binom{N-r-k}{K-k} \right] - r \\
&= \frac{r}{\binom{N}{K}} \left[ \sum_{k=0}^{K} \binom{k+r}{r} \binom{N-r-k}{K-k} \right] - r = \frac{r}{\binom{N}{K}} \left[ \sum_{k=0}^{K} \binom{k+r}{k} \binom{N-r-k}{K-k} \right] - r \\
&= \frac{r}{\binom{N}{K}} \binom{N+1}{K} - r = \frac{rK}{N-K+1},
\end{aligned}
where we have used the relationship

\sum_{j=0}^{k} \binom{j+m}{j}\binom{n-m-j}{k-j} = \binom{n+1}{k},

which we derived above to show that the negative hypergeometric distribution was properly normalized.
The variance can be derived by the following calculation.
\begin{aligned}
E[X^2] &= \sum_{k=0}^{K} k^2 \Pr(X=k) = \left[ \sum_{k=0}^{K} (k+r)(k+r+1) \Pr(X=k) \right] - (2r+1)E[X] - r^2 - r \\
&= \frac{r(r+1)}{\binom{N}{K}} \left[ \sum_{k=0}^{K} \binom{k+r+1}{r+1} \binom{N+1-(r+1)-k}{K-k} \right] - (2r+1)E[X] - r^2 - r \\
&= \frac{r(r+1)}{\binom{N}{K}} \binom{N+2}{K} - (2r+1)E[X] - r^2 - r = \frac{rK(N-r+Kr+1)}{(N-K+1)(N-K+2)}
\end{aligned}
Then the variance is

\operatorname{Var}[X] = E[X^2] - \left( E[X] \right)^2 = \frac{rK(N+1)(N-K-r+1)}{(N-K+1)^2 (N-K+2)}.
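These closed forms are easy to check numerically by enumerating the pmf; a short sketch with arbitrary parameter values:

```python
from math import comb

def nhg_pmf(k: int, N: int, K: int, r: int) -> float:
    """Negative hypergeometric pmf: k successes before the r-th failure."""
    return comb(k + r - 1, k) * comb(N - r - k, K - k) / comb(N, K)

N, K, r = 20, 12, 3
pmf = [nhg_pmf(k, N, K, r) for k in range(K + 1)]
mean = sum(k * p for k, p in enumerate(pmf))
var = sum(k * k * p for k, p in enumerate(pmf)) - mean ** 2

print(abs(sum(pmf) - 1) < 1e-12)                        # normalization
print(abs(mean - r * K / (N - K + 1)) < 1e-12)          # rK/(N-K+1)
print(abs(var - r * K * (N + 1) * (N - K - r + 1)
          / ((N - K + 1) ** 2 * (N - K + 2))) < 1e-12)  # closed-form variance
```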
If the drawing stops after a constant number n of draws (regardless of the number of failures), then the number of successes has the hypergeometric distribution HG_{N,K,n}(k). The two functions are related in the following way:[1]
The negative hypergeometric distribution (like the hypergeometric distribution) deals with draws without replacement, so that the probability of success is different in each draw. In contrast, the negative binomial distribution (like the binomial distribution) deals with draws with replacement, so that the probability of success is the same and the trials are independent. The following summarizes the four distributions related to drawing items:

- Fixed number of draws, with replacement: binomial distribution
- Fixed number of draws, without replacement: hypergeometric distribution
- Fixed number of failures, with replacement: negative binomial distribution
- Fixed number of failures, without replacement: negative hypergeometric distribution
Some authors[3][4] define the negative hypergeometric distribution to be the number of draws required to get the r-th failure. If we let Y denote this number, then it is clear that Y = X + r, where X is as defined above. Hence the pmf is

\Pr(Y = y) = \Pr(X = y - r) = \frac{\binom{y-1}{y-r}\binom{N-y}{K-y+r}}{\binom{N}{K}}.
If we let the number of failures N − K be denoted by M, then the support of Y is the set {r, r + 1, ..., N − M + r}. It is clear that

E[Y] = E[X] + r = \frac{r(N+1)}{M+1},

and Var[Y] = Var[X].
Source: https://en.wikipedia.org/wiki/Negative_hypergeometric_distribution
In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials, each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix).
The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a (possibly biased) k-sided die n times.
Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p₁, ..., p_k, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have p_i ≥ 0 for i = 1, ..., k and \sum_{i=1}^{k} p_i = 1. Then if the random variables X_i indicate the number of times outcome number i is observed over the n trials, the vector X = (X₁, ..., X_k) follows a multinomial distribution with parameters n and p, where p = (p₁, ..., p_k). While the trials are independent, their outcomes X_i are dependent because they must sum to n.
Parameters: n ∈ {0, 1, 2, ...}, the number of trials; k > 0, the number of mutually exclusive events (integer).
Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color i (i = 1, ..., k) as X_i, and denote as p_i the probability that a given extraction will be in color i. The probability mass function of this multinomial distribution is:

f(x_1, \ldots, x_k; n, p_1, \ldots, p_k) = \Pr(X_1 = x_1 \text{ and } \ldots \text{ and } X_k = x_k) = \begin{cases} \dfrac{n!}{x_1! \cdots x_k!} \, p_1^{x_1} \cdots p_k^{x_k}, & \text{when } \sum_{i=1}^{k} x_i = n, \\ 0, & \text{otherwise,} \end{cases}

for non-negative integers x₁, ..., x_k.
The probability mass function can be expressed using the gamma function as:

f(x_1, \ldots, x_k; p_1, \ldots, p_k) = \frac{\Gamma\!\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i + 1)} \prod_{i=1}^{k} p_i^{x_i}.

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
Note: Since we're assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.[1]
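Plugging into the probability mass function above gives

\Pr(X_A = 1, X_B = 2, X_C = 3) = \frac{6!}{1! \, 2! \, 3!} \, (0.2)^1 (0.3)^2 (0.5)^3 = 60 \times 0.2 \times 0.09 \times 0.125 = 0.135.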
The multinomial distribution is normalized according to:

\sum_{\{x_j\} \,:\, \sum_j x_j = n} \frac{n!}{\prod_{j=1}^{k} x_j!} \prod_{j=1}^{k} p_j^{x_j} = 1,

where the sum is over all combinations of non-negative integers x_j such that \sum_{j=1}^{k} x_j = n.
The expected number of times the outcome i was observed over n trials is

E(X_i) = n p_i.
The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

\operatorname{Var}(X_i) = n p_i (1 - p_i).

The off-diagonal entries are the covariances:

\operatorname{Cov}(X_i, X_j) = -n p_i p_j

for i, j distinct.
All covariances are negative because for fixedn, an increase in one component of a multinomial vector requires a decrease in another component.
When these expressions are combined into a matrix with (i, j) element cov(X_i, X_j), the result is a k × k positive-semidefinite covariance matrix of rank k − 1. In the special case where k = n and where the p_i are all equal, the covariance matrix is the centering matrix.
The entries of the corresponding correlation matrix are

\rho(X_i, X_i) = 1, \qquad \rho(X_i, X_j) = \frac{\operatorname{Cov}(X_i, X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = -\sqrt{\frac{p_i p_j}{(1 - p_i)(1 - p_j)}}.

Note that the number of trials n drops out of this expression.
Each of thekcomponents separately has a binomial distribution with parametersnandpi, for the appropriate value of the subscripti.
The support of the multinomial distribution is the set

\left\{ (x_1, \ldots, x_k) \in \mathbb{N}^k : x_1 + \cdots + x_k = n \right\}.

Its number of elements is

\binom{n + k - 1}{k - 1}.

In matrix notation,

E(X) = n p,

and

\operatorname{Var}(X) = n \left( \operatorname{diag}(p) - p p^{\mathsf{T}} \right),

with pᵀ the row-vector transpose of the column vector p.
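A brief numerical check of these moment formulas, assuming NumPy is available (parameters borrowed from the election example):

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])
n, trials = 6, 200_000

samples = np.random.multinomial(n, p, size=trials)   # shape (trials, 3)

analytic_mean = n * p                                # E(X) = n p
analytic_cov = n * (np.diag(p) - np.outer(p, p))     # n(diag(p) - p p^T)

print(samples.mean(axis=0))   # approximately [1.2, 1.8, 3.0]
print(np.cov(samples.T))      # approximately analytic_cov; off-diagonals < 0
print(analytic_mean, analytic_cov, sep="\n")
```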
Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a simplex with a grid.[citation needed]
Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of (p + q)ⁿ when expanded, one can interpret the multinomial distribution as the coefficients of (p₁ + p₂ + p₃ + ⋯ + p_k)ⁿ when expanded, noting that just the coefficients must sum up to 1.
By Stirling's formula, in the limit n, x₁, ..., x_k → ∞ we have

\ln \binom{n}{x_1, \cdots, x_k} + \sum_{i=1}^{k} x_i \ln p_i = -n D_{KL}(\hat p \,\|\, p) - \frac{k-1}{2} \ln(2\pi n) - \frac{1}{2} \sum_{i=1}^{k} \ln \hat p_i + o(1),

where the relative frequencies \hat p_i = x_i / n in the data can be interpreted as probabilities from the empirical distribution \hat p, and D_{KL} is the Kullback–Leibler divergence.
This formula can be interpreted as follows.
Consider Δ_k, the space of all possible distributions over the categories {1, 2, ..., k}. It is a simplex. After n independent samples from the categorical distribution p (which is how we construct the multinomial distribution), we obtain an empirical distribution p̂.
By the asymptotic formula, the probability that the empirical distribution p̂ deviates from the actual distribution p decays exponentially, at rate n D_{KL}(p̂ ‖ p). The more experiments there are and the more p̂ differs from p, the less likely it is to see such an empirical distribution.
If A is a closed subset of Δ_k, then by dividing A into pieces and reasoning about the growth rate of Pr(p̂ ∈ A_ε) on each piece A_ε, we obtain Sanov's theorem, which states that

\lim_{n \to \infty} \frac{1}{n} \ln \Pr(\hat p \in A) = -\inf_{\hat p \in A} D_{KL}(\hat p \,\|\, p).
Due to the exponential decay, at large n almost all the probability mass is concentrated in a small neighborhood of p. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of D_{KL} to obtain

\ln \binom{n}{x_1, \cdots, x_k} p_1^{x_1} \cdots p_k^{x_k} \approx -\frac{n}{2} \sum_{i=1}^{k} \frac{(\hat p_i - p_i)^2}{p_i} = -\frac{1}{2} \sum_{i=1}^{k} \frac{(x_i - n p_i)^2}{n p_i}.

This resembles the Gaussian distribution, which suggests the following theorem:
Theorem. In the n → ∞ limit,

n \sum_{i=1}^{k} \frac{(\hat p_i - p_i)^2}{p_i} = \sum_{i=1}^{k} \frac{(x_i - n p_i)^2}{n p_i}

converges in distribution to the chi-squared distribution χ²(k − 1).
The space of all distributions over the categories {1, 2, ..., k} is a simplex:

\Delta_k = \left\{ (y_1, \ldots, y_k) : y_1, \ldots, y_k \ge 0, \ \sum_i y_i = 1 \right\},

and the set of all possible empirical distributions after n experiments is a subset of the simplex:

\Delta_{k,n} = \left\{ (x_1/n, \ldots, x_k/n) : x_1, \ldots, x_k \in \mathbb{N}, \ \sum_i x_i = n \right\}.

That is, Δ_{k,n} is the intersection between Δ_k and the lattice ℤᵏ/n.
As n increases, most of the probability mass is concentrated in a subset of Δ_{k,n} near p, and the probability distribution near p becomes well-approximated by

\binom{n}{x_1, \cdots, x_k} p_1^{x_1} \cdots p_k^{x_k} \approx e^{-\frac{n}{2} \sum_i \frac{(\hat p_i - p_i)^2}{p_i}}.

From this, we see that the subset upon which the mass is concentrated has radius on the order of 1/√n, while the points in the subset are separated by distances on the order of 1/n, so at large n the points merge into a continuum.
To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of Δ_{k,n} in Δ_k. However, by symmetry, every point occupies exactly the same volume (except for a negligible set on the boundary), so we obtain a probability density

\rho(\hat p) = C e^{-\frac{n}{2} \sum_i \frac{(\hat p_i - p_i)^2}{p_i}},

where C is a constant.
Finally, since the simplex Δ_k is not all of ℝᵏ, but only a (k − 1)-dimensional plane within it, we obtain the desired result.
The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification forPearson's chi-squared test.
Theorem. Given frequencies x_i ∈ ℕ observed in a dataset with n points, we impose ℓ + 1 independent linear constraints

\sum_i \hat p_i = 1, \quad \sum_i a_{1i} \hat p_i = b_1, \quad \sum_i a_{2i} \hat p_i = b_2, \quad \ldots, \quad \sum_i a_{\ell i} \hat p_i = b_\ell

(notice that the first constraint is simply the requirement that the empirical distribution sums to one), such that the empirical p̂_i = x_i/n satisfy all these constraints simultaneously. Let q denote the I-projection of the prior distribution p onto the sub-region of the simplex allowed by the linear constraints. In the n → ∞ limit, sampled counts n p̂_i from the multinomial distribution conditional on the linear constraints are governed by

2n D_{KL}(\hat p \,\|\, q) \approx n \sum_i \frac{(\hat p_i - q_i)^2}{q_i},

which converges in distribution to the chi-squared distribution χ²(k − 1 − ℓ).
An analogous proof applies in this Diophantine problem of coupled linear equations in the count variables n p̂_i,[2] but this time Δ_{k,n} is the intersection of ℤᵏ/n with Δ_k and ℓ hyperplanes, all linearly independent, so the probability density ρ(p̂) is restricted to a (k − ℓ − 1)-dimensional plane. In particular, expanding the KL divergence D_{KL}(p̂ ‖ p) around its minimum q (the I-projection of p on Δ_{k,n}) in the constrained problem ensures, by the Pythagorean theorem for I-divergence, that any constant and linear term in the counts n p̂_i vanishes from the conditional probability to multinomially sample those counts.
Notice that, by definition, every one of p̂₁, p̂₂, ..., p̂_k must be a rational number, whereas p₁, p₂, ..., p_k may be chosen from any real numbers in [0, 1] and need not satisfy the Diophantine system of equations. Only asymptotically, as n → ∞, can the p̂_i be regarded as probabilities over [0, 1].
Away from the empirically observed constraints b₁, ..., b_ℓ (such as moments or prevalences), the theorem can be generalized:
Theorem.
In the case that allp^i{\displaystyle {\hat {p}}_{i}}are equal, the Theorem reduces to the concentration of entropies around the Maximum Entropy.[3][4]
In some fields such asnatural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when acategorical distributionis actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range1…k{\displaystyle 1\dots k}; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.
The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.
Letq{\displaystyle q}denote a theoretical multinomial distribution and letp{\displaystyle p}be a true underlying distribution. The distributionsp{\displaystyle p}andq{\displaystyle q}are considered equivalent ifd(p,q)<ε{\displaystyle d(p,q)<\varepsilon }for a distanced{\displaystyle d}and a tolerance parameterε>0{\displaystyle \varepsilon >0}. The equivalence test problem isH0={d(p,q)≥ε}{\displaystyle H_{0}=\{d(p,q)\geq \varepsilon \}}versusH1={d(p,q)<ε}{\displaystyle H_{1}=\{d(p,q)<\varepsilon \}}. The true underlying distributionp{\displaystyle p}is unknown. Instead, the counting frequenciespn{\displaystyle p_{n}}are observed, wheren{\displaystyle n}is the sample size. An equivalence test usespn{\displaystyle p_{n}}to rejectH0{\displaystyle H_{0}}. IfH0{\displaystyle H_{0}}can be rejected, then the equivalence betweenp{\displaystyle p}andq{\displaystyle q}is shown at the given significance level. The equivalence test for the Euclidean distance can be found in the textbook of Wellek (2010).[5]The equivalence test for the total variation distance is developed in Ostrovski (2017).[6]The exact equivalence test for a specific cumulative distance is proposed in Frey (2009).[7]
The distance between the true underlying distributionp{\displaystyle p}and a family of the multinomial distributionsM{\displaystyle {\mathcal {M}}}is defined byd(p,M)=minh∈Md(p,h){\displaystyle d(p,{\mathcal {M}})=\min _{h\in {\mathcal {M}}}d(p,h)}. Then the equivalence test problem is given byH0={d(p,M)≥ε}{\displaystyle H_{0}=\{d(p,{\mathcal {M}})\geq \varepsilon \}}andH1={d(p,M)<ε}{\displaystyle H_{1}=\{d(p,{\mathcal {M}})<\varepsilon \}}. The distanced(p,M){\displaystyle d(p,{\mathcal {M}})}is usually computed using numerical optimization. Tests for this case were developed recently in Ostrovski (2018).[8]
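As a rough illustration of the idea (not the exact procedures of Wellek, Ostrovski, or Frey cited above), one can estimate the total variation distance and reject H0 when a bootstrap upper confidence bound falls below the tolerance ε; the counts, q, and ε below are made up:

```python
# A heuristic bootstrap sketch of an equivalence test for the total
# variation distance; the published tests use refined procedures.
import numpy as np

def tv_distance(p, q):
    """Total variation distance 0.5 * sum_i |p_i - q_i|."""
    return 0.5 * np.abs(p - q).sum()

def tv_equivalence_test(counts, q, eps, alpha=0.05, n_boot=10_000, seed=0):
    """Reject H0: d(p, q) >= eps when a bootstrap (1 - alpha) upper bound
    for the total variation distance falls below the tolerance eps."""
    rng = np.random.default_rng(seed)
    n = counts.sum()
    p_hat = counts / n
    boot = rng.multinomial(n, p_hat, size=n_boot) / n
    d_boot = 0.5 * np.abs(boot - q).sum(axis=1)
    upper = np.quantile(d_boot, 1 - alpha)   # rough upper confidence bound
    return tv_distance(p_hat, q), upper, bool(upper < eps)

counts = np.array([48, 102, 150])            # hypothetical observed counts
q = np.array([1 / 6, 1 / 3, 1 / 2])          # theoretical distribution
print(tv_equivalence_test(counts, q, eps=0.05))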
In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events,pi−pj{\displaystyle p_{i}-p_{j}}, requires the incorporation of the negative covariance between the sample estimatorsp^i=Xin{\displaystyle {\hat {p}}_{i}={\frac {X_{i}}{n}}}andp^j=Xjn{\displaystyle {\hat {p}}_{j}={\frac {X_{j}}{n}}}.
Much of the literature on the subject focuses on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case ofpi−pj{\displaystyle p_{i}-p_{j}}for any multinomial distribution. The formulas in the current section are stated for the general case, while the formulas in the next section focus on the matched-pairs binary data use-case.
Wald's standard error (SE) of the difference of proportions can be estimated using:[9]: 378[10]
SE(p^i−p^j)^=(p^i+p^j)−(p^i−p^j)2n{\displaystyle {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}={\sqrt {\frac {({\hat {p}}_{i}+{\hat {p}}_{j})-({\hat {p}}_{i}-{\hat {p}}_{j})^{2}}{n}}}}
For a100(1−α)%{\displaystyle 100(1-\alpha )\%}approximate confidence interval, themargin of errormay incorporate the appropriate quantile from thestandard normal distribution, as follows:
(p^i−p^j)±zα/2⋅SE(p^i−p^j)^{\displaystyle ({\hat {p}}_{i}-{\hat {p}}_{j})\pm z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}}
As the sample size (n{\displaystyle n}) increases, the sample proportions will approximately follow amultivariate normal distribution, thanks to themultidimensional central limit theorem(and it could also be shown using theCramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators areweakly consistentand plugging them into the SE estimator makes it also weakly consistent. Hence, thanks toSlutsky's theorem, thepivotal quantity(p^i−p^j)−(pi−pj)SE(p^i−p^j)^{\displaystyle {\frac {({\hat {p}}_{i}-{\hat {p}}_{j})-(p_{i}-p_{j})}{\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}}}approximately follows thestandard normal distribution. And from that, the aboveapproximate confidence intervalis directly derived.
The SE can be constructed using the calculus ofthe variance of the difference of two random variables:SE(p^i−p^j)^=p^i(1−p^i)n+p^j(1−p^j)n−2(−p^ip^jn)=1n(p^i+p^j−p^i2−p^j2+2p^ip^j)=(p^i+p^j)−(p^i−p^j)2n{\displaystyle {\begin{aligned}{\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}&={\sqrt {{\frac {{\hat {p}}_{i}(1-{\hat {p}}_{i})}{n}}+{\frac {{\hat {p}}_{j}(1-{\hat {p}}_{j})}{n}}-2\left(-{\frac {{\hat {p}}_{i}{\hat {p}}_{j}}{n}}\right)}}\\&={\sqrt {{\frac {1}{n}}\left({\hat {p}}_{i}+{\hat {p}}_{j}-{\hat {p}}_{i}^{2}-{\hat {p}}_{j}^{2}+2{\hat {p}}_{i}{\hat {p}}_{j}\right)}}\\&={\sqrt {\frac {({\hat {p}}_{i}+{\hat {p}}_{j})-({\hat {p}}_{i}-{\hat {p}}_{j})^{2}}{n}}}\end{aligned}}}
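A minimal sketch of the resulting interval, assuming NumPy/SciPy and illustrative counts:

```python
import numpy as np
from scipy import stats

def wald_ci_diff(counts, i, j, alpha=0.05):
    """Wald confidence interval for p_i - p_j from multinomial counts."""
    n = counts.sum()
    pi_hat, pj_hat = counts[i] / n, counts[j] / n
    diff = pi_hat - pj_hat
    se = np.sqrt(((pi_hat + pj_hat) - diff ** 2) / n)
    z = stats.norm.ppf(1 - alpha / 2)        # quantile z_{alpha/2}
    return diff - z * se, diff + z * se

counts = np.array([180, 250, 570])           # hypothetical counts, n = 1000
print(wald_ci_diff(counts, 0, 1))
```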
A modification which includes acontinuity correctionadds1n{\displaystyle {\frac {1}{n}}}to the margin of error as follows:[11]: 102–3
(p^i−p^j)±(zα/2⋅SE(p^i−p^j)^+1n){\displaystyle ({\hat {p}}_{i}-{\hat {p}}_{j})\pm \left(z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}+{\frac {1}{n}}\right)}
Another alternative is to rely on a Bayesian estimator using theJeffreys prior, which leads to using aDirichlet distributionwith all parameters equal to 0.5 as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of thekelements, leading to an overall increase of the sample size byk2{\displaystyle {\frac {k}{2}}}. This was originally developed for a multinomial distribution with four events, and is known aswald+2, for analyzing matched pairs data (see the next section for more details).[12]
This leads to the following SE:
SE(p^i−p^j)^wald+k2=(p^i+p^j+1n)nn+k2−(p^i−p^j)2(nn+k2)2n+k2{\displaystyle {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+{\frac {k}{2}}}={\sqrt {\frac {\left({\hat {p}}_{i}+{\hat {p}}_{j}+{\frac {1}{n}}\right){\frac {n}{n+{\frac {k}{2}}}}-\left({\hat {p}}_{i}-{\hat {p}}_{j}\right)^{2}\left({\frac {n}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}}
SE(p^i−p^j)^wald+k2=(xi+1/2n+k2+xj+1/2n+k2)−(xi+1/2n+k2−xj+1/2n+k2)2n+k2=(xin+xjn+1n)nn+k2−(xin−xjn)2(nn+k2)2n+k2=(p^i+p^j+1n)nn+k2−(p^i−p^j)2(nn+k2)2n+k2{\displaystyle {\begin{aligned}{\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+{\frac {k}{2}}}&={\sqrt {\frac {\left({\frac {x_{i}+1/2}{n+{\frac {k}{2}}}}+{\frac {x_{j}+1/2}{n+{\frac {k}{2}}}}\right)-\left({\frac {x_{i}+1/2}{n+{\frac {k}{2}}}}-{\frac {x_{j}+1/2}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}\\&={\sqrt {\frac {\left({\frac {x_{i}}{n}}+{\frac {x_{j}}{n}}+{\frac {1}{n}}\right){\frac {n}{n+{\frac {k}{2}}}}-\left({\frac {x_{i}}{n}}-{\frac {x_{j}}{n}}\right)^{2}\left({\frac {n}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}\\&={\sqrt {\frac {\left({\hat {p}}_{i}+{\hat {p}}_{j}+{\frac {1}{n}}\right){\frac {n}{n+{\frac {k}{2}}}}-\left({\hat {p}}_{i}-{\hat {p}}_{j}\right)^{2}\left({\frac {n}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}\end{aligned}}}
This can be plugged into the original Wald formula as follows:
(p^i−p^j)nn+k2±zα/2⋅SE(p^i−p^j)^wald+k2{\displaystyle ({\hat {p}}_{i}-{\hat {p}}_{j}){\frac {n}{n+{\frac {k}{2}}}}\pm z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+{\frac {k}{2}}}}
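In code, the adjustment amounts to adding 1/2 to the two cells of interest and k/2 to the denominator; a sketch under the same assumptions as the previous block:

```python
import numpy as np
from scipy import stats

def wald_plus_ci_diff(counts, i, j, alpha=0.05):
    """'wald+k/2' interval for p_i - p_j: add 1/2 to each of the k cells,
    i.e. a Dirichlet(1/2, ..., 1/2) (Jeffreys) prior."""
    k, n = len(counts), counts.sum()
    n_adj = n + k / 2
    pi_t = (counts[i] + 0.5) / n_adj          # posterior-mean proportions
    pj_t = (counts[j] + 0.5) / n_adj
    diff = pi_t - pj_t                        # equals (p̂_i - p̂_j) * n/(n+k/2)
    se = np.sqrt(((pi_t + pj_t) - diff ** 2) / n_adj)
    z = stats.norm.ppf(1 - alpha / 2)
    return diff - z * se, diff + z * se

counts = np.array([180, 250, 570])
print(wald_plus_ci_diff(counts, 0, 1))
```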
For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time.
Such scenarios can be represented using a two-by-twocontingency tablewith the number of elements that had each of the combination of events. We can use smallffor sampling frequencies:f11,f10,f01,f00{\displaystyle f_{11},f_{10},f_{01},f_{00}}, and capitalFfor population frequencies:F11,F10,F01,F00{\displaystyle F_{11},F_{10},F_{01},F_{00}}. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can benandNrespectively. And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table:
In this case, checking the difference in marginal proportions means we are interested in using the following definitions:p1∗=F1∗N=F11+F10N{\displaystyle p_{1*}={\frac {F_{1*}}{N}}={\frac {F_{11}+F_{10}}{N}}},p∗1=F∗1N=F11+F01N{\displaystyle p_{*1}={\frac {F_{*1}}{N}}={\frac {F_{11}+F_{01}}{N}}}.
And the difference we want to build confidence intervals for is:
p∗1−p1∗=F11+F01N−F11+F10N=F01N−F10N=p01−p10{\displaystyle p_{*1}-p_{1*}={\frac {F_{11}+F_{01}}{N}}-{\frac {F_{11}+F_{10}}{N}}={\frac {F_{01}}{N}}-{\frac {F_{10}}{N}}=p_{01}-p_{10}}
Hence, a confidence interval for the marginal positive proportions (p∗1−p1∗{\displaystyle p_{*1}-p_{1*}}) is the same as a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (p01−p10{\displaystyle p_{01}-p_{10}}).
Calculating ap-valuefor such a difference is known asMcNemar's test. A confidence interval for it can be constructed using the methods described above for confidence intervals for the difference of two proportions.
The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals, provided above, can be written as:[11]: 102–3
SE(p∗1−p1∗)^=SE(p01−p10)^=n(f10+f01)−(f10−f01)2nn{\displaystyle {\widehat {\operatorname {SE} (p_{*1}-p_{1*})}}={\widehat {\operatorname {SE} (p_{01}-p_{10})}}={\frac {\sqrt {n(f_{10}+f_{01})-(f_{10}-f_{01})^{2}}}{n{\sqrt {n}}}}}
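A direct transcription of this frequency-based SE into code (the frequencies f10, f01 and the sample size n below are hypothetical):

```python
import numpy as np
from scipy import stats

def matched_pairs_wald_ci(f10, f01, n, alpha=0.05):
    """Wald CI for p01 - p10 from the off-diagonal cells of a 2x2 table."""
    diff = (f01 - f10) / n
    se = np.sqrt(n * (f10 + f01) - (f10 - f01) ** 2) / (n * np.sqrt(n))
    z = stats.norm.ppf(1 - alpha / 2)
    return diff - z * se, diff + z * se

print(matched_pairs_wald_ci(f10=25, f01=40, n=300))   # hypothetical table
```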
Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.[11]
One such modification isAgresti and Min’s Wald+2(similar to some of their other works[13]), in which each cell frequency has an extra12{\displaystyle {\frac {1}{2}}}added to it.[12]This leads to theWald+2confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior aDirichlet distributionwith all parameters equal to 0.5 (which is, in fact, theJeffreys prior). The+2in the namewald+2reflects that, in the context of a two-by-two contingency table (a multinomial distribution with four possible events), adding 1/2 an observation to each cell translates to an overall addition of 2 observations (due to the prior).
This leads to the following modified SE for the case of matched pairs data:
SE(p∗1−p1∗)^=(n+2)(f10+f01+1)−(f10−f01)2(n+2)(n+2){\displaystyle {\widehat {\operatorname {SE} (p_{*1}-p_{1*})}}={\frac {\sqrt {(n+2)(f_{10}+f_{01}+1)-(f_{10}-f_{01})^{2}}}{(n+2){\sqrt {(n+2)}}}}}
This can be plugged into the original Wald formula as follows:
(p^∗1−p^1∗)nn+2±zα/2⋅SE(p^i−p^j)^wald+2{\displaystyle ({\hat {p}}_{*1}-{\hat {p}}_{1*}){\frac {n}{n+2}}\pm z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+2}}
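The same computation with the +1/2-per-cell adjustment, again with hypothetical frequencies:

```python
import numpy as np
from scipy import stats

def matched_pairs_wald_plus2_ci(f10, f01, n, alpha=0.05):
    """'wald+2' interval: add 1/2 to each of the four cells, so n grows by 2."""
    m = n + 2
    diff = (f01 - f10) / m                    # shrunken centre (f01 - f10)/(n + 2)
    se = np.sqrt(m * (f10 + f01 + 1) - (f10 - f01) ** 2) / (m * np.sqrt(m))
    z = stats.norm.ppf(1 - alpha / 2)
    return diff - z * se, diff + z * se

print(matched_pairs_wald_plus2_ci(f10=25, f01=40, n=300))
```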
Other modifications includeBonett and Price’s Adjusted Wald, andNewcombe’s Score.
First, reorder the parametersp1,…,pk{\displaystyle p_{1},\ldots ,p_{k}}such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variableXfrom a uniform (0, 1) distribution. The resulting outcome is the componentj=min{j′∈{1,…,k}:(∑i=1j′pi)−X≥0}{\displaystyle j=\min \left\{j'\in \{1,\dots ,k\}\colon \left(\sum _{i=1}^{j'}p_{i}\right)-X\geq 0\right\}}. Then {Xj= 1,Xk= 0 fork≠j} is one observation from the multinomial distribution withp1,…,pk{\displaystyle p_{1},\ldots ,p_{k}}andn= 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution withnequal to the number of such repetitions.
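A sketch of this inverse-CDF procedure (NumPy assumed; the guard against floating-point round-off in the cumulative sums is an implementation detail, not part of the algorithm as stated):

```python
import numpy as np

def draw_multinomial(p, n, rng):
    """One observation from Multinomial(n, p): invert the category CDF once
    per trial using a Uniform(0, 1) auxiliary variable."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)[::-1]               # sort descending (speed only)
    cdf = np.cumsum(p[order])
    counts = np.zeros(len(p), dtype=int)
    for _ in range(n):
        x = rng.uniform()
        j = min(np.searchsorted(cdf, x), len(p) - 1)  # first j with cdf[j] >= x
        counts[order[j]] += 1
    return counts

rng = np.random.default_rng(0)
print(draw_multinomial([0.2, 0.3, 0.5], n=1000, rng=rng))
```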
Given the parametersp1,p2,…,pk{\displaystyle p_{1},p_{2},\ldots ,p_{k}}and a total for the samplen{\displaystyle n}such that∑i=1kXi=n{\displaystyle \sum _{i=1}^{k}X_{i}=n}, it is possible to sample sequentially for the number in an arbitrary stateXi{\displaystyle X_{i}}, by partitioning the state space intoi{\displaystyle i}and not-i{\displaystyle i}, conditioned on any prior samples already taken, repeatedly.
Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency.[14]
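A sketch of this sequential conditional-binomial scheme, assuming NumPy:

```python
import numpy as np

def multinomial_via_binomials(p, n, rng):
    """Sample Multinomial(n, p) by drawing each count as a binomial,
    conditioned on the counts already drawn (the 'i vs not-i' partition)."""
    p = np.asarray(p, dtype=float)
    counts = np.zeros(len(p), dtype=int)
    remaining_n, remaining_p = n, 1.0
    for i in range(len(p) - 1):
        # P(category i | not in categories 0..i-1) = p_i / remaining_p
        counts[i] = rng.binomial(remaining_n, p[i] / remaining_p)
        remaining_n -= counts[i]
        remaining_p -= p[i]
    counts[-1] = remaining_n                  # whatever is left goes to the last state
    return counts

rng = np.random.default_rng(0)
print(multinomial_via_binomials([0.2, 0.3, 0.5], n=1000, rng=rng))
```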
|
https://en.wikipedia.org/wiki/Multinomial_distribution
|
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. The subset is meant to reflect the whole population, and statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population (in many cases, collecting the whole population is impossible, like getting sizes of all stars in the universe), and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Eachobservationmeasures one or more properties (such as weight, location, colour or mass) of independent objects or individuals. Insurvey sampling, weights can be applied to the data to adjust for the sample design, particularly instratified sampling.[1]Results fromprobability theoryandstatistical theoryare employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population.[2]Acceptance samplingis used to determine if a production lot of material meets the governingspecifications.
Random sampling by using lots is an old idea, mentioned several times in the Bible. In 1786, Pierre SimonLaplaceestimated the population of France by using a sample, along with aratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modernconfidence intervalsbut as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates usedBayes' theoremwith a uniformprior probabilityand assumed that his sample was random.Alexander Ivanovich Chuprovintroduced sample surveys toImperial Russiain the 1870s.[3]
In the US, the 1936Literary Digestprediction of a Republican win in thepresidential electionwent badly awry, due to severe bias.[1]More than two million people responded to the study, their names obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans, and the resulting sample, though very large, was deeply flawed.[4][5]
Elections in Singaporehave adopted this practice since the2015 election, in what are known as sample counts. According to theElections Department(ELD), the country's election commission, sample counts help reduce speculation and misinformation, while helping election officials to check against the election result for that electoral division. The reported sample counts yield a fairly accurate indicative result with a 4%margin of errorat a 95%confidence level, but ELD reminded the public that sample counts are separate from official results, and only thereturning officerwill declare the official results once vote counting is complete.[6][7]
Successful statistical practice is based on focused problem definition. In sampling, this includes defining the "population" from which our sample is drawn. A population can be defined as including all people or items with the characteristics one wishes to understand. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample (or subset) of that population.
Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material fromproductionis of high enough quality to be released to the customer or should be scrapped or reworked due to poor quality. In this case, the batch is the population.
Although the population of interest often consists of physical objects, sometimes it is necessary to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on periods or discrete occasions.
In other cases, the examined 'population' may be even less tangible. For example,Joseph Jaggerstudied the behaviour ofroulettewheels at a casino inMonte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. theprobability distributionof its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of properties of materials such as theelectrical conductivityofcopper.
This situation often arises when seeking knowledge about thecause systemof which theobservedpopulation is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'superpopulation'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the superpopulation is "everybody in the country, given access to this treatment" – a group that does not yet exist since the program is not yet available to all.
The population from which the sample is drawn may not be the same as the population from which information is desired. Often there is a large but not complete overlap between these two groups due to frame issues etc. (see below). Sometimes they may be entirely separate – for instance, one might study rats in order to get a better understanding of human health, or one might study records from people born in 2008 in order to make predictions about people born in 2009.
Time spent in making the sampled population and population of concern precise is often well spent because it raises many issues, ambiguities, and questions that would otherwise have been overlooked at this stage.
In the most straightforward case, such as the sampling of a batch of material from production (acceptance sampling by lots), it would be most desirable to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not usually possible or practical. There is no way to identify all rats in the set of all rats. Where voting is not compulsory, there is no way to identify which people will vote at a forthcoming election (in advance of the election). Such imprecise populations are not amenable to any of the forms of sampling described below, to which we could apply statistical theory.
As a remedy, we seek asampling framewhich has the property that we can identify every single element and include any in our sample.[8][9][10][11]The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in anopinion poll, possible sampling frames include anelectoral registerand atelephone directory.
Aprobability sampleis a sample in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.
Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from auniform distributionbetween 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income.
People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (The person who is selected from that household can be loosely viewed as also representing the person who isn't selected.)
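This weighting is an instance of inverse-probability (Horvitz-Thompson-style) estimation; a toy sketch with made-up household data:

```python
# One randomly selected adult per household; weight each selected person's
# income by the household's number of adults (the inverse of their 1/n_adults
# selection probability). All numbers are invented for illustration.
households = [
    {"n_adults": 1, "income_of_selected": 30_000},
    {"n_adults": 2, "income_of_selected": 45_000},
    {"n_adults": 3, "income_of_selected": 25_000},
]

total = sum(h["income_of_selected"] * h["n_adults"] for h in households)
print(f"estimated total street income: {total}")
```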
In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the populationdoeshave the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
Probability sampling includes:simple random sampling,systematic sampling,stratified sampling, probability-proportional-to-size sampling, andclusterormultistage sampling. These various ways of probability sampling have two things in common:
Nonprobability samplingis any sampling method where some elements of the population havenochance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection cannot be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors. These conditions give rise toexclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
Nonprobability sampling methods includeconvenience sampling,quota sampling, andpurposive sampling. In addition, nonresponse effects may turnanyprobability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.
Within any of the types of frames identified above, a variety of sampling methods can be employed individually or in combination. Factors commonly influencing the choice between these designs include:
In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any givenpairof elements has the same chance of selection as any other such pair (and similarly for triples, and so on). This minimizes bias and simplifies analysis of results. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results.
Simple random sampling can be vulnerable to sampling error because the randomness of the selection may result in a sample that does not reflect the makeup of the population. For instance, a simple random sample of ten people from a given country willon averageproduce five men and five women, but any given trial is likely to over represent one sex and underrepresent the other. Systematic and stratified techniques attempt to overcome this problem by "using information about the population" to choose a more "representative" sample.
Also, simple random sampling can be cumbersome and tedious when sampling from a large target population. In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. Simple random sampling cannot accommodate the needs of researchers in this situation, because it does not provide subsamples of the population, and other sampling strategies, such as stratified sampling, can be used instead.
Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme, and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of everykth element from then onwards. In this case,k=(population size/sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to thekth element in the list. A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10').
As long as the starting point israndomized, systematic sampling is a type ofprobability sampling. It is easy to implement and thestratificationinduced can make it efficient,ifthe variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling fromdatabases.
For example, suppose we wish to sample people from a long street that starts in a poor area (house No. 1) and ends in an expensive district (house No. 1000). A simple random selection of addresses from this street could easily end up with too many from the high end and too few from the low end (or vice versa), leading to an unrepresentative sample. Selecting (e.g.) every 10th street number along the street ensures that the sample is spread evenly along the length of the street, representing all of these districts. (If we always start at house #1 and end at #991, the sample is slightly biased towards the low end; by randomly selecting the start between #1 and #10, this bias is eliminated.)
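A minimal sketch of an every-kth selection with a random start (the house numbers are illustrative):

```python
import random

def systematic_sample(population, sample_size, seed=None):
    """Every-kth selection with a random start, k = len(population) // sample_size."""
    k = len(population) // sample_size
    start = random.Random(seed).randrange(k)  # random start within the first k items
    return population[start::k][:sample_size]

houses = list(range(1, 1001))                 # house numbers 1..1000
print(systematic_sample(houses, sample_size=100, seed=42))
```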
However, systematic sampling is especially vulnerable to periodicities in the list. If periodicity is present and the period is a multiple or factor of the interval used, the sample is especially likely to beunrepresentative of the overall population, making the scheme less accurate than simple random sampling.
For example, consider a street where the odd-numbered houses are all on the north (expensive) side of the road, and the even-numbered houses are all on the south (cheap) side. Under the sampling scheme given above, it is impossible to get a representative sample; either the houses sampled willallbe from the odd-numbered, expensive side, or they willallbe from the even-numbered, cheap side, unless the researcher has previous knowledge of this bias and avoids it by using a skip which ensures jumping between the two sides (any odd-numbered skip).
Another drawback of systematic sampling is that even in scenarios where it is more accurate than SRS, its theoretical properties make it difficult toquantifythat accuracy. (In the two examples of systematic sampling that are given above, much of the potential sampling error is due to variation between neighbouring houses – but because this method never selects two neighbouring houses, the sample will not give us any information on that variation.)
As described above, systematic sampling is an EPS method, because all elements have the same probability of selection (in the example given, one in ten). It isnot'simple random sampling' because different subsets of the same size have different selection probabilities – e.g. the set {4,14,24,...,994} has a one-in-ten probability of selection, but the set {4,13,24,34,...} has zero probability of selection.
Systematic sampling can also be adapted to a non-EPS approach; for an example, see discussion of PPS samples below.
When the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected.[8]The ratio of the size of this random selection (or sample) to the size of the population is called asampling fraction.[12]There are several potential benefits to stratified sampling.[12]
First, dividing the population into distinct, independent strata can enable researchers to draw inferences about specific subgroups that may be lost in a more generalized random sample.
Second, utilizing a stratified sampling method can lead to more efficient statistical estimates (provided that strata are selected based upon relevance to the criterion in question, instead of availability of the samples). Even if a stratified sampling approach does not lead to increased statistical efficiency, such a tactic will not result in less efficiency than would simple random sampling, provided that each stratum is proportional to the group's size in the population.
Third, it is sometimes the case that data are more readily available for individual, pre-existing strata within a population than for the overall population; in such cases, using a stratified sampling approach may be more convenient than aggregating data across groups (though this may potentially be at odds with the previously noted importance of utilizing criterion-relevant strata).
Finally, since each stratum is treated as an independent population, different sampling approaches can be applied to different strata, potentially enabling researchers to use the approach best suited (or most cost-effective) for each identified subgroup within the population.
There are, however, some potential drawbacks to using stratified sampling. First, identifying strata and implementing such an approach can increase the cost and complexity of sample selection, as well as leading to increased complexity of population estimates. Second, when examining multiple criteria, stratifying variables may be related to some, but not to others, further complicating the design, and potentially reducing the utility of the strata. Finally, in some cases (such as designs with a large number of strata, or those with a specified minimum sample size per group), stratified sampling can potentially require a larger sample than would other methods (although in most cases, the required sample size would be no larger than would be required for simple random sampling).
Stratification is sometimes introduced after the sampling phase in a process called "poststratification".[8]This approach is typically implemented due to a lack of prior knowledge of an appropriate stratifying variable or when the experimenter lacks the necessary information to create a stratifying variable during the sampling phase. Although the method is susceptible to the pitfalls of post hoc approaches, it can provide several benefits in the right situation. Implementation usually follows a simple random sample. In addition to allowing for stratification on an ancillary variable, poststratification can be used to implement weighting, which can improve the precision of a sample's estimates.[8]
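A toy poststratification sketch: sampled units are reweighted so that each stratum's total weight matches its known population share (the strata, shares, and measurements below are invented):

```python
# Minimal poststratification: reweight a simple random sample so each
# stratum's weight share matches its known population share.
import numpy as np

pop_share = {"urban": 0.8, "rural": 0.2}      # known population proportions
strata = np.array(["urban"] * 70 + ["rural"] * 30)       # strata of sampled units
y = np.random.default_rng(1).normal(50, 10, size=100)    # some measured variable

# weight of a unit in stratum s = population share of s / sample share of s
weights = np.array([pop_share[s] / (strata == s).mean() for s in strata])
print(np.average(y, weights=weights))         # poststratified mean estimate
```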
Choice-based sampling or oversampling is one of the stratified sampling strategies. In choice-based sampling,[13]the data are stratified on the target and a sample is taken from each stratum so that rarer target classes will be more represented in the sample. The model is then built on thisbiased sample. The effects of the input variables on the target are often estimated with more precision with the choice-based sample even when a smaller overall sample size is taken, compared to a random sample. The results usually must be adjusted to correct for the oversampling.
In some cases the sample designer has access to an "auxiliary variable" or "size measure", believed to be correlated to the variable of interest, for each element in the population. These data can be used to improve accuracy in sample design. One option is to use the auxiliary variable as a basis for stratification, as discussed above.
Another option is probability proportional to size ('PPS') sampling, in which the selection probability for each element is set to be proportional to its size measure, up to a maximum of 1. In a simple PPS design, these selection probabilities can then be used as the basis forPoisson sampling. However, this has the drawback of variable sample size, and different portions of the population may still be over- or under-represented due to chance variation in selections.
Systematic sampling theory can be used to create a probability proportionate to size sample. This is done by treating each count within the size variable as a single sampling unit. Samples are then identified by selecting at even intervals among these counts within the size variable. This method is sometimes called PPS-sequential or monetary unit sampling in the case of audits or forensic sampling.
Example: Suppose we have six schools with populations of 150, 180, 200, 220, 260, and 490 students respectively (total 1500 students), and we want to use student population as the basis for a PPS sample of size three. To do this, we could allocate the first school numbers 1 to 150, the second school 151 to 330 (= 150 + 180), the third school 331 to 530, and so on to the last school (1011 to 1500). We then generate a random start between 1 and 500 (equal to 1500/3) and count through the school populations by multiples of 500. If our random start was 137, we would select the schools which have been allocated numbers 137, 637, and 1137, i.e. the first, fourth, and sixth schools.
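The school example can be reproduced with a short routine that walks through the cumulated size measure (unit indices returned are 0-based):

```python
import random

def pps_systematic(sizes, n_samples, seed=None):
    """PPS-sequential selection: a random start, then jumps of
    total/n_samples through the cumulated size measure."""
    total = sum(sizes)
    interval = total // n_samples
    start = random.Random(seed).randint(1, interval)
    picks, cumulative, unit = [], 0, 0
    for target in range(start, total + 1, interval):
        while cumulative + sizes[unit] < target:
            cumulative += sizes[unit]
            unit += 1
        picks.append(unit)
    return picks

schools = [150, 180, 200, 220, 260, 490]      # the six school populations
print(pps_systematic(schools, n_samples=3))   # start 137 -> units 0, 3, 5
```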
The PPS approach can improve accuracy for a given sample size by concentrating sample on large elements that have the greatest impact on population estimates. PPS sampling is commonly used for surveys of businesses, where element size varies greatly and auxiliary information is often available – for instance, a survey attempting to measure the number of guest-nights spent in hotels might use each hotel's number of rooms as an auxiliary variable. In some cases, an older measurement of the variable of interest can be used as an auxiliary variable when attempting to produce more current estimates.[14]
Sometimes it is more cost-effective to select respondents in groups ('clusters'). Sampling is often clustered by geography, or by time periods. (Nearly all samples are in some sense 'clustered' in time – although this is rarely taken into account in the analysis.) For instance, if surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks.
Clustering can reduce travel and administrative costs. In the example above, an interviewer can make a single trip to visit several households in one block, rather than having to drive to a different block for each household.
It also means that one does not need asampling framelisting all elements in the target population. Instead, clusters can be chosen from a cluster-level frame, with an element-level frame created only for the selected clusters. In the example above, the sample only requires a block-level city map for initial selections, and then a household-level map of the 100 selected blocks, rather than a household-level map of the whole city.
Cluster sampling (also known as clustered sampling) generally increases the variability of sample estimates above that of simple random sampling, depending on how the clusters differ between one another as compared to the within-cluster variation. For this reason, cluster sampling requires a larger sample than SRS to achieve the same level of accuracy – but cost savings from clustering might still make this a cheaper option.
Cluster samplingis commonly implemented asmultistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from. In the second stage, a sample of primary units is randomly selected from each cluster (rather than using all units contained in all selected clusters). In following stages, in each of those selected clusters, additional samples of units are selected, and so on. All ultimate units (individuals, for instance) selected at the last step of this procedure are then surveyed. This technique, thus, is essentially the process of taking random subsamples of preceding random samples.
Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling.[14]However, each sample may not be a full representative of the whole population.
Inquota sampling, the population is first segmented intomutually exclusivesub-groups, just as instratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the age of 45 and 60.
It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability has been a matter of controversy for several years.
In imbalanced datasets, where the sampling ratio does not follow the population statistics, one can resample the dataset in a conservative manner calledminimax sampling. Minimax sampling has its origin inAnderson'sminimax ratio, whose value is proved to be 0.5: in a binary classification, the class-sample sizes should be chosen equally. This ratio can be proved to be the minimax ratio only under the assumption of anLDAclassifier with Gaussian distributions. The notion of minimax sampling has recently been developed for a general class of classification rules, called class-wise smart classifiers. In this case, the sampling ratio of classes is selected so that the worst-case classifier error over all possible population statistics for the class prior probabilities is minimized.[12]
Accidental sampling (sometimes known asgrab,convenienceoropportunity sampling) is a type of nonprobability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a population is selected because it is readily available and convenient. Participants may be recruited in person as the researcher meets them, or found through technological means such as the internet or by phone. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if an interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people they could interview would be limited to those present there at that given time, and would not represent the views of other members of society in such an area as they would if the survey were conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing. There are several important considerations for researchers using convenience samples.
In social science research,snowball samplingis a similar technique, where existing study subjects are used to recruit more subjects into the sample. Some variants of snowball sampling, such as respondent driven sampling, allow calculation of selection probabilities and are probability sampling methods under certain conditions.
The voluntary sampling method is a type of non-probability sampling. Volunteers choose to complete a survey.
Volunteers may be invited through advertisements in social media.[15]The target population for advertisements can be selected by characteristics like location, age, sex, income, occupation, education, or interests using tools provided by the social medium. The advertisement may include a message about the research and link to a survey. After following the link and completing the survey, the volunteer submits the data to be included in the sample population. This method can reach a global population but is limited by the campaign budget. Volunteers outside the invited population may also be included in the sample.
It is difficult to make generalizations from this sample because it may not represent the total population. Often, volunteers have a strong interest in the main topic of the survey.
Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a "transect", intersects the element.
Panel samplingis the method of first selecting a group of participants through a random sampling method and then asking that group for (potentially the same) information several times over a period of time. Therefore, each participant is interviewed at two or more time points; each period of data collection is called a "wave". The method was developed by sociologistPaul Lazarsfeldin 1938 as a means of studyingpolitical campaigns.[16]Thislongitudinalsampling-method allows estimates of changes in the population, for example with regard to chronic illness, job stress, or weekly food expenditures. Panel sampling can also be used to inform researchers about within-person health changes due to age or to help explain changes in continuous dependent variables such as spousal interaction.[17]There have been several proposed methods of analyzingpanel data, includingMANOVA,growth curves, andstructural equation modelingwith lagged effects.
Snowball sampling involves finding a small group of initial respondents and using them to recruit more respondents. It is particularly useful in cases where the population is hidden or difficult to enumerate.
Theoretical sampling[18]occurs when samples are selected on the basis of the results of the data collected so far, with the goal of developing a deeper understanding of the area or of developing theories. An initial, general sample is first collected to investigate general trends; further sampling may then select extreme or very specific cases in order to maximize the likelihood that a phenomenon will actually be observable.
In active sampling, the samples used for training a machine learning algorithm are actively selected; compareactive learning (machine learning).
Judgement sampling, also known as expert or purposive sampling, is a type of non-random sampling where samples are selected based on the opinion of an expert, who can select participants based on how valuable the information they provide is.
Haphazard sampling refers to the idea of using human judgement to simulate randomness. Despite samples being hand-picked, the goal is to ensure that no conscious bias exists within the choice of samples, but this often fails due toselection bias.[19]Haphazard sampling is generally opted for due to its convenience, when the tools or capacity to perform other sampling methods may not exist.
The major weakness of such samples is that they often do not represent the characteristics of the entire population, but just a segment of the population. Because of this unbalanced representation, results from haphazard sampling are often biased.[20]
Sampling schemes may bewithout replacement('WOR' – no element can be selected more than once in the same sample) orwith replacement('WR' – an element may appear multiple times in the one sample). For example, if we catch fish, measure them, and immediately return them to the water before continuing with the sample, this is a WR design, because we might end up catching and measuring the same fish more than once. However, if we do not return the fish to the water ortag and releaseeach fish after catching it, this becomes a WOR design.
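In code the distinction is just which sampler is used; with Python's standard library:

```python
import random

fish = list(range(20))                        # tagged fish in a pond
rng = random.Random(0)

wr_sample = rng.choices(fish, k=10)           # with replacement: repeats possible
wor_sample = rng.sample(fish, k=10)           # without replacement: all distinct

print(len(set(wr_sample)), len(set(wor_sample)))  # typically < 10, exactly 10
```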
Formulas, tables, and power function charts are well known approaches to determine sample size.
Steps for using sample size tables:
Good data collection involves:
Sampling enables the selection of right data points from within the larger data set to estimate the characteristics of the whole population. For example, there are about 600 million tweets produced every day. It is not necessary to look at all of them to determine the topics that are discussed during the day, nor is it necessary to look at all the tweets to determine the sentiment on each of the topics. A theoretical formulation for sampling Twitter data has been developed.[22]
In manufacturing, different types of sensory data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict down-time it may not be necessary to look at all the data, as a sample may be sufficient.
Survey results are typically subject to some error. Total errors can be classified into sampling errors and non-sampling errors. The term "error" here includes systematic biases as well as random errors.
Sampling errors and biases are induced by the sample design. They include:
Non-sampling errors are other errors which can impact final survey estimates, caused by problems in data collection, processing, or sample design. Such errors may include:
After sampling, a review is held of the exact process followed in sampling, rather than that intended, in order to study any effects that any divergences might have on subsequent analysis.
A particular problem involvesnon-response. Two major types of non-response exist:[23][24]
Insurvey sampling, many of the individuals identified as part of the sample may be unwilling to participate, not have the time to participate (opportunity cost),[25]or survey administrators may not have been able to contact them. In this case, there is a risk of differences between respondents and nonrespondents, leading to biased estimates of population parameters. This is often addressed by improving survey design, offering incentives, and conducting follow-up studies which make a repeated attempt to contact the unresponsive and to characterize their similarities and differences with the rest of the frame.[26]The effects can also be mitigated by weighting the data (when population benchmarks are available) or by imputing data based on answers to other questions. Nonresponse is particularly a problem in internet sampling. Reasons for this problem may include improperly designed surveys,[24]over-surveying (or survey fatigue),[17][27]and the fact that potential participants may have multiple e-mail addresses, which they do not use anymore or do not check regularly.
In many situations, the sample fraction may be varied by stratum and data will have to be weighted to correctly represent the population. Thus for example, a simple random sample of individuals in the United Kingdom might not include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. The rural sample could be under-represented in the sample, but weighted up appropriately in the analysis to compensate.
More generally, data should usually be weighted if the sample design does not give each individual an equal chance of being selected. For instance, when households have equal selection probabilities but one person is interviewed from within each household, this gives people from large households a smaller chance of being interviewed. This can be accounted for using survey weights. Similarly, households with more than one telephone line have a greater chance of being selected in a random digit dialing sample, and weights can adjust for this.
Weights can also serve other purposes, such as helping to correct for non-response.
The textbook by Groves et alia provides an overview of survey methodology, including recent literature on questionnaire development (informed bycognitive psychology).
Other books focus on thestatistical theoryof survey sampling and require some knowledge of basic statistics.
The elementary book by Scheaffer et alia uses quadratic equations from high-school algebra.
More mathematical statistics is required for Lohr, for Särndal et alia, and for Cochran.[28]
The historically important books by Deming and Kish remain valuable for insights for social scientists (particularly about the U.S. census and theInstitute for Social Researchat theUniversity of Michigan).
|
https://en.wikipedia.org/wiki/Sampling_(statistics)
|
Inmathematics, ageneralized hypergeometric seriesis apower seriesin which the ratio of successivecoefficientsindexed bynis arational functionofn. The series, if convergent, defines ageneralized hypergeometric function, which may then be defined over a wider domain of the argument byanalytic continuation. The generalized hypergeometric series is sometimes just called the hypergeometric series, though this term also sometimes just refers to theGaussian hypergeometric series. Generalized hypergeometric functions include the (Gaussian)hypergeometric functionand theconfluent hypergeometric functionas special cases, which in turn have many particularspecial functionsas special cases, such aselementary functions,Bessel functions, and theclassical orthogonal polynomials.
A hypergeometric series is formally defined as apower series
β0+β1z+β2z2+⋯=∑n≥0βnzn{\displaystyle \beta _{0}+\beta _{1}z+\beta _{2}z^{2}+\cdots =\sum _{n\geq 0}\beta _{n}z^{n}}
in which the ratio of successive coefficients is arational functionofn. That is,
βn+1βn=A(n)B(n){\displaystyle {\frac {\beta _{n+1}}{\beta _{n}}}={\frac {A(n)}{B(n)}}}
whereA(n) andB(n) arepolynomialsinn.
For example, in the case of the series for theexponential function,
1+z1!+z22!+z33!+⋯{\displaystyle 1+{\frac {z}{1!}}+{\frac {z^{2}}{2!}}+{\frac {z^{3}}{3!}}+\cdots ,}
we have:
βn=1n!,βn+1βn=1n+1.{\displaystyle \beta _{n}={\frac {1}{n!}},\qquad {\frac {\beta _{n+1}}{\beta _{n}}}={\frac {1}{n+1}}.}
So this satisfies the definition withA(n) = 1andB(n) =n+ 1.
It is customary to factor out the leading term, so β0is assumed to be 1. The polynomials can be factored into linear factors of the form (aj+n) and (bk+n) respectively, where theajandbkarecomplex numbers.
For historical reasons, it is assumed that (1 +n) is a factor ofB. If this is not already the case then bothAandBcan be multiplied by this factor; the factor cancels so the terms are unchanged and there is no loss of generality.
The ratio between consecutive coefficients now has the form
c(a1+n)⋯(ap+n)d(b1+n)⋯(bq+n)(1+n){\displaystyle {\frac {c(a_{1}+n)\cdots (a_{p}+n)}{d(b_{1}+n)\cdots (b_{q}+n)(1+n)}},}
wherecanddare the leading coefficients ofAandB. The series then has the form
or, by scalingzby the appropriate factor and rearranging,
This has the form of anexponential generating function. This series is usually denoted by
pFq(a1,…,ap;b1,…,bq;z){\displaystyle {}_{p}F_{q}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};z)}
or
pFq[a1,…,apb1,…,bq;z]{\displaystyle {}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{q}\end{array}};z\right]}
Using the rising factorial orPochhammer symbol
(a)0=1,(a)n=a(a+1)(a+2)⋯(a+n−1),n≥1,{\displaystyle (a)_{0}=1,\qquad (a)_{n}=a(a+1)(a+2)\cdots (a+n-1),\quad n\geq 1,}
this can be written
pFq(a1,…,ap;b1,…,bq;z)=∑n=0∞(a1)n⋯(ap)n(b1)n⋯(bq)nznn!.{\displaystyle {}_{p}F_{q}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};z)=\sum _{n=0}^{\infty }{\frac {(a_{1})_{n}\cdots (a_{p})_{n}}{(b_{1})_{n}\cdots (b_{q})_{n}}}\,{\frac {z^{n}}{n!}}.}
(Note that this use of the Pochhammer symbol is not standard; however it is the standard usage in this context.)
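A direct, if naive, evaluation of the truncated series is straightforward; the sketch below (plain Python, fixed truncation, valid only inside the radius of convergence) checks two classical special cases:

```python
import math

def rising_factorial(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

def pFq(a_list, b_list, z, terms=60):
    """Truncated generalized hypergeometric series:
    sum_n [prod_j (a_j)_n / prod_k (b_k)_n] * z^n / n!."""
    total = 0.0
    for n in range(terms):
        num = 1.0
        for a in a_list:
            num *= rising_factorial(a, n)
        den = math.factorial(n)
        for b in b_list:
            den *= rising_factorial(b, n)
        total += num / den * z ** n
    return total

print(pFq([], [], 1.0), math.e)                   # 0F0(;;z) = e^z
print(pFq([1, 1], [2], 0.5), math.log(2) / 0.5)   # 2F1(1,1;2;z) = -ln(1-z)/z
```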
When all the terms of the series are defined and it has a non-zeroradius of convergence, then the series defines ananalytic function. Such a function, and itsanalytic continuations, is called thehypergeometric function.
The case when the radius of convergence is 0 yields many interesting series in mathematics, for example theincomplete gamma functionhas theasymptotic expansion
which could be writtenza−1e−z2F0(1−a,1;;−z−1). However, the use of the termhypergeometric seriesis usually restricted to the case where the series defines an actual analytic function.
The ordinary hypergeometric series should not be confused with thebasic hypergeometric series, which, despite its name, is a rather more complicated and recondite series. The "basic" series is theq-analogof the ordinary hypergeometric series. There are several such generalizations of the ordinary hypergeometric series, including the ones coming fromzonal spherical functionsonRiemannian symmetric spaces.
The series without the factor ofn! in the denominator (summed over all integersn, including negative) is called thebilateral hypergeometric series.
There are certain values of theajandbkfor which the numerator or the denominator of the coefficients is 0.
Excluding these cases, theratio testcan be applied to determine the radius of convergence.
The question of convergence forp=q+1 whenzis on the unit circle is more difficult. It can be shown that the series converges absolutely atz= 1 ifℜ(∑j=1qbj−∑i=1pai)>0.{\displaystyle \Re \left(\sum _{j=1}^{q}b_{j}-\sum _{i=1}^{p}a_{i}\right)>0.}
Further, ifp=q+1,∑i=1pai≥∑j=1qbj{\displaystyle \sum _{i=1}^{p}a_{i}\geq \sum _{j=1}^{q}b_{j}}andzis real, then the following convergence result holds (Quigley et al. 2013):
It is immediate from the definition that the order of the parametersaj, or the order of the parametersbkcan be changed without changing the value of the function. Also, if any of the parametersajis equal to any of the parametersbk, then the matching parameters can be "cancelled out", with certain exceptions when the parameters are non-positive integers. For example,
This cancelling is a special case of a reduction formula that may be applied whenever a parameter on the top row differs from one on the bottom row by a non-negative integer.[1][2]
The following basic identity is very useful as it relates the higher-order hypergeometric functions in terms of integrals over the lower order ones[3]
The generalized hypergeometric function satisfies
and
(zddz+bk−1)pFq[a1,…,apb1,…,bk,…,bq;z]=(bk−1)pFq[a1,…,apb1,…,bk−1,…,bq;z]forbk≠1{\displaystyle {\begin{aligned}\left(z{\frac {\rm {d}}{{\rm {d}}z}}+b_{k}-1\right){}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{k},\dots ,b_{q}\end{array}};z\right]&=(b_{k}-1)\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{k}-1,\dots ,b_{q}\end{array}};z\right]{\text{ for }}b_{k}\neq 1\end{aligned}}}
Additionally,
ddzpFq[a1,…,apb1,…,bq;z]=∏i=1pai∏j=1qbjpFq[a1+1,…,ap+1b1+1,…,bq+1;z]{\displaystyle {\begin{aligned}{\frac {\rm {d}}{{\rm {d}}z}}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{q}\end{array}};z\right]&={\frac {\prod _{i=1}^{p}a_{i}}{\prod _{j=1}^{q}b_{j}}}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1}+1,\dots ,a_{p}+1\\b_{1}+1,\dots ,b_{q}+1\end{array}};z\right]\end{aligned}}}
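The differentiation formula can be verified numerically, for instance with mpmath's built-in hypergeometric evaluator (the parameter values are arbitrary test inputs):

```python
# Numerical check of d/dz pFq(a; b; z) = (prod a / prod b) pFq(a+1; b+1; z)
# using mpmath.hyper and numeric differentiation.
import mpmath as mp

a_s, b_s, z = [1.5, 2.0], [3.0], mp.mpf("0.3")

lhs = mp.diff(lambda t: mp.hyper(a_s, b_s, t), z)
rhs = (mp.fprod(a_s) / mp.fprod(b_s)) * mp.hyper([a + 1 for a in a_s],
                                                 [b + 1 for b in b_s], z)
print(lhs, rhs)                               # should agree to working precision
```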
Combining these gives a differential equation satisfied byw=pFq:
Take the following operator:
From the differentiation formulas given above, the linear space spanned by
contains each of
Since the space has dimension 2, any three of thesep+q+2 functions are linearly dependent:[4][5]
These dependencies can be written out to generate a large number of identities involvingpFq{\displaystyle {}_{p}F_{q}}.
For example, in the simplest non-trivial case,
So
This, and other important examples,
can be used to generatecontinued fractionexpressions known asGauss's continued fraction.
Similarly, by applying the differentiation formulas twice, there are(p+q+32){\displaystyle {\binom {p+q+3}{2}}}such functions contained in
which has dimension three so any four are linearly dependent. This generates more identities and the process can be continued. The identities thus generated can be combined with each other to produce new ones in a different way.
A function obtained by adding ±1 to exactly one of the parameters a_j, b_k in

{\displaystyle {}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z)}

is called contiguous to

{\displaystyle {}_{p}F_{q}(a_{1},\dots ,a_{p};b_{1},\dots ,b_{q};z).}

Using the technique outlined above, one can give an identity relating {\displaystyle {}_{0}F_{1}(;a;z)} and its two contiguous functions, six identities relating {\displaystyle {}_{1}F_{1}(a;b;z)} and any two of its four contiguous functions, and fifteen identities relating {\displaystyle {}_{2}F_{1}(a,b;c;z)} and any two of its six contiguous functions. The first was derived in the previous paragraph; the last fifteen were given by Gauss (1813).
A number of other hypergeometric function identities were discovered in the nineteenth and twentieth centuries. A 20th century contribution to the methodology of proving these identities is theEgorychev method.
Saalschütz's theorem[6] (Saalschütz 1890) is

{\displaystyle {}_{3}F_{2}(a,b,-n;c,1+a+b-c-n;1)={\frac {(c-a)_{n}(c-b)_{n}}{(c)_{n}(c-a-b)_{n}}}.}

For an extension of this theorem, see the research paper by Rakha & Rathie. According to (Andrews, Askey & Roy 1999, p. 69), it was in fact first discovered by Pfaff in 1797.[7]
Dixon's identity,[8] first proved by Dixon (1902), gives the sum of a well-poised 3F2 at 1:
For a generalization of Dixon's identity, see the paper by Lavoie et al.
Dougall's formula (Dougall 1907) gives the sum of a very well-poised series that is terminating and 2-balanced.
Terminating means that m is a non-negative integer, and 2-balanced means that
Many of the other formulas for special values of hypergeometric functions can be derived from this as special or limiting cases. It is also called the Dougall-Ramanujan identity. It is a special case of Jackson's identity, and it gives Dixon's identity and Saalschütz's theorem as special cases.[9]
Identity 1.
where
Identity 2.
which links Bessel functions to 2F2; this reduces to Kummer's second formula for b = 2a:

{\displaystyle {}_{1}F_{1}(a;2a;x)=e^{x/2}\;{}_{0}F_{1}\left(;a+{\tfrac {1}{2}};{\tfrac {x^{2}}{16}}\right).}
Identity 3.
Identity 4.
which is a finite sum if b − d is a non-negative integer.
Kummer's relation is
Clausen's formula

{\displaystyle {}_{2}F_{1}\left(a,b;a+b+{\tfrac {1}{2}};z\right)^{2}={}_{3}F_{2}\left(2a,2b,a+b;2a+2b,a+b+{\tfrac {1}{2}};z\right)}

was used by de Branges to prove the Bieberbach conjecture.
Many of the special functions in mathematics are special cases of the confluent hypergeometric function or the hypergeometric function; see the corresponding articles for examples.
As noted earlier, {\displaystyle {}_{0}F_{0}(;;z)=e^{z}}. The differential equation for this function is {\displaystyle {\frac {d}{dz}}w=w}, which has solutions {\displaystyle w=ke^{z}} where k is a constant.
The functions of the form {\displaystyle {}_{0}F_{1}(;a;z)} are called confluent hypergeometric limit functions and are closely related to Bessel functions.
The relationship is:

{\displaystyle J_{\alpha }(x)={\frac {(x/2)^{\alpha }}{\Gamma (\alpha +1)}}\;{}_{0}F_{1}\left(;\alpha +1;-{\tfrac {x^{2}}{4}}\right).}
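This relationship can be verified numerically; a minimal sketch with mpmath, using arbitrary values of α and x:

from mpmath import mp, hyper, besselj, gamma, mpf

mp.dps = 25
alpha, x = mpf('0.5'), mpf('2.3')
# (x/2)^alpha / Gamma(alpha+1) * 0F1(; alpha+1; -x^2/4)
via_hyper = (x / 2)**alpha / gamma(alpha + 1) * hyper([], [alpha + 1], -x**2 / 4)
print(via_hyper - besselj(alpha, x))  # ~0: both sides agree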
The differential equation for this function is

{\displaystyle z{\frac {d^{2}w}{dz^{2}}}+a{\frac {dw}{dz}}-w=0,}

or

{\displaystyle \left(\theta (\theta +a-1)-z\right)w=0.}

When a is not a positive integer, the substitution

{\displaystyle w=z^{1-a}u}

gives a linearly independent solution

{\displaystyle z^{1-a}\;{}_{0}F_{1}(;2-a;z),}

so the general solution is

{\displaystyle k\;{}_{0}F_{1}(;a;z)+lz^{1-a}\;{}_{0}F_{1}(;2-a;z)}

where k, l are constants. (If a is a positive integer, the independent solution is given by the appropriate Bessel function of the second kind.)
A special case is:

{\displaystyle {}_{0}F_{1}\left(;{\tfrac {1}{2}};-{\tfrac {z^{2}}{4}}\right)=\cos z.}
An important case is:

{\displaystyle {}_{1}F_{0}(a;;z)=(1-z)^{-a}.}

The differential equation for this function is

{\displaystyle (1-z){\frac {dw}{dz}}=aw,}

or

{\displaystyle {\frac {dw}{dz}}={\frac {a}{1-z}}\,w,}

which has solutions

{\displaystyle w=k(1-z)^{-a}}

where k is a constant.
The functions of the form {\displaystyle {}_{1}F_{1}(a;b;z)} are called confluent hypergeometric functions of the first kind, also written {\displaystyle M(a;b;z)}. The incomplete gamma function {\displaystyle \gamma (a,z)} is a special case.
The differential equation for this function is

{\displaystyle z{\frac {d^{2}w}{dz^{2}}}+(b-z){\frac {dw}{dz}}-aw=0,}

or

{\displaystyle \left(\theta (\theta +b-1)-z(\theta +a)\right)w=0.}

When b is not a positive integer, the substitution

{\displaystyle w=z^{1-b}u}

gives a linearly independent solution

{\displaystyle z^{1-b}\;{}_{1}F_{1}(a+1-b;2-b;z),}

so the general solution is

{\displaystyle k\;{}_{1}F_{1}(a;b;z)+lz^{1-b}\;{}_{1}F_{1}(a+1-b;2-b;z)}

where k, l are constants.
When a is a non-positive integer, −n, {\displaystyle {}_{1}F_{1}(-n;b;z)} is a polynomial. Up to constant factors, these are the Laguerre polynomials. This implies Hermite polynomials can be expressed in terms of 1F1 as well.
Relations to other functions are known for certain parameter combinations only.
The function {\displaystyle x\;{}_{1}F_{2}\left({\tfrac {1}{2}};{\tfrac {3}{2}},{\tfrac {3}{2}};-{\tfrac {x^{2}}{4}}\right)} is the antiderivative of the cardinal sine. With modified values of a_1 and b_1, one obtains the antiderivative of {\displaystyle \sin(x^{\beta })/x^{\alpha }}.[10]
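The cardinal-sine claim amounts to x 1F2(1/2; 3/2, 3/2; −x²/4) = Si(x), the sine integral. A quick numerical sketch with mpmath (the value of x is arbitrary):

from mpmath import mp, hyper, si, mpf

mp.dps = 25
x = mpf('1.7')
# x * 1F2(1/2; 3/2, 3/2; -x^2/4) should equal the sine integral Si(x).
print(x * hyper([mpf(1)/2], [mpf(3)/2, mpf(3)/2], -x**2 / 4) - si(x))  # ~0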
TheLommel functionissμ,ν(z)=zμ+1(μ−ν+1)(μ+ν+1)1F2(1;μ2−ν2+32,μ2+ν2+32;−z24){\displaystyle s_{\mu ,\nu }(z)={\frac {z^{\mu +1}}{(\mu -\nu +1)(\mu +\nu +1)}}{}_{1}F_{2}(1;{\frac {\mu }{2}}-{\frac {\nu }{2}}+{\frac {3}{2}},{\frac {\mu }{2}}+{\frac {\nu }{2}}+{\frac {3}{2}};-{\frac {z^{2}}{4}})}.[11]
The confluent hypergeometric function of the second kind can be written as:[12]

{\displaystyle U(a,b,z)=z^{-a}\;{}_{2}F_{0}\left(a,a-b+1;;-{\tfrac {1}{z}}\right).}
Historically, the most important are the functions of the form {\displaystyle {}_{2}F_{1}(a,b;c;z)}. These are sometimes called Gauss's hypergeometric functions, classical standard hypergeometric functions, or often simply hypergeometric functions. The term generalized hypergeometric function is used for the functions pFq if there is risk of confusion. This function was first studied in detail by Carl Friedrich Gauss, who explored the conditions for its convergence.
The differential equation for this function is

{\displaystyle z(1-z){\frac {d^{2}w}{dz^{2}}}+\left(c-(a+b+1)z\right){\frac {dw}{dz}}-abw=0,}

or

{\displaystyle \left(\theta (\theta +c-1)-z(\theta +a)(\theta +b)\right)w=0.}

It is known as the hypergeometric differential equation. When c is not a positive integer, the substitution

{\displaystyle w=z^{1-c}u}

gives a linearly independent solution

{\displaystyle z^{1-c}\;{}_{2}F_{1}(a+1-c,b+1-c;2-c;z),}

so the general solution for |z| < 1 is

{\displaystyle k\;{}_{2}F_{1}(a,b;c;z)+lz^{1-c}\;{}_{2}F_{1}(a+1-c,b+1-c;2-c;z)}

where k, l are constants. Different solutions can be derived for other values of z. In fact there are 24 solutions, known as the Kummer solutions, derivable using various identities, valid in different regions of the complex plane.
When a is a non-positive integer, −n,

{\displaystyle {}_{2}F_{1}(-n,b;c;z)}

is a polynomial. Up to constant factors and scaling, these are the Jacobi polynomials. Several other classes of orthogonal polynomials, up to constant factors, are special cases of Jacobi polynomials, so these can be expressed using 2F1 as well. This includes Legendre polynomials and Chebyshev polynomials.
A wide range of integrals of elementary functions can be expressed using the hypergeometric function, e.g.:
The Mott polynomials can be written as:[13]
The function

{\displaystyle z\;{}_{3}F_{2}(1,1,1;2,2;z)}

is the dilogarithm.[14]
The function
is aHahn polynomial.
The function
is aWilson polynomial.
All roots of a quintic equation can be expressed in terms of radicals and the Bring radical, which is the real solution to {\displaystyle x^{5}+x+a=0}. The Bring radical can be written as:[15]
The functions
for {\displaystyle q\in \mathbb {N} _{0}} and {\displaystyle p\in \mathbb {N} } are the polylogarithms.
For each integer n ≥ 2, the roots of the polynomial x^n − x + t can be expressed as a sum of at most n − 1 hypergeometric functions of type n+1Fn, which can always be reduced by eliminating at least one pair of a and b parameters.[15]
The generalized hypergeometric function is linked to the Meijer G-function and the MacRobert E-function. Hypergeometric series were generalised to several variables, for example by Paul Emile Appell and Joseph Kampé de Fériet, but a comparable general theory took long to emerge. Many identities were found, some quite remarkable. A generalization, the q-series analogues, called the basic hypergeometric series, were given by Eduard Heine in the late nineteenth century. Here, the ratios considered of successive terms, instead of being a rational function of n, are a rational function of q^n. Another generalization, the elliptic hypergeometric series, are those series where the ratio of terms is an elliptic function (a doubly periodic meromorphic function) of n.
During the twentieth century this was a fruitful area of combinatorial mathematics, with numerous connections to other fields. There are a number of new definitions of general hypergeometric functions, by Aomoto, Israel Gelfand and others, and applications, for example, to the combinatorics of arranging a number of hyperplanes in complex N-space (see arrangement of hyperplanes).
Special hypergeometric functions occur as zonal spherical functions on Riemannian symmetric spaces and semi-simple Lie groups. Their importance and role can be understood through the following example: the hypergeometric series 2F1 has the Legendre polynomials as a special case, and when considered in the form of spherical harmonics, these polynomials reflect, in a certain sense, the symmetry properties of the two-sphere or, equivalently, the rotations given by the Lie group SO(3). In tensor product decompositions of concrete representations of this group, Clebsch–Gordan coefficients are met, which can be written as 3F2 hypergeometric series.
Bilateral hypergeometric series are a generalization of hypergeometric functions where one sums over all integers, not just the positive ones.
Fox–Wright functions are a generalization of generalized hypergeometric functions where the Pochhammer symbols in the series expression are generalised to gamma functions of linear expressions in the index n.
|
https://en.wikipedia.org/wiki/Generalized_hypergeometric_function
|
In probability theory, the coupon collector's problem refers to mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as {\displaystyle \Theta (n\log(n))}.[a] For example, when n = 50 it takes about 225[b] trials on average to collect all 50 coupons.
By definition of Stirling numbers of the second kind, the probability that exactly T draws are needed is

{\displaystyle {\frac {S(T-1,n-1)\,n!}{n^{T}}}.}

By manipulating the generating function of the Stirling numbers,

{\displaystyle f_{k}(x):=\sum _{T}S(T,k)x^{T}=\prod _{r=1}^{k}{\frac {x}{1-rx}},}

we can explicitly calculate all moments of T: in general, the k-th moment is {\displaystyle (n-1)!\left((D_{x}x)^{k}f_{n-1}(x)\right){\Big |}_{x=1/n}}, where {\displaystyle D_{x}} is the derivative operator d/dx.
For example, the 0th moment is

{\displaystyle \sum _{T}{\frac {S(T-1,n-1)\,n!}{n^{T}}}=(n-1)!\,f_{n-1}(1/n)=(n-1)!\times \prod _{r=1}^{n-1}{\frac {1/n}{1-r/n}}=1,}

and the 1st moment is {\displaystyle (n-1)!\left(D_{x}xf_{n-1}(x)\right){\Big |}_{x=1/n}}, which can be explicitly evaluated to {\displaystyle nH_{n}}, etc.
Let time T be the number of draws needed to collect all n coupons, and let ti be the time to collect the i-th coupon after i − 1 coupons have been collected. Then {\displaystyle T=t_{1}+\cdots +t_{n}}. Think of T and ti as random variables. Observe that the probability of collecting a new coupon is {\displaystyle p_{i}={\frac {n-(i-1)}{n}}={\frac {n-i+1}{n}}}. Therefore, {\displaystyle t_{i}} has a geometric distribution with expectation {\displaystyle {\frac {1}{p_{i}}}={\frac {n}{n-i+1}}}. By the linearity of expectations we have:
Here Hn is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:
where {\displaystyle \gamma \approx 0.5772156649} is the Euler–Mascheroni constant.
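A short Monte Carlo sketch in Python makes the comparison concrete for n = 50, checking the empirical mean against n·Hn and the asymptotic n·ln(n) + γn + 1/2 (the function name is ours):

import math
import random

def draws_to_collect(n: int) -> int:
    """Number of uniform draws with replacement until all n coupons are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, trials = 50, 20_000
empirical = sum(draws_to_collect(n) for _ in range(trials)) / trials
h_n = sum(1 / i for i in range(1, n + 1))
gamma = 0.5772156649
print(empirical)                          # ~225
print(n * h_n)                            # exact expectation, ~224.96
print(n * math.log(n) + gamma * n + 0.5)  # asymptotic approximation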
Using the Markov inequality to bound the desired probability:
The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected; then:
When k = 0, this recovers the original result.
Using the independence of the random variables ti, we obtain:
since {\displaystyle {\frac {\pi ^{2}}{6}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{n^{2}}}+\cdots } (see the Basel problem).
Bound the desired probability using the Chebyshev inequality:
A stronger estimate for the upper tail can be obtained as follows. Let {\displaystyle {Z}_{i}^{r}} denote the event that the i-th coupon was not picked in the first r trials. Then
Thus, for {\displaystyle r=\beta n\log n}, we have {\displaystyle P\left[{Z}_{i}^{r}\right]\leq e^{(-\beta n\log n)/n}=n^{-\beta }}. Via a union bound over the n coupons, we obtain
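The union bound can also be checked empirically; this sketch reuses draws_to_collect from the previous snippet and takes β = 1.5:

import math

n, beta, trials = 50, 1.5, 20_000
threshold = beta * n * math.log(n)
# Fraction of runs in which collecting everything took more than beta*n*log(n) draws.
exceeded = sum(draws_to_collect(n) > threshold for _ in range(trials)) / trials
print(exceeded, n ** (1 - beta))  # empirical tail probability vs. the bound n^(1-beta)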
|
https://en.wikipedia.org/wiki/Coupon_collector%27s_problem
|
In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:
These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of X); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.
The geometric distribution gives the probability that the first occurrence of success requires k independent trials, each with success probability p. If the probability of success on each trial is p, then the probability that the k-th trial is the first success is
for k = 1, 2, 3, 4, …
The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:
for k = 0, 1, 2, 3, …
The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.[1]: 210
The geometric distribution is the discrete probability distribution that describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. Its probability mass function depends on its parameterization and support. When supported on {\displaystyle \mathbb {N} }, the probability mass function is {\displaystyle P(X=k)=(1-p)^{k-1}p} where k = 1, 2, 3, … is the number of trials and p is the probability of success in each trial.[2]: 260–261
The support may also be {\displaystyle \mathbb {N} _{0}}, defining Y = X − 1. This alters the probability mass function into {\displaystyle P(Y=k)=(1-p)^{k}p} where k = 0, 1, 2, … is the number of failures before the first success.[3]: 66
An alternative parameterization of the distribution gives the probability mass function {\displaystyle P(Y=k)=\left({\frac {P}{Q}}\right)^{k}\left(1-{\frac {P}{Q}}\right)} where {\displaystyle P={\frac {1-p}{p}}} and {\displaystyle Q={\frac {1}{p}}}.[1]: 208–209
An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a 1/6 chance of success. The number of rolls needed follows a geometric distribution with p = 1/6.
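A direct simulation of this example (the function name is illustrative):

import random

def rolls_until_one() -> int:
    """Roll a fair six-sided die until it shows 1; return the number of rolls."""
    rolls = 0
    while True:
        rolls += 1
        if random.randint(1, 6) == 1:
            return rolls

samples = [rolls_until_one() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~6, the expectation 1/p with p = 1/6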
The geometric distribution is the only memoryless discrete probability distribution.[4] It is the discrete version of the same property found in the exponential distribution.[1]: 228 The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success.
Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables.[5] Expressed in terms of conditional probability, the two definitions are

{\displaystyle \Pr(X>m+n\mid X>n)=\Pr(X>m),}

and

{\displaystyle \Pr(Y>m+n\mid Y\geq n)=\Pr(Y>m),}

where m and n are natural numbers, X is a geometrically distributed random variable defined over {\displaystyle \mathbb {N} }, and Y is a geometrically distributed random variable defined over {\displaystyle \mathbb {N} _{0}}. Note that these definitions are not equivalent for discrete random variables; Y does not satisfy the first equation and X does not satisfy the second.
The expected value and variance of a geometrically distributed random variable X defined over {\displaystyle \mathbb {N} } are[2]: 261

{\displaystyle \operatorname {E} (X)={\frac {1}{p}},\qquad \operatorname {var} (X)={\frac {1-p}{p^{2}}}.}

With a geometrically distributed random variable Y defined over {\displaystyle \mathbb {N} _{0}}, the expected value changes into

{\displaystyle \operatorname {E} (Y)={\frac {1-p}{p}},}

while the variance stays the same.[6]: 114–115
For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is {\displaystyle {\frac {1}{1/6}}=6} and the average number of failures is {\displaystyle {\frac {1-1/6}{1/6}}=5}.
The moment generating function of the geometric distribution when defined over {\displaystyle \mathbb {N} } and {\displaystyle \mathbb {N} _{0}} respectively is[7][6]: 114

{\displaystyle {\begin{aligned}M_{X}(t)&={\frac {pe^{t}}{1-(1-p)e^{t}}}\\M_{Y}(t)&={\frac {p}{1-(1-p)e^{t}}},\quad t<-\ln(1-p)\end{aligned}}}

The moments for the number of failures before the first success are given by

{\displaystyle \operatorname {E} (Y^{n})=\sum _{k=0}^{\infty }k^{n}(1-p)^{k}p=p\operatorname {Li} _{-n}(1-p)\quad (n\geq 1),}

where {\displaystyle \operatorname {Li} _{-n}(1-p)} is the polylogarithm function.[8]
The cumulant generating function of the geometric distribution defined over {\displaystyle \mathbb {N} _{0}} is[1]: 216

{\displaystyle K(t)=\ln p-\ln(1-(1-p)e^{t})}

The cumulants {\displaystyle \kappa _{r}} satisfy the recursion

{\displaystyle \kappa _{r+1}=q{\frac {\delta \kappa _{r}}{\delta q}},\quad r=1,2,\dotsc }

where q = 1 − p, when defined over {\displaystyle \mathbb {N} _{0}}.[1]: 216
Consider the expected value {\displaystyle \mathrm {E} (X)} of X as above, i.e. the average number of trials until a success.
The first trial either succeeds with probability p, or fails with probability 1 − p.
If it fails, the remaining mean number of trials until a success is identical to the original mean;
this follows from the fact that all trials are independent.
From this we get the formula:

{\displaystyle \mathrm {E} (X)=p\cdot 1+(1-p)\left(1+\mathrm {E} (X)\right),}

which, when solved for {\displaystyle \mathrm {E} (X)}, gives:

{\displaystyle \mathrm {E} (X)={\frac {1}{p}}.}

The expected number of failures Y can be found from the linearity of expectation, {\displaystyle \mathrm {E} (Y)=\mathrm {E} (X-1)=\mathrm {E} (X)-1={\frac {1}{p}}-1={\frac {1-p}{p}}}. It can also be shown in the following way:
The interchange of summation and differentiation is justified by the fact that convergentpower seriesconverge uniformlyoncompactsubsets of the set of points where they converge.
The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, {\displaystyle {\frac {1}{p}}} or {\displaystyle {\frac {1-p}{p}}} when defined over {\displaystyle \mathbb {N} } or {\displaystyle \mathbb {N} _{0}} respectively.
The median of the geometric distribution is {\displaystyle \left\lceil -{\frac {\log 2}{\log(1-p)}}\right\rceil } when defined over {\displaystyle \mathbb {N} }[9] and {\displaystyle \left\lfloor -{\frac {\log 2}{\log(1-p)}}\right\rfloor } when defined over {\displaystyle \mathbb {N} _{0}}.[3]: 69
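As a sketch, the median formula for the support {1, 2, …} can be checked against the smallest k for which the cumulative probability 1 − (1 − p)^k reaches 1/2 (the value of p is arbitrary):

import math

p = 0.3
median_formula = math.ceil(-math.log(2) / math.log(1 - p))
k = 1
while 1 - (1 - p)**k < 0.5:   # walk the CDF until it reaches 1/2
    k += 1
print(median_formula, k)      # both print 2 for p = 0.3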
The mode of the geometric distribution is the first value in the support set. This is 1 when defined over {\displaystyle \mathbb {N} } and 0 when defined over {\displaystyle \mathbb {N} _{0}}.[3]: 69
The skewness of the geometric distribution is {\displaystyle {\frac {2-p}{\sqrt {1-p}}}}.[6]: 115
The kurtosis of the geometric distribution is {\displaystyle 9+{\frac {p^{2}}{1-p}}}.[6]: 115 The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, 3.[10]: 217 Therefore, the excess kurtosis of the geometric distribution is {\displaystyle 6+{\frac {p^{2}}{1-p}}}. Since {\displaystyle {\frac {p^{2}}{1-p}}\geq 0}, the excess kurtosis is always positive, so the distribution is leptokurtic.[3]: 69 In other words, the tail of a geometric distribution decays faster than a Gaussian.[10]: 217
Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:

{\displaystyle P(Y=k)=(1-p)^{k}p,\quad k=0,1,2,\dotsc }

The entropy {\displaystyle H(Y)} for this distribution is:

{\displaystyle H(Y)={\frac {-(1-p)\log(1-p)-p\log p}{p}}.}
The entropy increases as the probabilityp{\displaystyle p}decreases, reflecting greater uncertainty as success becomes rarer.
Fisher information measures the amount of information that an observable random variable carries about an unknown parameter p. For the geometric distribution (failures before the first success), the Fisher information with respect to p is given by:

{\displaystyle I(p)={\frac {1}{p^{2}(1-p)}}.}
Proof:
Fisher information increases asp{\displaystyle p}decreases, indicating that rarer successes provide more information about the parameterp{\displaystyle p}.
For the geometric distribution modeling the number of trials until the first success, the probability mass function is:

{\displaystyle P(X=k)=(1-p)^{k-1}p,\quad k=1,2,3,\dotsc }

The entropy {\displaystyle H(X)} for this distribution is given by:

{\displaystyle H(X)={\frac {-(1-p)\log(1-p)-p\log p}{p}}.}
Entropy increases asp{\displaystyle p}decreases, reflecting greater uncertainty as the probability of success in each trial becomes smaller.
Fisher information for the geometric distribution modeling the number of trials until the first success is given by:

{\displaystyle I(p)={\frac {1}{p^{2}(1-p)}}.}
Proof:
The true parameter p of an unknown geometric distribution can be inferred through estimators and conjugate distributions.
Provided they exist, the first l moments of a probability distribution can be estimated from a sample {\displaystyle x_{1},\dotsc ,x_{n}} using the formula {\displaystyle m_{i}={\frac {1}{n}}\sum _{j=1}^{n}x_{j}^{i}} where {\displaystyle m_{i}} is the i-th sample moment and 1 ≤ i ≤ l.[16]: 349–350 Estimating {\displaystyle \mathrm {E} (X)} with {\displaystyle m_{1}} gives the sample mean, denoted {\displaystyle {\bar {x}}}. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for p gives the estimators {\displaystyle {\hat {p}}={\frac {1}{\bar {x}}}} and {\displaystyle {\hat {p}}={\frac {1}{{\bar {x}}+1}}} when supported on {\displaystyle \mathbb {N} } and {\displaystyle \mathbb {N} _{0}} respectively. These estimators are biased since {\displaystyle \mathrm {E} \left({\frac {1}{\bar {x}}}\right)>{\frac {1}{\mathrm {E} ({\bar {x}})}}=p} as a result of Jensen's inequality.[17]: 53–54
The maximum likelihood estimator of p is the value that maximizes the likelihood function given a sample.[16]: 308 By finding the zero of the derivative of the log-likelihood function when the distribution is defined over {\displaystyle \mathbb {N} }, the maximum likelihood estimator can be found to be {\displaystyle {\hat {p}}={\frac {1}{\bar {x}}}}, where {\displaystyle {\bar {x}}} is the sample mean.[18] If the domain is {\displaystyle \mathbb {N} _{0}}, then the estimator shifts to {\displaystyle {\hat {p}}={\frac {1}{{\bar {x}}+1}}}. As previously discussed in § Method of moments, these estimators are biased.
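A minimal sketch of the estimator on the support {1, 2, …}: simulate a sample by repeated Bernoulli trials and recover p from the sample mean (p_true and the helper name are illustrative):

import random

def geometric_trials(p: float) -> int:
    """Number of Bernoulli(p) trials up to and including the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

p_true = 0.25
sample = [geometric_trials(p_true) for _ in range(100_000)]
x_bar = sum(sample) / len(sample)
print(1 / x_bar)  # maximum likelihood estimate, close to 0.25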
Regardless of the domain, the bias is equal to
which yields thebias-corrected maximum likelihood estimator,[citation needed]
In Bayesian inference, the parameter p is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples.[17]: 167 If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution, and it is called the conjugate distribution. In particular, if a {\displaystyle \mathrm {Beta} (\alpha ,\beta )} prior is selected, then the posterior, after observing samples {\displaystyle k_{1},\dotsc ,k_{n}\in \mathbb {N} }, is[19]

{\displaystyle p\sim \mathrm {Beta} \left(\alpha +n,\ \beta +\sum _{i=1}^{n}(k_{i}-1)\right).}

Alternatively, if the samples are in {\displaystyle \mathbb {N} _{0}}, the posterior distribution is[20]

{\displaystyle p\sim \mathrm {Beta} \left(\alpha +n,\beta +\sum _{i=1}^{n}k_{i}\right).}

Since the expected value of a {\displaystyle \mathrm {Beta} (\alpha ,\beta )} distribution is {\displaystyle {\frac {\alpha }{\alpha +\beta }}},[11]: 145 as α and β approach zero, the posterior mean approaches its maximum likelihood estimate.
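The conjugate update itself is just arithmetic on the Beta parameters; a sketch with a uniform Beta(1, 1) prior and hypothetical observations on the support {1, 2, …}:

alpha, beta = 1.0, 1.0          # Beta(1, 1), i.e. a uniform prior on p
observations = [3, 1, 5, 2, 4]  # hypothetical observed trial counts k_i

alpha_post = alpha + len(observations)
beta_post = beta + sum(k - 1 for k in observations)
print(alpha_post, beta_post)                  # parameters of the posterior
print(alpha_post / (alpha_post + beta_post))  # posterior mean of p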
The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to p. However, the number of random variables needed is also geometrically distributed, and the algorithm slows as p decreases.[21]: 498
Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable E can become geometrically distributed with parameter p through {\displaystyle \lceil -E/\log(1-p)\rceil }. In turn, E can be generated from a standard uniform random variable U, altering the formula into {\displaystyle \lceil \log(U)/\log(1-p)\rceil }.[21]: 499–500[22]
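The truncation formula, as code (a sketch; one uniform variate per geometric variate):

import math
import random

def geometric_from_uniform(p: float) -> int:
    """Geometric variate on {1, 2, ...} from a single standard uniform variate."""
    return math.ceil(math.log(random.random()) / math.log(1 - p))

draws = [geometric_from_uniform(0.2) for _ in range(100_000)]
print(sum(draws) / len(draws))  # ~5, the expectation 1/p for p = 0.2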
The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution.[23] In stochastic processes, the Yule–Furry process is geometrically distributed.[24] The distribution also arises when modeling the lifetime of a device in discrete contexts.[25] It has also been used to fit data, including modeling patients spreading COVID-19.[26]
|
https://en.wikipedia.org/wiki/Geometric_distribution
|
Keno /kiːnoʊ/ is a lottery-like gambling game often played at modern casinos, and also offered as a game in some lotteries.
Players wager by choosing numbers ranging from 1 through (usually) 80. After all players make their wagers, 20 numbers (some variants draw fewer numbers) are drawn at random, either with a ball machine similar to ones used for lotteries and bingo, or with a random number generator.
Each casino sets its own series of payouts, called "paytables". The player is paid based on how many numbers were chosen (either player selection, or the terminal picking the numbers), the number of matches out of those chosen, and the wager.
There are a wide variety of keno paytables depending on the casino, usually with a larger "house edge" than other games, ranging from less than 4 percent[1] to over 35 percent[2] in online play, and 20–40% in in-person casinos.[3] By way of comparison, the typical house edge for non-slot casino games is under 5%.[4]
The word "keno" hasFrenchorLatinroots (Fr.quine"five winning numbers", L.quini"five each"), but by all accounts the game originated in China. Legend has it thatZhang Lianginvented the game during theChu-Han Contentionto raise money to defend an ancient city, and its widespread popularity later helped raise funds to build theGreat Wall of China. In modern China, the idea of usinglotteriesto fund a public institution was not accepted before the late 19th century.[5]
Chinese lottery is not documented before 1847, when the Portuguese government ofMacaodecided to grant a licence to lottery operators. According to some, results of keno games in great cities were sent to outlying villages and hamlets bycarrier pigeons, resulting in its Chinese name 白鸽票báigē piào, with the literal reading "white dove tickets" in Mandarin, but in Southern varieties of Chinese spoken inGuangdongsimply meaning "pigeon tickets",[6]and pronouncedbaak6-gaap3-piu3inCantonese(on which the Western spelling 'pak-ah-pu' / 'pakapoo' was based).
The Chinese played the game using sheets printed withChinese characters, often the first 80 characters of theThousand Character Classic, from which the winning characters were selected.[7][8]Eventually, Chinese immigrants introduced keno to the West when they sailed across the Pacific Ocean to work on construction of theFirst transcontinental railroadin the 19th century,[9]where the name was Westernized intoboc hop bu[8]andpuck-apu.[7]There were also other, earlier games called Keno, but these were played in the same way as the game now known as "Bingo", not the modern game of Keno.[citation needed]
Keno payouts are based on how many numbers the player chooses and how many of those numbers are "hit", multiplied by the proportion of the player's original wager to the "base rate" of the paytable. Typically, the more numbers a player chooses and the more numbers hit, the greater the payout, although some paytables pay for hitting a lesser number of spots. For example, it is not uncommon to see casinos paying $500 or even $1,000 for a "catch" of 0 out of 20 on a 20-spot ticket with a $5.00 wager. Payouts vary widely by casino. Most casinos allow paytable wagers of 1 through 20 numbers, but some limit the choice to only 1 through 10, 12 and 15 numbers, or "spots" as keno aficionados call the numbers selected.[10]
The probability of a player hitting all 20 numbers on a 20-spot ticket is approximately 1 in 3.5 quintillion (1 in 3,535,316,142,212,174,320).[11]
Even though it is highly improbable to hit all 20 numbers on a 20-spot ticket, the same player would typically also get paid for hitting "catches" 0, 1, 2, 3, and 7 through 19 out of 20, often with the 17 through 19 catches paying the same as the solid 20 hit. Some of the other paying "catches" on a 20-spot ticket or any other ticket with high "solid catch" odds are in reality very possible to hit:
Probabilities change significantly based on the number of spots and numbers that are picked on each ticket.
Keno probabilities come from a hypergeometric distribution.[12][13] For Keno with 80 numbers of which 20 are drawn, one calculates the probability of hitting exactly r spots on an n-spot ticket by the formula:

{\displaystyle P(r{\text{ hits on an }}n{\text{-spot ticket}})={\frac {{\binom {20}{r}}{\binom {60}{n-r}}}{\binom {80}{n}}}.}

To calculate the probability of hitting 4 spots on a 6-spot ticket, the formula is:

{\displaystyle P={\frac {{\binom {20}{4}}{\binom {60}{2}}}{\binom {80}{6}}}\approx 0.0285,}

where {\displaystyle {n \choose r}} is calculated as {\displaystyle n! \over r!(n-r)!}, where X! is the notation for X factorial. Spreadsheets have the function COMBIN(n,r) to calculate {\displaystyle {n \choose r}}.
To calculate "odds-to-1", divide the probability into 1.0 and subtract 1 from the result.
|
https://en.wikipedia.org/wiki/Keno
|
In the design of experiments in statistics, the lady tasting tea is a randomized experiment devised by Ronald Fisher and reported in his book The Design of Experiments (1935).[1] The experiment is the original exposition of Fisher's notion of a null hypothesis, which is "never proved or established, but is possibly disproved, in the course of experimentation".[2][3]
The example is loosely based on an event in Fisher's life. The woman in question, phycologist Muriel Bristol, claimed to be able to tell whether the tea or the milk was added first to a cup. Her future husband, William Roach, suggested that Fisher give her eight cups, four of each variety, in random order.[4] One could then ask what the probability was for her getting the specific number of cups she identified correct (in fact all eight), but just by chance.
Fisher's description is less than 10 pages in length and is notable for its simplicity and completeness regarding terminology, calculations and design of the experiment.[5] The test used was Fisher's exact test.
The experiment provides a subject with eight randomly ordered cups of tea – four prepared by pouring milk and then tea, four by pouring tea and then milk. The subject attempts to select the four cups prepared by one method or the other, and may compare cups directly against each other as desired. The method employed in the experiment is fully disclosed to the subject.
The null hypothesis is that the subject has no ability to distinguish the teas. In Fisher's approach, there was no alternative hypothesis,[2] unlike in the Neyman–Pearson approach.
The test statistic is a simple count of the number of successful attempts to select the four cups prepared by a given method. The distribution of possible numbers of successes, assuming the null hypothesis is true, can be computed using the number of combinations. Using the combination formula, with n = 8 total cups and k = 4 cups chosen, there are {\displaystyle {\binom {8}{4}}={\frac {8!}{4!(8-4)!}}=70} possible combinations.
The frequencies of the possible numbers of successes, given in the final column of this table, are derived as follows. For 0 successes, there is clearly only one set of four choices (namely, choosing all four incorrect cups) giving this result. For one success and three failures, there are four correct cups of which one is selected, which by the combination formula can occur in {\displaystyle {\binom {4}{1}}=4} different ways (as shown in column 2, with x denoting a correct cup that is chosen and o denoting a correct cup that is not chosen); and independently of that, there are four incorrect cups of which three are selected, which can occur in {\displaystyle {\binom {4}{3}}=4} ways (as shown in the second column, this time with x interpreted as an incorrect cup which is not chosen, and o indicating an incorrect cup which is chosen). Thus a selection of any one correct cup and any three incorrect cups can occur in any of 4×4 = 16 ways. The frequencies of the other possible numbers of successes are calculated correspondingly. Thus the number of successes is distributed according to the hypergeometric distribution. Specifically, for a random variable X equal to the number of successes, we may write {\displaystyle X\sim \operatorname {Hypergeometric} (N=8,K=4,n=4)}, where N is the population size or total number of cups of tea, K is the number of success states in the population or four cups of either type, and n is the number of draws, or four cups. The distribution of combinations for making k selections out of the 2k available selections corresponds to the k-th row of Pascal's triangle, such that each integer in the row is squared. In this case, k = 4 because 4 teacups are selected from the 8 available teacups.
The critical region for rejection of the null of no ability to distinguish was the single case of 4 successes of 4 possible, based on the conventional probability criterion < 5%. This is the critical region because under the null of no ability to distinguish, 4 successes has 1 chance out of 70 (≈ 1.4% < 5%) of occurring, whereas at least 3 of 4 successes has a probability of (16+1)/70 (≈ 24.3% > 5%).
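The whole null distribution, and hence the critical region, follows from a few binomial coefficients; a sketch in Python:

from math import comb

total = comb(8, 4)  # 70 equally likely ways to pick 4 cups out of 8
for successes in range(5):
    ways = comb(4, successes) * comb(4, 4 - successes)
    print(successes, ways, ways / total)
# The line for 4 successes gives 1/70, about 1.4%: Fisher's critical region.
# Already 3 or more successes would have probability 17/70, about 24.3%.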
Thus, if and only if the lady properly categorized all 8 cups was Fisher willing to reject the null hypothesis – effectively acknowledging the lady's ability at a 1.4% significance level (but without quantifying her ability). Fisher later discussed the benefits of more trials and repeated tests.
David Salsburg reports that a colleague of Fisher, H. Fairfield Smith, revealed that in the actual experiment the lady succeeded in identifying all eight cups correctly.[6][7] The chance that someone who is merely guessing gets all eight correct, assuming she guesses that any four had the tea put in first and the other four the milk, is only 1 in 70 (the number of combinations of 8 items taken 4 at a time).
David Salsburg published a popular science book entitled The Lady Tasting Tea,[6] which describes Fisher's experiment and ideas on randomization. Deb Basu wrote that "the famous case of the 'lady tasting tea'" was "one of the two supporting pillars ... of the randomization analysis of experimental data."[8]
|
https://en.wikipedia.org/wiki/Lady_tasting_tea
|
In computer science, a parsing expression grammar (PEG) is a type of analytic formal grammar, i.e. it describes a formal language in terms of a set of rules for recognizing strings in the language. The formalism was introduced by Bryan Ford in 2004[1] and is closely related to the family of top-down parsing languages introduced in the early 1970s.
Syntactically, PEGs also look similar to context-free grammars (CFGs), but they have a different interpretation: the choice operator selects the first match in PEG, while it is ambiguous in CFG. This is closer to how string recognition tends to be done in practice, e.g. by a recursive descent parser.
Unlike CFGs, PEGs cannot be ambiguous; a string has exactly one valid parse tree or none. It is conjectured that there exist context-free languages that cannot be recognized by a PEG, but this is not yet proven.[1] PEGs are well-suited to parsing computer languages (and artificial human languages such as Lojban) where multiple interpretation alternatives can be disambiguated locally, but are less likely to be useful for parsing natural languages where disambiguation may have to be global.[2]
A parsing expression is a kind of pattern that each string may either match or not match. In case of a match, there is a unique prefix of the string (which may be the whole string, the empty string, or something in between) which has been consumed by the parsing expression; this prefix is what one would usually think of as having matched the expression. However, whether a string matches a parsing expression may (because of look-ahead predicates) depend on parts of it which come after the consumed part. A parsing expression language is the set of all strings that match some specific parsing expression.[1]: Sec.3.4
A parsing expression grammar is a collection of named parsing expressions, which may reference each other. The effect of one such reference in a parsing expression is as if the whole referenced parsing expression were given in place of the reference. A parsing expression grammar also has a designated starting expression; a string matches the grammar if it matches its starting expression.
An element of a string matched is called a terminal symbol, or terminal for short. Likewise the names assigned to parsing expressions are called nonterminal symbols, or nonterminals for short. These terms would be descriptive for generative grammars, but in the case of parsing expression grammars they are merely terminology, kept mostly because of being near ubiquitous in discussions of parsing algorithms.
Both abstract and concrete syntaxes of parsing expressions are seen in the literature, and in this article. The abstract syntax is essentially a mathematical formula and primarily used in theoretical contexts, whereas concrete syntax parsing expressions could be used directly to control a parser. The primary concrete syntax is that defined by Ford,[1]: Fig.1 although many tools have their own dialect of this. Other tools[3] can be closer to using a programming-language native encoding of abstract syntax parsing expressions as their concrete syntax.
The two main kinds of parsing expressions not containing another parsing expression are individual terminal symbols and nonterminal symbols. In concrete syntax, terminals are placed inside quotes (single or double), whereas identifiers not in quotes denote nonterminals:
In the abstract syntax there is no formalised distinction, instead each symbol is supposedly defined as either terminal or nonterminal, but a common convention is to use upper case for nonterminals and lower case for terminals.
The concrete syntax also has a number of forms for classes of terminals:
In abstract syntax, such forms are usually formalised as nonterminals whose exact definition is elided for brevity; in Unicode, there are tens of thousands of characters that are letters. Conversely, theoretical discussions sometimes introduce atomic abstract syntax for concepts that can alternatively be expressed using composite parsing expressions. Examples of this include:
In the concrete syntax, quoted and bracketed terminals have backslash escapes, so that "line feed or carriage return" may be written [\n\r]. The abstract syntax counterpart of a quoted terminal of length greater than one would be the sequence of those terminals; "bar" is the same as "b" "a" "r". The primary concrete syntax assigns no distinct meaning to terminals depending on whether they use single or double quotes, but some dialects treat one as case-sensitive and the other as case-insensitive.
Given any existing parsing expressions e, e1, and e2, a new parsing expression can be constructed using the following operators:
Operator priorities are as follows, based on Table 1 in:[1]
In the concrete syntax, a parsing expression grammar is simply a sequence of nonterminal definitions, each of which has the form
The Identifier is the nonterminal being defined, and the Expression is the parsing expression it is defined as referencing. The LEFTARROW varies a bit between dialects, but is generally some left-pointing arrow or assignment symbol, such as <-, ←, :=, or =. One way to understand it is precisely as making an assignment or definition of the nonterminal. Another way to understand it is as a contrast to the right-pointing arrow → used in the rules of a context-free grammar; with parsing expressions the flow of information goes from expression to nonterminal, not nonterminal to expression.
As a mathematical object, a parsing expression grammar is a tuple {\displaystyle (N,\Sigma ,P,e_{S})}, where N is the set of nonterminal symbols, Σ is the set of terminal symbols, P is a function from N to the set of parsing expressions on {\displaystyle N\cup \Sigma }, and {\displaystyle e_{S}} is the starting parsing expression. Some concrete syntax dialects give the starting expression explicitly,[4] but the primary concrete syntax instead has the implicit rule that the first nonterminal defined is the starting expression.
It is worth noticing that the primary dialect of concrete syntax parsing expression grammars does not have an explicit definition terminator or separator between definitions, although it is customary to begin a new definition on a new line; the LEFTARROW of the next definition is sufficient for finding the boundary, if one adds the constraint that a nonterminal in an Expression must not be followed by a LEFTARROW. However, some dialects may allow an explicit terminator, or outright require[4] it.
This is a PEG that recognizes mathematical formulas that apply the five basic operations to non-negative integers.
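One standard formulation of such a grammar, with the rule names used in the discussion below, is:

Expr    ← Sum
Sum     ← Product (('+' / '-') Product)*
Product ← Power (('*' / '/') Power)*
Power   ← Value ('^' Power)?
Value   ← [0-9]+ / '(' Expr ')'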
In the above example, the terminal symbols are characters of text, represented by characters in single quotes, such as '(' and ')'. The range [0-9] is a shortcut for the ten characters from '0' to '9'. (This range syntax is the same as the syntax used by regular expressions.) The nonterminal symbols are the ones that expand to other rules: Value, Power, Product, Sum, and Expr. Note that the rules Sum and Product don't lead to the desired left-associativity of these operations (they don't deal with associativity at all, and it has to be handled in a post-processing step after parsing), and the Power rule (by referring to itself on the right) results in the desired right-associativity of the exponent. Also note that a rule like Sum ← Sum (('+'/'-') Product)? (with the intention of achieving left-associativity) would cause infinite recursion, so it cannot be used in practice even though it can be expressed in the grammar.
The fundamental difference between context-free grammars and parsing expression grammars is that the PEG's choice operator is ordered. If the first alternative succeeds, the second alternative is ignored. Thus ordered choice is not commutative, unlike unordered choice as in context-free grammars. Ordered choice is analogous to soft cut operators available in some logic programming languages.
The consequence is that if a CFG is transliterated directly to a PEG, any ambiguity in the former is resolved by deterministically picking one parse tree from the possible parses. By carefully choosing the order in which the grammar alternatives are specified, a programmer has a great deal of control over which parse tree is selected.
Parsing expression grammars also add the and- and not- syntactic predicates. Because they can use an arbitrarily complex sub-expression to "look ahead" into the input string without actually consuming it, they provide a powerful syntactic lookahead and disambiguation facility, in particular when reordering the alternatives cannot specify the exact parse tree desired.
Each nonterminal in a parsing expression grammar essentially represents a parsing function in a recursive descent parser, and the corresponding parsing expression represents the "code" comprising the function. Each parsing function conceptually takes an input string as its argument, and yields one of the following results:
An atomic parsing expression consisting of a single terminal (i.e. literal) succeeds if the first character of the input string matches that terminal, and in that case consumes the input character; otherwise the expression yields a failure result. An atomic parsing expression consisting of the empty string always trivially succeeds without consuming any input.
An atomic parsing expression consisting of a nonterminal A represents a recursive call to the nonterminal-function A. A nonterminal may succeed without actually consuming any input, and this is considered an outcome distinct from failure.
The sequence operator e1 e2 first invokes e1, and if e1 succeeds, subsequently invokes e2 on the remainder of the input string left unconsumed by e1, and returns the result. If either e1 or e2 fails, then the sequence expression e1 e2 fails (consuming no input).
The choice operator e1 / e2 first invokes e1, and if e1 succeeds, returns its result immediately. Otherwise, if e1 fails, then the choice operator backtracks to the original input position at which it invoked e1, but then calls e2 instead, returning e2's result.
The zero-or-more, one-or-more, and optional operators consume zero or more, one or more, or zero or one consecutive repetitions of their sub-expression e, respectively. Unlike in context-free grammars and regular expressions, however, these operators always behave greedily, consuming as much input as possible and never backtracking. (Regular expression matchers may start by matching greedily, but will then backtrack and try shorter matches if they fail to match.) For example, the expression a* will always consume as many a's as are consecutively available in the input string, and the expression (a* a) will always fail because the first part (a*) will never leave any a's for the second part to match.
The and-predicate expression &e invokes the sub-expression e, and then succeeds if e succeeds and fails if e fails, but in either case never consumes any input.
The not-predicate expression !e succeeds if e fails and fails if e succeeds, again consuming no input in either case.
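The operational semantics above can be made concrete in a few dozen lines. The following Python sketch (the names and tuple encoding are ours, not a standard API) represents parsing expressions as nested tuples; evaluation returns the new input position on success, or None on failure:

def eval_pe(expr, rules, text, pos):
    op = expr[0]
    if op == "lit":                    # terminal: match a literal string
        return pos + len(expr[1]) if text.startswith(expr[1], pos) else None
    if op == "nt":                     # nonterminal: recursive call
        return eval_pe(rules[expr[1]], rules, text, pos)
    if op == "seq":                    # sequence e1 e2 ...: fail if any part fails
        for e in expr[1:]:
            pos = eval_pe(e, rules, text, pos)
            if pos is None:
                return None
        return pos
    if op == "choice":                 # ordered choice e1 / e2 / ...
        for e in expr[1:]:
            r = eval_pe(e, rules, text, pos)
            if r is not None:
                return r               # first success wins; the rest are ignored
        return None
    if op == "star":                   # zero-or-more, always greedy
        while True:
            r = eval_pe(expr[1], rules, text, pos)
            if r is None:
                return pos
            pos = r
    if op == "and":                    # &e: test without consuming input
        return pos if eval_pe(expr[1], rules, text, pos) is not None else None
    if op == "not":                    # !e: succeed only if e fails, consume nothing
        return pos if eval_pe(expr[1], rules, text, pos) is None else None
    raise ValueError(op)

# S <- 'a' S? 'b', with S? encoded as the ordered choice (S / empty sequence):
rules = {"S": ("seq", ("lit", "a"),
               ("choice", ("nt", "S"), ("seq",)),
               ("lit", "b"))}
for s in ["ab", "aabb", "aab"]:
    print(s, eval_pe(("nt", "S"), rules, s, 0) == len(s))  # True, True, False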
The following recursive rule matches standard C-style if/then/else statements in such a way that the optional "else" clause always binds to the innermost "if", because of the implicit prioritization of the '/' operator. (In a context-free grammar, this construct yields the classic dangling else ambiguity.)
The following recursive rule matches Pascal-style nested comment syntax, (* which can (* nest *) like this *). Recall that . matches any single character.
The parsing expression foo &(bar) matches and consumes the text "foo" but only if it is followed by the text "bar". The parsing expression foo !(bar) matches the text "foo" but only if it is not followed by the text "bar". The expression !(a+ b) a matches a single "a" but only if it is not part of an arbitrarily long sequence of a's followed by a b.
The parsing expression ('a'/'b')* matches and consumes an arbitrary-length sequence of a's and b's. The production rule S ← 'a' S? 'b' describes the simple context-free "matching language" {\displaystyle \{a^{n}b^{n}:n\geq 1\}}.
The following parsing expression grammar describes the classic non-context-free language{anbncn:n≥1}{\displaystyle \{a^{n}b^{n}c^{n}:n\geq 1\}}:
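One formulation, using a lookahead predicate to coordinate the two halves, is:

S ← &(A 'c') 'a'+ B !.
A ← 'a' A? 'b'
B ← 'b' B? 'c'

The and-predicate &(A 'c') checks, without consuming anything, that the a's are followed by a matching number of b's and then a c; 'a'+ then consumes the a's, and B checks that the b's are followed by a matching number of c's.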
Any parsing expression grammar can be converted directly into a recursive descent parser.[5] Due to the unlimited lookahead capability that the grammar formalism provides, however, the resulting parser could exhibit exponential time performance in the worst case.
It is possible to obtain better performance for any parsing expression grammar by converting its recursive descent parser into a packrat parser, which always runs in linear time, at the cost of substantially greater storage space requirements. A packrat parser[5] is a form of parser similar to a recursive descent parser in construction, except that during the parsing process it memoizes the intermediate results of all invocations of the mutually recursive parsing functions, ensuring that each parsing function is only invoked at most once at a given input position. Because of this memoization, a packrat parser has the ability to parse many context-free grammars and any parsing expression grammar (including some that do not represent context-free languages) in linear time. Examples of memoized recursive descent parsers are known from at least as early as 1993.[6] This analysis of the performance of a packrat parser assumes that enough memory is available to hold all of the memoized results; in practice, if there is not enough memory, some parsing functions might have to be invoked more than once at the same input position, and consequently the parser could take more than linear time.
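A sketch of the memoization at the heart of a packrat parser, hard-coding the rule S ← 'a' S? 'b' for brevity (the function names are ours): results are cached per input position, so no position is ever re-parsed by the same rule.

from functools import lru_cache

def packrat_anbn(text: str) -> bool:
    @lru_cache(maxsize=None)      # the packrat memo table, keyed by position
    def S(pos):
        if pos >= len(text) or text[pos] != "a":   # 'a'
            return None
        pos += 1
        r = S(pos)                                 # S? (greedy, commits on success)
        if r is not None:
            pos = r
        if pos >= len(text) or text[pos] != "b":   # 'b'
            return None
        return pos + 1

    return S(0) == len(text)

print([packrat_anbn(s) for s in ["ab", "aaabbb", "aab"]])  # [True, True, False]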
It is also possible to build LL parsers and LR parsers from parsing expression grammars,[citation needed] with better worst-case performance than a recursive descent parser without memoization, but the unlimited lookahead capability of the grammar formalism is then lost. Therefore, not all languages that can be expressed using parsing expression grammars can be parsed by LL or LR parsers.
A pika parser[7] uses dynamic programming to apply PEG rules bottom-up and right to left, which is the inverse of the normal recursive descent order of top-down, left to right. Parsing in reverse order solves the left recursion problem, allowing left-recursive rules to be used directly in the grammar without being rewritten into non-left-recursive form, and also confers optimal error recovery capabilities upon the parser, which historically proved difficult to achieve for recursive descent parsers.
Many parsing algorithms require a preprocessing step where the grammar is first compiled into an opaque executable form, often some sort of automaton. Parsing expressions can be executed directly (even if it is typically still advisable to transform the human-readable PEGs shown in this article into a more native format, such as S-expressions, before evaluating them).
Compared to pure regular expressions (i.e., describing a language recognisable using a finite automaton), PEGs are vastly more powerful. In particular they can handle unbounded recursion, and so match parentheses down to an arbitrary nesting depth; regular expressions can at best keep track of nesting down to some fixed depth, because a finite automaton (having a finite set of internal states) can only distinguish finitely many different nesting depths. In more theoretical terms, {\displaystyle \{a^{n}b^{n}\}_{n\geqslant 0}} (the language of all strings of zero or more a's, followed by an equal number of b's) is not a regular language, but it is easily seen to be a parsing expression language, matched by the grammar
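AB ← 'a' AB 'b' / ''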
Here AB !. is the starting expression. The !. part enforces that the input ends after the AB, by saying "there is no next character"; unlike regular expressions, which have magic constraints $ or \Z for this, parsing expressions can express the end of input using only the basic primitives.
The *, +, and ? of parsing expressions are similar to those in regular expressions, but a difference is that these operate strictly in a greedy mode. This is ultimately due to / being an ordered choice. A consequence is that something can match as a regular expression which does not match as a parsing expression:

[ab]?[bc][cd]

is both a valid regular expression and a valid parsing expression. As a regular expression, it matches bc, but as a parsing expression it does not match, because the [ab]? will match the b, then [bc] will match the c, leaving nothing for the [cd], so at that point matching the sequence fails. "Trying again" with having [ab]? match the empty string is explicitly against the semantics of parsing expressions; this is not an edge case of a particular matching algorithm, instead it is the sought behaviour.
Even regular expressions that depend on nondeterminism can be compiled into a parsing expression grammar, by having a separate nonterminal for every state of the corresponding DFA and encoding its transition function into the definitions of these nonterminals; for example, a rule of the form

A ← 'x' B / 'y' C
is effectively saying "from state A transition to state B if the next character is x, but to state C if the next character is y" — but this works because nondeterminism can be eliminated within the realm of regular languages. It would not make use of the parsing expression variants of the repetition operations.
PEGs can comfortably be given in terms of characters, whereas context-free grammars (CFGs) are usually given in terms of tokens, thus requiring an extra step of tokenisation in front of parsing proper.[8] An advantage of not having a separate tokeniser is that different parts of the language (for example embedded mini-languages) can easily have different tokenisation rules.
In the strict formal sense, PEGs are likely incomparable to CFGs, but practically there are many things that PEGs can do which pure CFGs cannot, whereas it is difficult to come up with examples of the contrary. In particular PEGs can be crafted to natively resolve ambiguities, such as the "dangling else" problem in C, C++, and Java, whereas CFG-based parsing often needs a rule outside of the grammar to resolve them. Moreover any PEG can be parsed in linear time by using a packrat parser, as described above, whereas parsing according to a general CFG is asymptotically equivalent[9] to boolean matrix multiplication (thus likely between quadratic and cubic time).
One classical example of a formal language which is provably not context-free is the language {\displaystyle \{a^{n}b^{n}c^{n}\}_{n\geqslant 0}}: an arbitrary number of a's are followed by an equal number of b's, which in turn are followed by an equal number of c's. This, too, is a parsing expression language, matched by the grammar
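S  ← &(AB !'b') 'a'* BC !.
AB ← 'a' AB 'b' / ''
BC ← 'b' BC 'c' / ''

(One way to write such a grammar; the !'b' guard in the lookahead forces AB to consume the entire stretch of a's and b's rather than an empty prefix.)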
For AB to match, the first stretch of a's must be followed by an equal number of b's, and in addition BC has to match where the a's switch to b's, which means those b's are followed by an equal number of c's.
PEG parsing is typically carried out via packrat parsing, which uses memoization[10][11] to eliminate redundant parsing steps. Packrat parsing requires internal storage proportional to the total input size, rather than to the depth of the parse tree as with LR parsers. Whether this is a significant difference depends on circumstances; if parsing is a service provided as a function then the parser will have stored the full parse tree up until returning it, and already that parse tree will typically be of size proportional to the total input size. If parsing is instead provided as a generator then one might get away with only keeping parts of the parse tree in memory, but the feasibility of this depends on the grammar. A parsing expression grammar can be designed so that only after consuming the full input will the parser discover that it needs to backtrack to the beginning,[12] which again could require storage proportional to total input size.
For recursive grammars and some inputs, the depth of the parse tree can be proportional to the input size,[13]so both an LR parser and a packrat parser will appear to have the same worst-case asymptotic performance. However in many domains, for example hand-written source code, the expression nesting depth has an effectively constant bound quite independent of the length of the program, because expressions nested beyond a certain depth tend to getrefactored. When it is not necessary to keep the full parse tree, a more accurate analysis would take the depth of the parse tree into account separately from the input size.[14]
In order to attain linear overall complexity, the storage used for memoization must furthermore provideamortized constant timeaccess to individual data items memoized. In practice that is no problem — for example a dynamically sizedhash tableattains this – but that makes use ofpointerarithmetic, so it presumes having arandom-access machine. Theoretical discussions of data structures and algorithms have an unspoken tendency to presume a more restricted model (possibly that oflambda calculus, possibly that ofScheme), where a sparse table rather has to be built using trees, and data item access is not constant time. Traditional parsing algorithms such as theLL parserare not affected by this, but it becomes a penalty for the reputation of packrat parsers: they rely on operations of seemingly ill repute.
Viewed the other way around, this says that packrat parsers tap into computational power readily available in real-life systems, which older parsing algorithms fail to exploit.
A PEG is called well-formed[1] if it contains no left-recursive rules, i.e., rules that allow a nonterminal to expand to an expression in which the same nonterminal occurs as the leftmost symbol. For a left-to-right top-down parser, such rules cause infinite regress: parsing will continually expand the same nonterminal without moving forward in the string. Therefore, to allow packrat parsing, left recursion must be eliminated.
Direct recursion, whether left or right, is important in context-free grammars, because there recursion is the only way to describe repetition:
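    Sum  → Sum '+' Value | Value
    Args → Args ',' Arg  | Arg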
People trained in using context-free grammars often come to PEGs expecting to use the same idioms, but parsing expressions can do repetition without recursion, as in the following counterparts:
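    Sum  ← Value ('+' Value)*
    Args ← Arg (',' Arg)*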
A difference lies in the abstract syntax trees generated: with recursion each Sum or Args can have at most two children, but with repetition there can be arbitrarily many. If later stages of processing require that such lists of children are recast as trees with bounded degree, for example because microprocessor instructions for addition typically only allow two operands, then properties such as left-associativity would be imposed after the PEG-directed parsing stage.
Therefore, left-recursion is practically less likely to trouble a PEG packrat parser than, say, an LL(k) context-free parser, unless one insists on using context-free idioms. However, not all cases of recursion are about repetition.
For example, in the arithmetic grammar above, it could seem tempting to express operator precedence as a matter of ordered choice: Sum / Product / Value would mean first try viewing as Sum (since we parse top-down), second try viewing as Product, and only third try viewing as Value, rather than via nesting of definitions. This (non-well-formed) grammar seeks to keep the precedence order in one line:
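    Expr    ← Sum / Product / Value
    Sum     ← Expr ('+' / '-') Expr
    Product ← Expr ('*' / '/') Expr
    Value   ← [0-9]+ / '(' Expr ')'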
Unfortunately, matching an Expr requires testing if a Sum matches, while matching a Sum requires testing if an Expr matches. Because the term appears in the leftmost position, these rules make up a circular definition that cannot be resolved. (Circular definitions that can be resolved exist, such as in the original formulation from the first example, but such definitions are required not to exhibit pathological recursion.) However, left-recursive rules can always be rewritten to eliminate left-recursion.[2][15] For example, the following left-recursive CFG rule:
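    string-of-a ← string-of-a 'a' | 'a'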
can be rewritten in a PEG using the plus operator:
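    string-of-a ← 'a'+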
The process of rewriting indirectly left-recursive rules is complex in some packrat parsers, especially when semantic actions are involved.
With some modification, traditional packrat parsing can support direct left recursion,[5][16][17] but doing so results in a loss of the linear-time parsing property,[16] which is generally the justification for using PEGs and packrat parsing in the first place. Only the OMeta parsing algorithm[16] supports full direct and indirect left recursion without additional attendant complexity (but again, at a loss of the linear time complexity), whereas all GLR parsers support left recursion.
A common first impression of PEGs is that they look like CFGs with certain convenience features: repetition operators * + ? as in regular expressions, lookahead predicates & and !, plus ordered choice for disambiguation. This understanding can be sufficient when one's goal is to create a parser for a language, but it is not sufficient for more theoretical discussions of the computational power of parsing expressions. In particular, the nondeterminism inherent in the unordered choice | of context-free grammars makes it very different from the deterministic ordered choice /.
PEG packrat parsers cannot recognize some unambiguous nondeterministic CFG rules, such as the following:[2]
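    S → 'x' S 'x' | 'x'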
Neither LL(k) nor LR(k) parsing algorithms are capable of recognizing this example. However, this grammar can be used by a general CFG parser like the CYK algorithm. The language in question can nevertheless be recognised by all these types of parser, since it is in fact a regular language (that of strings of an odd number of x's).
It is instructive to work out exactly what a PEG parser does when attempting to match
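    S ← 'x' S 'x' / 'x'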
against the string xxxxxq. As expected, it recursively tries to match the nonterminal S at increasing positions in this string, until failing the match against the q, and after that begins to backtrack. This goes as follows (positions numbered from 0):
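    S(0) starts x S(1) x; S(1) starts x S(2) x; likewise S(2), S(3) and S(4) each consume an x and recurse.
    S(5) fails both alternatives on the q, so S(4) falls back to its second alternative and matches the single x at position 4.
    The first alternative of S(3) now needs an x at position 5 but finds the q, so S(3) also falls back to matching a single x.
    The first alternative of S(2) then succeeds, matching x S(3) x, i.e. the three x's at positions 2 to 4.
    The first alternative of S(1) needs an x at position 5 and fails, so S(1) matches a single x.
    Finally, the first alternative of S(0) succeeds, matching x S(1) x: the three x's at positions 0 to 2.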
Matching against a parsing expression is greedy, in the sense that the first success encountered is the only one considered. Even if locally the choices are ordered longest first, there is no guarantee that this greedy matching will find the globally longest match: in the trace above, S matches only three of the five x's.
LL(k) and LR(k) parser generators will fail to complete when the input grammar is ambiguous. This is a feature in the common case where the grammar is intended to be unambiguous, so that the failure flags a defect. A PEG parser generator will instead resolve unintended ambiguities earliest-match-first, which may be arbitrary and lead to surprising parses.
The ordering of productions in a PEG grammar affects not only the resolution of ambiguity, but also the language matched. For example, consider the first PEG example in Ford's paper[1] (rewritten in pegjs.org/online notation, and labelled G1 and G2):
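    G1: A = "a" "b" / "a"
    G2: A = "a" / "a" "b"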
Ford notes that "The second alternative in the latter PEG rule will never succeed because the first choice is always taken if the input string ... begins with 'a'".[1] Specifically, L(G1) (i.e., the language matched by G1) includes the input "ab", but L(G2) does not.
Thus, adding a new option to a PEG grammar can remove strings from the language matched; e.g., G2 is the addition of a rule to the single-production grammar A = "a" "b", whose language contains a string not matched by G2.
Furthermore, constructing a grammar to match L(G1) ∪ L(G2) from PEG grammars G1 and G2 is not always a trivial task.
This is in stark contrast to CFGs, in which the addition of a new production cannot remove strings (though it can introduce problems in the form of ambiguity), and a (potentially ambiguous) grammar for L(G1) ∪ L(G2) can always be constructed, for instance by adding a fresh start symbol with productions to the start symbols of G1 and G2.
It is an open problem to give a concrete example of a context-free language which cannot be recognized by a parsing expression grammar.[1] In particular, it is open whether a parsing expression grammar can recognize the language of palindromes.[18]
The class of parsing expression languages is closed under set intersection and complement, thus also under set union.[1]: Sec.3.4
In stark contrast to the case for context-free grammars, it is not possible to generate elements of a parsing expression language from its grammar. Indeed, it is algorithmically undecidable whether the language recognised by a parsing expression grammar is empty! One reason for this is that any instance of the Post correspondence problem reduces to an instance of the problem of deciding whether a parsing expression language is empty.
Recall that an instance of the Post correspondence problem consists of a list (α1, β1), (α2, β2), …, (αn, βn) of pairs of strings (of terminal symbols). The problem is to determine whether there exists a sequence of indices k1, …, km in the range {1, …, n} such that α_{k1} α_{k2} ⋯ α_{km} = β_{k1} β_{k2} ⋯ β_{km}. To reduce this to a parsing expression grammar, let γ0, γ1, …, γn be arbitrary pairwise distinct, equally long strings of terminal symbols (already with 2 distinct symbols in the terminal symbol alphabet, length ⌈log2(n+1)⌉ suffices) and consider the parsing expression grammar

    S ← &(A !.) &(B !.) (γ1 / ⋯ / γn)+ γ0
    A ← γ0 / γ1 A α1 / ⋯ / γn A αn
    B ← γ0 / γ1 B β1 / ⋯ / γn B βn

Any string matched by the nonterminal A has the form γ_{km} ⋯ γ_{k2} γ_{k1} γ0 α_{k1} α_{k2} ⋯ α_{km} for some indices k1, k2, …, km. Likewise, any string matched by the nonterminal B has the form γ_{km} ⋯ γ_{k2} γ_{k1} γ0 β_{k1} β_{k2} ⋯ β_{km}. Thus any string matched by S will have the form γ_{km} ⋯ γ_{k2} γ_{k1} γ0 ρ where ρ = α_{k1} α_{k2} ⋯ α_{km} = β_{k1} β_{k2} ⋯ β_{km}.
|
https://en.wikipedia.org/wiki/Parsing_expression_grammar
|
In theoretical linguistics and computational linguistics, probabilistic context-free grammars (PCFGs) extend context-free grammars, similar to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset.
PCFGs originated from grammar theory, and have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects results accuracy. Grammar parsing algorithms have various time and memory requirements.
Derivation: The process of recursive generation of strings from a grammar.
Parsing: Finding a valid derivation using an automaton.
Parse tree: The alignment of the grammar to a sequence.
An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction, variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata.[1] Another example of a PCFG parser is the Stanford Statistical Parser, which has been trained using Treebank.[2]
Similar to a CFG, a probabilistic context-free grammar G can be defined by a quintuple:
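    G = (M, T, R, S, P)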
where
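M is the set of non-terminal symbols,
T is the set of terminal symbols,
R is the set of production rules,
S is the start symbol, and
P is the set of probabilities on production rules.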
PCFG models extend context-free grammars the same way as hidden Markov models extend regular grammars.
The Inside-Outside algorithm is an analogue of the Forward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in model parametrization to estimate prior frequencies observed from training sequences in the case of RNAs.
Dynamic programming variants of the CYK algorithm find the Viterbi parse of an RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG.
Context-free grammars are represented as a set of rules inspired from attempts to model natural languages.[3][4][5] The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal symbols {a, b} and a non-terminal symbol S, and a blank ε may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal, whereas the right side can be any string of terminals or nonterminals. In PCFG nulls are excluded.[1] An example of a grammar:
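    S → aS
    S → bS
    S → ε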
This grammar can be shortened using the '|' ('or') character into:
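    S → aS | bS | ε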
Terminals in a grammar are words, and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminal S the emission can generate either a or b or ε".
For instance, the string ab has the derivation:
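    S ⇒ aS ⇒ abS ⇒ ab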
An ambiguous grammar may result in ambiguous parsing if applied on homographs, since the same word sequence can have more than one interpretation. Pun sentences such as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses.
One strategy of dealing with ambiguous parses (originating with grammarians as early as Pāṇini) is to add yet more rules, or to prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated.
Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered in diachronic shifts, these probabilistic rules can be re-learned, thus updating the grammar.
Assigning probability to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. In contrast with PCFGs, plain CFGs are not applicable to RNA structure prediction: while they can incorporate the sequence-structure relationship, they lack the scoring metrics that reveal a sequence's structural potential.[6]
A weighted context-free grammar (WCFG) is a more general category of context-free grammar, where each production has a numeric weight associated with it. The weight of a specific parse tree in a WCFG is the product[7] (or sum[8]) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithms of[9][10]) probabilities.
An extended version of the CYK algorithm can be used to find the "lightest" (least-weight) derivation of a string given some WCFG.
When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set of probability distributions.[7]
Since the 1990s, PCFGs have been applied to model RNA structures.[11][12][13][14][15]
Energy minimization[16][17] and PCFGs provide ways of predicting RNA secondary structure with comparable performance.[11][12][1] However, structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures,[6] rather than by experimental determination as is the case with energy minimization methods.[18][19]
The types of structure that can be modeled by a PCFG include long-range interactions, pairwise structure and other nested structures; pseudoknots, however, cannot be modeled.[11][12][1] PCFGs extend CFGs by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about a structure's plausibility based on such probabilities. Also, search results for structural homologs using PCFG rules are scored according to the probabilities of their PCFG derivations. Therefore, building a grammar to model the behavior of base-pairs and single-stranded regions starts with exploring features of a structural multiple sequence alignment of related RNAs.[1]
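For example, the following simple grammar emits base pairs from the outside in (a and b here stand for concrete bases):

    S → aSa | bSb | aa | bb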
The above grammar generates a string in an outside-in fashion, that is, the basepair on the furthest extremes of the terminal is derived first. So a string such as aabaabaa is derived by first generating the distal a's on both sides before moving inwards:
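    S ⇒ aSa ⇒ aaSaa ⇒ aabSbaa ⇒ aabaabaa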
A PCFG model's extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA. Such an expectation may reflect, for example, the propensity for assuming a certain structure by an RNA.[6] However, incorporation of too much information may increase PCFG space and memory complexity, and it is desirable that a PCFG-based model be as simple as possible.[6][20]
Every possible string x a grammar generates is assigned a probability weight P(x|θ) given the PCFG model θ. It follows that the sum of the probabilities over all possible grammar productions is Σ_x P(x|θ) = 1. The scores for each paired and unpaired residue explain the likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking, hence it is possible to explore the range of all possible generations including suboptimal structures from the grammar, and to accept or reject structures based on score thresholds.[1][6]
RNA secondary structure implementations based on PCFG approaches can be utilized in:
Different implementations of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences,[20] covariance models are used in searching databases for homologous sequences and in RNA annotation and classification,[11][24] and RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.[25][26][27]
PCFG design impacts the secondary structure prediction accuracy. Any useful probabilistic model for structure prediction based on a PCFG has to maintain simplicity without much compromise to prediction accuracy. Too complex a model, however excellent its performance on a single sequence, may not scale.[1] A grammar-based model should be able to:
The production of multiple parse trees per grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However, an optimal structure is one where there is one and only one correspondence between the parse tree and the secondary structure.
Two types of ambiguity can be distinguished: parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches, as the optimal structure selection is always on the basis of the lowest free energy scores.[6] Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees and then finding the optimal one.[28][29][30] In the case of structural ambiguity, multiple parse trees describe the same secondary structure. This obscures the CYK algorithm's decision on finding an optimal structure, as the correspondence between the parse tree and the structure is not unique.[31] Grammar ambiguity can be checked for by the conditional-inside algorithm.[1][6]
A probabilistic context-free grammar consists of terminal and nonterminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left.
A starting non-terminal S produces loops. The rest of the grammar proceeds with a parameter L that decides whether a loop is the start of a stem or a single-stranded region, and a parameter F that produces paired bases.
The formalism of this simple PCFG looks like:
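One common rendition (that of Knudsen and Hein), writing s for an unpaired base and d⋯d for a base pair, is:

    S → LS | L
    L → s | dFd
    F → dFd | LS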
The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search for homologous sequences in databases. In an evolutionary history context, inclusion of prior distributions of RNA structures of a structural alignment in the production rules of the PCFG facilitates good prediction accuracy.[21]
A summary of general steps for utilizing PCFGs in various scenarios:
Several algorithms dealing with aspects of PCFG-based probabilistic models in RNA structure prediction exist, for instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can follow expectation-maximization paradigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequence probabilities given a PCFG. The outside part scores the probability of the complete parse tree for a full sequence.[32][33] CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' here describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actual CYK algorithm used in non-probabilistic CFGs.[1]
The inside algorithm calculates the probabilities α(i,j,v), for all i, j, v, of a parse subtree rooted at W_v for the subsequence x_i, ..., x_j. The outside algorithm calculates the probabilities β(i,j,v) of a complete parse tree for sequence x from the root, excluding the calculation of x_i, ..., x_j. The variables α and β refine the estimation of the probability parameters of a PCFG. It is possible to re-estimate the PCFG algorithm by finding the expected number of times a state is used in a derivation through summing all the products of α and β divided by the probability for a sequence x given the model, P(x|θ). It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values of α and β.[32][33] The CYK algorithm calculates γ(i,j,v) to find the most probable parse tree π̂ and yields log P(x, π̂|θ).[1]
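A minimal sketch of the inside computation for a PCFG in Chomsky normal form (the grammar encoding and names here are invented for illustration):

    # Inside algorithm for a PCFG in Chomsky normal form.
    # Binary rules are encoded as {(A, B, C): prob} meaning A -> B C,
    # lexical rules as {(A, w): prob} meaning A -> w.
    from collections import defaultdict

    def inside(words, binary, lexical, start="S"):
        n = len(words)
        # alpha[(i, j)][A] = total probability that A derives words[i..j]
        alpha = defaultdict(lambda: defaultdict(float))
        for i, w in enumerate(words):
            for (A, word), p in lexical.items():
                if word == w:
                    alpha[(i, i)][A] += p
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span - 1
                for k in range(i, j):                  # split point
                    for (A, B, C), p in binary.items():
                        left = alpha[(i, k)].get(B, 0.0)
                        right = alpha[(k + 1, j)].get(C, 0.0)
                        if left and right:
                            alpha[(i, j)][A] += p * left * right
        return alpha[(0, n - 1)].get(start, 0.0)

    # Toy grammar: S -> A B (1.0), A -> "a" (1.0), B -> "b" (1.0)
    print(inside(["a", "b"],
                 {("S", "A", "B"): 1.0},
                 {("A", "a"): 1.0, ("B", "b"): 1.0}))  # 1.0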
Memory and time complexity for general PCFG algorithms in RNA structure predictions are O(L²M) and O(L³M³) respectively. Restricting a PCFG may alter this requirement, as is the case with database search methods.
Covariance models (CMs) are a special type of PCFGs with applications in database searches for homologs, annotation and RNA classification. Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure.[11][12]The RNA analysis package Infernal uses such profiles in inference of RNA alignments.[34]The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.[24]
CMs are designed from a consensus RNA structure. A CM allows indels of unlimited length in the alignment. Terminals constitute states in the CM, and the transition probabilities between the states are 1 if no indels are considered.[1] Grammars in a CM are as follows:
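In one common formulation (with W standing for any nonterminal state and a, b for emitted bases):

    P → aWb   (pair emitting)
    L → aW    (left emitting)
    R → Wa    (right emitting)
    B → SS    (bifurcation)
    S → W     (start)
    E → ε     (end)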
The model has 6 possible states, and each state grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally, current node states connect to all insert states, and subsequent node states connect to non-insert states. In order to allow insertion of more than one base, insert states connect to themselves.[1]
In order to score a CM model, the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree, log ê, are calculated out of the emitting states P, L, R. Since these scores are a function of sequence length, a more discriminative measure to recover an optimum parse tree probability score, log P(x, π̂|θ), is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null model. The computation time of this step is linear in the database size, and the algorithm has a memory complexity of O(M_a D + M_b D²).[1]
The KH-99 algorithm by Knudsen and Hein lays the basis of the Pfold approach to predicting RNA secondary structure.[20]In this approach the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset.
In a structural alignment, the probabilities of the unpaired-base columns and the paired-base columns are independent of other columns. By counting bases in single base positions and paired positions, one obtains the frequencies of bases in loops and stems.
For a basepair of X and Y, an occurrence of XY is also counted as an occurrence of YX. Identical basepairs such as XX are counted twice.
By pairing sequences in all possible ways, overall mutation rates are estimated. In order to recover plausible mutations, a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses an 85% identity threshold between pairing sequences.
First, single base position differences (except for gapped columns) between sequence pairs are counted, such that if the same position in two sequences has different bases X and Y, the count of the difference is incremented for each sequence.
For unpaired bases, a 4 × 4 mutation rate matrix is used that satisfies the requirement that the mutation flow from X to Y is reversible:[35]
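    P(X) R(X→Y) = P(Y) R(Y→X)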
For basepairs, a 16 × 16 rate distribution matrix is similarly generated.[36][37] The PCFG is used to predict the prior probability distribution of the structure, whereas posterior probabilities are estimated by the inside-outside algorithm, and the most likely structure is found by the CYK algorithm.[20]
After calculating the column prior probabilities, the alignment probability is estimated by summing over all possible secondary structures. Any column C in a secondary structure σ for a sequence D of length l, such that D = (C1, C2, ..., Cl), can be scored with respect to the alignment tree T and the mutational model M. The prior distribution given by the PCFG is P(σ|M). The phylogenetic tree T can be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases and the summation can be done through dynamic programming.[38]
Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to the predictions' accuracy.[21][32][33] The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism, and each rule will have a total of 100%.[20] For instance:
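With illustrative (not empirically derived) weights, the first rule of the grammar above might read:

    S → LS (80%) | L (20%)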
Given the prior alignment frequencies of the data, the most likely structure from the ensemble predicted by the grammar can then be computed by maximizing P(σ|D,T,M) through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.[20]
PCFG-based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to be kept as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.[20]
Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of the amino acid alphabet and the variety of interactions seen in proteins make grammar inference much more challenging.[39] As a consequence, most applications of formal language theory to protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.[40][41] Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG.[39] Still, development of PCFGs allows expressing some of those dependencies and provides the ability to model a wider range of protein patterns.
|
https://en.wikipedia.org/wiki/Stochastic_context-free_grammar
|
A straight-line grammar (sometimes abbreviated as SLG) is a formal grammar that generates exactly one string.[1] Consequently, it does not branch (every non-terminal has only one associated production rule) nor loop (if non-terminal A appears in a derivation of B, then B does not appear in a derivation of A).[1]
Straight-line grammars are widely used in the development of algorithms that execute directly on compressed structures (without prior decompression).[2]: 212
SLGs are of interest in fields like Kolmogorov complexity, lossless data compression, structure discovery and compressed data structures.
The problem of finding a context-free grammar (equivalently: an SLG) of minimal size that generates a given string is called the smallest grammar problem.
Straight-line grammars (more precisely: straight-line context-free string grammars) can be generalized to straight-line context-free tree grammars. The latter can be used conveniently to compress trees.[2]: 212
A context-free grammar G is an SLG if:
1. for every non-terminal N, there is at most one production rule that has N as its left-hand side, and
2. the directed graph G = ⟨V, E⟩, defined by V being the set of non-terminals and (A, B) ∈ E whenever B appears at the right-hand side of a production rule for A, is acyclic.
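For example, the grammar with rules S → AB, A → aB, B → bb (names chosen for illustration) satisfies both conditions and generates exactly the string abbbb; the repeated use of B illustrates how an SLG can act as a compressed representation of its single string.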
A mathematical definition of the more general formalism of straight-line context-free tree grammars can be found in Lohrey et al.[2]: 215
An SLG in Chomsky normal form is equivalent to a straight-line program.
|
https://en.wikipedia.org/wiki/Context-free_grammar_generation_algorithms
|
In computer science, in particular in formal language theory, the pumping lemma for context-free languages, also known as the Bar-Hillel lemma,[1] is a lemma that gives a property shared by all context-free languages and generalizes the pumping lemma for regular languages.
The pumping lemma can be used to construct a refutation by contradiction that a specific language is not context-free. Conversely, the pumping lemma does not suffice to guarantee that a language is context-free; there are other necessary conditions, such as Ogden's lemma, or the interchange lemma.
If a language L is context-free, then there exists some integer p ≥ 1 (called a "pumping length")[2] such that every string s in L that has a length of p or more symbols (i.e. with |s| ≥ p) can be written as
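    s = uvwxy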
with substrings u, v, w, x and y, such that
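1. |vwx| ≤ p,
2. |vx| ≥ 1, and
3. uv^n wx^n y ∈ L for all n ≥ 0.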
Below is a formal expression of the Pumping Lemma.
    (∀L ⊆ Σ*)
        (context free(L) ⇒
        ((∃p ≥ 1)((∀s ∈ L)((|s| ≥ p) ⇒
        ((∃u,v,w,x,y ∈ Σ*)(s = uvwxy ∧ |vx| ≥ 1 ∧ |vwx| ≤ p ∧ (∀n ≥ 0)(uv^n wx^n y ∈ L)))))))
The pumping lemma for context-free languages (called just "the pumping lemma" for the rest of this article) describes a property that all context-free languages are guaranteed to have.
The property is a property of all strings in the language that are of length at least p, where p is a constant, called the pumping length, that varies between context-free languages.
Say s is a string of length at least p that is in the language.
The pumping lemma states that s can be split into five substrings, s = uvwxy, where vx is non-empty and the length of vwx is at most p, such that repeating v and x the same number of times (n) in s produces a string that is still in the language. It is often useful to repeat zero times, which removes v and x from the string. This process of "pumping up" s with additional copies of v and x is what gives the pumping lemma its name.
Finite languages (which are regular and hence context-free) obey the pumping lemma trivially by having p equal to the maximum string length in L plus one. As there are no strings of this length, the pumping lemma is not violated.
The pumping lemma is often used to prove that a given language L is non-context-free, by showing that arbitrarily long strings s are in L that cannot be "pumped" without producing strings outside L.
For example, if S ⊂ ℕ is infinite but does not contain an (infinite) arithmetic progression, then L = {a^n : n ∈ S} is not context-free. In particular, neither the prime numbers nor the square numbers are context-free.
For example, the language L = {a^n b^n c^n | n > 0} can be shown to be non-context-free by using the pumping lemma in a proof by contradiction. First, assume that L is context-free. By the pumping lemma, there exists an integer p which is the pumping length of language L. Consider the string s = a^p b^p c^p in L. The pumping lemma tells us that s can be written in the form s = uvwxy, where u, v, w, x, and y are substrings, such that |vx| ≥ 1, |vwx| ≤ p, and uv^i wx^i y ∈ L for every integer i ≥ 0. By the choice of s and the fact that |vwx| ≤ p, it is easily seen that the substring vwx can contain no more than two distinct symbols. That is, we have one of five possibilities for vwx:
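1. vwx = a^j for some j ≤ p,
2. vwx = a^j b^k for some j and k with j + k ≤ p,
3. vwx = b^j for some j ≤ p,
4. vwx = b^j c^k for some j and k with j + k ≤ p,
5. vwx = c^j for some j ≤ p.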
For each case, it is easily verified that uv^i wx^i y does not contain equal numbers of each letter for any i ≠ 1. Thus, uv²wx²y does not have the form a^i b^i c^i. This contradicts the definition of L. Therefore, our initial assumption that L is context-free must be false.
In 1960, Scheinberg proved that L = {a^n b^n a^n | n > 0} is not context-free using a precursor of the pumping lemma.[3]
While the pumping lemma is often a useful tool to prove that a given language is not context-free, it does not give a complete characterization of the context-free languages. If a language does not satisfy the condition given by the pumping lemma, we have established that it is not context-free. On the other hand, there are languages that are not context-free, but still satisfy the condition given by the pumping lemma, for example
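    L = {b^j c^k d^l | j, k, l ≥ 0} ∪ {a^i b^j c^j d^j | i ≥ 1 and j ≥ 0}: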
for s = b^j c^k d^l with e.g. j ≥ 1, choose vwx to consist only of b's; for s = a^i b^j c^j d^j, choose vwx to consist only of a's; in both cases all pumped strings are still in L.[4]
|
https://en.wikipedia.org/wiki/Pumping_lemma_for_context-free_languages
|
In computer science, Backus–Naur form (BNF, pronounced /ˌbækəsˈnaʊər/), also known as Backus normal form, is a notation system for defining the syntax of programming languages and other formal languages, developed by John Backus and Peter Naur. It is a metasyntax for context-free grammars, providing a precise way to outline the rules of a language's structure.
It has been widely used in official specifications, manuals, and textbooks on programming language theory, as well as to describe document formats, instruction sets, and communication protocols. Over time, variations such as extended Backus–Naur form (EBNF) and augmented Backus–Naur form (ABNF) have emerged, building on the original framework with added features.
BNF specifications outline how symbols are combined to form syntactically valid sequences. Each BNF consists of three core components: a set of non-terminal symbols, a set of terminal symbols, and a series of derivation rules.[1] Non-terminal symbols represent categories or variables that can be replaced, while terminal symbols are the fixed, literal elements (such as keywords or punctuation) that appear in the final sequence. Derivation rules provide the instructions for replacing non-terminal symbols with specific combinations of symbols.
A derivation rule is written in the format:
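    <symbol> ::= __expression__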
where:
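<symbol> is a non-terminal variable enclosed in angle brackets,
::= means that the symbol on the left must be replaced with the expression on the right, and
__expression__ consists of one or more sequences of terminal or non-terminal symbols, with alternative sequences separated by the vertical bar "|".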
For example, in the rule <opt-suffix-part> ::= "Sr." | "Jr." | "", the entire line is the derivation rule; "Sr.", "Jr.", and "" (an empty string) are terminal symbols, and <opt-suffix-part> is a non-terminal symbol.
Generating a valid sequence involves starting with a designated start symbol and iteratively applying the derivation rules.[3] This process can extend sequences incrementally. To allow flexibility, some BNF definitions include an optional "delete" symbol (represented as an empty alternative, e.g., <item> ::= <thing> |), enabling the removal of certain elements while maintaining syntactic validity.[3]
A practical illustration of BNF is a specification for a simplified U.S. postal address, along the following lines:
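    <postal-address>  ::= <name-part> <street-address> <zip-part>
    <name-part>       ::= <personal-part> <last-name> <opt-suffix-part> | <personal-part> <name-part>
    <personal-part>   ::= <first-name> | <initial> "."
    <street-address>  ::= <house-num> <street-name> <opt-apt-num> <EOL>
    <zip-part>        ::= <town-name> "," <state-code> <ZIP-code> <EOL>
    <opt-suffix-part> ::= "Sr." | "Jr." | <roman-numeral> | ""
    <opt-apt-num>     ::= "Apt" <apt-num> | ""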
This translates into English as:
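A postal address consists of a name part, followed by a street address part, followed by a zip-code part. A name part consists of either a personal part followed by a last name followed by an optional suffix, or a personal part followed by a name part (this rule illustrates the use of recursion in BNF, covering the case of people who use multiple first and middle names or initials). A personal part consists of either a first name or an initial followed by a dot.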
Note that many things (such as the format of a first-name, apartment number, ZIP-code, and Roman numeral) are left unspecified here. If necessary, they may be described using additional BNF rules.
The concept of using rewriting rules to describe language structure traces back to at least Pāṇini, an ancient Indian Sanskrit grammarian who lived sometime between the 6th and 4th centuries BC.[5] His notation for describing Sanskrit word structure is equivalent in power to that of BNF and exhibits many similar properties.[6]
In Western society, grammar was long regarded as a subject for teaching rather than scientific study; descriptions were informal and targeted at practical usage. This perspective shifted in the first half of the 20th century, when linguists such as Leonard Bloomfield and Zellig Harris began attempts to formalize language description, including phrase structure. Meanwhile, mathematicians explored related ideas through string rewriting rules as formal logical systems, such as Axel Thue in 1914, Emil Post in the 1920s–40s,[7] and Alan Turing in 1936. Noam Chomsky, teaching linguistics to students of information theory at MIT, combined linguistics and mathematics, adapting Thue's formalism to describe natural language syntax. In 1956, he introduced a clear distinction between generative rules (those of context-free grammars) and transformation rules.[8][9]
BNF itself emerged when John Backus, a programming language designer at IBM, proposed a metalanguage of metalinguistic formulas to define the syntax of the new programming language IAL, known today as ALGOL 58, in 1959.[10] This notation was formalized in the ALGOL 60 report, where Peter Naur named it Backus normal form in the committee's 1963 report.[11] Whether Backus was directly influenced by Chomsky's work is uncertain.[12][13]
Donald Knuth argued in 1964 that BNF should be read as Backus–Naur form, as it is "not a normal form in the conventional sense," unlike Chomsky normal form.[14] In 1967, Peter Zilahy Ingerman suggested renaming it Pāṇini Backus form to acknowledge Pāṇini's earlier, independent development of a similar notation.[15]
In the ALGOL 60 report, Naur described BNF as a metalinguistic formula:[16]
Sequences of characters enclosed in the brackets <> represent metalinguistic variables whose values are sequences of symbols. The marks "::=" and "|" (the latter with the meaning of "or") are metalinguistic connectives. Any mark in a formula, which is not a variable or a connective, denotes itself. Juxtaposition of marks or variables in a formula signifies juxtaposition of the sequence denoted.
This is exemplified in the report's section 2.3, where comments are specified:
For the purpose of including text among the symbols of a program the following "comment" conventions hold:
Equivalence here means that any of the three structures shown in the left column may be replaced, in any occurrence outside of strings, by the symbol shown in the same line in the right column without any effect on the action of the program.
Naur altered Backus's original symbols for ALGOL 60, changing ≡ to ::= and the overbarred "or" to |, using commonly available characters.[17]: 14
BNF is very similar to canonical-form Boolean algebra equations (used in logic-circuit design), reflecting Backus's mathematical background as a FORTRAN designer.[18] Studies of Boolean algebra were commonly part of a mathematics curriculum, which may have informed Backus's approach. Neither Backus nor Naur described the names enclosed in < > as non-terminals; Chomsky's terminology was not originally used in describing BNF. Naur later called them "classes" in 1961 course materials.[18] In the ALGOL 60 report, they were "metalinguistic variables," with other symbols defining the target language.
Saul Rosen, involved with the Association for Computing Machinery since 1947, contributed to the transition from IAL to ALGOL and edited Communications of the ACM. He described BNF as a metalanguage for ALGOL in his 1967 book.[19] Early ALGOL manuals from IBM, Honeywell, Burroughs, and Digital Equipment Corporation followed this usage.
BNF significantly influenced programming language development, notably as the basis for early compiler-compiler systems. Examples include Edgar T. Irons' "A Syntax Directed Compiler for ALGOL 60" and Brooker and Morris' "A Compiler Building System," which directly utilized BNF.[20] Others, like Schorre's META II, adapted BNF into a programming language, replacing < > with quoted strings and adding operators such as $ for repetition.
This influenced tools like yacc, a widely used parser generator rooted in BNF principles.[21] BNF remains one of the oldest computer-related notations still referenced today, though its variants often dominate modern applications.
Examples of its use as a metalanguage include defining arithmetic expressions:
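    <expr> ::= <term> | <expr> "+" <term>
    <term> ::= <number> | "(" <expr> ")"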
Here, <expr> can recursively include itself, allowing repeated additions.
BNF today is one of the oldest computer-related languages still in use.
BNF's syntax itself may be represented with a BNF like the following:
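    <syntax>         ::= <rule> | <rule> <syntax>
    <rule>           ::= <opt-whitespace> "<" <rule-name> ">" <opt-whitespace> "::=" <opt-whitespace> <expression> <line-end>
    <opt-whitespace> ::= " " <opt-whitespace> | ""
    <expression>     ::= <list> | <list> <opt-whitespace> "|" <opt-whitespace> <expression>
    <line-end>       ::= <opt-whitespace> <EOL> | <line-end> <line-end>
    <list>           ::= <term> | <term> <opt-whitespace> <list>
    <term>           ::= <literal> | "<" <rule-name> ">"
    <literal>        ::= '"' <text> '"' | "'" <text> "'"

(rules for <rule-name> and <text> themselves are left unspecified here)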
Note that "" is theempty string.
The original BNF did not use quotes as shown in the <literal> rule. This assumes that no whitespace is necessary for proper interpretation of the rule.
<EOL> represents the appropriate line-end specifier (in ASCII, carriage-return, line-feed or both depending on the operating system). <rule-name> and <text> are to be substituted with a declared rule's name/label or literal text, respectively.
In the U.S. postal address example above, the entire block-quote is a <syntax>. Each line or unbroken grouping of lines is a rule; for example, one rule begins with <name-part> ::=. The other part of that rule (aside from a line-end) is an expression, which consists of two lists separated by a vertical bar |. These two lists consist of some terms (three terms and two terms, respectively). Each term in this particular rule is a rule-name.
There are many variants and extensions of BNF, generally either for the sake of simplicity and succinctness, or to adapt it to a specific application. One common feature of many variants is the use of regular expression repetition operators such as * and +. The extended Backus–Naur form (EBNF) is a common one.
Another common extension is the use of square brackets around optional items. Although not present in the original ALGOL 60 report (it was instead introduced a few years later in IBM's PL/I definition), the notation is now universally recognised.
Augmented Backus–Naur form (ABNF) and Routing Backus–Naur form (RBNF)[22] are extensions commonly used to describe Internet Engineering Task Force (IETF) protocols.
Parsing expression grammars build on the BNF and regular expression notations to form an alternative class of formal grammar, which is essentially analytic rather than generative in character.
Many BNF specifications found online today are intended to be human-readable and are non-formal. These often include many common syntax rules and extensions.
|
https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form
|
In computer science, the Cocke–Younger–Kasami algorithm (alternatively called CYK, or CKY) is a parsing algorithm for context-free grammars published by Itiroo Sakai in 1961.[1][2] The algorithm is named after some of its rediscoverers: John Cocke, Daniel Younger, Tadao Kasami, and Jacob T. Schwartz. It employs bottom-up parsing and dynamic programming.
The standard version of CYK operates only on context-free grammars given in Chomsky normal form (CNF). However, any context-free grammar may be algorithmically transformed into a CNF grammar expressing the same language (Sipser 1997).
The importance of the CYK algorithm stems from its high efficiency in certain situations. Using big O notation, the worst case running time of CYK is O(n³·|G|), where n is the length of the parsed string and |G| is the size of the CNF grammar G (Hopcroft & Ullman 1979, p. 140). This makes it one of the most efficient parsing algorithms in terms of worst-case asymptotic complexity, although other algorithms exist with better average running time in many practical scenarios.
The dynamic programming algorithm requires the context-free grammar to be rendered into Chomsky normal form (CNF), because it tests for possibilities to split the current sequence into two smaller sequences. Any context-free grammar that does not generate the empty string can be represented in CNF using only production rules of the forms A → α and A → BC; to allow for the empty string, one can explicitly allow S → ε, where S is the start symbol.[3]
The algorithm inpseudocodeis as follows:
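    let the input be a string I consisting of n characters: a1 ... an.
    let the grammar contain r nonterminal symbols R1 ... Rr, with start symbol R1.
    let P[n,n,r] be an array of booleans. Initialize all elements of P to false.

    for each s = 1 to n
        for each unit production Rv → as
            set P[1,s,v] = true

    for each l = 2 to n                      -- length of span
        for each s = 1 to n-l+1              -- start of span
            for each p = 1 to l-1            -- partition of span
                for each production Ra → Rb Rc
                    if P[p,s,b] and P[l-p,s+p,c] then
                        set P[l,s,a] = true

    if P[n,1,1] is true then
        I is a member of the language
    else
        I is not a member of the language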
A probabilistic variant of the same table-filling scheme stores rule probabilities instead of booleans, and allows recovery of the most probable parse given the probabilities of all productions.
In informal terms, this algorithm considers every possible substring of the input string and sets P[l,s,v] to be true if the substring of length l starting from s can be generated from the nonterminal Rv. Once it has considered substrings of length 1, it goes on to substrings of length 2, and so on. For substrings of length 2 and greater, it considers every possible partition of the substring into two parts, and checks to see if there is some production A → BC such that B matches the first part and C matches the second part. If so, it records A as matching the whole substring. Once this process is completed, the input string is generated by the grammar if the substring containing the entire input string is matched by the start symbol.
This is an example grammar:
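    S   → NP VP
    VP  → VP PP
    VP  → V NP
    VP  → eats
    PP  → P NP
    NP  → Det N
    NP  → she
    V   → eats
    P   → with
    N   → fish
    N   → fork
    Det → a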
Now the sentence she eats a fish with a fork is analyzed using the CYK algorithm. In the following table, in P[i,j,k], i is the number of the row (starting at the bottom at 1), and j is the number of the column (starting at the left at 1).
For readability, the CYK table for P is represented here as a 2-dimensional matrix M containing a set of non-terminal symbols, such that Rk is in M[i,j] if, and only if, P[i,j,k].
In the above example, since a start symbol S is in M[7,1], the sentence can be generated by the grammar.
The above algorithm is a recognizer that will only determine if a sentence is in the language. It is simple to extend it into a parser that also constructs a parse tree, by storing parse tree nodes as elements of the array, instead of the boolean 1. The node is linked to the array elements that were used to produce it, so as to build the tree structure. Only one such node in each array element is needed if only one parse tree is to be produced. However, if all parse trees of an ambiguous sentence are to be kept, it is necessary to store in the array element a list of all the ways the corresponding node can be obtained in the parsing process. This is sometimes done with a second table B[n,n,r] of so-called backpointers.
The end result is then a shared forest of possible parse trees, where common tree parts are factored between the various parses. This shared forest can conveniently be read as an ambiguous grammar generating only the sentence parsed, but with the same ambiguity as the original grammar, and the same parse trees up to a very simple renaming of non-terminals, as shown by Lang (1994).
As pointed out by Lange & Leiß (2009), the drawback of all known transformations into Chomsky normal form is that they can lead to an undesirable bloat in grammar size. The size of a grammar is the sum of the sizes of its production rules, where the size of a rule is one plus the length of its right-hand side. Using g to denote the size of the original grammar, the size blow-up in the worst case may range from g² to 2^(2g), depending on the transformation algorithm used. For the use in teaching, Lange and Leiß propose a slight generalization of the CYK algorithm, "without compromising efficiency of the algorithm, clarity of its presentation, or simplicity of proofs" (Lange & Leiß 2009).
It is also possible to extend the CYK algorithm to parse strings using weighted and stochastic context-free grammars. Weights (probabilities) are then stored in the table P instead of booleans, so P[i,j,A] will contain the minimum weight (maximum probability) that the substring from i to j can be derived from A. Further extensions of the algorithm allow all parses of a string to be enumerated from lowest to highest weight (highest to lowest probability).
When the probabilistic CYK algorithm is applied to a long string, the splitting probability can become very small due to multiplying many probabilities together. This can be dealt with by summing log-probability instead of multiplying probabilities.
The worst case running time of CYK is Θ(n³·|G|), where n is the length of the parsed string and |G| is the size of the CNF grammar G. This makes it one of the most efficient algorithms for recognizing general context-free languages in practice. Valiant (1975) gave an extension of the CYK algorithm. His algorithm computes the same parsing table as the CYK algorithm; yet he showed that algorithms for efficient multiplication of matrices with 0-1 entries can be utilized for performing this computation.
Using the Coppersmith–Winograd algorithm for multiplying these matrices, this gives an asymptotic worst-case running time of O(n^2.38·|G|). However, the constant term hidden by the big O notation is so large that the Coppersmith–Winograd algorithm is only worthwhile for matrices that are too large to handle on present-day computers (Knuth 1997), and this approach requires subtraction and so is only suitable for recognition. The dependence on efficient matrix multiplication cannot be avoided altogether: Lee (2002) has proved that any parser for context-free grammars working in time O(n^(3−ε)·|G|) can be effectively converted into an algorithm computing the product of (n×n)-matrices with 0-1 entries in time O(n^(3−ε/3)), and this was extended by Abboud et al.[4] to apply to a constant-size grammar.
|
https://en.wikipedia.org/wiki/CYK_algorithm
|
In formal language theory, a context-free grammar is in Greibach normal form (GNF) if the right-hand sides of all production rules start with a terminal symbol, optionally followed by some non-terminals. A non-strict form allows one exception to this format restriction for allowing the empty word (epsilon, ε) to be a member of the described language. The normal form was established by Sheila Greibach and it bears her name.
More precisely, a context-free grammar is in Greibach normal form if all production rules are of the form:
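    A → a A1 A2 … An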
where A is a nonterminal symbol, a is a terminal symbol, and A1 A2 … An is a (possibly empty) sequence of nonterminal symbols.
Observe that the grammar does not have left recursions.
Every context-free grammar can be transformed into an equivalent grammar in Greibach normal form.[1] Various constructions exist. Some do not permit the second form of rule and cannot transform context-free grammars that can generate the empty word. For one such construction, the size of the constructed grammar is O(n⁴) in the general case and O(n³) if no derivation of the original grammar consists of a single nonterminal symbol, where n is the size of the original grammar.[2] This conversion can be used to prove that every context-free language can be accepted by a real-time (non-deterministic) pushdown automaton, i.e., the automaton reads a letter from its input every step.
Given a grammar in GNF and a derivable string in the grammar with length n, any top-down parser will halt at depth n.
|
https://en.wikipedia.org/wiki/Greibach_normal_form
|
In formal language theory, a noncontracting grammar is in Kuroda normal form if all production rules are of the form:[1]
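    AB → CD, A → BC, A → B, or A → a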
where A, B, C and D are nonterminal symbols and a is a terminal symbol.[1] Some sources omit the A → B pattern.[2]
It is named after Sige-Yuki Kuroda, who originally called it a linear bounded grammar, a terminology that was also used by a few other authors thereafter.[3]
Every grammar in Kuroda normal form is noncontracting, and therefore, generates a context-sensitive language. Conversely, every noncontracting grammar that does not generate the empty string can be converted to Kuroda normal form.[2]
A straightforward technique attributed to György Révész transforms a grammar in Kuroda normal form to a context-sensitive grammar: AB → CD is replaced by four context-sensitive rules AB → AZ, AZ → WZ, WZ → WD and WD → CD. This proves that every noncontracting grammar generates a context-sensitive language.[1]
There is a similar normal form for unrestricted grammars as well, which at least some authors call "Kuroda normal form" too:[4]
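    AB → CD, A → BC, A → a, or A → ε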
where ε is the empty string. Every unrestricted grammar is weakly equivalent to one using only productions of this form.[2]
If the rule AB → CD is eliminated from the above, one obtains context-free grammars in Chomsky normal form.[5] The Penttonen normal form (for unrestricted grammars) is a special case where the first rule above is AB → AD.[4] Similarly, for context-sensitive grammars, the Penttonen normal form, also called the one-sided normal form (following Penttonen's own terminology), is:[1][2]
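    AB → AD, A → BC, or A → a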
For every context-sensitive grammar, there exists a weakly equivalent one-sided normal form.[2]
|
https://en.wikipedia.org/wiki/Kuroda_normal_form
|
Combinatorics on words is a fairly new field of mathematics, branching from combinatorics, which focuses on the study of words and formal languages. The subject looks at letters or symbols, and the sequences they form. Combinatorics on words affects various areas of mathematical study, including algebra and computer science. There have been a wide range of contributions to the field. Some of the first work was on square-free words by Axel Thue in the early 1900s. He and colleagues observed patterns within words and tried to explain them. As time went on, combinatorics on words became useful in the study of algorithms and coding. It led to developments in abstract algebra and answering open questions.
Combinatorics is an area of discrete mathematics. Discrete mathematics is the study of countable structures. These objects have a definite beginning and end. The study of enumerable objects is the opposite of disciplines such as analysis, where calculus and infinite structures are studied. Combinatorics studies how to count these objects using various representations. Combinatorics on words is a recent development in this field that focuses on the study of words and formal languages. A formal language is any set of symbols and combinations of symbols that people use to communicate information.[1]
Some terminology relevant to the study of words should first be explained. First and foremost, a word is a finite sequence of symbols, or letters, drawn from a finite set.[1] This set is commonly known as the alphabet. For example, the word "encyclopedia" is a sequence of symbols in the English alphabet, a finite set of twenty-six letters. Since a word is a sequence, other basic mathematical descriptions apply; in particular, there exists a unique word of length zero, the empty word. The length of the word w is defined by the number of symbols that make up the sequence, and is denoted by |w|.[1] Again looking at the example "encyclopedia", |w| = 12, since encyclopedia has twelve letters. The idea of factoring of large numbers can be applied to words, where a factor of a word is a block of consecutive symbols.[1] Thus, "cyclop" is a factor of "encyclopedia".
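In a programming language, the length function and the factor relation correspond to the familiar string length and substring test; a minimal illustration:

    w = "encyclopedia"
    print(len(w))           # |w| = 12
    print("cyclop" in w)    # True: "cyclop" occurs as a block of consecutive symbols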
In addition to examining sequences in themselves, another area to consider of combinatorics on words is how they can be represented visually. In mathematics various structures are used to encode data. A common structure used in combinatorics is the tree structure. A tree structure is a graph where the vertices are connected by one line, called a path or edge. Trees may not contain cycles, and may or may not be complete. It is possible to encode a word, since a word is constructed by symbols, and encode the data by using a tree.[1] This gives a visual representation of the object.
The first books on combinatorics on words that summarize the origins of the subject were written by a group of mathematicians that collectively went by the name of M. Lothaire. Their first book was published in 1983, when combinatorics on words became more widespread.[1]
A main contributor to the development of combinatorics on words was Axel Thue (1863–1922); he researched repetition. Thue's main contribution was the proof of the existence of infinite square-free words. Square-free words do not have adjacent repeated factors.[1] To clarify, "dining" is not square-free since "in" is repeated consecutively, while "servers" is square-free, its two "er" factors not being adjacent. Thue proved his conjecture on the existence of infinite square-free words by using substitutions. A substitution is a way to take a symbol and replace it with a word. He used this technique to describe his other contribution, the Thue–Morse sequence, or Thue–Morse word.[1]
Thue wrote two papers on square-free words, the second of which was on the Thue–Morse word. Marston Morse is included in the name because he discovered the same result as Thue did, yet they worked independently. Thue also proved the existence of an overlap-free word: a word is overlap-free if it contains no factor of the form xyxyx for symbols x and y. He continued in his second paper to prove a relationship between infinite overlap-free words and square-free words. He took overlap-free words that are created using two different letters, and demonstrated how they can be transformed into square-free words of three letters using substitution.[1]
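One classical route from the binary Thue–Morse word to a square-free word over three letters is to count the 1s between consecutive 0s. The sketch below generates a finite prefix and checks squares by brute force; it illustrates the construction under that standard assumption and is not Thue's own formulation.

    def thue_morse(steps):
        # iterate the substitution 0 -> 01, 1 -> 10
        w = "0"
        for _ in range(steps):
            w = "".join("01" if c == "0" else "10" for c in w)
        return w

    def has_square(w):
        n = len(w)
        return any(w[i:i + l] == w[i + l:i + 2 * l]
                   for l in range(1, n // 2 + 1) for i in range(n - 2 * l + 1))

    def ternary_from_tm(tm):
        # count the 1s between consecutive 0s of the Thue-Morse word
        runs, count, started = [], 0, False
        for c in tm:
            if c == "0":
                if started:
                    runs.append(str(count))
                count, started = 0, True
            else:
                count += 1
        return "".join(runs)

    tm = thue_morse(8)
    print(has_square(tm))                   # True: binary TM contains squares such as "11"
    print(has_square(ternary_from_tm(tm)))  # False: the derived three-letter word is square-free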
As was previously described, words are studied by examining the sequences made by the symbols. Patterns are found, and they can be described mathematically. Patterns can be either avoidable patterns, or unavoidable. A significant contributor to the work of unavoidable patterns, or regularities, was Frank Ramsey in 1930. His important theorem states that for integers k, m ≥ 2, there exists a least positive integer R(k,m) such that, however the edges of a complete graph on R(k,m) vertices are colored with two colors, there always exists a complete subgraph on k vertices in the first color or on m vertices in the second.[1]
Other contributors to the study of unavoidable patterns include van der Waerden. His theorem states that if the positive integers are partitioned into k classes, then there exists a class c such that c contains arithmetic progressions of arbitrary finite length. An arithmetic progression is a sequence of numbers in which the difference between adjacent numbers remains constant.[1]
When examining unavoidable patterns, sesquipowers are also studied. For some patterns x, y, z, a sesquipower is of the form x, xyx, xyxzxyx, …. Like square-freeness, this is a pattern whose avoidability can be studied. Coudrain and Schützenberger mainly studied these sesquipowers for group theory applications. In addition, Zimin proved that sesquipowers are all unavoidable: whether the entire pattern shows up, or only some piece of the sesquipower shows up repetitively, it is not possible to avoid it.[1]
Necklaces are constructed from words of circular sequences. They are most frequently used in music and astronomy. Flye Sainte-Marie in 1894 proved there are 2^(2^(n−1)−n) binary de Bruijn necklaces of length 2^n. A de Bruijn necklace contains factors made of words of length n over a certain number of letters. The words appear only once in the necklace.[1]
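The count can be confirmed by brute force for a small order. The sketch below, with invented helper names, enumerates all binary words of length 2^n, keeps those in which every n-letter word occurs exactly once cyclically, and identifies rotations:

    from itertools import product

    def is_de_bruijn(w, n):
        # read cyclically: every binary word of length n must occur exactly once
        windows = {(w + w[:n - 1])[i:i + n] for i in range(len(w))}
        return len(w) == 2 ** n and len(windows) == 2 ** n

    n = 3
    necklaces = set()
    for bits in product("01", repeat=2 ** n):
        w = "".join(bits)
        if is_de_bruijn(w, n):
            # identify words that agree up to rotation by a canonical representative
            necklaces.add(min(w[i:] + w[:i] for i in range(len(w))))
    print(len(necklaces))   # 2, matching 2^(2^(n-1) - n) for n = 3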
In 1874, Baudot developed the code that would eventually take the place of Morse code by applying the theory of binary de Bruijn necklaces. The problem continued from Sainte-Marie to Martin in 1934, who began looking at algorithms to make words of the de Bruijn structure. It was then worked on by Klaas Posthumus in 1943.[1]
Possibly the most applied result in combinatorics on words is the Chomsky hierarchy, developed by Noam Chomsky. He studied formal language in the 1950s.[2] His way of looking at language simplified the subject. He disregards the actual meaning of a word, does not consider certain factors such as frequency and context, and applies patterns of short terms to terms of all lengths. The basic idea of Chomsky's work is to divide language into four levels, or the language hierarchy. The four levels are: regular, context-free, context-sensitive, and computably enumerable or unrestricted.[2] Regular is the least complex while computably enumerable is the most complex. While his work grew out of combinatorics on words, it drastically affected other disciplines, especially computer science.[3]
Sturmian words, named after Jacques Charles François Sturm, have roots in combinatorics on words. There exist several equivalent definitions of Sturmian words. For example, an infinite word is Sturmian if and only if it has n + 1 distinct factors of length n, for every non-negative integer n.[1]
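The factor-complexity criterion can be observed on a long prefix of the Fibonacci word, a standard example of a Sturmian word generated by the substitution a → ab, b → a (a sketch; the count is only reliable for lengths well below the prefix length):

    def fibonacci_word(steps):
        # iterate the substitution a -> ab, b -> a
        w = "a"
        for _ in range(steps):
            w = "".join("ab" if c == "a" else "a" for c in w)
        return w

    w = fibonacci_word(12)              # a prefix of length 377
    for n in range(1, 8):
        factors = {w[i:i + n] for i in range(len(w) - n + 1)}
        print(n, len(factors))          # n + 1 distinct factors for each n shown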
A Lyndon word is a word over a given alphabet that is written in its simplest and most ordered form out of its respective conjugacy class. Lyndon words are important because, for any Lyndon word x of length greater than one, there exist Lyndon words y and z, with y < z and x = yz. Further, there exists a theorem by Chen, Fox, and Lyndon, that states any word has a unique factorization into Lyndon words, where the factorization words are non-increasing. Due to this property, Lyndon words are used to study algebra, specifically group theory. They form the basis for the idea of commutators.[1]
Cobham contributed work relating Eugène Prouhet's work with finite automata. A mathematical graph is made of edges and nodes. With finite automata, the edges are labeled with a letter in an alphabet. To use the graph, one starts at a node and travels along the edges to reach a final node. The path taken along the graph forms the word. It is a finite graph because there are a finite number of nodes and edges, and only one path connects two distinct nodes.[1]
Gauss codes, created by Carl Friedrich Gauss in 1838, are developed from graphs. Specifically, a closed curve on a plane is needed. If the curve only crosses over itself a finite number of times, then one labels the intersections with a letter from the alphabet used. Traveling along the curve, the word is determined by recording each letter as an intersection is passed. Gauss noticed that the distance between when the same symbol shows up in a word is an even integer.[1]
Walther Franz Anton von Dyck began the work of combinatorics on words in group theory by his published work in 1882 and 1883. He began by using words as group elements. Lagrange also contributed in 1771 with his work on permutation groups.[1]
One aspect of combinatorics on words studied in group theory is reduced words. A group is constructed from words on some alphabet of generators and their inverse elements, excluding factors of the form aā or āa, for some a in the alphabet. Reduced words are formed when the factors aā and āa are used to cancel out elements until a unique word is reached.[1]
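The cancellation process is a simple stack scan. In the sketch below, the convention that an uppercase letter denotes the inverse of its lowercase counterpart (A = ā) is invented for illustration:

    def reduce_word(w):
        # cancel adjacent inverse pairs with a stack; cascades are handled naturally
        out = []
        for c in w:
            if out and out[-1] != c and out[-1].lower() == c.lower():
                out.pop()
            else:
                out.append(c)
        return "".join(out)

    print(reduce_word("abBAa"))   # "a": the prefix abBA cancels away completely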
Nielsen transformations were also developed. For a set of elements of a free group, a Nielsen transformation is achieved by three transformations: replacing an element with its inverse, replacing an element with the product of itself and another element, and eliminating any element equal to 1. By applying these transformations Nielsen reduced sets are formed. A reduced set means no element can be multiplied by other elements to cancel out completely. There are also connections between Nielsen transformations and Sturmian words.[1]
One problem considered in the study of combinatorics on words in group theory is the following: for two elements x, y of a semigroup, does x = y hold modulo the defining relations of x and y? Post and Markov studied this problem and determined it undecidable, meaning that there is no possible algorithm that can answer the question in all cases (because any such algorithm could be encoded into a word problem which that algorithm could not solve).[1]
The Burnside question was answered using the existence of an infinite cube-free word. This question asks if a group is finite if the group has a definite number of generators and meets the criterion x^n = 1 for every x in the group.[1]
Many word problems are undecidable based on the Post correspondence problem. Any two homomorphisms g, h with a common domain and a common codomain form an instance of the Post correspondence problem, which asks whether there exists a word w in the domain such that g(w) = h(w). Post proved that this problem is undecidable; consequently, any word problem that can be reduced to this basic problem is likewise undecidable.[1]
Combinatorics on words has applications to equations. Makanin proved that it is decidable whether a finite system of equations between words has a solution.[1]
|
https://en.wikipedia.org/wiki/Combinatorics_on_words
|
In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems.[1] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.[2]
Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory.[3]
Formal methods can be applied at various points through the development process.
Formal methods may be used to give a formal description of the system to be developed, at whatever level of detail desired. Further formal methods may then depend on this specification to synthesize a program or to verify the correctness of a system.
Alternatively, specification may be the only stage in which formal methods are used. By writing a specification, ambiguities in the informal requirements can be discovered and resolved. Additionally, engineers can use a formal specification as a reference to guide their development processes.[4]
The need for formal specification systems has been noted for years. In the ALGOL 58 report,[5] John Backus presented a formal notation for describing programming language syntax, later named Backus normal form then renamed Backus–Naur form (BNF).[6] Backus also wrote that a formal description of the meaning of syntactically valid ALGOL programs was not completed in time for inclusion in the report, stating that it "will be included in a subsequent paper." However, no paper describing the formal semantics was ever released.[7]
Program synthesis is the process of automatically creating a program that conforms to a specification. Deductive synthesis approaches rely on a complete formal specification of the program, whereas inductive approaches infer the specification from examples. Synthesizers perform a search over the space of possible programs to find a program consistent with the specification. Because of the size of this search space, developing efficient search algorithms is one of the major challenges in program synthesis.[8]
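A toy enumerative synthesizer illustrates the search: enumerate compositions of primitive functions in order of increasing length until one agrees with all input-output examples (the inductive "specification"). All names and primitives below are invented for illustration.

    from itertools import product

    # Toy primitives and input-output examples standing in for a specification.
    primitives = {"inc": lambda x: x + 1, "double": lambda x: 2 * x}

    def run(prog, x):
        for name in prog:               # apply the primitives left to right
            x = primitives[name](x)
        return x

    def synthesize(examples, max_depth=3):
        # enumerate the space of programs in order of increasing length
        for depth in range(1, max_depth + 1):
            for prog in product(primitives, repeat=depth):
                if all(run(prog, x) == y for x, y in examples):
                    return prog
        return None

    print(synthesize([(1, 4), (3, 8)]))   # ('inc', 'double')

The search space grows exponentially with program length, which is exactly the scaling problem the article mentions.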
Formal verification is the use of software tools to prove properties of a formal specification, or to prove that a formal model of a system implementation satisfies its specification.
Once a formal specification has been developed, the specification may be used as the basis for proving properties of the specification, and by inference, properties of the system implementation.
Sign-off verification is the use of a formal verification tool that is highly trusted. Such a tool can replace traditional verification methods (the tool may even be certified).[citation needed]
Sometimes, the motivation for proving the correctness of a system is not the obvious need for reassurance of the correctness of the system, but a desire to understand the system better. Consequently, some proofs of correctness are produced in the style of mathematical proof: handwritten (or typeset) using natural language, using a level of informality common to such proofs. A "good" proof is one that is readable and understandable by other human readers.
Critics of such approaches point out that the ambiguity inherent in natural language allows errors to be undetected in such proofs; often, subtle errors can be present in the low-level details typically overlooked by such proofs. Additionally, the work involved in producing such a good proof requires a high level of mathematical sophistication and expertise.
In contrast, there is increasing interest in producing proofs of correctness of such systems by automated means. Automated techniques fall into three general categories: automated theorem proving, model checking, and abstract interpretation.
Some automated theorem provers require guidance as to which properties are "interesting" enough to pursue, while others work without human intervention. Model checkers can quickly get bogged down in checking millions of uninteresting states if not given a sufficiently abstract model.
Proponents of such systems argue that the results have greater mathematical certainty than human-produced proofs, since all the tedious details have been algorithmically verified. The training required to use such systems is also less than that required to produce good mathematical proofs by hand, making the techniques accessible to a wider variety of practitioners.
Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give no explanation of that truth. There is also the problem of "verifying the verifier"; if the program that aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced results. Some modern model checking tools produce a "proof log" detailing each step in their proof, making it possible to perform, given suitable tools, independent verification.
The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. no false negatives are returned. Moreover, it scales efficiently, by tuning the abstract domain representing the property to be analyzed, and by applying widening operators[9] to get fast convergence.
Formal methods include a number of different techniques.
The design of a computing system can be expressed using a specification language, which is a formal language that includes a proof system. Using this proof system, formal verification tools can reason about the specification and establish that a system adheres to the specification.[10]
A binary decision diagram is a data structure that represents a Boolean function.[11] If a Boolean formula P expresses that an execution of a program conforms to the specification, a binary decision diagram can be used to determine if P is a tautology; that is, it always evaluates to TRUE. If this is the case, then the program always conforms to the specification.[12]
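The following self-contained sketch builds a reduced ordered BDD for a formula over variables indexed 0..n−1 in a fixed order. The class and method names are invented, and the construction enumerates assignments rather than operating symbolically, which a production BDD package would avoid; the point is only that a tautology reduces to the TRUE terminal.

    class BDD:
        def __init__(self):
            self.unique = {}            # (var, low, high) -> node id (hash-consing)
            self.nodes = [None, None]   # ids 0 and 1 are the FALSE and TRUE terminals

        def mk(self, var, low, high):
            if low == high:             # eliminate redundant tests
                return low
            key = (var, low, high)
            if key not in self.unique:
                self.unique[key] = len(self.nodes)
                self.nodes.append(key)
            return self.unique[key]

        def build(self, f, n, var=0, env=()):
            # Shannon expansion over variables var..n-1
            if var == n:
                return 1 if f(env) else 0
            low = self.build(f, n, var + 1, env + (0,))
            high = self.build(f, n, var + 1, env + (1,))
            return self.mk(var, low, high)

    # P(x, y) = (x and y) implies x, a tautology: its BDD is the TRUE terminal.
    bdd = BDD()
    root = bdd.build(lambda e: (not (e[0] and e[1])) or e[0], n=2)
    print(root == 1)   # True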
A SAT solver is a program that can solve the Boolean satisfiability problem, the problem of finding an assignment of variables that makes a given propositional formula evaluate to true. If a Boolean formula P expresses that a specific execution of a program conforms to the specification, then determining that ¬P is unsatisfiable is equivalent to determining that all executions conform to the specification. SAT solvers are often used in bounded model checking, but can also be used in unbounded model checking.[13]
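In miniature, with exhaustive enumeration standing in for a real solver, the unsatisfiability check looks like this (all names and the toy formula are illustrative):

    from itertools import product

    # Exhaustive check that a formula is unsatisfiable; a real SAT solver
    # searches far more cleverly, but the interface is the same in spirit.
    def unsat(formula, n_vars):
        return not any(formula(v) for v in product([False, True], repeat=n_vars))

    # P(v) = v[0] or v[1] plays the role of "this execution conforms".
    not_p = lambda v: not (v[0] or v[1])
    print(unsat(not_p, 2))   # False: x = y = False is a counterexample violating P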
Formal methods are applied in different areas of hardware and software, including routers, Ethernet switches, routing protocols, security applications, and operating system microkernels such as seL4. There are several examples in which they have been used to verify the functionality of the hardware and software used in data centres. IBM used ACL2, a theorem prover, in the AMD x86 processor development process.[citation needed] Intel uses such methods to verify its hardware and firmware (permanent software programmed into a read-only memory)[citation needed]. Dansk Datamatik Center used formal methods in the 1980s to develop a compiler system for the Ada programming language that went on to become a long-lived commercial product.[14][15]
There are several other projects of NASA in which formal methods are applied, such as the Next Generation Air Transportation System[citation needed], Unmanned Aircraft System integration in the National Airspace System,[16] and Airborne Coordinated Conflict Resolution and Detection (ACCoRD).[17] The B-Method with Atelier B[18] is used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.
Formal verification has been frequently used in hardware by most of the well-known hardware vendors, such as IBM, Intel, and AMD. There are many areas of hardware where Intel has used formal methods to verify the working of its products, such as parameterized verification of a cache-coherent protocol,[19] Intel Core i7 processor execution engine validation[20] (using theorem proving, BDDs, and symbolic evaluation), optimization for the Intel IA-64 architecture using the HOL Light theorem prover,[21] and verification of a high-performance dual-port gigabit Ethernet controller with support for the PCI Express protocol and Intel advanced management technology using Cadence.[22] Similarly, IBM has used formal methods in the verification of power gates,[23] registers,[24] and functional verification of the IBM Power7 microprocessor.[25]
In software development, formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards such as DO-178C allow the use of formal methods through supplementation, and Common Criteria mandates formal methods at the highest levels of categorization.
For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, and the Z notation.
In functional programming, property-based testing has allowed the mathematical specification and testing (if not exhaustive testing) of the expected behaviour of individual functions.
The Object Constraint Language (and specializations such as the Java Modeling Language) has allowed object-oriented systems to be formally specified, if not necessarily formally verified.
For concurrent software and systems, Petri nets, process algebra, and finite-state machines (which are based on automata theory; see also virtual finite state machine or event driven finite state machine) allow executable software specification and can be used to build up and validate application behaviour.
Another approach to formal methods in software development is to write a specification in some form of logic—usually a variation of first-order logic—and then to directly execute the logic as though it were a program. The OWL language, based on description logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, as well as executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English–logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.[citation needed]
Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by interpretation through software such as code or test-case generators.[26]
Some practitioners believe that the formal methods community has overemphasized full formalization of a specification or design.[27][28] They contend that the expressiveness of the languages involved, as well as the complexity of the systems being modelled, make full formalization a difficult and expensive task. As an alternative, various lightweight formal methods, which emphasize partial specification and focused application, have been proposed. Examples of this lightweight approach to formal methods include the Alloy object modelling notation,[29] Denney's synthesis of some aspects of the Z notation with use case driven development,[30] and the CSK VDM Tools.[31]
There are a variety of formal methods and notations available.
Many problems in formal methods are NP-hard, but instances arising in practice can often be solved. For example, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, but SAT solvers can solve a variety of large instances. There are "solvers" for a variety of problems that arise in formal methods, and there are many periodic competitions to evaluate the state of the art in solving such problems.[33]
|
https://en.wikipedia.org/wiki/Formal_method
|
In abstract algebra, the free monoid on a set is the monoid whose elements are all the finite sequences (or strings) of zero or more elements from that set, with string concatenation as the monoid operation and with the unique sequence of zero elements, often called the empty string and denoted by ε or λ, as the identity element. The free monoid on a set A is usually denoted A∗. The free semigroup on A is the subsemigroup of A∗ containing all elements except the empty string. It is usually denoted A+.[1][2]
More generally, an abstract monoid (or semigroup) S is described as free if it is isomorphic to the free monoid (or semigroup) on some set.[3]
As the name implies, free monoids and semigroups are those objects which satisfy the usual universal property defining free objects, in the respective categories of monoids and semigroups. It follows that every monoid (or semigroup) arises as a homomorphic image of a free monoid (or semigroup). The study of semigroups as images of free semigroups is called combinatorial semigroup theory.
Free monoids (and monoids in general) are associative, by definition; that is, they are written without any parentheses to show grouping or order of operation. The non-associative equivalent is the free magma.
The monoid (N0, +) of natural numbers (including zero) under addition is a free monoid on a singleton free generator, in this case, the natural number 1.
According to the formal definition, this monoid consists of all sequences like "1", "1+1", "1+1+1", "1+1+1+1", and so on, including the empty sequence.
Mapping each such sequence to its evaluation result[4] and the empty sequence to zero establishes an isomorphism from the set of such sequences to N0.
This isomorphism is compatible with "+", that is, for any two sequences s and t, if s is mapped (i.e. evaluated) to a number m and t to n, then their concatenation s+t is mapped to the sum m+n.
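The same isomorphism is visible if words over a one-letter alphabet stand in for the formal sums; evaluation is simply word length, and concatenation becomes addition (a minimal sketch):

    # Words over the one-letter alphabet {"1"}; evaluation is word length.
    def evaluate(w):
        return len(w)

    s, t = "11", "111"
    assert evaluate(s + t) == evaluate(s) + evaluate(t)   # 5 == 2 + 3
    assert evaluate("") == 0                              # empty sequence -> zero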
In formal language theory, usually a finite set of "symbols" A (sometimes called the alphabet) is considered. A finite sequence of symbols is called a "word over A", and the free monoid A∗ is called the "Kleene star of A".
Thus, the abstract study of formal languages can be thought of as the study of subsets of finitely generated free monoids.
For example, assuming an alphabet A = {a, b, c}, its Kleene star A∗ contains all concatenations of a, b, and c: A∗ = {ε, a, ab, ba, caa, cccbabbc, ...}.
If A is any set, the word length function on A∗ is the unique monoid homomorphism from A∗ to (N0, +) that maps each element of A to 1. A free monoid is thus a graded monoid.[5] (A graded monoid M is a monoid that can be written as M = M0 ⊕ M1 ⊕ M2 ⋯. Each Mn is a grade; the grading here is just the length of the string. That is, Mn contains those strings of length n. The ⊕ symbol here can be taken to mean "set union"; it is used instead of the symbol ∪ because, in general, set unions might not be monoids, and so a distinct symbol is used. By convention, gradations are always written with the ⊕ symbol.)
There are deep connections between the theory of semigroups and that of automata. For example, every formal language has a syntactic monoid that recognizes that language. For the case of a regular language, that monoid is isomorphic to the transition monoid associated to the semiautomaton of some deterministic finite automaton that recognizes that language. The regular languages over an alphabet A are the closure of the finite subsets of A∗, the free monoid over A, under union, product, and generation of submonoid.[6]
For the case of concurrent computation, that is, systems with locks, mutexes or thread joins, the computation can be described with history monoids and trace monoids. Roughly speaking, elements of the monoid can commute (e.g. different threads can execute in any order), but only up to a lock or mutex, which prevents further commutation (e.g. serializes thread access to some object).
We define a pair of words in A∗ of the form uv and vu as conjugate: the conjugates of a word are thus its circular shifts.[7] Two words are conjugate in this sense if they are conjugate in the sense of group theory as elements of the free group generated by A.[8]
A free monoid is equidivisible: if the equation mn = pq holds, then there exists an s such that either m = ps, sn = q or ms = p, n = sq.[9] This result is also known as Levi's lemma.[10]
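For strings, the witness s can be read off directly from the lengths; a minimal sketch of the two cases of the lemma:

    def levi(m, n, p, q):
        assert m + n == p + q            # the two concatenations must be equal
        if len(m) >= len(p):
            s = m[len(p):]               # case m = ps and sn = q
            assert m == p + s and s + n == q
            return s
        s = p[len(m):]                   # case ms = p and n = sq
        assert p == m + s and n == s + q
        return s

    print(levi("ab", "cd", "abc", "d"))  # "c"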
A monoid is free if and only if it is graded (in the strong sense that only the identity has gradation 0) and equidivisible.[9]
The members of a set A are called the free generators for A∗ and A+. The superscript * is then commonly understood to be the Kleene star. More generally, if S is an abstract free monoid (semigroup), then a set of elements which maps onto the set of single-letter words under an isomorphism to a monoid A∗ (semigroup A+) is called a set of free generators for S.
Each free monoid (or semigroup) S has exactly one set of free generators, the cardinality of which is called the rank of S.
Two free monoids or semigroups are isomorphic if and only if they have the same rank. In fact, every set of generators for a free monoid or semigroup S contains the free generators, since a free generator has word length 1 and hence can only be generated by itself. It follows that a free semigroup or monoid is finitely generated if and only if it has finite rank.
A submonoid N of A∗ is stable if u, v, ux, xv in N together imply x in N.[11] A submonoid of A∗ is stable if and only if it is free.[12] For example, using the set of bits { "0", "1" } as A, the set N of all bit strings containing an even number of "1"s is a stable submonoid because if u contains an even number of "1"s, and ux as well, then x must contain an even number of "1"s, too. While N cannot be freely generated by any set of single bits, it can be freely generated by the set of bit strings { "0", "11", "101", "1001", "10001", ... } – the set of strings of the form "10^n1" for some nonnegative integer n (along with the string "0").
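The unique factorization over these free generators can be computed by a greedy left-to-right scan: a "0" is a factor by itself, and each "1" is paired with the next "1" (a sketch under the assumption that the input really has an even number of "1"s):

    def factor_even_ones(w):
        assert w.count("1") % 2 == 0     # membership in the submonoid N
        factors, i = [], 0
        while i < len(w):
            if w[i] == "0":
                factors.append("0")
                i += 1
            else:
                j = w.index("1", i + 1)  # find the matching second "1"
                factors.append(w[i:j + 1])
                i = j + 1
        return factors

    print(factor_even_ones("0110100100"))   # ['0', '11', '0', '1001', '0', '0']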
A set of free generators for a free monoid P is referred to as a basis for P: a set of words C is a code if C∗ is a free monoid and C is a basis.[3] A set X of words in A∗ is a prefix, or has the prefix property, if it does not contain a proper (string) prefix of any of its elements. Every prefix set in A+ is a code, indeed a prefix code.[3][13]
A submonoid N of A∗ is right unitary if x, xy in N implies y in N. A submonoid is generated by a prefix if and only if it is right unitary.[14]
A factorization of a free monoid is a sequence of subsets of words with the property that every word in the free monoid can be written as a concatenation of elements drawn from the subsets. The Chen–Fox–Lyndon theorem states that the Lyndon words furnish a factorization. More generally, Hall words provide a factorization; the Lyndon words are a special case of the Hall words.
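The Chen–Fox–Lyndon factorization can be computed in linear time by Duval's algorithm; a compact sketch (the factors come out as a non-increasing sequence of Lyndon words):

    def chen_fox_lyndon(s):
        # Duval's algorithm: repeatedly find the longest Lyndon prefix pattern
        factors, i = [], 0
        while i < len(s):
            j, k = i + 1, i
            while j < len(s) and s[k] <= s[j]:
                k = i if s[k] < s[j] else k + 1
                j += 1
            while i <= k:
                factors.append(s[i:i + j - k])
                i += j - k
        return factors

    print(chen_fox_lyndon("banana"))   # ['b', 'an', 'an', 'a']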
The intersection of free submonoids of a free monoid A∗ is again free.[15][16] If S is a subset of a free monoid A∗ then the intersection of all free submonoids of A∗ containing S is well-defined, since A∗ itself is free, and contains S; it is a free monoid and called the free hull of S. A basis for this intersection is a code.
The defect theorem[15][16][17] states that if X is finite and C is the basis of the free hull of X, then either X is a code and C = X, or |C| < |X|.
A monoid morphism f from a free monoid B∗ to a monoid M is a map such that f(xy) = f(x)⋅f(y) for words x, y and f(ε) = ι, where ε and ι denote the identity elements of B∗ and M, respectively. The morphism f is determined by its values on the letters of B and conversely any map from B to M extends to a morphism. A morphism is non-erasing[18] or continuous[19] if no letter of B maps to ι and trivial if every letter of B maps to ι.[20]
A morphism f from a free monoid B∗ to a free monoid A∗ is total if every letter of A occurs in some word in the image of f; cyclic[20] or periodic[21] if the image of f is contained in {w}∗ for some word w of A∗. A morphism f is k-uniform if the length |f(a)| is constant and equal to k for all a in A.[22][23] A 1-uniform morphism is strictly alphabetic[19] or a coding.[24]
A morphism f from a free monoid B∗ to a free monoid A∗ is simplifiable if there is an alphabet C of cardinality less than that of B such that the morphism f factors through C∗, that is, it is the composition of a morphism from B∗ to C∗ and a morphism from that to A∗; otherwise f is elementary. The morphism f is called a code if the image of the alphabet B under f is a code. Every elementary morphism is a code.[25]
For L a subset of B∗, a finite subset T of L is a test set for L if morphisms f and g on B∗ agree on L if and only if they agree on T. The Ehrenfeucht conjecture is that any subset L has a test set:[26] it has been proved[27] independently by Albert and Lawrence; McNaughton; and Guba. The proofs rely on Hilbert's basis theorem.[28]
The computational embodiment of a monoid morphism is a map followed by a fold. In this setting, the free monoid on a set A corresponds to lists of elements from A with concatenation as the binary operation. A monoid homomorphism from the free monoid to any other monoid (M, •) is a function f such that f(x1 x2 ... xn) = f(x1) • f(x2) • ... • f(xn) and f(ε) = e, where e is the identity on M. Computationally, every such homomorphism corresponds to a map operation applying f to all the elements of a list, followed by a fold operation which combines the results using the binary operator •. This computational paradigm (which can be generalized to non-associative binary operators) has inspired the MapReduce software framework.[citation needed]
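For instance, the word-length homomorphism mentioned above is literally a map followed by a fold: map every letter to 1, then fold with + starting from the identity 0 (a minimal sketch):

    from functools import reduce

    def hom(f, op, identity, word):
        # map each letter through f, then fold the results with op
        return reduce(op, map(f, word), identity)

    print(hom(lambda c: 1, lambda a, b: a + b, 0, "encyclopedia"))   # 12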
An endomorphism of A∗ is a morphism from A∗ to itself.[29] The identity map I is an endomorphism of A∗, and the endomorphisms form a monoid under composition of functions.
An endomorphism f is prolongable if there is a letter a such that f(a) = as for a non-empty string s.[30]
The operation of string projection is an endomorphism. That is, given a letter a ∈ Σ and a string s ∈ Σ∗, the string projection pa(s) removes every occurrence of a from s; it is formally defined by pa(ε) = ε, pa(sb) = pa(s)b for a letter b ≠ a, and pa(sa) = pa(s).
Note that string projection is well-defined even if the rank of the monoid is infinite, as the above recursive definition works for all strings of finite length. String projection is a morphism in the category of free monoids, so that pa : Σ∗ → pa(Σ∗), where pa(Σ∗) is understood to be the free monoid of all finite strings that don't contain the letter a. Projection commutes with the operation of string concatenation, so that pa(st) = pa(s)pa(t) for all strings s and t. There are many right inverses to string projection, and thus it is a split epimorphism.
The identity morphism is pε, defined as pε(s) = s for all strings s, and pε(ε) = ε.
String projection is commutative, as clearly pa(pb(s)) = pb(pa(s)).
For free monoids of finite rank, this follows from the fact that free monoids of the same rank are isomorphic, as projection reduces the rank of the monoid by one.
String projection is idempotent, as pa(pa(s)) = pa(s)
for all strings s. Thus, projection is an idempotent, commutative operation, and so it forms a bounded semilattice or a commutative band.
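All three properties are easy to confirm on examples; a minimal sketch, with projection written as a character filter:

    def project(a, s):
        # p_a deletes every occurrence of the letter a
        return "".join(c for c in s if c != a)

    s, t = "abcab", "ba"
    assert project("a", s + t) == project("a", s) + project("a", t)        # morphism
    assert project("a", project("b", s)) == project("b", project("a", s))  # commutative
    assert project("a", project("a", s)) == project("a", s)                # idempotent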
Given a set A, the free commutative monoid on A is the set of all finite multisets with elements drawn from A, with the monoid operation being multiset sum and the monoid unit being the empty multiset.
For example, if A = {a, b, c}, elements of the free commutative monoid on A are of the form of finite multisets such as {a}, {a, b}, or {a, a, b, c, c, c}, where only the multiplicity of each letter matters and the order is irrelevant.
The fundamental theorem of arithmetic states that the monoid of positive integers under multiplication is a free commutative monoid on an infinite set of generators, the prime numbers.
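Concretely, a positive integer corresponds to the multiset of its prime factors, and multiplication of integers becomes multiset sum; a minimal sketch using trial division:

    from collections import Counter

    def factor(n):
        # the multiset of prime factors of n, found by trial division
        f, p = Counter(), 2
        while n > 1:
            while n % p == 0:
                f[p] += 1
                n //= p
            p += 1
        return f

    assert factor(12) + factor(10) == factor(120)   # {2:2,3:1} + {2:1,5:1} == {2:3,3:1,5:1}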
The free commutative semigroup is the subset of the free commutative monoid that contains all multisets with elements drawn from A except the empty multiset.
The free partially commutative monoid, or trace monoid, is a generalization that encompasses both the free and free commutative monoids as instances. This generalization finds applications in combinatorics and in the study of parallelism in computer science.
|
https://en.wikipedia.org/wiki/Free_monoid
|
In linguistics, grammar is the set of rules for how a natural language is structured, as demonstrated by its speakers or writers. Grammar rules may concern the use of clauses, phrases, and words. The term may also refer to the study of such rules, a subject that includes phonology, morphology, and syntax, together with phonetics, semantics, and pragmatics. There are, broadly speaking, two different ways to study grammar: traditional grammar and theoretical grammar.
Fluency in a particular language variety involves a speaker internalizing these rules, many or most of which are acquired by observing other speakers, as opposed to intentional study or instruction. Much of this internalization occurs during early childhood; learning a language later in life usually involves more direct instruction.[1] The term grammar can also describe the linguistic behaviour of groups of speakers and writers rather than individuals. Differences in scale are important to this meaning: for example, English grammar could describe those rules followed by every one of the language's speakers.[2] At smaller scales, it may refer to rules shared by smaller groups of speakers.
A description, study, or analysis of such rules may also be known as a grammar, or as a grammar book. A reference work describing the grammar of a language is called a reference grammar or simply a grammar. A fully explicit grammar, which exhaustively describes the grammatical constructions of a particular speech variety, is called a descriptive grammar. This kind of linguistic description contrasts with linguistic prescription, an attempt to discourage some constructions while codifying others, either absolutely or in the framework of a standard language. The word grammar often has divergent meanings when used in contexts outside linguistics. It may be used more broadly to include orthographic conventions of written language, such as spelling and punctuation, which are not typically considered part of grammar by linguists; that is, the conventions used for writing a language. It may also be used more narrowly to refer to a set of prescriptive norms only, excluding the aspects of a language's grammar which do not change or are clearly acceptable (or not) without the need for discussions.
The word grammar is derived from Greek γραμματικὴ τέχνη (grammatikḕ téchnē), which means "art of letters", from γράμμα (grámma), "letter", itself from γράφειν (gráphein), "to draw, to write".[3] The same Greek root also appears in the words graphics, grapheme, and photograph.
The first systematic grammar of Sanskrit originated in Iron Age India, with Yaska (6th century BC), Pāṇini (6th–5th century BC[4]) and his commentators Pingala (c. 200 BC), Katyayana, and Patanjali (2nd century BC). Tolkāppiyam, the earliest Tamil grammar, is mostly dated to before the 5th century AD. The Babylonians also made some early attempts at language description.[5]
Grammar appeared as a discipline in Hellenism from the 3rd century BC forward with authors such as Rhyanus and Aristarchus of Samothrace. The oldest known grammar handbook is the Art of Grammar (Τέχνη Γραμματική), a succinct guide to speaking and writing clearly and effectively, written by the ancient Greek scholar Dionysius Thrax (c. 170 – c. 90 BC), a student of Aristarchus of Samothrace who founded a school on the Greek island of Rhodes. Dionysius Thrax's grammar book remained the primary grammar textbook for Greek schoolboys until as late as the twelfth century AD. The Romans based their grammatical writings on it and its basic format remains the basis for grammar guides in many languages even today.[6] Latin grammar developed by following Greek models from the 1st century BC, due to the work of authors such as Orbilius Pupillus, Remmius Palaemon, Marcus Valerius Probus, Verrius Flaccus, and Aemilius Asper.
The grammar of Irish originated in the 7th century with Auraicept na n-Éces. Arabic grammar emerged with Abu al-Aswad al-Du'ali in the 7th century. The first treatises on Hebrew grammar appeared in the High Middle Ages, in the context of Midrash (exegesis of the Hebrew Bible). The Karaite tradition originated in Abbasid Baghdad. The Diqduq (10th century) is one of the earliest grammatical commentaries on the Hebrew Bible.[7] Ibn Barun, in the 12th century, compares the Hebrew language with Arabic in the Islamic grammatical tradition.[8]
Belonging to the trivium of the seven liberal arts, grammar was taught as a core discipline throughout the Middle Ages, following the influence of authors from Late Antiquity, such as Priscian. Treatment of vernaculars began gradually during the High Middle Ages, with isolated works such as the First Grammatical Treatise, but became influential only in the Renaissance and Baroque periods. In 1486, Antonio de Nebrija published Las introduciones Latinas contrapuesto el romance al Latin, and the first Spanish grammar, Gramática de la lengua castellana, in 1492. During the 16th-century Italian Renaissance, the Questione della lingua was the discussion on the status and ideal form of the Italian language, initiated by Dante's de vulgari eloquentia (Pietro Bembo, Prose della volgar lingua, Venice 1525). The first grammar of Slovene was written in 1583 by Adam Bohorič, and Grammatica Germanicae Linguae, the first grammar of German, was published in 1578.
Grammars of some languages began to be compiled for the purposes of evangelism and Bible translation from the 16th century onward, such as Grammatica o Arte de la Lengua General de Los Indios de Los Reynos del Perú (1560), a Quechua grammar by Fray Domingo de Santo Tomás.
From the latter part of the 18th century, grammar came to be understood as a subfield of the emerging discipline of modern linguistics. The Deutsche Grammatik of Jacob Grimm was first published in the 1810s. The Comparative Grammar of Franz Bopp, the starting point of modern comparative linguistics, came out in 1833.
Frameworks of grammar which seek to give a precise scientific theory of the syntactic rules of grammar and their function have been developed in theoretical linguistics.
Other frameworks are based on an innate "universal grammar", an idea developed by Noam Chomsky. In such models, the object is placed into the verb phrase. The most prominent biologically oriented theories are:
Parse trees are commonly used by such frameworks to depict their rules. There are various alternative schemes for some grammars:
Grammars evolve through usage. Historically, with the advent of written representations, formal rules about language usage tend to appear also, although such rules tend to describe writing conventions more accurately than conventions of speech.[11] Formal grammars are codifications of usage which are developed by repeated documentation and observation over time. As rules are established and developed, the prescriptive concept of grammatical correctness can arise. This often produces a discrepancy between contemporary usage and that which has been accepted, over time, as being standard or "correct". Linguists tend to view prescriptive grammar as having little justification beyond their authors' aesthetic tastes, although style guides may give useful advice about standard language employment based on descriptions of usage in contemporary writings of the same language. Linguistic prescriptions also form part of the explanation for variation in speech, particularly variation in the speech of an individual speaker (for example, why some speakers say "I didn't do nothing", some say "I didn't do anything", and some say one or the other depending on social context).
The formal study of grammar is an important part of children's schooling from a young age through advanced learning, though the rules taught in schools are not a "grammar" in the sense that most linguists use, particularly as they are prescriptive in intent rather than descriptive.
Constructed languages (also called planned languages or conlangs) are more common in the modern day, although still extremely uncommon compared to natural languages. Many have been designed to aid human communication (for example, naturalistic Interlingua, schematic Esperanto, and the highly logical Lojban). Each of these languages has its own grammar.
Syntax refers to the linguistic structure above the word level (for example, how sentences are formed) – though without taking into account intonation, which is the domain of phonology. Morphology, by contrast, refers to the structure at and below the word level (for example, how compound words are formed), but above the level of individual sounds, which, like intonation, are in the domain of phonology.[12] However, no clear line can be drawn between syntax and morphology. Analytic languages use syntax to convey information that is encoded by inflection in synthetic languages. In other words, word order is not significant, and morphology is highly significant in a purely synthetic language, whereas morphology is not significant and syntax is highly significant in an analytic language. For example, Chinese and Afrikaans are highly analytic, thus meaning is very context-dependent. (Both have some inflections, and both have had more in the past; thus, they are becoming even less synthetic and more "purely" analytic over time.) Latin, which is highly synthetic, uses affixes and inflections to convey the same information that Chinese does with syntax. Because Latin words are quite (though not totally) self-contained, an intelligible Latin sentence can be made from elements that are arranged almost arbitrarily. Latin has complex affixation and simple syntax, whereas Chinese has the opposite.
Prescriptive grammar is taught in primary and secondary school. The term "grammar school" historically referred to a school (attached to a cathedral or monastery) that taught Latin grammar to future priests and monks. It originally referred to a school that taught students how to read, scan, interpret, and declaim Greek and Latin poets (including Homer, Virgil, Euripides, and others). These should not be mistaken for the related, albeit distinct, modern British grammar schools.
A standard language is a dialect that is promoted above other dialects in writing, education, and, broadly speaking, in the public sphere; it contrasts with vernacular dialects, which may be the objects of study in academic, descriptive linguistics but which are rarely taught prescriptively. The standardized "first language" taught in primary education may be subject to political controversy because it may sometimes establish a standard defining nationality or ethnicity.
Recently, efforts have begun to update grammar instruction in primary and secondary education. The main focus has been to prevent the use of outdated prescriptive rules in favor of setting norms based on earlier descriptive research and to change perceptions about the relative "correctness" of prescribed standard forms in comparison to non-standard dialects. A series of metastudies have found that the explicit teaching of grammatical parts of speech and syntax has little or no effect on the improvement of student writing quality in elementary school, middle school or high school; other methods of writing instruction had far greater positive effects, including strategy instruction, collaborative writing, summary writing, process instruction, sentence combining and inquiry projects.[13][14][15]
The preeminence of Parisian French has reigned largely unchallenged throughout the history of modern French literature. Standard Italian is based on the speech of Florence rather than the capital because of its influence on early literature. Likewise, standard Spanish is not based on the speech of Madrid but on that of educated speakers from more northern areas such as Castile and León (see Gramática de la lengua castellana). In Argentina and Uruguay the Spanish standard is based on the local dialects of Buenos Aires and Montevideo (Rioplatense Spanish). Portuguese has, for now, two official standards, Brazilian Portuguese and European Portuguese.
The Serbian variant of Serbo-Croatian is likewise divided; Serbia and the Republika Srpska of Bosnia and Herzegovina use their own distinct normative subvarieties, with differences in yat reflexes. The existence and codification of a distinct Montenegrin standard is a matter of controversy; some treat Montenegrin as a separate standard lect, and some think that it should be considered another form of Serbian.
Norwegian has two standards, Bokmål and Nynorsk, the choice between which is subject to controversy: each Norwegian municipality can either declare one as its official language or it can remain "language neutral". Nynorsk is backed by 27 percent of municipalities. The main language used in primary schools, chosen by referendum within the local school district, normally follows the official language of its municipality. Standard German emerged from the standardized chancellery use of High German in the 16th and 17th centuries. Until about 1800, it was almost exclusively a written language, but now it is so widely spoken that most of the former German dialects are nearly extinct.
Standard Chinese has official status as the standard spoken form of the Chinese language in the People's Republic of China (PRC), the Republic of China (ROC), and the Republic of Singapore. Pronunciation of Standard Chinese is based on the local accent of Mandarin Chinese from Luanping, Chengde in Hebei Province near Beijing, while grammar and syntax are based on modern vernacular written Chinese.
Modern Standard Arabic is directly based on Classical Arabic, the language of the Qur'an. The Hindustani language has two standards, Hindi and Urdu.
In the United States, the Society for the Promotion of Good Grammar designated 4 March as National Grammar Day in 2008.[16]
|
https://en.wikipedia.org/wiki/Grammar_framework
|
Mathematical notation consists of using symbols for representing operations, unspecified numbers, relations, and any other mathematical objects and assembling them into expressions and formulas. Mathematical notation is widely used in mathematics, science, and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way.
For example, the physicist Albert Einstein's formula E = mc² is the quantitative representation in mathematical notation of mass–energy equivalence.[1]
Mathematical notation was first introduced by François Viète at the end of the 16th century and largely expanded during the 17th and 18th centuries by René Descartes, Isaac Newton, Gottfried Wilhelm Leibniz, and, above all, Leonhard Euler.
The use of many symbols is the basis of mathematical notation. They play a role similar to that of words in natural languages, and they may play different roles in notation much as verbs, adjectives and nouns play different roles in a sentence.
Letters are typically used for naming—in mathematical jargon, one says representing—mathematical objects. The Latin and Greek alphabets are used extensively, but a few letters of other alphabets are also used sporadically, such as the Hebrew ℵ, Cyrillic Ш, and Hiragana よ. Uppercase and lowercase letters are considered as different symbols. For the Latin alphabet, different typefaces also provide different symbols. For example, r, R, ℝ, ℛ, 𝔯, and ℜ could theoretically appear in the same mathematical text with six different meanings. Normally, roman upright typeface is not used for symbols, except for symbols representing a standard function, such as the symbol "sin" of the sine function.[2]
In order to have more symbols, and for allowing related mathematical objects to be represented by related symbols, diacritics, subscripts and superscripts are often used. For example, {\displaystyle {\hat {f'_{1}}}} may denote the Fourier transform of the derivative of a function called f_1.
Symbols are not only used for naming mathematical objects. They can be used for operations (+, −, /, ⊕, …), for relations (=, <, ≤, ∼, ≡, …), for logical connectives (⟹, ∧, ∨, …), for quantifiers (∀, ∃), and for other purposes.
Some symbols are similar to Latin or Greek letters, some are obtained by deforming letters, some are traditional typographic symbols, but many have been specially designed for mathematics.
The International Organization for Standardization (ISO) is an international standard development organization composed of representatives from the national standards organizations of member countries. The international standard ISO 80000-2 (previously, ISO 31-11) specifies symbols for use in mathematical equations. The standard requires use of italic fonts for variables (e.g., E = mc²) and roman (upright) fonts for mathematical constants (e.g., e or π).
An expression is a written arrangement of symbols following the context-dependent, syntactic conventions of mathematical notation. Symbols can denote numbers, variables, operations, and functions.[3] Other symbols include punctuation marks and brackets, used for grouping where there is not a well-defined order of operations.
Expressions are commonly distinguished from formulas: expressions are a kind of mathematical object, whereas formulas are statements about mathematical objects.[4] This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8x − 5 is an expression, while the inequality 8x − 5 ≥ 3 is a formula.
To evaluate an expression means to find a numerical value equivalent to the expression.[5][6] Expressions can be evaluated or simplified by replacing operations that appear in them with their result. For example, the expression 8 × 2 − 5 simplifies to 16 − 5, and evaluates to 11.
It is believed that a notation to represent numbers was first developed at least 50,000 years ago.[7] Early mathematical ideas such as finger counting[8] have also been represented by collections of rocks, sticks, bone, clay, stone, wood carvings, and knotted ropes. The tally stick is a way of counting dating back to the Upper Paleolithic. Perhaps the oldest known mathematical texts are those of ancient Sumer. The Census Quipu of the Andes and the Ishango Bone from Africa both used the tally mark method of accounting for numerical concepts.
The concept of zero and the introduction of a notation for it are important developments in early mathematics, which predates for centuries the concept of zero as a number. It was used as a placeholder by the Babylonians and Greek Egyptians, and then as an integer by the Mayans, Indians and Arabs (see the history of zero).
Until the 16th century, mathematics was essentially rhetorical, in the sense that everything but explicit numbers was expressed in words. However, some authors such as Diophantus used some symbols as abbreviations.
The first systematic use of formulas, and, in particular, the use of symbols (variables) for unspecified numbers is generally attributed to François Viète (16th century). However, he used different symbols than those that are now standard.
Later, René Descartes (17th century) introduced the modern notation for variables and equations; in particular, the use of x, y, z for unknown quantities and a, b, c for known ones (constants). He also introduced the notation i and the term "imaginary" for the imaginary unit.
The 18th and 19th centuries saw the standardization of mathematical notation as used today. Leonhard Euler was responsible for many of the notations currently in use: the functional notation f(x), e for the base of the natural logarithm, ∑ for summation, etc.[9] He also popularized the use of π for the Archimedes constant (proposed by William Jones, based on an earlier notation of William Oughtred).[10]
Since then many new notations have been introduced, often specific to a particular area of mathematics. Some notations are named after their inventors, such as Leibniz's notation, the Legendre symbol, the Einstein summation convention, etc.
General typesetting systems are generally not well suited for mathematical notation. One of the reasons is that, in mathematical notation, the symbols are often arranged in two-dimensional figures, such as in:
TeX is a mathematically oriented typesetting system that was created in 1978 by Donald Knuth. It is widely used in mathematics, through its extension called LaTeX, and is a de facto standard. (The above expression is written in LaTeX.)
More recently, another approach for mathematical typesetting is provided by MathML. However, it is not well supported in web browsers, which is its primary target.
Modern Arabic mathematical notation is based mostly on the Arabic alphabet and is used widely in the Arab world, especially in pre-tertiary education. (Western notation uses Arabic numerals, but the Arabic notation also replaces Latin letters and related symbols with Arabic script.)
In addition to Arabic notation, mathematics also makes use of Greek letters to denote a wide variety of mathematical objects and variables. On some occasions, certain Hebrew letters are also used (such as in the context of infinite cardinals).
Some mathematical notations are mostly diagrammatic, and so are almost entirely script independent. Examples are Penrose graphical notation and Coxeter–Dynkin diagrams.
Braille-based mathematical notations used by blind people include Nemeth Braille and GS8 Braille.
The syntax of notation defines how symbols can be combined to make well-formed expressions, without any given meaning or interpretation. The semantics of notation interprets what the symbols represent and assigns a meaning to the expressions and formulas. The reverse process of taking a statement and writing it in logical or mathematical notation is called translation.
Given aformal language, aninterpretationassigns adomain of discourseto the language. Specifically, it assigns each of the constant symbols to objects of the domain, function letters to functions within the domain, predicate letters to statments, and vairiables are assumed to range over the domain.
The map–territory relation describes the relationship between an object and the representation of that object, such as the Earth and a map of it. In mathematics, this is how the number 4 relates to its representation "4". The quotation marks are the formally correct usage, distinguishing the number from its name. However, it is fairly common practice in mathematics to commit this fallacy, saying "Let x denote..." rather than "Let "x" denote...", which is generally harmless.
|
https://en.wikipedia.org/wiki/Mathematical_notation
|
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. More generally, string may also denote a sequence (or list) of data other than just characters.
Depending on the programming language and precise data type used, avariabledeclared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employdynamic allocationto allow it to hold a variable number of elements.
When a string appears literally insource code, it is known as astring literalor an anonymous string.[1]
Informal languages, which are used inmathematical logicandtheoretical computer science, a string is a finite sequence ofsymbolsthat are chosen from asetcalled analphabet.
A primary purpose of strings is to store human-readable text, like words and sentences. Strings are used to communicate information from a computer program to the user of the program.[2]A program may also accept string input from its user. Further, strings may store data expressed as characters yet not intended for human reading.
Example strings and their purposes:
The term string may also designate a sequence of data or computer records other than characters — like a "string ofbits" — but when used without qualification it refers to strings of characters.[4]
Use of the word "string" to mean any items arranged in a line, series or succession dates back centuries.[5][6]In 19th-century typesetting,compositorsused the term "string" to denote a length of type printed on paper; the string would be measured to determine the compositor's pay.[7][4][8]
Use of the word "string" to mean "a sequence of symbols or linguistic elements in a definite order" emerged from mathematics,symbolic logic, andlinguistic theoryto speak about theformalbehavior of symbolic systems, setting aside the symbols' meaning.[4]
For example, logicianC. I. Lewiswrote in 1918:[9]
A mathematical system is any set of strings of recognisable marks in which some of the strings are taken initially and the remainder derived from these by operations performed according to rules which are independent of any meaning assigned to the marks. That a system should consist of 'marks' instead of sounds or odours is immaterial.
According toJean E. Sammet, "the first realistic string handling and pattern matching language" for computers wasCOMITin the 1950s, followed by theSNOBOLlanguage of the early 1960s.[10]
Astring datatypeis a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly everyprogramming language. In some languages they are available asprimitive typesand in others ascomposite types. Thesyntaxof most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called aliteralorstring literal.
Although formal strings can have an arbitrary finite length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed-length strings, which have a fixed maximum length determined at compile time and which use the same amount of memory whether this maximum is needed or not, and variable-length strings, whose length is not arbitrarily fixed and which can use varying amounts of memory depending on the actual requirements at run time (see Memory management). Most strings in modern programming languages are variable-length strings. Of course, even variable-length strings are limited in length by the amount of available memory. The string length can be stored as a separate integer (which may put another artificial limit on the length) or implicitly through a termination character, usually a character value with all bits zero, as in the C programming language. See also "Null-terminated" below.
String datatypes have historically allocated one byte per character, and, although the exact character set varied by region, character encodings were similar enough that programmers could often get away with ignoring this, since characters a program treated specially (such as period and space and comma) were in the same place in all the encodings a program would encounter. These character sets were typically based onASCIIorEBCDIC. If text in one encoding was displayed on a system using a different encoding, text was oftenmangled, though often somewhat readable and some computer users learned to read the mangled text.
Logographic languages such as Chinese, Japanese, and Korean (known collectively as CJK) need far more than 256 characters (the limit of an encoding using one 8-bit byte per character) for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings such as the EUC family guarantee that a byte value in the ASCII range will represent only that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings such as ISO-2022 and Shift-JIS do not make such guarantees, making matching on byte codes unsafe. These encodings also were not "self-synchronizing", so that locating character boundaries required backing up to the start of a string, and pasting two strings together could result in corruption of the second string.
Unicode has simplified the picture somewhat. Most programming languages now have a datatype for Unicode strings. Unicode's preferred byte stream format UTF-8 is designed not to have the problems described above for older multibyte encodings. UTF-8, UTF-16 and UTF-32 require the programmer to know that the fixed-size code units are different from the "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference (UTF-32 does make code points fixed-sized, but these are not "characters" due to composing codes).
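As an illustration of the code unit versus character distinction (a sketch, not from the article), the following C program counts UTF-8 code points by skipping continuation bytes; it assumes well-formed UTF-8 input.

#include <stdio.h>
#include <string.h>

/* Count UTF-8 code points by skipping continuation bytes (0b10xxxxxx).
   Assumes the input is well-formed UTF-8. */
static size_t utf8_codepoints(const char *s)
{
    size_t count = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)  /* not a continuation byte */
            count++;
    return count;
}

int main(void)
{
    const char *s = "caf\xC3\xA9";             /* "café": 5 bytes, 4 code points */
    printf("bytes: %zu, code points: %zu\n", strlen(s), utf8_codepoints(s));
    return 0;
}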
Some languages, such asC++,PerlandRuby, normally allow the contents of a string to be changed after it has been created; these are termedmutablestrings. In other languages, such asJava,JavaScript,Lua,Python, andGo, the value is fixed and a new string must be created if any alteration is to be made; these are termedimmutablestrings. Some of these languages with immutable strings also provide another type that is mutable, such as Java and.NET'sStringBuilder, the thread-safe JavaStringBuffer, and theCocoaNSMutableString. There are both advantages and disadvantages to immutability: although immutable strings may require inefficiently creating many copies, they are simpler and completelythread-safe.
Strings are typically implemented asarraysof bytes, characters, or code units, in order to allow fast access to individual units or substrings—including characters when they have a fixed length. A few languages such asHaskellimplement them aslinked listsinstead.
Many high-level languages provide strings as a primitive data type, such as JavaScript and PHP, while most others provide them as a composite data type, some with special language support in writing literals, for example, Java and C#.
Some languages, such as C, Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as lists of character codes. Even in programming languages having a dedicated string type, a string can usually be iterated over as a sequence of character codes, like a list of integers or other values.
Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with repertoire and encoding defined by ASCII, or more recent extensions like theISO 8859series. Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16.
The termbyte stringusually indicates a general-purpose string of bytes, rather than strings of only (readable) characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value.
Most string implementations are very similar to variable-lengtharrayswith the entries storing thecharacter codesof corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens for example with UTF-8, where single codes (UCScode points) can take anywhere from one to four bytes, and single characters can take an arbitrary number of codes. In these cases, the logical length of the string (number of characters) differs from the physical length of the array (number of bytes in use).UTF-32avoids the first part of the problem.
The length of a string can be stored implicitly by using a special terminating character; often this is thenull character(NUL), which has all bits zero, a convention used and perpetuated by the popularC programming language.[11]Hence, this representation is commonly referred to as aC string. This representation of ann-character string takesn+ 1 space (1 for the terminator), and is thus animplicit data structure.
In terminated strings, the terminating code is not an allowable character in any string. Strings withlengthfield do not have this limitation and can also store arbitrarybinary data.
An example of anull-terminated stringstored in a 10-bytebuffer, along with itsASCII(or more modernUTF-8) representation as 8-bithexadecimal numbersis:
The length of the string in the above example, "FRANK", is 5 characters, but it occupies 6 bytes. Characters after the terminator do not form part of the representation; they may be either part of other data or just garbage. (Strings of this form are sometimes calledASCIZ strings, after the originalassembly languagedirective used to declare them.)
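To make the terminator mechanics concrete, here is a small C sketch (not part of the original article): my_strlen is a hypothetical stand-in for the standard strlen, and the buffer reproduces the "FRANK" example, with arbitrary bytes after the terminator.

#include <stdio.h>

/* How a C string's length is found: scan until the NUL byte.
   This is what makes computing the length O(n) in the string length. */
static size_t my_strlen(const char *s)
{
    const char *p = s;
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}

int main(void)
{
    char buffer[10] = { 'F', 'R', 'A', 'N', 'K', '\0', 'k', 'e', 'f', 'w' };
    printf("length: %zu\n", my_strlen(buffer));  /* prints 5; the bytes after
                                                    the terminator are ignored */
    return 0;
}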
Using a special byte other than null for terminating strings has historically appeared in both hardware and software, though sometimes with a value that was also a printing character.$was used by many assembler systems,:used byCDCsystems (this character had a value of zero), and theZX80used"[12]since this was the string delimiter in its BASIC language.
Somewhat similarly, "data processing" machines like the IBM 1401 used a special word mark bit to delimit strings at the left, where the operation would start at the right. This bit had to be clear in all other parts of the string. This meant that, while the IBM 1401 had a seven-bit word, almost no-one ever thought to use this as a feature and override the assignment of the seventh bit to (for example) handle ASCII codes.
Early microcomputer software relied upon the fact that ASCII codes do not use the high-order bit, and set it to indicate the end of a string. It must be reset to 0 prior to output.[13]
The length of a string can also be stored explicitly, for example by prefixing the string with the length as a byte value. This convention is used in many Pascal dialects; as a consequence, some people call such a string a Pascal string or P-string. Storing the string length as a byte limits the maximum string length to 255. To avoid such limitations, improved implementations of P-strings use 16-, 32-, or 64-bit words to store the string length. When the length field covers the address space, strings are limited only by the available memory.
If the length is bounded, then it can be encoded in constant space, typically a machine word, thus leading to animplicit data structure, takingn+kspace, wherekis the number of characters in a word (8 for 8-bit ASCII on a 64-bit machine, 1 for 32-bit UTF-32/UCS-4 on a 32-bit machine, etc.).
If the length is not bounded, encoding a lengthntakes log(n) space (seefixed-length code), so length-prefixed strings are asuccinct data structure, encoding a string of lengthnin log(n) +nspace.
In the latter case, the length-prefix field itself does not have a fixed length, so the actual string data may need to be moved when the string grows enough that the length field must be enlarged.
Here is a Pascal string stored in a 10-byte buffer, along with its ASCII / UTF-8 representation:
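The original buffer illustration is not reproduced here; the following C sketch models the same layout under the stated assumptions: a length byte of 5, the text "FRANK", and leftover bytes that are not part of the string.

#include <stdio.h>

/* A length-prefixed ("Pascal") string in a 10-byte buffer: the first byte
   stores the length (capping strings at 255), so no terminator is needed
   and the length is available in O(1). */
int main(void)
{
    unsigned char pstr[10] = { 5, 'F', 'R', 'A', 'N', 'K', 'k', 'e', 'f', 'w' };
    size_t len = pstr[0];
    printf("length: %zu, text: %.*s\n", len, (int)len, (const char *)&pstr[1]);
    return 0;
}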
Many languages, including object-oriented ones, implement strings asrecordswith an internal structure like:
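The record layout shown at this point in the original did not survive extraction. As a hedged sketch, a C version of such a record might look like the following; the field names length, capacity, and text are illustrative, not taken from any particular library.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A record-style string: the length and allocated capacity are stored
   alongside a pointer to dynamically allocated character data. */
struct string {
    size_t length;    /* bytes currently in use */
    size_t capacity;  /* bytes allocated at text */
    char  *text;      /* dynamically allocated, may be reallocated */
};

/* Append bytes, expanding the text area as needed; returns 0 on success. */
static int string_append(struct string *s, const char *extra)
{
    size_t n = strlen(extra);
    if (s->length + n > s->capacity) {
        size_t cap = (s->length + n) * 2;      /* geometric growth */
        char *p = realloc(s->text, cap);
        if (p == NULL)
            return -1;
        s->text = p;
        s->capacity = cap;
    }
    memcpy(s->text + s->length, extra, n);
    s->length += n;
    return 0;
}

int main(void)
{
    struct string s = { 0, 0, NULL };
    string_append(&s, "bear");
    string_append(&s, "hug");
    printf("%.*s\n", (int)s.length, s.text);   /* bearhug */
    free(s.text);
    return 0;
}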
However, since the implementation is usuallyhidden, the string must be accessed and modified through member functions.textis a pointer to a dynamically allocated memory area, which might be expanded as needed. See alsostring (C++).
Both character termination and length codes limit strings: for example, C character arrays that contain null (NUL) characters cannot be handled directly by C string library functions, and strings using a length code are limited to the maximum value of the length code.
Both of these limitations can be overcome by clever programming.
It is possible to create data structures, and functions that manipulate them, that do not have the problems associated with character termination and can in principle overcome length-code bounds. It is also possible to optimize the string representation using techniques from run-length encoding (replacing repeated characters by the character value and a length) and Hamming encoding[clarification needed].
While these representations are common, others are possible. Usingropesmakes certain string operations, such as insertions, deletions, and concatenations more efficient.
The core data structure in atext editoris the one that manages the string (sequence of characters) that represents the current state of the file being edited.
While that state could be stored in a single long consecutive array of characters, a typical text editor instead uses an alternative representation as its sequence data structure—agap buffer, alinked listof lines, apiece table, or arope—which makes certain string operations, such as insertions, deletions, and undoing previous edits, more efficient.[14]
The differing memory layout and storage requirements of strings can affect the security of the program accessing the string data. String representations requiring a terminating character are commonly susceptible tobuffer overflowproblems if the terminating character is not present, caused by a coding error or anattackerdeliberately altering the data. String representations adopting a separate length field are also susceptible if the length can be manipulated. In such cases, program code accessing the string data requiresbounds checkingto ensure that it does not inadvertently access or change data outside of the string memory limits.
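As a concrete illustration (a sketch, not a prescription from the text), C's snprintf performs this kind of bounds checking: it writes at most the stated buffer size and always NUL-terminates, unlike strcpy.

#include <stdio.h>

int main(void)
{
    char dst[8];
    const char *untrusted = "far longer than the destination buffer";

    /* snprintf writes at most sizeof dst bytes, including the terminator,
       so an over-long input is truncated instead of overflowing dst. */
    snprintf(dst, sizeof dst, "%s", untrusted);
    printf("%s\n", dst);   /* prints the first 7 characters */
    return 0;
}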
String data is frequently obtained from user input to a program. As such, it is the responsibility of the program to validate the string to ensure that it represents the expected format. Performinglimited or no validationof user input can cause a program to be vulnerable tocode injectionattacks.
Sometimes, strings need to be embedded inside a text file that is both human-readable and intended for consumption by a machine. This is needed in, for example, source code of programming languages, or in configuration files. In this case, the NUL character does not work well as a terminator since it is normally invisible (non-printable) and is difficult to input via a keyboard. Storing the string length would also be inconvenient as manual computation and tracking of the length is tedious and error-prone.
Two common representations are:
While character strings are very common uses of strings, a string in computer science may refer generically to any sequence of homogeneously typed data. Abit stringorbyte string, for example, may be used to represent non-textualbinary dataretrieved from a communications medium. This data may or may not be represented by a string-specific datatype, depending on the needs of the application, the desire of the programmer, and the capabilities of the programming language being used. If the programming language's string implementation is not8-bit clean, data corruption may ensue.
C programmers draw a sharp distinction between a "string", also known as a "string of characters", which by definition is always null-terminated, and an "array of characters", which may be stored in the same array but is often not null-terminated.
UsingC string handlingfunctions on such an array of characters often seems to work, but later leads tosecurity problems.[15][16][17]
There are manyalgorithmsfor processing strings, each with various trade-offs. Competing algorithms can beanalyzedwith respect to run time, storage requirements, and so forth. The namestringologywas coined in 1984 by computer scientistZvi Galilfor the theory of algorithms and data structures used for string processing.[18][19][20]
Some categories of algorithms include:
Advanced string algorithms often employ complex mechanisms and data structures, among themsuffix treesandfinite-state machines.
Character strings are such a useful datatype that several languages have been designed in order to make string processing applications easy to write. Examples include the following languages:
ManyUnixutilities perform simple string manipulations and can be used to easily program some powerful string processing algorithms. Files and finite streams may be viewed as strings.
SomeAPIslikeMultimedia Control Interface,embedded SQLorprintfuse strings to hold commands that will be interpreted.
Manyscripting programming languages, including Perl,Python, Ruby, and Tcl employregular expressionsto facilitate text operations. Perl is particularly noted for its regular expression use,[21]and many other languages and applications implementPerl compatible regular expressions.
Some languages such as Perl and Ruby supportstring interpolation, which permits arbitrary expressions to be evaluated and included in string literals.
String functionsare used to create strings or change the contents of a mutable string. They also are used to query information about a string. The set of functions and their names varies depending on thecomputer programming language.
The most basic example of a string function is the string length function, which returns the length of a string (not counting any terminator characters or any of the string's internal structural information) and does not modify the string. This function is often named length or len. For example, length("hello world") would return 11. Another common function is concatenation, where a new string is created by appending two strings; this is often written with the + operator.
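As one possible rendering in C, using only standard library calls, the length and concatenation operations look like this; C spells concatenation as strcat rather than +.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[32] = "hello";                   /* room left for appending */

    printf("%zu\n", strlen("hello world"));   /* 11: terminator not counted */
    strcat(buf, " world");                    /* append; buf must be large enough */
    printf("%s\n", buf);                      /* hello world */
    return 0;
}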
Some microprocessors' instruction set architectures contain direct support for string operations, such as block copy (e.g., REPNZ MOVSB in Intel x86).[22]
Let Σ be afinite setof distinct, unambiguous symbols (alternatively called characters), called thealphabet. Astring(orword[23]orexpression[24]) over Σ is any finitesequenceof symbols from Σ.[25]For example, if Σ = {0, 1}, then01011is a string over Σ.
Thelengthof a stringsis the number of symbols ins(the length of the sequence) and can be anynon-negative integer; it is often denoted as |s|. Theempty stringis the unique string over Σ of length 0, and is denotedεorλ.[25][26]
The set of all strings over Σ of lengthnis denoted Σn. For example, if Σ = {0, 1}, then Σ2= {00, 01, 10, 11}. We have Σ0= {ε} for every alphabet Σ.
The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn, Σ∗=⋃n≥0Σn{\displaystyle \Sigma ^{*}=\bigcup _{n\geq 0}\Sigma ^{n}}
For example, if Σ = {0, 1}, then Σ*= {ε, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, ...}. Although the set Σ*itself iscountably infinite, each element of Σ*is a string of finite length.
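A brief C sketch (illustrative only) enumerates Σn for the binary alphabet by reading each integer below 2^n as an n-symbol string.

#include <stdio.h>

int main(void)
{
    const char alphabet[2] = { '0', '1' };
    int n = 2;

    /* Each integer i in [0, 2^n) encodes one string of length n:
       bit j of i selects the symbol at position j. */
    for (int i = 0; i < (1 << n); i++) {
        for (int j = n - 1; j >= 0; j--)
            putchar(alphabet[(i >> j) & 1]);
        putchar('\n');                /* prints 00, 01, 10, 11 */
    }
    return 0;
}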
A set of strings over Σ (i.e. anysubsetof Σ*) is called aformal languageover Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros, {ε, 1, 00, 11, 001, 010, 100, 111, 0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111, ...}, is a formal language over Σ.
Concatenationis an importantbinary operationon Σ*. For any two stringssandtin Σ*, their concatenation is defined as the sequence of symbols insfollowed by the sequence of characters int, and is denotedst. For example, if Σ = {a, b, ..., z},s=bear, andt=hug, thenst=bearhugandts=hugbear.
String concatenation is anassociative, but non-commutativeoperation. The empty string ε serves as theidentity element; for any strings, εs=sε =s. Therefore, the set Σ*and the concatenation operation form amonoid, thefree monoidgenerated by Σ. In addition, the length function defines amonoid homomorphismfrom Σ*to the non-negative integers (that is, a functionL:Σ∗↦N∪{0}{\displaystyle L:\Sigma ^{*}\mapsto \mathbb {N} \cup \{0\}}, such thatL(st)=L(s)+L(t)∀s,t∈Σ∗{\displaystyle L(st)=L(s)+L(t)\quad \forall s,t\in \Sigma ^{*}}).
A stringsis said to be asubstringorfactoroftif there exist (possibly empty) stringsuandvsuch thatt=usv. Therelation"is a substring of" defines apartial orderon Σ*, theleast elementof which is the empty string.
A stringsis said to be aprefixoftif there exists a stringusuch thatt=su. Ifuis nonempty,sis said to be aproperprefix oft. Symmetrically, a stringsis said to be asuffixoftif there exists a stringusuch thatt=us. Ifuis nonempty,sis said to be apropersuffix oft. Suffixes and prefixes are substrings oft. Both the relations "is a prefix of" and "is a suffix of" areprefix orders.
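Phrased directly from the definition t = su, a prefix test only needs to compare the first |s| symbols of t against s. A minimal C sketch, where is_prefix is a hypothetical helper rather than a standard function:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* s is a prefix of t iff t = su for some (possibly empty) u,
   i.e. the first |s| symbols of t coincide with s. */
static bool is_prefix(const char *s, const char *t)
{
    return strncmp(s, t, strlen(s)) == 0;
}

int main(void)
{
    printf("%d\n", is_prefix("bear", "bearhug"));  /* 1: a prefix */
    printf("%d\n", is_prefix("hug",  "bearhug"));  /* 0: a suffix, not a prefix */
    return 0;
}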
The reverse of a string is a string with the same symbols but in reverse order. For example, ifs= abc (where a, b, and c are symbols of the alphabet), then the reverse ofsis cba. A string that is the reverse of itself (e.g.,s= madam) is called apalindrome, which also includes the empty string and all strings of length 1.
A strings=uvis said to be a rotation oftift=vu. For example, if Σ = {0, 1} the string 0011001 is a rotation of 0100110, whereu= 00110 andv= 01. As another example, the string abc has three different rotations, viz. abc itself (withu=abc,v=ε), bca (withu=bc,v=a), and cab (withu=c,v=ab).
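Rotations can be generated mechanically by splitting s = uv at every position and emitting vu. A minimal C sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "abc";
    size_t n = strlen(s);

    /* Rotation k prints the tail s[k..n-1] followed by the head s[0..k-1],
       i.e. the string vu for the split s = uv with |u| = k. */
    for (size_t k = 0; k < n; k++)
        printf("%s%.*s\n", s + k, (int)k, s);   /* abc, bca, cab */
    return 0;
}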
It is often useful to define anorderingon a set of strings. If the alphabet Σ has atotal order(cf.alphabetical order) one can define a total order on Σ*calledlexicographical order. The lexicographical order istotalif the alphabetical order is, but is notwell-foundedfor any nontrivial alphabet, even if the alphabetical order is. For example, if Σ = {0, 1} and 0 < 1, then the lexicographical order on Σ*includes the relationships ε < 0 < 00 < 000 < ... < 0001 < ... < 001 < ... < 01 < 010 < ... < 011 < 0110 < ... < 01111 < ... < 1 < 10 < 100 < ... < 101 < ... < 111 < ... < 1111 < ... < 11111 ... With respect to this ordering, e.g. the infinite set { 1, 01, 001, 0001, 00001, 000001, ... } has no minimal element.
SeeShortlexfor an alternative string ordering that preserves well-foundedness.
For the example alphabet, the shortlex order is ε < 0 < 1 < 00 < 01 < 10 < 11 < 000 < 001 < 010 < 011 < 100 < 101 < 110 < 111 < 0000 < 0001 < 0010 < 0011 < ... < 1111 < 00000 < 00001 ...
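A shortlex comparison is straightforward to implement: compare lengths first and fall back to lexicographic order on a tie. A C sketch, with shortlex_cmp as a hypothetical helper:

#include <stdio.h>
#include <string.h>

/* Shortlex comparison: shorter strings come first; equal lengths fall
   back to ordinary lexicographic order, keeping the order well-founded. */
static int shortlex_cmp(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    if (la != lb)
        return la < lb ? -1 : 1;
    return strcmp(a, b);
}

int main(void)
{
    printf("%d\n", shortlex_cmp("1", "00"));   /* negative: shorter comes first */
    printf("%d\n", shortlex_cmp("01", "10"));  /* negative: same length, lex order */
    return 0;
}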
A number of additional operations on strings commonly occur in the formal theory. These are given in the article onstring operations.
Strings admit the following interpretation as nodes on a graph, wherekis the number of symbols in Σ:
The natural topology on the set of fixed-length strings or variable-length strings is the discrete topology, but the natural topology on the set of infinite strings is thelimit topology, viewing the set of infinite strings as theinverse limitof the sets of finite strings. This is the construction used for thep-adic numbersand some constructions of theCantor set, and yields the same topology.
Isomorphismsbetween string representations of topologies can be found by normalizing according to thelexicographically minimal string rotation.
|
https://en.wikipedia.org/wiki/String_(computer_science)
|
Language acquisitionis the process by which humans acquire the capacity to perceive and comprehendlanguage. In other words, it is how human beings gain the ability to be aware of language, to understand it, and to produce and usewordsandsentencesto communicate.
Language acquisition involves structures, rules, and representation. The capacity to successfully use language requires human beings to acquire a range of tools, including phonology, morphology, syntax, semantics, and an extensive vocabulary. Language can be vocalized as in speech, or manual as in sign.[1] Human language capacity is represented in the brain. Even though human language capacity is finite, one can say and understand an infinite number of sentences, which is based on a syntactic principle called recursion. Evidence suggests that every individual has three recursive mechanisms that allow sentences to be extended indefinitely. These three mechanisms are: relativization, complementation and coordination.[2]
There are two main guiding principles in first-language acquisition:speech perceptionalways precedesspeech production, and the gradually evolving system by which a child learns a language is built up one step at a time, beginning with the distinction between individualphonemes.[3]
For many years, linguists interested in child language acquisition have questioned how language is acquired. Lidz et al. state, "The question of how these structures are acquired, then, is more properly understood as the question of how a learner takes the surface forms in the input and converts them into abstract linguistic rules and representations."[4]
Language acquisition usually refers tofirst-language acquisition. It studies infants' acquisition of theirnative language, whether that is a spoken language or a sign language,[1]though it can also refer tobilingual first language acquisition(BFLA), referring to an infant's simultaneous acquisition of two native languages.[5][6][7][8][9][10][11]This is distinguished fromsecond-language acquisition, which deals with the acquisition (in bothchildrenand adults) of additional languages. On top of speech, reading and writing a language with an entirely different script increases the complexities of true foreign languageliteracy. Language acquisition is one of the quintessential human traits.[12][13]
Some early observation-based ideas about language acquisition were proposed byPlato, who felt that word-meaning mapping in some form was innate. Additionally,Sanskrit grammariansdebated for over twelve centuries whether humans' ability to recognize the meaning of words was god-given (possibly innate) or passed down by previous generations and learned from already established conventions: a child learning the word forcowby listening to trusted speakers talking about cows.[14]
Philosophers in ancient societies were interested in how humans acquired the ability to understand and produce language well beforeempirical methodsfor testing those theories were developed, but for the most part they seemed to regard language acquisition as a subset of man's ability to acquire knowledge and learn concepts.[15]
Empiricists, like Thomas Hobbes and John Locke, argued that knowledge (and, for Locke, language) emerges ultimately from abstracted sense impressions. These arguments lean towards the "nurture" side of the argument: that language is acquired through sensory experience, which led to Rudolf Carnap's Aufbau, an attempt to derive all knowledge from sense data, using the notion of "remembered as similar" to bind them into clusters, which would eventually map into language.[16]
Proponents of behaviorism argued that language may be learned through a form of operant conditioning. In Verbal Behavior (1957), B. F. Skinner suggested that the successful use of a sign, such as a word or lexical unit, given a certain stimulus, reinforces its "momentary" or contextual probability. Since operant conditioning is contingent on reinforcement by rewards, a child would learn that a specific combination of sounds means a specific thing through repeated successful associations made between the two. A "successful" use of a sign would be one in which the child is understood (for example, a child saying "up" when they want to be picked up) and rewarded with the desired response from another person, thereby reinforcing the child's understanding of the meaning of that word and making it more likely that they will use that word in a similar situation in the future. Some empiricist theories of language acquisition include statistical learning theory, relational frame theory, functionalist linguistics, social interactionist theory, and usage-based language acquisition.
Skinner's behaviorist idea was strongly attacked byNoam Chomskyin a review article in 1959, calling it "largely mythology" and a "serious delusion."[17]Arguments against Skinner's idea of language acquisition through operant conditioning include the fact that children often ignore language corrections from adults. Instead, children typically follow a pattern of using an irregular form of a word correctly, making errors later on, and eventually returning to the proper use of the word. For example, a child may correctly learn the word "gave" (past tense of "give"), and later on use the word "gived". Eventually, the child will typically go back to using the correct word, "gave". Chomsky claimed the pattern is difficult to attribute to Skinner's idea of operant conditioning as the primary way that children acquire language. Chomsky argued that if language were solely acquired through behavioral conditioning, children would not likely learn the proper use of a word and suddenly use the word incorrectly.[18]Chomsky believed that Skinner failed to account for the central role of syntactic knowledge in language competence. Chomsky also rejected the term "learning", which Skinner used to claim that children "learn" language through operant conditioning.[19]Instead, Chomsky argued for a mathematical approach to language acquisition, based on a study ofsyntax.
The capacity to acquire and use language is a key aspect that distinguisheshumansfrom other beings. Although it is difficult to pin down what aspects of language are uniquely human, there are a few design features that can be found in all known forms of human language, but that are missing from forms ofanimal communication. For example, many animals are able to communicate with each other by signaling to the things around them, but this kind of communication lacks the arbitrariness of human vernaculars (in that there is nothing about the sound of the word "dog" that would hint at its meaning). Other forms of animal communication may utilize arbitrary sounds, but are unable to combine those sounds in different ways to create completely novel messages that can then be automatically understood by another.Hockettcalled this design feature of human language "productivity". It is crucial to the understanding of human language acquisition that humans are not limited to a finite set of words, but, rather, must be able to understand and utilize a complex system that allows for an infinite number of possible messages. So, while many forms of animal communication exist, they differ from human language in that they have a limited range of vocabulary tokens, and the vocabulary items are not combined syntactically to create phrases.[20]
Herbert S. Terraceconducted a study on a chimpanzee known asNim Chimpskyin an attempt to teach himAmerican Sign Language. This study was an attempt to further research done with a chimpanzee namedWashoe, who was reportedly able to acquire American Sign Language. However, upon further inspection, Terrace concluded that both experiments were failures.[21]While Nim was able to acquire signs, he never acquired a knowledge of grammar, and was unable to combine signs in a meaningful way. Researchers noticed that "signs that seemed spontaneous were, in fact, cued by teachers",[22]and not actually productive. When Terrace reviewed Project Washoe, he found similar results. He postulated that there is a fundamental difference between animals and humans in their motivation to learn language; animals, such as in Nim's case, are motivated only by physical reward, while humans learn language in order to "create a new type of communication".[23]
In another language acquisition study, Jean-Marc-Gaspard Itard attempted to teach Victor of Aveyron, a feral child, how to speak. Victor was able to learn a few words, but ultimately never fully acquired language.[24] Slightly more successful was a study done on Genie, another child never introduced to society. She had been entirely isolated for the first thirteen years of her life by her father. Caretakers and researchers attempted to measure her ability to learn a language. She was able to acquire a large vocabulary, but never acquired grammatical knowledge. Researchers concluded that the theory of a critical period was true: Genie was too old to learn how to speak productively, although she was still able to comprehend language.[25]
A major debate in understanding language acquisition is how these capacities are picked up by infants from the linguistic input.[26]Input in the linguisticcontextis defined as "All words, contexts, and other forms of language to which a learner is exposed, relative to acquired proficiency in first or second languages".Nativistssuch as Chomsky have focused on the hugely complex nature of human grammars, the finiteness andambiguityof the input that children receive, and the relatively limitedcognitive abilitiesof an infant. From these characteristics, they conclude that the process of language acquisition in infants must be tightly constrained and guided by the biologically given characteristics of the human brain. Otherwise, they argue, it is extremely difficult to explain how children, within the first five years of life, routinely master the complex, largely tacitgrammatical rulesof their native language.[27]Additionally, the evidence of such rules in their native language is all indirect—adult speech to children cannot encompass all of what children know by the time they have acquired their native language.[28]
Other scholars,[who?]however, have resisted the possibility that infants' routine success at acquiring the grammar of their native language requires anything more than the forms of learning seen with other cognitive skills, including such mundane motor skills as learning to ride a bike. In particular, there has been resistance to the possibility that human biology includes any form of specialization for language. This conflict is often referred to as the "nature and nurture" debate. Of course, most scholars acknowledge that certain aspects of language acquisition must result from the specific ways in which the human brain is "wired" (a "nature" component, which accounts for the failure of non-human species to acquire human languages) and that certain others are shaped by the particular language environment in which a person is raised (a "nurture" component, which accounts for the fact that humans raised in different societies acquire different languages). The as-yet unresolved question is the extent to which the specific cognitive capacities in the "nature" component are also used outside of language.[citation needed]
Emergentisttheories, such as Brian MacWhinney'scompetition model, posit that language acquisition is acognitive processthat emerges from the interaction of biological pressures and the environment. According to these theories, neither nature nor nurture alone is sufficient to trigger language learning; both of these influences must work together in order to allow children to acquire a language. The proponents of these theories argue that general cognitive processes subserve language acquisition and that the result of these processes is language-specific phenomena, such asword learningandgrammar acquisition. The findings of many empirical studies support the predictions of these theories, suggesting that language acquisition is a more complex process than many have proposed.[29]
Although Chomsky's theory of agenerative grammarhas been enormously influential in the field of linguistics since the 1950s, many criticisms of the basic assumptions of generative theory have been put forth by cognitive-functional linguists, who argue that language structure is created through language use.[30]These linguists argue that the concept of alanguage acquisition device(LAD) is unsupported by evolutionary anthropology, which tends to show a gradual adaptation of the human brain and vocal cords to the use of language, rather than a sudden appearance of a complete set of binary parameters delineating the whole spectrum of possible grammars ever to have existed and ever to exist.[31]On the other hand, cognitive-functional theorists use this anthropological data to show how human beings have evolved the capacity for grammar and syntax to meet our demand for linguistic symbols. (Binary parameters are common to digital computers, but may not be applicable to neurological systems such as the human brain.)[citation needed]
Further, the generative theory has several constructs (such as movement, empty categories, complex underlying structures, and strict binary branching) that cannot possibly be acquired from any amount of linguistic input. It is unclear that human language is actuallyanything likethe generative conception of it. Since language, as imagined by nativists, is unlearnably complex,[citation needed]subscribers to this theory argue that it must, therefore, be innate.[32]Nativists hypothesize that some features of syntactic categories exist even before a child is exposed to any experience—categories on which children map words of their language as they learn their native language.[33]A differenttheory of language, however, may yield different conclusions. While all theories of language acquisition posit some degree of innateness, they vary in how much value they place on this innate capacity to acquire language. Empiricism places less value on the innate knowledge, arguing instead that the input, combined with both general and language-specific learning capacities, is sufficient for acquisition.[34]
Since 1980, linguists studying children, such asMelissa BowermanandAsifa Majid,[35]and psychologists followingJean Piaget, like Elizabeth Bates[36]and Jean Mandler, came to suspect that there may indeed be many learning processes involved in the acquisition process, and that ignoring the role of learning may have been a mistake.[citation needed]
In recent years, the debate surrounding the nativist position has centered on whether the inborn capabilities are language-specific or domain-general, such as those that enable the infant to visually make sense of the world in terms of objects and actions. The anti-nativist view has many strands, but a frequent theme is that language emerges from usage in social contexts, using learning mechanisms that are a part of an innate general cognitive learning apparatus. This position has been championed byDavid M. W. Powers,[37]Elizabeth Bates,[38]Catherine Snow,Anat Ninio,Brian MacWhinney,Michael Tomasello,[20]Michael Ramscar,[39]William O'Grady,[40]and others. Philosophers, such as Fiona Cowie[41]andBarbara ScholzwithGeoffrey Pullum[42]have also argued against certain nativist claims in support of empiricism.
The new field ofcognitive linguisticshas emerged as a specific counter to Chomsky's Generative Grammar and to Nativism.
Some language acquisition researchers, such asElissa Newport, Richard Aslin, andJenny Saffran, emphasize the possible roles of generallearningmechanisms, especially statistical learning, in language acquisition. The development ofconnectionistmodels that when implemented are able to successfully learn words and syntactical conventions[43]supports the predictions of statistical learning theories of language acquisition, as do empirical studies of children's detection of word boundaries.[44]In a series of connectionist model simulations, Franklin Chang has demonstrated that such a domain general statistical learning mechanism could explain a wide range of language structure acquisition phenomena.[45]
Statistical learning theorysuggests that, when learning language, a learner would use the natural statistical properties of language to deduce its structure, including sound patterns, words, and the beginnings of grammar.[46]That is, language learners are sensitive to how oftensyllablecombinations or words occur in relation to other syllables.[44][47][48]Infants between 21 and 23 months old are also able to use statistical learning to develop "lexical categories", such as an animal category, which infants might later map to newly learned words in the same category. These findings suggest that early experience listening to language is critical to vocabulary acquisition.[48]
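As an illustration of the transitional-probability idea (a toy sketch with an invented syllable stream, not an experiment from the literature), one can estimate P(next | prev) by counting adjacent syllable pairs; within-word transitions recur far more often than transitions across word boundaries.

#include <stdio.h>
#include <string.h>

/* Estimate the transitional probability P(next | prev) =
   count(prev, next) / count(prev) over a stream of syllables.
   Low-probability transitions suggest word boundaries. */
#define N 12

int main(void)
{
    /* Invented words "golabu" and "tirado" repeated: within-word
       transitions recur, cross-word transitions vary. */
    const char *stream[N] = { "go", "la", "bu", "ti", "ra", "do",
                              "go", "la", "bu", "ti", "ra", "do" };
    const char *prev = "go", *next = "la";    /* query one transition */
    int pair = 0, first = 0;

    for (int i = 0; i + 1 < N; i++) {
        if (strcmp(stream[i], prev) == 0) {
            first++;
            if (strcmp(stream[i + 1], next) == 0)
                pair++;
        }
    }
    printf("P(%s | %s) = %d/%d = %.2f\n", next, prev, pair, first,
           first ? (double)pair / first : 0.0);
    return 0;
}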
The statistical abilities are effective, but also limited by what qualifies as input, what is done with that input, and by the structure of the resulting output.[46]Statistical learning (and more broadly, distributional learning) can be accepted as a component of language acquisition by researchers on either side of the "nature and nurture" debate. From the perspective of that debate, an important question is whether statistical learning can, by itself, serve as an alternative to nativist explanations for the grammatical constraints of human language.
Chunking theories of language acquisition constitute a group of theories related to statistical learning theories, in that they assume that the input from the environment plays an essential role; however, they postulate different learning mechanisms.[clarification needed]
The central idea of these theories is that language development occurs through the incremental acquisition of meaningful chunks of elementary constituents, which can be words, phonemes, or syllables. Recently, this approach has been highly successful in simulating several phenomena in the acquisition of syntactic categories[49] and the acquisition of phonological knowledge.[50]
Researchers at theMax Planck Institute for Evolutionary Anthropologyhave developed a computer model analyzing early toddler conversations to predict the structure of later conversations. They showed that toddlers develop their own individual rules for speaking, with 'slots' into which they put certain kinds of words. A significant outcome of this research is that rules inferred from toddler speech were better predictors of subsequent speech than traditional grammars.[51]
This approach has several features that make it unique: the models are implemented as computer programs, which enables clear-cut and quantitative predictions to be made; they learn from naturalistic input (actual child-directed utterances); and they attempt to create their own utterances. The model was tested on languages including English, Spanish, and German. Chunking for this model was shown to be most effective in learning a first language, but was also able to create utterances when learning a second language.[52]
Relational frame theory (RFT) (Hayes, Barnes-Holmes, Roche, 2001) provides a wholly selectionist/learning account of the origin and development of language competence and complexity. Based upon the principles of Skinnerian behaviorism, RFT posits that children acquire language purely through interacting with the environment. RFT theorists introduced the concept of functional contextualism in language learning, which emphasizes the importance of predicting and influencing psychological events, such as thoughts, feelings, and behaviors, by focusing on manipulable variables in their own context. RFT distinguishes itself from Skinner's work by identifying and defining a particular type of operant conditioning known as derived relational responding, a learning process that, to date, appears to occur only in humans possessing a capacity for language. Empirical studies supporting the predictions of RFT suggest that children learn language through a system of inherent reinforcements, challenging the view that language acquisition is based upon innate, language-specific cognitive capacities.[53]
Social interactionist theory is an explanation oflanguage developmentemphasizing the role of social interaction between the developing child and linguistically knowledgeable adults. It is based largely on the socio-cultural theories of Soviet psychologistLev Vygotsky, and was made prominent in the Western world byJerome Bruner.[54]
Unlike other approaches, it emphasizes the role of feedback and reinforcement in language acquisition. Specifically, it asserts that much of a child's linguistic growth stems from modeling of and interaction with parents and other adults, who very frequently provide instructive correction.[55]It is thus somewhat similar to behaviorist accounts of language learning. It differs substantially, though, in that it posits the existence of a social-cognitive model and other mental structures within children (a sharp contrast to the "black box" approach of classical behaviorism).
Another key idea within the theory of social interactionism is that of thezone of proximal development. This is a theoretical construct denoting the set of tasks a child is capable of performing with guidance but not alone.[56]As applied to language, it describes the set of linguistic tasks (for example, proper syntax, suitable vocabulary usage) that a child cannot carry out on its own at a given time, but can learn to carry out if assisted by an able adult.
As syntax began to be studied more closely in the early 20th century in relation to language learning, it became apparent to linguists, psychologists, and philosophers that knowing a language was not merely a matter of associating words with concepts, but that a critical aspect of language involves knowledge of how to put words together; sentences are usually needed in order to communicate successfully, not just isolated words.[15]A child will use short expressions such asBye-bye MummyorAll-gone milk, which actually are combinations of individualnounsand anoperator,[57]before they begin to produce gradually more complex sentences. In the 1990s, within theprinciples and parametersframework, this hypothesis was extended into a maturation-basedstructure building model of child languageregarding the acquisition of functional categories. In this model, children are seen as gradually building up more and more complex structures, with lexical categories (like noun and verb) being acquired before functional-syntactic categories (like determiner and complementizer).[58]It is also often found that in acquiring a language, the most frequently used verbs areirregular verbs.[citation needed]In learning English, for example, young children first begin to learn the past tense of verbs individually. However, when they acquire a "rule", such as adding-edto form the past tense, they begin to exhibit occasional overgeneralization errors (e.g. "runned", "hitted") alongside correct past tense forms. One influential[citation needed]proposal regarding the origin of this type of error suggests that the adult state of grammar stores each irregular verb form in memory and also includes a "block" on the use of the regular rule for forming that type of verb. In the developing child's mind, retrieval of that "block" may fail, causing the child to erroneously apply the regular rule instead of retrieving the irregular.[59][60]
In bare-phrase structure (minimalist program), theory-internal considerations define the specifier position of an internal-merge projection (phases vP and CP) as the only type of host which could serve as a potential landing-site for move-based elements displaced from lower down within the base-generated VP structure; e.g., A-movement such as passives ("The apple was eaten by John", derived from "John ate the apple") or raising ("Some work does seem to remain", derived from "There does seem to remain some work"). As a consequence, any strong version of a structure building model of child language which calls for an exclusive "external-merge/argument structure stage" prior to an "internal-merge/scope-discourse related stage" would claim that young children's stage-1 utterances lack the ability to generate and host elements derived via movement operations. In terms of a merge-based theory of language acquisition,[61] complements and specifiers are simply notations for first-merge (= "complement-of" [head-complement]) and later second-merge (= "specifier-of" [specifier-head]), with merge always applying to a head. First-merge establishes only a set {a, b} and is not an ordered pair; e.g., an {N, N}-compound of 'boat-house' would allow the ambiguous readings of either 'a kind of house' and/or 'a kind of boat'. It is only with second-merge that order is derived out of a set {a {a, b}}, which yields the recursive properties of syntax; e.g., a 'house-boat' {house {house, boat}} now reads unambiguously only as a 'kind of boat'. It is this property of recursion that allows for projection and labeling of a phrase to take place;[62] in this case, the noun 'boat' is the head of the compound, and 'house' acts as a kind of specifier/modifier. External-merge (first-merge) establishes substantive 'base structure' inherent to the VP, yielding theta/argument structure, and may go beyond the lexical-category VP to involve the functional-category light verb vP. Internal-merge (second-merge) establishes more formal aspects related to edge-properties of scope and discourse-related material pegged to CP. In a phase-based theory, this twin vP/CP distinction follows the "duality of semantics" discussed within the Minimalist Program, and is further developed into a dual distinction regarding a probe-goal relation.[63] As a consequence, at the "external/first-merge-only" stage, young children would show an inability to interpret readings from a given ordered pair, since they would only have access to the mental parsing of a non-recursive set. (See Roeper for a full discussion of recursion in child language acquisition.)[64] In addition to word-order violations, other more ubiquitous results of a first-merge stage would show that children's initial utterances lack the recursive properties of inflectional morphology, yielding a strict non-inflectional stage-1, consistent with an incremental structure-building model of child language.
Generative grammar, associated especially with the work of Noam Chomsky, is currently one of the approaches to explaining children's acquisition of syntax.[65]Its leading idea is that human biology imposes narrow constraints on the child's "hypothesis space" during language acquisition. In the principles and parameters framework, which has dominated generative syntax since Chomsky's (1980)Lectures on Government and Binding: The Pisa Lectures, the acquisition of syntax resembles ordering from a menu: the human brain comes equipped with a limited set of choices from which the child selects the correct options by imitating the parents' speech while making use of the context.[66]
An important argument favoring the generative approach is the poverty of the stimulus argument. The child's input (a finite number of sentences encountered by the child, together with information about the context in which they were uttered) is, in principle, compatible with an infinite number of conceivable grammars. Moreover, children can rarely rely on corrective feedback from adults when they make a grammatical error; adults generally respond and provide feedback regardless of whether a child's utterance was grammatical or not, and children have no way of discerning whether a feedback response was intended to be a correction. Additionally, when children do understand that they are being corrected, they don't always reproduce accurate restatements.[dubious–discuss][67][68] Yet, barring situations of medical abnormality or extreme privation, all children in a given speech-community converge on very much the same grammar by the age of about five years. An especially dramatic example is provided by children who, for medical reasons, are unable to produce speech and, therefore, can never be corrected for a grammatical error but nonetheless converge on the same grammar as their typically developing peers, according to comprehension-based tests of grammar.[69][70]
Considerations such as those have led Chomsky,Jerry Fodor,Eric Lennebergand others to argue that the types of grammar the child needs to consider must be narrowly constrained by human biology (the nativist position).[71]These innate constraints are sometimes referred to asuniversal grammar, the human "language faculty", or the "language instinct".[72]
The comparative method of crosslinguistic research applies thecomparative methodused inhistorical linguisticstopsycholinguisticresearch.[73]In historical linguistics the comparative method uses comparisons between historically related languages to reconstruct a proto-language and trace the history of each daughter language. The comparative method can be repurposed for research on language acquisition by comparing historically related child languages. The historical ties within each language family provide a roadmap for research. ForIndo-European languages, the comparative method would first compare language acquisition within the Slavic, Celtic, Germanic, Romance and Indo-Iranian branches of the family before attempting broader comparisons between the branches. ForOtomanguean languages, the comparative method would first compare language acquisition within the Oto-pamean, Chinantecan, Tlapanecan, Popolocan, Zapotecan, Amuzgan and Mixtecan branches before attempting broader comparisons between the branches. The comparative method imposes an evaluation standard for assessing the languages used in language acquisition research.
The comparative method derives its power by assembling comprehensive datasets for each language. Descriptions of theprosodyandphonologyfor each language inform analyses ofmorphologyand thelexicon, which in turn inform analyses ofsyntaxandconversationalstyles. Information on prosodic structure in one language informs research on the prosody of the related languages and vice versa. The comparative method produces a cumulative research program in which each description contributes to a comprehensive description of language acquisition for each language within a family as well as across the languages within each branch of the language family.
Comparative studies of language acquisition control the number of extraneous factors that impact language development. Speakers of historically related languages typically share a common culture that may include similar lifestyles and child-rearing practices. Historically related languages have similar phonologies and morphologies that impact early lexical and syntactic development in similar ways. The comparative method predicts that children acquiring historically related languages will exhibit similar patterns of language development, and that these common patterns may not hold in historically unrelated languages. The acquisition ofDutchwill resemble the acquisition ofGerman, but not the acquisition ofTotonacorMixtec. A claim about any universal of language acquisition must control for the shared grammatical structures that languages inherit from a common ancestor.
Several language acquisition studies have accidentally employed features of the comparative method due to the availability of datasets from historically related languages. Research on the acquisition of theRomanceandScandinavianlanguages used aspects of the comparative method, but did not produce detailed comparisons across different levels of grammar.[74][75][76][77]The most advanced use of the comparative method to date appears in research on the acquisition of theMayanlanguages. This research has yielded detailed comparative studies on the acquisition of phonological, lexical, morphological and syntactic features in eight Mayan languages as well as comparisons of language input and language socialization.[78][79][80][81][82][83][84][85][86]
Recent advances in functional neuroimaging technology have allowed for a better understanding of how language acquisition is manifested physically in the brain. Language acquisition almost always occurs in children during a period of rapid increase in brain volume. At this point in development, a child has many more neural connections than he or she will have as an adult, allowing the child to learn new things more readily than an adult can.[87]
Language acquisition has been studied from the perspectives of developmental psychology and neuroscience,[88] which look at learning to use and understand language in parallel with a child's brain development. It has been determined, through empirical research on developmentally normal children, as well as through some extreme cases of language deprivation, that there is a "sensitive period" of language acquisition in which human infants have the ability to learn any language. Several researchers have found that from birth until the age of six months, infants can discriminate the phonetic contrasts of all languages. Researchers believe that this gives infants the ability to acquire the language spoken around them. After this age, the child is able to perceive only the phonemes specific to the language being learned. The reduced phonemic sensitivity enables children to build phonemic categories and recognize stress patterns and sound combinations specific to the language they are acquiring.[89] As Wilder Penfield noted, "Before the child begins to speak and to perceive, the uncommitted cortex is a blank slate on which nothing has been written. In the ensuing years much is written, and the writing is normally never erased. After the age of ten or twelve, the general functional connections have been established and fixed for the speech cortex." According to the sensitive or critical period models, the age at which a child acquires the ability to use language is a predictor of how well he or she is ultimately able to use language.[90] However, there may be an age at which becoming a fluent and natural user of a language is no longer possible; Penfield and Roberts (1959) cap their sensitive period at nine years old.[91] The human brain may very well be automatically wired to learn languages, but this ability does not last into adulthood in the same way that it exists during childhood.[92] By around age 12, language acquisition has typically been solidified, and it becomes more difficult to learn a language in the same way a native speaker would.[93] Just like children who speak, deaf children go through a critical period for learning language. Deaf children who acquire their first language later in life show lower performance in complex aspects of grammar.[94] At that point, it is usually a second language that a person is trying to acquire and not a first.[27]
Provided that children are exposed to language during the critical period,[95] cognitively normal children almost never fail to acquire language. Humans are so well prepared to learn language that it becomes almost impossible not to. Researchers are unable to test experimentally the effects of the sensitive period of development on language acquisition, because it would be unethical to deprive children of language until this period is over. However, case studies on abused, language-deprived children show that they exhibit extreme limitations in language skills, even after instruction.[96]
At a very young age, children can distinguish different sounds but cannot yet produce them. During infancy, children begin to babble. Deaf babies babble in the same patterns as hearing babies do, showing that babbling is not a result of babies simply imitating certain sounds, but is actually a natural part of the process of language development. Deaf babies do, however, often babble less than hearing babies, and they begin to babble later on in infancy—at approximately 11 months as compared to approximately 6 months for hearing babies.[97]
Prelinguistic language abilities that are crucial for language acquisition have been seen even earlier than infancy. There have been many different studies examining different modes of language acquisition prior to birth. The study of language acquisition in fetuses began in the late 1980s when several researchers independently discovered that very young infants could discriminate their native language from other languages. In Mehler et al. (1988),[98] infants underwent discrimination tests, and it was shown that infants as young as 4 days old could discriminate utterances in their native language from those in an unfamiliar language, but could not discriminate between two languages when neither was native to them. These results suggest that there are mechanisms for fetal auditory learning, and other researchers have found further behavioral evidence to support this notion. Fetal auditory learning through environmental habituation has been seen in a variety of different modes, such as fetal learning of familiar melodies,[99] story fragments (DeCasper & Spence, 1986),[100] recognition of mother's voice,[101] and other studies showing evidence of fetal adaptation to native linguistic environments.[102]
Prosody is the property of speech that conveys the emotional state of the utterance, as well as the intended form of speech, for example, question, statement or command. Some researchers in the field of developmental neuroscience argue that fetal auditory learning mechanisms result solely from discrimination of prosodic elements. Although this would have merit from an evolutionary psychology perspective (i.e., recognition of mother's voice and familiar group language from emotionally valent stimuli), some theorists argue that there is more than prosodic recognition in elements of fetal learning. Newer evidence shows that fetuses not only react to the native language differently from non-native languages, but that fetuses react differently and can accurately discriminate between native and non-native vowel sounds (Moon, Lagercrantz, & Kuhl, 2013).[103] Furthermore, a 2016 study showed that newborn infants encode the edges of multisyllabic sequences better than the internal components of the sequence (Ferry et al., 2016).[104] Together, these results suggest that newborn infants have learned important properties of syntactic processing in utero, as demonstrated by infant knowledge of native language vowels and the sequencing of heard multisyllabic phrases. This ability to sequence specific vowels gives newborn infants some of the fundamental mechanisms needed in order to learn the complex organization of a language.
From a neuroscientific perspective, neural correlates have been found that demonstrate human fetal learning of the kind of speech-like auditory stimuli examined in most behavioral studies (Partanen et al., 2013).[105] In a study conducted by Partanen et al. (2013),[105] researchers presented fetuses with certain word variants and observed that these fetuses exhibited higher brain activity in response to certain word variants as compared to controls. In this same study, "a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure," pointing to the important learning mechanisms present before birth that are fine-tuned to features in speech (Partanen et al., 2013).[105]
Learning a new word, that is, learning to speak it and to use it on the appropriate occasions, depends upon many factors. First, the learner needs to be able to hear what they are attempting to pronounce. Also required is the capacity to engage in speech repetition.[106][107][108][109] Children with reduced ability to repeat non-words (a marker of speech repetition abilities) show a slower rate of vocabulary expansion than children with normal ability.[110] Several computational models of vocabulary acquisition have been proposed.[111][112][113][114][115][116][117] Various studies have shown that the size of a child's vocabulary by the age of 24 months correlates with the child's future development and language skills. If a child knows fifty or fewer words by the age of 24 months, he or she is classified as a late talker, and future language development, like vocabulary expansion and the organization of grammar, is likely to be slower and stunted.[citation needed]
Two more crucial elements of vocabulary acquisition are word segmentation and statistical learning (described above). Word segmentation, or the ability to break fluent speech into word-sized units, can be accomplished by eight-month-old infants.[44] By the time infants are 17 months old, they are able to link meaning to segmented words.[47]
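The statistical-learning account of segmentation can be made concrete with a small simulation. The sketch below is illustrative rather than a reconstruction of any cited experiment: it computes transitional probabilities between adjacent syllables in a continuous stream and posits word boundaries where the probability dips. The nonsense words, their ordering, and the 0.7 threshold are all assumptions chosen for the demo.

```python
# Illustrative sketch: segmenting a continuous syllable stream by
# transitional probability (TP), in the spirit of infant statistical
# learning. The corpus, words, and threshold are invented for the demo.
from collections import Counter

def transitional_probabilities(stream):
    """P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(stream, tps, threshold=0.7):
    """Posit a word boundary wherever the TP between syllables dips."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

def syllables(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Three nonsense "words" concatenated in varying order: within-word TPs
# are 1.0, while TPs across word boundaries are at most 0.5.
lexicon = ["bidaku", "padoti", "golabu"]
order = [0, 1, 2, 0, 2, 1, 1, 0, 2, 2, 0, 1]
stream = [s for k in order for s in syllables(lexicon[k])]
tps = transitional_probabilities(stream)
print(segment(stream, tps))  # recovers bidaku / padoti / golabu
```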
Recent evidence also suggests that motor skills and experiences may influence vocabulary acquisition during infancy. Specifically, learning to sit independently between 3 and 5 months of age has been found to predict receptive vocabulary at both 10 and 14 months of age,[118] and independent walking skills have been found to correlate with language skills at around 10 to 14 months of age.[119][120] These findings show that language acquisition is an embodied process that is influenced by a child's overall motor abilities and development. Studies have also shown a correlation between socioeconomic status and vocabulary acquisition.[121]
Children learn, on average, ten to fifteen new word meanings each day, but only one of these can be accounted for by direct instruction.[122] The other nine to fourteen word meanings must be acquired in some other way. It has been proposed that children acquire these meanings through processes modeled by latent semantic analysis; that is, when they encounter an unfamiliar word, children use contextual information to guess its rough meaning correctly.[122] A child may expand the meaning and use of certain words that are already part of its mental lexicon in order to refer to anything that is somehow related but for which it does not yet know the specific word. For instance, a child may broaden the use of mummy and dada in order to indicate anything that belongs to its mother or father, or perhaps every person who resembles its own parents; another example might be to say rain while meaning I don't want to go out.[123]
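Latent semantic analysis itself is easy to sketch: co-occurrence counts are factored with a truncated singular value decomposition so that words appearing in similar contexts end up near one another in a low-dimensional space. The toy corpus, the choice of k, and the word pairs below are illustrative assumptions, not materials from the cited studies.

```python
# A minimal sketch of latent semantic analysis (LSA): factor a
# term-document count matrix with a truncated SVD, then compare words
# by cosine similarity in the reduced "semantic" space.
import numpy as np

docs = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the child read the book",
    "the child enjoyed the story",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# term-document count matrix
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# truncated SVD: keep k latent dimensions
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]

def similarity(w1, w2):
    a, b = word_vecs[index[w1]], word_vecs[index[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# words that occur in similar contexts end up close together:
# "book" and "story" share the "child" contexts, "dog" does not
print(similarity("book", "story"), similarity("book", "dog"))
```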
There is also reason to believe that children use various heuristics to infer the meaning of words properly. Markman and others have proposed that children assume words to refer to objects with similar properties ("cow" and "pig" might both be "animals") rather than to objects that are thematically related ("cow" and "milk" are probably not both "animals").[124] Children also seem to adhere to the "whole object assumption" and think that a novel label refers to an entire entity rather than to one of its parts.[124] This assumption, along with other resources such as grammar and morphological cues or lexical constraints, may help the child in acquiring word meaning, but conclusions based on such resources may sometimes conflict.[125]
According to several linguists, neurocognitive research has confirmed many standards of language learning, such as: "learning engages the entire person (cognitive, affective, and psychomotor domains), the human brain seeks patterns in its searching for meaning, emotions affect all aspects of learning, retention and recall, past experience always affects new learning, the brain's working memory has a limited capacity, lecture usually results in the lowest degree of retention, rehearsal is essential for retention, practice [alone] does not make perfect, and each brain is unique" (Sousa, 2006, p. 274). In terms of genetics, the gene ROBO1 has been associated with phonological buffer integrity or length.[126]
Genetic research has found two major factors predicting successful language acquisition and maintenance: inherited intelligence, and the lack of genetic anomalies that may cause speech pathologies, such as mutations in the FOXP2 gene, which cause verbal dyspraxia. The role of inherited intelligence increases with age, accounting for 20% of IQ variation in infants and for 60% in adults. It affects a vast variety of language-related abilities, from spatio-motor skills to writing fluency. There have been debates in linguistics, philosophy, psychology, and genetics, with some scholars arguing that language is fully or mostly innate, but the research evidence points to genetic factors only working in interaction with environmental ones.[127]
Although it is difficult to determine without invasive measures which exact parts of the brain become most active and important for language acquisition, fMRI and PET technology has allowed for some conclusions to be made about where language may be centered. Kuniyoshi Sakai has proposed, based on several neuroimaging studies, that there may be a "grammar center" in the brain, whereby language is primarily processed in the left lateral premotor cortex (located near the precentral sulcus and the inferior frontal sulcus). Additionally, these studies have suggested that first language and second language acquisition may be represented differently in the cortex.[27] In a study conducted by Newman et al., the relationship between cognitive neuroscience and language acquisition was compared through a standardized procedure involving native speakers of English and native Spanish speakers who all had a similar length of exposure to the English language (averaging about 26 years). It was concluded that the brain does in fact process first and second languages differently, and that these processing differences relate more to the functioning of the brain itself than to proficiency levels.[128]
During early infancy, language processing seems to occur over many areas in the brain. However, over time, it gradually becomes concentrated into two areas—Broca's area and Wernicke's area. Broca's area is in the left frontal cortex and is primarily involved in the production of the patterns in vocal and sign language. Wernicke's area is in the left temporal cortex and is primarily involved in language comprehension. These language centers become so specialized that damage to them can result in aphasia.[129]
Kelly et al. (2015: 286) comment that “There is a dawning realization that the field of child language needs data from the broadest typological array of languages and language-learning environments.”[130] This realization is part of a broader recognition in psycholinguistics of the need to document diversity.[131][132][133] Children's linguistic accomplishments are all the more impressive in light of the diversity that exists at every level of the language system.[134] Different levels of grammar interact in language-specific ways, so that differences in morphosyntax build on differences in prosody, which in turn reflect differences in conversational style. The diversity of adult languages results in diverse child language phenomena that challenge every acquisition theory.
One such challenge is to explain how children acquire complex vowels in Otomanguean and other languages. The complex vowels in these languages combine oral and laryngeal gestures produced with laryngeal constriction [ʔ] or laryngeal spreading [h]. The production of the laryngealized vowels is complicated by the production of tonal contrasts, which rely upon contrasts in vocal fold vibration. Otomanguean languages manage the conflict between tone and laryngeal gesture by timing the gesture at the start, middle or end of the vowel, e.g. ʔV, VʔV and Vʔ. The phonetic realization of laryngealized vowels gives rise to the question of whether children acquire laryngealized vowels as single phonemes or sequences of phonemes. The unit analysis enlarges the vowel inventory but simplifies the syllable inventory, while the sequence analysis simplifies the vowel inventory but complicates the syllable inventory. The Otomanguean languages exhibit language-specific differences in the types and timing of the laryngeal gestures, and thus children must learn the specific laryngeal gestures that contribute to the phonological contrasts in the adult language.[135]
An acquisition challenge in morphosyntax is to explain how children acquire ergative grammatical structures. Ergative languages treat the subject of intransitive verbs like the object of transitive verbs at the level of morphology, syntax or both. At the level of morphology, ergative languages assign an ergative marker to the subject of transitive verbs. The ergative marking may be realized by case markers on nouns or agreement markers on verbs.[136][137] At the level of syntax, ergative languages have syntactic operations that treat the subject of transitive verbs differently from the subject of intransitive verbs. Languages with ergative syntax like K'iche' may restrict the use of subject questions for transitive verbs but not intransitive verbs. The acquisition challenge that ergativity creates is to explain how children acquire the language-specific manifestations of morphological and syntactic ergativity in the adult languages.[138] The Mayan language Mam has ergative agreement marking on its transitive verbs but extends the ergative marking to both the subject of intransitive verbs and the object of transitive verbs, yielding transitive verbs with two ergative agreement markers.[139] The contexts for extended ergative marking differ in type and frequency between Mayan languages, but two-year-old children produce extended ergative marking equally proficiently despite vast differences in the frequency of extended ergative marking in the adult languages.[83]
Children acquire language through exposure to a wide variety of cultural practices.[140] Local groups vary in size and mobility depending on their means of subsistence. Some cultures require men to marry women who speak another language. Their children may be exposed to their mother's language for several years before moving in with their father and learning his language. Language groups have diverse beliefs about when children say their first words and what words they say. Such beliefs shape the time when parents perceive that children understand language. In many cultures, children hear more speech directed to others than to themselves, yet children acquire language in all cultures.
Documenting the diversity of child languages is made more urgent by the rapid loss of languages around the world.[141][142][143] It may not be possible to document child language in half of the world's languages by the end of this century.[144][145] Documenting child language should be a part of every language documentation project, and has an important role to play in revitalizing local languages.[146][147] Documenting child language preserves cultural modes of language transmission and can emphasize their significance throughout the language community.
Some algorithms for language acquisition are based on statistical machine translation.[148] Language acquisition can be modeled as a machine learning process, which may be based on learning semantic parsers[149] or grammar induction algorithms.[150][151]
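As one hedged illustration of what a grammar induction algorithm can look like, the sketch below compresses a symbol stream by repeatedly replacing the most frequent adjacent pair with a fresh nonterminal, the core move behind Sequitur-style and byte-pair approaches. The toy stream, rule cap, and symbol names are assumptions for the demo, not an algorithm taken from the cited work.

```python
# Illustrative sketch of grammar induction by iterative pair substitution
# (the idea behind byte-pair and Sequitur-style algorithms). The input
# stream and all symbol names are invented for the demo.
from collections import Counter

def induce_grammar(sequence, max_rules=8):
    """Repeatedly replace the most frequent adjacent symbol pair with a
    fresh nonterminal, recording one grammar rule per replacement."""
    rules, seq = {}, list(sequence)
    for i in range(max_rules):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:  # no pair repeats, so nothing left to generalize
            break
        nt = f"N{i}"   # fresh nonterminal symbol
        rules[nt] = (a, b)
        out, j = [], 0
        while j < len(seq):  # rewrite the stream with the new rule
            if j + 1 < len(seq) and (seq[j], seq[j + 1]) == (a, b):
                out.append(nt)
                j += 2
            else:
                out.append(seq[j])
                j += 1
        seq = out
    return rules, seq

rules, compressed = induce_grammar("the_dog_ran_the_dog_sat_the_cat_ran")
for nt, (a, b) in rules.items():
    print(f"{nt} -> {a} {b}")   # reusable subunits discovered in the stream
print("compressed stream:", compressed)
```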
Prelingual deafness is defined as hearing loss that occurred at birth or before an individual has learned to speak. In the United States, 2 to 3 out of every 1000 children are born deaf or hard of hearing. Even though it might be presumed that deaf children acquire language in different ways since they are not receiving the same auditory input as hearing children, many research findings indicate that deaf children acquire language in the same way that hearing children do and, when given the proper language input, understand and express language just as well as their hearing peers. Babies who learn sign language produce signs or gestures that are more regular and more frequent than hearing babies acquiring spoken language. Just as hearing babies babble, deaf babies acquiring sign language will babble with their hands, otherwise known as manual babbling. Therefore, as many studies have shown, language acquisition by deaf children parallels the language acquisition of a spoken language by hearing children because humans are biologically equipped for language regardless of the modality.
Deaf children's visual-manual language acquisition not only parallels spoken language acquisition, but by the age of 30 months, most deaf children who were exposed to a visual language had a more advanced grasp of subject-pronoun copy rules than hearing children. Their vocabulary at the ages of 12–17 months exceeds that of a hearing child, though it does even out when they reach the two-word stage. The use of space for absent referents and the more complex handshapes in some signs prove to be difficult for children between 5 and 9 years of age because of motor development and the complexity of remembering the spatial use.
Other options besides sign language for children with prelingual deafness include the use of hearing aids to strengthen remaining sensory cells or cochlear implants to stimulate the hearing nerve directly. Cochlear implants (often known simply as CIs) are hearing devices that are placed behind the ear and contain a receiver and electrodes which are placed under the skin and inside the cochlea. Despite these developments, there is still a risk that prelingually deaf children may not develop good speech and speech reception skills. Although cochlear implants produce sounds, they are unlike typical hearing, and deaf and hard-of-hearing people must undergo intensive therapy in order to learn how to interpret these sounds. They must also learn how to speak given the range of hearing they may or may not have. However, deaf children of deaf parents tend to do better with language, even though they are isolated from sound and speech, because their language uses a different mode of communication that is accessible to them: the visual modality of language.
Although cochlear implants were initially approved for adults, there is now pressure to implant children early in order to maximize auditory skills for mainstream learning, which in turn has created controversy around the topic. Due to recent advances in technology, cochlear implants allow some deaf people to acquire some sense of hearing. The devices have interior components that are surgically implanted as well as exposed exterior components. Those who receive cochlear implants earlier in life show more improvement in speech comprehension and language. Spoken language development nevertheless varies widely for those with cochlear implants, due to a number of factors including age at implantation and the frequency, quality, and type of speech training. Some evidence suggests that speech processing occurs at a more rapid pace in some prelingually deaf children with cochlear implants than in those with traditional hearing aids. However, cochlear implants may not always work.
Research shows that people develop better language with a cochlear implant when they have a solid first language to rely on to understand the second language they are learning. In the case of prelingually deaf children with cochlear implants, a signed language, like American Sign Language, would be an accessible first language for them to learn, supporting the use of the cochlear implant as they learn a spoken language as their L2. Without a solid, accessible first language, these children run the risk of language deprivation, especially in the case that a cochlear implant fails to work. They would have no access to sound, meaning no access to the spoken language they are supposed to be learning. If neither a signed language nor a spoken language has become strong for them, they have no access to any language and run the risk of missing their critical period.
In June 2024, a cross-sectional study published in the academic journal Scientific Reports cautioned that "children with CIs exhibit significant variability in speech and language development", both "with too many recipients demonstrating suboptimal outcomes" and with the investigations of those individuals broadly being "not well defined for prelingually deafened children with CIs, for whom language development is ongoing." The authors found that "the relationships between spectral resolution, temporal resolution, and speech recognition are well defined in adults with cochlear implants (CIs)", in contrast to the situation with children, and they concluded from their research that "[f]urther investigation is warranted to better understand the relationships between spectral resolution, temporal resolution, and speech recognition so that" medical experts "can identify the underlying mechanisms driving auditory-based speech perception in children with CIs."[152]
https://en.wikipedia.org/wiki/Language_acquisition
The origin of language, its relationship with human evolution, and its consequences have been subjects of study for centuries. Scholars wishing to study the origins of language draw inferences from evidence such as the fossil record, archaeological evidence, and contemporary language diversity. They may also study language acquisition as well as comparisons between human language and systems of animal communication (particularly other primates).[1] Many argue for the close relation between the origins of language and the origins of modern human behavior, but there is little agreement about the facts and implications of this connection.
The shortage of direct, empirical evidence has caused many scholars to regard the entire topic as unsuitable for serious study; in 1866, the Linguistic Society of Paris banned any existing or future debates on the subject, a prohibition which remained influential across much of the Western world until the late twentieth century.[2] Various hypotheses have been developed on the emergence of language.[3] While Charles Darwin's theory of evolution by natural selection had provoked a surge of speculation on the origin of language over a century and a half ago, the speculations had not resulted in a scientific consensus by 1996.[4] Despite this, academic interest had returned to the topic by the early 1990s. Linguists, archaeologists, psychologists, and anthropologists have renewed the investigation into the origin of language with modern methods.[5]
Attempts to explain the origin of language take a variety of forms, broadly divided between continuity-based theories, which see language as evolving gradually from earlier precursors, and discontinuity-based theories, which see it as appearing abruptly.[6]
Most linguistic scholars as of 2024 favor continuity-based theories, but they vary in how they hypothesize language development.[citation needed] Some among those who consider language as mostly innate avoid speculating about specific precursors in nonhuman primates, stressing simply that the language faculty must have evolved gradually.[7]
Those who consider language as learned socially, such as Michael Tomasello, consider it to have developed from the cognitively controlled aspects of primate communication, mostly gestural rather than vocal.[8][9] Where vocal precursors are concerned, many continuity theorists envisage language as evolving from early human capacities for song.[10][11][12][13]
Noam Chomsky, a proponent of discontinuity theory, argues that a single change occurred in humans before they left Africa, coincident with the Great Leap approximately 100,000 years ago, in which a common language faculty developed in a group of humans and their descendants. Chomsky bases his argument on the observation that any human baby of any culture can be raised in a different culture and will completely assimilate the language and behavior of the new culture in which they were raised. This implies that no major change to the human language faculty has occurred since humans left Africa.[14]
Transcending the continuity-versus-discontinuity divide, some scholars view the emergence of language as the consequence of some kind of social transformation[15] that, by generating unprecedented levels of public trust, liberated a genetic potential for linguistic creativity that had previously lain dormant.[16][17][18] "Ritual/speech coevolution theory" exemplifies this approach.[19][20] Scholars in this intellectual camp point to the fact that even chimpanzees and bonobos have latent symbolic capacities that they rarely—if ever—use in the wild.[21] Objecting to the sudden mutation idea, these authors argue that even if a chance mutation were to install a language organ in an evolving bipedal primate, it would be adaptively useless under all known primate social conditions. A very specific social structure – one capable of upholding unusually high levels of public accountability and trust – must have evolved before or concurrently with language to make reliance on "cheap signals" (e.g. words) an evolutionarily stable strategy.
Since the emergence of language lies so far back in human prehistory, the relevant developments have left no direct historical traces, and comparable processes cannot be observed today. Despite this, the emergence of new sign languages in modern times—Nicaraguan Sign Language, for example—may offer insights into the developmental stages and creative processes necessarily involved.[22] Another approach inspects early human fossils, looking for traces of physical adaptation to language use.[23][24] In some cases, when the DNA of extinct humans can be recovered, the presence or absence of genes considered to be language-relevant—FOXP2, for example—may prove informative.[25] Another approach, this time archaeological, involves invoking symbolic behavior (such as repeated ritual activity) that may leave an archaeological trace—such as mining and modifying ochre pigments for body-painting—while developing theoretical arguments to justify inferences from symbolism in general to language in particular.[26][27][28]
The time range for the evolution of language or its anatomical prerequisites extends, at least in principle, from the phylogenetic divergence of Homo from Pan to the emergence of full behavioral modernity some 50,000–150,000 years ago. Few dispute that Australopithecus probably lacked vocal communication significantly more sophisticated than that of great apes in general,[29] but scholarly opinions vary as to the developments since the appearance of Homo some 2.5 million years ago. Some scholars assume the development of primitive language-like systems (proto-language) as early as Homo habilis, while others place the development of symbolic communication only with Homo erectus (1.8 million years ago) or with Homo heidelbergensis (0.6 million years ago), and the development of language proper with Homo sapiens, currently estimated at less than 200,000 years ago.
Using statistical methods to estimate the time required to achieve the current spread and diversity in modern languages, Johanna Nichols—a linguist at the University of California, Berkeley—argued in 1998 that vocal languages must have begun diversifying in the human species at least 100,000 years ago.[30] Estimates of this kind are not universally accepted, but jointly considering genetic, archaeological, palaeontological, and much other evidence indicates that language likely emerged somewhere in sub-Saharan Africa during the Middle Stone Age, roughly contemporaneous with the speciation of Homo sapiens.[31]
I cannot doubt that language owes its origin to the imitation and modification, aided by signs and gestures, of various natural sounds, the voices of other animals, and man's own instinctive cries.
In 1861, historical linguist Max Müller published a list of speculative theories concerning the origins of spoken language.[33]
Most scholars today consider all such theories not so much wrong—they occasionally offer peripheral insights—as naïve and irrelevant.[35][36] The problem with these theories is that they rest on the assumption that once early humans had discovered a workable mechanism for linking sounds with meanings, language would automatically have evolved.[citation needed]
Much earlier, medieval Muslim scholars developed theories on the origin of language.[37][38] Their theories fell into five general types.[39]
From the perspective of signalling theory, the main obstacle to the evolution of language-like communication in nature is not a mechanistic one. Rather, it is the fact that symbols—arbitrary associations of sounds or other perceptible forms with corresponding meanings—are unreliable and may as well be false.[40][41][42]The problem of reliability was not recognized at all by Darwin, Müller or the other early evolutionary theorists.
Animal vocal signals are, for the most part, intrinsically reliable. When a cat purrs, the signal constitutes direct evidence of the animal's contented state. The signal is trusted, not because the cat is inclined to be honest, but because it just cannot fake that sound. Primate vocal calls may be slightly more manipulable, but they remain reliable for the same reason—because they are hard to fake.[43] Primate social intelligence is "Machiavellian"; that is, self-serving and unconstrained by moral scruples. Monkeys, apes and particularly humans often attempt to deceive each other, while at the same time remaining constantly on guard against falling victim to deception themselves.[44][45] Paradoxically, it is theorized that primates' resistance to deception is what blocks the evolution of their signalling systems along language-like lines. Language is ruled out because the best way to guard against being deceived is to ignore all signals except those that are instantly verifiable. Words automatically fail this test.[19]
Words are easy to fake. Should they turn out to be lies, listeners will adapt by ignoring them in favor of hard-to-fake indices or cues. For language to work, listeners must be confident that those with whom they are on speaking terms are generally likely to be honest.[46] A peculiar feature of language is displaced reference, which means reference to topics outside the currently perceptible situation. This property prevents utterances from being corroborated in the immediate "here" and "now". For this reason, language presupposes relatively high levels of mutual trust in order to become established over time as an evolutionarily stable strategy. This stability is born of a longstanding mutual trust and is what grants language its authority. A theory of the origins of language must therefore explain why humans could begin trusting cheap signals in ways that other animals apparently cannot.
The "mother tongues" hypothesis was proposed in 2004 as a possible solution to this problem.[47]W. Tecumseh Fitchsuggested that the Darwinian principle of "kin selection"[48]—the convergence of genetic interests between relatives—might be part of the answer. Fitch suggests that languages were originally "mother tongues". If language evolved initially for communication between mothers and their own biological offspring, extending later to include adult relatives as well, the interests of speakers and listeners would have tended to coincide. Fitch argues that shared genetic interests would have led to sufficient trust and cooperation for intrinsically unreliable signals—words—to become accepted as trustworthy and so begin evolving for the first time.[49]
Critics of this theory point out that kin selection is not unique to humans.[50]So even if one accepts Fitch's initial premises, the extension of the posited "mother tongue" networks from close relatives to more distant relatives remains unexplained.[50]Fitch argues, however, that the extended period of physical immaturity of human infants and the postnatal growth of the human brain give the human-infant relationship a different and more extended period of intergenerational dependency than that found in any other species.[47]
Ib Ulbæk[6] invokes another standard Darwinian principle—"reciprocal altruism"[51]—to explain the unusually high levels of intentional honesty necessary for language to evolve. "Reciprocal altruism" can be expressed as the principle that if you scratch my back, I'll scratch yours. In linguistic terms, it would mean that if you speak truthfully to me, I'll speak truthfully to you. Ordinary Darwinian reciprocal altruism, Ulbæk points out, is a relationship established between frequently interacting individuals. For language to prevail across an entire community, however, the necessary reciprocity would have needed to be enforced universally instead of being left to individual choice. Ulbæk concludes that for language to evolve, society as a whole must have been subject to moral regulation.
Critics point out that this theory fails to explain when, how, why or by whom "obligatory reciprocal altruism" could possibly have been enforced.[20] Various proposals have been offered to remedy this defect.[20] A further criticism is that language does not work on the basis of reciprocal altruism anyway. Humans in conversational groups do not withhold information from everyone except listeners likely to offer valuable information in return. On the contrary, they seem to want to advertise to the world their access to socially relevant information, broadcasting that information without expectation of reciprocity to anyone who will listen.[52]
According to Robin Dunbar in his book Grooming, Gossip and the Evolution of Language, language does for group-living humans what manual grooming does for other primates—it allows individuals to service their relationships and so maintain their alliances on the basis of the principle: if you scratch my back, I'll scratch yours. Dunbar argues that as humans began living in increasingly larger social groups, the task of manually grooming all one's friends and acquaintances became so time-consuming as to be unaffordable.[53] In response to this problem, humans developed "a cheap and ultra-efficient form of grooming"—vocal grooming. To keep allies happy, one now needs only to "groom" them with low-cost vocal sounds, servicing multiple allies simultaneously while keeping both hands free for other tasks. Vocal grooming then evolved gradually into vocal language—initially in the form of "gossip".[53] Dunbar's hypothesis seems to be supported by adaptations, in the structure of language, to the function of narration in general.[54]
Critics of this theory point out that the efficiency of "vocal grooming"—the fact that words are so cheap—would have undermined its capacity to signal commitment of the kind conveyed by time-consuming and costly manual grooming.[55]A further criticism is that the theory does nothing to explain the crucial transition from vocal grooming—the production of pleasing but meaningless sounds—to the cognitive complexities of syntactical speech.
The ritual/speech coevolution theory was originally proposed by social anthropologist Roy Rappaport[56] before being elaborated by anthropologists such as Chris Knight,[57] Jerome Lewis,[58] Nick Enfield,[59] Camilla Power[60] and Ian Watts.[61] Cognitive scientist and robotics engineer Luc Steels[62] is another prominent supporter of this general approach, as is biological anthropologist and neuroscientist Terrence Deacon.[63] A more recent champion of the approach is the Chomskyan specialist in linguistic syntax, Cedric Boeckx.[64]
These scholars argue that there can be no such thing as a "theory of the origins of language". This is because language is not a separate adaptation, but an internal aspect of something much wider—namely, the entire domain known to anthropologists as human symbolic culture.[65] Attempts to explain language independently of this wider context have failed, say these scientists, because they are addressing a problem with no solution. Language would not work outside its necessary environment of confidence-building social mechanisms and institutions. For example, it would not work for a nonhuman ape communicating with others of its kind in the wild. Not even the cleverest nonhuman ape could make language work under such conditions.
Lie and alternative, inherent in language ... pose problems to any society whose structure is founded on language, which is to say all human societies. I have therefore argued that if there are to be words at all it is necessary to establish The Word, and that The Word is established by the invariance of liturgy.
Advocates of this school of thought point out that words are cheap. Should an especially clever nonhuman ape, or even a group of articulate nonhuman apes, try to use words in the wild, they would carry no conviction. The primate vocalizations that do carry conviction—those they actually use—are unlike words, in that they are emotionally expressive, intrinsically meaningful, and reliable because they are relatively costly and hard to fake.
Oral and gestural languages consist of pattern-making whose cost is essentially zero. As pure social conventions, signals of this kind cannot evolve in a Darwinian social world—they are a theoretical impossibility.[67] Being intrinsically unreliable, language works only if one can build up a reputation for trustworthiness within a certain kind of society—namely, one where symbolic cultural facts (sometimes called "institutional facts") can be established and maintained through collective social endorsement.[68] In any hunter-gatherer society, the basic mechanism for establishing trust in symbolic cultural facts is collective ritual.[69] Therefore, the task facing researchers into the origins of language is more multidisciplinary than is usually supposed. It involves addressing the evolutionary emergence of human ritual, kinship, religion and symbolic culture taken as a whole, with language an important but subsidiary component.
In a 2023 article, Cedric Boeckx[64] endorses the Rappaport/Searle/Knight way of capturing the "special" nature of human words. Words are symbols. This means that, from a standpoint in Darwinian signal evolution theory, they are "patently false signals." Words are facts, but "facts whose existence depends entirely on subjective belief".[70] In philosophical terms, they are "institutional facts": fictions that are granted factual status within human social institutions.[71] From this standpoint, according to Boeckx, linguistic utterances are symbolic to the extent that they are patent falsehoods serving as guides to communicative intentions. "They are communicatively useful untruths, as it were."[64] The reason why words can survive among humans despite being false is largely a matter of trust. The corresponding origins theory is that language can only have begun to evolve from the moment humans started reciprocally faking in communicatively helpful ways, i.e., when they became capable of upholding the levels of trust necessary for linguistic communication to work.
The point here is that an ape or other nonhuman must always carry at least some of the burden of generating the trust necessary for communication to work. That is, in order to be taken seriously, each signal it emits must be a patently reliable one, trusted because it is rooted in some way in the real world. But now imagine what might happen under social conditions where trust could be taken for granted. The signaller could stop worrying about reliability and concentrate instead on perceptual discriminability. Carried to its conclusion, this should permit digital signaling—the cheapest and most efficient kind of communication.
From this philosophical standpoint, animal communication cannot be digital because it does not have the luxury of being patently false. Costly signals of any kind can only be evaluated on an analog scale. Put differently, truly symbolic, digital signals become socially acceptable only under highly unusual conditions—such as those internal to a ritually bonded community whose members are not tempted to lie.[citation needed]
Critics of the speech/ritual coevolution theory include Noam Chomsky, who terms it the "non-existence" hypothesis—a denial of the very existence of language as an object of study for natural science.[72] Chomsky's own theory is that language emerged in an instant and in perfect form,[73] prompting his critics, in turn, to retort that only something that does not exist—a theoretical construct or convenient scientific fiction—could possibly emerge in such a miraculous way.[17] The controversy remains unresolved.
Acheulean tool use began during the Lower Paleolithic approximately 1.75 million years ago. Studies focusing on the lateralization of Acheulean tool production and language production have noted similar areas of blood flow when engaging in these activities separately; this suggests that the brain functions needed for transmitting tool production across generations overlap with the brain systems required for producing language. Researchers used functional transcranial Doppler ultrasonography (fTCD) and had participants perform activities related to the creation of tools using the same methods as during the Lower Paleolithic, as well as a task designed specifically for word generation.[74] The purpose of this test was to focus on the planning aspect of Acheulean tool making and cued word generation in language (an example of cued word generation would be trying to list all words beginning with a given letter). The co-development of language and tool use has been theorized by multiple researchers;[75][76][77] however, until recently, there has been little empirical data to support these hypotheses. In the study performed by Uomini et al., evidence for the usage of the same brain areas was found when looking at cued word generation and Acheulean tool use. The relationship between tool use and language production is found in working and planning memory, respectively, and was found to be similar across a variety of participants, furthering evidence that these areas of the brain are shared.[74] This evidence lends credibility to the theory that language developed alongside tool use in the Lower Paleolithic.
The humanistic tradition considers language as a human invention. Renaissance philosopher Antoine Arnauld gave a detailed description of his idea of the origin of language in Port-Royal Grammar. According to Arnauld, people are social and rational by nature, and this urged them to create language as a means to communicate their ideas to others. Language construction would have occurred through a slow and gradual process.[78] In later theory, especially in functional linguistics, the primacy of communication is emphasised over psychological needs.[79]
However, the exact way language evolved is not considered vital to the study of languages. Structural linguist Ferdinand de Saussure abandoned evolutionary linguistics after having come to the firm conclusion that it would not be able to provide any further revolutionary insight after the completion of the major works in historical linguistics by the end of the 19th century. Saussure was particularly sceptical of the attempts of August Schleicher and other Darwinian linguists to access prehistorical languages through series of reconstructions of proto-languages.[80]
Saussure's solution to the problem of language evolution involved dividing theoretical linguistics in two. Evolutionary and historical linguistics were renamed diachronic linguistics, the study of language change, which has only limited explanatory power because of the inadequacy of the reliable research material that could ever be made available. Synchronic linguistics, in contrast, aims to widen scientists' understanding of language through a study of a given contemporary or historical language stage as a system in its own right.[81]
Although Saussure put much focus on diachronic linguistics, later structuralists who equated structuralism with synchronic analysis were sometimes criticised for ahistoricism. According to structural anthropologist Claude Lévi-Strauss, language and meaning—in opposition to "knowledge, which develops slowly and progressively"—must have appeared in an instant.[82]
Structuralism, as first introduced to sociology by Émile Durkheim, is nonetheless a type of humanistic evolutionary theory which explains diversification as necessitated by growing complexity.[83] There was a shift of focus to functional explanation after Saussure's death. Functional structuralists including the Prague Circle linguists and André Martinet explained the growth and maintenance of structures as being necessitated by their functions.[79] For example, novel technologies make it necessary for people to invent new words, but these may lose their function and be forgotten as the technologies are eventually replaced by more modern ones.
According to Chomsky's single-mutation theory, the emergence of language resembled the formation of a crystal: with digital infinity as the seed crystal in a super-saturated primate brain on the verge of blossoming into the human mind, language crystallized by physical law once evolution added a single small but crucial keystone.[84][85] Thus, in this theory, language appeared rather suddenly within the history of human evolution. Chomsky, writing with computational linguist and computer scientist Robert C. Berwick, suggests that this scenario is completely compatible with modern biology. They note that "none of the recent accounts of human language evolution seem to have completely grasped the shift from conventional Darwinism to its fully stochastic modern version—specifically, that there are stochastic effects not only due to sampling like directionless drift, but also due to directed stochastic variation in fitness, migration, and heritability—indeed, all the "forces" that affect individual or gene frequencies... All this can affect evolutionary outcomes—outcomes that as far as we can make out are not brought out in recent books on the evolution of language, yet would arise immediately in the case of any new genetic or individual innovation, precisely the kind of scenario likely to be in play when talking about language's emergence."
Citing evolutionary geneticist Svante Pääbo, they concur that a substantial difference must have occurred to differentiate Homo sapiens from Neanderthals to "prompt the relentless spread of our species, who had never crossed open water, up and out of Africa and then on across the entire planet in just a few tens of thousands of years.... What we do not see is any kind of 'gradualism' in new tool technologies or innovations like fire, shelters, or figurative art." Berwick and Chomsky therefore suggest language emerged approximately between 200,000 years ago and 60,000 years ago (between the appearance of the first anatomically modern humans in southern Africa and the last exodus from Africa, respectively). "That leaves us with about 130,000 years, or approximately 5,000–6,000 generations of time for evolutionary change. This is not 'overnight in one generation' as some have (incorrectly) inferred—but neither is it on the scale of geological eons. It's time enough—within the ballpark for what Nilsson and Pelger (1994) estimated as the time required for the full evolution of a vertebrate eye from a single cell, even without the invocation of any 'evo-devo' effects."[86]
The single-mutation theory of language evolution has been directly questioned on different grounds. A formal analysis of the probability of such a mutation taking place and going to fixation in the species has concluded that such a scenario is unlikely, with multiple mutations with more moderate fitness effects being more probable.[87] Another criticism has questioned the logic of the argument for a single mutation, putting forward that from the formal simplicity of Merge, the capacity Berwick and Chomsky deem the core property of human language that emerged suddenly, one cannot derive the (number of) evolutionary steps that led to it.[88]
The Romulus and Remus hypothesis, proposed by neuroscientist Andrey Vyshedskiy, seeks to address the question of why the modern speech apparatus originated over 500,000 years before the earliest signs of modern human imagination. This hypothesis proposes that there were two phases that led to modern recursive language. The phenomenon of recursion occurs across multiple linguistic domains, arguably most prominently in syntax and morphology. By nesting a structure such as a sentence or a word within itself, recursion enables the generation of potentially (countably) infinite new variations of that structure. For example, the base sentence [Peter likes apples.] can be nested in irrealis clauses to produce [Mary said [Peter likes apples.]], [Paul believed [Mary said [Peter likes apples.]]] and so forth.[89]
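To make the nesting pattern concrete, here is a toy sketch that mechanically embeds a base clause inside successive matrix clauses; the subject-verb pairs are invented for illustration, and the bracketing follows the example above.

```python
# Toy illustration of recursion in syntax: embedding a base sentence
# inside irrealis clauses, one level of nesting per embedding frame.
def embed(sentence, frames):
    """Nest `sentence` inside each (subject, verb) clause frame in turn."""
    for subject, verb in frames:
        sentence = f"{subject} {verb} [{sentence}]"
    return sentence

base = "Peter likes apples"
frames = [("Mary", "said"), ("Paul", "believed"), ("Anna", "doubted")]
print(embed(base, frames))
# -> Anna doubted [Paul believed [Mary said [Peter likes apples]]]
```

Because the same embedding step can be applied any number of times, a finite rule yields an unbounded set of distinct sentences, which is the point the hypothesis builds on.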
The first phase includes the slow development of non-recursive language with a large vocabulary, along with the modern speech apparatus, which includes changes to the hyoid bone, increased voluntary control of the muscles of the diaphragm, and the evolution of the FOXP2 gene, as well as other changes, by 600,000 years ago.[90] The second phase was then a rapid Chomskyan single step, consisting of three distinct events that happened in quick succession around 70,000 years ago and allowed the shift from non-recursive to recursive language in early hominins.
It is not enough for children to have a modern prefrontal cortex (PFC) to allow the development of prefrontal synthesis (PFS); the children must also be mentally stimulated and have recursive elements already in their language to acquire PFS. Since their parents would not have invented these elements yet, the children would have had to do it themselves, which is a common occurrence among young children that live together, in a process called cryptophasia.[92] This means that delayed PFC development would have allowed more time to acquire PFS and develop recursive elements.
Delayed PFC development also comes with negative consequences, such as a longer period of reliance on one's parents to survive and lower survival rates. For modern language to have occurred, PFC delay had to have an immense survival benefit in later life, such as the PFS ability. This suggests that the mutation that caused PFC delay and the development of recursive language and PFS occurred simultaneously, which lines up with evidence of a genetic bottleneck around 70,000 years ago.[93] This could have been the result of a few individuals who developed PFS and recursive language, which gave them a significant competitive advantage over all other humans at the time.[91]
The gestural theory states that human language developed from gestures that were used for simple communication.
Two types of evidence support this theory.
Research has found strong support for the idea that oral communication and sign language depend on similar neural structures. Patients who used sign language, and who suffered from a left-hemisphere lesion, showed the same disorders with their sign language as vocal patients did with their oral language.[96] Other researchers found that the same left-hemisphere brain regions were active during sign language as during the use of vocal or written language.[97]
Primate gesture is at least partially genetic: different nonhuman apes will perform gestures characteristic of their species, even if they have never seen another ape perform that gesture. For example, gorillas beat their breasts. This shows that gestures are an intrinsic and important part of primate communication, which supports the idea that language evolved from gesture.[98]
Further evidence suggests that gesture and language are linked. In humans, manually gesturing has an effect on concurrent vocalizations, thus creating certain natural vocal associations of manual efforts. Chimpanzees move their mouths when performing fine motor tasks. These mechanisms may have played an evolutionary role in enabling the development of intentional vocal communication as a supplement to gestural communication. Voice modulation could have been prompted by preexisting manual actions.[98]
From infancy, gestures both supplement and predict speech.[99][100]This addresses the idea that gestures quickly change in humans from a sole means of communication (from a very young age) to a supplemental and predictive behavior that is used despite the ability to communicate verbally. This too serves as a parallel to the idea that gestures developed first and language subsequently built upon it.
Two possible scenarios have been proposed for the development of language, one of which supports the gestural theory: either language evolved from the calls of human ancestors, or it was derived from gesture.[101]
The first perspective, that language evolved from the calls of human ancestors, seems logical because both humans and animals make sounds or cries. One evolutionary reason to refute this is that, anatomically, the centre that controls calls in monkeys and other animals is located in a completely different part of the brain than in humans. In monkeys, this centre is located in the depths of the brain, in regions related to emotions. In the human system, it is located in an area unrelated to emotion. Humans can communicate simply to communicate—without emotions. So, anatomically, this scenario does not work.[101] This suggests that language was derived from gesture[102] (humans communicated by gesture first, and sound was attached later).
The important question for gestural theories is why there was a shift to vocalization. Various explanations have been proposed.
A comparable hypothesis states that in 'articulate' language, gesture and vocalisation are intrinsically linked, as language evolved from equally intrinsically linked dance and song.[13]
Humans still use manual and facial gestures when they speak, especially when people meet who have no language in common.[106] There are also a great number of sign languages still in existence, commonly associated with Deaf communities. These sign languages are equal in complexity, sophistication, and expressive power to any oral language.[107] The cognitive functions are similar and the parts of the brain used are similar. The main difference is that the "phonemes" are produced on the outside of the body, articulated with hands, body, and facial expression, rather than inside the body, articulated with tongue, teeth, lips, and breathing.[108] (Compare the motor theory of speech perception.)
Critics of gestural theory note that it is difficult to name serious reasons why the initial pitch-based vocal communication (which is present in primates) would be abandoned in favor of the much less effective non-vocal, gestural communication.[109] However, Michael Corballis has pointed out that primate vocal communication (such as alarm calls) is thought not to be under conscious control, unlike hand movement, and is thus not credible as a precursor to human language; primate vocalization is rather homologous to, and continued in, involuntary reflexes (connected with basic human emotions) such as screams or laughter (the fact that these can be faked does not disprove the fact that genuine involuntary responses to fear or surprise exist).[102] Also, gesture is not generally less effective, and depending on the situation can even be advantageous, for example in a loud environment or where it is important to be silent, such as on a hunt. Other challenges to the "gesture-first" theory have been presented by researchers in psycholinguistics, including David McNeill.[110]
Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. The Tool-use sound hypothesis suggests that the production and perception of sound also contributed substantially, particularly incidental sound of locomotion (ISOL) and tool-use sound (TUS).[111] Human bipedalism resulted in rhythmic and more predictable ISOL. That may have stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations, and to mimic natural sounds.[112] Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor-processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use.[111] A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties, meaning, or both could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved.
In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca's area, one of the language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action-understanding, imitation-learning, and the simulation of other people's behavior.[113] This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca's area.[114]
Rates of vocabulary expansion link to the ability of children to vocally mirror non-words and so to acquire the new word pronunciations. Such speech repetition occurs automatically and quickly,[115] and is processed in the brain separately from speech perception.[116][117] Moreover, such vocal imitation can occur without comprehension, as in speech shadowing[118] and echolalia.[114][119] Further evidence for this link comes from a recent study in which the brain activity of two participants was measured using fMRI while they were gesturing words to each other in a game of charades—a modality that some have suggested might represent the evolutionary precursor of human language. Analysis of the data using Granger causality revealed that the mirror-neuron system of the observer indeed reflects the pattern of activity in the motor system of the sender, supporting the idea that the motor concept associated with the words is indeed transmitted from one brain to another using the mirror system.[120]
Not all linguists agree with the above arguments, however. In particular, supporters of Noam Chomsky argue against the possibility that the mirror neuron system can play any role in the hierarchical recursive structures essential to syntax.[121]
According to Dean Falk's "putting-down-the-baby" theory, vocal interactions between early hominid mothers and infants began a sequence of events that led, eventually, to human ancestors' earliest words.[122] The basic idea is that evolving human mothers, unlike their counterparts in other primates, could not move around and forage with their infants clinging onto their backs. Loss of fur in the human case left infants with no means of clinging on. Frequently, therefore, mothers had to put their babies down. As a result, these babies needed to be reassured that they were not being abandoned. Mothers responded by developing 'motherese'—an infant-directed communicative system embracing facial expressions, body language, touching, patting, caressing, laughter, tickling, and emotionally expressive contact calls. The argument is that language developed out of this interaction.[122]
In The Mental and Social Life of Babies, psychologist Kenneth Kaye noted that no usable adult language could have evolved without interactive communication between very young children and adults. "No symbolic system could have survived from one generation to the next if it could not have been easily acquired by young children under their normal conditions of social life."[123]
The "from where to what" model is a language evolution model that is derived primarily from the organization oflanguage processing in the braininto two structures: the auditory dorsal stream and the auditory ventral stream.[124][125]It hypothesizes seven stages of language evolution (see illustration). Speech originated for the purpose of exchanging contact calls between mothers and their offspring to find one another in the event they became separated (illustration part 1). The contact calls could be modified with intonations in order to express either a higher or lower level of distress (illustration part 2). The use of two types of contact calls enabled the first question-answer conversation. In this scenario, the child would emit a low-level distress call to express a desire to interact with an object, and the mother would respond with either another low-level distress call (to express approval of the interaction) or a high-level distress call (to express disapproval) (illustration part 3). Over time, the improved use of intonations and vocal control led to the invention of unique calls (phonemes) associated with distinct objects (illustration part 4). At first, children learned the calls (phonemes) from their parents by imitating their lip-movements (illustration part 5). Eventually, infants were able to encode into long-term memory all the calls (phonemes). Consequentially, mimicry via lip-reading was limited to infancy and older children learned new calls through mimicry without lip-reading (illustration part 6). Once individuals became capable of producing a sequence of calls, this allowed multi-syllabic words, which increased the size of their vocabulary (illustration part 7). The use of words, composed of sequences of syllables, provided the infrastructure for communicating with sequences of words (i.e. sentences).
The theory's name is derived from the two auditory streams, which are both found in the brains of humans and other primates. The auditory ventral stream is responsible for sound recognition, and so it is referred to as the auditory what stream.[126][127][128] In primates, the auditory dorsal stream is responsible for sound localization, and thus it is called the auditory where stream. Only in humans (in the left hemisphere) is it also responsible for other processes associated with language use and acquisition, such as speech repetition and production, integration of phonemes with their lip movements, perception and production of intonations, phonological long-term memory (long-term memory storage of the sounds of words), and phonological working memory (the temporary storage of the sounds of words).[129][130][131][132][133][134][135][136] Some evidence also indicates a role in recognizing others by their voices.[137][138] The emergence of each of these functions in the auditory dorsal stream represents an intermediate stage in the evolution of language.
A contact call origin for human language is consistent with animal studies: as in human language, contact call discrimination in monkeys is lateralised to the left hemisphere.[139][140] Mice with knock-outs of language-related genes (such as FOXP2 and SRPX2) also produce pups that no longer emit contact calls when separated from their mothers.[141][142] The model is also supported by its ability to explain unique human phenomena, such as the use of intonations when converting words into commands and questions, the tendency of infants to mimic vocalizations during the first year of life (and its disappearance later on), and the protruding and visible human lips, which are not found in other apes. This theory could be considered an elaboration of the putting-down-the-baby theory of language evolution.
"Grammaticalization" is a continuous historical process in which free-standing words develop into grammatical appendages, while these in turn become ever more specialized and grammatical. An initially "incorrect" usage, in becoming accepted, leads tounforeseen consequences, triggering knock-on effects and extended sequences of change. Paradoxically, grammar evolves because, in the final analysis, humans care less about grammatical niceties than about making themselves understood.[143]If this is how grammar evolves today, according to this school of thought, similar principles at work can be legitimately inferred among distant human ancestors, when grammar itself was first being established.[144][145][146]
In order to reconstruct the evolutionary transition from early language to languages with complex grammars, it is necessary to know which hypothetical sequences are plausible and which are not. In order to convey abstract ideas, the first recourse of speakers is to fall back on immediately recognizable concrete imagery, very often deploying metaphors rooted in shared bodily experience.[147] A familiar example is the use of concrete terms such as "belly" or "back" to convey abstract meanings such as "inside" or "behind". Equally metaphorical is the strategy of representing temporal patterns on the model of spatial ones. For example, English speakers might say "It is going to rain", modelled on "I am going to London." This can be abbreviated colloquially to "It's gonna rain." Even when in a hurry, English speakers do not say "I'm gonna London"—the contraction is restricted to the job of specifying tense. From such examples it can be seen why grammaticalisation is consistently unidirectional—from concrete to abstract meaning, not the other way around.[144]
Grammaticalization theorists picture early language as simple, perhaps consisting only of nouns.[146] (p. 111) Even under that extreme theoretical assumption, however, it is difficult to imagine what would realistically have prevented people from using, say, "spear" as if it were a verb ("Spear that pig!"). People might have used their nouns as verbs or their verbs as nouns as occasion demanded. In short, while a noun-only language might seem theoretically possible, grammaticalization theory indicates that it cannot have remained fixed in that state for any length of time.[144][148]
Creativity drives grammatical change.[148] This presupposes a certain attitude on the part of listeners. Instead of punishing deviations from accepted usage, listeners must prioritise imaginative mind-reading. Imaginative creativity—emitting a leopard alarm when no leopard was present, for example—is not the kind of behaviour which, say, vervet monkeys would appreciate or reward.[149] Creativity and reliability are incompatible demands; for "Machiavellian" primates as for animals generally, the overriding pressure is to demonstrate reliability.[150] If humans escape these constraints, it is because in their case, listeners are primarily interested in mental states.
To focus on mental states is to accept fictions—inhabitants of the imagination—as potentially informative and interesting. An example is metaphor: a metaphor is, literally, a false statement.[151] In Romeo and Juliet, Romeo declares "Juliet is the sun!". Juliet is a woman, not a ball of plasma in the sky, but human listeners are not (or not usually) pedants insistent on point-by-point factual accuracy. They want to know what the speaker has in mind. Grammaticalisation is essentially based on metaphor. To outlaw its use would be to stop grammar from evolving and, by the same token, to exclude all possibility of expressing abstract thought.[147][152]
A criticism of all this is that while grammaticalization theory might explain language change today, it does not satisfactorily address the really difficult challenge—explaining the initial transition from primate-style communication to language as it is known today. Rather, the theory assumes that language already exists. As Bernd Heine and Tania Kuteva acknowledge: "Grammaticalisation requires a linguistic system that is used regularly and frequently within a community of speakers and is passed on from one group of speakers to another".[146] Outside modern humans, such conditions do not prevail.
Human language is used for self-expression; however, expression displays different stages. The consciousness of self and feelings represents the stage immediately prior to the external, phonetic expression of feelings in the form of sound (i.e. language). Intelligent animals such as dolphins, Eurasian magpies, and chimpanzees live in communities, wherein they assign themselves roles for group survival and show emotions such as sympathy.[153] When such animals view their reflection (mirror test), they recognize themselves and exhibit self-consciousness.[154] Notably, humans evolved in a quite different environment than these animals did. Human survival became easier with the development of tools, shelter, and fire, thus facilitating further advancement of social interaction, self-expression, and tool-making for activities such as hunting and gathering.[155] Increasing brain size allowed more advanced provisioning and tools, and the technological advances of the Palaeolithic era, building upon the earlier evolutionary innovations of bipedalism and hand versatility, enabled the development of human language.[citation needed]
According to a study investigating the song differences between white-rumped munias and their domesticated counterpart (the Bengalese finch), the wild munias use a highly stereotyped song sequence, whereas the domesticated ones sing a highly unconstrained song. In wild finches, song syntax is subject to female preference—sexual selection—and remains relatively fixed. However, in the Bengalese finch, natural selection is replaced by breeding, in this case for colorful plumage, and thus, decoupled from selective pressures, stereotyped song syntax is allowed to drift. It is replaced, supposedly within 1,000 generations, by a variable and learned sequence. Wild finches, moreover, are thought incapable of learning song sequences from other finches.[156] In the field of bird vocalization, brains capable of producing only an innate song have very simple neural pathways: the primary forebrain motor centre, called the robust nucleus of the arcopallium, connects to midbrain vocal outputs, which in turn project to brainstem motor nuclei. By contrast, in brains capable of learning songs, the arcopallium receives input from numerous additional forebrain regions, including those involved in learning and social experience. Control over song generation has become less constrained, more distributed, and more flexible.[156]
One way to think about human evolution is that humans are self-domesticated apes. Just as domestication relaxed selection for stereotypic songs in the finches—mate choice was supplanted by choices made by the aesthetic sensibilities of bird breeders and their customers—so might human cultural domestication have relaxed selection on many of their primate behavioural traits, allowing old pathways to degenerate and reconfigure. Given the highly indeterminate way that mammalian brains develop—they basically construct themselves "bottom up", with one set of neuronal interactions preparing for the next round of interactions—degraded pathways would tend to seek out and find new opportunities for synaptic hookups. Such inherited de-differentiations of brain pathways might have contributed to the functional complexity that characterises human language. And, as exemplified by the finches, such de-differentiations can occur in very rapid time-frames.[157]
A distinction can be drawn between speech and language. Language is not necessarily spoken: it might alternatively be written or signed. Speech is among a number of different methods of encoding and transmitting linguistic information, albeit arguably[by whom?] the most natural one.[158]
Some scholars, such as Noam Chomsky, view language as an initially cognitive development, its "externalisation" to serve communicative purposes occurring later in human evolution. According to one such school of thought, the key feature distinguishing human language is recursion[159] (in this context, the iterative embedding of phrases within phrases). Other scholars—notably Daniel Everett—deny that recursion is universal, citing certain languages (e.g. Pirahã) which allegedly[by whom?] lack this feature.[160]
The ability to ask questions is considered by some[like whom?] to distinguish language from non-human systems of communication.[161] Some captive primates (notably bonobos and chimpanzees), having learned to use rudimentary signing to communicate with their human trainers, proved able to respond correctly to complex questions and requests. Yet they failed to ask even the simplest questions themselves.[162] Conversely, human children are able to ask their first questions (using only question intonation) at the babbling period of their development, long before they start using syntactic structures. Although babies from different cultures acquire native languages from their social environment, languages of all kinds—tonal, non-tonal, intonational and accented—overwhelmingly use similar rising "question intonation" for yes–no questions, though exceptions have been noted.[163][164][165][clarification needed] This near-universality is strong evidence for the role of question intonation. In general, according to some authors[like whom?], sentence intonation/pitch is pivotal in spoken grammar and is the basic information used by children to learn the grammar of whatever language.[13]
Language users have high-level reference (or deixis)—the ability to refer to things or states of being that are not in the immediate realm of the speaker. This ability is often related to theory of mind, or an awareness of the other as a being like the self with individual wants and intentions. According to Hauser, Chomsky, and Fitch (2002), there are six main aspects of this high-level reference system:
Simon Baron-Cohen (1999) argues that theory of mind must have preceded language use, based on evidence of use of the following characteristics as much as 40,000 years ago: intentional communication, repairing failed communication, teaching, intentional persuasion, intentional deception, building shared plans and goals, intentional sharing of focus or topic, and pretending. Moreover, Baron-Cohen argues that many primates show some, but not all, of these abilities.[citation needed] Call and Tomasello's research on chimpanzees supports this, in that individual chimps seem to understand that other chimps have awareness, knowledge, and intention, but do not seem to understand false beliefs. Many primates show some tendencies toward a theory of mind, but not a full one as humans have.[166]
Ultimately, there is some consensus within the field that a theory of mind is necessary for language use. Thus, the development of a full theory of mind in humans was a necessary precursor to full language use.[167]
In one particular study, rats and pigeons were required to press a button a certain number of times to get food. The animals showed very accurate discrimination for numbers less than four, but as the numbers increased, the error rate increased.[159] In another, the primatologist Tetsuro Matsuzawa attempted to teach chimpanzees Arabic numerals.[168] The difference between primates and humans in this regard was very large, as it took the chimps thousands of trials to learn 1–9, with each number requiring a similar amount of training time; yet, after learning the meaning of 1, 2, and 3 (and sometimes 4), children (after the age of 5.5 to 6) easily comprehend the value of greater integers by using a successor function (i.e. 2 is 1 greater than 1, 3 is 1 greater than 2, 4 is 1 greater than 3; once 4 is reached it seems most children suddenly understand that the value of any integer n is 1 greater than the previous integer).[169] Put simply, other primates learn the meaning of numbers one by one, similar to their approach to other referential symbols, while children first learn an arbitrary list of symbols (1, 2, 3, 4...) and then later learn their precise meanings.[170] These results can be seen as evidence for the application of the "open-ended generative property" of language in human numeral cognition.[159]
Hockett (1966) details a list of features regarded as essential to describing human language.[171] In the domain of the lexical-phonological principle, two features of this list are most important:
The sound system of a language is composed of a finite set of simple phonological items. Under the specificphonotacticrules of a given language, these items can be recombined and concatenated, giving rise tomorphologyand the open-ended lexicon. A key feature of language is that a simple, finite set of phonological items gives rise to an infinite lexical system wherein rules determine the form of each item, and meaning is inextricably linked with form. Phonological syntax, then, is a simple combination of pre-existing phonological units. Related to this is another essential feature of human language: lexical syntax, wherein pre-existing units are combined, giving rise to semantically novel or distinct lexical items.[This paragraph needs citation(s)]
Certain elements of the lexical-phonological principle are known to exist outside of humans. While all (or nearly all) have been documented in some form in the natural world, very few coexist within the same species. Bird-song, the songs of singing nonhuman apes, and the songs of whales all display phonological syntax, combining units of sound into larger structures apparently devoid of enhanced or novel meaning. Certain other primate species do have simple phonological systems with units referring to entities in the world. However, in contrast to human systems, the units in these primates' systems normally occur in isolation, betraying a lack of lexical syntax. There is new[when?] evidence to suggest that Campbell's monkeys also display lexical syntax, combining two calls (a predator alarm call with a "boom", the combination of which denotes a lessened threat of danger); however, it is still unclear whether this is a lexical or a morphological phenomenon.[172]
Pidgins are significantly simplified languages with only rudimentary grammar and a restricted vocabulary. In their early stage, pidgins mainly consist of nouns, verbs, and adjectives with few or no articles, prepositions, conjunctions or auxiliary verbs. Often the grammar has no fixed word order and the words have no inflection.[173]
If contact is maintained between the groups speaking the pidgin for long periods of time, the pidgins may become more complex over many generations. If the children of one generation adopt the pidgin as their native language it develops into a creole language, which becomes fixed and acquires a more complex grammar, with fixed phonology, syntax, morphology, and syntactic embedding. The syntax and morphology of such languages may often have local innovations not obviously derived from any of the parent languages.
Studies of creole languages around the world have suggested that they display remarkable similarities in grammar[citation needed] and are developed uniformly from pidgins in a single generation. These similarities are apparent even when creoles have no common language origin and have developed in isolation from each other. Syntactic similarities include subject–verb–object word order. Even when creoles are derived from languages with a different word order, they often develop the SVO word order. Creoles tend to have similar usage patterns for definite and indefinite articles, and similar movement rules for phrase structures even when the parent languages do not.[173]
Field primatologists can give useful insights into great ape communication in the wild.[29] One notable finding is that nonhuman primates, including the other great apes, produce calls that are graded, as opposed to categorically differentiated, with listeners striving to evaluate subtle gradations in signallers' emotional and bodily states. Nonhuman apes seemingly find it extremely difficult to produce vocalisations in the absence of the corresponding emotional states.[43] In captivity, nonhuman apes have been taught rudimentary forms of sign language or have been persuaded to use lexigrams—symbols that do not graphically resemble the corresponding words—on computer keyboards. Some nonhuman apes, such as Kanzi, have been able to learn and use hundreds of lexigrams.[174][175]
The Broca's and Wernicke's areas in the primate brain are responsible for controlling the muscles of the face, tongue, mouth, and larynx, as well as recognizing sounds. Primates are known to make "vocal calls", and these calls are generated by circuits in the brainstem and limbic system.[176]
In the wild, the communication of vervet monkeys has been the most extensively studied.[173] They are known to make up to ten different vocalizations. Many of these are used to warn other members of the group about approaching predators. They include a "leopard call", a "snake call", and an "eagle call".[177] Each call triggers a different defensive strategy in the monkeys who hear it, and scientists were able to elicit predictable responses from the monkeys using loudspeakers and prerecorded sounds. Other vocalisations may be used for identification. If an infant monkey calls, its mother turns toward it, but other vervet mothers turn instead toward that infant's mother to see what she will do.[178][179]
Similarly, researchers have demonstrated that chimpanzees (in captivity) use different "words" in reference to different foods. They recorded vocalisations that chimps made in reference, for example, to grapes, and then other chimps pointed at pictures of grapes when they heard the recorded sound.[180][181]
A study published in HOMO: Journal of Comparative Human Biology in 2017 claims that Ardipithecus ramidus, a hominin dated to approximately 4.5 Ma, shows the first evidence of an anatomical shift in the hominin lineage suggestive of increased vocal capability.[182] This study compared the skull of A. ramidus with 29 chimpanzee skulls of different ages and found that in numerous features A. ramidus clustered with the infant and juvenile measures as opposed to the adult measures. Such affinity with the shape dimensions of infant and juvenile chimpanzee skull architecture, it was argued, may have resulted in greater vocal capability. This assertion was based on the notion that the chimpanzee vocal tract ratios that prevent speech are a result of growth factors associated with puberty—growth factors absent in A. ramidus ontogeny. A. ramidus was also found to have a degree of cervical lordosis more conducive to vocal modulation when compared with chimpanzees, as well as cranial base architecture suggestive of increased vocal capability.
What was significant in this study, according to the authors,[182] was the observation that the changes in skull architecture that correlate with reduced aggression are the same changes necessary for the evolution of early hominin vocal ability. In integrating data on anatomical correlates of primate mating and social systems with studies of skull and vocal tract architecture that facilitate speech production, the authors argue that paleoanthropologists prior to their study have failed to understand the important relationship between early hominin social evolution and the evolution of our species' capacities for language.
While the skull of A. ramidus, according to the authors, lacks the anatomical impediments to speech evident in chimpanzees, it is unclear what the vocal capabilities of this early hominin were. While they suggest A. ramidus—based on similar vocal tract ratios—may have had vocal capabilities equivalent to a modern human infant or very young child, they concede this is a debatable and speculative hypothesis. However, they do claim that changes in skull architecture through processes of social selection were a necessary prerequisite for language evolution. As they write:
We propose that as a result of paedomorphic morphogenesis of the cranial base and craniofacial morphology Ar. ramidus would have not been limited in terms of the mechanical components of speech production as chimpanzees and bonobos are. It is possible that Ar. ramidus had vocal capability approximating that of chimpanzees and bonobos, with its idiosyncratic skull morphology not resulting in any significant advances in speech capability. In this sense the anatomical features analysed in this essay would have been exapted in later more voluble species of hominin. However, given the selective advantages of pro-social vocal synchrony, we suggest the species would have developed significantly more complex vocal abilities than chimpanzees and bonobos.[182]
Anatomically, some scholars believe that features of bipedalism developed in the australopithecines around 3.5 million years ago. Around this time, these structural developments within the skull led to a more prominently L-shaped vocal tract.[183][page needed] In order to generate the sounds modern Homo sapiens are capable of making, such as vowels, early Homo populations must have had a specifically shaped vocal tract and a lower-sitting larynx.[184] Opposing research previously suggested that Neanderthals were physically incapable of producing the full range of vocal sounds seen in modern humans due to differences in larynx placement. Establishing distinct larynx positions through fossil remains of Homo sapiens and Neanderthals would support this theory; however, modern research has revealed that the hyoid bone was indistinguishable between the two populations. Though research has shown a lower-sitting larynx is important to producing speech, another theory states it may not be as important as once thought.[185] Cataldo, Migliano, and Vinicius report that speech alone appears inadequate for transmitting stone tool-making knowledge, and suggest that speech may have emerged due to an increase in complex social interactions.[186]
Steven Mithen proposed the term Hmmmmm for the pre-linguistic system of communication posited to have been used by archaic Homo, beginning with Homo ergaster and reaching the highest sophistication in the Middle Pleistocene with Homo heidelbergensis and Homo neanderthalensis. Hmmmmm is an acronym for holistic (non-compositional), manipulative (utterances are commands or suggestions, not descriptive statements), multi-modal (acoustic as well as gestural and facial), musical, and mimetic.[187]
Evidence for Homo erectus potentially using language comes in the form of Acheulean tool usage. The use of abstract thought in the formation of Acheulean hand axes coincides with the symbol creation necessary for simple language.[188] Recent language theories present recursion as the unique facet of human language and theory of mind.[189][190] However, by breaking language down into its symbolic parts and separating meaning from the requirements of grammar, it becomes possible to see that language depends on neither recursion nor grammar. This can be evidenced by speakers of the Pirahã language in Brazil, whose language has no myths or creation stories, no numbers, and no colors.[191] The point is that even if grammar was unavailable, the use of foresight, planning, and symbolic thought can be evidence of language as early as one million years ago with Homo erectus.
Homo heidelbergensis was a close relative (most probably a migratory descendant) of Homo ergaster. Some researchers believe this species to be the first hominin to make controlled vocalisations, possibly mimicking animal vocalisations,[187] and that, as Homo heidelbergensis developed a more sophisticated culture, it proceeded from this point and possibly developed an early form of symbolic language.
The discovery in 1989 of the (Neanderthal) Kebara 2 hyoid bone suggests that Neanderthals may have been anatomically capable of producing sounds similar to modern humans.[192][193] The hypoglossal nerve, which passes through the hypoglossal canal, controls the movements of the tongue, which may have enabled voicing for size exaggeration (see size exaggeration hypothesis below) or may reflect speech abilities.[24][194][195][196][197][198]
However, although Neanderthals may have been anatomically able to speak, Richard G. Klein in 2004 doubted that they possessed a fully modern language. He largely bases his doubts on the fossil record of archaic humans and their stone tool kit. Bart de Boer in 2017 acknowledges the lack of a universally accepted reconstruction of the Neanderthal vocal tract; however, he notes the similarities in the thoracic vertebral canal, potential air sacs, and hyoid bones between modern humans and Neanderthals to suggest the presence of complex speech.[199] For two million years following the emergence of Homo habilis, the stone tool technology of hominins changed very little. Klein, who has worked extensively on ancient stone tools, describes the crude stone tool kit of archaic humans as impossible to break down into categories based on function, and reports that Neanderthals seem to have had little concern for the final aesthetic form of their tools. Klein argues that the Neanderthal brain may not have reached the level of complexity required for modern speech, even if the physical apparatus for speech production was well-developed.[200][201] The issue of Neanderthals' level of cultural and technological sophistication remains controversial.[citation needed]
Computer simulations of language evolution that identified three stages in the evolution of syntax suggest that Neanderthals were at stage 2, indicating that they had something more evolved than a proto-language but not yet as complex as the language of modern humans.[202]
Some researchers, applying auditory bioengineering models to computerised tomography scans of Neanderthal skulls, have asserted that Neanderthals had auditory capacity very similar to that of anatomically modern humans.[203]These researchers claim that this finding implies that "Neanderthals evolved the auditory capacities to support a vocal communication system as efficient as modern human speech."[203]
Anatomically modern humans begin to appear in the fossil record in Ethiopia some 200,000 years ago.[204] Although there is still much debate as to whether behavioural modernity emerged in Africa at around the same time, a growing number of archaeologists nowadays[when?] invoke the southern African Middle Stone Age use of red ochre pigments—for example at Blombos Cave—as evidence that modern anatomy and behaviour co-evolved.[205] These archaeologists argue strongly that if modern humans at this early stage were using red ochre pigments for ritual and symbolic purposes, they probably had symbolic language as well.[26]
According to the recent African origins hypothesis, from around 60,000–50,000 years ago[206] a group of humans left Africa and began migrating to occupy the rest of the world, carrying language and symbolic culture with them.[207]
The larynx (or voice box) is an organ in the neck housing the vocal folds, which are responsible for phonation. In humans, the larynx is descended. The human species is not unique in this respect: goats, dogs, pigs and tamarins lower the larynx temporarily, to emit loud calls.[208] Several deer species have a permanently lowered larynx, which may be lowered still further by males during their roaring displays.[209] Lions, jaguars, cheetahs and domestic cats also do this.[210] However, laryngeal descent in nonhumans (according to Philip Lieberman) is not accompanied by descent of the hyoid; hence the tongue remains horizontal in the oral cavity, preventing it from acting as a pharyngeal articulator.[211]
Despite all this, scholars remain divided as to how "special" the human vocal tract really is. It has been shown that the larynx does descend to some extent during development in chimpanzees, followed by hyoidal descent.[212] As against this, Philip Lieberman points out that only humans have evolved permanent and substantial laryngeal descent in association with hyoidal descent, resulting in a curved tongue and two-tube vocal tract with 1:1 proportions. He argues that Neanderthals and early anatomically modern humans could not have possessed supralaryngeal vocal tracts capable of producing "fully human speech".[213] Uniquely in the human case, simple contact between the epiglottis and velum is no longer possible, disrupting the normal mammalian separation of the respiratory and digestive tracts during swallowing. Since this entails substantial costs—increasing the risk of choking while swallowing food—we are forced to ask what benefits might have outweighed those costs. The obvious benefit—so it is claimed—must have been speech. But this idea has been vigorously contested. One objection is that humans are in fact not seriously at risk of choking on food: medical statistics indicate that accidents of this kind are extremely rare.[214] Another objection is that in the view of most scholars, speech as it is known emerged relatively late in human evolution, roughly contemporaneously with the emergence of Homo sapiens.[215] A development as complex as the reconfiguration of the human vocal tract would have required much more time, implying an early date of origin. This discrepancy in timescales undermines the idea that human vocal flexibility was initially driven by selection pressures for speech, leaving open the possibility that it was selected for other functions, such as improved singing ability.
To lower the larynx is to increase the length of the vocal tract, in turn lowering formant frequencies so that the voice sounds "deeper"—giving an impression of greater size. John Ohala argues that the function of the lowered larynx in humans, especially males, is probably to enhance threat displays rather than speech itself.[216] Ohala points out that if the lowered larynx were an adaptation for speech, adult human males would be expected to be better adapted in this respect than adult females, whose larynx is considerably less low. However, females outperform males in verbal tests,[217] falsifying this whole line of reasoning.
W. Tecumseh Fitch likewise argues that this was the original selective advantage of laryngeal lowering in the human species. Although (according to Fitch) the initial lowering of the larynx in humans had nothing to do with speech, the increased range of possible formant patterns was subsequently co-opted for speech. Size exaggeration remains the sole function of the extreme laryngeal descent observed in male deer. Consistent with the size exaggeration hypothesis, a second descent of the larynx occurs at puberty in humans, although only in males. In response to the objection that the larynx is descended in human females, Fitch suggests that mothers vocalizing to protect their infants would also have benefited from this ability.[218]
In 2011, Quentin Atkinson published a survey of phonemes from 500 different languages as well as language families and compared their phonemic diversity by region, number of speakers, and distance from Africa. The survey revealed that African languages had the largest number of phonemes, and Oceania and South America had the smallest number. After allowing for the number of speakers, the phonemic diversity was compared to over 2,000 possible origin locations. Atkinson's "best fit" model is that language originated in western, central, or southern Africa between 80,000 and 160,000 years ago. This predates the hypothesized southern coastal peopling of Arabia, India, southeast Asia, and Australia. It would also mean that the origin of language occurred at the same time as the emergence of symbolic culture.[219]
Numerous linguists[220][221][222] have criticized Atkinson's paper for misrepresenting both the phonemic data and the processes of linguistic change (language complexity does not necessarily correspond to age), and for failing to take into account the borrowing of phonemes from neighbouring languages, as some Bantu languages have done with click consonants.[222] Recreations of his method gave possible origins of language in the Caucasus[220] and Turkmenistan,[221] in addition to southern and eastern Africa.
The search for the origin of language has a long history in mythology. Most mythologies do not credit humans with the invention of language but speak of a divine language predating human language. Mystical languages used to communicate with animals or spirits, such as the language of the birds, are also common, and were of particular interest during the Renaissance.
Vāc is the Hindu goddess of speech, or "speech personified". As Brahman's "sacred utterance", she has a cosmological role as the "Mother of the Vedas". The Aztecs' story maintains that only a man, Coxcox, and a woman, Xochiquetzal, survived a flood, having floated on a piece of bark. They found themselves on land and had many children who were at first born unable to speak, but subsequently, upon the arrival of a dove, were endowed with language, although each one was given a different speech such that they could not understand one another.[223]
In the Old Testament, the Book of Genesis (chapter 11) says that God prevented the Tower of Babel from being completed through a miracle that made its construction workers start speaking different languages. After this, they migrated to other regions, grouped together according to which of the newly created languages they spoke, explaining the origins of languages and nations outside of the Fertile Crescent.[224]
History contains a number of anecdotes about people who attempted to discover the origin of language by experiment. The first such tale was told by Herodotus (Histories 2.2). He relates that Pharaoh Psammetichus (probably Psammetichus I, 7th century BC) had two children raised by a shepherd, with the instructions that no one should speak to them, but that the shepherd should feed and care for them while listening to determine their first words. When one of the children cried "bekos" with outstretched arms, the shepherd concluded that the word was Phrygian, because that was the sound of the Phrygian word for 'bread'. From this, Psammetichus concluded that the first language was Phrygian. King James IV of Scotland is said to have tried a similar experiment; his children were supposed to have spoken Hebrew.[225]
Both the medieval monarch Frederick II and Akbar are said to have tried similar experiments; the children involved in these experiments did not speak. The situation of deaf people today also points in this direction.[clarification needed]
Modern linguistics did not begin until the late 18th century, and the Romantic or animist theses of Johann Gottfried Herder and Johann Christoph Adelung remained influential well into the 19th century. The question of language origin seemed inaccessible to methodical approaches, and in 1866 the Linguistic Society of Paris famously banned all discussion of the origin of language, deeming it to be an unanswerable problem. An increasingly systematic approach to historical linguistics developed in the course of the 19th century, reaching its culmination in the Neogrammarian school of Karl Brugmann and others.[citation needed]
However, scholarly interest in the question of the origin of language has only gradually been revived from the 1950s on (and then controversially) with ideas such as universal grammar, mass comparison and glottochronology.[citation needed]
The "origin of language" as a subject in its own right emerged from studies inneurolinguistics,psycholinguisticsandhuman evolution. TheLinguistic Bibliographyintroduced "Origin of language" as a separate heading in 1988, as a sub-topic of psycholinguistics. Dedicated research institutes ofevolutionary linguisticsare a recent phenomenon, emerging only in the 1990s.[226]
|
https://en.wikipedia.org/wiki/Origin_of_language
|
Formal semantics is the study of grammatical meaning in natural languages using formal concepts from logic, mathematics and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars.
Formal semantics studies the denotations of natural language expressions. High-level concerns include compositionality, reference, and the nature of meaning. Key topic areas include scope, modality, binding, tense, and aspect. Semantics is distinct from pragmatics, which encompasses aspects of meaning which arise from interaction and communicative intent.
Formal semantics is an interdisciplinary field, often viewed as a subfield of both linguistics and philosophy, while also incorporating work from computer science, mathematical logic, and cognitive psychology. Within philosophy, formal semanticists typically adopt a Platonistic ontology and an externalist view of meaning.[1] Within linguistics, it is more common to view formal semantics as part of the study of linguistic cognition. As a result, philosophers put more of an emphasis on conceptual issues while linguists are more likely to focus on the syntax–semantics interface and crosslinguistic variation.[2][3]
The fundamental question of formal semantics is what one knows when one knows how to interpret expressions of a language. A common assumption is that knowing the meaning of a sentence requires knowing its truth conditions, or in other words knowing what the world would have to be like for the sentence to be true. For instance, to know the meaning of the English sentence "Nancy smokes" one has to know that it is true when the person Nancy performs the action of smoking.[1][4]
However, many current approaches to formal semantics posit that there is more to meaning than truth-conditions.[5] In the formal semantic framework of inquisitive semantics, knowing the meaning of a sentence also requires knowing what issues (i.e. questions) it raises. For instance "Nancy smokes, but does she drink?" conveys the same truth-conditional information as the previous example but also raises the issue of whether Nancy drinks.[6] Other approaches generalize the concept of truth conditionality or treat it as epiphenomenal. For instance in dynamic semantics, knowing the meaning of a sentence amounts to knowing how it updates a context.[7] Pietroski treats meanings as instructions to build concepts.[8]
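The dynamic view can be illustrated with a small sketch. The following Haskell fragment is a minimal toy model, not a definitive rendering of any published dynamic semantics: it assumes a context can be represented as a set of candidate worlds (here, plain integers), and all names (`World`, `update`, `nancySmokes`) are invented for illustration.

```haskell
-- A context is the set of worlds still considered possible.
type World = Int
type Context = [World]
type Proposition = World -> Bool

-- Asserting a sentence updates the context by discarding the worlds
-- in which the sentence is false.
update :: Proposition -> Context -> Context
update p = filter p

-- Toy example: suppose "Nancy smokes" holds only in even-numbered worlds.
nancySmokes :: Proposition
nancySmokes = even

main :: IO ()
main = print (update nancySmokes [1, 2, 3, 4])  -- prints [2,4]
```

On this picture, the "meaning" of the sentence just is the update function itself, rather than a static truth value.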
The Principle of Compositionality is the fundamental assumption in formal semantics. This principle states that the denotation of a complex expression is determined by the denotations of its parts along with their mode of composition. For instance, the denotation of the English sentence "Nancy smokes" is determined by the meaning of "Nancy", the denotation of "smokes", and whatever semantic operations combine the meanings of subjects with the meanings of predicates. In a simplified semantic analysis, this idea would be formalized by positing that "Nancy" denotes Nancy herself, while "smokes" denotes a function which takes some individual x as an argument and returns the truth value "true" if x indeed smokes. Assuming that the words "Nancy" and "smokes" are semantically composed via function application, this analysis would predict that the sentence as a whole is true if Nancy indeed smokes.[9][10][11]
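The simplified analysis just described translates almost directly into code. The sketch below assumes a toy model in which the facts are given by a list of smokers; the data type `Entity` and the list `smokers` are invented for illustration.

```haskell
data Entity = Nancy | Paulina deriving (Eq, Show)

-- The toy model: which individuals smoke.
smokers :: [Entity]
smokers = [Nancy]

-- "Nancy" denotes the individual Nancy.
nancy :: Entity
nancy = Nancy

-- "smokes" denotes a function from individuals to truth values.
smokes :: Entity -> Bool
smokes x = x `elem` smokers

-- Composition by function application: the predicate applies to the subject.
main :: IO ()
main = print (smokes nancy)   -- True, since Nancy is among the smokers
```

The denotation of the whole sentence is computed purely from the denotations of its two parts plus one mode of composition (function application), which is exactly what the principle requires.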
Scope can be thought of as the semantic order of operations. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations.[12][13][14][15]
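To see how one surface form can have two scope construals, consider the textbook example "Every student read a book" (not one of the article's own examples). The sketch below is a toy model, with the student, book, and reading facts invented for illustration; it computes both readings and shows they can come apart.

```haskell
students, books :: [String]
students = ["ann", "ben"]
books    = ["b1", "b2"]

-- The toy facts: which student read which book.
readBy :: String -> String -> Bool
readBy s b = (s, b) `elem` [("ann", "b1"), ("ben", "b2")]

-- Surface scope ("every" over "a"): each student read some book,
-- possibly a different one for each student. True in this model.
everyOverA :: Bool
everyOverA = all (\s -> any (readBy s) books) students

-- Inverse scope ("a" over "every"): one particular book was read by
-- every student. False in this model.
aOverEvery :: Bool
aOverEvery = any (\b -> all (`readBy` b) students) books

main :: IO ()
main = print (everyOverA, aOverEvery)   -- (True,False)
```

Because the two construals differ only in the order in which the quantifiers are applied, the example makes concrete the sense in which scope is a semantic "order of operations".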
Binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding was a major component of the government and binding theory paradigm.
Modality is the phenomenon whereby language is used to discuss potentially non-actual scenarios. For instance, while a non-modal sentence such as "Nancy smoked" makes a claim about the actual world, modalized sentences such as "Nancy might have smoked" or "If Nancy smoked, I'll be sad" make claims about alternative scenarios. The most intensely studied expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" and "probable". However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals and generics. The standard treatment of linguistic modality was proposed by Angelika Kratzer in the 1970s, building on an earlier tradition of work in modal logic.[16][17][18]
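A minimal sketch of the possible-worlds idea behind such treatments is given below. It is in the spirit of, but far simpler than, Kratzer's actual analysis: worlds are plain integers, and the accessibility relation and the fact of who smokes where are invented for illustration.

```haskell
type World = Int

-- Which worlds count as possible alternatives from a given world.
accessible :: World -> [World]
accessible _ = [1, 2, 3]

-- "might p": p holds in at least one accessible world.
might :: (World -> Bool) -> World -> Bool
might p w = any p (accessible w)

-- "must p": p holds in every accessible world.
must :: (World -> Bool) -> World -> Bool
must p w = all p (accessible w)

-- Toy fact: Nancy smoked only in world 2.
smoked :: World -> Bool
smoked w = w == 2

main :: IO ()
main = print (might smoked 1, must smoked 1)   -- (True,False)
```

"Nancy might have smoked" comes out true at world 1 because some accessible world verifies the prejacent, while "Nancy must have smoked" comes out false, capturing the basic existential/universal contrast between possibility and necessity modals.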
Formal semantics emerged as a major area of research in the early 1970s, with the pioneering work of the philosopher and logician Richard Montague. Montague proposed a formal system now known as Montague grammar which consisted of a novel syntactic formalism for English, a logical system called Intensional Logic, and a set of homomorphic translation rules linking the two. In retrospect, Montague Grammar has been compared to a Rube Goldberg machine, but it was regarded as earth-shattering when first proposed, and many of its fundamental insights survive in the various semantic models which have superseded it.[19][20][21]
Montague Grammar was a major advance because it showed that natural languages could be treated as interpreted formal languages. Before Montague, many linguists had doubted that this was possible, and logicians of that era tended to view logic as a replacement for natural language rather than a tool for analyzing it.[21] Montague's work was published during the Linguistics Wars, and many linguists were initially puzzled by it. While linguists wanted a restrictive theory that could only model phenomena that occur in human languages, Montague sought a flexible framework that characterized the concept of meaning at its most general. At one conference, Montague told Barbara Partee that she was "the only linguist who it is not the case that I can't talk to".[21]
Formal semantics grew into a major subfield of linguistics in the late 1970s and early 1980s, due to the seminal work of Barbara Partee. Partee developed a linguistically plausible system which incorporated the key insights of both Montague Grammar and Transformational grammar. Early research in linguistic formal semantics used Partee's system to achieve a wealth of empirical and conceptual results.[21] Later work by Irene Heim, Angelika Kratzer, Tanya Reinhart, Robert May and others built on Partee's work to further reconcile it with the generative approach to syntax. The resulting framework is known as the Heim and Kratzer system, after the authors of the textbook Semantics in Generative Grammar which first codified and popularized it. The Heim and Kratzer system differs from earlier approaches in that it incorporates a level of syntactic representation called logical form which undergoes semantic interpretation. Thus, this system often includes syntactic representations and operations which were introduced by translation rules in Montague's system.[22][21] However, work by others such as Gerald Gazdar proposed models of the syntax-semantics interface which stayed closer to Montague's, providing a system of interpretation in which denotations could be computed on the basis of surface structures. These approaches live on in frameworks such as categorial grammar and combinatory categorial grammar.[23][21]
Cognitive semantics emerged as a reaction against formal semantics, but there have recently been several attempts at reconciling the two positions.[24]
|
https://en.wikipedia.org/wiki/Formal_semantics_(natural_language)
|
Whistled languages are linguistic systems that use whistling as a form of speech and facilitate communication between individuals. More than 80 languages have been found to practice various degrees of whistling, most of them in rugged topography or dense forests, where whistling expands the area of communication while movement to carry messages is challenging.[1] The practice is generally threatened by increased modernization and faster roads, but successful conservation efforts are recorded.[1]
A whistled language is a system of whistled communication which allows fluent whistlers to transmit and comprehend a potentially unlimited number of messages over long distances. Whistled languages are different in this respect from free associative whistling, which may be done to simulate music, to attract attention, or, in the case of herders or animal trainers, to transmit simple messages or instructions to animal companions. Generally, whistled languages emulate the tones or vowel formants of a natural spoken language, as well as aspects of its intonation and prosody, so that trained listeners who speak that language can understand the encoded message.
Whistled language is rare compared to spoken language, but it is found in cultures around the world.[2] It is especially common in tone languages, where the whistled tones transmit the tones of the syllables (the tone melodies of the words). This might be because in tone languages the tone melody carries more of the functional load of communication while non-tonal phonology carries proportionally less. The genesis of a whistled language has never been recorded in either case and has not yet received much productive study.
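The encoding idea, that whistling keeps the tone melody and discards segmental detail, can be sketched as a toy model. The tone inventory, pitch values, and all names below are invented for illustration; they do not describe any particular whistled language.

```haskell
-- A hypothetical four-tone inventory.
data Tone = High | Mid | Low | Rising deriving Show

-- A made-up mapping from lexical tones to whistle pitches in Hz.
pitches :: Tone -> [Double]
pitches High   = [2200]
pitches Mid    = [1600]
pitches Low    = [1100]
pitches Rising = [1100, 1650, 2200]   -- a contour tone, rendered as a glide

-- "Whistling" a word keeps only the tone melody of its syllables;
-- consonants and vowels are simply not represented.
whistle :: [Tone] -> [Double]
whistle = concatMap pitches

main :: IO ()
main = print (whistle [High, Low, Rising])   -- one pitch track per word
```

The sketch makes the functional-load point concrete: whatever meaning the tone melody carries survives the mapping, while everything carried by non-tonal phonology is lost, so the scheme is only viable where the melody does enough communicative work.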
Because whistled language is so much rarer than standard vocal language or non-verbal physical language such as sign language, historical research on whistled speech is sparse.
In early China, the technique of transcendental whistling, or xiao, was a kind of nonverbal language with affinities to the spiritual aspects of Daoist meditation.[3] The development of xiao as a practice and art form can be traced through the works of the Western Zhou dynasty, and it was initially used to convey a sense of grief, or to invoke the spirits of dearly departed loved ones. By the time of the Six Dynasties in Han China, xiao had become a widely-used complement to spoken language, irrespective of social class.[4] Due to the shrill tones employed while whistling, xiao was often used to punctuate intense feelings or reactions, such as joy, displeasure, and surprise.
In the Melpomene, the fourth book of his Histories, Herodotus makes a passing reference to an Ethiopian tribe who "spoke like bats".[5] While travelling through the territory of an ancient tribe on the southern Black Sea coast in 400 BCE, Xenophon wrote in the Anabasis that the Mossynoeci inhabitants could hear one another at great distances across the valleys. The same area encompasses the Turkish village of Kuşköy, where whistled speech (kuş dili) is practiced today.[6] Aelian later wrote in De Natura Animalium of the Kinoprosipi people of North Africa, who made use of "acute whistling" and whom later historians believe were likely a tribe of the Anuak in South Sudan.
In 1982, in the Greek village of Antia on Euboea island, the entire population knew the local whistled speech called sfyria,[7] but only a few whistlers remain now.[8]
Whistled languages have naturally developed in response to the necessity for humans to communicate in conditions of relative isolation, with possible causes being distance, noise levels, and night, as well as specific activities, such as social information, shepherding, hunting, fishing, courtship, or shamanism.[9] Because of this usage, they are mostly related to places with mountains or dense forests. Southern China, Papua New Guinea, the Amazon forest, Subsaharan Africa, Mexico, and Europe encompass most of these locations.
They have more recently been found in dense forests like the Amazon, where they may replace spoken dialogue in the villages while hunting or fishing to overcome the pressure of the acoustic environment.[8][10] The main advantage of whistled speech is that it allows the speaker to cover much larger distances (typically 1–2 kilometres (0.62–1.24 mi), but up to 5 km (3.1 mi) in mountains and less in reverberating forests) than ordinary speech, without the strain (and lesser range) of shouting. More specifically, whistled speech can reach a loudness of 130 dB, and the transmission range can reach up to 10 km (as verified in La Gomera, Canary Islands).[11] The long range of whistling is enhanced by the mountainous terrain found in areas where whistled languages are used. Many areas with such languages work hard to preserve their ancient traditions in the face of rapidly advancing telecommunications systems.
In some cases (e.g. Chinantec) the whistled speech is an important and integral part of the language and culture; in others (e.g. Nahuatl) its role is much lesser. Whistled speech may be very central and highly valued in a culture. Shouting is very rare in Sochiapam Chinantec. Men in that culture are subject to being fined if they do not handle whistle-speech well enough to perform certain town jobs. They may whistle for fun in situations where spoken speech could easily be heard.[12]
In Sochiapam, Oaxaca, and other places in Mexico, and reportedly in West and Southern Africa as well (specifically among the VhaVenda), whistled speech is men's language: although women may understand it, they do not use it.
Though whistled languages are not secret codes or secret languages (with the exception of a whistled language used by ñañigos insurgencies in Cuba during Spanish occupation),[13] they may be used for secretive communication in the presence of outsiders or others who do not know or understand the whistled language, even if they understand its spoken origin. Stories are told of farmers in Aas during World War II, or in La Gomera, who were able to hide evidence of such nefarious activities as milk-watering because they were warned in whistle-speech that the police were approaching.[13]
Various documentation, conservation, and revitalization efforts are ongoing. In France, the whistling of Aas is being systematically audio-recorded using the open source platform Lingua Libre. Those recordings have been used to create an interactive map of Occitan village names.[14]
Whistled languages differ according to whether the spoken language is tonal or not, with the whistling being either tone- or articulation-based (or both). Most whistled languages, of which there are several hundred, are based on tonal languages.
A way in which true whistled languages differ from other types of whistled communication is that they encode auditory features of spoken languages by 'transposing' (i.e. carrying over into a whistled form) key components of speech sounds. There are two types of whistled languages: those based on non-tone languages, which transpose formant patterns (especially the second formant, F2), and those based on tone languages, which transpose tonal melodies.[15] However, both types of whistle tones have a phonological structure that is related to the spoken language that they are transposing.
Tonal languages are often stripped of articulation, leaving only suprasegmental features such as duration and tone, and when whistled retain the spoken melodic line. Thus whistled tonal languages convey phonemic information solely through tone, length, and, to a lesser extent, stress, and most segmental phonemic distinctions of the spoken language are lost.
In non-tonal languages, more of the articulatory features of speech are retained, and the normally timbral variations imparted by the movements of the tongue and soft palate are transformed into pitch variations.[13] Certain consonants can be pronounced while whistling, so as to modify the whistled sound, much as consonants in spoken language modify the vowel sounds adjacent to them.
Different whistling styles may be used in a single language. Sochiapam Chinantec has three different words for whistle-speech: sie3 for whistling with the tongue against the alveolar ridge, jui̵32 for bilabial whistling, and juo2 for finger-in-the-mouth whistling. These are used for communication over varying distances. There is also a kind of loud falsetto (hóh32) which functions in some ways like whistled speech.[16]
Only the tone of the speech is preserved in the whistle, while aspects such as articulation and phonation are eliminated. These are replaced by other features such as stress and rhythmical variations. However, some languages, like that of the Zezuru, who speak a Shona-derived dialect, include articulation so that consonants interrupt the flow of the whistle. A similar language is the Tsonga whistle language used in the highlands of southern Mozambique. This should not be confused with the whistled sibilants of Shona.
There are two different types of whistle tones: hole tones and edge tones. A hole (or 'orifice') tone is produced by a fast-moving cylinder (or 'vena contracta') of air that interacts with the slow-moving annulus of air surrounding it.[17] Instability in the boundary layer leads to perturbations that grow until a feedback path is established, whereby specific frequencies of the resonance chamber are emphasized.[18] An edge tone, on the other hand, is generated by a thin jet of air that strikes an obstacle. Vortices are shed near the point of disturbance in the flow, alternating on each side of the obstacle or 'wedge'.[17]
One of the best-studied whistled languages is a whistled language based on Spanish called Silbo, whistled on the island of La Gomera in the Canary Islands (Rialland 2005). The number of distinctive sounds or phonemes in this language is a matter of disagreement, varying according to the researcher from two to five vowels and four to nine consonants. This variation may reflect differences in speakers' abilities as well as in the methods used to elicit contrasts. The work of Meyer[8][10] clarifies this debate by providing the first statistical analyses of production for various whistlers as well as psycholinguistic tests of vowel identification.
In a non-tonal language, segments may be differentiated roughly as follows: vowels are rendered as relative pitch levels or glides, while consonants are rendered by the pitch transitions and interruptions they impose on the whistle.
Whistling techniques do not require the vibration of the vocal cords: they produce a shock effect in the compressed air stream inside the cavity of the mouth and/or of the hands. When the jaws are fixed by a finger, the size of the hole is stable. The expelled air stream creates vibrations at the edge of the mouth; the faster the air stream is expelled, the louder the sound inside the cavities. If the hole (mouth) and the cavity (intra-oral volume) are well matched, the resonance is tuned and the whistle is projected more loudly. The frequency of this bioacoustical phenomenon is modulated by the morphing of the resonating cavity, which can be, to a certain extent, related to the articulation of the equivalent spoken form.[9] "Apart from the five vowel-phonemes [of Silbo Gomero]—and even these do not invariably have a fixed or steady pitch—all whistled speech-sound realizations are glides which are interpreted in terms of range, contour, and steepness."[13]
There are a few different techniques for producing whistled speech, the choice of which depends on practical concerns. Bilabial and labiodental techniques are common for short- and medium-distance conversations (in a market, in the noise of a room, or for hunting), whereas techniques such as retroflexing the tongue, introducing one or two fingers into the mouth, concentrating the blow at the junction between two fingers, or pulling the lower lip while breathing in are used to reach high levels of power for long-distance speaking.[9] Each community favors the technique that suits its most common use, along with the personal preferences of each whistler. Whistling with a leaf or a flute is often related to courtship or poetic expression (reported in the Kickapoo language in Mexico[19] and in the Hmong[20] and Akha[21] cultures in Asia).
"All whistled languages share one basic characteristic: they function by varying thefrequencyof a simplewave-formas afunctionof time, generally with minimaldynamic variations, which is readily understandable since in most cases their only purpose is long-distance communication."[13]A whistled tone is essentially a simple oscillation (orsine wave), and thustimbralvariations are impossible. Normal articulation during an ordinary lip-whistle is relatively easy though the lips move little causing a constant oflabializationand makinglabialandlabiodental consonants(p, b, m, f, etc.) problematical.[13]
The expressivity of whistled speech is likely to be somewhat limited compared to spoken speech (although not inherently so), but such a conclusion should not be taken as absolute, as it depends heavily on various factors including the phonology of the language. For example, in some tonal languages with few tones, whistled messages typically consist of stereotyped or otherwise standardized expressions, are elaborately descriptive, and often have to be repeated. However, in heavily tonal languages such as Mazatec and Yoruba, a large amount of information is conveyed through pitch even when spoken, and therefore extensive conversations may be whistled. In any case, even for non-tonal languages, measurements indicate that high intelligibility can be achieved with whistled speech: about 90% intelligibility of non-standardized sentences for Greek,[8] with equivalent results for Turkish.[22]
Patterns of misunderstanding can be examined with a confusion matrix. In a test with two speakers of Silbo (Jampolsky 1999), the study revealed that generally the vowels were relatively easy to understand and the consonants somewhat more difficult.[15]
More than 80 whistled languages have been found to date.[1] The following list is of languages that exist or existed in a whistled form, or of ethnic groups that speak such languages.
In West Africa, speech may be conveyed by a whistle or other musical instrument, most famously the "talking drums". However, while drums may be used by griots singing praise songs or for inter-village communication, and other instruments may be used on the radio for station identification jingles, for regular conversation at a distance whistled speech is used. As two people approach each other, one may even switch from whistled to spoken speech in mid-sentence.
|
https://en.wikipedia.org/wiki/Whistled_language
|
Andrey Andreyevich Markov[a] (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain. He was also a strong, close to master-level, chess player.
Markov and his younger brother Vladimir Andreyevich Markov (1871–1897) proved the Markov brothers' inequality.

His son, another Andrey Andreyevich Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory.[2]
Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student, and he performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University). Among his teachers were Yulian Sokhotski (differential calculus, higher algebra), Konstantin Posse (analytic geometry), Yegor Zolotarev (integral calculus), Pafnuty Chebyshev (number theory and probability theory), Aleksandr Korkin (ordinary and partial differential equations), Mikhail Okatov (mechanism theory), Osip Somov (mechanics), and Nikolai Budajev (descriptive and higher geometry). He completed his studies at the university and was later asked if he would like to stay and have a career as a mathematician. He later taught at high schools and continued his own mathematical studies. In this time he found a practical use for his mathematical skills: he figured out that he could use chains to model the alliteration of vowels and consonants in Russian literature. He also contributed to many other areas of mathematics in his time. He died at age 66 on 20 July 1922.
In 1877, Markov was awarded a gold medal for his outstanding solution of the problem
About Integration of Differential Equations by Continued Fractions with an Application to the Equation $(1+x^2)\frac{dy}{dx} = n(1+y^2)$.
During the following year, he passed the candidate's examinations, and he remained at the university to prepare for a lecturer's position.
In April 1880, Markov defended his master's thesis "On the Binary Square Forms with Positive Determinant", which was directed by Aleksandr Korkin and Yegor Zolotarev. Four years later, in 1884, he defended his doctoral thesis, "On Certain Applications of the Algebraic Continuous Fractions".
His pedagogical work began after the defense of his master's thesis in autumn 1880. As a privatdozent he lectured on differential and integral calculus. Later he lectured alternately on "introduction to analysis", probability theory (succeeding Chebyshev, who had left the university in 1882), and the calculus of differences. From 1895 through 1905 he also lectured in differential calculus.
One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor (1886) and in the same year he was elected adjunct to the Academy of Sciences. In 1890, after the death of Viktor Bunyakovsky, Markov became an extraordinary member of the academy. His promotion to an ordinary professor of St. Petersburg University followed in the fall of 1894.
In 1896, Markov was elected an ordinary member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire, which he did immediately. Until 1910, however, he continued to lecture in the calculus of differences.
In connection with student riots in 1908, professors and lecturers of St. Petersburg University were ordered to monitor their students. Markov refused to accept this decree, and he wrote an explanation in which he declined to be an "agent of the governance". Markov was removed from further teaching duties at St. Petersburg University, and hence he decided to retire from the university.
Markov was an atheist. In 1912, he responded to Leo Tolstoy's excommunication from the Russian Orthodox Church by requesting his own excommunication. The Church complied with his request.[3][4]
In 1913, the council of St. Petersburg elected nine scientists honorary members of the university. Markov was among them, but his election was not affirmed by the minister of education. The affirmation only occurred four years later, after the February Revolution in 1917. Markov then resumed his teaching activities and lectured on probability theory and the calculus of differences until his death in 1922.
|
https://en.wikipedia.org/wiki/Andrey_Markov
|
Bayesian inference (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:

$$P(H\mid E)=\frac{P(E\mid H)\cdot P(H)}{P(E)},$$

where $H$ is the hypothesis whose probability may be affected by the evidence, $E$ is the newly observed evidence, $P(H)$ is the prior probability of the hypothesis before the evidence is observed, $P(E\mid H)$ is the likelihood of observing the evidence given the hypothesis, $P(E)$ is the marginal likelihood of the evidence, and $P(H\mid E)$ is the posterior probability.
For different values of $H$, only the factors $P(H)$ and $P(E\mid H)$, both in the numerator, affect the value of $P(H\mid E)$ – the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence).
In cases where $\neg H$ ("not $H$"), the logical negation of $H$, is a valid likelihood, Bayes' rule can be rewritten as follows:

$$P(H\mid E) = \frac{P(E\mid H)\,P(H)}{P(E)} = \frac{P(E\mid H)\,P(H)}{P(E\mid H)\,P(H) + P(E\mid \neg H)\,P(\neg H)} = \frac{1}{1+\left(\frac{1}{P(H)}-1\right)\frac{P(E\mid \neg H)}{P(E\mid H)}},$$

because $P(E) = P(E\mid H)\,P(H) + P(E\mid \neg H)\,P(\neg H)$ and $P(H) + P(\neg H) = 1$. This focuses attention on the term $\left(\tfrac{1}{P(H)}-1\right)\tfrac{P(E\mid \neg H)}{P(E\mid H)}$. If that term is approximately 1, then $P(H\mid E)$, the probability of the hypothesis given the evidence, is about $\tfrac{1}{2}$: the hypothesis is as likely as not. If that term is very small, close to zero, then $P(H\mid E)$ is close to 1, and the hypothesis, given the evidence, is quite likely. If that term is very large, much larger than 1, then the hypothesis, given the evidence, is quite unlikely. If the hypothesis (without consideration of evidence) is unlikely, then $P(H)$ is small (but not necessarily astronomically small), $\tfrac{1}{P(H)}$ is much larger than 1, and the term can be approximated as $\tfrac{P(E\mid \neg H)}{P(E\mid H)\,P(H)}$, so the relevant probabilities can be compared directly to each other.
One quick and easy way to remember the equation is the rule of multiplication: $P(E\cap H) = P(E\mid H)\,P(H) = P(H\mid E)\,P(E)$.
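As a quick numerical illustration of the rule above, the following minimal sketch computes a posterior from a prior and two likelihoods, expanding $P(E)$ over $H$ and $\neg H$; the numbers are invented for illustration, not taken from the text:

```python
# Minimal numeric sketch of Bayes' rule for a single hypothesis H and
# evidence E. The probabilities below are hypothetical, for illustration only.

def posterior(prior_h: float, lik_e_given_h: float, lik_e_given_not_h: float) -> float:
    """Return P(H|E) via Bayes' theorem, with P(E) expanded over H and not-H."""
    p_e = lik_e_given_h * prior_h + lik_e_given_not_h * (1.0 - prior_h)
    return lik_e_given_h * prior_h / p_e

# Example: a weak prior P(H) = 0.3, evidence twice as likely under H as under not-H.
print(posterior(0.3, 0.8, 0.4))  # 0.4615...: belief in H increases from 0.3
```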
Bayesian updating is widely used and computationally convenient. However, it is not the only updating rule that might be considered rational.
Ian Hacking noted that traditional "Dutch book" arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. Hacking wrote:[2] "And neither the Dutch book argument nor any other in the personalist arsenal of proofs of the probability axioms entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
Indeed, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics") following the publication of Richard C. Jeffrey's rule, which applies Bayes' rule to the case where the evidence itself is assigned a probability.[3] The additional hypotheses needed to uniquely require Bayesian updating have been deemed to be substantial, complicated, and unsatisfactory.[4]
If evidence is simultaneously used to update belief over a set of exclusive and exhaustive propositions, Bayesian inference may be thought of as acting on this belief distribution as a whole.
Suppose a process is generating independent and identically distributed events $E_n,\ n = 1, 2, 3, \ldots$, but the probability distribution is unknown. Let the event space $\Omega$ represent the current state of belief for this process. Each model is represented by an event $M_m$. The conditional probabilities $P(E_n\mid M_m)$ are specified to define the models. $P(M_m)$ is the degree of belief in $M_m$. Before the first inference step, $\{P(M_m)\}$ is a set of initial prior probabilities. These must sum to 1, but are otherwise arbitrary.

Suppose that the process is observed to generate $E\in\{E_n\}$. For each $M\in\{M_m\}$, the prior $P(M)$ is updated to the posterior $P(M\mid E)$. From Bayes' theorem:[5]

$$P(M\mid E) = \frac{P(E\mid M)}{\sum_m P(E\mid M_m)\,P(M_m)} \cdot P(M).$$
Upon observation of further evidence, this procedure may be repeated.
For a sequence of independent and identically distributed observations $\mathbf{E} = (e_1, \dots, e_n)$, it can be shown by induction that repeated application of the above is equivalent to

$$P(M\mid \mathbf{E}) = \frac{P(\mathbf{E}\mid M)}{\sum_m P(\mathbf{E}\mid M_m)\,P(M_m)} \cdot P(M), \qquad \text{where} \quad P(\mathbf{E}\mid M) = \prod_k P(e_k\mid M).$$
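This equivalence of sequential and batch updating can be checked numerically. A minimal sketch, assuming numpy is available and using a hypothetical family of three coin-bias models and an invented data sequence:

```python
import numpy as np

# Sketch: belief over three candidate models of a biased coin, updated
# sequentially vs. in one batch. Model m says P(heads) = p_heads[m].
# The models and the data below are hypothetical illustrations.
p_heads = np.array([0.2, 0.5, 0.8])     # P(E = heads | M_m)
prior   = np.array([1/3, 1/3, 1/3])     # initial P(M_m), summing to 1
data    = [1, 1, 0, 1]                  # observed events: 1 = heads, 0 = tails

post = prior.copy()
for e in data:                          # sequential application of Bayes' theorem
    lik = np.where(e == 1, p_heads, 1 - p_heads)
    post = lik * post / np.sum(lik * post)

# Batch computation: P(E|M) is the product of the per-event likelihoods.
batch_lik = np.prod([np.where(e == 1, p_heads, 1 - p_heads) for e in data], axis=0)
batch_post = batch_lik * prior / np.sum(batch_lik * prior)
print(np.allclose(post, batch_post))    # True: both routes give the same posterior
```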
By parameterizing the space of models, the belief in all models may be updated in a single step. The distribution of belief over the model space may then be thought of as a distribution of belief over the parameter space. The distributions in this section are expressed as continuous, represented by probability densities, as this is the usual situation. The technique is, however, equally applicable to discrete distributions.
Let the vector $\boldsymbol{\theta}$ span the parameter space. Let the initial prior distribution over $\boldsymbol{\theta}$ be $p(\boldsymbol{\theta}\mid\boldsymbol{\alpha})$, where $\boldsymbol{\alpha}$ is a set of parameters to the prior itself, or hyperparameters. Let $\mathbf{E} = (e_1, \dots, e_n)$ be a sequence of independent and identically distributed event observations, where all $e_i$ are distributed as $p(e\mid\boldsymbol{\theta})$ for some $\boldsymbol{\theta}$. Bayes' theorem is applied to find the posterior distribution over $\boldsymbol{\theta}$:
$$p(\boldsymbol{\theta}\mid\mathbf{E},\boldsymbol{\alpha}) = \frac{p(\mathbf{E}\mid\boldsymbol{\theta},\boldsymbol{\alpha})}{p(\mathbf{E}\mid\boldsymbol{\alpha})}\cdot p(\boldsymbol{\theta}\mid\boldsymbol{\alpha}) = \frac{p(\mathbf{E}\mid\boldsymbol{\theta},\boldsymbol{\alpha})}{\int p(\mathbf{E}\mid\boldsymbol{\theta},\boldsymbol{\alpha})\,p(\boldsymbol{\theta}\mid\boldsymbol{\alpha})\,d\boldsymbol{\theta}}\cdot p(\boldsymbol{\theta}\mid\boldsymbol{\alpha}), \qquad \text{where} \quad p(\mathbf{E}\mid\boldsymbol{\theta},\boldsymbol{\alpha}) = \prod_k p(e_k\mid\boldsymbol{\theta}).$$
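When the prior is conjugate to the likelihood, the integral in the denominator has a closed form. A minimal sketch for the Beta-Bernoulli case, with hypothetical hyperparameter values and an invented observation sequence:

```python
# Sketch of the parameterized update for a Beta-Bernoulli model, where
# conjugacy makes the posterior a Beta distribution in closed form.
# The hyperparameters (a, b) and the data are hypothetical.

def beta_bernoulli_update(a: float, b: float, observations):
    """Posterior over theta (success probability) is Beta(a + #successes, b + #failures)."""
    successes = sum(observations)
    failures = len(observations) - successes
    return a + successes, b + failures

a_post, b_post = beta_bernoulli_update(2.0, 2.0, [1, 0, 1, 1, 0, 1])
print(a_post, b_post)                 # Beta(6, 4)
print(a_post / (a_post + b_post))     # posterior mean of theta: 0.6
```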
$$P_X^y(A) = E\bigl(1_A(X)\mid Y=y\bigr).$$ Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in his famous book from 1933. Kolmogorov underlines the importance of conditional probability by writing "I wish to call attention to ... and especially the theory of conditional probabilities and conditional expectations ..." in the Preface.[8] Bayes' theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions.[9] Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line.[10] Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem, including cases with improper priors.[11]
Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. That is, instead of a fixed point as a prediction, a distribution over possible points is returned. Only this way is the entire posterior distribution of the parameter(s) used. By comparison, prediction in frequentist statistics often involves finding an optimum point estimate of the parameter(s)—e.g., by maximum likelihood or maximum a posteriori estimation (MAP)—and then plugging this estimate into the formula for the distribution of a data point. This has the disadvantage that it does not account for any uncertainty in the value of the parameter, and hence will underestimate the variance of the predictive distribution.

In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution. This correctly estimates the variance, due to the facts that (1) the average of normally distributed random variables is also normally distributed, and (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a Student's t-distribution. In Bayesian statistics, however, the posterior predictive distribution can always be determined exactly—or at least to an arbitrary level of precision when numerical methods are used.

Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, such that the prior and posterior distributions come from the same family, it can be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution.
If $\tfrac{P(E\mid M)}{P(E)} > 1$, i.e. $P(E\mid M) > P(E)$, then belief in the model $M$ increases upon observing $E$: if the model were true, the evidence would be more likely than is predicted by the current state of belief. The reverse applies for a decrease in belief. If the belief does not change, $\tfrac{P(E\mid M)}{P(E)} = 1$, i.e. $P(E\mid M) = P(E)$: the evidence is independent of the model, and if the model were true, the evidence would be exactly as likely as predicted by the current state of belief.
If $P(M) = 0$ then $P(M\mid E) = 0$. If $P(M) = 1$ and $P(E) > 0$, then $P(M\mid E) = 1$. This can be interpreted to mean that hard convictions are insensitive to counter-evidence.

The former follows directly from Bayes' theorem. The latter can be derived by applying the first rule to the event "not $M$" in place of "$M$", yielding "if $1 - P(M) = 0$, then $1 - P(M\mid E) = 0$", from which the result immediately follows.
Consider the behaviour of a belief distribution as it is updated a large number of times with independent and identically distributed trials. For sufficiently nice prior probabilities, the Bernstein–von Mises theorem gives that in the limit of infinite trials, the posterior converges to a Gaussian distribution independent of the initial prior under some conditions first outlined and rigorously proven by Joseph L. Doob in 1948, namely if the random variable in consideration has a finite probability space. The more general results were obtained later by the statistician David A. Freedman, who published two seminal research papers, in 1963[12] and 1965,[13] establishing when and under what circumstances the asymptotic behaviour of the posterior is guaranteed. His 1963 paper treats, like Doob (1949), the finite case and comes to a satisfactory conclusion. However, if the random variable has an infinite but countable probability space (i.e., corresponding to a die with infinitely many faces), the 1965 paper demonstrates that for a dense subset of priors the Bernstein–von Mises theorem is not applicable. In this case there is almost surely no asymptotic convergence. Later in the 1980s and 1990s Freedman and Persi Diaconis continued to work on the case of infinite countable probability spaces.[14] To summarise, there may be insufficient trials to suppress the effects of the initial choice, and especially for large (but finite) systems the convergence might be very slow.
In parameterized form, the prior distribution is often assumed to come from a family of distributions called conjugate priors. The usefulness of a conjugate prior is that the corresponding posterior distribution will be in the same family, and the calculation may be expressed in closed form.

It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution.

For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator.[15]
If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation:[16]

$$\tilde{\theta} = \operatorname{E}[\theta] = \int \theta\, p(\theta\mid\mathbf{X},\alpha)\,d\theta.$$
Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates:[17]

$$\{\theta_{\text{MAP}}\} \subset \arg\max_{\theta} p(\theta\mid\mathbf{X},\alpha).$$
There are examples where no maximum is attained, in which case the set of MAP estimates is empty.
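Continuing the Beta posterior sketch from earlier, a Beta$(a, b)$ posterior with $a, b > 1$ has the closed-form MAP estimate $(a-1)/(a+b-2)$. The following sketch recovers it numerically with a grid argmax; the scipy dependency and the posterior values $(6, 4)$ are assumptions carried over from the earlier hypothetical example:

```python
import numpy as np
from scipy.stats import beta

# MAP estimate of a Beta(a, b) posterior, found by a brute-force grid argmax
# and compared against the closed form (a - 1) / (a + b - 2) for a, b > 1.
a_post, b_post = 6.0, 4.0                  # posterior from the earlier sketch
theta = np.linspace(0.001, 0.999, 9999)    # grid over the parameter space
theta_map = theta[np.argmax(beta.pdf(theta, a_post, b_post))]
print(theta_map, (a_post - 1) / (a_post + b_post - 2))  # both ~ 0.625
```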
There are other methods of estimation that minimize the posterior risk (expected posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics").[18]
The posterior predictive distribution of a new observation $\tilde{x}$ (that is independent of previous observations) is determined by[19]

$$p(\tilde{x}\mid\mathbf{X},\alpha) = \int p(\tilde{x},\theta\mid\mathbf{X},\alpha)\,d\theta = \int p(\tilde{x}\mid\theta)\,p(\theta\mid\mathbf{X},\alpha)\,d\theta.$$
Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let $H_1$ correspond to bowl #1, and $H_2$ to bowl #2.

It is given that the bowls are identical from Fred's point of view, thus $P(H_1) = P(H_2)$, and the two must add up to 1, so both are equal to 0.5.

The event $E$ is the observation of a plain cookie. From the contents of the bowls, we know that $P(E\mid H_1) = 30/40 = 0.75$ and $P(E\mid H_2) = 20/40 = 0.5$. Bayes' formula then yields

$$P(H_1\mid E) = \frac{P(E\mid H_1)\,P(H_1)}{P(E\mid H_1)\,P(H_1) + P(E\mid H_2)\,P(H_2)} = \frac{0.75\times 0.5}{0.75\times 0.5 + 0.5\times 0.5} = 0.6.$$

Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, $P(H_1)$, which was 0.5. After observing the cookie, we must revise the probability to $P(H_1\mid E)$, which is 0.6.
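The arithmetic of the cookie example can be checked directly; all quantities below come straight from the text:

```python
# The cookie example, verified numerically: two equiprobable bowls,
# and a plain cookie is observed.
priors = {"bowl1": 0.5, "bowl2": 0.5}
lik_plain = {"bowl1": 30 / 40, "bowl2": 20 / 40}

p_e = sum(lik_plain[h] * priors[h] for h in priors)        # P(E) = 0.625
posterior_bowl1 = lik_plain["bowl1"] * priors["bowl1"] / p_e
print(posterior_bowl1)                                      # 0.6
```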
An archaeologist is working at a site thought to be from the medieval period, between the 11th and the 16th centuries. However, it is uncertain exactly when in this period the site was inhabited. Fragments of pottery are found, some of which are glazed and some of which are decorated. It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. How confident can the archaeologist be in the date of inhabitation as fragments are unearthed?
The degree of belief in the continuous variable $C$ (century) is to be calculated, with the discrete set of events $\{GD, G\bar{D}, \bar{G}D, \bar{G}\bar{D}\}$ as evidence. Assuming linear variation of glaze and decoration with time, and that these variables are independent,

$$P(E=GD\mid C=c) = \left(0.01 + \frac{0.81-0.01}{16-11}(c-11)\right)\left(0.5 - \frac{0.5-0.05}{16-11}(c-11)\right)$$
$$P(E=G\bar{D}\mid C=c) = \left(0.01 + \frac{0.81-0.01}{16-11}(c-11)\right)\left(0.5 + \frac{0.5-0.05}{16-11}(c-11)\right)$$
$$P(E=\bar{G}D\mid C=c) = \left((1-0.01) - \frac{0.81-0.01}{16-11}(c-11)\right)\left(0.5 - \frac{0.5-0.05}{16-11}(c-11)\right)$$
$$P(E=\bar{G}\bar{D}\mid C=c) = \left((1-0.01) - \frac{0.81-0.01}{16-11}(c-11)\right)\left(0.5 + \frac{0.5-0.05}{16-11}(c-11)\right)$$

Assume a uniform prior of $f_C(c) = 0.2$, and that trials are independent and identically distributed. When a new fragment of type $e$ is discovered, Bayes' theorem is applied to update the degree of belief for each $c$:

$$f_C(c\mid E=e) = \frac{P(E=e\mid C=c)}{P(E=e)}\,f_C(c) = \frac{P(E=e\mid C=c)}{\int_{11}^{16} P(E=e\mid C=c)\,f_C(c)\,dc}\,f_C(c).$$

A computer simulation of the changing belief as 50 fragments are unearthed is shown on the graph. In the simulation, the site was inhabited around 1420, or $c = 15.2$. By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century. The Bernstein–von Mises theorem asserts here the asymptotic convergence to the "true" distribution because the probability space corresponding to the discrete set of events $\{GD, G\bar{D}, \bar{G}D, \bar{G}\bar{D}\}$ is finite (see above section on asymptotic behaviour of the posterior).
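A minimal sketch of such a simulation, discretizing the century variable on a grid and using the interpolation formulas given above; the random seed, grid size, and the true value $c = 15.2$ mirror the described setup but are otherwise illustrative:

```python
import numpy as np

# Discretized version of the archaeology example. Each fragment is coded
# as (glazed, decorated); the likelihoods use the linear interpolations above.
c = np.linspace(11, 16, 501)                    # century grid, 11th-16th
glaze = 0.01 + (0.81 - 0.01) / 5 * (c - 11)     # P(glazed | C = c)
decor = 0.50 - (0.50 - 0.05) / 5 * (c - 11)     # P(decorated | C = c)

def likelihood(g: bool, d: bool) -> np.ndarray:
    pg = glaze if g else 1 - glaze
    pd = decor if d else 1 - decor
    return pg * pd                              # independence assumption

rng = np.random.default_rng(0)
true_c = 15.2
belief = np.full_like(c, 0.2)                   # uniform prior density on [11, 16]
for _ in range(50):                             # 50 unearthed fragments
    g = rng.random() < np.interp(true_c, c, glaze)
    d = rng.random() < np.interp(true_c, c, decor)
    belief *= likelihood(g, d)
    belief /= np.sum(belief) * (c[1] - c[0])    # renormalize the density

mask = (c >= 15) & (c < 16)                     # posterior mass on the 15th century
print(np.sum(belief[mask]) * (c[1] - c[0]))
```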
A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.[20]
Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals.[21][22][23]
Bayesian methodology also plays a role in model selection, where the aim is to select one model from a set of competing models that represents most closely the underlying process that generated the observed data. In Bayesian model comparison, the model with the highest posterior probability given the data is selected. The posterior probability of a model depends on the evidence, or marginal likelihood, which reflects the probability that the data is generated by the model, and on the prior belief in the model. When two competing models are a priori considered to be equiprobable, the ratio of their posterior probabilities corresponds to the Bayes factor. Since Bayesian model comparison is aimed at selecting the model with the highest posterior probability, this methodology is also referred to as the maximum a posteriori (MAP) selection rule[28] or the MAP probability rule.[29]
While conceptually simple, Bayesian methods can be mathematically and numerically challenging. Probabilistic programming languages (PPLs) implement functions to easily build Bayesian models together with efficient automatic inference methods. This helps separate the model building from the inference, allowing practitioners to focus on their specific problems and leaving PPLs to handle the computational details for them.[30][31][32]
See the separate Wikipedia entry on Bayesian statistics, specifically the statistical modeling section of that page.
Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s.[33] There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques, since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms like Gibbs sampling and other Metropolis–Hastings algorithm schemes.[34] Bayesian inference has also gained popularity among the phylogenetics community for these reasons; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously.
As applied to statistical classification, Bayesian inference has been used to develop algorithms for identifying e-mail spam. Applications which make use of Bayesian inference for spam filtering include CRM114, DSPAM, Bogofilter, SpamAssassin, SpamBayes, Mozilla, XEAMS, and others. Spam classification is treated in more detail in the article on the naïve Bayes classifier.
Solomonoff's inductive inference is the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It is a formal inductive framework that combines two well-studied principles of inductive inference: Bayesian statistics and Occam's razor.[35] Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in an optimal fashion.[36][37]
Bayesian inference has been applied in different bioinformatics applications, including differential gene expression analysis.[38] Bayesian inference is also used in a general cancer risk model, called CIRI (Continuous Individualized Risk Index), where serial measurements are incorporated to update a Bayesian model which is primarily built from prior knowledge.[39][40]
Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for "beyond a reasonable doubt".[41][42][43] Bayes' theorem is applied successively to all evidence presented, with the posterior from one stage becoming the prior for the next. The benefit of a Bayesian approach is that it gives the juror an unbiased, rational mechanism for combining evidence. It may be appropriate to explain Bayes' theorem to jurors in odds form, as betting odds are more widely understood than probabilities. Alternatively, a logarithmic approach, replacing multiplication with addition, might be easier for a jury to handle.
If the existence of the crime is not in doubt, only the identity of the culprit, it has been suggested that the prior should be uniform over the qualifying population.[44]For example, if 1,000 people could have committed the crime, the prior probability of guilt would be 1/1000.
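In odds form, each successive piece of evidence simply multiplies the current odds by its likelihood ratio. A sketch with an invented set of likelihood ratios, using the uniform 1/1000 prior suggested above:

```python
# Odds-form Bayesian updating for the courtroom discussion above.
# The likelihood ratios are invented for illustration; the prior odds
# follow the uniform 1/1000 prior over a qualifying population of 1,000.
prior_odds = 1 / 999                    # P(guilty) / P(innocent) when P(guilty) = 1/1000
likelihood_ratios = [50.0, 4.0, 0.5]    # P(evidence | guilty) / P(evidence | innocent)

posterior_odds = prior_odds
for lr in likelihood_ratios:            # Bayes' theorem in odds form: multiply
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)                   # ~0.091 with these invented numbers
```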
The use of Bayes' theorem by jurors is controversial. In the United Kingdom, a defence expert witness explained Bayes' theorem to the jury in R v Adams. The jury convicted, but the case went to appeal on the basis that no means of accumulating evidence had been provided for jurors who did not wish to use Bayes' theorem. The Court of Appeal upheld the conviction, but it also gave the opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task."
Gardner-Medwin[45] argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value). He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime, which is an unusual piece of evidence to consider in a criminal trial. Consider the following three propositions: (A) the known facts and testimony could have arisen if the defendant is guilty; (B) the known facts and testimony could have arisen if the defendant is innocent; and (C) the defendant is guilty.
Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they will be letting some guilty people go free. See also Lindley's paradox.
Bayesian epistemology is a movement that advocates for Bayesian inference as a means of justifying the rules of inductive logic.
Karl Popper and David Miller have rejected the idea of Bayesian rationalism, i.e. using Bayes' rule to make epistemological inferences:[46] it is prone to the same vicious circle as any other justificationist epistemology, because it presupposes what it attempts to justify. According to this view, a rational interpretation of Bayesian inference would see it merely as a probabilistic version of falsification, rejecting the belief, commonly held by Bayesians, that high likelihood achieved by a series of Bayesian updates would prove the hypothesis beyond any reasonable doubt, or even with likelihood greater than 0.
The problem considered by Bayes in Proposition 9 of his essay, "An Essay Towards Solving a Problem in the Doctrine of Chances", is the posterior distribution for the parameter a (the success rate) of the binomial distribution.
The term Bayesian refers to Thomas Bayes (1701–1761), who proved that probabilistic limits could be placed on an unknown event. However, it was Pierre-Simon Laplace (1749–1827) who introduced (as Principle VI) what is now called Bayes' theorem and used it to address problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[54] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes[55]). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.[55]

In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objective or "non-informative" current, the statistical analysis depends on only the model assumed, the data analyzed,[56] and the method assigning the prior, which differs from one objective Bayesian practitioner to another. In the subjective or "informative" current, the specification of the prior depends on the belief (that is, propositions on which the analysis is prepared to act), which can summarize information from experts, previous studies, etc.

In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications.[57] Despite the growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics.[58] Nonetheless, Bayesian methods are widely accepted and used, for example in the field of machine learning.[59]
|
https://en.wikipedia.org/wiki/Bayesian_inference
|
Richard J. Boys (6 April 1960 – 5 March 2019) was a statistician best known for his contributions to Bayesian inference, hidden Markov models, and stochastic systems.[1]
Richard attended Newcastle University, where he obtained a BSc in mathematics in 1981. He went on to do a master's degree and a doctorate at the University of Sheffield, completing the latter in 1985.[1]
In 1986, Boys published his first paper, "Screening in a Normal Model", co-written with Ian Dunsmore, in Series B of the RSS's journal. He was known for collaborating widely on his papers.[2]
In the same year, he started a lectureship at Newcastle University and would stay at Newcastle for his whole career. In 1996, he became a senior lecturer, and in 2005, he became a Professor of Applied Statistics.[1]
Around the end of the 1990s, Richard started to steer towards statistics in biology and was particularly interested in Markov models for segmenting DNA sequences. This led to him researching biological and computational stochastic systems, which widened out to stochastic systems in general, where most of his contributions lay.[1]
His most cited paper, "Bayesian inference for a stochastic kinetic model", was featured in the scientific journal Statistics and Computing in 2008. The paper outlined how exact Bayesian inference may be possible for the parameters of a general range of biochemical network models, which helped create a new field of research in computational biology.[1][3]
Richard embarked on a long-standing collaboration with mathematicians, archaeologists, and a fellow statistician and colleague, Andrew Golightly. They researched inference for population dynamics during the Neolithic period, which led to publications in archaeology, physics, and statistics.[1]
Richard was fond of visiting Australia. He first visited the country in 2003 to attend a bioinformatics conference in Brisbane. He was also an Associate Investigator for the ARC Centre of Excellence for Mathematical and Statistical Frontiers.[4]
He held a Deputy Head position from 2004 to 2009. He was also on the Newcastle University Senate for a term. By the time of his death, he was Head of Pure Mathematics and Statistics.[1]
|
https://en.wikipedia.org/wiki/Richard_James_Boys
|
Conditional random fields (CRFs) are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without considering "neighbouring" samples, a CRF can take context into account. To do so, the predictions are modelled as a graphical model, which represents the presence of dependencies between the predictions. The kind of graph used depends on the application. For example, in natural language processing, "linear chain" CRFs are popular, for which each prediction is dependent only on its immediate neighbours. In image processing, the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions.

Other examples where CRFs are used are: labeling or parsing of sequential data for natural language processing or biological sequences,[1] part-of-speech tagging, shallow parsing,[2] named entity recognition,[3] gene finding, peptide critical functional region finding,[4] and object recognition[5] and image segmentation in computer vision.[6]

CRFs are a type of discriminative undirected probabilistic graphical model.
Lafferty, McCallum and Pereira[1] define a CRF on observations $\mathbf{X}$ and random variables $\mathbf{Y}$ as follows:

Let $G = (V, E)$ be a graph such that $\mathbf{Y} = (\mathbf{Y}_v)_{v\in V}$, so that $\mathbf{Y}$ is indexed by the vertices of $G$. Then $(\mathbf{X}, \mathbf{Y})$ is a conditional random field when each random variable $\mathbf{Y}_v$, conditioned on $\mathbf{X}$, obeys the Markov property with respect to the graph; that is, its probability is dependent only on its neighbours in $G$:

$$P(\mathbf{Y}_v \mid \mathbf{X}, \{\mathbf{Y}_w : w \neq v\}) = P(\mathbf{Y}_v \mid \mathbf{X}, \{\mathbf{Y}_w : w \sim v\}),$$

where $w \sim v$ means that $w$ and $v$ are neighbors in $G$.
What this means is that a CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets $\mathbf{X}$ and $\mathbf{Y}$, the observed and output variables, respectively; the conditional distribution $p(\mathbf{Y}\mid\mathbf{X})$ is then modeled.
For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MRF and the same arguments hold.[7] However, there exist special cases for which exact inference is feasible: if the graph is a chain or a tree, message passing algorithms yield exact solutions (analogous to the forward-backward and Viterbi algorithms for HMMs), and if the CRF only contains pair-wise potentials and the energy is submodular, combinatorial min-cut/max-flow algorithms yield exact solutions.
If exact inference is impossible, several algorithms can be used to obtain approximate solutions, including loopy belief propagation, alpha expansion, mean field inference, and linear programming relaxations.
Learning the parameters $\theta$ is usually done by maximum likelihood learning for $p(Y_i\mid X_i; \theta)$. If all nodes have exponential family distributions and all nodes are observed during training, this optimization is convex.[7] It can be solved, for example, using gradient descent algorithms or quasi-Newton methods such as the L-BFGS algorithm. On the other hand, if some variables are unobserved, the inference problem has to be solved for these variables. Exact inference is intractable in general graphs, so approximations have to be used.
In sequence modeling, the graph of interest is usually a chain graph. An input sequence of observed variables $X$ represents a sequence of observations, and $Y$ represents a hidden (or unknown) state variable that needs to be inferred given the observations. The $Y_i$ are structured to form a chain, with an edge between each $Y_{i-1}$ and $Y_i$. As well as having a simple interpretation of the $Y_i$ as "labels" for each element in the input sequence, this layout admits efficient algorithms for model training, for decoding (finding the most likely label sequence), and for computing the marginal distributions of individual labels.
The conditional dependency of each $Y_i$ on $X$ is defined through a fixed set of feature functions of the form $f(i, Y_{i-1}, Y_i, X)$, which can be thought of as measurements on the input sequence that partially determine the likelihood of each possible value for $Y_i$. The model assigns each feature a numerical weight and combines them to determine the probability of a certain value for $Y_i$.
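A minimal sketch of how such feature functions combine into an (unnormalized) linear-chain CRF score; the feature functions, weights, and tag names are toy stand-ins, and a real implementation would also compute the partition function $Z(x)$ to normalize:

```python
# Toy linear-chain CRF scoring. Each feature function has the shape
# f(i, y_prev, y_cur, x) described above; weights are hypothetical.
def features(i, y_prev, y_cur, x):
    return [
        1.0 if (y_cur == "NOUN" and x[i].istitle()) else 0.0,   # emission-like
        1.0 if (y_prev == "DET" and y_cur == "NOUN") else 0.0,  # transition-like
    ]

weights = [1.5, 2.0]  # one weight per feature function (invented values)

def unnormalized_log_prob(x, y):
    """Sum of weighted features over all positions. Exponentiating and
    dividing by the partition function Z(x) would give p(y | x)."""
    start = "<s>"
    total = 0.0
    for i in range(len(x)):
        y_prev = y[i - 1] if i > 0 else start
        total += sum(w * f for w, f in zip(weights, features(i, y_prev, y[i], x)))
    return total

print(unnormalized_log_prob(["The", "Dog"], ["DET", "NOUN"]))  # 3.5
```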
Linear-chain CRFs have many of the same applications as conceptually simpler hidden Markov models (HMMs), but relax certain assumptions about the input and output sequence distributions. An HMM can loosely be understood as a CRF with very specific feature functions that use constant probabilities to model state transitions and emissions. Conversely, a CRF can loosely be understood as a generalization of an HMM that makes the constant transition probabilities into arbitrary functions that vary across the positions in the sequence of hidden states, depending on the input sequence.
Notably, in contrast to HMMs, CRFs can contain any number of feature functions, the feature functions can inspect the entire input sequence $X$ at any point during inference, and the range of the feature functions need not have a probabilistic interpretation.
CRFs can be extended into higher-order models by making each $Y_i$ dependent on a fixed number $k$ of previous variables $Y_{i-k}, \ldots, Y_{i-1}$. In conventional formulations of higher-order CRFs, training and inference are only practical for small values of $k$ (such as $k \leq 5$),[8] since their computational cost increases exponentially with $k$.
However, another recent advance has managed to ameliorate these issues by leveraging concepts and tools from the field of Bayesian nonparametrics. Specifically, the CRF-infinity approach[9] constitutes a CRF-type model that is capable of learning infinitely-long temporal dynamics in a scalable fashion. This is effected by introducing a novel potential function for CRFs that is based on the Sequence Memoizer (SM), a nonparametric Bayesian model for learning infinitely-long dynamics in sequential observations.[10] To render such a model computationally tractable, CRF-infinity employs a mean-field approximation[11] of the postulated novel potential functions (which are driven by an SM). This allows for devising efficient approximate training and inference algorithms for the model, without undermining its capability to capture and model temporal dependencies of arbitrary length.
There exists another generalization of CRFs, the semi-Markov conditional random field (semi-CRF), which models variable-length segmentations of the label sequence $Y$.[12] This provides much of the power of higher-order CRFs to model long-range dependencies of the $Y_i$, at a reasonable computational cost.
Finally, large-margin models for structured prediction, such as the structured support vector machine, can be seen as an alternative training procedure to CRFs.
Latent-dynamic conditional random fields (LDCRF) or discriminative probabilistic latent variable models (DPLVM) are a type of CRF for sequence tagging tasks. They are latent variable models that are trained discriminatively.
In an LDCRF, as in any sequence tagging task, given a sequence of observations $\mathbf{x} = x_1, \dots, x_n$, the main problem the model must solve is how to assign a sequence of labels $\mathbf{y} = y_1, \dots, y_n$ from one finite set of labels $Y$. Instead of directly modeling $P(\mathbf{y} \mid \mathbf{x})$ as an ordinary linear-chain CRF would do, a set of latent variables $\mathbf{h}$ is "inserted" between $\mathbf{x}$ and $\mathbf{y}$ using the chain rule of probability:[13]
$$P(\mathbf{y} \mid \mathbf{x}) = \sum_{\mathbf{h}} P(\mathbf{y} \mid \mathbf{h}, \mathbf{x})\, P(\mathbf{h} \mid \mathbf{x})$$
This allows capturing latent structure between the observations and labels.[14] While LDCRFs can be trained using quasi-Newton methods, a specialized version of the perceptron algorithm called the latent-variable perceptron has been developed for them as well, based on Collins' structured perceptron algorithm.[13] These models find applications in computer vision, specifically gesture recognition from video streams[14] and shallow parsing.[13]
|
https://en.wikipedia.org/wiki/Conditional_random_field
|
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:[1] the probabilistic approach, which assumes that the measured data are random with a probability distribution dependent on the parameters of interest, and the set-membership approach, which assumes that the measured data vector belongs to a set that depends on the parameter vector.
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample, a set of data points taken from a random vector (RV) of size $N$, put into a vector
$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$
Secondly, there are $M$ parameters
$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},$$
whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters: $p(\mathbf{x} \mid \boldsymbol{\theta})$. It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics); it is then necessary to define the Bayesian probability $\pi(\boldsymbol{\theta})$. After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters, $\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}$, as the basis for optimality. This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator.
Commonly used estimators (estimation methods) and topics related to them include maximum likelihood estimators, Bayes estimators, the method of moments, minimum mean squared error (MMSE) estimators, maximum a posteriori (MAP) estimators, minimum variance unbiased estimators (MVUE), best linear unbiased estimators (BLUE), least squares, the Cramér–Rao bound, and the Kalman, Wiener, and particle filters.
Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $\mathcal{N}(0, \sigma^2)$).
Since the variance is known, the only unknown parameter is $A$.
The model for the signal is then
$$x[n] = A + w[n], \quad n = 0, 1, \dots, N-1.$$
Two possible (of many) estimators for the parameter $A$ are the first sample, $\hat{A}_1 = x[0]$, and the sample mean, $\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$.
Both of these estimators have a mean of $A$, which can be shown by taking the expected value of each estimator:
$$\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A$$
and
$$\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N} \left[\sum_{n=0}^{N-1} \mathrm{E}\left[x[n]\right]\right] = \frac{1}{N}\left[N A\right] = A.$$
At this point, these two estimators would appear to perform the same.
However, the difference between them becomes apparent when comparing the variances:
$$\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2$$
and
$$\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right) \overset{\text{independence}}{=} \frac{1}{N^2} \left[\sum_{n=0}^{N-1} \mathrm{var}(x[n])\right] = \frac{1}{N^2}\left[N \sigma^2\right] = \frac{\sigma^2}{N}.$$
It would seem that the sample mean is a better estimator, since its variance is lower for every N > 1.
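A quick Monte Carlo check makes this variance comparison concrete. The sketch below (plain NumPy; the true constant, sample size, and noise level are arbitrary choices) draws many realizations of the signal and compares the empirical variances of the two estimators with the theoretical values $\sigma^2$ and $\sigma^2/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N = 3.0, 2.0, 25      # true constant, noise std, samples per trial
trials = 100_000

# Each row is one realization x[n] = A + w[n], n = 0..N-1.
x = A + sigma * rng.standard_normal((trials, N))

A1 = x[:, 0]          # estimator 1: the first sample
A2 = x.mean(axis=1)   # estimator 2: the sample mean

print("var(A1) ~", A1.var(), " theory:", sigma**2)        # ~4.0
print("var(A2) ~", A2.var(), " theory:", sigma**2 / N)    # ~0.16
```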
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is
$$p(w[n]) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2} w[n]^2\right)$$
and the probability of $x[n]$ becomes ($x[n]$ can be thought of as $\mathcal{N}(A, \sigma^2)$)
$$p(x[n]; A) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2} (x[n] - A)^2\right).$$
By independence, the probability of $\mathbf{x}$ becomes
$$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]; A) = \frac{1}{\left(\sigma \sqrt{2\pi}\right)^N} \exp\left(-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2\right).$$
Taking the natural logarithm of the pdf
$$\ln p(\mathbf{x}; A) = -N \ln\left(\sigma \sqrt{2\pi}\right) - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2,$$
the maximum likelihood estimator is
$$\hat{A} = \arg\max \ln p(\mathbf{x}; A).$$
Taking the first derivative of the log-likelihood function
$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} (x[n] - A)\right] = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} x[n] - N A\right]$$
and setting it to zero gives
$$0 = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} x[n] - N A\right] = \sum_{n=0}^{N-1} x[n] - N A.$$
This results in the maximum likelihood estimator
$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n],$$
which is simply the sample mean.
From this example, it was found that the sample mean is the maximum likelihood estimator forN{\displaystyle N}samples of a fixed, unknown parameter corrupted by AWGN.
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number
$$\mathcal{I}(A) = \mathrm{E}\left(\left[\frac{\partial}{\partial A} \ln p(\mathbf{x}; A)\right]^2\right) = -\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right]$$
and, copying from above,
$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \left[\sum_{n=0}^{N-1} x[n] - N A\right].$$
Taking the second derivative
$$\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2}(-N) = \frac{-N}{\sigma^2},$$
finding the negative expected value is trivial since it is now a deterministic constant:
$$-\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right] = \frac{N}{\sigma^2}.$$
Finally, putting the Fisher information into
$$\mathrm{var}\left(\hat{A}\right) \geq \frac{1}{\mathcal{I}}$$
results in
$$\mathrm{var}\left(\hat{A}\right) \geq \frac{\sigma^2}{N}.$$
Comparing this to the variance of the sample mean (determined previously) shows that the variance of the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $A$.
In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution $1, 2, \dots, N$ with unknown maximum, the UMVU estimator for the maximum is given by
$$\frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$$
where $m$ is the sample maximum and $k$ is the sample size, sampling without replacement.[2][3] This problem is commonly known as the German tank problem, due to the application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as the sample maximum plus the average gap between observations in the sample, the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.[note 1]
This has a variance of[2]
$$\frac{1}{k} \frac{(N-k)(N+1)}{(k+2)} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N,$$
so a standard deviation of approximately $N/k$, the (population) average size of a gap between samples; compare $\frac{m}{k}$ above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is themaximum likelihoodestimator for the population maximum, but, as discussed above, it is biased.
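The following sketch simulates this setting to show the UMVU estimator correcting the downward bias of the sample maximum; the population size, sample size, and trial count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, trials = 1000, 10, 50_000   # population max, sample size, repetitions

mle, umvu = [], []
for _ in range(trials):
    sample = rng.choice(np.arange(1, N + 1), size=k, replace=False)
    m = sample.max()               # sample maximum = MLE (biased low)
    mle.append(m)
    umvu.append(m + m / k - 1)     # UMVU: add the average gap m/k, minus 1

print("mean of MLE  :", np.mean(mle))    # noticeably below N = 1000
print("mean of UMVU :", np.mean(umvu))   # close to N = 1000
```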
Numerous fields require the use of estimation theory. Some of these fields include the interpretation of scientific experiments, signal processing, clinical trials, opinion polls, quality control, telecommunications, control theory, and orbit determination.
Measured data are likely to be subject to noise or uncertainty, and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
|
https://en.wikipedia.org/wiki/Estimation_theory
|
The HH-suite is an open-source software package for sensitive protein sequence searching. It contains programs that can search for similar protein sequences in protein sequence databases. Sequence searches are a standard tool in modern biology with which the function of unknown proteins can be inferred from the functions of proteins with similar sequences. HHsearch and HHblits are two main programs in the package and the entry point to its search function, the latter being a faster iteration.[2][3] HHpred is an online server for protein structure prediction that uses homology information from HH-suite.[4]
The HH-suite searches for sequences using hidden Markov models (HMMs). The name comes from the fact that it performs HMM-HMM alignments. Among the most popular methods for protein sequence matching, the programs have been cited more than 5000 times in total, according to Google Scholar.[5]
Proteins are central players in all of life's processes. Understanding them is central to understanding molecular processes in cells, and this is particularly important in order to understand the origin of diseases. But for a large fraction of the approximately 20 000 human proteins, the structures and functions remain unknown. Many proteins have been investigated in model organisms, such as bacteria, baker's yeast, fruit flies, zebrafish, or mice, for which experiments can often be done more easily than with human cells. To predict the function, structure, or other properties of a protein for which only its sequence of amino acids is known, the protein sequence is compared to the sequences of other proteins in public databases. If a protein with a sufficiently similar sequence is found, the two proteins are likely to be evolutionarily related ("homologous"). In that case, they are likely to share similar structures and functions. Therefore, if a protein with a sufficiently similar sequence and with known functions and/or structure can be found by the sequence search, the unknown protein's functions, structure, and domain composition can be predicted. Such predictions greatly facilitate the determination of the function or structure by targeted validation experiments.
Sequence searches are frequently performed by biologists to infer the function of an unknown protein from its sequence. For this purpose, the protein's sequence is compared to the sequences of other proteins in public databases and its function is deduced from those of the most similar sequences. Often, no sequences with annotated functions can be found in such a search. In this case, more sensitive methods are required to identify more remotely related proteins or protein families. From these relationships, hypotheses about the protein's functions, structure, and domain composition can be inferred. HHsearch performs searches with a protein sequence through databases. The HHpred server and the HH-suite software package offer many popular, regularly updated databases, such as the Protein Data Bank, as well as the InterPro, Pfam, COG, and SCOP databases.
Modern sensitive methods for protein search utilize sequence profiles. They may be used to compare a sequence to a profile, or in more advanced cases such as HH-suite, to match among profiles.[2][6][7][8] Profiles and alignments are themselves derived from matches, using for example PSI-BLAST or HHblits. A position-specific scoring matrix (PSSM) profile contains, for each position in the query sequence, the similarity score for the 20 amino acids. The profiles are derived from multiple sequence alignments (MSAs), in which related proteins are written together (aligned), such that the frequencies of amino acids in each position can be interpreted as probabilities for amino acids in new related proteins, and be used to derive the "similarity scores". Because profiles contain much more information than a single sequence (e.g. the position-specific degree of conservation), profile-profile comparison methods are much more powerful than sequence-sequence comparison methods like BLAST or profile-sequence comparison methods like PSI-BLAST.[6]
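To make the profile idea concrete, the sketch below derives per-column amino-acid probabilities from a toy multiple sequence alignment and converts them into log-odds scores against background frequencies. The tiny alignment, the reduced alphabet, the uniform background, and the pseudocount value are all illustrative assumptions, not the procedure of any particular program.

```python
import math
from collections import Counter

# Toy MSA: one string per aligned sequence; columns are alignment positions.
msa = ["ACDA", "ACEA", "GCDA", "ACDG"]
alphabet = "ACDEG"                    # reduced alphabet for the example
background = {aa: 1 / len(alphabet) for aa in alphabet}  # uniform background
pseudo = 1.0                          # Laplace pseudocount

def pssm_column(column):
    """Log-odds score for each residue in one alignment column."""
    counts = Counter(column)
    total = len(column) + pseudo * len(alphabet)
    return {aa: math.log2((counts[aa] + pseudo) / total / background[aa])
            for aa in alphabet}

pssm = [pssm_column([seq[i] for seq in msa]) for i in range(len(msa[0]))]

def score(seq):
    """Score a new sequence against the profile: sum of per-position log-odds."""
    return sum(pssm[i][aa] for i, aa in enumerate(seq))

print(round(score("ACDA"), 2))  # close to the consensus: high score
print(round(score("GGEG"), 2))  # unlike the profile: low score
```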
HHpred and HHsearch represent query and database proteins by profile hidden Markov models (HMMs), an extension of PSSM sequence profiles that also records position-specific amino acid insertion and deletion frequencies. HHsearch searches a database of HMMs with a query HMM. Before starting the search through the actual database of HMMs, HHsearch/HHpred builds a multiple sequence alignment of sequences related to the query sequence/MSA using the HHblits program. From this alignment, a profile HMM is calculated. The databases contain HMMs that are precalculated in the same fashion using PSI-BLAST. The output of HHpred and HHsearch is a ranked list of database matches (including E-values and probabilities for a true relationship) and the pairwise query-database sequence alignments.
HHblits, a part of the HH-suite since 2011, builds high-quality multiple sequence alignments (MSAs) starting from a single query sequence or an MSA. As in PSI-BLAST, it works iteratively, repeatedly constructing new query profiles by adding the results found in the previous round. It matches against pre-built HMM databases derived from protein sequence databases, each entry representing a "cluster" of related proteins. In the case of HHblits, such matches are done on the level of HMM-HMM profiles, which grants additional sensitivity. Its prefiltering reduces the tens of millions of HMMs to match against to a few thousand, thus speeding up the slow HMM-HMM comparison process.[3]
The HH-suite comes with a number of pre-built profile HMMs that can be searched using HHblits and HHsearch, among them a clustered version of the UniProt database, of the Protein Data Bank of proteins with known structures, of Pfam protein family alignments, of SCOP structural protein domains, and many more.[9]
Applications of HHpred and HHsearch include protein structure prediction, complex structure prediction, function prediction, domain prediction, domain boundary prediction, and evolutionary classification of proteins.[10]
HHsearch is often used for homology modeling, that is, to build a model of the structure of a query protein for which only the sequence is known: for that purpose, a database of proteins with known structures, such as the Protein Data Bank, is searched for "template" proteins similar to the query protein. If such a template protein is found, the structure of the protein of interest can be predicted based on a pairwise sequence alignment of the query with the template protein sequence. A search through the PDB database of proteins with solved 3D structure takes a few minutes. If a significant match with a protein of known structure (a "template") is found in the PDB database, HHpred allows the user to build a homology model using the MODELLER software, starting from the pairwise query-template alignment.
HHpred servers have been ranked among the best servers in the CASP7, 8, and 9 blind protein structure prediction experiments. In CASP9, HHpredA, B, and C were ranked 1st, 2nd, and 3rd out of 81 participating automatic structure prediction servers in template-based modeling[11] and 6th, 7th, and 8th on all 147 targets, while being much faster than the best 20 servers.[12] In CASP8, HHpred was ranked 7th on all targets and 2nd on the subset of single-domain proteins, while still being more than 50 times faster than the top-ranked servers.[4]
In addition to HHsearch and HHblits, the HH-suite contains programs and Perl scripts for format conversion, filtering of MSAs, generation of profile HMMs, the addition of secondary structure predictions to MSAs, the extraction of alignments from program output, and the generation of customized databases.
The HMM-HMM alignment algorithm of HHblits and HHsearch was significantly accelerated using vector instructions in version 3 of the HH-suite.[13]
|
https://en.wikipedia.org/wiki/HH-suite
|
HMMER is a free and commonly used software package for sequence analysis written by Sean Eddy.[2] Its general usage is to identify homologous protein or nucleotide sequences, and to perform sequence alignments. It detects homology by comparing a profile-HMM (a hidden Markov model constructed explicitly for a particular search) to either a single sequence or a database of sequences. Sequences that score significantly better against the profile-HMM compared to a null model are considered to be homologous to the sequences that were used to construct the profile-HMM. Profile-HMMs are constructed from a multiple sequence alignment in the HMMER package using the hmmbuild program. The profile-HMM implementation used in the HMMER software was based on the work of Krogh and colleagues.[3] HMMER is a console utility ported to every major operating system, including different versions of Linux, Windows, and macOS.
HMMER is the core utility that protein family databases such as Pfam and InterPro are based upon. Some other bioinformatics tools, such as UGENE, also use HMMER.
HMMER3 also makes extensive use of vector instructions to increase computational speed. This work is based upon an earlier publication showing a significant acceleration of the Smith–Waterman algorithm for aligning two sequences.[4]
A profile HMM is a variant of an HMM relating specifically to biological sequences. Profile HMMs turn a multiple sequence alignment into a position-specific scoring system, which can be used to align sequences and search databases for remotely homologous sequences.[5] They capitalise on the fact that certain positions in a sequence alignment tend to have biases in which residues are most likely to occur, and are likely to differ in their probability of containing an insertion or a deletion. Capturing this information gives them a better ability to detect true homologs than traditional BLAST-based approaches, which penalise substitutions, insertions and deletions equally, regardless of where in an alignment they occur.[6]
Profile HMMs center around a linear set of match (M) states, with one state corresponding to each consensus column in a sequence alignment. Each M state emits a single residue (amino acid or nucleotide). The probability of emitting a particular residue is determined largely by the frequency at which that residue has been observed in that column of the alignment, but also incorporates prior information on patterns of residues that tend to co-occur in the same columns of sequence alignments. This string of match states emitting amino acids at particular frequencies is analogous to position-specific score matrices or weight matrices.[5]
A profile HMM takes this modelling of sequence alignments further by modelling insertions and deletions, using I and D states, respectively. D states do not emit a residue, while I states do emit a residue. Multiple I states can occur consecutively, corresponding to multiple residues between consensus columns in an alignment. M, I and D states are connected by state transition probabilities, which also vary by position in the sequence alignment, to reflect the different frequencies of insertions and deletions across sequence alignments.[5]
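As a minimal illustration of the three state types, the sketch below evaluates the joint probability of one particular state path and its emitted residues through a two-column toy profile. All transition and emission numbers are made up for the example; real implementations work in log space and sum or maximise over all possible paths rather than scoring one.

```python
# Toy profile HMM with two match columns (M1, M2), an insert state (I1)
# between them, and a delete state (D2) that skips column 2 silently.
emit = {
    "M1": {"A": 0.8, "G": 0.2},
    "M2": {"C": 0.7, "T": 0.3},
    "I1": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},  # background-like
    # D2 emits nothing: delete states are silent.
}
trans = {
    ("BEGIN", "M1"): 1.0,
    ("M1", "M2"): 0.8, ("M1", "I1"): 0.1, ("M1", "D2"): 0.1,
    ("I1", "M2"): 0.9, ("I1", "I1"): 0.1,
    ("M2", "END"): 1.0, ("D2", "END"): 1.0,
}

def path_probability(path, residues):
    """P(path, residues): product of transitions and (non-silent) emissions."""
    p = 1.0
    seq = list(residues)
    for prev, state in zip(path, path[1:]):
        p *= trans[(prev, state)]
        if state in emit:                 # M and I states each emit one residue
            p *= emit[state][seq.pop(0)]
    assert not seq, "every residue must be emitted by some M or I state"
    return p

# "AGC": A from M1, an inserted G from I1, C from M2.
print(path_probability(["BEGIN", "M1", "I1", "M2", "END"], "AGC"))  # 0.0126
# "A" alone: M1 emits A, then D2 silently deletes column 2.
print(path_probability(["BEGIN", "M1", "D2", "END"], "A"))          # 0.08
```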
The HMMER2 and HMMER3 releases used an architecture for building profile HMMs called the Plan 7 architecture, named after the seven states captured by the model. In addition to the three major states (M, I and D), six additional states capture non-homologous flanking sequence in the alignment. These 6 states collectively are important for controlling how sequences are aligned to the model, e.g. whether a sequence can have multiple consecutive hits to the same model (in the case of sequences with multiple instances of the same domain).[7]
The HMMER package consists of a collection of programs for performing functions using profile hidden Markov models.[8] The programs include hmmbuild (construct a profile HMM from a multiple sequence alignment), hmmsearch (search a profile HMM against a sequence database), hmmscan (search a sequence against a database of profile HMMs), phmmer (search a single sequence against a sequence database), jackhmmer (iterative search, analogous to PSI-BLAST), and hmmalign (align sequences to a profile HMM).
The package contains numerous other specialised functions.
In addition to the software package, the HMMER search function is available in the form of a web server.[9] The service facilitates searches across a range of databases, including sequence databases such as UniProt, SwissProt, and the Protein Data Bank, and HMM databases such as Pfam, TIGRFAMs and SUPERFAMILY. The four search types phmmer, hmmsearch, hmmscan and jackhmmer are supported (see Programs). The search function accepts single sequences as well as sequence alignments or profile HMMs.[10]
The search results are accompanied by a report on the taxonomic breakdown and the domain organisation of the hits. Search results can then be filtered according to either parameter.
The web service is currently run out of the European Bioinformatics Institute (EBI) in the United Kingdom, while development of the algorithm is still performed by Sean Eddy's team in the United States.[9] Major reasons for relocating the web service were to leverage the computing infrastructure at the EBI and to cross-link HMMER searches with relevant databases that are also maintained by the EBI.
The latest stable release of HMMER is version 3.0. HMMER3 is a complete rewrite of the earlier HMMER2 package, with the aim of improving the speed of profile-HMM searches. Major changes are outlined below:
A major aim of the HMMER3 project, started in 2004, was to improve the speed of HMMER searches. While profile HMM-based homology searches were more accurate than BLAST-based approaches, their slower speed limited their applicability.[8] The main performance gain is due to a heuristic filter that finds high-scoring ungapped matches within database sequences to a query profile. This heuristic results in a computation time comparable to BLAST with little impact on accuracy. Further gains in performance are due to a log-likelihood model that requires no calibration for estimating E-values and allows the more accurate forward scores to be used for computing the significance of a homologous sequence.[11][6]
HMMER still lags behind BLAST in speed of DNA-based searches; however, DNA-based searches can be tuned such that an improvement in speed comes at the expense of accuracy.[12]
The major advance in speed was made possible by the development of an approach for calculating the significance of results integrated over a range of possible alignments.[11]In discovering remote homologs, alignments between query and hit proteins are often very uncertain. While most sequence alignment tools calculate match scores using only the best scoring alignment, HMMER3 calculates match scores by integrating across all possible alignments, to account for uncertainty in which alignment is best. HMMER sequence alignments are accompanied by posterior probability annotations, indicating which portions of the alignment have been assigned high confidence and which are more uncertain.
A major improvement in HMMER3 was the inclusion of DNA/DNA comparison tools. HMMER2 only had functionality to compare protein sequences.
While HMMER2 could perform local alignment (align a complete model to a subsequence of the target) and global alignment (align a complete model to a complete target sequence), HMMER3 only performs local alignment. This restriction is due to the difficulty in calculating the significance of hits when performing local/global alignments using the new algorithm.
Several implementations of profile HMM methods and related position-specific scoring matrix methods are available. Some are listed below:
|
https://en.wikipedia.org/wiki/HMMER
|
Time-inhomogeneous hidden Bernoulli model (TI-HBM) is an alternative to the hidden Markov model (HMM) for automatic speech recognition. Contrary to the HMM, the state transition process in the TI-HBM is not a Markov-dependent process; rather, it is a generalized Bernoulli (an independent) process. This difference leads to the elimination of dynamic programming at the state level in the TI-HBM decoding process. Thus, the computational complexity of the TI-HBM for probability evaluation and state estimation is $O(NL)$ (instead of $O(N^2 L)$ in the HMM case), where $N$ and $L$ are the number of states and the observation sequence length, respectively. The TI-HBM is able to model acoustic-unit duration (e.g. phone/word duration) by using a built-in parameter named survival probability. The TI-HBM is simpler and faster than the HMM in a phoneme recognition task, but its performance is comparable to the HMM.
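The complexity difference can be seen in a small sketch: because states are independent across time, the most likely state at each position can be chosen with one pass over the $N$ states per frame, with no Viterbi-style recursion over state pairs. The per-frame scores below are placeholders, not the TI-HBM's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 5, 8                       # number of states, observation length

# Placeholder per-frame scores score[t, s] ~ log P(state s at frame t);
# in a real TI-HBM these would combine emissions with Bernoulli state priors.
score = rng.standard_normal((L, N))

# Independent (Bernoulli) state process: decoding is a per-frame argmax,
# N * L work in total -- no O(N^2 L) dynamic programming over transitions.
states = score.argmax(axis=1)
print(states)
```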
For details, see [1] or [2].
|
https://en.wikipedia.org/wiki/Hidden_Bernoulli_model
|
A hidden semi-Markov model (HSMM) is a statistical model with the same structure as a hidden Markov model except that the unobservable process is semi-Markov rather than Markov. This means that the probability of there being a change in the hidden state depends on the amount of time that has elapsed since entry into the current state. This is in contrast to hidden Markov models, where there is a constant probability of changing state given survival in the state up to that time.[1]
For instance, Sansom & Thomson (2001) modelled daily rainfall using a hidden semi-Markov model.[2] If the underlying process (e.g. weather system) does not have a geometrically distributed duration, an HSMM may be more appropriate.
Hidden semi-Markov models can be used in implementations of statistical parametric speech synthesis to model the probabilities of transitions between different states of encoded speech representations. They are often used along with other tools, such as artificial neural networks, connecting with other components of a full parametric speech synthesis system to generate the output waveforms.[3]
The model was first published by Leonard E. Baum and Ted Petrie in 1966.[4][5]
Statistical inference for hidden semi-Markov models is more difficult than for hidden Markov models, since algorithms like the Baum–Welch algorithm are not directly applicable and must be adapted, requiring more resources.
|
https://en.wikipedia.org/wiki/Hidden_semi-Markov_model
|
The hierarchical hidden Markov model (HHMM) is a statistical model derived from the hidden Markov model (HMM). In an HHMM, each state is considered to be a self-contained probabilistic model. More precisely, each state of the HHMM is itself an HHMM.
HHMMs and HMMs are useful in many fields, including pattern recognition.[1][2]
It is sometimes useful to use HMMs in specific structures in order to facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data is available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way, it can be beneficial to embed the HMM into a greater structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently in terms of the amount of training data required.
In the hierarchical hidden Markov model (HHMM), each state is considered to be a self-contained probabilistic model. More precisely, each state of the HHMM is itself an HHMM. This implies that the states of the HHMM emit sequences of observation symbols rather than single observation symbols as is the case for the standard HMM states.
When a state in an HHMM is activated, it will activate its own probabilistic model, i.e. it will activate one of the states of the underlying HHMM, which in turn may activate its underlying HHMM and so on. The process is repeated until a special state, called a production state, is activated. Only the production states emit observation symbols in the usual HMM sense. When the production state has emitted a symbol, control returns to the state that activated the production state.
The states that do not directly emit observation symbols are called internal states. The activation of a state in an HHMM under an internal state is called a vertical transition. After a vertical transition is completed, a horizontal transition occurs to a state within the same level. When a horizontal transition leads to a terminating state, control is returned to the state in the HHMM, higher up in the hierarchy, that produced the last vertical transition. Note that a vertical transition can result in more vertical transitions before reaching a sequence of production states and finally returning to the top level. Thus the production states visited give rise to a sequence of observation symbols that is "produced" by the state at the top level.
The methods for estimating the HHMM parameters and model structure are more complex than for HMM parameters, and the interested reader is referred to Fine et al. (1998).
The HMM and HHMM belong to the same class of classifiers; that is, they can be used to solve the same set of problems. In fact, the HHMM can be transformed into a standard HMM. However, the HHMM leverages its structure to solve a subset of the problems more efficiently.
Classical HHMMs require a pre-defined topology, meaning that the number and hierarchical structure of the submodels must be known in advance.[1] Samko et al. (2010) used information about states from feature space (i.e., from outside the Markov model itself) to define the topology for a new HHMM in an unsupervised way.[2] However, such external data containing relevant information for HHMM construction may not be available in all contexts, e.g. in language processing.
|
https://en.wikipedia.org/wiki/Hierarchical_hidden_Markov_model
|
The layered hidden Markov model (LHMM) is a statistical model derived from the hidden Markov model (HMM).
A layered hidden Markov model (LHMM) consists of $N$ levels of HMMs, where the HMMs on level $i+1$ correspond to observation symbols or probability generators at level $i$. Every level $i$ of the LHMM consists of $K_i$ HMMs running in parallel.[1]
LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM in a layered structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently because less training data is needed.
A layered hidden Markov model (LHMM) consists of $N$ levels of HMMs, where the HMMs on level $N+1$ correspond to observation symbols or probability generators at level $N$. Every level $i$ of the LHMM consists of $K_i$ HMMs running in parallel.
At any given level $L$ in the LHMM, a sequence of $T_L$ observation symbols $\mathbf{o}_L = \{o_1, o_2, \dots, o_{T_L}\}$ can be used to classify the input into one of $K_L$ classes, where each class corresponds to each of the $K_L$ HMMs at level $L$. This classification can then be used to generate a new observation for the level $L-1$ HMMs. At the lowest layer, i.e. level $N$, primitive observation symbols $\mathbf{o}_p = \{o_1, o_2, \dots, o_{T_p}\}$ would be generated directly from observations of the modeled process. For example, in a trajectory tracking task the primitive observation symbols would originate from the quantized sensor values. Thus at each layer in the LHMM the observations originate from the classification of the underlying layer, except for the lowest layer where the observation symbols originate from measurements of the observed process.
It is not necessary to run all levels at the same time granularity. For example, it is possible to use windowing at any level in the structure so that the classification takes the average of several classifications into consideration before passing the results up the layers of the LHMM.[2]
Instead of simply using the winning HMM at level $L+1$ as an input symbol for the HMM at level $L$, it is possible to use it as a probability generator by passing the complete probability distribution up the layers of the LHMM. Thus instead of having a "winner takes all" strategy where the most probable HMM is selected as an observation symbol, the likelihood $L(i)$ of observing the $i$th HMM can be used in the recursion formula of the level $L$ HMM to account for the uncertainty in the classification of the HMMs at level $L+1$. Thus, if the classification of the HMMs at level $L+1$ is uncertain, it is possible to pay more attention to the a priori information encoded in the HMM at level $L$.
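A minimal sketch of this "probability generator" idea, under simplified assumptions: the lower layer produces a normalized likelihood vector over its $K$ HMMs for each window, and the upper layer consumes that vector as soft evidence instead of a single winning symbol. The toy likelihood values stand in for real HMM forward probabilities.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 3  # number of lower-level HMMs running in parallel

# Stand-in for per-window forward likelihoods of the K lower-level HMMs.
raw_likelihoods = rng.random((5, K))          # 5 windows
soft_obs = raw_likelihoods / raw_likelihoods.sum(axis=1, keepdims=True)

# Hard ("winner takes all") observations discard classification uncertainty:
hard_obs = raw_likelihoods.argmax(axis=1)

# An upper-level HMM with two states can weight its emission model by the
# soft evidence: P(window | s) = sum_i P(symbol i | s) * soft_obs[i].
emission = np.array([[0.7, 0.2, 0.1],          # state 0 over the K symbols
                     [0.1, 0.3, 0.6]])         # state 1 over the K symbols
soft_emission_probs = soft_obs @ emission.T    # shape (5 windows, 2 states)
print(hard_obs)
print(soft_emission_probs)
```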
An LHMM could in practice be transformed into a single-layered HMM where all the different models are concatenated together.[3] Some of the advantages that may be expected from using the LHMM over a large single-layer HMM are that the LHMM is less likely to suffer from overfitting, since the individual sub-components are trained independently on smaller amounts of data. A consequence of this is that a significantly smaller amount of training data is required for the LHMM to achieve a performance comparable to that of the HMM. Another advantage is that the layers at the bottom of the LHMM, which are more sensitive to changes in the environment such as the type of sensors, sampling rate, etc., can be retrained separately without altering the higher layers of the LHMM.
|
https://en.wikipedia.org/wiki/Layered_hidden_Markov_model
|
Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of, for example, classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory.
An SDS is constructed from the following components: a finite graph $Y$ with vertex set $v[Y] = \{1, 2, \dots, n\}$; a state $x_v$ for each vertex $v$ of $Y$, taken from a finite set $K$; a vertex function $f_v$ for each vertex $v$, which maps the states of $v$ and its neighbors to a new state for $v$; and a word $w = (w_1, w_2, \dots, w_m)$ over the vertex set, giving the order in which the vertices are updated.
It is convenient to introduce the $Y$-local maps $F_i$ constructed from the vertex functions by
$$F_i(x) = (x_1, x_2, \dots, x_{i-1}, f_i(x[i]), x_{i+1}, \dots, x_n),$$
where $x[i]$ denotes the states of the vertices in the 1-neighborhood of vertex $i$; that is, $F_i$ changes only the $i$th coordinate.
The word $w$ specifies the sequence in which the $Y$-local maps are composed to derive the sequential dynamical system map $F: K^n \to K^n$ as
$$F = F_{w(m)} \circ F_{w(m-1)} \circ \dots \circ F_{w(2)} \circ F_{w(1)}.$$
If the update sequence is a permutation, one frequently speaks of a permutation SDS to emphasize this point.
The phase space associated to a sequential dynamical system with map $F: K^n \to K^n$ is the finite directed graph with vertex set $K^n$ and directed edges $(x, F(x))$. The structure of the phase space is governed by the properties of the graph $Y$, the vertex functions $(f_i)_i$, and the update sequence $w$. A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents.
Consider the case where $Y$ is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle), with vertex states from $K = \{0,1\}$. For vertex functions, use the symmetric, boolean function nor: $K^3 \to K$ defined by nor$(x,y,z) = (1+x)(1+y)(1+z)$ with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick $w = (1,2,3)$ as the update sequence. Starting from the initial system state (0,0,0) at time $t = 0$, one computes the state of vertex 1 at time $t = 1$ as nor(0,0,0) = 1. The state of vertex 2 at time $t = 1$ is nor(1,0,0) = 0. Note that the state of vertex 1 at time $t = 1$ is used immediately. Next, one obtains the state of vertex 3 at time $t = 1$ as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turn mapped to (0,1,0) by an application of the SDS map.
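The worked example translates directly into code. The following sketch applies the Y-local Nor maps in the order given by the word w = (1,2,3) (0-indexed here) and reproduces the transitions (0,0,0) → (1,0,0) → (0,1,0).

```python
# Triangle graph: each vertex's neighborhood (itself plus its two neighbors).
neighborhood = {0: (0, 1, 2), 1: (1, 0, 2), 2: (2, 0, 1)}  # 0-indexed vertices

def nor3(args):
    """nor(x, y, z): returns 1 exactly when all arguments are 0."""
    return 1 if not any(args) else 0

def sds_map(state, word=(0, 1, 2)):
    """Apply the Y-local maps sequentially in the order given by `word`."""
    x = list(state)
    for v in word:
        # Each local map updates only vertex v, using already-updated states.
        x[v] = nor3([x[u] for u in neighborhood[v]])
    return tuple(x)

s = (0, 0, 0)
s = sds_map(s); print(s)  # (1, 0, 0)
s = sds_map(s); print(s)  # (0, 1, 0)
```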
|
https://en.wikipedia.org/wiki/Sequential_dynamical_system
|
In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.
A time series is very frequently plotted via a run chart (which is a temporal line chart). Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements.
Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. Generally, time series data is modelled as a stochastic process. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called "time series analysis", which refers in particular to relationships between different points in time within a single series.
Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time so that values for a given period will be expressed as deriving in some way from past values, rather than from future values (see time reversibility).
Time series analysis can be applied to real-valued, continuous data, discrete numeric data, or discrete symbolic data (i.e. sequences of characters, such as letters and words in the English language[1]).
Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation and analysis can be made in a filter-like manner using scaled correlation, thereby mitigating the need to operate in the frequency domain.
Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an autoregressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate.
A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a cross-sectional data set). A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time (e.g. student ID, stock symbol, country code), then it is a panel data candidate. If the differentiation lies on the non-time identifier, then the data set is a cross-sectional data set candidate.
There are several types of motivation and data analysis available for time series which are appropriate for different purposes.
In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics, the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other applications are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering,[2][3][4] classification,[5] query by content,[6] anomaly detection as well as forecasting.[7]
A simple way to examine a regular time series is manually with a line chart. The data graphic shows tuberculosis deaths in the United States,[8] along with the yearly change and the percentage change from year to year. The total number of deaths declined in every year until the mid-1980s, after which there were occasional increases, often proportionately, but not absolutely, quite large.
A study of corporate data analysts found two challenges to exploratory time series analysis: discovering the shape of interesting patterns, and finding an explanation for these patterns.[9] Visual tools that represent time series data as heat map matrices can help overcome these challenges.
This approach may be based on harmonic analysis and filtering of signals in the frequency domain using the Fourier transform, and spectral density estimation. Its development was significantly accelerated during World War II by mathematician Norbert Wiener, electrical engineers Rudolf E. Kálmán, Dennis Gabor and others for filtering signals from noise and predicting signal values at a certain point in time.
An equivalent effect may be achieved in the time domain, as in a Kalman filter; see filtering and smoothing for more techniques.
Other related techniques include:
Curve fitting[12][13] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[14] possibly subject to constraints.[15][16] Curve fitting can involve either interpolation,[17][18] where an exact fit to the data is required, or smoothing,[19][20] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[21][22] which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization,[23][24] to infer values of a function where no data are available,[25] and to summarize the relationships among two or more variables.[26] Extrapolation refers to the use of a fitted curve beyond the range of the observed data,[27] and is subject to a degree of uncertainty[28] since it may reflect the method used to construct the curve as much as it reflects the observed data.
For processes that are expected to generally grow in magnitude, one of the curves in the graphic (and many others) can be fitted by estimating its parameters.
The construction of economic time series involves the estimation of some components for some dates by interpolation between values ("benchmarks") for earlier and later dates. Interpolation is estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information ("reading between the lines").[29] Interpolation is useful where the data surrounding the missing data are available and their trend, seasonality, and longer-term cycles are known. This is often done by using a related series known for all relevant dates.[30] Alternatively, polynomial interpolation or spline interpolation is used, where piecewise polynomial functions are fitted in time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set. Spline interpolation, however, yields a piecewise continuous function composed of many polynomials to model the data set.
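The contrast between a single global polynomial (regression) and piecewise polynomials (spline interpolation) can be seen in a few lines of NumPy/SciPy; the sample data and polynomial degree here are arbitrary illustrative choices.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # observation times
y = np.array([1.0, 2.7, 5.8, 6.1, 8.9, 12.4])  # observed values

# Polynomial regression: one global degree-2 polynomial fit by least squares.
poly = np.poly1d(np.polyfit(t, y, deg=2))

# Spline interpolation: piecewise cubics passing exactly through every point.
spline = CubicSpline(t, y)

t_new = 2.5
print("regression estimate   :", poly(t_new))    # follows the overall trend
print("interpolation estimate:", spline(t_new))  # honors the local data exactly
print("regression at t=2     :", poly(2.0), "vs data", y[2])  # generally != 5.8
```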
Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results.
In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way.
One can distinguish two major classes of function approximation problems. First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).
Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points (a time series) of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead. A related problem of online time series approximation[31] is to summarize the data in one pass and construct an approximate representation that can support a variety of time series queries with bounds on worst-case error.
To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.
In statistics, prediction is a part of statistical inference. One particular approach to such inference is known as predictive inference, but the prediction can be undertaken within any of the several approaches to statistical inference. Indeed, one description of statistics is that it provides a means of transferring knowledge about a sample of a population to the whole population, and to other related populations, which is not necessarily the same as prediction over time. When information is transferred across time, often to specific points in time, the process is known as forecasting.
Assigning a time series pattern to a specific category, for example identifying a word based on a series of hand movements in sign language.
Splitting a time-series into a sequence of segments. It is often the case that a time-series can be represented as a sequence of individual segments, each with its own characteristic properties. For example, the audio signal from a conference call can be partitioned into pieces corresponding to the times during which each person was speaking. In time-series segmentation, the goal is to identify the segment boundary points in the time-series, and to characterize the dynamical properties associated with each segment. One can approach this problem usingchange-point detection, or by modeling the time-series as a more sophisticated system, such as a Markov jump linear system.
Time series data may be clustered; however, special care has to be taken when considering subsequence clustering.[33][34] Time series clustering may be split into whole time-series clustering, subsequence time-series clustering, and time-point clustering.
Subsequence time series clustering resulted in unstable (random) clusters induced by the feature extraction using chunking with sliding windows.[35] It was found that the cluster centers (the average of the time series in a cluster, which is also a time series) follow an arbitrarily shifted sine pattern (regardless of the dataset, even on realizations of a random walk). This means that the found cluster centers are non-descriptive for the dataset because the cluster centers are always non-representative sine waves.
Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving-average (MA) models. These three classes depend linearly on previous data points.[36] Combinations of these ideas produce autoregressive moving-average (ARMA) and autoregressive integrated moving-average (ARIMA) models. The autoregressive fractionally integrated moving-average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models, and sometimes the preceding acronyms are extended by including an initial "V" for "vector", as in VAR for vector autoregression. An additional set of extensions of these models is available for use where the observed time-series is driven by some "forcing" time-series (which may not have a causal effect on the observed series): the distinction from the multivariate case is that the forcing series may be deterministic or under the experimenter's control. For these models, the acronyms are extended with a final "X" for "exogenous".
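For concreteness, the sketch below simulates an AR(2) process, one of the linear model classes just described, in which each value is a fixed linear combination of the previous two values plus white noise. The coefficients are arbitrary but chosen inside the stationarity region.

```python
import numpy as np

rng = np.random.default_rng(4)
phi = [0.6, -0.2]          # AR(2) coefficients (inside the stationary region)
T, burn = 500, 100         # series length and burn-in to forget the start

x = np.zeros(T + burn)
eps = rng.standard_normal(T + burn)  # white-noise innovations
for t in range(2, T + burn):
    # Each value depends linearly on the two previous values plus noise.
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]
x = x[burn:]

# Lag-1 autocorrelation; for this AR(2), theory gives phi1/(1-phi2) = 0.5.
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(r1, 2))
```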
Non-linear dependence of the level of a series on previous data points is of interest, partly because of the possibility of producing a chaotic time series. However, more importantly, empirical investigations can indicate the advantage of using predictions derived from non-linear models, over those from linear models, as for example in nonlinear autoregressive exogenous models. Further references on nonlinear time series analysis: (Kantz and Schreiber)[37] and (Abarbanel).[38]
Among other types of non-linear time series models, there are models to represent the changes of variance over time (heteroskedasticity). These models represent autoregressive conditional heteroskedasticity (ARCH), and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Here changes in variability are related to, or predicted by, recent past values of the observed series. This is in contrast to other possible representations of locally varying variability, where the variability might be modelled as being driven by a separate time-varying process, as in a doubly stochastic model.
In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor.[39] Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. See also Markov switching multifractal (MSMF) techniques for modeling volatility evolution.
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text.
Many of these models are collected in the Python package sktime.
A number of different notations are in use for time-series analysis. A common notation specifying a time series $X$ that is indexed by the natural numbers is written
$$X = \{X_1, X_2, \dots\}.$$
Another common notation is
$$Y = \{Y_t : t \in T\},$$
where $T$ is the index set.
There are two sets of conditions under which much of the theory is built: stationarity and ergodicity.
Ergodicity implies stationarity, but the converse is not necessarily the case. Stationarity is usually classified into strict stationarity and wide-sense or second-order stationarity. Both models and applications can be developed under each of these conditions, although the models in the latter case might be considered as only partly specified.
In addition, time-series analysis can be applied where the series are seasonally stationary or non-stationary. Situations where the amplitudes of frequency components change with time can be dealt with in time-frequency analysis, which makes use of a time–frequency representation of a time-series or signal.[40]
Tools for investigating time-series data include:
Time-series metrics or features that can be used for time series classification or regression analysis:[44]
Time series can be visualized with two categories of chart: overlapping charts and separated charts. Overlapping charts display all time series on the same layout, while separated charts present them on different layouts (but aligned for comparison purposes).[48]
|
https://en.wikipedia.org/wiki/Time_series
|
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well-known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.
This realization sequence is often called the context; therefore the VOM models are also called context trees.[1] VOM models are nicely rendered by colorized probabilistic suffix trees (PST).[2] The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.[3][4][5]
Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}. Specifically, consider the string constructed from infinite concatenations of the sub-string aaabc: aaabcaaabcaaabcaaabc…aaabc.
The VOM model of maximal order 2 can approximate the above string using only the following five conditional probability components: Pr(a|aa) = 0.5, Pr(b|aa) = 0.5, Pr(c|b) = 1.0, Pr(a|c) = 1.0, Pr(a|ca) = 1.0.
In this example, Pr(c|ab) = Pr(c|b) = 1.0; therefore, the shorter context b is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0.
To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: Pr(a|a), Pr(a|b), Pr(a|c), Pr(b|a), Pr(b|b), Pr(b|c), Pr(c|a), Pr(c|b), Pr(c|c). To construct the Markov chain of order 2 for the next character, one must estimate 27 conditional probability components: Pr(a|aa), Pr(a|ab), …, Pr(c|cc). And to construct the Markov chain of order 3 for the next character one must estimate the following 81 conditional probability components: Pr(a|aaa), Pr(a|aab), …, Pr(c|ccc).
In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases.
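The growth in components, and the five-context VOM approximation above, can be checked with a short sketch (the D = 2 cutoff and the training length below are illustrative choices):

from collections import defaultdict

# Fixed-order Markov chains over {a, b, c} need |A|**(order+1) components:
for order in (1, 2, 3):
    print(order, 3 ** (order + 1))  # 9, 27, 81

# Estimate context-conditional probabilities from the running example string.
s = "aaabc" * 1000
D = 2  # maximal context length of the VOM model
counts = defaultdict(lambda: defaultdict(int))
for i in range(1, len(s)):
    for d in range(1, D + 1):
        if i - d >= 0:
            counts[s[i - d:i]][s[i]] += 1  # context -> next-symbol counts

for ctx in ("aa", "b", "c", "ca"):  # the contexts the VOM model keeps
    total = sum(counts[ctx].values())
    print(ctx, {sym: round(n / total, 2) for sym, n in counts[ctx].items()})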
The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent of the future states; accordingly, "a great reduction in the number of model parameters can be achieved."[1]
Let A be a state space (finite alphabet) of size |A|.
Consider a sequence with the Markov property x_1^n = x_1 x_2 … x_n of n realizations of random variables, where x_i ∈ A is the state (symbol) at position i (1 ≤ i ≤ n), and the concatenation of states x_i and x_{i+1} is denoted by x_i x_{i+1}.
Given a training set of observed states, x_1^n, the construction algorithm of the VOM models[3][4][5] learns a model P that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states.
Specifically, the learner generates a conditional probability distribution P(x_i ∣ s) for a symbol x_i ∈ A given a context s ∈ A*, where the * sign represents a sequence of states of any length, including the empty context.
VOM models attempt to estimate conditional distributions of the form P(x_i ∣ s) where the context length |s| ≤ D varies depending on the available statistics.
In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length |s| = D and, hence, can be considered special cases of the VOM models.
Effectively, for a given training sequence, the VOM models are found to obtain better model parameterization than the fixed-order Markov models, leading to a better variance–bias tradeoff of the learned models.[3][4][5]
Various efficient algorithms have been devised for estimating the parameters of the VOM model.[4]
VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression,[1] document compression,[4] classification and identification of DNA and protein sequences,[6][1][3] statistical process control,[5] spam filtering,[7] haplotyping,[8] speech recognition,[9] sequence analysis in social sciences,[2] and others.
|
https://en.wikipedia.org/wiki/Variable-order_Markov_model
|
Sliding window based part-of-speech tagging is used to part-of-speech tag a text.
A high percentage of words in a natural language are words which, out of context, can be assigned more than one part of speech. The percentage of these ambiguous words is typically around 30%, although it depends greatly on the language. Solving this problem is very important in many areas of natural language processing. For example, in machine translation, changing the part-of-speech of a word can dramatically change its translation.
Sliding window based part-of-speech taggers are programs which assign a single part-of-speech to a given lexical form of a word by looking at a fixed-size "window" of words around the word to be disambiguated.
The two main advantages of this approach are:
Let Γ be the set of grammatical tags of the application, that is, the set of all possible tags which may be assigned to a word, and let W be the vocabulary of the application. Let T : W → 2^Γ be a function for morphological analysis which assigns each word w its set of possible tags, T(w) ⊆ Γ; it can be implemented by a full-form lexicon or a morphological analyser. Let Σ be the set of word classes, which in general will be a partition of W with the restriction that, for each σ ∈ Σ, all of the words w ∈ σ will receive the same set of tags, that is, all of the words in each word class σ belong to the same ambiguity class.
Normally, Σ is constructed in a way that for high frequency words, each word class contains a single word, while for low frequency words, each word class corresponds to a single ambiguity class. This allows good performance for high frequency ambiguous words, and doesn't require too many parameters for the tagger.
With these definitions it is possible to state the problem in the following way: Given a text w[1]w[2]…w[L] ∈ W*, each word w[t] is assigned a word class σ[t] = T(w[t]) ∈ Σ (either by using the lexicon or the morphological analyser) in order to get an ambiguously tagged text σ[1]σ[2]…σ[L] ∈ Σ*. The job of the tagger is to get a tagged text γ[1]γ[2]…γ[L] (with γ[t] ∈ T(σ[t])) as correct as possible.
A statistical tagger looks for the most probable tag sequence for an ambiguously tagged text σ[1]σ[2]…σ[L]:
Using Bayes' formula, this is converted into:
where p(γ[1]γ[2]…γ[L]) is the probability of a particular tag sequence (syntactic probability) and p(σ[1]…σ[L] ∣ γ[1]…γ[L]) is the probability that this tag sequence corresponds to the text σ[1]…σ[L] (lexical probability).
In a Markov model, these probabilities are approximated as products. The syntactic probabilities are modelled by a first-order Markov process:
where γ[0] and γ[L+1] are delimiter symbols.
Lexical probabilities are independent of context:
One form of tagging is to approximate the first probability formula:
where C(−)[t] = σ[t−N(−)] … σ[t−1] is the left context of size N(−), and C(+)[t] = σ[t+1] … σ[t+N(+)] is the right context of size N(+).
In this way the sliding window algorithm only has to take into account a context of size N(−) + N(+) + 1. For most applications N(−) = N(+) = 1. For example, to tag the ambiguous word "runs" in the sentence "He runs from danger", only the tags of the words "He" and "from" need to be taken into account.
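A minimal sketch of such a tagger with N(−) = N(+) = 1, using a toy, hypothetical lexicon and treating word classes as ambiguity classes; it simply picks the tag most frequently seen in training for each (left, current, right) window:

from collections import defaultdict

# Toy lexicon mapping words to their possible tags (hypothetical).
LEXICON = {
    "he": {"PRON"}, "runs": {"VERB", "NOUN"}, "from": {"PREP"},
    "danger": {"NOUN"},
}

def word_class(word):
    # The ambiguity class of a word: its set of possible tags.
    return frozenset(LEXICON.get(word, {"UNK"}))

def train(tagged_sentences):
    counts = defaultdict(lambda: defaultdict(int))
    for sent in tagged_sentences:
        words = ["#"] + [w for w, _ in sent] + ["#"]   # "#" = delimiter
        for i, (_, tag) in enumerate(sent, start=1):
            window = (word_class(words[i - 1]), word_class(words[i]),
                      word_class(words[i + 1]))
            counts[window][tag] += 1
    return counts

def tag(sentence, counts):
    words = ["#"] + sentence + ["#"]
    out = []
    for i in range(1, len(words) - 1):
        window = (word_class(words[i - 1]), word_class(words[i]),
                  word_class(words[i + 1]))
        seen = counts.get(window)
        # Fall back to an arbitrary possible tag for unseen windows.
        out.append(max(seen, key=seen.get) if seen else min(word_class(words[i])))
    return out

model = train([[("he", "PRON"), ("runs", "VERB"), ("from", "PREP"), ("danger", "NOUN")]])
print(tag(["he", "runs", "from", "danger"], model))  # ['PRON', 'VERB', 'PREP', 'NOUN']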
|
https://en.wikipedia.org/wiki/Sliding_window_based_part-of-speech_tagging
|
Traditional grammar (also known as classical grammar) is a framework for the description of the structure of a language or group of languages.[1] The roots of traditional grammar are in the work of classical Greek and Latin philologists.[2] The formal study of grammar based on these models became popular during the Renaissance.[3]
Traditional grammars may be contrasted with more modern theories of grammar in theoretical linguistics, which grew out of traditional descriptions.[3] While traditional grammars seek to describe how particular languages are used, or to teach people to speak or read them, grammar frameworks in contemporary linguistics often seek to explain the nature of language knowledge and ability common to all languages.[4] Traditional grammar is often prescriptive, and may be regarded as unscientific by those working in linguistics.[5]
Traditional Western grammars classify words into parts of speech. They describe the patterns for word inflection, and the rules of syntax by which those words are combined into sentences.[6]
Among the earliest studies of grammar are descriptions of Sanskrit, called vyākaraṇa. The Indian grammarian Pāṇini wrote the Aṣṭādhyāyī, a descriptive grammar of Sanskrit, sometime between the 4th and the 2nd century BCE.[7][8] This work, along with some grammars of Sanskrit produced around the same time, is often considered the beginning of linguistics as a descriptive science,[8] and consequently wouldn't be considered "traditional grammar" despite its antiquity. Although Pāṇini's work was not known in Europe until many centuries later, it is thought to have greatly influenced other grammars produced in Asia, such as the Tolkāppiyam, a Tamil grammar generally dated between the 2nd and 1st century BCE.[9]
The formal study of grammar became popular in Europe during the Renaissance. Descriptive grammars were rarely used in Classical Greece or in Latin through the Medieval period.[10] During the Renaissance, Latin and Classical Greek were broadly studied along with the literature and philosophy written in those languages.[11] With the invention of the printing press and the use of Vulgate Latin as a lingua franca throughout Europe, the study of grammar became part of language teaching and learning.[10]
Although complete grammars were rare, Ancient Greek philologists and Latin teachers of rhetoric produced some descriptions of the structure of language.[12] The descriptions produced by classical grammarians (teachers of philology and rhetoric) provided a model for traditional grammars in Europe. According to linguist William Harris, "Just as the Renaissance confirmed Greco-Roman tastes in poetry, rhetoric and architecture, it established ancient Grammar, especially that which the Roman school-grammarians had developed by the 4th [century CE], as an inviolate system of logical expression."[8] The earliest descriptions of other European languages were modeled on grammars of Latin. The primacy of Latin in traditional grammar persisted until the beginning of the 20th century.[8]
The use of grammar descriptions in the teaching of language, including foreign language teaching and the study of language arts, has gone in and out of fashion.[10] As education increasingly took place in vernacular languages at the close of the Renaissance, grammars of these languages were produced for teaching. Between 1801 and 1900 there were more than 850 grammars of English published specifically for use in schools.[13] Mastering grammar rules like those derived from the study of Latin has at times been a specific goal of English-language education.[14] This approach to teaching has, however, long competed with approaches that downplay the importance of grammar instruction.[15] Similarly in foreign or second language teaching, the grammar-translation method based on traditional Latin teaching, in which the grammar of the language being learned is described in the student's native language, has competed with approaches such as the direct method or the communicative approach, in which grammar instruction is minimized.[10]
The parts of speech are an important element of traditional grammars, since patterns of inflection and rules of syntax each depend on a word's part of speech.[12]
Although systems vary somewhat, typically traditional grammars name eight parts of speech: nouns, pronouns, adjectives, verbs, adverbs, prepositions, conjunctions, and interjections.[16][17] These groupings are based on categories of function and meaning in Latin and other Indo-European languages. Some traditional grammars include other parts of speech, such as articles or determiners, though some grammars treat other groupings of words as subcategories of the major parts of speech.[18]
The traditional definitions of parts of speech refer to the role that a word plays in a sentence, its meaning, or both.[17]
Contemporary linguists argue that classification based on a mixture of morphosyntactic function and semantic meaning is insufficient for systematic analysis of grammar.[21] Such definitions are not sufficient on their own to assign a word an unambiguous part of speech. Nonetheless, similar definitions have been used in most traditional grammars.[17]
Accidence, also known as inflection, is the change of a word's form depending on its grammatical function. The change may involve the addition of affixes or else changes in the sounds of the word, known as vowel gradation or ablaut.[22] Some words feature irregular inflection, not taking an affix or following a regular pattern of sound change.[23]
Verbs, nouns, pronouns, and adjectives may be inflected for person, number, and gender.
The inflection of verbs is also known as conjugation.[24] A verb has person and number, which must agree with the subject of the sentence.
Verbs may also be inflected for tense, aspect, mood, and voice. Verb tense indicates the time that the sentence describes. A verb also has mood, indicating whether the sentence describes reality or expresses a command, a hypothesis, a hope, etc. A verb inflected for tense and mood is called finite; non-finite verb forms are infinitives or participles.[24] The voice of the verb indicates whether the subject of the sentence is active or passive in regard to the verb.
Number indicates whether the noun refers to one, two, or many instances of its kind.
Nouns, pronouns, and adjectives may also be inflected for case. The inflection of nouns, pronouns, and adjectives is also known as declension.[24] Noun case indicates how the noun relates to other elements of the sentence (I, me in "I see Jesse" and "Jesse sees me").[20]
A traditional means of learning accidence is through conjugation tables or declension tables, lists of the various forms of a word for a learner to memorize. The following tables present a partial conjugation of the Latin verb esse and its English equivalent, be.[22]

Infinitive: esse — be
Present indicative: sum, es, est, sumus, estis, sunt — am, are, is, are, are, are
Preterite indicative: fui, fuisti, fuit, fuimus, fuistis, fuerunt — was, were, was, were, were, were
This partial table includes only two tenses (present and preterite) and one mood (indicative) in addition to the infinitive. A more complete conjugation table for Latin would also include the subjunctive and imperative moods and the imperfect indicative, which indicates imperfective aspect.[22] In English the imperative often has the same form as the infinitive, while the English subjunctive often has the same form as the indicative.[19] English does not have imperfective aspect as Latin does; it has progressive and perfect aspects in addition to the simple form.[25]
Syntax is the set of rules governing how words combine into phrases and clauses. It deals with the formation of sentences, including rules governing or describing how sentences are formed.[22] In traditional usage, syntax is sometimes called grammar, but the word grammar is also used more broadly to refer to various aspects of language and its usage.[26]
In traditional grammar syntax, a sentence is analyzed as having two parts, a subject and a predicate. The subject is the thing being talked about. In English and similar languages, the subject usually occurs at the beginning of the sentence, but this is not always the case.[note 2] The predicate comprises the rest of the sentence, all of the parts of the sentence that are not the subject.[24]
The subject of a sentence is generally a noun or pronoun, or a phrase containing a noun or pronoun. If the sentence features active voice, the thing named by the subject carries out the action of the sentence; in the case of passive voice, the subject is affected by the action. In sentences with imperative mood, the subject may not be expressed.
The predicate of a sentence may have many parts, but the only required element is a finite verb. In addition to the verb, the predicate may contain one or more objects, a subject complement, object complements, adpositional phrases (in English, these are prepositional phrases), or adverbial elements.[24]
Some verbs (called transitive verbs) take direct objects; some also take indirect objects. A direct object names the person or thing directly affected by the action of an active sentence. An indirect object names the entity indirectly affected. In a sentence with both a direct and an indirect object, the indirect object generally appears before the direct object.[24]
In the following sentence, the direct object, the book, is directly affected by the action; it is what is given. The indirect object, Nikolai, is indirectly affected; he receives the book as a result of it being given.
In place of an indirect object, a prepositional phrase beginning with to or for may occur after the direct object.[24]
A subject complement (variously called a predicative expression, predicative, predicate noun or adjective, or complement) appears in a predicate with a linking verb (also called a copula). A subject complement is a noun, adjective, or phrase that refers to the subject of the linking verb, illustrated in the following examples.
While subject complements describe or modify the subject of a linking verb, object complements describe or modify nouns in the predicate, typically direct or indirect objects, or objects of adpositions. In the following example, the phrase sun's origin is a complement of the direct object Japan.
A subject and a predicate together make up a clause.
Although some traditional grammars consider adpositional phrases and adverbials part of the predicate, many grammars call these elements adjuncts, meaning they are not a required element of the syntactic structure. Adjuncts may occur anywhere in a sentence.
Adpositional phrases can add to or modify the meaning of nouns, verbs, or adjectives. An adpositional phrase is a phrase that features either a preposition, a postposition, or a circumposition. All three types of words have similar function; the difference is where the adposition appears relative to the other words in the phrase. Prepositions occur before their complements while postpositions appear after. Circumpositions consist of two parts, one before the complement and one after.
An adverbial consists of either a single adverb, an adverbial phrase, or an adverbial clause that modifies either the verb or the sentence as a whole. Some traditional grammars consider adpositional phrases a type of adverb, but many grammars treat these as separate. Adverbials may modify time, place, or manner. Negation is also frequently indicated with adverbials, including adverbs such as English not.
|
https://en.wikipedia.org/wiki/Traditional_grammar
|
A classifier (abbreviated clf[1] or cl) is a word or affix that accompanies nouns and can be considered to "classify" a noun depending on some characteristics (e.g. humanness, animacy, sex, shape, social status) of its referent.[2][3] Classifiers in this sense are specifically called noun classifiers because some languages in Papua as well as the Americas have verbal classifiers which categorize the referent of their argument.[4][5]
In languages that have classifiers, they are often used when the noun is being counted, that is, when it appears with a numeral. In such languages, a phrase such as "three people" is often required to be expressed as "three X (of) people", where X is a classifier appropriate to the noun for "people"; compare to "three blades of grass". Classifiers that appear next to a numeral or a quantifier are particularly called numeral classifiers.[6] They play an important role in certain languages, especially East and Southeast Asian languages,[7] including Chinese, Korean, Japanese, and Vietnamese.
Numeral classifiers may have other functions too; in Chinese, they are commonly used when a noun is preceded by a demonstrative (word meaning "this" or "that"). Some Asian languages like Zhuang, Hmong and Cantonese use the "bare classifier construction", where a classifier is attached without numerals to a noun for definite reference; the latter two languages also extend numeral classifiers to the possessive classifier construction, where they behave as a possessive marker connecting a noun to another noun that denotes the possessor.[8]
Possessive classifiers are usually used in accord with semantic characteristics of the possessed noun and less commonly with the relation between the possessed and the possessor,[9][10] although possessor classifiers are reported in a few languages (e.g. Dâw).[11]
Classifiers are absent or marginal in European languages. An example of a possible classifier in English is piece in phrases like "three pieces of paper". In American Sign Language, particular classifier handshapes represent a noun's orientation in space.
There are similarities between classifier systems and noun classes, although there are also significant differences. While noun classes are defined in terms of agreement, classifiers do not alter the form of other elements in a clause.[12][13] Also, languages with classifiers may have hundreds of classifiers, whereas languages with noun classes (or in particular, genders) tend to have a smaller number of classes. Noun classes are not always dependent on the nouns' meaning, but they have a variety of grammatical consequences.
A classifier is a word (or in some analyses, a bound morpheme) which accompanies a noun in certain grammatical contexts, and generally reflects some kind of conceptual classification of nouns, based principally on features of their referents. Thus a language might have one classifier for nouns representing persons, another for nouns representing flat objects, another for nouns denoting periods of time, and so on. The assignment of classifier to noun may also be to some degree unpredictable, with certain nouns taking certain classifiers by historically established convention.
The situations in which classifiers may or must appear depend on the grammar of the language in question, but they are frequently required when a noun is accompanied by a numeral. They are therefore sometimes known (particularly in the context of languages such as Japanese) as counter words. They may also be used when a noun is accompanied by a demonstrative (a word such as "this" or "that").
The following examples, from Standard Mandarin Chinese, illustrate the use of classifiers with a numeral. The classifiers used here are 位 (pinyin wèi), used (among other things) with nouns for humans; 棵 kē, used with nouns for trees; 只/隻 (zhī), used with nouns for certain animals, including birds; and 条/條 (tiáo), used with nouns for certain long flexible objects. (Plurals of Chinese nouns are not normally marked in any way; the same form of the noun is used for both singular and plural.)
三 位 学生 (三位學生)
sān wèi xuéshēng
three CL[human] student
"three students"

三 棵 树 (三棵樹)
sān kē shù
three CL[tree] tree
"three trees"

三 只 鸟 (三隻鳥)
sān zhī niǎo
three CL[animal] bird
"three birds"

三 条 河 (三條河)
sān tiáo hé
three CL[long-wavy] river
"three rivers"
个 (個) gè is also often used in informal speech as a general classifier, with almost any noun, taking the place of more specific classifiers.
The noun in such phrases may be omitted, if the classifier alone (and the context) is sufficient to indicate what noun is intended. For example, in answering a question:
多少 条 河 (多少條河)
duōshǎo tiáo hé
how many CL river
"How many rivers?"

三 条 (三條; following noun omitted)
sān tiáo
three CL
"Three."
Languages which make systematic use of (noun) classifiers include Chinese, Japanese, Korean, Southeast Asian languages, Bengali, Assamese, Persian, Austronesian languages, Mayan languages and others. A less typical example of classifiers is those used with the verb. Verbal classifiers are found in languages like Southern Athabaskan.
Classifier handshapes are also found in sign languages, although these have a somewhat different grammatical function.
Classifiers are often derived from nouns (or occasionally other parts of speech), which have become specialized as classifiers, or may retain other uses besides their use as classifiers. Classifiers, like other words, are sometimes borrowed from other languages. A language may be said to have dozens or even hundreds of different classifiers. However, such enumerations often also include measure words.
Measure words play a similar role to classifiers, except that they denote a particular quantity of something (a drop, a cupful, a pint, etc.), rather than the inherent countable units associated with a count noun. Classifiers are used with count nouns; measure words can be used with mass nouns (e.g. "two pints of mud"), and can also be used when a count noun's quantity is not described in terms of its inherent countable units (e.g. "two pints of acorns").
However, the terminological distinction between classifiers and measure words is often blurred – classifiers are commonly referred to as measure words in some contexts, such as Chinese language teaching, and measure words are sometimes called mass-classifiers or similar.[14][15]
Classifiers are not generally a feature of English or other European languages, although classifier-like constructions are found with certain nouns. A commonly cited English example is the word head in phrases such as "five head of cattle": the word cattle (for some speakers) is an uncountable (mass) noun, and requires the word head to enable its units to be counted. A parallel construction exists in French: une tête de bétail ("one head of cattle"), in Spanish: una cabeza de ganado ("one head of cattle") and in Italian: un capo di bestiame ("one head of cattle"). Note the difference between "five head of cattle" (meaning five animals) and "five heads of cattle" (identical to "five cattle's heads", meaning specifically their heads). A similar phrase used by florists is "ten stem of roses" (meaning roses on their stems).
European languages naturally use measure words. These are required for counting in the case of mass nouns, and some can also be used with count nouns. For example, one can have a glass of beer, and a handful of coins. The English construction with of is paralleled in many languages, although in German (and similarly in Dutch and the Scandinavian languages) the two words are simply juxtaposed, e.g. one says ein Glas Bier (literally "a glass beer", with no word for "of"). Slavic languages put the second noun in the genitive case (e.g. Russian чаша пива (chasha piva), literally "a beer's glass"), but Bulgarian, having lost the Slavic case system, uses expressions identical to German (e.g. чаша пиво).
Certain nouns are associated with particular measure words or other classifier-like words that enable them to be counted. For example, paper is often counted in sheets, as in "five sheets of paper". Usage or non-usage of measure words may yield different meanings; e.g. five papers is grammatically equally correct but refers to newspapers or academic papers. Some inherently plural nouns require the word pair(s) (or its equivalent) to enable reference to a single object or specified number of objects, as in "a pair of scissors", "three pairs of pants", or the French une paire de lunettes ("a pair of (eye)glasses").
Australian Aboriginal languages are known for often having extensive noun class systems based on semantic criteria. In many cases, a given noun can be identified as a member of a given class via an adjacent classifier, which can either form a hyponym construction with a specific noun, or act as a generic noun on its own.[16]
In the following example from Kuuk Thaayorre, the specific borrowed noun tin.meat 'tinned meat' is preceded by its generic classifier minh 'meat':
minh tin.meat mungka-rr
CL(meat) tinned-meat(ACC) eat-PST.PFV
'[they] ate tinned meat'
In the next example, the same classifier minh stands in on its own for a generic crocodile (punc), another member of the minh class:
yokun minh-al patha-rr pulnan
perhaps CL(meat)-ERG bite-PST.PFV 3DU.ACC
'perhaps a [crocodile] got them'
Classifiers and specific nouns in Kuuk Thaayorre can also co-occupy the head of a noun phrase to form something like a compound or complex noun, as in ngat minh.patp 'CL(fish) hawk', which is the complex noun meaning 'stingray'.[17]
Another example of this kind of hyponym construction can be seen in Diyari:
ngathi nhinha pirta pathara dandra-rda purri-yi
1SG.ERG 3.SG.NFEM.ACC CL(tree) box.tree.ACC hit-PCP AUX-PRS
'I chop the box tree'
See the nine Diyari classifiers below.[18]
Contrast the above with Ngalakgan, in which classifiers are prefixes on the various phrasal heads of the entire noun phrase (including modifiers):
mungu-yimiliʔ mu-ŋolko gu-mu-rabona
CL(season)-wet.season CL(season)3-big 3sg-CL(season).3-go.FUT
'A big wet season will be coming on'
Ngalakgan has fewer noun classes than many Australian languages; its class prefixes cover semantic groupings such as implements, seasons, etc.
Atypically for an Indo-European language, Bengali makes use of classifiers. Every noun in this language must have its corresponding classifier when used with a numeral or other quantifier. Most nouns take the generic classifier ṭa, although there are many more specific measure words, such as jon, which is only used to count humans. Still, there are many fewer measure words in Bengali than in Chinese or Japanese. As in Chinese, Bengali nouns are not inflected for number.
Nôe-ṭa ghoṛi
nine-CL clock
"Nine clocks"

Kôe-ṭa balish
how.many-CL pillow
"How many pillows"

Ônek-jon lok
many-CL person
"Many people"

Char-pañch-jon shikkhôk
four-five-CL teacher
"Four or five teachers"
Similar to the situation in Chinese, measuring nouns in Bengali without their corresponding measure words (e.g. aṭ biṛal instead of aṭ-ṭa biṛal "eight cats") would typically be considered ungrammatical. However, it is common to omit the classifier when it counts a noun that is not in the nominative case (e.g. aṭ biṛaler desh (eight cats-possessive country), or panc bhUte khelo (five ghosts-instrumental ate)) or when the number is very large (e.g. ek sho lok esechhe ("One hundred people have come")). Classifiers may also be dropped when the focus of the sentence is not on the actual counting but on a statement of fact (e.g. amar char chhele (I-possessive four boy, "I have four sons")). The suffix -ṭa comes from /goṭa/ 'piece', and is also used as a definite article.
Omitting the noun and preserving the classifier is grammatical and common. For example, Shudhu êk-jon thakbe. (lit. "Only one-MW will remain.") would be understood to mean "Only one person will remain.", since jon can only be used to count humans. The word lok "person" is implied.
Maithili, Nepali and Assamese have systems very similar to Bengali's. Maithili uses -ta for objects and -goatey for humans; similarly, Nepali has -waṭā (-वटा) for objects and -janā (-जना) for humans.
Assamese, Chittagonian, Sylheti and other Bengali-Assamese languages have more classifiers than Bengali. The presence of classifiers in Northeast India may be linked to contact with the Tibeto-Burman and Austroasiatic languages spoken in the region.[citation needed]
আমটো Am-tú
mango-CL[inanimate objects]
"The mango"

দুটা শব্দ Du-ta xobdo
two-CL[counting numerals] word
"Two words"

কেইটা বালিছ Kei-ta balis
how.many-CL pillow
"How many pillows"

বালিছকেইটা Balis-kei-ta
pillow-many-CL
"The pillows"

চাৰি-পাঁচজন মানুহ Sari-pas-zon manuh
four-five-CL[male humans (polite)] human
"Four or five men"

মেকুৰীজনী Mekuri-zoni
cat-CL[females of humans and animals]
"The female cat"

এখন ঘৰ E-khon ghor
one-CL[flat small and big items] house
"A house"

কিতাপকেইখন Kitap-kei-khon
book-many-CL
"The books"

পানীখিনি Pani-khini
water-CL[uncountable and uncounted items]
"The water"

সাপডাল Xap-dal
snake-CL[long and thin items]
"The snake"
Persian has a scheme very similar to the Indo-Aryan languages Bengali, Assamese, Maithili and Nepali.
Although not always used in written language, Persian uses classifiers regularly in spoken word. Persian has two general-use classifiers, دانه (dāne) and تا (tā), the former of which is used with singular nouns, while the latter is used with plural nouns.
Yek dāne pesar
one {CL:SG.general use} boy
"One boy"

Do tā pesar
two {CL:PL.general use} boy
"Two boys"

čand tā pesar?
how.many {CL:PL.general use} boy
"How many boys?"
In addition to general-use classifiers, Persian also has several specific classifiers, including the following:
Do bāb forušgāh
two CL:buildings store
"Two stores"

Yek qors nān
one CL:bread bread
"A loaf of bread"

Se kalāf sim
three {CL:wire, yarn, thread} wire
"Three reels of wire"
In Burmese, classifiers, in the form of particles, are used when counting or measuring nouns. They immediately follow the numerical quantification. Nouns to which classifiers refer can be omitted if the context allows, because many classifiers have implicit meanings.
သူ တူ နှစ် ချောင်း ရှိ တယ်
θù tù n̥ə t͡ʃʰáʊɴ ʃḭ dè
Thu tu hna chaung shi de
he chopstick two {CL:long and thin items} have PRES
"He has two chopsticks."

စားပွဲ ခုနစ် လုံး ရှိ လား
zəbwé kʰwɛʔ n̥ə lóʊɴ ʃḭ là
Zabwe khun-hna lon shi la
table seven {CL:round, globular things} have Q
"Do you have seven tables?"
လူ တစ် ဦး
lù tə ú
lu ta u
person one CL:people
"one person" or "a person"
Thai employs classifiers in the widest range of NP constructions compared to similar classifier languages from the area.[19] Classifiers are obligatory for nouns followed by numerals in Thai. Nouns in Thai are counted with a specific classifier,[20] usually a grammaticalized noun.[21] An example of a grammaticalized noun functioning as a classifier is คน (khon). Khon is used for people (except monks and royalty) and literally translates to 'person'. The general form for numerated nouns in Thai is noun-numeral-classifier. Similar to Mandarin Chinese, classifiers in Thai are also used when the noun is accompanied by a demonstrative. However, this is not obligatory in the case of demonstratives.[22] Demonstratives also require a different word order than numerals: the general scheme for demonstratives is noun-classifier-demonstrative. In some instances, classifiers are also used to denote singularity. Thai nouns are bare nominals and are ambiguous with regard to number.[21] In order to differentiate between the expressions "this child" and "these children", a classifier is added to the noun followed by a demonstrative. This 'singularity effect'[21] is apparent in เด็กคนนี้ (child-classifier-this), referring exclusively to one child, as opposed to เด็กนี้ (child this), which is vague in terms of number.
While combining nouns with adjectives can be done without classifiers, as in รถเก่า (rot kao, 'old car'), it is sometimes necessary to add a classifier in order to distinguish the specific object from a group, e.g. รถคันเก่า (rot khan kao, 'the old car').[20][22] Some quantifiers require classifiers in Thai. It has been claimed that quantifiers which do not require classifiers are adjuncts, and those which do are part of the functional structure of the noun phrase.[21] Quantifiers which require a classifier include ทุก (thuk, 'every') and บาง (bang, 'some'). This is also the case for approximations, e.g. หมาบางตัว (ma bang tua, 'some dogs'). Negative quantification is expressed simply by adding ไม่มี (mai mi, 'there are not') in front of the noun.[20]
เพื่อน สอง คน
phuen song khon
friends two CL:people
"Two friends"

นก ตัว หนึ่ง
nok tua nung
bird CL:animals one
"a bird" or "one bird"

ทุเรียน หลาย ลูก
turian lai luk
durian many {CL:fruits or balls}
"Many durians"

รถ คัน นี้
rot khan ni
car {CL:land vehicles} this
"This car"

บ้าน ทุก หลัง
ban tuk lang
house every CL:houses
"Every house"

นักเรียน คน ที่ สอง
nakrian khon thi song
student CL:people {ordinal particle} two
"The second student"

หนังสือ เล่ม ใหม่
nungsue lem mai
book {CL:books and knives} new
"The new book"
Complex nominal phrases can yield expressions containing several classifiers. This phenomenon is rather unique to Thai, compared to other classifier languages from the region.[22]
เรือ ลำ ใหญ่ ลำ นั้น
ruea lam yai lam nan
boat {CL:boats and planes} large {CL:boats and planes} that
"that large boat"

เรือ ลำ ใหญ่ สาม ลำ
ruea lam yai sam lam
boat {CL:boats and planes} large three {CL:boats and planes}
"three large boats"

เรือ ลำ ใหญ่ สาม ลำ นั้น
ruea lam yai sam lam nan
boat {CL:boats and planes} large three {CL:boats and planes} that
"those three large boats"
Although classifiers were not often used in Classical Chinese, in all modern Chinese varieties such as Mandarin, nouns are normally required to be accompanied by a classifier or measure word when they are qualified by a numeral or by a demonstrative. Examples with numerals have been given above in the Overview section. An example with a demonstrative is the phrase for "this person" — 这个人 zhè ge rén. The character 个 is a classifier, literally meaning "individual" or "single entity", so the entire phrase translates literally as "this individual person" or "this single person". A similar example is the phrase for "these people" — 这群人 zhè qún rén, where the classifier 群 means "group" or "herd", so the phrase literally means "this group [of] people" or "this crowd".
The noun in a classifier phrase may be omitted, if the context and choice of classifier make the intended noun obvious. An example of this again appears in the Overview section above.
The choice of a classifier for each noun is somewhat arbitrary and must be memorized by learners of Chinese, but often relates to the object's physical characteristics. For example, the character 条 tiáo, which originally means "twig" or "thin branch", is now used most often as a classifier for thin, elongated things such as rope, snakes and fish, and can be translated as "(a) length (of)", "strip" or "line".
Not all classifiers derive from nouns, however. For example, the character 張/张 zhāng is originally a verb meaning "to span (a bow)", and is now used as a classifier to denote squarish flat objects such as paper, hide, or (the surface of) a table, and can be more or less translated as "sheet". The character 把 bǎ was originally a verb meaning to grasp/grip, but is now more commonly used as the noun for "handle", and as the classifier for "handful".
Technically a distinction is made between classifiers (or count-classifiers), which are used only with count nouns and do not generally carry any meaning of their own, and measure words (or mass-classifiers), which can be used also with mass nouns and specify a particular quantity (such as "bottle" [of water] or "pound" [of fruit]). Less formally, however, the term "measure word" is used interchangeably with "classifier".
In Gilbertese, classifiers must be used as a suffix when counting. The appropriate classifier is chosen based on the kind and shape of the noun, and combines with the numeral, sometimes adopting several different forms.
There is a general classifier (-ua), which appears in the simple numbers (te-ua-na 1; uo-ua 2; ten-ua 3; a-ua 4; nima-ua 5; and so on up to 9) and is used when there is no specific classifier and for counting periods of time and years; and there are specific classifiers such as:
In Japanese grammar, classifiers must be used with a number when counting nouns. The appropriate classifier is chosen based on the kind and shape of the noun, and combines with the numeral, sometimes adopting several different forms.
鉛筆 五本
enpitsu go-hon
pencil five-CL[cylindrical objects]
"five pencils"

犬 三匹
inu san-biki
dog three-CL[small animals]
"three dogs"

子供 四人
kodomo yo-nin
child four-CL[people]
"four children"

鶏 三羽
niwatori san-ba
chicken three-CL[birds]
"three chickens"

ヨット 三艘
yotto san-sō
yacht three-CL[small boats]
"three yachts"

車 一台
kuruma ichi-dai
car one-CL[mechanical objects]
"one car"

トランプ 二枚
toranpu ni-mai
playing.card two-CL[flat objects]
"two cards"
The Korean language has classifiers in the form of suffixes which attach to numerals. For example, jang (장) is used to count sheets of paper, blankets, leaves, and other similar objects: "ten bus tickets" could be translated beoseu pyo yeol-jang (버스 표 열 장), literally "bus ticket ten-[classifier]".
종이 세 장
jong'i se jang
paper three CL[flat objects]
"three sheets of paper"

자전거 다섯 대
jajeongeo daseot dae
bicycle five CL[vehicles]
"five bicycles"

어른 네 명
eoreun ne myeong
adult four CL[people]
"four adults"

물건 여섯 개
mulgeon yeoseot gae
thing six CL[common things]
"six things"

토끼 한 마리
tokki han mari
rabbit one CL[animals]
"one rabbit"

책 두 권
chaek du gwon
book two CL[books]
"two books"

고기 일곱 점
gogi ilgop jeom
meat seven CL[pieces of meat]
"seven pieces of meat"

옷 여덟 벌
ot yeodeol beol
cloth eight CL[clothes]
"eight items of clothing"
In Malay grammar, classifiers are used to count all nouns, including concrete nouns, abstract nouns[23] and phrasal nouns. Nouns are not reduplicated for plural form when used with classifiers, definite or indefinite, although Mary Dalrymple and Suriel Mofu give counterexamples where reduplication and classifiers co-occur.[24] In informal language, classifiers can be used with numbers alone, without the nouns, if the context is well known.
The Malay term for classifiers is penjodoh bilangan, while the term in Indonesian is kata penggolong.
Seekor kerbau.
one-CL:animals water-buffalo
"A water-buffalo."

Dua orang pelajar itu.
two CL:people student that
"Those two students."

Berapa buah kereta yang dijual? / Tiga buah.
how.many CL:general car {relative word} sold? / three CL:general
"How many cars are sold? / Three of them."

Secawan kopi.
one-cup coffee
"A cup of coffee."

Saya mendengar empat das tembakan pistol.
I heard four CL:gunshots gunshots
"I heard four gunshots."

Saya minta sebatang rokok.
I {would like} one-CL:cylindrical.objects cigarette
"I would like a cigarette."

Tiga biji pasir.
three CL:small.grains sand
"Three grains of sand."
Vietnamese uses a similar set of classifiers to Chinese, Japanese and Korean.
ba bộ áo dài
three {[inanimate object counter]} upper garment+long
"three (sets of) áo dài"[25]
Khmer (Cambodian) also uses classifiers, although they can quite frequently be omitted. Since it is a head-first language, the classifier phrase (number plus classifier) comes after the noun.
Santali uses several sets of classifiers. They can be divided into three classes: tɛn (variants tɛc, taŋ) for 'one' and non-human beings; ea with the numerals 'two', 'four' and 'twenty'; and gɔtɛn (variant gɔtɜc) with the numerals from 'five' to 'ten' and with the distributive numerals.
uni mit'-taŋ Kali-boɳga benao-akad-e-a-e
3SG.M one-CLF Kali-idol make-PRF.A-3SG.OBJ-FIN-3SG.SBJ
"He has made a Kali idol."
In American Sign Language, classifier constructions are used to express position, stative description (size and shape), and how objects are handled manually. The particular hand shape used to express any of these constructions is what functions as the classifier. Various hand shapes can represent whole entities; show how objects are handled or instruments are used; represent limbs; and be used to express various characteristics of entities such as dimensions, shape, texture, position, and path and manner of motion. While the label of classifiers has been accepted by many sign language linguists, some argue that these constructions do not parallel oral-language classifiers in all respects and prefer to use other terms, such as polymorphemic or polycomponential signs.[26]
Examples:
Classifiers are part of the grammar of most East Asian languages, including Chinese, Japanese, Korean, Vietnamese, Malay, Burmese, Thai, Hmong, and the Bengali and Munda languages just to the west of the East and Southeast Asia linguistic area. They are present in many Australian Aboriginal languages, including Yidiny and Murrinhpatha. Among indigenous languages of the Americas, classifiers are present in the Pacific Northwest, especially among the Tsimshianic languages, and in many languages of Mesoamerica, including Classic Maya and most of its modern derivatives. They also occur in some languages of the Amazon Basin (most famously Yagua) and a very small number of West African languages.
In contrast, classifiers are entirely[citation needed] absent not only from European languages, but also from many languages of northern Asia (Uralic, Turkic, Mongolic, Tungusic and mainland Paleosiberian languages), and also from the indigenous languages of the southern parts of both North and South America. In Austronesian languages, classifiers are quite common and may have been acquired as a result of contact with Mon–Khmer languages,[citation needed] but the most remote members such as Malagasy and Hawaiian have lost them.
The World Atlas of Language Structures has a global map showing 400 languages and chapter text including geographical discussion:
Numeral classifiers exhibit striking worldwide distribution at the global level. The main concentration of numeral classifiers is in a single zone centered in East and Southeast Asia, but reaching out both westwards and eastwards. To the west, numeral classifiers peter out as one proceeds across the South Asian subcontinent; thus, in this particular region, the occurrence of numeral classifiers cross-cuts what has otherwise been characterized as one of the classical examples of a linguistic area, namely, South Asia. However, numeral classifiers pick up again, albeit in optional usage, in parts of western Asia centering on Iran and Turkey; it is not clear whether this should be considered as a continuation of the same large though interrupted isogloss, or as a separate one. To the east, numeral classifiers extend out through the Indonesian archipelago, and then into the Pacific in a grand arc through Micronesia and then down to the southeast, tapering out in New Caledonia and western Polynesia. Interestingly, whereas in the western parts of the Indonesian archipelago numeral classifiers are often optional, in the eastern parts of the archipelago and in Micronesia numeral classifiers tend once more, as in mainland East and Southeast Asia, to be obligatory. Outside this single large zone, numeral classifiers are almost exclusively restricted to a number of smaller hotbeds, in West Africa, the Pacific Northwest, Mesoamerica, and the Amazon basin. In large parts of the world, numeral classifiers are completely absent.
The concept of noun classifier is distinct from that of noun class.
Nevertheless, there is no clearly demarcated boundary between the two: since classifiers often evolve into class systems, they are two extremes of a grammaticalization continuum.[27]
The Egyptian hieroglyphic script is formed of a repertoire of hundreds of graphemes which play different semiotic roles. Almost every word ends with an unpronounced grapheme (the so-called "determinative") that carries no additional phonetic value of its own. As such, this hieroglyph is a "mute" icon, which does not exist on the spoken level of language but supplies the word in question, through its iconic meaning alone, with extra semantic information.[28]
In recent years, this system of unpronounced graphemes was compared to classifiers in spoken languages. The results show that the two systems, those of unpronounced graphemic classifiers and those of pronounced classifiers in classifier languages, obey similar rules of use and function. The graphemic classifiers of the hieroglyphic script present an emic image of knowledge organization in the Ancient Egyptian mind.[29]
Similar graphemic classifiers are also known in Hieroglyphic Luwian[30] and in Chinese scripts.[31]
|
https://en.wikipedia.org/wiki/Classifier_(linguistics)
|
In formal semantics, conservativity is a proposed linguistic universal which states that any determiner D must obey the equivalence D(A, B) ↔ D(A, A ∩ B). For instance, the English determiner "every" can be seen to be conservative by the equivalence of the following two sentences, schematized in generalized quantifier notation.[1][2][3]
Conceptually, conservativity can be understood as saying that the elements of B which are not elements of A are not relevant for evaluating the truth of the determiner phrase as a whole. For instance, truth of the first sentence above does not depend on which biting non-aardvarks exist.[1][2][3]
Conservativity is significant to semantic theory because there are many logically possible determiners which are not attested as denotations of natural language expressions. For instance, consider the imaginary determiner shmore, defined so that shmore(A, B) is true iff |A| > |B|. If there are 50 biting aardvarks, 50 non-biting aardvarks, and millions of non-aardvark biters, shmore(A, B) will be false but shmore(A, A ∩ B) will be true.[1][2][3]
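The equivalence can be checked mechanically over finite sets; below is a minimal Python sketch, where the set sizes merely echo the aardvark scenario above (scaled down from "millions" for brevity):

# Determiners modeled as relations between finite sets (generalized quantifiers).
def every(A, B):
    return A <= B          # "every A is B": A is a subset of B

def shmore(A, B):          # the imaginary, non-conservative determiner
    return len(A) > len(B)

def conservative_here(D, A, B):
    # Checks D(A, B) <-> D(A, A & B) for one particular A and B.
    return D(A, B) == D(A, A & B)

aardvarks = {("aardvark", i) for i in range(100)}
biters = {("aardvark", i) for i in range(50)} | {("biter", i) for i in range(10000)}

print(conservative_here(every, aardvarks, biters))   # True
print(conservative_here(shmore, aardvarks, biters))  # False: shmore is not conservative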
Some potential counterexamples to conservativity have been observed, notably the English expression "only". This expression has been argued to not be a determiner, since it can stack with bona fide determiners and can combine with non-nominal constituents such as verb phrases.[4]
Different analyses have treated conservativity as a constraint on the lexicon, as a structural constraint arising from the architecture of the syntax-semantics interface, and as a constraint on learnability.[5][6][7]
|
https://en.wikipedia.org/wiki/Conservativity
|
In linguistics, determiner spreading (DS), also known as multiple or double determiners,[1] is the appearance of more than one determiner associated with a noun phrase, usually marking an adjective as well as the noun itself.[2] The extra determiner has been called an adjectival determiner[3] because determiner spreading is most commonly found in adjectival phrases. Typical examples involve multiple occurrences of the definite article or definiteness marking, such as is found in (but not limited to) the languages listed below. The structure of such phrases is widely discussed and there is no one conclusive analysis. Because of this, the example languages below each show unique structure where different proposed analyses have been used.
Determiner spreading is found in Albanian, where two definite articles are used for just one referent, boy.[4] As shown in (1) and (2), the ordering of the noun and the adjective does not matter, as long as the determiner falls before the adjective.
(from Alexiadou et al. 2007:73)
In Albanian, either of the sentence constructions in (1) and (2) is grammatical to mean the good boy. In both of the sentences the determiner i marks the referent boy. The determiner i is a morphological entity marking the adjectival class rather than the definiteness of the noun, as is found in Greek below.[6]
In Modern Greek, determiner spreading is not obligatory, and it contrasts with a non-spreading example in (3):[7]
(3)
The sentence above has an ambiguous meaning, explained by the last two lines in (3). Both of the meanings (i) and (ii) are grammatical without determiner spreading. However, when DS is introduced, the sentence no longer takes on an ambiguous meaning.
Determiner spreading in the example in (4) has eliminated the ambiguity, rendering the second meaning in (ii) ungrammatical.
(4)
The question posed in (5) requires an answer that differentiates which kind of pen, silver or gold. The response in (6) shows determiner spreading occurring because the adjective is a "restrictive modifier".[10]
(5)
(6)
Researchers believe that determiner spreading only occurs when the phrase has an "intersective reading",[13] meaning that the focus of the sentence is the kind of pen rather than the pen itself. Determiner spreading is syntactically how a speaker can give stress to a phrase in Modern Greek. It has been suggested that the same result of focus on the type of pen can be acquired semantically by giving focus to the adjective, as marked by capital letters in (7).
(7)
A commonality between the Greek examples seen thus far is that determiner spreading obligatorily involves the definite article in Greek. Another language-specific observation is that the determiner precedes the adjective. Researchers suggest that in order for an adjective to appear post-nominally, the determiner must precede the adjective for the phrase to be grammatical in Greek. The examples below in (8) and (9) reinforce this observation, showing (8) to be ungrammatical where meghalo follows spiti with no determiner intervening.
(8)
(9)
An alternative structure to the tree above (see the "the silver pen" tree) is shown to the right in "the big house". Researchers have described this structure as a predicative configuration in which the DP to spiti in SpecDP is acting as the subject of the higher DP.[17]
In contrast with the optional DS in Greek, Swedish phrases that have an adjective show obligatory determiner spreading.[18] Example (10) is marked as ungrammatical because it is monadic with respect to DS: there is only one determiner. By suffixing the noun with the determiner -en in example (11), the phrase becomes grammatical.
(10)
(11)
One proposed analysis of the structure of DS in Swedish suggests that den-support is used to support definite DPs when D cannot be satisfied in any other way. It is a type of feature support for definiteness: when the definite noun carries stress, den is inserted to support the DP. The determiner den and the determiner suffix -en are in complementary distribution. This proposed structure is shown in the syntactic tree to the right, where den has already been inserted and -en undergoes downward movement to suffix the noun bil.[21]
|
https://en.wikipedia.org/wiki/Determiner_spreading
|
English determiners (also known as determinatives)[1]: 354 are words – such as the, a, each, some, which, this, and numerals such as six – that are most commonly used with nouns to specify their referents. The determiners form a closed lexical category in English.[2]
The syntactic role characteristically performed by determiners is known as the determinative function (see § Terminology).[3] A determinative combines with a noun (or, more formally, a nominal; see English nouns § Internal structure) to form a noun phrase (NP). This function typically comes before any modifiers in the NP (e.g., some very pretty wool sweaters, not *very pretty some wool sweaters[a]). The determinative function is typically obligatory in a singular, countable, common noun phrase (compare I have a new cat to *I have new cat).
Semantically, determiners are usually definite or indefinite (e.g., the cat versus a cat),[4] and they often agree with the number of the head noun (e.g., a new cat but not *many new cat). Morphologically, they are usually simple and do not inflect.
The most common of these are the definite and indefinite articles, the and a(n). Other determiners in English include the demonstratives this and that, and the quantifiers (e.g., all, many, and none) as well as the numerals.[1]: 373 Determiners also occasionally function as modifiers in noun phrases (e.g., the many changes), determiner phrases (e.g., many more), or in adjective or adverb phrases (e.g., not that big).[1]: 565 They may appear on their own without a noun, similar to pronouns (e.g., I'll have some), but they are distinct from pronouns.[1]: 412
Words and phrases can be categorized by both their syntactic category[b] and their syntactic function. In the clause the dog bit the man, for example, the dog belongs to the syntactic category of noun phrase and performs the syntactic function of subject. The distinction between category and function is at the heart of a terminological issue surrounding the word determiner: various grammars have used the word to describe a category, a function, or both.
Some sources, such as A Comprehensive Grammar of the English Language, use determiner as a term for a category as defined above and determinative for the function that determiners and possessives typically perform in a noun phrase (see § Functions).[5]: 74 Others, such as The Cambridge Grammar of the English Language (CGEL), make the opposite terminological choice.[1]: 354 And still others (e.g., The Grammar Book[6]) use determiner for both the category and the function. This article uses determiner for the category and determinative for the function in the noun phrase.
The lexical category determiner is the class of words described in this article. They head determiner phrases, which can realize the functions determinative, predeterminative, and modifier:
The syntactic function determinative is a function that specifies a noun phrase. That is, determinatives add abstract meanings to the noun phrase, such as definiteness, proximity, number, and the like.[7]: 115 While the determinative function is typically realized by determiner phrases, it may also be realized by noun phrases and prepositional phrases:
This article is about determiners as a lexical category.
Traditional grammar has no concept to match determiners, which are instead classified as adjectives, articles, or pronouns.[5]: 70 The articles and demonstratives have sometimes been seen as forming their own category, but are often classified as adjectives. Linguist and historian Peter Matthews observes that the assumption that determiners are distinct from adjectives is relatively new, "an innovation of … the early 1960s."[5]: 70
In 1892, prior to the emergence of the determiner category in English grammars, Leon Kellner, and later Jespersen,[8] discussed the idea of "determination" of a noun:
In Old English the possessive pronoun, or, as the French say, "pronominal adjective," expresses only the conception of belonging and possession; it is a real adjective, and does not convey, as at present, the idea of determination. If, therefore, Old English authors want to make nouns preceded by possessive pronouns determinative, they add the definite article.[9]
By 1924, Harold Palmer had proposed a part of speech called "Pronouns and Determinatives", effectively "group[ing] with the pronouns all determinative adjectives (e.g., article-like, demonstratives, possessives, numerals, etc.), [and] shortening the term to determinatives (the "déterminatifs" of the French grammarians)."[10]: 24 Palmer separated this category from more prototypical adjectives (what he calls "qualificative adjectives") because, unlike prototypical adjectives, words in this category are not used predicatively, tend not to inflect for comparison, and tend not to be modified.[10]: 45
In 1933, Leonard Bloomfield introduced the term determiner used in this article, which appears to define a syntactic function performed by "limiting adjectives".[11]
Our limiting adjectives fall into two sub-classes of determiners and numeratives … The determiners are defined by the fact that certain types of noun expressions (such as house or big house) are always accompanied by a determiner (as, this house, a big house).[12]: 203
Matthews argues that the next important contribution was by Ralph B. Long in 1961, though Matthews notes that Long's contribution is largely ignored in the bibliographies of later prominent grammars, including A Comprehensive Grammar of the English Language and CGEL. Matthews illustrates Long's analysis with the noun phrase this boy: "this is no longer, in [Long's] account, an adjective. It is instead a pronoun, of a class he called 'determinative', and it has the function of a 'determinative modifier'."[5]: 71 This analysis was developed in a 1962 grammar by Barbara M. H. Strang[5]: 73 and in 1972 by Randolph Quirk and colleagues.[5]: 74 In 1985, A Comprehensive Grammar of the English Language appears to have been the first work to explicitly conceive of determiner as a distinct lexical category.[5]: 74
Until the late 1980s, linguists assumed that, in a phrase like the red ball, the head was the noun ball and that the was a dependent. But a student at MIT named Paul Abney proposed, in his 1987 PhD dissertation about English noun phrases (NPs), that the head was not the noun ball but the determiner the, so that the red ball is a determiner phrase (DP).[13] This has come to be known as the DP analysis or the DP hypothesis (see Determiner phrase), and as of 2008 it is the majority view in generative grammar,[14]: 93 though it is rejected in other perspectives.[15]
The main similarity between adjectives and determiners is that they can both appear immediately before nouns (e.g., many/happy people).
The key difference between adjectives and determiners in English is that adjectives cannot function as determinatives. The determinative function is an element in NPs that is obligatory in most singular countable NPs and typically occurs before any modifiers (see § Functions). For example, *I live in small house is ungrammatical because small house is a singular countable NP lacking a determinative. The adjective small is a modifier, not a determinative. In contrast, if the adjective is replaced or preceded by a possessive NP (I live in my house) or a determiner (I live in that house), then the phrase becomes grammatical because possessive NPs and determiners function as determinatives.[1]: 538
There are a variety of other differences between the categories. Determiners appear in partitive constructions, while adjectives do not (e.g., some of the people but not *happy of the people).[1]: 356 Adjectives can function as a predicative complement in a verb phrase (e.g., that was lovely), but determiners typically cannot (e.g., *that was every).[1]: 253 Adjectives are not typically definite or indefinite, while determiners are.[1]: 54 Adjectives as modifiers in a noun phrase do not need to agree in number with a head noun (e.g., old book, old books), while some determiners do (e.g., this book, these books).[1]: 56 Morphologically, adjectives often inflect for grade (e.g., big, bigger, biggest), while few determiners do.[1]: 356 Finally, adjectives can typically form adverbs by adding -ly (e.g., cheap → cheaply), while determiners cannot.[1]: 766
The boundary between determiner and adjective is not always clear, however. In the case of the word many, for example, the distinction between determiner and adjective is fuzzy, and different linguists and grammarians have placed this term into different categories. The CGEL categorizes many as a determiner because it can appear in partitive constructions, as in many of them.[1]: 539 Alternatively, Bas Aarts offers three reasons to support the analysis of many as an adjective. First, it can be modified by very (as in his very many sins), which is a characteristic typical of certain adjectives but not of determiners. Second, it can occur as a predicative complement: his sins are many. Third, many has a comparative and superlative form (more and most, respectively).[16]: 126
There is disagreement about whether possessive words such as my and your are determiners or not. For example, Collins COBUILD Grammar[17]: 61 classifies them as determiners, while CGEL classifies them as pronouns,[1]: 357 and A Comprehensive Grammar of the English Language has them dually classified as determiners[18]: 253 and as pronouns in determinative function.[18]: 361
The main reason for classifying these possessive words as determiners is that, like determiners, they usually function as determinatives in an NP (e.g., my/the cat).[1]: 357 Reasons for calling them pronouns and not determiners include that the pronouns typically inflect (e.g., I, me, my, mine, myself),[1]: 455 while determiners typically allow no morphological change.[1]: 356 Determiners also appear in partitive constructions, while pronouns do not (e.g., some of the people but not *my of the people).[1]: 356 Also, some determiners can be modified by adverbs (e.g., very many), but this is not possible for pronouns.[1]: 57
The words you and we share features commonly associated with both determiners and pronouns in constructions such as we teachers do not get paid enough. On the one hand, the phrase-initial position of these words is a characteristic they share with determiners (compare the teachers). Furthermore, they cannot combine with more prototypical determiners (*the we teachers), which suggests that they fill the same role.[16]: 125 These characteristics have led linguists and grammarians like Ray Jackendoff and Steven Paul Abney to categorize such uses of we and you as determiners.[19][13][1]: 374
On the other hand, these words can show case contrast (e.g., us teachers), a feature that, in Modern English, is typical of pronouns but not of determiners.[16]: 125 Thus, Evelyne Delorme and Ray C. Dougherty treat words like us as pronouns in apposition with the noun phrases that follow them, an analysis that Merriam–Webster's Dictionary of English Usage also follows.[20][21] Richard Hudson and Mariangela Spinillo also categorize these words as pronouns but without assuming an appositive relationship between the pronoun and the rest of the noun phrase.[22][23]
There is disagreement about whether that is a determiner or a degree adverb in clauses like it is not that unusual. For example, A Comprehensive Grammar of the English Language categorizes this use of that as an adverb. This analysis is supported by the fact that other pre-head modifiers of adjectives that "intensify" their meaning tend to be adverbs, such as awfully in awfully sorry and too in too bright.[18]: 445–447
On the other hand, Aarts categorizes this word as a determiner, a categorization also used in CGEL.[7]: 137[1]: 549 This analysis can be supported by expanding the determiner phrase: it is not all that unusual. All can function as a premodifier of determiners (e.g., all that cake) but not adjectives (e.g., *all unusual), which leads Aarts to suggest that that is a determiner.[16]: 127
Expressions with similar quantification meanings such as a lot of, lots of, plenty of, a great deal of, tons of, etc. are sometimes said to be determiners,[18]: 263 while other grammars argue that they are not words, or even phrases. On the non-determiner analysis, such an expression is simply the first part of a noun phrase.[1]: 349 For example, a lot of work is a noun phrase with lot as its head, which takes a preposition phrase complement beginning with the preposition of. In this view, these expressions could be considered lexical units, but they are not syntactic constituents.
For the sake of this section, Abney's DP hypothesis (see § History) is set aside. In other words, here a DP is taken to be a dependent in a noun phrase (NP) and not the other way around.
A determiner phrase (DP) is headed by a determiner and optionally takes dependents. DPs can take modifiers, which are usually adverb phrases (e.g., [almost no] people) or determiner phrases (e.g., [many more] people).[1]: 431 Comparative determiners like fewer or more can take than prepositional phrase (PP) complements (e.g., it weighs [less than five] grams).[1]: 443 The following tree diagram in the style of CGEL shows the DP far fewer than twenty, with the adverb far as a modifier and the PP than twenty as a complement.
As stated above, there is some terminological confusion about the terms "determiner" and "determinative". In this article, "determiner" is a lexical category while "determinative" is the function most typically performed by determiner phrases (in the same way that "adjective" denotes a category of words while "modifier" denotes the most typical function of adjective phrases). DPs are not the only phrases that can function as determinative, but they are the most common.[1]: 330
A determinative is a function only in noun phrases. It is usually the leftmost constituent in the phrase, appearing before any modifiers.[24] A noun phrase may have many modifiers, but only one determinative is possible.[1] In most cases, a singular, countable, common noun requires a determinative to form a noun phrase; plurals and uncountables do not.[1] The determinative is underlined in the following examples:
The most common function of a DP is determinative in an NP. This is shown in the following syntax tree in the style of CGEL. It features two determiner phrases, all in predeterminer modifier function (see § Predeterminative), and the in determinative function (labeled Det:DP).
If noun phrases can only contain one determinative, the following noun phrases present challenges:
The determiner phrase the functions as the determinative in all the time, and those functions as the determinative in both those cars. But all and both also have specifying roles rather than modifying roles in the noun phrase, much like the determinatives do. To account for noun phrases like these, A Comprehensive Grammar of the English Language also recognizes the function of predeterminative (or predeterminer).[18]: 257 Some linguists and grammarians offer different accounts of these constructions. CGEL, for instance, classifies them as a kind of modifier in noun phrases.[1]: 433
Predeterminatives are typically realized by determiner phrases (e.g., all in all the time). However, they can also be realized by noun phrases (e.g., one-fifth the size) and adverb phrases (e.g., thrice the rate).[7]: 119–120
Determiner phrases can function as pre-head modifiers in noun phrases, such as the determiner phrase two in these two images. In this example, these functions as the determinative of the noun phrase, and two functions as a modifier of the head images.[7]: 126 They can also function as pre-head modifiers in adjective phrases ([AdjP [DP the] more], [AdjP [DP the] merrier]) and adverb phrases ([AdvP [DP the] longer] this dish cooks, [AdvP [DP the] better] it tastes).[1]: 549[7]: 137, 162
Determiner phrases can also function as post-head modifiers in these phrases. For example, the determiners each, enough, less, and more can function as post-head modifiers of noun phrases, as in the determiner phrase each in two seats each.[7]: 132 Enough can fill the same role in adjective phrases (e.g., clear enough) and in adverb phrases (e.g., funnily enough).[1]: 549[7]: 138, 163
DPs also function as modifiers in DPs (e.g., [not that many] people).[1]: 330
Determiners may bear two functions at one time. Usually this is a fusion of determinative and head in an NP where no head noun exists. In the clause many would disagree, the determiner many is the fused determinative-head in the NP that functions as the subject.[1]: 332 In many grammars, both traditional and modern, and in almost all dictionaries, such words are considered to be pronouns rather than determiners.
Several words can belong to the same part of speech but still differ from each other to various extents, with similar words forming subclasses of the part of speech. For example, the articles a and the have more in common with each other than with the demonstratives this or that, but both belong to the class of determiner and, thus, share more characteristics with each other than with words from other parts of speech. Article and demonstrative, then, can be considered subclasses or types of determiners.
Most determiners are very basic in their morphology, but some are compounds.[1]: 391 A large group of these is formed with the words any, every, no, and some together with body, one, thing, or where (e.g., anybody, somewhere).[1]: 411 The morphological phenomenon started in Old English, when thing was combined with some, any, and no. In Middle English, it would combine with every.[25]: 165
The cardinal numbers greater than 99 are also compound determiners.[1]: 356 This group also includes a few and a little,[1]: 391 and Payne, Huddleston, and Pullum argue that once, twice, and thrice also belong here, and not in the adverb category.[26]
Although most determiners do not inflect, the following determiners participate in the system of grade.[1]: 393
The following types of determiners are organized, first, syntactically according to their typical position in a noun phrase in relation to each other and, then, according to their semantic contributions to the noun phrase. This first division, based on categorization fromA Comprehensive Grammar of the English Language, includes three categories:
The secondary divisions are based on the semantic contributions of the determiner to a noun phrase. The subclasses are named according to the labels assigned in CGEL and the Oxford Modern English Grammar, which use essentially the same labels.
According to CGEL, articles serve as "the most basic expression of definiteness and indefiniteness."[1]: 368 That is, while other determiners express definiteness and other kinds of meaning, articles serve primarily as markers of definiteness. The articles are generally considered to be:[27]
Other articles have been posited, including unstressed some, a zero article (indefinite with mass and plural nouns) and a null article (definite with singular proper nouns).[28]
The two main demonstrative determiners are this and that. Their respective plural forms are these and those.[27]
The demonstrative determiners mark noun phrases as definite. They also add meaning related to spatial deixis; that is, they indicate where the thing referenced by the noun is in relation to the speaker. The proximal this signals that the thing is relatively close to the speaker, while the distal that signals that the thing is relatively far.[1]: 373
CGEL classifies the archaic and dialectal yonder (as in the noun phrase yonder hills) as a marginal demonstrative determiner.[1]: 615 Yonder signals that the thing referenced by the noun is far from the speaker, typically farther than what that would signal. Thus, we would expect yonder hills to be farther from the speaker than those hills. Unlike the main demonstrative determiners, yonder does not inflect for number (compare yonder hill).
The following are the distributive determiners:[27]
The distributive determiners mark noun phrases as indefinite.[29] They also add distributive meaning; that is, "they pick out the members of a set singly, rather than considering them in mass."[18]: 382 Because they signal this distributive meaning, these determiners select singular noun heads when functioning as determinatives in noun phrases (e.g., each student).[1]: 378
The following are the existential determiners:[27]
Existential determiners mark a noun phrase as indefinite. They also convey existential quantification, meaning that they assert the existence of a thing in a quantity greater than zero.[1]: 380
The following are the disjunctive determiners:[27]
Disjunctive determiners mark a noun phrase as definite. They also imply a single selection from a set of exactly two.[1]: 387 Because they signal a single selection, disjunctive determiners select singular nouns when functioning as determinatives in noun phrases (e.g., either side). A Comprehensive Grammar of the English Language does not recognize this category and instead labels either an "assertive determiner" and neither a "negative determiner."[18]: 257
The negative determiner is no, with its independent form none.[27] Distinct dependent and independent forms are otherwise found only in possessive pronouns, where the dependent form appears only with a subsequent noun and the independent form without one (e.g., my way and no way are dependent, while mine and none are independent).
No signifies that not one member of a set or sub-quantity of a quantity under consideration has a particular property. Neither also conveys this kind of meaning but is only used when selecting from a set of exactly two, which is why neither is typically classified as disjunctive rather than negative.[1]: 389–390
The additive determiner is another.[27] Another was formed from the compounding of the indefinite article an and the adjective other; thus, it marks a noun phrase as indefinite. It also conveys additive meaning. For example, another banana signals an additional banana in addition to some first banana. Another can also mark an alternative. For example, another banana can also signal a different banana, perhaps one that is riper. Because it can also convey this alternative meaning, another is sometimes labeled an alternative-additive determiner.[1]: 391
The following are the sufficiency determiners:[27]
These determiners convey inexact quantification that is framed in terms of some minimum quantity needed. For instance, enough money for a taxi implies that a minimum amount of money is necessary to pay for a taxi and that the amount of money in question is sufficient for the purpose. When functioning as determinatives in a noun phrase, sufficiency determiners select plural count nouns (e.g., sufficient reasons) or non-count nouns (e.g., enough money).[1]: 396
The following are the interrogative determiners:[27]
These determiners can also be followed by -ever and -soever. Interrogative determiners are typically used in the formation of questions, as in what/which conductor do you like best? Using what marks a noun phrase as indefinite, while using which marks the noun phrase as definite, being used when the context implies a limited number of choices.[18]: 369
The following are the relative determiners:[27]
These determiners can also be followed by -ever. Relative determiners typically function as determinatives in noun phrases that introduce relative clauses, as in we can use whatever/whichever edition you want.[1]: 398
In grammars that consider them determiners rather than pronouns (see § Determiners versus other lexical categories), the personal determiners are the following:[27]
Though these words are normally pronouns, in phrases like we teachers and you guys, they are sometimes classified as personal determiners. Personal determiners mark a noun phrase as definite. They also add meaning related to personal deixis; that is, they indicate whether the thing referenced by the noun includes the speaker (we/us) or at least one addressee and not the speaker (you).[1]: 374 In some dialects such as the Ozark dialect, this usage extends to them, as in them folks.[30]
The following are the universal determiners:[27]
Universal determiners convey universal quantification, meaning that they assert that no subset of a thing exists that lacks the property that is described. For example, saying "all the vegetables are ripe" is the same as saying "no vegetables are not ripe."[1]: 359 The primary difference between all and both is that both applies only to sets with exactly two members, while all lacks this limitation. But CGEL notes that because of the possibility of using both instead, all "generally strongly implicates 'more than two.'"[1]: 374
Cardinal numerals (zero, one, two, thirty-four, etc.) can represent any number. Therefore, the members of this subclass of determiner are infinite in quantity and cannot be listed in full.
Cardinal numerals are typically thought to express the exact number of the things represented by the noun, but this exactness is through implicature rather than necessity. In the clause five people complained, for example, the number of people complaining is usually thought to be exactly five. But technically, the proposition would still be true if additional people were complaining as well: if seven people were complaining, then it is also necessarily true that five people were complaining. General norms of cooperative conversation, however, make it such that cardinal numerals typically express the exact number (e.g., five = no more and no less than five) unless otherwise modified (e.g., at least five or at most five).[1]: 385–386
The following are the positive paucal determiners:[27]
The positive paucal determiners convey a small, imprecise quantity, generally characterized as greater than two but smaller than whatever quantity is considered large. When functioning as determinatives in a noun phrase, most paucal determiners select plural count nouns (e.g., a few mistakes), but a little selects non-count nouns (e.g., a little money).[1]: 391–392
In grammars that consider them determiners rather than adjectives (see § Determiners versus other lexical categories), the degree determiners are the following:[27]
Degree determiners mark a noun phrase as indefinite. They also convey imprecise quantification, with many and much expressing a large quantity and few and little expressing a small quantity. Degree determiners are unusual in that they inflect for grade, a feature typical of adjectives and adverbs but not determiners. The comparative forms of few, little, many, and much are fewer, less, more, and more respectively. The superlative forms are fewest, least, most, and most respectively.[1]: 393 The plain forms can be modified with adverbs, especially very, too, and so (and not can also be added). Note that unmodified much is quite rarely used in affirmative statements in colloquial English.
The main semantic contributions of determiners are quantification and definiteness.
Many determiners express quantification.[31][1]: 358
From a semantic point of view, a definite NP is one that is identifiable and activated in the minds of the first person and the addressee. From a grammatical point of view in English, definiteness is typically marked by definite determiners, such as the, that, this, all, every, both, etc. Linguists find it useful to make a distinction between the grammatical feature of definiteness and the cognitive feature of identifiability.[32]: 84 This accounts for cases of form-meaning mismatch, where a definite determiner results in an indefinite NP, as in the example I met this guy from Heidelberg on the train, where the underlined NP is grammatically definite but semantically indefinite.[32]: 82
The majority of determiners, however, are indefinite. These include the indefinite article a, but also most quantifiers, including the cardinal numerals.
Choosing the definite article over no article in a pair like the Americans and Americans can have the pragmatic effect of depicting "the group as a monolith of which the speaker is not a part."[33] Relatedly, the choice between this and that may have an evaluative purpose, where this suggests closeness, and therefore a more positive evaluation.[34]
|
https://en.wikipedia.org/wiki/English_determiners
|
A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts,[1] mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples.
Semantic networks are used in neurolinguistics and natural language processing applications such as semantic parsing[2] and word-sense disambiguation.[3] Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even to map an entire research field.[4]
The use of semantic networks in logic, and of directed acyclic graphs as a mnemonic tool, dates back centuries; the earliest documented use is the Greek philosopher Porphyry's commentary on Aristotle's categories in the third century AD.
In computing history, "Semantic Nets" for the propositional calculus were first implemented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an "interlingua" for machine translation of natural languages,[5] although the importance of this work and of the Cambridge Language Research Unit was only belatedly realized.
Semantic networks were also independently implemented by Robert F. Simmons[6] and Sheldon Klein, using the first-order predicate calculus as a base, after being inspired by a demonstration of Victor Yngve. The "line of research was originated by the first President of the Association for Computational Linguistics, Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences. Sheldon Klein and I about 1962–1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text."[7] Other researchers, most notably M. Ross Quillian[8] and others at System Development Corporation, helped contribute to this work in the early 1960s as part of the SYNTHEX project. It is these publications at System Development Corporation that most modern derivatives of the term "semantic network" cite as their background. Later prominent works were done by Allan M. Collins and Quillian (e.g., Collins and Quillian;[9][10] Collins and Loftus;[11] Quillian[12][13][14][15]). Still later, in 2006, Hermann Helbig fully described MultiNet.[16]
In the late 1980s, two universities in the Netherlands, Groningen and Twente, jointly began a project called Knowledge Graphs, which are semantic networks but with the added constraint that edges are restricted to a limited set of possible relations, to facilitate algebras on the graph.[17] In the subsequent decades, the distinction between semantic networks and knowledge graphs was blurred.[18][19] In 2012, Google gave their knowledge graph the name Knowledge Graph.
The semantic link network was systematically studied as a semantic social networking method. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links and the reasoning rules on semantic links. The systematic theory and model were published in 2004.[20] This research direction can be traced to the definition of inheritance rules for efficient model retrieval in 1998[21] and the Active Document Framework (ADF).[22] Since 2003, research has developed toward social semantic networking.[23] This work is a systematic innovation for the age of the World Wide Web and global social networking rather than an application or simple extension of the Semantic Net (Network); its purpose and scope are different from those of the Semantic Net.[24] The rules for reasoning and evolution and the automatic discovery of implicit links play an important role in the Semantic Link Network.[25][26] Recently it has been developed to support Cyber-Physical-Social Intelligence.[27] It was used for creating a general summarization method.[28] The self-organised Semantic Link Network was integrated with a multi-dimensional category space to form a semantic space supporting advanced applications with multi-dimensional abstractions and self-organised semantic links.[29][30] It has been verified that the Semantic Link Network plays an important role in understanding and representation through text summarisation applications.[31][32] The Semantic Link Network has been extended from cyberspace to cyber-physical-social space. Competition and symbiosis relations, as well as their roles in an evolving society, were studied in the emerging topic of Cyber-Physical-Social Intelligence.[33]
More specialized forms of semantic networks have been created for specific uses. For example, in 2008, Fawsy Bendeck's PhD thesis formalized the Semantic Similarity Network (SSN), which contains specialized relationships and propagation algorithms to simplify semantic similarity representation and calculations.[34]
A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another.
Most semantic networks are cognitively based. They consist of arcs (spokes) and nodes (hubs), which can be organized into a taxonomic hierarchy. Different semantic networks can also be connected by bridge nodes. Semantic networks contributed to the ideas of spreading activation, inheritance, and nodes as proto-objects.
One process of constructing semantic networks, known also as co-occurrence networks, includes identifying keywords in the text, calculating the frequencies of co-occurrences, and analyzing the networks to find central words and clusters of themes in the network.[35]
In the field of linguistics, semantic networks represent how the human mind handles associated concepts. Typically, concepts in a semantic network can have one of two different relationships: either semantic or associative.
If semantic in relation, the two concepts are linked by any of the following semantic relationships: synonymy, antonymy, hypernymy, hyponymy, holonymy, meronymy, metonymy, or polysemy. These are not the only semantic relationships, but they are some of the most common.
If associative in relation, the two concepts are linked based on the frequency with which they occur together. These associations are accidental, meaning that nothing about their individual meanings requires them to be associated with one another, only that they typically are. Examples of this would be pig and farm, pig and trough, or pig and mud. While nothing about the meaning of pig forces it to be associated with farms, as pigs can be wild, the fact that pigs are so frequently found on farms creates an accidental associative relationship. These thematic relationships are common within semantic networks and are notable results in free association tests.
As the initial word is given, activation of the most closely related concepts begins, spreading outward to the less associated concepts. An example of this would be the initial word pig prompting mammal, then animal, and then breathes. This example shows that taxonomic relationships are inherent within semantic networks. The most closely related concepts typically share semantic features, which are determinants of semantic similarity scores. Words with higher similarity scores are more closely related and thus have a higher probability of being a close word in the semantic network.
These relationships can be reinforced in the brain through priming, where previous examples of the same relationship are shown before the target word. The effect of priming on semantic network links can be seen in the speed of the reaction time to the word. Priming can help to reveal the structure of a semantic network and which words are most closely associated with the original word.
Disruption of a semantic network can lead to a semantic deficit (not to be confused with semantic dementia).
Semantic relationships also have a physical manifestation in the brain. Category-specific semantic circuits show that words belonging to different categories are processed in circuits located in different parts of the brain. For example, the semantic circuit for a word associated with the face or mouth (such as lick) is located in a different place from that for a word associated with the leg or foot (such as kick). This is a primary result of a 2013 study published by Friedemann Pulvermüller. These semantic circuits are directly tied to their sensorimotor areas of the brain. This is known as embodied semantics, a subtopic of embodied language processing.
If brain damage occurs, the normal processing of semantic networks can be disrupted, affecting which kinds of relationships dominate the semantic network in the mind.
The following code shows an example of a semantic network in the Lisp programming language, using an association list.
To extract all the information about the "canary" type, one would use the assoc function with a key of "canary".[36]
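A minimal sketch of such a network and query (the specific nodes and relations here are illustrative, not canonical):

    ;; A small semantic network stored as an association list.
    ;; Each entry has the form (node (relation value) ...).
    (defparameter *network*
      '((canary (is-a bird) (color yellow) (size small))
        (penguin (is-a bird) (movement swim))
        (bird (is-a vertebrate) (has-part wings))
        (vertebrate (is-a animal) (has-part vertebra))))

    ;; Retrieve every relation recorded for the canary node.
    (assoc 'canary *network*)
    ;; => (CANARY (IS-A BIRD) (COLOR YELLOW) (SIZE SMALL))

Following the is-a edges upward (canary, then bird, then vertebrate) then yields inherited properties such as has-part wings, which is the inheritance idea mentioned above.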
An example of a semantic network is WordNet, a lexical database of English. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined are meronymy (A is a meronym of B if A is part of B), holonymy (B is a holonym of A if B contains A), hyponymy (or troponymy) (A is subordinate of B; A is a kind of B), hypernymy (A is superordinate of B), synonymy (A denotes the same as B) and antonymy (A denotes the opposite of B).
WordNet properties have been studied from a network theory perspective and compared to other semantic networks created from Roget's Thesaurus and word association tasks. From this perspective, all three are small world structures.[37]
It is also possible to represent logical descriptions using semantic networks such as the existential graphs of Charles Sanders Peirce or the related conceptual graphs of John F. Sowa.[1] These have expressive power equal to or exceeding standard first-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. Some automated reasoners exploit the graph-theoretic features of the networks during processing.
Other examples of semantic networks are Gellish models. Gellish English, with its Gellish English dictionary, is a formal language that is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, while the multiple languages share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type. Each relation type is itself a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish Dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable.
SciCrunch is a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource IDentifiers, or RRIDs) for software, lab tools, etc., and it also provides options to create links between RRIDs and from communities.
Another example of semantic networks, based on category theory, is ologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams also are prescribed to constrain the semantics.
In the social sciences people sometimes use the term semantic network to refer to co-occurrence networks.[38][39] The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks. This process includes identifying keywords in the text, constructing co-occurrence networks, and analyzing the networks to find central words and clusters of themes in the network. It is a particularly useful method to analyze large texts and big data.[40]
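A minimal sketch of the counting step behind such a network, in the same Lisp style as the earlier example (the corpus and function name are illustrative; each sentence is treated as the unit of co-occurrence):

    ;; Count pairwise co-occurrences of words within each sentence.
    ;; SENTENCES is a list of sentences, each a list of symbols.
    ;; Returns a hash table mapping (word-a word-b) pairs to counts,
    ;; i.e. the weighted edges of the co-occurrence network.
    (defun co-occurrence-counts (sentences)
      (let ((counts (make-hash-table :test #'equal)))
        (dolist (sentence sentences counts)
          (loop for (a . rest) on sentence
                do (dolist (b rest)
                     (incf (gethash (list a b) counts 0)))))))

    ;; The edge PIG-FARM appears once in this tiny corpus.
    (gethash '(pig farm)
             (co-occurrence-counts '((pig farm mud) (pig trough))))
    ;; => 1

A real pipeline would first filter to keywords and then compute centrality and clustering over the resulting graph, as the paragraph above describes.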
There are also elaborate types of semantic networks connected with corresponding sets of software tools used for lexical knowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro[41] or the MultiNet paradigm of Hermann Helbig,[42] especially suited for the semantic representation of natural language expressions and used in several NLP applications.
Semantic networks are used in specialized information retrieval tasks, such as plagiarism detection. They provide information on hierarchical relations in order to employ semantic compression to reduce language diversity and enable the system to match word meanings independently of the sets of words used.
The Knowledge Graph proposed by Google in 2012 is an application of semantic networks in a search engine.
Modeling multi-relational data like semantic networks in low-dimensional spaces through forms of embedding has benefits in expressing entity relationships as well as extracting relations from mediums like text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE[43] (NeurIPS 2013). Applications of embedding knowledge base data include social network analysis and relationship extraction.
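TransE, for example, represents each entity and relation as a vector and trains the embeddings so that, for an observed triple (h, r, t), the head vector translated by the relation vector lands near the tail vector. Its dissimilarity score, in the notation usual for this model, is

    d(\mathbf{h} + \mathbf{r}, \mathbf{t}) = \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert

with the norm taken as either L1 or L2, so that low scores mark plausible triples and high scores implausible ones.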
|
https://en.wikipedia.org/wiki/Semantic_net
|
In computational linguistics, a trigram tagger is a statistical method for automatically identifying words as nouns, verbs, adjectives, adverbs, etc., based on second-order Markov models that consider triples of consecutive words. It is trained on a text corpus as a method to predict the next word, taking the product of the probabilities of unigram, bigram and trigram. In speech recognition, algorithms utilizing a trigram tagger score better than those utilizing an IIMM tagger but less well than a Net tagger.
A description of the trigram tagger is provided by Brants (2000).
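In the standard second-order hidden-Markov formulation used by Brants's TnT tagger, the chosen tag sequence maximizes a product of trigram tag-transition probabilities and word-emission probabilities:

    \hat{t}_1 \ldots \hat{t}_n = \operatorname*{arg\,max}_{t_1 \ldots t_n} \prod_{i=1}^{n} P(t_i \mid t_{i-1}, t_{i-2}) \, P(w_i \mid t_i)

where the sparse trigram probabilities are smoothed by linear interpolation of the unigram, bigram, and trigram estimates mentioned above,

    P(t_i \mid t_{i-1}, t_{i-2}) = \lambda_1 \hat{P}(t_i) + \lambda_2 \hat{P}(t_i \mid t_{i-1}) + \lambda_3 \hat{P}(t_i \mid t_{i-1}, t_{i-2}),

with \lambda_1 + \lambda_2 + \lambda_3 = 1 estimated from the training corpus.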
|
https://en.wikipedia.org/wiki/Trigram_tagger
|
Word-sense disambiguation is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious.
Given that natural language reflects neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has faced a long-term challenge in developing the ability of computers to do natural language processing and machine learning.
Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date.
Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively.
Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: "lexical sample" (disambiguating the occurrences of a small sample of target words which were previously selected) and "all words" (disambiguation of all the words in a running text). The "all words" task is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word.
WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation.[1] Later, Bar-Hillel (1960) argued[2] that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge.
In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck.
By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based.
In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.
The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.
One problem with word sense disambiguation is deciding what the senses are, as different dictionaries and thesauruses will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones.[3][4] Most researchers continue to work on fine-grained WSD.
Most research in the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus[5] and Wikipedia.[6] More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD.[7]
In any real test, part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently researchers have inclined toward testing them separately (e.g., in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate).
Both WSD and part-of-speech tagging involve disambiguating or tagging words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 96%[8] accuracy or better, as compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.
Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven to be far more difficult.[9] While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand: given a list of senses and sentences, humans will not always agree on which sense a word belongs in.[10]
As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than fine-grained distinctions, so this again is why research on coarse-grained distinctions[11][12] has been put to test in recent WSD evaluation exercises.[3][4]
A task-independent sense inventory is not a coherent concept:[13] each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French banque, that is, 'financial bank', or rive, that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant.
Finally, the very notion of "word sense" is slippery and controversial. Most people can agree in distinctions at the coarse-grained homograph level (e.g., pen as writing instrument or enclosure), but go down one level to fine-grained polysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed in only 85% of word occurrences.[14] Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings.[15] Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task named lexical substitution was proposed as a possible solution to the sense discreteness problem.[16] The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness).
There are two main approaches to WSD – deep approaches and shallow approaches.
Deep approaches presume access to a comprehensive body of world knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format, outside very limited domains.[17] Additionally, given the long tradition in computational linguistics of trying such approaches in terms of coded knowledge, it can in some cases be hard to distinguish between the linguistic knowledge and the world knowledge involved. The first attempt was that by Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads" as an indicator of topics, and looked for repetitions in text using a set intersection algorithm. It was not very successful,[18] but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s.
Shallow approaches do not try to understand the text, but instead consider the surrounding words. The rules for interpreting such contexts can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge.
There are four conventional approaches to WSD:
Almost all these approaches work by defining a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
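A minimal sketch of the windowing step common to these approaches (the function name and example are illustrative):

    ;; Return the n words on each side of position i in WORDS,
    ;; excluding the target word itself; this is the feature
    ;; context handed to a classifier.
    (defun context-window (words i n)
      (append (subseq words (max 0 (- i n)) i)
              (subseq words (1+ i) (min (length words) (+ i n 1)))))

    (context-window '(he sat on the bank of the river) 4 2)
    ;; => (ON THE OF THE)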
The Lesk algorithm[19] is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach[20] searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions, and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word.
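The overlap idea can be sketched in simplified form as follows (the sense inventory below is invented for illustration; a real system would draw glosses from a dictionary such as WordNet):

    ;; Simplified Lesk: choose the sense whose gloss shares the most
    ;; words with the context. SENSES is an alist of (name . gloss-words);
    ;; CONTEXT is a list of words, all represented as symbols.
    (defun gloss-overlap (gloss context)
      (length (intersection gloss context)))

    (defun simplified-lesk (senses context)
      (car (reduce (lambda (best sense)
                     (if (> (gloss-overlap (cdr sense) context)
                            (gloss-overlap (cdr best) context))
                         sense
                         best))
                   senses)))

    ;; Disambiguating "pine" in "pine cone": the tree sense wins
    ;; because its gloss shares EVERGREEN and TREE with the context.
    (simplified-lesk '((pine/tree . (evergreen tree with needles))
                       (pine/yearn . (suffer from longing)))
                     '(cone of an evergreen tree))
    ;; => PINE/TREE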
An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet. Graph-based methods reminiscent of spreading activation research of the early days of AI research have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods[21] or even to outperform them on specific domains.[3][22] Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base.[23] Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.[24]
The use of selectional preferences (or selectional restrictions) is also useful. For example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it is not a musical instrument).
Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, common sense and reasoning are deemed unnecessary). Probably every machine learning algorithm in existence has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck, since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.
Because of the lack of training data, many word sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm.[25] It uses the 'one sense per collocation' and 'one sense per discourse' properties of human languages for word sense disambiguation: from observation, words tend to exhibit only one sense in a given discourse and in a given collocation.[26]
The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed or until a given maximum number of iterations is reached.
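The loop below sketches this bootstrapping scheme in outline form; train and predict_proba are placeholders for whatever supervised method and confidence estimate one plugs in, so this is a skeleton rather than a complete system:

```python
def bootstrap(seed_labeled, unlabeled, train, predict_proba,
              threshold=0.95, max_iter=10):
    """Generic self-training loop.
    seed_labeled: list of (example, label); unlabeled: list of examples.
    train(labeled) -> classifier; predict_proba(clf, x) -> (label, confidence)."""
    labeled = list(seed_labeled)
    pool = list(unlabeled)
    for _ in range(max_iter):
        if not pool:
            break  # the whole corpus has been consumed
        clf = train(labeled)
        confident, remaining = [], []
        for x in pool:
            label, conf = predict_proba(clf, x)
            (confident if conf >= threshold else remaining).append((x, label))
        if not confident:
            break  # no classifications confident enough to add
        labeled.extend(confident)          # grow the training set
        pool = [x for x, _ in remaining]   # keep trying the rest next round
    return train(labeled)
```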
Other semi-supervised techniques use large quantities of untagged corpora to provideco-occurrenceinformation that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains.
Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer such cross-lingual sense distinctions, a kind of semi-supervised system.
Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context,[27] a task referred to as word sense induction or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since the induced senses must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree of diversification of result lists.[28][29] It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck, because such methods do not depend on manual effort.
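A minimal sketch of word sense induction by clustering context vectors, here using TF-IDF features and k-means from scikit-learn; the toy occurrences of "bass" and the choice of k are illustrative only:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def induce_senses(contexts, k=2):
    """contexts: list of strings, each holding the words surrounding one
    occurrence of the ambiguous word. Returns one induced cluster id per
    occurrence; the clusters play the role of induced senses."""
    vectors = TfidfVectorizer().fit_transform(contexts)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

occurrences = [
    "caught a huge bass while fishing on the lake",
    "the bass line drives the whole song",
    "grilled bass with lemon for dinner",
    "turned up the bass on the amplifier",
]
print(induce_senses(occurrences, k=2))  # e.g. [0, 1, 0, 1]
```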
Representing words considering their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental building blocks in several NLP systems.[30][31][32] Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD.[33] A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters.[34][35] In addition to word-embedding techniques, lexical databases (e.g., WordNet, ConceptNet, BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend[36][37] and Most Suitable Sense Annotation (MSSA).[38] AutoExtend[37] is a method that decouples an object's input representation into its properties, such as words and their word senses. It uses a graph structure to map word objects (e.g., text) and non-word objects (e.g., synsets in WordNet) as nodes, and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either addition or similarity between nodes; the former captures the intuition behind the offset calculus,[30] while the latter defines the similarity between two nodes. In MSSA,[38] an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and WordNet. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet's glosses (i.e., a short defining gloss and one or more usage examples) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity between a target word and its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word sense embeddings to repeat its disambiguation process iteratively.
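The gloss-centroid selection step described above might be sketched as follows, assuming a pre-trained embedding table (a dict from word to vector) and a dict of gloss words per sense; this is a simplification in the spirit of MSSA, not its reference implementation:

```python
import numpy as np

def centroid(words, embeddings):
    """Average the vectors of those words present in the embedding table."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_sense(context_words, sense_glosses, embeddings):
    """sense_glosses: dict sense id -> list of gloss words (e.g. from WordNet).
    Chooses the sense whose gloss centroid is most similar to the centroid
    of the surrounding context words."""
    ctx = centroid(context_words, embeddings)
    if ctx is None:
        return None
    best, best_sim = None, -2.0
    for sense, gloss in sense_glosses.items():
        c = centroid(gloss, embeddings)
        if c is not None:
            sim = cosine(ctx, c)
            if sim > best_sim:
                best, best_sim = sense, sim
    return best
```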
Other approaches vary in their methods:
The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. Unsupervised methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises.
One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically.[50] WSD has traditionally been understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: web search engines implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described in Automatic acquisition of sense-tagged corpora.
Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be classified as follows:[51][52]
Structured:
Unstructured:
Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. In order to test an algorithm, developers must spend time annotating all word occurrences, and comparing methods even on the same corpus is not possible if they rely on different sense inventories.
In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor, SemEval (2007). The objective of the competition is to organize different tasks, to prepare and hand-annotate corpora for testing systems, and to perform a comparative evaluation of WSD systems in several kinds of tasks, including all-words and lexical-sample WSD for different languages and, more recently, new tasks such as semantic role labeling, gloss WSD, lexical substitution, etc. The systems submitted for evaluation in these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially to avoid bad performance when training examples are lacking).
Between 2007 and 2012, the choice of WSD evaluation tasks grew, and the criteria for evaluating WSD changed drastically depending on the variant of the evaluation task. The variety of WSD tasks is enumerated below:
As technology evolves, Word Sense Disambiguation (WSD) tasks grow in different flavors, toward various research directions and for more languages:
|
https://en.wikipedia.org/wiki/Word_sense_disambiguation
|
Adversarial information retrieval(adversarial IR) is a topic ininformation retrievalrelated to strategies for working with a data source where some portion of it has been manipulated maliciously. Tasks can include gathering, indexing, filtering, retrieving and ranking information from such a data source. Adversarial IR includes the study of methods to detect, isolate, and defeat such manipulation.
On the Web, the predominant form of such manipulation is search engine spamming (also known as spamdexing), which involves employing various techniques to disrupt the activity of web search engines, usually for financial gain. Examples of spamdexing are link bombing, comment or referrer spam, spam blogs (splogs), and malicious tagging. Reverse engineering of ranking algorithms, click fraud,[1] and web content filtering may also be considered forms of adversarial data manipulation.[2]
Topics related to Web spam (spamdexing):
Other topics:
The term "adversarial information retrieval" was first coined in 2000 byAndrei Broder(then Chief Scientist atAlta Vista) during the Web plenary session at theTREC-9 conference.[3]
|
https://en.wikipedia.org/wiki/Adversarial_information_retrieval
|
Computer memorystores information, such as data and programs, for immediate use in thecomputer.[2]The termmemoryis often synonymous with the termsRAM,main memory,orprimary storage.Archaic synonyms for main memory includecore(for magnetic core memory) andstore.[3]
Main memory operates at a high speed compared tomass storagewhich is slower but less expensive per bit and higher in capacity. Besides storing opened programs and data being actively processed, computer memory serves as amass storage cacheandwrite bufferto improve both reading and writing performance. Operating systems borrowRAMcapacity for caching so long as it is not needed by running software.[4]If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique calledvirtual memory.
Modern computer memory is implemented assemiconductor memory,[5][6]where data is stored withinmemory cellsbuilt fromMOS transistorsand other components on anintegrated circuit.[7]There are two main kinds of semiconductor memory:volatileandnon-volatile. Examples ofnon-volatile memoryareflash memoryandROM,PROM,EPROM, andEEPROMmemory. Examples ofvolatile memoryaredynamic random-access memory(DRAM) used for primary storage andstatic random-access memory(SRAM) used mainly forCPU cache.
Most semiconductor memory is organized into memory cells each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and a multi-level cell capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory.
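For example, a quick illustrative calculation of this 2^N relationship (the 8-byte word size is an assumption for the sake of the example):

```python
# Number of addressable words for an N-bit address, and the resulting
# capacity for a machine with 8-byte (64-bit) words.
for n in (10, 16, 32):
    words = 2 ** n
    print(f"{n}-bit address -> {words} words ({words * 8} bytes at 8 bytes/word)")
# 10-bit address -> 1024 words (8192 bytes at 8 bytes/word), and so on.
```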
In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmabledigital computer, theENIAC, using thousands ofvacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes.
The next significant advance in computer memory came with acousticdelay-line memory, developed byJ. Presper Eckertin the early 1940s. Through the construction of a glass tube filled withmercuryand plugged at each end with a quartz crystal, delay lines could storebits of informationin the form of sound waves propagating through the mercury, with the quartz crystals acting astransducersto read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits.
Two alternatives to the delay line, theWilliams tubeandSelectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Usingcathode-ray tubes, Fred Williams invented the Williams tube, which was the firstrandom-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances.
Efforts began in the late 1940s to findnon-volatile memory.Magnetic-core memoryallowed for memory recall after power loss. It was developed by Frederick W. Viehe andAn Wangin the late 1940s, and improved byJay ForresterandJan A. Rajchmanin the early 1950s, before being commercialized with theWhirlwind Icomputer in 1953.[8]Magnetic-core memory was the dominant form of memory until the development ofMOSsemiconductor memoryin the 1960s.[9]
The firstsemiconductor memorywas implemented as aflip-flopcircuit in the early 1960s usingbipolar transistors.[9]Semiconductor memory made fromdiscrete deviceswas first shipped byTexas Instrumentsto theUnited States Air Forcein 1961. In the same year, the concept ofsolid-statememory on anintegrated circuit(IC) chip was proposed byapplications engineerBob Norman atFairchild Semiconductor.[10]The first bipolar semiconductor memory IC chip was the SP95 introduced byIBMin 1965.[9]While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s.[9][11]
The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use ofmetal–oxide–semiconductor(MOS) transistors asmemory cellstorage elements. MOS memory was developed by John Schmidt atFairchild Semiconductorin 1964.[12]In addition to higher performance, MOSsemiconductor memorywas cheaper and consumed less power than magnetic core memory.[13]In 1965, J. Wood and R. Ball of theRoyal Radar Establishmentproposed digital storage systems that useCMOS(complementary MOS) memory cells, in addition to MOSFETpower devicesfor thepower supply, switched cross-coupling,switchesanddelay-line storage.[14]The development ofsilicon-gateMOS integrated circuit(MOS IC) technology byFederico Fagginat Fairchild in 1968 enabled the production of MOSmemory chips.[15]NMOSmemory was commercialized byIBMin the early 1970s.[16]MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.[13]
The two main types of volatilerandom-access memory(RAM) arestatic random-access memory(SRAM) anddynamic random-access memory(DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963,[9]followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964.[13]SRAM became an alternative to magnetic-core memory, but requires six transistors for eachbitof data.[17]Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for theSystem/360 Model 95.[9]
Toshibaintroduced bipolar DRAMmemory cellsfor its Toscal BC-1411electronic calculatorin 1965.[18][19]While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory.[20]MOS technology is the basis for modern DRAM. In 1966,Robert H. Dennardat theIBM Thomas J. Watson Research Centerwas working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to buildcapacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell.[17]In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology.[21]This led to the first commercial DRAM IC chip, theIntel 1103in October 1970.[22][23][24]Synchronous dynamic random-access memory(SDRAM) later debuted with theSamsungKM48SL2000 chip in 1992.[25][26]
The termmemoryis also often used to refer tonon-volatile memoryincludingread-only memory(ROM) through modernflash memory.Programmable read-only memory(PROM) was invented byWen Tsing Chowin 1956, while working for the Arma Division of the American Bosch Arma Corporation.[27][28]In 1967, Dawon Kahng andSimon Szeof Bell Labs proposed that thefloating gateof a MOSsemiconductor devicecould be used for the cell of a reprogrammable ROM, which led toDov FrohmanofIntelinventingEPROM(erasable PROM) in 1971.[29]EEPROM(electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at theElectrotechnical Laboratoryin 1972.[30]Flash memory was invented byFujio MasuokaatToshibain the early 1980s.[31][32]Masuoka and colleagues presented the invention ofNOR flashin 1984,[33]and thenNAND flashin 1987.[34]Toshiba commercialized NAND flash memory in 1987.[35][36][37]
Developments in technology and economies of scale have made possible so-calledvery large memory(VLM) computers.[37]
Volatile memory is computer memory that requires power to maintain the stored information. Most modernsemiconductorvolatile memory is eitherstatic RAM(SRAM) ordynamic RAM(DRAM).[a]DRAM dominates for desktop system memory. SRAM is used forCPU cache. SRAM is also found in smallembedded systemsrequiring little memory.
SRAM retains its contents as long as the power is connected and may use a simpler interface, but commonly uses six transistors per bit. Dynamic RAM is more complicated to interface and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much lower per-bit costs.[2][23][37]
Non-volatile memory can retain the stored information even when not powered. Examples of non-volatile memory includeread-only memory,flash memory, most types of magnetic computer storage devices (e.g.hard disk drives,floppy disksandmagnetic tape),optical discs, and early computer storage methods such asmagnetic drum,paper tapeandpunched cards.[37]
Non-volatile memory technologies under development includeferroelectric RAM,programmable metallization cell,Spin-transfer torque magnetic RAM,SONOS,resistive random-access memory,racetrack memory,Nano-RAM,3D XPoint, andmillipede memory.
A third category of memory issemi-volatile. The term is used to describe a memory that has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of non-volatile memory.
For example, some non-volatile memory types experience wear when written. Aworncell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. After a period of time without update, the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits.[38]
As a second example, anSTT-RAMcan be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold.[39]
The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types, such asnvSRAM, which combinesSRAMand a non-volatile memory on the samechip, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost. Another example isbattery-backed RAM, which uses an externalbatteryto power the memory device in case of external power loss. If power is off for an extended period of time, the battery may run out, resulting in data loss.[37]
Proper management of memory is vital for a computer system to operate properly. Modernoperating systemshave complex systems to properly manage memory. Failure to do so can lead to bugs or slow performance.
Improper management of memory is a common cause of bugs and security vulnerabilities, including the following types:
Virtual memory is a system where physical memory is managed by the operating system, typically with assistance from a memory management unit, which is part of many modern CPUs. It allows multiple types of memory to be used. For example, some data can be stored in RAM while other data is stored on a hard drive (e.g. in a swap file), functioning as an extension of the cache hierarchy. This offers several advantages. Computer programmers no longer need to worry about where their data is physically stored or whether the user's computer will have enough memory. The operating system will place actively used data in RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving data from RAM to disk and back than it does accomplishing tasks; this is known as thrashing.
Protected memory is a system where each program is given an area of memory to use and is prevented from going outside that range. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated (or otherwise restricted or redirected). This way, only the offending program crashes, and other programs are not affected by the misbehavior (whether accidental or intentional). Use of protected memory greatly enhances both the reliability and security of a computer system.
Without protected memory, it is possible that a bug in one program will alter the memory used by another program. This will cause that other program to run off of corrupted memory with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to berebooted. At times programs intentionally alter the memory used by other programs. This is done by viruses and malware to take over computers. It may also be used benignly by desirable programs which are intended to modify other programs,debuggers, for example, to insert breakpoints or hooks.
|
https://en.wikipedia.org/wiki/Computer_memory
|
Controlled vocabulariesprovide a way to organize knowledge for subsequent retrieval. They are used insubject indexingschemes,subject headings,thesauri,[1][2]taxonomiesand otherknowledge organization systems. Controlled vocabulary schemes mandate the use of predefined, preferred terms that have been preselected by the designers of the schemes, in contrast tonatural languagevocabularies, which have no such restriction.[3]
In library and information science, a controlled vocabulary is a carefully selected list of words and phrases, which are used to tag units of information (document or work) so that they may be more easily retrieved by a search.[4][5] Controlled vocabularies solve the problems of homographs, synonyms and polysemes by a bijection between concepts and preferred terms. In short, controlled vocabularies reduce the unwanted ambiguity inherent in normal human languages, where the same concept can be given different names, and ensure consistency.[3]
For example, in theLibrary of Congress Subject Headings[6](a subject heading system that uses a controlled vocabulary), preferred terms—subject headings in this case—have to be chosen to handle choices between variant spellings of the same word (American versus British), choice among scientific and popular terms (cockroachversusPeriplaneta americana), and choices between synonyms (automobileversuscar), among other difficult issues.
Choices of preferred terms are based on the principles ofuser warrant(what terms users are likely to use),literary warrant(what terms are generally used in the literature and documents), andstructural warrant(terms chosen by considering the structure, scope of the controlled vocabulary).
Controlled vocabularies also typically handle the problem ofhomographswith qualifiers. For example, the termpoolhas to be qualified to refer to eitherswimming poolor the gamepoolto ensure that each preferred term or heading refers to only one concept.[7]
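A minimal sketch of both mechanisms, synonym control and homograph qualification, as a simple lookup; the entries are toy examples in the spirit of those above, not drawn from any real scheme:

```python
# Variant terms (synonyms, spellings) map to one preferred term, and
# homographs are qualified so each heading names exactly one concept.
PREFERRED = {
    "car": "automobile",
    "auto": "automobile",
    "colour": "color",
}
HOMOGRAPH_QUALIFIERS = {
    ("pool", "swimming"): "pool (swimming)",
    ("pool", "billiards"): "pool (game)",
}

def preferred_term(term, sense_hint=None):
    """Resolve a raw term to its preferred heading, qualifying if needed."""
    if sense_hint and (term, sense_hint) in HOMOGRAPH_QUALIFIERS:
        return HOMOGRAPH_QUALIFIERS[(term, sense_hint)]
    return PREFERRED.get(term, term)

print(preferred_term("car"))               # -> automobile
print(preferred_term("pool", "swimming"))  # -> pool (swimming)
```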
There are two main kinds of controlled vocabulary tools used in libraries: subject headings[8]andthesauri. While the differences between the two are diminishing, there are still some minor differences.
Thetermsare chosen and organized by trained professionals (including librarians and information scientists) who possess expertise in the subject area. Controlled vocabulary terms can accurately describe what a given document is actually about, even if the terms themselves do not occur within the document's text. Well known subject heading systems include theLibrary of Congress system,Medical Subject Headings(MeSH) created by theUnited States National Library of Medicine, andSears. Well known thesauri include theArt and Architecture Thesaurusand theERICThesaurus.
When selecting terms for a controlled vocabulary, the designer has to consider the specificity of the terms chosen, whether to use direct entry, and the consistency and stability of the language.
Lastly, the amount of pre-coordination (in which case the degree of enumeration versus synthesis becomes an issue) and post-coordination in the system is another important issue. Controlled vocabulary elements (terms/phrases) employed as tags to aid in the content identification process of documents, or of other information system entities (e.g. DBMS, Web services), qualify as metadata.
There are three main types of indexing languages.
When indexing a document, the indexer also has to choose the level of indexing exhaustivity, the level of detail in which the document is described. For example, using low indexing exhaustivity, minor aspects of the work will not be described with index terms. In general the higher the indexing exhaustivity, the more terms indexed for each document.
In recent years free text search as a means of access to documents has become popular. This involves using natural language indexing with indexing exhaustivity set to maximum (every word in the text is indexed). These methods have been compared in some studies, such as the 2007 article, "A Comparative Evaluation of Full-text, Concept-based, and Context-sensitive Search".[9]
Controlled vocabularies are often claimed to improve the accuracy of free text searching, for example by reducing irrelevant items in the retrieval list. These irrelevant items (false positives) are often caused by the inherent ambiguity of natural language. Take the English word football, for example. Football is the name given to a number of different team sports. Worldwide the most popular of these is association football, which also happens to be called soccer in several countries. The word football is also applied to rugby football (rugby union and rugby league), American football, Australian rules football, Gaelic football, and Canadian football. A search for football therefore will retrieve documents about several completely different sports. Controlled vocabulary solves this problem by tagging the documents in such a way that the ambiguities are eliminated.
Compared to free text searching, the use of a controlled vocabulary can dramatically increase the performance of an information retrieval system, if performance is measured by precision (the percentage of documents in the retrieval list that are actuallyrelevantto the search topic).
In some cases controlled vocabulary can enhance recall as well, because unlike natural language schemes, once the correct preferred term is searched, there is no need to search for other terms that might be synonyms of that term.
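These two measures are often stated set-theoretically. A standard formulation (not specific to any system described here), with Rel the set of relevant documents and Ret the set retrieved:

```latex
\mathrm{precision} = \frac{|\mathit{Rel} \cap \mathit{Ret}|}{|\mathit{Ret}|},
\qquad
\mathrm{recall} = \frac{|\mathit{Rel} \cap \mathit{Ret}|}{|\mathit{Rel}|}
```

So a retrieval list of 20 documents containing 15 relevant ones has precision 15/20 = 0.75; if the collection holds 30 relevant documents in total, recall is 15/30 = 0.5.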
A controlled vocabulary search may lead to unsatisfactoryrecall, in that it will fail to retrieve some documents that are actually relevant to the search question.
This is particularly problematic when the search question involves terms sufficiently tangential to the subject area that the indexer may have decided to tag the document with a different term from the one the searcher would choose. Essentially, this can be avoided only by an experienced user of the controlled vocabulary whose understanding of the vocabulary coincides with that of the indexer.
Another possibility is that the article is simply not tagged by the indexer because indexing exhaustivity is low. For example, an article might mention football as a secondary focus, and the indexer might decide not to tag it with "football" because it is not important enough compared to the main focus. But it may turn out that for the searcher that article is relevant, and hence recall fails. A free text search would automatically pick up that article regardless.
On the other hand, free text searches have high exhaustivity (every word is searched), so although they have much lower precision, they have the potential for high recall, as long as the searcher overcomes the problem of synonyms by entering every likely variant.
Controlled vocabularies may become outdated rapidly in fast developing fields of knowledge, unless the preferred terms are updated regularly. Even in an ideal scenario, a controlled vocabulary is often less specific than the words of the text itself. Indexers trying to choose the appropriate index terms might misinterpret the author, while this precise problem is not a factor in a free text, as it uses the author's own words.
The use of controlled vocabularies can be costly compared to free text searches, because human experts or expensive automated systems are necessary to index each entry. Furthermore, the user has to be familiar with the controlled vocabulary scheme to make best use of the system. But, as already mentioned, the control of synonyms and homographs can help increase precision.
Numerous methodologies have been developed to assist in the creation of controlled vocabularies, includingfaceted classification, which enables a given data record or document to be described in multiple ways.
Word choice in chosen vocabularies is not neutral, and the indexer must carefully consider the ethics of their word choices. For example, traditionally colonialist terms have often been the preferred terms in chosen vocabularies when discussing First Nations issues, which has caused controversy.[10]
Controlled vocabularies, such as the Library of Congress Subject Headings, are an essential component of bibliography, the study and classification of books. They were initially developed in library and information science. In the 1950s, government agencies began to develop controlled vocabularies for the burgeoning journal literature in specialized fields; an example is the Medical Subject Headings (MeSH) developed by the U.S. National Library of Medicine. Subsequently, for-profit firms (called abstracting and indexing services) emerged to index the fast-growing literature in every field of knowledge. In the 1960s, an online bibliographic database industry developed based on dial-up X.25 networking. These services were seldom made available to the public because they were difficult to use; specialist librarians called search intermediaries handled the searching job. In the 1980s, the first full-text databases appeared; these databases contain the full text of the indexed articles as well as the bibliographic information. Online bibliographic databases have migrated to the Internet and are now publicly available; however, most are proprietary and can be expensive to use. Students enrolled in colleges and universities may be able to access some of these services without charge; some of these services may be accessible without charge at a public library.
In large organizations, controlled vocabularies may be introduced to improvetechnical communication. The use of controlled vocabulary ensures that everyone is using the same word to mean the same thing. This consistency of terms is one of the most important concepts intechnical writingandknowledge management, where effort is expended to use the same word throughout adocumentororganizationinstead of slightly different ones to refer to the same thing.
Web searching could be dramatically improved by the development of a controlled vocabulary for describing Web pages; the use of such a vocabulary could culminate in aSemantic Web, in which the content of Web pages is described using a machine-readablemetadatascheme. One of the first proposals for such a scheme is theDublin CoreInitiative. An example of a controlled vocabulary which is usable forindexing web pagesisPSH.
It is unlikely that a single metadata scheme will ever succeed in describing the content of the entire Web.[11] To create a Semantic Web, it may be necessary to draw from two or more metadata systems to describe a Web page's contents. The eXchangeable Faceted Metadata Language (XFML) is designed to enable controlled vocabulary creators to publish and share metadata systems. XFML is designed on faceted classification principles.[12]
Controlled vocabularies of theSemantic Webdefine the concepts and relationships (terms) used to describe a field of interest or area of concern. For instance, to declare a person in a machine-readable format, a vocabulary is needed that has the formal definition of "Person", such as the Friend of a Friend (FOAF) vocabulary, which has a Person class that defines typical properties of a person including, but not limited to, name, honorific prefix, affiliation, email address, and homepage, or the Person vocabulary ofSchema.org.[13]Similarly, a book can be described using the Book vocabulary ofSchema.org[14]and general publication terms from theDublin Corevocabulary,[15]an event with the Event vocabulary ofSchema.org,[16]and so on.
To use machine-readable terms from any controlled vocabulary, web designers can choose from a variety of annotation formats, including RDFa,HTML5 Microdata, orJSON-LDin the markup, orRDFserializations (RDF/XML, Turtle, N3, TriG, TriX) in external files.
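For instance, a Schema.org Person description might be emitted as JSON-LD like this (built here with Python's json module; the property values are invented, while the property names come from the Schema.org vocabulary):

```python
import json

# A Schema.org "Person" description serialized as JSON-LD: the @context
# names the controlled vocabulary, and each key is a term from it.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Ada Example",
    "email": "ada@example.org",
    "url": "https://example.org/ada",
}
print(json.dumps(person, indent=2))
```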
|
https://en.wikipedia.org/wiki/Controlled_vocabulary
|
Cross-language information retrieval (CLIR) is a subfield of information retrieval dealing with retrieving information written in a language different from the language of the user's query.[1] The term "cross-language information retrieval" has many synonyms, of which the following are perhaps the most frequent: cross-lingual information retrieval, translingual information retrieval, multilingual information retrieval. The term "multilingual information retrieval" refers more generally both to technology for retrieval of multilingual collections and to technology that has been adapted to handle material crossing from one language to another. The term Multilingual Information Retrieval (MLIR) involves the study of systems that accept queries for information in various languages and return objects (text, and other media) of various languages, translated into the user's language. Cross-language information retrieval refers more specifically to the use case where users formulate their information need in one language and the system retrieves relevant documents in another. To do so, most CLIR systems use various translation techniques. CLIR techniques can be classified into different categories based on the translation resources they use:[2]
CLIR systems have improved so much that the most accurate multilingual and cross-lingual ad hoc information retrieval systems today are nearly as effective as monolingual systems.[3] Other related information access tasks, such as media monitoring, information filtering and routing, sentiment analysis, and information extraction, require more sophisticated models and typically more processing and analysis of the information items of interest. Much of that processing needs to be aware of the specifics of the target languages it is deployed in.
The various mechanisms of variation in human language pose coverage challenges for information retrieval systems: texts in a collection may treat a topic of interest but use terms or expressions that do not match the expression of the information need given by the user. This can be true even in the monolingual case, but it is especially true in cross-lingual information retrieval, where users may know the target language only to some extent. The benefits of CLIR technology for users with poor to moderate competence in the target language have been found to be greater than for those who are fluent.[4] Specific technologies in place for CLIR services include morphological analysis to handle inflection, decompounding or compound splitting to handle compound terms, and translation mechanisms to translate a query from one language to another.
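A minimal sketch of dictionary-based query translation, one of the simplest CLIR techniques: each source-language query term is replaced by all of its target-language translations before monolingual retrieval is run. The toy bilingual lexicon is invented for illustration:

```python
# Toy German -> English lexicon; real systems use far larger resources
# and weight or disambiguate the alternative translations.
BILINGUAL_LEXICON = {
    "bank": ["bank", "bench"],   # ambiguous source term keeps both senses
    "geld": ["money"],
}

def translate_query(terms, lexicon):
    """Expand each query term into its target-language translations,
    keeping untranslatable terms (e.g. names) as-is."""
    translated = []
    for t in terms:
        translated.extend(lexicon.get(t.lower(), [t]))
    return translated

print(translate_query(["Bank", "Geld"], BILINGUAL_LEXICON))
# -> ['bank', 'bench', 'money']
```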
The first workshop on CLIR was held in Zürich during the SIGIR-96 conference.[5]Workshops have been held yearly since 2000 at the meetings of theCross Language Evaluation Forum(CLEF). Researchers also convene at the annualText Retrieval Conference(TREC) to discuss their findings regarding different systems and methods of information retrieval, and the conference has served as a point of reference for the CLIR subfield.[6]Early CLIR experiments were conducted at TREC-6, held at theNational Institute of Standards and Technology(NIST) on November 19–21, 1997.[7]
Google Searchhad a cross-language search feature that was removed in 2013.[8]
|
https://en.wikipedia.org/wiki/Cross-language_information_retrieval
|
Data miningis the process of extracting and finding patterns in massivedata setsinvolving methods at the intersection ofmachine learning,statistics, anddatabase systems.[1]Data mining is aninterdisciplinarysubfield ofcomputer scienceandstatisticswith an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4]Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5]Aside from the raw analysis step, it also involves database anddata managementaspects,data pre-processing,modelandinferenceconsiderations, interestingness metrics,complexityconsiderations, post-processing of discovered structures,visualization, andonline updating.[1]
The term "data mining" is amisnomerbecause the goal is the extraction ofpatternsand knowledge from large amounts of data, not theextraction (mining) of data itself.[6]It also is abuzzword[7]and is frequently applied to any form of large-scale data orinformation processing(collection,extraction,warehousing, analysis, and statistics) as well as any application ofcomputer decision support systems, includingartificial intelligence(e.g., machine learning) andbusiness intelligence. Often the more general terms (large scale)data analysisandanalytics—or, when referring to actual methods,artificial intelligenceandmachine learning—are more appropriate.
The actual data mining task is the semi-automatic or automatic analysis of massive quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps.
The difference betweendata analysisand data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of amarketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.[8]
The related termsdata dredging,data fishing, anddata snoopingrefer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.
In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a-priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983.[9][10] Lovell indicates that the practice "masquerades under a variety of aliases, ranging from 'experimentation' (positive) to 'fishing' or 'snooping' (negative)".
The term data mining appeared around 1990 in the database community, with generally positive connotations. For a short time in the 1980s the phrase "database mining"™ was used, but since it had been trademarked by HNC, a San Diego–based company, to pitch their Database Mining Workstation,[11] researchers consequently turned to data mining. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989), and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities.[12] Currently, the terms data mining and knowledge discovery are used interchangeably.
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s).[13] The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns[14] in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.
Theknowledge discovery in databases (KDD) processis commonly defined with the stages:
There exist, however, many variations on this theme, such as the Cross-industry standard process for data mining (CRISP-DM), which defines six phases:
or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation.
Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners.[15][16][17][18]
The only other data mining standard named in these polls wasSEMMA. However, 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models,[19]and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.[20]
Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate data sets before data mining. The target set is then cleaned: data cleaning removes the observations containing noise and those with missing data.
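A minimal pre-processing sketch along these lines, using pandas to drop observations with missing data and filter out a noisy sentinel value; the columns and values are invented:

```python
import pandas as pd

# A toy target set with one missing value and one noise/sentinel reading.
raw = pd.DataFrame({
    "age": [34, None, 29, 41],
    "income": [52_000, 48_000, -1, 61_000],  # -1 marks a bad reading
})
clean = raw.dropna()                 # remove observations with missing data
clean = clean[clean["income"] >= 0]  # remove observations containing noise
print(clean)
```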
Data mining involves six common classes of tasks:[5]
Data mining can unintentionally be misused, producing results that appear to be significant but which do not actually predict future behavior and cannot bereproducedon a new sample of data, therefore bearing little use. This is sometimes caused by investigating too many hypotheses and not performing properstatistical hypothesis testing. A simple version of this problem inmachine learningis known asoverfitting, but the same problem can arise at different phases of the process and thus a train/test split—when applicable at all—may not be sufficient to prevent this from happening.[21]
The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is calledoverfitting. To overcome this, the evaluation uses atest setof data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" e-mails would be trained on atraining setof sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it hadnotbeen trained. The accuracy of the patterns can then be measured from how many e-mails they correctly classify. Several statistical methods may be used to evaluate the algorithm, such asROC curves.
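The train/test procedure described here might look like the following sketch, using scikit-learn on an invented toy e-mail corpus (far too small for a real evaluation, but it shows the mechanics):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

emails = ["win money now", "meeting at noon", "cheap money win",
          "project review notes", "free prize win", "lunch with the team",
          "claim your prize", "quarterly report draft"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = legitimate

X = CountVectorizer().fit_transform(emails)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

model = MultinomialNB().fit(X_train, y_train)        # learn patterns on the training set
print(accuracy_score(y_test, model.predict(X_test)))  # measure them on held-out e-mails
```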
If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, then the final step is to interpret the learned patterns and turn them into knowledge.
The premier professional body in the field is theAssociation for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD).[22][23]Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings,[24]and since 1999 it has published a biannualacademic journaltitled "SIGKDD Explorations".[25]
Computer science conferences on data mining include:
Data mining topics are also present in manydata management/database conferencessuch as the ICDE Conference,SIGMOD ConferenceandInternational Conference on Very Large Data Bases.
There have been some efforts to define standards for the data mining process, for example, the 1999 EuropeanCross Industry Standard Process for Data Mining(CRISP-DM 1.0) and the 2004Java Data Miningstandard (JDM 1.0). Development on successors to these processes (CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM 2.0 was withdrawn without reaching a final draft.
For exchanging the extracted models—in particular for use inpredictive analytics—the key standard is thePredictive Model Markup Language(PMML), which is anXML-based language developed by the Data Mining Group (DMG) and supported as exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications. However, extensions to cover (for example)subspace clusteringhave been proposed independently of the DMG.[26]
Data mining is used wherever there is digital data available. Notableexamples of data miningcan be found throughout business, medicine, science, finance, construction, and surveillance.
While the term "data mining" itself may have no ethical implications, it is often associated with the mining of information in relation touser behavior(ethical and otherwise).[27]
The ways in which data mining can be used can in some cases and contexts raise questions regardingprivacy, legality, andethics.[28]In particular, data mining government or commercial data sets fornational securityorlaw enforcementpurposes, such as in theTotal Information AwarenessProgram or inADVISE, has raised privacy concerns.[29][30]
Data mining requires data preparation which uncovers information or patterns which compromiseconfidentialityandprivacyobligations. A common way for this to occur is throughdata aggregation.Data aggregationinvolves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent).[31]This is not data miningper se, but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous.[32]
It is recommended to be aware of the following before data are collected:[31]
Data may also be modified so as tobecomeanonymous, so that individuals may not readily be identified.[31]However, even "anonymized" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL.[33]
The inadvertent revelation of personally identifiable information by a provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies, who in turn provided the data to pharmaceutical companies.[34]
Europehas rather strong privacy laws, and efforts are underway to further strengthen the rights of the consumers. However, theU.S.–E.U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U.S. companies. As a consequence ofEdward Snowden'sglobal surveillance disclosure, there has been increased discussion to revoke this agreement, as in particular the data will be fully exposed to theNational Security Agency, and attempts to reach an agreement with the United States have failed.[35]
In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers forcing them to pay unfairly high prices. These groups tend to be people of lower socio-economic status who are not savvy to the ways they can be exploited in digital market places.[36]
In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires individuals to give their "informed consent" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, "'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approaching a level of incomprehensibility to average individuals."[37] This underscores the necessity for data anonymity in data aggregation and mining practices.
U.S. information privacy legislation such as HIPAA and theFamily Educational Rights and Privacy Act(FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U.S. is not controlled by any legislation.
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is not legal. Where a database is pure data in Europe, it may be that there is no copyright, but database rights may exist, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. Following the recommendation of the Hargreaves review, the UK government amended its copyright law in 2014 to allow content mining as a limitation and exception.[38] The UK was the second country in the world to do so, after Japan, which introduced an exception in 2009 for data mining. However, due to the restriction of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law also does not allow this provision to be overridden by contractual terms and conditions.
Since 2020, Switzerland has also regulated data mining, allowing it in the research field under certain conditions laid down by art. 24d of the Swiss Copyright Act. This new article entered into force on 1 April 2020.[39]
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title Licences for Europe.[40] The focus on licensing, rather than on limitations and exceptions, as the solution to this legal issue led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.[41]
US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America and in other fair use countries such as Israel, Taiwan and South Korea. As content mining is transformative (that is, it does not supplant the original work), it is viewed as lawful under fair use. For example, as part of the Google Book settlement, the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed, one being text and data mining.[42]
The following applications are available under free/open-source licenses. Public access to application source code is also available.
The following applications are available under proprietary licenses.
For more information about extracting information out of data (as opposed toanalyzingdata), see:
|
https://en.wikipedia.org/wiki/Data_mining
|
Data retrieval means obtaining data from a database management system (DBMS), such as an object-oriented database (ODBMS). In this case, data is assumed to be represented in a structured way, with no ambiguity in the data.
In order to retrieve the desired data the user presents a set of criteria by aquery. Then the database management system selects the demanded data from the database. The retrieved data may be stored in a file, printed, or viewed on the screen.
A query language, such as Structured Query Language (SQL), is used to prepare the queries. SQL is an American National Standards Institute (ANSI) standardized query language developed specifically to write database queries. Each database management system may have its own language, but most are relational.
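For illustration, a query might be issued through Python's built-in sqlite3 module as follows; the table and data are invented:

```python
import sqlite3

# Build a throwaway in-memory database, then retrieve rows with an SQL query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("Dune", 1965), ("Neuromancer", 1984), ("Hyperion", 1989)])

# The query states *what* to retrieve; the DBMS decides how to fetch it.
for (title,) in conn.execute("SELECT title FROM books WHERE year > ?", (1980,)):
    print(title)  # -> Neuromancer, Hyperion
conn.close()
```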
Reports and queries are the two primary forms of retrieved data from a database. There is some overlap between them, but queries generally select a relatively small portion of the database, while reports show larger amounts of data. Queries also present the data in a standard format and usually display it on the monitor, whereas reports allow formatting of the output however you like and are normally printed.
Reports are designed using areport generatorbuilt into the database management system.
|
https://en.wikipedia.org/wiki/Data_retrieval
|
The European Summer School in Information Retrieval (ESSIR) is a scientific event founded in 1990, which started off a series of summer schools teaching information retrieval. ESSIR is typically a week-long event consisting of guest lectures and seminars from invited lecturers. Maristella Agosti stated in 2008 that: "The term IR identifies the activities that a person – the user – has to conduct to choose, from a collection of documents, those that can be of interest to him to satisfy a specific and contingent information need."[1]
IR ranges fromcomputer sciencetoinformation scienceand beyond; moreover, a large number of IR methods and techniques are adopted and absorbed by several technologies. The IR core methods and techniques are those for designing and developing IR systems, Web search engines, and tools for information storing and querying in Digital Libraries. IR core subjects are: system architectures, algorithms, formal theoretical models, and evaluation of the diverse systems and services that implement functionalities of storing and retrieving documents from multimedia document collections, and over wide area networks such as theInternet.
ESSIR focuses on these three dimensions, and is intended for researchers starting out in IR, for industrialists who wish to know more about it, and for people working on topics related to management of information on the Internet.
Two books have been prepared as readings in IR from editions of ESSIR, the first one isLectures on Information Retrieval,[2]the second one isAdvanced Topics in Information Retrieval.[3]
ESSIR series started in 1990 coming out from the successful experience of the Summer School in Information Retrieval (SSIR) conceived and designed byMaristella Agosti,University of Padua, Italy andNick Belkin,Rutgers University, U.S.A., for an Italian audience in 1989.
|
https://en.wikipedia.org/wiki/European_Summer_School_in_Information_Retrieval
|
Human–computer information retrieval(HCIR) is the study and engineering ofinformation retrievaltechniques that bring human intelligence into thesearchprocess. It combines the fields ofhuman-computer interaction(HCI) and information retrieval (IR) and creates systems that improve search by taking into account the human context, or through a multi-step search process that provides the opportunity for human feedback.
The term human–computer information retrieval was coined by Gary Marchionini in a series of lectures delivered between 2004 and 2006.[1] Marchionini's main thesis is that "HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy."
In 1996 and 1998, a pair of workshops at theUniversity of Glasgowoninformation retrievalandhuman–computer interactionsought to address the overlap between these two fields. Marchionini notes the impact of theWorld Wide Weband the sudden increase ininformation literacy– changes that were only embryonic in the late 1990s.
A few workshops have focused on the intersection of IR and HCI. The Workshop on Exploratory Search, initiated by theUniversity of Maryland Human-Computer Interaction Labin 2005, alternates between theAssociation for Computing MachinerySpecial Interest Group on Information Retrieval(SIGIR) andSpecial Interest Group on Computer-Human Interaction(CHI) conferences. Also in 2005, theEuropean Science Foundationheld an Exploratory Workshop on Information Retrieval in Context. Then, the first Workshop on Human Computer Information Retrieval was held in 2007 at theMassachusetts Institute of Technology.
HCIR includes various aspects of IR and HCI. These includeexploratory search, in which users generally combine querying and browsing strategies to foster learning and investigation; information retrieval in context (i.e., taking into account aspects of the user or environment that are typically not reflected in a query); and interactive information retrieval, which Peter Ingwersen defines as "the interactive communication processes that occur during the retrieval of information by involving all the major participants in information retrieval (IR), i.e. the user, the intermediary, and the IR system."[2]
A key concern of HCIR is that IR systems intended for human users be implemented and evaluated in a way that reflects the needs of those users.[3]
Most modern IR systems employ a ranked retrieval model, in which documents are scored based on the probability of the document's relevance to the query.[4] In this model, the system presents only the top-ranked documents to the user. These systems are typically evaluated based on their mean average precision over a set of benchmark queries from organizations like the Text Retrieval Conference (TREC).
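As a sketch of this evaluation (the document IDs and relevance judgments below are invented), average precision rewards relevant documents that appear early in the ranking, and mean average precision (MAP) averages that score over the benchmark queries:

```python
def average_precision(ranked_docs, relevant):
    """Average of the precision values at each rank where a relevant doc appears."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this cutoff
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_docs, relevant_set) pairs, one per benchmark query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy benchmark: two queries with made-up document IDs.
runs = [(["d1", "d2", "d3", "d4"], {"d1", "d3"}),
        (["d9", "d7", "d8"], {"d7"})]
print(mean_average_precision(runs))  # 0.666...
```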
Because of its emphasis in using human intelligence in the information retrieval process, HCIR requires different evaluation models – one that combines evaluation of the IR and HCI components of the system. A key area of research in HCIR involves evaluation of these systems. Early work on interactive information retrieval, such as Juergen Koenemann andNicholas J. Belkin's 1996 study of different levels of interaction for automatic query reformulation, leverage the standard IR measures ofprecisionandrecallbut apply them to the results of multiple iterations of user interaction, rather than to a single query response.[5]Other HCIR research, such asPia Borlund's IIR evaluation model, applies a methodology more reminiscent of HCI, focusing on the characteristics of users, the details of experimental design, etc.[6]
HCIR researchers have put forth a set of goals for systems that give the user more control in determining relevant results.[1][7]
In short, information retrieval systems are expected to operate in the way that good libraries do. Systems should help users to bridge the gap between data or information (in the very narrow, granular sense of these terms) and knowledge (processed data or information that provides the context necessary to inform the next iteration of an information seeking process). That is, good libraries provide both the information a patron needs as well as a partner in the learning process — theinformation professional— to navigate that information, make sense of it, preserve it, and turn it into knowledge (which in turn creates new, more informed information needs).
The techniques associated with HCIR emphasize representations of information that use human intelligence to lead the user to relevant results. These techniques also strive to allow users to explore and digest the dataset without penalty, i.e., without expending unnecessary costs of time, mouse clicks, or context shift.
Many search engines have features that incorporate HCIR techniques. Spelling suggestions and automatic query reformulation provide mechanisms for suggesting potential search paths that can lead the user to relevant results. These suggestions are presented to the user, putting control of selection and interpretation in the user's hands.
Faceted search enables users to navigate information hierarchically, going from a category to its sub-categories, but choosing the order in which the categories are presented. This contrasts with traditional taxonomies in which the hierarchy of categories is fixed and unchanging. Faceted navigation, like taxonomic navigation, guides users by showing them available categories (or facets), but does not require them to browse through a hierarchy that may not precisely suit their needs or way of thinking.[8]
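A minimal sketch of faceted navigation over an in-memory collection follows; the records and facet names are invented. The point is that facet counts are recomputed after each narrowing step, and the user chooses the order in which facets are applied.

```python
from collections import Counter

# Hypothetical product records; 'brand' and 'color' serve as facets.
records = [
    {"name": "shirt A", "brand": "Acme", "color": "red"},
    {"name": "shirt B", "brand": "Acme", "color": "blue"},
    {"name": "shirt C", "brand": "Zenith", "color": "red"},
]

def facet_counts(items, facet):
    """Count how many items carry each value of the given facet."""
    return Counter(item[facet] for item in items)

def filter_by(items, facet, value):
    """Narrow the result set by one facet value, in any order the user likes."""
    return [item for item in items if item[facet] == value]

print(facet_counts(records, "color"))           # Counter({'red': 2, 'blue': 1})
narrowed = filter_by(records, "brand", "Acme")  # the user picks the facet order
print(facet_counts(narrowed, "color"))          # counts recomputed on the subset
```

Because counts come from the current result set, the interface never offers a category that would lead to an empty result list.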
Lookahead provides a general approach to penalty-free exploration. For example, various web applications employ AJAX to automatically complete query terms and suggest popular searches. Another common example of lookahead is the way in which search engines annotate results with summary information about those results, including both static information (e.g., metadata about the objects) and "snippets" of document text that are most pertinent to the words in the search query.
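A sketch of the query-completion side of lookahead, assuming a hypothetical log of past queries: completions matching the typed prefix are ranked by popularity, as such AJAX interfaces typically do on each keystroke.

```python
from collections import Counter

# Hypothetical log of past queries with frequencies.
query_log = Counter({
    "information retrieval": 120,
    "information visualization": 80,
    "informed consent": 15,
})

def complete(prefix, log, k=3):
    """Return up to k of the most popular past queries that start with the prefix."""
    matches = [(q, n) for q, n in log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: -pair[1])  # most popular first
    return [q for q, _ in matches[:k]]

print(complete("inform", query_log))
```

A production system would use a trie or precomputed index rather than a linear scan, but the interaction pattern is the same.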
Relevance feedback allows users to guide an IR system by indicating whether particular results are more or less relevant.[9]
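The passage does not name a particular algorithm; one classic technique for acting on such feedback is Rocchio query reformulation, sketched below on made-up term-weight vectors (assuming NumPy is available).

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward relevant docs and away from non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)  # negative term weights are usually dropped

# Toy 4-term vocabulary; the vectors are invented tf-idf weights.
query = np.array([1.0, 0.0, 0.5, 0.0])
relevant = np.array([[0.9, 0.1, 0.8, 0.0], [1.0, 0.0, 0.6, 0.1]])
nonrelevant = np.array([[0.0, 1.0, 0.0, 0.9]])
print(rocchio(query, relevant, nonrelevant))
```

Terms prominent in the documents marked relevant gain weight in the reformulated query, so the next retrieval round is biased toward similar material.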
Summarization and analytics help users digest the results that come back from the query. Summarization here is intended to encompass any means of aggregating or compressing the query results into a more human-consumable form. Faceted search, described above, is one such form of summarization. Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents. For example, a search for "java" might return clusters for Java (programming language), Java (island), or Java (coffee).
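A sketch of clustering result snippets for the ambiguous query "java", assuming scikit-learn is installed; the snippets themselves are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical result snippets for the query "java".
snippets = [
    "Java is an object-oriented programming language",
    "The Java compiler and virtual machine",
    "Java is an island of Indonesia",
    "Tourism on the island of Java",
    "Java coffee is prized for its flavor",
    "Brewing a cup of java coffee",
]

# Vectorize the snippets, then partition them into three clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for label, text in zip(labels, snippets):
    print(label, text)
```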
Visual representation of data is also considered a key aspect of HCIR. The representation of summarization or analytics may be displayed as tables, charts, or summaries of aggregated data. Other kinds of information visualization that allow users access to summary views of search results include tag clouds and treemapping.
|
https://en.wikipedia.org/wiki/Human%E2%80%93computer_information_retrieval
|
Information seekingis the process or activity of attempting to obtain information in both human and technological contexts. Information seeking is related to, but different from,information retrieval(IR).
Traditionally, IR tools have been designed for IR professionals to enable them to effectively and efficiently retrieve information from a source. It is assumed that the information exists in the source and that a well-formed query will retrieve it (and nothing else). It has been argued thatlaypersons'information seeking on the internet is very different from information retrieval as performed within the IR discourse. Yet, internet search engines are built on IR principles. Since the late 1990s a body of research on how casual users interact with internet search engines has been forming, but the topic is far from fully understood. IR can be said to be technology-oriented, focusing onalgorithmsand issues such asprecisionandrecall. Information seeking may be understood as a more human-oriented and open-ended process than information retrieval. In information seeking, one does not know whether there exists an answer to one's query, so the process of seeking may provide the learning required to satisfy one'sinformation need.
Much library and information science (LIS) research has focused on the information-seeking practices of practitioners within various fields of professional work. Studies have been carried out into the information-seeking behaviors of librarians,[1]academics,[2]medical professionals,[3]engineers,[4]lawyers[5][6]and mini-publics[7](among others). Much of this research has drawn on the work done by Leckie, Pettigrew (now Fisher) and Sylvain, who in 1996 conducted an extensive review of the LIS literature (as well as the literature of other academic fields) on professionals' information seeking. The authors proposed an analytic model of professionals' information seeking behaviour, intended to be generalizable across the professions, thus providing a platform for future research in the area. The model was intended to "prompt new insights... and give rise to more refined and applicable theories of information seeking" (1996, p. 188). The model has been adapted by Wilkinson (2001) who proposes a model of the information seeking of lawyers. Recent studies in this topic address the concept of information-gathering that "provides a broader perspective that adheres better to professionals' work-related reality and desired skills."[8](Solomon & Bronstein, 2021).
A variety of theories of information behavior – e.g.Zipf'sPrinciple of Least Effort,Brenda Dervin's Sense Making,Elfreda Chatman's Life in the Round – seek to understand the processes that surround information seeking. In addition, many theories from other disciplines have been applied in investigating an aspect or whole process of information seeking behavior.[9][10]
A review of the literature on information seeking behavior shows that information seeking has generally been accepted as dynamic and non-linear (Foster, 2005; Kuhlthau 2006). People experience the information search process as an interplay of thoughts, feelings and actions (Kuhlthau, 2006). Donald O. Case (2007) also wrote a book-length review of the literature.
Information seeking has been found to be linked to a variety of interpersonal communication behaviors beyond question-asking, to include strategies such as candidate answers.
Robinson's (2010)[11]research suggests that when seeking information at work, people rely on both other people and information repositories (e.g., documents and databases), and spend similar amounts of time consulting each (7.8% and 6.4% of work time, respectively; 14.2% in total). However, the distribution of time among the constituent information seeking stages differs depending on the source. When consulting other people, people spend less time locating the information source and information within that source, similar time understanding the information, and more time problem solving and decision making, than when consulting information repositories. Furthermore, the research found that people spend substantially more time receiving information passively (i.e., information that they have not requested) than actively (i.e., information that they have requested), and this pattern is also reflected when they provide others with information.
The concepts of information seeking, information retrieval, and information behaviour are objects of investigation ofinformation science. Within this scientific discipline a variety of studies has been undertaken analyzing the interaction of an individual withinformation sourcesin case of a specificinformation need, task, and context. The research models developed in these studies vary in their level of scope.Wilson(1999) therefore developed a nested model of conceptual areas, which visualizes the interrelation of the here mentioned central concepts.
Wilson defines models of information behavior to be "statements, often in the form of diagrams, that attempt to describe an information-seeking activity, the causes and consequences of that activity, or the relationships among stages in information-seeking behaviour" (1999: 250).
|
https://en.wikipedia.org/wiki/Information_seeking
|
Collaborative information seeking(CIS) is a field of research that involves studying situations, motivations, and methods for people working in collaborative groups for information seeking projects, as well as building systems for supporting such activities. Such projects often involve information searching orinformation retrieval(IR), information gathering, andinformation sharing. Beyond that, CIS can extend to collaborative information synthesis and collaborativesense-making.
Seeking for information is often considered a solo activity, but there are many situations that call for people working together forinformation seeking. Such situations are typically complex in nature, and involve working through several sessions exploring, evaluating, and gathering relevant information. Take for example, a couple going on a trip. They have the same goal, and in order to accomplish their goal, they need to seek out several kinds of information, including flights, hotels, and sightseeing. This may involve them working together over multiple sessions, exploring and collecting useful information, and collectively making decisions that help them move toward their common goal.
It is common knowledge that collaboration is either necessary or highly desired in many activities that are too complex or difficult for an individual to deal with. Despite its natural appeal and situational necessity, collaboration in information seeking is an understudied domain. The nature of the available information and its role in our lives have changed significantly, but the methods and tools that are used to access and share that information in collaboration have remained largely unaltered. People still use general-purpose systems such as email and IM for CIS projects, and there is a lack of specialized tools and techniques to support CIS explicitly.
There are also several models to explain information seeking and information behavior,[1] but the areas of collaborative information seeking and collaborative information behavior remain understudied. On the theory side, Shah has presented the C5 Model[2][3] for studying collaborative situations, including information seeking. On the practical side, a few specialized systems for supporting CIS have emerged in the recent past, but their usage and evaluations have been underwhelming. Despite such limitations, the field of CIS has been getting a lot of attention lately, and several promising theories and tools have come forth. Multiple reviews of CIS-related literature have been written by Shah.[4] Shah's book[5] provides a comprehensive review of this field, including theories, models, systems, evaluation, and future research directions. Other books in this area include one by Morris and Teevan,[6] Foster's book on collaborative information behavior,[7] and Hansen, Shah, and Klas's edited book on CIS.[8]
Depending upon what one includes or excludes while talking about CIS, we have many or hardly any theories. If we consider the past work on thegroupwaresystems, many interesting insights can be obtained about people working on collaborative projects, the issues they face, and the guidelines for system designers. One of the notable works is by Grudin,[9]who laid out eight design principles for developers ofgroupwaresystems.
The discussion below is primarily based on some of the recent works in the field of computer-supported cooperative work (CSCW), collaborative IR, and CIS.
The literature is filled with works that use terms such ascollaborative information retrieval,[10][11]social searching,[12]concurrent search,[13]collaborative exploratory search,[14]co-browsing,[15]collaborative information behavior,[16][17]collaborative information synthesis,[18]andcollaborative information seeking,[19][20]which are often used interchangeably.
There are several definitions of such related or similar terms in the literature. For instance, Foster[21] defined collaborative IR as "the study of the systems and practices that enable individuals to collaborate during the seeking, searching, and retrieval of information." Shah[22] defined CIS as a process of collaboratively seeking information that is "defined explicitly among the participants, interactive, and mutually beneficial." While there is still no universally accepted definition or terminology, most agree that CIS is an active process, as opposed to collaborative filtering, where a system connects users based on their passive involvement (e.g., buying similar products on Amazon).
Foley and Smeaton[23]defined two key aspects of collaborative information seeking asdivision of laborand thesharing of knowledge. Division of labor allows collaborating searchers to tackle larger problems by reducing the duplication of effort (e.g., finding documents that one's collaborator has already discovered). The sharing of knowledge allows searchers to influence each other's activities as they interact with the retrieval system in pursuit of their (often evolving) information need. This influence can occur in real time if the collaborative search system supports it, or it can occur in a turn-taking, asynchronous manner if that is how interaction is structured.
Teevanet al.[24]characterized two classes of collaboration, task-based vs. trait-based. Task-based collaboration corresponds to intentional collaboration; trait-based collaboration facilitates the sharing of knowledge through inferred similarity of information need.
One of the important issues to study in CIS is the instance, reason, and method behind a collaboration. For instance, Morris,[25] using a survey of 204 knowledge workers at a large technology company, found that people often like and want to collaborate, but do not find specialized tools to help them in such endeavors. Some of the situations for collaborative information seeking in this survey were travel planning, shopping, and literature search. Shah,[26] similarly, using personal interviews, identified three main reasons why people collaborate.
As far as the tools and/or methods for CIS are concerned, both Morris and Shah found that email is still the most used tool. Other popular methods are face-to-face meetings, IM, and phone or conference calls. In general, the choice of the method or tool for our respondents depended on their situation (co-located or remote), and objective (brainstorming or working on independent parts).
The classical way of organizing collaborative activities is based on two factors: location and time.[27]Recently Hansen & Jarvelin[28]and Golovchinsky, Pickens, & Back[29]also classified approaches to collaborative IR using these two dimensions of space and time. See "Browsing is a Collaborative Process",[30]where the authors depict various library activities on these two dimensions.[31]
In this classification, the majority of collaborative activities in conventional libraries are co-located and synchronous, whereas collaborative activities relating to digital libraries are more remote and synchronous. Social information filtering, or collaborative filtering, as we saw earlier, is a process benefitting from other users' actions in the past; thus, it falls under the asynchronous and mostly remote domain. These days email also serves as a tool for asynchronous collaboration among users who are not co-located, while chat or IM helps to carry out synchronous and remote collaboration.
Rodden,[27]similarly, presented a classification of CSCW systems using the form of interaction and the geographical nature of cooperative systems. Further, Rodden & Blair[32]presented an important characteristic to all CSCW systems – control. According to the authors, two predominant control mechanisms have emerged within CSCW systems: speech act theory systems, and procedure based systems. These mechanisms are tightly coupled with the kind of control the system can support in a collaborative environment (discussed later).
Often researchers also talk about other dimensions, such as intentionality and depth of mediation (system mediated or user mediated),[29]while classifying various CIS systems.
Three components specific to group-work or collaboration that are highly predominant in the CIS or CSCW literature are control, communication, and awareness. In this section key definitions and related works for these components will be highlighted. Understanding their roles can also help us address various design issues with CIS systems.
Rodden identified the value of control in CSCW systems and listed a number of projects with their corresponding schemes for implementing control. For instance, the COSMOS project[33] had a formal structure to represent control in the system. It used roles to represent people or automatons, and rules to represent the flow and processes. The roles of the people could be supervisor, processor, or analyst. Rules could be a condition that a process needs to satisfy in order to start or finish. Due to such structures, seen in projects like COSMOS, Rodden classified these control systems as procedure-based systems.
This is one of the most critical components of any collaboration. In fact, Rodden (1991) identified message or communication systems as the class of systems in CSCW that is most mature and most widely used.
Since the focus here is on CIS systems that allow its participants to engage in an intentional and interactive collaboration, there must be a way for the participants to communicate with each other. What is interesting to note is that often, collaboration could begin by letting a group of users communicate with each other. For instance, Donath & Robertson[34]presented a system that allows a user to know that others were currently viewing the same webpage and communicate with those people to initiate a possible collaboration or at least a co-browsing experience. Providing communication capabilities even in an environment that was not originally designed for carrying out collaboration is an interesting way of encouraging collaboration.
Awareness, in the context of CSCW, has been defined as"an understanding of the activities of others, which provides a context for your own activity".[35]The following four kinds of awareness are often discussed and addressed in the CSCW literature:[36]
Shah and Marchionini[37] studied awareness as provided by the interface in collaborative information seeking. They found that one needs to provide the "right" kind of awareness (not too little, not too much, and appropriate for the task at hand) to reduce the cost of coordination and maximize the benefits of collaboration.
A number of specialized systems have been developed, dating from the days of groupware systems to today's Web 2.0 interfaces. A few such examples, in chronological order, are given below.
Twidale et al.[38]developed Ariadne to support the collaborative learning of database browsing skills. In addition to enhancing the opportunities and effectiveness of the collaborative learning that already occurred, Ariadne was designed to provide the facilities that would allow collaborations to persist as people increasingly searched information remotely and had less opportunity for spontaneous face-to-face collaboration.
Ariadne was developed in the days when Telnet-based access to library catalogs was a common practice. Building on top of this command-line interface, Ariadne could capture the users’ input and the database’s output, and form them into a search history that consisted of a series of command-output pairs. Such a separation of capture and display allowed Ariadne to work with various forms of data capture methods.
To support complex browsing processes in collaboration, Ariadne presented a visualization of the search process.[39]This visualization consisted of thumbnails of screens, looking like playing cards, which represented command-output pairs. Any such card can be expanded to reveal its details. The horizontal axis on Ariadne’s display represented time, and the vertical axis showed information on the semantics of the action it represented: the top row for the top level menus, the middle row for specifying a search, and the bottom row for looking at particular book details.
This visualization of the search process in Ariadne makes it possible to annotate, to discuss with colleagues around the screen, and to distribute to remote collaborators for asynchronous commenting easily and effectively. As we saw in the previous section, having access to one's own history as well as the history of one's collaborators is crucial to effective collaboration. Ariadne implements these requirements with features that let one visualize, save, and share a search process. In fact, the authors found that one of the advantages of search visualization was the ability to easily recap previous searching sessions in multi-session exploratory search.
More recently, one of the collaborative information seeking tools that has caught a lot of attention is SearchTogether, developed by Morris and Horvitz.[40] The design of this tool was motivated by the survey that the researchers conducted with 204 knowledge workers.[25]
Based on the survey responses, and the current and desired practices for collaborative search, the authors of SearchTogether identified three key features for supporting people’s collaborative information behavior while searching on the Web: awareness, division of labor, and persistence. Let us look at how these three features are implemented.
SearchTogether instantiatesawarenessin several ways, one of which is per-user query histories. This is done by showing each group member’s screen name, his/her photo and queries in the “Query Awareness” region. The access to the query histories is immediate and interactive, as clicking on a query brings back the results of that query from when it was executed. The authors identified query awareness as a very important feature in collaborative searching, which allows group members to not only share their query terms, but also learn better query formulation techniques from one another.
Another component of SearchTogether that facilitates awareness is the display of page-specific metadata. This region includes several pieces of information about the displayed page, including group members who viewed the given page, and their comments and ratings. The authors claim that such visitation information can help one either choose to avoid a page already visited by someone in the group to reduce the duplication of efforts, or perhaps choose to visit such pages, as they provide a sign of promising leads as indicated by the presence of comments and/or ratings.
Division of labor in SearchTogether is implemented in three ways: (1) "Split Search" allows one to split the search results among all online group members in a round-robin fashion, (2) "Multi-Engine Search" takes a query and runs it on n different search engines, where n is the number of online group members, and (3) manual division of labor can be facilitated using the integrated IM.
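A minimal sketch of the round-robin division behind "Split Search" (the member names and result URLs are invented):

```python
def split_search(results, members):
    """Deal a ranked result list out to group members in round-robin fashion."""
    assignment = {m: [] for m in members}
    for i, doc in enumerate(results):
        assignment[members[i % len(members)]].append(doc)
    return assignment

results = ["url1", "url2", "url3", "url4", "url5"]
print(split_search(results, ["alice", "bob"]))
# {'alice': ['url1', 'url3', 'url5'], 'bob': ['url2', 'url4']}
```

Each member reviews a disjoint slice of the ranking, which is exactly what reduces the duplication of effort described earlier.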
Finally, thepersistencefeature in SearchTogether is instantiated by storing all the objects and actions, including IM conversations, query histories, recommendation queues, and page-specific metadata. Such data about all the group members are available to each member when he/she logs in. This allows one to easily carry a multi-session collaborative project.
Cerchiamo[41][42]is a collaborative information seeking tool that explores issues related to algorithmic mediation of information seeking activities and how collaborators' roles can be used to structure the user interface. Cerchiamo introduced the notion of algorithmic mediation, that is, the ability of the system to collect input asynchronously from multiple collaborating searchers, and to use these multiple streams of input to affect the information that is being retrieved and displayed to the searchers.
Cerchiamo collected judgments of relevance from multiple collaborating searchers and used those judgments to create a ranked list of items that were potentially relevant to the information need. This algorithm prioritized items that were retrieved by multiple queries and that were retrieved by queries that also retrieved many other relevant documents. This rank fusion is just one way in which a search system that manages activities of multiple collaborating searchers can combine their inputs to generate results that are better than those produced by individuals working independently.
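Cerchiamo's exact scoring is not spelled out here, so the following is only a plausible sketch of the idea just described: documents retrieved by multiple queries accumulate score, and queries that retrieved many already-judged-relevant documents carry extra weight.

```python
from collections import defaultdict

def fuse(query_results, relevant):
    """query_results: one result list per query issued by any collaborator.
    relevant: set of documents the team has already judged relevant."""
    scores = defaultdict(float)
    for results in query_results:
        # Queries that retrieved many relevant docs get more weight.
        weight = 1.0 + sum(1 for d in results if d in relevant)
        for doc in results:
            scores[doc] += weight  # docs hit by multiple queries accumulate score
    return sorted(scores, key=scores.get, reverse=True)

runs = [["d1", "d2", "d3"], ["d2", "d4"], ["d2", "d5", "d1"]]
print(fuse(runs, relevant={"d1", "d4"}))
```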
Cerchiamo implemented two roles—Prospector and Miner—that searchers could assume. Each role had an associated interface. The Prospector role/interface focused on running many queries and making a few judgments of relevance for each query to explore the information space. The Miner role/interface focused on making relevance judgments on a ranked list of items selected from items retrieved by all queries in the current session. This combination of roles allowed searchers to explore and exploit the information space, and led teams to discover more unique relevant documents than pairs of individuals working separately.[41]
Coagmento (Latin for "working together") is a system that allows a group of people to work together on their information seeking tasks without leaving their browsers. Coagmento has been developed with a client-server architecture, where the client is implemented as a Firefox plug-in that helps multiple people working in collaboration to communicate, and to search, share, and organize information. The server component stores and provides all the objects and actions collected from the client. Due to this decoupling, Coagmento provides a flexible architecture that allows its users to be co-located or remote, working synchronously or asynchronously, and to use different platforms.
Coagmento includes a toolbar and a sidebar. The toolbar, which consists of three major parts, has several buttons that help one collect information and stay aware of the progress in a given collaboration.
The sidebar features a chat window, under which there are three tabs with the history of search engine queries, saved pages and snippets. With each of these objects, the user who created or collected that object is shown. Anyone in the group can access an object by clicking on it. For instance, one can click on a query issued by anyone in the group to re-run that query and bring up the results in the main browser window.
An Android app for Coagmento can be found in the Android Market.
Fernandez-Luna et al.[43] introduce Cosme (COde Search MEeting) as a NetBeans IDE plug-in that enables remote teams of software developers to collaborate in real time during source-code search sessions. The COSME design was motivated by early studies of C. Foley, M. R. Morris, and C. Shah, among other researchers, and by the habits of software developers identified in a survey of 117 university students and professors involved in software development projects, as well as computer programmers at several companies, which established the interviewees' five most common collaborative (or related) search habits.
COSME is designed to enable either synchronous or asynchronous, but explicit, remote collaboration among team developers with shared technical information needs. Its client user interface includes a search panel that lets developers specify queries, the division-of-labor principle (possible combinations include the use of different search engines, ranking fusion, and split algorithms), the search field (comments, source code, class or method declarations), and the collection type (source-code files or digital documentation). The sessions panel provides the principal options for managing collaborative search sessions, in which a team of developers works together to satisfy a shared technical information need. For example, a developer can use the embedded chat room to negotiate the creation of a collaborative search session, and to view comments on the current and historical search results. The implementation of Cosme was based on a CIRLab (Collaborative Information Retrieval Laboratory) instantiation, a groupware framework for CIS research and experimentation, with Java as the programming language, the NetBeans IDE Platform as the plug-in base, and Amenities (A MEthodology for aNalysis and desIgn of cooperaTIve systEmS) as the software engineering methodology.
CIS systems development is a complex task, involving software technologies and know-how in different areas such as distributed programming, information search and retrieval, collaboration among people, and task coordination, among many others depending on the context. This situation is not ideal because it requires great programming effort. Fortunately, some CIS application frameworks and toolkits, such as Coagmento Collaboratory and DrakkarKeel, are increasing in popularity, since they offer high reusability for both developers and researchers.
Many interesting and important questions remain to be addressed in the field of CIS.
|
https://en.wikipedia.org/wiki/Collaborative_information_seeking
|
Social information seekingis a field of research that involves studying situations, motivations, and methods for people seeking and sharing information in participatory online social sites, such asYahoo! Answers, Answerbag,WikiAnswersandTwitteras well as building systems for supporting such activities. Highly related topics involve traditional andvirtual referenceservices,information retrieval,information extraction, andknowledge representation.[1]
Social information seeking is often materialized in online question-answering (QA) websites, which are driven by a community. Such QA sites have emerged in the past few years as an enormous market, so to speak, for the fulfillment of information needs. Estimates of the volume of questions answered are difficult to come by, but it is likely that the number of questions answered on social/community QA (cQA) sites far exceeds the number of questions answered by library reference services,[2]which until recently were one of the few institutional sources for suchquestion answering. cQA sites make their content – questions and associated answers submitted on the site – available on the open web, and indexable by search engines, thus enabling web users to find answers provided for previously asked questions in response to new queries.
The popularity of such sites has been increasing dramatically for the past several years. Major sites that provide a general platform for questions of all types include Yahoo! Answers, Answerbag, and Quora, while other sites focus on particular fields; for example, StackOverflow (computing). StackOverflow has accumulated 3.45 million questions, 1.3 million users, and over 6.86 million answers since July 2008, while Quora has 437 thousand questions, 264 thousand users, and 979 thousand answers.[3]
Social Q&A or cQA, according to Shah et al.,[4] consists of three components: a mechanism for users to submit questions in natural language, a venue for users to submit answers to questions, and a community built around this exchange. Viewed in that light, online communities have performed a question answering function perhaps since the advent of Usenet and bulletin board systems, so in one sense cQA is nothing new. Websites dedicated to cQA, however, have emerged on the web only within the past few years: the first cQA site was the Korean Naver Knowledge iN, launched in 2002, while the first English-language cQA site was Answerbag, launched in April 2003. Despite this short history, cQA has already attracted a great deal of attention from researchers investigating information seeking behaviors,[5] selection of resources,[6] social annotations,[7] user motivations,[8] comparisons with other types of question answering services,[9] and a range of other information-related behaviors.
Researchers have raised a number of interesting and important research questions in this area.
Shah et al.[10]provide a detailed research agenda for social Q&A. A new book by Shah[11]presents a more recent and comprehensive information pertaining to social information seeking.
Friendsourcing is an important component of social question and answering, including how to route questions to friends or others who will most likely answer the question.[12]The important questions include what people's behaviors are in social networks, especially what kinds of questions people ask from their social networks and how different question types affect the frequency, speed and quality of answers they receive.
Morris et al. (2010)[13] conducted a survey of question asking and answering within social networks with 624 people, gathering detailed data about Q&A behavior, including frequency, types of questions and answers, and motivations. They found that half (50.6%) of respondents reported having used their status messages to ask a question, which indicates that Q&A on social networks is popular. The types of questions people asked included recommendations, opinions, factual knowledge, and rhetorical questions, and motivations for asking included trust in one's network and the subjective nature of the question. Their analysis also explored the relationships between answer speed and quality, properties of the questions, and properties of the participants. Only a very small portion (6.5%) of the questions were answered, but 89.3% of the respondents were satisfied with the response time they experienced, even though it fell short of their expectations. The responses gathered via social networks also appear to be very valuable. Their findings suggest designs for search tools that could combine the speed and breadth of traditional search engines with the trustworthiness, personalization, and high engagement of social media Q&A.
Paul et al. (2011)[14] studied question asking and answering on Twitter and found that, of the 1152 questions they examined, the most popular question types were rhetorical (42%) and factual (16%). Surprisingly, along with entertainment (29%) and technology (29%) questions, people also asked personal and health-related questions (11%). Only 18.7% of questions received a response, while a handful of questions received a high number of responses. The larger the asker's network, the more responses he or she received; however, posting more tweets or posting more frequently did not increase the chances of receiving a response. Most often the "follow" relationship between asker and answerer was one-way. Paul et al. also examined which factors increased an asker's chance of getting a response and found that more relevant responses are received when there is a mutual relationship between asker and answerer. Intuitively, we would expect this, as a mutual relationship indicates stronger ties and hence a greater number of relevant answers.
Existing social Q&A services can be characterized from three perspectives, following the definition of social Q&A as a service involving (1) a method for presenting information needs, (2) a place for responding to information needs, and (3) participation as a community.
These social networks support various friendsourcing behaviors, provide information benefits that traditional search tools often cannot, and may also reinforce social bonds through the process. However, there are many concerns and limitations that may prevent people from asking questions on their social networks. For example, they may feel uncomfortable asking questions that are too private, might not want to take up too much of other people's time and effort, or might feel the burden of social debt.
Rzeszotarski and Morris (2014)[15]took a novel approach to explore the perceived social costs of friendsourcing on Twitter via monetary choices. They modeled friendsourcing costs across users, and compared it with crowdsourcing on Amazon Mechanical Turk. Their findings suggested interesting design considerations for minimizing social cost by building a hybrid system combining friendsourcing and crowdsourcing with microtask markets.
Sometimes, asking a question only of one's own social network or friends is not enough. If the question is obscure or time-sensitive, no members of the network may know the answer. For example, a person's friends might not have the expertise to evaluate a specific model of digital camera, and asking about the current wait time at security at the local airport might not be possible if none of the person's friends are currently at the airport.
Nichols and Kang (2012)[16]leveraged Twitter for question and answering with targeted strangers by taking advantage of its public accessibility. In their approach, they mined the public status updates posted on Twitter to find strangers with potentially useful information, and send questions to these strangers to collect responses. As a feasibility study, they collected information regarding response rate, and response time. 42% of users responded to questions from strangers, and 44% of the responses arrived within 30 minutes.
Another important and unique component of social Q&A system is that it is a community which allows members to form relationships and bonds, so that their behavior in these social Q&A services will also add to their social capital.
Gray et al. (2013)[17] explored how bridging social capital, question type, and relational closeness influence the perceived usefulness of, and satisfaction with, information obtained through questions asked on Facebook. Their results indicated that bridging social capital positively predicts the perceived utility of the acquired information, meaning that information exchange on social networks is an effective way of converting social capital. Also, useful answers are more likely to be received from weak ties than from strong ties.
In order to recommend the most appropriate users to provide answers in a social network, we need to find approaches to detect users' authority in a social network. In the field of information retrieval, there has been a trend of research investigating ways to detect users' authority effectively and accurately in a social network.
Cha et al.[18] investigate possible metrics for determining authoritative users on the popular social network Twitter. They propose three simple network-based metrics – indegree (the number of followers), retweet count, and mention count – and discuss their usefulness in determining a user's influence.
An initial analysis of the three aforementioned metrics showed that the users with the highest indegrees and the users with the highest retweet/mention counts were not the same. The top 1% of users by indegree are shown to have very low correlation with the same percentile of users by retweets and by mentions. This implies that follower count is not useful in determining whether a user's tweets get retweeted or whether the other users engage with them.
Pal et al.[19]designed features to measure a user's authority on a certain topic. For example, retweet impact refers to how many times a certain user has been retweeted on a certain topic. The impact is dampened by a factor measuring how many times the user had been retweeted by a unique author to avoid the cases when a user has fans who retweet regardless of the content. They first used a clustering approach to find the target cluster which has the highest average score across all features, and used a ranking algorithm to find the most authoritative users within the cluster.
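Pal et al.'s precise formula is not reproduced in this text; the following is a hypothetical rendering of the described idea, where raw retweet volume on a topic is dampened by the diversity of retweeting authors (the numbers are invented).

```python
import math

def retweet_impact(n_retweets, n_unique_retweeters):
    """Illustrative authority score: topical retweet count, dampened by (the log
    of) how many distinct authors did the retweeting, so that a few devoted fans
    retweeting everything do not inflate the score."""
    return n_retweets * math.log(1 + n_unique_retweeters)

# A user retweeted 100 times by 3 fans vs. 40 times by 30 distinct users.
print(retweet_impact(100, 3))   # high volume, low diversity
print(retweet_impact(40, 30))   # lower volume, broader endorsement
```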
With these authority detection methods, social Q&A could be more effective in providing accurate answers to askers.
|
https://en.wikipedia.org/wiki/Social_information_seeking
|
TheInformation Retrieval Facility(IRF), founded 2006 and located inVienna,Austria, was a research platform for networking and collaboration for professionals in the field ofinformation retrieval. It ceased operations in 2012.
|
https://en.wikipedia.org/wiki/Information_Retrieval_Facility
|
Visualization (or visualisation), also known as graphics visualization, is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization),interactive multimedia,medicine, etc. Typical of a visualization application is the field ofcomputer graphics. The invention of computer graphics (and3D computer graphics) may be the most important development in visualization since the invention ofcentral perspectivein theRenaissanceperiod. The development ofanimationalso helped advance visualization.
The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples fromcartographyincludePtolemy's Geographia(2nd century AD), a map of China (1137 AD), andMinard's map (1861) ofNapoleon'sinvasion of Russiaa century and a half ago. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization.Edward Tuftehas written three critically acclaimed books that explain many of these principles.[1][2][3]
Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics.[4]Since then, there have been several conferences and workshops, co-sponsored by theIEEE Computer SocietyandACM SIGGRAPH, devoted to the general topic, and special areas in the field, for example volume visualization.
Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer-drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time.
Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. The abstract visualizations show completely conceptual constructs in 2D or 3D. These generated shapes are completely arbitrary. The model-based visualizations either place overlays of data on real or digitally constructed images of reality or make a digital construction of a real object directly from the scientific data.
Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below. Some of these specialized programs have been released as open source software, very often having their origins in universities, within an academic environment where sharing software tools and giving access to the source code is common. There are also many proprietary software packages of scientific visualization tools.
Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and the VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and Spreadsheet for Images.
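As an illustration of the data-flow style, here is a minimal VTK pipeline in Python (assuming the vtk package is installed): each stage consumes the output port of the previous one, and data flows through the pipeline on demand.

```python
import vtk

# Source -> filter -> mapper: a classic demand-driven data-flow pipeline.
source = vtk.vtkSphereSource()       # produces polygonal data
source.SetThetaResolution(32)

shrink = vtk.vtkShrinkPolyData()     # a filter stage: shrinks each cell
shrink.SetInputConnection(source.GetOutputPort())
shrink.SetShrinkFactor(0.8)

mapper = vtk.vtkPolyDataMapper()     # maps geometry into renderable form
mapper.SetInputConnection(shrink.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)
# Handing the actor to a vtkRenderer/vtkRenderWindow would display the result.
```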
As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning. Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data. It focuses on and emphasizes the representation of higher-order data using primarily graphics and animation techniques.[5][6] It is a very important part of visualization, and perhaps the first, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the most common.
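As a sketch of isosurface reconstruction, the following extracts a level set from a scalar field sampled on a regular grid using the marching cubes algorithm, assuming NumPy and scikit-image are installed.

```python
import numpy as np
from skimage import measure

# Sample a scalar field f(x, y, z) = x^2 + y^2 + z^2 on a regular grid.
axis = np.linspace(-1.0, 1.0, 50)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
field = x**2 + y**2 + z**2

# Extract the isosurface f = 0.5 (a sphere of radius sqrt(0.5)) as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(field, level=0.5)
print(verts.shape, faces.shape)  # vertex coordinates and triangle indices
```

The resulting mesh can then be handed to any renderer; direct volume rendering, by contrast, skips the mesh and integrates the scalar field along viewing rays.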
Data visualization is a related subcategory of visualization dealing withstatistical graphicsandgeospatial data(as inthematic cartography) that is abstracted in schematic form.[7]
Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC, which included Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are the dynamics of the visual representation and its interactivity. Strong techniques enable the user to modify the visualization in real time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question.
Educational visualization is the use of a simulation to create an image of something so that it can be taught about. This is very useful when teaching about a topic that is difficult to otherwise see, for example, atomic structure, because atoms are far too small to be studied easily without expensive and difficult-to-use scientific equipment.
The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer-based and non-computer-based visualization methods complementarily.[8] Thus properly designed visualization is an important part not only of data analysis but of the knowledge-transfer process, too.[9] Knowledge transfer may be significantly improved using hybrid designs, as these enhance information density but may decrease clarity as well. For example, visualization of a 3D scalar field may be implemented using iso-surfaces for the field distribution and textures for the gradient of the field.[10] Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and estimates in different fields by using various complementary visualizations.
See also: picture dictionary, visual dictionary
Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawing and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD drawings and models have several advantages over hand-made drawings, such as the possibility of 3-D modeling, rapid prototyping, and simulation. 3D product visualization promises more interactive experiences for online shoppers, but also challenges retailers to overcome hurdles in the production of 3D content, as large-scale 3D content production can be extremely costly and time-consuming.[11]
Visual communication is the communication of ideas through the visual display of information. Primarily associated with two-dimensional images, it includes: alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability.
Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface".[12]
Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces.
Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security.
Interactive visualization or interactive visualisation is a branch of graphic visualization in computer science that involves studying how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient.
For a visualization to be considered interactive it must satisfy two criteria: human input (control of some aspect of the visual representation of information, or of the information being represented, must be available to a human) and response time (changes made by the human must be incorporated into the visualization in a timely manner).
One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were present (where instead it was remote), sized appropriately (where instead it was on a much smaller or larger scale than humans can sense directly), or had shape (where instead it might be completely abstract).
Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (e.g., telephone), video (e.g., a video conference), or text (e.g., IRC) messages.
The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide. People can:
All of these actions require a physical device. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills.
These input actions can be used to control either the unique information being represented or the way that the information is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering.
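As one illustration of the feedback loop in computational steering, the following is a minimal, hypothetical Python sketch (all names and values are invented for illustration): a simulation advances step by step while a parameter can be changed from another thread and takes effect on the next step.

```python
import threading

# Hypothetical sketch of computational steering: the simulation runs
# step by step while a separate thread changes a parameter, which is
# picked up at the start of the next step.
class SteerableSimulation:
    def __init__(self, heat_rate=1.0):
        self.heat_rate = heat_rate          # steerable parameter
        self.temperature = 0.0              # simulation state
        self._lock = threading.Lock()

    def steer(self, heat_rate):
        """Called from the visualization/UI thread while running."""
        with self._lock:
            self.heat_rate = heat_rate

    def run(self, steps=5):
        for step in range(steps):
            with self._lock:
                rate = self.heat_rate       # read the current parameter
            self.temperature += rate * 0.1  # advance the model one step
            print(f"step {step}: T={self.temperature:.2f}")

sim = SteerableSimulation()
threading.Timer(0.0, sim.steer, args=(5.0,)).start()  # the "user" turns a dial
sim.run()
```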
More frequently, the representation of the information is changed rather than the information itself.
Experiments have shown that a delay of more than 20 ms between when input is provided and a visual representation is updated is noticeable to most people. Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology. Thus the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system. A framerate of 50 frames per second (frame/s) is considered good, while 0.1 frame/s would be considered poor. The use of framerates to characterize interactivity is slightly misleading, however, since framerate is a measure of bandwidth while humans are more sensitive to latency. Specifically, it is possible to achieve a good framerate of 50 frame/s, but if the images generated refer to changes to the visualization that a person made more than 1 second ago, it will not feel interactive.
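The distinction between framerate (bandwidth) and latency can be made concrete with a small sketch. The following hypothetical Python fragment times both quantities for a stand-in render function; the render routine and its delay are placeholders, not a real graphics pipeline.

```python
import time

def render(scene_state):
    time.sleep(0.005)  # stand-in for actual drawing work

# Latency: time from the input event to the first frame reflecting it.
input_time = time.perf_counter()
scene_state = {"rotation": 42}      # state changed by the input
render(scene_state)
latency_ms = (time.perf_counter() - input_time) * 1000.0
print(f"input-to-display latency: {latency_ms:.1f} ms")

# Framerate, by contrast, only measures throughput of frames; a system
# can have a high framerate yet still show stale, seconds-old state.
n_frames = 20
start = time.perf_counter()
for _ in range(n_frames):
    render(scene_state)
fps = n_frames / (time.perf_counter() - start)
print(f"framerate: {fps:.0f} frame/s")
```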
The rapid response time required for interactive visualization is a difficult constraint to meet, and several approaches have been explored to provide people with rapid visual feedback based on their input.
Many conferences occur where interactive visualization academic papers are presented and published.
|
https://en.wikipedia.org/wiki/Knowledge_visualization
|
Multimedia information retrieval (MMIR or MIR) is a research discipline of computer science that aims at extracting semantic information from multimedia data sources.[1] Data sources include directly perceivable media such as audio, image and video, indirectly perceivable sources such as text, semantic descriptions,[2] and biosignals, as well as non-perceivable sources such as bioinformation, stock prices, etc. The methodology of MMIR can be organized in three groups:
Feature extraction is motivated by the sheer size of multimedia objects as well as their redundancy and, possibly, noisiness.[1] Generally, two possible goals can be achieved by feature extraction:
Multimedia information retrieval implies that multiple channels are employed for the understanding of media content.[5] Each of these channels is described by media-specific feature transformations. The resulting descriptions have to be merged into one description per media object. Merging can be performed by simple concatenation if the descriptions are of fixed size. Variable-sized descriptions – as they frequently occur in motion description – have to be normalized to a fixed length first.
Frequently used methods for description filtering include factor analysis (e.g., by PCA), singular value decomposition (e.g., as latent semantic indexing in text retrieval), and the extraction and testing of statistical moments. Advanced concepts such as the Kalman filter are used for the merging of descriptions.
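As a rough illustration of description filtering by factor analysis, the sketch below applies PCA (via scikit-learn) to a matrix of hypothetical merged descriptions, keeping only the components needed to explain most of the variance. The data is random and purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical merged descriptions: 200 media objects, each described
# by a fixed-length vector of 64 concatenated features.
rng = np.random.default_rng(0)
descriptions = rng.normal(size=(200, 64))

# Factor-analytic filtering via PCA: keep the components that explain
# 95% of the variance, discarding redundant or noisy dimensions.
pca = PCA(n_components=0.95)
filtered = pca.fit_transform(descriptions)
print(descriptions.shape, "->", filtered.shape)
```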
Generally, all forms of machine learning can be employed for the categorization of multimedia descriptions,[1] though some methods are more frequently used in one area than another. For example, hidden Markov models are state-of-the-art in speech recognition, while dynamic time warping – a semantically related method – is state-of-the-art in gene sequence alignment. The list of applicable classifiers includes the following:
The selection of the best classifier for a given problem (a test set with descriptions and class labels, the so-called ground truth) can be performed automatically, for example, using the Weka Data Miner.
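Weka itself is a Java toolkit; as an analogous, hedged sketch in Python, the fragment below compares candidate classifiers on a labeled ground-truth set by cross-validated accuracy and picks the best, which is the essence of such automated selection.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Ground truth: feature descriptions with class labels.
X, y = load_digits(return_X_y=True)

# Compare candidate classifiers by 5-fold cross-validated accuracy.
candidates = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "tree": DecisionTreeClassifier(random_state=0),
}
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```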
Models of Multimedia Information Retrieval
Spoken Language Audio Retrieval
Spoken Language Audio Retrieval focuses on audio content containing spoken words. It involves the transcription of spoken content into text using Automatic Speech Recognition (ASR) and indexing the transcriptions for text-based search (see the sketch after the list below).
Key Features:
Techniques: ASR for transcription and text indexing.
Query Types: Text-based queries.
Applications:
Searching podcast transcripts.
Analyzing customer service call logs.
Finding specific phrases in meeting recordings.
Challenges:
Errors in ASR can reduce retrieval accuracy.
Multilingual and accent variability requires robust systems.
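Assuming transcripts have already been produced by an ASR system, the indexing and text-based search step can be sketched minimally as follows (using TF-IDF from scikit-learn; the transcripts and the query are invented examples).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical ASR output: one transcript per recording.
transcripts = [
    "quarterly budget review and forecast discussion",
    "customer reported a login error on the mobile app",
    "podcast episode about deep learning for speech",
]

# Index the transcripts, then answer a text query by cosine similarity.
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(transcripts)
query = vectorizer.transform(["budget forecast"])
scores = cosine_similarity(query, index)[0]
best = scores.argmax()
print(f"best match: recording {best} (score {scores[best]:.2f})")
```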
Non-Speech Audio Retrieval
Non-Speech Audio Retrieval handles audio content without spoken words, such as music, environmental sounds, or sound effects. This model relies on extracting audio features like pitch, rhythm, and timbre to identify relevant audio (a feature-extraction sketch follows the list below).
Key Features:
Techniques: Acoustic feature extraction (e.g., spectrograms, MFCCs).
Query Types: Audio samples or textual descriptions.
Applications:
Music recommendation systems.
Environmental sound detection (e.g., gunshots, animal calls).
Sound effect retrieval in media production.
Challenges:
Difficulty in bridging the semantic gap between user queries and low-level audio features.
Efficient indexing of large datasets.
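A minimal sketch of acoustic feature extraction, using the librosa library to compute MFCCs and summarize a clip as a fixed-length descriptor; the choice of 13 coefficients and the mean/standard-deviation summary are common conventions, not requirements.

```python
import numpy as np
import librosa

# Load a clip (librosa ships a sample; a real system would load its
# own files) and extract MFCCs, a timbre descriptor, per short frame.
y, sr = librosa.load(librosa.example("trumpet"))
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)

# Summarize the variable-length clip as a fixed-length descriptor so
# clips of different durations can be compared, e.g., by distance.
descriptor = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(descriptor.shape)  # (26,)
```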
Graph Retrieval
Graph Retrieval retrieves information represented as graphs, which consist of nodes (entities) and edges (relationships). It is widely used in social networks, knowledge graphs, and bioinformatics (a subgraph-matching sketch follows the list below).
Key Features:
Techniques: Graph matching, adjacency list/matrix storage, and graph databases (e.g., Neo4j).
Query Types: Subgraphs, patterns, or textual queries.
Applications:
Social network analysis.
Searching knowledge graphs.
Molecular structure retrieval.
Challenges:
Computationally intensive subgraph matching.
Scalability for large, complex graphs.
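A hedged sketch of subgraph matching using the networkx library: a small invented graph is searched for occurrences of a path-shaped query pattern. Subgraph isomorphism is NP-hard in general, which is why real systems rely on indexes and graph databases to remain scalable.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# A tiny "knowledge graph" and a query pattern: a path of three
# connected entities.
G = nx.Graph([("alice", "bob"), ("bob", "carol"), ("carol", "dave")])
pattern = nx.path_graph(3)  # nodes 0-1-2

matcher = isomorphism.GraphMatcher(G, pattern)
print(matcher.subgraph_is_isomorphic())             # True
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)  # e.g. {'alice': 0, 'bob': 1, 'carol': 2}
    break
```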
Imagery Retrieval
Imagery Retrieval retrieves images based on user input, such as textual descriptions or visual samples. It leverages both low-level features and semantic analysis for search (a low-level-feature sketch follows the list below).
Key Features:
Techniques: Content-Based Image Retrieval (CBIR), visual feature extraction, semantic analysis.
Query Types: Text, sketches, or example images.
Applications:
Stock image search.
E-commerce product matching.
Medical imaging analysis.
Challenges:
Bridging the semantic gap between user queries and image content.
Efficient indexing of large-scale image datasets.
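As one hedged illustration of low-level CBIR features, the sketch below ranks an invented collection of images by distance between normalized color histograms; real systems combine many such features with semantic analysis to narrow the semantic gap.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Low-level CBIR feature: a normalized per-channel color histogram."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

# Hypothetical image collection as random RGB arrays; a real system
# would decode actual image files.
rng = np.random.default_rng(1)
collection = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(5)]
query = collection[2].copy()  # pretend the query resembles image 2

feats = np.stack([color_histogram(im) for im in collection])
q = color_histogram(query)
dists = np.linalg.norm(feats - q, axis=1)   # smaller = more similar
print("ranking:", np.argsort(dists))        # image 2 should rank first
```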
Video Retrieval
Video Retrieval is the process of finding specific video content based on user queries. It involves analyzing both the visual and temporal features of videos (a keyframe-extraction sketch follows the list below).
Key Features:
Techniques: Keyframe extraction, motion pattern analysis, temporal indexing.
Query Types: Textual descriptions, sample clips, or temporal queries.
Applications:
Streaming service recommendations.
Surveillance footage analysis.
Sports analytics.
Challenges:
Managing the large file sizes of video content.
Efficient analysis of temporal sequences and multimodal features.
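A minimal, hypothetical sketch of keyframe extraction with OpenCV: frames are kept as keyframes where the mean absolute difference from the previous frame exceeds a threshold. The file name and the threshold are placeholders.

```python
import cv2

# Pick "keyframes" where the scene changes sharply, by thresholding the
# mean absolute difference between consecutive grayscale frames.
cap = cv2.VideoCapture("clip.mp4")   # placeholder path
keyframes, prev, index = [], None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is None or cv2.absdiff(gray, prev).mean() > 30.0:
        keyframes.append(index)      # scene changed: keep this frame
    prev = gray
    index += 1
cap.release()
print("keyframe indices:", keyframes)
```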
Comparison of Retrieval Models
| Model | Data Type | Query Types | Applications |
|---|---|---|---|
| Spoken Language Audio | Speech recordings | Text queries | Podcasts, meeting logs, call centers |
| Non-Speech Audio | Music, sound effects | Audio samples or text | Music apps, environmental sounds |
| Graph Retrieval | Graph structures | Subgraphs, patterns | Knowledge graphs, bioinformatics |
| Imagery Retrieval | Images | Text, sketches, or images | E-commerce, medical imaging |
| Video Retrieval | Videos (visual + temporal) | Text, clips, or time queries | Surveillance, sports analysis |
Conclusion
Multimedia Information Retrieval plays a crucial role in organizing and accessing vast multimedia data repositories. The variety of retrieval models ensures that users can effectively interact with and extract insights from complex multimedia datasets. Future advancements in artificial intelligence and machine learning are expected to improve the accuracy and scalability of MIR systems.
MMIR provides an overview of methods employed in the areas of information retrieval.[6][7] Methods of one area are adapted and applied to other types of media. Multimedia content is merged before the classification is performed. MMIR methods are, therefore, usually reused from other areas such as:
The International Journal of Multimedia Information Retrieval[8] documents the development of MMIR as a research discipline that is independent of these areas. See also the Handbook of Multimedia Information Retrieval[9] for a complete overview of this research discipline.
|
https://en.wikipedia.org/wiki/Multimedia_information_retrieval
|
Personal information management (PIM) is the study and implementation of the activities that people perform in order to acquire or create, store, organize, maintain, retrieve, and use informational items such as documents (paper-based and digital), web pages, and email messages for everyday use to complete tasks (work-related or not) and fulfill a person's various roles (as parent, employee, friend, member of community, etc.);[1][2] it is information management with intrapersonal scope. Personal knowledge management is by some definitions a subdomain.
One ideal of PIM is that people should always have the right information in the right place, in the right form, and of sufficient completeness and quality to meet their current need. Technologies and tools can help, so that people spend less time on the time-consuming and error-prone clerical activities of PIM (such as looking for and organising information). But tools and technologies can also overwhelm people with too much information, leading to information overload.
A special focus of PIM concerns how people organize and maintain personal information collections, and methods that can help people in doing so. People may manage information in a variety of settings, for a variety of reasons, and with a variety of types of information. For example, a traditional office worker might manage physical documents in a filing cabinet by placing them in hanging folders organized alphabetically by project name. More recently, this office worker might organize digital documents into the virtual folders of a local, computer-based file system or into a cloud-based store using a file hosting service (e.g., Dropbox, Microsoft OneDrive, Google Drive). People manage information in many more private, personal contexts as well. A parent may, for example, collect and organize photographs of their children into a photo album which might be paper-based or digital.
PIM considers not only the methods used to store and organize information, but also is concerned with how people retrieve information from their collections for re-use. For example, the office worker might re-locate a physical document by remembering the name of the project and then finding the appropriate folder by an alphabetical search. On a computer system with a hierarchical file system, a person might need to remember the top-level folder in which a document is located, and then browse through the folder contents to navigate to the desired document. Email systems often support additional methods for re-finding such as fielded search (e.g., search by sender, subject, date). The characteristics of the document types, the data that can be used to describe them (meta-data), and features of the systems used to store and organize them (e.g., fielded search) are all components that may influence how users accomplish personal information management.
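As a small illustration of fielded search for re-finding email, the sketch below filters a local mbox archive by sender and subject using Python's standard-library mailbox module; the file path and the field values are hypothetical.

```python
import mailbox

# Hedged sketch of fielded re-finding in a local email archive;
# "archive.mbox" and the query values are placeholders.
def search(path, sender=None, subject=None):
    hits = []
    for msg in mailbox.mbox(path):
        if sender and sender not in (msg["From"] or ""):
            continue
        if subject and subject.lower() not in (msg["Subject"] or "").lower():
            continue
        hits.append((msg["Date"], msg["Subject"]))
    return hits

for date, subj in search("archive.mbox",
                         sender="alice@example.com", subject="budget"):
    print(date, subj)
```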
The purview of PIM is broad. A person's perception of and ability to effect change in the world is determined, constrained, and sometimes greatly extended, by an ability to receive, send and otherwise manage information.
Research in the field of personal information management has considered six senses in which information can be personal (to "me") and so an object of that person's PIM activities:[2]
An encyclopaedic review of PIM literature suggests that all six senses of personal information listed above and the tools and technologies used to work with such information (from email applications and word processors to personal information managers and virtual assistants) combine to form a personal space of information (PSI, pronounced as in the Greek letter, alternately referred to as a personal information space) that is unique for each individual.[3] Within a person's PSI are personal information collections (PICs) or, simply, collections. Examples include:
Activities of PIM – i.e., the actions people take to manage information that is personal to them in one or more of the ways listed above – can be seen as an effort to establish, use, and maintain a mapping between information and need.[2]
Two activities of PIM occur repeatedly throughout a person's day and are often prompted by external events.
Meta-level activities focus more broadly on aspects of the mapping itself.
PIM activities overlap with one another. For example, the effort to keep an email attachment as a document in a personal file system may prompt an activity to organize the file system e.g., by creating a new folder for the document. Similarly, activities to organize may be prompted by a person's efforts to find a document as when, for example, a person discovers that two folders have overlapping content and should be consolidated.
Meta-level activities overlap not only with finding and keeping activities but, even more so, with each other. For example, efforts to re-organize a personal file system can be motivated by the evaluation that the current file organization is too time-consuming to maintain and doesn't properly highlight the information most in need of attention.
Information sent and received takes many different information forms in accordance with a growing list of communication modes, supporting tools, and people's customs, habits, and expectations. People still send paper-based letters, birthday cards, and thank you notes. But increasingly, people communicate using digital forms of information including emails, digital documents shared (as attachments or via a file hosting service such as Dropbox), blog posts and social media updates (e.g., using a service such as Facebook), text messages and links, text, photos, and videos shared via services such as Twitter, Snapchat, Reddit, and Instagram.
People work with information items as packages of information with properties that vary depending upon the information form involved. Files, emails, "tweets", Facebook updates, blog posts, etc. are each examples of the information item. The ways in which an information item can be manipulated depend upon its underlying form. Items can be created but not always deleted (completely). Most items can be copied, sent and transformed as in, for example, when a digital photo is taken of a paper document (transforming from paper to digital) and then possibly further transformed as when optical character recognition is used to extract text from the digital photo, and then transformed yet again when this information is sent to others via a text message.
Information fragmentation[4][2] is a key problem of PIM often made worse by the many information forms a person must work with. Information is scattered widely across information forms on different devices, in different formats, in different organizations, with different supporting tools.
Information fragmentation creates problems for each kind of PIM activity. Where to keep new information? Where to look for (re-find) information already kept? Meta-level activities, such as maintaining and organizing, are also more difficult and time-consuming when different stores on different devices must be separately maintained. Problems of information fragmentation are especially manifest when a person must look across multiple devices and applications to gather together the information needed to complete a project.[5]
PIM is a new field with ancient roots. When the oral rather than the written word dominated, human memory was the primary means for information preservation.[6] As information was increasingly rendered in paper form, tools were developed over time to meet the growing challenges of management. For example, the vertical filing cabinet, now such a standard feature of home and workplace offices, was first commercially available in 1893.[7]
With the increasing availability of computers in the 1950s came an interest in the computer as a source of metaphors and a test bed for efforts to understand the human ability to process information and to solve problems. Newell and Simon pioneered the computer's use as a tool to model human thought.[8][9] They produced "The Logic Theorist", generally thought to be the first running artificial intelligence (AI) program. The computer of the 1950s was also an inspiration for the development of an information processing approach to human behavior and performance.[10]
After research in the 1950s showed that the computer, as a symbol processor, could "think" (to varying degrees of fidelity) like people do, the 1960s saw an increasing interest in the use of the computer to help people to think better and to process information more effectively. Working with Andries van Dam and others, Ted Nelson, who coined the word "hypertext",[11] developed one of the first hypertext systems, The Hypertext Editing System, in 1968.[12] That same year, Douglas Engelbart also completed work on a hypertext system called NLS (oN-Line System).[13] Engelbart advanced the notion that the computer could be used to augment the human intellect.[14][15] As heralded by the publication of Ulric Neisser's book Cognitive Psychology,[16] the 1960s also saw the emergence of cognitive psychology as a discipline that focused primarily on a better understanding of the human ability to think, learn, and remember.
The computer as aid to the individual, rather than remote number cruncher in a refrigerated room, gained further validity from work in the late 1970s and through the 1980s to produce personal computers of increasing power and portability. These trends continue: computational power roughly equivalent to that of a desktop computer of a decade ago can now be found in devices that fit into the palm of a hand.
The phrase "Personal Information Management" was itself apparently first used in the 1980s in the midst of general excitement over the potential of the personal computer to greatly enhance the human ability to process and manage information.[17]The 1980s also saw the advent of so-called "PIM tools" that provided limited support for the management of such things as appointments and scheduling, to-do lists, phone numbers, and addresses. A community dedicated to the study and improvement of human–computer interaction also emerged in the 1980s.[18][19]
As befits the "information" focus of PIM, PIM-relevant research of the 1980s and 1990s extended beyond the study of a particular device or application towards larger ecosystems of information management to include, for example, the organization of the physical office and the management of paperwork.[20][21]Malone characterized personal organization strategies as 'neat' or 'messy' and described 'filing' and 'piling' approaches to the organization of information.[22]Other studies showed that people vary their methods for keeping information according to anticipated uses of that information in the future.[23]Studies explored the practical implications that human memory research might carry in the design of, for example, personal filing systems,[24][25][26]and information retrieval systems.[27]Studies demonstrated a preference for navigation (browsing, "location-based finding) in the return to personal files,[28]a preference that endures today notwithstanding significant improvements in search support.[29][30][31][32]and an increasing use of search as the preferred method of return to e-mails.[33][34]
PIM, as a contemporary field of inquiry with a self-identified community of researchers, traces its origins to a Special Interest Group (SIG) session on PIM at the CHI 2004 conference and to a special National Science Foundation (NSF)-sponsored workshop held in Seattle in 2005.[35][36]
Much PIM research can be grouped according to the PIM activity that is the primary focus of the research. These activities are reflected in the two main models of PIM, i.e., that primary PIM activities are finding/re-finding, keeping, and meta-level activities[37][2] (see section Activities of PIM) or, alternatively, keeping, managing, and exploiting.[38][39] Important research is also being done on special topics: personality, mood, and emotion, both as impacting and impacted by a person's practice of PIM; the management of personal health information; and the management of personal information over the long run and for legacy.
Throughout a typical day, people repeatedly experience the need for information in large amounts and small (e.g., "When is my next meeting?"; "What's the status of the budget forecast?" "What's in the news today?") prompting activities to find and re-find.
A large body of research in information seeking, information behavior, and information retrieval relates, especially, to efforts to find information in public spaces such as the Web or a traditional library. There is a strong personal component even in efforts to find new information, never before experienced, from a public store such as the Web. For example, efforts to find information may be directed by a personally created outline, self-addressed email reminder or a to-do list. In addition, information inside a person's PSI can be used to support a more targeted, personalized search of the web.[40]
A person's efforts to find useful information are often a sequence of interactions rather than a single transaction. Under a "berry picking" model of finding, information is gathered in bits and pieces through a series of interactions, and during this time, a person's expression of need, as reflected in the current query, evolves.[41] People may favor a stepwise approach to finding needed information to preserve a greater sense of control and context over the finding process, and smaller steps may also reduce the cognitive burden associated with query formulation.[42] In some cases, there simply is not a "direct" way to access the information. For example, a person's remembrance of a needed Web site may only be through an email message sent by a colleague; i.e., a person may not recall a Web address nor even keywords that might be used in a Web search, but the person does recall that the Web site was mentioned recently in an email from a colleague.
People may find (rather than re-find) information even when this information is ostensibly under their control. For example, items may be "pushed" into the PSI (e.g., via the inbox, podcast subscriptions, downloads). If these items are discovered later, it is through an act of finding, not re-finding (since the person has no remembrance of the information).
Lansdale[17] characterized the retrieval of information as a two-step process involving interplay between actions to recall and recognize. The steps of recall and recognition can iterate to progressively narrow the efforts to find the desired information. This interplay happens, for example, when people move through a folder hierarchy to a desired file or e-mail message or navigate through a website to a desired page.
But re-finding begins first with another step: remember to look in the first place. People may take the trouble to create Web bookmarks or to file away documents and then forget about this information so that, in the worst case, the original effort is wasted.[43][44][45][46]
Also, finding/re-finding often means not just assembling a single item of information but rather a set of information. The person may need to repeat the finding sequence several times. A challenge in tool support is to provide people with ways to group or interrelate information items so that their chances improve of retrieving a complete set of the information needed to complete a task.[3]
Over the years, PIM studies have determined that people prefer to return to personal information, most notably the information kept in personal digital files, by navigating rather than searching.[28][30][32]
Support for searching personal information has improved dramatically over the years most notably in the provision for full-text indexing to improve search speed.[47]With these improvements, preference may be shifting to search as a primary means for locating email messages (e.g., search on subject or sender, for messages not in view).[48][49]
However, a preference persists for navigation as the primary means of re-finding personal files (e.g., stepwise folder traversal; scanning a list of files within a folder for the desired file), notwithstanding ongoing improvements in search support.[30] The enduring preference for navigation as a primary means of return to files may have a neurological basis,[50] i.e., navigation to files appears to use mental facilities similar to those people use to navigate in the physical world.
Preference for navigation is also in line with a primacy effect repeatedly observed in psychological research, such that the preferred method of return aligns with initial exposure. Under a first impressions hypothesis, if a person's initial experience with a file included its placement in a folder, where the folder itself was reached by navigating through a hierarchy of containing folders, then the person will prefer a similar method – navigation – for return to the file later.[49]
There have been some prototyping efforts to explore in-context creation (e.g., creation in the context of a project the person is working on) of not only files, but also other forms of information such as web references and email.[51] Prototyping efforts have also explored ways to improve support for navigation, e.g., by highlighting, and otherwise making it easier to follow, the paths people are more likely to take in their navigation back to a file.[52]
Many events of daily life are roughly the converse of finding events: People encounter information and try to determine what, if anything, they should do with this information, i.e., people must match the information encountered to current or anticipated needs. Decisions and actions relating to encountered information are collectively referred to as keeping activities.
The ability to effectively handle information that is encountered by happenstance is essential to a person's ability to discover new material and make new connections.[53]People also keep information that they have actively sought but do not have time to process currently. A search on the web, for example, often produces much more information than can be consumed in the current session. Both the decision to keep this information for later use and the steps to do so are keeping activities.
Keeping activities are also triggered when people are interrupted during a current task and look for ways of preserving the current state so that work can be quickly resumed later.[54]People keep appointments by entering reminders into a calendar and keep good ideas or "things to pick up at the grocery store" by writing down a few cryptic lines on a loose piece of paper. People keep not only to ensure they have the information later, but also to build reminders to look for and use this information. Failure to remember to use information later is one kind ofprospective memoryfailure.[55]In order to avoid such a failure, people may, for example, self-e-mail a web page reference in addition to or instead of making a bookmark because the e-mail message with the reference appears in the inbox where it is more likely to be noticed and used.[56]
The keeping decision can be characterized as a signal detection task subject to errors of two kinds: 1) an incorrect rejection ("miss") when information is ignored that later is needed and should have been kept (e.g., proof of charitable donations needed now to file a tax return) and 2) a false positive when information kept as useful (incorrectly judged as "signal") turns out not to be used later.[57]Information kept and never used only adds to the clutter – digital and physical – in a person's life.[58]
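The signal-detection framing can be made concrete with a small worked example: keep an item only when the expected cost of discarding it (a possible miss) exceeds the expected cost of keeping it (filing effort plus possible clutter). All numbers below are illustrative assumptions, not empirical estimates.

```python
# Hedged worked example of the keeping decision as signal detection.
p_needed     = 0.10   # assumed probability the information is needed later
cost_miss    = 60.0   # minutes lost re-acquiring it if discarded but needed
cost_keep    = 2.0    # minutes to file it now
cost_clutter = 1.0    # ongoing attention cost if kept but never used

expected_if_discard = p_needed * cost_miss                 # 6.0
expected_if_keep = cost_keep + (1 - p_needed) * cost_clutter  # 2.9
print("keep" if expected_if_keep < expected_if_discard else "discard")
# Here: keeping costs 2.9 vs. discarding 6.0, so the item is kept.
```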
Keeping can be a difficult and error-prone effort. Filing, i.e., placing information items such as paper documents, digital documents and emails into folders, can be especially so.[59][60] To avoid, or delay, filing information (e.g., until more is known concerning where the information might be used), people may opt to put information in "piles" instead.[22] (Digital counterparts to physical piling include leaving information in the email inbox or placing digital documents and web links into a holding folder such as "stuff to look at later".) But information kept in a pile, physical or virtual, is easily forgotten as the pile fades into a background of clutter, and research indicates that a typical person's ability to keep track of different piles, by location alone, is limited.[61]
Tagging provides another alternative to filing information items into folders. A strict folder hierarchy does not readily allow for the flexible classification of information even though, in a person's mind, an information item might fit in several different categories.[62] A number of tag-related prototypes for PIM have been developed over the years.[63][64] A tagging approach has also been pursued in commercial systems, most notably Gmail (as "labels"), but the success of tags so far is mixed. Bergman et al. found that users, when provided with options to use folders or tags, preferred folders to tags and, even when using tags, typically refrained from adding more than a single tag per information item.[65][66] Civan et al., through an engagement of participants in critical, comparative observation of both tagging and the use of folders, were able to elicit some limitations of tagging not previously discussed openly, for example, that once a person decides to use multiple tags, it is usually important to continue doing so (otherwise a tag not applied consistently becomes ineffective as a means of retrieving a complete set of items).[67]
Technologies may help to reduce the costs, in personal time and effort, of keeping and the likelihood of error. For example, the ability to take a digital photo of a sign, billboard announcement or the page of a paper document can obviate the task of otherwise transcribing (or photocopying) the information.
A person's ongoing use of a smartphone through the day can create a time-stamped record of events as a kind of automated keeping, especially of information "experienced by me" (see section, "The senses in which information is personal"), with potential use in a person's efforts to journal or to return to information previously experienced ("I think I read the email while in the taxi on the way to the airport..."). Activity tracking technology can further enrich the record of a person's daily activity, with tremendous potential for people to enrich their understanding of their daily lives and the healthiness of their diet and their activities.[68]
Technologies to automate the keeping of personal information segue to personal informatics and the quantified self movement and, in the extreme, life logging, a "total capture" of information.[69] Tracking technologies raise serious issues of privacy (see "Managing privacy and the flow of information"). Additional questions arise concerning the utility and even the practical accessibility of "total capture".[70]
Activities of finding and, especially, keeping can segue into activities to maintain and organize as when, for example, efforts to keep a document in the file system prompt the creation of a new folder or efforts to re-find a document highlight the need to consolidate two folders with overlapping content and purpose.
Differences between people are especially apparent in their approaches to the maintenance and organization of information. Malone[22] distinguished between "neat" and "messy" organizations of paper documents. "Messy" people had more piles in their offices and appeared to invest less effort than "neat" people in filing information. Comparable differences have been observed in the ways people organize digital documents, emails, and web references.[71]
Activities of keeping correlate with activities of organizing so that, for example, people with more elaborate folder structures tend to file information more often and sooner.[71]However, people may be selective in the information forms for which they invest efforts to organize. The schoolteachers who participated in one study, for example, reported having regular "spring cleaning" habits for organization and maintenance of paper documents but no comparable habits for digital information.[72]
Activities of organization (e.g., creating and naming folders) segue into activities of maintenance such as consolidating redundant folders, archiving information no longer in active use, and ensuring that information is properly backed up and otherwise secured. (See also section, "Managing privacy and the flow of information".)
Studies of people's folder organizations for digital information indicate that these have uses going far beyond the organization of files for later retrieval. Folders are information in their own right – representing, for example, a person's evolving understanding of a project and its components. A folder hierarchy can sometimes represent an informal problem decomposition with a parent folder representing a project and subfolders representing major components of the project (e.g., "wedding reception" and "church service" for a "wedding" project).[73]
However, people generally struggle to keep their information organized[74] and often do not have reliable backup routines.[75] People have trouble maintaining and organizing many distinct forms of information (e.g., digital documents, emails, and web references)[76] and are sometimes observed to make special efforts to consolidate different information forms into a single organization.[56]
With ever increasing stores of personal digital information, people face challenges of digital curation for which they are not prepared.[77][78][79] At the same time, these stores offer their owners the opportunity, with the right training and tool support, for exploitation of their information in new, useful ways.[80]
Empirical observations of PIM studies motivate prototyping efforts towards information tools that provide better support for the maintenance, organization and, going further, curation of personal information. For example, GrayArea[81] applies the demotion principle of the user-subjective approach to allow people to move less frequently used files in any given folder to a gray area at the bottom end of the folder's listing. These files can still be accessed but are less visible and so less likely to distract a person's attention.
The Planz[51] prototype supports an in-context creation and integration of project-related files, emails, web references, informal notes and other forms of information into a simplified, document-like interface meant to represent the project, with headings corresponding to folders in the personal file system and subheadings (for tasks, sub-projects, or other project components) corresponding to subfolders. The intention is that a single, useful organization should emerge incidentally as people focus on the planning and completion of their projects.
People face a continual evaluation of tradeoffs in deciding what information "flows" into and out of their PSI. Each interaction poses some degree of risk to privacy and security. Letting out information to the wrong recipients can lead to identity theft. Letting in the wrong kind of information can mean that a person's devices are "infected" and the person's data is corrupted or "locked" for ransom. By some estimates, 30% or more of the computers in the United States are infected.[82] However, the exchange of information, incoming and outgoing, is an essential part of living in the modern world. To order goods and services online, people must be prepared to "let out" their credit card information. To try out a potentially useful, new information tool, people may need to "let in" a download that could potentially make unwelcome changes to the web browser or the desktop. Providing for adequate control over the information coming into and out of a PSI is a major challenge. Even more challenging is designing the user interface to make clear the implications of various privacy choices, particularly regarding Internet privacy. What, for example, are the personal information privacy implications of clicking the "Sign Up" button for use of social media services such as Facebook?[83]
People seek to understand how they might improve various aspects of their PIM practices with questions such as "Do I really need to keep all this information?"; "Is this tool (application, applet, device) worth the troubles (time, frustration) of its use?" and, perhaps most persistent, "Where did the day go? Where has the time gone? What did I accomplish?". These last questions may often be voiced in reflection, perhaps on the commute home from work at the end of the workday.
But there is increasing reason to expect that answers will be based on more than remembrance and reflection. Increasingly, data captured incidentally and automatically over the course of a person's day, from the person's interactions with various information tools working with various forms of information (files, emails, texts, pictures, etc.), can be brought to bear in evaluations of a person's PIM practice and the identification of possible ways to improve.[84]
Efforts to make sense of information represent another set of meta-level activities that operate on personal information and the mapping between information and need. People must often assemble and analyze a larger collection of information to decide what to do next. "Which job applicant is most likely to work best for us?", "Which retirement plan to choose?", "What should we pack for our trip?". These and many other decisions are generally based not on a single information item but on a collection of information items – documents, emails (e.g., with advice or impressions from friends and colleagues), web references, etc.
Making sense of information is "meta" not only for its broader focus on information collections but also because it permeates most PIM activity even when the primary purpose may ostensibly be something else. For example, as people organize information into folders, ostensibly to ensure its subsequent retrieval, people may also be making sense and coming to a deeper understanding of this information.
Personality and mood can impact a person's practice of PIM and, in turn, a person's emotions can be impacted by the person's practice of PIM.
In particular, personality traits (e.g., "conscientiousness" or "neuroticism") have, in certain circumstances, been shown to correlate with the extent to which a person keeps and organizes information into a personal archive such as a personal filing system.[85] However, another recent study found personality traits were not correlated with any aspects of personal filing systems, suggesting that PIM practices are influenced less by personality than by external factors such as the operating system used (i.e., Mac OS or Windows), which were seen to be much more predictive.[86]
Aside from the correlation between practices of PIM and more enduring personality traits, there is evidence to indicate that a person's (more changeable) mood impacts activities of PIM so that, for example, a person experiencing negative moods, when organizing personal information, is more likely to create a structure with more folders where folders, on average, contain fewer files.[87]
Conversely, the information a person keeps or routinely encounters (e.g., via social media) can profoundly impact a person's mood. Even as explorations continue into the potential for the automatic, incidental capture of information (see section Keeping), there is growing awareness of the need to design for forgetting as well as for remembrance as, for example, when a person realizes the need to dispose of digital belongings in the aftermath of a romantic breakup or the death of a loved one.[88]
Beyond the negative feelings induced by information associated with a failed relationship, people experience negative feelings about their PIM practices per se. People in general are shown to experience anxiety and dissatisfaction with respect to their personal information archives, including both concerns about possible loss of the information and concerns about their ability and effectiveness in managing and organizing their information.[89][90]
Traditional, personal health information resides in various information systems in healthcare institutions (e.g., clinics, hospitals, insurance providers), often in the form of medical records. People often have difficulty managing or even navigating a variety of paper or electronic medical records across multiple health services in different specializations and institutions.[91] Also referred to as personal health records, this type of personal health information usually requires people (i.e., patients) to engage in additional PIM finding activities to locate and gain access to health information and then to generate a comprehensible summary for their own use.
With the rise of consumer-facing health products including activity trackers and health-related mobile apps, people are able to access new types of personal health data (e.g., physical activity, heart rate) outside healthcare institutions. PIM behavior also changes. Much of the effort to keep information is automated. But people may experience difficulties making sense of and using the information later, e.g., to plan future physical activities based on activity tracker data. People are also frequently engaged in other meta-level activities, such as maintaining and organizing (e.g., syncing data across different health-related mobile apps).[92]
The purpose of PIM study is both descriptive and prescriptive. PIM research seeks to understand what people do now and the problems they encounter, i.e., in the management of information and the use of information tools. This understanding is useful on its own but should also be applied to understand what might be done in techniques, training and, especially, tool design to improve a person's practice of PIM.
The nature of PIM makes its study challenging.[93] The techniques and preferred methods of a person's PIM practice can vary considerably with information form (e.g., files vs. emails) and over time.[71][49][94] The operating system and the default file manager are also shown to impact PIM practices, especially in the management of files.[32][95] A person's practice is also observed to vary in significant ways with gender, age and current life circumstances.[96][97][98][99] Certainly, differences among people on different sides of the so-called "digital divide" will have profound impact on PIM practices. And, as noted in section "Personality, mood, and emotion", personality traits and even a person's current mood can impact PIM behavior.
For research results to generalize, or else to be properly qualified, PIM research, at least in aggregate, should include the study of people, with a diversity of backgrounds and needs, over time as they work in many different situations, with different forms of information and different tools of information management.
At the same time, PIM research, at least in initial exploratory phases, must often be done in situ (e.g., in a person's workplace or office, or at least where people have access to their laptops, smartphones and other devices of information management) so that people can be observed as they manage information that is "personal" to them (see section "The senses in which information is personal"). Exploratory methods are demanding of the time of both observer and participant and can also be intrusive for the participants. Consequently, the number and nature of participants is likely to be limited, i.e., participants may often be people "close at hand" to the observer such as family, friends, colleagues or other members of the observer's community.
For example, the guided tour, in which the participant is asked to give an interviewer a "tour" of the participant's various information collections (e.g., files, emails, Web bookmarks, digital photographs, paper documents, etc.), has proven a very useful, but expensive, method of study with results bound by caveats reflecting the typically small number and narrow sampling of participants.
The guided tour method is one of several methods that are excellent for exploratory work but expensive and impractical to do with a larger, more diverse sampling of people. Other exploratory methods include the use of think aloud protocols collected, for example, as a participant completes a keeping or finding task,[56] and the experience sampling method wherein participants report on their PIM actions and experiences over time, possibly as prompted (e.g., by a beep or a text on a smartphone).
A challenge is to combine, within or across studies, time-consuming (and often demographically biased) methods of exploratory observation with other methods that have broader, more economical reach. The exploratory methods bring out interesting patterns; the follow-on methods add in numbers and diversity of participants. Among these methods are:
Another method, using the Delphi technique for achieving consensus, has been used to leverage the expertise and experience of PIM researchers as a means of extending, indirectly, the number and diversity of PIM practices represented.[102]
The purview of PIM tool design applies to virtually any tool people use to work with their information, from "sticky notes" and hanging folders for paper-based information to a wide range of computer-based applications for the management of digital information, ranging from applications people use every day such as Web browsers, email applications and texting applications to personal information managers.
With respect to methods for the evaluation of alternatives in PIM tool design, PIM researchers again face an "in situ" challenge: how to evaluate an alternative, as nearly as possible, in the working context of a person's PSI? One "let it lie" approach[103] would provide for interfaces between the tool under evaluation and a participant's PSI so that the tool can work with a participant's other tools and the participant's personal information (as opposed to working in a separate environment with "test" data). Dropbox and other file hosting services exemplify this approach: users can continue to work with their files and folders locally on their computers through the file manager even as an installed applet works to seamlessly synchronize the user's files and folders with a Web store for the added benefits of a backup and options to synchronize this information with other devices and share this information with other users.
In what is better described as a methodology of tool design rather than a method, Bergman reports good success in the application of a user-subjective approach. The user-subjective approach advances three design principles. In brief, the design should allow the following: 1) all project-related items, no matter their form (or format), are to be organized together (the subjective project classification principle); 2) the importance of information (to the user) should determine its visual salience and accessibility (the subjective importance principle); and 3) information should be retrieved and used by the user in the same context as it was previously used in (the subjective context principle). The approach may suggest design principles that serve not only in evaluating and improving existing systems but also in creating new implementations. For example, according to the demotion principle, information items of lower subjective importance should be demoted (i.e., by making them less visible) so as not to distract the user, but be kept within their original context just in case they are needed. The principle has been applied in the creation of several interesting prototypes.[104][81]
Finally, a simple "checklist" methodology of tool design",[3]follows from an assessment of a proposed tool design with respect to each of the six senses in which information can be personal (see section "The senses in which information is personal") and each of the six activities of PIM (finding, keeping and the four meta-level activities, see section "Activities of PIM"). A tool that is good with respect to one kind of personal information or one PIM activity, may be bad with respect to another. For example, a new smartphone app that promises to deliver information potentially "relevant to me" (the "6th sense" in which information is personal) may do so only at the cost of a distracting increase in the information "directed to me" and by keeping too much personal information "about me" in a place not under the person's control.
PIM is a practical meeting ground for many disciplines including cognitive psychology, cognitive science, human-computer interaction (HCI), human information interaction (HII), library and information science (LIS), artificial intelligence (AI), information retrieval, information behavior, organizational information management, and information science.
Cognitive psychology, as the study of how people learn and remember, problem solve, and make decisions, necessarily also includes the study of how people make smart use of available information. The related field of cognitive science, in its efforts to apply these questions more broadly to the study and simulation of intelligent behavior, is also related to PIM. (Cognitive science, in turn, has significant overlap with the field of artificial intelligence).
There is great potential for a mutually beneficial interplay between cognitive science and PIM. Sub-areas of cognitive science of clear relevance to PIM include problem solving and decision making. For example, folders created to hold information for a big project such as "plan my wedding" may sometimes resemble a problem decomposition.[105] To take another example, the signal detection task[106] has long been used to frame and explain human behavior and has recently been used as a basis for analyzing our choices concerning what information to keep and how – a key activity of PIM.[57] Similarly, there is interplay between the psychological study of categorization and concept formation and the PIM study of how people use tags and folders to describe and organize their information.
Now large portions of a document may be the product of "copy-and-paste" operations (from our previous writings) rather than a product of original writing. Certainly, management of text pieces pasted for re-use is a PIM activity, and this raises several interesting questions. How do we go about deciding when to re-use and when to write from scratch? We may sometimes spend more time chasing down a paragraph we have previously written than it would have taken to simply write a new paragraph expressing the same thoughts. Beyond this, we can wonder at what point a reliance on an increasing (and increasingly available) supply of previously written material begins to impact our creativity.
As people do PIM they work in an external environment that includes other people, available technology, and, often, an organizational setting. This means that situated cognition, distributed cognition, and social cognition all relate to the study of PIM.
The study of PIM is also related to the field of human–computer interaction (HCI). Some of the more influential papers on PIM over the years have been published in HCI journals and conference proceedings. However, the "I" in PIM is for information – in various forms, paper-based and digital (e.g., books, digital documents, emails, and even the letter magnets on a refrigerator in the kitchen). The "I" in HCI stands for "interaction" as this relates to the "C" – computers. (An argument has been advanced that HCI should focus more on information rather than computers.[107])
Group information management (GIM, usually pronounced with a soft "G") has been written about elsewhere in the context of PIM.[108][109] The study of GIM, in turn, has clear relevance to the study of computer-supported cooperative work (CSCW). GIM is to CSCW as PIM is to HCI. Just as concerns of PIM substantially overlap with but are not fully subsumed by concerns of HCI (nor vice versa), concerns of GIM overlap with but are not subsumed by concerns of CSCW. Information in support of GIM activities can be in non-digital forms such as paper calendars and bulletin boards that do not involve computers.
Group and social considerations frequently enter into a person's PIM strategy.[110]For example, one member of a household may agree to manage medical information for everyone in the household (e.g., shot records) while another member of the household manages financial information for the household. But the collaborative organization and sharing of information is often difficult because, for example, the people working together in a group may have many different perspectives on how best to organize information.[111][112]
In larger organizational settings, the GIM goals of the organization may conflict with the PIM goals of individuals working within the organization, where the goals of different individuals may also conflict.[113]Individuals may, for example, keep copies of secure documents on their private laptops for the sake of convenience even though doing so violates group (organizational) security.[114]Given drawbacks—real or perceived—in the use of web services that support a shared use of folders,[115][116]people working in a group may opt to share information instead through the use of e-mail attachments.[117]
Concerns of data management relate to PIM especially with respect to the safe, secure, long-term preservation of personal information in digital form. The study of information management and knowledge management in organizations also relates to the study of PIM and issues seen first at an organizational level often migrate to the PIM domain.[118]
Concerns of knowledge management on a personal (vs. organizational) level have given rise to arguments for a field of personal knowledge management (PKM). However, knowledge is not a "thing" to be managed directly but rather indirectly, e.g., through items of information such as Web pages, emails, and paper documents. PKM is best regarded as a useful subset of PIM[118] with special focus on important issues that might otherwise be overlooked, such as self-directed efforts of knowledge elicitation ("What do I know? What have I learned?") and knowledge instillation ("How better to learn what it is I want to know?").
Both time management and task management on a personal level make heavy use of information tools and external forms of information such as to-do lists, calendars, timelines, and email exchanges. These are another form of information to be managed. Over the years, email, in particular, has been used in an ad hoc manner in support of task management.[119][120]
Much of the useful information a person receives comes, often unprompted, through a person's network of family, friends and colleagues. People reciprocate, and much of the information a person sends to others reflects an attempt to build relationships and influence the behavior of others. As such, personal network management (PNM) is a crucial aspect of PIM and can be understood as the practice of managing the links and connections to other people for social and professional benefits.
|
https://en.wikipedia.org/wiki/Personal_information_management
|
Pearl growing is a metaphor taken from the process of small bits of sand growing to make a beautiful pearl, which is used in information literacy. This is also called "snowballing",[1] alluding to the way a snowball grows as it accumulates snow. In this context it refers to the process of using one information item (like a subject term or citation) to find content that provides more information items. This search strategy is most successfully employed at the beginning of the research process, as the searcher uncovers new pearls about his or her topic.
Citation pearl growing is the act of using one relevant source, or citation, to find more relevant sources on a topic. The searcher usually has a document that matches a topic or information need. From this document, the searcher is able to find other keywords, descriptors and themes to use in a subsequent search.[2] Citation pearl growing is a popular search and retrieval method used by librarians.[3]
Subject pearl growing is a strategy used in an electronic database that has subject or keyword descriptors. By clicking on one subject, the searcher is able to find other related subjects and subdivisions that may or may not be useful to the search.
Searchers use the pearl growing technique when surfing the Internet. Using the theory that websites that link to each other are similar, a searcher can move from site to site, collecting information. Ramer (2005) suggests pearl growing by using the pearl as a search term in search engines or even in the URL.
In systematic literature reviews, pearl growing is a technique used to ensure all relevant articles are included. Pearl growing involves identifying a primary article that meets the inclusion criteria for the review. From this primary article, the researcher works backwards to find all the articles cited in the bibliography and checks them for eligibility for inclusion in the review. The researcher then works forwards to search for any articles that have cited the primary article. It is estimated that up to 51% of references in a systematic review are identified by pearl growing.[4]There is evidence that using pearl growing for systematic reviews is a more comprehensive approach and more likely to identify all relevant articles compared to online database searches.[5]
Pearl growing, when applied to scientific literature, may also be referred to as citation mining or snowballing.
|
https://en.wikipedia.org/wiki/Pearl_growing
|
Query understanding is the process of inferring the intent of a search engine user by extracting semantic meaning from the searcher's keywords.[1] Query understanding methods generally take place before the search engine retrieves and ranks results. It is related to natural language processing but specifically focused on the understanding of search queries.
Many languages inflect words to reflect their role in the utterance they appear in. The variation between various forms of a word is likely to be of little importance for the relatively coarse-grained model of meaning involved in a retrieval system, and for this reason the task of conflating the various forms of a word is a potentially useful technique to increase recall of a retrieval system.[2]
Stemming algorithms, also known as stemmers, typically use a collection of simple rules to remove suffixes, intended to model the language's inflection rules.[3]
For some languages, there are simple lemmatisation methods to reduce a word in a query to its lemma or root form or its stem; for others, this operation involves non-trivial string processing and may require recognizing the word's part of speech or referencing a lexical database.
The effectiveness of stemming and lemmatization varies across languages.[4][5]
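To illustrate the rule-based approach, the following minimal Python sketch strips a handful of suffixes. The rule list is hypothetical and far smaller than that of a real stemmer such as the Porter stemmer; it is only meant to show the rule-matching idea.

```python
# A minimal sketch of a rule-based suffix stripper. The rule list is
# illustrative only; production stemmers (e.g., the Porter stemmer)
# apply many more rules with conditions on the remaining stem.
SUFFIX_RULES = [
    ("sses", "ss"),   # "classes"  -> "class"
    ("ies",  "i"),    # "ponies"   -> "poni"
    ("ing",  ""),     # "indexing" -> "index"
    ("ed",   ""),     # "indexed"  -> "index"
    ("s",    ""),     # "indexes"  -> "indexe" (over/under-stemming happens)
]

def stem(word: str) -> str:
    """Apply the first matching suffix rule, if any."""
    for suffix, replacement in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

if __name__ == "__main__":
    for w in ["indexing", "indexed", "ponies", "classes", "query"]:
        print(w, "->", stem(w))
```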
Query segmentation is a key component of query understanding, aiming to divide a query into meaningful segments. Traditional approaches, such as the bag-of-words (BOW) model, treat individual words as independent units, which can limit interpretative accuracy. For languages like Chinese, where words are not separated by spaces, segmentation is essential, as individual characters often lack standalone meaning. Even in English, the BOW model may not capture the full meaning, as certain phrases—such as "New York"—carry significance as a whole rather than as isolated terms. By identifying phrases or entities within queries, query segmentation enhances interpretation, enabling search engines to apply proximity and ordering constraints, ultimately improving search accuracy and user satisfaction.[6]
Entity recognition is the process of locating and classifying entities within a text string. Named-entity recognition specifically focuses on named entities, such as names of people, places, and organizations. In addition, entity recognition includes identifying concepts in queries that may be represented by multi-word phrases. Entity recognition systems typically use grammar-based linguistic techniques or statistical machine learning models.[7]
Query rewriting is the process of automatically reformulating a search query to more accurately capture its intent. Query expansion adds additional query terms, such as synonyms, in order to retrieve more documents and thereby increase recall. Query relaxation removes query terms to reduce the requirements for a document to match the query, thereby also increasing recall. Other forms of query rewriting, such as automatically converting consecutive query terms into phrases and restricting query terms to specific fields, aim to increase precision.
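The following sketch shows, under toy assumptions, how expansion and relaxation might operate on a term list. The synonym table and idf values are hypothetical stand-ins for the resources (thesauri, query logs, embeddings) a real system would consult.

```python
# Sketch of query expansion and query relaxation.
SYNONYMS = {
    "car": ["automobile"],
    "cheap": ["inexpensive", "affordable"],
}

def expand(terms):
    """Add synonyms for each term to increase recall."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded

def relax(terms, idf):
    """Drop the least discriminative term (lowest idf) to increase recall."""
    if len(terms) <= 1:
        return list(terms)
    return sorted(terms, key=lambda t: idf.get(t, 0.0), reverse=True)[:-1]

if __name__ == "__main__":
    query = ["cheap", "car", "rental"]
    print(expand(query))  # adds synonyms for "cheap" and "car"
    print(relax(query, {"cheap": 0.4, "car": 1.1, "rental": 2.0}))
```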
Automatic spelling correction is a critical feature of modern search engines, designed to address common spelling errors in user queries. Such errors are especially frequent as users often search for unfamiliar topics. By correcting misspelled queries, search engines enhance their understanding of user intent, thereby improving the relevance and quality of search results and the overall user experience.[8]
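As a rough illustration (not how production engines do it, which typically involves noisy-channel models trained on query logs), a spelling corrector can be sketched as fuzzy matching against a known vocabulary; the vocabulary here is hypothetical.

```python
import difflib

# Sketch of spelling correction against a known vocabulary (a toy list).
VOCABULARY = ["information", "retrieval", "relevance", "query", "ranking"]

def correct(term: str) -> str:
    """Return the closest vocabulary term, or the term itself if no
    sufficiently similar candidate exists."""
    matches = difflib.get_close_matches(term, VOCABULARY, n=1, cutoff=0.8)
    return matches[0] if matches else term

if __name__ == "__main__":
    print(correct("retreival"))   # -> "retrieval"
    print(correct("query"))       # unchanged
```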
|
https://en.wikipedia.org/wiki/Query_understanding
|
In information science and information retrieval, relevance denotes how well a retrieved document or set of documents meets the information need of the user. Relevance may include concerns such as timeliness, authority or novelty of the result.
The concern with the problem of finding relevant information dates back at least to the first publication of scientific journals in the 17th century.[citation needed]
The formal study of relevance began in the 20th century with the study of what would later be called bibliometrics. In the 1930s and 1940s, S. C. Bradford used the term "relevant" to characterize articles relevant to a subject (cf. Bradford's law). In the 1950s, the first information retrieval systems emerged, and researchers noted the retrieval of irrelevant articles as a significant concern. In 1958, B. C. Vickery made the concept of relevance explicit in an address at the International Conference on Scientific Information.[1]
Since 1958, information scientists have explored and debated definitions of relevance. A particular focus of the debate was the distinction between "relevance to a subject" or "topical relevance" and "user relevance".[1]
The information retrieval community has emphasized the use of test collections and benchmark tasks to measure topical relevance, starting with the Cranfield Experiments of the early 1960s and culminating in the TREC evaluations that continue to this day as the main evaluation framework for information retrieval research.[2]
In order to evaluate how well an information retrieval system retrieved topically relevant results, the relevance of retrieved results must be quantified. In Cranfield-style evaluations, this typically involves assigning a relevance level to each retrieved result, a process known as relevance assessment. Relevance levels can be binary (indicating a result is relevant or that it is not relevant), or graded (indicating results have a varying degree of match between the topic of the result and the information need). Once relevance levels have been assigned to the retrieved results, information retrieval performance measures can be used to assess the quality of a retrieval system's output.
In contrast to this focus solely on topical relevance, the information science community has emphasized user studies that consider user relevance.[3] These studies often focus on aspects of human-computer interaction (see also human-computer information retrieval).
The cluster hypothesis, proposed by C. J. van Rijsbergen in 1979, asserts that two documents that are similar to each other have a high likelihood of being relevant to the same information need. With respect to the embedding similarity space, the cluster hypothesis can be interpreted globally or locally.[4] The global interpretation assumes that there exist some fixed set of underlying topics derived from inter-document similarity. These global clusters or their representatives can then be used to relate the relevance of two documents (e.g. two documents in the same cluster should both be relevant to the same request). Methods in this spirit include:
A second interpretation, most notably advanced by Ellen Voorhees,[8] focuses on the local relationships between documents. The local interpretation avoids having to model the number or size of clusters in the collection and allows relevance at multiple scales. Methods in this spirit include:
Local methods require an accurate and appropriate document similarity measure.
The documents which are most relevant are not necessarily those which are most useful to display in the first page of search results. For example, two duplicate documents might be individually considered quite relevant, but it is only useful to display one of them. A measure called "maximal marginal relevance" (MMR) has been proposed to manage this shortcoming. It considers the relevance of each document only in terms of how much new information it brings given the previous results.[13]
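A minimal sketch of MMR re-ranking follows; the relevance scores and the similarity function are placeholders for whatever the retrieval system supplies.

```python
# Sketch of maximal marginal relevance (MMR) re-ranking.
# rel[d] is each document's query relevance; sim(a, b) is a
# document-document similarity in [0, 1]; lambda_ trades off
# relevance against novelty.
def mmr(candidates, rel, sim, lambda_=0.5, k=5):
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(d):
            redundancy = max((sim(d, s) for s in selected), default=0.0)
            return lambda_ * rel[d] - (1 - lambda_) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    rel = {"d1": 0.9, "d2": 0.85, "d3": 0.4}
    # d1 and d2 are near-duplicates; d3 is different.
    pairs = {frozenset(("d1", "d2")): 0.95}
    sim = lambda a, b: pairs.get(frozenset((a, b)), 0.1)
    print(mmr(["d1", "d2", "d3"], rel, sim, k=2))  # -> ['d1', 'd3']
```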
In some cases, a query may have an ambiguous interpretation, or a variety of potential responses. Providing a diversity of results can be a consideration when evaluating the utility of a result set.[14]
|
https://en.wikipedia.org/wiki/Relevance_(information_retrieval)
|
Relevance feedback is a feature of some information retrieval systems. The idea behind relevance feedback is to take the results that are initially returned from a given query, to gather user feedback, and to use information about whether or not those results are relevant to perform a new query. We can usefully distinguish between three types of feedback: explicit feedback, implicit feedback, and blind or "pseudo" feedback.
Explicit feedback is obtained from assessors of relevance indicating the relevance of a document retrieved for a query. This type of feedback is defined as explicit only when the assessors (or other users of a system) know that the feedback provided is interpreted as relevance judgments.
Users may indicate relevance explicitly using a binary or graded relevance system. Binary relevance feedback indicates that a document is either relevant or irrelevant for a given query. Graded relevance feedback indicates the relevance of a document to a query on a scale using numbers, letters, or descriptions (such as "not relevant", "somewhat relevant", "relevant", or "very relevant"). Graded relevance may also take the form of a cardinal ordering of documents created by an assessor; that is, the assessor places documents of a result set in order of (usually descending) relevance. An example of this would be the SearchWiki feature implemented by Google on their search website.
The relevance feedback information is combined with the original query to improve retrieval performance, for example by means of the well-known Rocchio algorithm.
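A minimal sketch of the Rocchio update on term-weight vectors; the weights alpha, beta, and gamma below are commonly cited defaults rather than values fixed by the algorithm.

```python
import numpy as np

# Sketch of the Rocchio relevance feedback update:
#   q_new = alpha * q + beta * centroid(relevant) - gamma * centroid(non_relevant)
def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q = q - gamma * np.mean(non_relevant, axis=0)
    return np.maximum(q, 0.0)  # negative term weights are usually clipped

if __name__ == "__main__":
    query = np.array([1.0, 0.0, 0.0])
    relevant = np.array([[0.9, 0.8, 0.0], [1.0, 0.6, 0.0]])
    non_relevant = np.array([[0.0, 0.0, 1.0]])
    print(rocchio(query, relevant, non_relevant))
```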
A performance metric which became popular around 2005 to measure the usefulness of a ranking algorithm based on explicit relevance feedback is normalized discounted cumulative gain (NDCG). Other measures include precision at k and mean average precision.
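As an illustration, here is a sketch of DCG and NDCG over a ranked list of graded relevance labels; note that several DCG variants exist, and this sketch uses the simple logarithmic discount.

```python
import math

# Sketch of discounted cumulative gain (DCG) and its normalized form
# (NDCG) over graded relevance labels given in ranked order.
def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

if __name__ == "__main__":
    # Graded labels (e.g., 0 = not relevant ... 3 = very relevant)
    print(round(ndcg([3, 2, 0, 1]), 3))
```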
Implicit feedback is inferred from user behavior, such as noting which documents they do and do not select for viewing, the duration of time spent viewing a document, or page browsing or scrolling actions.[1] There are many signals during the search process that one can use for implicit feedback and the types of information to provide in response.[2][3]
The key differences between implicit and explicit relevance feedback include:[4]
An example of this is dwell time, which is a measure of how long a user spends viewing the page linked to in a search result. It is an indicator of how well the search result met the query intent of the user, and is used as a feedback mechanism to improve search results.
Pseudo relevance feedback, also known as blind relevance feedback, provides a method for automatic local analysis. It automates the manual part of relevance feedback, so that the user gets improved retrieval performance without an extended interaction. The method is to do normal retrieval to find an initial set of most relevant documents, to then assume that the top "k" ranked documents are relevant, and finally to do relevance feedback as before under this assumption. The procedure is: (1) perform normal retrieval for the user's query; (2) assume that the top "k" ranked documents are relevant; (3) apply relevance feedback (for example, query expansion) under this assumption and retrieve again.
Some experiments, such as results from the Cornell SMART system published in Buckley et al. (1995), show improvement in retrieval system performance using pseudo-relevance feedback in the context of TREC 4 experiments.
This automatic technique mostly works. Evidence suggests that it tends to work better than global analysis.[5] Through query expansion, some relevant documents missed in the initial round can be retrieved to improve the overall performance. Clearly, the effect of this method strongly relies on the quality of the selected expansion terms. It has been found to improve performance in the TREC ad hoc task[citation needed]. But it is not without the dangers of an automatic process. For example, if the query is about copper mines and the top several documents are all about mines in Chile, then there may be query drift in the direction of documents on Chile. In addition, if the words added to the original query are unrelated to the query topic, the quality of the retrieval is likely to be degraded, especially in Web search, where web documents often cover multiple different topics. To improve the quality of expansion words in pseudo-relevance feedback, a positional relevance model has been proposed that selects from the feedback documents those words that are focused on the query topic, based on the positions of words in the feedback documents.[6] Specifically, the positional relevance model assigns more weight to words occurring closer to query words, based on the intuition that words closer to query words are more likely to be related to the query topic.
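The pseudo-relevance feedback loop can be sketched as follows; `search` and `score_terms` are placeholders for a real retrieval function and a term-weighting scheme such as tf–idf.

```python
from collections import Counter

# Sketch of pseudo-relevance feedback: retrieve, assume the top k
# results are relevant, pick expansion terms from them, retrieve again.
def pseudo_relevance_feedback(query_terms, search, score_terms, k=10, m=5):
    initial_results = search(query_terms)
    feedback_docs = initial_results[:k]          # assumed relevant
    candidates = score_terms(feedback_docs)      # {term: weight}
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    new_terms = [t for t in ranked if t not in query_terms][:m]
    return search(list(query_terms) + new_terms)

if __name__ == "__main__":
    corpus = {
        "d1": "query expansion improves recall in retrieval".split(),
        "d2": "relevance feedback and query expansion".split(),
        "d3": "cooking recipes for pasta".split(),
    }

    def search(terms):
        # Toy ranking: documents ordered by number of matching terms.
        return sorted(corpus, key=lambda d: -len(set(terms) & set(corpus[d])))

    def score_terms(doc_ids):
        return dict(Counter(t for d in doc_ids for t in corpus[d]))

    print(pseudo_relevance_feedback(["query"], search, score_terms, k=2, m=2))
```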
Blind feedback automates the manual part of relevance feedback and has the advantage that assessors are not required.
Relevance information is utilized by using the contents of the relevant documents to either adjust the weights of terms in the original query or to add words to the query. Relevance feedback is often implemented using the Rocchio algorithm.
|
https://en.wikipedia.org/wiki/Relevance_feedback
|
In machine learning, a nearest centroid classifier or nearest prototype classifier is a classification model that assigns to observations the label of the class of training samples whose mean (centroid) is closest to the observation. When applied to text classification using word vectors containing tf*idf weights to represent documents, the nearest centroid classifier is known as the Rocchio classifier because of its similarity to the Rocchio algorithm for relevance feedback.[1]
An extended version of the nearest centroid classifier has found applications in the medical domain, specifically classification of tumors.[2]
Given labeled training samples $\{({\vec x}_1, y_1), \dots, ({\vec x}_n, y_n)\}$ with class labels $y_i \in \mathbf{Y}$, compute the per-class centroids

$${\vec \mu}_\ell = \frac{1}{|C_\ell|} \sum_{i \in C_\ell} {\vec x}_i$$

where $C_\ell$ is the set of indices of samples belonging to class $\ell \in \mathbf{Y}$.

The class assigned to an observation ${\vec x}$ is

$$\hat{y} = \operatorname*{arg\,min}_{\ell \in \mathbf{Y}} \| {\vec \mu}_\ell - {\vec x} \|.$$
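A minimal sketch of this classifier in Python with NumPy:

```python
import numpy as np

# Sketch of a nearest centroid classifier following the definition above.
class NearestCentroid:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Distance from every observation to every class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

if __name__ == "__main__":
    X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
    y = np.array([0, 0, 1, 1])
    print(NearestCentroid().fit(X, y).predict(np.array([[0.1, 0.0], [1.0, 0.9]])))
```

scikit-learn ships a ready-made equivalent as sklearn.neighbors.NearestCentroid.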
|
https://en.wikipedia.org/wiki/Nearest_centroid_classifier
|
Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.
Popular search engines focus on the full-text indexing of online, natural language documents.[1] Media types such as pictures, video, audio,[2] and graphics[3] are also searchable.
Meta search engines reuse the indices of other services and do not store a local index, whereas cache-based search engines permanently store the index along with the corpus. Unlike full-text indices, partial-text services restrict the depth indexed to reduce index size. Larger services typically perform indexing at a predetermined time interval due to the required time and processing costs, while agent-based search engines index in real time.
The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would scan every document in the corpus, which would require considerable time and computing power. For example, while an index of 10,000 documents can be queried within milliseconds, a sequential scan of every word in 10,000 large documents could take hours. The additional computer storage required to store the index, as well as the considerable increase in the time required for an update to take place, are traded off for the time saved during information retrieval.
Major factors in designing a search engine's architecture include:
Search engine architectures vary in the way indexing is performed and in methods of index storage to meet the various design factors.
A major challenge in the design of search engines is the management of serial computing processes. There are many opportunities for race conditions and coherence faults. For example, a new document is added to the corpus and the index must be updated, but the index simultaneously needs to continue responding to search queries. This is a collision between two competing tasks. Consider that authors are producers of information, and a web crawler is the consumer of this information, grabbing the text and storing it in a cache (or corpus). The forward index is the consumer of the information produced by the corpus, and the inverted index is the consumer of information produced by the forward index. This is commonly referred to as a producer-consumer model. The indexer is the producer of searchable information and users are the consumers that need to search. The challenge is magnified when working with distributed storage and distributed processing. In an effort to scale with larger amounts of indexed information, the search engine's architecture may involve distributed computing, where the search engine consists of several machines operating in unison. This increases the possibilities for incoherency and makes it more difficult to maintain a fully synchronized, distributed, parallel architecture.[13]
Many search engines incorporate an inverted index when evaluating a search query to quickly locate documents containing the words in a query and then rank these documents by relevance. Because the inverted index stores a list of the documents containing each word, the search engine can use direct access to find the documents associated with each word in the query in order to retrieve the matching documents quickly. The following is a simplified illustration of an inverted index:
This index can only determine whether a word exists within a particular document, since it stores no information regarding the frequency and position of the word; it is therefore considered to be a Boolean index. Such an index determines which documents match a query but does not rank matched documents. In some designs the index includes additional information such as the frequency of each word in each document or the positions of a word in each document.[14] Position information enables the search algorithm to identify word proximity to support searching for phrases; frequency can be used to help in ranking the relevance of documents to the query. Such topics are the central research focus of information retrieval.
The inverted index is a sparse matrix, since not all words are present in each document. To reduce computer storage memory requirements, it is stored differently from a two-dimensional array. The index is similar to the term document matrices employed by latent semantic analysis. The inverted index can be considered a form of a hash table. In some cases the index is a form of a binary tree, which requires additional storage but may reduce the lookup time. In larger indices the architecture is typically a distributed hash table.[15]
For phrase searching, a specialized form of an inverted index called a positional index is used. A positional index not only stores the ID of the document containing the token but also the exact position(s) of the token within the document in the postings list. The occurrences of the phrase specified in the query are retrieved by navigating these postings lists and identifying the indexes at which the desired terms occur in the expected order (the same as the order in the phrase). So if we are searching for occurrences of the phrase "First Witch", we would: fetch the postings lists for "first" and "witch"; find the documents that appear in both lists; and, within each such document, look for a position of "witch" that immediately follows a position of "first".
The postings lists can be navigated using a binary search in order to minimize the time complexity of this procedure.[16]
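A sketch of a positional index and a two-word phrase query follows, using simple set intersection rather than the binary-search navigation just mentioned; the toy documents are hypothetical.

```python
from collections import defaultdict

# Sketch of a positional inverted index and a two-word phrase query.
# index[term][doc_id] is the list of positions of `term` in the document.
def build_index(docs):
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, token in enumerate(text.lower().split()):
            index[token][doc_id].append(pos)
    return index

def phrase_search(index, first, second):
    """Documents where `second` occurs immediately after `first`."""
    hits = []
    common = index[first].keys() & index[second].keys()
    for doc_id in common:
        follow = set(p + 1 for p in index[first][doc_id])
        if follow & set(index[second][doc_id]):
            hits.append(doc_id)
    return hits

if __name__ == "__main__":
    docs = {1: "first witch enters", 2: "the witch arrived first"}
    idx = build_index(docs)
    print(phrase_search(idx, "first", "witch"))  # [1]
```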
The inverted index is filled via a merge or rebuild. A rebuild is similar to a merge but first deletes the contents of the inverted index. The architecture may be designed to support incremental indexing,[17]where a merge identifies the document or documents to be added or updated and then parses each document into words. For technical accuracy, a merge conflates newly indexed documents, typically residing in virtual memory, with the index cache residing on one or more computer hard drives.
After parsing, the indexer adds the referenced document to the document list for the appropriate words. In a larger search engine, the process of finding each word in the inverted index (in order to report that it occurred within a document) may be too time consuming, and so this process is commonly split up into two parts, the development of a forward index and a process which sorts the contents of the forward index into the inverted index. The inverted index is so named because it is an inversion of the forward index.
The forward index stores a list of words for each document. The following is a simplified form of the forward index:
The rationale behind developing a forward index is that as documents are parsed, it is better to intermediately store the words per document. The delineation enables asynchronous system processing, which partially circumvents the inverted index update bottleneck.[18] The forward index is sorted to transform it to an inverted index. The forward index is essentially a list of pairs consisting of a document and a word, collated by the document. Converting the forward index to an inverted index is only a matter of sorting the pairs by the words. In this regard, the inverted index is a word-sorted forward index.
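The sort-based conversion can be sketched in a few lines; the toy forward index is hypothetical.

```python
from itertools import groupby

# Sketch of converting a forward index (document -> words) into an
# inverted index (word -> documents) by sorting (word, document) pairs.
forward_index = {
    "doc1": ["the", "cow", "says", "moo"],
    "doc2": ["the", "cat", "and", "the", "hat"],
}

pairs = sorted(
    {(word, doc) for doc, words in forward_index.items() for word in words}
)
inverted_index = {
    word: [doc for _, doc in group]
    for word, group in groupby(pairs, key=lambda p: p[0])
}
print(inverted_index["the"])  # ['doc1', 'doc2']
```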
Generating or maintaining a large-scale search engine index represents a significant storage and processing challenge. Many search engines utilize a form of compression to reduce the size of the indices on disk.[19] Consider the following scenario for a full text, Internet search engine: suppose it indexes 2 billion web pages, each containing 250 words on average, with roughly 5 characters per word stored at 1 byte per character.
Given this scenario, an uncompressed index (assuming a non-conflated, simple index) for 2 billion web pages would need to store 500 billion word entries. At 1 byte per character, or 5 bytes per word, this would require 2500 gigabytes of storage space alone.[citation needed] This space requirement may be even larger for a fault-tolerant distributed storage architecture. Depending on the compression technique chosen, the index can be reduced to a fraction of this size. The tradeoff is the time and processing power required to perform compression and decompression.[citation needed]
Notably, large scale search engine designs incorporate the cost of storage as well as the costs of electricity to power the storage. Thus compression is a measure of cost.[citation needed]
Document parsing breaks apart the components (words) of a document or other form of media for insertion into the forward and inverted indices. The words found are called tokens, and so, in the context of search engine indexing and natural language processing, parsing is more commonly referred to as tokenization. It is also sometimes called word boundary disambiguation, tagging, text segmentation, content analysis, text analysis, text mining, concordance generation, speech segmentation, lexing, or lexical analysis. The terms 'indexing', 'parsing', and 'tokenization' are used interchangeably in corporate slang.
Natural language processing is the subject of continuous research and technological improvement. Tokenization presents many challenges in extracting the necessary information from documents for indexing to support quality searching. Tokenization for indexing involves multiple technologies, the implementations of which are commonly kept as corporate secrets.[citation needed]
Unlike literate humans, computers do not understand the structure of a natural language document and cannot automatically recognize words and sentences. To a computer, a document is only a sequence of bytes. Computers do not 'know' that a space character separates words in a document. Instead, humans must program the computer to identify what constitutes an individual or distinct word, referred to as a token. Such a program is commonly called a tokenizer or parser or lexer. Many search engines, as well as other natural language processing software, incorporate specialized programs for parsing, such as YACC or Lex.
During tokenization, the parser identifies sequences of characters that represent words and other elements, such as punctuation, which are represented by numeric codes, some of which are non-printing control characters. The parser can also identify entities such as email addresses, phone numbers, and URLs. When identifying each token, several characteristics may be stored, such as the token's case (upper, lower, mixed, proper), language or encoding, lexical category (part of speech, like 'noun' or 'verb'), position, sentence number, sentence position, length, and line number.
If the search engine supports multiple languages, a common initial step during tokenization is to identify each document's language; many of the subsequent steps are language dependent (such as stemming and part of speech tagging). Language recognition is the process by which a computer program attempts to automatically identify, or categorize, the language of a document. Other names for language recognition include language classification, language analysis, language identification, and language tagging. Automated language recognition is the subject of ongoing research in natural language processing. Finding which language the words belong to may involve the use of a language recognition chart.
If the search engine supports multiple document formats, documents must be prepared for tokenization. The challenge is that many document formats contain formatting information in addition to textual content. For example, HTML documents contain HTML tags, which specify formatting information such as new line starts, bold emphasis, and font size or style. If the search engine were to ignore the difference between content and 'markup', extraneous information would be included in the index, leading to poor search results. Format analysis is the identification and handling of the formatting content embedded within documents which controls the way the document is rendered on a computer screen or interpreted by a software program. Format analysis is also referred to as structure analysis, format parsing, tag stripping, format stripping, text normalization, text cleaning and text preparation. The challenge of format analysis is further complicated by the intricacies of various file formats. Certain file formats are proprietary with very little information disclosed, while others are well documented. Common, well-documented file formats that many search engines support include:
Options for dealing with various formats include using a publicly available commercial parsing tool that is offered by the organization which developed, maintains, or owns the format, and writing a custom parser.
Some search engines support inspection of files that are stored in a compressed or encrypted file format. When working with a compressed format, the indexer first decompresses the document; this step may result in one or more files, each of which must be indexed separately. Commonly supported compressed file formats include:
Format analysis can involve quality improvement methods to avoid including 'bad information' in the index. Content creators can manipulate the formatting information to include additional content. Examples of abusing document formatting for spamdexing:
Some search engines incorporate section recognition, the identification of major parts of a document, prior to tokenization. Not all the documents in a corpus read like a well-written book, divided into organized chapters and pages. Many documents on the web, such as newsletters and corporate reports, contain erroneous content and side-sections that do not contain primary material (that which the document is about). For example, articles on the Wikipedia website display a side menu with links to other web pages. Some file formats, like HTML or PDF, allow for content to be displayed in columns. Even though the content is displayed, or rendered, in different areas of the view, the raw markup content may store this information sequentially. Words that appear sequentially in the raw source content are indexed sequentially, even though these sentences and paragraphs are rendered in different parts of the computer screen. If search engines index this content as if it were normal content, the quality of the index and search quality may be degraded due to the mixed content and improper word proximity. Two primary problems are noted:
Section analysis may require the search engine to implement the rendering logic of each document, essentially an abstract representation of the actual document, and then index the representation instead. For example, some content on the Internet is rendered via JavaScript. If the search engine does not render the page and evaluate the JavaScript within the page, it would not 'see' this content in the same way and would index the document incorrectly. Given that some search engines do not bother with rendering issues, many web page designers avoid displaying content via JavaScript or use the Noscript tag to ensure that the web page is indexed properly. At the same time, this fact can also be exploited to cause the search engine indexer to 'see' different content than the viewer.
Indexing often has to recognize the HTML tags to organize priority. For example, terms appearing within labels such as strong or within link anchor text may be weighted differently from terms in plain body text, since the placement of such labels at the beginning of the text does not necessarily make the text relevant. Some indexers, like Google and Bing, take care that the search engine does not treat large blocks of text as a relevant source purely because of such markup.[22]
Meta tag indexing plays an important role in organizing and categorizing web content. Specific documents often contain embedded meta information such as author, keywords, description, and language. For HTML pages, the meta tag contains keywords which are also included in the index. Earlier Internet search engine technology would only index the keywords in the meta tags for the forward index; the full document would not be parsed. At that time full-text indexing was not as well established, nor was computer hardware able to support such technology. The design of the HTML markup language initially included support for meta tags for the very purpose of being properly and easily indexed, without requiring tokenization.[23]
As the Internet grew through the 1990s, many brick-and-mortar corporations went 'online' and established corporate websites. The keywords used to describe webpages (many of which were corporate-oriented webpages similar to product brochures) changed from descriptive to marketing-oriented keywords designed to drive sales by placing the webpage high in the search results for specific search queries. The fact that these keywords were subjectively specified was leading to spamdexing, which drove many search engines to adopt full-text indexing technologies in the 1990s. Website designers and companies could only place so many 'marketing keywords' into the content of a webpage before draining it of all interesting and useful information. Given that conflict of interest with the business goal of designing user-oriented websites which were 'sticky', the customer lifetime value equation was changed to incorporate more useful content into the website in hopes of retaining the visitor. In this sense, full-text indexing was more objective and increased the quality of search engine results, as it was one more step away from subjective control of search engine result placement, which in turn furthered research of full-text indexing technologies.
In desktop search, many solutions incorporate meta tags to provide a way for authors to further customize how the search engine will index content from various files that is not evident from the file content. Desktop search is more under the control of the user, while Internet search engines must focus more on the full text index.
|
https://en.wikipedia.org/wiki/Search_engine_indexing
|
SIGIR is the Association for Computing Machinery's Special Interest Group on Information Retrieval. The scope of the group's specialty is the theory and application of computers to the acquisition, organization, storage, retrieval and distribution of information; emphasis is placed on working with non-numeric information, ranging from natural language to highly structured databases.
The annual international SIGIR conference, which began in 1978, is considered the most important in the field of information retrieval. SIGIR also sponsors the annual Joint Conference on Digital Libraries (JCDL) in association with SIGWEB, the Conference on Information and Knowledge Management (CIKM), and the International Conference on Web Search and Data Mining (WSDM) in association with SIGKDD, SIGMOD, and SIGWEB.
The group gives out several awards for contributions to the field of information retrieval. The most important award is the Gerard Salton Award (named after the computer scientist Gerard Salton), which is awarded every three years to an individual who has made "significant, sustained and continuing contributions to research in information retrieval". Additionally, SIGIR presents a Best Paper Award[1] to recognize the highest quality paper at each conference. The "Test of Time" Award[2] is a more recent award given to a paper that has had "long-lasting influence, including impact on a subarea of information retrieval research, across subareas of information retrieval research, and outside of the information retrieval research community". This award is selected from the set of full papers presented at the main SIGIR conference 10–12 years before.
The ACM SIGIR Academy[3][4] is a group of researchers honored by SIGIR. Each year, 3–5 new members are elected (in addition to other "very senior members of the IR community" who are "automatically" inducted) for having made significant, cumulative contributions to the development of the field of information retrieval and for influencing the research of others. These are the principal leaders of the field, whose efforts have shaped the discipline and/or industry through significant research, innovation, and/or service.
Here are the inductees into the SIGIR Academy by year:
|
https://en.wikipedia.org/wiki/Special_Interest_Group_on_Information_Retrieval
|
Subject indexing is the act of describing or classifying a document by index terms, keywords, or other symbols in order to indicate what different documents are about, to summarize their contents or to increase findability. In other words, it is about identifying and describing the subject of documents. Indexes are constructed, separately, on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents (such as books and articles) within a field of knowledge.
Subject indexing is used in information retrieval, especially to create bibliographic indexes to retrieve documents on a particular subject. Examples of academic indexing services are Zentralblatt MATH, Chemical Abstracts and PubMed. The index terms were mostly assigned by experts, but author keywords are also common.
The process of indexing begins with an analysis of the subject of the document. The indexer must then identify terms which appropriately identify the subject, either by extracting words directly from the document or assigning words from a controlled vocabulary.[1] The terms in the index are then presented in a systematic order.
Indexers must decide how many terms to include and how specific the terms should be. Together these determine the depth of indexing.
The first step in indexing is to decide on the subject matter of the document. In manual indexing, the indexer would consider the subject matter in terms of answers to a set of questions such as "Does the document deal with a specific product, condition or phenomenon?".[2] As the analysis is influenced by the knowledge and experience of the indexer, it follows that two indexers may analyze the content differently and so come up with different index terms. This will impact the success of retrieval.
Automatic indexing follows set processes of analyzing frequencies of word patterns and comparing results to other documents in order to assign documents to subject categories. This requires no understanding of the material being indexed. This leads to more uniform indexing, but at the expense of interpreting the true meaning. A computer program will not understand the meaning of statements and may therefore fail to assign some relevant terms or assign terms incorrectly. Human indexers focus their attention on certain parts of the document, such as the title, abstract, summary and conclusions, as analyzing the full text in depth is costly and time-consuming.[3] An automated system takes away the time limit and allows the entire document to be analyzed, but also has the option to be directed to particular parts of the document.
The second stage of indexing involves the translation of the subject analysis into a set of index terms. This can involve extracting from the document or assigning from a controlled vocabulary. With the ability to conduct a full text search widely available, many people have come to rely on their own expertise in conducting information searches, and full text search has become very popular. Nonetheless, subject indexing and its experts, professional indexers, catalogers, and librarians, remain crucial to information organization and retrieval. These experts understand controlled vocabularies and are able to find information that cannot be located by full text search. The cost of expert analysis to create subject indexing is not easily compared to the cost of hardware, software and labor to manufacture a comparable set of full-text, fully searchable materials. With new web applications that allow every user to annotate documents, social tagging has gained popularity, especially on the Web.[4]
One application of indexing, the book index, remains relatively unchanged despite the information revolution.
Extraction indexing involves taking words directly from the document. It uses natural language and lends itself well to automated techniques where word frequencies are calculated and those with a frequency over a pre-determined threshold are used as index terms. A stop-list containing common words (such as "the", "and") would be referred to and such stop words would be excluded as index terms.
Automated extraction indexing may lead to loss of meaning of terms by indexing single words as opposed to phrases. Although it is possible to extract commonly occurring phrases, it becomes more difficult if key concepts are inconsistently worded in phrases. Automated extraction indexing also has the problem that, even with use of a stop-list to remove common words, some frequent words may not be useful for allowing discrimination between documents. For example, the term glucose is likely to occur frequently in any document related to diabetes, so use of this term would likely return most or all the documents in the database. Post-coordinated indexing, where terms are combined at the time of searching, would reduce this effect, but the onus would be on the searcher to link appropriate terms as opposed to the information professional. In addition, terms that occur infrequently may be highly significant: for example, a new drug may be mentioned infrequently, but the novelty of the subject makes any reference significant. One method for allowing rarer terms to be included and common words to be excluded by automated techniques would be a relative frequency approach, where the frequency of a word in a document is compared to its frequency in the database as a whole. A term that occurs more often in a document than might be expected based on the rest of the database could then be used as an index term, while terms that occur equally frequently throughout would be excluded.
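The relative frequency approach just described might be sketched as follows; the corpus counts and the cutoff ratio are hypothetical.

```python
from collections import Counter

# Sketch of relative-frequency term selection: keep a term as an index
# term only if it is markedly more frequent in the document than in the
# collection as a whole.
def select_index_terms(doc_tokens, corpus_counts, corpus_size, ratio=2.0):
    doc_counts = Counter(doc_tokens)
    doc_size = len(doc_tokens)
    terms = []
    for term, count in doc_counts.items():
        doc_freq = count / doc_size
        corpus_freq = corpus_counts.get(term, 0) / corpus_size
        # Terms unseen in the corpus (e.g., a new drug name) always qualify.
        if corpus_freq == 0 or doc_freq / corpus_freq >= ratio:
            terms.append(term)
    return terms

if __name__ == "__main__":
    corpus = Counter({"the": 500, "and": 300, "glucose": 40, "insulin": 12})
    doc = ["glucose", "monitoring", "and", "insulin", "the", "the"]
    print(select_index_terms(doc, corpus, corpus_size=1000))
```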
Another problem with automated extraction is that it does not recognize when a concept is discussed but is not identified in the text by an indexable keyword.[5]
Since this process is based on simple string matching and involves no intellectual analysis, the resulting product is more appropriately known as a concordance than an index.
An alternative is assignment indexing, where index terms are taken from a controlled vocabulary. This has the advantage of controlling for synonyms, as the preferred term is indexed and synonyms or related terms direct the user to the preferred term. This means the user can find articles regardless of the specific term used by the author and saves the user from having to know and check all possible synonyms.[6] It also removes any confusion caused by homographs by inclusion of a qualifying term. A third advantage is that it allows the linking of related terms, whether they are linked by hierarchy or association; e.g. an index entry for an oral medication may list other oral medications as related terms on the same level of the hierarchy but would also link to broader terms such as treatment. Assignment indexing is used in manual indexing to improve inter-indexer consistency, as different indexers will have a controlled set of terms to choose from. Controlled vocabularies do not completely remove inconsistencies, as two indexers may still interpret the subject differently.[2]
The final phase of indexing is to present the entries in a systematic order. This may involve linking entries. In a pre-coordinated index the indexer determines the order in which terms are linked in an entry by considering how a user may formulate their search. In a post-coordinated index, the entries are presented singly and the user can link the entries through searches, most commonly carried out by computer software. Post-coordination results in a loss of precision in comparison to pre-coordination.[7]
Indexers must make decisions about what entries should be included and how many entries an index should incorporate. The depth of indexing describes the thoroughness of the indexing process with reference to exhaustivity and specificity.[8]
An exhaustive index is one which lists all possible index terms. Greater exhaustivity gives higher recall, or more likelihood of all the relevant articles being retrieved; however, this occurs at the expense of precision. This means that the user may retrieve a larger number of irrelevant documents or documents which only deal with the subject in little depth. In a manual system, a greater level of exhaustivity brings with it a greater cost, as more man-hours are required. The additional time taken in an automated system would be much less significant. At the other end of the scale, in a selective index only the most important aspects are covered.[9] Recall is reduced in a selective index, as an indexer who does not include enough terms may overlook a highly relevant article. Therefore, indexers should strive for a balance and consider what the document may be used for. They may also have to consider the implications of time and expense.
The specificity describes how closely the index terms match the topics they represent.[10] An index is said to be specific if the indexer uses descriptors parallel to the concepts of the document and reflects the concepts precisely.[11] Specificity tends to increase with exhaustivity, as the more terms you include, the narrower those terms will be.
Hjørland (2011)[12] found that theories of indexing are at the deepest level connected to different theories of knowledge:
The core of indexing is, as stated by Rowley and Farrow,[16] to evaluate a paper's contribution to knowledge and index it accordingly; or, in the words of Hjørland (1992,[17] 1997), to index its informative potentials.
"In order to achieve good consistent indexing, the indexer must have a thorough appreciation of the structure of the subject and the nature of the contribution that the document is making to the advancement of knowledge" (Rowley & Farrow, 2000,[16]p. 99).
|
https://en.wikipedia.org/wiki/Subject_indexing
|
Temporal information retrieval (T-IR) is an emerging area of research related to the field of information retrieval (IR) and a considerable number of its sub-areas, positioning itself as an important dimension in the context of users' information needs.
According to information theory (Metzger, 2007),[1] timeliness or currency is one of the five key aspects that determine a document's credibility, besides relevance, accuracy, objectivity and coverage. One can provide many examples of when the returned search results are of little value due to temporal problems, such as obsolete data on weather, outdated information about a given company's earnings, or information about predictions that have already come to pass or are no longer valid.
T-IR, in general, aims at satisfying these temporal needs and at combining traditional notions of document relevance with so-called temporal relevance. This enables the return of temporally relevant documents, providing a temporal overview of the results in the form of timelines or similar structures. It also proves very useful for query understanding, query disambiguation, query classification, result diversification and so on.
This article contains a list of the most important research in temporal information retrieval (T-IR) and its related sub-areas. As several of the referenced works relate to different research areas, a single article can be found in more than one table. For ease of reading, the articles are categorized into a number of different sub-areas according to their main scope.
|
https://en.wikipedia.org/wiki/Temporal_information_retrieval
|
In information retrieval, tf–idf (also TF*IDF, TFIDF, TF–IDF, or Tf–idf), short for term frequency–inverse document frequency, is a measure of importance of a word to a document in a collection or corpus, adjusted for the fact that some words appear more frequently in general.[1] Like the bag-of-words model, it models a document as a multiset of words, without word order. It is a refinement over the simple bag-of-words model, by allowing the weight of words to depend on the rest of the corpus.
It was often used as a weighting factor in searches of information retrieval, text mining, and user modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.[2] Variations of the tf–idf weighting scheme were often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.
One of the simplest ranking functions is computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model.
Karen Spärck Jones (1972) conceived a statistical interpretation of term-specificity called inverse document frequency (idf), which became a cornerstone of term weighting:[3]
The specificity of a term can be quantified as an inverse function of the number of documents in which it occurs.
For example, the df (document frequency) and idf for some words in Shakespeare's 37 plays are as follows:[4]
We see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing these words, one could get a good idea as to which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.
Term frequency, tf(t, d), is the relative frequency of term t within document d,

$$\mathrm{tf}(t,d) = \frac{f_{t,d}}{\sum_{t' \in d} f_{t',d}},$$

where f_{t,d} is the raw count of a term in a document, i.e., the number of times that term t occurs in document d. Note the denominator is simply the total number of terms in document d (counting each occurrence of the same term separately). There are various other ways to define term frequency:[5]: 128
The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient):

$$\mathrm{idf}(t,D) = \log \frac{N}{|\{d \in D : t \in d\}|}$$

with N = |D|, the total number of documents in the corpus, and |{d ∈ D : t ∈ d}| the number of documents in which the term t appears.

Then tf–idf is calculated as

$$\mathrm{tfidf}(t,d,D) = \mathrm{tf}(t,d) \cdot \mathrm{idf}(t,D).$$
A high weight in tf–idf is reached by a high termfrequency(in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0.
Idf was introduced as "term specificity" by Karen Spärck Jones in a 1972 paper. Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information theoretic justifications for it.[7]
Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law.[7] Attempts have been made to put idf on a probabilistic footing,[8] by estimating the probability that a given document d contains a term t as the relative document frequency,

$$P(t \mid D) = \frac{|\{d \in D : t \in d\}|}{N},$$

so that we can define idf as

$$\mathrm{idf}(t) = -\log P(t \mid D) = \log \frac{N}{|\{d \in D : t \in d\}|}.$$

Namely, the inverse document frequency is the logarithm of the "inverse" relative document frequency.
This probabilistic interpretation in turn takes the same form as that ofself-information. However, applying such information-theoretic notions to problems in information retrieval leads to problems when trying to define the appropriateevent spacesfor the requiredprobability distributions: not only documents need to be taken into account, but also queries and terms.[7]
Both term frequency and inverse document frequency can be formulated in terms of information theory; this helps to understand why their product has a meaning in terms of the joint informational content of a document. A characteristic assumption about the distribution p(d,t) is that a term is equally likely to occur in any of the documents containing it:

{\displaystyle p(d|t)={\begin{cases}{\frac {1}{|\{d'\in D:t\in d'\}|}}&{\text{if }}t\in d\\0&{\text{otherwise}}\end{cases}}}
This assumption and its implications, according to Aizawa: "represent the heuristic that tf–idf employs."[9]
The conditional entropy of a "randomly chosen" document in the corpus D, conditioned on the fact that it contains a specific term t (and assuming that all documents have equal probability of being chosen), is:

{\displaystyle H({\cal {D}}|{\cal {T}}=t)=\log |\{d\in D:t\in d\}|=\log N-\mathrm {idf} (t)}
In terms of notation, {\cal {D}} and {\cal {T}} are "random variables" corresponding respectively to drawing a document or a term. The mutual information can be expressed as

{\displaystyle M({\cal {T}};{\cal {D}})=H({\cal {D}})-H({\cal {D}}|{\cal {T}})=\sum _{t}p_{t}\cdot (H({\cal {D}})-H({\cal {D}}|{\cal {T}}=t))=\sum _{t}p_{t}\cdot \mathrm {idf} (t)}
The last step is to expand p_t, the unconditional probability of drawing a term, with respect to the (random) choice of a document, to obtain (identifying p(t|d) with tf(t,d) and taking p(d) = 1/N):

{\displaystyle M({\cal {T}};{\cal {D}})=\sum _{t,d}p(t|d)\cdot p(d)\cdot \mathrm {idf} (t)={\frac {1}{N}}\sum _{t,d}\mathrm {tf} (t,d)\cdot \mathrm {idf} (t)}
This expression shows that summing the tf–idf of all possible terms and documents recovers the mutual information between documents and terms, taking into account all the specificities of their joint distribution.[9] Each tf–idf hence carries the "bit of information" attached to a term–document pair.
Suppose that we have term count tables of a corpus consisting of only two documents, as listed below.

Document 1: this (1), is (1), a (2), sample (1)
Document 2: this (1), is (1), another (2), example (3)
The calculation of tf–idf for the term "this" is performed as follows:
In its raw frequency form, tf is just the count of "this" in each document. In each document, the word "this" appears once; but as document 2 has more words, its relative frequency is smaller: tf("this", d1) = 1/5 = 0.2, while tf("this", d2) = 1/7 ≈ 0.14.
An idf is constant per corpus and accounts for the ratio of documents that include the word "this". In this case, we have a corpus of two documents and all of them include the word "this", so idf("this") = log(2/2) = 0.
So tf–idf is zero for the word "this", which implies that the word is not very informative as it appears in all documents.
The word "example" is more interesting - it occurs three times, but only in the second document:
Finally,

tfidf("example", d2) = tf("example", d2) × idf("example") ≈ 0.429 × 0.301 ≈ 0.129
(using the base-10 logarithm).
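As a check on the arithmetic, the following sketch recomputes the two weights; the raw counts are those of the two documents assumed above.

import math

d1 = {"this": 1, "is": 1, "a": 2, "sample": 1}          # 5 words in total
d2 = {"this": 1, "is": 1, "another": 2, "example": 3}   # 7 words in total
docs = [d1, d2]

def tf(t, d):
    return d.get(t, 0) / sum(d.values())

def idf(t, docs):
    return math.log10(len(docs) / sum(1 for d in docs if t in d))

print(tf("this", d1) * idf("this", docs))        # 0.2 * log10(2/2) = 0.0
print(tf("example", d2) * idf("example", docs))  # (3/7) * log10(2/1) = 0.129...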
The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.[10] The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents". In addition, tf–idf was applied to "visual words" for the purpose of object matching in videos,[11] and to entire sentences.[12] However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.[13]
A number of term-weighting schemes have been derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).[14] TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference of how often a term occurs in different domains. Another derivative is TF–IDuF.[15] In TF–IDuF, idf is not calculated based on the document corpus that is to be searched or recommended; instead, idf is calculated on users' personal document collections. The authors report that TF–IDuF was equally effective as tf–idf but could also be applied in situations when, e.g., a user modeling system has no access to a global document corpus.
|
https://en.wikipedia.org/wiki/Tf%E2%80%93idf
|
XML retrieval, or XML information retrieval, is the content-based retrieval of documents structured with XML (eXtensible Markup Language). As such it is used for computing the relevance of XML documents.[1]
Most XML retrieval approaches do so based on techniques from the information retrieval (IR) area, e.g. by computing the similarity between a query consisting of keywords (query terms) and the document. However, in XML retrieval the query can also contain structural hints. So-called "content and structure" (CAS) queries enable users to specify what structure the requested content can or must have.
Taking advantage of the self-describing structure of XML documents can improve the search for XML documents significantly. This includes the use of CAS queries, assigning different weights to different XML elements, and the focused retrieval of subdocuments.
Ranking in XML retrieval can incorporate both content relevance and structural similarity, which is the resemblance between the structure given in the query and the structure of the document. Also, the retrieval units resulting from an XML query may not always be entire documents, but can be any deeply nested XML elements, i.e. dynamic documents. The aim is to find the smallest retrieval unit that is highly relevant. Relevance can be defined according to the notion of specificity, which is the extent to which a retrieval unit focuses on the topic of request.[2]
An overview of two potential approaches is available.[3][4] The INitiative for the Evaluation of XML retrieval (INEX), founded in 2002, provides a platform for evaluating such algorithms.[2] Three different areas influence XML retrieval:[5]
Query languages such as the W3C standard XQuery[6] supply complex queries, but only look for exact matches. Therefore, they need to be extended to allow for vague search with relevance computing. Most XML-centered approaches imply a quite exact knowledge of the documents' schemas.[7]
Classic database systems have adopted the ability to store semi-structured data,[5] which resulted in the development of XML databases. Often, they are very formal, concentrate more on searching than on ranking, and are used by experienced users able to formulate complex queries.
Classic information retrieval models such as the vector space model provide relevance ranking, but do not include document structure; only flat queries are supported. Also, they apply a static document concept, so retrieval units usually are entire documents.[7] They can be extended to consider structural information and dynamic document retrieval. Examples of approaches extending the vector space model are available: they use document subtrees (index terms plus structure) as dimensions of the vector space.[8]
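As an illustration of the subtree idea, the following sketch treats (element path, term) pairs as the dimensions of the vector space; the documents, the CAS-style query fragment, and the overlap scoring are illustrative assumptions, not the method of any particular system.

import xml.etree.ElementTree as ET
from collections import Counter

def structural_terms(xml_text):
    # Map an XML document to a bag of (element path, term) features.
    features = Counter()
    def walk(elem, path):
        path = path + "/" + elem.tag
        for token in (elem.text or "").split():
            features[(path, token.lower())] += 1
        for child in elem:
            walk(child, path)
    walk(ET.fromstring(xml_text), "")
    return features

doc = "<article><title>XML retrieval</title><body>content based retrieval</body></article>"
query = "<article><title>retrieval</title></article>"  # keyword plus structural hint

d_vec, q_vec = structural_terms(doc), structural_terms(query)
# Score by overlap between query and document feature vectors.
print(sum(d_vec[f] * q_vec[f] for f in q_vec))  # matches only within <title>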
For data-centric XML datasets, a keyword search method named XDMA[9] has been designed and developed for XML databases, based on dual indexing and mutual summation.
|
https://en.wikipedia.org/wiki/XML_retrieval
|
Data mining is the process of extracting and finding patterns in massive data sets involving methods at the intersection of machine learning, statistics, and database systems.[1] Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4] Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5] Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[1]
The term "data mining" is amisnomerbecause the goal is the extraction ofpatternsand knowledge from large amounts of data, not theextraction (mining) of data itself.[6]It also is abuzzword[7]and is frequently applied to any form of large-scale data orinformation processing(collection,extraction,warehousing, analysis, and statistics) as well as any application ofcomputer decision support systems, includingartificial intelligence(e.g., machine learning) andbusiness intelligence. Often the more general terms (large scale)data analysisandanalytics—or, when referring to actual methods,artificial intelligenceandmachine learning—are more appropriate.
The actual data mining task is the semi-automatic or automatic analysis of massive quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps.
The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.[8]
The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.
In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983.[9][10] Lovell indicates that the practice "masquerades under a variety of aliases, ranging from 'experimentation' (positive) to 'fishing' or 'snooping' (negative)".
The term data mining appeared around 1990 in the database community, with generally positive connotations. For a short time in the 1980s, the phrase "database mining"™ was used, but since it was trademarked by HNC, a San Diego–based company, to pitch their Database Mining Workstation,[11] researchers consequently turned to data mining. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989), and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities.[12] Currently, the terms data mining and knowledge discovery are used interchangeably.
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s).[13] The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns[14] in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.
The knowledge discovery in databases (KDD) process is commonly defined with the stages (1) selection, (2) pre-processing, (3) transformation, (4) data mining, and (5) interpretation/evaluation.
Many variations on this theme exist, however, such as the Cross-industry standard process for data mining (CRISP-DM), which defines six phases: (1) business understanding, (2) data understanding, (3) data preparation, (4) modeling, (5) evaluation, and (6) deployment;
or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation.
Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners.[15][16][17][18]
The only other data mining standard named in these polls was SEMMA. However, 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models,[19] and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.[20]
Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.
Data mining involves six common classes of tasks:[5] anomaly detection, association rule learning, clustering, classification, regression, and summarization.
Data mining can unintentionally be misused, producing results that appear to be significant but which do not actually predict future behavior and cannot be reproduced on a new sample of data, and are therefore of little use. This is sometimes caused by investigating too many hypotheses and not performing proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting, but the same problem can arise at different phases of the process and thus a train/test split—when applicable at all—may not be sufficient to prevent this from happening.[21]
The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" e-mails would be trained on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not been trained. The accuracy of the patterns can then be measured from how many e-mails they correctly classify. Several statistical methods may be used to evaluate the algorithm, such as ROC curves.
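The following is a minimal sketch of such a held-out evaluation, assuming scikit-learn and a toy spam data set; both the data and the model choice are illustrative.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

emails = ["win money now", "meeting at noon", "cheap money offer",
          "lunch tomorrow?", "win a cheap offer", "project meeting notes"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = legitimate

X = CountVectorizer().fit_transform(emails)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.33, random_state=0)

model = MultinomialNB().fit(X_train, y_train)
# Accuracy is measured on e-mails the model was *not* trained on, guarding
# against accepting patterns that exist only in the training set.
print(model.score(X_test, y_test))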
If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, then the final step is to interpret the learned patterns and turn them into knowledge.
The premier professional body in the field is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD).[22][23] Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings,[24] and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations".[25]
Computer science conferences on data mining include:
Data mining topics are also present in many data management/database conferences such as the ICDE Conference, SIGMOD Conference and International Conference on Very Large Data Bases.
There have been some efforts to define standards for the data mining process, for example, the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). Development on successors to these processes (CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM 2.0 was withdrawn without reaching a final draft.
For exchanging the extracted models—in particular for use in predictive analytics—the key standard is the Predictive Model Markup Language (PMML), which is an XML-based language developed by the Data Mining Group (DMG) and supported as an exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications. However, extensions to cover (for example) subspace clustering have been proposed independently of the DMG.[26]
Data mining is used wherever there is digital data available. Notable examples of data mining can be found throughout business, medicine, science, finance, construction, and surveillance.
While the term "data mining" itself may have no ethical implications, it is often associated with the mining of information in relation touser behavior(ethical and otherwise).[27]
The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics.[28] In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns.[29][30]
Data mining requires data preparation which can uncover information or patterns which may compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent).[31] This is not data mining per se, but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous.[32]
It is recommended to be aware of the following before data are collected:[31]
Data may also be modified so as to become anonymous, so that individuals may not readily be identified.[31] However, even "anonymized" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL.[33]
The inadvertent revelation of personally identifiable information leading to the provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies, who in turn provided the data to pharmaceutical companies.[34]
Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of the consumers. However, the U.S.–E.U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U.S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion to revoke this agreement, as in particular the data will be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed.[35]
In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers, forcing them to pay unfairly high prices. These groups tend to be people of lower socio-economic status who are not savvy to the ways they can be exploited in digital marketplaces.[36]
In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires individuals to give their "informed consent" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, "'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approach[ing] a level of incomprehensibility to average individuals."[37] This underscores the necessity for data anonymity in data aggregation and mining practices.
U.S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act (FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U.S. is not controlled by any legislation.
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is not legal. Where a database is pure data in Europe, it may be that there is no copyright—but database rights may exist, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. On the recommendation of the Hargreaves review, the UK government amended its copyright law in 2014 to allow content mining as a limitation and exception.[38] The UK was the second country in the world to do so, after Japan, which introduced an exception in 2009 for data mining. However, due to the restrictions of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law also does not allow this provision to be overridden by contractual terms and conditions.
Since 2020, Switzerland has also regulated data mining, allowing it in the research field under certain conditions laid down by art. 24d of the Swiss Copyright Act. This new article entered into force on 1 April 2020.[39]
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licences for Europe.[40] The focus on licensing, rather than on limitations and exceptions, as the solution to this legal issue led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.[41]
US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America, as well as in other fair use countries such as Israel, Taiwan and South Korea. As content mining is transformative, that is, it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one being text and data mining.[42]
The following applications are available under free/open-source licenses. Public access to application source code is also available.
The following applications are available under proprietary licenses.
For more information about extracting information out of data (as opposed to analyzing data), see:
|
https://en.wikipedia.org/wiki/Web_mining
|
A champion list, also called top doc or fancy list, is a precomputed list sometimes used with the vector space model to avoid computing relevancy rankings for all documents each time a document collection is queried. The champion list contains a set of n documents with the highest weights for the given term. The number n can be chosen to be different for each term and is often higher for rarer terms. The weights can be calculated by, for example, tf–idf. There are two types of champion lists: champion lists and global champion lists.
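A minimal sketch of precomputing champion lists, assuming per-term document weights (e.g. tf–idf scores) are already available; the names and data are illustrative.

import heapq

def build_champion_lists(weights, n=2):
    # weights: {term: {doc_id: weight}} -> {term: [n doc_ids with highest weight]}
    return {term: [d for d, _ in heapq.nlargest(n, docs.items(), key=lambda kv: kv[1])]
            for term, docs in weights.items()}

weights = {"heart": {"d1": 0.9, "d2": 0.1, "d3": 0.5},
           "bypass": {"d2": 0.7, "d3": 0.6, "d4": 0.2}}
champions = build_champion_lists(weights, n=2)
print(champions["heart"])  # ['d1', 'd3']: only these are scored at query time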
|
https://en.wikipedia.org/wiki/Champion_list
|
Compound-term processing, in information retrieval, is search result matching on the basis of compound terms. Compound terms are built by combining two or more simple terms; for example, "triple" is a single-word term, but "triple heart bypass" is a compound term.
Compound-term processing is a new approach to an old problem: how can one improve the relevance of search results while maintaining ease of use? Using this technique, a search for survival rates following a triple heart bypass in elderly people will locate documents about this topic even if this precise phrase is not contained in any document. This can be performed by a concept search, which itself uses compound-term processing. This will extract the key concepts automatically (in this case "survival rates", "triple heart bypass" and "elderly people") and use these concepts to select the most relevant documents.
In August 2003, Concept Searching Limited introduced the idea of using statistical compound-term processing.[1]
CLAMOUR is a European collaborative project which aims to find a better way to classify industrial information and statistics when collecting and disseminating them. CLAMOUR appears to use a linguistic approach, rather than one based on statistical modelling.[2]
Techniques for probabilistic weighting of single-word terms date back to at least 1976 in the landmark publication by Stephen E. Robertson and Karen Spärck Jones.[3] Robertson stated that the assumption of word independence is not justified and exists as a matter of mathematical convenience. His objection to term independence is not a new idea, dating back to at least 1964 when H. H. Williams stated that "[t]he assumption of independence of words in a document is usually made as a matter of mathematical convenience".[4]
In 2004, Anna Lynn Patterson filed patents on "phrase-based searching in an information retrieval system",[5] to which Google subsequently acquired the rights.[6]
Statistical compound-term processing is more adaptable than the process described by Patterson. Her process is targeted at searching the World Wide Web, where an extensive statistical knowledge of common searches can be used to identify candidate phrases. Statistical compound-term processing is more suited to enterprise search applications where such a priori knowledge is not available.
Statistical compound-term processing is also more adaptable than the linguistic approach taken by the CLAMOUR project, which must consider the syntactic properties of the terms (i.e. part of speech, gender, number, etc.) and their combinations. CLAMOUR is highly language-dependent, whereas the statistical approach is language-independent.
Compound-term processing allows information-retrieval applications, such as search engines, to perform their matching on the basis of multi-word concepts, rather than on single words in isolation, which can be highly ambiguous.
Early search engines looked for documents containing the words entered by the user into the search box. These are known as keyword search engines. Boolean search engines add a degree of sophistication by allowing the user to specify additional requirements. For example, "Tiger NEAR Woods AND (golf OR golfing) NOT Volkswagen" uses the operators "NEAR", "AND", "OR" and "NOT" to specify that these words must follow certain requirements. A phrase search is simpler to use, but requires that the exact phrase specified appear in the results.
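The following sketch shows the statistical flavor of the idea: surfacing candidate compound terms from adjacent-word co-occurrence counts. The corpus and the threshold are illustrative assumptions, not the actual method of Concept Searching or CLAMOUR.

from collections import Counter

corpus = ["triple heart bypass survival rates",
          "survival rates in elderly people",
          "triple heart bypass in elderly people"]

bigrams = Counter()
for doc in corpus:
    tokens = doc.split()
    bigrams.update(zip(tokens, tokens[1:]))

# Treat a word pair as a candidate compound term if it recurs in the corpus.
candidates = [pair for pair, count in bigrams.items() if count > 1]
print(candidates)  # includes ('heart', 'bypass') and ('survival', 'rates')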
|
https://en.wikipedia.org/wiki/Compound_term_processing
|
A conceptual space is a geometric structure that represents a number of quality dimensions, which denote basic features by which concepts and objects can be compared, such as weight, color, taste, temperature, pitch, and the three ordinary spatial dimensions.[1][2]: 4 In a conceptual space, points denote objects, and regions denote concepts. The theory of conceptual spaces is a theory about concept learning first proposed by Peter Gärdenfors.[3][4][5] It is motivated by notions such as conceptual similarity and prototype theory.
The theory also puts forward the notion that natural categories are convex regions in conceptual spaces:[1]: 5 if x and y are elements of a category, and if z is between x and y, then z is also likely to belong to the category. The notion of concept convexity allows the interpretation of the focal points of regions as category prototypes. In the more general formulations of the theory, concepts are defined in terms of conceptual similarity to their prototypes. Conceptual spaces have found applications in both cognitive modelling and artificial intelligence.[1][6]
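A minimal sketch of the convexity criterion, modeling a category as a ball around a prototype in a two-dimensional quality space; the metric and the region shape are illustrative assumptions.

import math

def between(x, y, t):
    # Point at fraction t along the segment from x to y.
    return tuple(xi + t * (yi - xi) for xi, yi in zip(x, y))

def in_category(p, prototype, radius):
    return math.dist(p, prototype) <= radius

prototype, radius = (0.0, 0.0), 1.0   # a disc: a convex region
x, y = (0.9, 0.0), (0.0, 0.9)         # both inside the category
z = between(x, y, 0.5)
print(in_category(z, prototype, radius))  # True; balls are convex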
|
https://en.wikipedia.org/wiki/Conceptual_space
|
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center.[1]
This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines – e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, etc. Sparse distributed memory is used for storing and retrieving large amounts (on the order of 2^1000 bits) of information, focusing not on the accuracy of the information but on its similarity.[2] There are some recent applications in robot navigation[3] and experience-based robot manipulation.[4]
It is a generalized random-access memory (RAM) for long (e.g., 1,000-bit) binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity. This means that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits (i.e., the Hamming distance between memory addresses).[1]
SDM implements transformation from logical space to physical space using distributed data representation and storage, similarly to encoding processes in human memory.[5] A value corresponding to a logical address is stored into many physical addresses. This way of storing is robust and not deterministic. A memory cell is not addressed directly. Even if input data (logical addresses) are partially damaged, we can still get correct output data.[6]
The theory of the memory is mathematically complete[1] and has been verified by computer simulation. It arose from the observation that the distances between points of a high-dimensional space resemble the proximity relations between concepts in human memory. The theory is also practical in that memories based on it can be implemented with conventional random-access memory elements.[7]
Human memory has a tendency to congregate memories based on similarities between them (although they may not be related), such as "firetrucks are red and apples are red".[8] Sparse distributed memory is a mathematical representation of human memory, and uses high-dimensional space to help model the large amounts of memory that mimics that of the human neural network.[9][10] An important property of such high-dimensional spaces is that two randomly chosen vectors are relatively far away from each other, meaning that they are uncorrelated.[11] SDM can be considered a realization of locality-sensitive hashing.
The underlying idea behind an SDM is the mapping of a huge binary memory onto a smaller set of physical locations, so-called hard locations. As a general guideline, those hard locations should be uniformly distributed in the virtual space, to mimic the existence of the larger virtual space as accurately as possible. Every datum is stored distributed over a set of hard locations, and retrieved by averaging those locations. Therefore, recall may not be perfect, with accuracy depending on the saturation of the memory.
Kanerva's proposal is based on four basic ideas:[12]
The SDM works with n-dimensional vectors with binary components. Depending on the context, the vectors are called points, patterns, addresses, words, memory items, data, or events. This section is mostly about the properties of the vector space N = {0,1}^n. Let n be the number of dimensions of the space. The number of points, or possible memory items, is then 2^n. We will denote this number by N and will use N and 2^n to stand also for the space itself.[6]
Concepts related to the space N = {0,1}^n:[6]
Properties of the space N = {0,1}^n:[1][6]
The space N can be represented by the vertices of the unit cube in n-dimensional Euclidean space. The vertices lie on the surface of an n-dimensional sphere with (Euclidean-metric) radius √n/2. This gives rise to the sphere analogy. We will call a space spherical if
The surface of a sphere (in Euclidean 3d-space) clearly is spherical. By this definition, N is also spherical, since y ⊕ x ⊕ (…) is an automorphism that maps x to y.
Because N is spherical, it is helpful to think of it as the surface of a sphere with circumference 2n. All points of N are equally qualified as points of origin, and a point and its complement are like two poles at distance n from each other, with the entire space in between. The points halfway between the poles and perpendicular to them are like the equator.
The number of points that are exactly d bits from an arbitrary point x (say, from the point 0) is the number of ways to choose d coordinates from a total of n coordinates, and is therefore given by the binomial coefficient:

{\displaystyle {\binom {n}{d}}={\frac {n!}{d!(n-d)!}}}
The distribution of distances in N is thus the binomial distribution with parameters n and p, where p = 1/2. The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d).
The normal distribution F with mean n/2 and standard deviation √n/2 is a good approximation to it:
{\displaystyle N(d)=\Pr\{d(x,y)\leq d\}\cong F\left({\frac {d-n/2}{\sqrt {n/4}}}\right)}
An outstanding property of N is that most of it lies at approximately the mean (indifference) distance n/2 from a point (and its complement). In other words, most of the space is nearly orthogonal to any given point, and the larger n is, the more pronounced is this effect.
The SDM may be regarded either as a content-addressable extension of a classical random-access memory (RAM) or as a special type of three-layer feedforward neural network. The main SDM alterations to the RAM are:[13]
An idealized description of a neuron is as follows: a neuron has a cell body with two kinds of branches: dendrites and an axon. It receives input signals from other neurons via the dendrites, integrates (sums) them, and generates its own (electric) output signal, which is sent to outside neurons via the axon. The points of electric contact between neurons are called synapses.
When a neuron generates a signal it is firing, and after firing it must recover before it can fire again.
The relative importance of a synapse to the firing of a neuron is called its synaptic weight (or input coefficient). There are two kinds of synapses: excitatory, which trigger the neuron to fire, and inhibitory, which hinder firing. A neuron is either excitatory or inhibitory according to the kinds of synapses its axon makes.[14]
A neuron fires when the sum of its inputs exceeds a specific threshold. The higher the threshold, the more important it is that excitatory synapses have input while inhibitory ones do not.[15] Whether a recovered neuron actually fires depends on whether it received sufficient excitatory input (beyond the threshold) and not too much inhibitory input within a certain period.
The formal model of the neuron makes further simplifying assumptions.[16] An n-input neuron is modeled by a linear threshold function F: {0,1}^n → {0,1} as follows:
For i = 0, ..., n−1, where n is the number of inputs, let F_t ∈ {0,1} be the output at time t, and let x_{i,t} ∈ {0,1} be the i-th input at time t. Let w_i be the weight of the i-th input and let c be the threshold.
The weighted sum of the inputs at time t is defined by

{\displaystyle S_{t}=\sum _{i=0}^{n-1}w_{i}x_{i,t}}
The neuron output at time t is then defined as a boolean function:

{\displaystyle \mathbf {F} _{t}={\begin{cases}1&{\text{if }}S_{t}\geq c,\\0&{\text{otherwise}}.\end{cases}}}
where F_t = 1 means that the neuron fires at time t and F_t = 0 that it does not, i.e. in order for the neuron to fire, the weighted sum must reach or exceed the threshold.
Excitatory inputs increase the sum and inhibitory inputs decrease it.
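The linear threshold function above is straightforward to state in code; the weights, inputs, and threshold below are illustrative.

def neuron_output(weights, inputs, threshold):
    # Fire (return 1) iff the weighted input sum reaches the threshold.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

weights = [2, -1, 3]  # positive = excitatory, negative = inhibitory
print(neuron_output(weights, [1, 0, 1], threshold=4))  # 5 >= 4, fires: 1
print(neuron_output(weights, [1, 1, 1], threshold=5))  # 4 < 5, silent: 0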
Kanerva's key thesis[1] is that certain neurons could have their input coefficients and thresholds fixed over the entire life of an organism and used as address decoders, where the n-tuple of input coefficients (the pattern to which neurons respond most readily) determines the n-bit memory address, and the threshold controls the size of the region of similar address patterns to which the neuron responds.
This mechanism is complementary to adjustable synapses or adjustable weights in a neural network (perceptron convergence learning), as this fixed accessing mechanism would be a permanent frame of reference which allows one to select the synapses in which the information is stored and from which it is retrieved under a given set of circumstances. Furthermore, an encoding of the present circumstance would serve as an address.
The address a of a neuron with input coefficients w_0, ..., w_{n−1} is defined as an n-bit input pattern that maximizes the weighted sum. The maximum occurs when the inhibitory inputs are zeros and the excitatory inputs are ones.
The i-th bit of the address is:
{\displaystyle \mathbf {a} _{i}={\begin{cases}1&{\text{if }}w_{i}>0,\\0&{\text{if }}w_{i}<0.\end{cases}}} (assuming weights are non-zero)
The maximum weighted sum S(w) is then the sum of all positive coefficients:

{\displaystyle S(w)=\sum _{w_{i}>0}w_{i}}
And the minimum weighted sum s(w) would correspond to a point opposite the neuron address a, namely its complement:

{\displaystyle s(w)=\sum _{w_{i}<0}w_{i}}
When the threshold c is in the range s(w) < c < S(w), the output of the neuron is 0 for some addresses (input patterns) and 1 for others. If the threshold is above S(w) the output is always 0; if it is below s(w) the output is always 1. So by a proper choice of the threshold, a neuron responds to only one address. When the threshold is S(w) (the maximum for the weighted sum) the neuron responds only to its own address and acts like an address decoder of a conventional random-access memory.
SDM is designed to cope with address patterns that span an enormous address space (on the order of 2^1000). SDM assumes that the address patterns actually describing physical situations of interest are sparsely scattered throughout the input space. It is impossible to reserve a separate physical location corresponding to each possible input; SDM implements only a limited number of physical or hard locations. The physical location is called a memory (or hard) location.[7]
Every hard location has two items associated with it: a fixed hard address, which is the N-bit address of the location; and a contents portion, M counters wide, that accumulates the M-bit data patterns written into the location.
In SDM a word could be stored in memory by writing it in a free storage location and at the same time providing the location with the appropriate address decoder. A neuron as an address decoder would select a location based on the similarity of the location's address to the retrieval cue. Unlike conventional Turing machines, SDM takes advantage of parallel computing by the address decoders. Mere accessing of the memory is regarded as computing, the amount of which increases with memory size.[1]
The address pattern: an N-bit vector used in writing to and reading from the memory. The address pattern is a coded description of an environmental state (e.g. N = 256).
The data pattern: an M-bit vector that is the object of the writing and reading operations. Like the address pattern, it is a coded description of an environmental state (e.g. M = 256).
Writing is the operation of storing a data pattern into the memory using a particular address pattern. During a write, the input to the memory consists of an address pattern and a data pattern. The address pattern is used to select hard memory locations whose hard addresses are within a certain cutoff distance from the address pattern. The data pattern is stored into each of the selected locations.
Reading is the operation of retrieving a data pattern from the memory using a particular address pattern. During a read, an address pattern is used to select a certain number of hard memory locations (just like during a write). The contents of the selected locations are bitwise summed and thresholded to derive an M-bit data pattern. This serves as the output read from the memory.
All of the items are linked in a single list (or array) of pointers to memory locations, and are stored in RAM. Each address in an array points to an individual line in the memory. That line is then returned if it is similar to other lines. Neurons are utilized as address decoders and encoders, similar to the way neurons work in the brain, and return items from the array that match or are similar.
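A compact sketch of these write and read operations: hard locations get random binary addresses, a write updates the counters of every location within a Hamming-distance radius of the address, and a read sums and thresholds those counters. The dimensions and radius are toy assumptions, not Kanerva's recommended parameters.

import numpy as np

rng = np.random.default_rng(0)
N_BITS, N_HARD, RADIUS = 64, 500, 24

hard_addresses = rng.integers(0, 2, size=(N_HARD, N_BITS))
counters = np.zeros((N_HARD, N_BITS), dtype=int)

def activated(address):
    # Hard locations within RADIUS Hamming bits of the given address.
    return np.sum(hard_addresses != address, axis=1) <= RADIUS

def write(address, data):
    # Increment counters for 1-bits of the data, decrement for 0-bits.
    counters[activated(address)] += 2 * data - 1

def read(address):
    sums = counters[activated(address)].sum(axis=0)
    return (sums > 0).astype(int)

word = rng.integers(0, 2, size=N_BITS)
write(word, word)                    # address and data are the same word
noisy = word.copy(); noisy[:5] ^= 1  # flip 5 bits of the retrieval cue
print(np.array_equal(read(noisy), word))  # very likely True: recovered despite noise

Because only one word has been written here, any location activated by both the write and the noisy read returns the stored word exactly; with more writes, counters of different words interfere and recall degrades as the memory saturates.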
Kanerva's model of memory has a concept of a critical point: prior to this point, a previously stored item can be easily retrieved; but beyond this point an item cannot be retrieved. Kanerva has methodically calculated this point for a particular set of (fixed) parameters.
The corresponding critical distance of a sparse distributed memory can be approximately evaluated by minimizing the following equation with the restrictions d ∈ N and d ≤ n; the proof can be found in [17][18]:
{\displaystyle {\tilde {f}}(d)=\left\{{\frac {1}{2}}\cdot \left[1-N\left(z<{\frac {w\cdot shared(d)}{\sqrt {\theta }}}\right)+N\left(z<{\frac {-w\cdot shared(d)}{\sqrt {\theta }}}\right)\right]-{\frac {d}{n}}\right\}^{2}}
Where:
An associative memory system using sparse, distributed representations can be reinterpreted as an importance sampler, a Monte Carlo method of approximating Bayesian inference.[19] The SDM can be considered a Monte Carlo approximation to a multidimensional conditional probability integral. The SDM will produce acceptable responses from a training set when this approximation is valid, that is, when the training set contains sufficient data to provide good estimates of the underlying joint probabilities and there are enough Monte Carlo samples to obtain an accurate estimate of the integral.[20]
Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such a task requires implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.
Theoretical work on SDM by Kanerva has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision,[21] audition,[22] touch,[23] and olfaction.[24] However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory was lacking until recently.
Some progress was made in 2014 by Gero Miesenböck's lab at the University of Oxford analyzing the Drosophila olfactory system.[25] In Drosophila, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. Lin et al.[26] demonstrated that sparseness is controlled by a negative feedback circuit between Kenyon cells and the GABAergic anterior paired lateral (APL) neuron. Systematic activation and blockade of each leg of this feedback circuit showed that Kenyon cells activate APL and APL inhibits Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories. A 2017 publication in Science[27] showed that the fly olfactory circuit implements an improved version of binary locality-sensitive hashing via sparse, random projections.
In applications of the memory, the words are patterns of features. Some features are produced by a sensory system, others control a motor system. There is a current pattern (of e.g. 1000 bits), which is the current contents of the system's focus. The sensors feed into the focus, the motors are driven from the focus, and the memory is accessed through the focus.
What goes on in the world – the system's "subjective" experience – is represented internally by a sequence of patterns in the focus. The memory stores this sequence and can recreate it later in the focus if addressed with a pattern similar to one encountered in the past. Thus, the memory learns to predict what is about to happen. Wide applications of the memory would be in systems that deal with real-world information in real time.
The applications include vision – detecting and identifying objects in a scene and anticipating subsequent scenes – robotics, signal detection and verification, and adaptive learning and control. On the theoretical side, the working of the memory may help us understand memory and learning in humans and animals.[7][28]
SDM can be applied to the problem of finding the best match to a test word in a dataset of stored words,[1][29] or, in other words, to the nearest neighbor search problem.
Consider a memory with N locations where N = 2^n. Let each location have the capacity for one n-bit word (e.g. N = 2^100 100-bit words), and let the address decoding be done by N address decoder neurons. Set the threshold of each neuron x to its maximum weighted sum |x| and use a common parameter d to adjust all thresholds when accessing the memory. The effective threshold of neuron x will then be |x| − d, which means that the location x is accessible every time the address x is within d bits of the address presented to memory (i.e. the address held by the address register).
With d = 0 we have a conventional random-access memory. Assume further that each location has a special location-occupied bit that can be accessed in the same way as the regular datum bits. Writing a word to a location sets this location-occupied bit. Assume that only occupied locations can be read.
To file the data in memory, start by setting d = n and issue a command to clear the location-occupied bit. This single operation marks all memory as unoccupied regardless of the values of the address register. Then set d = 0 and write each word y of the data set with y itself as the address. Notice that each write operation affects only one location: the location y. Filing time is thus proportional to the number of words in the dataset.
Finding the best match for a test word z involves placing z in the address register and finding the least distance d for which there is an occupied location. We can start the search by setting d = 0 and incrementing d successively until an occupied location is found. This method gives average search times that are proportional to the number of address bits, or slightly less than n/2,[1] because the nearest occupied location can be expected to be just under n/2 bits from z (with binary search on d this would be O(log n)).
With 100-bit words, 2^100 locations would be needed, i.e. an enormously large memory. However, if we construct the memory as we store the words of the dataset, we need only one location (and one address decoder) for each word of the data set. None of the unoccupied locations need to be present. This represents the aspect of sparseness in SDM.
SDM can be applied to transcribing speech, with the training consisting of "listening" to a large corpus of spoken language. Two hard problems with natural speech are how to detect word boundaries and how to adjust to different speakers. The memory should be able to handle both. First, it stores sequences of patterns as pointer chains. In training – in listening to speech – it will build a probabilistic structure with the highest incidence of branching at word boundaries. In transcribing speech, these branching points are detected and tend to break the stream into segments that correspond to words. Second, the memory's sensitivity to similarity is its mechanism for adjusting to different speakers – and to the variations in the voice of the same speaker.[7]
At the University of Memphis, Uma Ramamurthy, Sidney K. D'Mello, and Stan Franklin created a modified version of the sparse distributed memory system that represents "realizing forgetting." It uses a decay equation to better show interference in data. The sparse distributed memory system distributes each pattern into approximately one hundredth of the locations, so interference can have detrimental results.[30]
Two possible examples of decay from this modified sparse distributed memory are presented:
Exponential decay mechanism: {\displaystyle f(x)=1+e^{-ax}}
Negated-translated sigmoid decay mechanism: {\displaystyle f(x)=1-{\frac {1}{1+e^{-a(x-c)}}}}
In the exponential decay function, the value approaches zero more quickly as x increases; a is a constant (usually between 3 and 9) and c is a counter. For the negated-translated sigmoid function, the decay is similar to the exponential decay function when a is greater than 4.[30]
As the graph approaches 0, it represents how the memory is being forgotten using decay mechanisms.
Ashraf Anwar, Stan Franklin, and Dipankar Dasgupta at the University of Memphis proposed a model for SDM initialization using genetic algorithms and genetic programming (1999).
Genetic memory uses a genetic algorithm and sparse distributed memory as a pseudo artificial neural network. It has been considered for use in creating artificial life.[31]
SDM has been applied to statistical prediction, the task of associating extremely large perceptual state vectors with future events. In conditions of near- or over-capacity, where the associative memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor, and each data counter in an SDM can be viewed as an independent estimate of the conditional probability of a binary function f being equal to the activation set defined by the counter's memory location.[32]
SDMs provide a linear, local function approximation scheme, designed to work when a very large/high-dimensional input (address) space has to be mapped into a much smaller physical memory. In general, local architectures, SDMs included, can be subject to the curse of dimensionality, as some target functions may require, in the worst case, an exponential number of local units to be approximated accurately across the entire input space. However, it is widely believed that most decision-making systems need high accuracy only around low-dimensional manifolds of the state space, or important state "highways".[37] The work in Ratitch et al.[38] combined the SDM memory model with ideas from memory-based learning, which provides an approximator that can dynamically adapt its structure and resolution in order to locate regions of the state space that are "more interesting"[39] and allocate proportionally more memory resources to model them accurately.
Dana H. Ballard's lab[40] demonstrated a general-purpose object indexing technique for computer vision that combines the virtues of principal component analysis with the favorable matching properties of high-dimensional spaces to achieve high-precision recognition. The indexing algorithm uses an active vision system in conjunction with a modified form of SDM and provides a platform for learning the association between an object's appearance and its identity.
Many extensions and improvements to SDM have been proposed, e.g.:
|
https://en.wikipedia.org/wiki/Sparse_distributed_memory
|
In natural language processing a w-shingling is a set of unique shingles (therefore n-grams), each of which is composed of contiguous subsequences of tokens within a document, which can then be used to ascertain the similarity between documents. The symbol w denotes the number of tokens in each shingle selected, or solved for.
The document, "a rose is a rose is a rose" can therefore be maximallytokenizedas follows:
The set of all contiguous sequences of 4 tokens (thus n = 4, i.e. 4-grams) is

{ (a, rose, is, a), (rose, is, a, rose), (is, a, rose, is) }
For a given shingle size, the degree to which two documents A and B resemble each other can be expressed as the ratio of the magnitudes of their shinglings' intersection and union, or

{\displaystyle r(A,B)={\frac {|S(A)\cap S(B)|}{|S(A)\cup S(B)|}}}
where |A| is the size of set A. The resemblance is a number in the range [0,1], where 1 indicates that two documents are identical. This definition is identical to the Jaccard coefficient describing the similarity and diversity of sample sets.
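A minimal sketch of w-shingling and the resemblance measure, with w = 4 as in the example above:

def shingles(text, w=4):
    tokens = text.split()
    return {tuple(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def resemblance(a, b, w=4):
    A, B = shingles(a, w), shingles(b, w)
    return len(A & B) / len(A | B)

d = "a rose is a rose is a rose"
print(shingles(d))        # the three unique 4-shingles listed above
print(resemblance(d, d))  # 1.0: identical documents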
|
https://en.wikipedia.org/wiki/W-shingling
|
In statistics, the uncertainty coefficient, also called proficiency, entropy coefficient or Theil's U, is a measure of nominal association. It was first introduced by Henri Theil and is based on the concept of information entropy.
Suppose we have samples of two discrete random variables, X and Y. By constructing the joint distribution, P_{X,Y}(x,y), from which we can calculate the conditional distributions, P_{X|Y}(x|y) = P_{X,Y}(x,y)/P_Y(y) and P_{Y|X}(y|x) = P_{X,Y}(x,y)/P_X(x), and calculating the various entropies, we can determine the degree of association between the two variables.
The entropy of a single distribution is given as:[1]

{\displaystyle H(X)=-\sum _{x}P_{X}(x)\log P_{X}(x),}
while the conditional entropy is given as:[1]

{\displaystyle H(X|Y)=-\sum _{x,y}P_{X,Y}(x,y)\log P_{X|Y}(x|y).}
The uncertainty coefficient[2] or proficiency[3] is defined as:

{\displaystyle U(X|Y)={\frac {H(X)-H(X|Y)}{H(X)}}={\frac {I(X;Y)}{H(X)}},}
and tells us: given Y, what fraction of the bits of X can we predict? In this case we can think of X as containing the total information, and of Y as allowing one to predict part of such information.
The above expression makes clear that the uncertainty coefficient is a normalised mutual information I(X;Y). In particular, the uncertainty coefficient ranges in [0, 1], since I(X;Y) ≤ H(X) and both I(X;Y) and H(X) are positive or null.
Note that the value of U (but not H!) is independent of the base of the log since all logarithms are proportional.
The uncertainty coefficient is useful for measuring the validity of a statistical classification algorithm and has an advantage over simpler accuracy measures such as precision and recall in that it is not affected by the relative fractions of the different classes, i.e., P(x).[4] It also has the unique property that it won't penalize an algorithm for predicting the wrong classes, so long as it does so consistently (i.e., it simply rearranges the classes). This is useful in evaluating clustering algorithms, since cluster labels typically have no particular ordering.[3]
The uncertainty coefficient is not symmetric with respect to the roles of X and Y. The roles can be reversed and a symmetrical measure thus defined as a weighted average between the two:[2]

{\displaystyle U(X,Y)={\frac {H(X)U(X|Y)+H(Y)U(Y|X)}{H(X)+H(Y)}}=2\left[{\frac {H(X)+H(Y)-H(X,Y)}{H(X)+H(Y)}}\right].}
Although normally applied to discrete variables, the uncertainty coefficient can be extended to continuous variables[1] using density estimation.
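A minimal sketch computing Theil's U from a sample of (x, y) pairs; the data are illustrative.

import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def uncertainty_coefficient(pairs):
    # U(X|Y) = (H(X) - H(X|Y)) / H(X) = I(X;Y) / H(X)
    h_x = entropy(Counter(x for x, _ in pairs))
    h_y = entropy(Counter(y for _, y in pairs))
    h_xy = entropy(Counter(pairs))
    return (h_x + h_y - h_xy) / h_x  # the numerator is the mutual information

pairs = [("a", 1), ("a", 1), ("b", 2), ("b", 2), ("a", 2)]
print(uncertainty_coefficient(pairs))  # in [0, 1]; the log base cancels out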
|
https://en.wikipedia.org/wiki/Uncertainty_coefficient
|
In medicine and statistics, sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a medical condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives: sensitivity (true positive rate) is the probability of a positive test result given that the individual truly has the condition, and specificity (true negative rate) is the probability of a negative test result given that the individual truly does not.
If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all testing, both diagnoses and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities will mean lower specificities and vice versa.
A test which reliably detects the presence of a condition, resulting in a high number of true positives and low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects.
A test which reliably excludes individuals who do not have the condition, resulting in a high number of true negatives and low number of false positives, will have a high specificity. This is especially important when people who are identified as having a condition may be subjected to more testing, expense, stigma, anxiety, etc.
The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947.[1]
There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymous with the detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others.[2] However, this article deals with diagnostic sensitivity and specificity as defined at top.
Imagine a study evaluating a test that screens people for a disease. Each person taking the test either has or does not have the disease. The test outcome can be positive (classifying the person as having the disease) or negative (classifying the person as not having the disease). The test results for each subject may or may not match the subject's actual status. In that setting:
After getting the numbers of true positives, false positives, true negatives, and false negatives, the sensitivity and specificity for the test can be calculated. If it turns out that the sensitivity is high then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated.[3]
Consider the example of a medical test for diagnosing a condition. Sensitivity (sometimes also named the detection rate in a clinical setting) refers to the test's ability to correctly detect ill patients out of those who do have the condition.[4] Mathematically, this can be expressed as:
A negative result in a test with high sensitivity can be useful for "ruling out" disease,[4] since it rarely misdiagnoses those who do have the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive. In this case, a negative test result would definitively rule out the presence of the disease in a patient. However, a positive result in a test with high sensitivity is not necessarily useful for "ruling in" disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or "ruling in" the disease.
The calculation of sensitivity does not take into account indeterminate test results.
If a test cannot be repeated, indeterminate samples either should be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or can be treated as false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it).
A test with a higher sensitivity has a lower type II error rate.
Consider the example of a medical test for diagnosing a disease. Specificity refers to the test's ability to correctly reject healthy patients without a condition. Mathematically, this can be written as:
A positive result in a test with high specificity can be useful for "ruling in" disease, since the test rarely gives positive results in healthy patients.[5] A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitively rule in the presence of the disease. However, a negative result from a test with high specificity is not necessarily useful for "ruling out" disease. For example, a test that always returns a negative test result will have a specificity of 100% because specificity does not consider false negatives. A test like that would return negative for patients with the disease, making it useless for "ruling out" the disease.
A test with a higher specificity has a lower type I error rate.
The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black, dotted line in the center of the graph is where the sensitivity and specificity are the same. As one moves to the left of the black dotted line, the sensitivity increases, reaching its maximum value of 100% at line A, and the specificity decreases. The sensitivity at line A is 100% because at that point there are zero false negatives, meaning that all the negative test results are true negatives. When moving to the right, the opposite applies, the specificity increases until it reaches the B line and becomes 100% and the sensitivity decreases. The specificity at line B is 100% because the number of false positives is zero at that line, meaning all the positive test results are true positives.
The middle solid line in both figures above that show the level of sensitivity and specificity is the test cutoff point. As previously described, moving this line results in a trade-off between the levels of sensitivity and specificity. The left-hand side of this line contains the data points that test below the cutoff point and are considered negative (the blue dots indicate the false negatives (FN), the white dots true negatives (TN)). The right-hand side of the line shows the data points that test above the cutoff point and are considered positive (red dots indicate false positives (FP)). Each side contains 40 data points.
For the figure that shows high sensitivity and low specificity, there are 3 FN and 8 FP. Using the fact that positive results = true positives (TP) + FP, we get TP = positive results − FP, or TP = 40 − 8 = 32. The number of sick people in the data set is equal to TP + FN, or 32 + 3 = 35. The sensitivity is therefore 32 / 35 = 91.4%. Using the same method, we get TN = 40 − 3 = 37, and the number of healthy people 37 + 8 = 45, which results in a specificity of 37 / 45 = 82.2%.
For the figure that shows low sensitivity and high specificity, there are 8 FN and 3 FP. Using the same method as for the previous figure, we get TP = 40 − 3 = 37. The number of sick people is 37 + 8 = 45, which gives a sensitivity of 37 / 45 = 82.2%. There are 40 − 8 = 32 TN. The specificity therefore comes out to 32 / 35 = 91.4%.
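The arithmetic in these two worked examples can be checked with a few lines of code; this is a minimal sketch with the counts taken from the text above.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# High sensitivity / low specificity: 3 FN and 8 FP.
print(sens_spec(tp=40 - 8, fn=3, fp=8, tn=40 - 3))  # (~0.914, ~0.822)
# Low sensitivity / high specificity: 8 FN and 3 FP.
print(sens_spec(tp=40 - 3, fn=8, fp=3, tn=40 - 8))  # (~0.822, ~0.914)
```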
The red dot indicates the patient with the medical condition. The red background indicates the area where the test predicts the data point to be positive. There are 6 true positives and 0 false negatives in this figure (because every positive condition is correctly predicted as positive). Therefore, the sensitivity is 100% (from 6 / (6 + 0)). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive by the model). When the dotted line, the test cutoff line, is at position A, the test correctly predicts the entire true positive class, but it will fail to correctly identify the data points from the true negative class.
Similarly to the previously explained figure, the red dot indicates the patient with the medical condition. However, in this case, the green background indicates that the test predicts that all patients are free of the medical condition. The number of true negative data points is then 26, and the number of false positives is 0. This results in 100% specificity (from 26 / (26 + 0)). Therefore, sensitivity or specificity alone cannot be used to measure the performance of the test.
In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate).
If 100 patients known to have a disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest.[6] Positive and negative predictive values, but not sensitivity or specificity, are values influenced by the prevalence of disease in the population that is being tested. These concepts are illustrated graphically in the applet Bayesian clinical diagnostic model, which shows the positive and negative predictive values as a function of the prevalence, sensitivity and specificity.
It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative.[7][8] This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by the prevalence of the condition being tested, the test's sensitivity and its specificity.[9][10][11] The SNNOUT mnemonic has some validity when the prevalence of the condition in question is extremely low in the tested sample.
The trade-off between specificity and sensitivity is explored in ROC analysis as a trade-off between recall (TPR) and false positive rate (FPR).[12] Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).[13]
The sensitivity index or d′ (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with means and standard deviations μ_S and σ_S, and μ_N and σ_N, respectively, d′ is defined as:
d′ = (μ_S − μ_N) / σ_N.
An estimate of d′ can also be found from measurements of the hit rate and false-alarm rate. It is calculated as:
d′ = Z(hit rate) − Z(false alarm rate),
where the function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution.
d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected.
The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as well as derivations of several metrics using the four outcomes, as follows:
Related calculations
This hypothetical screening test (fecal occult blood test) correctly identified two-thirds (66.7%) of patients with colorectal cancer.[a] Unfortunately, factoring in prevalence rates reveals that this hypothetical test has a high false positive rate, and it does not reliably identify colorectal cancer in the overall population of asymptomatic people (PPV = 10%).
On the other hand, this hypothetical test demonstrates very accurate detection of cancer-free individuals (NPV ≈ 99.5%). Therefore, when used for routine colorectal cancer screening with asymptomatic adults, a negative result supplies important data for the patient and doctor, such as reassuring patients worried about developing colorectal cancer.
Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only 80%. A common way to do this is to state the binomial proportion confidence interval, often calculated using a Wilson score interval.
Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a given confidence level (e.g., 95%).[26]
In information retrieval, the positive predictive value is called precision, and sensitivity is called recall. Unlike the specificity vs. sensitivity trade-off, these measures are both independent of the number of true negatives, which is generally unknown and much larger than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives versus positives is rare in other applications.[13]
The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall: F = 2 · precision · recall / (precision + recall).
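As a minimal sketch, the three quantities can be computed from the counts of the high-sensitivity figure above (32 TP, 8 FP, 3 FN); the function name is an illustrative assumption.

```python
def precision_recall_f(tp, fp, fn):
    precision = tp / (tp + fp)  # positive predictive value
    recall = tp / (tp + fn)     # sensitivity
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

print(precision_recall_f(tp=32, fp=8, fn=3))  # (0.8, ~0.914, ~0.853)
```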
In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer type II errors.
Similarly to the domain of information retrieval, in the research area of gene prediction, the number of true negatives (non-genes) in genomic sequences is generally unknown and much larger than the actual number of genes (true positives). The convenient and intuitively understood term specificity in this research area has been frequently used with the mathematical formula for precision and recall as defined in biostatistics. The pair of thus defined specificity (as positive predictive value) and sensitivity (true positive rate) represent major parameters characterizing the accuracy of gene prediction algorithms.[27][28][29][30] Conversely, the term specificity in the sense of true negative rate would have little, if any, application in the genome analysis research area.
|
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
|
In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix,[1] is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one; in unsupervised learning it is usually called a matching matrix.
Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa – both variants are found in the literature.[2] The diagonal of the matrix therefore represents all instances that are correctly predicted.[3] The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another).
It is a special kind of contingency table, with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table).
Given a sample of 12 individuals, 8 that have been diagnosed with cancer and 4 that are cancer-free, where individuals with cancer belong to class 1 (positive) and non-cancer individuals belong to class 0 (negative), we can display that data as follows:
Assuming that we have a classifier that distinguishes between individuals with and without cancer in some way, we can take the 12 individuals and run them through the classifier. The classifier then makes 9 accurate predictions and misses 3: 2 individuals with cancer wrongly predicted as being cancer-free (samples 1 and 2), and 1 person without cancer wrongly predicted as having cancer (sample 9).
Notice that if we compare the actual classification set to the predicted classification set, there are 4 different outcomes that could result in any particular column. First, if the actual classification is positive and the predicted classification is positive (1,1), this is called a true positive result because the positive sample was correctly identified by the classifier. Second, if the actual classification is positive and the predicted classification is negative (1,0), this is called a false negative result because the positive sample is incorrectly identified by the classifier as being negative. Third, if the actual classification is negative and the predicted classification is positive (0,1), this is called a false positive result because the negative sample is incorrectly identified by the classifier as being positive. Fourth, if the actual classification is negative and the predicted classification is negative (0,0), this is called a true negative result because the negative sample is correctly identified by the classifier.
We can then perform the comparison between actual and predicted classifications and add this information to the table, making correct results appear in green so they are more easily identifiable.
The template for any binary confusion matrix uses the four kinds of results discussed above (true positives, false negatives, false positives, and true negatives) along with the positive and negative classifications. The four outcomes can be formulated in a 2×2 confusion matrix, as follows:
The color convention of the three data tables above was chosen to match this confusion matrix, in order to easily differentiate the data.
Now, we can simply total up each type of result, substitute into the template, and create a confusion matrix that will concisely summarize the results of testing the classifier:
Actual positive (8): TP = 6, FN = 2
Actual negative (4): FP = 1, TN = 3
Total: 8 + 4 = 12
In this confusion matrix, of the 8 samples with cancer, the system judged that 2 were cancer-free, and of the 4 samples without cancer, it predicted that 1 did have cancer. All correct predictions are located in the diagonal of the table (highlighted in green), so it is easy to visually inspect the table for prediction errors, as values outside the diagonal will represent them. By summing up the 2 rows of the confusion matrix, one can also deduce the total number of positive (P) and negative (N) samples in the original dataset, i.e.P=TP+FN{\displaystyle P=TP+FN}andN=FP+TN{\displaystyle N=FP+TN}.
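The tallying itself is mechanical; the following minimal sketch rebuilds the matrix from label lists (8 actual positives, of which 2 are predicted negative; 4 actual negatives, of which 1 is predicted positive). The label lists are an illustrative reconstruction of the text, not the original data.

```python
from collections import Counter

actual    = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
predicted = [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

counts = Counter(zip(actual, predicted))
tp, fn = counts[(1, 1)], counts[(1, 0)]
fp, tn = counts[(0, 1)], counts[(0, 0)]
print([[tp, fn], [fp, tn]])  # [[6, 2], [1, 3]]; the diagonal holds the 9 correct predictions
```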
In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of true positives, false negatives, false positives, and true negatives. This allows more detailed analysis than simply observing the proportion of correct classifications (accuracy). Accuracy will yield misleading results if the data set is unbalanced; that is, when the numbers of observations in different classes vary greatly.
For example, if there were 95 cancer samples and only 5 non-cancer samples in the data, a particular classifier might classify all the observations as having cancer. The overall accuracy would be 95%, but in more detail the classifier would have a 100% recognition rate (sensitivity) for the cancer class but a 0% recognition rate for the non-cancer class. The F1 score is even more unreliable in such cases, and here would yield over 97.4%, whereas informedness removes such bias and yields 0 as the probability of an informed decision for any form of guessing (here always guessing cancer).
According to Davide Chicco and Giuseppe Jurman, the most informative metric to evaluate a confusion matrix is the Matthews correlation coefficient (MCC).[11]
Other metrics can be included in a confusion matrix, each having its own significance and use.
The confusion matrix is not limited to binary classification and can be used with multi-class classifiers as well. The confusion matrices discussed above have only two conditions: positive and negative. For example, the table below summarizes communication of a whistled language between two speakers, with zero values omitted for clarity.[20]
|
https://en.wikipedia.org/wiki/Confusion_matrix
|
In decision theory, a scoring rule[1] provides evaluation metrics for probabilistic predictions or forecasts. While "regular" loss functions (such as mean squared error) assign a goodness-of-fit score to a predicted value and an observed value, scoring rules assign such a score to a predicted probability distribution and an observed value. On the other hand, a scoring function[2] provides a summary measure for the evaluation of point predictions, i.e. one predicts a property or functional T(F), like the expectation or the median.
Scoring rules answer the question "how good is a predicted probability distribution compared to an observation?" Scoring rules that are (strictly) proper are proven to have the lowest expected score if the predicted distribution equals the underlying distribution of the target variable. Although scores may differ for individual observations, predicting the "correct" distributions minimizes the expected score.
Scoring rules and scoring functions are often used as "cost functions" or "loss functions" of probabilistic forecasting models. They are evaluated as the empirical mean over a given sample, the "score". Scores of different predictions or models can then be compared to conclude which model is best. For example, consider a model that predicts (based on an input x) a mean μ ∈ ℝ and standard deviation σ ∈ ℝ₊. Together, those variables define a Gaussian distribution N(μ, σ²), in essence predicting the target variable as a probability distribution. A common interpretation of probabilistic models is that they aim to quantify their own predictive uncertainty. In this example, an observed target variable y ∈ ℝ is then compared to the predicted distribution N(μ, σ²) and assigned a score L(N(μ, σ²), y) ∈ ℝ. When training on a scoring rule, it should "teach" a probabilistic model to predict when its uncertainty is low and when its uncertainty is high, and it should result in calibrated predictions while minimizing the predictive uncertainty.
Although the example given concerns the probabilistic forecasting of a real-valued target variable, a variety of different scoring rules have been designed with different target variables in mind. Scoring rules exist for binary and categorical probabilistic classification, as well as for univariate and multivariate probabilistic regression.
Consider a sample space Ω, a σ-algebra 𝒜 of subsets of Ω, and a convex class ℱ of probability measures on (Ω, 𝒜). A function defined on Ω and taking values in the extended real line, ℝ̄ = [−∞, ∞], is ℱ-quasi-integrable if it is measurable with respect to 𝒜 and is quasi-integrable with respect to all F ∈ ℱ.
A probabilistic forecast is any probability measure F ∈ ℱ, i.e. a distribution over potential future observations.
A scoring rule is any extended real-valued function S: ℱ × Ω → ℝ̄ such that S(F, ·) is ℱ-quasi-integrable for all F ∈ ℱ. S(F, y) represents the loss or penalty when the forecast F ∈ ℱ is issued and the observation y ∈ Ω materializes.
A point forecast is a functional, i.e. a potentially set-valued mapping F → T(F) ⊆ Ω.
A scoring function is any real-valued function S: Ω × Ω → ℝ where S(x, y) represents the loss or penalty when the point forecast x ∈ Ω is issued and the observation y ∈ Ω materializes.
Scoring rules S(F, y) and scoring functions S(x, y) are negatively (positively) oriented if smaller (larger) values mean better. Here we adhere to negative orientation, hence the association with "loss".
We write S(F, Q) for the expected score of a prediction F under Q ∈ ℱ, i.e. the expected score of the predicted distribution F ∈ ℱ when observations are sampled from the distribution Q.
Many probabilistic forecasting models are trained via the sample average score, in which a set of predicted distributions F₁, …, Fₙ ∈ ℱ is evaluated against a set of observations y₁, …, yₙ ∈ Ω.
Strictly proper scoring rules and strictly consistent scoring functions encourage honest forecasts by maximization of the expected reward: if a forecaster is given a reward of −S(F, y) if y realizes (e.g. y = rain), then the highest expected reward (lowest score) is obtained by reporting the true probability distribution.[1]
A scoring rule S is proper relative to ℱ if (assuming negative orientation) its expected score is minimized when the forecasted distribution matches the distribution of the observation.
It is strictly proper if the above equation holds with equality if and only if F = Q.
A scoring function S is consistent for the functional T relative to the class ℱ if
It is strictly consistent if it is consistent and equality in the above equation implies that x ∈ T(F).
To enforce that correct forecasts are always strictly preferred, Ahmadian et al. (2024) introduced two "superior" variants: the Penalized Brier Score (PBS) and the Penalized Logarithmic Loss (PLL), which add a fixed penalty whenever the predicted class (argmax p) differs from the true class (argmax y).[3]
- PBS augments the Brier score by adding (c − 1)/c for any misclassification (with c the number of classes).
- PLL augments the logarithmic score by adding −log(1/c) for any misclassification.
Despite these penalties, PBS and PLL remain strictly proper. Their expected score is uniquely minimized when the forecast equals the true distribution, satisfying the "superior" property that every correct classification is scored strictly better than any incorrect one.
Note: Neither the standard Brier score nor the logarithmic score satisfies the "superior" criterion. They remain strictly proper but can assign better scores to incorrect predictions than to certain correct ones, an issue resolved by PBS and PLL.[3]
An example of probabilistic forecasting is in meteorology, where a weather forecaster may give the probability of rain on the next day. One could note the number of times that a 25% probability was quoted over a long period, and compare this with the actual proportion of times that rain fell. If the actual percentage was substantially different from the stated probability, we say that the forecaster is poorly calibrated. A poorly calibrated forecaster might be encouraged to do better by a bonus system. A bonus system designed around a proper scoring rule will incentivize the forecaster to report probabilities equal to his personal beliefs.[4]
In addition to the simple case of a binary decision, such as assigning probabilities to 'rain' or 'no rain', scoring rules may be used for multiple classes, such as 'rain', 'snow', or 'clear', or continuous responses like the amount of rain per day.
The image to the right shows an example of a scoring rule, the logarithmic scoring rule, as a function of the probability reported for the event that actually occurred. One way to use this rule would be as a cost based on the probability that a forecaster or algorithm assigns, then checking to see which event actually occurs.
There are an infinite number of scoring rules, including entire parameterized families of strictly proper scoring rules. The ones shown below are simply popular examples.
For a categorical response variable with m mutually exclusive events, Y ∈ Ω = {1, …, m}, a probabilistic forecaster or algorithm will return a probability vector r with a probability for each of the m outcomes.
The logarithmic scoring rule is a local strictly proper scoring rule. It is also the negative of surprisal, which is commonly used as a scoring criterion in Bayesian inference; the goal is to minimize expected surprise. This scoring rule has strong foundations in information theory.
Here, the score is calculated as the logarithm of the probability estimate for the actual outcome. That is, a prediction of 80% that correctly proved true would receive a score of ln(0.8) = −0.22. This same prediction also assigns 20% likelihood to the opposite case, so if the prediction proves false, it would receive a score based on the 20%: ln(0.2) = −1.6. The goal of a forecaster is to make the score as large as possible, and −0.22 is indeed larger than −1.6.
If one treats the truth or falsity of the prediction as a variable x with value 1 or 0 respectively, and the expressed probability as p, then one can write the logarithmic scoring rule as x ln(p) + (1 − x) ln(1 − p). Note that any logarithmic base may be used, since strictly proper scoring rules remain strictly proper under linear transformation. That is:
is strictly proper for all b > 1.
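A minimal sketch reproducing the numbers quoted above for the binary logarithmic score; the function name is an illustrative assumption.

```python
import math

def log_score(p, x):
    """Logarithmic score x*ln(p) + (1-x)*ln(1-p) for outcome x in {0, 1}."""
    return x * math.log(p) + (1 - x) * math.log(1 - p)

print(round(log_score(0.8, 1), 2))  # -0.22: the 80% prediction proves true
print(round(log_score(0.8, 0), 2))  # -1.61: the same prediction proves false
```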
The quadratic scoring rule is a strictly proper scoring rule
where r_i is the probability assigned to the correct answer and C is the number of classes.
The Brier score, originally proposed by Glenn W. Brier in 1950,[5] can be obtained by an affine transform from the quadratic scoring rule.
where y_j = 1 when the j-th event is correct and y_j = 0 otherwise, and C is the number of classes.
An important difference between these two rules is that a forecaster should strive to maximize the quadratic scoreQ{\displaystyle Q}yet minimize the Brier scoreB{\displaystyle B}. This is due to a negative sign in the linear transformation between them.
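A minimal sketch of both rules, assuming the standard forms Q(r, i) = 2·r_i − Σ_j r_j² and B(r, y) = Σ_j (y_j − r_j)², under which B = 1 − Q for a one-hot outcome; the example forecast is illustrative.

```python
def quadratic_score(r, i):
    """Q(r, i) = 2*r_i - sum_j r_j^2, to be maximized."""
    return 2 * r[i] - sum(p * p for p in r)

def brier_score(r, i):
    """B = sum_j (y_j - r_j)^2 with y one-hot at class i, to be minimized."""
    return sum((p - (1 if j == i else 0)) ** 2 for j, p in enumerate(r))

r = [0.7, 0.2, 0.1]           # forecast over three classes; class 0 occurs
print(quadratic_score(r, 0))  # 0.86
print(brier_score(r, 0))      # 0.14 = 1 - 0.86
```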
The spherical scoring rule is also a strictly proper scoring rule
The ranked probability score[6] (RPS) is a strictly proper scoring rule which can be expressed as:
where y_j = 1 when the j-th event is correct and y_j = 0 otherwise, and C is the number of classes. Unlike other scoring rules, the ranked probability score considers the distance between classes, i.e. classes 1 and 2 are considered closer than classes 1 and 3. The score assigns better scores to probabilistic forecasts with high probabilities assigned to classes close to the correct class. For example, when considering probabilistic forecasts r₁ = (0.5, 0.5, 0) and r₂ = (0.5, 0, 0.5), we find that RPS(r₁, 1) = 0.25, while RPS(r₂, 1) = 0.5, despite both probabilistic forecasts assigning identical probability to the correct class.
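The following minimal sketch computes the RPS via cumulative sums and reproduces the two example values; classes are 0-indexed here, an implementation detail.

```python
from itertools import accumulate

def rps(r, correct):
    """Ranked probability score; `correct` is the 0-based index of the observed class."""
    y = [1 if j == correct else 0 for j in range(len(r))]
    return sum((cr - cy) ** 2 for cr, cy in zip(accumulate(r), accumulate(y)))

print(rps([0.5, 0.5, 0.0], 0))  # 0.25: remaining mass adjacent to the true class
print(rps([0.5, 0.0, 0.5], 0))  # 0.5:  same mass, but further away
```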
Shown below on the left is a graphical comparison of the logarithmic, quadratic, and spherical scoring rules for a binary classification problem. The x-axis indicates the reported probability for the event that actually occurred.
It is important to note that each of the scores has a different magnitude and location. The magnitude differences are not relevant, however, as scores remain proper under affine transformation. Therefore, to compare different scores it is necessary to move them to a common scale. A reasonable choice of normalization is shown in the picture on the right, where all scores intersect the points (0.5, 0) and (1, 1). This ensures that they yield 0 for a uniform distribution (two probabilities of 0.5 each), reflecting no cost or reward for reporting what is often the baseline distribution. All normalized scores below also yield 1 when the true class is assigned a probability of 1.
The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are univariate continuous probability distributions, i.e. the predicted distributions are defined over a univariate target variable X ∈ ℝ and have a probability density function f: ℝ → ℝ₊.
The logarithmic score is a local strictly proper scoring rule. It is defined as
where f_D denotes the probability density function of the predicted distribution D. The logarithmic score for continuous variables has strong ties to maximum likelihood estimation. However, in many applications, the continuous ranked probability score is often preferred over the logarithmic score, as the logarithmic score can be heavily influenced by slight deviations in the tail densities of forecasted distributions.[7]
The continuous ranked probability score (CRPS)[8] is a strictly proper scoring rule much used in meteorology. It is defined as
where F_D is the cumulative distribution function of the forecasted distribution D, H is the Heaviside step function, and y ∈ ℝ is the observation. For distributions with a finite first moment, the continuous ranked probability score can be written as:[1]
where X and X′ are independent random variables sampled from the distribution D. Furthermore, when the cumulative distribution function F is continuous, the continuous ranked probability score can also be written as[9]
The continuous ranked probability score can be seen as a continuous extension of the ranked probability score, and it is closely related to quantile regression. The continuous ranked probability score over the empirical distribution D̂ of an ordered set of points q₁ ≤ … ≤ qₙ (i.e. every point has 1/n probability of occurring) is equal to twice the mean quantile loss applied to those points with evenly spread quantiles (τ₁, …, τₙ) = (1/(2n), …, (2n − 1)/(2n)):[10]
For many popular families of distributions, closed-form expressions for the continuous ranked probability score have been derived. The continuous ranked probability score has been used as a loss function for artificial neural networks, in which weather forecasts are postprocessed to a Gaussian probability distribution.[11][12]
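A minimal sketch of the finite-first-moment form CRPS(D, y) = E|X − y| − ½·E|X − X′| for a sample-based (empirical) forecast; the Gaussian sample is an illustrative assumption.

```python
import numpy as np

def crps_empirical(samples, y):
    """CRPS of the empirical distribution of `samples` at observation y."""
    x = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(x - y))                          # E|X - y|
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))  # 0.5 * E|X - X'|
    return term1 - term2

forecast = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=1000)
print(crps_empirical(forecast, 0.0))  # small: observation near the forecast's center
print(crps_empirical(forecast, 3.0))  # larger: observation in the forecast's tail
```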
CRPS was also adapted to survival analysis to cover censored events.[13]
CRPS is also known as the Cramér–von Mises distance and can be seen as an improvement of the Wasserstein distance (often used in machine learning); furthermore, the Cramér distance performed better in ordinal regression than the KL distance or the Wasserstein metric.[14]
The scoring rules listed below aim to evaluate probabilistic predictions when the predicted distributions are multivariate continuous probability distributions, i.e. the predicted distributions are defined over a multivariate target variable X ∈ ℝⁿ and have a probability density function f: ℝⁿ → ℝ₊.
The multivariate logarithmic score is similar to the univariate logarithmic score:
where f_D denotes the probability density function of the predicted multivariate distribution D. It is a local, strictly proper scoring rule.
The Hyvärinen scoring function (of a density p) is defined by[15]
where Δ denotes the trace of the Hessian and ∇ denotes the gradient. This scoring rule can be used to computationally simplify parameter inference and address Bayesian model comparison with arbitrarily-vague priors.[15][16] It was also used to introduce new information-theoretic quantities beyond the existing information theory.[17]
The energy score is a multivariate extension of the continuous ranked probability score:[1]
Here, β ∈ (0, 2), ‖·‖₂ denotes the n-dimensional Euclidean distance, and X, X′ are independently sampled random variables from the probability distribution D. The energy score is strictly proper for distributions D for which E_{X∼D}[‖X‖₂] is finite. It has been suggested that the energy score is somewhat ineffective when evaluating the intervariable dependency structure of the forecasted multivariate distribution.[18] The energy score is equal to twice the energy distance between the predicted distribution and the empirical distribution of the observation.
The variogram score of order p is given by:[19]
Here, w_ij are weights, often set to 1, and p > 0 can be arbitrarily chosen, but p = 0.5, 1 or 2 are often used. X_i denotes the i-th marginal random variable of X. The variogram score is proper for distributions for which the (2p)-th moment is finite for all components, but it is never strictly proper. Compared to the energy score, the variogram score is claimed to be more discriminative with respect to the predicted correlation structure.
The conditional continuous ranked probability score (Conditional CRPS or CCRPS) is a family of (strictly) proper scoring rules. Conditional CRPS evaluates a forecasted multivariate distribution D by evaluation of CRPS over a prescribed set of univariate conditional probability distributions of the predicted multivariate distribution:[20]
Here, X_i is the i-th marginal variable of X ∼ D, 𝒯 = (v_i, 𝒞_i)_{i=1}^k is a set of tuples that defines a conditional specification (with v_i ∈ {1, …, n} and 𝒞_i ⊆ {1, …, n} \ {v_i}), and P_{X∼D}(X_{v_i} | X_j = Y_j for j ∈ 𝒞_i) denotes the conditional probability distribution of X_{v_i} given that all variables X_j for j ∈ 𝒞_i are equal to their respective observations. In the case that this conditional distribution is ill-defined (i.e. its conditioning event has zero likelihood), CRPS scores over it are defined as infinite. Conditional CRPS is strictly proper for distributions with finite first moment if the chain rule is included in the conditional specification, meaning that there exists a permutation φ₁, …, φₙ of 1, …, n such that for all 1 ≤ i ≤ n: (φ_i, {φ₁, …, φ_{i−1}}) ∈ 𝒯.
All proper scoring rules are equal to weighted sums (integrals with a non-negative weighting functional) of the losses in a set of simple two-alternative decision problems that use the probabilistic prediction, each such decision problem having a particular combination of associated cost parameters for false positive and false negative decisions. A strictly proper scoring rule corresponds to having a nonzero weighting for all possible decision thresholds. Any given proper scoring rule is equal to the expected losses with respect to a particular probability distribution over the decision thresholds; thus the choice of a scoring rule corresponds to an assumption about the probability distribution of decision problems for which the predicted probabilities will ultimately be employed, with, for example, the quadratic loss (or Brier) scoring rule corresponding to a uniform probability of the decision threshold being anywhere between zero and one. The classification accuracy score (percent classified correctly), a single-threshold scoring rule which is zero or one depending on whether the predicted probability is on the appropriate side of 0.5, is a proper scoring rule but not a strictly proper scoring rule, because it is optimized (in expectation) not only by predicting the true probability but by predicting any probability on the same side of 0.5 as the true probability.[21][22][23][24][25][26]
A strictly proper scoring rule, whether binary or multiclass, remains strictly proper after an affine transformation.[4] That is, if S(r, i) is a strictly proper scoring rule, then a + bS(r, i) with b ≠ 0 is also a strictly proper scoring rule, though if b < 0 then the optimization sense of the scoring rule switches between maximization and minimization.
A proper scoring rule is said to be local if its estimate for the probability of a specific event depends only on the probability of that event. Informally, this means the optimal solution of the scoring problem "at a specific event" is invariant to all changes in the observation distribution that leave the probability of that event unchanged. All binary scores are local, because the probability assigned to the event that did not occur is fully determined, so there is no degree of flexibility to vary over.
Affine functions of the logarithmic scoring rule are the only strictly proper local scoring rules on a finite set that is not binary.
The expectation value of a proper scoring rule S can be decomposed into the sum of three components, called uncertainty, reliability, and resolution,[27][28] which characterize different attributes of probabilistic forecasts:
If a score is proper and negatively oriented (such as the Brier score), all three terms are nonnegative.
The uncertainty component is equal to the expected score of the forecast which constantly predicts the average event frequency.
The reliability component penalizes poorly calibrated forecasts, in which the predicted probabilities do not coincide with the event frequencies.
The equations for the individual components depend on the particular scoring rule.
For the Brier Score, they are given by
where x̄ is the average probability of occurrence of the binary event x, and π(p) is the conditional event probability given p, i.e. π(p) = P(x = 1 ∣ p).
|
https://en.wikipedia.org/wiki/Scoring_rule
|
The base rate fallacy, also called base rate neglect[2] or base rate bias, is a type of fallacy in which people tend to ignore the base rate (e.g., general prevalence) in favor of the individuating information (i.e., information pertaining only to a specific case).[3] For example, if someone hears that a friend is very shy and quiet, they might think the friend is more likely to be a librarian than a salesperson. However, there are far more salespeople than librarians overall, making it more likely that their friend is actually a salesperson, even if a greater proportion of librarians fit the description of being shy and quiet. Base rate neglect is a specific form of the more general extension neglect.
It is also called the prosecutor's fallacy or defense attorney's fallacy when applied to the results of statistical tests (such as DNA tests) in the context of law proceedings. These terms were introduced by William C. Thompson and Edward Schumann in 1987,[4][5] although it has been argued that their definition of the prosecutor's fallacy extends to many additional invalid imputations of guilt or liability that are not analyzable as errors in base rates or Bayes's theorem.[6]
An example of the base rate fallacy is the false positive paradox (also known as the accuracy paradox). This paradox describes situations where there are more false positive test results than true positives (meaning the classifier has a low precision). For example, if a facial recognition camera can identify wanted criminals 99% accurately, but analyzes 10,000 people a day, the high accuracy is outweighed by the number of tests; the program's list of criminals will likely have far more innocents (false positives) than criminals (true positives), because there are far more innocents than criminals overall. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population.[7] The fundamental issue is that the far higher prevalence of true negatives means that the pool of people testing positive will be dominated by false positives: even a small fraction of the much larger negative group produces more indicated positives than the larger fraction of the much smaller positive group.
When the prevalence, the proportion of those who have a given condition, is lower than the test's false positive rate, even tests that have a very low risk of giving a false positive in an individual case will give more false than true positives overall.[8]
It is especially counter-intuitive when interpreting a positive result in a test on a low-prevalence population after having dealt with positive results drawn from a high-prevalence population.[8] If the false positive rate of the test is higher than the proportion of the new population with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population may conclude from experience that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred.
Imagine running an infectious disease test on a population A of 1,000 persons, of which 40% are infected. The test has a false positive rate of 5% (0.05) and a false negative rate of zero. The expected outcome of the 1,000 tests on population A would be:
So, in population A, a person receiving a positive test could be over 93% confident (400/(30 + 400)) that it correctly indicates infection.
Now consider the same test applied to population B, of which only 2% are infected. The expected outcome of 1,000 tests on population B would be:
In population B, only 20 of the 69 total people with a positive test result are actually infected. So, the probability of actually being infected after one is told that one is infected is only 29% (20/(20 + 49)) for a test that otherwise appears to be "95% accurate".
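The two population calculations can be checked directly; this is a minimal sketch with the rates stated above.

```python
def ppv(population, prevalence, fpr=0.05):
    """Positive predictive value with a zero false negative rate."""
    infected = population * prevalence
    tp = infected                       # no false negatives
    fp = fpr * (population - infected)  # 5% of the uninfected test positive
    return tp / (tp + fp)

print(ppv(1000, 0.40))  # ~0.93 for population A
print(ppv(1000, 0.02))  # ~0.29 for population B
```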
A tester with experience of group A might find it a paradox that in group B, a result that had usually correctly indicated infection is now usually a false positive. The confusion of the posterior probability of infection with the prior probability of receiving a false positive is a natural error after receiving a health-threatening test result.
Imagine that a group of police officers have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. No other information is known about them.
Many would estimate the probability that the driver is drunk as high as 95%, but the correct probability is about 2%.
An explanation for this is as follows: on average, for every 1,000 drivers tested,
Therefore, the probability that any given driver among the 1 + 49.95 = 50.95 positive test results really is drunk is 1/50.95 ≈ 0.019627.
The validity of this result does, however, hinge on the validity of the initial assumption that the police officer stopped the driver truly at random, and not because of bad driving. If that or another non-arbitrary reason for stopping the driver was present, then the calculation also involves the probability of a drunk driver driving competently and a non-drunk driver driving (in-)competently.
More formally, the same probability of roughly 0.02 can be established usingBayes' theorem. The goal is to find the probability that the driver is drunk given that the breathalyzer indicated they are drunk, which can be represented as
p(drunk∣D){\displaystyle p(\mathrm {drunk} \mid D)}
whereDmeans that the breathalyzer indicates that the driver is drunk. Using Bayes's theorem,
p(drunk∣D)=p(D∣drunk)p(drunk)p(D).{\displaystyle p(\mathrm {drunk} \mid D)={\frac {p(D\mid \mathrm {drunk} )\,p(\mathrm {drunk} )}{p(D)}}.}
The following information is known in this scenario:
p(drunk)=0.001,p(sober)=0.999,p(D∣drunk)=1.00,p(D∣sober)=0.05.{\displaystyle {\begin{aligned}p(\mathrm {drunk} )&=0.001,\\p(\mathrm {sober} )&=0.999,\\p(D\mid \mathrm {drunk} )&=1.00,\\p(D\mid \mathrm {sober} )&=0.05.\end{aligned}}}
As can be seen from the formula, one needsp(D) for Bayes' theorem, which can be computed from the preceding values using thelaw of total probability:
p(D)=p(D∣drunk)p(drunk)+p(D∣sober)p(sober){\displaystyle p(D)=p(D\mid \mathrm {drunk} )\,p(\mathrm {drunk} )+p(D\mid \mathrm {sober} )\,p(\mathrm {sober} )}
which gives
p(D)=(1.00×0.001)+(0.05×0.999)=0.05095.{\displaystyle p(D)=(1.00\times 0.001)+(0.05\times 0.999)=0.05095.}
Plugging these numbers into Bayes' theorem, one finds that
p(drunk∣D)=1.00×0.0010.05095≈0.019627,{\displaystyle p(\mathrm {drunk} \mid D)={\frac {1.00\times 0.001}{0.05095}}\approx 0.019627,}
which is the precision of the test.
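The whole computation fits in a few lines; this is a minimal sketch of the Bayes calculation above.

```python
p_drunk, p_sober = 0.001, 0.999
p_pos_given_drunk, p_pos_given_sober = 1.00, 0.05

# Law of total probability, then Bayes' theorem.
p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * p_sober
print(p_pos)                                # 0.05095
print(p_pos_given_drunk * p_drunk / p_pos)  # ~0.0196, i.e. about 2%
```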
In a city of 1 million inhabitants, let there be 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automaticfacial recognition software.
The software has two failure rates of 1%:
Suppose now that an inhabitant triggers the alarm. Someone making the base rate fallacy would infer that there is a 99% probability that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the probability of a terrorist is actually near 1%, not near 99%.
The fallacy arises from confusing the natures of two different failure rates. The 'number of non-bells per 100 terrorists' (P(¬B | T), or the probability that the bell fails to ring given the inhabitant is a terrorist) and the 'number of non-terrorists per 100 bells' (P(¬T | B), or the probability that the inhabitant is a non-terrorist given the bell rings) are unrelated quantities; one is not necessarily equal—or even close—to the other. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The 'number of non-terrorists per 100 bells' in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.
Imagine that the first city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. The probability that a person triggering the alarm actually is a terrorist is only about 99 in 10,098, which is less than 1% and very, very far below the initial guess of 99%.
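The same arithmetic, as a minimal sketch:

```python
terrorists, non_terrorists = 100, 999_900
tp = 0.99 * terrorists      # ~99 terrorists trigger the alarm
fp = 0.01 * non_terrorists  # ~9,999 non-terrorists trigger it
print(tp / (tp + fp))       # ~0.0098: just under 1%, not 99%
```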
The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists, and the number of false positives (non-terrorists scanned as terrorists) is so much larger than the true positives (terrorists scanned as terrorists).
Multiple practitioners have argued that, as the base rate of terrorism is extremely low, using data mining and predictive algorithms to identify terrorists cannot feasibly work due to the false positive paradox.[9][10][11][12] Estimates of the number of false positives for each accurate result vary from over ten thousand[12] to one billion;[10] consequently, investigating each lead would be cost- and time-prohibitive.[9][11] The level of accuracy required to make these models viable is likely unachievable. Foremost, the low base rate of terrorism also means there is a lack of data with which to make an accurate algorithm.[11] Further, in the context of detecting terrorism, false negatives are highly undesirable and thus must be minimised as much as possible; however, this requires increasing sensitivity at the cost of specificity, increasing false positives.[12] It is also questionable whether the use of such models by law enforcement would meet the requisite burden of proof given that over 99% of results would be false positives.[12]
A crime is committed. Forensic analysis determines that the perpetrator has a certain blood type shared by 10% of the population. A suspect is arrested, and found to have that same blood type.
A prosecutor might charge the suspect with the crime on that basis alone, and claim at trial that the probability that the defendant is guilty is 90%.
However, this conclusion is only close to correct if the defendant was selected as the main suspect based on robust evidence discovered prior to the blood test and unrelated to it. Otherwise, the reasoning presented is flawed, as it overlooks the high prior probability (that is, prior to the blood test) that he is a random innocent person. Assume, for instance, that 1,000 people live in the town where the crime occurred. This means that 100 people live there who have the perpetrator's blood type, of whom only one is the true perpetrator; therefore, the true probability that the defendant is guilty – based only on the fact that his blood type matches that of the killer – is only 1%, far less than the 90% argued by the prosecutor.
The prosecutor's fallacy involves assuming that the prior probability of a random match is equal to the probability that the defendant is innocent. When using it, a prosecutor questioning an expert witness may ask: "The odds of finding this evidence on an innocent man are so small that the jury can safely disregard the possibility that this defendant is innocent, correct?"[13] The claim assumes that the probability that evidence is found on an innocent man is the same as the probability that a man is innocent given that evidence was found on him, which is not true. Whilst the former is usually small (10% in the previous example) due to good forensic evidence procedures, the latter (99% in that example) does not directly relate to it and will often be much higher, since, in fact, it depends on the likely quite high prior odds of the defendant being a random innocent person.
O. J. Simpsonwas tried and acquitted in 1995 for the murders of his ex-wife Nicole Brown Simpson and her friend Ronald Goldman.
Crime scene blood matched Simpson's with characteristics shared by 1 in 400 people. However, the defense argued that the number of people from Los Angeles matching the sample could fill a football stadium and that the figure of 1 in 400 was useless.[14][15]It would have been incorrect, and an example of prosecutor's fallacy, to rely solely on the "1 in 400" figure to deduce that a given person matching the sample would be likely to be the culprit.
In the same trial, the prosecution presented evidence that Simpson had been violent toward his wife. The defense argued that there was only one woman murdered for every 2500 women who were subjected to spousal abuse, and that any history of Simpson being violent toward his wife was irrelevant to the trial. However, the reasoning behind the defense's calculation was fallacious. According to author Gerd Gigerenzer, the correct probability requires additional context: Simpson's wife had not only been subjected to domestic violence, but rather subjected to domestic violence (by Simpson) and killed (by someone). Gigerenzer writes "the chances that a batterer actually murdered his partner, given that she has been killed, is about 8 in 9 or approximately 90%".[16] While most cases of spousal abuse do not end in murder, most cases of murder where there is a history of spousal abuse were committed by the spouse.
Sally Clark, a British woman, was accused in 1998 of having killed her first child at 11 weeks of age and then her second child at 8 weeks of age. The prosecution had expert witness Sir Roy Meadow, a professor and consultant paediatrician,[17] testify that the probability of two children in the same family dying from SIDS is about 1 in 73 million. That was much less frequent than the actual rate measured in historical data – Meadow estimated it from single-SIDS death data and the assumption that the probability of such deaths should be uncorrelated between infants.[18]
Meadow acknowledged that 1-in-73 million is not an impossibility, but argued that such accidents would happen "once every hundred years" and that, in a country of 15 million 2-child families, it is vastly more likely that the double deaths are due to Münchausen syndrome by proxy than to such a rare accident. However, there is good reason to suppose that the likelihood of a death from SIDS in a family is significantly greater if a previous child has already died in these circumstances (a genetic predisposition to SIDS is likely to invalidate that assumed statistical independence[19]), making some families more susceptible to SIDS and the error an outcome of the ecological fallacy.[20] The likelihood of two SIDS deaths in the same family cannot be soundly estimated by squaring the likelihood of a single such death in all otherwise similar families.[21]
The 1-in-73 million figure greatly underestimated the chance of two successive accidents, but even if that assessment were accurate, the court seems to have missed the fact that the 1-in-73 million number meant nothing on its own. As an a priori probability, it should have been weighed against the a priori probabilities of the alternatives. Given that two deaths had occurred, one of the following explanations must be true, and all of them are a priori extremely improbable:
It is unclear whether an estimate of the probability for the second possibility was ever proposed during the trial, or whether the comparison of the first two probabilities was understood to be the key estimate to make in the statistical analysis assessing the prosecution's case against the case for innocence.
Clark was convicted in 1999, resulting in a press release by the Royal Statistical Society which pointed out the mistakes.[22]
In 2002, Ray Hill (a mathematics professor at Salford) attempted to accurately compare the chances of these two possible explanations; he concluded that successive accidents are between 4.5 and 9 times more likely than successive murders, so that the a priori odds of Clark's guilt were between 4.5 to 1 and 9 to 1 against.[23]
After the court found that the forensic pathologist who had examined both babies had withheld exculpatory evidence, a higher court quashed Clark's conviction on 29 January 2003.[24]
In experiments, people have been found to prefer individuating information over general information when the former is available.[25][26][27]
In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about the GPA distribution, students tended to ignore them if given descriptive information about the particular student, even if the new descriptive information was obviously of little or no relevance to school performance.[26] This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics.
Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of a simple rule or "heuristic" called representativeness. They argued that many judgments relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category.[26] Kahneman considers base rate neglect to be a specific form of extension neglect.[28] Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people do not use the "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions.[29]
There is considerable debate in psychology on the conditions under which people do or do not appreciate base rate information.[30][31] Researchers in the heuristics-and-biases program have stressed empirical findings showing that people tend to ignore base rates and make inferences that violate certain norms of probabilistic reasoning, such as Bayes' theorem. The conclusion drawn from this line of research was that human probabilistic thinking is fundamentally flawed and error-prone.[32] Other researchers have emphasized the link between cognitive processes and information formats, arguing that such conclusions are not generally warranted.[33][34]
Consider again Example 2 from above. The required inference is to estimate the (posterior) probability that a (randomly picked) driver is drunk, given that the breathalyzer test is positive. Formally, this probability can be calculated using Bayes' theorem, as shown above. However, there are different ways of presenting the relevant information. Consider the following, formally equivalent variant of the problem:
In this case, the relevant numerical information—p(drunk), p(D | drunk), p(D | sober)—is presented in terms of natural frequencies with respect to a certain reference class (see reference class problem). Empirical studies show that people's inferences correspond more closely to Bayes' rule when information is presented this way, helping to overcome base-rate neglect in laypeople[34] and experts.[35] As a consequence, organizations like the Cochrane Collaboration recommend using this kind of format for communicating health statistics.[36] Teaching people to translate these kinds of Bayesian reasoning problems into natural frequency formats is more effective than merely teaching them to plug probabilities (or percentages) into Bayes' theorem.[37] It has also been shown that graphical representations of natural frequencies (e.g., icon arrays, hypothetical outcome plots) help people to make better inferences.[37][38][39][40]
One important reason why natural frequency formats are helpful is that this information format facilitates the required inference because it simplifies the necessary calculations. This can be seen when using an alternative way of computing the required probability p(drunk | D):
$$p(\text{drunk} \mid D) = \frac{N(\text{drunk} \cap D)}{N(D)}$$
where N(drunk ∩ D) denotes the number of drivers that are drunk and get a positive breathalyzer result, and N(D) denotes the total number of cases with a positive breathalyzer result. The equivalence of this equation to the one above follows from the axioms of probability theory, according to which N(drunk ∩ D) = N × p(D | drunk) × p(drunk). Importantly, although this equation is formally equivalent to Bayes' rule, it is not psychologically equivalent. Using natural frequencies simplifies the inference because the required mathematical operation can be performed on natural numbers instead of normalized fractions (i.e., probabilities), because it makes the high number of false positives more transparent, and because natural frequencies exhibit a "nested-set structure".[41][42]
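The two routes can be checked against each other numerically. Since the exact figures of Example 2 lie outside this excerpt, the values below (a 1-in-1000 base rate, perfect sensitivity, and a 5% false-positive rate) should be read as assumed illustrative figures:

```python
# Route 1: Bayes' theorem on normalized probabilities.
p_drunk = 0.001          # assumed base rate
p_D_given_drunk = 1.0    # assumed: the test never misses a drunk driver
p_D_given_sober = 0.05   # assumed false-positive rate
posterior = (p_D_given_drunk * p_drunk) / (
    p_D_given_drunk * p_drunk + p_D_given_sober * (1 - p_drunk))

# Route 2: natural frequencies over a reference class of 1000 drivers.
n_drunk_and_D = 1            # 1 drunk driver, who always tests positive
n_sober_and_D = 999 * 0.05   # ~50 sober drivers who test positive anyway
posterior_freq = n_drunk_and_D / (n_drunk_and_D + n_sober_and_D)

print(posterior, posterior_freq)  # both ~0.0196
```

The second route needs only a count and a single division over (near-)whole numbers, which is one concrete sense in which the natural frequency format simplifies the inference.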
Not every frequency format facilitates Bayesian reasoning.[42][43] Natural frequencies refer to frequency information that results from natural sampling,[44] which preserves base rate information (e.g., the number of drunk drivers encountered when taking a random sample of drivers). This is different from systematic sampling, in which base rates are fixed a priori (e.g., in scientific experiments). In the latter case, it is not possible to infer the posterior probability p(drunk | positive test) by comparing the number of drivers who are drunk and test positive with the total number of people who get a positive breathalyzer result, because base rate information is not preserved and must be explicitly re-introduced using Bayes' theorem.
|
https://en.wikipedia.org/wiki/Base_rate_fallacy
|
BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU.[1] Invented at IBM in 2001, BLEU was one of the first metrics to claim a high correlation with human judgements of quality,[2][3] and remains one of the most popular automated and inexpensive metrics.
Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Intelligibility or grammatical correctness are not taken into account.[4]
BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.[5]
A basic, first attempt at defining the BLEU score would take two arguments: a candidate string $\hat{y}$ and a list of reference strings $(y^{(1)}, \dots, y^{(N)})$. The idea is that $BLEU(\hat{y}; y^{(1)}, \dots, y^{(N)})$ should be close to 1 when $\hat{y}$ is similar to $y^{(1)}, \dots, y^{(N)}$, and close to 0 if not.
As an analogy, the BLEU score is like a language teacher trying to score the quality of a student translation $\hat{y}$ by checking how closely it follows the reference answers $y^{(1)}, \dots, y^{(N)}$.
Since in natural language processing one should evaluate a large set of candidate strings, one must generalize the BLEU score to the case where one has a list of $M$ candidate strings (called a "corpus") $(\hat{y}^{(1)}, \cdots, \hat{y}^{(M)})$ and, for each candidate string $\hat{y}^{(i)}$, a list of reference strings $S_i := (y^{(i,1)}, \dots, y^{(i,N_i)})$.
Given any string $y = y_1 y_2 \cdots y_K$ and any integer $n \geq 1$, we define the set of its n-grams to be $G_n(y) = \{y_1 \cdots y_n,\ y_2 \cdots y_{n+1},\ \cdots,\ y_{K-n+1} \cdots y_K\}$. Note that it is a set of unique elements, not a multiset allowing redundant elements, so that, for example, $G_2(abab) = \{ab, ba\}$.
Given any two strings $s, y$, define the substring count $C(s, y)$ to be the number of appearances of $s$ as a substring of $y$. For example, $C(ab, abcbab) = 2$.
Now, fix a candidate corpus $\hat{S} := (\hat{y}^{(1)}, \cdots, \hat{y}^{(M)})$ and a reference corpus $S = (S_1, \cdots, S_M)$, where each $S_i := (y^{(i,1)}, \dots, y^{(i,N_i)})$.
Define the modified n-gram precision function to be
$$p_n(\hat{S}; S) := \frac{\sum_{i=1}^{M} \sum_{s \in G_n(\hat{y}^{(i)})} \min\bigl(C(s, \hat{y}^{(i)}),\ \max_{y \in S_i} C(s, y)\bigr)}{\sum_{i=1}^{M} \sum_{s \in G_n(\hat{y}^{(i)})} C(s, \hat{y}^{(i)})}$$
The modified n-gram precision, which looks complicated, is merely a straightforward generalization of the prototypical case: one candidate sentence and one reference sentence. In this case, it is
$$p_n(\{\hat{y}\}; \{y\}) = \frac{\sum_{s \in G_n(\hat{y})} \min(C(s, \hat{y}), C(s, y))}{\sum_{s \in G_n(\hat{y})} C(s, \hat{y})}$$
To work up to this expression, we start with the most obvious n-gram count summation:
$$\sum_{s \in G_n(\hat{y})} C(s, y) = \text{number of n-substrings in } \hat{y} \text{ that appear in } y$$
This quantity measures how many n-grams in the reference sentence are reproduced by the candidate sentence. Note that we count the n-substrings, not the n-grams. For example, when $\hat{y} = aba$, $y = abababa$, $n = 2$, all the 2-substrings of $\hat{y}$ (ab and ba) appear in $y$ 3 times each, so the count is 6, not 2.
In the above situation, however, the candidate string is too short: instead of 3 appearances of $ab$, it contains only one, so we add a minimum function to correct for that:
$$\sum_{s \in G_n(\hat{y})} \min(C(s, \hat{y}), C(s, y))$$
This count summation cannot be used to compare between sentences, since it is not normalized: if both the reference and the candidate sentences are long, the count could be big even if the candidate is of very poor quality. So we normalize it:
$$\frac{\sum_{s \in G_n(\hat{y})} \min(C(s, \hat{y}), C(s, y))}{\sum_{s \in G_n(\hat{y})} C(s, \hat{y})}$$
The normalization ensures that the result is always a number in $[0, 1]$, allowing meaningful comparisons between corpora. It is zero if none of the n-substrings in the candidate is in the reference. It is one if every n-gram in the candidate appears in the reference at least as many times as in the candidate. In particular, if the candidate is a substring of the reference, then it is one.
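A minimal sketch of the corpus-level modified n-gram precision might look as follows; the function names are illustrative and this is not the reference implementation:

```python
from collections import Counter

def ngrams(tokens, n):
    """Counts of each n-gram occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidates, references_per_candidate, n):
    """p_n over a corpus: clipped n-gram counts over total n-gram counts."""
    clipped, total = 0, 0
    for cand, refs in zip(candidates, references_per_candidate):
        cand_counts = ngrams(cand, n)
        # For each n-gram, its maximum count in any single reference.
        max_ref = Counter()
        for ref in refs:
            for gram, c in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], c)
        for gram, c in cand_counts.items():
            clipped += min(c, max_ref[gram])  # clip to the reference maximum
            total += c
    return clipped / total if total else 0.0

cand = ["the"] * 7  # the classic degenerate candidate
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(modified_precision([cand], [refs], 1))  # 2/7, not 7/7
```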
The modified n-gram precision unduly gives a high score to candidate strings that are "telegraphic", that is, strings that contain all the n-grams of the reference strings while repeating each of them as few times as possible.
In order to punish candidate strings that are too short, define the brevity penalty to be
$$BP(\hat{S}; S) := e^{-(r/c - 1)^{+}}$$
where $(r/c - 1)^{+} = \max(0, r/c - 1)$ is the positive part of $r/c - 1$.
$c$ is the length of the candidate corpus, that is, $c := \sum_{i=1}^{M} |\hat{y}^{(i)}|$, where $|y|$ is the length of $y$.
$r$ is the effective reference corpus length, that is, $r := \sum_{i=1}^{M} |y^{(i,j)}|$, where $y^{(i,j)} = \arg\min_{y \in S_i} \bigl|\,|y| - |\hat{y}^{(i)}|\,\bigr|$, that is, the sentence from $S_i$ whose length is as close to $|\hat{y}^{(i)}|$ as possible.
There is not a single definition of BLEU, but a whole family of them, parametrized by the weighting vector $w := (w_1, w_2, \cdots)$. It is a probability distribution on $\{1, 2, 3, \cdots\}$, that is, $\sum_{i=1}^{\infty} w_i = 1$ and $w_i \in [0, 1]$ for all $i$.
With a choice of $w$, the BLEU score is
$$BLEU_w(\hat{S}; S) := BP(\hat{S}; S) \cdot \exp\left(\sum_{n=1}^{\infty} w_n \ln p_n(\hat{S}; S)\right)$$
In words, it is a weighted geometric mean of all the modified n-gram precisions, multiplied by the brevity penalty. The weighted geometric mean, rather than the weighted arithmetic mean, is used to strongly favor candidate corpora that are simultaneously good according to multiple n-gram precisions.
The most typical choice, the one recommended in the original paper, is $w_1 = \cdots = w_4 = \tfrac{1}{4}$.[1]
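Given the modified precisions, the rest of the score is a brevity penalty and a weighted geometric mean. A sketch, with arbitrary illustrative inputs rather than real system outputs:

```python
import math

def bleu(p, cand_len, ref_len, weights=(0.25, 0.25, 0.25, 0.25)):
    # Brevity penalty: 1 if the candidate corpus is at least as long as
    # the effective reference length r, else exp(1 - r/c).
    bp = 1.0 if cand_len >= ref_len else math.exp(1 - ref_len / cand_len)
    if any(x == 0 for x in p):
        return 0.0  # a zero precision zeroes the geometric mean
    return bp * math.exp(sum(w * math.log(x) for w, x in zip(weights, p)))

# Illustrative precisions p_1..p_4 and corpus lengths.
print(bleu((0.8, 0.6, 0.5, 0.4), cand_len=100, ref_len=110))  # ~0.50
```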
This is illustrated in the following example from Papineni et al. (2002), with the example sentences restored from that paper:
Candidate: the the the the the the the
Reference 1: the cat is on the mat
Reference 2: there is a cat on the mat
Of the seven words in the candidate translation, all of them appear in the reference translations. Thus the candidate text is given a unigram precision of
$$P = \frac{m}{w_t} = \frac{7}{7} = 1$$
where $m$ is the number of words from the candidate that are found in the reference, and $w_t$ is the total number of words in the candidate. This is a perfect score, despite the fact that the candidate translation above retains little of the content of either of the references.
The modification that BLEU makes is fairly straightforward. For each word in the candidate translation, the algorithm takes its maximum total count, $m_{max}$, in any of the reference translations. In the example above, the word "the" appears twice in reference 1 and once in reference 2. Thus $m_{max} = 2$.
For the candidate translation, the count $m_w$ of each word is clipped to a maximum of $m_{max}$ for that word. In this case, "the" has $m_w = 7$ and $m_{max} = 2$, thus $m_w$ is clipped to 2. These clipped counts $m_w$ are then summed over all distinct words in the candidate.
This sum is then divided by the total number of unigrams in the candidate translation. In the above example, the modified unigram precision score would be:
$$P = \frac{2}{7}$$
In practice, however, using individual words as the unit of comparison is not optimal. Instead, BLEU computes the same modified precision metric using n-grams. The length which has the "highest correlation with monolingual human judgements"[6] was found to be four. The unigram scores are found to account for the adequacy of the translation, i.e. how much information is retained. The longer n-gram scores account for the fluency of the translation, or to what extent it reads like "good English".
An example of a candidate translation for the same references as above might be:
Candidate: the cat
In this example, the modified unigram precision would be
$$P = \frac{1 + 1}{2} = 1$$
as the word "the" and the word "cat" appear once each in the candidate, and the total number of words is two. The modified bigram precision would be $1/1$, as the bigram "the cat" appears once in the candidate. It has been pointed out that precision is usually twinned with recall to overcome this problem,[7] as the unigram recall of this example would be $3/6$ (against reference 1) or $2/7$ (against reference 2). The problem is that, as there are multiple reference translations, a bad translation could easily have an inflated recall, such as a translation which consists of all the words in each of the references.[8]
To produce a score for the whole corpus, the modified precision scores for the segments are combined using the geometric mean, multiplied by a brevity penalty to prevent very short candidates from receiving too high a score. Let $r$ be the total length of the reference corpus and $c$ the total length of the translation corpus. If $c \leq r$, the brevity penalty applies, defined to be $e^{(1 - r/c)}$. (In the case of multiple reference sentences, $r$ is taken to be the sum of the lengths of the sentences whose lengths are closest to the lengths of the candidate sentences. However, in the version of the metric used by NIST evaluations prior to 2009, the shortest reference sentence had been used instead.)
iBLEU is an interactive version of BLEU that allows a user to visually examine the BLEU scores obtained by the candidate translations. It also allows comparing two different systems in a visual and interactive manner which is useful for system development.[9]
BLEU has frequently been reported as correlating well with human judgement,[10][11][12] and remains a benchmark for the assessment of any new evaluation metric. There are, however, a number of criticisms that have been voiced. It has been noted that, although in principle capable of evaluating translations of any language, BLEU cannot, in its present form, deal with languages lacking word boundaries.[13] Although designed to be used with several reference translations, in practice it is often used with only a single one.[2] BLEU is also infamously dependent on the tokenization technique: scores achieved with different tokenizations are incomparable, which is often overlooked; to improve reproducibility and comparability, the SacreBLEU variant was designed.[2]
It has been argued that although BLEU has significant advantages, there is no guarantee that an increase in BLEU score is an indicator of improved translation quality.[14]
|
https://en.wikipedia.org/wiki/BLEU
|
Evaluation of a binary classifier typically assigns a numerical value, or values, to a classifier that represent its accuracy. An example is error rate, which measures how frequently the classifier makes a mistake.
There are many metrics that can be used; different fields have different preferences. For example, in medicine sensitivity and specificity are often used, while in computer science precision and recall are preferred.
An important distinction is between metrics that are independent of the prevalence or skew (how often each class occurs in the population), and metrics that depend on the prevalence – both types are useful, but they have very different properties.
Often, evaluation is used to compare two methods of classification, so that one can be adopted and the other discarded. Such comparisons are more directly achieved by a form of evaluation that results in a single unitary metric rather than a pair of metrics.
Given a data set, a classification (the output of a classifier on that set) gives two numbers: the number of positives and the number of negatives, which add up to the total size of the set. To evaluate a classifier, one compares its output to another reference classification – ideally a perfect classification, but in practice the output of another gold standard test – and cross-tabulates the data into a 2×2 contingency table, comparing the two classifications. One then evaluates the classifier relative to the gold standard by computing summary statistics of these 4 numbers. Generally these statistics will be scale invariant (scaling all the numbers by the same factor does not change the output), to make them independent of population size, which is achieved by using ratios of homogeneous functions, most simply homogeneous linear or homogeneous quadratic functions.
Say we test some people for the presence of a disease. Some of these people have the disease, and our test correctly says they are positive. They are called true positives (TP). Some have the disease, but the test incorrectly claims they don't. They are called false negatives (FN). Some don't have the disease, and the test says they don't – true negatives (TN). Finally, there might be healthy people who have a positive test result – false positives (FP). These can be arranged into a 2×2 contingency table (confusion matrix), conventionally with the test result on the vertical axis and the actual condition on the horizontal axis.
These numbers can then be totaled, yielding both a grand total and marginal totals. Totaling the entire table, the number of true positives, false negatives, true negatives, and false positives add up to 100% of the set. Totaling the columns (adding vertically), the number of true positives and false positives add up to 100% of the test positives, and likewise for negatives. Totaling the rows (adding horizontally), the number of true positives and false negatives add up to 100% of the condition positives (conversely for negatives). The basic marginal ratio statistics are obtained by dividing the 2×2 = 4 values in the table by the marginal totals (either rows or columns), yielding 2 auxiliary 2×2 tables, for a total of 8 ratios. These ratios come in 4 complementary pairs, each pair summing to 1, and so each of these derived 2×2 tables can be summarized as a pair of 2 numbers, together with their complements. Further statistics can be obtained by taking ratios of these ratios, ratios of ratios, or more complicated functions.
The contingency table and the most common derived ratios are summarized below; see sequel for details.
Note that the rows correspond to the condition actually being positive or negative (or classified as such by the gold standard), and the associated statistics are prevalence-independent, while the columns correspond to the test being positive or negative, and the associated statistics are prevalence-dependent. There are analogous likelihood ratios for prediction values, but these are less commonly used, and not depicted here.
Often accuracy is evaluated with a pair of metrics composed in a standard pattern.
The fundamental prevalence-independent statistics are sensitivity and specificity.
Sensitivity or True Positive Rate (TPR), also known as recall, is the proportion of people that tested positive and are positive (True Positive, TP) of all the people that actually are positive (Condition Positive, CP = TP + FN). It can be seen as the probability that the test is positive given that the patient is sick. With higher sensitivity, fewer actual cases of disease go undetected (or, in the case of factory quality control, fewer faulty products go to the market).
Specificity (SPC) or True Negative Rate (TNR) is the proportion of people that tested negative and are negative (True Negative, TN) of all the people that actually are negative (Condition Negative, CN = TN + FP). As with sensitivity, it can be looked at as the probability that the test result is negative given that the patient is not sick. With higher specificity, fewer healthy people are labeled as sick (or, in the factory case, fewer good products are discarded).
The relationship between sensitivity and specificity, as well as the performance of the classifier, can be visualized and studied using the Receiver Operating Characteristic (ROC) curve.
In theory, sensitivity and specificity are independent in the sense that it is possible to achieve 100% in both (such as in the red/blue ball example given above). In more practical, less contrived instances, however, there is usually a trade-off, such that they are inversely proportional to one another to some extent. This is because we rarely measure the actual thing we would like to classify; rather, we generally measure an indicator of the thing we would like to classify, referred to as a surrogate marker. The reason why 100% is achievable in the ball example is that redness and blueness are determined by directly detecting redness and blueness. However, indicators are sometimes compromised, such as when non-indicators mimic indicators or when indicators are time-dependent, only becoming evident after a certain lag time. The following example of a pregnancy test will make use of such an indicator.
Modern pregnancy tests do not use the pregnancy itself to determine pregnancy status; rather, they detect human chorionic gonadotropin (hCG), present in the urine of gravid females, as a surrogate marker to indicate that a woman is pregnant. Because hCG can also be produced by a tumor, the specificity of modern pregnancy tests cannot be 100% (because false positives are possible). Also, because hCG is present in the urine in such small concentrations after fertilization and early embryogenesis, the sensitivity of modern pregnancy tests cannot be 100% (because false negatives are possible).
In addition to sensitivity and specificity, the performance of a binary classification test can be measured with positive predictive value (PPV), also known as precision, and negative predictive value (NPV). The positive predictive value answers the question "If the test result is positive, how well does that predict an actual presence of disease?". It is calculated as TP/(TP + FP); that is, it is the proportion of true positives out of all positive results. The negative predictive value is the same, but for negatives, naturally.
Prevalence has a significant impact on prediction values. As an example, suppose there is a test for a disease with 99% sensitivity and 99% specificity. If 2000 people are tested and the prevalence (in the sample) is 50%, 1000 of them are sick and 1000 of them are healthy. Thus about 990 true positives and 990 true negatives are likely, with 10 false positives and 10 false negatives. The positive and negative prediction values would be 99%, so there can be high confidence in the result.
However, if the prevalence is only 5%, so of the 2000 people only 100 are really sick, then the prediction values change significantly. The likely result is 99 true positives, 1 false negative, 1881 true negatives and 19 false positives. Of the 19 + 99 people tested positive, only 99 really have the disease – that means, intuitively, that given that a patient's test result is positive, there is only an 84% chance that they really have the disease. On the other hand, given that the patient's test result is negative, there is only 1 chance in 1882, or a 0.05% probability, that the patient has the disease despite the test result.
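Both scenarios can be reproduced with a short sketch (the function name and layout are illustrative):

```python
def predictive_values(sensitivity, specificity, prevalence, n=2000):
    sick = n * prevalence
    healthy = n - sick
    tp = sick * sensitivity        # true positives
    fn = sick - tp                 # false negatives
    tn = healthy * specificity     # true negatives
    fp = healthy - tn              # false positives
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

print(predictive_values(0.99, 0.99, 0.50))  # (0.99, 0.99)
print(predictive_values(0.99, 0.99, 0.05))  # (~0.839, ~0.9995)
```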
Precision and recall can be interpreted as (estimated) conditional probabilities:
Precision is given by $P(C = P \mid \hat{C} = P)$ while recall is given by $P(\hat{C} = P \mid C = P)$,[9] where $\hat{C}$ is the predicted class and $C$ is the actual class.
Both quantities are therefore connected by Bayes' theorem.
There are various relationships between these ratios.
If the prevalence, sensitivity, and specificity are known, the positive predictive value can be obtained from the following identity:
$$PPV = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}$$
If the prevalence, sensitivity, and specificity are known, the negative predictive value can be obtained from the following identity:
$$NPV = \frac{\text{specificity} \times (1 - \text{prevalence})}{\text{specificity} \times (1 - \text{prevalence}) + (1 - \text{sensitivity}) \times \text{prevalence}}$$
In addition to the paired metrics, there are also unitary metrics that give a single number to evaluate the test.
Perhaps the simplest statistic is accuracy or fraction correct (FC), which measures the fraction of all instances that are correctly categorized; it is the ratio of the number of correct classifications to the total number of classifications (correct or incorrect): (TP + TN)/total population = (TP + TN)/(TP + TN + FP + FN). As such, it compares estimates of pre- and post-test probability. In total ignorance, one can compare a rule to flipping a coin (p0 = 0.5). This measure is prevalence-dependent. If 90% of people with COVID symptoms don't have COVID, the prior probability P(−) is 0.9, and the simple rule "classify all such patients as COVID-free" would be 90% accurate. Diagnosis should be better than that. One can construct a "one-proportion z-test" with p0 as max(priors) = max(P(−), P(+)) for a diagnostic method hoping to beat a simple rule that always predicts the most likely outcome. Here, the hypotheses are "H0: p ≤ 0.9 vs. Ha: p > 0.9", rejecting H0 for large values of z. One diagnostic rule could be compared to another if the other's accuracy is known and substituted for p0 when calculating the z statistic. If the other's accuracy is not known and is instead calculated from data, an accuracy comparison test could be made using a "two-proportion z-test, pooled for H0: p1 = p2".
Less commonly used is the complementary statistic, the fraction incorrect (FiC): FC + FiC = 1, or (FP + FN)/(TP + TN + FP + FN) – this is the sum of the antidiagonal, divided by the total population. Cost-weighted fractions incorrect could compare expected costs of misclassification for different methods.
The diagnostic odds ratio (DOR) can be a more useful overall metric, which can be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN), or indirectly as a ratio of ratios (the ratio of the likelihood ratios, which are themselves ratios of true rates or of prediction values). This has a useful interpretation – as an odds ratio – and is prevalence-independent. The likelihood ratio is generally considered to be prevalence-independent and is easily interpreted as the multiplier to turn prior probabilities into posterior probabilities.
An F-score is a combination of the precision and the recall, providing a single score. There is a one-parameter family of statistics, with parameter β, which determines the relative weights of precision and recall. The traditional or balanced F-score (F1 score) is the harmonic mean of precision and recall:
$$F_1 = 2 \cdot \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}$$
F-scores do not take the true negative rate into account and, therefore, are more suited to information retrieval and information extraction evaluation, where the true negatives are innumerable. Instead, measures such as the phi coefficient, Matthews correlation coefficient, informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.[10][11] As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are markedness (deltap) and informedness (Youden's J statistic or deltap').[12]
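A compact sketch of several unitary metrics over a single confusion matrix, using the counts from the 5%-prevalence example above (names illustrative):

```python
import math

def unitary_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    dor = (tp * tn) / (fp * fn)          # diagnostic odds ratio
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": accuracy, "F1": f1, "DOR": dor, "MCC": mcc}

print(unitary_metrics(tp=99, fp=19, fn=1, tn=1881))
# accuracy ~0.99, F1 ~0.91, DOR ~9801, MCC ~0.91
```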
Hand has highlighted the importance of choosing an appropriate method of evaluation. However, of the many different methods for evaluating the accuracy of a classifier, there is no general method for determining which method should be used in which circumstances. Different fields have taken different approaches.[13]
Cullerne Bown has distinguished three basic approaches to evaluation:
° Mathematical - such as the Matthews Correlation Coefficient, in which both kinds of error are axiomatically treated as equally problematic;
° Cost-benefit - in which a currency is adopted (e.g. money or Quality Adjusted Life Years) and values assigned to errors and successes on the basis of empirical measurement;
° Judgemental - in which a human judgement is made about the relative importance of the two kinds of error; typically this starts by adopting a pair of indicators such as sensitivity and specificity, precision and recall or positive predictive value and negative predictive value.
In the judgemental case, he has provided a flow chart for determining which pair of indicators should be used when, and consequently how to choose between the Receiver Operating Characteristic and the Precision-Recall Curve.[14]
Often, we want to evaluate not a specific classifier working in a specific way but an underlying technology. Typically, the technology can be adjusted through altering the threshold of a score function, the threshold determining whether the result is a positive or negative. For such evaluations a useful single measure is "area under the ROC curve", or AUC.
Apart from accuracy, binary classifiers can be assessed in many other ways, for example in terms of their speed or cost.
Probabilistic classification models go beyond providing binary outputs and instead produce probability scores for each class. These models are designed to assess the likelihood or probability of an instance belonging to different classes. In the context of evaluating probabilistic classifiers, alternative evaluation metrics have been developed to properly assess the performance of these models. These metrics take into account the probabilistic nature of the classifier's output and provide a more comprehensive assessment of its effectiveness in assigning accurate probabilities to different classes. These evaluation metrics aim to capture the degree of calibration, discrimination, and overall accuracy of the probabilistic classifier's predictions.
Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of retrieved documents that are correctly retrieved (true positives divided by true positives plus false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of relevant documents that are correctly retrieved (true positives divided by true positives plus false negatives). Less commonly, the metric of accuracy is used, defined as the fraction of documents correctly classified out of all documents (true positives plus true negatives divided by true positives plus true negatives plus false positives plus false negatives).
None of these metrics take into account the ranking of results. Ranking is very important for web search engines because readers seldom go past the first page of results, and there are too many documents on the web to manually classify all of them as to whether they should be included or excluded from a given search. Adding a cutoff at a particular number of results takes ranking into account to some degree. The measure precision at k, for example, is a measure of precision looking only at the top ten (k = 10) search results. More sophisticated metrics, such as discounted cumulative gain, take into account each individual ranking, and are more commonly used where this is important.
|
https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers#Single_metrics
|
METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It also has several features that are not found in other metrics, such as stemming and synonymy matching, along with the standard exact word matching. The metric was designed to fix some of the problems found in the more popular BLEU metric, and also to produce good correlation with human judgement at the sentence or segment level. This differs from the BLEU metric in that BLEU seeks correlation at the corpus level.
Results have been presented which give correlation of up to 0.964 with human judgement at the corpus level, compared to BLEU's achievement of 0.817 on the same data set. At the sentence level, the maximum correlation with human judgement achieved was 0.403.[1]
As with BLEU, the basic unit of evaluation is the sentence. The algorithm first creates an alignment between two sentences: the candidate translation string and the reference translation string. The alignment is a set of mappings between unigrams; a mapping can be thought of as a line between a unigram in one string and a unigram in another string. The constraint is that every unigram in the candidate translation must map to zero or one unigram in the reference. Mappings are selected to produce an alignment as defined above; if there are two alignments with the same number of mappings, the alignment with the fewest crosses, that is, with fewer intersections between two mappings, is chosen. Stages are run consecutively and each stage only adds to the alignment those unigrams which have not been matched in previous stages. Once the final alignment is computed, the score is computed as follows. Unigram precision P is calculated as:
$$P = \frac{m}{w_t}$$
where $m$ is the number of unigrams in the candidate translation that are also found in the reference translation, and $w_t$ is the number of unigrams in the candidate translation. Unigram recall R is computed as:
$$R = \frac{m}{w_r}$$
where $m$ is as above, and $w_r$ is the number of unigrams in the reference translation. Precision and recall are combined using the harmonic mean in the following fashion, with recall weighted 9 times more than precision:
$$F_{mean} = \frac{10PR}{R + 9P}$$
The measures that have been introduced so far only account for congruity with respect to single words, but not with respect to larger segments that appear in both the reference and the candidate sentence. In order to take these into account, longer n-gram matches are used to compute a penalty p for the alignment. The more mappings there are that are not adjacent in the reference and the candidate sentence, the higher the penalty will be.
In order to compute this penalty, unigrams are grouped into the fewest possible chunks, where a chunk is defined as a set of unigrams that are adjacent in the hypothesis and in the reference. The longer the adjacent mappings between the candidate and the reference, the fewer chunks there are. A translation that is identical to the reference will give just one chunk. The penalty p is computed as follows:
$$p = 0.5 \left( \frac{c}{u_m} \right)^3$$
where $c$ is the number of chunks, and $u_m$ is the number of unigrams that have been mapped. The final score for a segment is calculated as
$$M = F_{mean}(1 - p)$$
The penalty has the effect of reducing the $F_{mean}$ by up to 50% if there are no bigram or longer matches.
To calculate a score over a whole corpus, or collection of segments, the aggregate values for P, R and p are taken and then combined using the same formula. The algorithm also works for comparing a candidate translation against more than one reference translation. In this case the algorithm compares the candidate against each of the references and selects the highest score.
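The per-segment formulas can be sketched end to end. The snippet below uses exact matching only (no stemming or synonymy) and a simplified greedy in-order alignment, so it illustrates the formulas rather than reproducing the official METEOR aligner:

```python
def meteor_sketch(candidate, reference):
    # Greedily map each candidate unigram to an unused reference position.
    used, alignment = set(), []
    for i, word in enumerate(candidate):
        for j, ref_word in enumerate(reference):
            if j not in used and word == ref_word:
                alignment.append((i, j))
                used.add(j)
                break
    m = len(alignment)
    if m == 0:
        return 0.0
    P, R = m / len(candidate), m / len(reference)
    f_mean = 10 * P * R / (R + 9 * P)
    # Chunks: maximal runs of mappings adjacent in both strings.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)

print(meteor_sketch("the cat sat on the mat".split(),
                    "on the mat sat the cat".split()))
# 0.5: P = R = 1, but the scrambled word order yields 6 chunks,
# triggering the maximal 50% penalty.
```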
|
https://en.wikipedia.org/wiki/METEOR
|
NIST is a method for evaluating the quality of text which has been translated using machine translation. Its name comes from the US National Institute of Standards and Technology.
It is based on the BLEU metric, but with some alterations. Where BLEU simply calculates n-gram precision, adding equal weight to each one, NIST also calculates how informative a particular n-gram is. That is to say, when a correct n-gram is found, the rarer that n-gram is, the more weight it will be given.[1]
For example, if the bigram "on the" is correctly matched, it will receive lower weight than the correct matching of bigram "interesting calculations", as this is less likely to occur.
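The information weight behind this is computed from the reference corpus as Info(w1...wn) = log2(count(w1...wn−1) / count(w1...wn)), so rarer continuations earn more weight. A sketch of that computation (names are illustrative, and this is not the official NIST scorer):

```python
import math
from collections import Counter

def nist_info(reference_tokens, max_n=5):
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(reference_tokens) - n + 1):
            counts[tuple(reference_tokens[i:i + n])] += 1
    info = {}
    for gram, c in counts.items():
        # Parent count: the (n-1)-gram prefix, or the corpus size for unigrams.
        parent = counts[gram[:-1]] if len(gram) > 1 else len(reference_tokens)
        info[gram] = math.log2(parent / c)
    return info
```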
NIST also differs from BLEU in its calculation of the brevity penalty, insofar as small variations in translation length do not impact the overall score as much.
|
https://en.wikipedia.org/wiki/NIST_(metric)
|
A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier model (it can be used for multi-class classification as well) at varying threshold values. ROC analysis is commonly applied in the assessment of diagnostic test performance in clinical epidemiology.
The ROC curve is the plot of the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting.
The ROC can also be thought of as a plot of the statistical power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, these can be thought of as estimators of the underlying quantities). The ROC curve is thus the sensitivity as a function of the false positive rate.
Given that the probability distributions for both true positive and false positive are known, the ROC curve is obtained as the cumulative distribution function (CDF, the area under the probability distribution from $-\infty$ to the discrimination threshold) of the detection probability on the y-axis versus the CDF of the false-positive probability on the x-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to the cost/benefit analysis of diagnostic decision making.
The true-positive rate is also known as sensitivity or probability of detection.[1] The false-positive rate is also known as the probability of false alarm[1] and equals (1 − specificity).
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.[2]
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields, starting in 1941, which led to its name ("receiver operating characteristic").[3]
It was soon introduced to psychology to account for the perceptual detection of stimuli. ROC analysis has been used in medicine, radiology, biometrics, forecasting of natural hazards,[4] meteorology,[5] model performance assessment,[6] and other areas for many decades, and is increasingly used in machine learning and data mining research.
A classification model (classifier or diagnosis[7]) is a mapping of instances between certain classes/groups. Because the classifier or diagnosis result can be an arbitrary real value (continuous output), the classifier boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measure). Or it can be a discrete class label, indicating one of the classes.
Consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however, if the actual value is n then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction outcome and the actual value are n, and a false negative (FN) is when the prediction outcome is n while the actual value is p.
To get an appropriate example in a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive, but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
Consider an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:
Several evaluation "metrics" can be derived from the contingency table. To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR defines how many correct positive results occur among all positive samples available during the test. FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test.
A ROC space is defined by FPR and TPR as x and y axes, respectively, which depicts relative trade-offs between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space.
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A random guess would give a point along a diagonal line (the so-called line of no-discrimination) from the bottom left to the top right corner (regardless of the positive and negative base rates).[16] An intuitive example of random guessing is a decision by flipping coins. As the size of the sample increases, a random classifier's ROC point tends towards the diagonal line. In the case of a balanced coin, it will tend to the point (0.5, 0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random); points below the line represent bad results (worse than random). Note that the output of a consistently bad predictor could simply be inverted to obtain a good predictor.
Consider four prediction results from 100 positive and 100 negative instances:
Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5, 0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table. Although the original C method has negative predictive power, simply reversing its decisions leads to a new predictive method C′ which has positive predictive power. When the C method predicts p or n, the C′ method would predict n or p, respectively. In this manner, the C′ test would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts, but the distance from the random guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line.
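The mirroring of C into C′ amounts to swapping TP with FN and FP with TN, which reflects the ROC point through (0.5, 0.5). A few lines make this concrete (the counts below are illustrative, not those of the table above):

```python
def rates(tp, fp, fn, tn):
    return fp / (fp + tn), tp / (tp + fn)   # (FPR, TPR)

tp, fp, fn, tn = 20, 60, 80, 40   # below the diagonal: worse than chance
print(rates(tp, fp, fn, tn))      # (0.6, 0.2)
print(rates(fn, tn, tp, fp))      # predictions inverted: (0.4, 0.8)
```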
In binary classification, the class prediction for each instance is often made based on a continuous random variable $X$, which is a "score" computed for the instance (e.g. the estimated probability in logistic regression). Given a threshold parameter $T$, the instance is classified as "positive" if $X > T$, and "negative" otherwise. $X$ follows a probability density $f_1(x)$ if the instance actually belongs to class "positive", and $f_0(x)$ otherwise. Therefore, the true positive rate is given by $\mathrm{TPR}(T) = \int_T^{\infty} f_1(x)\,dx$ and the false positive rate is given by $\mathrm{FPR}(T) = \int_T^{\infty} f_0(x)\,dx$.
The ROC curve plots parametrically $\mathrm{TPR}(T)$ versus $\mathrm{FPR}(T)$ with $T$ as the varying parameter.
For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold, which will in turn change the false positive rate. Increasing the threshold results in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have.
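A sketch of this construction, drawing scores from the two Gaussians of the example (the unit standard deviations are an assumption, since the text gives only the means):

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 10_000)   # protein levels, diseased
neg = rng.normal(1.0, 1.0, 10_000)   # protein levels, healthy

thresholds = np.linspace(-3.0, 6.0, 200)
tpr = [(pos > t).mean() for t in thresholds]
fpr = [(neg > t).mean() for t in thresholds]
# Plotting fpr (x) against tpr (y) traces the ROC curve; raising the
# threshold moves the operating point toward the lower left.
```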
Several studies criticize certain applications of the ROC curve and its area under the curve as measurements for assessing binary classifications when they do not capture the information relevant to the application.[18][17][19][20][21]
The main criticism of the ROC curve described in these studies regards the incorporation of areas with low sensitivity and low specificity (both lower than 0.5) in the calculation of the total area under the curve (AUC).[19]
According to the authors of these studies, that portion of the area under the curve (with low sensitivity and low specificity) corresponds to confusion matrices where binary predictions obtain bad results, and therefore should not be included in the assessment of overall performance.
Moreover, that portion of AUC indicates a space with a high or low confusion-matrix threshold which is rarely of interest for scientists performing a binary classification in any field.[19]
Another criticism of the ROC and its area under the curve is that they say nothing about precision and negative predictive value.[17]
A high ROC AUC, such as 0.9 for example, might correspond to low values of precision and negative predictive value, such as 0.2 and 0.1 in the [0, 1] range.
If one performed a binary classification, obtained an ROC AUC of 0.9 and decided to focus only on this metric, they might overoptimistically believe their binary test was excellent. However, if this person took a look at the values of precision and negative predictive value, they might discover their values are low.
The ROC AUC summarizes sensitivity and specificity, but does not inform regarding precision and negative predictive value.[17]
Sometimes, the ROC is used to generate a summary statistic. Common versions are:
However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.
The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').[26]In other words, when given one randomly selected positive instance and one randomly selected negative instance, AUC is the probability that the classifier will be able to tell which one is which.
This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed, as a large threshold $T$ has a lower value on the x-axis)
$$A = \int_{-\infty}^{\infty} \mathrm{TPR}(T)\, f_0(T)\, dT = P(X_1 > X_0)$$
where $X_1$ is the score for a positive instance and $X_0$ is the score for a negative instance, and $f_0$ and $f_1$ are the probability densities as defined in the previous section.
If $X_0$ and $X_1$ follow two Gaussian distributions, then $A = \Phi\left( (\mu_1 - \mu_0) / \sqrt{\sigma_1^2 + \sigma_0^2} \right)$.
It can be shown that the AUC is closely related to the Mann–Whitney U,[27][28] which tests whether positives are ranked higher than negatives. For a predictor $f$, an unbiased estimator of its AUC can be expressed by the following Wilcoxon–Mann–Whitney statistic:[29]
$$\mathrm{AUC}(f) = \frac{\sum_{t_0 \in \mathcal{D}^0} \sum_{t_1 \in \mathcal{D}^1} \mathbf{1}[f(t_0) < f(t_1)]}{|\mathcal{D}^0| \cdot |\mathcal{D}^1|}$$
where $\mathbf{1}[f(t_0) < f(t_1)]$ denotes an indicator function which returns 1 if $f(t_0) < f(t_1)$ and 0 otherwise; $\mathcal{D}^0$ is the set of negative examples, and $\mathcal{D}^1$ is the set of positive examples.
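This estimator is direct to implement; a vectorized sketch (strict inequality, so ties are ignored as in the indicator above):

```python
import numpy as np

def auc_mann_whitney(pos_scores, neg_scores):
    # Fraction of (negative, positive) pairs where the positive scores higher.
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return (neg < pos).mean()

print(auc_mann_whitney([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ~ 0.889
```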
In the context of credit scoring, a rescaled version of AUC is often used:
$$G_1 = 2 \cdot \mathrm{AUC} - 1$$
$G_1$ is referred to as the Gini index or Gini coefficient,[30] but it should not be confused with the measure of statistical dispersion that is also called the Gini coefficient. $G_1$ is a special case of Somers' D.
It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC) as any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment.[31]It is also possible to invert concavities – just as in the figure the worse solution can be reflected to become a better solution; concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.[32]
The machine learning community most often uses the ROC AUC statistic for model comparison.[33] This practice has been questioned because AUC estimates are quite noisy and suffer from other problems.[34][35][36] Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution,[37] and AUC has been linked to a number of other performance metrics such as the Brier score.[38]
Another problem with ROC AUC is that reducing the ROC curve to a single number ignores the fact that the curve is about the tradeoffs between the different systems or performance points plotted, not the performance of an individual system; it also ignores the possibility of concavity repair. Related alternative measures such as Informedness[citation needed] or DeltaP are therefore recommended.[23][39] For a single prediction point, these measures are essentially equivalent to the Gini coefficient, with DeltaP' = Informedness = 2·AUC − 1, whilst DeltaP = Markedness represents the dual (viz. predicting the prediction from the real class), and their geometric mean is the Matthews correlation coefficient.[citation needed]
Whereas ROC AUC varies between 0 and 1 (an uninformative classifier yields 0.5), the alternative measures known as Informedness,[citation needed] Certainty[23] and Gini coefficient (in the single-parameterization or single-system case)[citation needed] all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response.[40] Bringing chance performance to 0 allows these alternative scales to be interpreted as Kappa statistics. Informedness has been shown to have desirable characteristics for machine learning versus other common definitions of Kappa such as Cohen Kappa and Fleiss Kappa.[citation needed][41]
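To make these relationships concrete, here is a minimal sketch (not from the source) computing Informedness, Markedness and their geometric mean from a two-by-two contingency table; the counts reuse the threshold-74 example from the TOC discussion later in this article:

```python
# Informedness (DeltaP'), Markedness (DeltaP), and their geometric mean,
# the Matthews correlation coefficient, from a 2x2 contingency table.
import math

def point_metrics(tp, fn, fp, tn):
    informedness = tp / (tp + fn) + tn / (tn + fp) - 1  # TPR + TNR - 1
    markedness = tp / (tp + fp) + tn / (tn + fn) - 1    # PPV + NPV - 1
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return informedness, markedness, mcc

inf, mark, mcc = point_metrics(tp=3, fn=7, fp=4, tn=16)
print(inf, mark, mcc)          # 0.1, ~0.124, ~0.111
print(math.sqrt(inf * mark))   # equals mcc: the geometric mean relation
```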
Sometimes it can be more useful to look at a specific region of the ROC curve rather than at the whole curve. It is possible to compute partial AUC.[42] For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests.[43] Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.[44]
The ROC area under the curve is also called the c-statistic or c statistic.[45]
The Total Operating Characteristic (TOC) also characterizes diagnostic ability while revealing more information than the ROC. For each threshold, ROC reveals two ratios, TP/(TP + FN) and FP/(FP + TN): that is, hits/(hits + misses) and false alarms/(false alarms + correct rejections). On the other hand, TOC shows the total information in the contingency table for each threshold.[46] The TOC method reveals all of the information that the ROC method provides, plus additional important information that ROC does not reveal, i.e. the size of every entry in the contingency table for each threshold. TOC also provides the popular AUC of the ROC.[47]
These figures are the TOC and ROC curves using the same data and thresholds.
Consider the point that corresponds to a threshold of 74. The TOC curve shows the number of hits, which is 3, and hence the number of misses, which is 7. Additionally, the TOC curve shows that the number of false alarms is 4 and the number of correct rejections is 16. At any given point in the ROC curve, it is possible to glean values for the ratios false alarms/(false alarms + correct rejections) and hits/(hits + misses). For example, at threshold 74, it is evident that the x coordinate is 0.2 and the y coordinate is 0.3. However, these two values are insufficient to reconstruct all entries of the underlying two-by-two contingency table.
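A minimal sketch of this asymmetry, using the counts quoted above: the TOC's full table determines the ROC point, but the ROC point alone cannot recover the four counts.

```python
# TOC-to-ROC at threshold 74: the full contingency table yields the
# single ROC coordinate (FPR, TPR), but not the other way around.
hits, misses, false_alarms, correct_rejections = 3, 7, 4, 16

tpr = hits / (hits + misses)                              # y = 0.3
fpr = false_alarms / (false_alarms + correct_rejections)  # x = 0.2
print(fpr, tpr)
# Knowing only (0.2, 0.3) leaves the class totals (10 positives,
# 20 negatives) undetermined, so the four entries cannot be recovered.
```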
An alternative to the ROC curve is the detection error tradeoff (DET) graph, which plots the false negative rate (missed detections) vs. the false positive rate (false alarms) on non-linearly transformed x- and y-axes. The transformation function is the quantile function of the normal distribution, i.e., the inverse of the cumulative normal distribution. It is, in fact, the same transformation as zROC, below, except that the complement of the hit rate – the miss rate or false negative rate – is used. This alternative spends more graph area on the region of interest. Most of the ROC area is of little interest; one primarily cares about the region tight against the y-axis and the top left corner, which, because the DET plot uses miss rate instead of its complement, the hit rate, appears as the lower left corner. Furthermore, DET graphs have the useful property of linearity and a linear threshold behavior for normal distributions.[48] The DET plot is used extensively in the automatic speaker recognition community, where the name DET was first used. The analysis of ROC performance in graphs with this warping of the axes was used by psychologists in perception studies around the middle of the 20th century,[citation needed] where it was dubbed "double probability paper".[49]
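A minimal plotting sketch (the operating points are made up for illustration): transforming both error rates with the normal quantile function yields the DET axes, on which normally distributed scores trace a straight line.

```python
# DET plot: probit-transform FPR and FNR so that Gaussian score
# distributions produce a straight line.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

fpr = np.array([0.001, 0.01, 0.05, 0.20, 0.50])  # hypothetical false alarm rates
fnr = np.array([0.60, 0.35, 0.18, 0.07, 0.02])   # hypothetical miss rates

plt.plot(norm.ppf(fpr), norm.ppf(fnr), marker="o")
plt.xlabel("false positive rate (probit scale)")
plt.ylabel("false negative rate (probit scale)")
plt.title("DET curve")
plt.show()
```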
If a standard score is applied to the ROC curve, the curve will be transformed into a straight line.[50] This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0. The normal distributions of targets (studied objects that the subjects need to recall) and lures (non-studied objects that the subjects attempt to recall) are what cause the zROC to be linear.
The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than that of the lure strength distribution, the slope will be smaller than 1.0. In most studies, the zROC curve slopes fall below 1, usually between 0.5 and 0.9.[51] Many experiments yielded a zROC slope of 0.8. Because the slope equals the ratio of the lure to the target standard deviation, a slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution (1/0.8 = 1.25).[52]
Another variable used is d′ (d-prime) (discussed above in "Other measures"), which can easily be expressed in terms of z-values. Although d′ is a commonly used parameter, it must be recognized that it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.[53]
The z-score of an ROC curve is always linear, as assumed, except in special situations. The Yonelinas familiarity-recollection model is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes is a parameter for recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, the zROC would have a predicted slope of 1. However, when the recollection component is added, the zROC curve becomes concave up with a decreased slope. This difference in shape and slope results from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close to 1.0.[54]
The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory.[55] Following the attack on Pearl Harbor in 1941, the United States military began new research to increase the prediction of correctly detected Japanese aircraft from their radar signals. For these purposes they measured the ability of a radar receiver operator to make these important distinctions, which was called the Receiver Operating Characteristic.[56]
In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[55] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests.[57][58] ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to evaluate new radiology techniques.[59] In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models.
ROC curves are widely used in laboratory medicine to assess the diagnostic accuracy of a test, to choose the optimal cut-off of a test and to compare diagnostic accuracy of several tests.
ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman, who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[60]
ROC curves are also used in the verification of forecasts in meteorology.[61]
As mentioned, ROC curves are critical to radar operation and theory. The signals received at a receiver station, as reflected by a target, are often of very low energy in comparison to the noise floor. The ratio of signal to noise is an important metric in determining whether a target will be detected. This signal-to-noise ratio is directly correlated with the receiver operating characteristics of the whole radar system, which is used to quantify the detection ability of the system.
Consider the development of a radar system. A specification for the abilities of the system may be provided in terms of the probability of detection, P_D{\displaystyle P_{D}}, with a certain tolerance for false alarms, P_FA{\displaystyle P_{FA}}. A simplified approximation of the required signal-to-noise ratio at the receiver station can be calculated by solving[62]
{\displaystyle P_{D}=Q\left({\sqrt {2{\mathcal {X}}}},{\sqrt {-2\ln P_{FA}}}\right)}
for the signal-to-noise ratio X{\displaystyle {\mathcal {X}}}, where Q is the Marcum Q-function. Here, X{\displaystyle {\mathcal {X}}} is not in decibels, as is common in many radar applications. Conversion to decibels is through X_dB = 10 log10 X {\displaystyle {\mathcal {X}}_{dB}=10\log _{10}{\mathcal {X}}}. From this figure, the common entries in the radar range equation (with noise factors) may be solved, to estimate the required effective radiated power.
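A minimal numerical sketch, assuming the simplified single-pulse detection relation above (the helper name is hypothetical; the Marcum Q-function is expressed through the noncentral chi-squared survival function):

```python
# Solve P_D = Q_1(sqrt(2*X), sqrt(-2*ln(P_FA))) numerically for the
# required SNR X, using the identity Q_1(a, b) = ncx2.sf(b**2, df=2, nc=a**2).
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import brentq

def required_snr(p_d, p_fa):
    b = np.sqrt(-2.0 * np.log(p_fa))         # threshold term set by P_FA

    def detection_prob(x):                   # P_D as a function of SNR x
        a = np.sqrt(2.0 * x)
        return ncx2.sf(b**2, df=2, nc=a**2)  # Marcum Q_1(a, b)

    # bracket the root and solve detection_prob(x) = p_d
    return brentq(lambda x: detection_prob(x) - p_d, 1e-9, 1e6)

x = required_snr(p_d=0.9, p_fa=1e-6)
print(f"X = {x:.1f}  ({10 * np.log10(x):.1f} dB)")  # roughly 13 dB
```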
The extension of ROC curves for classification problems with more than two classes is cumbersome. Two common approaches for when there are multiple classes are (1) averaging over all pairwise AUC values[63] and (2) computing the volume under surface (VUS).[64][65] To average over all pairwise classes, one computes the AUC for each pair of classes, using only the examples from those two classes as if there were no other classes, and then averages these AUC values over all possible pairs. When there are c classes there will be c(c − 1)/2 possible pairs of classes.
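A minimal sketch of the pairwise-averaging approach (not from the source; it assumes integer labels 0..c−1, a score matrix with one column per class, and averages the two directed AUCs per pair, as in Hand and Till's formulation):

```python
# Macro-average AUC over all c(c-1)/2 class pairs, using only the
# examples of the two classes involved in each pairwise computation.
from itertools import combinations
import numpy as np

def directed_auc(scores, labels, pos, neg):
    # AUC for ranking class `pos` above class `neg` by the `pos` column
    s_pos = scores[labels == pos, pos]
    s_neg = scores[labels == neg, pos]
    return np.mean(s_neg[:, None] < s_pos[None, :])

def pairwise_average_auc(scores, labels):
    classes = np.unique(labels)
    aucs = [(directed_auc(scores, labels, i, j) +
             directed_auc(scores, labels, j, i)) / 2
            for i, j in combinations(classes, 2)]
    return float(np.mean(aucs))
```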
The volume under surface approach plots a hypersurface rather than a curve and then measures the hypervolume under that hypersurface. Every possible decision rule that one might use for a classifier for c classes can be described in terms of its true positive rates (TPR_1, ..., TPR_c). It is this set of rates that defines a point, and the set of all possible decision rules yields a cloud of points that define the hypersurface. With this definition, the VUS is the probability that the classifier will be able to correctly label all c examples when it is given a set that has one randomly selected example from each class. An implementation of a classifier that knows that its input set consists of one example from each class might first compute a goodness-of-fit score for each of the c² possible pairings of an example to a class, and then employ the Hungarian algorithm to maximize the sum of the c selected scores over all c! possible ways to assign exactly one example to each class.
Given the success of ROC curves for the assessment of classification models, the extension of ROC curves to other supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression error characteristic (REC) curves[66] and the Regression ROC (RROC) curves.[67] The latter are extremely similar to ROC curves for classification, with analogous notions of asymmetry, dominance and convex hull. Also, the area under RROC curves is proportional to the error variance of the regression model.
|
https://en.wikipedia.org/wiki/Receiver_operating_characteristic
|
ROUGE, or Recall-Oriented Understudy for Gisting Evaluation,[1] is a set of metrics and a software package used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference (human-produced) summary or translation, or a set of references. ROUGE metrics range between 0 and 1, with higher scores indicating higher similarity between the automatically produced summary and the reference.
The following five evaluation metrics are available: ROUGE-N (overlap of n-grams between the system and reference summaries), ROUGE-L (longest-common-subsequence-based statistics), ROUGE-W (weighted LCS-based statistics favouring consecutive matches), ROUGE-S (skip-bigram-based co-occurrence statistics), and ROUGE-SU (skip-bigram plus unigram-based co-occurrence statistics).
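For instance, here is a minimal sketch of ROUGE-N computed as n-gram recall against a single reference (not the official package; real implementations also handle multiple references, stemming and stopword options):

```python
# ROUGE-N recall: clipped n-gram overlap divided by the number of
# n-grams in the reference summary.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())        # clipped matches
    return overlap / max(sum(ref.values()), 1)  # recall

print(rouge_n("the cat sat on the mat", "the cat was on the mat"))  # 5/6
```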
|
https://en.wikipedia.org/wiki/ROUGE_(metric)
|
Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system. The WER typically ranges from 0 to 1, where 0 indicates that the compared pieces of text are exactly identical, and 1 (or larger) indicates that they are completely different with no similarity. For example, a WER of 0.8 means an 80% error rate on the compared sentences.
The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors, and further work is therefore required to identify the main source(s) of error and to focus any research effort.
The length problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. A related result, known as the power law, states the correlation between perplexity and word error rate.[1]
Word error rate can then be computed as
{\displaystyle \mathit {WER} ={\frac {S+D+I}{N}}={\frac {S+D+I}{S+D+C}},}
where S is the number of substitutions, D is the number of deletions, I is the number of insertions, C is the number of correct words, and N is the number of words in the reference (N = S + D + C).
The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and the hypothesis "This _ wikipedia", we call it a deletion.
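A minimal sketch (not from the source) of WER as a word-level Levenshtein distance, with unit costs for substitution, deletion and insertion:

```python
# WER = word-level edit distance / number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("this is wikipedia", "this wikipedia"))  # one deletion -> 1/3
```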
Note that since N is the number of words in the reference, the word error rate can be larger than 1.0, namely if the number of insertions I is larger than the number of correct words C.
When reporting the performance of a speech recognition system, sometimes word accuracy (WAcc) is used instead:
{\displaystyle \mathit {WAcc} =1-\mathit {WER} ={\frac {N-S-D-I}{N}}.}
Since the WER can be larger than 1.0, the word accuracy can be smaller than 0.0.
It is commonly believed that a lower word error rate shows superior accuracy in the recognition of speech, compared with a higher word error rate. However, at least one study has shown that this may not be true. In a Microsoft Research experiment, it was shown that, if people were trained under an objective "that matches the optimization objective for understanding" (Wang, Acero and Chelba, 2003), they would show a higher accuracy in understanding of language than other people who demonstrated a lower word error rate, showing that true understanding of spoken language relies on more than just high word recognition accuracy.[2]
One problem with using a generic formula such as the one above, however, is that no account is taken of the effect that different types of error may have on the likelihood of successful outcome, e.g. some errors may be more disruptive than others and some may be corrected more easily than others. These factors are likely to be specific to the syntax being tested. A further problem is that, even with the best alignment, the formula cannot distinguish a substitution error from a combined deletion plus insertion error.
Hunt (1990) has proposed the use of a weighted measure of performance accuracy where errors of substitution are weighted at unity but errors of deletion and insertion are both weighted only at 0.5, thus:
{\displaystyle \mathit {WER} ={\frac {S+0.5D+0.5I}{N}}.}
There is some debate, however, as to whether Hunt's formula may properly be used to assess the performance of a single system, as it was developed as a means of comparing competing candidate systems more fairly. A further complication is added by whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured.
Whichever metric is used, however, one major theoretical problem in assessing the performance of a system is deciding whether a word has been “mis-pronounced,” i.e. does the fault lie with the user or with the recogniser. This may be particularly relevant in a system which is designed to cope with non-native speakers of a given language or with strong regional accents.
The pace at which words should be spoken during the measurement process is also a source of variability between subjects, as is the need for subjects to rest or take a breath. All such factors may need to be controlled in some way.
For text dictation it is generally agreed that performance accuracy at a rate below 95% is not acceptable, but this again may be syntax and/or domain specific, e.g. whether there is time pressure on users to complete the task, whether there are alternative methods of completion, and so on.
The term "Single Word Error Rate" is sometimes referred to as the percentage of incorrect recognitions for each different word in the system vocabulary.
The word error rate may also be referred to as the length-normalized edit distance.[3] The normalized edit distance between X and Y, d(X, Y), is defined as the minimum of W(P)/L(P), where P is an editing path between X and Y, W(P) is the sum of the weights of the elementary edit operations of P, and L(P) is the number of these operations (the length of P).[4]
|
https://en.wikipedia.org/wiki/Word_error_rate
|
LEPOR (Length Penalty, Precision, n-gram Position difference Penalty and Recall) is an automatic, language-independent machine translation evaluation metric with tunable parameters and reinforced factors.
Since IBM proposed and realized the system of BLEU[1] as an automatic metric for machine translation (MT) evaluation,[2] many other methods have been proposed to revise or improve it, such as TER and METEOR.[3] However, the traditional automatic evaluation metrics have some problems. Some metrics perform well on certain languages but poorly on others, which is usually called the language bias problem. Some metrics rely on many language features or much linguistic information, which makes it difficult for other researchers to repeat the experiments. LEPOR is an automatic evaluation metric that tries to address some of these existing problems.[4] LEPOR is designed with augmented factors and corresponding tunable parameters to address the language bias problem. Furthermore, an improved version of LEPOR, hLEPOR,[5] tries to use optimized linguistic features extracted from treebanks. Another advanced version of LEPOR is the nLEPOR metric,[6] which adds n-gram features to the previous factors. So far, the LEPOR metric has been developed into the LEPOR series.[7][8]
LEPOR metrics have been studied and analyzed by many researchers from different fields, such as machine translation,[9] natural-language generation,[10] and search,[11] and beyond. LEPOR metrics are getting more attention from scientific researchers in natural language processing.
LEPOR[4] is designed with the factors of enhanced length penalty, precision, n-gram word order penalty, and recall. The enhanced length penalty ensures that a hypothesis translation, which is usually produced by a machine translation system, is punished if it is longer or shorter than the reference translation. The precision score reflects the accuracy of the hypothesis translation. The recall score reflects the loyalty of the hypothesis translation to the reference translation or source language. The n-gram based word order penalty factor is designed for the different position orders between the hypothesis translation and reference translation. The word order penalty factor has been shown to be useful by many researchers, such as in the work of Wong and Kit (2008).[12]
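A minimal sketch of how these factors might combine multiplicatively (an illustration under assumptions, not the published reference implementation; the penalty forms and parameter names follow the description above, and all inputs are assumed precomputed):

```python
# Combine LEPOR-style factors: enhanced length penalty x n-gram
# position-difference penalty x weighted harmonic mean of precision
# and recall.
import math

def length_penalty(hyp_len, ref_len):
    # punishes hypotheses both longer and shorter than the reference
    if hyp_len == ref_len:
        return 1.0
    return math.exp(1 - max(hyp_len, ref_len) / min(hyp_len, ref_len))

def harmonic(precision, recall, alpha=1.0, beta=1.0):
    # weighted harmonic mean of recall and precision
    return (alpha + beta) / (alpha / recall + beta / precision)

def lepor_score(precision, recall, hyp_len, ref_len, npd,
                alpha=1.0, beta=1.0):
    # npd: normalized n-gram position-difference penalty (>= 0)
    return (length_penalty(hyp_len, ref_len)
            * math.exp(-npd)
            * harmonic(precision, recall, alpha, beta))

print(lepor_score(precision=0.7, recall=0.6, hyp_len=18, ref_len=20, npd=0.1))
```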
Because surface string matching metrics were criticized for lacking syntactic and semantic awareness, the further developed LEPOR metric (hLEPOR) investigates the integration of linguistic features, such as part of speech (POS).[5][8] POS is introduced as a proxy for both syntax and semantics: for example, if a token of the output sentence is a verb where a noun is expected, there is a penalty; and if the POS is the same but the exact word differs (e.g. good vs nice), the candidate gains partial credit. The overall score of hLEPOR is then calculated as the combination of the word-level score and the POS-level score with a weighting set. Language-modelling-inspired n-gram knowledge is also extensively explored in nLEPOR.[6][8] In addition to the n-gram knowledge used for the n-gram position difference penalty, n-grams are also applied to n-gram precision and n-gram recall in nLEPOR, with the parameter n as an adjustable factor. In addition to the POS knowledge in hLEPOR, phrase structure from parsing information is included in a new variant, HPPR.[13] In HPPR evaluation modeling, phrase structures such as noun phrases, verb phrases, prepositional phrases and adverbial phrases are considered during the matching from candidate text to reference text.
LEPOR metrics were originally implemented in the Perl programming language,[14] and a Python version[15] has recently been made available by other researchers and engineers,[16] with a press announcement[17] from the Logrus Global Language Service company.
The LEPOR series have shown good performance in the ACL annual international workshop on statistical machine translation (ACL-WMT). ACL-WMT is held by the special interest group on machine translation (SIGMT) of the Association for Computational Linguistics (ACL). In ACL-WMT 2013,[18] there were two translation and evaluation tracks, English-to-other and other-to-English. The "other" languages include Spanish, French, German, Czech and Russian. In the English-to-other direction, the nLEPOR metric achieved the highest system-level correlation with human judgments using the Pearson correlation coefficient, and the second highest using the Spearman rank correlation coefficient. In the other-to-English direction, nLEPOR performed moderately and METEOR yielded the highest correlation with human judgments, which is due to the fact that nLEPOR only uses a concise linguistic feature, part-of-speech information, beyond the officially offered training data, whereas METEOR used many other external resources, such as synonym dictionaries, paraphrases, and stemming.
An extended work introducing LEPOR's performance under different conditions, including pure word-surface form, POS features, and phrase tag features, is described in a thesis from the University of Macau.[8]
There is a deeper statistical analysis of hLEPOR and nLEPOR performance in WMT13, which shows that they performed as one of the best metrics "in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs"; see Graham et al. 2015, NAACL, "Accurate Evaluation of Segment-level Machine Translation Metrics" (https://www.aclweb.org/anthology/N15-1124, https://github.com/ygraham/segment-mteval).
In an MT user track presentation at MT Summit 2021, researchers from Welocalize (https://www.welocalize.com/) showed that the hLEPOR metric correlates with human judgment on multiple tested language pairs, including German, Hindi (no model for Prism), Italian, Russian, and Simplified Chinese (page 459, https://aclanthology.org/attachments/2021.mtsummit-up.29.Presentation.pdf).
The LEPOR automatic metric series has been applied and used by many researchers in different fields of natural language processing, for instance in standard MT and neural MT.[19] It has also been used outside the MT community: LEPOR has been applied in search evaluation,[11] mentioned for code (programming language) generation evaluation,[20] used in investigations of the automatic evaluation of natural language generation[10][21] (where it was argued that automatic metrics can help system-level evaluations), and applied in image captioning evaluation.[22]
|
https://en.wikipedia.org/wiki/LEPOR
|