Bangdiwala's B statistic was created by Shrikant Bangdiwala in 1985 and is a measure of inter-rater agreement.[1][2] While not as commonly used as the kappa statistic, the B test has been used by various workers.[3][4][5][6] While it is principally used as a graphical aid to inter-observer agreement, its asymptotic distribution is known. The test is applicable to testing the agreement between two observers. It is defined to be

B = \frac{\sum_{i=1}^{k} n_{ii}^{2}}{\sum_{i=1}^{k} n_{i.}\, n_{.i}}

where n_{ii} are the values on the main diagonal, n_{i.} is the ith row total, and n_{.i} is the ith column total of the contingency table. The value of B varies between 0 (no agreement) and +1 (perfect agreement). In large samples B has a normal distribution whose variance has a complicated expression.[7] For small samples a permutation test is indicated.[7] Guidance on its use and its extension to n×n tables has been provided by Munoz & Bangdiwala.[8] It may be more useful than the more commonly used Cohen's kappa in some circumstances.[9] Worked examples of the use of Bangdiwala's B have been published.[10][11] The statistical programming language R has a set of functions that will compute the B test,[12] and a tutorial on the use of these R functions is available.[13]
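The definition translates directly into a few lines of code. Below is a minimal Python sketch (the article points to R functions; this function and the example table are illustrative assumptions, not the published implementation):

```python
import numpy as np

def bangdiwala_b(table):
    """Bangdiwala's B for a k-by-k contingency table of two raters.

    B = sum_i n_ii^2 / sum_i (n_i. * n_.i): squared diagonal counts
    over the products of the matching row and column totals.
    """
    t = np.asarray(table, dtype=float)
    diag = np.diag(t)          # n_ii, the agreement cells
    rows = t.sum(axis=1)       # n_i., the row totals
    cols = t.sum(axis=0)       # n_.i, the column totals
    return (diag ** 2).sum() / (rows * cols).sum()

# Hypothetical table: two raters assigning 100 items to 3 categories.
table = [[30, 5, 0],
         [4, 25, 6],
         [1, 4, 25]]
print(round(bangdiwala_b(table), 3))  # values near 1 indicate strong agreement
```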
https://en.wikipedia.org/wiki/Bangdiwala%27s_B
In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC),[1] is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups. It describes how strongly units in the same group resemble each other. While it is viewed as a type of correlation, unlike most other correlation measures it operates on data structured as groups rather than data structured as paired observations.

The intraclass correlation is commonly used to quantify the degree to which individuals with a fixed degree of relatedness (e.g. full siblings) resemble each other in terms of a quantitative trait (see heritability). Another prominent application is the assessment of consistency or reproducibility of quantitative measurements made by different observers measuring the same quantity.

The earliest work on intraclass correlations focused on the case of paired measurements, and the first intraclass correlation (ICC) statistics to be proposed were modifications of the interclass correlation (Pearson correlation). Consider a data set consisting of N paired data values (x_{n,1}, x_{n,2}), for n = 1, ..., N. The intraclass correlation r originally proposed[2] by Ronald Fisher[3] is

r = \frac{1}{N s^2} \sum_{n=1}^{N} (x_{n,1} - \bar{x})(x_{n,2} - \bar{x}),

where

\bar{x} = \frac{1}{2N} \sum_{n=1}^{N} (x_{n,1} + x_{n,2}), \qquad s^2 = \frac{1}{2N} \left\{ \sum_{n=1}^{N} (x_{n,1} - \bar{x})^2 + \sum_{n=1}^{N} (x_{n,2} - \bar{x})^2 \right\}.

Later versions of this statistic[3] used the degrees of freedom 2N − 1 in the denominator for calculating s^2 and N − 1 in the denominator for calculating r, so that s^2 becomes unbiased, and r becomes unbiased if s is known.

The key difference between this ICC and the interclass (Pearson) correlation is that the data are pooled to estimate the mean and variance. The reason for this is that in the setting where an intraclass correlation is desired, the pairs are considered to be unordered. For example, if we are studying the resemblance of twins, there is usually no meaningful way to order the values for the two individuals within a twin pair. Like the interclass correlation, the intraclass correlation for paired data will be confined to the interval [−1, +1].

The intraclass correlation is also defined for data sets with groups having more than 2 values. For groups consisting of three values, it is defined as[3]

r = \frac{1}{3N s^2} \sum_{n=1}^{N} \left\{ (x_{n,1} - \bar{x})(x_{n,2} - \bar{x}) + (x_{n,1} - \bar{x})(x_{n,3} - \bar{x}) + (x_{n,2} - \bar{x})(x_{n,3} - \bar{x}) \right\},

where \bar{x} and s^2 are now pooled over all 3N values. As the number of items per group grows, so does the number of cross-product terms in this expression. The following equivalent form is simpler to calculate:

r = \frac{K}{K-1} \cdot \frac{N^{-1} \sum_{n=1}^{N} (\bar{x}_n - \bar{x})^2}{s^2} - \frac{1}{K-1},

where K is the number of data values per group, and \bar{x}_n is the sample mean of the nth group.[3] This form is usually attributed to Harris.[4] The left term is non-negative; consequently the intraclass correlation must satisfy

r \geq -\frac{1}{K-1}.

For large K, this ICC is nearly equal to

\frac{N^{-1} \sum_{n=1}^{N} (\bar{x}_n - \bar{x})^2}{s^2},

which can be interpreted as the fraction of the total variance that is due to variation between groups. Ronald Fisher devotes an entire chapter to intraclass correlation in his classic book Statistical Methods for Research Workers.[3]
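To make the pooled centering and scaling concrete, here is a short Python sketch of Fisher's paired-data formula (the function name and the simulated twin data are illustrative assumptions, not from the article):

```python
import numpy as np

def fisher_icc_paired(pairs):
    """Fisher's original intraclass correlation for N unordered pairs.

    The mean and variance are pooled over all 2N values; the Pearson
    correlation, by contrast, centers and scales each column separately.
    """
    x = np.asarray(pairs, dtype=float)        # shape (N, 2)
    n = x.shape[0]
    xbar = x.mean()                           # pooled mean over all 2N values
    s2 = ((x - xbar) ** 2).sum() / (2 * n)    # pooled variance
    return ((x[:, 0] - xbar) * (x[:, 1] - xbar)).sum() / (n * s2)

# Simulated twin pairs sharing a common component:
rng = np.random.default_rng(0)
shared = rng.normal(size=500)
pairs = np.column_stack([shared + 0.5 * rng.normal(size=500),
                         shared + 0.5 * rng.normal(size=500)])
print(fisher_icc_paired(pairs))  # roughly 1 / (1 + 0.25) = 0.8
```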
For data from a population that is completely noise, Fisher's formula produces ICC values that are distributed about 0, i.e. sometimes being negative. This is because Fisher designed the formula to be unbiased, and therefore its estimates are sometimes overestimates and sometimes underestimates. For small or zero underlying values in the population, the ICC calculated from a sample may be negative.

Beginning with Ronald Fisher, the intraclass correlation has been regarded within the framework of analysis of variance (ANOVA), and more recently in the framework of random effects models. A number of ICC estimators have been proposed. Most of the estimators can be defined in terms of the random effects model

Y_{ij} = \mu + \alpha_j + \varepsilon_{ij},

where Y_{ij} is the ith observation in the jth group, \mu is an unobserved overall mean, \alpha_j is an unobserved random effect shared by all values in group j, and \varepsilon_{ij} is an unobserved noise term.[5] For the model to be identified, the \alpha_j and \varepsilon_{ij} are assumed to have expected value zero and to be uncorrelated with each other. Also, the \alpha_j are assumed to be identically distributed, and the \varepsilon_{ij} are assumed to be identically distributed. The variance of \alpha_j is denoted \sigma_\alpha^2 and the variance of \varepsilon_{ij} is denoted \sigma_\varepsilon^2. The population ICC in this framework is[6]

\frac{\sigma_\alpha^2}{\sigma_\alpha^2 + \sigma_\varepsilon^2}.

With this framework, the ICC is the correlation of two observations from the same group. For a one-way random effects model

Y_{ij} = \mu + \alpha_i + \epsilon_{ij}, \qquad \alpha_i \sim N(0, \sigma_\alpha^2), \quad \epsilon_{ij} \sim N(0, \sigma_\varepsilon^2),

with the \alpha_i and \epsilon_{ij} mutually independent, the variance of any observation is

\operatorname{Var}(Y_{ij}) = \sigma_\varepsilon^2 + \sigma_\alpha^2,

and the covariance of two observations from the same group i (for j \neq k) is[7]

\begin{aligned}
\operatorname{Cov}(Y_{ij}, Y_{ik}) &= \operatorname{Cov}(\mu + \alpha_i + \epsilon_{ij},\ \mu + \alpha_i + \epsilon_{ik}) \\
&= \operatorname{Cov}(\alpha_i + \epsilon_{ij},\ \alpha_i + \epsilon_{ik}) \\
&= \operatorname{Cov}(\alpha_i, \alpha_i) + \operatorname{Cov}(\alpha_i, \epsilon_{ik}) + \operatorname{Cov}(\epsilon_{ij}, \alpha_i) + \operatorname{Cov}(\epsilon_{ij}, \epsilon_{ik}) \\
&= \operatorname{Var}(\alpha_i) = \sigma_\alpha^2,
\end{aligned}

where we have used properties of the covariance (the cross terms vanish by independence). Put together, we get

\operatorname{Cor}(Y_{ij}, Y_{ik}) = \frac{\operatorname{Cov}(Y_{ij}, Y_{ik})}{\sqrt{\operatorname{Var}(Y_{ij})\operatorname{Var}(Y_{ik})}} = \frac{\sigma_\alpha^2}{\sigma_\varepsilon^2 + \sigma_\alpha^2}.

An advantage of this ANOVA framework is that different groups can have different numbers of data values, which is difficult to handle using the earlier ICC statistics. This ICC is always non-negative, allowing it to be interpreted as the proportion of total variance that is "between groups." It can be generalized to allow for covariate effects, in which case the ICC is interpreted as capturing the within-class similarity of the covariate-adjusted data values.[8] Because this expression can never be negative (unlike Fisher's original formula), in samples from a population which has an ICC of 0, the ICCs in the samples will tend to be higher than the ICC of the population.
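In practice the population ICC above is estimated from the one-way ANOVA mean squares. A minimal Python sketch follows (the mean-square estimator and the unbalanced-group size correction k0 are the classical ANOVA ones; the function itself and its example data are illustrative assumptions):

```python
import numpy as np

def icc_one_way(groups):
    """ANOVA estimator of the one-way random effects ICC.

    groups: list of 1-D arrays, one per group; group sizes may differ.
    Uses ICC = (MSB - MSW) / (MSB + (k0 - 1) * MSW), where k0 is the
    standard ANOVA average group size for unbalanced designs.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    N = len(groups)                            # number of groups
    n = np.array([len(g) for g in groups])     # group sizes
    grand = np.concatenate(groups).mean()
    means = np.array([g.mean() for g in groups])

    ssb = (n * (means - grand) ** 2).sum()     # between-group sum of squares
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    msb = ssb / (N - 1)
    msw = ssw / (n.sum() - N)

    k0 = (n.sum() - (n ** 2).sum() / n.sum()) / (N - 1)
    return (msb - msw) / (msb + (k0 - 1) * msw)

# Example: three groups with unequal sizes
print(icc_one_way([[9.1, 9.3, 9.0], [12.0, 11.7], [10.1, 10.4, 10.2, 10.0]]))
```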
A number of different ICC statistics have been proposed, not all of which estimate the same population parameter. There has been considerable debate about which ICC statistics are appropriate for a given use, since they may produce markedly different results for the same data.[9][10]

In terms of its algebraic form, Fisher's original ICC is the ICC that most resembles the Pearson correlation coefficient. One key difference between the two statistics is that in the ICC, the data are centered and scaled using a pooled mean and standard deviation, whereas in the Pearson correlation, each variable is centered and scaled by its own mean and standard deviation. This pooled scaling for the ICC makes sense because all measurements are of the same quantity (albeit on units in different groups). For example, in a paired data set where each "pair" is a single measurement made for each of two units (e.g., weighing each twin in a pair of identical twins) rather than two different measurements for a single unit (e.g., measuring height and weight for each individual), the ICC is a more natural measure of association than Pearson's correlation.

An important property of the Pearson correlation is that it is invariant to the application of separate linear transformations to the two variables being compared. Thus, if we are correlating X and Y, where, say, Y = 2X + 1, the Pearson correlation between X and Y is 1, a perfect correlation. This property does not make sense for the ICC, since there is no basis for deciding which transformation is applied to each value in a group. However, if all the data in all groups are subjected to the same linear transformation, the ICC does not change.

The ICC is used to assess the consistency, or conformity, of measurements made by multiple observers measuring the same quantity.[11] For example, if several physicians are asked to score the results of a CT scan for signs of cancer progression, we can ask how consistent the scores are with each other. If the truth is known (for example, if the CT scans were on patients who subsequently underwent exploratory surgery), then the focus would generally be on how well the physicians' scores matched the truth. If the truth is not known, we can only consider the similarity among the scores. An important aspect of this problem is that there is both inter-observer and intra-observer variability. Inter-observer variability refers to systematic differences among the observers; for example, one physician may consistently score patients at a higher risk level than other physicians. Intra-observer variability refers to deviations of a particular observer's score on a particular patient that are not part of a systematic difference.

The ICC is constructed to be applied to exchangeable measurements, that is, grouped data in which there is no meaningful way to order the measurements within a group. In assessing conformity among observers, if the same observers rate each element being studied, then systematic differences among observers are likely to exist, which conflicts with the notion of exchangeability. If the ICC is used in a situation where systematic differences exist, the result is a composite measure of intra-observer and inter-observer variability. One situation where exchangeability might reasonably be presumed to hold would be where a specimen to be scored, say a blood specimen, is divided into multiple aliquots, and the aliquots are measured separately on the same instrument. In this case, exchangeability would hold as long as no effect due to the sequence of running the samples was present. Since the intraclass correlation coefficient gives a composite of intra-observer and inter-observer variability, its results are sometimes considered difficult to interpret when the observers are not exchangeable.
Alternative measures such as Cohen's kappa statistic, the Fleiss kappa, and the concordance correlation coefficient[12] have been proposed as more suitable measures of agreement among non-exchangeable observers.

ICC is supported in the open source software package R (using the function "icc" with the packages psy or irr, or via the function "ICC" in the package psych). The rptR package[13] provides methods for the estimation of ICC and repeatabilities for Gaussian, binomial and Poisson distributed data in a mixed-model framework. Notably, the package allows estimation of adjusted ICC (i.e. controlling for other variables) and computes confidence intervals based on parametric bootstrapping and significance based on the permutation of residuals. Commercial software also supports ICC, for instance Stata or SPSS.[14]

ICC estimators are conventionally distinguished along three dimensions. The three models are: one-way random effects (each subject is rated by a different, randomly selected set of raters), two-way random effects (a random sample of raters rates every subject), and two-way mixed effects (a fixed set of raters rates every subject). Number of measurements: the reliability may refer to a single measurement or to the mean of k measurements. Consistency or absolute agreement: consistency disregards systematic differences between raters, whereas absolute agreement counts them as disagreement. The consistency ICC cannot be estimated in the one-way random effects model, as there is no way to separate the inter-rater and residual variances. An overview and re-analysis of the three models for the single-measures ICC, with an alternative recipe for their use, has also been presented by Liljequist et al. (2019).[18]

Cicchetti (1994)[19] gives the following often-quoted guidelines for interpretation of kappa or ICC inter-rater agreement measures: less than 0.40, poor; 0.40 to 0.59, fair; 0.60 to 0.74, good; 0.75 to 1.00, excellent. A different guideline is given by Koo and Li (2016):[20] below 0.50, poor; 0.50 to 0.75, moderate; 0.75 to 0.90, good; above 0.90, excellent.
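To make the consistency versus absolute-agreement distinction concrete, here is a sketch of the single-measures two-way computations. The mean-square formulas follow the widely used McGraw & Wong (1996) definitions; the function and example data are illustrative assumptions, and the R or SPSS routines cited above are the authoritative implementations:

```python
import numpy as np

def icc_two_way(X):
    """Single-measures ICC from an n-subjects x k-raters matrix X.

    Returns (consistency, absolute_agreement) under the two-way model:
      ICC(C,1) = (MS_R - MS_E) / (MS_R + (k-1) MS_E)
      ICC(A,1) = (MS_R - MS_E) / (MS_R + (k-1) MS_E + k (MS_C - MS_E) / n)
    Absolute agreement additionally penalizes systematic rater offsets
    through the between-raters mean square MS_C.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    icc_c = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
    icc_a = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    return icc_c, icc_a

# 5 subjects rated by 3 raters (rows: subjects, columns: raters)
X = [[7, 8, 7], [5, 5, 6], [9, 9, 9], [3, 4, 3], [6, 7, 6]]
print(icc_two_way(X))
```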
https://en.wikipedia.org/wiki/Intraclass_correlation
Krippendorff's alpha coefficient,[1] named after academic Klaus Krippendorff, is a statistical measure of the agreement achieved when coding a set of units of analysis. Since the 1970s, alpha has been used in content analysis where textual units are categorized by trained readers, in counseling and survey research where experts code open-ended interview data into analyzable terms, in psychological testing where alternative tests of the same phenomena need to be compared, and in observational studies where unstructured happenings are recorded for subsequent analysis.

Krippendorff's alpha generalizes several known statistics, often called measures of inter-coder agreement, inter-rater reliability, or reliability of coding given sets of units (as distinct from unitizing), but it also distinguishes itself from statistics that are called reliability coefficients yet are unsuitable to the particulars of coding data generated for subsequent analysis. Krippendorff's alpha is applicable to any number of coders, each assigning one value to one unit of analysis, to incomplete (missing) data, to any number of values available for coding a variable, to binary, nominal, ordinal, interval, ratio, polar, and circular metrics (note that this is not a metric in the mathematical sense, but often the square of a mathematical metric; see levels of measurement), and it adjusts itself to small sample sizes of the reliability data. The virtue of a single coefficient with these variations is that computed reliabilities are comparable across any numbers of coders, values, different metrics, and unequal sample sizes. Software for calculating Krippendorff's alpha is available.[2][3][4][5][6][7][8][9]

Reliability data are generated in a situation in which m ≥ 2 jointly instructed (e.g., by a code book) but independently working coders assign any one of a set of values 1,...,V to a common set of N units of analysis. In their canonical form, reliability data are tabulated in an m-by-N matrix containing the values v_{ij} that coder c_i has assigned to unit u_j. Define m_j as the number of values assigned to unit j across all coders. When data are incomplete, m_j may be less than m. Reliability data require that values be pairable, i.e., m_j ≥ 2. The total number of pairable values is

n = \sum_{j=1}^{N} m_j \leq mN.

We denote by R the set of all possible responses an observer can give. The responses of all observers for one example form a unit (a multiset); the multiset of these units is denoted U. Alpha is given by

\alpha = 1 - \frac{D_o}{D_e},

where D_o is the disagreement observed and D_e is the disagreement expected by chance. In the general expressions for D_o and D_e, \delta is a metric function (again, not a metric in the mathematical sense, but often the square of a mathematical metric; see below), n is the total number of pairable elements, m_u is the number of items in a unit, n_{cku} is the number of (c,k) pairs in unit u, and P is the permutation function.
Rearranging terms, the sum can be interpreted conceptually as the weighted average of the disagreements of the individual units, weighted by the number of coders assigned to unit j:

D_o = \frac{1}{n} \sum_{j=1}^{N} m_j \, \mathbb{E}(\delta_j),

where \mathbb{E}(\delta_j) is the mean of the \binom{m_j}{2} numbers \delta(v_{ij}, v_{i'j}) (here i > i', and the values are the pairable elements of unit j). Note that in the case m_j = m for all j, D_o is just the average of all the numbers \delta(v_{ij}, v_{i'j}) with i > i'. There is also an interpretation of D_o as the (weighted) average observed distance from the diagonal. The expected disagreement D_e can be written analogously, where P_{ck} is the number of ways the pair (c,k) can be made; it can be seen to be the average distance from the diagonal over all possible pairs of responses that could be derived from the multiset of all observations. The above is equivalent to the usual form of \alpha once it has been simplified algebraically.[10]

One interpretation of Krippendorff's alpha is:

\alpha = 1 - \frac{D_{\text{within units, i.e. in error}}}{D_{\text{within and between units, i.e. in total}}}

In this general form, the disagreements D_o and D_e may be conceptually transparent but are computationally inefficient. They can be simplified algebraically, especially when expressed in terms of the visually more instructive coincidence matrix representation of the reliability data. A coincidence matrix cross-tabulates the n pairable values from the canonical form of the reliability data into a v-by-v square matrix, where v is the number of values available in a variable. Unlike contingency matrices, familiar in association and correlation statistics, which tabulate pairs of values (cross tabulation), a coincidence matrix tabulates all pairable values. A coincidence matrix omits references to coders and is symmetrical around its diagonal, which contains all perfect matches, v_{iu} = v_{i'u} for two coders i and i', across all units u. The matrix of observed coincidences contains the frequencies

o_{ck} = \sum_{u=1}^{N} \frac{\sum_{i \neq i'} I(v_{iu} = c)\, I(v_{i'u} = k)}{m_u - 1},

omitting unpaired values, where I(\circ) = 1 if \circ is true, and 0 otherwise. Because a coincidence matrix tabulates all pairable values and its contents sum to the total n, when four or more coders are involved, o_{ck} may be fractions. The matrix of expected coincidences contains the frequencies

e_{ck} = \frac{n_c n_k}{n - 1} \ \text{for } c \neq k, \qquad e_{cc} = \frac{n_c (n_c - 1)}{n - 1},

which sum to the same n_c, n_k, and n as does o_{ck}. In terms of these coincidences, Krippendorff's alpha becomes:

\alpha = 1 - \frac{\sum_c \sum_k o_{ck}\, \delta(c,k)}{\sum_c \sum_k e_{ck}\, \delta(c,k)}.

Difference functions \delta(v,v')[11] between values v and v' reflect the metric properties (levels of measurement) of their variable. In general, \delta(v,v) = 0 and \delta(v,v') = \delta(v',v) \geq 0. In particular, for nominal data \delta_{\text{nominal}}(v,v') = 1 for all v \neq v', and for interval data \delta_{\text{interval}}(v,v') = (v - v')^2.
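Spelled out computationally, the coincidence-matrix route is short. Below is a minimal Python sketch for nominal data (the function name and the example units are illustrative assumptions, not from the article):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: list of lists; each inner list holds the values assigned to
    one unit by the coders (missing codes simply omitted). Units with
    fewer than 2 values are not pairable and are skipped.
    """
    # Build the coincidence matrix: each ordered pair of values from
    # distinct coders in a unit contributes weight 1/(m_u - 1).
    o = Counter()
    for unit in units:
        m_u = len(unit)
        if m_u < 2:
            continue
        for c, k in permutations(unit, 2):
            o[(c, k)] += 1.0 / (m_u - 1)

    n_c = Counter()                   # marginals n_c of the matrix
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())             # total number of pairable values

    # Nominal metric: delta = 0 for matching values, 1 otherwise.
    d_o = sum(w for (c, k), w in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Example: units coded by up to three coders, with some codes missing.
units = [[1, 1], [2, 2, 2], [3, 3, 2], [1, 2], [4, 4, 4]]
print(krippendorff_alpha_nominal(units))
```

Swapping in the interval metric delta(c, k) = (c - k)^2 in the two sums yields the interval alpha discussed below.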
Inasmuch as mathematical statements of the statistical distribution of alpha are always only approximations, it is preferable to obtain alpha's distribution by bootstrapping.[12][13] Alpha's distribution gives rise to two indices. The minimum acceptable alpha coefficient should be chosen according to the importance of the conclusions to be drawn from imperfect data; when the costs of mistaken conclusions are high, the minimum alpha needs to be set high as well. In the absence of knowledge of the risks of drawing false conclusions from unreliable data, social scientists commonly rely on data with reliabilities α ≥ 0.800, consider data with 0.800 > α ≥ 0.667 only for drawing tentative conclusions, and discard data whose agreement measures α < 0.667.[14]

As a worked example, let the canonical form of reliability data be a 3-coder-by-15-unit matrix with 45 cells, and suppose "*" indicates a default category like "cannot code," "no answer," or "lacking an observation." Then * provides no information about the reliability of data in the four values that matter. Note that units 2 and 14 contain no information, and unit 1 contains only one value, which is not pairable within that unit. Thus, these reliability data consist not of mN = 45 but of n = 26 pairable values, located not in N = 15 but in 12 multiply coded units. Krippendorff's alpha may then be calculated from the entries in the coincidence matrix constructed for these data. For convenience, because products with \delta(v,v) = 0 and because \delta(v,v') = \delta(v',v), only the entries in one of the off-diagonal triangles of the coincidence matrix need to be listed. Considering that all \delta_{\text{nominal}}(v,v') = 1 when v \neq v', the nominal alpha follows directly. With

\delta_{\text{interval}}(1,2) = \delta_{\text{interval}}(2,3) = \delta_{\text{interval}}(3,4) = 1^2, \qquad \delta_{\text{interval}}(1,3) = \delta_{\text{interval}}(2,4) = 2^2, \qquad \delta_{\text{interval}}(1,4) = 3^2,

the interval alpha follows in the same way. Here \alpha_{\text{interval}} > \alpha_{\text{nominal}}, because the disagreements happen to occur largely among neighboring values, visualized by occurring closer to the diagonal of the coincidence matrix, a condition that \alpha_{\text{interval}} takes into account but \alpha_{\text{nominal}} does not. When the observed frequencies o_{v \neq v'} are on average proportional to the expected frequencies e_{v \neq v'}, \alpha_{\text{interval}} = \alpha_{\text{nominal}}. Comparing alpha coefficients across different metrics can therefore provide clues to how coders conceptualize the metric of a variable.

Krippendorff's alpha brings several known statistics under a common umbrella; each of them has its own limitations but no additional virtues. Krippendorff's alpha is more general than any of these special-purpose coefficients: it adjusts to varying sample sizes and affords comparisons across a wide variety of reliability data, mostly ignored by the familiar measures.

Semantically, reliability is the ability to rely on something, here on coded data for subsequent analysis. When a sufficiently large number of coders agree perfectly on what they have read or observed, relying on their descriptions is a safe bet. Judgments of this kind hinge on the number of coders duplicating the process and on how representative the coded units are of the population of interest. Problems of interpretation arise when agreement is less than perfect, especially when reliability is absent. Naming a statistic as one of agreement, reproducibility, or reliability does not make it a valid index of whether one can rely on coded data in subsequent decisions.
Its mathematical structure must fit the process of coding units into a system of analyzable terms.
https://en.wikipedia.org/wiki/Krippendorff%27s_alpha
When classification is performed by a computer, statistical methods are normally used to develop the algorithm. Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or "O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other classifiers work by comparing observations to previous observations by means of a similarity or distance function.

An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.

Terminology across fields is quite varied. In statistics, where classification is often done with logistic regression or a similar procedure, the properties of observations are termed explanatory variables (or independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the dependent variable. In machine learning, the observations are often known as instances, the explanatory variables are termed features (grouped into a feature vector), and the possible categories to be predicted are classes. Other fields may use different terminology: e.g. in community ecology, the term "classification" normally refers to cluster analysis.

Classification and clustering are examples of the more general problem of pattern recognition, which is the assignment of some sort of output value to a given input value. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part-of-speech tagging, which assigns a part of speech to each word in an input sentence); parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence; etc.

A common subclass of classification is probabilistic classification. Algorithms of this nature use statistical inference to find the best class for a given instance. Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output a probability of the instance being a member of each of the possible classes. The best class is normally then selected as the one with the highest probability. However, such an algorithm has numerous advantages over non-probabilistic classifiers.

Early work on statistical classification was undertaken by Fisher,[1][2] in the context of two-group problems, leading to Fisher's linear discriminant function as the rule for assigning a group to a new observation.[3] This early work assumed that data values within each of the two groups had a multivariate normal distribution. The extension of this same context to more than two groups has also been considered, with a restriction imposed that the classification rule should be linear.[3][4] Later work for the multivariate normal distribution allowed the classifier to be nonlinear:[5] several classification rules can be derived based on different adjustments of the Mahalanobis distance, with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation.
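The last rule can be made concrete in a few lines. A minimal sketch, assuming a covariance matrix shared by the groups (the means, covariance, and test point are invented for illustration):

```python
import numpy as np

def mahalanobis_assign(x, group_means, cov):
    """Assign observation x to the group whose centre has the lowest
    Mahalanobis distance, using a covariance matrix shared by all groups."""
    prec = np.linalg.inv(cov)                       # precision matrix
    d2 = [(x - m) @ prec @ (x - m) for m in group_means]
    return int(np.argmin(d2))

# Toy two-group example in two dimensions:
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
print(mahalanobis_assign(np.array([2.5, 2.0]), means, cov))  # -> 1
```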
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into account any available information about the relative sizes of the different groups within the overall population.[6] Bayesian procedures tend to be computationally expensive, and, in the days before Markov chain Monte Carlo computations were developed, approximations for Bayesian clustering rules were devised.[7] Some Bayesian procedures involve the calculation of group-membership probabilities: these provide a more informative outcome than a simple attribution of a single group label to each new observation.

Classification can be thought of as two separate problems: binary classification and multiclass classification. In binary classification, a better-understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.[8] Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers.

Most algorithms describe an individual instance whose category is to be predicted using a feature vector of individual, measurable properties of the instance. Each property is termed a feature, also known in statistics as an explanatory variable (or independent variable, although features may or may not be statistically independent). Features may variously be binary (e.g. "on" or "off"); categorical (e.g. "A", "B", "AB" or "O", for blood type); ordinal (e.g. "large", "medium" or "small"); integer-valued (e.g. the number of occurrences of a particular word in an email); or real-valued (e.g. a measurement of blood pressure). If the instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece of text, the feature values might be occurrence frequencies of different words. Some algorithms work only in terms of discrete data and require that real-valued or integer-valued data be discretized into groups (e.g. less than 5, between 5 and 10, or greater than 10).

A large number of algorithms for classification can be phrased in terms of a linear function that assigns a score to each possible category k by combining the feature vector of an instance with a vector of weights, using a dot product. The predicted category is the one with the highest score. This type of score function is known as a linear predictor function and has the following general form:

\operatorname{score}(\mathbf{X}_i, k) = \boldsymbol{\beta}_k \cdot \mathbf{X}_i,

where X_i is the feature vector for instance i, \beta_k is the vector of weights corresponding to category k, and score(X_i, k) is the score associated with assigning instance i to category k. In discrete choice theory, where instances represent people and categories represent choices, the score is considered the utility associated with person i choosing category k. Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted. Examples of such algorithms include logistic regression, the perceptron, and linear discriminant analysis (a sketch of the shared scoring rule is given below).

Since no single form of classification is appropriate for all data sets, a large toolkit of classification algorithms has been developed; the most commonly used include,[9] for example, decision trees, k-nearest-neighbour methods, naive Bayes classifiers, support vector machines, and neural networks. Choices between different possible algorithms are frequently made on the basis of quantitative evaluation of accuracy.
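As a sketch of the scoring rule above (the weight matrix and data here are invented for illustration, not a trained model):

```python
import numpy as np

def predict(X, betas):
    """Linear classifier: score(x, k) = beta_k . x; pick the best class.

    X     : (n_samples, n_features) feature vectors
    betas : (n_classes, n_features) one weight vector per class
    """
    scores = X @ betas.T           # score matrix, shape (n_samples, n_classes)
    return scores.argmax(axis=1)   # predicted category = highest score

# Toy example with 2 features and 3 classes (hypothetical weights):
X = np.array([[1.0, 2.0],
              [3.0, 0.5]])
betas = np.array([[0.2, 1.0],
                  [1.5, -0.3],
                  [0.1, 0.1]])
print(predict(X, betas))
```

Training procedures such as logistic regression or the perceptron differ only in how the rows of `betas` are fitted; the scoring step itself is the same.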
Classification has many applications. In some of these, it is employed as a data mining procedure, while in others more detailed statistical modeling is undertaken.
https://en.wikipedia.org/wiki/Statistical_classification
In linguistics, cartographic syntax, or simply Cartography, is a branch of generative syntax. The basic assumption of cartographic syntax is that syntactic structures are built according to the same patterns in all languages of the world: all languages are assumed to exhibit a richly articulated structure of hierarchical projections with specific meanings. Cartography belongs to the tradition of generative grammar and is regarded as a theory within the Principles and Parameters framework. The founders of Cartography are the Italian linguists Luigi Rizzi and Guglielmo Cinque.

The cartographic approach was developed with "the emergence of syntactic analyses that identified and implied functional heads" in the literature of the 1980s.[1] Functional heads are the minimal projection of functional categories such as Agreement (Agr), Tense, Aspect and Mood (TAM).[2] They differ from lexical heads in that they are not part of lexical categories such as verbs (V) and nouns (N).[3]

In his 1999 work, Guglielmo Cinque used the cartographic method to create a detailed map of the structure of a clause, proposing a "fixed universal hierarchy of clausal functional projections".[4] In the literature it had been assumed that adverbs are adjuncts in the syntactic structure, but Cinque argues that treating adverbs in this manner is problematic, since adjuncts can take different positions which are not always grammatical. He therefore proposes that adverbs are in fact "specifiers of distinct maximal projections". Moreover, a large cross-linguistic analysis showed that adverbs from seemingly different classes have a fixed order across languages, and that the morpho-syntactically expressed functional heads also have a fixed hierarchy. When compared, the two hierarchies (namely, the hierarchy of adverbs (AdvPs) and that of functional heads) match in number, type and relative order.[4][5]

(An excerpt from Cinque 2012[6] represents the two proposed syntactic hierarchies in parallel; for the complete hierarchy, please refer to the cited source. The hierarchy in 1a is made up of functional heads (Mood, Tense, Modality, Aspect and Voice), while the hierarchy in 1b is made up of adverbs belonging to different classes.)

In the English examples, the hierarchy in 1b places the durative adverb class (briefly) closer to the verb, while the habitual class (normally) is farther away from it. Therefore, an inverted order of the adverbs results in ungrammaticality, which is the case for example 2b.[6] The hierarchy is also visible in the examples in 3, and in examples from other languages such as Italian and Hebrew.[6]

Another influential development that helped shape the properties of the cartographic model was the Principles and Parameters model introduced by Noam Chomsky.[7] The model tackled the invariance and the variability of natural languages by stating that, while languages seem to exhibit a lot of variability at the surface level, abstracting away from the surface can reveal "limits on possible variation". This model has led to new research in previously understudied languages, with new empirical claims.[8]

The cartographic method is intended as a heuristic model which can lead to new empirical claims and generalizations.[8] In the view of Cinque and Rizzi, the cartographic method can be useful for comparative syntax studies, but also for studying uniformity across languages through the patterns observed.
In this sense it is said to be a research topic rather than a program, because it provides a tool for structural analyses.[9]

Cartographic theory adopts the Uniformity Principle proposed by Noam Chomsky: "In the absence of compelling evidence to the contrary, assume languages to be uniform, with variety restricted to easily detectable properties of utterances."[10] This approach considers languages uniform in structure. It is assumed that even though some languages express or encode grammatical features in a visible way while others do not, the underlying functional sequence is the same. It is also assumed that variability in the order of the functional elements can be explained by movement operations triggered by other influences (for example, information structure or scope).[6]

With the assumption that there is a one-to-one relationship between a syntactic feature and a head, within a hierarchy of functional categories every morphosyntactic feature (covertly or overtly realized) belonging to a functional element is assigned a head with a fixed order in the hierarchy.[1] Previous research has shown that the inventory of functional elements is very rich, counting almost 150 separate functional elements.[11] While the position in the hierarchy of functional elements was relatively constant among some studied languages, it has been observed that for others the hierarchy did not hold. For example, Cinque (1999) showed that some functional categories such as NegP (Negation) and AgrP (Agreement) can take different positions.[4]

The aim of Cartography is then the drawing of structural maps that express syntactic configurations. This is done by observing the properties of the functional elements and the way in which they interact with each other to form seemingly fixed functional hierarchies.[1]

The basic method of Cartography is called the "transitivity method". This method is introduced here first in the abstract and then by means of a concrete English example. The starting point is an observation of two elements A and B and their relative ordering. Usually, languages prefer one order of two elements, in this case, for example, AB, but not *BA (the star indicates that an order is not well-formed). In other cases, BA is not ill-formed, but marked. This means, for example, that one element needs to be stressed and that it can only be used under certain circumstances. This is indicated by a number sign (i.e., #BA). Then, the relative order of other elements is explored, for example, the relative order of the elements B and C. Suppose the relative order of these elements is BC, but not *CB. This predicts that the order AC should hold, but the order *CA should be ruled out. This can then be tested.

This can be illustrated by the concrete example of English adjective ordering restrictions.[12] In English, evaluative adjectives, used by a speaker to express his/her subjective evaluation of a noun, precede size adjectives:[13] Note that it is possible to say (1b), but this either requires a pause or stress. Thus, the neutral order is evaluation > size. Also note that this is not an order concerning just the two adjectives great and big, but the whole classes of evaluative adjectives (e.g., cute or awesome) and size adjectives (e.g., tiny or small). So far, the observation that evaluative adjectives precede size adjectives in English is simply an empirical observation and is theory-neutral. Now, another class is tested, for example, color adjectives.
Comparing color adjectives to size adjectives reveals the order size > color. Combining these insights predicts the order evaluation > color, and this can now be tested. As the prediction indeed turns out to be on the right track, we can conclude the combined order evaluation > size > color. In fact, it is not only these three classes but many others that exhibit similar ordering restrictions, and not only in English but in presumably all languages of the world. The question that emerges is how to account for these facts theoretically. In older versions of generative grammar, it was assumed that adjectives are adjuncts. However, an adjunct approach explicitly predicts that the order of the adjectives should be free, which is against the empirical facts. The idea of Cartography is that such ordering restrictions are hard-wired into the syntactic structures of all languages and that all languages exhibit the same structure. This leads to the assumption of a richly articulated and fixed set of functional projections, not only for adjectives but also for the structure of whole clauses.

Such orders can be made visible by comparing different languages, although languages are, of course, different on the surface. While languages use different strategies to express syntactic categories (or may not even express them at all), the order is nevertheless visible. As an example, consider the categories of epistemic modality (which expresses a necessity or a possibility asserted by a speaker on the basis of his/her knowledge), tense (a bit of an oversimplification here), ability, and an event description. These categories are expressed in English in exactly this order, and other orders will be ill-formed.[14] Comparing this order to German reveals that German uses a reverse strategy: the order is exactly the same, but mirrored (note again that it is not possible to change the order). Examples like these are taken as evidence in favor of the idea that syntactic structures are fixed across languages, although there may be surface variation due to the fact that languages may employ different strategies of expressing them (e.g., by concatenating them from right to left or from left to right).

From the beginning of Cartography, research on the left periphery of the clause, also called the initial periphery, was of particular interest. The structure of a syntactic clause is made of three layers: a V-projection (verb), which includes the lexical content of the clause; an I-projection (inflectional); and a C-projection (complementizer), which connects the clause to a matrix sentence or to discourse.[15] The initial periphery refers to the C-projection, C-system or CP (complementizer phrase). It has been proposed that the left periphery is a structurally rich, "fine-grained" domain with distinct syntactic positions.

The study of the left periphery of the clause from a cartographic perspective initially focused on Italian. It has been observed that different types of complementizers have different orders when a Topic element is added. For example, the declarative complementizer "che" is acceptable in different dialects both before and after the Topic element, while the infinitival complementizer "di" always follows the Topic element.[9] The former corresponds to "Force" while the latter corresponds to "Fin".
A "Force" feature selects for a declarative, interrogative or exclamative sentence, while the "Fin" feature, according to Rizzi and Bocci, "expresses the finite or non-finite character of the clause, agreeing in finiteness with the finite or non-finite morphology of the clause-internal predicate".[9] This observation led to the simple mapping of functional features in 3 and to the conclusion that the C-system has a complex structure, since "che" and "di" occupy different slots in this domain.[16]

Another feature considered in the analysis of Rizzi (1997) was Focus. In Romance languages the Focus position is usually at the left periphery of the clause. It has been shown that its position relative to Topic is still quite flexible, allowing for several Topic elements in an unrestricted order around Focus (given the appropriate context). The standard order can be seen in the mapping in 13b.[9] The interrogative element "se" has been shown to have a similarly flexible order around Top (it can both precede and follow Top), but it must nevertheless be in a higher position than Foc, as in the mapping in 14.[16] An explanation for this is that syntactic features delimit the C-system in layers: the Force feature represented by the complementizer "che" belongs to the upper part of the C-system, the Int element "se" to the middle part, and the "di" complementizer to the lower part. This evidence is believed to strengthen the rich-structure hypothesis of the C-system.[16]
https://en.wikipedia.org/wiki/Cartographic_syntax
A metasyntax is a syntax used to define the syntax of a programming language or formal language. It describes the allowable structure and composition of phrases and sentences of a metalanguage, which is used to describe either a natural language or a computer programming language.[1] Some of the widely used formal metalanguages for computer languages are Backus–Naur form (BNF), extended Backus–Naur form (EBNF), Wirth syntax notation (WSN), and augmented Backus–Naur form (ABNF).

Metalanguages have their own metasyntax, each composed of terminal symbols, nonterminal symbols, and metasymbols. A terminal symbol, such as a word or a token, is a stand-alone structure in a language being defined. A nonterminal symbol represents a syntactic category, which defines one or more valid phrasal or sentence structures consisting of an n-element subset. Metasymbols provide syntactic information for denotational purposes in a given metasyntax. Terminals, nonterminals, and metasymbols do not apply across all metalanguages. Typically, the metalanguage for token-level languages (formally called "regular languages") does not have nonterminals, because nesting is not an issue in these regular languages. English, as a metalanguage for describing certain languages, does not contain metasymbols, since all explanation can be done using English expressions. Only certain formal metalanguages used for describing recursive languages (formally called context-free languages) have terminals, nonterminals, and metasymbols in their metasyntax. The metasyntax conventions of these formal metalanguages are not yet formalized.

Many metasyntactic variations or extensions exist in the reference manuals of various computer programming languages. One variation on the standard convention for denoting nonterminals and terminals is to remove metasymbols such as angle brackets and quotation marks and to apply font types to the intended words. In Ada, for example, syntactic categories are denoted by applying a lower-case sans-serif font to the intended words or symbols, and all terminal words or symbols consist of characters with code positions between 16#20# and 16#7E# (inclusive); the definition of each character set follows the International Standard ISO/IEC 10646:2003. In C and Java, syntactic categories are denoted using an italic font, while terminal symbols are denoted by a gothic font. In J, the metasyntax does not apply metasymbols to describe J's syntax at all; rather, all syntactic explanations are done in a metalanguage very similar to English called Dictionary, which is uniquely documented for J.

The purpose of these extensions is to provide a simpler and unambiguous metasyntax. In terms of simplicity, BNF's metanotation does not make the metasyntax easy to read, as the opening and closing metasymbols appear abundantly. In terms of ambiguity, BNF's metanotation generates unnecessary complexity when quotation marks, apostrophes, less-than signs or greater-than signs come to serve as terminal symbols, which they often do. The extended metasyntax utilizes properties such as case, font, and code position of characters to reduce this unnecessary complexity. Moreover, some metalanguages use fonted separator categories to incorporate metasyntactic features for layout conventions, which are not formally supported by BNF.
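To illustrate the three symbol roles concretely, here is a toy grammar encoded in Python (the grammar and functions are invented for illustration; in a BNF rendering, <expr> and <number> would be the nonterminals, the quoted digits and "+" the terminals, and ::=, |, and the angle brackets the metasymbols):

```python
# A toy grammar in the spirit of BNF:
#   <expr>   ::= <number> | <number> "+" <expr>
#   <number> ::= "0" | "1"
# Dict keys play the nonterminal role, plain strings the terminal role,
# and the dict/list nesting stands in for BNF's metasymbols.
GRAMMAR = {
    "expr": [["number"], ["number", "+", "expr"]],
    "number": [["0"], ["1"]],
}

def derives(symbol, tokens, grammar):
    """Return the possible remainders of `tokens` after matching `symbol`
    at the front (a simple recursive-descent membership check)."""
    if symbol not in grammar:                       # terminal symbol
        return [tokens[1:]] if tokens and tokens[0] == symbol else []
    remainders = []
    for production in grammar[symbol]:              # try each alternative
        states = [tokens]
        for sym in production:
            states = [r for s in states for r in derives(sym, s, grammar)]
        remainders.extend(states)
    return remainders

def accepts(tokens, grammar, start="expr"):
    return any(r == [] for r in derives(start, tokens, grammar))

print(accepts(["1", "+", "0"], GRAMMAR))  # True
print(accepts(["+", "1"], GRAMMAR))       # False
```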
https://en.wikipedia.org/wiki/Metasyntax
When analysing the regularities and structure of music, as well as the processing of music in the brain, certain findings lead to the question of whether music is based on a syntax that could be compared with linguistic syntax. To approach this question it is necessary to look at the basic aspects of syntax in language, as language unquestionably presents a complex syntactical system. If music has a comparable syntax, noteworthy equivalents to the basic aspects of linguistic syntax have to be found in musical structure. By implication, the processing of music in comparison to language could also give information about the structure of music.

Syntax in general can be understood as the study of the principles and rules needed for the construction of a language, or as a term describing these principles and rules for a particular language. Linguistic syntax is especially marked by its structural richness, which becomes apparent in its multi-layered organization as well as in the strong relationship between syntax and meaning. That is, there are special linguistic syntactic principles that define how the language is formed out of different subunits, such as words out of morphemes, phrases out of words and sentences out of phrases. Furthermore, linguistic syntax is characterized by the fact that a word can take on abstract grammatical functions that are defined less through properties of the word itself and more through context and structural relations. For example, every noun can be used as a subject, object or indirect object, but without a sentence as the normal context of the word, no statement about its grammatical function can be made. Finally, linguistic syntax is marked by abstractness: only conventional structural relations, and not psychoacoustic relationships, are the basis for linguistic syntax.[1]

Concerning musical syntax, these three aspects of richness in linguistic syntax, as well as the abstractness, should be found in music too if one wants to claim that music has a comparable syntax. One caveat is that most of the studies dealing with musical syntax are confined to Western European tonal music; thus this article can also only focus on Western tonal music.[1]

Considering the multi-layered organization of music, three levels of pitch organization can be found. The lowest level is the musical scale, which consists of seven tones or "scale degrees" per octave with an asymmetric pattern of intervals between them (for example, the C-major scale). Scales are built up out of the 12 possible pitch classes per octave (A, A♯, B, C, C♯, D, D♯, E, F, F♯, G, G♯), and the different scale tones are not equal in their structural stability. Empirical evidence indicates a hierarchy in the stability of the single tones. The most stable one is called the "tonic" and embodies the tonal centre of the scale. The most unstable tones are the ones closest to the tonic (scale degrees 2 and 7), which are called the "supertonic" and the "leading tone". In studies, scale degrees 1, 3 and 5 have been judged as closely related. It has also been shown that an implicit knowledge of scale structure has to be learned and developed in childhood and is not inborn. The next superordinate level of pitch organization is the chord structure, in which three scale tones, each separated by a distance of two scale steps, are played simultaneously and thereby combined into chords.
When chords are built on the basis of a musical scale, three different kinds of chords result, namely "major" (e.g. C-E-G), "minor" (e.g. D-F-A) and "diminished" (e.g. B-D-F) triads. This is due to the asymmetric intervals between the scale tones: a distance of two scale steps can comprise either three or four semitones and can therefore be an interval of a minor third (three semitones) or a major third (four semitones). A major triad consists of a major third followed by a minor third and is built on scale degrees 1, 3 and 5 (or 4, 6 and 1, for the subdominant, and 5, 7 and 2, for the dominant, the other two major triads that can be formed from the major scale). A minor triad consists of a minor third followed by a major third and is built on scale degrees 2, 4 and 6 (or 3, 5 and 7, for the mediant, and 6, 1 and 4, for the submediant). Only on scale degree 7 does the triad consist of two minor thirds, and it is therefore a diminished triad.

Chordal syntax touches mainly on four basic aspects. The first is that the lowest note in each triad functions as the fundament of the chord and therefore as its structurally most important pitch; the chord is named after this note, and the chord's harmonic label is grounded on it. The second aspect is that chord syntax provides norms for altering chords by additional tones. One example is the addition of a fourth tone to a triad, namely the seventh tone of the scale (e.g. in a C-major scale the addition of F to the triad G-B-D would lead to a so-called "dominant seventh chord"). Concerning norms for the progression of chords in time, the third aspect focuses on the relationship between chords; the patterning of chords in a cadence, for example, indicates a movement from a V chord to a I chord. The fact that the I chord is perceived as a resting point in a musical phrase implies that the single chords built on notes of a scale are not equal in their stability but show the same differences in stability as the notes of the scale do. This is the fourth basic aspect of chordal syntax: the tonic chord (the one built on the tonic, C-E-G in C-major, for example) is the most stable and central chord, followed by the dominant chord (built on the 5th scale degree) and the subdominant chord (built on the 4th scale degree).
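The triad construction just described is mechanical enough to spell out in code. A small Python sketch (an illustration, not from the article) that classifies the triad built on each degree of the C-major scale by measuring its stacked thirds in semitones:

```python
# Stack two thirds (every other scale tone) on each scale degree and
# measure the semitone intervals: 4+3 = major, 3+4 = minor, 3+3 = diminished.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]          # C D E F G A B as semitones
NAMES = ["C", "D", "E", "F", "G", "A", "B"]

def triad_quality(degree):                     # degree: 0-based scale step
    root, third, fifth = (MAJOR_SCALE[(degree + 2 * i) % 7]
                          + 12 * ((degree + 2 * i) // 7) for i in range(3))
    lower, upper = third - root, fifth - third
    return {(4, 3): "major", (3, 4): "minor", (3, 3): "diminished"}[(lower, upper)]

for d in range(7):
    print(NAMES[d], triad_quality(d))
# -> C major, D minor, E minor, F major, G major, A minor, B diminished
```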
The highest level of pitch organization can be seen in key structure. In Western European tonal music the key is based on a scale with its associated chords and chord relations. Scales can be built as minor or major scales (differing in the succession of intervals between the scale tones) on each of the 12 pitch classes, and therefore there are 24 possible keys in tonal music. Analysing key structure in the context of musical syntax means examining the relationship between keys in a piece of music. Usually not only one key is used to build up a composition; so-called key "modulations" (in other words, changes of key) are also utilized, and in these modulations a certain recurring pattern can be perceived: switches from one key to another are often found between related keys. Three general principles for the relationship between keys can be postulated on the basis of perceptual experiments and also neural evidence for implicit knowledge of key structure. Looking at the C-major key as an example, there are three closely related keys: G-major, A-minor and C-minor. C-major and G-major are keys whose 1st scale degrees are separated by a musical fifth (the pattern of relations is represented in the circle of fifths for major keys). A-minor and C-major share the same notes of the scale but have a different tonic (A-minor is the so-called relative minor key of C-major). And C-major and C-minor have the same tonic in their scales. All in all, it can be said that music, like human language, has a considerable multi-layered organization.

Considering the last two basic aspects of linguistic syntax, namely the considerable significance of the order of subunits for the meaning of a sentence and the fact that words take on abstract grammatical functions defined through context and structural relations, it seems useful to analyse the hierarchical structure of music to find correlates in music. One aspect of the hierarchical structure of music is ornamentation. The word "ornamentation" points to the fact that there are events in a musical context that are less important than others for forming an idea of the general gist of a sequence. The decision on the importance of events comprises not only harmonic considerations but also rhythmic and motivic information. But a classification of events simply into ornamental and structural events would be too superficial. In fact, the most common hypothesis implies that music is organized into structural levels, which can be pictured as branches of a tree. A pitch that is structural at a higher level may be ornamental at a deeper level. This can be compared with the hierarchical syntactic structure of a sentence, in which there are structural elements that are necessary to build up a sentence, like the noun phrase and the verb phrase, while at a deeper level the structural elements also contain additional or ornamental constituents.

Searching for other aspects of hierarchical structure in music, there is a controversial discussion about whether the organization of tension and resolution in music can be described as a hierarchical structure or only as a purely sequential structure. According to Patel,[1] research in this area has produced apparently contradictory evidence, and more research is needed to answer this question. The question concerning the kind of structure that underlies tension and resolution in music is linked very closely to the relationship between order and meaning in music. Considering tension and resolution as one possible kind of meaning in music, a hierarchical structure would imply that a change in the order of musical elements would influence the meaning of the music.

The last aspect to examine is the abstractness of linguistic syntax and its correlate in music. There are two contradicting points of view. The first claims that the foundation for musical scales, and for the existence of a tonal centre in music, can be seen in the physical basis of the overtone series or in the psychoacoustic properties of chords in tonal music, respectively. But in recent times there has been strong evidence for the second point of view, namely that musical syntax reflects abstract cognitive relationships. All in all, the consideration of syntax in music and language shows that music has a syntax comparable to linguistic syntax, especially concerning its great complexity and hierarchical organization. Nevertheless, it has to be emphasized that musical syntax is not a simple variant of linguistic syntax, but a similarly complex system with its own substance.
That means that it would be the wrong approach simply to search for musical analogies of linguistic syntactic entities such as nouns or verbs.

Investigating the neuronal processing of musical syntax can serve two purposes.[2] The first is to learn more about the processing of music in general: which areas of the brain are involved, and whether there are specific markers of brain activity due to the processing of music and musical syntax. The second is to compare the processing of musical and linguistic syntax to find out whether they have an effect upon each other or whether there is even a significant overlap. The verification of an overlap would support the thesis that syntactic operations (musical as well as linguistic) are modular. "Modular" means that the complex system of processing is decomposed into subsystems with modular functions. Concerning the processing of syntax, this would mean that the domains of music and language each have specific syntactic representations, but that they share neural resources for activating and integrating these representations during syntactic processing.

Processing of music and musical syntax comprises several aspects concerning melodic, rhythmic, metric, timbral and harmonic structure. For the processing of chord functions, four steps can be described. (1) First, a tonal centre has to be detected from the first chords of a sequence. Often the first chord is interpreted as the tonal centre of a sequence, and a re-evaluation is necessary if the first chord has another harmonic function. (2) Successive chords are related to this tonal centre in terms of their harmonic distance from it. (3) As described above (Does music have a syntax?), music has a hierarchical structure in terms of pitch organization and the organization of tension and release. Pitch organization concerning chords means that in a musical phrase the tonic is the most stable chord and is experienced as the resting point; the dominant and subdominant in turn are more stable than the submediant and the supertonic. The progression of chords in time forms a tonal structure based on pitch organization, in which moving away from the tonic is perceived as tensioning and moving towards the tonic is experienced as releasing. Therefore, hierarchical relations may convey organized patterns of meaning. (4) Concerning harmonic aspects of major-minor tonal music, musical syntax can be characterized by statistical regularities in the succession of chord functions in time, that is, probabilities of chord transitions. As these regularities are stored in long-term memory, predictions about following chords are made automatically when listening to a musical phrase. Source:[3]

The violation of these automatically made predictions leads to the observation of so-called ERPs (event-related potentials, stereotyped electrophysiological responses to an internal or external stimulus). Two forms of ERPs can be detected in the context of processing music. One is the MMN (mismatch negativity), which was first investigated only with physical deviants such as frequency, sound intensity and timbre deviants (referred to as phMMN) and has now also been shown for changes of abstract auditory features like tone pitches (referred to as afMMN). The other is the so-called ERAN (early right anterior negativity), which can be elicited by syntactic irregularities in music.
Both the ERAN and the MMN are ERPs indicating a mismatch between predictions based on regularities and the actually experienced acoustic information. Since for a long time the ERAN seemed to be a special variant of the MMN, the question arises why the two are distinguished today. Several differences between the MMN and the ERAN have been found in recent years.

Even though music-syntactic regularities often coincide with acoustic similarity and music-syntactic irregularities with acoustic difference, an ERAN, but not an MMN, can be elicited when a chord represents a syntactic rather than a physical deviance. To demonstrate this, so-called Neapolitan sixth chords are used. These are consonant chords when played in isolation, but they can be added to a musical phrase to which they are only distantly related harmonically. In a sequence of five chords, the addition of a Neapolitan sixth chord at the third or at the fifth position evokes different amplitudes of ERANs in the EEG, with a higher amplitude at the fifth position. Nevertheless, when a chord sequence is created in which the Neapolitan chord at the fifth position is music-syntactically less irregular than a Neapolitan chord at the third position, the amplitude is higher at the third position. In opposition to the MMN, a clear ERAN is also elicited by syntactically irregular chords that are acoustically more similar to the preceding harmonic context than syntactically regular chords.

Therefore, the MMN seems to be based on an on-line establishment of regularities; that is, the regularities are extracted on-line from the acoustic environment. By contrast, the ERAN rests upon representations of music-syntactic regularities that exist in a long-term memory format and are learned during early childhood. This is reflected in the development of the ERAN and the MMN. The ERAN cannot be verified in newborn babies, whereas the MMN can be demonstrated even in fetuses. In two-year-old children the ERAN is very small; in five-year-old children a clear ERAN is found, but with a longer latency than in adults. By the age of 11, children show an ERAN similar to that of adults. From these observations the thesis can be built that the MMN is essential for the establishment and maintenance of representations of the acoustic environment and for processes of auditory scene analysis, whereas the ERAN is based on learning to build up a structural model, which is established with reference to representations of syntactic regularities already existing in a long-term memory format. Considering effects of training, both the ERAN and the MMN can be modulated by training.

Differences between the ERAN and the MMN also exist in the neural sources making the main contributions to the ERPs. The sources of the ERAN are located in the pars opercularis of the inferior fronto-lateral cortex (inferior Brodmann's area 44), with contributions from the ventrolateral premotor cortex and the anterior superior temporal gyrus, whereas the MMN receives its main contributions from, and within the vicinity of, the primary auditory cortex, with additional sources in frontal cortical areas. Therefore, the sources of the ERAN lie basically in the frontal cortex, whereas the sources of the MMN are located in the temporal lobe. Further support for this thesis comes from the fact that under propofol sedation, which mainly affects the frontal cortex, the ERAN is abolished while the MMN is only reduced.
Finally, the amplitude of the ERAN is reduced under ignore conditions, whereas the MMN is largely unaffected by attentional modulations.

The processing stages can be summarized as follows. (1) First, a separation of sound sources, an extraction of sound features and the establishment of representations of auditory objects have to be made from the incoming acoustic input. The same processes are required for the MMN and the ERAN. (2) For the MMN, regularities are filtered on-line out of the input to create a model of the acoustic environment. At this point there is a difference to the ERAN, since for the ERAN representations of regularities already exist in a long-term memory format and the incoming sound is integrated into a pre-existing model of musical structure. (3) According to the model of musical structure, predictions concerning forthcoming auditory events are formed. This process is similar for the ERAN and for the MMN. (4) Lastly, a comparison between the actually incoming sound and the predictions based on the model is made. This process, too, is partly the same for the MMN and the ERAN. Source:[1]

As the ERAN is similar to an ERP called ELAN, which can be elicited by violations of linguistic syntax, it seems plausible that the ERAN really reflects syntactic processing. From this thought, an interaction between music-syntactic and language-syntactic processing would be very likely. There are different approaches in neuroscience to answering the question of an overlap between the neuronal processing of linguistic and musical syntax.

Neuropsychology deals with the question of how the structure and function of the brain relate to outcomes in behaviour and other psychological processes. From this area of research there has been evidence for a dissociation between musical and linguistic syntactic abilities. Case reports have shown that amusia (a deficiency in the fine-grained perception of pitch, which leads to musical tone-deafness and can be congenital or acquired later in life, for example from brain damage) is not necessarily linked to aphasia (severe language impairment following brain damage) and vice versa. This means that individuals with normal speech and language abilities showed musical tone-deafness, and individuals with language impairments retained sufficient musical-syntactic abilities. The problem with this neuropsychological research is that no case report has shown that aphasia does not necessarily entail amusia in non-musicians; on the contrary, newer findings suggest that amusia is almost always linked to aphasia.

Furthermore, results from neuroimaging have led to the "shared syntactic integration resource hypothesis" (SSIRH), which supports the presumption that there is an overlap between the processing of musical and linguistic syntax and that syntactic operations are modular. Research using electroencephalography has also shown that a difficulty or irritation in musical as well as in linguistic syntax elicits ERPs which are similar to each other. How can the discrepancy between neuropsychology and neuroimaging be explained? In fact, the concept of modularity itself can help to understand the different and apparently contradicting findings of neuropsychological research and neuroimaging.
Introducing the concept of a dual system, in which there is a distinction between syntactic representation and syntactic processing, this could mean that there is a distinction between long-term structural knowledge in a domain (representation) and operations conducted on that knowledge (syntactic processing). Damage to an area representing long-term musical knowledge would lead to amusia without aphasia, but damage to an area subserving syntactic processing would impair both musical and linguistic syntactic processing.

The comparison of the syntactic processing of language and music is based on three theories, which should be mentioned but are not explained in detail. The first two, the "dependency locality theory" and the "expectancy theory", refer to syntactic processing in language, whereas the third, the "tonal pitch space theory", relates to syntactic processing in music. The language theories contribute to the concept that in order to conceive the structure of a sentence, resources are consumed. If the conception of this structure is difficult, because words that belong together are distant from each other or an expected structure of the sentence is violated, more resources are consumed, namely those for activating low-activation items. Violating an anticipated structure in music could mean a harmonically unexpected note or chord in a musical sequence. As in language, this is associated with a "processing cost due to the tonal distance" (Patel, 2008) and therefore means that more resources are needed for activating low-activation items. Overall, these theories lead to the "shared syntactic integration resources hypothesis", as the areas from which low-activation items are activated could be the correlate of the overlap between linguistic and musical syntax.

Strong evidence for the existence of this overlap comes from studies in which music-syntactic and linguistic-syntactic irregularities were presented simultaneously. They showed an interaction between the ERAN and the LAN (left anterior negativity), an ERP elicited by linguistic-syntactic irregularities. The LAN was reduced when an irregular word was presented simultaneously with an irregular chord, compared to the condition in which an irregular word was presented with a regular chord. Contrary to this finding, the phMMN elicited by frequency deviants did not interact with the LAN. From these facts it can be reasoned that the ERAN relies on neural resources related to syntactic processing (Koelsch 2008). Furthermore, they give strong evidence for the thesis that there is an overlap between the processing of musical and linguistic syntax and therefore that syntactic operations (musical as well as linguistic) are modular.

This article incorporates material from the Citizendium article "Musical syntax", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
https://en.wikipedia.org/wiki/Musical_syntax
Semiotics (/ˌsɛmiˈɒtɪks/ SEM-ee-OT-iks) is the systematic study of sign processes and the communication of meaning. In semiotics, a sign is defined as anything that communicates intentional and unintentional meaning or feelings to the sign's interpreter. Semiosis is any activity, conduct, or process that involves signs. Signs often are communicated by verbal language, but also by gestures, or by other forms of language, e.g. artistic ones (music, painting, sculpture, etc.). Contemporary semiotics is a branch of science that generally studies meaning-making (whether communicated or not) and various types of knowledge.[1]

Unlike linguistics, semiotics also studies non-linguistic sign systems. Semiotics includes the study of indication, designation, likeness, analogy, allegory, metonymy, metaphor, symbolism, signification, and communication. Semiotics is frequently seen as having important anthropological and sociological dimensions. Some semioticians regard every cultural phenomenon as being able to be studied as communication.[2] Semioticians also focus on the logical dimensions of semiotics, examining biological questions such as how organisms make predictions about, and adapt to, their semiotic niche in the world. Fundamental semiotic theories take signs or sign systems as their object of study. Applied semiotics analyzes cultures and cultural artifacts according to the ways they construct meaning through their being signs. The communication of information in living organisms is covered in biosemiotics, including zoosemiotics and phytosemiotics.

The importance of signs and signification has been recognized throughout much of the history of philosophy and psychology. The term derives from Ancient Greek σημειωτικός (sēmeiōtikós) 'observant of signs'[3] (from σημεῖον (sēmeîon) 'a sign, mark, token').[4] For the Greeks, 'signs' (σημεῖον sēmeîon) occurred in the world of nature and 'symbols' (σύμβολον sýmbolon) in the world of culture. As such, Plato and Aristotle explored the relationship between signs and the world.[5]

It would not be until Augustine of Hippo[6] that the nature of the sign would be considered within a conventional system. Augustine introduced a thematic proposal for uniting the two under the notion of 'sign' (signum) as transcending the nature–culture divide and identifying symbols as no more than a species (or sub-species) of signum.[7] A monograph study on this question was done by Manetti (1987).[8][a] These theories have had a lasting effect in Western philosophy, especially through scholastic philosophy.[citation needed]

The general study of signs that began in Latin with Augustine culminated with the 1632 Tractatus de Signis of John Poinsot, and then began anew in late modernity with the attempt in 1867 by Charles Sanders Peirce to draw up a "new list of categories". More recently, Umberto Eco, in his Semiotics and the Philosophy of Language, has argued that semiotic theories are implicit in the work of most, perhaps all, major thinkers.[citation needed]

John Locke (1690), himself a man of medicine, was familiar with this "semeiotics" as naming a specialized branch within medical science. In his personal library were two editions of Scapula's 1579 abridgement of Henricus Stephanus' Thesaurus Graecae Linguae, which listed σημειωτική as the name for 'diagnostics',[9] the branch of medicine concerned with interpreting symptoms of disease ("symptomatology").
Physician and scholar Henry Stubbe (1670) had transliterated this term of specialized science into English precisely as "semeiotics", marking the first use of the term in English:[10]

"...nor is there any thing to be relied upon in Physick, but an exact knowledge of medicinal phisiology (founded on observation, not principles), semeiotics, method of curing, and tried (not excogitated, not commanding) medicines...."

Locke would use the term sem(e)iotike in An Essay Concerning Human Understanding (book IV, chap. 21),[11][b] in which he explains how science may be divided into three parts:[12]: 174

All that can fall within the compass of human understanding, being either, first, the nature of things, as they are in themselves, their relations, and their manner of operation: or, secondly, that which man himself ought to do, as a rational and voluntary agent, for the attainment of any end, especially happiness: or, thirdly, the ways and means whereby the knowledge of both the one and the other of these is attained and communicated; I think science may be divided properly into these three sorts.

Locke then elaborates on the nature of this third category, naming it Σημειωτική (Semeiotike) and explaining it as "the doctrine of signs" in the following terms:[12]: 175

Thirdly, the third branch [of sciences] may be termed σημειωτικὴ, or the doctrine of signs, the most usual whereof being words, it is aptly enough termed also Λογικὴ, logic; the business whereof is to consider the nature of signs the mind makes use of for the understanding of things, or conveying its knowledge to others.

Juri Lotman introduced Eastern Europe to semiotics and adopted Locke's coinage (Σημειωτική) as the name to subtitle his founding at the University of Tartu in Estonia in 1964 of the first semiotics journal, Sign Systems Studies.

Ferdinand de Saussure founded his semiotics, which he called semiology, in the social sciences:[13]

It is...possible to conceive of a science which studies the role of signs as part of social life. It would form part of social psychology, and hence of general psychology. We shall call it semiology (from the Greek semeîon, 'sign'). It would investigate the nature of signs and the laws governing them. Since it does not yet exist, one cannot say for certain that it will exist. But it has a right to exist, a place ready for it in advance. Linguistics is only one branch of this general science. The laws which semiology will discover will be laws applicable in linguistics, and linguistics will thus be assigned to a clearly defined place in the field of human knowledge.

Thomas Sebeok[c] would assimilate semiology to semiotics as a part to a whole, and was involved in choosing the name Semiotica for the first international journal devoted to the study of signs. Saussurean semiotics has exercised a great deal of influence on the schools of structuralism and post-structuralism. Jacques Derrida, for example, takes as his object the Saussurean relationship of signifier and signified, asserting that signifier and signified are not fixed, coining the expression différance, relating to the endless deferral of meaning and to the absence of a "transcendent signified".
In the nineteenth century, Charles Sanders Peirce defined what he termed "semiotic" (which he would sometimes spell as "semeiotic") as the "quasi-necessary, or formal doctrine of signs", which abstracts "what must be the characters of all signs used by...an intelligence capable of learning by experience",[14] and which is philosophical logic pursued in terms of signs and sign processes.[15][16] Peirce's perspective is considered as philosophical logic studied in terms of signs that are not always linguistic or artificial, and of sign processes, modes of inference, and the inquiry process in general. The Peircean semiotic addresses not only the external communication mechanism, as per Saussure, but the internal representation machine, investigating sign processes and modes of inference, as well as the whole inquiry process in general.[citation needed]

Peircean semiotic is triadic, including sign, object, and interpretant, as opposed to the dyadic Saussurian tradition (signifier, signified). Peircean semiotics further subdivides each of the three triadic elements into three sub-types, positing the existence of signs that are symbols; semblances ("icons"); and "indices", i.e., signs that are such through a factual connection to their objects.[17]

Peircean scholar and editor Max H. Fisch (1978)[d] would claim that "semeiotic" was Peirce's own preferred rendering of Locke's σημιωτική.[18] Charles W. Morris followed Peirce in using the term "semiotic" and in extending the discipline beyond human communication to animal learning and the use of signals. While the Saussurean semiotic is dyadic (sign/syntax, signal/semantics), the Peircean semiotic is triadic (sign, object, interpretant), being conceived as philosophical logic studied in terms of signs that are not always linguistic or artificial. Peirce aimed to base his new list directly upon experience precisely as constituted by the action of signs, in contrast with the list of Aristotle's categories, which aimed to articulate within experience the dimension of being that is independent of experience and knowable as such through human understanding.[citation needed]

The estimative powers of animals interpret the environment as sensed to form a "meaningful world" of objects, but the objects of this world (or Umwelt, in Jakob von Uexküll's term)[19] consist exclusively of objects related to the animal as desirable (+), undesirable (–), or "safe to ignore" (0). In contrast to this, human understanding adds to the animal Umwelt a relation of self-identity within objects, which transforms objects experienced into 'things' as well as +, –, 0 objects.[20][e] Thus, the generically animal objective world as Umwelt becomes a species-specifically human objective world or Lebenswelt ('life-world'), wherein linguistic communication, rooted in the biologically underdetermined Innenwelt ('inner-world') of humans, makes possible the further dimension of cultural organization within the otherwise merely social organization of non-human animals, whose powers of observation may deal only with directly sensible instances of objectivity.[citation needed]

This further point, that human culture depends upon language understood first of all not as communication, but as the biologically underdetermined aspect or feature of the human animal's Innenwelt, was originally clearly identified by Thomas A.
Sebeok.[21][22] Sebeok also played the central role in bringing Peirce's work to the center of the semiotic stage in the twentieth century,[f] first with his expansion of the human use of signs (anthroposemiosis) to include also the generically animal sign-usage (zoösemiosis),[g] then with his further expansion of semiosis to include the vegetative world (phytosemiosis). Such would initially be based on the work of Martin Krampen,[23] but takes advantage of Peirce's point that an interpretant, as the third item within a sign relation, "need not be mental".[24][25][26]

Peirce distinguished between the interpretant and the interpreter. The interpretant is the internal, mental representation that mediates between the object and its sign. The interpreter is the human who is creating the interpretant.[27] Peirce's "interpretant" notion opened the way to understanding an action of signs beyond the realm of animal life (study of phytosemiosis + zoösemiosis + anthroposemiosis = biosemiotics), which was his first advance beyond Latin Age semiotics.[h]

Other early theorists in the field of semiotics include Charles W. Morris.[28] Writing in 1951, Jozef Maria Bochenski surveyed the field in this way: "Closely related to mathematical logic is the so-called semiotics (Charles Morris) which is now commonly employed by mathematical logicians. Semiotics is the theory of symbols and falls in three parts [...]". Max Black argued that the work of Bertrand Russell was seminal in the field.[30]

Semioticians classify signs or sign systems in relation to the way they are transmitted. This process of carrying meaning depends on the use of codes that may be the individual sounds or letters that humans use to form words, the body movements they make to show attitude or emotion, or even something as general as the clothes they wear. To coin a word to refer to a thing, the community must agree on a simple meaning (a denotative meaning) within their language, but that word can transmit that meaning only within the language's grammatical structures and codes. Codes also represent the values of the culture, and are able to add new shades of connotation to every aspect of life.[citation needed]

To explain the relationship between semiotics and communication studies, communication is defined as the process of transferring data and/or meaning from a source to a receiver. Hence, communication theorists construct models based on codes, media, and contexts to explain the biology, psychology, and mechanics involved. Both disciplines recognize that the technical process cannot be separated from the fact that the receiver must decode the data, i.e., be able to distinguish the data as salient and make meaning out of it. This implies that there is a necessary overlap between semiotics and communication. Indeed, many of the concepts are shared, although in each field the emphasis is different. In Messages and Meanings: An Introduction to Semiotics, Marcel Danesi (1994) suggested that semioticians' priorities were to study signification first and communication second. A more extreme view is offered by Jean-Jacques Nattiez who, as a musicologist, considered the theoretical study of communication irrelevant to his application of semiotics.[31]: 16

Semiotics differs from linguistics in that it generalizes the definition of a sign to encompass signs in any medium or sensory modality. Thus it broadens the range of sign systems and sign relations, and extends the definition of language in what amounts to its widest analogical or metaphorical sense.
The branch of semiotics that deals with such formal relations between signs or expressions in abstraction from their signification and their interpreters,[32] or, more generally, with the formal properties of symbol systems[33] (specifically, with reference to linguistic signs, syntax)[34] is referred to as syntactics.

Peirce's definition of the term semiotic as the study of necessary features of signs also has the effect of distinguishing the discipline from linguistics as the study of contingent features that the world's languages happen to have acquired in the course of their evolutions. From a subjective standpoint, perhaps more difficult is the distinction between semiotics and the philosophy of language. In a sense, the difference lies between separate traditions rather than subjects. Different authors have called themselves "philosopher of language" or "semiotician". This difference does not match the separation between analytic and continental philosophy. On a closer look, there may be found some differences regarding subjects. Philosophy of language pays more attention to natural languages or to languages in general, while semiotics is deeply concerned with non-linguistic signification. Philosophy of language also bears connections to linguistics, while semiotics might appear closer to some of the humanities (including literary theory) and to cultural anthropology.

Semiosis or semeiosis is the process that forms meaning from any organism's apprehension of the world through signs. Scholars who have talked about semiosis in their subtheories of semiotics include C. S. Peirce, John Deely, and Umberto Eco.

Cognitive semiotics combines methods and theories developed in the disciplines of semiotics and the humanities, providing new information on human signification and its manifestation in cultural practices. The research on cognitive semiotics brings together semiotics from linguistics, cognitive science, and related disciplines on a common meta-theoretical platform of concepts, methods, and shared data. Cognitive semiotics may also be seen as the study of meaning-making by employing and integrating methods and theories developed in the cognitive sciences. This involves conceptual and textual analysis as well as experimental investigations. Cognitive semiotics initially was developed at the Center for Semiotics at Aarhus University (Denmark), with an important connection with the Center of Functionally Integrated Neuroscience (CFIN) at Aarhus Hospital. Amongst the prominent cognitive semioticians are Per Aage Brandt, Svend Østergaard, Peer Bundgård, Frederik Stjernfelt, Mikkel Wallentin, Kristian Tylén, Riccardo Fusaroli, and Jordan Zlatev. Zlatev later, in co-operation with Göran Sonesson, established the CCS (Center for Cognitive Semiotics) at Lund University, Sweden.

Finite semiotics, developed by Cameron Shackell (2018, 2019),[35][36][37][38] aims to unify existing theories of semiotics for application to the post-Baudrillardian world of ubiquitous technology. Its central move is to place the finiteness of thought at the root of semiotics, with the sign as a secondary but fundamental analytical construct. The theory contends that the levels of reproduction that technology is bringing to human environments demand this reprioritisation if semiotics is to remain relevant in the face of effectively infinite signs.
The shift in emphasis allows practical definitions of many core constructs in semiotics, which Shackell has applied to areas such as human–computer interaction,[39] creativity theory,[40] and a computational semiotics method for generating semiotic squares from digital texts.[41]

Pictorial semiotics[42] is intimately connected to art history and theory. It goes beyond them both in at least one fundamental way, however. While art history has limited its visual analysis to a small number of pictures that qualify as "works of art", pictorial semiotics focuses on the properties of pictures in a general sense, and on how the artistic conventions of images can be interpreted through pictorial codes. Pictorial codes are the way in which viewers of pictorial representations seem automatically to decipher the artistic conventions of images by being unconsciously familiar with them.[43]

According to Göran Sonesson, a Swedish semiotician, pictures can be analyzed by three models: the narrative model, which concentrates on the relationship between pictures and time in a chronological manner, as in a comic strip; the rhetoric model, which compares pictures with different devices, as in a metaphor; and the Laokoon model, which considers the limits and constraints of pictorial expressions by comparing textual mediums that utilize time with visual mediums that utilize space.[44]

The break from traditional art history and theory, as well as from other major streams of semiotic analysis, leaves open a wide variety of possibilities for pictorial semiotics. Some influences have been drawn from phenomenological analysis, cognitive psychology, structuralist and cognitivist linguistics, and visual anthropology and sociology.

Studies have shown that semiotics may be used to make or break a brand. Culture codes strongly influence whether a population likes or dislikes a brand's marketing, especially internationally. If a company is unaware of a culture's codes, it runs the risk of failing in its marketing. Globalization has caused the development of a global consumer culture where products have similar associations, whether positive or negative, across numerous markets.[45]

Mistranslations may lead to instances of "Engrish" or "Chinglish", terms for unintentionally humorous cross-cultural slogans intended to be understood in English. When translating surveys, the same symbol may mean different things in the source and target language, thus leading to potential errors. For example, the symbol "x" is used to mark a response in English-language surveys, but "x" usually means 'no' in the Chinese convention.[46] This may be caused by a sign that, in Peirce's terms, mistakenly indexes or symbolizes something in one culture that it does not in another.[47] In other words, it creates a connotation that is culturally bound and that violates some culture code. Theorists who have studied humor (such as Schopenhauer) suggest that contradiction or incongruity creates absurdity and therefore humor.[48] Violating a culture code creates this construct of ridiculousness for the culture that owns the code. Intentional humor also may fail cross-culturally because jokes are not on code for the receiving culture.[49]

A good example of branding according to cultural code is Disney's international theme park business. Disney fits well with Japan's cultural code because the Japanese value "cuteness", politeness, and gift-giving as part of their culture code; Tokyo Disneyland sells the most souvenirs of any Disney theme park.
In contrast, Disneyland Paris failed when it launched as Euro Disney because the company did not research the codes underlying European culture. Its storybook retelling of European folktales was taken as elitist and insulting, and the strict appearance standards that it had for employees resulted in discrimination lawsuits in France. Disney souvenirs were perceived as cheap trinkets. The park was a financial failure because its code violated the expectations of European culture in ways that were offensive.[50]

However, some researchers have suggested that it is possible to successfully pass a sign perceived as a cultural icon, such as the logos for Coca-Cola or McDonald's, from one culture to another. This may be accomplished if the sign is migrated from a more economically developed to a less developed culture.[50] The intentional association of a product with another culture has been called "foreign consumer culture positioning" (FCCP). Products also may be marketed using global trends or culture codes, for example, saving time in a busy world; but even these may be fine-tuned for specific cultures.[45]

Research has also found that, as airline industry brandings grow and become more international, their logos become more symbolic and less iconic. The iconicity and symbolism of a sign depend on cultural convention and are, on that ground, in relation with each other. If the cultural convention has greater influence on the sign, the sign acquires more symbolic value.[51]

The flexibility of human semiotics is well demonstrated in dreams. Sigmund Freud[52] spelled out how meaning in dreams rests on a blend of images, affects, sounds, words, and kinesthetic sensations. In his chapter on "The Means of Representation", he showed how the most abstract sorts of meaning and logical relations can be represented by spatial relations. Two images in sequence may indicate "if this, then that" or "despite this, that". Freud thought the dream started with "dream thoughts", which were like logical, verbal sentences. He believed that the dream thought was in the nature of a taboo wish that would awaken the dreamer. In order to safeguard sleep, the midbrain converts and disguises the verbal dream thought into an imagistic form, through processes he called the "dream-work".

Kofi Agawu[53] quotes the distinction made by Roman Jakobson[54] between "introversive semiosis, a language which signifies itself" and extroversive semiosis, the referential component of the semiosis. Jakobson writes that introversive semiosis "is indissolubly linked with the esthetic function of sign systems and dominates not only music but also glossolalic poetry and nonrepresentational painting and sculpture",[55] but Agawu uses the distinction mainly in music, proposing Schenkerian analysis as a path to introversive semiosis and topic theory as an example of extroversive semiosis. Jean-Jacques Nattiez makes the same distinction: "Roman Jakobson sees in music a semiotic system in which the 'introversive semiosis' – that is, the reference of each sonic element to the other elements to come — predominates over the 'extroversive semiosis' – or the referential link with the exterior world."[56]

Semiotics can be directly linked to the ideals of musical topic theory, which traces patterns in musical figures throughout their prevalent context in order to assign some aspect of narrative, affect, or aesthetics to the gesture.
Danuta Mirka's The Oxford Handbook of Topic Theory presents a holistic recognition and overview of the subject, offering insight into the development of the theory.[57] In recognizing the indicative and symbolic elements of a musical line, gesture, or occurrence, one can gain a greater understanding of aspects of compositional intent and identity. The philosopher Charles Peirce discusses the relationship of icons and indexes in relation to signification and semiotics. In doing so, he draws on the elements of various ideas, acts, or styles that can be translated into a different field. Whereas indexes consist of a contextual representation of a symbol, icons directly correlate with the object or gesture that is being referenced.

In his 1980 book Classic Music: Expression, Form, and Style, Leonard Ratner amends the conversation surrounding musical tropes, or "topics", in order to create a collection of musical figures that have historically been indicative of a given style.[58] Robert Hatten continues this conversation in Beethoven, Markedness, Correlation, and Interpretation (1994), in which he describes topics as "richly coded style types which carry certain features linked to affect, class, and social occasion such as church styles, learned styles, and dance styles. In complex forms these topics mingle, providing a basis for musical allusion."[59]

Subfields that have sprouted out of semiotics include, but are not limited to, the following:

Thomas Carlyle (1795–1881) ascribed great importance to symbols in a religious context, noting that all worship "must proceed by Symbols"; he propounded this theory in such works as "Characteristics" (1831),[67] Sartor Resartus (1833–4),[68] and On Heroes (1841),[69] which have been retroactively recognized as containing semiotic theories.

Charles Sanders Peirce (1839–1914), a noted logician who founded philosophical pragmatism, defined semiosis as an irreducibly triadic process wherein something, as an object, logically determines or influences something as a sign to determine or influence something as an interpretation or interpretant, itself a sign, thus leading to further interpretants.[70] Semiosis is logically structured to perpetuate itself. The object may be quality, fact, rule, or even fictional (Hamlet), and may be "immediate" to the sign, the object as represented in the sign, or "dynamic", the object as it really is, on which the immediate object is founded. The interpretant may be "immediate" to the sign, all that the sign immediately expresses, such as a word's usual meaning; or "dynamic", such as a state of agitation; or "final" or "normal", the ultimate ramifications of the sign about its object, to which inquiry taken far enough would be destined and with which any interpretant, at most, may coincide.[71] His semiotic[72] covered not only artificial, linguistic, and symbolic signs, but also semblances, such as kindred sensible qualities, and indices, such as reactions. He came c. 1903[73] to classify any sign by three interdependent trichotomies, intersecting to form ten (rather than 27) classes of sign.[74] Signs also enter into various kinds of meaningful combinations; Peirce covered both semantic and syntactical issues in his speculative grammar. He regarded formal semiotic as logic per se and part of philosophy; as also encompassing the study of arguments (hypothetical, deductive, and inductive) and inquiry's methods including pragmatism; and as allied to, but distinct from, logic's pure mathematics.
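Peirce's remark that the three trichotomies intersect to form ten rather than 27 classes has a simple combinatorial reading. A minimal sketch in Python, assuming the standard monotonicity reading of the constraint (a sign's value on a later trichotomy cannot "outrank" its value on an earlier one); the labels and the encoding are an illustrative reconstruction, not a quotation of Peirce:

    from itertools import product

    # Each trichotomy takes one of three values (firstness, secondness,
    # thirdness), encoded here as 1, 2, 3. Unconstrained, that yields
    # 3**3 = 27 combinations; the monotonicity constraint admits only
    # weakly decreasing triples.
    LEVELS = (1, 2, 3)

    all_combos = list(product(LEVELS, repeat=3))
    valid = [t for t in all_combos if t[0] >= t[1] >= t[2]]

    print(len(all_combos))  # 27 unconstrained combinations
    print(len(valid))       # 10 admissible sign classes

Counting weakly decreasing triples over three values gives exactly ten, matching the ten classes of sign mentioned above.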
In addition to pragmatism, Peirce provided a definition of "sign" as a representamen, in order to bring out the fact that a sign is something that "represents" something else in order to suggest it (that is, "re-present" it) in some way:[75][H]

A sign, or representamen, is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign. That sign which it creates I call the interpretant of the first sign. The sign stands for something, its object, not in all respects, but in reference to a sort of idea.

Ferdinand de Saussure (1857–1913), the "father" of modern linguistics, proposed a dualistic notion of signs, relating the signifier as the form of the word or phrase uttered to the signified as the mental concept. According to Saussure, the sign is completely arbitrary; i.e., there is no necessary connection between the sign and its meaning. This sets him apart from previous philosophers, such as Plato or the scholastics, who thought that there must be some connection between a signifier and the object it signifies. In his Course in General Linguistics, Saussure credits the American linguist William Dwight Whitney (1827–1894) with insisting on the arbitrary nature of the sign. Saussure's insistence on the arbitrariness of the sign also has influenced later philosophers and theorists such as Jacques Derrida, Roland Barthes, and Jean Baudrillard. Ferdinand de Saussure coined the term sémiologie while teaching his landmark "Course on General Linguistics" at the University of Geneva from 1906 to 1911. Saussure posited that no word is inherently meaningful. Rather, a word is only a "signifier", i.e., the representation of something, and it must be combined in the brain with the "signified", or the thing itself, in order to form a meaning-imbued "sign". Saussure believed that dismantling signs was a real science, for in doing so we come to an empirical understanding of how humans synthesize physical stimuli into words and other abstract concepts.

Jakob von Uexküll (1864–1944) studied the sign processes in animals. He used the German word Umwelt, 'environment', to describe the individual's subjective world, and he invented the concept of the functional circle (Funktionskreis) as a general model of sign processes. In his Theory of Meaning (Bedeutungslehre, 1940), he described the semiotic approach to biology, thus establishing the field that now is called biosemiotics.

Valentin Voloshinov (1895–1936) was a Soviet-Russian linguist whose work has been influential in the field of literary theory and the Marxist theory of ideology. Written in the late 1920s in the USSR, Voloshinov's Marxism and the Philosophy of Language (Russian: Marksizm i Filosofiya Yazyka) developed a counter-Saussurean linguistics, which situated language use in social process rather than in an entirely decontextualized Saussurean langue.[citation needed]

Louis Hjelmslev (1899–1965) developed a formalist approach to Saussure's structuralist theories. His best known work is Prolegomena to a Theory of Language, which was expanded in Résumé of the Theory of Language, a formal development of glossematics, his scientific calculus of language.[citation needed]

Charles W. Morris (1901–1979): Unlike his mentor George Herbert Mead, Morris was a behaviorist and sympathetic to the Vienna Circle positivism of his colleague Rudolf Carnap.
Morris was accused by John Dewey of misreading Peirce.[76] In his 1938 Foundations of the Theory of Signs, he defined semiotics as grouped into three branches: syntactics, the relations among signs in formal structures; semantics, the relations between signs and the things to which they refer; and pragmatics, the relations between signs and their users.

Thure von Uexküll (1908–2004), the "father" of modern psychosomatic medicine, developed a diagnostic method based on semiotic and biosemiotic analyses.

Roland Barthes (1915–1980) was a French literary theorist and semiotician. He often would critique pieces of cultural material to expose how bourgeois society used them to impose its values upon others. For instance, the portrayal of wine drinking in French society as a robust and healthy habit would be a bourgeois ideal perception contradicted by certain realities (i.e., that wine can be unhealthy and inebriating). He found semiotics useful in conducting these critiques. Barthes explained that these bourgeois cultural myths were second-order signs, or connotations. A picture of a full, dark bottle is a sign, a signifier relating to a signified: a fermented alcoholic beverage, wine. However, the bourgeois take this signified and apply their own emphasis to it, making "wine" a new signifier, this time relating to a new signified: the idea of healthy, robust, relaxing wine. Motivations for such manipulations vary, from a desire to sell products to a simple desire to maintain the status quo. These insights brought Barthes very much in line with similar Marxist theory.

Algirdas Julien Greimas (1917–1992) developed a structural version of semiotics named "generative semiotics", trying to shift the focus of the discipline from signs to systems of signification. His theories develop the ideas of Saussure, Hjelmslev, Claude Lévi-Strauss, and Maurice Merleau-Ponty.

Thomas A. Sebeok (1920–2001), a student of Charles W. Morris, was a prolific and wide-ranging American semiotician. Although he insisted that animals are not capable of language, he expanded the purview of semiotics to include non-human signaling and communication systems, thus raising some of the issues addressed by philosophy of mind and coining the term zoosemiotics. Sebeok insisted that all communication was made possible by the relationship between an organism and the environment in which it lives. He also posed the equation between semiosis (the activity of interpreting signs) and life, a view that the Copenhagen–Tartu biosemiotic school has further developed.

Juri Lotman (1922–1993) was the founding member of the Tartu (or Tartu–Moscow) Semiotic School. He developed a semiotic approach to the study of culture (semiotics of culture) and established a communication model for the study of text semiotics. He also introduced the concept of the semiosphere. Among his Moscow colleagues were Vladimir Toporov, Vyacheslav Ivanov and Boris Uspensky.

Christian Metz (1931–1993) pioneered the application of Saussurean semiotics to film theory, applying syntagmatic analysis to scenes of films and grounding film semiotics in greater context.

Eliseo Verón (1935–2014) developed his "Social Discourse Theory", inspired by the Peircean conception of "semiosis".

Groupe μ (founded 1967) developed a structural version of rhetorics and of visual semiotics.

Umberto Eco (1932–2016) was an Italian novelist, semiotician and academic. He made a wider audience aware of semiotics through various publications, most notably A Theory of Semiotics and his novel The Name of the Rose, which includes (second to its plot) applied semiotic operations. His most important contributions to the field bear on interpretation, encyclopedia, and model reader.
He also criticized in several works (A Theory of Semiotics, La struttura assente, Le signe, La production de signes) the "iconism" or "iconic signs" (taken from Peirce's most famous triadic relation, based on indexes, icons, and symbols), to which he proposed four modes of sign production: recognition, ostension, replica, and invention.

Julia Kristeva (born 1941), a student of Lucien Goldmann and Roland Barthes, is a Bulgarian-French semiotician, literary critic, psychoanalyst, feminist, and novelist. She uses psychoanalytical concepts together with semiotics, distinguishing two components in signification, the symbolic and the semiotic. Kristeva also studies the representation of women and women's bodies in popular culture, such as horror films, and has had a remarkable influence on feminism and feminist literary studies.

Michael Silverstein (1945–2020) was a theoretician of semiotics and linguistic anthropology. Over the course of his career he created an original synthesis of research on the semiotics of communication, the sociology of interaction, Russian formalist literary theory, linguistic pragmatics, sociolinguistics, early anthropological linguistics and structuralist grammatical theory, together with his own theoretical contributions, yielding a comprehensive account of the semiotics of human communication and its relation to culture. His main influences were Charles Sanders Peirce, Ferdinand de Saussure, and Roman Jakobson.

Some applications of semiotics include:[citation needed]

In some countries, the role of semiotics is limited to literary criticism and an appreciation of audio and visual media. This narrow focus may inhibit a more general study of the social and political forces shaping how different media are used and their dynamic status within modern culture. Issues of technological determinism in the choice of media and the design of communication strategies assume new importance in this age of mass media.[citation needed]

A world organization of semioticians, the International Association for Semiotic Studies, with its journal Semiotica, was established in 1969. Larger research centers with teaching programs include the semiotics departments at the University of Tartu, the University of Limoges, Aarhus University, and Bologna University.[citation needed]

Publication of research appears both in dedicated journals such as Sign Systems Studies, established by Juri Lotman and published by Tartu University Press; Semiotica, founded by Thomas A. Sebeok and published by Mouton de Gruyter; Zeitschrift für Semiotik; European Journal of Semiotics; Versus (founded and directed by Umberto Eco); and The American Journal of Semiotics; and as articles accepted in periodicals of other disciplines, especially journals oriented toward philosophy and cultural criticism, communication theory, etc.[citation needed]

The major semiotic book series Semiotics, Communication, Cognition, published by De Gruyter Mouton (series editors Paul Cobley and Kalevi Kull), replaces the former "Approaches to Semiotics" (series editor Thomas A. Sebeok, 127 volumes) and "Approaches to Applied Semiotics" (7 volumes). Since 1980 the Semiotic Society of America has produced an annual conference series: Semiotics: The Proceedings of the Semiotic Society of America.[citation needed]
https://en.wikipedia.org/wiki/Semiotics
Syntax is a peer-reviewed academic journal in the field of syntax of natural languages, established in 1998 and published by Wiley-Blackwell. The founding editors were Suzanne Flynn (MIT) and Samuel D. Epstein (University of Michigan). Syntax was rated A in both the Australian Research Council's ERA journal list for 2010 and the European Science Foundation's linguistics journal list.[1]

On 9 March 2024, editors Klaus Abels and Suzanne Flynn announced their resignation in protest of Wiley-Blackwell's publication practices. In an open letter to the linguistics community, they asserted that Wiley-Blackwell had failed to provide the quality of copyediting and publication that would be necessary to justify the unpaid work provided by authors, peer reviewers, and editors. The former editors announced their intent to start a new diamond open access journal to replace the original,[2] titled Syntactic Theory and Research.[3]
https://en.wikipedia.org/wiki/Syntax_(journal)
An academic journal (or scholarly journal or scientific journal) is a periodical publication in which scholarship relating to a particular academic discipline is published. Academic journals serve as permanent and transparent forums for the dissemination, scrutiny, and discussion of research. Unlike professional magazines or trade magazines, the articles are mostly written by researchers rather than staff writers employed by the journal. They nearly universally require peer review for research articles, or other scrutiny from contemporaries competent and established in their respective fields.[1][2] Academic journals trace their origins back to the 17th century. As of 2012, it is estimated that over 28,100 active academic journals are in publication, with scopes ranging from the general sciences, as seen in journals like Science and Nature, to highly specialized fields.[3][4] These journals publish a variety of articles including original research, review articles, and perspectives. Content usually takes the form of articles presenting original research, review articles, or book reviews. The purpose of an academic journal, according to Henry Oldenburg (the first editor of Philosophical Transactions of the Royal Society), is to give researchers a venue to "impart their knowledge to one another, and contribute what they can to the Grand design of improving natural knowledge, and perfecting all Philosophical Arts, and Sciences."[5]

The term academic journal applies to scholarly publications in all fields; this includes journals that cover formal sciences, natural sciences, social sciences, and humanities, which differ somewhat from each other in form and function. Although academic journals are superficially similar to professional magazines (or trade journals), they are quite different. Articles in academic journals are written by active researchers such as students, scientists, and professors, and their intended audience is others in the field, meaning their content is highly technical.[6] Academic articles also deal with research and are peer reviewed. Trade journals, meanwhile, are aimed at people working in the relevant fields, focusing on how they can do their jobs better.[7]

The first academic journal was the Journal des sçavans (January 1665), followed soon after by the Philosophical Transactions of the Royal Society (March 1665) and the Mémoires de l'Académie des Sciences (1666). The first fully peer-reviewed journal was Medical Essays and Observations (1733).[8]

In the 17th century, scientists wrote letters to each other and included scientific ideas with them. Then, in the mid-17th century, scientists began to hold meetings to share their scientific ideas. Eventually, these meetings led to the founding of organizations such as the Royal Society (1660) and the French Academy of Sciences (1666).[9]

The idea of a published journal with the purpose of "[letting] people know what is happening in the Republic of Letters" was first conceived by François Eudes de Mézeray in 1663. A publication titled Journal littéraire général was supposed to be published to fulfill that goal, but never was. The humanist scholar Denis de Sallo (under the pseudonym "Sieur de Hédouville") and the printer Jean Cusson took Mézeray's idea, and obtained a royal privilege from King Louis XIV on 8 August 1664 to establish the Journal des sçavans. The journal's first issue was published on 5 January 1665.
It was aimed at people of letters, and had four main objectives:[10]

Soon after, the Royal Society established the Philosophical Transactions of the Royal Society in March 1665, and the Académie des Sciences established the Mémoires de l'Académie des Sciences in 1666, which focused on scientific communications.[11] By the end of the 18th century, nearly 500 such periodicals had been published,[12] the vast majority coming from Germany (304 periodicals), France (53), and England (34). Several of those publications, in particular the German journals, tended to be short-lived (under five years). A. J. Meadows has estimated that the proliferation of journals reached 10,000 journals in 1950, and 71,000 in 1987. Michael Mabe wrote that the estimates will vary depending on the definition of what exactly counts as a scholarly publication, but that the growth rate has been "remarkably consistent over time", with an average rate of 3.46% per year from 1800 to 2003.[13]

In 1733, Medical Essays and Observations was established by the Medical Society of Edinburgh as the first fully peer-reviewed journal.[8] Peer review was introduced as an attempt to increase the quality and pertinence of submissions.[14] Other important events in the history of academic journals include the establishment of Nature (1869) and Science (1880), the establishment of Postmodern Culture in 1990 as the first online-only journal, the foundation of arXiv in 1991 for the dissemination of preprints to be discussed prior to publication in a journal, and the establishment of PLOS One in 2006 as the first megajournal.[8]

Peer review in its modern, widespread form did not begin until the 1970s, and was seen as a way of enabling researchers who were not as well known to have their papers published in more prestigious journals. Though it was originally done by mailing copies of papers to reviewers, it is now done online.[15]

There are two kinds of article or paper submissions in academia: solicited, where an individual has been invited to submit work either through direct contact or through a general submissions call, and unsolicited, where an individual submits a work for potential publication without directly being asked to do so.[16] Upon receipt of a submitted article, editors at the journal determine whether to reject the submission outright or begin the process of peer review. In the latter case, the submission becomes subject to review by outside scholars of the editor's choosing, who typically remain anonymous. The number of these peer reviewers (or "referees") varies according to each journal's editorial practice; typically, no fewer than two, though sometimes three or more, experts in the subject matter of the article produce reports upon the content, style, and other factors, which inform the editors' publication decisions. Though these reports are generally confidential, some journals and publishers also practice public peer review. The editors either choose to reject the article, ask for a revision and resubmission, or accept the article for publication. Even accepted articles are often subjected to further (sometimes considerable) editing by journal editorial staff before they appear in print. The peer review can take from several weeks to several months.[17]

Articles have several sections, often including the following:[18]

Articles can also be categorized by their purpose. The exact terminology and definitions vary by field and specific journal, but often include the following.

Review articles, also called "reviews of progress", are checks on the research published in journals.
Some journals are devoted entirely to review articles, some contain a few in each issue, and others do not publish review articles. Such reviews often cover the research from the preceding year, some for longer or shorter terms; some are devoted to specific topics, some to general surveys. Some reviews are enumerative, listing all significant articles in a given subject; others are selective, including only what the authors think worthwhile. Yet others are evaluative, judging the state of progress in the subject field. Some journals are published in series, each covering a complete subject field for a year, or covering specific fields through several years. Unlike original research articles, review articles tend to be solicited or "peer-invited" submissions, often planned years in advance, which may themselves go through a peer-review process once received.[24][25] They are typically relied upon by students beginning a study in a given field, or for the current awareness of those already in the field.[24]

Reviews of scholarly books are checks upon the research books published by scholars; unlike articles, book reviews tend to be solicited. Journals typically have a separate book review editor determining which new books to review and by whom. If an outside scholar accepts the book review editor's request for a book review, he or she generally receives a free copy of the book from the journal in exchange for a timely review. Publishers send books to book review editors in the hope that their books will be reviewed. The length and depth of research book reviews varies much from journal to journal, as does the extent of textbook and trade book reviewing.[26]

An academic journal's prestige is established over time, and can reflect many factors, some but not all of which are expressible quantitatively. In many fields, a formal or informal hierarchy of scientific journals exists; the most prestigious journal in a field tends to be the most selective in terms of the articles it will select for publication, and usually will also have the highest impact factor. In some countries, journal rankings can be utilized for funding decisions[27] and even the evaluation of individual researchers, although they are poorly suited for that purpose.[28]

In each academic discipline, some journals receive a high number of submissions and opt to restrict how many they publish, keeping the acceptance rate low.[29] Size or prestige are not a guarantee of reliability.[30]

In the natural sciences and in the social sciences, the impact factor is an established proxy, measuring the number of later articles citing articles already published in the journal. There are other quantitative measures of prestige, such as the overall number of citations, how quickly articles are cited, and the average "half-life" of articles. Clarivate Analytics' Journal Citation Reports, which among other features computes an impact factor for academic journals, draws data for computation from the Science Citation Index Expanded (for natural science journals) and from the Social Sciences Citation Index (for social science journals).[29] Several other metrics are also used, including the SCImago Journal Rank, CiteScore, Eigenfactor, and Altmetrics.

In the Anglo-American humanities, there is no tradition (as there is in the sciences) of giving impact factors that could be used in establishing a journal's prestige.
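For reference, the two-year impact factor mentioned above is conventionally computed, for a given journal and year Y, as

    IF(Y) = (citations received in year Y by items the journal published in years Y-1 and Y-2)
            / (number of citable items the journal published in years Y-1 and Y-2)

So, for example, a journal whose 2021–2022 output of 100 citable items attracted 250 citations during 2023 would have a 2023 impact factor of 2.5 (the numbers are invented for illustration).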
Recent moves have been made by the European Science Foundation (ESF) to change the situation, resulting in the publication of preliminary lists for the ranking of academic journals in the humanities.[29] These rankings have been severely criticized, notably by British history and sociology of science journals, which published a common editorial entitled "Journals under Threat".[31] Though it did not prevent ESF and some national organizations from proposing journal rankings, it largely prevented their use as evaluation tools.[32]

In some disciplines, such as knowledge management/intellectual capital, the lack of a well-established journal ranking system is perceived by academics as "a major obstacle on the way to tenure, promotion and achievement recognition".[33] Conversely, a significant number of scientists and organizations consider the pursuit of impact factor calculations as inimical to the goals of science, and have signed the San Francisco Declaration on Research Assessment to limit its use.[34]

Three categories of techniques have developed to assess journal quality and create journal rankings:[35]

Many academic journals are subsidized by universities or professional organizations, and do not exist to make a profit. They often accept advertising, as well as page and image charges from authors, to pay for production costs. On the other hand, some journals are produced by commercial publishers who do make a profit by charging subscriptions to individuals and libraries. They may also sell all of their journals in discipline-specific collections or a variety of other packages.[37] Many scientists and librarians have long protested these costs, especially as they see these payments going to large for-profit publishing houses.[38] To allow their researchers online access to journals, many universities purchase site licenses, permitting access from anywhere in the university, and, with appropriate authorization, by university-affiliated users at home or elsewhere. These may be much more expensive than the cost of a print subscription. Despite the transition to electronic publishing, the costs of site licenses continue to rise relative to universities' budgets. This is known as the serials crisis.[39]

Journal editors tend to have other professional responsibilities, most often as teaching professors. In the case of the largest journals, there are paid staff assisting in the editing. The production of the journals is almost always done by publisher-paid staff. Humanities and social science academic journals are usually subsidized by universities or professional organizations.[40]

Traditional scientific journals require a paid subscription to access published articles.[41] The cost and value proposition of subscription to academic journals is being continuously re-assessed by institutions worldwide. In the context of the big deal cancellations by several library systems in the world,[42] data analysis tools like Unpaywall Journals are used by libraries to estimate the specific cost and value of the various options: libraries can avoid subscriptions for materials already served by instant open access via open archives like PubMed Central.[43]

Concerns about cost and open access have led to the creation of free-access journals such as the Public Library of Science (PLoS) family and partly open or reduced-cost journals such as the Journal of High Energy Physics.
However, professional editors still have to be paid, and PLoS still relies heavily on donations from foundations to cover the majority of its operating costs; smaller journals do not often have access to such resources.[citation needed] Open access journals may charge authors a fee for review or publication, rather than charging readers a fee for access.[44]

For scientific journals, reproducibility and replicability of results are core concepts: other scientists should be able to check and reproduce the results under the same conditions described in the paper (or at least under similar conditions) and obtain similar results from similar measurements of the same subject, or from measurements carried out under changed conditions. While the ability to reproduce the results based only on details included in the article is expected, verification of reproducibility by a third party is not generally required for publication.[45] The reproducibility of results presented in an article is therefore judged implicitly by the quality of the procedures reported and their agreement with the data provided. However, some journals in the field of chemistry, such as Inorganic Syntheses and Organic Syntheses, require independent reproduction of the results presented as part of the review process. The inability of independent researchers to reproduce published results is widespread: 70% of researchers report failure to reproduce another scientist's results, and more than half report failing to reproduce their own experiments.[46] Sources of irreproducibility vary, including publication of falsified or misrepresented data and poor detailing of procedures.[47]

Traditionally, the author of an article was required to transfer the copyright to the journal publisher. Publishers claimed this was necessary in order to protect authors' rights, and to coordinate permissions for reprints or other use. However, many authors, especially those active in the open access movement, found this unsatisfactory,[48] and have used their influence to effect a gradual move towards a license to publish instead. Under such a system, the publisher has permission to edit, print, and distribute the article commercially, but the authors retain the other rights themselves.

Even if they retain the copyright to an article, most journals allow certain rights to their authors. These rights usually include the ability to reuse parts of the paper in the author's future work, and allow the author to distribute a limited number of copies. In the print format, such copies are called reprints; in the electronic format, they are called postprints. Some publishers, for example the American Physical Society, also grant the author the right to post and update the article on the author's or employer's website and on free e-print servers, to grant permission to others to use or reuse figures, and even to reprint the article as long as no fee is charged.[49] The rise of open access journals, in which the author retains the copyright but must pay a publication charge, such as the Public Library of Science family of journals, is another recent response to copyright concerns.[50]

The Internet has revolutionized the production of, and access to, academic journals, with their contents available online via services subscribed to by academic libraries. Individual articles are subject-indexed in databases such as Google Scholar.
Some of the smallest, most specialized journals are prepared in-house, by an academic department, and published only online – this has sometimes been in the blog format, though some, like the open access journal Internet Archaeology, use the medium to embed searchable datasets, 3D models, and interactive mapping.[51]

Currently, there is a movement in higher education encouraging open access, either via self-archiving, whereby the author deposits a paper in a disciplinary or institutional repository where it can be searched for and read, or via publishing it in a free open access journal, which does not charge for subscriptions, being either subsidized or financed by a publication fee. Given the goal of sharing scientific research to speed advances, open access has affected science journals more than humanities journals.[52] Commercial publishers are experimenting with open access models, but are trying to protect their subscription revenues.[53]

The much lower entry cost of online publishing has also raised concerns about an increase in the publication of "junk" journals with lower publishing standards. These journals, often with names chosen to resemble well-established publications, solicit articles via e-mail and then charge the author to publish an article, often with no sign of actual review. Jeffrey Beall, a research librarian at the University of Colorado, compiled a list of what he considered to be "potential, possible, or probable predatory scholarly open-access publishers"; the list numbered over 300 journals as of April 2013, but he estimated that there may be thousands.[54] The OMICS Publishing Group, which publishes a number of the journals on this list, threatened to sue Beall in 2013, and Beall stopped publishing the list in 2017, citing pressure from his university.[55] A US judge fined OMICS $50 million in 2019 stemming from an FTC lawsuit.[56]

Some academic journals use the registered report format, which aims to counteract issues such as data dredging and hypothesizing after the results are known. For example, Nature Human Behaviour has adopted the registered report format, as it "shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them".[57] The European Journal of Personality defines this format: "In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes."[58]

Some journals are born digital, in that they are solely published on the web and in a digital format. Most electronic journals originated as print journals that subsequently evolved to have an electronic version while still maintaining a print component; others eventually became electronic-only.[59]

An e-journal closely resembles a print journal in structure: there is a table of contents which lists the articles, and many electronic journals still use a volume/issue model, although some titles now publish on a continuous basis.[60] Online journal articles are a specialized form of electronic document: they have the purpose of providing material for academic research and study, and they are formatted approximately like journal articles in traditional printed journals.
Often, a journal article will be available for download in two formats: PDF and HTML, although other electronic file types are often supported for supplementary material.[61] Articles are indexed in bibliographic databases as well as by search engines.[62] E-journals allow new types of content to be included in journals, for example video material, or the data sets on which research has been based.

With the growth and development of the Internet, there has been a growth in the number of new digital-only journals. A subset of these journals exist as open access titles, meaning that they are free to access for all and carry Creative Commons licences which permit the reproduction of content in different ways.[63] High-quality open access journals are listed in the Directory of Open Access Journals. Most, however, continue to exist as subscription journals, for which libraries, organisations, and individuals purchase access.

Benefits of electronic publishing include easy availability of supplementary materials (data, graphics, and video), lower cost, and availability to more people, especially scientists in developing countries. Hence, research results from more developed nations are becoming more accessible to scientists from developing countries.[64]
https://en.wikipedia.org/wiki/Academic_journal
In computer science, the syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data.

The syntax of a language defines its surface form.[1] Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout of and connections between symbols (which may be textual or graphical). Documents that are syntactically invalid are said to have a syntax error. When designing the syntax of a language, a designer might start by writing down examples of both legal and illegal strings, before trying to figure out the general rules from these examples.[2]

Syntax therefore refers to the form of the code, and is contrasted with semantics – the meaning. In processing computer languages, semantic processing generally comes after syntactic processing; however, in some cases semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently. In a compiler, the syntactic analysis comprises the frontend, while the semantic analysis comprises the backend (and middle end, if this phase is distinguished).

Computer language syntax is generally distinguished into three levels: words (the lexical level, determining how characters form tokens), phrases (the grammar level, determining how tokens form phrases), and context (determining what names refer to and whether types are valid). Distinguishing in this way yields modularity, allowing each level to be described and processed separately and often independently.

First, a lexer turns the linear sequence of characters into a linear sequence of tokens; this is known as "lexical analysis" or "lexing".[3] Second, the parser turns the linear sequence of tokens into a hierarchical syntax tree; this is known as "parsing" narrowly speaking. This ensures that the sequence of tokens conforms to the formal grammar of the programming language. The parsing stage itself can be divided into two parts: the parse tree, or "concrete syntax tree", which is determined by the grammar but is generally far too detailed for practical use, and the abstract syntax tree (AST), which simplifies this into a usable form. The AST and contextual analysis steps can be considered a form of semantic analysis, as they add meaning and interpretation to the syntax; alternatively, they can be seen as informal, manual implementations of syntactical rules that would be difficult or awkward to describe or implement formally. Third, the contextual analysis resolves names and checks types. This modularity is sometimes possible, but in many real-world languages an earlier step depends on a later step – for example, the lexer hack in C exists because tokenization depends on context. Even in these cases, syntactical analysis is often seen as approximating this ideal model.

The levels generally correspond to levels in the Chomsky hierarchy. Words are in a regular language, specified in the lexical grammar, which is a Type-3 grammar, generally given as regular expressions. Phrases are in a context-free language (CFL), generally a deterministic context-free language (DCFL), specified in a phrase structure grammar, which is a Type-2 grammar, generally given as production rules in Backus–Naur form (BNF).
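To make the first two levels concrete, the following minimal sketch (not from this article) shows a lexer that produces tokens and a recursive-descent parser that builds a tree, for the hypothetical toy grammar expr → NUM | '(' expr '+' expr ')'; Python is used here purely for illustration.

    import re

    # Lexical level: characters -> linear sequence of tokens.
    TOKEN = re.compile(r"\s*(?:(\d+)|(.))")   # a number, or any single character

    def lex(text):
        for number, char in TOKEN.findall(text):
            yield ("NUM", number) if number else ("CHAR", char)

    # Phrase level: tokens -> hierarchical tree for the toy grammar
    #   expr -> NUM | '(' expr '+' expr ')'
    # (asserts stand in for real syntax-error reporting in this sketch)
    def parse(tokens):
        tok = next(tokens)
        if tok[0] == "NUM":
            return int(tok[1])
        assert tok == ("CHAR", "(")
        left = parse(tokens)
        assert next(tokens) == ("CHAR", "+")
        right = parse(tokens)
        assert next(tokens) == ("CHAR", ")")
        return ("+", left, right)

    print(parse(lex("(1 + (2 + 30))")))       # ('+', 1, ('+', 2, 30))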
Phrase grammars are often specified in much more constrained grammars than full context-free grammars, in order to make them easier to parse; while the LR parser can parse any DCFL in linear time, the simple LALR parser and the even simpler LL parser are more efficient, but can only parse grammars whose production rules are constrained. In principle, contextual structure can be described by a context-sensitive grammar, and automatically analyzed by means such as attribute grammars, though in general this step is done manually, via name resolution rules and type checking, and implemented via a symbol table which stores names and types for each scope.

Tools have been written that automatically generate a lexer from a lexical specification written in regular expressions and a parser from the phrase grammar written in BNF: this allows one to use declarative programming, rather than procedural or functional programming. A notable example is the lex-yacc pair. These automatically produce a concrete syntax tree; the parser writer must then manually write code describing how this is converted to an abstract syntax tree. Contextual analysis is also generally implemented manually. Despite the existence of these automatic tools, parsing is often implemented manually, for various reasons – perhaps the phrase structure is not context-free, or an alternative implementation improves performance or error reporting, or allows the grammar to be changed more easily. Parsers are often written in functional languages, such as Haskell, or in scripting languages, such as Python or Perl, or in C or C++.

As an example, (add 1 1) is a syntactically valid Lisp program (assuming the 'add' function exists, else name resolution fails), adding 1 and 1. However, the following are invalid: '(_ 1 1)' and '(add 1 1'. The lexer is unable to identify the first error – all it knows is that, after producing the token LEFT_PAREN, '(', the remainder of the program is invalid, since no word rule begins with '_'. The second error is detected at the parsing stage: the parser has identified the "list" production rule due to the '(' token (as the only match), and thus can give an error message; in general it may be ambiguous.

Type errors and undeclared-variable errors are sometimes considered to be syntax errors when they are detected at compile time (which is usually the case when compiling strongly typed languages), though it is common to classify these kinds of error as semantic errors instead.[4][5][6]

As an example, the Python code below contains a type error because it adds a string literal to an integer literal. Type errors of this kind can be detected at compile time: they can be detected during parsing (phrase analysis) if the compiler uses separate rules that allow "integerLiteral + integerLiteral" but not "stringLiteral + integerLiteral", though it is more likely that the compiler will use a parsing rule that allows all expressions of the form "LiteralOrIdentifier + LiteralOrIdentifier" and then the error will be detected during contextual analysis (when type checking occurs). In some cases this validation is not done by the compiler, and these errors are only detected at runtime.

In a dynamically typed language, where types can only be determined at runtime, many type errors can only be detected at runtime. For example, the Python code a + b (see below) is syntactically valid at the phrase level, but the correctness of the types of a and b can only be determined at runtime, as variables do not have types in Python, only values do.
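The two Python snippets referred to above appear to have been lost in extraction; the following are hedged reconstructions matching the surrounding description (a string literal added to an integer literal, and an addition whose validity depends on run-time values):

    # A type error: a string literal added to an integer literal. A static
    # checker can reject this before execution; CPython raises TypeError at
    # run time.
    "hello" + 3

    # Syntactically valid at the phrase level, but whether it is meaningful
    # depends on the run-time types of the values bound to a and b.
    a + b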
Whereas there is disagreement about whether a type error detected by the compiler should be called a syntax error (rather than a static semantic error), type errors which can only be detected at program execution time are always regarded as semantic rather than syntax errors.

The syntax of textual programming languages is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (a metalanguage for grammatical structure) to inductively specify syntactic categories (nonterminals) and terminal symbols.[7] Syntactic categories are defined by rules called productions, which specify the values that belong to a particular syntactic category.[1] Terminal symbols are the concrete characters or strings of characters (for example keywords such as define, if, let, or void) from which syntactically valid programs are constructed.

Syntax can be divided into context-free syntax and context-sensitive syntax.[7] Context-free syntax consists of rules given by the metalanguage of the programming language; these are not constrained by the context surrounding or referring to that part of the syntax, whereas context-sensitive syntax is. A language can have different equivalent grammars, such as equivalent regular expressions (at the lexical level), or different phrase rules which generate the same language. Using a broader category of grammars, such as LR grammars, can allow shorter or simpler grammars compared with more restricted categories, such as LL grammars, which may require longer grammars with more rules. Different but equivalent phrase grammars yield different parse trees, though the underlying language (set of valid documents) is the same.

Below is a simple grammar, defined using the notation of regular expressions and Extended Backus–Naur form. It describes the syntax of S-expressions, a data syntax of the programming language Lisp, which defines productions for the syntactic categories expression, atom, number, symbol, and list:

    expression = atom | list
    atom       = number | symbol
    number     = [+-]?['0'-'9']+
    symbol     = ['A'-'Z''a'-'z'].*
    list       = '(', expression*, ')'

This grammar specifies the following:

- an expression is either an atom or a list;
- an atom is either a number or a symbol;
- a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
- a symbol is a letter followed by zero or more characters; and
- a list is a matched pair of parentheses, with zero or more expressions inside it.

Here the decimal digits, upper- and lower-case characters, and parentheses are terminal symbols. The following are examples of well-formed token sequences in this grammar: '12345', '()', '(A B C232 (1))'.

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The phrase grammar of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars,[8] though the overall syntax is context-sensitive (due to variable declarations and nested scopes), hence Type-1. However, there are exceptions, and for some languages the phrase grammar is Type-0 (Turing-complete).

In some languages, like Perl and Lisp, the specification (or implementation) of the language allows constructs that execute during the parsing phase. Furthermore, these languages have constructs that allow the programmer to alter the behavior of the parser. This combination effectively blurs the distinction between parsing and execution, and makes syntax analysis an undecidable problem in these languages, meaning that the parsing phase may not finish.
For example, in Perl it is possible to execute code during parsing using a BEGIN statement, and Perl function prototypes may alter the syntactic interpretation, and possibly even the syntactic validity, of the remaining code.[9][10] Colloquially this is referred to as "only Perl can parse Perl" (because code must be executed during parsing, and can modify the grammar), or more strongly "even Perl cannot parse Perl" (because it is undecidable). Similarly, Lisp macros introduced by the defmacro syntax also execute during parsing, meaning that a Lisp compiler must have an entire Lisp run-time system present. In contrast, C macros are merely string replacements, and do not require code execution.[11][12]

The syntax of a language describes the form of a valid program, but does not provide any information about the meaning of the program or the results of executing that program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Valid syntax must be established before semantics can make meaning out of it.[7] Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false. In the same way, a C language fragment can be syntactically correct but perform an operation that is not semantically defined (for instance, when p is a null pointer, the operations p->real and p->im have no meaning), and a simpler fragment can be syntactically valid but not semantically defined because it uses an uninitialized variable (see the fragments below). Even though compilers for some programming languages (e.g., Java and C#) would detect uninitialized-variable errors of this kind, they should be regarded as semantic errors rather than syntax errors.[6][13]

To quickly compare the syntax of various programming languages, take a look at the list of "Hello, World!" program examples:
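The two C fragments referenced above appear to have been lost in extraction; the following is a hedged reconstruction consistent with the surrounding description (a complex struct, a null pointer p, and an uninitialized variable):

    #include <stdio.h>
    #include <math.h>

    struct complex { double real, im; };

    int main(void) {
        struct complex *p = NULL;
        /* Syntactically correct, but p is a null pointer, so the
           dereferences p->real and p->im have no defined meaning. */
        double abs_p = sqrt(p->real * p->real + p->im * p->im);

        int i;
        /* Syntactically valid, but i is read before being initialized. */
        printf("%f %d\n", abs_p, i);
        return 0;
    }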
https://en.wikipedia.org/wiki/Syntax_(programming_languages)
In linguistics, the syntax–semantics interface is the interaction between syntax and semantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning.[1] Specific topics include scope,[2][3] binding,[2] and lexical semantic properties such as verbal aspect and nominal individuation,[4][5][6][7][8] semantic macroroles,[8] and unaccusativity.[4]

The interface is conceived of very differently in formalist and functionalist approaches. While functionalists tend to look to semantics and pragmatics for explanations of syntactic phenomena, formalists try to limit such explanations to syntax itself.[9] Aside from syntax, other aspects of grammar have been studied in terms of how they interact with semantics, as witnessed by the existence of terms such as morphosyntax–semantics interface.[3]

Within functionalist approaches, research on the syntax–semantics interface has been aimed at disproving the formalist argument of the autonomy of syntax, by finding instances of semantically determined syntactic structures.[4][10]

Levin and Rappaport Hovav, in their 1995 monograph, reiterated that there are some aspects of verb meaning that are relevant to syntax, and others that are not, as previously noted by Steven Pinker.[11][12] Levin and Rappaport Hovav isolated such aspects by focusing on the phenomenon of unaccusativity, which is "semantically determined and syntactically encoded".[13]

Van Valin and LaPolla, in their 1997 monographic study, found that the more semantically motivated or driven a syntactic phenomenon is, the more it tends to be typologically universal, that is, to show less cross-linguistic variation.[14]

In formal semantics, semantic interpretation is viewed as a mapping from syntactic structures to denotations. There are several formal views of the syntax–semantics interface, which differ in what they take to be the inputs and outputs of this mapping. In the Heim and Kratzer model commonly adopted within generative linguistics, the input is taken to be a special level of syntactic representation called logical form. At logical form, semantic relationships such as scope and binding are represented unambiguously, having been determined by syntactic operations such as quantifier raising. Other formal frameworks take the opposite approach, assuming that such relationships are established by the rules of semantic interpretation themselves. In such systems, the rules include mechanisms such as type shifting and dynamic binding.[1][15][16][2]

Before the 1950s, there was no discussion of a syntax–semantics interface in American linguistics, since neither syntax nor semantics was an active area of research.[17] This neglect was due in part to the influence of logical positivism and of behaviorism in psychology, which viewed hypotheses about linguistic meaning as untestable.[17][18]

By the 1960s, syntax had become a major area of study, and some researchers began examining semantics as well. In this period, the most prominent view of the interface was the Katz–Postal hypothesis, according to which deep structure was the level of syntactic representation which underwent semantic interpretation. This assumption was upended by data involving quantifiers, which showed that syntactic transformations can affect meaning. During the linguistics wars, a variety of competing notions of the interface were developed, many of which live on in present-day work.[17][2]
https://en.wikipedia.org/wiki/Syntax%E2%80%93Semantics_Interface
The usage of a language is the ways in which its written and spoken variations are routinely employed by its speakers; that is, it refers to "the collective habits of a language's native speakers",[1] as opposed to idealized models of how a language works (or should work) in the abstract. For instance, Fowler characterized usage as "the way in which a word or phrase is normally and correctly used" and as the "points of grammar, syntax, style, and the choice of words".[2] In everyday usage, language is used differently, depending on the situation and the individual.[3] Individual language users can shape language structures and language usage based on their community.[4]

In the descriptive tradition of language analysis, by way of contrast, "correct" tends to mean functionally adequate for the purposes of the speaker or writer using it, and adequately idiomatic to be accepted by the listener or reader; usage is also, however, a concern for the prescriptive tradition, for which "correctness" is a matter of arbitrating style.[5][6]

Common usage may be used as one of the criteria for laying out prescriptive norms for codified standard language usage.[7]

Everyday language users, including editors and writers, look to dictionaries, style guides, usage guides, and other published authoritative works to help inform their language decisions. This takes place because of the perception that Standard English is determined by language authorities.[8] For many language users, the dictionary is the source of correct language use, as far as accurate vocabulary and spelling go.[9] Modern dictionaries are not generally prescriptive, but they often include "usage notes" which may describe words as "formal", "informal", "slang", and so on.[10] "Despite occasional usage notes, lexicographers generally disclaim any intent to guide writers and editors on the thorny points of English usage."[1]

According to Jeremy Butterfield, "The first person we know of who made usage refer to language was Daniel Defoe, at the end of the seventeenth century". Defoe proposed the creation of a language society of 36 individuals who would set prescriptive language rules for the approximately six million English speakers.[5]

The Latin equivalent usus was a crucial term in the research of the Danish linguists Otto Jespersen and Louis Hjelmslev.[11] They used the term to designate usage that has widespread or significant acceptance among speakers of a language, regardless of its conformity to the sanctioned standard language norms.[12]
https://en.wikipedia.org/wiki/Usage
A contronym or contranym is a word with two opposite meanings. For example, the word original can mean "authentic, traditional" or "novel, never done before". This feature is also called enantiosemy,[1][2] enantionymy (enantio- means "opposite"), antilogy, or autoantonymy. An enantiosemic term is by definition polysemic.

A contronym is alternatively called an autantonym, auto-antonym, antagonym,[3][4] enantiodrome, enantionym, Janus word (after the Roman god Janus, who is usually depicted with two faces),[4] self-antonym, antilogy, or addad (Arabic, singular didd).[5][6]

Some pairs of contronyms are true homographs, i.e., distinct words with different etymologies which happen to have the same form.[7] For instance cleave "separate" is from Old English clēofan, while cleave "adhere" is from Old English clifian, which was pronounced differently. Other contronyms are a form of polysemy, in which a single word acquires different and ultimately opposite definitions. For example, sanction – "permit" or "penalize"; bolt (originally from crossbows) – "leave quickly" or "fix/immobilize"; fast – "moving rapidly" or "fixed in place". Some English examples result from nouns being verbed in the patterns of "add <noun> to" and "remove <noun> from", e.g. dust, seed, stone. Denotations and connotations can drift or branch over centuries. An apocryphal story relates how Charles II (or sometimes Queen Anne) described St Paul's Cathedral (using contemporaneous English) as "awful, pompous, and artificial", with the meaning (rendered in modern English) of "awe-inspiring, majestic, and ingeniously designed".[8] Negative words such as bad[9] and sick sometimes acquire ironic senses by antiphrasis,[10] referring to traits that are impressive and admired, if not necessarily positive (that outfit is bad as hell; lyrics full of sick burns).

Some contronyms result from differences in varieties of English. For example, to table a bill means "to put it up for debate" in British English, while it means "to remove it from debate" in American English (where British English would have "shelve", which in this sense has an identical meaning in American English). To barrack, in Australian English, is to loudly demonstrate support, while in British English it is to express disapproval and contempt.

In Latin, sacer has the double meaning "sacred, holy" and "accursed, infamous". Greek δημιουργός gave Latin its demiurgus, from which English got its demiurge, which can refer either to God as the creator or to the devil, depending on philosophical context.

In some languages, a word stem associated with a single event may treat the action of that event as unitary, so in translation it may appear contronymic. For example, Latin hospes can be translated as both "guest" and "host". In some varieties of English, borrow may mean both "borrow" and "lend".

Seeming contronyms can arise from translation. In Hawaiian, for example, aloha is translated both as "hello" and as "goodbye", but the essential meaning of the word is "love", whether used as a greeting or a farewell. Similarly, 안녕 (annyeong) in Korean can mean both "hello" and "goodbye", but the central meaning is "peace". The Italian greeting ciao is translated as "hello" or "goodbye" depending on the context; the original meaning was "at your service" (literally "(I'm your) slave").[34]
https://en.wikipedia.org/wiki/Contronym
The ultimate goal of semantic technology is to help machines understand data. To enable the encoding of semantics with the data, well-known technologies are RDF (Resource Description Framework)[1] and OWL (Web Ontology Language).[2] These technologies formally represent the meaning involved in information. For example, an ontology can describe concepts, relationships between things, and categories of things. Such embedded semantics offer significant advantages, such as reasoning over data and dealing with heterogeneous data sources.

In software, semantic technology encodes meanings separately from data and content files, and separately from application code. This enables machines as well as people to understand, share, and reason with them at execution time. With semantic technologies, adding, changing, and implementing new relationships, or interconnecting programs in a different way, can be just as simple as changing the external model that these programs share.

With traditional information technology, on the other hand, meanings and relationships must be predefined and "hard-wired" into data formats and the application program code at design time. This means that when something changes – previously unexchanged information needs to be exchanged, or two programs need to interoperate in a new way – humans must get involved. Off-line, the parties must define and communicate the knowledge needed to make the change, then recode the data structures and program logic to accommodate it, and then apply these changes to the database and the application. Then, and only then, can they implement the changes.

Semantic technologies are "meaning-centered". They involve but are not limited to the following areas of application: Given a question, semantic technologies can directly search topics, concepts, and associations that span a vast number of sources. Semantic technologies provide an abstraction layer above existing IT technologies that enables bridging and interconnection of data, content, and processes. From the portal perspective, semantic technologies can be thought of as a new level of depth that provides far more intelligent, capable, relevant, and responsive interaction than with information technologies alone. Semantic technologies often leverage natural language processing and machine learning in order to extract topics, concepts, and associations between concepts in text.
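As a small illustration of encoding relationships in data rather than in program logic, here is a sketch using the third-party Python library rdflib; the vocabulary under http://example.org/ is a hypothetical example:

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")     # hypothetical vocabulary
    g = Graph()
    g.add((EX.Alice, EX.knows, EX.Bob))       # triples: subject, predicate, object
    g.add((EX.Bob, EX.knows, EX.Carol))

    # The "knows" relationship lives in the shared data model, not in the
    # application code, so it can be queried without a hard-wired schema.
    for s, _, o in g.triples((None, EX.knows, None)):
        print(s, "knows", o)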
https://en.wikipedia.org/wiki/Semantic_technology
Semantic change (also semantic shift, semantic progression, semantic development, or semantic drift) is a form of language change regarding the evolution of word usage – usually to the point that the modern meaning is radically different from the original usage. In diachronic (or historical) linguistics, semantic change is a change in one of the meanings of a word. Every word has a variety of senses and connotations, which can be added, removed, or altered over time, often to the extent that cognates across space and time have very different meanings. The study of semantic change can be seen as part of etymology, onomasiology, semasiology, and semantics.

A number of classification schemes have been suggested for semantic change. Recent overviews have been presented by Blank[3] and Blank & Koch (1999). Semantic change has attracted academic discussion since ancient times, although the first major works emerged in the 19th century with Reisig (1839), Paul (1880), and Darmesteter (1887).[4] Studies beyond the analysis of single words began with the word-field analyses of Trier (1931), who claimed that every semantic change of a word would also affect all other words in a lexical field.[5] His approach was later refined by Coseriu (1964). Fritz (1974) introduced generative semantics. More recent works, including pragmatic and cognitive theories, are those in Warren (1992), Dirk Geeraerts,[6] Traugott (1990), and Blank (1997).

A chronological list of typologies is presented below. Today, the most frequently used typologies are those by Bloomfield (1933) and Blank (1999).

Reisig's ideas for a classification were published posthumously. He resorts to classical rhetorics and distinguishes between: The last two are defined as change between whole and part, which would today be rendered as synecdoche. This classification does not neatly distinguish between processes and forces/causes of semantic change.

The most widely accepted scheme in the English-speaking academic world[according to whom?] is from Bloomfield (1933): Ullmann distinguishes between the nature and the consequences of semantic change: However, the categorization of Blank (1999) has gained increasing acceptance:[8] Blank considered it problematic to include amelioration and pejoration of meaning (as in Ullmann) as well as strengthening and weakening of meaning (as in Bloomfield). According to Blank, these are not objectively classifiable phenomena; moreover, Blank has argued that all of the examples listed under these headings can be grouped under other phenomena, rendering the categories redundant.

Blank[9] has tried to create a complete list of motivations for semantic change. They can be summarized as: This list has been revised and slightly enlarged by Grzega (2004):[10]

A specific case of semantic change is reappropriation, a cultural process by which a group reclaims words or artifacts that were previously used in a way disparaging of that group, as for example with the word queer. Other related processes include pejoration and amelioration.[11]

Apart from many individual studies, etymological dictionaries are prominent reference books for finding out about semantic changes. A recent survey lists practical tools and online systems for investigating the semantic change of words over time.[12] WordEvolutionStudy is an academic platform that takes arbitrary words as input to generate summary views of their evolution, based on the Google Books ngram dataset and the Corpus of Historical American English.[13]
https://en.wikipedia.org/wiki/Semantic_change
"Talking past each other" is an English phrase describing the situation where two or more people talk about different subjects, while believing that they are talking about the same thing.[1] David Horton writes that when characters in fiction talk past each other, the effect is to expose "an unbridgeable gulf between their respective perceptions and intentions. The result is an exchange, but never an interchange, of words in fragmented and cramped utterances whose subtext often reveals more than their surface meaning."[2] The phrase is used in widely varying contexts. For example, in 1917,Albert EinsteinandDavid Hilberthad dawn-to-dusk discussions of physics; and they continued their debate in writing, althoughFelix Kleinrecords that they "talked past each other, as happens not infrequently between simultaneously producing mathematicians."[3]
https://en.wikipedia.org/wiki/Talking_past_each_other
In mathematics and computer science, graph edit distance (GED) is a measure of similarity (or dissimilarity) between two graphs. The concept of graph edit distance was first formalized mathematically by Alberto Sanfeliu and King-Sun Fu in 1983.[1] A major application of graph edit distance is in inexact graph matching, such as error-tolerant pattern recognition in machine learning.[2]

The graph edit distance between two graphs is related to the string edit distance between strings. With the interpretation of strings as connected, directed acyclic graphs of maximum degree one, classical definitions of edit distance such as Levenshtein distance,[3][4] Hamming distance,[5] and Jaro–Winkler distance may be interpreted as graph edit distances between suitably constrained graphs. Likewise, graph edit distance is also a generalization of tree edit distance between rooted trees.[6][7][8][9][10]

The mathematical definition of graph edit distance depends on the definitions of the graphs over which it is defined, i.e. whether and how the vertices and edges of the graph are labeled and whether the edges are directed. Generally, given a set of graph edit operations (also known as elementary graph operations), the graph edit distance between two graphs g_1 and g_2, written as GED(g_1, g_2), can be defined as

GED(g_{1},g_{2})=\min _{(e_{1},\dots ,e_{k})\in {\mathcal {P}}(g_{1},g_{2})}\sum _{i=1}^{k}c(e_{i})

where {\mathcal {P}}(g_{1},g_{2}) denotes the set of edit paths transforming g_1 into (a graph isomorphic to) g_2 and c(e) \geq 0 is the cost of each graph edit operation e.

The set of elementary graph edit operators typically includes vertex insertion, vertex deletion, vertex substitution (relabeling), edge insertion, edge deletion, and edge substitution. Additional, but less common, operators include operations such as edge splitting, which introduces a new vertex into an edge (also creating a new edge), and edge contraction, which eliminates vertices of degree two between edges (of the same color). Although such complex edit operators can be defined in terms of more elementary transformations, their use allows finer parameterization of the cost function c when the operator is cheaper than the sum of its constituents. A deep analysis of the elementary graph edit operators is presented in [11][12][13]. Methods have been presented to automatically deduce these elementary graph edit operators,[14][15][16][17][18] and some algorithms learn these costs online.[19]

Graph edit distance finds applications in handwriting recognition,[20] fingerprint recognition,[21] and cheminformatics.[22]

Exact algorithms for computing the graph edit distance between a pair of graphs typically transform the problem into one of finding the minimum-cost edit path between the two graphs. The computation of the optimal edit path is cast as a pathfinding search or shortest path problem, often implemented as an A* search algorithm. In addition to exact algorithms, a number of efficient approximation algorithms are also known; most of them have cubic computational time.[23][24][25][26][27] Moreover, there is an algorithm that computes an approximation of the GED in linear time.[28]

Despite the above algorithms sometimes working well in practice, in general the problem of computing graph edit distance is NP-hard (for a proof that is available online, see Section 2 of Zeng et al.), and it is even hard to approximate (formally, it is APX-hard[29]).
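As an illustration, the Python library NetworkX ships an exact (worst-case exponential-time) implementation that searches over edit paths, with unit costs by default; the two graphs below are arbitrary examples:

    import networkx as nx

    g1 = nx.path_graph(3)    # vertices 0-1-2 in a path
    g2 = nx.cycle_graph(3)   # the same three vertices in a triangle

    # With unit costs, one edge insertion suffices to turn g1 into g2.
    print(nx.graph_edit_distance(g1, g2))   # 1.0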
https://en.wikipedia.org/wiki/Graph_edit_distance
In computer science, the string-to-string correction problem refers to determining the minimum-cost sequence of edit operations necessary to change one string into another (i.e., computing the shortest edit distance). Each type of edit operation has its own cost value.[1] A single edit operation may be changing a single symbol of the string into another (cost WC), deleting a symbol (cost WD), or inserting a new symbol (cost WI).[2]

If all edit operations have the same unit cost (WC = WD = WI = 1), the problem is the same as computing the Levenshtein distance of two strings. Several algorithms exist to provide an efficient way to determine the string distance and the minimum sequence of transformation operations required.[3][4] Such algorithms are particularly useful for delta creation operations, where something is stored as a set of differences relative to a base version. This allows several versions of a single object to be stored much more efficiently than storing them separately. This holds true even for single versions of several objects if they do not differ greatly, or anything in between. Notably, such difference algorithms are used in molecular biology to provide some measure of kinship between different kinds of organisms based on the similarities of their macromolecules (such as proteins or DNA).

The extended variant of the problem includes a new type of edit operation: swapping any two adjacent symbols, with a cost of WS. This version can be solved in polynomial time under certain restrictions on the edit operation costs.[2][5] Robert A. Wagner (1975) showed that the general problem is NP-complete. In particular, he proved that when WI < WC = WD = ∞ and 0 < WS < ∞ (or equivalently, when changing and deletion are not permitted), the problem is NP-complete.[5]
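A minimal sketch in Python of the underlying dynamic program, with per-operation weights named after the WC, WD, and WI costs above (the adjacent-swap operation of the extended variant is not handled):

    def edit_distance(a, b, wc=1, wd=1, wi=1):
        # d[i][j] = cheapest cost of turning a[:i] into b[:j]
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = i * wd                  # delete everything
        for j in range(1, n + 1):
            d[0][j] = j * wi                  # insert everything
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                change = d[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else wc)
                d[i][j] = min(change,
                              d[i - 1][j] + wd,    # delete a[i-1]
                              d[i][j - 1] + wi)    # insert b[j-1]
        return d[m][n]

    # With unit costs this is the Levenshtein distance:
    print(edit_distance("kitten", "sitting"))      # 3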
https://en.wikipedia.org/wiki/String-to-string_correction_problem
In mathematics and computer science, a string metric (also known as a string similarity metric or string distance function) is a metric that measures the distance ("inverse similarity") between two text strings, for approximate string matching or comparison and in fuzzy string searching. A requirement for a string metric (e.g. in contrast to string matching) is fulfillment of the triangle inequality. For example, the strings "Sam" and "Samuel" can be considered to be close.[1] A string metric provides a number indicating an algorithm-specific indication of distance.

The most widely known string metric is a rudimentary one called the Levenshtein distance (also known as edit distance).[2] It operates between two input strings, returning a number equivalent to the number of substitutions, insertions, and deletions needed in order to transform one input string into the other.

Simplistic string metrics such as the Levenshtein distance have been expanded to include phonetic, token, grammatical, and character-based methods of statistical comparison. String metrics are used heavily in information integration and are currently used in areas including fraud detection, fingerprint analysis, plagiarism detection, ontology merging, DNA analysis, RNA analysis, image analysis, evidence-based machine learning, database data deduplication, data mining, incremental search, data integration, malware detection,[3] and semantic knowledge integration.

There also exist functions which measure a dissimilarity between strings but do not necessarily fulfill the triangle inequality, and as such are not metrics in the mathematical sense. An example of such a function is the Jaro–Winkler distance.
https://en.wikipedia.org/wiki/String_metric
In the data analysis of time series, Time Warp Edit Distance (TWED) is a measure of similarity (or dissimilarity) between pairs of discrete time series, controlling the relative distortion of the time units of the two series using the physical notion of elasticity. In comparison to other distance measures (e.g. DTW (dynamic time warping) or LCS (longest common subsequence problem)), TWED is a metric. Its computational time complexity is O(n^2), but can be drastically reduced in some specific situations by using a corridor to reduce the search space. Its memory space complexity can be reduced to O(n). It was first proposed in 2009 by P.-F. Marteau.

The distance is computed by the recursion

\delta_{\lambda,\nu}(A_1^p, B_1^q) = \min {\begin{cases} \delta_{\lambda,\nu}(A_1^{p-1}, B_1^{q}) + \Gamma(a'_p \to \Lambda) & {\rm {delete\ in\ A}} \\ \delta_{\lambda,\nu}(A_1^{p-1}, B_1^{q-1}) + \Gamma(a'_p \to b'_q) & {\rm {match\ or\ substitution}} \\ \delta_{\lambda,\nu}(A_1^{p}, B_1^{q-1}) + \Gamma(\Lambda \to b'_q) & {\rm {delete\ in\ B}} \end{cases}}

where

\Gamma(a'_p \to \Lambda) = d_{LP}(a'_p, a'_{p-1}) + \nu \cdot (t_{a_p} - t_{a_{p-1}}) + \lambda
\Gamma(a'_p \to b'_q) = d_{LP}(a'_p, b'_q) + d_{LP}(a'_{p-1}, b'_{q-1}) + \nu \cdot (|t_{a_p} - t_{b_q}| + |t_{a_{p-1}} - t_{b_{q-1}}|)
\Gamma(\Lambda \to b'_q) = d_{LP}(b'_q, b'_{q-1}) + \nu \cdot (t_{b_q} - t_{b_{q-1}}) + \lambda

and the recursion \delta_{\lambda,\nu} is initialized as

\delta_{\lambda,\nu}(A_1^0, B_1^0) = 0,
\delta_{\lambda,\nu}(A_1^0, B_1^j) = \infty for j \geq 1,
\delta_{\lambda,\nu}(A_1^i, B_1^0) = \infty for i \geq 1,

with a'_0 = b'_0 = 0.

An implementation of the TWED algorithm in C with a Python wrapper is available at.[1] TWED is also implemented in the Time Series Subsequence Search Python package (TSSEARCH for short), available at.[1] An R implementation of TWED has been integrated into TraMineR, an R package for mining, describing, and visualizing sequences of states or events, and more generally discrete sequence data.[2] Additionally, cuTWED is a CUDA-accelerated implementation of TWED which uses an improved algorithm due to G. Wright (2020). This method is linear in memory and massively parallelized. cuTWED is written in CUDA C/C++, comes with Python bindings, and also includes Python bindings for Marteau's reference C implementation.

Backtracking can be used to recover the most cost-efficient edit path.
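A direct transcription of the recursion above into Python, as a minimal sketch: it assumes the L1 distance for d_LP and scalar-valued series, and makes no claim to match the reference implementations listed above.

    import numpy as np

    def twed(a, ta, b, tb, nu=1.0, lam=1.0):
        # Pad with the dummy initial sample a'_0 = b'_0 = 0 at time 0.
        a, ta = [0.0] + list(a), [0.0] + list(ta)
        b, tb = [0.0] + list(b), [0.0] + list(tb)
        n, m = len(a), len(b)
        d = np.full((n, m), np.inf)
        d[0, 0] = 0.0
        for i in range(1, n):
            for j in range(1, m):
                del_a = d[i - 1, j] + abs(a[i] - a[i - 1]) \
                        + nu * (ta[i] - ta[i - 1]) + lam
                match = d[i - 1, j - 1] + abs(a[i] - b[j]) + abs(a[i - 1] - b[j - 1]) \
                        + nu * (abs(ta[i] - tb[j]) + abs(ta[i - 1] - tb[j - 1]))
                del_b = d[i, j - 1] + abs(b[j] - b[j - 1]) \
                        + nu * (tb[j] - tb[j - 1]) + lam
                d[i, j] = min(del_a, match, del_b)
        return d[n - 1, m - 1]

    print(twed([1, 2, 3], [1, 2, 3], [1, 2, 4], [1, 2, 3]))   # 1.0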
https://en.wikipedia.org/wiki/Time_Warp_Edit_Distance
A GLR parser (generalized left-to-right rightmost derivation parser) is an extension of the LR parser algorithm that handles non-deterministic and ambiguous grammars.[1] The theoretical foundation was provided in a 1974 paper[2] by Bernard Lang (along with other general context-free parsers such as GLL). That work describes a systematic way to produce such algorithms, and provides uniform results regarding correctness proofs, complexity with respect to grammar classes, and optimization techniques. The first actual implementation of GLR was described in a 1984 paper by Masaru Tomita; it has also been referred to as a "parallel parser". Tomita presented five stages in his original work,[3] though in practice it is the second stage that is recognized as the GLR parser. Though the algorithm has evolved since its original forms, the principles have remained intact. As shown by an earlier publication,[4] Lang was primarily interested in more easily used and more flexible parsers for extensible programming languages. Tomita's goal was to parse natural language text thoroughly and efficiently. Standard LR parsers cannot accommodate the nondeterministic and ambiguous nature of natural language, and the GLR algorithm can.

Briefly, the GLR algorithm works in a manner similar to the LR parser algorithm, except that, given a particular grammar, a GLR parser will process all possible interpretations of a given input in a breadth-first search. On the front end, a GLR parser generator converts an input grammar into parser tables, in a manner similar to an LR generator. However, where LR parse tables allow for only one state transition (given a state and an input token), GLR parse tables allow for multiple transitions. In effect, GLR allows for shift/reduce and reduce/reduce conflicts. When a conflicting transition is encountered, the parse stack is forked into two or more parallel parse stacks, where the state corresponding to each possible transition is at the top. Then, the next input token is read and used to determine the next transition(s) for each of the "top" states – and further forking can occur. If any given top state and input token do not result in at least one transition, then that "path" through the parse tables is invalid and can be discarded. A crucial optimization known as a graph-structured stack allows sharing of common prefixes and suffixes of these stacks, which constrains the overall search space and memory usage required to parse input text. The complex structures that arise from this improvement make the search graph a directed acyclic graph (with additional restrictions on the "depths" of various nodes), rather than a tree. A simplified sketch of the stack-forking idea is given below.

Recognition using the GLR algorithm has the same worst-case time complexity as the CYK algorithm and Earley algorithm: O(n^3).[citation needed] However, GLR carries two additional advantages: In practice, the grammars of most programming languages are deterministic or "nearly deterministic", meaning that any nondeterminism is usually resolved within a small (though possibly unbounded) number of tokens.[citation needed] Compared to other algorithms capable of handling the full class of context-free grammars (such as the Earley parser or the CYK algorithm), the GLR algorithm gives better performance on these "nearly deterministic" grammars, because only a single stack will be active during the majority of the parsing process. GLR can be combined with the LALR(1) algorithm in a hybrid parser, allowing still higher performance.[5]
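To illustrate the stack-forking idea (though this is not Tomita's algorithm proper: there are no parse tables and no graph-structured stack, and whole stacks are copied rather than shared), here is a naive breadth-first shift-reduce recognizer in Python for the ambiguous grammar E → E '+' E | 'n':

    from collections import deque

    RULES = [("E", ("E", "+", "E")), ("E", ("n",))]   # ambiguous toy grammar

    def recognize(tokens):
        agenda = deque([((), 0)])        # "forked stacks": (stack, input position)
        seen = set()
        while agenda:
            stack, pos = agenda.popleft()
            if (stack, pos) in seen:
                continue
            seen.add((stack, pos))
            if stack == ("E",) and pos == len(tokens):
                return True              # one interpretation consumed all input
            if pos < len(tokens):        # shift the next token
                agenda.append((stack + (tokens[pos],), pos + 1))
            for lhs, rhs in RULES:       # reduce by every rule matching the stack top
                if stack[-len(rhs):] == rhs:
                    agenda.append((stack[:-len(rhs)] + (lhs,), pos))
        return False

    print(recognize(tuple("n+n+n")))     # True (reachable via two different parses)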
https://en.wikipedia.org/wiki/GLR_parser
In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars.[1] The algorithm, named after its inventor Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation[2] in 1968 (and later appeared in abbreviated, more legible form in a journal).[3]

Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time in the general case, O(n^3), where n is the length of the parsed string; in quadratic time, O(n^2), for unambiguous grammars;[4] and in linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.

The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser. In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and a represents a terminal symbol.

Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.

Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of the production currently being matched (X → αβ), with the dot marking the current position in that production, and the origin position i at which the matching of this production began. (Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)

A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.

The state set at input position k is called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: prediction (for every state in S(k) of the form (X → α • Y β, j), add (Y → • γ, k) to S(k) for every production Y → γ in the grammar), scanning (if a is the next symbol in the input stream, for every state in S(k) of the form (X → α • a β, j), add (X → α a • β, j) to S(k+1)), and completion (for every state in S(k) of the form (Y → γ •, j), find all states in S(j) of the form (X → α • Y β, i) and add (X → α Y • β, i) to S(k)). Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is. The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n the input length; otherwise it rejects.

[Pseudocode for the recogniser, adapted from Speech and Language Processing[5] by Daniel Jurafsky and James H. Martin, is not reproduced here.]

Consider the following simple grammar for arithmetic expressions:

    P → S
    S → S + M | M
    M → M * T | T
    T → 1 | 2 | 3 | 4

With the input: 2 + 3 * 4

In the resulting sequence of state sets, the state (P → S •, 0) in S(5) represents a completed parse. This state also appears in S(3) and S(1), which are complete sentences.

Earley's dissertation[6] briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized.
But Tomita noticed[7] that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.

Another method[8] is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j), where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.

SPPF nodes are never labelled with a completed LR(0) item: instead they are labelled with the symbol that is produced, so that all derivations are combined under one node regardless of which alternative production they come from.

Philippe McLean and R. Nigel Horspool, in their paper "A Faster Earley Parser", combine Earley parsing with LR parsing and achieve a speed improvement of an order of magnitude.
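The predict/scan/complete loop described above is short enough to sketch directly. The following is a minimal Python sketch of the recogniser (not the SPPF-building parser), run on the arithmetic grammar from the worked example; the grammar encoding, the state-tuple layout, and the helper names are illustrative choices, not part of the published algorithm.

```python
# Minimal Earley recogniser sketch. A grammar maps each nonterminal to
# a list of right-hand sides; each right-hand side is a tuple of
# symbols. Symbols that are keys of the grammar are nonterminals;
# everything else is treated as a terminal.
GRAMMAR = {
    "P": [("S",)],
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("1",), ("2",), ("3",), ("4",)],
}

def earley_recognise(grammar, start, tokens):
    n = len(tokens)
    # S[k] is the state set at input position k; a state is the tuple
    # (lhs, rhs, dot, origin). Using sets de-duplicates states.
    S = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        S[0].add((start, rhs, 0, 0))        # seed with the top-level rule
    for k in range(n + 1):
        queue = list(S[k])
        while queue:
            lhs, rhs, dot, origin = queue.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:          # PREDICTION
                    for prod in grammar[sym]:
                        state = (sym, prod, 0, k)
                        if state not in S[k]:
                            S[k].add(state)
                            queue.append(state)
                elif k < n and tokens[k] == sym:
                    # SCANNING: terminal matches the next input token
                    S[k + 1].add((lhs, rhs, dot + 1, origin))
            else:                           # COMPLETION: state is finished
                for l2, r2, d2, o2 in list(S[origin]):
                    if d2 < len(r2) and r2[d2] == lhs:
                        state = (l2, r2, d2 + 1, o2)
                        if state not in S[k]:
                            S[k].add(state)
                            queue.append(state)
    # Accept iff a finished top-level rule spanning the whole input is in S(n).
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in S[n])

print(earley_recognise(GRAMMAR, "P", list("2+3*4")))  # True
```

Note that this sketch inherits the classic limitation mentioned in the lead: completions of zero-length (nullable) derivations need extra care in a production implementation.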
https://en.wikipedia.org/wiki/Earley_parser
The Packrat parser is a type of parser that shares similarities with the recursive descent parser in its construction. However, it differs because it takes parsing expression grammars (PEGs) as input rather than LL grammars.[1]

In 1970, Alexander Birman laid the groundwork for packrat parsing by introducing the "TMG recognition scheme" (TS) and "generalized TS" (gTS). TS was based upon Robert M. McClure's TMG compiler-compiler, and gTS was based upon Dewey Val Schorre's META compiler-compiler. Birman's work was later refined by Aho and Ullman, and renamed Top-Down Parsing Language (TDPL) and Generalized TDPL (GTDPL), respectively. These algorithms were the first of their kind to employ deterministic top-down parsing with backtracking.[2][3]

Bryan Ford developed PEGs as an expansion of GTDPL and TS. Unlike CFGs, PEGs are unambiguous and can match well with machine-oriented languages. PEGs, similar to GTDPL and TS, can also express all LL(k) and LR(k) languages. Ford also introduced Packrat as a parser that uses memoization techniques on top of a simple PEG parser. This was done because PEGs have an unlimited lookahead capability, resulting in a parser with exponential-time performance in the worst case.[2][3]

Packrat keeps track of the intermediate results for all mutually recursive parsing functions. Each parsing function is only called once at a specific input position. In some instances of packrat implementation, if there is insufficient memory, certain parsing functions may need to be called multiple times at the same input position, causing the parser to take longer than linear time.[4]

The packrat parser takes as input the same syntax as PEGs: a simple PEG is composed of terminal and nonterminal symbols, possibly interleaved with operators that compose one or several derivation rules:[2]

- Sequence αβ: fails if α or β is not recognized; on success, consumes α followed by β.
- Ordered choice α/β/γ: fails if none of α, β, γ matches; consumes the first alternative that succeeds (if several could succeed, the first one is always taken).
- And-predicate &α: fails if α is not recognized; consumes no input.
- Not-predicate !α: fails if α is recognized; consumes no input.
- One-or-more α+: fails if α is not recognized at least once; consumes the maximum number of repetitions of α.
- Zero-or-more α*: cannot fail; consumes the maximum number of repetitions of α.
- Optional α?: cannot fail; consumes α if it is recognized.
- Character class [a−b]: fails if no terminal inside the range [a−b] can be recognized; consumes the matching character if one is recognized.
- Any character .: fails if no character remains in the input; consumes any single character of the input.

A derivation rule is composed of a nonterminal symbol and an expression, S → α. A special expression α_s is the starting point of the grammar.[2] In case no α_s is specified, the first expression of the first rule is used. An input string is considered accepted by the parser if α_s is recognized.
As a side effect, a string x can be recognized by the parser even if it was not fully consumed.[2] An extreme case of this rule is that the grammar S → x* matches any string. This can be avoided by rewriting the grammar as S → x* !. (the trailing not-predicate on the any-character expression succeeds only at the end of the input).

S → A / B / D
A → 'a' S 'a'
B → 'b' S 'b'
D → ('0'−'9')?

This grammar recognizes a palindrome over the alphabet {a, b}, with an optional digit in the middle. Example strings accepted by the grammar include 'aa' and 'aba3aba'.

Left recursion happens when a grammar production refers to itself as its left-most element, either directly or indirectly. Since Packrat is a recursive descent parser, it cannot handle left recursion directly.[5] During the early stages of development, it was found that a production that is left-recursive can be transformed into a right-recursive production.[6] This modification significantly simplifies the task of a Packrat parser. Nonetheless, if there is an indirect left recursion involved, the process of rewriting can be quite complex and challenging. If the time complexity requirements are loosened from linear to superlinear, it is also possible to modify the memoization table of a Packrat parser to permit left recursion, without altering the input grammar.[5]

The iterative combinators α+ and α* need special attention when used in a Packrat parser. The use of iterative combinators introduces a hidden recursion that does not record intermediate results in the outcome matrix. This can lead to the parser operating with superlinear behaviour. The problem can be resolved by applying the following transformation:[1] each α* is replaced by a fresh nonterminal N with the rule N → αN / ε (and α+ by αN), so that every repetition step goes through an ordinary rule. With this transformation, the intermediate results can be properly memoized.

Memoization is an optimization technique in computing that aims to speed up programs by storing the results of expensive function calls. This technique essentially works by caching the results so that when the same inputs occur again, the cached result is simply returned, thus avoiding the time-consuming process of re-computing.[7] When using packrat parsing and memoization, it is noteworthy that the parsing function for each nonterminal is based solely on the input string; it does not depend on any information gathered during the parsing process. Essentially, memoization table entries do not affect or rely on the parser's specific state at any given time.[8]

Packrat parsing stores results in a matrix or similar data structure that allows for quick look-ups and insertions. When a production is encountered, the matrix is checked to see if it has already occurred. If it has, the result is retrieved from the matrix. If not, the production is evaluated, the result is inserted into the matrix, and then returned.[9] When evaluating the entire m × n matrix in a tabular approach, Θ(mn) space is required.[9] Here, m represents the number of nonterminals and n represents the input string size. In a naïve implementation, the entire table can be derived from the input string, starting from the end of the string.
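To make the memoization concrete, here is a minimal Python sketch of a packrat-style parser for the palindrome grammar above. Representing each rule as a function returning the position after a match (or None on failure), and keying the cache on (rule, position) via lru_cache, are illustrative choices, not a prescribed design.

```python
from functools import lru_cache

# Packrat sketch for:  S -> A / B / D ; A -> 'a' S 'a' ; B -> 'b' S 'b'
#                      D -> ('0'-'9')?
# Each rule takes an input position and returns the position after the
# match, or None on failure. lru_cache memoizes per (rule, position),
# so every rule is evaluated at most once per input position.

def make_parser(text):
    @lru_cache(maxsize=None)
    def S(i):
        for alternative in (A, B, D):   # ordered choice: first success wins
            j = alternative(i)
            if j is not None:
                return j
        return None

    @lru_cache(maxsize=None)
    def A(i):
        if i < len(text) and text[i] == 'a':
            j = S(i + 1)
            if j is not None and j < len(text) and text[j] == 'a':
                return j + 1
        return None

    @lru_cache(maxsize=None)
    def B(i):
        if i < len(text) and text[i] == 'b':
            j = S(i + 1)
            if j is not None and j < len(text) and text[j] == 'b':
                return j + 1
        return None

    @lru_cache(maxsize=None)
    def D(i):
        # ('0'-'9')? : optional digit, cannot fail
        if i < len(text) and text[i].isdigit():
            return i + 1
        return i

    return S

for s in ("aa", "aba3aba", "ab"):
    parse = make_parser(s)
    # accept only if S consumed the entire input (cf. the !. remark above)
    print(s, parse(0) == len(s))   # True, True, False
```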
The Packrat parser can be improved to update only the necessary cells in the matrix through a depth-first visit of each subexpression tree. Consequently, using a matrix with dimensions m × n is often wasteful, as most entries would remain empty.[5] These cells are linked to the input string, not to the nonterminals of the grammar. This means that increasing the input string size will always increase memory consumption, while the number of parsing rules changes only the worst-case space complexity.[1]

Another operator, called cut, has been introduced to Packrat to reduce its average space complexity even further. This operator exploits the formal structure of many programming languages to eliminate impossible derivations. For instance, in a standard programming language the parsing of control statements is mutually exclusive from the first recognized token, e.g. {if, do, while, switch}.[10]

α↑β / γ
(α↑β)*

In the first case, γ is not evaluated if α was recognized. The second rule can be rewritten as N → α↑βN / ε, and the same reasoning then applies.

When a Packrat parser uses cut operators, it effectively clears its backtracking stack. This is because a cut operator reduces the number of possible alternatives in an ordered choice. By adding cut operators in the right places in a grammar's definition, the resulting Packrat parser needs only a nearly constant amount of space for memoization.[10]

A sketch of an implementation of the Packrat algorithm in a Lua-like pseudocode is given in the literature.[5]

Consider the following context-free grammar, which recognizes simple arithmetic expressions composed of single digits interleaved by sum, multiplication, and parentheses:

S → A
A → M '+' A / M
M → P '*' M / P
P → '(' A ')' / D
D → ('0'−'9')

Denoting the line terminator with ⊣, we can apply the packrat algorithm to an input of the shape d*(d+d)⊣, for instance 1*(2+3)⊣ (the positions referenced below are 1-indexed):

Backtrack to the first grammar rule with an unexplored alternative, P → '(' A ')' / D (the alternative D still unexplored).
No update, because no terminal was recognized.
Update: D(1) = 1; P(1) = 1.
No update, because no nonterminal was fully recognized.
Backtrack to the first grammar rule with an unexplored alternative, P → '(' A ')' / D.
No update, because no terminal was recognized; but the new input will not match inside M → P '*' M, so an unroll is necessary to M → P '*' M / P.
Update: D(4) = 1; P(4) = 1.
We do not expand further, as we have a hit in the memoization table: P(4) ≠ 0, so shift the input by P(4).
Shift also the + from A → M '+' A.
Hit on P(4).
Update: M(4) = 1, as M was recognized.
Backtrack to the first grammar rule with an unexplored alternative, P → '(' A ')' / D.
No update, because no terminal was recognized; but the new input will not match inside M → P '*' M, so an unroll is necessary.
Update: D(6) = 1; P(6) = 1.
We do not expand further, as we have a hit in the memoization table: P(6) ≠ 0, so shift the input by P(6).
But the new input will not match + inside A → M '+' A, so an unroll is necessary.
Hit on P(6).
Update: M(6) = 1, as M was recognized.
We do not expand further, as we have a hit in the memoization table: M(6) ≠ 0, so shift the input by M(6).
Also shift the ) from P → '(' A ')'.
Hit on M(6).
Update: A(4) = 3, as A was recognized.
Update: P(3) = 5, as P was recognized.
No update, because no terminal was recognized.
Hit on P(3).
Update: M(1) = 7, as M was recognized.
No update, because no terminal was recognized.
S was totally reduced, so the input string is recognized.
Hit on M(1).
Update: A(1) = 7, as A was recognized.
Update: S(1) = 7, as S was recognized.
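The trace above can be reproduced with a small packrat implementation of the same arithmetic grammar. This is a sketch in Python rather than the literature's Lua-like pseudocode; the memo table stores, for each (rule, position) pair, the number of characters consumed or a failure marker, mirroring the D(i)/P(i)/M(i)/A(i) updates in the trace.

```python
# Packrat sketch for: S -> A ; A -> M '+' A / M ; M -> P '*' M / P
#                     P -> '(' A ')' / D ; D -> ('0'-'9')
FAIL = object()   # failure marker stored in the memo table

def parse(text):
    memo = {}  # (rule_name, position) -> chars consumed, or FAIL

    def rule(name, fn, i):
        key = (name, i)
        if key not in memo:        # each rule evaluated once per position
            memo[key] = fn(i)
        return memo[key]

    def D(i):
        return 1 if i < len(text) and text[i].isdigit() else FAIL

    def P(i):
        if i < len(text) and text[i] == '(':
            n = rule('A', A, i + 1)
            if n is not FAIL and i + 1 + n < len(text) and text[i + 1 + n] == ')':
                return n + 2
        return rule('D', D, i)     # ordered choice: fall back to D

    def M(i):
        n = rule('P', P, i)
        if n is not FAIL and i + n < len(text) and text[i + n] == '*':
            m = rule('M', M, i + n + 1)
            if m is not FAIL:
                return n + 1 + m
        return n                   # fall back to plain P

    def A(i):
        n = rule('M', M, i)
        if n is not FAIL and i + n < len(text) and text[i + n] == '+':
            m = rule('A', A, i + n + 1)
            if m is not FAIL:
                return n + 1 + m
        return n                   # fall back to plain M

    n = rule('A', A, 0)            # S -> A
    return n is not FAIL and n == len(text)

print(parse("1*(2+3)"))   # True
print(parse("1*(2+)"))    # False
```

Note how the fallback alternatives (P after M '*' M, M after M '+' A) cost nothing extra: the inner call hits the memo table, just as the "hit ... so shift" steps do in the trace.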
https://en.wikipedia.org/wiki/Packrat_parser
For parsing algorithms in computer science, the inside–outside algorithm is a way of re-estimating production probabilities in a probabilistic context-free grammar. It was introduced by James K. Baker in 1979 as a generalization of the forward–backward algorithm for parameter estimation on hidden Markov models to stochastic context-free grammars. It is used to compute expectations, for example as part of the expectation–maximization algorithm (an unsupervised learning algorithm).

The inside probability β_j(p, q) is the total probability of generating the words w_p ⋯ w_q, given the root nonterminal N_j and a grammar G.[1]

The outside probability α_j(p, q) is the total probability of beginning with the start symbol N_1 and generating the nonterminal N_j spanning positions p through q together with all the words outside w_p ⋯ w_q, given a grammar G.[1]

Inside probabilities, base case:

β_j(p, p) = P(w_p | N_j, G)

General case: suppose there is a rule N_j → N_r N_s in the grammar. Then the probability of generating w_p ⋯ w_q starting with a subtree rooted at N_j via that rule is

Σ_{k=p}^{q−1} P(N_j → N_r N_s) β_r(p, k) β_s(k+1, q).

The inside probability β_j(p, q) is just the sum over all such possible rules:

β_j(p, q) = Σ_{N_r, N_s} Σ_{k=p}^{q−1} P(N_j → N_r N_s) β_r(p, k) β_s(k+1, q)

Outside probabilities, base case:

α_j(1, n) = 1 if j = 1, and 0 otherwise.

Here the start symbol is N_1.

General case: suppose there is a rule N_r → N_j N_s in the grammar that generates N_j. Then the left contribution of that rule to the outside probability α_j(p, q) is

Σ_{k=q+1}^{n} P(N_r → N_j N_s) α_r(p, k) β_s(q+1, k).

Now suppose there is a rule N_r → N_s N_j in the grammar. Then the right contribution of that rule to the outside probability α_j(p, q) is

Σ_{k=1}^{p−1} P(N_r → N_s N_j) α_r(k, q) β_s(k, p−1).

The outside probability α_j(p, q) is the sum of the left and right contributions over all such rules:

α_j(p, q) = Σ_{N_r, N_s} Σ_{k=q+1}^{n} P(N_r → N_j N_s) α_r(p, k) β_s(q+1, k) + Σ_{N_r, N_s} Σ_{k=1}^{p−1} P(N_r → N_s N_j) α_r(k, q) β_s(k, p−1)
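The inside recursion is a CYK-style dynamic program and is easy to sketch. Below is a minimal Python sketch computing inside probabilities β for a toy PCFG in Chomsky normal form; the grammar encoding and the example probabilities are invented for illustration.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form (invented probabilities):
# binary rules P(N_j -> N_r N_s) and lexical rules P(N_j -> w).
binary = {("S", ("NP", "VP")): 1.0,
          ("VP", ("V", "NP")): 1.0}
lexical = {("NP", "she"): 0.6, ("NP", "fish"): 0.4,
           ("V", "eats"): 1.0}

def inside_probs(words):
    n = len(words)
    # beta[(j, p, q)] = probability that nonterminal j derives words[p..q]
    beta = defaultdict(float)
    # base case: beta_j(p, p) = P(w_p | N_j)
    for p, w in enumerate(words):
        for (j, word), prob in lexical.items():
            if word == w:
                beta[(j, p, p)] = prob
    # general case, filling shorter spans first:
    # beta_j(p, q) = sum over rules j -> r s and split points k of
    #                P(j -> r s) * beta_r(p, k) * beta_s(k+1, q)
    for span in range(2, n + 1):
        for p in range(n - span + 1):
            q = p + span - 1
            for (j, (r, s)), prob in binary.items():
                for k in range(p, q):
                    beta[(j, p, q)] += prob * beta[(r, p, k)] * beta[(s, k + 1, q)]
    return beta

beta = inside_probs(["she", "eats", "fish"])
print(beta[("S", 0, 2)])  # total probability of the sentence: 0.24
```

The outside probabilities α are computed by a second pass in the opposite direction (longest spans first), reusing the β table, exactly as in the recursions above.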
https://en.wikipedia.org/wiki/Inside%E2%80%93outside_algorithm
In linguistics, a catena (English pronunciation: /kəˈtiːnə/, plural catenas or catenae; from Latin for "chain")[1] is a unit of syntax and morphology, closely associated with dependency grammars. It is a more flexible and inclusive unit than the constituent, and its proponents therefore consider it to be better suited than the constituent to serve as the fundamental unit of syntactic and morphosyntactic analysis.[2]

The catena has served as the basis for the analysis of a number of phenomena of syntax, such as idiosyncratic meaning, ellipsis mechanisms (e.g. gapping, stripping, VP-ellipsis, pseudogapping, sluicing, answer ellipsis, comparative deletion), predicate-argument structures, and discontinuities (topicalization, wh-fronting, scrambling, extraposition, etc.).[3] The catena concept has also been taken as the basis for a theory of morphosyntax, i.e. for the extension of dependencies into words; dependencies are acknowledged between the morphs that constitute words.[4]

While the catena concept has been applied mainly to the syntax of English, other works are also demonstrating its applicability to the syntax and morphology of other languages.[5]

Informally, a catena is any single word or any combination of words that is continuous with respect to dominance, i.e. any word or combination of words linked together by dependencies; formally, a catena corresponds to a connected subgraph of the dependency tree. An understanding of the catena is established by distinguishing between the catena and other, similarly defined units. There are four units (including the catena) that are pertinent in this regard: string, catena, component, and constituent. The informal definition of the catena is repeated for easy comparison with the definitions of the other three units:[8]

- String: a word or a combination of words that is continuous with respect to precedence.
- Catena: a word or a combination of words that is continuous with respect to dominance.
- Component: a word or a combination of words that is continuous with respect to both precedence and dominance.
- Constituent: a component that is complete.

A component is complete if it includes all the elements that its root node dominates. The string and catena complement each other in an obvious way, and the definition of the constituent is essentially the same as one finds in most theories of syntax, where a constituent is understood to consist of any node plus all the nodes that that node dominates.

These definitions are illustrated in the source with a six-word dependency tree, in which the capital letters A-F abbreviate the words, and all of the distinct strings, catenae, components, and constituents in that tree are enumerated.[9] Noteworthy is the fact that the tree contains 39 distinct word combinations that are not catenae, e.g. AC, BD, CE, BCE, ADF, ABEF, ABDEF, etc. Observe as well that there are a mere six constituents, but 24 catenae. There are therefore four times more catenae in the tree than there are constituents. The inclusivity and flexibility of the catena unit becomes apparent. The four units relate to each other as follows: the components are exactly those word combinations that are both strings and catenae, and the constituents are the complete components.

The catena concept has been present in linguistics for a few decades. In the 1970s, the German dependency grammarian Jürgen Kunze called the unit a Teilbaum 'subtree'.[10] In the early 1990s, the psycholinguists Martin Pickering and Guy Barry acknowledged the catena unit, calling it a dependency constituent.[11] However, the catena concept did not generate much interest among linguists until William O'Grady observed in his 1998 article that the words that form idioms are stored as catenae in the lexicon.[12] O'Grady called the relevant syntactic unit a chain, however, not a catena.
The term catena was introduced later by Timothy Osborne and colleagues as a means of avoiding confusion with the preexisting chain concept of Minimalist theory.[13] Since that time, the catena concept has been developed beyond O'Grady's analysis of idioms to serve as the basis for the analysis of a number of central phenomena in the syntax of natural languages (e.g. ellipsis and predicate–argument structures).[14]

Idiosyncratic language of all sorts can be captured in terms of catenae. When meaning is constructed in such a way that one cannot acknowledge the meaning chunks as constituents, the catena is involved: the meaning-bearing units are catenae, not constituents. This situation is illustrated here in terms of various collocations and proper idioms.

Simple collocations (i.e. the co-occurrence of certain words) demonstrate the catena concept well. The idiosyncratic nature of particle verb collocations provides the first group of examples: take after, take in, take on, take over, take up, etc. In its purest form, the verb take means 'seize, grab, possess'. In these collocations with the various particles, however, the meaning of take shifts significantly each time depending on the particle. The particle and take convey a distinct meaning together, whereby this distinct meaning cannot be understood as a straightforward combination of the meaning of take alone and the meaning of the preposition alone. In such cases, one says that the meaning is non-compositional. Non-compositional meaning can be captured in terms of catenae: the word combinations that assume non-compositional meaning form catenae (but not constituents).

Both sentences a and b show that while the verb and its particle do not form a constituent, they do form a catena each time. The contrast in word order across the sentences of each pair illustrates what is known as shifting. Shifting occurs to accommodate the relative weight of the constituents involved: heavy constituents prefer to appear to the right of lighter sister constituents. The shifting does not change the fact that the verb and particle form a catena each time, even when they do not form a string.

Numerous verb-preposition combinations are idiosyncratic collocations insofar as the choice of preposition is strongly restricted by the verb, e.g. account for, count on, fill out, rely on, take after, wait for, etc. The meaning of many of these combinations is also non-compositional, as with the particle verbs. And also as with the particle verbs, the combinations form catenae (but not constituents) in simple declarative sentences. The verb and the preposition that it demands form a single meaning-bearing unit, whereby this unit is a catena. These meaning-bearing units can thus be stored as catenae in the mental lexicon of speakers. As catenae, they are concrete units of syntax.

The final type of collocation produced here to illustrate catenae is the complex preposition, e.g. because of, due to, inside of, in spite of, out of, outside of, etc. The intonation pattern for these prepositions suggests that orthographic conventions are correct in writing them as two (or more) words. This situation, however, might be viewed as a problem, since it is not clear that the two words each time can be viewed as forming a constituent. In this regard, they do of course qualify as catenae.

The collocations illustrated in this section have focused mainly on prepositions and particles, and they are therefore just a small selection of meaning-bearing collocations. They are, however, quite suggestive.
It seems likely that all meaning-bearing collocations are stored as catenae in the mental lexicon of language users.

Full idioms are the canonical cases of non-compositional meaning. The fixed words of idioms do not bear their productive meaning, e.g. take it on the chin. Someone who "takes it on the chin" does not actually experience any physical contact to their chin, which means that chin does not have its normal productive meaning and must hence be part of a greater collocation. This greater collocation is the idiom, which consists of five words in this case.

While the idiom take it on the chin can be stored as a VP constituent (and is therefore not a problem for constituent-based theories), there are many idioms that clearly cannot be stored as constituents. These idioms are a problem for constituent-based theories precisely because they do not qualify as constituents. However, they do of course qualify as catenae. The discussion here focuses on these idioms since they illustrate particularly well the value of the catena concept.

Many idioms in English consist of a verb and a noun (and more), whereby the noun takes a possessor that is co-indexed with the subject and will thus vary with the subject. These idioms are stored as catenae but clearly not as constituents. Similar idioms have a possessor that is freer insofar as it is not necessarily co-indexed with the subject; these idioms are also stored as catenae (but not as constituents).[15] Further idioms include the verb, an object, and at least one preposition, and it should again be obvious that the fixed words of such idioms can in no way be viewed as forming constituents. Still other idioms include the verb and a prepositional phrase at the same time that the object is free, and idioms involving a ditransitive verb can include the second object at the same time that the first object is free.

Sayings, too, are idiomatic. When an adverb (or some other adjunct) appears in a saying, it is not part of the saying. Nevertheless, the words of the saying still form a catena.

Ellipsis mechanisms (gapping, stripping, VP-ellipsis, pseudogapping, answer fragments, sluicing, comparative deletion) elide catenae, whereby many of these catenae are non-constituents.[16] The following examples illustrate gapping:[17]

Clauses a are acceptable instances of gapping; the gapped material corresponds to the catena in green. Clauses b are failed attempts at gapping; they fail because the gapped material does not correspond to a catena.

The following examples illustrate stripping. Many linguists see stripping as a particular manifestation of gapping where just a single remnant remains in the gapped/stripped clause:[18]

Clauses a are acceptable instances of stripping, in part because the stripped material corresponds to a catena (in green). Clauses b again fail; they fail because the stripped material does not qualify as a catena.

The following examples illustrate answer ellipsis:[19]

In each of the acceptable answer fragments (a-e), the elided material corresponds to a catena. In contrast, the elided material corresponds to a non-catena in each of the unacceptable answer fragments (f-h).

An analysis of VP-ellipsis using the catena aims to capture antecedent-contained deletion without quantifier raising.[20] Both the elided material (in light grey) and the antecedent (in bold) to the elided material qualify as catenae. As catenae, both are concrete units of syntactic analysis.
The need for a movement-type analysis (in terms of QR or otherwise) does not arise. One can note that the second of the two examples is an instance of pseudogapping, pseudogapping being a particular manifestation of VP-ellipsis.

Two additional complex examples further illustrate how a catena-based analysis of answer fragments works: while the elided material shown in light gray certainly cannot be construed as a constituent, it does qualify as a catena (because it forms a subtree). A further example shows that even when the answer contains two fragments, the elided material still qualifies as a catena. Such answers that contain two (or even more) fragments are rare in English (although they are more common in other languages) and may be less than fully acceptable. A movement analysis of this answer fragment would have to assume that both Susan and Larry have moved out of the encompassing constituent so that that constituent can be elided. The catena-based analysis, in contrast, does not need to appeal to movement in this way.

In a further example of pseudogapping, the elided words qualify as a catena in surface syntax, which means movement is not necessary: the elided words in light gray qualify as a catena (but not a constituent). Thus if the catena is taken as the fundamental unit of syntactic analysis, the analysis of pseudogapping can remain entirely with what is present on the surface.

The catena unit is suited to an understanding of predicates and their arguments:[21] a predicate is a property that is assigned to an argument, or a relationship that is established between arguments. A given predicate appears in sentence structure as a catena, and so do its arguments. A standard matrix predicate in a sentence consists of a content verb and potentially one or more auxiliary verbs.

The next examples illustrate how predicates and their arguments are manifest in synonymous sentences across languages. The words in green are the main predicate and those in red are that predicate's arguments. The single-word predicate said in the English sentence on the left corresponds to the two-word predicate hat gesagt in German. Each predicate shown and each of its arguments shown is a catena.

The next example is similar, but this time a French sentence is used to make the point. The matrix predicates are again in green, and their arguments in red. The arrow dependency edge marks an adjunct; this convention was not employed in the examples further above. In this case, the main predicate in English consists of two words corresponding to one word in French.

The next examples deliver a sense of the manner in which the main sentence predicate remains a catena as the number of auxiliary verbs increases. Sentence a contains one auxiliary verb, sentence b two, and sentence c three. The appearance of these auxiliary verbs adds functional information to the core content provided by the content verb revised. As each additional auxiliary verb is added, the predicate grows, the predicate catena gaining links.

When assessing the approach to predicate–argument structures in terms of catenae, it is important to keep in mind that the constituent unit of phrase structure grammar is much less helpful in characterizing the actual word combinations that qualify as predicates and their arguments. This fact should be evident from the examples here, where the word combinations in green would not qualify as constituents in phrase structure grammars.
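The contrast between catenae and constituents discussed above (24 catenae versus six constituents in a six-word tree) is easy to check mechanically. Below is a minimal Python sketch that, given a dependency tree encoded as parent links, counts which word combinations are catenae (connected subsets of the tree) and which are constituents (a node plus everything it dominates); the example tree is an invented six-node tree, not the one from the source.

```python
from itertools import combinations

# Invented six-node dependency tree as parent links (node -> head);
# None marks the root. Labels play the role of the words A..F.
parent = {"A": "B", "B": None, "C": "B", "D": "C", "E": "C", "F": "E"}
nodes = sorted(parent)

def is_catena(combo):
    """A catena is a set of nodes that is connected in the tree
    (continuous with respect to dominance). In a tree, a node set is
    connected iff exactly one of its members has its parent outside it."""
    combo = set(combo)
    roots = [n for n in combo if parent[n] not in combo]
    return len(roots) == 1

def dominated(n):
    """n plus all the nodes that n dominates (= a constituent)."""
    out = {n}
    for m in nodes:
        if parent[m] == n:
            out |= dominated(m)
    return out

catenae = [c for r in range(1, len(nodes) + 1)
           for c in combinations(nodes, r) if is_catena(c)]
constituents = [frozenset(dominated(n)) for n in nodes]

print(len(catenae), "catenae;", len(constituents), "constituents")
```

Every constituent produced by dominated() passes is_catena(), illustrating the inclusion noted in the article: constituents are a (small) subset of catenae.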
https://en.wikipedia.org/wiki/Catena_(linguistics)
In computer science, in particular in concurrency theory, a dependency relation is a binary relation on a finite domain Σ[1]: 4 that is symmetric and reflexive,[1]: 6 i.e. a finite tolerance relation. That is, it is a finite set of ordered pairs D such that

- if (a, b) ∈ D then (b, a) ∈ D (symmetry), and
- (a, a) ∈ D for all a ∈ Σ (reflexivity).

In general, dependency relations are not transitive; thus, they generalize the notion of an equivalence relation by discarding transitivity.

Σ is also called the alphabet on which D is defined. The independency induced by D is the binary relation I

I = (Σ × Σ) \ D

That is, the independency is the set of all ordered pairs that are not in D. The independency relation is symmetric and irreflexive. Conversely, given any symmetric and irreflexive relation I on a finite alphabet, the relation

D = (Σ × Σ) \ I

is a dependency relation.

The pair (Σ, D) is called the concurrent alphabet.[2]: 6 The pair (Σ, I) is called the independency alphabet or reliance alphabet, but this term may also refer to the triple (Σ, D, I) (with I induced by D).[3]: 6 Elements x, y ∈ Σ are called dependent if xDy holds, and independent otherwise (i.e. if xIy holds).[1]: 6

Given a reliance alphabet (Σ, D, I), a symmetric and irreflexive relation ≐ can be defined on the free monoid Σ* of all possible strings of finite length by: xaby ≐ xbay for all strings x, y ∈ Σ* and all independent symbols a, b (i.e. with (a, b) ∈ I). The equivalence closure of ≐ is denoted ≡ or ≡_(Σ,D,I) and called (Σ, D, I)-equivalence. Informally, p ≡ q holds if the string p can be transformed into q by a finite sequence of swaps of adjacent independent symbols. The equivalence classes of ≡ are called traces[1]: 7–8 and are studied in trace theory.

Given the alphabet Σ = {a, b, c}, a possible dependency relation is D = {(a,b), (b,a), (a,c), (c,a), (a,a), (b,b), (c,c)}. The corresponding independency is I = {(b,c), (c,b)}. Then, for example, the symbols b and c are independent of one another, while a and b are dependent. The string acbba is equivalent to abcba and to abbca, but to no other string.
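For short strings, the trace (equivalence class) can be explored by brute force: starting from a string, repeatedly swap adjacent independent symbols and collect everything reachable. A minimal Python sketch, using the example alphabet and independency from above:

```python
# Independency from the example: only b and c commute.
INDEPENDENT = {("b", "c"), ("c", "b")}

def trace_class(word):
    """All strings reachable from `word` by swapping adjacent
    independent symbols, i.e. the trace (equivalence class) of `word`."""
    seen = {word}
    frontier = [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in INDEPENDENT:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    frontier.append(swapped)
    return seen

print(sorted(trace_class("acbba")))  # ['abbca', 'abcba', 'acbba']
```

The output reproduces the article's claim: acbba is equivalent to abcba and abbca, and to no other string.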
https://en.wikipedia.org/wiki/Dependency_relation
In linguistics, head directionality is a proposed parameter that classifies languages according to whether they are head-initial (the head of a phrase precedes its complements) or head-final (the head follows its complements). The head is the element that determines the category of a phrase: for example, in a verb phrase, the head is a verb. Head-initial languages are therefore "VO" languages, and head-final languages are "OV" languages.[1]

Some languages are consistently head-initial or head-final at all phrasal levels. English is considered to be mainly head-initial (verbs precede their objects, for example), while Japanese is an example of a language that is consistently head-final. In certain other languages, such as German and Gbe, examples of both types of head directionality occur. Various theories have been proposed to explain such variation.

Head directionality is connected with the type of branching that predominates in a language: head-initial structures are right-branching, while head-final structures are left-branching.[2] On the basis of these criteria, languages can be divided into head-final (rigid and non-rigid) and head-initial types. Criteria for identifying headedness have been proposed in the literature.[3]

In some cases, particularly with noun and adjective phrases, it is not always clear which dependents are to be classed as complements, and which as adjuncts. Although in principle the head-directionality parameter concerns the order of heads and complements only, considerations of head-initiality and head-finality sometimes take account of the position of the head in the phrase as a whole, including adjuncts.

The structure of the various types of phrase is analyzed below in relation to specific languages, with a focus on the ordering of head and complement. In some cases (such as English and Japanese) this ordering is found to be the same in practically all types of phrase, whereas in others (such as German and Gbe) the pattern is less consistent. Different theoretical explanations of these inconsistencies are discussed later in the article. There are various types of phrase in which the ordering of head and complement(s) may be considered when attempting to determine the head directionality of a language, including verb, noun, adjective, adpositional, determiner, and complementizer phrases.

English is a mainly head-initial language. In a typical verb phrase such as eat an apple, for example, the verb precedes its complements:[6] the head of the phrase (the verb eat) precedes its complement (the determiner phrase an apple). Switching the order to "[VP [DP an apple] [V eat]]" would be ungrammatical.

Nouns also tend to precede any complements; a relative clause (or complementizer phrase) that follows a noun may be considered a complement of that noun.[7] Nouns do not necessarily begin their phrase; they may be preceded by attributive adjectives, but these are regarded as adjuncts rather than complements. Adjectives themselves may be preceded by adjuncts, namely adverbs, as in extremely happy.[8] However, when an adjective phrase contains a true complement, such as a prepositional phrase, the head adjective precedes it.[9]

English adpositional phrases are also head-initial; that is, English has prepositions rather than postpositions.[10]

On the determiner phrase (DP) view, where a determiner is taken to be the head of its phrase (rather than the associated noun), English can be seen to be head-initial in this type of phrase too.
For instance, the head can be taken to be the determiner any, with the complement being the noun (phrase) book.[11] English also has head-initial complementizer phrases, as in the example[12] where the complementizer that precedes its complement, the tense phrase Mary did not swim.

Grammatical words marking tense and aspect generally precede the semantic verb. This indicates that, if finite verb phrases are analyzed as tense phrases or aspect phrases, these are again head-initial in English. In the example above, did is considered a (past) tense marker and precedes its complement, the verb phrase not swim. Similarly, has is a (perfect) aspect marker;[13] again it appears before the verb (phrase) which is its complement.

A sequence of nested phrases in which each head precedes its complement illustrates the pattern throughout.[14] In the complementizer phrase (CP) in (a), the complementizer (C) precedes its tense phrase (TP) complement. In the tense phrase in (b), the tense-marking element (T) precedes its verb phrase (VP) complement. (The subject of the tense phrase, the girl, is a specifier, which does not need to be considered when analyzing the ordering of head and complement.) In the verb phrase in (c), the verb (V) precedes its two complements, namely the determiner phrase (DP) the book and the prepositional phrase (PP) on the table. In (d), where a picture is analyzed as a determiner phrase, the determiner (D) a precedes its noun phrase (NP) complement, while in (e), the preposition (P) on precedes its DP complement your desk.

Indonesian is an example of an SVO, head-initial language.[1][15] Its head-initial character can be examined through a dependency perspective or through a word-order perspective; both approaches lead to the conclusion that Indonesian is a head-initial language. When examining Indonesian through a dependency perspective, it is considered head-initial because the governor of a pair of constituents is positioned before the dependent.[16]

Placing the head before a dependent minimizes the overall dependency distance, i.e. the distance between the two constituents.[16] Minimizing dependency distance reduces cognitive demand, since a head-final dependency requires the constituents in the dependent clause to be stored in working memory until the head is realized.[16] In Indonesian, the number of constituents affects the dependency direction.
When there are 6 constituents, which makes for a relatively short sentence, there is a preference for head-initial relations.[16] However, when there are 11-30 constituents, there appears to be a balance of head-initial and head-final dependencies.[16] Regardless, Indonesian displays an overall head-initial preference on all levels of dependency structure, as it consistently positions the head as early in the sentence as possible, even when this produces a longer dependency distance than placing the head after its dependents would.[16] Furthermore, Indonesian shows an overall head-initial preference when head-initial and head-final relations are compared at all levels of constituent length, for both spoken and written data.[16]

The subject of the sentence is followed by the verb, representing SVO order.[17] The following example demonstrates head-initial directionality in Indonesian (note that perdana menteri "prime minister" is, unusually, head-final):

Perdana menteri sudah pulang
Prime minister already home
"The Prime minister has returned home"
[CP [DP Perdana menteri] [VP sudah pulang]]

Classifiers and partitives can function as the head nouns of noun phrases. Below is an example of the internal structure of a noun phrase and its head-initial word order:

Botol ini retak
Bottle DET-this crack
"This bottle is cracked"
[CP [DP botol ini] [VP retak]]

Head-initial word order is seen in the internal structure of the verb phrase in the following example, where the V is in the head position of the verb phrase and thus appears before its complement:

Dokter memeriksa mata saya
Doctor checks eye PN-my
"The doctor checked my eyes"
[CP [DP Dokter] [VP [V memeriksa] [DP mata saya]]]

In Indonesian, a noun can be followed by another, modifying noun whose primary function is to provide more specific information about the preceding head noun, such as indicating what the head noun is made of, gender, locative sense, what the head noun does, etc. However, no other word is able to intervene between a head noun and its following modifying noun. If a word follows the modifying noun, then it provides reference to the head noun and not the modifying noun.[17]

guru bahasa
teacher language
"language teacher"

guru sekolah itu
teacher school DET-that
"that schoolteacher"

toko buku
shop book
"bookshop"

toko buku yang besar
shop book DET-a big
"a big bookshop"

sate ayam
satay chicken
"chicken satay"

Japanese is an example of a strongly head-final language. This can be seen in verb phrases and tense phrases: the verb (tabe in the example) comes after its complement, while the tense marker (ru) comes after the whole verb phrase, which is its complement:[6]

リンゴを食べる
ringo-o tabe-ru
apple-ACC eat-NPAST
"eat an apple"
[TP [VP [DP ringo-o] [V tabe]] [T ru]]

Nouns also typically come after any complements, as in the following example, where the PP New York-de-no may be regarded as a complement:[18]

ジョンの昨日のニューヨークでの講義
John-no kinoo-no New York-de-no koogi
John-GEN yesterday-GEN New York-in-GEN lecture
"John's lecture in New York yesterday"
[NP [PP New York-de-no] [N koogi]]

Adjectives also follow any complements they may have.
In this example the complement of quantity,ni-juu-meetoru("twenty meters"), precedes the head adjectivetakai("tall"):[19] この Kono this ビルは biru-wa building-TOP 20メートル ni-juu-meetoru two-ten-meter 高い takai tall この ビルは 20メートル 高い Kono biru-wa ni-juu-meetoru takai this building-TOP two-ten-meter tall "This building is twenty meters tall." [AP[Qni-juu-meetoru] [Atakai]] Japanese uses postpositions rather than prepositions, so its adpositional phrases are again head-final:[20] 僕が Boku-ga I-NOM 高須村に Takasu-mura-ni Takasu-village-in 住んでいる sunde-iru live-PRES 僕が 高須村に 住んでいる Boku-ga Takasu-mura-ni sunde-iru I-NOM Takasu-village-in live-PRES "I live in Takasu village." [PP[DPTakasu-mura] [Pni]] Determiner phrases are head-final as well:[11] 誰 dare person も mo any 誰 も dare mo person any "anyone" [DP[NPdare] [Dmo]] A complementizer (herekoto, equivalent to English "that") comes after its complement (here a tense phrase meaning "Mary did not swim"), thus Japanese complementizer phrases are head-final:[12] メリーが Mary-ga Mary-NOM 泳がなかったこと oyog-ana-katta-koto swim-NEG-PAST-that メリーが 泳がなかったこと Mary-ga oyog-ana-katta-koto Mary-NOM swim-NEG-PAST-that "that Mary did not swim" [CP[TPMary-ga oyog-ana-katta] [Ckoto]] Turkishis an agglutinative, head-final, and left-branching language that uses aSOVword order.[21]As such, Turkish complements and adjuncts typically precede their head under neutral prosody, andadpositionsare postpositional. Turkish employs a case marking system[22]whichaffixesto the right boundary of the word it is modifying. As such, all case markings in Turkish are suffixes. For example, the set ofaccusativecase marking suffixes-(y)ı-, -(y)i-, -(y)u-, -(y)ü-in Turkish indicate that it is the direct object of a verb. Additionally, while some kinds of definite determiners andpostpositionsin Turkish can be marked by case, other types also exist as free morphemes.[22]In the following examples, Turkish case marker suffixes are analyzed as complements to the head. In Turkish, tense is denoted by a case marking suffix on the verb.[23] Ahmet Ahmet anne-sin-i mother-3SG-ACC ziyaret visit et-ti do-PAST Ahmet anne-sin-i ziyaret et-ti Ahmet mother-3SG-ACC visit do-PAST 'Ahmet visited his mother.' [TP[VPet][T-ti]] In neutral prosody, Turkish verb phrases are primarily head-final, as the verb comes after its complement. Variation in object-verb ordering is not strictly rigid. However, constructions where the verb precedes the object are less common.[24] Çocuk-lar child-PL çikolata chocolate sever like Çocuk-lar çikolata sever child-PL chocolate like 'Children like chocolate.' [VP[DPçikolata][Vsever]] In Turkish, definite determiners may be marked with a case marker suffix on the noun, such as when the noun is the direct object of a verb. They may also exist as free morphemes that attach to a head-initial determiner phrase, such as when the determiner is a demonstrative. Like other case markers in Turkish, when the morpheme carrying the demonstrative meaning is a case marker, they attach at the end of the word. As such, the head of the phrase, in this case the determiner, follows its complement like in the example below:[22] Dün Yesterday çok very garip strange kitap-lar-ı book-PL-ACC oku-du-m read-PAST-1SG Dün çok garip kitap-lar-ı oku-du-m Yesterday very strange book-PL-ACC read-PAST-1SG 'Yesterday I read the very strange books.' [DP[NPkitap-lar][D-ı]] Turkish adpositions are postpositions that can affix as a case marker at the end of a word. 
They can also be a separate word that attaches to the head-final postpositional phrase, as is the case in the example below:[24] Bu This kitab-ı book-ACC Ahmet Ahmet için for al-dı-m buy-PAST-1SG Bu kitab-ı Ahmet için al-dı-m This book-ACC Ahmet for buy-PAST-1SG 'I bought this book for Ahmet.' [PP[DPAhmet][Piçin]] Turkish employs acase markingsystem that allows some constituents in Turkish clauses to participate in permutations of its canonical SOV word order, thereby in some ways exhibiting a 'free' word order. Specifically, constituents of anindependent clausecan be moved around and constituents of phrasal categories can occur outside of theprojectionsthey are elements of. As a result, it is possible for the major case-marked constituents of a clause in Turkish to appear in all possible orders in a sentence, such that SOV, SVO, OSV, OVS, VSO, and VOS word orders are acceptable.[25] This free word order allows for the verbal phrase to occur in any position in an independent clause, unlike other head-final languages (such asJapaneseandKorean, in which any variation in word order must occur in the preverbal domain and the verb remains at the end of the clause(see§ Japanese, above)). Because of this relatively high degree of variation in word order in Turkish, its status as a head-final language is generally considered to be less strict and not absolute like Japanese or Korean, since while embedded clauses must remain verb-final, matrix clauses can show variability in word order.[25] In the canonical word order of Turkish, as is typical in a head-final language, subjects come at the beginning of the sentence, then objects, with verbs coming in last: 1. Subject-Object-Verb (SOV, canonical word order) Yazar author makale-yi article-ACC bitir-di finish-PAST Yazar makale-yi bitir-di author article-ACC finish-PAST 'The author finished the article.' However, several variations on this order can occur on matrix clauses, such that the subject, object, and verb can occupy all different positions within a sentence. Because Turkish uses a case-marking system to denote how each word functions in a sentence in relation to the rest, case-marked elements can be moved around without a loss in meaning. These variations, also called permutations,[26][25]can change the discourse focus of the constituents in the sentence: 2. Object-Subject-Verb (OSV) Makale-yi article-ACC yazar author bitir-di finish-PAST Makale-yi yazar bitir-di article-ACC author finish-PAST 'The author finished the article.' In this variation, the object moves to the beginning of the sentence, the subject follows, and the verb remains in final position. 3. Object-Verb-Subject (OVS) Makale-yi article-ACC bitir-di finish-PAST yazar author Makale-yi bitir-di yazar article-ACC finish-PAST author 'The author finished the article.' In this variation, the subject moves to end of the sentence. This is an example of how verbs in Turkish can move to other positions in the clause, even though other head-final languages, such as Japanese and Korean, typically see verbs coming only at the end of the sentence. 4. Subject-Verb-Object (SVO) Yazar author bitir-di finish-PAST makale-yi article-ACC Yazar bitir-di makale-yi author finish-PAST article-ACC 'The author finished the article.' In this variation, the object moves to the end of the sentence and the verb phrase now directly precedes the subject, which remains at the beginning of the sentence. This word order is akin toEnglishword order. 5. 
Verb-Subject-Object (VSO) Bitir-di finish-PAST yazar author makale-yi article-ACC Bitir-di yazar makale-yi finish-PAST author article-ACC 'The author finished the article.' In this variation, the verb phrase moves from the end of the sentence to the beginning of the sentence. 6. Verb-Object-Subject (VOS) Bitir-di finish-PAST makale-yi article-ACC yazar author Bitir-di makale-yi yazar finish-PAST article-ACC author 'The author finished the article.' In this variation, the verb phrase moves to the beginning of the sentence, the object moves so that it is directly following the verb, and the subject is at the end of the sentence. German, while being predominantly head-initial, is less conclusively so than in the case of English. German also features certain head-final structures. For example, in anonfiniteverb phrase the verb is final. In a finite verb phrase (or tense/aspect phrase) the verb (tense/aspect) is initial, although it may move to final position in asubordinate clause. In the following example,[27]the non-finite verb phrasees findenis head-final, whereas in the tensed main clauseich werde es finden(headed by theauxiliary verbwerdeindicatingfuture tense), the finite auxiliary precedes its complement (as an instance of averb-secondconstruction; in the example below, this V2-position is called "T"). Ich I werde will es it finden find Ich werde es finden I will it find "I will find it." Noun phrases containing complements are head-initial; in this example[28]the complement, the CPder den Befehl überbrachte, follows the head nounBoten. Man one beschimpfte insulted den the Boten, messenger der who den the Befehl command überbrachte delivered Man beschimpfte den Boten, der den Befehl überbrachte one insulted the messenger who the command delivered "The messenger, who delivered the command, was insulted." Adjective phrases may be head-final or head-initial. In the next example the adjective (stolze) follows its complement (auf seine Kinder).[29] der the auf of seine his Kinder children stolze proud Vater father der auf seine Kinder stolze Vater the of his children proud father "the father (who is) proud of his children" However, when essentially the same adjective phrase is usedpredicativelyrather than attributively, it can also be head-initial:[30] weil since er he stolz proud auf of seine his Kinder children ist is weil er stolz auf seine Kinder ist since he proud of his children is "since he is proud of his children" Most adpositional phrases are head-initial (as German has mostly prepositions rather than postpositions), as in the following example, whereaufcomes before its complementden Tisch:[31] Peter Peter legt puts das the Buch book auf on den the.ACC Tisch table Peter legt das Buch auf den Tisch Peter puts the book on the.ACC table "Peter puts the book on the table." German also has somepostpositions, however (such asgegenüber"opposite"), and so adpositional phrases can also sometimes be head-final. Another example is provided by the analysis of the following sentence:[32] Die the Schnecke snail kroch crept das the Dach roof hinauf up Die Schnecke kroch das Dach hinauf the snail crept the roof up "The snail crept up the roof" Like in English, determiner phrases and complementizer phrases in German are head-initial. 
The next example is of a determiner phrase, headed by the articleder:[33] der the Mann man der Mann the man "the man" In the following example, the complementizerdassprecedes the tense phrase which serves as its complement:[34] dass that Lisa Lisa eine a Blume flower gepflanzt planted hat has dass Lisa eine Blume gepflanzt hat that Lisa a flower planted has "that Lisa planted a flower" Standard Chinese(whose syntax is typical ofChinese varietiesgenerally) features a mixture of head-final and head-initial structures. Noun phrases are head-final. Modifiers virtually always precede the noun they modify. In the case of strict head/complement ordering, however, Chinese appears to be head-initial. Verbs normally precede their objects. Both prepositions and postpositions are reported, but the postpositions can be analyzed as a type of noun (the prepositions are often calledcoverbs). InGbe, a mixture of head-initial and head-final structures is found. For example, a verb may appear after or before its complement, which means that both head-initial and head-final verb phrases occur.[35]In the first example the verb for "use" appears after its complement: Kɔ̀jó Kojo tó IMPERF àmí oil lɔ́ DET zân use Kɔ̀jó tó àmí lɔ́ zân Kojo IMPERF oil DET use "Kojo is using the oil." In the second example the verb precedes the complement: Kɔ̀jó Kojo nɔ̀ HAB zán use-PERF àmí oil lɔ́ DET Kɔ̀jó nɔ̀ zán àmí lɔ́ Kojo HAB use-PERF oil DET "Kojo habitually used the oil/Kojo habitually uses the oil." It has been debated whether the first example is due to objectmovementto the left side of the verb[36]or whether the lexical entry of the verb simply allows head-initial and head-final structures.[37] Tense phrases and aspect phrases are head-initial since aspect markers (such astóandnɔ̀above) and tense markers (such as the future markernáin the following example, but that does not apply to tense markers shown by verbinflection) come before the verb phrase.[38] dàwé man lɔ̀ DET ná FUT xɔ̀ buy kɛ̀kɛ́ bicycle dàwé lɔ̀ ná xɔ̀ kɛ̀kɛ́ man DET FUT buy bicycle "The man will buy a bicycle." Gbe noun phrases are typically head-final, as in this example:[39] Kɔ̀kú Koku sín CASE ɖìdè sketch lɛ̀ PL Kɔ̀kú sín ɖìdè lɛ̀ KokuCASEsketch PL "sketches of Koku" In the following example of an adjective phrase, Gbe follows a head-initial pattern, as the headyùprecedes theintensifiertàùú.[40] àǔn dog yù black tàùú INT àǔn yù tàùú dog black INT "really black dogs" Gbe adpositional phrases are head-initial, with prepositions preceding their complement:[41] Kòfi Kofi zé take-PERF kwɛ́ money xlán to Àsíbá Asiba Kòfi zé kwɛ́ xlán Àsíbá Kofi take-PERF money to Asiba "Kofi sent money to Asiba." Determiner phrases, however, are head-final:[42] Asíbá Asiba xɔ̀ buy-PERF àvɔ̀ cloth àmàmú green màtàn-màtàn odd ɖé DEF Asíbá xɔ̀ àvɔ̀ àmàmú màtàn-màtàn ɖé Asiba buy-PERF cloth green odd DEF "Asiba bought a specific ugly green cloth" Complementizer phrases are head-initial:[43] ɖé that Dòsà Dosa gbá build-PERF xwé house ɔ̀ DEF ɔ̀ DET ɖé Dòsà gbá xwé ɔ̀ ɔ̀ that Dosa build-PERF house DEF DET "that Dosa built the house" The idea that syntactic structures reduce to binary relations was introduced byLucien Tesnièrein 1959 within the framework ofdependency theory, which was further developed in the 1960s. Tesnière distinguished two structures that differ in the placement of the structurally governing element (head):[44]centripetal structures, in which heads precede theirdependents, andcentrifugal structures, in which heads follow their dependents. 
Dependents here may includecomplements,adjuncts, andspecifiers. Joseph Greenberg, who worked in the field oflanguage typology, put forward an implicational theory ofword order, whereby:[45] The first set of properties make heads come at the start of their phrases, while the second set make heads come at the end. However, it has been claimed that many languages (such asBasque) do not fulfill the above conditions, and that Greenberg's theory fails to predict the exceptions.[46] Winfred P. Lehmann, expanding upon Greenberg's theory, proposed aFundamental Principle of Placement (FPP)in 1973. The FPP states that the order of object and verb relative to each other in a language determines other features of that language's typology, beyond the features that Greenberg identified. Lehmann also believed that the subject is not a primary element of a sentence, and that the traditional six-order typology of languages should be reduced to just two, VO and OV, based on head-directionality alone. Thus, for example, SVO and VSO would be considered the same type in Lehmann's classification system. Noam Chomsky'sPrinciples and Parameters theoryin the 1980s[48]introduced the idea that a small number of innate principles are common to every human language (e.g. phrases are oriented around heads), and that these general principles are subject to parametric variation (e.g. the order of heads and other phrasal components may differ). In this theory, the dependency relation between heads, complements, specifiers, and adjuncts is regulated byX-bar theory, proposed by Jackendoff[49]in the 1970s. The complement is sister to the head, and they can be ordered in one of two ways. A head-complement order is called ahead-initial structure, while a complement-head order is called ahead-final structure. These are special cases of Tesnière's centripetal and centrifugal structures, since here only complements are considered, whereas Tesnière considered all types of dependents. In the principles and parameters theory, a head-directionality parameter is proposed as a way ofclassifying languages. A language which has head-initial structures is considered to be ahead-initial language, and one which has head-final structures is considered to be ahead-final language. It is found, however, that very few, if any, languages are entirely one direction or the other. Linguists have come up with a number of theories to explain the inconsistencies, sometimes positing a more consistentunderlyingorder, with the phenomenon of phrasalmovementbeing used to explain the surface deviations. According to theAntisymmetrytheory proposed byRichard S. Kayne, there is no head-directionality parameter as such: it is claimed that at an underlying level, all languages are head-initial. In fact, it is argued that all languages have the underlying order Specifier-Head-Complement. Deviations from this order are accounted for by differentsyntactic movementsapplied by languages. Kayne argues that a theory that allows both directionalities would imply an absence ofasymmetriesbetween languages, whereas in fact languages fail to be symmetrical in many respects. Kayne argues using the concept of a probe-goal search (based on the ideas of theMinimalist program), whereby aheadacts as a probe and looks for a goal, namely itscomplement. 
Kayne proposes that the direction of the probe-goal search must share the direction of language parsing and production.[50] Parsing and production proceed in a left-to-right direction: the beginning of a sentence is heard or spoken first, and the end of the sentence is heard or spoken last. This implies (according to the theory) an ordering whereby the probe comes before the goal, i.e. the head precedes its complement.

Some linguists have rejected the conclusions of the Antisymmetry approach. Some have pointed out that in predominantly head-final languages such as Japanese and Basque, the change from an underlying head-initial form to a largely head-final surface form would involve complex and massive leftward movement, which is not in accordance with the ideal of grammatical simplicity.[46] Some take a "surface true" viewpoint: that the analysis of head direction must take place at the level of surface derivations, or even at the Phonetic Form (PF), i.e. the order in which sentences are pronounced in natural speech. This rejects the idea of an underlying ordering which is then subject to movement, as posited in Antisymmetry and in certain other approaches. It has been argued that a head parameter must reside only at PF, as it is unmaintainable in its original form as a structural parameter.[51]

Some linguists have provided evidence which may be taken to support Kayne's scheme, such as Lin,[52] who considered Standard Chinese sentences with the sentence-final particle le. Certain restrictions on movement from within verb phrases preceding such a particle are found (if various other assumptions from the literature are accepted) to be consistent with the idea that the verb phrase has moved from its underlying position after its head (the particle le here being taken as the head of an aspect phrase). However, Takita (2009) observes that similar restrictions do not apply in Japanese, in spite of its surface head-final character, concluding that if Lin's assumptions are correct, then Japanese must be considered a true head-final language, contrary to the main tenet of Antisymmetry.[53] More details about these arguments can be found in the Antisymmetry article.

Some scholars, such as Tesnière, argue that there are no absolute head-initial or head-final languages. According to this approach, it is true that some languages have more head-initial or head-final elements than other languages do, but almost any language contains both head-initial and head-final elements. Therefore, rather than being classifiable into fixed categories, languages can be arranged on a continuum with head-initial and head-final as the extremes, based on the frequency distribution of their dependency directions. This view was supported in a study by Haitao Liu (2010), who investigated 20 languages using a dependency treebank-based method.[54] For instance, Japanese is close to the head-final end of the continuum, while English and German, which have mixed head-initial and head-final dependencies, are plotted in relatively intermediate positions on the continuum.

Polinsky (2012) identified five head-directionality sub-types. She identified a strong correlation between the head-directionality type of a language and the ratio of verbs to nouns in the lexical inventory: languages with a scarcity of simple verbs tend to be rigidly head-final, as in the case of Japanese, whereas verb-rich languages tend to be head-initial.[55]
https://en.wikipedia.org/wiki/Head-directionality_parameter
Igor Aleksandrovič Mel'čuk, sometimes Melchuk (Russian: Игорь Александрович Мельчук; Ukrainian: Ігор Олександрович Мельчук; born 1932) is a Soviet and Canadian linguist, a retired professor at the Department of Linguistics and Translation, Université de Montréal. He graduated from Moscow State University's Philological department and worked from 1956 till 1976 for the Institute of Linguistics in Moscow. He is known as one of the developers of Meaning–text theory, with the seminal book published in 1974. He is also the author of Cours de morphologie générale in 5 volumes. After making statements in support of Soviet dissidents Andrey Sinyavsky and Yuli Daniel he was fired from the Institute, and subsequently emigrated from the Soviet Union in 1976. Since 1977 he has lived and worked in Canada. Melchuk is Jewish.[1]
https://en.wikipedia.org/wiki/Igor_Mel%27%C4%8Duk
A parse tree or parsing tree[1] (also known as a derivation tree or concrete syntax tree) is an ordered, rooted tree that represents the syntactic structure of a string according to some context-free grammar. The term parse tree itself is used primarily in computational linguistics; in theoretical syntax, the term syntax tree is more common. Concrete syntax trees reflect the syntax of the input language, making them distinct from the abstract syntax trees used in computer programming. Unlike Reed-Kellogg sentence diagrams used for teaching grammar, parse trees do not use distinct symbol shapes for different types of constituents.

Parse trees are usually constructed based on either the constituency relation of constituency grammars (phrase structure grammars) or the dependency relation of dependency grammars. Parse trees may be generated for sentences in natural languages (see natural language processing), as well as during processing of computer languages, such as programming languages.

A related concept is that of phrase marker or P-marker, as used in transformational generative grammar. A phrase marker is a linguistic expression marked as to its phrase structure. This may be presented in the form of a tree, or as a bracketed expression. Phrase markers are generated by applying phrase structure rules, and are themselves subject to further transformational rules.[2] A set of possible parse trees for a syntactically ambiguous sentence is called a "parse forest".[3]

A parse tree is made up of nodes and branches.[4] In the picture, the parse tree is the entire structure, starting from S and ending in each of the leaf nodes (John, ball, the, hit). In a parse tree, each node is either a root node, a branch node, or a leaf node. In the above example, S is a root node, NP and VP are branch nodes, while John, ball, the, and hit are all leaf nodes. Nodes can also be referred to as parent nodes and child nodes. A parent node is one which has at least one other node linked by a branch under it. In the example, S is a parent of both NP and VP. A child node is one which has at least one node directly above it to which it is linked by a branch of the tree. Again from our example, hit is a child node of V. A nonterminal function is a function (node) which is either a root or a branch in that tree, whereas a terminal function is a function (node) in a parse tree which is a leaf.

For binary trees (where each parent node has two immediate child nodes), the number of possible parse trees for a sentence with $n$ words is given by the Catalan number $C_n$.

The constituency-based parse trees of constituency grammars (phrase structure grammars) distinguish between terminal and non-terminal nodes. The interior nodes are labeled by non-terminal categories of the grammar, while the leaf nodes are labeled by terminal categories. The image below represents a constituency-based parse tree; it shows the syntactic structure of the English sentence John hit the ball. The abbreviations used in the tree are: S for sentence, NP for noun phrase, VP for verb phrase, N for noun, V for verb, and D for determiner. The parse tree is the entire structure, starting from S and ending in each of the leaf nodes (John, hit, the, ball).

Each node in the tree is either a root node, a branch node, or a leaf node.[5] A root node is a node that does not have any branches on top of it. Within a sentence, there is only ever one root node. A branch node is a parent node that connects to two or more child nodes. A leaf node, however, is a terminal node that does not dominate other nodes in the tree.
S is the root node, NP and VP are branch nodes, and John (N), hit (V), the (D), and ball (N) are all leaf nodes. The leaves are the lexical tokens of the sentence. A parent node is one that has at least one other node linked by a branch under it. In the example, S is a parent of both NP and VP. A child node is one that has at least one node directly above it to which it is linked by a branch of a tree. From the example, hit is a child node of V. The terms mother and daughter are also sometimes used for this relationship.

The dependency-based parse trees of dependency grammars[6] see all nodes as terminal, which means they do not acknowledge the distinction between terminal and non-terminal categories. They are simpler on average than constituency-based parse trees because they contain fewer nodes. The dependency-based parse tree for the example sentence above is as follows:

This parse tree lacks the phrasal categories (S, VP, and NP) seen in the constituency-based counterpart above. Like the constituency-based tree, however, the dependency-based tree acknowledges constituent structure: any complete sub-tree of the tree is a constituent. Thus this dependency-based parse tree acknowledges the subject noun John and the object noun phrase the ball as constituents, just like the constituency-based parse tree does. The constituency vs. dependency distinction is far-reaching. Whether the additional syntactic structure associated with constituency-based parse trees is necessary or beneficial is a matter of debate.

Phrase markers, or P-markers, were introduced in early transformational generative grammar, as developed by Noam Chomsky and others. A phrase marker representing the deep structure of a sentence is generated by applying phrase structure rules. This structure may then undergo further transformations. Phrase markers may be presented in the form of trees (as in the above section on constituency-based parse trees), but are often given instead in the form of "bracketed expressions", which occupy less space. For example, a bracketed expression corresponding to the constituency-based tree given above may be something like:

[S [NP [N John]] [VP [V hit] [NP [D the] [N ball]]]]

As with trees, the precise construction of such expressions and the amount of detail shown can depend on the theory being applied and on the points that the author wishes to illustrate.
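A small illustrative sketch (not from the article): representing the constituency tree for "John hit the ball" as nested Python tuples and generating the bracketed expression shown above. The tuple encoding is an assumption chosen for the example.

    # A constituency tree as nested tuples: (label, child1, child2, ...);
    # a leaf is a bare string. The tree shape follows the example above.
    tree = ("S",
            ("NP", ("N", "John")),
            ("VP", ("V", "hit"),
                   ("NP", ("D", "the"), ("N", "ball"))))

    def bracketed(node):
        """Render a tree as a bracketed expression, e.g. [S [NP ...] ...]."""
        if isinstance(node, str):          # leaf: a lexical token
            return node
        label, *children = node
        return "[" + label + " " + " ".join(bracketed(c) for c in children) + "]"

    print(bracketed(tree))
    # [S [NP [N John]] [VP [V hit] [NP [D the] [N ball]]]]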
https://en.wikipedia.org/wiki/Parse_tree
Michael K. Brame (January 27, 1944[1] – August 16, 2010[2]) was an American linguist. He served as a professor at the University of Washington and was the founding editor of the peer-reviewed research journal Linguistic Analysis.[3] Brame's work focused on the development of recursive categorical syntax, also referred to as algebraic syntax, which integrated principles from algebra and category theory to analyze sentence structure and linguistic relationships. His framework challenged conventional transformational grammar by advocating for a lexicon-centered approach and emphasizing the connections between words and phrases. Additionally, Brame collaborated with his wife on research investigating the identity of the author behind the name "William Shakespeare", resulting in several publications.[1]

Michael Brame was born on January 27, 1944, in San Antonio, Texas.[1]

Brame started his study of linguistics at the University of Texas at Austin, receiving his BA in 1966.[1] That summer he studied Egyptian Arabic at the American University of Cairo.[1] That fall, Brame began a PhD program at the Massachusetts Institute of Technology, studying under Morris Halle and Noam Chomsky, who was his adviser.[2] He received his PhD in 1970[1] or 1971.[2] His dissertation was titled Arabic Phonology: Implications for Phonological Theory and Historical Semitic.[4]

Brame was a Fulbright scholar (Netherlands, 1973–1974).[5]

Recursive Categorical Syntax (RCS), also known as algebraic syntax, is a linguistic framework that integrates concepts from algebra and category theory to model sentence structure and linguistic relationships. It is a type of dependency grammar, and is related to link grammars. It views words and phrases as mathematical entities, employing algebraic operations to depict their combinations within sentences. Brame's view that "transformations simply do not exist"[6] challenges transformational-generative grammar, advocating for a lexicon-centered perspective. By formalizing word connections, algebraic syntax aims to better understand syntax and simplify traditional theories of grammar, stressing the recursive nature of language and the hierarchical arrangement of linguistic elements, as reflected in Brame's assertions that "the lexicon must be elaborated"[6] and "deep structure falls along with the classical transformations once the lexicon is taken seriously."[6] This approach is intended to provide a comprehensive and mathematical grasp of sentence formation and linguistic structure. As Brame emphasized, the approach relies on a non-associative groupoid structure with inverses to represent the interactions of lexical items (words and phrases), or lexes for short. A lex is a lexical entry containing a string representation of a word or idiomatic phrase together with a notation specifying which other classes of word or phrase can bond with the string.[7][6]

In 2002, Brame co-authored with his wife Galina Popova a book titled Shakespeare's Fingerprints.[8][1][9] Over the next two years, they published three more books on the topic. Brame was married to Galina Popova.[1]
https://en.wikipedia.org/wiki/Recursive_categorical_syntax
The Dice-Sørensen coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Lee Raymond Dice[1] and Thorvald Sørensen,[2] who published in 1945 and 1948 respectively.

The index is known by several other names, especially Sørensen–Dice index,[3] Sørensen index and Dice's coefficient. Other variations include the "similarity coefficient" or "index", such as Dice similarity coefficient (DSC). Common alternate spellings for Sørensen are Sorenson, Soerenson and Sörenson, and all three can also be seen with the –sen ending (the Danish letter ø is phonetically equivalent to the German/Swedish ö, which can be written as oe in ASCII).

Sørensen's original formula was intended to be applied to discrete data. Given two sets, X and Y, it is defined as

$$DSC = \frac{2|X \cap Y|}{|X| + |Y|},$$

where $|X|$ and $|Y|$ are the cardinalities of the two sets (i.e. the number of elements in each set). The Sørensen index equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. Equivalently, the index is the size of the intersection as a fraction of the average size of the two sets.

When applied to Boolean data, using the definition of true positive (TP), false positive (FP), and false negative (FN), it can be written as

$$DSC = \frac{2TP}{2TP + FP + FN}.$$

It is different from the Jaccard index, which only counts true positives once in both the numerator and denominator. DSC is the quotient of similarity and ranges between 0 and 1.[9] It can be viewed as a similarity measure over sets.

Similarly to the Jaccard index, the set operations can be expressed in terms of vector operations over binary vectors $\mathbf{a}$ and $\mathbf{b}$:

$$s_v = \frac{2\,|\mathbf{a} \cdot \mathbf{b}|}{|\mathbf{a}|^2 + |\mathbf{b}|^2},$$

which gives the same outcome over binary vectors and also gives a more general similarity metric over vectors in general terms.

For sets X and Y of keywords used in information retrieval, the coefficient may be defined as twice the shared information (intersection) over the sum of cardinalities:[10]

$$DSC(X, Y) = \frac{2|X \cap Y|}{|X| + |Y|}.$$

When taken as a string similarity measure, the coefficient may be calculated for two strings, x and y, using bigrams as follows:[11]

$$s = \frac{2 n_t}{n_x + n_y},$$

where $n_t$ is the number of character bigrams found in both strings, $n_x$ is the number of bigrams in string x and $n_y$ is the number of bigrams in string y. For example, to calculate the similarity between "night" and "nacht", we would find the set of bigrams in each word, {ni, ig, gh, ht} and {na, ac, ch, ht}. Each set has four elements, and the intersection of these two sets has only one element: ht. Inserting these numbers into the formula, we calculate s = (2 · 1) / (4 + 4) = 0.25.

For a discrete (binary) ground truth $A$ and continuous measures $B$ in the interval [0, 1], the following formula can be used:[12]

$$cDC = \frac{2|A \cap B|}{c\,|A| + |B|},$$

where $|A \cap B| = \sum_i a_i b_i$ and $|B| = \sum_i b_i$. The constant c can be computed as follows:

$$c = \frac{\sum_i a_i b_i}{\sum_i a_i \operatorname{sign}(b_i)}.$$

If $\sum_i a_i \operatorname{sign}(b_i) = 0$, which means there is no overlap between A and B, c is set to 1 arbitrarily.

This coefficient is not very different in form from the Jaccard index. In fact, both are equivalent in the sense that given a value for the Sørensen–Dice coefficient $S$, one can calculate the respective Jaccard index value $J$ and vice versa, using the equations $J = S/(2 - S)$ and $S = 2J/(1 + J)$.
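A minimal sketch (not from the source) of the set form and the bigram-based string form, reproducing the s = 0.25 result above; the bigrams are collected as a set of distinct character pairs, which matches the worked example:

    # Dice-Sørensen coefficient over sets, and the bigram string variant.
    def dice(x: set, y: set) -> float:
        return 2 * len(x & y) / (len(x) + len(y))

    def bigrams(s: str) -> set:
        # Distinct character bigrams of a string, e.g. "night" -> {ni, ig, gh, ht}.
        return {s[i:i + 2] for i in range(len(s) - 1)}

    print(dice({"a", "b"}, {"b", "c"}))              # 0.5
    print(dice(bigrams("night"), bigrams("nacht")))  # 0.25, as in the example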
Since the Sørensen–Dice coefficient does not satisfy the triangle inequality, it can be considered a semimetric version of the Jaccard index.[4]

The function ranges between zero and one, like the Jaccard index. Unlike the Jaccard index, the corresponding difference function $d = 1 - DSC$ is not a proper distance metric, as it does not satisfy the triangle inequality.[4] The simplest counterexample is given by the three sets $X = \{a\}$, $Y = \{b\}$ and $Z = X \cup Y = \{a, b\}$, for which $d(X, Y) = 1$ and $d(X, Z) = d(Y, Z) = 1/3$. To satisfy the triangle inequality, the sum of any two sides must be greater than or equal to the remaining side. However, $d(X, Z) + d(Y, Z) = 2/3 < 1 = d(X, Y)$; a numerical check appears in the sketch at the end of this section.

The Sørensen–Dice coefficient is useful for ecological community data (e.g. Looman & Campbell, 1960[13]). Justification for its use is primarily empirical rather than theoretical (although it can be justified theoretically as the intersection of two fuzzy sets[14]). As compared to Euclidean distance, the Sørensen distance retains sensitivity in more heterogeneous data sets and gives less weight to outliers.[15] Recently the Dice score (and its variations, e.g. logDice, which takes a logarithm of it) has become popular in computer lexicography for measuring the lexical association score of two given words.[16] logDice is also used as part of the Mash distance for genome and metagenome distance estimation.[17] Finally, Dice is used in image segmentation, in particular for comparing algorithm output against reference masks in medical applications.[8]

The expression is easily extended to abundance instead of presence/absence of species. This quantitative version is known by several names.
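As flagged above, a quick numerical check (not from the source) of the triangle-inequality counterexample, with d taken as one minus the Dice coefficient:

    # Verifying the semimetric counterexample: d(X,Z) + d(Y,Z) < d(X,Y),
    # where d is 1 minus the Dice-Sørensen coefficient.
    def dice_distance(x: set, y: set) -> float:
        return 1 - 2 * len(x & y) / (len(x) + len(y))

    X, Y = {"a"}, {"b"}
    Z = X | Y
    print(dice_distance(X, Y))                       # 1.0
    print(dice_distance(X, Z), dice_distance(Y, Z))  # 0.333..., 0.333...
    # 1/3 + 1/3 = 2/3 < 1, so the triangle inequality fails.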
https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient
In information theory, the Hamming distance between two strings or vectors of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of substitutions required to change one string into the other, or equivalently, the minimum number of errors that could have transformed one string into the other. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences. It is named after the American mathematician Richard Hamming. A major application is in coding theory, more specifically to block codes, in which the equal-length strings are vectors over a finite field.

The Hamming distance between two equal-length strings of symbols is the number of positions at which the corresponding symbols are different.[1] The symbols may be letters, bits, or decimal digits, among other possibilities. For example, the Hamming distance between "karolin" and "kathrin" is 3, between "karolin" and "kerstin" is 3, between "kathrin" and "kerstin" is 4, between 0000 and 1111 is 4, and between 2173896 and 2233796 is 3.

For a fixed length n, the Hamming distance is a metric on the set of the words of length n (also known as a Hamming space): it fulfills the conditions of non-negativity and symmetry, the Hamming distance of two words is 0 if and only if the two words are identical, and it satisfies the triangle inequality as well.[2] Indeed, if we fix three words a, b and c, then whenever there is a difference between the ith letter of a and the ith letter of c, there must be a difference between the ith letter of a and the ith letter of b, or between the ith letter of b and the ith letter of c. Hence the Hamming distance between a and c is not larger than the sum of the Hamming distances between a and b and between b and c. The Hamming distance between two words a and b can also be seen as the Hamming weight of a − b for an appropriate choice of the − operator, much as the difference between two integers can be seen as a distance from zero on the number line.

For binary strings a and b the Hamming distance is equal to the number of ones (population count) in a XOR b.[3] The metric space of length-n binary strings, with the Hamming distance, is known as the Hamming cube; it is equivalent as a metric space to the set of distances between vertices in a hypercube graph. One can also view a binary string of length n as a vector in $\mathbb{R}^n$ by treating each symbol in the string as a real coordinate; with this embedding, the strings form the vertices of an n-dimensional hypercube, and the Hamming distance of the strings is equivalent to the Manhattan distance between the vertices.

The minimum Hamming distance or minimum distance (usually denoted by $d_{\min}$) is used to define some essential notions in coding theory, such as error detecting and error correcting codes. In particular, a code C is said to be k error detecting if, and only if, the minimum Hamming distance between any two of its codewords is at least k + 1.[2] For example, consider a code consisting of two codewords, "000" and "111". The Hamming distance between these two words is 3, and therefore the code is k = 2 error detecting. This means that if one bit is flipped or two bits are flipped, the error can be detected. If three bits are flipped, then "000" becomes "111" and the error cannot be detected.

A code C is said to be k-error correcting if, for every word w in the underlying Hamming space H, there exists at most one codeword c (from C) such that the Hamming distance between w and c is at most k. In other words, a code is k-errors correcting if the minimum Hamming distance between any two of its codewords is at least 2k + 1.
This is also understood geometrically as the closed balls of radius k centered on distinct codewords being disjoint.[2] These balls are also called Hamming spheres in this context.[4]

For example, consider the same 3-bit code consisting of the two codewords "000" and "111". The Hamming space consists of 8 words: 000, 001, 010, 011, 100, 101, 110 and 111. The codeword "000" and the single-bit-error words "001", "010", "100" are all within Hamming distance 1 of "000". Likewise, codeword "111" and its single-bit-error words "110", "101" and "011" are all within Hamming distance 1 of the original "111". In this code, a single bit error is always within Hamming distance 1 of the original codeword, so the code is 1-error correcting, that is, k = 1. Since the Hamming distance between "000" and "111" is 3, and those comprise the entire set of codewords in the code, the minimum Hamming distance is 3, which satisfies 2k + 1 = 3. Thus a code with minimum Hamming distance d between its codewords can detect at most d − 1 errors and can correct ⌊(d − 1)/2⌋ errors.[2] The latter number is also called the packing radius or the error-correcting capability of the code.[4]

The Hamming distance is named after Richard Hamming, who introduced the concept in his fundamental paper on Hamming codes, Error detecting and error correcting codes, in 1950.[5] Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography.[6] It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance.[7] For q-ary strings over an alphabet of size q ≥ 2 the Hamming distance is applied in the case of the q-ary symmetric channel, while the Lee distance is used for phase-shift keying or, more generally, channels susceptible to synchronization errors, because the Lee distance accounts for errors of ±1.[8] If $q = 2$ or $q = 3$ both distances coincide, because any pair of elements from $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Z}/3\mathbb{Z}$ differ by 1, but the distances are different for larger $q$. The Hamming distance is also used in systematics as a measure of genetic distance.[9]

However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.

The Hamming distance between two strings can be computed in Python 3 either with a simple loop or with a shorter expression; both are sketched below. The function hamming_distance() computes the Hamming distance between two strings (or other iterable objects) of equal length by creating a sequence of Boolean values indicating mismatches and matches between corresponding positions in the two inputs, then summing the sequence, with True and False interpreted as one and zero, respectively; the zip() function merges two equal-length collections in pairs. A C function, also sketched below, computes the Hamming distance of two integers (considered as binary values, that is, as sequences of bits). The running time of this procedure is proportional to the Hamming distance rather than to the number of bits in the inputs. It computes the bitwise exclusive or of the two inputs, and then finds the Hamming weight of the result (the number of nonzero bits) using an algorithm of Wegner (1960) that repeatedly finds and clears the lowest-order nonzero bit.
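The sketches below are minimal reconstructions consistent with the descriptions above (the straightforward Python 3 loop, the shorter zip()-based expression, and Wegner's bit-clearing loop in C, the two languages the text itself names); they are not the article's verbatim code.

    # Python 3: Hamming distance between two equal-length strings.
    def hamming_distance(string1: str, string2: str) -> int:
        """Count positions at which corresponding symbols differ."""
        if len(string1) != len(string2):
            raise ValueError("strings must be of equal length")
        dist_counter = 0
        for ch1, ch2 in zip(string1, string2):
            if ch1 != ch2:
                dist_counter += 1
        return dist_counter

    # Shorter expression: the True/False mismatches are summed as 1/0.
    def hamming_distance_short(s1, s2) -> int:
        return sum(c1 != c2 for c1, c2 in zip(s1, s2))

    /* C: Hamming distance of two integers, by Wegner's method.
       XOR the inputs, then repeatedly clear the lowest set bit;
       the loop runs once per differing bit, so the running time is
       proportional to the distance itself. */
    int hamming_distance(unsigned x, unsigned y)
    {
        int dist = 0;
        for (unsigned val = x ^ y; val != 0; val &= val - 1)
            dist++;
        return dist;
    }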
Some compilers support the __builtin_popcount function, which can calculate this using specialized processor hardware where available. A faster alternative is to use the population count (popcount) assembly instruction. Certain compilers such as GCC and Clang make it available via an intrinsic function, as sketched below.
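A sketch of the intrinsic-based version (GCC and Clang do provide __builtin_popcount; the surrounding function is illustrative):

    /* Hamming distance via the compiler's population-count intrinsic,
       which maps to a hardware popcount instruction where available. */
    int hamming_distance(unsigned x, unsigned y)
    {
        return __builtin_popcount(x ^ y);
    }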
https://en.wikipedia.org/wiki/Hamming_distance
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the demand curve.

Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation).

Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, correlation is synonymous with dependence. However, when used in a technical sense, correlation refers to any of several specific types of mathematical relationship in which the conditional expectation of one variable given the other is not constant as the conditioning variable changes; broadly, correlation in this specific sense is used when $E(Y \mid X = x)$ is related to $x$ in some manner (such as linearly, monotonically, or perhaps according to some particular functional form such as logarithmic). Essentially, correlation is the measure of how two or more variables are related to one another.

There are several correlation coefficients, often denoted $\rho$ or $r$, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as Spearman's rank correlation coefficient – have been developed to be more robust than Pearson's and to detect less structured relationships between variables.[1][2][3] Mutual information can also be applied to measure dependence between two variables.

The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained by taking the ratio of the covariance of the two variables in question, normalized to the square root of their variances. Mathematically, one simply divides the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.[4]

A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables by essentially laying out the expected values; the resulting Pearson's correlation coefficient indicates how far away the actual dataset is from the expected values.
Depending on the sign of our Pearson's correlation coefficient, we can end up with either a negative or positive correlation if there is any sort of relationship between the variables of our data set.

The population correlation coefficient $\rho_{X,Y}$ between two random variables $X$ and $Y$ with expected values $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$ is defined as:

$$\rho_{X,Y} = \operatorname{corr}(X,Y) = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{\operatorname{E}[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y}, \quad \text{if } \sigma_X \sigma_Y > 0,$$

where $\operatorname{E}$ is the expected value operator, $\operatorname{cov}$ means covariance, and $\operatorname{corr}$ is a widely used alternative notation for the correlation coefficient. The Pearson correlation is defined only if both standard deviations are finite and positive. An alternative formula purely in terms of moments is:

$$\rho_{X,Y} = \frac{\operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y)}{\sqrt{\operatorname{E}(X^2) - \operatorname{E}(X)^2} \cdot \sqrt{\operatorname{E}(Y^2) - \operatorname{E}(Y)^2}}.$$

It is a corollary of the Cauchy–Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between −1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect inverse (decreasing) linear relationship (anti-correlation),[5] and some value in the open interval $(-1, 1)$ in all other cases, indicating the degree of linear dependence between the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.

If the variables are independent, Pearson's correlation coefficient is 0. However, because the correlation coefficient detects only linear dependencies between two variables, the converse is not necessarily true: a correlation coefficient of 0 does not imply that the variables are independent.

$$X, Y \text{ independent} \quad \Rightarrow \quad \rho_{X,Y} = 0 \quad (X, Y \text{ uncorrelated})$$
$$\rho_{X,Y} = 0 \quad (X, Y \text{ uncorrelated}) \quad \nRightarrow \quad X, Y \text{ independent}$$

For example, suppose the random variable $X$ is symmetrically distributed about zero, and $Y = X^2$. Then $Y$ is completely determined by $X$, so that $X$ and $Y$ are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when $X$ and $Y$ are jointly normal, uncorrelatedness is equivalent to independence. Even though uncorrelated data does not necessarily imply independence, one can check whether random variables are independent by checking whether their mutual information is 0.
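A minimal sketch (not from the source) of the Y = X² example just given, using a grid of values symmetric about zero:

    # Dependent but uncorrelated: Y = X**2 is fully determined by X,
    # yet the Pearson correlation is zero because X is symmetric about 0.
    xs = [i / 1000 for i in range(-1000, 1001)]
    ys = [x * x for x in xs]

    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    print(cov / (sx * sy))   # 0.0 up to floating-point rounding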
Given a series of $n$ measurements of the pair $(X_i, Y_i)$ indexed by $i = 1, \ldots, n$, the sample correlation coefficient can be used to estimate the population Pearson correlation $\rho_{X,Y}$ between $X$ and $Y$. The sample correlation coefficient is defined as

$$r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{(n-1)\, s_x s_y},$$

where $\bar{x}$ and $\bar{y}$ are the sample means of $X$ and $Y$, and $s_x$ and $s_y$ are the corrected sample standard deviations of $X$ and $Y$. An equivalent expression for $r_{xy}$ is

$$r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n\, s'_x s'_y},$$

where $s'_x$ and $s'_y$ are the uncorrected sample standard deviations of $X$ and $Y$.

If $x$ and $y$ are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range.[6] For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of $r_{xy}$, Pearson's product-moment coefficient.

Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ), measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient.[7][8]

To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers $(x, y)$: (0, 1), (10, 100), (101, 500), (102, 2000). As we go from each pair to the next pair, $x$ increases, and so does $y$. This relationship is perfect, in the sense that an increase in $x$ is always accompanied by an increase in $y$. This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example the Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way, if $y$ always decreases when $x$ increases, the rank correlation coefficients will be −1, while the Pearson product-moment correlation coefficient may or may not be close to −1, depending on how close the points are to a straight line.
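A minimal sketch (not from the source) contrasting Pearson's r with a rank correlation on the four pairs above; Spearman's rho is computed here as Pearson's r of the ranks, which is valid because the data contain no ties:

    # Pearson's r versus Spearman's rho on the four pairs given above.
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx ** 0.5 * vy ** 0.5)

    def ranks(vs):
        order = sorted(vs)
        return [order.index(v) + 1 for v in vs]   # no ties in this data

    x = [0, 10, 101, 102]
    y = [1, 100, 500, 2000]
    print(round(pearson(x, y), 4))      # 0.7544
    print(pearson(ranks(x), ranks(y)))  # 1.0: perfect rank correlation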
Although in the extreme cases of perfect rank correlation the two coefficients are both equal (being both +1 or both −1), this is not generally the case, and so values of the two coefficients cannot meaningfully be compared.[7] For example, for the three pairs (1, 1), (2, 3), (3, 2), Spearman's coefficient is 1/2, while Kendall's coefficient is 1/3.

The information given by a correlation coefficient is not enough to define the dependence structure between random variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).

For continuous variables, multiple alternative measures of dependence were introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables (see[9] and the references therein for an overview). They all share the important property that a value of zero implies independence. This led some authors[9][10] to recommend their routine usage, particularly of distance correlation.[11][12] Another alternative measure is the Randomized Dependence Coefficient (RDC).[13] The RDC is a computationally efficient, copula-based measure of dependence between multivariate random variables and is invariant with respect to non-linear scalings of random variables.

One important disadvantage of the alternative, more general measures is that, when used to test whether two variables are associated, they tend to have lower power compared to Pearson's correlation when the data follow a multivariate normal distribution.[9] This is an implication of the no free lunch theorem. To detect all kinds of relationships, these measures have to sacrifice power on other relationships, particularly for the important special case of a linear relationship with Gaussian marginals, for which Pearson's correlation is optimal. Another problem concerns interpretation. While Pearson's correlation can be interpreted for all values, the alternative measures can generally only be interpreted meaningfully at the extremes.[14]

For two binary variables, the odds ratio measures their dependence, and takes values in the non-negative numbers, possibly infinity: $[0, +\infty]$. Related statistics such as Yule's Y and Yule's Q normalize this to the correlation-like range $[-1, 1]$. The odds ratio is generalized by the logistic model to model cases where the dependent variables are discrete and there may be one or more independent variables.

The correlation ratio, entropy-based mutual information, total correlation, dual total correlation and polychoric correlation are all also capable of detecting more general dependencies, as is consideration of the copula between them, while the coefficient of determination generalizes the correlation coefficient to multiple regression.

The degree of dependence between variables X and Y does not depend on the scale on which the variables are expressed. That is, if we are analyzing the relationship between X and Y, most correlation measures are unaffected by transforming X to a + bX and Y to c + dY, where a, b, c, and d are constants (b and d being positive). This is true of some correlation statistics as well as their population analogues.
Some correlation statistics, such as the rank correlation coefficient, are also invariant to monotone transformations of the marginal distributions of X and/or Y.

Most correlation measures are sensitive to the manner in which X and Y are sampled. Dependencies tend to be stronger if viewed over a wider range of values. Thus, if we consider the correlation coefficient between the heights of fathers and their sons over all adult males, and compare it to the same correlation coefficient calculated when the fathers are selected to be between 165 cm and 170 cm in height, the correlation will be weaker in the latter case. Several techniques have been developed that attempt to correct for range restriction in one or both variables, and are commonly used in meta-analysis; the most common are Thorndike's case II and case III equations.[15]

Various correlation measures in use may be undefined for certain joint distributions of X and Y. For example, the Pearson correlation coefficient is defined in terms of moments, and hence will be undefined if the moments are undefined. Measures of dependence based on quantiles are always defined. Sample-based statistics intended to estimate population measures of dependence may or may not have desirable statistical properties, such as being unbiased or asymptotically consistent, based on the spatial structure of the population from which the data were sampled.

Sensitivity to the data distribution can be used to an advantage. For example, scaled correlation is designed to use the sensitivity to the range in order to pick out correlations between fast components of time series.[16] By reducing the range of values in a controlled manner, the correlations on long time scales are filtered out and only the correlations on short time scales are revealed.

The correlation matrix of $n$ random variables $X_1, \ldots, X_n$ is the $n \times n$ matrix $C$ whose $(i, j)$ entry is

$$c_{ij} = \operatorname{corr}(X_i, X_j).$$

Thus the diagonal entries are all identically one. If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables $X_i / \sigma(X_i)$ for $i = 1, \ldots, n$. This applies both to the matrix of population correlations (in which case $\sigma$ is the population standard deviation) and to the matrix of sample correlations (in which case $\sigma$ denotes the sample standard deviation). Consequently, each is necessarily a positive-semidefinite matrix. Moreover, the correlation matrix is strictly positive definite if no variable can have all its values exactly generated as a linear function of the values of the others.

The correlation matrix is symmetric because the correlation between $X_i$ and $X_j$ is the same as the correlation between $X_j$ and $X_i$. A correlation matrix appears, for example, in one formula for the coefficient of multiple determination, a measure of goodness of fit in multiple regression.

In statistical modelling, correlation matrices representing the relationships between variables are categorized into different correlation structures, which are distinguished by factors such as the number of parameters required to estimate them.
For example, in an exchangeable correlation matrix, all pairs of variables are modeled as having the same correlation, so all non-diagonal elements of the matrix are equal to each other. On the other hand, an autoregressive matrix is often used when variables represent a time series, since correlations are likely to be greater when measurements are closer in time. Other examples include independent, unstructured, M-dependent, and Toeplitz structures.

In exploratory data analysis, the iconography of correlations consists in replacing a correlation matrix by a diagram where the "remarkable" correlations are represented by a solid line (positive correlation) or a dotted line (negative correlation).

In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to an "approximate" correlation matrix (e.g., a matrix which typically lacks positive semi-definiteness due to the way it has been computed). In 2002, Higham[17] formalized the notion of nearness using the Frobenius norm and provided a method for computing the nearest correlation matrix using Dykstra's projection algorithm, of which an implementation is available as an online Web API.[18] This sparked interest in the subject, with new theoretical (e.g., computing the nearest correlation matrix with factor structure[19]) and numerical (e.g., use of Newton's method for computing the nearest correlation matrix[20]) results obtained in the subsequent years.

Similarly for two stochastic processes $\{X_t\}_{t \in \mathcal{T}}$ and $\{Y_t\}_{t \in \mathcal{T}}$: if they are independent, then they are uncorrelated.[21]: p. 151 The opposite of this statement might not be true: even if two variables are uncorrelated, they might not be independent of each other.

The conventional dictum that "correlation does not imply causation" means that correlation cannot be used by itself to infer a causal relationship between the variables.[22] This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists (e.g., between two variables measuring the same construct). Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).

A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.

The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship. In particular, if the conditional mean of $Y$ given $X$, denoted $\operatorname{E}(Y \mid X)$, is not linear in $X$, the correlation coefficient will not fully determine the form of $\operatorname{E}(Y \mid X)$.
The adjacent image shows scatter plots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe.[23] The four $y$ variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line ($y = 3 + 0.5x$). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two variables correlated and following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another case where one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear. These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual examination of the data. The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a normal distribution, but this is only partially correct.[4] The Pearson correlation can be accurately calculated for any distribution that has a finite covariance matrix, which includes most distributions encountered in practice. However, the Pearson correlation coefficient (taken together with the sample mean and variance) is only a sufficient statistic if the data is drawn from a multivariate normal distribution. As a result, the Pearson correlation coefficient fully characterizes the relationship between variables if and only if the data are drawn from a multivariate normal distribution.

If a pair $(X, Y)$ of random variables follows a bivariate normal distribution, the conditional mean $\operatorname{E}(X \mid Y)$ is a linear function of $Y$, and the conditional mean $\operatorname{E}(Y \mid X)$ is a linear function of $X$. The correlation coefficient $\rho_{X,Y}$ between $X$ and $Y$, along with the marginal means and variances of $X$ and $Y$, determines this linear relationship:

$$\operatorname{E}(Y \mid X) = \operatorname{E}(Y) + \rho_{X,Y} \cdot \sigma_Y \, \frac{X - \operatorname{E}(X)}{\sigma_X},$$

where $\operatorname{E}(X)$ and $\operatorname{E}(Y)$ are the expected values of $X$ and $Y$, respectively, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of $X$ and $Y$, respectively.

The empirical correlation $r$ is an estimate of the correlation coefficient $\rho$. A distribution estimate for $\rho$, expressed in terms of the Gaussian hypergeometric function $F_{\mathsf{Hyp}}$, is both a Bayesian posterior density and an exact optimal confidence distribution density.[24][25]
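A minimal sketch (assuming NumPy; the parameter values are arbitrary) checking the bivariate-normal conditional-mean formula above by simulation:

    # E(Y | X near x0), estimated from samples, should match the formula
    # E(Y) + rho * sigma_Y * (x0 - E(X)) / sigma_X.
    import numpy as np

    rng = np.random.default_rng(1)
    rho, sx, sy, mx, my = 0.6, 2.0, 1.0, 5.0, -3.0
    cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
    x, y = rng.multivariate_normal([mx, my], cov, size=500_000).T

    x0 = 6.0
    band = np.abs(x - x0) < 0.05            # narrow band around x0
    print(y[band].mean())                   # empirical conditional mean
    print(my + rho * sy * (x0 - mx) / sx)   # -2.7 from the formula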
https://en.wikipedia.org/wiki/Correlation
The Jaccard index is a statistic used for gauging the similarity and diversity of sample sets. It is defined in general as the ratio of two sizes (areas or volumes): the intersection size divided by the union size, also called intersection over union (IoU). It was developed by Grove Karl Gilbert in 1884 as his ratio of verification (v)[1] and now is often called the critical success index in meteorology.[2] It was later developed independently by Paul Jaccard, who originally gave it the French name coefficient de communauté (coefficient of community),[3][4] and it was independently formulated again by T. Tanimoto.[5] Thus, it is also called the Tanimoto index or Tanimoto coefficient in some fields.

The Jaccard index measures similarity between finite non-empty sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}.$$

Note that by design, $0 \leq J(A, B) \leq 1$. If the sets $A$ and $B$ have no elements in common, their intersection is empty, so $|A \cap B| = 0$ and therefore $J(A, B) = 0$. The other extreme is that the two sets are equal. In that case $A \cap B = A \cup B = A = B$, so then $J(A, B) = 1$. The Jaccard index is widely used in computer science, ecology, genomics and other sciences where binary or binarized data are used. Both exact solutions and approximation methods are available for hypothesis testing with the Jaccard index.[6]

Jaccard similarity also applies to bags, i.e., multisets. This has a similar formula,[7] but the symbols used represent bag intersection and bag sum (not union). The maximum value is 1/2.

The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard index and is obtained by subtracting the Jaccard index from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union:

$$d_J(A, B) = 1 - J(A, B) = \frac{|A \cup B| - |A \cap B|}{|A \cup B|}.$$

An alternative interpretation of the Jaccard distance is as the ratio of the size of the symmetric difference $A \mathbin{\triangle} B = (A \cup B) - (A \cap B)$ to the union. Jaccard distance is commonly used to calculate an n × n matrix for clustering and multidimensional scaling of n sample sets. This distance is a metric on the collection of all finite sets.[8][9][10]

There is also a version of the Jaccard distance for measures, including probability measures. If $\mu$ is a measure on a measurable space $X$, then we define the Jaccard index by

$$J_\mu(A, B) = \frac{\mu(A \cap B)}{\mu(A \cup B)},$$

and the Jaccard distance by

$$d_\mu(A, B) = 1 - J_\mu(A, B) = \frac{\mu(A \mathbin{\triangle} B)}{\mu(A \cup B)}.$$

Care must be taken if $\mu(A \cup B) = 0$ or $\infty$, since these formulas are not well defined in those cases.

The MinHash min-wise independent permutations locality sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity index of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function.

Given two objects, A and B, each with n binary attributes, the Jaccard index is a useful measure of the overlap that A and B share with their attributes. Each attribute of A and B can either be 0 or 1.
The total number of each combination of attributes for both A and B is specified as follows: $M_{11}$ is the total number of attributes where A and B both have a value of 1; $M_{01}$ is the total number of attributes where the attribute of A is 0 and the attribute of B is 1; $M_{10}$ is the total number of attributes where the attribute of A is 1 and the attribute of B is 0; and $M_{00}$ is the total number of attributes where A and B both have a value of 0. Each attribute must fall into one of these four categories, meaning that

$$M_{11} + M_{01} + M_{10} + M_{00} = n.$$

The Jaccard similarity index, J, is given as

$$J = \frac{M_{11}}{M_{01} + M_{10} + M_{11}}.$$

The Jaccard distance, $d_J$, is given as

$$d_J = \frac{M_{01} + M_{10}}{M_{01} + M_{10} + M_{11}} = 1 - J.$$

Statistical inference can be made based on the Jaccard similarity index, and consequently related metrics.[6] Given two sample sets A and B with n attributes, a statistical test can be conducted to see if an overlap is statistically significant. The exact solution is available, although computation can be costly as n increases.[6] Estimation methods are available either by approximating a multinomial distribution or by bootstrapping.[6]

When used for binary attributes, the Jaccard index is very similar to the simple matching coefficient (SMC). The main difference is that the SMC has the term $M_{00}$ in its numerator and denominator, whereas the Jaccard index does not. Thus, the SMC counts both mutual presences (when an attribute is present in both sets) and mutual absences (when an attribute is absent in both sets) as matches and compares them to the total number of attributes in the universe, whereas the Jaccard index only counts mutual presence as matches and compares it to the number of attributes that have been chosen by at least one of the two sets.

In market basket analysis, for example, the baskets of two consumers whom we wish to compare might only contain a small fraction of all the available products in the store, so the SMC will usually return very high values of similarity even when the baskets bear very little resemblance, thus making the Jaccard index a more appropriate measure of similarity in that context. For example, consider a supermarket with 1000 products and two customers. The basket of the first customer contains salt and pepper and the basket of the second contains salt and sugar. In this scenario, the similarity between the two baskets as measured by the Jaccard index would be 1/3, but the similarity becomes 0.998 using the SMC; a numerical check appears in the sketch at the end of this section.

In other contexts, where 0 and 1 carry equivalent information (symmetry), the SMC is a better measure of similarity. For example, vectors of demographic variables stored in dummy variables, such as gender, would be better compared with the SMC than with the Jaccard index, since the impact of gender on similarity should be equal independently of whether male is defined as 0 and female as 1 or the other way around. However, when we have symmetric dummy variables, one could replicate the behaviour of the SMC by splitting the dummies into two binary attributes (in this case, male and female), thus transforming them into asymmetric attributes, allowing the use of the Jaccard index without introducing any bias. The SMC remains, however, more computationally efficient in the case of symmetric dummy variables, since it does not require adding extra dimensions.

If $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ and $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ are two vectors with all real $x_i, y_i \geq 0$, then their Jaccard similarity index (also known then as Ruzicka similarity) is defined as

$$J_{\mathcal{W}}(\mathbf{x}, \mathbf{y}) = \frac{\sum_i \min(x_i, y_i)}{\sum_i \max(x_i, y_i)},$$

and the Jaccard distance (also known then as Soergel distance) as

$$d_{J\mathcal{W}}(\mathbf{x}, \mathbf{y}) = 1 - J_{\mathcal{W}}(\mathbf{x}, \mathbf{y}).$$
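As flagged above, a minimal sketch (not from the source) checking the supermarket example, plus the weighted (Ruzicka) form just defined:

    # Binary-attribute Jaccard vs. simple matching coefficient (SMC),
    # for a store with 1000 products and baskets {salt, pepper} and
    # {salt, sugar}.
    def jaccard(m11, m01, m10):
        return m11 / (m01 + m10 + m11)

    def smc(m11, m00, m01, m10):
        return (m11 + m00) / (m11 + m00 + m01 + m10)

    n = 1000
    m11, m01, m10 = 1, 1, 1        # salt shared; sugar / pepper unshared
    m00 = n - 3                    # products in neither basket
    print(jaccard(m11, m01, m10))  # 0.333... (1/3)
    print(smc(m11, m00, m01, m10)) # 0.998

    # Weighted (Ruzicka) Jaccard for non-negative vectors.
    def weighted_jaccard(x, y):
        num = sum(min(a, b) for a, b in zip(x, y))
        den = sum(max(a, b) for a, b in zip(x, y))
        return num / den

    print(weighted_jaccard([1, 0, 2], [2, 0, 1]))   # 2/4 = 0.5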
Then the Jaccard distance is Then, for example, for two measurable sets A,B⊆X{\displaystyle A,B\subseteq X}, we have Jμ(A,B)=J(χA,χB),{\displaystyle J_{\mu }(A,B)=J(\chi _{A},\chi _{B}),} where χA{\displaystyle \chi _{A}} and χB{\displaystyle \chi _{B}} are the characteristic functions of the corresponding sets. The weighted Jaccard similarity described above generalizes the Jaccard index to positive vectors, where a set corresponds to the binary vector given by its indicator function, i.e. xi∈{0,1}{\displaystyle x_{i}\in \{0,1\}}. However, it does not generalize the Jaccard index to probability distributions, where a set corresponds to a uniform probability distribution, i.e. The weighted Jaccard similarity of the two uniform distributions is always less than the Jaccard index of the underlying sets if the sets differ in size: if |X|>|Y|{\displaystyle |X|>|Y|}, and xi=1X(i)/|X|,yi=1Y(i)/|Y|{\displaystyle x_{i}=\mathbf {1} _{X}(i)/|X|,y_{i}=\mathbf {1} _{Y}(i)/|Y|} then Instead, a generalization that is continuous between probability distributions and their corresponding support sets is which is called the "Probability" Jaccard.[11] It has the following bounds against the weighted Jaccard on probability vectors. Here the upper bound is the (weighted) Sørensen–Dice coefficient. The corresponding distance, 1−JP(x,y){\displaystyle 1-J_{\mathcal {P}}(x,y)}, is a metric over probability distributions, and a pseudo-metric over non-negative vectors. The Probability Jaccard index has a geometric interpretation as the area of an intersection of simplices. Every point on a unit k{\displaystyle k}-simplex corresponds to a probability distribution on k+1{\displaystyle k+1} elements, because the unit k{\displaystyle k}-simplex is the set of points in k+1{\displaystyle k+1} dimensions that sum to 1. To derive the Probability Jaccard index geometrically, represent a probability distribution as the unit simplex divided into sub-simplices according to the mass of each item. If you overlay two distributions represented in this way on top of each other, and intersect the simplices corresponding to each item, the area that remains is equal to the Probability Jaccard index of the distributions. Consider the problem of constructing random variables such that they collide with each other as much as possible. That is, if X∼x{\displaystyle X\sim x} and Y∼y{\displaystyle Y\sim y}, we would like to construct X{\displaystyle X} and Y{\displaystyle Y} to maximize Pr[X=Y]{\displaystyle \Pr[X=Y]}. If we look at just two distributions x,y{\displaystyle x,y} in isolation, the highest Pr[X=Y]{\displaystyle \Pr[X=Y]} we can achieve is given by 1−TV(x,y){\displaystyle 1-{\text{TV}}(x,y)} where TV{\displaystyle {\text{TV}}} is the total variation distance. However, suppose we are concerned not just with maximizing that particular pair, but with maximizing the collision probability of any arbitrary pair. One could construct an infinite number of random variables, one for each distribution x{\displaystyle x}, and seek to maximize Pr[X=Y]{\displaystyle \Pr[X=Y]} for all pairs x,y{\displaystyle x,y}. In the fairly strong sense described below, the Probability Jaccard index is an optimal way to align these random variables.
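The following sketch computes the Probability Jaccard using the explicit formula J_P(x, y) = Σ over i with x_i, y_i > 0 of 1 / Σ_j max(x_j/x_i, y_j/y_i); treat this exact form as one reading of reference [11] rather than a quotation from this article.

def probability_jaccard(x, y):
    # J_P(x, y): sum over shared support of 1 / sum_j max(x_j/x_i, y_j/y_i).
    total = 0.0
    for xi, yi in zip(x, y):
        if xi > 0 and yi > 0:
            denom = sum(max(xj / xi, yj / yi) for xj, yj in zip(x, y))
            total += 1.0 / denom
    return total

uniform3 = [1/3, 1/3, 1/3]
uniform2 = [1/2, 1/2, 0.0]
print(probability_jaccard(uniform3, uniform3))  # 1.0 for identical distributions
print(probability_jaccard(uniform3, uniform2))  # strictly between 0 and 1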
For any sampling method G{\displaystyle G} and discrete distributions x,y{\displaystyle x,y}, if Pr[G(x)=G(y)]>JP(x,y){\displaystyle \Pr[G(x)=G(y)]>J_{\mathcal {P}}(x,y)} then for some z{\displaystyle z} where JP(x,z)>JP(x,y){\displaystyle J_{\mathcal {P}}(x,z)>J_{\mathcal {P}}(x,y)} and JP(y,z)>JP(x,y){\displaystyle J_{\mathcal {P}}(y,z)>J_{\mathcal {P}}(x,y)}, either Pr[G(x)=G(z)]<JP(x,z){\displaystyle \Pr[G(x)=G(z)]<J_{\mathcal {P}}(x,z)} or Pr[G(y)=G(z)]<JP(y,z){\displaystyle \Pr[G(y)=G(z)]<J_{\mathcal {P}}(y,z)}.[11] That is, no sampling method can achieve more collisions than JP{\displaystyle J_{\mathcal {P}}} on one pair without achieving fewer collisions than JP{\displaystyle J_{\mathcal {P}}} on another pair, where the reduced pair is more similar under JP{\displaystyle J_{\mathcal {P}}} than the increased pair. This theorem is true for the Jaccard index of sets (if interpreted as uniform distributions) and for the Probability Jaccard, but not for the weighted Jaccard. (The theorem uses the word "sampling method" to describe a joint distribution over all distributions on a space, because it derives from the use of weighted minhashing algorithms that achieve this as their collision probability.) This theorem has a visual proof on three-element distributions using the simplex representation. Various forms of functions described as Tanimoto similarity and Tanimoto distance occur in the literature and on the Internet. Most of these are synonyms for Jaccard similarity and Jaccard distance, but some are mathematically different. Many sources[12] cite an IBM Technical Report[5] as the seminal reference. In "A Computer Program for Classifying Plants", published in October 1960,[13] a method of classification based on a similarity ratio, and a derived distance function, is given. It seems that this is the most authoritative source for the meaning of the terms "Tanimoto similarity" and "Tanimoto distance". The similarity ratio is equivalent to Jaccard similarity, but the distance function is not the same as Jaccard distance. In that paper, a "similarity ratio" is given over bitmaps, where each bit of a fixed-size array represents the presence or absence of a characteristic in the plant being modelled. The definition of the ratio is the number of common bits, divided by the number of bits set (i.e. nonzero) in either sample. Presented in mathematical terms, if samples X and Y are bitmaps, Xi{\displaystyle X_{i}} is the ith bit of X, and ∧,∨{\displaystyle \land ,\lor } are bitwise and, or operators respectively, then the similarity ratio Ts{\displaystyle T_{s}} is If each sample is modelled instead as a set of attributes, this value is equal to the Jaccard index of the two sets. Jaccard is not cited in the paper, and it seems likely that the authors were not aware of it. Tanimoto goes on to define a "distance" based on this ratio, defined for bitmaps with non-zero similarity: This coefficient is, deliberately, not a distance metric. It is chosen to allow the possibility of two specimens, which are quite different from each other, to both be similar to a third. It is easy to construct an example which violates the triangle inequality. Tanimoto distance is often referred to, erroneously, as a synonym for Jaccard distance 1−Ts{\displaystyle 1-T_{s}}. This function is a proper distance metric.
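A small sketch of Tanimoto's similarity ratio over bitmaps, together with the derived "distance". The −log₂ form used here is the one usually attributed to the 1960 report, so treat that detail as an assumption; note the distance is deliberately not a metric.

import math

def tanimoto_similarity(x: int, y: int) -> float:
    # Similarity ratio of two bitmaps packed into Python ints:
    # common bits divided by bits set in either sample.
    common = bin(x & y).count("1")
    either = bin(x | y).count("1")
    return common / either

def tanimoto_distance(x: int, y: int) -> float:
    # Tanimoto's "distance"; defined only for non-zero similarity,
    # and deliberately NOT a metric (the triangle inequality can fail).
    return -math.log2(tanimoto_similarity(x, y))

a, b = 0b101100, 0b100110
print(tanimoto_similarity(a, b))  # 2 common bits / 4 set bits = 0.5
print(tanimoto_distance(a, b))    # 1.0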
"Tanimoto Distance" is often stated as being a proper distance metric, probably because of its confusion with Jaccard distance.[clarification needed][citation needed] If Jaccard or Tanimoto similarity is expressed over a bit vector, then it can be written as where the same calculation is expressed in terms of vector scalar product and magnitude. This representation relies on the fact that, for a bit vector (where the value of each dimension is either 0 or 1) then and This is a potentially confusing representation, because the function as expressed over vectors is more general, unless its domain is explicitly restricted. Properties ofTs{\displaystyle T_{s}}do not necessarily extend tof{\displaystyle f}. In particular, the difference function1−f{\displaystyle 1-f}does not preservetriangle inequality, and is not therefore a proper distance metric, whereas1−Ts{\displaystyle 1-T_{s}}is. There is a real danger that the combination of "Tanimoto Distance" being defined using this formula, along with the statement "Tanimoto Distance is a proper distance metric" will lead to the false conclusion that the function1−f{\displaystyle 1-f}is in fact a distance metric over vectors ormultisetsin general, whereas its use in similarity search or clustering algorithms may fail to produce correct results. Lipkus[9]uses a definition of Tanimoto similarity which is equivalent tof{\displaystyle f}, and refers to Tanimoto distance as the function1−f{\displaystyle 1-f}. It is, however, made clear within the paper that the context is restricted by the use of a (positive) weighting vectorW{\displaystyle W}such that, for any vectorAbeing considered,Ai∈{0,Wi}.{\displaystyle A_{i}\in \{0,W_{i}\}.}Under these circumstances, the function is a proper distance metric, and so a set of vectors governed by such a weighting vector forms ametric spaceunder this function. Inconfusion matricesemployed forbinary classification, the Jaccard index can be framed in the following formula: where TP are the true positives, FP the false positives and FN the false negatives.[14]
https://en.wikipedia.org/wiki/Jaccard_index
SimRank is a general similarity measure based on a simple and intuitive graph-theoretic model. SimRank is applicable in any domain with object-to-object relationships, and it measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, SimRank is a measure that says "two objects are considered to be similar if they are referenced by similar objects." Although SimRank is widely adopted, it may output unreasonable similarity scores that are influenced by different factors; this can be remedied in several ways, such as introducing an evidence weight factor,[1] inserting additional terms that are neglected by SimRank,[2] or using PageRank-based alternatives.[3] Many applications require a measure of "similarity" between objects. One obvious example is the "find-similar-document" query, on traditional text corpora or the World-Wide Web. More generally, a similarity measure can be used to cluster objects, such as for collaborative filtering in a recommender system, in which "similar" users and items are grouped based on the users' preferences. Various aspects of objects can be used to determine similarity, usually depending on the domain and the appropriate definition of similarity for that domain. In a document corpus, matching text may be used, and for collaborative filtering, similar users may be identified by common preferences. SimRank is a general approach that exploits the object-to-object relationships found in many domains of interest. On the Web, for example, two pages are related if there are hyperlinks between them. A similar approach can be applied to scientific papers and their citations, or to any other document corpus with cross-reference information. In the case of recommender systems, a user's preference for an item constitutes a relationship between the user and the item. Such domains are naturally modeled as graphs, with nodes representing objects and edges representing relationships. The intuition behind the SimRank algorithm is that, in many domains, similar objects are referenced by similar objects. More precisely, objects a{\displaystyle a} and b{\displaystyle b} are considered to be similar if they are pointed to by objects c{\displaystyle c} and d{\displaystyle d}, respectively, and c{\displaystyle c} and d{\displaystyle d} are themselves similar. The base case is that objects are maximally similar to themselves.[4] It is important to note that SimRank is a general algorithm that determines only the similarity of structural context. SimRank applies to any domain where there are enough relevant relationships between objects to base at least some notion of similarity on them. Obviously, similarity of other, domain-specific aspects is important as well; these can, and should, be combined with relational structural-context similarity for an overall similarity measure. For example, for Web pages SimRank can be combined with traditional textual similarity; the same idea applies to scientific papers or other document corpora. For recommendation systems, there may be built-in known similarities between items (e.g., both computers, both clothing, etc.), as well as similarities between users (e.g., same gender, same spending level). Again, these similarities can be combined with the similarity scores that are computed based on preference patterns, in order to produce an overall similarity measure.
For a node v{\displaystyle v} in a directed graph, we denote by I(v){\displaystyle I(v)} and O(v){\displaystyle O(v)} the set of in-neighbors and out-neighbors of v{\displaystyle v}, respectively. Individual in-neighbors are denoted as Ii(v){\displaystyle I_{i}(v)}, for 1≤i≤|I(v)|{\displaystyle 1\leq i\leq \left|I(v)\right|}, and individual out-neighbors are denoted as Oi(v){\displaystyle O_{i}(v)}, for 1≤i≤|O(v)|{\displaystyle 1\leq i\leq \left|O(v)\right|}. Let us denote the similarity between objects a{\displaystyle a} and b{\displaystyle b} by s(a,b)∈[0,1]{\displaystyle s(a,b)\in [0,1]}. Following the earlier motivation, a recursive equation is written for s(a,b){\displaystyle s(a,b)}. If a=b{\displaystyle a=b} then s(a,b){\displaystyle s(a,b)} is defined to be 1{\displaystyle 1}. Otherwise, where C{\displaystyle C} is a constant between 0{\displaystyle 0} and 1{\displaystyle 1}. A slight technicality here is that either a{\displaystyle a} or b{\displaystyle b} may not have any in-neighbors. Since there is no way to infer any similarity between a{\displaystyle a} and b{\displaystyle b} in this case, similarity is set to s(a,b)=0{\displaystyle s(a,b)=0}, so the summation in the above equation is defined to be 0{\displaystyle 0} when I(a)=∅{\displaystyle I(a)=\emptyset } or I(b)=∅{\displaystyle I(b)=\emptyset }. Given an arbitrary constant C{\displaystyle C} between 0{\displaystyle 0} and 1{\displaystyle 1}, let S{\displaystyle \mathbf {S} } be the similarity matrix whose entry [S]a,b{\displaystyle [\mathbf {S} ]_{a,b}} denotes the similarity score s(a,b){\displaystyle s(a,b)}, and A{\displaystyle \mathbf {A} } be the column-normalized adjacency matrix whose entry [A]a,b=1|I(b)|{\displaystyle [\mathbf {A} ]_{a,b}={\tfrac {1}{|{\mathcal {I}}(b)|}}} if there is an edge from a{\displaystyle a} to b{\displaystyle b}, and 0 otherwise. Then, in matrix notation, SimRank can be formulated as where I{\displaystyle \mathbf {I} } is an identity matrix. A solution to the SimRank equations for a graph G{\displaystyle G} can be reached by iteration to a fixed point. Let n{\displaystyle n} be the number of nodes in G{\displaystyle G}. For each iteration k{\displaystyle k}, we can keep n2{\displaystyle n^{2}} entries sk(∗,∗){\displaystyle s_{k}(*,*)}, where sk(a,b){\displaystyle s_{k}(a,b)} gives the score between a{\displaystyle a} and b{\displaystyle b} on iteration k{\displaystyle k}. We successively compute sk+1(∗,∗){\displaystyle s_{k+1}(*,*)} based on sk(∗,∗){\displaystyle s_{k}(*,*)}. We start with s0(∗,∗){\displaystyle s_{0}(*,*)} where each s0(a,b){\displaystyle s_{0}(a,b)} is a lower bound on the actual SimRank score s(a,b){\displaystyle s(a,b)}: To compute sk+1(a,b){\displaystyle s_{k+1}(a,b)} from sk(∗,∗){\displaystyle s_{k}(*,*)}, we use the basic SimRank equation to get: for a≠b{\displaystyle a\neq b}, and sk+1(a,b)=1{\displaystyle s_{k+1}(a,b)=1} for a=b{\displaystyle a=b}. That is, on each iteration k+1{\displaystyle k+1}, we update the similarity of (a,b){\displaystyle (a,b)} using the similarity scores of the neighbours of (a,b){\displaystyle (a,b)} from the previous iteration k{\displaystyle k}, according to the basic SimRank equation. The values sk(∗,∗){\displaystyle s_{k}(*,*)} are nondecreasing as k{\displaystyle k} increases. It was shown in [4] that the values converge to limits satisfying the basic SimRank equation, the SimRank scores s(∗,∗){\displaystyle s(*,*)}, i.e., for all a,b∈V{\displaystyle a,b\in V}, limk→∞sk(a,b)=s(a,b){\displaystyle \lim _{k\to \infty }s_{k}(a,b)=s(a,b)}.
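A minimal sketch of the fixed-point iteration just described, for a small directed graph given as a mapping from each node to its list of in-neighbors (the names are illustrative, not from any library):

def simrank(in_neighbors, C=0.8, iterations=5):
    nodes = list(in_neighbors)
    s = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iterations):
        s_next = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    s_next[(a, b)] = 1.0
                    continue
                Ia, Ib = in_neighbors[a], in_neighbors[b]
                if not Ia or not Ib:
                    s_next[(a, b)] = 0.0   # no in-neighbors: nothing to infer
                    continue
                total = sum(s[(i, j)] for i in Ia for j in Ib)
                s_next[(a, b)] = C * total / (len(Ia) * len(Ib))
        s = s_next
    return s

# Univ points to both professors; each professor points to one student.
g = {"Univ": [], "ProfA": ["Univ"], "ProfB": ["Univ"],
     "StudA": ["ProfA"], "StudB": ["ProfB"]}
scores = simrank(g)
print(scores[("ProfA", "ProfB")])  # 0.8  (= C, via the shared in-neighbor Univ)
print(scores[("StudA", "StudB")])  # 0.64 (= C * s(ProfA, ProfB))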
The original SimRank proposal suggested choosing the decay factor C=0.8{\displaystyle C=0.8} and a fixed number K=5{\displaystyle K=5} of iterations to perform. However, later research[5] showed that these values for C{\displaystyle C} and K{\displaystyle K} generally imply relatively low accuracy of iteratively computed SimRank scores. To guarantee more accurate computation results, the latter paper suggests either using a smaller decay factor (in particular, C=0.6{\displaystyle C=0.6}) or taking more iterations. CoSimRank is a variant of SimRank with the advantage of also having a local formulation, i.e. CoSimRank can be computed for a single node pair.[6] Let S{\displaystyle \mathbf {S} } be the similarity matrix whose entry [S]a,b{\displaystyle [\mathbf {S} ]_{a,b}} denotes the similarity score s(a,b){\displaystyle s(a,b)}, and A{\displaystyle \mathbf {A} } be the column-normalized adjacency matrix. Then, in matrix notation, CoSimRank can be formulated as: where I{\displaystyle \mathbf {I} } is an identity matrix. To compute the similarity score of only a single node pair, let p(0)(i)=ei{\displaystyle p^{(0)}(i)=e_{i}}, with ei{\displaystyle e_{i}} being a vector of the standard basis, i.e., the i{\displaystyle i}-th entry is 1 and all other entries are 0. Then CoSimRank can be computed in two steps: Step one can be seen as a simplified version of personalized PageRank. Step two sums up the vector similarity of each iteration. Both the matrix and the local representations compute the same similarity score. CoSimRank can also be used to compute the similarity of sets of nodes, by modifying p(0)(i){\displaystyle p^{(0)}(i)}. Lizorkin et al.[5] proposed three optimization techniques for speeding up the computation of SimRank: In particular, the second observation, memoization of partial sums, plays a paramount role in greatly speeding up the computation of SimRank from O(Kd2n2){\displaystyle {\mathcal {O}}(Kd^{2}n^{2})} to O(Kdn2){\displaystyle {\mathcal {O}}(Kdn^{2})}, where K{\displaystyle K} is the number of iterations, d{\displaystyle d} is the average degree of the graph, and n{\displaystyle n} is the number of nodes in the graph. The central idea of partial sums memoization consists of two steps: First, the partial sums over I(a){\displaystyle I(a)} are memoized as and then sk+1(a,b){\displaystyle s_{k+1}(a,b)} is iteratively computed from PartialI(a)sk(j){\displaystyle {\text{Partial}}_{I(a)}^{s_{k}}(j)} as Consequently, the results of PartialI(a)sk(j){\displaystyle {\text{Partial}}_{I(a)}^{s_{k}}(j)}, ∀j∈I(b){\displaystyle \forall j\in I(b)}, can be reused later when we compute the similarities sk+1(a,∗){\displaystyle s_{k+1}(a,*)} for a given vertex a{\displaystyle a} as the first argument.
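Building on the earlier sketch, the partial-sums idea can be expressed as follows (illustrative code, not Lizorkin et al.'s implementation): each Partial_{I(a)}(j) is computed once and then reused for every b whose in-neighborhood contains j.

def simrank_step_memoized(in_neighbors, s, C=0.8):
    nodes = list(in_neighbors)
    # partial[a][j] = sum of s(i, j) over i in I(a), memoized once per (a, j).
    partial = {a: {j: sum(s[(i, j)] for i in in_neighbors[a]) for j in nodes}
               for a in nodes}
    s_next = {}
    for a in nodes:
        for b in nodes:
            if a == b:
                s_next[(a, b)] = 1.0
            elif not in_neighbors[a] or not in_neighbors[b]:
                s_next[(a, b)] = 0.0
            else:
                total = sum(partial[a][j] for j in in_neighbors[b])
                s_next[(a, b)] = C * total / (len(in_neighbors[a]) * len(in_neighbors[b]))
    return s_next

The inner sum now costs O(d) per node pair instead of O(d²), which is where the improvement from O(Kd²n²) to O(Kdn²) comes from.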
https://en.wikipedia.org/wiki/SimRank
In acoustics and fluid dynamics, an acoustic metric (also known as a sonic metric) is a metric that describes the signal-carrying properties of a given particulate medium. (Generally, in mathematical physics, a metric describes the arrangement of relative distances within a surface or volume, usually measured by signals passing through the region, essentially describing the intrinsic geometry of the region.) For simplicity, we will assume that the underlying background geometry is Euclidean, and that this space is filled with an isotropic inviscid fluid at zero temperature (e.g. a superfluid). This fluid is described by a density field ρ and a velocity field v→{\displaystyle {\vec {v}}}. The speed of sound at any given point depends upon the compressibility, which in turn depends upon the density at that point: the denser the fluid already is, the more work is required to compress it further. This is specified by the "speed of sound field" c. Now, the combination of isotropy and Galilean covariance tells us that the permissible velocities u→{\displaystyle {\vec {u}}} of the sound waves at a given point x have to satisfy (u→−v→(x))2=c(x)2{\displaystyle ({\vec {u}}-{\vec {v}}(x))^{2}=c(x)^{2}} This restriction can also arise if we imagine that sound is like "light" moving through a spacetime described by an effective metric tensor called the acoustic metric. The acoustic metric is g=g00dt⊗dt+2g0idxi⊗dt+gijdxi⊗dxj{\displaystyle \mathbf {g} =g_{00}dt\otimes dt+2g_{0i}dx^{i}\otimes dt+g_{ij}dx^{i}\otimes dx^{j}} "Light" moving with a velocity of u→{\displaystyle {\vec {u}}} (not the 4-velocity) has to satisfy g00+2g0iui+gijuiuj=0{\displaystyle g_{00}+2g_{0i}u^{i}+g_{ij}u^{i}u^{j}=0} If g=α2(−(c2−v2)−v→−v→1),{\displaystyle g=\alpha ^{2}{\begin{pmatrix}-(c^{2}-v^{2})&-{\vec {v}}\\-{\vec {v}}&\mathbf {1} \end{pmatrix}},} where α is some conformal factor which is yet to be determined (see Weyl rescaling), we get the desired velocity restriction. α may be some function of the density, for example. An acoustic metric can give rise to "acoustic horizons"[1] (also known as "sonic horizons"), analogous to the event horizons in the spacetime metric of general relativity. However, unlike the spacetime metric, in which the invariant speed is the absolute upper limit on the propagation of all causal effects, the invariant speed in an acoustic metric is not the upper limit on propagation speeds; for example, the speed of sound is less than the speed of light. As a result, the horizons in acoustic metrics are not perfectly analogous to those associated with the spacetime metric, and it is possible for certain physical effects to propagate back across an acoustic horizon. Such propagation is sometimes considered to be analogous to Hawking radiation, although the latter arises from quantum field effects in curved spacetime.
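As a numerical illustration (a sketch under the stated assumptions, with the conformal factor α set to 1 for simplicity), one can assemble the matrix g above for given c and v and check that a permissible sound velocity is null with respect to it:

import numpy as np

def acoustic_metric(c, v, alpha=1.0):
    # g = alpha^2 * [[-(c^2 - v^2), -v^T], [-v, I]]
    v = np.asarray(v, dtype=float)
    n = len(v)
    g = np.empty((n + 1, n + 1))
    g[0, 0] = -(c**2 - v @ v)
    g[0, 1:] = -v
    g[1:, 0] = -v
    g[1:, 1:] = np.eye(n)
    return alpha**2 * g

c, v = 340.0, np.array([10.0, 0.0, 0.0])   # sound speed and background flow
g = acoustic_metric(c, v)
u = v + np.array([c, 0.0, 0.0])            # sound moving at speed c relative to the fluid
tangent = np.concatenate(([1.0], u))       # (dt, dx) components with dt = 1
print(np.isclose(tangent @ g @ tangent, 0.0))  # True: the sound ray is null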
https://en.wikipedia.org/wiki/Acoustic_metric
In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. 2{\displaystyle {\sqrt {2}}} is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. Cauchy sequence: A sequence x1,x2,x3,…{\displaystyle x_{1},x_{2},x_{3},\ldots } of elements of a metric space (X,d){\displaystyle (X,d)} is called Cauchy if for every positive real number r>0{\displaystyle r>0} there is a positive integer N{\displaystyle N} such that for all positive integers m,n>N,{\displaystyle m,n>N,} d(xm,xn)<r.{\displaystyle d(x_{m},x_{n})<r.} Complete space: A metric space (X,d){\displaystyle (X,d)} is complete if any of the following equivalent conditions are satisfied: The space Q{\displaystyle \mathbb {Q} } of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. Consider for instance the sequence defined by the recursion xn+1=xn2+1xn{\displaystyle x_{n+1}={\frac {x_{n}}{2}}+{\frac {1}{x_{n}}}} (starting from any positive rational number, say x1=1{\displaystyle x_{1}=1}). This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: if the sequence did have a limit x,{\displaystyle x,} then by solving x=x2+1x{\displaystyle x={\frac {x}{2}}+{\frac {1}{x}}} necessarily x2=2,{\displaystyle x^{2}=2,} yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number 2{\displaystyle {\sqrt {2}}}. The open interval (0,1), again with the absolute difference metric, is not complete either. The sequence defined by xn=1n{\displaystyle x_{n}={\tfrac {1}{n}}} is Cauchy, but does not have a limit in the given space. However the closed interval [0,1] is complete; for example the given sequence does have a limit in this interval, namely zero. The space R{\displaystyle \mathbb {R} } of real numbers and the space C{\displaystyle \mathbb {C} } of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space Rn{\displaystyle \mathbb {R} ^{n}}, with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a,b] of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a,b) of continuous functions on (a,b), for it may contain unbounded functions. Instead, with the topology of compact convergence, C(a,b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric. The space Qp of p-adic numbers is complete for any prime number p.{\displaystyle p.} This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric.
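The √2 example above can be watched numerically; using exact rational arithmetic emphasizes that every term stays in Q (a small sketch, with the starting value x₁ = 1 chosen for concreteness):

from fractions import Fraction

x = Fraction(1)                 # x_1 = 1; every later term is rational
for _ in range(6):
    x = x / 2 + 1 / x           # x_{n+1} = x_n/2 + 1/x_n
    print(x, float(x))          # approaches 1.41421356... = sqrt(2), which is not in Q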
If S{\displaystyle S} is an arbitrary set, then the set SN of all sequences in S{\displaystyle S} becomes a complete metric space if we define the distance between the sequences (xn){\displaystyle \left(x_{n}\right)} and (yn){\displaystyle \left(y_{n}\right)} to be 1N{\displaystyle {\tfrac {1}{N}}}, where N{\displaystyle N} is the smallest index for which xN{\displaystyle x_{N}} is distinct from yN{\displaystyle y_{N}}, or 0{\displaystyle 0} if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S.{\displaystyle S.} Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem. Every compact metric space is complete, though complete spaces need not be compact. In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace S{\displaystyle S} of Rn is compact and therefore complete.[1] Let (X,d){\displaystyle (X,d)} be a complete metric space. If A⊆X{\displaystyle A\subseteq X} is a closed set, then A{\displaystyle A} is also complete. Let (X,d){\displaystyle (X,d)} be a metric space. If A⊆X{\displaystyle A\subseteq X} is a complete subspace, then A{\displaystyle A} is also closed. If X{\displaystyle X} is a set and M{\displaystyle M} is a complete metric space, then the set B(X,M){\displaystyle B(X,M)} of all bounded functions f from X to M{\displaystyle M} is a complete metric space. Here we define the distance in B(X,M){\displaystyle B(X,M)} in terms of the distance in M{\displaystyle M} with the supremum norm d(f,g)≡sup{d[f(x),g(x)]:x∈X}{\displaystyle d(f,g)\equiv \sup\{d[f(x),g(x)]:x\in X\}} If X{\displaystyle X} is a topological space and M{\displaystyle M} is a complete metric space, then the set Cb(X,M){\displaystyle C_{b}(X,M)} consisting of all continuous bounded functions f:X→M{\displaystyle f:X\to M} is a closed subspace of B(X,M){\displaystyle B(X,M)} and hence also complete. The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior. The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces. Theorem[2] (C. Ursescu) — Let X{\displaystyle X} be a complete metric space and let S1,S2,…{\displaystyle S_{1},S_{2},\ldots } be a sequence of subsets of X.{\displaystyle X.} For any metric space M, it is possible to construct a complete metric space M′ (which is also denoted as M¯{\displaystyle {\overline {M}}}), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N that extends f. The space M′ is determined up to isometry by this property (among all complete metric spaces isometrically containing M), and is called the completion of M. The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. For any two Cauchy sequences x∙=(xn){\displaystyle x_{\bullet }=\left(x_{n}\right)} and y∙=(yn){\displaystyle y_{\bullet }=\left(y_{n}\right)} in M, we may define their distance as d(x∙,y∙)=limnd(xn,yn){\displaystyle d\left(x_{\bullet },y_{\bullet }\right)=\lim _{n}d\left(x_{n},y_{n}\right)} (This limit exists because the real numbers are complete.)
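The first-differing-index metric on sequence spaces is easy to state in code; the sketch below compares finite prefixes for brevity:

def sequence_distance(x, y):
    # d(x, y) = 1/N, with N the smallest (1-based) index where x and y differ,
    # and 0 if they never differ.
    for n, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 1.0 / n
    return 0.0

print(sequence_distance("abcde", "abxde"))  # 1/3: first difference at index 3
print(sequence_distance("abc", "abc"))      # 0.0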
This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M with the equivalence class of sequences in M converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment. Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that "ought" to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class. For a prime p,{\displaystyle p,} the p-adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace. Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval (0,1), which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary, since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces.[3] A topological space homeomorphic to a separable complete metric space is called a Polish space.
Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous "subtraction" operation. In this setting, the distance between two points x{\displaystyle x} and y{\displaystyle y} is gauged not by a real number ε{\displaystyle \varepsilon } via the metric d{\displaystyle d} in the comparison d(x,y)<ε,{\displaystyle d(x,y)<\varepsilon ,} but by an open neighbourhood N{\displaystyle N} of 0{\displaystyle 0} via subtraction in the comparison x−y∈N.{\displaystyle x-y\in N.} A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular "distance" from each other. It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in X,{\displaystyle X,} then X{\displaystyle X} is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces.
https://en.wikipedia.org/wiki/Complete_metric_space
In mathematics, a diversity is a generalization of the concept of metric space. The concept was introduced in 2012 by Bryant and Tupper,[1] who call diversities "a form of multi-way metric".[2] The concept finds application in nonlinear analysis.[3] Given a set X{\displaystyle X}, let ℘fin(X){\displaystyle \wp _{\mbox{fin}}(X)} be the set of finite subsets of X{\displaystyle X}. A diversity is a pair (X,δ){\displaystyle (X,\delta )} consisting of a set X{\displaystyle X} and a function δ:℘fin(X)→R{\displaystyle \delta \colon \wp _{\mbox{fin}}(X)\to \mathbb {R} } satisfying (D1) δ(A)≥0{\displaystyle \delta (A)\geq 0}, with δ(A)=0{\displaystyle \delta (A)=0} if and only if |A|≤1{\displaystyle \left|A\right|\leq 1}, and (D2) if B≠∅{\displaystyle B\neq \emptyset } then δ(A∪C)≤δ(A∪B)+δ(B∪C){\displaystyle \delta (A\cup C)\leq \delta (A\cup B)+\delta (B\cup C)}. Bryant and Tupper observe that these axioms imply monotonicity; that is, if A⊆B{\displaystyle A\subseteq B}, then δ(A)≤δ(B){\displaystyle \delta (A)\leq \delta (B)}. They state that the term "diversity" comes from the appearance of a special case of their definition in work on phylogenetic and ecological diversities. They give the following examples: Let (X,d){\displaystyle (X,d)} be a metric space. Setting δ(A)=maxa,b∈Ad(a,b)=diam⁡(A){\displaystyle \delta (A)=\max _{a,b\in A}d(a,b)=\operatorname {diam} (A)} for all A∈℘fin(X){\displaystyle A\in \wp _{\mbox{fin}}(X)} defines a diversity. For all finite A⊆Rn{\displaystyle A\subseteq \mathbb {R} ^{n}}, if we define δ(A)=∑imaxa,b{|ai−bi|:a,b∈A}{\displaystyle \delta (A)=\sum _{i}\max _{a,b}\left\{\left|a_{i}-b_{i}\right|\colon a,b\in A\right\}} then (Rn,δ){\displaystyle (\mathbb {R} ^{n},\delta )} is a diversity. Let T be a phylogenetic tree with taxon set X. For each finite A⊆X{\displaystyle A\subseteq X}, define δ(A){\displaystyle \delta (A)} as the length of the smallest subtree of T connecting the taxa in A. Then (X,δ){\displaystyle (X,\delta )} is a (phylogenetic) diversity. Let (X,d){\displaystyle (X,d)} be a metric space. For each finite A⊆X{\displaystyle A\subseteq X}, let δ(A){\displaystyle \delta (A)} denote the minimum length of a Steiner tree within X connecting the elements of A. Then (X,δ){\displaystyle (X,\delta )} is a diversity. Let (X,δ){\displaystyle (X,\delta )} be a diversity. For all A∈℘fin(X){\displaystyle A\in \wp _{\mbox{fin}}(X)} define δ(k)(A)=max{δ(B):|B|≤k,B⊆A}{\displaystyle \delta ^{(k)}(A)=\max \left\{\delta (B)\colon |B|\leq k,B\subseteq A\right\}}. Then if k≥2{\displaystyle k\geq 2}, (X,δ(k)){\displaystyle (X,\delta ^{(k)})} is a diversity. If (X,E){\displaystyle (X,E)} is a graph, and δ(A){\displaystyle \delta (A)} is defined for any finite A as the size of the largest clique contained in A, then (X,δ){\displaystyle (X,\delta )} is a diversity.
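The diameter diversity from the first example is straightforward to compute for finite point sets in the plane; the following sketch also spot-checks the monotonicity that Bryant and Tupper note follows from the axioms:

from itertools import combinations

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def diameter_diversity(A):
    # delta(A) = diam(A); axiom (D1) forces 0 on singletons and the empty set.
    if len(A) <= 1:
        return 0.0
    return max(euclidean(p, q) for p, q in combinations(A, 2))

A = [(0, 0), (1, 0), (0, 1)]
print(diameter_diversity(A))                                      # sqrt(2)
print(diameter_diversity(A) <= diameter_diversity(A + [(3, 3)]))  # True: monotone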
https://en.wikipedia.org/wiki/Diversity_(mathematics)
In mathematics, specifically in category theory, a generalized metric space is a metric space but without the symmetry property and some other properties.[1] Precisely, it is a category enriched over [0,∞]{\displaystyle [0,\infty ]}, the one-point compactification of the half-line [0,∞){\displaystyle [0,\infty )}. The notion was introduced in 1973 by Lawvere, who noticed that a metric space can be viewed as a particular kind of category. The categorical point of view is useful since, by Yoneda's lemma, a generalized metric space can be embedded into a much larger category in which, for instance, one can construct the Cauchy completion of the space.
https://en.wikipedia.org/wiki/Generalized_metric_space
In mathematics, Hilbert's fourth problem in the 1900 list of Hilbert's problems is a foundational question in geometry. In one statement derived from the original, it was to find, up to an isomorphism, all geometries that have an axiomatic system of the classical geometry (Euclidean, hyperbolic and elliptic), with those axioms of congruence that involve the concept of the angle dropped, and the "triangle inequality", regarded as an axiom, added. If one assumes the continuity axiom in addition, then, in the case of the Euclidean plane, we come to the problem posed by Jean Gaston Darboux: "To determine all the calculus of variation problems in the plane whose solutions are all the plane straight lines."[1] There are several interpretations of the original statement of David Hilbert. Nevertheless, a solution was sought, with the German mathematician Georg Hamel being the first to contribute to the solution of Hilbert's fourth problem.[2] A recognized solution was given by the Soviet mathematician Aleksei Pogorelov in 1973.[3][4] In 1976, the Armenian mathematician Rouben V. Ambartzumian proposed another proof of Hilbert's fourth problem.[5] Hilbert discusses the existence of non-Euclidean geometry and non-Archimedean geometry: ...a geometry in which all the axioms of ordinary euclidean geometry hold, and in particular all the congruence axioms except the one of the congruence of triangles (or all except the theorem of the equality of the base angles in the isosceles triangle), and in which, besides, the proposition that in every triangle the sum of two sides is greater than the third is assumed as a particular axiom.[6] Due to the idea that a "straight line" is defined as the shortest path between two points, he mentions how congruence of triangles is necessary for Euclid's proof that a straight line in the plane is the shortest distance between two points. He summarizes as follows: The theorem of the straight line as the shortest distance between two points and the essentially equivalent theorem of Euclid about the sides of a triangle, play an important part not only in number theory but also in the theory of surfaces and in the calculus of variations. For this reason, and because I believe that the thorough investigation of the conditions for the validity of this theorem will throw a new light upon the idea of distance, as well as upon other elementary ideas, e. g., upon the idea of the plane, and the possibility of its definition by means of the idea of the straight line, the construction and systematic treatment of the geometries here possible seem to me desirable.[6] Desargues's theorem: If two triangles lie on a plane such that the lines connecting corresponding vertices of the triangles meet at one point, then the three points at which the prolongations of three pairs of corresponding sides of the triangles intersect lie on one line. The necessary condition for solving Hilbert's fourth problem is the requirement that a metric space satisfying the axioms of this problem should be Desarguesian, i.e.: For Desarguesian spaces, Georg Hamel proved that every solution of Hilbert's fourth problem can be represented in a real projective space RPn{\displaystyle RP^{n}} or in a convex domain of RPn{\displaystyle RP^{n}} if one determines the congruence of segments by equality of their lengths in a special metric for which the lines of the projective space are geodesics. Metrics of this type are called flat or projective.
Thus, the solution of Hilbert's fourth problem was reduced to the solution of the problem of constructively determining all complete flat metrics. Hamel solved this problem under the assumption of high regularity of the metric.[2] However, as simple examples show, the class of regular flat metrics is smaller than the class of all flat metrics. The axioms of the geometries under consideration imply only continuity of the metrics. Therefore, to solve Hilbert's fourth problem completely it is necessary to determine constructively all the continuous flat metrics. Before 1900, the Cayley–Klein model of Lobachevsky geometry in the unit disk was already known; in it, geodesic lines are chords of the disk and the distance between points is defined as the logarithm of the cross-ratio of a quadruple. For two-dimensional Riemannian metrics, Eugenio Beltrami (1835–1900) proved that the flat metrics are the metrics of constant curvature.[7] For multidimensional Riemannian metrics this statement was proved by E. Cartan in 1930. In 1890, in order to solve problems in the theory of numbers, Hermann Minkowski introduced a notion of space that is nowadays called a finite-dimensional Banach space.[8] Let F0⊂En{\displaystyle F_{0}\subset \mathbb {E} ^{n}} be a compact convex hypersurface in a Euclidean space, defined by where the function F=F(y){\displaystyle F=F(y)} satisfies the following conditions: The length of the vector OA is defined by: A space with this metric is called Minkowski space. The hypersurface F0{\displaystyle F_{0}} is convex and can be irregular. The defined metric is flat. Let M and TM={(x,y)|x∈M,y∈TxM}{\displaystyle TM=\{(x,y)|x\in M,y\in T_{x}M\}} be a smooth finite-dimensional manifold and its tangent bundle, respectively. The function F(x,y):TM→[0,+∞){\displaystyle F(x,y)\colon TM\rightarrow [0,+\infty )} is called a Finsler metric if it satisfies the following conditions; (M,F){\displaystyle (M,F)} is then a Finsler space. Let U⊂(En+1,‖⋅‖E){\displaystyle U\subset (\mathbb {E} ^{n+1},\|\cdot \|_{\mathbb {E} })} be a bounded open convex set with boundary of class C2 and positive normal curvatures. Similarly to the Lobachevsky space, the hypersurface ∂U{\displaystyle \partial U} is called the absolute of Hilbert's geometry.[9] Hilbert's distance is defined by The distance dU{\displaystyle d_{U}} induces the Hilbert–Finsler metric FU{\displaystyle F_{U}} on U. For any x∈U{\displaystyle x\in U} and y∈TxU{\displaystyle y\in T_{x}U}, we have The metric is symmetric and flat. In 1895, Hilbert introduced this metric as a generalization of Lobachevsky geometry. If the hypersurface ∂U{\displaystyle \partial U} is an ellipsoid, then we obtain the Lobachevsky geometry. In 1930, Funk introduced a non-symmetric metric. It is defined in a domain bounded by a closed convex hypersurface and is also flat. Georg Hamel was the first to contribute to the solution of Hilbert's fourth problem.[2] He proved the following statement. Theorem. A regular Finsler metric F(x,y)=F(x1,…,xn,y1,…,yn){\displaystyle F(x,y)=F(x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n})} is flat if and only if it satisfies the conditions: Consider the set of all oriented lines in the plane. Each line is defined by the parameters ρ{\displaystyle \rho } and φ,{\displaystyle \varphi ,} where ρ{\displaystyle \rho } is the distance from the origin to the line, and φ{\displaystyle \varphi } is the angle between the line and the x-axis. Then the set of all oriented lines is homeomorphic to a circular cylinder of radius 1 with the area element dS=dρdφ{\displaystyle dS=d\rho \,d\varphi }.
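For the unit disk (the Cayley–Klein situation described above), Hilbert's distance can be computed from the cross-ratio of x, y and the two points where their chord meets the absolute. The ½·log cross-ratio formula used in this sketch is the standard one; treat the exact constants as an assumption rather than a quotation from this article.

import numpy as np

def hilbert_distance_disk(x, y):
    # Chord through x and y meets the unit circle at a (behind x) and
    # b (beyond y); d(x, y) = (1/2) ln[(|a-y| |b-x|) / (|a-x| |b-y|)].
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = (y - x) / np.linalg.norm(y - x)
    t0 = x @ d
    disc = np.sqrt(t0**2 - (x @ x - 1.0))   # solve |x + t d| = 1 for t
    a, b = x + (-t0 - disc) * d, x + (-t0 + disc) * d
    cross = (np.linalg.norm(a - y) * np.linalg.norm(b - x)) / (
            np.linalg.norm(a - x) * np.linalg.norm(b - y))
    return 0.5 * np.log(cross)

print(hilbert_distance_disk([0.0, 0.0], [0.5, 0.0]))   # ~0.549 = artanh(0.5)
print(hilbert_distance_disk([0.0, 0.0], [0.99, 0.0]))  # grows without bound near the absolute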
Let γ{\displaystyle \gamma } be a rectifiable curve in the plane. Then the length of γ{\displaystyle \gamma } is L=14∬Ωn(ρ,φ)dρdφ{\displaystyle L={\frac {1}{4}}\iint _{\Omega }n(\rho ,\varphi )\,d\rho \,d\varphi } where Ω{\displaystyle \Omega } is the set of lines that intersect the curve γ{\displaystyle \gamma }, and n(ρ,φ){\displaystyle n(\rho ,\varphi )} is the number of intersections of the line with γ{\displaystyle \gamma }. Crofton proved this statement in 1870.[10] A similar statement holds for a projective space. In 1966, in his talk at the International Mathematical Congress in Moscow, Herbert Busemann introduced a new class of flat metrics. On the set of lines of the projective plane RP2{\displaystyle RP^{2}} he introduced a completely additive non-negative measure σ{\displaystyle \sigma } which satisfies the following conditions: If we consider a σ{\displaystyle \sigma }-metric in an arbitrary convex domain Ω{\displaystyle \Omega } of a projective space RP2{\displaystyle RP^{2}}, then condition 3) should be replaced by the following: for any set H such that H is contained in Ω{\displaystyle \Omega } and the closure of H does not intersect the boundary of Ω{\displaystyle \Omega }, the inequality Using this measure, the σ{\displaystyle \sigma }-metric on RP2{\displaystyle RP^{2}} is defined by where τ[x,y]{\displaystyle \tau [x,y]} is the set of straight lines that intersect the segment [x,y]{\displaystyle [x,y]}. The triangle inequality for this metric follows from Pasch's theorem. Theorem. Any σ{\displaystyle \sigma }-metric on RP2{\displaystyle RP^{2}} is flat, i.e., the geodesics are the straight lines of the projective space. But Busemann was far from the idea that σ{\displaystyle \sigma }-metrics exhaust all flat metrics. He wrote, "The freedom in the choice of a metric with given geodesics is for non-Riemannian metrics so great that it may be doubted whether there really exists a convincing characterization of all Desarguesian spaces".[11] The following theorem was proved by Pogorelov in 1973.[3][4] Theorem. Any two-dimensional continuous complete flat metric is a σ{\displaystyle \sigma }-metric. Thus Hilbert's fourth problem for the two-dimensional case was completely solved. A consequence of this is that if you glue two copies of the same planar convex shape boundary to boundary, with an angle twist between them, you get a 3D object without crease lines, the two faces being developable. In 1976, Ambartsumian proposed another proof of Hilbert's fourth problem.[5] His proof uses the fact that in the two-dimensional case the whole measure can be restored from its values on biangles, and thus be defined on triangles in the same way as the area of a triangle is defined on a sphere. Since the triangle inequality holds, it follows that this measure is positive on non-degenerate triangles and is determined on all Borel sets. However, this structure cannot be generalized to higher dimensions because of Hilbert's third problem, solved by Max Dehn. In the two-dimensional case, polygons with the same volume are scissors-congruent. As was shown by Dehn, this is not true in higher dimensions. For the three-dimensional case Pogorelov proved the following theorem. Theorem. Any three-dimensional regular complete flat metric is a σ{\displaystyle \sigma }-metric. However, in the three-dimensional case σ{\displaystyle \sigma }-measures can take either positive or negative values.
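The Crofton formula lends itself to a Monte Carlo check: sample lines x·cos φ + y·sin φ = ρ uniformly from a parameter box large enough to cover the curve, count intersections, and rescale. This is a sketch; the box [−R, R] × [0, 2π) counts each unoriented line twice, which matches the factor 1/4 in the formula.

import math, random

def crofton_length(polyline, R=2.0, samples=200_000):
    # Estimates L = (1/4) * integral of n(rho, phi) over lines meeting the curve.
    # R must exceed the distance from the origin to the farthest curve point.
    total = 0
    for _ in range(samples):
        rho = random.uniform(-R, R)
        phi = random.uniform(0.0, 2.0 * math.pi)
        c, s = math.cos(phi), math.sin(phi)
        # n = number of sign changes of f(p) = p . (c, s) - rho along the polyline.
        vals = [c * px + s * py - rho for (px, py) in polyline]
        total += sum(1 for v0, v1 in zip(vals, vals[1:]) if v0 * v1 < 0)
    box_area = (2.0 * R) * (2.0 * math.pi)
    return 0.25 * box_area * total / samples

segment = [(0.0, 0.0), (1.0, 0.0)]   # unit segment, true length 1
print(crofton_length(segment))       # ≈ 1.0 up to Monte Carlo noise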
The necessary and sufficient conditions for the regular metric defined by the set function σ{\displaystyle \sigma } to be flat are the following three conditions: Moreover, Pogorelov showed that any complete continuous flat metric in the three-dimensional case is the limit of regular σ{\displaystyle \sigma }-metrics with uniform convergence on any compact sub-domain of the metric's domain. He called them generalized σ{\displaystyle \sigma }-metrics. Thus Pogorelov could prove the following statement. Theorem. In the three-dimensional case, any complete continuous flat metric is a σ{\displaystyle \sigma }-metric in the generalized meaning. Busemann, in his review of Pogorelov's book "Hilbert's Fourth Problem", wrote: "In the spirit of the time Hilbert restricted himself to n = 2, 3 and so does Pogorelov. However, this has doubtless pedagogical reasons, because he addresses a wide class of readers. The real difference is between n = 2 and n > 2. Pogorelov's method works for n > 3, but requires greater technicalities".[12] The multi-dimensional case of the fourth Hilbert problem was studied by Szabo.[13] In 1986, he proved, as he wrote, the generalized Pogorelov theorem. Theorem. Each n-dimensional Desarguesian space of the class Cn+2,n>2{\displaystyle C^{n+2},n>2}, is generated by the Blaschke–Busemann construction. A σ{\displaystyle \sigma }-measure that generates a flat metric has the following properties: An example was also given of a flat metric not generated by the Blaschke–Busemann construction. Szabo described all continuous flat metrics in terms of generalized functions. Hilbert's fourth problem is also closely related to the properties of convex bodies. A convex polyhedron is called a zonotope if it is the Minkowski sum of segments. A convex body which is a limit of zonotopes in the Blaschke–Hausdorff metric is called a zonoid. For zonoids, the support function is represented by where σ(u){\displaystyle \sigma (u)} is an even positive Borel measure on the sphere Sn−1{\displaystyle S^{n-1}}. The Minkowski space is generated by the Blaschke–Busemann construction if and only if the support function of the indicatrix has the form of (1), where σ(u){\displaystyle \sigma (u)} is an even, but not necessarily positive, Borel measure.[14] The bodies bounded by such hypersurfaces are called generalized zonoids. The octahedron |x1|+|x2|+|x3|≤1{\displaystyle |x_{1}|+|x_{2}|+|x_{3}|\leq 1} in the Euclidean space E3{\displaystyle E^{3}} is not a generalized zonoid. From the above statement it follows that the flat metric of Minkowski space with the norm ‖x‖=max{|x1|,|x2|,|x3|}{\displaystyle \|x\|=\max\{|x_{1}|,|x_{2}|,|x_{3}|\}} is not generated by the Blaschke–Busemann construction. A correspondence was also found between the planar n-dimensional Finsler metrics and special symplectic forms on the Grassmann manifold G(n+1,2){\displaystyle G(n+1,2)} in En+1{\displaystyle E^{n+1}}.[15] Periodic solutions of Hilbert's fourth problem have also been considered: Another exposition of Hilbert's fourth problem can be found in the work of Paiva.[17]
https://en.wikipedia.org/wiki/Hilbert%27s_fourth_problem
A metric tree is any tree data structure specialized to index data in metric spaces. Metric trees exploit properties of metric spaces such as the triangle inequality to make accesses to the data more efficient. Examples include the M-tree, vp-trees, cover trees, MVP trees, and BK-trees.[1] Most algorithms and data structures for searching a dataset are based on the classical binary search algorithm, and generalizations such as the k-d tree or range tree work by interleaving the binary search algorithm over the separate coordinates, treating each spatial coordinate as an independent search constraint. These data structures are well-suited for range query problems asking for every point (x,y){\displaystyle (x,y)} that satisfies minx≤x≤maxx{\displaystyle {\mbox{min}}_{x}\leq x\leq {\mbox{max}}_{x}} and miny≤y≤maxy{\displaystyle {\mbox{min}}_{y}\leq y\leq {\mbox{max}}_{y}}. A limitation of these multidimensional search structures is that they are only defined for searching over objects that can be treated as vectors. They are not applicable for the more general case in which the algorithm is given only a collection of objects and a function for measuring the distance or similarity between two objects. If, for example, someone were to create a function that returns a value indicating how similar one image is to another, a natural algorithmic problem would be to take a dataset of images and find the ones that are similar according to the function to a given query image. If there is no structure to the similarity measure, then a brute-force search requiring the comparison of the query image to every image in the dataset is the best that can be done. If, however, the similarity function satisfies the triangle inequality, then it is possible to use the result of each comparison to prune the set of candidates to be examined. The first article on metric trees, as well as the first use of the term "metric tree", published in the open literature was by Jeffrey Uhlmann in 1991.[2] Other researchers were working independently on similar data structures. In particular, Peter Yianilos claimed to have independently discovered the same method, which he called a vantage-point tree (VP-tree).[3] The research on metric tree data structures blossomed in the late 1990s and included an examination by Google co-founder Sergey Brin of their use for very large databases.[4] The first textbook on metric data structures was published in 2006.[1]
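A BK-tree, one of the metric trees listed above, shows the pruning idea in a few lines: by the triangle inequality, a child reached by an edge labeled dist can contain a match for query radius r only if dist lies within r of the query's distance to the current node. The implementation below is a minimal sketch using edit distance as the metric.

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class BKTree:
    def __init__(self, metric):
        self.metric, self.root = metric, None   # node = [item, {dist: child}]

    def add(self, item):
        if self.root is None:
            self.root = [item, {}]
            return
        node = self.root
        while True:
            d = self.metric(item, node[0])
            if d in node[1]:
                node = node[1][d]
            else:
                node[1][d] = [item, {}]
                return

    def search(self, query, radius):
        results, stack = [], [self.root] if self.root else []
        while stack:
            item, children = stack.pop()
            d = self.metric(query, item)
            if d <= radius:
                results.append(item)
            # Triangle inequality: only edges with distance in
            # [d - radius, d + radius] can lead to matches.
            stack.extend(c for dist, c in children.items()
                         if d - radius <= dist <= d + radius)
        return results

tree = BKTree(edit_distance)
for w in ["book", "books", "cake", "boo", "cape", "cart"]:
    tree.add(w)
print(tree.search("bool", 1))  # ['book', 'boo'] (order may vary)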
https://en.wikipedia.org/wiki/Metric_tree
The Minkowski distance or Minkowski metric is a metric in a normed vector space which can be considered as a generalization of both the Euclidean distance and the Manhattan distance. It is named after the German mathematician Hermann Minkowski. The Minkowski distance of order p{\displaystyle p} (where p{\displaystyle p} is a real number) between two points X=(x1,x2,…,xn)andY=(y1,y2,…,yn)∈Rn{\displaystyle X=(x_{1},x_{2},\ldots ,x_{n}){\text{ and }}Y=(y_{1},y_{2},\ldots ,y_{n})\in \mathbb {R} ^{n}} is defined as: D(X,Y)=(∑i=1n|xi−yi|p)1p.{\displaystyle D\left(X,Y\right)={\biggl (}\sum _{i=1}^{n}|x_{i}-y_{i}|^{p}{\biggr )}^{\frac {1}{p}}.} For p≥1,{\displaystyle p\geq 1,} the Minkowski distance is a metric as a result of the Minkowski inequality.[1] When p<1,{\displaystyle p<1,} the distance between (0,0){\displaystyle (0,0)} and (1,1){\displaystyle (1,1)} is 21/p>2,{\displaystyle 2^{1/p}>2,} but the point (0,1){\displaystyle (0,1)} is at a distance 1{\displaystyle 1} from both of these points. Since this violates the triangle inequality, for p<1{\displaystyle p<1} it is not a metric. However, a metric can be obtained for these values by simply removing the exponent of 1/p.{\displaystyle 1/p.} The resulting metric is also an F-norm. The Minkowski distance is typically used with p{\displaystyle p} being 1 or 2, which correspond to the Manhattan distance and the Euclidean distance, respectively.[2] In the limiting case of p{\displaystyle p} reaching infinity, we obtain the Chebyshev distance: limp→∞(∑i=1n|xi−yi|p)1p=maxi=1n|xi−yi|.{\displaystyle \lim _{p\to \infty }{{\biggl (}\sum _{i=1}^{n}|x_{i}-y_{i}|^{p}{\biggr )}^{\frac {1}{p}}}=\max _{i=1}^{n}|x_{i}-y_{i}|.} Similarly, for p{\displaystyle p} reaching negative infinity, we have: limp→−∞(∑i=1n|xi−yi|p)1p=mini=1n|xi−yi|.{\displaystyle \lim _{p\to -\infty }{{\biggl (}\sum _{i=1}^{n}|x_{i}-y_{i}|^{p}{\biggr )}^{\frac {1}{p}}}=\min _{i=1}^{n}|x_{i}-y_{i}|.} The Minkowski distance can also be viewed as a multiple of the power mean of the component-wise differences between P{\displaystyle P} and Q.{\displaystyle Q.} Unit circles (the level set of the distance function where all points are at unit distance from the center) take different shapes for different values of p{\displaystyle p}. The Minkowski metric is very useful in the fields of machine learning and AI. Many popular machine learning algorithms use specific distance metrics such as this to compare the similarity of two data points. Depending on the nature of the data being analyzed, various metrics can be used. The Minkowski metric is most useful for numerical datasets where one wants to determine the similarity of size between multiple data-point vectors.
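A direct implementation, with the p → ∞ (Chebyshev) limit handled explicitly (a minimal sketch):

import math

def minkowski(x, y, p):
    if p == math.inf:
        return max(abs(a - b) for a, b in zip(x, y))
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
print(minkowski(x, y, 1))         # 7.0  (Manhattan distance)
print(minkowski(x, y, 2))         # 5.0  (Euclidean distance)
print(minkowski(x, y, math.inf))  # 4.0  (Chebyshev distance)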
https://en.wikipedia.org/wiki/Minkowski_distance
In mathematics and its applications, the signed distance function or signed distance field (SDF) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space (such as the surface of a geometric shape), with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω, where the signed distance function is zero, and it takes negative values outside of Ω.[1] However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside).[2] The concept also sometimes goes by the name oriented distance function/field. Let Ω be a subset of a metric space X with metric d, and ∂Ω{\displaystyle \partial \Omega } be its boundary. The distance between a point x of X and the subset ∂Ω{\displaystyle \partial \Omega } of X is defined as usual as where inf{\displaystyle \inf } denotes the infimum. The signed distance function from a point x of X to Ω{\displaystyle \Omega } is defined by If Ω is a subset of the Euclidean space Rn with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation If the boundary of Ω is Ck for k ≥ 2 (see Differentiability classes) then d is Ck on points sufficiently close to the boundary of Ω.[3] In particular, on the boundary f satisfies where N is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map. If, further, Γ is a region sufficiently close to the boundary of Ω that f is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map Wx for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if T(∂Ω,μ) is the set of points within distance μ of the boundary of Ω (i.e. the tubular neighbourhood of radius μ), and g is an absolutely integrable function on Γ, then where det denotes the determinant and dSu indicates that we are taking the surface integral.[4] Algorithms for calculating the signed distance function include the efficient fast marching method, the fast sweeping method[5] and the more general level-set method. For voxel rendering, a fast algorithm for calculating the SDF in taxicab geometry uses summed-area tables.[6] Signed distance functions are applied, for example, in real-time rendering,[7] for instance the method of SDF ray marching, and computer vision.[8][9] SDF has been used to describe object geometry in real-time rendering, usually in a raymarching context, starting in the mid 2000s. By 2007, Valve was using SDFs to render large pixel-size (or high-DPI) smooth fonts with GPU acceleration in its games.[10] Valve's method is not perfect, as it runs in raster space in order to avoid the computational complexity of solving the problem in the (continuous) vector space; the rendered text often loses sharp corners. In 2014, an improved method was presented by Behdad Esfahbod.
Behdad Esfahbod's GLyphy approximates the font's Bézier curves with arc splines, accelerated by grid-based discretization techniques (which cull points that are too far away) to run in real time.[11]

A modified version of the SDF was introduced as a loss function to minimise the error in interpenetration of pixels while rendering multiple objects.[12] In particular, for any pixel that does not belong to an object, no penalty is imposed if it also lies outside the object in the rendition; if it lies inside, a positive value proportional to its distance inside the object is imposed.

In 2020, the FOSS game engine Godot 4.0 received SDF-based real-time global illumination (SDFGI), which became a compromise between more realistic voxel-based GI and baked GI. Its core advantage is that it can be applied to infinite space, which allows developers to use it for open-world games.[13]

In 2023, the authors of the Zed text editor announced a GPUI framework that draws all UI elements using the GPU at 120 fps. The work makes use of Inigo Quilez's list of geometric primitives in SDF, Figma co-founder Evan Wallace's Gaussian blur in SDF, and a new rounded-rectangle SDF.[14]
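As an illustration of the definition and the eikonal property above, here is a small Python sketch (our own construction, not taken from the article) using the exact signed distance function of a disk of radius r, for which both the sign convention and the unit-gradient property are easy to verify numerically:

```python
import math

def sdf_disk(x, y, r=1.0):
    """Signed distance to the boundary of a disk of radius r centred at the origin.

    Positive inside, zero on the circle, negative outside (the article's
    first sign convention).
    """
    return r - math.hypot(x, y)

# Sign convention: inside > 0, boundary == 0, outside < 0.
print(sdf_disk(0.0, 0.0))   #  1.0 (centre, deep inside)
print(sdf_disk(1.0, 0.0))   #  0.0 (on the boundary)
print(sdf_disk(2.0, 0.0))   # -1.0 (outside)

# Eikonal equation |grad f| = 1, checked by central finite differences
# at a point away from the non-differentiable centre.
h = 1e-6
fx = (sdf_disk(0.5 + h, 0.3) - sdf_disk(0.5 - h, 0.3)) / (2 * h)
fy = (sdf_disk(0.5, 0.3 + h) - sdf_disk(0.5, 0.3 - h)) / (2 * h)
print(math.hypot(fx, fy))   # ~1.0
```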
https://en.wikipedia.org/wiki/Signed_distance_function
In mathematics, a space is a set (sometimes known as a universe) endowed with a structure defining the relationships among the elements of the set. A subspace is a subset of the parent space which retains the same structure. While modern mathematics uses many types of spaces, such as Euclidean spaces, linear spaces, topological spaces, Hilbert spaces, or probability spaces, it does not define the notion of "space" itself.[1][a]

A space consists of selected mathematical objects that are treated as points, and selected relationships between these points. The nature of the points can vary widely: for example, the points can represent numbers, functions on another space, or subspaces of another space. It is the relationships that define the nature of the space. More precisely, isomorphic spaces are considered identical, where an isomorphism between two spaces is a one-to-one correspondence between their points that preserves the relationships. For example, the relationships between the points of a three-dimensional Euclidean space are uniquely determined by Euclid's axioms,[b] and all three-dimensional Euclidean spaces are considered identical.

Topological notions such as continuity have natural definitions for every Euclidean space. However, topology does not distinguish straight lines from curved lines, and the relation between Euclidean and topological spaces is thus "forgetful". Relations of this kind are treated in more detail in the "Types of spaces" section.

It is not always clear whether a given mathematical object should be considered as a geometric "space" or an algebraic "structure". A general definition of "structure", proposed by Bourbaki,[2] embraces all common types of spaces, provides a general definition of isomorphism, and justifies the transfer of properties between isomorphic structures.

In ancient Greek mathematics, "space" was a geometric abstraction of the three-dimensional reality observed in everyday life. About 300 BC, Euclid gave axioms for the properties of space. Euclid built all of mathematics on these geometric foundations, going so far as to define numbers by comparing the lengths of line segments to the length of a chosen reference segment. The method of coordinates (analytic geometry) was adopted by René Descartes in 1637.[3] At that time, geometric theorems were treated as absolute objective truths knowable through intuition and reason, similar to objects of natural science;[4]: 11 and axioms were treated as obvious implications of definitions.[4]: 15

Two equivalence relations between geometric figures were used: congruence and similarity. Translations, rotations and reflections transform a figure into congruent figures; homotheties transform it into similar figures. For example, all circles are mutually similar, but ellipses are not similar to circles. A third equivalence relation, introduced by Gaspard Monge in 1795, occurs in projective geometry: not only ellipses, but also parabolas and hyperbolas, turn into circles under appropriate projective transformations; they are all projectively equivalent figures.

The relation between the two geometries, Euclidean and projective,[4]: 133 shows that mathematical objects are not given to us with their structure.[4]: 21 Rather, each mathematical theory describes its objects by some of their properties, precisely those that are put as axioms at the foundations of the theory.[4]: 20 Distances and angles cannot appear in theorems of projective geometry, since these notions are neither mentioned in the axioms of projective geometry nor defined from the notions mentioned there.
The question "what is the sum of the three angles of a triangle" is meaningful in Euclidean geometry but meaningless in projective geometry. A different situation appeared in the 19th century: in some geometries the sum of the three angles of a triangle is well-defined but different from the classical value (180 degrees). Non-Euclideanhyperbolic geometry, introduced byNikolai Lobachevskyin 1829 andJános Bolyaiin 1832 (andCarl Friedrich Gaussin 1816, unpublished)[4]: 133stated that the sum depends on the triangle and is always less than 180 degrees.Eugenio Beltramiin 1868 andFelix Kleinin 1871 obtained Euclidean "models" of the non-Euclidean hyperbolic geometry, and thereby completely justified this theory as a logical possibility.[4]: 24[5] This discovery forced the abandonment of the pretensions to the absolute truth of Euclidean geometry. It showed that axioms are not "obvious", nor "implications of definitions". Rather, they are hypotheses. To what extent do they correspond to an experimental reality? This important physical problem no longer has anything to do with mathematics. Even if a "geometry" does not correspond to an experimental reality, its theorems remain no less "mathematical truths".[4]: 15 A Euclidean model of anon-Euclidean geometryis a choice of some objects existing in Euclidean space and some relations between these objects that satisfy all axioms (and therefore, all theorems) of the non-Euclidean geometry. These Euclidean objects and relations "play" the non-Euclidean geometry like contemporary actors playing an ancient performance. Actors can imitate a situation that never occurred in reality. Relations between the actors on the stage imitate relations between the characters in the play. Likewise, the chosen relations between the chosen objects of the Euclidean model imitate the non-Euclidean relations. It shows that relations between objects are essential in mathematics, while the nature of the objects is not. The word "geometry" (from Ancient Greek: geo- "earth", -metron "measurement") initially meant a practical way of processing lengths, regions and volumes in the space in which we live, but was then extended widely (as well as the notion of space in question here). According to Bourbaki,[4]: 131the period between 1795 (Géométrie descriptiveof Monge) and 1872 (the"Erlangen programme"of Klein) can be called "the golden age of geometry". The original space investigated by Euclid is now called three-dimensionalEuclidean space. Its axiomatization, started by Euclid 23 centuries ago, was reformed withHilbert's axioms,Tarski's axiomsandBirkhoff's axioms. These axiom systems describe the space viaprimitive notions(such as "point", "between", "congruent") constrained by a number ofaxioms. Analytic geometry made great progress and succeeded in replacing theorems of classical geometry with computations via invariants of transformation groups.[4]: 134, 5Since that time, new theorems of classical geometry have been of more interest to amateurs than to professional mathematicians.[4]: 136However, the heritage of classical geometry was not lost. According to Bourbaki,[4]: 138"passed over in its role as an autonomous and living science, classical geometry is thus transfigured into a universal language of contemporary mathematics". Simultaneously, numbers began to displace geometry as the foundation of mathematics. 
For instance, in Richard Dedekind's 1872 essay Stetigkeit und irrationale Zahlen (Continuity and Irrational Numbers), he asserts that points on a line ought to have the properties of Dedekind cuts, and that therefore a line was the same thing as the set of real numbers. Dedekind is careful to note that this is an assumption that is incapable of being proven. In modern treatments, Dedekind's assertion is often taken to be the definition of a line, thereby reducing geometry to arithmetic. Three-dimensional Euclidean space is defined to be an affine space whose associated vector space of differences of its elements is equipped with an inner product.[6] A definition "from scratch", as in Euclid, is now not often used, since it does not reveal the relation of this space to other spaces. Also, a three-dimensional projective space is now defined as the space of all one-dimensional subspaces (that is, straight lines through the origin) of a four-dimensional vector space. This shift in foundations requires a new set of axioms, and if these axioms are adopted, the classical axioms of geometry become theorems.

A space now consists of selected mathematical objects (for instance, functions on another space, or subspaces of another space, or just elements of a set) treated as points, and selected relationships between these points. Therefore, spaces are just mathematical structures of convenience. One may expect that the structures called "spaces" are perceived more geometrically than other mathematical objects, but this is not always true. According to the famous inaugural lecture given by Bernhard Riemann in 1854, every mathematical object parametrized by n real numbers may be treated as a point of the n-dimensional space of all such objects.[4]: 140 Contemporary mathematicians follow this idea routinely and find it extremely suggestive to use the terminology of classical geometry nearly everywhere.[4]: 138

Functions are important mathematical objects. Usually they form infinite-dimensional function spaces, as noted already by Riemann[4]: 141 and elaborated in the 20th century by functional analysis. While each type of space has its own definition, the general idea of "space" evades formalization. Some structures are called spaces, others are not, without a formal criterion. Moreover, there is no consensus on the general idea of "structure". According to Pudlák,[7] "Mathematics [...] cannot be explained completely by a single concept such as the mathematical structure. Nevertheless, Bourbaki's structuralist approach is the best that we have." We will return to Bourbaki's structuralist approach in the last section "Spaces and structures", while we now outline a possible classification of spaces (and structures) in the spirit of Bourbaki.

We classify spaces on three levels. Given that each mathematical theory describes its objects by some of their properties, the first question to ask is: which properties? This leads to the first (upper) classification level. On the second level, one takes into account answers to especially important questions (among the questions that make sense according to the first level). On the third level of classification, one takes into account answers to all possible questions. For example, the upper-level classification distinguishes between Euclidean and projective spaces, since the distance between two points is defined in Euclidean spaces but undefined in projective spaces. Another example:
The question "what is the sum of the three angles of a triangle" makes sense in a Euclidean space but not in a projective space. In a non-Euclidean space the question makes sense but is answered differently, which is not an upper-level distinction. Also, the distinction between a Euclidean plane and a Euclidean 3-dimensional space is not an upper-level distinction; the question "what is the dimension" makes sense in both cases. Thesecond-level classificationdistinguishes, for example, between Euclidean and non-Euclidean spaces; between finite-dimensional and infinite-dimensional spaces; between compact and non-compact spaces, etc. In Bourbaki's terms,[2]the second-level classification is the classification by "species". Unlike biological taxonomy, a space may belong to several species. Thethird-level classificationdistinguishes, for example, between spaces of different dimension, but does not distinguish between a plane of a three-dimensional Euclidean space, treated as a two-dimensional Euclidean space, and the set of all pairs of real numbers, also treated as a two-dimensional Euclidean space. Likewise it does not distinguish between different Euclidean models of the same non-Euclidean space. More formally, the third level classifies spaces up toisomorphism. An isomorphism between two spaces is defined as a one-to-one correspondence between the points of the first space and the points of the second space, that preserves all relations stipulated according to the first level. Mutually isomorphic spaces are thought of as copies of a single space. If one of them belongs to a given species then they all do. The notion of isomorphism sheds light on the upper-level classification. Given a one-to-one correspondence between two spaces of the same upper-level class, one may ask whether it is an isomorphism or not. This question makes no sense for two spaces of different classes. An isomorphism to itself is called an automorphism. Automorphisms of a Euclidean space are shifts, rotations, reflections and compositions of these. Euclidean space is homogeneous in the sense that every point can be transformed into every other point by some automorphism. Euclidean axioms[b]leave no freedom; they determine uniquely all geometric properties of the space. More exactly: all three-dimensional Euclidean spaces are mutually isomorphic. In this sense we have "the" three-dimensional Euclidean space. In Bourbaki's terms, the corresponding theory isunivalent. In contrast, topological spaces are generally non-isomorphic; their theory ismultivalent. A similar idea occurs in mathematical logic: a theory is called categorical if all its models of the same cardinality are mutually isomorphic. According to Bourbaki,[8]the study of multivalent theories is the most striking feature which distinguishes modern mathematics from classical mathematics. Topological notions (continuity, convergence, open sets, closed sets etc.) are defined naturally in every Euclidean space. In other words, every Euclidean space is also a topological space. Every isomorphism between two Euclidean spaces is also an isomorphism between the corresponding topological spaces (called "homeomorphism"), but the converse is wrong: a homeomorphism may distort distances. In Bourbaki's terms,[2]"topological space" is anunderlyingstructure of the "Euclidean space" structure. 
Similar ideas occur in category theory: the category of Euclidean spaces is a concrete category over the category of topological spaces; the forgetful (or "stripping") functor maps the former category to the latter category. A three-dimensional Euclidean space is a special case of a Euclidean space. In Bourbaki's terms,[2] the species of three-dimensional Euclidean space is richer than the species of Euclidean space. Likewise, the species of compact topological space is richer than the species of topological space.

Such relations between species of spaces may be expressed diagrammatically as shown in Fig. 3. An arrow from A to B means that every A-space is also a B-space, or may be treated as a B-space, or provides a B-space, etc. Treating A and B as classes of spaces one may interpret the arrow as a transition from A to B. (In Bourbaki's terms,[9] a "procedure of deduction" of a B-space from an A-space. It is not quite a function unless the classes A, B are sets; this nuance does not invalidate the following.) The two arrows on Fig. 3 are not invertible, but for different reasons.

The transition from "Euclidean" to "topological" is forgetful. Topology distinguishes continuous from discontinuous, but does not distinguish rectilinear from curvilinear. Intuition tells us that the Euclidean structure cannot be restored from the topology. A proof uses an automorphism of the topological space (that is, a self-homeomorphism) that is not an automorphism of the Euclidean space (that is, not a composition of shifts, rotations and reflections). Such a transformation turns the given Euclidean structure into a (isomorphic but) different Euclidean structure; both Euclidean structures correspond to a single topological structure.

In contrast, the transition from "3-dim Euclidean" to "Euclidean" is not forgetful; a Euclidean space need not be 3-dimensional, but if it happens to be 3-dimensional, it is full-fledged; no structure is lost. In other words, the latter transition is injective (one-to-one), while the former transition is not injective (many-to-one). We denote injective transitions by an arrow with a barbed tail, "↣" rather than "→".

Neither transition is surjective, that is, not every B-space results from some A-space. First, a 3-dim Euclidean space is a special (not general) case of a Euclidean space. Second, a topology of a Euclidean space is a special case of topology (for instance, it must be non-compact, connected, etc.). We denote surjective transitions by a two-headed arrow, "↠" rather than "→". See for example Fig. 4; there, the arrow from "real linear topological" to "real linear" is two-headed, since every real linear space admits some (at least one) topology compatible with its linear structure.

Such a topology is non-unique in general, but unique when the real linear space is finite-dimensional. For these spaces the transition is both injective and surjective, that is, bijective; see the arrow from "finite-dim real linear topological" to "finite-dim real linear" on Fig. 4. The inverse transition exists (and could be shown by a second, backward arrow). The two species of structures are thus equivalent. In practice, one makes no distinction between equivalent species of structures.[10] Equivalent structures may be treated as a single structure, as shown by a large box on Fig. 4. The transitions denoted by the arrows obey isomorphisms. That is, two isomorphic A-spaces lead to two isomorphic B-spaces. The diagram on Fig. 4 is commutative.
That is, all directed paths in the diagram with the same start and endpoints lead to the same result. Other diagrams below are also commutative, except for the dashed arrows on Fig. 9. The arrow from "topological" to "measurable" is dashed for the reason explained there: "In order to turn a topological space into a measurable space one endows it with a σ-algebra. The σ-algebra of Borel sets is the most popular, but not the only choice." A solid arrow denotes a prevalent, so-called "canonical" transition that suggests itself naturally and is widely used, often implicitly, by default. For example, speaking about a continuous function on a Euclidean space, one need not specify its topology explicitly. In fact, alternative topologies exist and are sometimes used, for example, the fine topology; but these are always specified explicitly, since they are much less notable than the prevalent topology. A dashed arrow indicates that several transitions are in use and no one is quite prevalent.

Two basic types of spaces are linear spaces (also called vector spaces) and topological spaces. Linear spaces are of algebraic nature; there are real linear spaces (over the field of real numbers), complex linear spaces (over the field of complex numbers), and more generally, linear spaces over any field. Every complex linear space is also a real linear space (the latter underlies the former), since each complex number can be specified by two real numbers. For example, the complex plane treated as a one-dimensional complex linear space may be downgraded to a two-dimensional real linear space. In contrast, the real line can be treated as a one-dimensional real linear space but not as a complex linear space. See also field extensions. More generally, a vector space over a field also has the structure of a vector space over a subfield of that field. Linear operations, given in a linear space by definition, lead to such notions as straight lines (and planes, and other linear subspaces); parallel lines; ellipses (and ellipsoids). However, it is impossible to define orthogonal (perpendicular) lines, or to single out circles among ellipses, because in a linear space there is no structure like a scalar product that could be used for measuring angles. The dimension of a linear space is defined as the maximal number of linearly independent vectors or, equivalently, as the minimal number of vectors that span the space; it may be finite or infinite. Two linear spaces over the same field are isomorphic if and only if they are of the same dimension. An n-dimensional complex linear space is also a 2n-dimensional real linear space.

Topological spaces are of analytic nature. Open sets, given in a topological space by definition, lead to such notions as continuous functions, paths, maps; convergent sequences, limits; interior, boundary, exterior. However, uniform continuity, bounded sets, Cauchy sequences, and differentiable functions (paths, maps) remain undefined. Isomorphisms between topological spaces are traditionally called homeomorphisms; these are one-to-one correspondences continuous in both directions. The open interval (0, 1) is homeomorphic to the whole real line (−∞, ∞) but not homeomorphic to the closed interval [0, 1], nor to a circle. The surface of a cube is homeomorphic to a sphere (the surface of a ball) but not homeomorphic to a torus. Euclidean spaces of different dimensions are not homeomorphic, which seems evident, but is not easy to prove.
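As a concrete illustration of the last claim about the open interval (our own sketch, not part of the article), the map t ↦ tan(π(t − 1/2)) is an explicit homeomorphism from (0, 1) onto the whole real line, continuous with a continuous inverse; the Python check below verifies the round trip on sample points:

```python
import math

def h(t):
    """Explicit homeomorphism from the open interval (0, 1) onto the real line."""
    return math.tan(math.pi * (t - 0.5))

def h_inv(x):
    """Its continuous inverse, mapping the real line back onto (0, 1)."""
    return math.atan(x) / math.pi + 0.5

for t in (0.1, 0.5, 0.9):
    assert abs(h_inv(h(t)) - t) < 1e-9  # bijective, continuous both ways
print(h(0.5))       # 0.0: the midpoint goes to the origin
print(h_inv(1e6))   # close to 1: large reals come from points near the endpoint
```

Note that h necessarily distorts distances, which is exactly why such a map can exist for topological spaces but not for metric-preserving isomorphisms.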
The dimension of a topological space is difficult to define; inductive dimension (based on the observation that the dimension of the boundary of a geometric figure is usually one less than the dimension of the figure itself) and Lebesgue covering dimension can be used. In the case of an n-dimensional Euclidean space, both topological dimensions are equal to n. Every subset of a topological space is itself a topological space (in contrast, only linear subsets of a linear space are linear spaces). Arbitrary topological spaces, investigated by general topology (also called point-set topology), are too diverse for a complete classification up to homeomorphism. Compact topological spaces are an important class of topological spaces ("species" of this "type"). Every continuous function is bounded on such a space. The closed interval [0, 1] and the extended real line [−∞, ∞] are compact; the open interval (0, 1) and the line (−∞, ∞) are not. Geometric topology investigates manifolds (another "species" of this "type"); these are topological spaces locally homeomorphic to Euclidean spaces (and satisfying a few extra conditions). Low-dimensional manifolds are completely classified up to homeomorphism.

Both the linear and topological structures underlie the linear topological space (in other words, topological vector space) structure. A linear topological space is both a real or complex linear space and a topological space, such that the linear operations are continuous. So a linear space that also carries a topology is not, in general, a linear topological space. Every finite-dimensional real or complex linear space is a linear topological space in the sense that it carries one and only one topology that makes it a linear topological space. The two structures, "finite-dimensional real or complex linear space" and "finite-dimensional linear topological space", are thus equivalent, that is, mutually underlying. Accordingly, every invertible linear transformation of a finite-dimensional linear topological space is a homeomorphism. The three notions of dimension (one algebraic and two topological) agree for finite-dimensional real linear spaces. In infinite-dimensional spaces, however, different topologies can conform to a given linear structure, and invertible linear transformations are generally not homeomorphisms.

It is convenient to introduce affine and projective spaces by means of linear spaces, as follows. An n-dimensional linear subspace of an (n+1)-dimensional linear space, being itself an n-dimensional linear space, is not homogeneous; it contains a special point, the origin. Shifting it by a vector external to it, one obtains an n-dimensional affine subspace. It is homogeneous. An affine space need not be included in a linear space, but is isomorphic to an affine subspace of a linear space. All n-dimensional affine spaces over a given field are mutually isomorphic. In the words of John Baez, "an affine space is a vector space that's forgotten its origin". In particular, every linear space is also an affine space.

Given an n-dimensional affine subspace A in an (n+1)-dimensional linear space L, a straight line in A may be defined as the intersection of A with a two-dimensional linear subspace of L that intersects A: in other words, with a plane through the origin that is not parallel to A. More generally, a k-dimensional affine subspace of A is the intersection of A with a (k+1)-dimensional linear subspace of L that intersects A. Every point of the affine subspace A is the intersection of A with a one-dimensional linear subspace of L.
However, some one-dimensional subspaces of L are parallel to A; in some sense, they intersect A at infinity. The set of all one-dimensional linear subspaces of an (n+1)-dimensional linear space is, by definition, an n-dimensional projective space. And the affine subspace A is embedded into the projective space as a proper subset. However, the projective space itself is homogeneous. A straight line in the projective space corresponds to a two-dimensional linear subspace of the (n+1)-dimensional linear space. More generally, a k-dimensional projective subspace of the projective space corresponds to a (k+1)-dimensional linear subspace of the (n+1)-dimensional linear space, and is isomorphic to the k-dimensional projective space.

Defined this way, affine and projective spaces are of algebraic nature; they can be real, complex, and more generally, over any field. Every real or complex affine or projective space is also a topological space. An affine space is a non-compact manifold; a projective space is a compact manifold. In a real projective space a straight line is homeomorphic to a circle, and is therefore compact, in contrast to a straight line in a linear or affine space.

Distances between points are defined in a metric space. Isomorphisms between metric spaces are called isometries. Every metric space is also a topological space. A topological space is called metrizable if it underlies a metric space. All manifolds are metrizable. In a metric space, we can define bounded sets and Cauchy sequences. A metric space is called complete if all Cauchy sequences converge. Every incomplete space is isometrically embedded, as a dense subset, into a complete space (the completion). Every compact metric space is complete; the real line is non-compact but complete; the open interval (0, 1) is incomplete. Every Euclidean space is also a complete metric space. Moreover, all geometric notions immanent to a Euclidean space can be characterized in terms of its metric. For example, the straight segment connecting two given points A and C consists of all points B such that the distance between A and C is equal to the sum of the two distances, between A and B and between B and C. The Hausdorff dimension (related to the number of small balls that cover the given set) applies to metric spaces, and can be non-integer (especially for fractals). For an n-dimensional Euclidean space, the Hausdorff dimension is equal to n.

Uniform spaces do not introduce distances, but still allow one to use uniform continuity, Cauchy sequences (or filters or nets), completeness and completion. Every uniform space is also a topological space. Every linear topological space (metrizable or not) is also a uniform space, and is complete in finite dimension but generally incomplete in infinite dimension. More generally, every commutative topological group is also a uniform space. A non-commutative topological group, however, carries two uniform structures, one left-invariant, the other right-invariant.

Vectors in a Euclidean space form a linear space, but each vector x also has a length, in other words, a norm, {\displaystyle \lVert x\rVert }. A real or complex linear space endowed with a norm is a normed space. Every normed space is both a linear topological space and a metric space. A Banach space is a complete normed space. Many spaces of sequences or functions are infinite-dimensional Banach spaces. The set of all vectors of norm less than one is called the unit ball of a normed space.
It is a convex, centrally symmetric set, generally not an ellipsoid; for example, it may be a polygon (in the plane) or, more generally, a polytope (in arbitrary finite dimension). The parallelogram law (also called the parallelogram identity) generally fails in normed spaces, but holds for vectors in Euclidean spaces, which follows from the fact that the squared Euclidean norm of a vector is its inner product with itself, {\displaystyle \lVert x\rVert ^{2}=(x,x)}.

An inner product space is a real or complex linear space, endowed with a bilinear or respectively sesquilinear form, satisfying some conditions and called an inner product. Every inner product space is also a normed space. A normed space underlies an inner product space if and only if it satisfies the parallelogram law, or equivalently, if its unit ball is an ellipsoid. Angles between vectors are defined in inner product spaces. A Hilbert space is defined as a complete inner product space. (Some authors insist that it must be complex, others admit also real Hilbert spaces.) Many spaces of sequences or functions are infinite-dimensional Hilbert spaces. Hilbert spaces are very important for quantum theory.[11] All n-dimensional real inner product spaces are mutually isomorphic. One may say that the n-dimensional Euclidean space is the n-dimensional real inner product space that forgot its origin.

Smooth manifolds are not called "spaces", but could be. Every smooth manifold is a topological manifold, and can be embedded into a finite-dimensional linear space. Smooth surfaces in a finite-dimensional linear space are smooth manifolds: for example, the surface of an ellipsoid is a smooth manifold, while a polytope is not. Real or complex finite-dimensional linear, affine and projective spaces are also smooth manifolds. At each one of its points, a smooth path in a smooth manifold has a tangent vector that belongs to the manifold's tangent space at this point. Tangent spaces to an n-dimensional smooth manifold are n-dimensional linear spaces. The differential of a smooth function on a smooth manifold provides a linear functional on the tangent space at each point.

A Riemannian manifold, or Riemann space, is a smooth manifold whose tangent spaces are endowed with inner products satisfying some conditions. Euclidean spaces are also Riemann spaces. Smooth surfaces in Euclidean spaces are Riemann spaces. A hyperbolic non-Euclidean space is also a Riemann space. A curve in a Riemann space has a length, and the length of the shortest curve between two points defines a distance, such that the Riemann space is a metric space. The angle between two curves intersecting at a point is the angle between their tangent lines. Waiving positivity of inner products on tangent spaces, one obtains pseudo-Riemann spaces, including the Lorentzian spaces that are very important for general relativity.

Waiving distances and angles while retaining volumes (of geometric bodies) one reaches measure theory. Besides the volume, a measure generalizes the notions of area, length, mass (or charge) distribution, and also probability distribution, according to Andrey Kolmogorov's approach to probability theory. A "geometric body" of classical mathematics is much more regular than just a set of points. The boundary of the body is of zero volume. Thus, the volume of the body is the volume of its interior, and the interior can be exhausted by an infinite sequence of cubes.
In contrast, the boundary of an arbitrary set of points can be of non-zero volume (an example: the set of all rational points inside a given cube). Measure theory succeeded in extending the notion of volume to a vast class of sets, the so-called measurable sets. Indeed, non-measurable sets almost never occur in applications. Measurable sets, given in a measurable space by definition, lead to measurable functions and maps. In order to turn a topological space into a measurable space one endows it with a σ-algebra. The σ-algebra of Borel sets is the most popular, but not the only, choice. (Baire sets, universally measurable sets, etc., are also sometimes used.) The topology is not uniquely determined by the Borel σ-algebra; for example, the norm topology and the weak topology on a separable Hilbert space lead to the same Borel σ-algebra. Not every σ-algebra is the Borel σ-algebra of some topology.[c] Actually, a σ-algebra can be generated by a given collection of sets (or functions) irrespective of any topology. Every subset of a measurable space is itself a measurable space.

Standard measurable spaces (also called standard Borel spaces) are especially useful due to some similarity to compact spaces (see EoM). Every bijective measurable mapping between standard measurable spaces is an isomorphism; that is, the inverse mapping is also measurable. And a mapping between such spaces is measurable if and only if its graph is measurable in the product space. Similarly, every bijective continuous mapping between compact metric spaces is a homeomorphism; that is, the inverse mapping is also continuous. And a mapping between such spaces is continuous if and only if its graph is closed in the product space. Every Borel set in a Euclidean space (and more generally, in a complete separable metric space), endowed with the Borel σ-algebra, is a standard measurable space. All uncountable standard measurable spaces are mutually isomorphic.

A measure space is a measurable space endowed with a measure. A Euclidean space with the Lebesgue measure is a measure space. Integration theory defines integrability and integrals of measurable functions on a measure space. Sets of measure 0, called null sets, are negligible. Accordingly, a "mod 0 isomorphism" is defined as an isomorphism between subsets of full measure (that is, with negligible complement).

A probability space is a measure space such that the measure of the whole space is equal to 1. The product of any family (finite or not) of probability spaces is a probability space. In contrast, for measure spaces in general, only the product of finitely many spaces is defined. Accordingly, there are many infinite-dimensional probability measures (especially, Gaussian measures), but no infinite-dimensional Lebesgue measures. Standard probability spaces are especially useful. On a standard probability space a conditional expectation may be treated as the integral over the conditional measure (regular conditional probabilities, see also disintegration of measure). Given two standard probability spaces, every homomorphism of their measure algebras is induced by some measure-preserving map. Every probability measure on a standard measurable space leads to a standard probability space. The product of a sequence (finite or not) of standard probability spaces is a standard probability space. All non-atomic standard probability spaces are mutually isomorphic mod 0; one of them is the interval (0, 1) with the Lebesgue measure. These spaces are less geometric.
In particular, the idea of dimension, applicable (in one form or another) to all other spaces, does not apply to measurable, measure and probability spaces.

The theoretical study of calculus, known as mathematical analysis, led in the early 20th century to the consideration of linear spaces of real-valued or complex-valued functions. The earliest examples of these were function spaces, each one adapted to its own class of problems. These examples shared many common features, and these features were soon abstracted into Hilbert spaces, Banach spaces, and more general topological vector spaces. These provided a powerful toolkit for the solution of a wide range of mathematical problems.

The most detailed information was carried by a class of spaces called Banach algebras. These are Banach spaces together with a continuous multiplication operation. An important early example was the Banach algebra of essentially bounded measurable functions on a measure space X. This set of functions is a Banach space under pointwise addition and scalar multiplication. With the operation of pointwise multiplication, it becomes a special type of Banach space, one now called a commutative von Neumann algebra. Pointwise multiplication determines a representation of this algebra on the Hilbert space of square-integrable functions on X. An early observation of John von Neumann was that this correspondence also worked in reverse: given some mild technical hypotheses, a commutative von Neumann algebra together with a representation on a Hilbert space determines a measure space, and these two constructions (of a von Neumann algebra plus a representation, and of a measure space) are mutually inverse.

Von Neumann then proposed that non-commutative von Neumann algebras should have geometric meaning, just as commutative von Neumann algebras do. Together with Francis Murray, he produced a classification of von Neumann algebras. The direct integral construction shows how to break any von Neumann algebra into a collection of simpler algebras called factors. Von Neumann and Murray classified factors into three types. Type I was nearly identical to the commutative case. Types II and III exhibited new phenomena. A type II von Neumann algebra determined a geometry with the peculiar feature that the dimension could be any non-negative real number, not just an integer. Type III algebras were those that were neither type I nor type II, and after several decades of effort, these were proven to be closely related to type II factors.

A slightly different approach to the geometry of function spaces developed at the same time as von Neumann and Murray's work on the classification of factors. This approach is the theory of C*-algebras. Here, the motivating example is the C*-algebra {\displaystyle C_{0}(X)}, where X is a locally compact Hausdorff topological space. By definition, this is the algebra of continuous complex-valued functions on X that vanish at infinity (which loosely means that the farther you go from a chosen point, the closer the function gets to zero) with the operations of pointwise addition and multiplication. The Gelfand–Naimark theorem implied that there is a correspondence between commutative C*-algebras and geometric objects: every commutative C*-algebra is of the form {\displaystyle C_{0}(X)} for some locally compact Hausdorff space X.
Consequently it is possible to study locally compact Hausdorff spaces purely in terms of commutative C*-algebras. Non-commutative geometry takes this as inspiration for the study of non-commutative C*-algebras: if there were such a thing as a "non-commutative space X", then its {\displaystyle C_{0}(X)} would be a non-commutative C*-algebra; if in addition the Gelfand–Naimark theorem applied to these non-existent objects, then spaces (commutative or not) would be the same as C*-algebras; so, for lack of a direct approach to the definition of a non-commutative space, a non-commutative space is defined to be a non-commutative C*-algebra. Many standard geometric tools can be restated in terms of C*-algebras, and this gives geometrically-inspired techniques for studying non-commutative C*-algebras. Both of these examples are now cases of a field called non-commutative geometry. The specific examples of von Neumann algebras and C*-algebras are known as non-commutative measure theory and non-commutative topology, respectively.

Non-commutative geometry is not merely a pursuit of generality for its own sake and is not just a curiosity. Non-commutative spaces arise naturally, even inevitably, from some constructions. For example, consider the non-periodic Penrose tilings of the plane by kites and darts. It is a theorem that, in such a tiling, every finite patch of kites and darts appears infinitely often. As a consequence, there is no way to distinguish two Penrose tilings by looking at a finite portion. This makes it impossible to assign the set of all tilings a topology in the traditional sense. Despite this, the Penrose tilings determine a non-commutative C*-algebra, and consequently they can be studied by the techniques of non-commutative geometry. Another example, and one of great interest within differential geometry, comes from foliations of manifolds. These are ways of splitting the manifold up into smaller-dimensional submanifolds called leaves, each of which is locally parallel to others nearby. The set of all leaves can be made into a topological space. However, the example of an irrational rotation shows that this topological space can be inaccessible to the techniques of classical measure theory. However, there is a non-commutative von Neumann algebra associated to the leaf space of a foliation, and once again, this gives an otherwise unintelligible space a good geometric structure.

Algebraic geometry studies the geometric properties of polynomial equations. Polynomials are a type of function defined from the basic arithmetic operations of addition and multiplication. Because of this, they are closely tied to algebra. Algebraic geometry offers a way to apply geometric techniques to questions of pure algebra, and vice versa. Prior to the 1940s, algebraic geometry worked exclusively over the complex numbers, and the most fundamental variety was projective space. The geometry of projective space is closely related to the theory of perspective, and its algebra is described by homogeneous polynomials. All other varieties were defined as subsets of projective space. Projective varieties were subsets defined by a set of homogeneous polynomials. At each point of the projective variety, all the polynomials in the set were required to equal zero. The complement of the zero set of a linear polynomial is an affine space, and an affine variety was the intersection of a projective variety with an affine space.
André Weil saw that geometric reasoning could sometimes be applied in number-theoretic situations where the spaces in question might be discrete or even finite. In pursuit of this idea, Weil rewrote the foundations of algebraic geometry, both freeing algebraic geometry from its reliance on complex numbers and introducing abstract algebraic varieties which were not embedded in projective space. These are now simply called varieties.

The type of space that underlies most modern algebraic geometry is even more general than Weil's abstract algebraic varieties. It was introduced by Alexander Grothendieck and is called a scheme. One of the motivations for scheme theory is that polynomials are unusually structured among functions, and algebraic varieties are consequently rigid. This presents problems when attempting to study degenerate situations. For example, almost any pair of points on a circle determines a unique line called the secant line, and as the two points move around the circle, the secant line varies continuously. However, when the two points collide, the secant line degenerates to a tangent line. The tangent line is unique, but the geometry of this configuration, a single point on a circle, is not expressive enough to determine a unique line. Studying situations like this requires a theory capable of assigning extra data to degenerate situations.

One of the building blocks of a scheme is a topological space. Topological spaces have continuous functions, but continuous functions are too general to reflect the underlying algebraic structure of interest. The other ingredient in a scheme, therefore, is a sheaf on the topological space, called the "structure sheaf". On each open subset of the topological space, the sheaf specifies a collection of functions, called "regular functions". The topological space and the structure sheaf together are required to satisfy conditions that mean the functions come from algebraic operations.

Like manifolds, schemes are defined as spaces that are locally modeled on a familiar space. In the case of manifolds, the familiar space is Euclidean space. For a scheme, the local models are called affine schemes. Affine schemes provide a direct link between algebraic geometry and commutative algebra. The fundamental objects of study in commutative algebra are commutative rings. If {\displaystyle R} is a commutative ring, then there is a corresponding affine scheme {\displaystyle \operatorname {Spec} R} which translates the algebraic structure of {\displaystyle R} into geometry. Conversely, every affine scheme determines a commutative ring, namely, the ring of global sections of its structure sheaf. These two operations are mutually inverse, so affine schemes provide a new language with which to study questions in commutative algebra. By definition, every point in a scheme has an open neighborhood which is an affine scheme.

There are many schemes that are not affine. In particular, projective spaces satisfy a condition called properness which is analogous to compactness. Affine schemes cannot be proper (except in trivial situations like when the scheme has only a single point), and hence no projective space is an affine scheme (except for zero-dimensional projective spaces).
Projective schemes, meaning those that arise as closed subschemes of a projective space, are the single most important family of schemes.[12]

Several generalizations of schemes have been introduced. Michael Artin defined an algebraic space as the quotient of a scheme by the equivalence relations that define étale morphisms. Algebraic spaces retain many of the useful properties of schemes while simultaneously being more flexible. For instance, the Keel–Mori theorem can be used to show that many moduli spaces are algebraic spaces.

More general than an algebraic space is a Deligne–Mumford stack. DM stacks are similar to schemes, but they permit singularities that cannot be described solely in terms of polynomials. They play the same role for schemes that orbifolds do for manifolds. For example, the quotient of the affine plane by a finite group of rotations around the origin yields a Deligne–Mumford stack that is not a scheme or an algebraic space. Away from the origin, the quotient by the group action identifies finite sets of equally spaced points on a circle. But at the origin, the circle consists of only a single point, the origin itself, and the group action fixes this point. In the quotient DM stack, however, this point comes with the extra data of being a quotient. This kind of refined structure is useful in the theory of moduli spaces, and in fact, it was originally introduced to describe moduli of algebraic curves.

A further generalization is given by the algebraic stacks, also called Artin stacks. DM stacks are limited to quotients by finite group actions. While this suffices for many problems in moduli theory, it is too restrictive for others, and Artin stacks permit more general quotients.

In Grothendieck's work on the Weil conjectures, he introduced a new type of topology now called a Grothendieck topology. A topological space (in the ordinary sense) axiomatizes the notion of "nearness", making two points be nearby if and only if they lie in many of the same open sets. By contrast, a Grothendieck topology axiomatizes the notion of "covering". A covering of a space is a collection of subspaces that jointly contain all the information of the ambient space. Since sheaves are defined in terms of coverings, a Grothendieck topology can also be seen as an axiomatization of the theory of sheaves.

Grothendieck's work on his topologies led him to the theory of topoi. In his memoir Récoltes et Semailles, he called them his "most vast conception".[13] A sheaf (either on a topological space or with respect to a Grothendieck topology) is used to express local data. The category of all sheaves carries all possible ways of expressing local data. Since topological spaces are constructed from points, which are themselves a kind of local data, the category of sheaves can therefore be used as a replacement for the original space. Grothendieck consequently defined a topos to be a category of sheaves and studied topoi as objects of interest in their own right. These are now called Grothendieck topoi.

Every topological space determines a topos, and vice versa. There are topological spaces where taking the associated topos loses information, but these are generally considered pathological. (A necessary and sufficient condition is that the topological space be a sober space.) Conversely, there are topoi whose associated topological spaces do not capture the original topos. But, far from being pathological, these topoi can be of great mathematical interest.
For instance, Grothendieck's theory of étale cohomology (which eventually led to the proof of the Weil conjectures) can be phrased as cohomology in the étale topos of a scheme, and this topos does not come from a topological space.

Topological spaces in fact lead to very special topoi called locales. The set of open subsets of a topological space determines a lattice. The axioms for a topological space cause these lattices to be complete Heyting algebras. The theory of locales takes this as its starting point. A locale is defined to be a complete Heyting algebra, and the elementary properties of topological spaces are re-expressed and reproved in these terms. The concept of a locale turns out to be more general than a topological space, in that every sober topological space determines a unique locale, but many interesting locales do not come from topological spaces. Because locales need not have points, the study of locales is somewhat jokingly called pointless topology.

Topoi also display deep connections to mathematical logic. Every Grothendieck topos has a special sheaf called a subobject classifier. This subobject classifier functions like the set of all possible truth values. In the topos of sets, the subobject classifier is the set {\displaystyle \{0,1\}}, corresponding to "False" and "True". But in other topoi, the subobject classifier can be much more complicated. Lawvere and Tierney recognized that axiomatizing the subobject classifier yielded a more general kind of topos, now known as an elementary topos, and that elementary topoi were models of intuitionistic logic. In addition to providing a powerful way to apply tools from logic to geometry, this made possible the use of geometric methods in logic. According to Kevin Arlin,

Nevertheless, a general definition of "structure" was proposed by Bourbaki;[2] it embraces all types of spaces mentioned above, (nearly?) all types of mathematical structures used till now, and more. It provides a general definition of isomorphism, and justifies transfer of properties between isomorphic structures. However, it was never used actively in mathematical practice (not even in the mathematical treatises written by Bourbaki himself). Here are the last phrases from a review by Robert Reed[14] of a book by Leo Corry:

For more information on mathematical structures see Wikipedia: mathematical structure, equivalent definitions of mathematical structures, and transport of structure.

The distinction between geometric "spaces" and algebraic "structures" is sometimes clear, sometimes elusive. Clearly, groups are algebraic, while Euclidean spaces are geometric. Modules over rings are as algebraic as groups. In particular, when the ring is a field, the module is a linear space; is it algebraic or geometric? In particular, when it is finite-dimensional, over the real numbers, and endowed with an inner product, it becomes a Euclidean space; now geometric. The (algebraic?) field of real numbers is the same as the (geometric?) real line. Its algebraic closure, the (algebraic?) field of complex numbers, is the same as the (geometric?) complex plane. It is first of all "a place we do analysis" (rather than algebra or geometry).

Every space treated in the section "Types of spaces" above, except for the "Non-commutative geometry", "Schemes" and "Topoi" subsections, is a set (the "principal base set" of the structure, according to Bourbaki) endowed with some additional structure; elements of the base set are usually called "points" of this space.
In contrast, elements of (the base set of) an algebraic structure usually are not called "points". However, sometimes one uses more than one principal base set. For example, two-dimensional projective geometry may be formalized via two base sets, the set of points and the set of lines. Moreover, a striking feature of projective planes is the symmetry of the roles played by points and lines. A less geometric example: a graph may be formalized via two base sets, the set of vertices (also called nodes or points) and the set of edges (also called arcs or lines). Generally, finitely many principal base sets and finitely many auxiliary base sets are stipulated by Bourbaki. Many mathematical structures of geometric flavor treated in the "Non-commutative geometry", "Schemes" and "Topoi" subsections above do not stipulate a base set of points. For example, "pointless topology" (in other words, point-free topology, or locale theory) starts with a single base set whose elements imitate open sets in a topological space (but are not sets of points); see also mereotopology and point-free geometry.

This article was submitted to WikiJournal of Science for external academic peer review in 2017 (reviewer reports). The updated content was reintegrated into the Wikipedia page under a CC-BY-SA-3.0 license (2018). The version of record as reviewed is: Boris Tsirelson; et al. (1 June 2018). "Spaces in mathematics" (PDF). WikiJournal of Science. 1 (1): 2. doi:10.15347/WJS/2018.002. ISSN 2470-6345. Wikidata Q55120290.
https://en.wikipedia.org/wiki/Space_(mathematics)
In mathematics, an ultrametric space is a metric space in which the triangle inequality is strengthened to {\displaystyle d(x,z)\leq \max \left\{d(x,y),d(y,z)\right\}} for all {\displaystyle x}, {\displaystyle y}, and {\displaystyle z}. Sometimes the associated metric is also called a non-Archimedean metric or super-metric.

An ultrametric on a set M is a real-valued function {\displaystyle d:M\times M\to \mathbb {R} } (where ℝ denotes the real numbers), such that for all x, y, z ∈ M (the list of conditions below follows the standard definition):

1. d(x, y) ≥ 0;
2. d(x, y) = d(y, x) (symmetry);
3. d(x, x) = 0;
4. if d(x, y) = 0 then x = y;
5. d(x, z) ≤ max{d(x, y), d(y, z)} (the strong triangle inequality or ultrametric inequality).

An ultrametric space is a pair (M, d) consisting of a set M together with an ultrametric d on M, which is called the space's associated distance function (also called a metric). If d satisfies all of the conditions except possibly condition 4, then d is called an ultrapseudometric on M. An ultrapseudometric space is a pair (M, d) consisting of a set M and an ultrapseudometric d on M.[1]

In the case when M is an Abelian group (written additively) and d is generated by a length function {\displaystyle \|\cdot \|} (so that {\displaystyle d(x,y)=\|x-y\|}), the last property can be made stronger using the Krull sharpening to: {\displaystyle \|x+y\|\leq \max \left\{\|x\|,\|y\|\right\}}, with equality if {\displaystyle \|x\|\neq \|y\|}.

We want to prove that if {\displaystyle \|x+y\|\leq \max \left\{\|x\|,\|y\|\right\}}, then the equality occurs if {\displaystyle \|x\|\neq \|y\|}. Without loss of generality, let us assume that {\displaystyle \|x\|>\|y\|.} This implies that {\displaystyle \|x+y\|\leq \|x\|}. But we can also compute {\displaystyle \|x\|=\|(x+y)-y\|\leq \max \left\{\|x+y\|,\|y\|\right\}}. Now, the value of {\displaystyle \max \left\{\|x+y\|,\|y\|\right\}} cannot be {\displaystyle \|y\|}, for if that were the case, we would have {\displaystyle \|x\|\leq \|y\|}, contrary to the initial assumption. Thus, {\displaystyle \max \left\{\|x+y\|,\|y\|\right\}=\|x+y\|}, and {\displaystyle \|x\|\leq \|x+y\|}. Using the initial inequality, we have {\displaystyle \|x\|\leq \|x+y\|\leq \|x\|} and therefore {\displaystyle \|x+y\|=\|x\|}.

From the above definition, one can conclude several typical properties of ultrametrics. For example, for all {\displaystyle x,y,z\in M}, at least one of the three equalities {\displaystyle d(x,y)=d(y,z)} or {\displaystyle d(x,z)=d(y,z)} or {\displaystyle d(x,y)=d(z,x)} holds. That is, every triple of points in the space forms an isosceles triangle, so the whole space is an isosceles set.

Defining the (open) ball of radius {\displaystyle r>0} centred at {\displaystyle x\in M} as {\displaystyle B(x;r):=\{y\in M\mid d(x,y)<r\}}, we have the following properties (again the standard ones):

1. If two balls intersect, then one is contained in the other.
2. Every point of a ball is a center of it: if {\displaystyle d(x,y)<r} then {\displaystyle B(x;r)=B(y;r)}.
3. Every ball of positive radius is both open and closed in the induced topology.

Proving these statements is an instructive exercise.[2] All directly derive from the ultrametric triangle inequality. Note that, by the second statement, a ball may have several center points that have non-zero distance. The intuition behind such seemingly strange effects is that, due to the strong triangle inequality, distances in ultrametrics do not add up.
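To see these properties in action, here is a small Python sketch (our own example, not from the article) using the p-adic ultrametric on the integers, d(x, y) = p^(−v) where p^v is the largest power of the prime p dividing x − y; it spot-checks the strong triangle inequality and the isosceles-triangle property on a sample of points:

```python
from itertools import combinations

P = 3  # any prime gives an ultrametric on the integers

def padic_dist(x, y, p=P):
    """p-adic distance: p**-v, where p**v is the largest power of p dividing x - y."""
    if x == y:
        return 0.0
    n, v = abs(x - y), 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** -v

points = range(-20, 21)
for x, y, z in combinations(points, 3):
    dxy, dyz, dxz = padic_dist(x, y), padic_dist(y, z), padic_dist(x, z)
    # strong triangle inequality
    assert dxz <= max(dxy, dyz)
    # every triple is isosceles: the two largest of the three distances coincide
    a, b, c = sorted((dxy, dyz, dxz))
    assert b == c
print("strong triangle inequality and isosceles property hold on the sample")
```

The isosceles check is exactly the "at least one of the three equalities holds" statement above: the maximum of the three pairwise distances is always attained at least twice.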
https://en.wikipedia.org/wiki/Ultrametric_space
In linguistics, agreement or concord (abbreviated agr) occurs when a word changes form depending on the other words to which it relates.[1] It is an instance of inflection, and usually involves making the value of some grammatical category (such as gender or person) "agree" between varied words or parts of the sentence. For example, in Standard English, one may say I am or he is, but not "I is" or "he am". This is because English grammar requires that the verb and its subject agree in person. The pronouns I and he are first and third person respectively, as are the verb forms am and is. The verb form must be selected so that it has the same person as the subject, in contrast to notional agreement, which is based on meaning.[2][3]

Agreement generally involves matching the value of some grammatical category between different constituents of a sentence (or sometimes between sentences, as in some cases where a pronoun is required to agree with its antecedent or referent). Some categories that commonly trigger grammatical agreement are noted below.

Agreement based on grammatical person is found mostly between verb and subject. An example from English (I am vs. he is) has been given in the introduction to this article. Agreement between pronoun (or corresponding possessive adjective) and antecedent also requires the selection of the correct person. For example, if the antecedent is the first person noun phrase Mary and I, then a first person pronoun (we/us/our) is required; however, most noun phrases (the dog, my cats, Jack and Jill, etc.) are third person, and are replaced by a third person pronoun (he/she/it/they etc.).

Agreement based on grammatical number can occur between verb and subject, as in the case of grammatical person discussed above. In fact the two categories are often conflated within verb conjugation patterns: there are specific verb forms for first person singular, second person plural, and so on. Again as with person, there is agreement in number between pronouns (or their corresponding possessives) and antecedents.

Agreement also occurs between nouns and their specifiers and modifiers in some situations. This is common in languages such as French and Spanish, where articles, determiners and adjectives (both attributive and predicative) agree in number with the nouns they qualify. In English this is not such a common feature, although there are certain determiners that occur specifically with singular or plural nouns only.

In languages in which grammatical gender plays a significant role, there is often agreement in gender between a noun and its modifiers, for example in French. Such agreement is also found with predicate adjectives: l'homme est grand ("the man is big") vs. la chaise est grande ("the chair is big"). However, in some languages, such as German, this is not the case; only attributive modifiers show agreement.

In the case of verbs, gender agreement is less common, although it may still occur, for example in Arabic verbs, where the second and third persons take different inflections for masculine and feminine subjects. In the French compound past tense, the past participle – formally an adjective – agrees in certain circumstances with the subject or with an object (see passé composé for details). In Russian and most other Slavic languages, the form of the past tense agrees in gender with the subject, again due to derivation from an earlier adjectival construction. There is also agreement in gender between pronouns and their antecedents.
Examples of this can be found in English (although English pronouns principally follow natural gender rather than grammatical gender); for more detail see Gender in English.

In languages that have a system of cases, there is often agreement by case between a noun and its modifiers, for example in German. In fact, the modifiers of nouns in languages such as German and Latin agree with their nouns in number, gender and case; all three categories are conflated together in paradigms of declension. Case agreement is not a significant feature of English (only personal pronouns and the pronoun who have any case marking), although agreement between such pronouns and their antecedents can sometimes be observed.

A rare type of agreement phonologically copies parts of the head rather than agreeing with a grammatical category.[4] For example, in Bainouk:

- katama-ŋɔ in-ka (river-PROX this): "this river"
- katama-ā-ŋɔ in-ka-ā (river-PL-PROX these): "these rivers"

In this example, what is copied is not a prefix, but rather the initial syllable of the head "river".

Languages can have no conventional agreement whatsoever, as in Japanese or Malay; barely any, as in English; a small amount, as in spoken French; a moderate amount, as in Greek or Latin; or a large amount, as in Swahili.

Modern English does not have a particularly large amount of agreement, although it is present. Apart from verbs, the main examples are the determiners "this" and "that", which become "these" and "those" respectively when the following noun is plural.

All regular verbs (and nearly all irregular ones) in English agree in the third-person singular of the present indicative by adding a suffix of either -s or -es. The latter is generally used after stems ending in the sibilants sh, ch, ss, or zz (e.g. he rushes, it lurches, she amasses, it buzzes). The present tense of to love is thus: I love, you love, he/she/it loves, we love, you love, they love.

In the present tense (indicative mood), a few verbs have irregular conjugations for the third-person singular. Note that there is a distinction between irregular verb conjugations in the spoken language and irregular spellings of words in the written language. Linguistics generally concerns itself with the natural, spoken language, and not with spelling conventions in the written language. The verb to go is often given as an example of a verb with an irregular present tense conjugation, on account of adding "-es" instead of just "-s" for the third person singular conjugation. However, this is merely an arbitrary spelling convention. In the spoken language, the present tense conjugation of to go is entirely regular. If we were to classify to go as irregular based on the spelling of goes, then by the same reasoning, we would have to include other regular verbs with irregular spelling conventions such as to veto/vetoes, to echo/echoes, to carry/carries, to hurry/hurries, etc.

In contrast, the verb to do is actually irregular in its spoken third-person singular conjugation, in addition to having a somewhat irregular spelling. While the verb do rhymes with shoe, its conjugation does does not rhyme with shoes; the verb does rhymes with fuzz. Conversely, the verb to say, while it may appear to be regular based on its spelling, is in fact irregular in its third person singular present tense conjugation: say is pronounced /seɪ/, but says is pronounced /sɛz/. Say rhymes with pay, but says does not rhyme with pays. The highly irregular verb to be is the only verb with more agreement than this in the present tense.
The present tense of to be: I am, you are, he/she/it is, we are, you are, they are.

In English, defective verbs generally show no agreement for person or number; they include the modal verbs can, may, shall, will, must, should, ought.

In Early Modern English, agreement existed for the second person singular of all verbs in the present tense, as well as in the past tense of some common verbs. This was usually in the form -est, but -st and -t also occurred. Note that this does not affect the endings for other persons and numbers. Example present tense forms: thou wilt, thou shalt, thou art, thou hast, thou canst. Example past tense forms: thou wouldst, thou shouldst, thou wast, thou hadst, thou couldst. Note also the agreement shown by to be even in the subjunctive mood. However, for nearly all regular verbs, a separate thou form was no longer commonly used in the past tense. Thus the auxiliary verb to do is used, e.g. thou didst help, not *thou helpedst.

Here are some special cases for subject–verb agreement in English:

Always singular:
- All's well that ends well.
- One sows, another reaps.
- Together Everyone Achieves More – that's why we're a TEAM.
- If wealth is lost, nothing is lost. If health is lost, something is lost. If the character is lost, everything is lost.
- Nothing succeeds like success.

Exceptions: None is construed in the singular or plural as the sense may require, though the plural is commonly used.[5] When none is clearly intended to mean not one, it should be followed by a singular verb. The SAT testing service, however, considers none to be strictly singular.[6]
- None so deaf as those who don't hear.
- None prosper by begging.
- Every dog is a lion at home.
- Many a penny makes a pound.
- Each man and each woman has a vote.

Exceptions: When the subject is followed by each, the verb agrees with the original subject.
- Double coincidence of wants occurs when two parties each desire to sell what the other exactly wants to buy.
- A thousand dollars is a high price to pay.

Exceptions: Ten dollars were scattered on the floor. (= ten dollar bills)

Exceptions: A fraction or percentage can be singular or plural based on the noun that follows it.
- Half a loaf is better than no bread.
- One in three people globally do not have access to safe drinking water.
- Who is to bell the cat?
- A food web is a graphical representation of what-eats-what in an ecosystem.
- Two and two is four.

Always plural:
- The MD and the CEO of the company have arrived.
- Time and tide wait for none.
- Weal and woe come by turns.
- Day and night are alike to a blind man.

Exceptions: If the nouns, however, suggest one idea or refer to the same thing or person, the verb is singular.[5]
- The good and generous thinks the whole world is friendly.
- The new bed and breakfast opens this week.
- The MD and CEO has arrived.

Exceptions: Words joined to a subject by with, in addition to, along with, as well (as), together with, besides, not, etc. are parenthetical, and the verb agrees with the original subject.[5]
- One cow breaks the fence, and a dozen leap it.
- A dozen of eggs cost around $1.5.
- 1 mole of oxygen react with 2 moles of hydrogen gas to form water.
- The rich plan for tomorrow, the poor for today.
- Where the cattle stand together, the lion lies down hungry.

Singular or plural:
- Success or failure depends on individuals.
- Neither I nor you are to blame.
- Either you or he has to go. (But at times, it is considered better to reword such grammatically correct but awkward sentences.)
- The jury has arrived at a unanimous decision.
- The committee are divided in their opinion.
- His family is quite large.
- His family have given him full support in his times of grief.
- There's a huge audience in the gallery today.
- The audience are requested to take their seats.

Exceptions: British English, however, tends to treat team and company names as plural.
- India beat Sri Lanka by six wickets in a pulsating final to deliver World Cup glory to their cricket-mad population for the first time since 1983. (BBC)[7]
- India wins cricket World Cup for 1st time in 28 years. (Washington Post)[8]
- There's more than one way to skin a cat.

Compared with English, Latin is an example of a highly inflected language. The consequences for agreement are thus: verbs must agree in person and number, and sometimes in gender, with their subjects; articles and adjectives must agree in case, number and gender with the nouns they modify. A sample Latin verb, the present indicative active of portāre ("to carry"), runs: portō, portās, portat, portāmus, portātis, portant. In Latin, a pronoun such as ego or tu is only inserted for contrast and selection; proper nouns and common nouns functioning as subject are nonetheless frequent. For this reason, Latin is described as a null-subject language.

Spoken French always distinguishes the second person plural, and the first person plural in formal speech, from each other and from the rest of the present tense in all verbs in the first conjugation (infinitives in -er) other than aller. The first person plural form and pronoun (nous) are now usually replaced by the pronoun on (literally: "one") and a third person singular verb form in Modern French. Thus, nous travaillons (formal) becomes on travaille. In most verbs from the other conjugations, each person in the plural can be distinguished among themselves and from the singular forms, again, when using the traditional first person plural. The other endings that appear in written French (i.e. all singular endings, and also the third person plural of verbs other than those with infinitives in -er) are often pronounced the same, except in liaison contexts. Irregular verbs such as être, faire, aller, and avoir possess more distinctly pronounced agreement forms than regular verbs.

An example of this is the verb travailler ("to work"): je travaille, tu travailles, il travaille, nous travaillons, vous travaillez, ils travaillent, where travaille, travailles and travaillent are all pronounced /tʁa.vaj/. On the other hand, a verb like partir ("to leave") has je pars, tu pars, il part, nous partons, vous partez, ils partent, where pars, pars and part are all pronounced /paʁ/. The final s or t is silent, and the other three forms sound different from one another and from the singular forms.

Adjectives agree in gender and number with the nouns that they modify in French. As with verbs, the agreements are sometimes only shown in spelling, since forms that are written with different agreement suffixes are sometimes pronounced the same (e.g. joli, jolie), although in many cases the final consonant is pronounced in feminine forms but silent in masculine forms (e.g. petit vs. petite). Most plural forms end in -s, but this consonant is only pronounced in liaison contexts, and it is determinants that help understand whether the singular or plural is meant. The participles of verbs agree in gender and number with the subject or object in some instances. Articles, possessives and other determinants also decline for number and (only in the singular) for gender, with plural determinants being the same for both genders.
This normally produces three forms: one for masculine singular nouns, one for feminine singular nouns, and another for plural nouns of either gender (e.g. le/la/les, du/de la/des, mon/ma/mes, ce/cette/ces). Notice that some of the above also change (in the singular) if the following word begins with a vowel: le and la become l′, du and de la become de l′, ma becomes mon (as if the noun were masculine) and ce becomes cet.

In Hungarian, verbs have polypersonal agreement, which means they agree with more than one of the verb's arguments: not only its subject but also its (accusative) object. A difference is made between the case when there is a definite object and the case when the object is indefinite or there is no object at all. (The adverbs do not affect the form of the verb.) Examples: szeretek (I love somebody or something unspecified), szeretem (I love him, her, it, or them, specifically), szeretlek (I love you); szeret (he loves me, us, you, someone, or something unspecified), szereti (he loves her, him, it, or them specifically). Of course, nouns or pronouns may specify the exact object. In short, there is agreement between a verb and the person and number of its subject and the specificity of its object (which often refers to the person more or less exactly).

The predicate agrees in number with the subject, and if it is copulative (i.e., it consists of a noun/adjective and a linking verb), both parts agree in number with the subject. For example: A könyvek érdekesek voltak "The books were interesting" ("a": the, "könyv": book, "érdekes": interesting, "voltak": were): the plural is marked on the subject as well as both the adjectival and the copulative part of the predicate. Within noun phrases, adjectives do not show agreement with the noun, though pronouns do. E.g. a szép könyveitekkel "with your nice books" ("szép": nice): the suffixes of the plural, the possessive "your" and the case marking "with" are only marked on the noun.

In the Scandinavian languages, adjectives (both attributive and predicative) are declined according to the gender, number, and definiteness of the noun they modify. In Icelandic and Faroese, adjectives are also declined according to grammatical case, unlike the other Scandinavian languages. In some cases in Swedish, Norwegian and Danish, adjectives and participles as predicates appear to disagree with their subjects; this phenomenon is referred to as pancake sentences. In Norwegian nynorsk, Swedish, Icelandic and Faroese, the past participle must agree in gender, number and definiteness when the participle is in an attributive or predicative position. In Icelandic and Faroese, past participles would also have to agree in grammatical case. In Norwegian bokmål and Danish, it is only required to decline past participles in number and definiteness when in an attributive position.

Most Slavic languages are highly inflected, except for Bulgarian and Macedonian. The agreement is similar to Latin, for instance between adjectives and nouns in gender, number, case and animacy (if counted as a separate category); the following observations apply to Serbo-Croatian. Verbs have six different forms in the present tense, for three persons in singular and plural. As in Latin, the subject is frequently dropped. Another characteristic is agreement in participles, which have different forms for different genders.

Swahili, like all other Bantu languages, has numerous noun classes. Verbs must agree in class with their subjects and objects, and adjectives with the nouns that they qualify.
For example: Kitabu kimoja kitatosha (One book will be enough), Mchungwa mmoja utatosha (One orange-tree will be enough), Chungwa moja litatosha (One orange will be enough). There is also agreement in number. For example: Vitabu viwili vitatosha (Two books will be enough), Michungwa miwili itatosha (Two orange-trees will be enough), Machungwa mawili yatatosha (Two oranges will be enough). Class and number are indicated with prefixes (or sometimes their absence), which are not always the same for nouns, adjectives and verbs, as illustrated by the examples.

Many sign languages have developed verb agreement with person. The ASL verb for "see" (V handshape) moves from the subject to the object. In the case of a third person subject, it goes from a location indexed to the subject to the object, and vice versa. Also, in German Sign Language not all verbs are capable of subject/object verb agreement, so an auxiliary verb is used to convey this, carrying the meaning of the previous verb while still inflecting for person. In addition, some verbs also agree with the classifier the subject takes. In the American Sign Language verb for "to be under", the classifier a verb takes goes under a downward-facing B handshape (palm facing downward). For example, if a person or an animal crawled under something, a V handshape with bent fingers would go under the palm, but if it were a pencil, a 1-handshape (pointer finger out) would go under the palm.
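The English third-person singular rule described earlier in this article (-es after sibilant-final stems, -s otherwise) is mechanical enough to sketch in code. The following Python fragment is our own illustration, not from the article; the orthographic rules are simplified, and the irregular lookup covers only verbs discussed above.

```python
# A simplified sketch of the English third-person singular spelling rules;
# names and coverage are our own simplifications.
IRREGULAR_3SG = {"be": "is", "have": "has"}

def third_person_singular(verb: str) -> str:
    """Return the third-person singular present form (orthographic)."""
    if verb in IRREGULAR_3SG:
        return IRREGULAR_3SG[verb]
    # -es after sibilant-final stems (he rushes, it lurches, she amasses,
    # it buzzes) and, by spelling convention, after -o (goes, vetoes, echoes)
    if verb.endswith(("sh", "ch", "ss", "zz", "x", "o")):
        return verb + "es"
    # consonant + y -> -ies (carry -> carries, hurry -> hurries)
    if verb.endswith("y") and verb[-2] not in "aeiou":
        return verb[:-1] + "ies"
    return verb + "s"

for v in ("rush", "lurch", "buzz", "go", "carry", "love", "be"):
    print(v, "->", third_person_singular(v))
```

The Swahili concord just illustrated can likewise be sketched as a lookup from noun class and number to the prefixes taken by the numeral and the verb. This is our own simplified illustration built only from the article's six example sentences; the class labels and the prefix table are assumptions.

```python
# (adjective/numeral prefix, verb subject prefix) keyed by (class, number);
# built from the examples above, e.g. "kitabu kimoja kitatosha".
CONCORD = {
    ("ki/vi", "sg"): ("ki", "ki"), ("ki/vi", "pl"): ("vi", "vi"),
    ("m/mi",  "sg"): ("m",  "u"),  ("m/mi",  "pl"): ("mi", "i"),
    ("ji/ma", "sg"): ("",   "li"), ("ji/ma", "pl"): ("ma", "ya"),
}

def agree(noun, noun_class, number, numeral_stem, verb_stem):
    """Attach the class/number concord prefixes to numeral and verb."""
    adj_pfx, subj_pfx = CONCORD[(noun_class, number)]
    return f"{noun} {adj_pfx}{numeral_stem} {subj_pfx}{verb_stem}"

print(agree("kitabu", "ki/vi", "sg", "moja", "tatosha"))     # kitabu kimoja kitatosha
print(agree("machungwa", "ji/ma", "pl", "wili", "tatosha"))  # machungwa mawili yatatosha
```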
https://en.wikipedia.org/wiki/Agreement_(linguistics)
Diction (Latin: dictionem (nom. dictio), "a saying, expression, word"),[1] in its original meaning, is a writer's or speaker's distinctive vocabulary choices and style of expression in a piece of writing such as a poem or story.[2][3] In its common meaning, it is the distinctiveness of speech:[3][4][5] the art of speaking so that each word is clearly heard and understood to its fullest complexity and extremity, and it concerns pronunciation and tone, rather than word choice and style. This is more precisely and commonly expressed with the term enunciation or with its synonym, articulation.[6]

Diction has multiple concerns, of which register, the adaptation of style and formality to the social context, is foremost. Literary diction analysis reveals how a passage establishes tone and characterization; e.g. a preponderance of verbs relating physical movement suggests an active character, while a preponderance of verbs relating states of mind portrays an introspective character. Diction also has an impact upon word choice and syntax.

Aristotle, in The Poetics (20), defines the parts of diction (λέξις)[7] as the letter, the syllable, the conjunction, the article, the noun, the verb, the case, and the speech (λόγος),[8] though one commentator remarks that "the text is so confused and some of the words have such a variety of meanings that one cannot always be certain what the Greek says, much less what Aristotle means."[9]

Diction is usually judged in reference to the prevailing standards of proper writing and speech and is seen as a mark of the quality of the writing. It is also understood as the selection of certain words or phrases that become peculiar to a writer or character.[10] Some modern writers use archaic terms such as "thy", "thee", and "wherefore" to imbue their work with a Shakespearean mood. Forms of diction include archaic diction (diction that is antique and rarely used), high diction (lofty-sounding language), and low diction (everyday language). Each of these forms is meant to enhance the meaning or artistry of an author's work.
https://en.wikipedia.org/wiki/Diction
In linguistics, intonation is the variation in pitch used to indicate the speaker's attitudes and emotions, to highlight or focus an expression, to signal the illocutionary act performed by a sentence, or to regulate the flow of discourse. For example, the English question "Does Maria speak Spanish or French?" is interpreted as a yes-or-no question when it is uttered with a single rising intonation contour, but is interpreted as an alternative question when uttered with a rising contour on "Spanish" and a falling contour on "French". Although intonation is primarily a matter of pitch variation, its effects almost always work hand-in-hand with other prosodic features. Intonation is distinct from tone, the phenomenon where pitch is used to distinguish words (as in Mandarin) or to mark grammatical features (as in Kinyarwanda).

Most transcription conventions have been devised for describing one particular accent or language, and the specific conventions therefore need to be explained in the context of what is being described. However, for general purposes the International Phonetic Alphabet offers two intonation marks: global rising and falling intonation are marked with a diagonal arrow rising left-to-right [↗︎] and falling left-to-right [↘︎], respectively. These may be written as part of a syllable, or separated with a space when they have a broader scope. In a question such as "He found it on the ↗street?", the rising pitch on street indicates that the question hinges on that word, on where he found it, not whether he found it. In wh-questions, by contrast, it is common to have a rising intonation on the question word and a falling intonation at the end of the question. In many descriptions of English, the intonation patterns distinguished include rising, falling, dipping (fall-rise) and peaking (rise-fall).

It is also common to trace the pitch of a phrase with a line above the phrase, adjacent to the phrase, or even through (overstriking) the phrase. Such usage is not supported by Unicode as of 2015, but the symbols have been submitted.

All vocal languages use pitch pragmatically in intonation, for instance for emphasis, to convey surprise or irony, or to pose a question. Tonal languages such as Chinese and Hausa use intonation in addition to using pitch for distinguishing words.[1] Many writers have attempted to produce a list of distinct functions of intonation. Perhaps the longest was that of W. R. Lee,[2] who proposed ten. J. C. Wells[3] and E. Couper-Kuhlen[4] both put forward six functions. It is not known whether any such list would apply to other languages without alteration.

The description of English intonation has developed along different lines in the US and in Britain. British descriptions of English intonation can be traced back to the 16th century.[5] Early in the 20th century, the dominant approach in the description of English and French intonation was based on a small number of basic "tunes" associated with intonation units: in a typical description, Tune 1 is falling, with final fall, while Tune 2 has a final rise.[6] Phoneticians such as H. E. Palmer[7] broke up the intonation of such units into smaller components, the most important of which was the nucleus, which corresponds to the main accented syllable of the intonation unit, usually in the last lexical word of the intonation unit.
Each nucleus carries one of a small number of nuclear tones, usually including fall, rise, fall-rise, rise-fall, and possibly others. The nucleus may be preceded by a head containing stressed syllables preceding the nucleus, and followed by a tail consisting of syllables following the nucleus within the tone unit. Unstressed syllables preceding the head (if present) or nucleus (if there is no head) constitute a pre-head. This approach was further developed by Halliday[8] and by O'Connor and Arnold,[9] though with considerable variation in terminology. This "Standard British" treatment of intonation in its present-day form is explained in detail by Wells[10] and in a simplified version by Roach.[11] Halliday saw the functions of intonation as depending on choices in three main variables: tonality (division of speech into intonation units), tonicity (the placement of the tonic syllable or nucleus) and tone (choice of nuclear tone);[12] these terms (sometimes referred to as "the three T's") have been used more recently.[10]

Research by Crystal[13][14] emphasized the importance of making generalizations about intonation based on authentic, unscripted speech, and the roles played by prosodic features such as tempo, pitch range, loudness and rhythmicality in communicative functions traditionally attributed to intonation alone. The transcription of intonation in such approaches is normally incorporated into the line of text, with a | mark indicating a division between intonation units.

An influential development in British studies of intonation has been Discourse Intonation, an offshoot of Discourse Analysis first put forward by David Brazil.[15][16] This approach lays great emphasis on the communicative and informational use of intonation, pointing out its use for distinguishing between presenting new information and referring to old, shared information, as well as signalling the relative status of participants in a conversation (e.g. teacher-pupil, or doctor-patient) and helping to regulate conversational turn-taking. The description of intonation in this approach owes much to Halliday. Intonation is analysed purely in terms of pitch movements and "key" and makes little reference to the other prosodic features usually thought to play a part in conversational interaction.

The dominant framework used for American English from the 1940s to the 1990s was based on the idea of pitch phonemes, or tonemes. In the work of Trager and Smith[17] there are four contrastive levels of pitch: low (1), middle (2), high (3), and very high (4). (The important work of Kenneth Pike on the same subject[18] had the four pitch levels labelled in the opposite way, with (1) being high and (4) being low.) In its final form, the Trager and Smith system was highly complex, each pitch phoneme having four pitch allophones (or allotones); there was also a Terminal Contour to end an intonation clause, as well as four stress phonemes.[19]

The American linguist Dwight Bolinger carried on a long campaign to argue that pitch contours were more important in the study of intonation than individual pitch levels.[20] Thus the two basic sentence pitch contours are rising-falling and rising. However, other within-sentence rises and falls result from the placement of prominence on the stressed syllables of certain words.
For declaratives or wh-questions with a final decline, the decline is located as a step-down to the syllable after the last prominently stressed syllable, or as a down-glide on the last syllable itself if it is prominently stressed. But for final rising pitch on yes–no questions, the rise always occurs as an upward step to the last stressed syllable, and the high (3) pitch is retained through the rest of the sentence.

A more recent approach to the analysis of intonation grew out of the research of Janet Pierrehumbert[23] and developed into the system most widely known by the name of ToBI (short for "Tones and Break Indices"). The approach is sometimes referred to as autosegmental. In a simplified ToBI transcription of the two phrases "we looked at the sky" and "and saw the clouds", combined into one larger intonational phrase, there is a rise on "sky" and a fall on "clouds". Because of its simplicity compared with previous analyses, the ToBI system has been very influential and has been adapted for describing several other languages.[24]

French intonation differs substantially from that of English.[25] There are four primary patterns. The most distinctive feature of French intonation is the continuation pattern. While many languages, such as English and Spanish, place stress on a particular syllable of each word, and while many speakers of languages such as English may accompany this stress with a rising intonation, French has neither stress nor distinctive intonation on a given syllable. Instead, on the final syllable of every "rhythm group" except the last one in a sentence, there is placed a rising pitch.[26] Adjectives are in the same rhythm group as their noun. Each item in a list forms its own rhythm group, and side comments inserted into the middle of a sentence form their own rhythm group.

A sharp fall in pitch is placed on the last syllable of a declarative statement, while the preceding syllables of the final rhythm group are at a relatively high pitch. Most commonly in informal speech, a yes/no question is indicated by a sharply rising pitch alone, without any change or rearrangement of words.[27] A form found in both spoken and written French is the est-ce que ("is it that ...") construction, in which the spoken question can end in either a rising or a falling pitch. The most formal form for a yes/no question, which is also found in both spoken and written French, inverts the order of the subject and verb; there too, the spoken question can end in either a rising or a falling pitch. Sometimes yes/no questions begin with a topic phrase, specifying the focus of the utterance. Then, the initial topic phrase follows the intonation pattern of a declarative sentence, and the rest of the question follows the usual yes/no question pattern.[28]

Information questions begin with a question word such as qui, pourquoi or combien, referred to in linguistics as interrogatives. The question word may be followed in French by est-ce que (as in English "(where) is it that ...") or est-ce qui, or by inversion of the subject–verb order (as in "where goes he?"). The sentence starts at a relatively high pitch which falls away rapidly after the question word, or its first syllable in the case of a polysyllabic question word.
There may be a small increase in pitch on the final syllable of the question.[29] In both cases, the question both begins and ends at higher pitches than does a declarative sentence. In informal speech, the question word is sometimes put at the end of the sentence. In this case, the question ends at a high pitch, often with a slight rise on the high final syllable, and the question may also start at a slightly higher pitch.[30]

Mandarin Chinese is a tonal language, so pitch contours within a word distinguish the word from other words with the same vowels and consonants. Nevertheless, Mandarin also has intonation patterns that indicate the nature of the sentence as a whole. There are four basic sentence types having distinctive intonation: declarative sentences, unmarked interrogative questions, yes–no questions marked as such with the sentence-final particle ma, and A-not-A questions of the form "He go not go" (meaning "Does he go or not?"). In the Beijing dialect, these are intonationally distinguished for the average speaker on a pitch scale from 1 (lowest) to 9 (highest).[31][32] Questions begin with a higher pitch than declarative sentences; pitch rises and then falls in all sentences; and in yes–no questions and unmarked questions pitch rises at the end of the sentence, while for declarative sentences and A-not-A questions the sentence ends at very low pitch.

Because Mandarin distinguishes words on the basis of within-syllable tones, these tones create fluctuations of pitch around the sentence patterns indicated above. Thus, sentence patterns can be thought of as bands whose pitch varies over the course of the sentence, with changes of syllable pitch causing fluctuations within the band. Furthermore, the details of Mandarin intonation are affected by various factors such as the tone of the final syllable, the presence or absence of focus (centering of attention) on the final word, and the dialect of the speaker.[31]

Intonation in Punjabi has long been an area of discussion and experimentation. Various studies [Gill and Gleason (1969), Malik (1995), Kalra (1982), Bhatia (1993), Joshi (1972 & 1989)][33][34][35] explain intonation in Punjabi according to their respective theories and models. Chander Shekhar Singh has provided an experimental phonetic and phonological description of Punjabi intonation based on sentences read in isolation. His research design is based on the classification of two different levels of intonation: a horizontal level and a vertical level. The first experiment (at the horizontal level) investigates three utterance types: declarative, imperative, and interrogative. The second experiment investigates intonation in the vertical sense: a comparative analysis of the intonation of the three sentence types while keeping the nuclear intonation constant.[36] The vertical level demonstrates four different types of accentuation in Punjabi, and the second experiment shows a significant difference between the horizontal and the vertical levels.[37]

Cruttenden points out the extreme difficulty of making meaningful comparisons among the intonation systems of different languages, the difficulty being compounded by the lack of an agreed descriptive framework.[38] Falling intonation is said to be used at the end of questions in some languages, including Hawaiian, Fijian and Samoan, and in Greenlandic.
It is also used in Hawaiian Creole English, presumably derived from Hawaiian. Rises are common on statements in urban Belfast; falls on most questions have been said to be typical of urban Leeds speech.

An ESRC-funded project (E. Grabe, B. Post and F. Nolan) to study the intonation of nine urban accents of British English in five different speaking styles resulted in the IViE Corpus and a purpose-built transcription system; the corpus and notation system can be downloaded from the project's website.[39] A follow-up paper shows that the dialects of British and Irish English vary substantially in their intonation.[40]

A project to bring together descriptions of the intonation of twenty different languages, ideally using a unified descriptive framework (INTSINT), resulted in a book published in 1998 by D. Hirst and A. Di Cristo.[41] The languages described are American English, British English, German, Dutch, Swedish, Danish, Spanish, European Portuguese, Brazilian Portuguese, French, Italian, Romanian, Russian, Bulgarian, Greek, Finnish, Hungarian, Western Arabic (Moroccan), Japanese, Thai, Vietnamese and Beijing Chinese. A number of contributing authors did not use the INTSINT system but preferred their own.

Those with congenital amusia show impaired ability to discriminate, identify and imitate the intonation of the final words in sentences.[42]
https://en.wikipedia.org/wiki/Intonation_(linguistics)
Nonconcatenative morphology, also called discontinuous morphology and introflection, is a form of word formation and inflection in which the root is modified and which does not involve stringing morphemes together sequentially.[1]

In English, for example, while plurals are usually formed by adding the suffix -s, certain words use nonconcatenative processes for their plural forms (e.g. foot/feet, goose/geese), and many irregular verbs form their past tenses, past participles, or both in this manner (e.g. sing/sang/sung). This specific form of nonconcatenative morphology is known as base modification or ablaut, a form in which part of the root undergoes a phonological change without necessarily adding new phonological material. In traditional Indo-Europeanist usage, these changes are termed ablaut only when they result from vowel gradations in Proto-Indo-European. An example is the English stem s⌂ng, resulting in the four distinct words sing-sang-song-sung.[2]: 72 An example from German is the stem spr⌂ch "speak", which results in various distinct forms such as spricht-sprechen-sprach-gesprochen-Spruch.[2]: 72 Changes such as foot/feet, on the other hand, which are due to the influence of a since-lost front vowel, are called umlaut or, more specifically, I-mutation. Other forms of base modification include lengthening of a vowel, as in Hindi, or a change in tone or stress. Consonantal apophony, such as the initial-consonant mutations in Celtic languages, also exists.

Another form of nonconcatenative morphology is known as transfixation, in which vowel and consonant morphemes are interdigitated. For example, depending on the vowels, the Arabic consonantal root k-t-b can have different but semantically related meanings. Thus, [kataba] 'he wrote' and [kitaːb] 'book' both come from the root k-t-b. Words from k-t-b are formed by filling in the vowels, e.g. kitāb "book", kutub "books", kātib "writer", kuttāb "writers", kataba "he wrote", yaktubu "he writes", etc. In the analysis provided by McCarthy's account of nonconcatenative morphology, the consonantal root is assigned to one tier, and the vowel pattern to another.[3] Extensive use of transfixation occurs only in Afro-Asiatic and some Nilo-Saharan languages (such as Lugbara) and is rare or unknown elsewhere.[4]

Yet another common type of nonconcatenative morphology is reduplication, a process in which all or part of the root is reduplicated. In Sakha, this process is used to form intensified adjectives: /k̠ɨhɨl/ "red" ↔ /k̠ɨp-k̠ɨhɨl/ "flaming red".

A final type of nonconcatenative morphology is variously referred to as truncation, deletion, or subtraction; the morpheme is sometimes called a disfix. This process removes phonological material from the root. In spoken French, it can be found in a small subset of plurals (although their spellings follow regular plural-marking rules): /ɔs/ "bone" ↔ /o/ "bones", /œf/ "egg" ↔ /ø/ "eggs".

Nonconcatenative morphology is extremely well developed in the Semitic languages, in which it forms the basis of virtually all higher-level word formation. That is especially pronounced in Arabic, which also uses it to form approximately 41%[5] of plurals in what is often called the broken plural.
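Since transfixation is essentially the interleaving of two tiers, it is easy to illustrate in a few lines of code. The following Python sketch is our own illustration, not from the article; the template notation with "C" slots is a common convention we adopt here, and it fills the root consonants of k-t-b into vowel patterns taken from the examples above.

```python
# Interdigitate the Arabic consonantal root k-t-b with vowel patterns;
# "C" marks a root-consonant slot, macrons mark long vowels.
ROOT = ("k", "t", "b")

PATTERNS = {            # pattern -> gloss, from the article's examples
    "CiCāC":  "book (kitāb)",
    "CuCuC":  "books (kutub)",
    "CāCiC":  "writer (kātib)",
    "CaCaCa": "he wrote (kataba)",
}

def apply_pattern(root: tuple, pattern: str) -> str:
    """Fill the root consonants into the C slots of the pattern, in order."""
    consonants = iter(root)
    return "".join(next(consonants) if ch == "C" else ch for ch in pattern)

for pattern, gloss in PATTERNS.items():
    print(f"{pattern:7} -> {apply_pattern(ROOT, pattern):7} {gloss}")
```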
https://en.wikipedia.org/wiki/Introflection
ʾIʿrāb (إِعْرَاب, IPA: [ʔiʕraːb]) is an Arabic term for the declension system of nominal, adjectival, or verbal suffixes of Classical Arabic that mark grammatical case. These suffixes are written in fully vocalized Arabic texts, notably the Qur’ān or texts written for children or Arabic learners, and they are articulated when a text is formally read aloud, but they do not survive in any spoken dialect of Arabic. Even in Literary Arabic, these suffixes are often not pronounced in pausa (ٱلْوَقْف al-waqf), i.e. when the word occurs at the end of the sentence, in accordance with certain rules of Arabic pronunciation. (That is, the nunation suffix -n is generally dropped at the end of a sentence or line of poetry, with the notable exception of the nuniyya; the vowel suffix may or may not be, depending on the requirements of metre.) Depending on their knowledge of ʾiʿrāb, some Arabic speakers may omit case endings when reading out in Modern Standard Arabic, thus making it similar to spoken dialects. Many Arabic textbooks for foreigners teach Arabic without a heavy focus on ʾiʿrāb, either omitting the endings altogether or only giving a small introduction. Arabic without case endings may require a different and fixed word order, similar to spoken Arabic dialects.

The term literally means 'making [the word] Arabic'. It is the stem IV masdar of the root ‘-r-b (ع-ر-ب), meaning "to be fluent", so ʾiʿrāb means "making a thing expressed, disclosed or eloquent". The term is cognate to the word Arab itself.

Case is not shown in standard orthography, with the exception of indefinite accusative nouns ending in any letter but tā’ marbūṭah (ة) or alif followed by hamzah (ء), where the -a(n) "sits" on the letter before an alif added at the end of the word (the alif shows up even in unvowelled texts). Cases are, however, marked in the Qur'an, in children's books, and to remove ambiguity. If marked, the case is shown at the end of the noun. Further information on the types of declension is given in the following sections, along with examples. Grammatical case endings are not pronounced in pausa and in less formal forms of Arabic. In vocalised Arabic (where vowel points are written), the case endings may be written even if they are not pronounced. Some Arabic textbooks or children's books skip case endings in vocalised Arabic, thus allowing both types of pronunciation.

The nominative (al-marfū‘ ٱلْمَرْفُوعُ) is used in several situations. For singular nouns and broken plurals, it is marked as a usually unwritten ضَمَّة ḍammah (-u) for the definite, or ḍammah + nunation (-un) for the indefinite. The dual and regular masculine plural are formed by adding ـَانِ -ān(i) and ـُونَ -ūn(a) respectively (just ـَا -ā and ـُو -ū in the construct state). The regular feminine plural is formed by adding ـَاتُ -āt(u) in the definite and ـَاتٌ -āt(un) in the indefinite (same spelling).

The accusative (al-manṣūb ٱلْمَنْصُوب) has several uses. For singular nouns and broken plurals, it is marked as a usually unwritten فَتْحَة fatḥah (-a) for the definite, or fatḥah + nunation (-an) for the indefinite. For the indefinite accusative, the fatḥah + nunation is added to an ا alif, e.g. ـًا, which is added to the ending of all nouns not ending with an alif followed by hamzah or a tā’ marbūṭah. This is the only case (since the alif is written) that affects unvocalised written Arabic (e.g. بَيْتاً bayt-an). The dual and regular masculine plural are formed by adding ـَيْنِ -ayn(i) and ـِينَ -īn(a) respectively (spelled identically without vowel points; ـَيْ -ay and ـِي -ī in the construct state, again spelled identically).
The regular feminine plural is formed by adding ـَاتِ -āt(i) in the definite and -āt(in) in the indefinite (spelled identically). Some forms of the indefinite accusative are mandatory even in spoken and pausal forms of Arabic; sometimes -an is changed to a simple -a in pausa or in spoken Arabic. Diptotes never take an alif ending in written Arabic and are never pronounced with the ending -an.

The genitive (al-majrūr, ٱلْمَجْرُورُ) is used chiefly for objects of prepositions and for the second noun of a genitive construction (iḍāfah). For singular nouns and broken plurals, it is marked as a usually unwritten كَسْرَة kasrah (-i) for the definite, or kasrah + nunation (-in) for the indefinite. The dual and regular masculine plural are formed by adding ـَيْنِ -ayn(i) and ـِيْنَ -īn(a) respectively (spelled identically; ـَيْ -ay and ـِي -ī in the construct state, again spelled identically). The regular feminine plural is formed by adding ـَاتِ -āt(i) in the definite and ـَاتٍ -āt(in) in the indefinite (spelled identically in Arabic).

For fully declined nouns, known as "triptote" (مُنْصَرِفٌ munṣarif), that is, having three separate case endings, the suffixes are -u, -a, -i for nominative, accusative, and genitive case respectively, with the addition of a final /n/ (nunation, or tanwīn) to produce -un, -an, and -in when the word is indefinite. This system applies to most singular nouns in Arabic. It also applies to feminine nouns ending in ة -a/-at (tā’ marbūṭah) and ء hamzah, but for these, ا alif is not written in the accusative case. It also applies to many "broken plurals". When words end in -a/-at (tā’ marbūṭah), the t is pronounced when the case ending is added; thus رِسَالَة ("message") is pronounced risāla in pausal form, but in Classical Arabic it becomes رِسَالَةٌ risālatun, رِسَالَةً risālatan, and رِسَالَةٍ risālatin when case endings are added (all usually spelled رسالة when written without the vowel points).

The final /n/ is dropped when the noun is preceded by the definite article al-. The /n/ is also dropped when the noun is used in iḍāfah (construct state), that is, when it is followed by a genitive. Thus, for bayt ("house"):

- Nominative (مَرْفُوعٌ marfū‘; literally "raised"): baytun (a house); al-baytu (the house); baytu r-rajuli (the man's house).
- Accusative (مَنْصُوبٌ manṣūb; literally "erected"): baytan (a house); al-bayta (the house); bayta r-rajuli (the man's house).
- Genitive (مَجْرُورٌ majrūr; literally "dragged"): baytin (a house); al-bayti (the house); bayti r-rajuli (the man's house).

The final /n/ is also dropped in classical poetry at the end of a couplet, and the vowel of the ending is pronounced long.

A few singular nouns (including many proper names and names of places) and certain types of "broken plural" are known as diptotes (ٱلْمَمْنُوعُ مِنْ ٱلصَّرْفِ al-mamnū‘ min aṣ-ṣarf, literally 'forbidden from inflecting'), meaning that they have only two case endings. When the noun is indefinite, the endings are -u for the nominative and -a for the genitive and accusative, with no nunation. The genitive reverts to the normal -i when the diptotic noun becomes definite (preceded by al- or in the construct state). Diptotes never take an alif in the accusative case in written Arabic.

In the case of sound masculine plurals (جَمْعُ ٱلْمُذَكَّرُ ٱلسَّالِمُ jam‘ al-mudhakkar as-sālim), mostly denoting male human beings, the suffixes are ـُونَ -ūna for the nominative and ـِينَ -īna for the accusative and genitive. These stay the same whether ال al- precedes or not. The final -a is usually dropped in speech, and in less formal Arabic only -īna is used for all cases. The ن -na is dropped when the noun is in iḍāfah (construct state). Note that the ending ـِينَ -īna is spelled identically to ـَيْنِ -ayni (see above).
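The triptote paradigm described above is regular enough to capture in a few lines. The following Python sketch is our own illustration, not from the article: the case vowel plus optional nunation, with the /n/ dropped after the definite article al- and in the construct state (the function name and state labels are invented).

```python
# Triptote case endings: -u / -a / -i plus nunation when indefinite.
CASE_VOWEL = {"nominative": "u", "accusative": "a", "genitive": "i"}

def decline_triptote(stem: str, case: str, state: str) -> str:
    """state is 'indefinite', 'definite', or 'construct'."""
    vowel = CASE_VOWEL[case]
    if state == "indefinite":
        return stem + vowel + "n"            # baytun, baytan, baytin
    prefix = "al-" if state == "definite" else ""
    return prefix + stem + vowel             # al-baytu; baytu (construct)

for case in CASE_VOWEL:
    print(case, "->",
          decline_triptote("bayt", case, "indefinite"), "/",
          decline_triptote("bayt", case, "definite"))
```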
In the case of sound feminine plurals (جَمْعُ ٱلْمُؤَنَّثُ ٱلسَّالِمُ jam‘ al-mu’annath as-sālim), the suffixes are ـَاتُ -ātu(n) for the nominative and ـَاتِ -āti(n) for the accusative and genitive (identical spelling). The n is only there when the noun is indefinite (not preceded by al-). Again, the final vowel is dropped in speech and pausa, leaving only ـَات -āt, so that all cases are pronounced identically. The final n is also dropped when the noun is in iḍāfah (construct state).

The dual: these nouns denote two of something. They decline very similarly to the sound masculine plurals: they are not marked for definiteness and look the same in both the accusative and genitive cases. For the nominative, the marking is -āni, and for the accusative/genitive, -ayni. An example is "parents", which is wālidāni and wālidayni respectively.

ٱسْمُ ٱلْمَنْقُوصِ ism al-manqūṣ (deficient nouns ending with yā’): these nouns behave differently due to the instability of the final vowel. When indefinite, they take a final -in in the nominative/genitive and -iyan in the accusative. When definite, they take a long -ī in the nominative/genitive and -iya in the accusative. These nouns were reckoned by the grammarians to have originally taken the triptotic endings, but morpho-phonotactic processes resulted in the present forms. An example is "judge", which is qāḍin, qāḍiyan, versus al-qāḍī, al-qāḍiya respectively. A noun can also be both ism al-manqūṣ and diptotic: for example, layālin 'nights' is a broken plural with a final unstable vowel; with case endings this noun becomes layālin, layāliya, and al-layālī, al-layāliya.

ٱسْمُ ٱلْمَقْصُورِ ism al-maqṣūr (deficient nouns ending with alif or alif maqṣūrah): these nouns, like their close relative ism al-manqūṣ, also behave differently due to the instability of a final vowel. They are marked only for definiteness, as morpho-phonotactic processes have resulted in the complete loss of the case distinctions. When indefinite, they take -an, which rests on an alif maqṣūrah or occasionally alif. When definite, they are not marked, and they simply retain their long alif or alif maqṣūrah. An example is "hospital", which is mustashfan and al-mustashfā respectively. If a noun is both ism al-maqṣūr and diptotic, then it is completely invariable for case.

Invariable nouns are usually foreign names that end in alif, or nouns that end in an additional alif or alif maqṣūrah (when that alif or alif maqṣūrah is not part of the root). Nouns that are both ism al-maqṣūr and diptotic also fall into this category. Additionally, there are rare invariable nouns with other endings, like any name ending in -ayhi, such as Sībawayhi (colloquially pronounced, for example in Egypt, [sebæˈweː]). An example of a common invariable noun is fuṣḥá (al-fuṣḥá), meaning 'the most eloquent [Arabic]'. Another example is dunyā (al-dunyā) 'world'.

A noun's case depends on the role that the noun plays in the sentence. There are multiple sentence structures in Arabic, each of which demands different case endings for the roles in the sentence. "Subject" does not always correspond to "nominative", nor does "object" always correspond to "accusative". Sentences in Arabic are divided into two branches: incomplete phrases (jumla inshaiya) and complete sentences (jumla khabariya). Jumla inshaiya is composed of the descriptive phrase and the possessive phrase, while jumla khabariya is made up of the verbal sentence (jumla fi'lya khabariya) and the nominal sentence (jumla ismiya khabariya).
The incomplete phrase cannot be a sentence in itself and is usually used within complete sentences.

In a verbal sentence (ٱلْجُمْلَةُ ٱلْفِعْلِيَّةُ al-jumlah al-fi‘līyah), the word order is verb–subject–object. This is the preferred word order of Classical Arabic. In a verbal sentence, the subject takes the nominative case and the object takes the accusative case, as in the sentence "This writer wrote the written".

In a nominal sentence (ٱلْجُمْلَةُ ٱلْاِسْمِيَّةُ al-jumlah al-ismīyah), the word order is subject–verb–object. If the verb would be "is" (that is, the predicate merely attributes something to the subject; see Predicative (adjectival or nominal)), then no verb is used, and both the subject and the predicate take the nominative case, as in the sentence "This writer is famous". If there is an overt verb, the subject takes the nominative and the predicate takes the accusative, as in "This writer wrote the book".

There is a class of words in Arabic called the "sisters of inna" (أَخَوَاتُ إِنَّ akhawāt inna) that share characteristics of إِنَّ; among them are أَنَّ anna ("that"), لَكِنَّ lākinna ("but") and لَيْتَ layta ("if only"). If one of the sisters of إِنَّ begins a clause, then the subject takes the accusative case instead of the nominative, as in the sentence "Verily, this writer wrote the book" introduced by the particle إِنَّ. Although there was an overt verb in that example, a nominal sentence without an overt verb will also have its subject take the accusative case because of the introduction of one of inna's sisters. (The predicate of an equation is unaffected and remains in the nominative.) Consider "Verily, this writer is famous": the subject is accusative while the predicate remains nominative.

The verb kāna (كَانَ) and its sisters (أَخَوَاتُ كَانَ akhawāt kāna) form a class of 13 verbs that mark the time/duration of actions, states, and events; among them are لَيْسَ laysa ("not to be") and صَارَ ṣāra ("to become"). Sentences that use these verbs are considered to be a type of nominal sentence according to Arabic grammar, not a type of verbal sentence. Although the word order may seem to be verb–subject–object when there is no other verb in the sentence, it is possible to have a sentence in which the order is subject–verb–object; such a non-equation sentence clearly shows subject–verb–object word order. If one of the sisters of كَانَ begins a clause, then the subject takes the nominative case and the object takes the accusative case. (Because of this, Arabic contrasts [The man]NOM is [a doctor]NOM in the present tense with [The man]NOM was [a doctor]ACC in the past tense.) An example is the sentence "This writer was famous" using the verb كَانَ.

In a sentence with an explicit verb, the sister of kāna marks aspect for the actual verb. A sentence like كَانَ ٱلْكَاتِبُ يَكْتُبُ ٱلْكِتَابَ (was the.writer he.writes the.book, 'the writer was writing the book'), for instance, has both a main verb (يَكْتُبُ) and a sister of kāna that indicates the non-completed aspect of the main verb.

The imperfective form of the verb also has suffixed vowels, which determine its mood. There are six moods in Classical Arabic; the three most frequently cited are the indicative (yaktubu, "he writes"), the subjunctive (yaktuba) and the jussive (yaktub). All three forms are spelled يكتب in unvocalised Arabic, and the final vowel is not pronounced in pausa and in informal Arabic, leaving just one pronunciation: yaktub.
Traditional Arab grammarians equated the indicative with the nominative of nouns, the subjunctive with the accusative, and the jussive with the genitive, as indicated by their names (the only pair that is not borne out in the name is the jussive-genitive pair, probably because the-ivowel is usually dropped). It is not known whether there is a genuine historical connection or whether the resemblance is mere coincidence, caused by the fact that these are the only three short vowels available.
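The case-assignment and mood rules described above are tabular enough to capture in a short sketch. The following Python fragment is our own illustration, not from the article; the function names and labels are invented, and the rules are simplified to the constructions discussed here.

```python
def assign_cases(construction: str) -> dict:
    """Map a clause type to the cases of its subject and predicate/object."""
    rules = {
        "verbal sentence (VSO)":      {"subject": "nominative", "object": "accusative"},
        "nominal sentence, no verb":  {"subject": "nominative", "predicate": "nominative"},
        "after inna or a sister":     {"subject": "accusative", "predicate": "nominative"},
        "after kana or a sister":     {"subject": "nominative", "predicate": "accusative"},
    }
    return rules[construction]

# The mood/case parallel noted by the traditional grammarians:
# indicative -u ~ nominative, subjunctive -a ~ accusative,
# jussive (no vowel) ~ genitive (whose -i is usually dropped).
MOOD_SUFFIX = {"indicative": "u", "subjunctive": "a", "jussive": ""}

def imperfective(stem: str, mood: str) -> str:
    """Attach the mood vowel to an imperfective stem such as yaktub-."""
    return stem + MOOD_SUFFIX[mood]

print(assign_cases("after inna or a sister"))
print([imperfective("yaktub", m) for m in MOOD_SUFFIX])  # yaktubu, yaktuba, yaktub
```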
https://en.wikipedia.org/wiki/%CA%BEI%CA%BFrab
A lexeme (/ˈlɛksiːm/) is a unit of lexical meaning that underlies a set of words that are related through inflection. It is a basic abstract unit of meaning,[1] a unit of morphological analysis in linguistics that roughly corresponds to a set of forms taken by a single root word. For example, in the English language, run, runs, ran and running are forms of the same lexeme, which can be represented as RUN.[note 1]

One form, the lemma (or citation form), is chosen by convention as the canonical form of a lexeme. The lemma is the form used in dictionaries as an entry's headword. Other forms of a lexeme are often listed later in the entry if they are uncommon or irregularly inflected.

The notion of the lexeme is central to morphology,[2] and the basis for defining other concepts in that field. For example, the difference between inflection and derivation can be stated in terms of lexemes: inflectional rules relate a lexeme to its forms, while derivational rules relate a lexeme to another lexeme.

A lexeme belongs to a particular syntactic category, has a certain meaning (semantic value) and, in inflecting languages, has a corresponding inflectional paradigm. That is, a lexeme in many languages will have many different forms. For example, the lexeme RUN has a present third person singular form runs, a present non-third-person singular form run (which also functions as the past participle and non-finite form), a past form ran, and a present participle running. (It does not include runner, runners, runnable, etc.) The use of the forms of a lexeme is governed by rules of grammar; in the case of English verbs such as RUN, these include subject–verb agreement and compound tense rules, which determine the form of a verb that can be used in a given sentence.

In many formal theories of language, lexemes have subcategorization frames to account for the number and types of complements they take. They occur within sentences and other syntactic structures.

A language's lexemes are often composed of smaller units with individual meaning called morphemes: a root morpheme plus derivational morphemes and affixes (not necessarily in that order). The compound of root morpheme plus derivational morphemes is often called the stem.[6] The decomposition stem + desinence can then be used to study inflection.
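Computationally, a lexeme is often modelled as a lemma plus a paradigm mapping grammatical descriptions to surface forms. The following minimal Python sketch is our own illustration, built from the RUN example above; the data-structure layout is an assumption, not a standard.

```python
# The lexeme RUN, modelled as a lemma plus its inflectional paradigm;
# the descriptions follow the article's discussion.
PARADIGM = {
    "RUN": {
        "runs":    "present, third person singular",
        "run":     "present non-3sg; also past participle and non-finite form",
        "ran":     "past",
        "running": "present participle",
    },
}

# Invert the paradigm to map any surface form back to its lexeme (lemmatization).
FORM_TO_LEXEME = {form: lemma
                  for lemma, forms in PARADIGM.items()
                  for form in forms}

print(FORM_TO_LEXEME["ran"])        # RUN
print(PARADIGM["RUN"]["running"])   # present participle
```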
https://en.wikipedia.org/wiki/Lexeme
In linguistics, a marker is a free or bound morpheme that indicates the grammatical function of the marked word, phrase, or sentence. Most characteristically, markers occur as clitics or inflectional affixes. In analytic languages and agglutinative languages, markers are generally easily distinguished. In fusional languages and polysynthetic languages, this is often not the case. For example, in Latin, a highly fusional language, the word amō ("I love") is marked by the suffix -ō for indicative mood, active voice, first person, singular, present tense. Analytic languages tend to have a relatively limited number of markers.

Markers should be distinguished from the linguistic concept of markedness. An unmarked form is the basic "neutral" form of a word, typically used as its dictionary lemma, such as, in English, the singular for nouns (e.g. cat versus cats) and the infinitive for verbs (e.g. to eat versus eats, ate and eaten). Unmarked forms (e.g. the nominative case in many languages) tend to be less likely to have markers, but this is not true for all languages (compare Latin). Conversely, a marked form may happen to have a zero affix, like the genitive plural of some nouns in Russian (e.g. сапо́г). In some languages, the same forms of a marker have multiple functions, such as when used in different cases or declensions (for example -īs in Latin).
https://en.wikipedia.org/wiki/Marker_(linguistics)
A morpheme is any of the smallest meaningful constituents within a linguistic expression and particularly within a word.[1] Many words are themselves standalone morphemes, while other words contain multiple morphemes; in linguistic terminology, this is the distinction, respectively, between free and bound morphemes. The field of linguistic study dedicated to morphemes is called morphology.

In English, inside a word with multiple morphemes, the main morpheme that gives the word its basic meaning is called a root (such as cat inside the word cats), which can be bound or free. Meanwhile, additional bound morphemes, called affixes, may be added before or after the root, like the -s in cats, which indicates plurality but is always bound to a root noun and is not regarded as a word on its own.[2] However, in some languages, including English and Latin, even many roots cannot stand alone; i.e., they are bound morphemes. For instance, the Latin root reg- ('king') must always be suffixed with a case marker: regis, regi, rex (reg + s), etc. The same is true of the English root nat(e), ultimately inherited from a Latin root meaning "birth, born", which appears in words like native, nation, nature, innate, and neonate.

Every morpheme can be classified as free or bound.[4] Bound morphemes can be further classified as derivational or inflectional morphemes. The main difference between them is their function in relation to words.

Allomorphs are variants of a morpheme that differ in form but are semantically similar. For example, the English plural marker has three allomorphs: /-z/ (bugs), /-s/ (bats), or /-ɪz, -əz/ (buses). An allomorph is a concrete realization of a morpheme, which is an abstract unit. That is parallel to the relation of an allophone and a phoneme.

A zero-morpheme is a type of morpheme that carries semantic meaning but is not represented by an auditory phoneme. A word with a zero-morpheme is analyzed as having the morpheme for grammatical purposes, but the morpheme is not realized in speech. Zero-morphemes are often represented by /∅/ within glosses.[7] Generally, such morphemes have no visible changes. For instance, sheep is both the singular and the plural form of that noun; rather than taking the usual plural suffix -s to form hypothetical *sheeps, the plural is analyzed as being composed of sheep + -∅, the null plural suffix. The intended meaning is thus derived from the co-occurring determiner (in this case, "some" or "a").[8] In some cases, a zero-morpheme may also be used to contrast with other inflected forms of a word that contain an audible morpheme. For example, the plural noun cats in English consists of the root cat and the plural suffix -s, and so the singular cat may be analyzed as the root inflected with the null singular suffix -∅.[9]

Content morphemes express a concrete meaning or content, and function morphemes have more of a grammatical role. For example, the morphemes fast and sad can be considered content morphemes. On the other hand, the suffix -ed is a function morpheme since it has the grammatical function of indicating past tense.
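The distribution of the English plural allomorphs described above is phonologically conditioned, so it can be approximated by a simple rule. Below is a minimal sketch, assuming a crude spelling-based test for sibilant and voiceless endings; real allomorphy operates on sounds rather than letters, so this is only an approximation.

```python
# Sketch: choose the plural allomorph /-s/, /-z/, or /-ɪz/ for a noun.
# Assumption: final *sound* is approximated from spelling, which is a
# rough stand-in for a proper phonological analysis.

SIBILANT_ENDINGS = ("s", "z", "x", "sh", "ch")   # trigger /-ɪz/ (buses)
VOICELESS_ENDINGS = ("p", "t", "k", "f", "th")   # trigger /-s/  (bats)

def plural_allomorph(noun: str) -> str:
    if noun.endswith(SIBILANT_ENDINGS):
        return "/-ɪz/"
    if noun.endswith(VOICELESS_ENDINGS):
        return "/-s/"
    return "/-z/"   # elsewhere: voiced final sound (bugs)

for word in ["bug", "bat", "bus"]:
    print(word, plural_allomorph(word))
# bug /-z/, bat /-s/, bus /-ɪz/
```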
Both categories may seem very clear and intuitive, but the idea behind them is occasionally more difficult to grasp since they overlap with each other.[10] Examples of ambiguous situations are the preposition over and the determiner your, which seem to have concrete meanings but are considered function morphemes since their role is to connect ideas grammatically.[11]

Roots are composed of only one morpheme, but stems can be composed of more than one morpheme. Any additional affixes are considered morphemes. For example, in the word quirkiness, the root is quirk, but the stem is quirky, which has two morphemes. Moreover, some pairs of affixes have identical phonological form but different meanings. For example, the suffix -er can be either derivational (e.g. sell ⇒ seller) or inflectional (e.g. small ⇒ smaller). Such morphemes are called homophonous.[11]

Some words might seem to be composed of multiple morphemes but are not. Therefore, not only form but also meaning must be considered when identifying morphemes. For example, the word Madagascar is long and might seem to have morphemes like mad, gas, and car, but it does not. Conversely, some short words have multiple morphemes (e.g. dogs = dog + s).[11]

In natural language processing for Japanese, Chinese, and other languages, morphological analysis is the process of segmenting a sentence into a row of morphemes. Morphological analysis is closely related to part-of-speech tagging, but word segmentation is required for those languages because word boundaries are not indicated by blank spaces.[12] The purpose of morphological analysis is to determine the minimal units of meaning in a language (morphemes) by comparison of similar forms: for example, comparing "She is walking" and "They are walking" with each other, rather than either with something less similar like "You are reading". Those forms can be effectively broken down into parts, and the different morphemes can be distinguished. Both meaning and form are equally important for the identification of morphemes.

An agent morpheme is an affix like -er that in English transforms a verb into a noun (e.g. teach → teacher). English also has another morpheme that is identical in pronunciation (and written form) but has an unrelated meaning and function: a comparative morpheme that changes an adjective into another degree of comparison while leaving it the same adjective (e.g. small → smaller). The opposite can also occur: a pair of morphemes with identical meaning but different forms.[11]

In generative grammar, the definition of a morpheme depends heavily on whether syntactic trees have morphemes as leaves or features as leaves. Given the definition of a morpheme as "the smallest meaningful unit", nanosyntax aims to account for idioms in which an entire syntactic tree often contributes "the smallest meaningful unit". An example idiom is "Don't let the cat out of the bag". There, the idiom is composed of "let the cat out of the bag". That might be considered a semantic morpheme, which is itself composed of many syntactic morphemes. Other cases of the "smallest meaningful unit" being longer than a word include some collocations such as "in view of" and "business intelligence", in which the words, when together, have a specific meaning. The definition of morphemes also plays a significant role in the interfaces of generative grammar with several theoretical constructs.
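Since unsegmented scripts force word segmentation and morphological analysis to happen together, a common baseline is dictionary-based longest-match segmentation. The following is a minimal sketch under the assumption of a toy dictionary; production analyzers (e.g. for Japanese) score many candidate segmentations in a lattice rather than committing greedily.

```python
# Greedy longest-match segmentation: a baseline for text without
# spaces. The dictionary is a toy assumption; real systems weigh many
# candidate segmentations instead of committing greedily.

DICTIONARY = {"un", "break", "able", "unbreak", "lock", "s"}

def segment(text: str) -> list[str]:
    morphemes, i = [], 0
    while i < len(text):
        # try the longest dictionary entry starting at position i
        for j in range(len(text), i, -1):
            if text[i:j] in DICTIONARY:
                morphemes.append(text[i:j])
                i = j
                break
        else:
            # unknown character: emit it as-is and move on
            morphemes.append(text[i])
            i += 1
    return morphemes

print(segment("unbreakable"))  # ['unbreak', 'able'] -- greedy, not 'un'+'break'+'able'
print(segment("locks"))        # ['lock', 's']
```

The first output illustrates why greedy matching is only a baseline: the longest match "unbreak" wins even where the analysis un + break + able might be preferred.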
https://en.wikipedia.org/wiki/Morpheme
Nominal TAM is the indication of tense–aspect–mood by inflecting a noun, rather than a verb. In clausal nominal TAM, the noun indicates TAM information about the clause (as opposed to the noun phrase). Whether or not a particular language can best be understood as having clausal nominal TAM can be controversial, and there are various borderline cases. A language that can indicate tense by attaching a verbal clitic to a noun (such as the -'ll clitic in English) is not generally regarded as using nominal TAM.

Various languages have been shown to have clausal nominal TAM.[1] In the Niger-Congo language Supyire, the form of the first person and second person pronouns reflects whether the clause has declarative or non-declarative mood. In the Gǀwi language of Botswana, subject pronouns reflect the imperative or non-imperative mood of the clause (while the verb itself does not). In the Chamicuro language of Peru, the definite article accompanying the subject or object of a clause indicates either past or non-past tense. In the Pitta Pitta language of Australia, the mandatory case marking system differs depending on the tense of the clause. Other languages exhibiting clausal nominal TAM include Lardil (Australia), Gurnu (Australia), Yag Dii (Cameroon), Sahidic Coptic (Egypt), Gusiilay (Niger-Congo), Iai (Oceania), Tigak (Oceania), and Guaymi (Panama and Costa Rica).

In the Guarani language of Paraguay, nouns can optionally take several different past and future markers to express ideas[2] such as "our old house (the one we no longer live in)", "the abandoned car", "what was once a bridge", "bride-to-be", or even "my ex-future-wife", that is, "the woman who at one point was going to be my wife".

Although verbal clitics such as -'ll in English are attached to nouns and indicate TAM information, they are not really examples of nominal TAM because they are clitics rather than inflections and therefore not part of the noun at all.[3] This is easily seen in sentences where the clitic is attached to another part of speech, such as "The one you want'll be in the shed". Another way to tell the difference is to consider a hypothetical dialogue in which a speaker wishes to emphasise future time: the speaker cannot do so by placing voice stress on she'll, and so instead uses the expanded phrase she will. This is characteristic of clitics as opposed to inflections (i.e. clitics cannot be emphasised by placing voice stress on the word to which they are attached). The significance of this can be seen by comparison with the English negative suffix -n't (which is best understood as an inflection rather than a clitic): here the speaker could choose to say isn't rather than is not. Even though the stress then falls on the syllable IS, the meaning of the sentence is understood as emphasising the NOT. This indicates that isn't is one inflected word rather than a word with a clitic attached.
https://en.wikipedia.org/wiki/Nominal_TAM
In linguistics and literature, periphrasis (/pəˈrɪfrəsɪs/)[1] is the use of a larger number of words, with an implicit comparison to the possibility of using fewer. The comparison may be within a language or between languages. For example, "more happy" is periphrastic in comparison to "happier", and English "I will eat" is periphrastic in comparison to Spanish comeré.

The term originates from the Greek word περιφράζομαι periphrazomai 'talking around',[2][3][4] and was originally used for examples that came up in Ancient Greek. In epic poetry, it was common to use periphrasis in examples such as "the sons of the Achaeans" (meaning the Achaeans), or "How did such words escape the fence of your teeth?" (adding a layer of poetic imagery to "your teeth"). Sometimes periphrastic forms were used for verbs that would otherwise be unpronounceable.[5] For example, the verb δείκνυμι deiknumi 'to show' has a hypothetical form *δεδείκνται dedeikntai, which contains the disallowed consonant cluster -knt-, so one would instead say δεδειγμένοι εἰσί dedeigmenoi eisi, using a periphrasis with a participle.

In modern linguistics, the term periphrasis is typically used for examples like "more happy": the use of one or more function words to express meaning that otherwise may be expressed by attaching an affix or clitic to a word. The resulting phrase includes two or more collocated words instead of one inflected word.[6] Periphrastic forms are a characteristic of analytic languages, whereas the absence of periphrasis is a characteristic of synthetic languages. While periphrasis concerns all categories of syntax, it is most visible with verb catenae. The verb catenae of English (verb phrases constructed with auxiliary verbs) are highly periphrastic.

The distinction between inflected and periphrastic forms is usually illustrated across distinct languages. However, comparative and superlative forms of adjectives (and adverbs) in English provide a straightforward illustration of the phenomenon.[7] For many speakers, both a simple and a periphrastic form are possible (e.g. happier and more happy, happiest and most happy). The periphrastic forms are periphrastic by virtue of the appearance of more or most, and they therefore contain two words instead of just one. The words more and most contribute functional meaning only, just like the inflectional affixes -er and -est. Such distinctions occur in many languages, for instance between Latin and English.

Periphrasis is a characteristic of analytic languages, which tend to avoid inflection. Even strongly inflected synthetic languages sometimes make use of periphrasis to fill out an inflectional paradigm that is missing certain forms.[8] A comparison of some Latin forms of the verb dūcere 'lead' with their English translations illustrates further that English uses periphrasis in many instances where Latin uses inflection. English often needs two or three verbs to express the same meaning that Latin expresses with a single verb. Latin is a relatively synthetic language; it expresses grammatical meaning using inflection, whereas the verb system of English, a Germanic language, is relatively analytic; it uses auxiliary verbs to express functional meaning.

Unlike Biblical Hebrew, Israeli Hebrew uses a few periphrastic verbal constructions in specific circumstances, such as slang or military language.
Consider the following pairs/triplets, in which the first member is a Biblical Hebrew synthetic form and the rest are Israeli Hebrew analytic periphrases:[9]

צעק tsaák "shouted" → שם צעקה sam tseaká, literally "put a shout"

הביט hibít "looked at" → נתן מבט natán mabát, literally "gave a look", and העיף מבט heíf mabát, literally "flew/threw a look" (cf. the English expressions "cast a glance", "threw a look" and "tossed a glance")

According to Ghil'ad Zuckermann, the Israeli periphrastic construction (using auxiliary verbs followed by a noun) is employed here for the desire to express swift action, and stems from Yiddish. He compares the Israeli periphrasis to the following Yiddish expressions, all meaning "to have a look":

געבן א קוק gébņ a kuk "to give a look"

טאן א קוק ton a kuk, literally "to do a look"

כאפן א קוק khapņ a kuk, literally "to catch a look" (a colloquial expression)

Zuckermann emphasizes that the Israeli periphrastic constructions "are not nonce, ad hoc lexical calques of Yiddish. The Israeli system is productive and the lexical realization often differs from that of Yiddish". He provides the following Israeli examples:

הרביץ hirbíts "hit, beat; gave", with מהירות mehirút "speed", yielded הרביץ מהירות hirbíts mehirút "drove very fast"

with ארוחה arukhá "meal", it yielded הרביץ ארוחה hirbíts arukhá "ate a big meal" (cf. English "hit the buffet" "eat a lot at the buffet"; "hit the liquor/bottle" "drink alcohol")

דפק הופעה dafák hofaá "dressed smartly", literally "hit an appearance"[9]

But while Zuckermann attempted to use these examples to claim that Israeli Hebrew grew similar to European languages, it should be noted that all of these examples come from slang and are therefore linguistically marked. The normal and daily usage of the verb paradigm in Israeli Modern Hebrew is of the synthetic form, as in Biblical Hebrew: צָעַק, הִבִּיט.

The correspondence in meaning across inflected forms and their periphrastic equivalents within the same language or across different languages leads to a basic question. Individual words are always constituents, but their periphrastic equivalents are often not. Given this mismatch in syntactic form, one can pose the following questions: how should the form-meaning correspondence across periphrastic and non-periphrastic forms be understood? How does it come to pass that a specific meaning-bearing unit can be a constituent in one case, but in another case it is a combination of words that does not qualify as a constituent? An answer to this question that has recently come to light is expressed in terms of the catena unit, as implied above.[10] The periphrastic word combinations are catenae even when they are not constituents, and individual words are also catenae. The form-meaning correspondence is therefore consistent.
A given inflected one-word catena corresponds to a periphrastic multiple-word catena. The role of catenae for the theory of periphrasis can be illustrated with dependency trees. The first example is across French and English: future tense/time in French is often constructed with an inflected form, whereas English typically employs a periphrastic form. Where French expresses future tense/time using the single (inflected) verb catena sera, English employs a periphrastic two-word catena, or perhaps a periphrastic four-word catena, to express the same basic meaning. The next example is across German and English: German often indicates an object of a preposition with a single dative case pronoun, whereas English usually employs a periphrastic two-word prepositional phrase with for to express the same meaning. Light verb constructions illustrate the same point: each matrix predicate forms a catena, and each such predicate is a periphrastic form insofar as at least one function word is present. Predicates containing more words are more periphrastic than those containing fewer. The closely similar meaning of these predicates across the variants is accommodated in terms of catenae, since each predicate is a catena.
https://en.wikipedia.org/wiki/Periphrasis
In generative morphology, the righthand head rule is a rule of grammar that specifies that the rightmost morpheme in a morphological structure is almost always the head in certain languages. What this means is that it is the righthand element that provides the primary syntactic and/or semantic information. The projection of syntactic information from the righthand element onto the output word is known as feature percolation. The righthand head rule is considered a broadly general and universal principle of morphology. In certain other languages it is proposed that rather than a righthand head rule, a lefthand head rule applies, where the lefthand element provides this information.

In derivational morphology (i.e. the creation of new words), the head is the morpheme that provides the part-of-speech (PoS) information. According to the righthand head rule, this is of course the righthand element. For instance, the word 'person' is a noun, but if the suffix '-al' is added then 'personal' is derived. 'Personal' is an adjective, and the righthand head rule holds that the PoS information is provided by the suffix '-al', which is the righthand element. The adverb 'personally' is derived from 'personal' by adding the suffix '-ly'; the PoS information is provided by this suffix, which is added to the right of 'personal'. The same applies to the noun 'personality', which is also derived from 'personal', this time by adding the nominal suffix '-ity' to the right of the input word. Again the PoS information is projected from the righthand element. The three examples may be formalized thus (N = noun, ADJ = adjective, ADV = adverb): [person]N + [-al] → [personal]ADJ; [personal]ADJ + [-ly] → [personally]ADV; [personal]ADJ + [-ity] → [personality]N. They are all instances of the righthand head rule, which may be formalized as follows: in a morphologically complex word, the righthand member is the head, and its features percolate to the word as a whole.

The righthand head rule may also be applied to inflectional morphology (i.e. the addition of semantic information without changing the word class). In relation to inflectional morphology, the righthand head rule holds that the rightmost element of a word provides the most essential additional semantic information. For example, the past tense form of 'play' is created by adding the past tense suffix '-(e)d' to the right. This suffix provides the past tense feature, which is also the main additional semantic content of the output word 'played'. Likewise, the plural form of 'dog' is created by the addition of the plural nominal suffix '-s' to the right of the input; thus 'dogs' inherits its plurality feature from the suffix. The same thing goes for the comparative form of the adjective 'ugly': 'uglier' is created by the addition of the comparative suffix '-er' to the right, thus receiving its comparative feature from the suffix. Formalizing the examples shows that the underlying principle of inflection is basically the same as the righthand head rule (INF = infinitive, P = past tense, SG = singular, PL = plural, POS = positive, COM = comparative): [play]INF + [-ed]P → [played]P; [dog]SG + [-s]PL → [dogs]PL; [ugly]POS + [-er]COM → [uglier]COM.

Another area of morphology where the righthand head rule seems applicable is that of compounding (i.e. the creation of a word by combining two or more other words), in which it holds that the righthand word provides both the essential semantic information and the word class. For instance, the noun 'runway' combines a verb and a noun. Since it refers to a kind of way rather than a kind of running, and since it is a noun and not a verb, the head is 'way', which appears on the right. The noun 'wheelchair' combines two nouns. The primary element is the righthand one, namely 'chair', since the word refers to a kind of chair rather than a kind of wheel.
Again, formalizations show that the underlying principle must be the righthand head rule: [run]V + [way]N → [runway]N; [wheel]N + [chair]N → [wheelchair]N.

The righthand head rule is taken to be a universal principle of morphology, but it has been subject to much severe criticism. The main point of criticism is that it is empirically insufficient, because it ignores numerous cases where the head does not appear in the righthand position, for instance in certain formations involving prepositions and negation (PREP = preposition, NEG = negation). Another main point of criticism is that the righthand head rule is too Eurocentric, or even Anglocentric, taking into consideration only morphological processes typical of European languages (mainly English) and ignoring processes from languages all over the world. Certainly in certain languages a lefthand head rule applies rather than a righthand head rule.[1] Many linguists reject the righthand head rule as being too idealizing and empirically inadequate.
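Whatever its empirical standing, the rule itself is easy to state computationally. Here is a minimal sketch of feature percolation under the righthand head rule; the Morpheme class and the examples are illustrative assumptions, not an established implementation.

```python
# Sketch of the righthand head rule as feature percolation:
# the category of a complex word is the category of its rightmost
# morpheme (its head). Data structures here are illustrative only.

from dataclasses import dataclass

@dataclass
class Morpheme:
    form: str
    category: str   # e.g. "N", "V", "ADJ", "ADV" (roots and suffixes alike)

def word_category(morphemes: list[Morpheme]) -> str:
    """Righthand head rule: percolate the rightmost morpheme's category."""
    return morphemes[-1].category

personal = [Morpheme("person", "N"), Morpheme("-al", "ADJ")]
personally = personal + [Morpheme("-ly", "ADV")]
runway = [Morpheme("run", "V"), Morpheme("way", "N")]

print(word_category(personal))    # ADJ
print(word_category(personally))  # ADV
print(word_category(runway))      # N  (compound headed by 'way')
```

A lefthand head rule would simply return morphemes[0].category instead, which is one way to see how small the formal difference between the two proposals is.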
https://en.wikipedia.org/wiki/Righthand_head_rule
In linguistics and etymology, suppletion is traditionally understood as the use of one word as the inflected form of another word when the two words are not cognate. For those learning a language, suppletive forms will be seen as "irregular" or even "highly irregular". For example, go : went is a suppletive paradigm, because go and went are not etymologically related, whereas mouse : mice is irregular but not suppletive, since the two words come from the same Old English ancestor. The term "suppletion" implies that a gap in the paradigm was filled by a form "supplied" by a different paradigm. Instances of suppletion are overwhelmingly restricted to the most commonly used lexical items in a language.

An irregular paradigm is one in which the derived forms of a word cannot be deduced by simple rules from the base form. For example, someone who knows only a little English can deduce that the plural of girl is girls but cannot deduce that the plural of man is men. Language learners are often most aware of irregular verbs, but any part of speech with inflections can be irregular.

For most synchronic purposes, such as first-language acquisition studies, psycholinguistics, and language-teaching theory, it suffices to note that these forms are irregular. However, historical linguistics seeks to explain how they came to be so, and distinguishes different kinds of irregularity according to their origins. Most irregular paradigms (like man : men) can be explained by phonological developments that affected one form of a word but not another (in this case, Germanic umlaut). In such cases, the historical antecedents of the current forms once constituted a regular paradigm. Historical linguistics uses the term "suppletion"[1] to distinguish irregularities like person : people or cow : cattle that cannot be so explained, because the parts of the paradigm have not evolved out of a single form.

Hermann Osthoff coined the term "suppletion" in German in an 1899 study of the phenomenon in Indo-European languages.[2][3][4]

Suppletion exists in many languages around the world.[5] These languages are from various language families: Indo-Aryan, Dravidian, Semitic, Romance, etc. For example, in Georgian, the paradigm for the verb "to come" is composed of four different roots (mi-, -val-, -vid-, and -sul-; მი-, -ვალ-, -ვიდ-, -სულ-).[6] Similarly, in Modern Standard Arabic, the verb jāʾ ('come') usually uses the form taʿāl for its imperative, and the plural of marʾah ('woman') is nisāʾ. Some of the more archaic Indo-European languages are particularly known for suppletion. Ancient Greek, for example, has some twenty verbs with suppletive paradigms, many with three separate roots.

In English, the past tense of the verb go is went, which comes from the past tense of the verb wend, archaic in this sense. (The modern past tense of wend is wended.) See Go (verb).

The Romance languages have a variety of suppletive forms in conjugating the verb "to go", drawing on six different Latin verbs.[7] Many of the Romance languages use forms from different verbs in the present tense; for example, French has je vais 'I go' from vadere, but nous allons 'we go' from ambulare. Galician-Portuguese has a similar example: imos from ire 'to go' and vamos from vadere 'we go'; the former is somewhat disused in modern Portuguese but very alive in modern Galician. Even ides, from itis, the second-person plural of ire, is the only form for 'you (plural) go' both in Galician and Portuguese (Spanish vais, from vadere).
Sometimes, the conjugations differ between dialects. For instance, the Limba Sarda Comuna standard of Sardinian supported a fully regular conjugation of andare, but other dialects like Logudorese do not (see also Sardinian conjugation). In Romansh, Rumantsch Grischun substitutes present and subjunctive forms of ir with vom and giaja (from Latin vādere and īre, respectively) in the place of mon and mondi in Sursilvan.

Similarly, the Welsh verb mynd 'to go' has a variety of suppletive forms such as af 'I shall go' and euthum 'we went'. Irish téigh 'to go' also has suppletive forms: dul 'going' and rachaidh 'will go'. In Estonian, the inflected forms of the verb minema 'to go' were originally those of a verb cognate with the Finnish lähteä 'to leave', except for the passive and infinitive.

In Germanic, Romance (except Romanian), Celtic, Slavic (except Bulgarian and Macedonian), and Indo-Iranian languages, the comparative and superlative of the adjective "good" is suppletive; in many of these languages the adjective "bad" is also suppletive. [Table: suppletive comparison of "good" and "bad" across these languages, with etymological notes including cognates of Sanskrit gadhya, lit. 'what one clings to', and Sanskrit bhadra 'fortunate'; Old Latin duenos; Proto-Indo-European *meh₂- 'ripen, mature' and *wers- 'peak'; and Old Church Slavonic лоучии 'more suitable, appropriate'.[13]]

The comparison of "good" is also suppletive in Estonian (hea → parem → parim) and Finnish (hyvä → parempi → paras). As in Italian, the English adverb form of "good" is the unrelated word "well", from Old English wel, cognate to wyllan 'to wish'. Suppletive comparison also occurs in the Celtic languages, and in many Slavic languages great and small are suppletive.

In Albanian there are 14 irregular verbs, divided into suppletive and non-suppletive ones. Ancient Greek had a large number of suppletive verbs.

In Bulgarian, the word човек, chovek ("man", "human being") is suppletive. The strict plural form, човеци, chovetsi, is used only in Biblical context (like "brethren" as the archaic or symbolic plural of "brother" in English). In modern usage it has been replaced by the Greek loan хора, khora. The counter form (the special form for masculine nouns, used after numerals) is suppletive as well: души, dushi (with the accent on the first syllable). For example, двама, трима души, dvama, trima dushi ("two, three people"); this form has no singular either. (A related but different noun is the plural души, dushi, singular душа, dusha ("soul"), both with accent on the last syllable.)

In English, the complicated irregular verb to be has forms from several different roots (be, am, is, was, were). This verb is suppletive in most Indo-European languages, as well as in some non-Indo-European languages such as Finnish.

An incomplete suppletion exists in English with the plural of person (from the Latin persona). The regular plural persons occurs mainly in legalistic use. More commonly, the singular of the unrelated noun people (from Latin populus) is used as the plural; for example, "two people were living on a one-person salary" (note the plural verb). In its original sense of "populace, ethnic group", people is itself a singular noun with regular plural peoples.
Several irregular Irish verbs are suppletive, and there are several suppletive comparative and superlative forms in Irish in addition to the ones noted above.

In modern Japanese, the copulae だ, である and です take な to create "attributive forms" of adjectival nouns[19] (hence the English moniker "na-adjectives"). The "conclusive" and "attributive" forms, だ and な, were constructed similarly, from a combination of a particle and an inflection form of the old verb あり (ari, "to exist"). (Note: で itself was also a contraction of earlier にて.[22]) In modern Japanese, である ("conclusive") simply retains the older appearance of だ, while です is a different verb that can be used as a suppleted form of だ. Multiple hypotheses have been proposed for the etymology of です, one of which is a contraction of であります.[23]

The basic construction of the negative form of a Japanese verb is the "irrealis" form followed by ない, which would result in such hypothetical constructions as *だらない and *であらない. However, these constructions are not used in modern Japanese, and the construction ではない is used instead.[24] This is because *あらない, the hypothetically regular negative form of ある, is not used either, and is simply replaced with ない. While the auxiliary ない causes suppletion, other auxiliaries such as ん and ありません do not necessarily. For です, its historical "irrealis" form でせ is not attested as forming a negative (only でせう → でしょう is attested; there were and are no *でせん and *でせない).[25] Thus, it has to borrow でありません as its negative form instead.[24] To express a potential meaning, as in "can do", most verbs use the "irrealis" form followed by れる or られる. する notably has no such construction and has to use a different verb for this meaning, できる.

Latin has several suppletive verbs; a classic example is ferō, ferre, tulī, lātum 'to carry', whose principal parts draw on different roots.

In some Slavic languages, a few verbs have imperfective and perfective forms arising from different roots. Polish, for example, has such pairs; in citing them, note that z-, przy-, w-, and wy- are prefixes and are not part of the root.

In Polish, the plural form of rok ("year") is lata, which comes from the plural of lato ("summer"). A similar suppletion occurs in Russian: год, romanized: god ("year") > лет, let (genitive of "years").

The Romanian verb a fi ("to be") is suppletive and irregular, with the infinitive coming from Latin fieri, but conjugated forms from forms of the already suppletive Latin sum. For example, eu sunt ("I am"), tu ești ("you are"), eu am fost ("I have been"), eu eram ("I used to be"), eu fusei/fui ("I was"); while the subjunctive, also used to form the future in o să fiu ("I will be/am going to be"), is linked to the infinitive.

In Russian, the word человек, chelovek ("man, human being") is suppletive. The strict plural form, человеки, cheloveki, is used only in Orthodox Church contexts, with numerals (e.g. пять человек, pyat chelovek "five people"), and in humorous contexts. It may have originally been the unattested *человекы, *cheloveky. In any case, in modern usage it has been replaced by люди, lyudi, the singular form of which is known in Russian only as a component of compound words (such as простолюдин, prostolyudin). This suppletion also exists in Polish (człowiek > ludzie), Czech (člověk > lidé), Serbo-Croatian (čovjek > ljudi),[26] Slovene (človek > ljudje), and Macedonian (човек (čovek) > луѓе (lugje)).

Strictly speaking, suppletion occurs when different inflections of a lexeme (i.e., with the same lexical category) have etymologically unrelated stems. The term is also used in looser senses, albeit less formally.
The term "suppletion" is also used in the looser sense when there is a semantic link between words but not an etymological one; unlike the strict inflectional sense, these may be in differentlexical categories, such as noun/verb.[27][28] English noun/adjective pairs such as father/paternal or cow/bovine are also referred to ascollateral adjectives. In this sense of the term,father/fatherlyis non-suppletive.Fatherlyisderivedfromfather, while father/paternal is suppletive. Likewisecow/cowishis non-suppletive, whilecow/bovineis suppletive. In these cases, father/pater- and cow/bov- are cognate viaProto-Indo-European, but 'paternal' and 'bovine' are borrowings into English (via Old French and Latin). The pairs are distantly etymologically related, but the words are not from a single Modern English stem. The term "weak suppletion" is sometimes used in contemporary synchronic morphology in reference to sets of stems whose alternations cannot be accounted for bysynchronicallyproductivephonological rules. For example, the two formschild/childrenare etymologically from the same source, but the alternation does not reflect any regular morphological process in modern English: this makes the pair appear to be suppletive, even though the forms go back to the same root. In that understanding, English has abundant examples of weak suppletion in itsverbal inflection: e.g.bring/brought,take/took,see/saw, etc. Even though the forms areetymologically relatedin each pair, no productive morphological rule can derive one form from the other in synchrony. Alternations just have to be learned by speakers — in much the same way as truly suppletive pairs such asgo/went. Such cases, which were traditionally simply labelled "irregular", are sometimes described with the term "weak suppletion", so as to restrict the term "suppletion" to etymologically unrelated stems.
https://en.wikipedia.org/wiki/Suppletion
A synthetic language is a language that is statistically characterized by a higher morpheme-to-word ratio. Rule-wise, a synthetic language is characterized by denoting syntactic relationships between words via inflection or agglutination, with fusional languages favoring the former and agglutinative languages the latter subtype of word synthesis. Further divisions include polysynthetic languages (most belonging to an agglutinative-polysynthetic subtype, although Navajo and other Athabaskan languages are often classified as belonging to a fusional subtype) and oligosynthetic languages (only found in constructed languages). In contrast, rule-wise, the analytic languages rely more on auxiliary verbs and word order to denote syntactic relationships between words.

Adding morphemes to a root word is used in inflection to convey a grammatical property of the word, such as denoting a subject or an object.[1] Combining two or more morphemes into one word is used in agglutinating languages instead.[2] For example, the word fast, if inflectionally combined with -er to form the word faster, remains an adjective, while the word teach, derivationally combined with -er to form the word teacher, ceases to be a verb. Some linguists consider relational morphology to be a type of derivational morphology, which may complicate the classification.[3]

Derivational and relational morphology represent opposite ends of a spectrum; that is, a single word in a given language may exhibit varying degrees of both of them simultaneously. Similarly, some words may have derivational morphology while others have relational morphology.

In derivational synthesis, morphemes of different types (nouns, verbs, affixes, etc.) are joined to create new words. That is, in general, the morphemes being combined are more concrete units of meaning.[3] The morphemes being synthesized in the following examples either belong to a particular grammatical class, such as adjectives, nouns, or prepositions, or are affixes that usually have a single form and meaning:

German: Aufsicht-s-rat-s-mitglieder-versammlung (supervision + council + members + assembly): "meeting of members of the supervisory board"

Greek: προπαροξύτόνησις proparoxytónesis (pro- 'pre' + par- 'next to' + oxý 'sharp' + tón 'pitch/tone' + -esis 'tendency'): "tendency to accent on the proparoxytone [third-to-last] position"

Polish: przystanek (przystań 'harbor' + -ek diminutive): "public transportation stop [without facilities]" (i.e. bus stop, tram stop, or rail halt); compare to dworzec.

English: anti-dis-establish-ment-arian-ism (anti- 'against' + dis- 'ending' + establish 'to institute' + -ment nominal suffix + -arian 'advocate' + -ism 'ideology'): "the movement to prevent revoking the Church of England's status as the official church [of England, Ireland, and Wales]"

Russian: достопримечательность dostoprimečátelʹnostʹ (dosto- 'deserving' + primečátelʹn- 'notable' + -ostʹ nominal suffix): "place of interest"

Persian: نوازندگی navâzandegi (navâz 'play music' + -ande '-ing' + -gi nominal suffix): "musicianship" or "playing a musical instrument"

Ukrainian: навздогін navzdohin (na- 'direction/intent' + vz- adjectival + do- 'approach' + hin 'fast movement'): "after something or someone that is moving away"

English: hyper-cholesterol-emia (hyper- 'high' + cholesterol + -emia 'blood'): "the presence of high levels of cholesterol in the blood"
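The morpheme-to-word ratio mentioned at the start of this article (often called the index of synthesis) can be estimated from morphologically segmented text. Here is a minimal sketch under the assumption that the input is already segmented with hyphens; the sample data and segmentations are illustrative only.

```python
# Sketch: estimate the index of synthesis (morphemes per word) from
# text whose words are pre-segmented into morphemes with hyphens.
# The segmentations below are illustrative assumptions.

def synthesis_index(segmented_words: list[str]) -> float:
    morphemes = sum(len(w.split("-")) for w in segmented_words)
    return morphemes / len(segmented_words)

analytic  = ["the", "dog", "will", "see", "the", "cat"]   # one morpheme per word
synthetic = ["dog-s", "chase-d", "cat-s", "yesterday"]    # several per word

print(round(synthesis_index(analytic), 2))   # 1.0  (isolating end of the scale)
print(round(synthesis_index(synthetic), 2))  # 1.75 (more synthetic)
```

An isolating language sits near 1.0 on this scale, while polysynthetic languages can score far higher; the measure is only as good as the segmentation it is fed.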
In relational synthesis, root words are joined to bound morphemes to show grammatical function. In other words, it involves the combination of more abstract units of meaning than derivational synthesis.[3] In the following examples, many of the morphemes are related to voice (e.g. passive voice), to whether a word is the subject or object of the sentence, to possession, to plurality, or to other abstract distinctions in a language:

comunic-ando-ve-le (communicate + gerund + you.PL + those.FEM.PL): "communicating those [feminine plural] to you [plural]"

Spanish: escrib-iéndo-me-lo (write + gerund + me + it): "writing it to me"

Estonian: raske-sti-kasvata-tav (heavy + -ly + educate + -able): "[child] with learning disabilities"

an-em-se/-nos-en/'n (go + we + ourselves + from): "let's get out of here"

ō-c-ā-lti-zquiya (PAST + 3SG.OBJ + 'water' + causative + irrealis): "she would have bathed him"

Latin: com-prim-unt-ur (together + crush + they + passive): "they are crushed together"

Japanese: 見させられがたい mi-sase-rare-gatai (see + causative + passive + 'difficult'): "it's difficult to be shown [this]"

Finnish: juosta-ella-isin-ko-han (run + frequentative + 1SG conditional + question + casual particle): "I wonder if I should run around [aimlessly]"

Hungarian: ház-a-i-tok-ban (house + possessive + plural + your.PL + in): "in your houses"

Hungarian: szeret-lek (love + 'I [to] you'): "I love you"

Turkish: Afyonkarahisar-lı-laş-tır-ama-(y)abil-ecek-ler-imiz-den misiniz? (Afyonkarahisar + 'citizen of' + 'transform' + PASS + 'not be' + thematic (y) + 'able' + future + plural + 'we' + 'among' + you.PL question): "Are you [plural/formal] amongst the ones whom we might not be able to make citizens of Afyonkarahisar?"

Georgian: გადმო-გვ-ა-ხტუნ-ებ-ინ-ებ-დ-ნენ-ო gadmo-gv-a-khtun-eb-in-eb-d-nen-o: "They said that they would be forced by them [the others] to make someone jump over in this direction." (The single word incorporates tense, subject, object, the relation between them, the direction of the action, and conditional and causative markers, among other things.)

Agglutinating languages have a high rate of agglutination in their words and sentences, meaning that the morphological construction of words consists of distinct morphemes that usually carry a single unique meaning.[4] These morphemes tend to look the same no matter what word they are in, so it is easy to separate a word into its individual morphemes.[1] Morphemes may be bound (that is, they must be attached to a word to have meaning, like affixes) or free (they can stand alone and still have meaning).

Fusional languages are similar to agglutinating languages in that they involve the combination of many distinct morphemes. However, morphemes in fusional languages are often assigned several different lexical meanings, and they tend to be fused together so that it is difficult to separate individual morphemes from one another.[1][5]

Polysynthetic languages are considered the most synthetic of the three types because they combine multiple stems as well as other morphemes into a single continuous word.
These languages often turn nouns into verbs.[1] Many Native Alaskan and other Native American languages are polysynthetic.

Oligosynthetic languages are a theoretical notion created by Benjamin Whorf. Such languages would be functionally synthetic but make use of a very limited array of morphemes (perhaps just a few hundred). The concept of an oligosynthetic language type was proposed by Whorf to describe the Native American language Nahuatl, although he did not further pursue this idea.[6] Though no natural language uses this process, it has found its use in the world of constructed languages, in auxlangs such as Ygyde[7] and aUI.

Synthetic languages combine (synthesize) multiple concepts into each word. Analytic languages break up (analyze) concepts into separate words. These classifications comprise two ends of a spectrum along which different languages can be classified. Present-day English is seen as analytic, but it used to be fusional; certain synthetic qualities (as in the inflection of verbs to show tense) were retained. The distinction is, therefore, a matter of degree. The most analytic languages, isolating languages, consistently have one morpheme per word, while at the other extreme, in polysynthetic languages such as some Native American languages,[8] a single inflected verb may contain as much information as an entire English sentence. The isolating-analytic–synthetic–polysynthetic classification is thus best understood as a continuum.

Mandarin Chinese illustrates the analytic end. With rare exceptions, each syllable in Mandarin (corresponding to a single written character) represents a morpheme with an identifiable meaning, even if many of such morphemes are bound. This gives rise to the common misconception that Chinese consists exclusively of "words of one syllable". In fact, however, even simple Chinese words such as míngtiān 'tomorrow' (míng 'next' + tiān 'day') and péngyou 'friend' (a compound of péng and yǒu, both of which mean 'friend') are synthetic compound words. The Chinese language of the classic works (of Confucius, for example), and southern dialects to a certain extent, are more strictly monosyllabic: each character represents one word. The evolution of modern Mandarin Chinese was accompanied by a reduction in the total number of phonemes. Words which previously were phonetically distinct became homophones. Many disyllabic words in modern Mandarin are the result of joining two related words (such as péngyou, literally "friend-friend") in order to resolve the phonetic ambiguity.

A similar process is observed in some English dialects. For instance, in the Southern dialects of American English, it is not unusual for the short vowel sounds [ɪ] and [ɛ] to be indistinguishable before nasal consonants: thus the words "pen" and "pin" are homophones (see pin-pen merger). In these dialects, the ambiguity is often resolved by using the compounds "ink-pen" and "stick-pin", in order to clarify which "p*n" is being discussed.

In some languages, definite articles are not only suffixes but also noun inflections, expressing definiteness in a synthetic manner.

Haspelmath and Michaelis[9] observed that analyticity is increasing in a number of European languages. In the German example below, the first phrase makes use of inflection, but the second phrase uses a preposition. The development of the preposition suggests a movement from synthetic to analytic.
des Hauses (the.GEN.SG house.GEN.SG) 'the house's' versus von dem Haus (of the.DAT.SG house.DAT.SG) 'of the house'

It has been argued that analytic grammatical structures are easier for adults learning a foreign language. Consequently, a larger proportion of non-native speakers learning a language over the course of its historical development may lead to a simpler morphology, as the preferences of adult learners get passed on to second-generation native speakers. This is especially noticeable in the grammar of creole languages. A 2010 paper in PLOS ONE suggests that evidence for this hypothesis can be seen in correlations between morphological complexity and factors such as the number of speakers of a language, geographic spread, and the degree of inter-linguistic contact.[10]

According to Ghil'ad Zuckermann, Modern Hebrew (which he calls "Israeli") "is much more analytic, both with nouns and verbs", compared with Classical Hebrew (which he calls "Hebrew").[11]
https://en.wikipedia.org/wiki/Synthetic_language
Tense–aspect–mood (commonly abbreviated TAM in linguistics) or tense–modality–aspect (abbreviated as TMA) is an important group of grammatical categories, which are marked in different ways by different languages.[1]

TAM covers the expression of three major components of words which lead to or assist in the correct understanding of the speaker's meaning: tense, aspect, and mood or modality.[2] For example, English uses the word "walk" in different ways for different combinations of TAM. Notably, an imperative such as "Walk!" involves no difference in the articulation of the word compared with a declarative use, although the two are used in different ways: one conveys information, the other instructs.

In some languages, evidentiality (whether evidence exists for the statement, and if so what kind) and mirativity (surprise) may also be included. Therefore, some authors extend the term to tense–aspect–mood–evidentiality (TAME for short).[3]

It is often difficult to untangle these features of a language. Several features (or categories) may be conveyed by a single grammatical construction (for instance, English -s is used for the third person singular present). However, this system may not be complete, in that not all possible combinations may have an available construction. On the other hand, the same category may be expressed with multiple constructions. In other cases, there may not be delineated categories of tense and mood, or aspect and mood. For instance, many Indo-European languages do not clearly distinguish tense from aspect.[4][5][6][7][8]

In some languages, such as Spanish and Modern Greek, the imperfective aspect is fused with the past tense in a form traditionally called the imperfect. Other languages with distinct past imperfectives include Latin and Persian.

In the traditional grammatical description of some languages, including English, many Romance languages, and Greek and Latin, "tense" or the equivalent term in that language refers to a set of inflected or periphrastic verb forms that express a combination of tense, aspect, and mood. In Spanish, the simple conditional (Spanish: condicional simple) is classified as one of the simple tenses (Spanish: tiempos simples), but is named for the mood (conditional) that it expresses. In Ancient Greek, the perfect tense (Ancient Greek: χρόνος παρακείμενος, romanized: khrónos parakeímenos)[9] is a set of forms that express both present tense and perfect aspect (finite forms), or simply perfect aspect (non-finite forms).

However, not all languages conflate tense, aspect, and mood. Some analytic languages such as creole languages have separate grammatical markers for tense, aspect, and/or mood, which comes close to the theoretical distinction. Creoles, both Atlantic and non-Atlantic, tend to share a large number of syntactic features, including the avoidance of bound morphemes. Tense, aspect, and mood are usually indicated with separate invariant pre-verbal auxiliaries. Typically the unmarked verb is used for either the timeless habitual or the stative aspect or the past perfective tense–aspect combination. In general, creoles tend to put less emphasis on marking tense than on marking aspect. Typically, aspectually unmarked stative verbs can be marked with the anterior tense, and non-statives, with or without the anterior marker, can optionally be marked for the progressive, habitual, or completive aspect or for the irrealis mood. In some creoles the anterior can be used to mark the counterfactual.
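Because creole TAM is expressed by invariant pre-verbal markers in a fixed order (anterior tense, then irrealis mood, then non-punctual aspect, as described for Hawaiian Creole below), verb-phrase formation can be sketched as simple concatenation. The marker forms here follow the Hawaiian Creole description that follows; the composer function itself is an illustrative assumption, not a grammar of the language.

```python
# Sketch: composing a creole-style verb phrase from invariant
# pre-verbal TAM markers in the order anterior > irrealis > non-punctual.
# Marker forms follow the Hawaiian Creole description in the text.

MARKERS = {
    "anterior": "wen",      # past/anterior tense
    "irrealis": "gon",      # future/conditional mood
    "nonpunctual": "ste",   # progressive/non-punctual aspect
}

def verb_phrase(verb: str, anterior=False, irrealis=False, nonpunctual=False) -> str:
    parts = []
    if anterior:
        parts.append(MARKERS["anterior"])
    if irrealis:
        parts.append(MARKERS["irrealis"])
    if nonpunctual:
        parts.append(MARKERS["nonpunctual"])
    parts.append(verb)
    return " ".join(parts)

print(verb_phrase("plei", irrealis=True, nonpunctual=True))  # gon ste plei "gonna be playing"
print(verb_phrase("it", anterior=True, nonpunctual=True))    # wen ste it "was eating"
```

Both outputs correspond to attested combinations cited in the Hawaiian Creole section below.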
When any of tense, aspect, and modality are specified, they are typically indicated separately with invariant pre-verbal markers, in the sequence anterior relative tense (prior to the time focused on), irrealis mode (conditional or future), non-punctual aspect.[10]: pp. 176–9, p. 191 [11]

Hawaiian Creole English (HCE), or Hawaiian Pidgin, is a creole language with most of its vocabulary drawn from its superstrate English, but as with all creoles its grammar is very different from that of its superstrate. HCE verbs[12] have only two morphologically distinct forms: the unmarked form (e.g. teik "take") and the progressive form with the suffix -in appended to the unmarked form (teikin "taking"). The past tense is indicated either by the unmarked form or by the preverbal auxiliary wen (Ai wen see om "I saw him") or bin (especially among older speakers) or haed (especially on Kauai). However, for "to say" the marked past tense has the obligatory irregular form sed "said", and there are optional irregular past tense forms sin or saw = wen si "saw", keim = wen kam "came", and tol = wen tel "told". The past is indicated only once in a sentence, since it is a relative tense.

The future marker is the preverbal auxiliary gon or goin "am/is/are going to": gon bai "is going to buy". The future of the past tense/aspect uses the future form, since the use of the past tense form to mark the time of perspective retains its influence throughout the rest of the sentence: Da gai sed hi gon fiks mi ap ("The guy said he [was] gonna fix me up").

There are various preverbal modal auxiliaries: kaen "can", laik "want to", gata "have got to", haeftu "have to", baeta "had better", sapostu "am/is/are supposed to". Tense markers are used infrequently before modals: gon kaen kam "is going to be able to come". Waz "was" can indicate past tense before the future marker gon and the modal sapostu: Ai waz gon lift weits "I was gonna lift weights"; Ai waz sapostu go "I was supposed to go".

There is a preverbal auxiliary yustu for past tense habitual aspect: yustu tink so ("used to think so"). The progressive aspect can be marked with the auxiliary ste in place of or in addition to the verbal suffix -in: Wat yu ste it? = Wat yu itin? ("What are you eating?"); Wi ste mekin da plaen ("We're making the plan"). The latter, double-marked, form tends to imply a transitory nature of the action. Without the suffix, ste can alternatively indicate perfective aspect: Ai ste kuk da stu awredi ("I cooked the stew already"); this is true, for instance, after a modal: yu sapostu ste mek da rais awredi ("You're supposed to have made the rice already"). Stat is an auxiliary for inchoative aspect when combined with the verbal suffix -in: gon stat plein ("gonna start playing"). The auxiliary pau without the verbal suffix indicates completion: pau tich "finish(ed) teaching". Aspect auxiliaries can co-occur with tense markers: gon ste plei ("gonna be playing"); wen ste it ("was eating").

Modern Greek[13]: pp. 50–76 distinguishes the perfective and imperfective aspects by the use of two different verb stems. For the imperfective aspect, suffixes are used to indicate the past tense indicative mood, the non-past indicative mood, and the subjunctive and imperative moods. For the perfective aspect, suffixes are used to indicate the past tense indicative mood, the subjunctive mood, and the imperative mood. The perfective subjunctive is twice as common as the imperfective subjunctive. The subjunctive mood form is used in dependent clauses and in situations where English would use an infinitive (which is absent in Greek).
There is a perfect form in both tenses, which is expressed by an inflected form of the imperfective auxiliary verb έχω "have" and an invariant verb form derived from the perfective stem of the main verb. The perfect form is much rarer than in English. The non-past perfect form is a true perfect aspect, as in English. In addition, all the basic forms (past and non-past, imperfective and perfective) can be combined with a particle indicating future tense/conditional mood. Combined with the non-past forms, this expresses an imperfective future and a perfective future. Combined with the imperfective past, it is used to indicate the conditional, and with the perfective past, the inferential. If the future particle precedes the present perfect form, a future perfect form results.

In Hindustani, grammatical aspects are overtly marked. There are four aspects in Hindustani: the simple aspect, the habitual aspect, the perfective aspect, and the progressive aspect. Periphrastic Hindustani verb forms consist of two elements: the first of these two elements is the aspect marker, and the second element (the copula) is the tense-mood marker.[14] Three of these aspects are formed from their participle forms used with the copula verb of Hindustani. However, the aspectual participles can also take the verbs rêhnā (to stay/remain), ānā (to come), and jānā (to go) as their copula, which themselves can be conjugated into any of the three grammatical aspects, hence forming sub-aspects.[15][16] Each copula besides honā (to be) gives a different nuance to the aspect. For example, when the habitual participle is combined with the copula rêhnā (to stay/remain), sub-aspectual habitual forms are produced. The main copula honā (to be) is conjugated to assign a tense and a grammatical mood to the aspectual forms.

In all Slavic languages, most verbs come in pairs, with one member indicating an imperfective aspect and the other indicating a perfective one. Most Russian verbs[17]: pp. 53–85 come in such pairs, one with imperfective aspect and the other with perfective aspect, the latter usually formed from the former with a prefix but occasionally with a stem change or a different root. Perfective verbs, whether derived or basic, can be made imperfective with a suffix.[4]: p. 84 Each aspect has a past form and a non-past form. The non-past verb forms are conjugated by person/number, while the past verb forms are conjugated by gender/number. The present tense is indicated with the non-past imperfective form. The future in the perfective aspect is expressed by applying the conjugation of the present form to the perfective version of the verb. There is also a compound future imperfective form consisting of the future of "to be" plus the infinitive of the imperfective verb. The conditional mood is expressed by a particle (= English "would") after the past tense form. There are conjugated modal verbs, followed by the infinitive, for obligation, necessity, and possibility/permission.

Romance languages have from five to eight simple inflected forms capturing tense–aspect–mood, as well as corresponding compound structures combining the simple forms of "to have" or "to be" with a past participle. There is a perfective/imperfective aspect distinction. French has inflectionally distinct imperative, subjunctive, indicative, and conditional mood forms. As in English, the conditional mood form can also be used to indicate a future-as-viewed-from-the-past tense–aspect combination in the indicative mood.
The subjunctive mood form is used frequently to express doubt, desire, request, etc. in dependent clauses. In addition to the future-as-viewed-from-the-past usage of the conditional mood form, there are indicative mood forms for the following combinations: the future; an imperfective past tense–aspect combination, whose form can also be used in contrary-to-fact "if" clauses with present reference; a perfective past tense–aspect combination, whose form is only used for literary purposes; and a catch-all formulation known as the "present" form, which can be used to express the present, past historical events, or the near future. All synthetic forms are also marked for person and number.

Additionally, the indicative mood has five compound (two-word) verb forms, each of which results from using one of the above simple forms of "to have" (or of "to be" for intransitive verbs of motion) plus a past participle. These forms are used to shift back the time of an event relative to the time from which the event is viewed. This perfect form as applied to the present tense does not represent the perfect tense/aspect (past event with continuation to or relevance for the present), but rather represents a perfective past tense–aspect combination (a past action viewed in its entirety).[4]: pp. 144, 171

Unlike Italian or Spanish, French does not mark for a continuous aspect. Thus, "I am doing it" and "I do it" both translate to the same sentence in French: Je le fais. However, this information is often clear from context, and when not, it can be conveyed using periphrasis: for example, the expression être en train de [faire quelque chose] ("to be in the middle of [doing something]") is often used to convey the sense of a continuous aspect; the addition of adverbs like encore ("still") may also convey continuous, repetitive, or frequent aspects. The use of the participle mood (in the present tense, inherited from the Latin gerundive) has almost completely fallen out of use in modern French for denoting the continuous aspect of verbs, but it remains in use for other aspects like simultaneity or causality, and this participle mood also competes with the infinitive mood (seen as a form of nominalisation of the verb) for other aspects marked by nominal prepositions.

Italian has synthetic forms for the indicative, imperative, conditional, and subjunctive moods. The conditional mood form can also be used for hearsay: Secondo lui, sarebbe tempo di andare "According to him, it would be [is] time to go".[18]: p. 76 The indicative mood has simple forms (one word, but conjugated by person and number) for the present tense, the imperfective aspect in the past tense, the perfective aspect in the past, and the future (and the future form can also be used to express present probability, as in the English "It will be raining now").[18]: p. 75 As with other Romance languages, compound verbs shifting the action to the past from the point in time from which it is perceived can be formed by preceding a past participle by a conjugated simple form of "to have", or of "to be" in the case of intransitive verbs.
As with French, this form when applied to the present tense of "to have" or "to be" does not convey perfect aspect but rather the perfective aspect in the past.[18]: p. 62 In the compound pluperfect, the helping verb is in the past imperfective form in a main clause but in the past perfective form in a dependent clause.[18]: p. 71

Unlike French, Italian has a form to express progressive aspect: in either the present or the past imperfective, the verb stare ("to stand", "to be temporarily"), conjugated for person and number, is followed by a present gerund (indicated by the suffix -ando or -endo, "-ing").[18]: p. 59

Portuguese has synthetic forms for the indicative, imperative, conditional, and subjunctive moods. The conditional mood form can also express past probability, as in Seria ele que falava ("It would be [was] he who was speaking").[19]: p. 62 The subjunctive form seldom appears outside dependent clauses. In the indicative, there are five one-word forms conjugated for person and number: one for the present tense (which can indicate progressive or non-progressive aspect); one for the perfective aspect of the past; one for the imperfective aspect of the past; a pluperfect form used only in formal writing;[19]: pp. 57–58, 85 and a future tense form that, as in Italian, can also indicate present tense combined with probabilistic modality. As with other Romance languages, compound verbs shifting the time of action to the past relative to the time from which it is perceived can be formed by preceding a past participle with a conjugated simple form of "to have". Using the past tense of the helping verb gives the pluperfect form that is used in conversation. Using the present tense form of the helping verb gives a true perfect aspect, though one whose scope is narrower than that in English: it refers to events occurring in the past and extending to the present, as in Tem feito muito frio este inverno ("It's been very cold this winter (and still is)").[19]: p. 84

Portuguese expresses progressive aspect in any tense by using conjugated estar ("to stand", "to be temporarily") plus the present participle ending in -ando, -endo, or -indo: Estou escrevendo uma carta ("I am writing a letter").[19]: p. 52

Futurity can be expressed in three ways other than the simple future form:[19]: pp. 61–62 Vou ver João esta tarde (literally "I go to see John this afternoon"); Temos que ver João hoje (literally "We have to see John today"); and Hei de ver João amanhã (literally "I have of to-see John tomorrow", i.e. "I am to see John tomorrow").

Spanish morphologically distinguishes the indicative, imperative, subjunctive, and conditional moods. In the indicative mood, there are synthetic (one-word, conjugated for person/number) forms for the present tense, the past tense in the imperfective aspect, the past tense in the perfective aspect, and the future tense. The past can be viewed from any given time perspective by using conjugated "to have" in any of its synthetic forms plus the past participle. When this compound form is used with the present tense form of "to have", perfect tense/aspect (past action with present continuation or relevance) is conveyed (as in Portuguese, but unlike in Italian or French). Spanish expresses the progressive similarly to English, Italian, and Portuguese, using the verb "to be" plus the present participle: estoy leyendo ("I am reading").
Germanic languages tend to have two morphologically distinct simple forms, for past and non-past, as well as a compound construction for the past or for the perfect, and they use modal auxiliary verbs. The simple forms, the first part of the non-modal compound form, and possibly the modal auxiliaries are usually conjugated for person and/or number. A subjunctive mood form is sometimes present. English also has a compound construction for continuous aspect. Unlike some Indo-European languages such as the Romance and Slavic languages, Germanic languages have no perfective/imperfective dichotomy.[4]: p. 167

The most common past tense construction in German is haben ("to have") plus past participle (or, for intransitive verbs of motion, sein ("to be") plus past participle), which is a pure past construction rather than one conveying perfect aspect. The past progressive is conveyed by the simple past form. The future can be conveyed by the auxiliary werden, which is conjugated for person and number, but often the simple non-past form is used to convey the future (in fact, werden is also used to mark an assumptive similar to dürfte, müsste "should" and wohl "arguably", such as when it is combined with jetzt "now", and this may be considered its primary function, from which future marking is derived). Modality is conveyed via conjugated pre-verbal modals: müssen "to have to", wollen "to want to", können "to be able to", würden "would" (conditional), sollten "should" (the subjunctive form of sollen), sollen "to be supposed to", mögen "to like", dürfen "to be allowed to".[20]

Danish has the usual Germanic simple past and non-past tense forms and the compound construction using "to have" (or, for intransitive verbs of motion, "to be"), the compound construction indicating past tense rather than perfect aspect. Futurity is usually expressed with the simple non-past form, but the auxiliary modals vil ("want") and skal ("must", expressing obligation) are sometimes used (see Future tense#Danish). Other modals include kan ("can"), kan gerne ("may", permission), må ("must"), and må gerne ("may", permission). Progressivity can be expressed periphrastically, as in er ved at læse ("is in the process of reading"), er i færd med at vaske ("is in the process of washing"), sidder og læser (literally "sits and reads"), and står og taler (literally "stands and talks"). The subjunctive mood form has disappeared except for a few stock phrases.[21]

In Dutch, the simple non-past form can convey the progressive, which can also be expressed by the infinitive preceded by liggen "lie", lopen "walk, run", staan "stand", or zitten "sit" plus te. The compound "have" (or "be" before intransitive verbs of motion toward a specific destination) plus past participle is synonymous with, and more frequently used than, the simple past form, which is used especially for narrating a past sequence of events. The past perfect construction is analogous to that in English. Futurity is often expressed with the simple non-past form, but can also be expressed using the infinitive preceded by the conjugated present tense of zullen; the latter form can also be used for probabilistic modality in the present. Futurity can also be expressed with "go" plus the infinitive: Hij gaat een brief schrijven ("He is going to write a letter").
The future perfect tense/aspect combination is formed by conjugated zullen + hebben ("to have") (or zijn ("to be")) + past participle: Zij zullen naar Breda gegaan zijn ("They will have gone to Breda"). The conditional mood construction uses the conjugated past tense of zullen: Hij zou graag thuis blijven ("He would gladly stay home"). The past tense/conditional mood combination is formed using the auxiliary "to have" or "to be": Hij zou graag thuis gebleven zijn ("He would gladly have stayed home"). In contemporary use the subjunctive form is mostly, but not completely, confined to set phrases and semi-fixed expressions, though in older Dutch texts its use is encountered frequently. There are various conjugated modal auxiliaries: kunnen "to be able", moeten "to have to", mogen "to be possible" or "to have permission", willen "to want to", laten "to allow" or "to cause". Unlike in English, these modals can be combined with the future tense form: Hij zal ons niet kunnen helpen ("He will not be able to help us").[22]: pp. 45–65

As with other Germanic languages, Icelandic[23]: pp. 135–164 has two simple verb forms: past and non-past. Compound constructions that look to the past from a given time perspective use conjugated "to have" (or "to be" for intransitive verbs of motion) plus past participle. In each voice there are forms for the indicative mood and the subjunctive mood for each of the simple past, the simple non-past, the perfect, the past perfect, the future, and the future perfect; there are also a non-past conditional mood form and a past conditional mood form, as well as an imperative mood. The perfect form is used for a past event with reference to the present or stretching to the present, or for a past event about which there is doubt, so the perfect form represents aspect or modality and not tense. The future tense form is seldom used. The non-past subjunctive form expresses a wish or command; the past subjunctive form expresses possibility. The indicative mood form is used in both clauses of "if [possible situation]...then..." sentences, although "if" can be replaced by the use of the subjunctive mood form. The subjunctive form is used in both clauses of "if [imaginary situation]...then..." sentences, and is often used in subordinate clauses. There are various modal auxiliary verbs. There is a progressive construction using "to be", which is used only for abstract concepts like "learn" and not for activities like "sit": ég er að læra ("I am [at] learning").

The English language allows a wide variety of expressions of combinations of tense, aspect, and mood, using a variety of grammatical constructions. These constructions involve pure morphological changes (suffixes and internal sound changes of the verb), conjugated compound verbs, and invariant auxiliaries. For English tense–aspect–mood from the perspective of modality, see Palmer[7] and Nielsen;[24] from the perspective of tense, see Comrie[5] and Fleischman;[25] and from the perspective of aspect, see Comrie.[6]

The unmarked verb form (as in run, feel) is the infinitive with the particle to omitted. It indicates nonpast tense with no modal implication.
In an inherently stative verb such as feel, it can indicate present time (I feel well) or future time in independent clauses (I'll come tomorrow if I feel better). In an inherently non-stative verb such as run, the unmarked form can indicate gnomic or habitual situations (birds fly; I run every day) or scheduled futurity, often with a habitual reading (tomorrow I run the 100 metre race at 5:00; next month I run the 100 metre race every day). Non-stative verbs in unmarked form appearing in dependent clauses can indicate even unscheduled futurity (I'll feel better after I run tomorrow; I'll feel better if I run every day next month). The unmarked verb is negated by preceding it with do/does not (I do not feel well, He does not run every day). Here do has no implication of emphasis, unlike in the affirmative (I do feel better, I do run every day).

The aspectually and modally unmarked past tense is usually marked for tense by the suffix -ed, pronounced as /t/, /d/, or /əd/ depending on the phonological context. However, over 400 verbs (including over 200 with distinct roots, short verbs for features of everyday life, of Germanic origin) are irregular, and their morphological changes are internal (as in I take, I took). (See List of English irregular verbs.) This aspectually unmarked past tense form appears in innately stative verbs ("I felt bad.") and in non-stative verbs, in which case the aspect could be habitual ("I took one brownie every day last week.") or perfective ("I took a brownie yesterday."), but not progressive. This form is negated with an invariant analytical construction using the morphologically unmarked verb (I / he did not feel bad, I did not take a brownie). As with do and do not, no emphasis is imparted by the use of did in combination with the negative not (compare the affirmative I / he did take the brownie, in which did conveys emphasis). For the morphological changes associated with the subjunctive mood, see English subjunctive.

There are two types of conjugated compound verbs in English, which can be combined. Both of these morphological changes can be combined with the compound verbal constructions given below involving invariant auxiliaries, to form verb phrases such as will have been taking. Aside from the above-mentioned auxiliary verbs, English has fourteen invariant auxiliaries (often called modal verbs), which are used before the morphologically unmarked verb to indicate mood, aspect, tense, or some combination thereof.[7] Some of these have more than one modal interpretation, the choice between which must be based on context; in these cases, the equivalent past tense construction may apply to one but not the other of the modal interpretations. For more details see English modal verbs.

Although several verbal categories are expressed purely morphologically in Basque,[27] periphrastic verbal formations predominate. For the few verbs that have synthetic conjugations, Basque has forms for past tense continuous aspect (state or ongoing action) and present tense continuous aspect, as well as an imperative mood. In the compound verbal constructions, there are forms for the indicative mood, the conditional mood, a mood for conditional possibility ("would be able to"), an imperative mood, a mood of ability or possibility, a mood for hypothetical "if" clauses in the present or future time, a counterfactual mood in the past tense, and a subjunctive mood (used mostly in literary style in complement clauses and purpose/wish clauses).
Within the indicative mood, there are a present tense habitual aspect form (which can also be used with stative verbs), a past tense habitual aspect form (which can also be used with stative verbs), a near past tense form, a remote past tense form (which can also be used to convey past perspective on an immediately prior situation or event), a future-in-the-past form (which can also be used modally for a conjecture about the past or as a conditional result of a counterfactual premise), and a future tense form (which can also be used for the modality of present conjecture, especially with a lexically stative verb, or of determination/intention). There are also some constructions showing an even greater degree of periphrasis: one for progressive aspect, and others for the modalities of volition ("want to"), necessity/obligation ("have to", "need to"), and ability ("be able to").

Hawaiian[4]: ch. 6[28] is an isolating language, so its verbal grammar relies exclusively on unconjugated auxiliary verbs. It has indicative and imperative mood forms, the imperative indicated by e + verb (or in the negative by mai + verb). In the indicative, its tense/aspect forms are: unmarked (used generically and for the habitual aspect, as well as the perfective aspect for past time), ua + verb (perfective aspect, but frequently replaced by the unmarked form), ke + verb + nei (present tense progressive aspect; very frequently used), and e + verb + ana (imperfective aspect, especially for non-present time). Modality is expressed with different verbal auxiliaries: pono conveys obligation/necessity, as in He pono i na kamali'i a pau e maka'ala ("Children should beware"); ability is conveyed by hiki, as in Ua hiki i keia kamali'i ke heluhelu ("This child can read").
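Because Hawaiian marks these tense/aspect categories entirely with invariant particles around an unchanging verb, the system lends itself to a mechanical summary. The following minimal sketch is only an illustration of the particle templates just described; the function name and the simplified template table are inventions for this example and abstract away real Hawaiian syntax.

# Illustrative sketch of the Hawaiian tense/aspect particle templates
# described above. Table and function are invented for this example.
TEMPLATES = {
    "unmarked": "{v}",            # generic/habitual; also perfective for past time
    "perfective": "ua {v}",       # frequently replaced by the unmarked form
    "progressive": "ke {v} nei",  # present progressive; very frequently used
    "imperfective": "e {v} ana",  # especially for non-present time
    "imperative": "e {v}",        # the negative imperative uses mai instead
}

def mark(verb: str, category: str) -> str:
    """Wrap an (uninflected) verb in the particle template for a category."""
    return TEMPLATES[category].format(v=verb)

print(mark("heluhelu", "progressive"))  # -> "ke heluhelu nei"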
https://en.wikipedia.org/wiki/Tense%E2%80%93aspect%E2%80%93mood
In linguistic morphology, an uninflected word is a word that has no morphological markers (inflection), such as affixes, ablaut, or consonant gradation, indicating declension or conjugation. If a word has an uninflected form, this is usually the form used as the lemma for the word.[1]

In English and many other languages, uninflected words include prepositions, interjections, and conjunctions, often called invariable words. These cannot be inflected under any circumstances (unless they are used as different parts of speech, as in "ifs and buts"). Only words that cannot be inflected at all are called "invariable". In the strict sense of the term "uninflected", only invariable words are uninflected, but in broader linguistic usage the term is extended to inflectable words that appear in their basic form. For example, English nouns are said to be uninflected in the singular, while they show inflection in the plural (represented by the affix -s/-es). The term "uninflected" can also refer to uninflectability with respect to one or more, but not all, morphological features; for example, one can say that Japanese verbs are uninflected for person and number, although they do inflect for tense, politeness, and several moods and aspects.

In the strict sense, among English nouns only mass nouns (such as sand, information, or equipment) are truly uninflected, since they have only one form that does not change; count nouns are always inflected for number, even if the singular inflection is shown by an "invisible" affix (the null morpheme). In the same way, English verbs are inflected for person and tense even if the morphology showing those categories is realized as null morphemes. In contrast, other analytic languages like Mandarin Chinese have truly uninflected nouns and verbs, where the notions of number and tense are completely absent. In many inflected languages, such as Greek and Russian, some nouns and adjectives of foreign origin are left uninflected in contexts where native words would be inflected; for instance, the name Abraam in Greek (from Hebrew), the Modern Greek word μπλε ble (from French bleu), the Italian word computer, and the Russian words кенгуру kenguru ("kangaroo") and пальто pal'to ("coat", from French paletot). In German, all modal particles are uninflected.[2]
https://en.wikipedia.org/wiki/Uninflected_word
Linguistic relativity asserts that language influences worldview or cognition. One form of linguistic relativity, linguistic determinism, regards peoples' languages as determining and influencing the scope of cultural perceptions of their surrounding world.[1]

Various terms refer to linguistic relativity: the Whorf hypothesis; the Sapir–Whorf hypothesis (/səˌpɪər ˈhwɔːrf/, sə-PEER WHORF); the Whorf–Sapir hypothesis; and Whorfianism. The hypothesis is in dispute, with many different variations throughout its history.[2][3] The strong hypothesis of linguistic relativity, now referred to as linguistic determinism, holds that language determines thought and that linguistic categories limit and restrict cognitive categories. This was a claim made by some linguists before World War II;[4] since then it has fallen out of acceptance among contemporary linguists.[5] Nevertheless, research has produced positive empirical evidence supporting a weaker version of linguistic relativity:[5][4] that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them.

Although common, the term Sapir–Whorf hypothesis is sometimes considered a misnomer for several reasons. Edward Sapir (1884–1939) and Benjamin Lee Whorf (1897–1941) never co-authored any works and never stated their ideas in terms of a hypothesis. The distinction between a weak and a strong version of the hypothesis is also a later development; Sapir and Whorf never used such a dichotomy, although their writings and their opinions of this relativity principle often expressed it in stronger or weaker terms.[6][7]

The principle of linguistic relativity and the relationship between language and thought have also received attention in varying academic fields, including philosophy, psychology and anthropology. The idea has also influenced works of fiction and the invention of constructed languages.

The idea was first expressed explicitly by 19th-century thinkers such as Wilhelm von Humboldt and Johann Gottfried Herder, who considered language the expression of the spirit of a nation. Members of the early 20th-century school of American anthropology, including Franz Boas and Edward Sapir, also endorsed versions of the idea to a certain extent, including at a 1928 meeting of the Linguistic Society of America,[8] but Sapir in particular wrote more often against than in favor of anything like linguistic determinism. Sapir's student Benjamin Lee Whorf came to be considered the primary proponent, as a result of his published observations of how he perceived linguistic differences to have consequences for human cognition and behavior. Harry Hoijer, another of Sapir's students, introduced the term "Sapir–Whorf hypothesis",[9] even though the two scholars never formally advanced any such hypothesis.[10] A strong version of relativist theory was developed from the late 1920s by the German linguist Leo Weisgerber. Whorf's principle of linguistic relativity was reformulated as a testable hypothesis by Roger Brown and Eric Lenneberg, who performed experiments designed to determine whether color perception varies between speakers of languages that classify colors differently. As the emphasis on the universal nature of human language and cognition grew during the 1960s, the idea of linguistic relativity fell out of favor among linguists.
From the late 1980s, a new school of linguistic relativity scholars has examined the effects of differences in linguistic categorization on cognition, finding broad support for non-deterministic versions of the hypothesis in experimental contexts.[11][12] Some effects of linguistic relativity have been shown in several semantic domains, although they are generally weak. Currently, a nuanced opinion of linguistic relativity is espoused by most linguists: language influences certain kinds of cognitive processes in non-trivial ways, but other processes are better considered as arising from connectionist factors. Research emphasizes exploring the manners and extent to which language influences thought.[11]

The idea that language and thought are intertwined is ancient. In his dialogue Cratylus, Plato explores the idea that conceptions of reality, such as Heraclitean flux, are embedded in language. But Plato has been read as arguing against sophist thinkers such as Gorgias of Leontini, who claimed that the physical world cannot be experienced except through language; this made the question of truth dependent on aesthetic preferences or functional consequences. Plato may have held instead that the world consisted of eternal ideas and that language should represent these ideas as accurately as possible.[13] Nevertheless, Plato's Seventh Letter claims that ultimate truth is inexpressible in words.

Following Plato, St. Augustine, for example, argued that language was merely like labels applied to already existing concepts. This opinion remained prevalent throughout the Middle Ages.[14] Roger Bacon held that language was but a veil covering eternal truths, hiding them from human experience. For Immanuel Kant, language was but one of several methods used by humans to experience the world.

During the late 18th and early 19th centuries, the idea of the existence of different national characters, or Volksgeister, of different ethnic groups was a major motivator of the German Romantic school and of the early ideologies of ethnic nationalism.[15]

Johann Georg Hamann is often suggested to be the first among the actual German Romantics to discuss the concept of the "genius" of a language.[16][17] In his "Essay Concerning an Academic Question", Hamann suggests that a people's language affects their worldview: "The lineaments of their language will thus correspond to the direction of their mentality."[18]

In 1820, Wilhelm von Humboldt connected the study of language to the national romanticist program by proposing that language is the fabric of thought. Thoughts are produced as a kind of internal dialog using the same grammar as the thinker's native language.[19] This opinion was part of a greater picture in which the worldview of an ethnic nation, its "Weltanschauung", was considered to be represented by the grammar of its language. Von Humboldt argued that languages with an inflectional morphological type, such as German, English and the other Indo-European languages, were the most perfect languages, and that accordingly this explained the dominance of their speakers with respect to the speakers of less perfect languages.
Wilhelm von Humboldt declared in 1820: "The diversity of languages is not a diversity of signs and sounds but a diversity of views of the world."[19] In Humboldt's humanistic understanding of linguistics, each language creates the individual's worldview in its particular way through its lexical and grammatical categories, conceptual organization, and syntactic models.[20] Herder worked alongside Hamann to establish the question of whether language had a human/rational or a divine origin.[21] Herder added the emotional component of the hypothesis, and Humboldt then took this information and applied it to various languages to expand on the hypothesis.

The idea that some languages are superior to others and that lesser languages maintained their speakers in intellectual poverty was widespread during the early 20th century.[22] The American linguist William Dwight Whitney, for example, actively strove to eradicate Native American languages, arguing that their speakers were savages and would be better off learning English and adopting a "civilized" way of life.[23] The first anthropologist and linguist to challenge this opinion was Franz Boas.[24] While performing geographical research in northern Canada he became fascinated with the Inuit and decided to become an ethnographer. Boas stressed the equal worth of all cultures and languages, maintaining that there was no such thing as a primitive language and that all languages were capable of expressing the same content, albeit by widely differing means.[25] Boas saw language as an inseparable part of culture, and he was among the first to require of ethnographers that they learn the native language of the culture under study and document verbal culture such as myths and legends in the original language.[26][27]

Boas: It does not seem likely [...] that there is any direct relation between the culture of a tribe and the language they speak, except in so far as the form of the language will be moulded by the state of the culture, but not in so far as a certain state of the culture is conditioned by the morphological traits of the language.[28]

Boas' student Edward Sapir referred back to the Humboldtian idea that languages were a major factor in understanding the cultural assumptions of peoples.[29] He espoused the opinion that, because of the differences in the grammatical systems of languages, no two languages were similar enough to allow for perfect cross-translation. Sapir also thought that because language represented reality differently, it followed that the speakers of different languages would perceive reality differently.

Sapir: No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached.[30]

However, Sapir explicitly rejected strong linguistic determinism, stating: "It would be naïve to imagine that any analysis of experience is dependent on pattern expressed in language."[31]

Sapir was explicit that the associations between language and culture were neither extensive nor particularly profound, if they existed at all: It is easy to show that language and culture are not intrinsically associated. Totally unrelated languages share in one culture; closely related languages—even a single language—belong to distinct culture spheres. There are many excellent examples in Aboriginal America. The Athabaskan languages form as clearly unified, as structurally specialized, a group as any that I know of.
The speakers of these languages belong to four distinct culture areas... The cultural adaptability of the Athabaskan-speaking peoples is in the strangest contrast to the inaccessibility to foreign influences of the languages themselves.[32]

Sapir offered similar observations about speakers of so-called "world" or "modern" languages, noting: "possession of a common language is still and will continue to be a smoother of the way to a mutual understanding between England and America, but it is very clear that other factors, some of them rapidly cumulative, are working powerfully to counteract this leveling influence. A common language cannot indefinitely set the seal on a common culture when the geographical, physical, and economic determinants of the culture are no longer the same throughout the area."[33]

While Sapir never made a practice of studying directly how languages affected thought, some notion of (probably "weak") linguistic relativity informed his basic understanding of language, and would be developed by Whorf.[34]

Drawing on influences such as Humboldt and Friedrich Nietzsche, some European thinkers developed ideas similar to those of Sapir and Whorf, generally working in isolation from each other. Prominent in Germany from the late 1920s through the 1960s were the strongly relativist theories of Leo Weisgerber and his concept of a "linguistic inter-world", mediating between external reality and the forms of a given language in ways peculiar to that language.[35] The Russian psychologist Lev Vygotsky read Sapir's work and experimentally studied the ways in which the development of concepts in children was influenced by structures given in language. His 1934 work Thought and Language[36] has been compared to Whorf's and taken as mutually supportive evidence of language's influence on cognition.[37] Drawing on Nietzsche's ideas of perspectivism, Alfred Korzybski developed the theory of general semantics, which has been compared to Whorf's notions of linguistic relativity.[38] Though influential in their own right, these works have not been influential in the debate on linguistic relativity, which has tended to center on the American paradigm exemplified by Sapir and Whorf.

More than any other linguist, Benjamin Lee Whorf has become associated with what he termed the "linguistic relativity principle".[39] Studying Native American languages, he attempted to account for the ways in which grammatical systems and language-use differences affected perception. Whorf's opinions regarding the nature of the relation between language and thought remain under contention. However, a version of the theory holds some merit: for example, different words mean different things in different languages, and not every word in every language has a one-to-one exact translation in a different language.[40] Critics such as Lenneberg,[41] Black, and Pinker[42] attribute to Whorf a strong linguistic determinism, while Lucy, Silverstein and Levinson point to Whorf's explicit rejections of determinism and to passages where he contends that translation and commensuration are possible. Detractors such as Lenneberg,[41] Chomsky and Pinker[43] criticized him for insufficient clarity in his description of how language influences thought, and for not proving his conjectures. Most of his arguments were in the form of anecdotes and speculations that served as attempts to show how "exotic" grammatical traits were associated with what were apparently equally exotic worlds of thought.

In Whorf's words: We dissect nature along lines laid down by our native language.
The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds—and this means largely by the linguistic systems of our minds. We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language [...] all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated.[44]

Among Whorf's best-known examples of linguistic relativity are instances where a non-European language has several terms for a concept that is described with only one word in European languages (Whorf used the acronym SAE, "Standard Average European", to allude to the rather similar grammatical structures of the well-studied European languages, in contrast to the greater diversity of less-studied languages). One of Whorf's examples was the supposedly large number of words for 'snow' in the Inuit languages, an example that was later contested as a misrepresentation.[45] Another is the Hopi language's words for water, one indicating drinking water in a container and another indicating a natural body of water.[46] These examples of polysemy served the double purpose of showing that non-European languages sometimes made more specific semantic distinctions than European languages and that direct translation between two languages, even of seemingly basic concepts such as snow or water, is not always possible.[47]

Another example comes from Whorf's experience as a chemical engineer working for an insurance company as a fire inspector.[45] While inspecting a chemical plant he observed that the plant had two storage rooms for gasoline barrels, one for the full barrels and one for the empty ones. He further noticed that while no employees smoked cigarettes in the room for full barrels, no one minded smoking in the room with empty barrels, although this was potentially much more dangerous because of the flammable vapors still in the barrels. He concluded that the use of the word empty in connection with the barrels had led the workers to unconsciously regard them as harmless, although consciously they were probably aware of the risk of explosion. This example was later criticized by Lenneberg[41] as not actually demonstrating causality between the use of the word empty and the action of smoking, but instead as an example of circular reasoning. Pinker in The Language Instinct ridiculed this example, claiming that it reflected a failing of human insight rather than of language.[43]

Whorf's most elaborate argument for linguistic relativity regarded what he believed to be a fundamental difference in the understanding of time as a conceptual category among the Hopi.[48] He argued that in contrast to English and other SAE languages, Hopi does not treat the flow of time as a sequence of distinct, countable instances, like "three days" or "five years", but rather as a single process, and that consequently it has no nouns referring to units of time as SAE speakers understand them. He proposed that this view of time was fundamental to Hopi culture and explained certain Hopi behavioral patterns.
Ekkehart Malotki later claimed that he had found no evidence of Whorf's claims among 1980s-era Hopi speakers, nor in historical documents dating back to the arrival of Europeans. Malotki used evidence from archaeological data, calendars, historical documents, and modern speech; he concluded that there was no evidence that the Hopi conceptualize time in the way Whorf suggested. Many universalist scholars such as Pinker consider Malotki's study a final refutation of Whorf's claim about Hopi, whereas relativist scholars such as John A. Lucy and Penny Lee criticized Malotki's study for mischaracterizing Whorf's claims and for forcing Hopi grammar into a model of analysis that does not fit the data.[49]

Whorf's argument about Hopi speakers' conceptualization of time is an example of the structure-centered method of research into linguistic relativity, which Lucy identified as one of three main types of research on the topic.[50] The "structure-centered" method starts with a language's structural peculiarity and examines its possible ramifications for thought and behavior. The defining example is Whorf's observation of discrepancies between the grammar of time expressions in Hopi and English. More recent research in this vein is Lucy's work describing how usage of the categories of grammatical number and of numeral classifiers in the Mayan language Yucatec results in Mayan speakers classifying objects according to material rather than shape, as preferred by English speakers.[51] However, philosophers including Donald Davidson and Jason Josephson Storm have argued that Whorf's Hopi examples are self-refuting, as Whorf had to translate Hopi terms into English in order to explain why they are untranslatable.[52]

Whorf died in 1941 at age 44, leaving multiple unpublished papers. His ideas were continued by linguists and anthropologists such as Hoijer and Lee, who both continued investigating the effect of language on habitual thought, and Trager, who prepared a number of Whorf's papers for posthumous publication. The most important event for the dissemination of Whorf's ideas to a larger public was the publication in 1956 of his major writings on the topic of linguistic relativity in a single volume titled Language, Thought and Reality.

In 1953, Eric Lenneberg criticized Whorf's examples from an objectivist philosophy of language, claiming that languages are principally meant to represent events in the real world and that, even though languages express these ideas in various ways, the meanings of such expressions and therefore the thoughts of the speaker are equivalent. He argued that Whorf's English descriptions of a Hopi speaker's idea of time were in fact translations of the Hopi concept into English, thereby disproving linguistic relativity. However, Whorf was concerned with how the habitual use of language influences habitual behavior, rather than with translatability. Whorf's point was that while English speakers may be able to understand how a Hopi speaker thinks, they do not think in that way.[53]

Lenneberg's main criticism of Whorf's works was that he never showed the necessary connection between a linguistic phenomenon and a mental phenomenon. With Brown, Lenneberg proposed that proving such a connection required directly matching linguistic phenomena with behavior. They assessed linguistic relativity experimentally and published their findings in 1954. Since neither Sapir nor Whorf had ever stated a formal hypothesis, Brown and Lenneberg formulated their own.
Their two tenets were (i) "the world is differently experienced and conceived in different linguistic communities" and (ii) "language causes a particular cognitive structure".[54] Brown later developed them into the so-called "weak" and "strong" formulations. Brown's formulations became widely known and were retrospectively attributed to Whorf and Sapir, although the second formulation, verging on linguistic determinism, was never advanced by either of them.

Joshua Fishman argued that Whorf's true position was largely overlooked. In 1978, he suggested that Whorf was a "neo-Herderian champion"[56] and in 1982, he proposed "Whorfianism of the third kind" in an attempt to reemphasize what he claimed was Whorf's real interest, namely the intrinsic value of "little peoples" and "little languages".[57] Whorf had criticized Ogden's Basic English thus: But to restrict thinking to the patterns merely of English [...] is to lose a power of thought which, once lost, can never be regained. It is the 'plainest' English which contains the greatest number of unconscious assumptions about nature. [...] We handle even our plain English with much greater effect if we direct it from the vantage point of a multilingual awareness.[58]

Where Brown's weak version of the linguistic relativity hypothesis proposes that language influences thought, and the strong version that language determines thought, Fishman's "Whorfianism of the third kind" proposes that language is a key to culture.

The Leiden school is a linguistic theory that models languages as parasites. Its notable proponent Frederik Kortlandt, in a 1985 paper outlining Leiden school theory, advocates a form of linguistic relativity: "The observation that in all Yuman languages the word for 'work' is a loan from Spanish should be a major blow to any current economic theory." In the next paragraph, he quotes directly from Sapir: "Even in the most primitive cultures the strategic word is likely to be more powerful than the direct blow."[59]

The publication of the 1996 anthology Rethinking Linguistic Relativity, edited by Gumperz and Levinson, began a new period of linguistic relativity studies that emphasized cognitive and social aspects. The book included studies from both the linguistic relativity and universalist traditions. Levinson documented significant linguistic relativity effects in the different linguistic conceptualizations of spatial categories in different languages. For example, men speaking the Guugu Yimithirr language in Queensland gave accurate navigation instructions using a compass-like system of north, south, east and west, together with a hand gesture pointing to the starting direction.[60]

Lucy defines this method as "domain-centered", because researchers select a semantic domain and compare it across linguistic and cultural groups.[50] Space is another semantic domain that has proven fruitful for linguistic relativity studies.[61] Spatial categories vary greatly across languages. Speakers rely on the linguistic conceptualization of space in performing many ordinary tasks. Levinson and others reported three basic spatial categorizations. While many languages use combinations of them, some languages exhibit only one type, with corresponding behaviors. For example, Guugu Yimithirr uses only absolute directions when describing spatial relations—the position of everything is described by using the cardinal directions.
Speakers define a location as "north of the house", while an English speaker may use relative positions, saying "in front of the house" or "to the left of the house".[62]

Separate studies by Bowerman and Slobin analyzed the role of language in cognitive processes. Bowerman showed that certain cognitive processes did not use language to any significant extent and therefore could not be subject to linguistic relativity.[63] Slobin described another kind of cognitive process, which he named "thinking for speaking": the kind of process in which perceptional data and other kinds of prelinguistic cognition are translated into linguistic terms for communication. These, Slobin argues, are the kinds of cognitive process that are at the basis of linguistic relativity.[64]

Since Brown and Lenneberg believed that the objective reality denoted by language was the same for speakers of all languages, they decided to test how different languages codified the same message differently and whether differences in codification could be proven to affect behavior. Brown and Lenneberg designed experiments involving the codification of colors. In their first experiment, they investigated whether it was easier for speakers of English to remember color shades for which they had a specific name than to remember colors that were not as easily definable by words. This allowed them to compare linguistic categorization directly to a non-linguistic task. In a later experiment, speakers of two languages that categorize colors differently (English and Zuni) were asked to recognize colors. In this manner, it could be determined whether the differing color categories of the two sets of speakers would determine their ability to recognize nuances within color categories. Brown and Lenneberg found that Zuni speakers, who classify green and blue together as a single color, did have trouble recognizing and remembering nuances within the green/blue category.[65] This method, which Lucy later classified as domain-centered,[50] is acknowledged to be sub-optimal, because color perception, unlike other semantic domains, is hardwired into the neural system and as such is subject to more universal restrictions than other semantic domains.

In a similar study during the 1870s, the German ophthalmologist Hugo Magnus circulated a questionnaire to missionaries and traders with ten standardized color samples and instructions for using them. These instructions contained an explicit warning that a language's failure to distinguish lexically between two colors did not necessarily imply that its speakers did not distinguish the two colors perceptually. Magnus received completed questionnaires on twenty-five African, fifteen Asian, three Australian, and two European languages. He concluded in part: "As regards the range of the color sense of the primitive peoples tested with our questionnaire, it appears in general to remain within the same bounds as the color sense of the civilized nations. At least, we could not establish a complete lack of the perception of the so-called main colors as a special racial characteristic of any one of the tribes investigated for us. We consider red, yellow, green, and blue as the main representatives of the colors of long and short wavelength; among the tribes we tested not a one lacks the knowledge of any of these four colors" (Magnus 1880, p. 6, as trans. in Berlin and Kay 1969, p. 141).
Magnus did find widespread lexical neutralization of green and blue, that is, a single word covering both of these colors, as have all subsequent comparative studies of color lexicons.[66]

Brown and Lenneberg's study began a tradition of investigating linguistic relativity through color terminology. The studies showed a correlation between the number of color terms and ease of recall in both Zuni and English speakers. Researchers attributed this to focal colors having greater codability than less focal colors, not to linguistic relativity effects. Berlin and Kay found universal typological principles of color naming that are determined by biological rather than linguistic factors.[67] This study sparked research into the typological universals of color terminology. Researchers such as Lucy,[50] Saunders[68] and Levinson[69] argued that Berlin and Kay's study does not refute linguistic relativity in color naming, because of unsupported assumptions in their study (such as whether all cultures in fact have a clearly defined category of "color") and because of related data problems. Researchers such as MacLaury continued the investigation into color naming. Like Berlin and Kay, MacLaury concluded that the domain is governed mostly by physical-biological universals.[70][71]

Studies by Berlin and Kay continued Lenneberg's color research. They studied color terminology formation and showed clear universal trends in color naming. For example, they found that even though languages have different color terminologies, they generally recognize certain hues as more focal than others. They showed that in languages with few color terms, it is predictable from the number of terms which hues are chosen as focal colors: for example, languages with only three color terms always have the focal colors black, white, and red.[67] The fact that what had been believed to be random differences between color naming in different languages could be shown to follow universal patterns was seen as a powerful argument against linguistic relativity.[72] Berlin and Kay's research has since been criticized by relativists such as Lucy, who argued that Berlin and Kay's conclusions were skewed by their insistence that color terms encode only color information.[51] This, Lucy argues, made them blind to the instances in which color terms provide other information that might be considered an example of linguistic relativity.

Universalist scholars began a period of dissent from ideas about linguistic relativity. Lenneberg was one of the first cognitive scientists to begin development of the universalist theory of language, which was formulated by Chomsky as universal grammar, effectively arguing that all languages share the same underlying structure. The Chomskyan school also holds that linguistic structures are largely innate and that what are perceived as differences between specific languages are surface phenomena that do not affect the brain's universal cognitive processes. This theory became the dominant paradigm of American linguistics from the 1960s through the 1980s, while linguistic relativity became an object of ridicule.[73]

Other universalist researchers dedicated themselves to dispelling other aspects of linguistic relativity, often attacking Whorf's specific examples. For example, Malotki's monumental study of time expressions in Hopi presented many examples that challenged Whorf's "timeless" interpretation of Hopi language and culture,[74] but seemingly failed to address the linguistic relativist argument actually posed by Whorf (i.e.
that the understanding of time by native Hopi speakers differed from that of speakers of European languages due to differences in the organization and construction of their respective languages; Whorf never claimed that Hopi speakers lacked any concept of time).[75] Malotki himself acknowledges that the conceptualizations are different, but, because he ignores Whorf's use of scare quotes around the word "time" and the qualifier "what we call", he takes Whorf to be arguing that the Hopi have no concept of time at all.[76][77][78]

Currently, many adherents of the universalist school of thought still oppose linguistic relativity. For example, Pinker argues in The Language Instinct that thought is independent of language, that language is itself meaningless in any fundamental way to human thought, and that human beings do not even think in "natural" language, i.e. any language that we actually communicate in; rather, we think in a meta-language that precedes any natural language, termed "mentalese". Pinker attacks what he terms "Whorf's radical position", declaring, "the more you examine Whorf's arguments, the less sense they make".[43] Pinker and other universalists have been accused by relativists of misrepresenting Whorf's ideas and of arguing against a straw man.[79][80][53]

During the late 1980s and early 1990s, advances in cognitive psychology and cognitive linguistics renewed interest in the Sapir–Whorf hypothesis.[81] One of those who adopted a more Whorfian approach was George Lakoff. He argued that language is often used metaphorically and that languages use different cultural metaphors that reveal something about how speakers of that language think. For example, English employs conceptual metaphors likening time to money, so that time can be saved and spent and invested, whereas other languages do not talk about time in that manner. Other such metaphors are common to many languages because they are based on general human experience, for example, metaphors associating up with good and bad with down. Lakoff also argued that metaphor plays an important part in political debates, such as "right to life" versus "right to choose", or "illegal aliens" versus "undocumented workers".[82]

An unpublished study by Boroditsky et al. in 2003 reported finding empirical evidence favoring the hypothesis and showing that differences in languages' systems of grammatical gender can affect the way speakers of those languages think about objects. Speakers of Spanish and German (which have different gender systems) were asked to use adjectives to describe various objects designated by words that were either masculine or feminine in their respective languages. Speakers tended to describe objects in ways that were consistent with the grammatical gender of the noun in their language, indicating that the gender system of a language can influence speakers' perceptions of objects.
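One line of follow-up work, discussed just below, quantified this kind of gender–adjective association over large corpora by measuring mutual information. Purely as an illustration of that measure, here is a minimal sketch on invented toy data; the (gender, adjective) pairs and all names in it are made up for the example and come from none of the cited studies.

# Minimal sketch: mutual information between grammatical gender and the
# adjectives used with nouns, over a toy list of (gender, adjective) pairs.
import math
from collections import Counter

pairs = [
    ("masc", "strong"), ("masc", "big"), ("fem", "elegant"),
    ("fem", "beautiful"), ("masc", "strong"), ("fem", "elegant"),
    ("masc", "big"), ("fem", "big"),
]

n = len(pairs)
joint = Counter(pairs)                 # counts for p(gender, adjective)
gender = Counter(g for g, _ in pairs)  # counts for p(gender)
adj = Counter(a for _, a in pairs)     # counts for p(adjective)

mi = 0.0
for (g, a), c in joint.items():
    p_ga = c / n
    mi += p_ga * math.log2(p_ga / ((gender[g] / n) * (adj[a] / n)))

print(f"I(gender; adjective) = {mi:.3f} bits")  # 0 would mean no association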
Despite numerous citations, the Boroditsky et al. experiment was later criticised after the reported effects could not be replicated in independent trials.[83][84] Additionally, a large-scale analysis using the word embeddings of language models found no correlation between the grammatical genders of inanimate nouns and their adjectives,[85] while another study using large text corpora found a slight correlation between the gender of both animate and inanimate nouns and their adjectives and verbs, by measuring their mutual information.[86]

Colin Murray Turbayne also argued that the pervasive use of ancient "dead metaphors" by researchers within different linguistic traditions has contributed to needless confusion in the development of modern empirical theories. He points to several examples within the Romance and Germanic languages of the subtle manner in which mankind has become unknowingly victimized by such "unmasked metaphors". Cases include the incorporation of mechanistic metaphors, first introduced by René Descartes and Isaac Newton during the 17th century, into scientific theories that were subsequently developed by George Berkeley, David Hume and Immanuel Kant during the 18th century;[88][89][90] and the influence exerted by Platonic metaphors in the dialogue Timaeus upon the development of contemporary theories of language.[91][92][87]

In his 1987 book Women, Fire, and Dangerous Things: What Categories Reveal About the Mind,[53] Lakoff reappraised linguistic relativity and especially Whorf's ideas about how linguistic categorization represents and/or influences mental categories. He concluded that the debate had been confused, and he identified four parameters on which researchers differed in their opinions about what constitutes linguistic relativity. Lakoff concluded that many of Whorf's critics had criticized him using novel definitions of linguistic relativity, rendering their criticisms moot.

Researchers such as Boroditsky, Choi, Majid, Lucy and Levinson believe that language influences thought, but in more limited ways than the broadest early claims. Researchers examine the interface between thought (or cognition), language and culture and describe the relevant influences. They use experimental data to back up their conclusions.[93][94] Kay ultimately concluded that "[the] Whorf hypothesis is supported in the right visual field but not the left".[95] His findings show that accounting for brain lateralization offers another perspective.

Recent studies have also used a "behavior-based" method, which starts by comparing behavior across linguistic groups and then searches for causes of that behavior in the linguistic system.[50] In an early example of this method, Whorf attributed the occurrence of fires at a chemical plant to the workers' use of the word "empty" to describe barrels containing only explosive vapors. More recently, Bloom noticed that speakers of Chinese had unexpected difficulties answering counterfactual questions posed to them in a questionnaire. He concluded that this was related to the way in which counterfactuality is marked grammatically in Chinese. Other researchers attributed this result to Bloom's flawed translations.[96] Strømnes examined why Finnish factories had a higher occurrence of work-related accidents than similar Swedish ones.
He concluded that cognitive differences between the grammatical usage of Swedish prepositions and Finnish cases could have caused Swedish factories to pay more attention to the work process, while Finnish factory organizers paid more attention to the individual worker.[97]

Everett's work on the Pirahã language of the Brazilian Amazon[98] found several peculiarities that he interpreted as corresponding to linguistically rare features, such as a lack of numbers and color terms in the way those are otherwise defined, and the absence of certain types of clauses. Everett's conclusions were met with skepticism from universalists,[99] who claimed that the linguistic deficit is explained by the lack of need for such concepts.[100]

Recent research with non-linguistic experiments in languages with different grammatical properties (e.g., languages with and without numeral classifiers, or with different grammatical gender systems) showed that language differences in human categorization are due to such differences.[101] Experimental research suggests that this linguistic influence on thought diminishes over time, as when speakers of one language are exposed to another.[102]

Research on time–space congruency suggests that temporal perception is shaped by spatial metaphors embedded in language. Casasanto and Boroditsky (2008) found that people often use spatial metaphors to conceptualize time, linking longer distances with longer durations.[103] Research has shown that such linguistic differences can influence the perception of time. Swedish, like English, tends to describe time in terms of spatial distance (e.g., "a long meeting"), whereas Spanish often uses quantity-based metaphors (e.g., "a big meeting"). These linguistic patterns correlate with differences in how speakers estimate temporal durations: Swedish speakers are more influenced by spatial length, while Spanish speakers are more sensitive to volume.[104]

Expanding on this, in many languages time is conceptualized along a horizontal axis (e.g., "looking forward to the future" in English). Mandarin speakers, however, also employ vertical metaphors for time, referring to earlier events as "up" and later events as "down".[105] Experiments have shown that Mandarin speakers are quicker to recognize temporal sequences when they are presented vertically, whereas English speakers exhibit no such bias.

Kashima and Kashima observed a correlation between the perceived individualism or collectivism in the social norms of a given country and the tendency to omit pronouns in the country's language. They argued that explicit reference to "you" and "I" reinforces a distinction between the self and the other in the speaker.[106]

Research also suggests that this structural difference influences how speakers attribute intentionality in events. Fausey and Boroditsky (2010) conducted experiments comparing how English and Spanish speakers describe accidental versus intentional actions. Their results showed that English speakers, who are accustomed to using explicit pronouns, were more likely to specify the agent responsible for an accidental event (e.g., "John broke the vase").
In contrast, Spanish speakers, who frequently omit pronouns, were more likely to use agent-neutral descriptions for accidental events (e.g., "The vase broke").[107]

A 2013 study found that speakers of "futureless" languages, which have no grammatical marking of the future tense, save more, retire with more wealth, smoke less, practice safer sex, and are less obese than speakers of languages that do mark the future.[108] This effect has come to be termed the linguistic-savings hypothesis and has been replicated in several cross-cultural and cross-country studies. However, a study of Chinese, which can be spoken both with and without the grammatical future marker "will", found that subjects do not behave more impatiently when "will" is used repetitively. This laboratory-based finding of elective variation within a single language does not refute the linguistic-savings hypothesis, but some have suggested that it shows the effect may be due to culture or other non-linguistic factors.[109]

Psycholinguistic studies have explored motion perception, emotion perception, object representation and memory.[110][111][112][113] The gold standard of psycholinguistic studies on linguistic relativity is now finding non-linguistic cognitive differences[example needed] in speakers of different languages (thus rendering inapplicable Pinker's criticism that linguistic relativity is "circular").

Recent work with bilingual speakers attempts to distinguish the effects of language from those of culture on bilingual cognition, including perceptions of time, space, motion, colors and emotion.[114] Researchers have described differences between bilinguals and monolinguals in perception of color,[115] representations of time[116][117][118] and other elements of cognition.[119]

Linguistic relativity inspired others to consider whether thought and emotion could be influenced by manipulating language. The issue bears on philosophical, psychological, linguistic and anthropological questions.[clarification needed]

A major question is whether human psychological faculties are mostly innate or whether they are mostly a result of learning, and hence subject to cultural and social processes such as language. The innatist position is that humans share the same set of basic faculties, that variability due to cultural differences is less important, and that the human mind is a mostly biological construction, so all humans who share the same neurological configuration can be expected to have similar cognitive patterns.

Multiple alternatives have advocates. The contrary constructivist position holds that human faculties and concepts are largely influenced by socially constructed and learned categories, without many biological restrictions. Another variant is the idealist position, which holds that human mental capacities are generally unrestricted by biological-material structures. Another is the essentialist position, which holds that essential differences[clarification needed] may influence the ways individuals or groups experience and conceptualize the world. Yet another is the relativist position (cultural relativism), which sees different cultural groups as employing different conceptual schemes that are not necessarily compatible or commensurable, nor more or less in accord with external reality.[120]

Another debate considers whether thought is a type of internal speech or is independent of and prior to language.[121]

In the philosophy of language, the question addresses the relations between language, knowledge and the external world, and the concept of truth.
Philosophers such as Putnam, Fodor, Davidson, and Dennett see language as directly representing entities from the objective world, and categorization as reflecting that world. Other philosophers (e.g. Quine, Searle, and Foucault) argue that categorization and conceptualization are subjective and arbitrary. Another view, represented by Jason Storm, seeks a third way by emphasizing how language changes and imperfectly represents reality without being completely divorced from ontology.[122]

Another question is whether language is a tool for representing and referring to objects in the world, or whether it is a system used to construct mental representations that can be communicated.[clarification needed]

Sapir/Whorf contemporary Alfred Korzybski was independently developing his theory of general semantics, which was intended to use language's influence on thinking to maximize human cognitive abilities. Korzybski's thinking was influenced by logical philosophy such as Russell and Whitehead's Principia Mathematica and Wittgenstein's Tractatus Logico-Philosophicus.[123] Although Korzybski was not aware of Sapir and Whorf's writings, the philosophy was adopted by Whorf-admirer Stuart Chase, who fused Whorf's interest in cultural-linguistic variation with Korzybski's programme in his popular work "The Tyranny of Words". S. I. Hayakawa was a follower and popularizer of Korzybski's work, writing Language in Thought and Action. The general semantics philosophy influenced the development of neuro-linguistic programming (NLP), another therapeutic technique that seeks to use awareness of language use to influence cognitive patterns.[124]

Korzybski independently described a "strong" version of the hypothesis of linguistic relativity:[125]

We do not realize what tremendous power the structure of an habitual language has. It is not an exaggeration to say that it enslaves us through the mechanism of s[emantic] r[eactions] and that the structure which a language exhibits, and impresses upon us unconsciously, is automatically projected upon the world around us.

In their fiction, authors such as Ayn Rand and George Orwell explored how linguistic relativity might be exploited for political purposes. In Rand's Anthem, a fictional communist society removed the possibility of individualism by removing the word "I" from the language.[127] In Orwell's 1984 the authoritarian state created the language Newspeak to make it impossible for people to think critically about the government, or even to contemplate that they might be impoverished or oppressed, by reducing the vocabulary available to speakers and thereby the thoughts they can readily express.[128]

Others have been fascinated by the possibilities of creating new languages that could enable new, and perhaps better, ways of thinking.
Examples of such languages designed to explore the human mind include Loglan, explicitly designed by James Cooke Brown to test the linguistic relativity hypothesis by experimenting with whether it would make its speakers think more logically. Suzette Haden Elgin, who was involved with the early development of neuro-linguistic programming, invented the language Láadan to explore linguistic relativity by making it easier to express what Elgin considered the female worldview, as opposed to Standard Average European languages, which she considered to convey a "male-centered" worldview.[129] John Quijada's language Ithkuil was designed to explore the limits of the number of cognitive categories a language can keep its speakers aware of at once.[130] Similarly, Sonja Lang's Toki Pona was developed according to a Taoist philosophy to explore how (or whether) such a language would direct human thought.[131]

APL programming language originator Kenneth E. Iverson believed that the Sapir–Whorf hypothesis applied to computer languages (without actually mentioning it by name). His Turing Award lecture, "Notation as a Tool of Thought", was devoted to this theme, arguing that more powerful notations aided thinking about computer algorithms.[132][133]

The essays of Paul Graham explore similar themes, such as a conceptual hierarchy of computer languages, with more expressive and succinct languages at the top. Thus, the so-called blub paradox (after a hypothetical programming language of average complexity called Blub) says that anyone preferentially using some particular programming language will know that it is more powerful than some, but not that it is less powerful than others. The reason is that writing in some language means thinking in that language. Hence the paradox, because typically programmers are "satisfied with whatever language they happen to use, because it dictates the way they think about programs".[134] (A short sketch at the end of this article illustrates the point in miniature.)

In a 2003 presentation at an open source convention, Yukihiro Matsumoto, creator of the programming language Ruby, said that one of his inspirations for developing the language was the science fiction novel Babel-17, based on the Whorf Hypothesis.[135]

Numerous examples of linguistic relativity have appeared in science fiction.

Sociolinguistics affects some variables within language, including the manner in which words are pronounced, word selection in certain dialogue, context, and tone. It has been suggested that these effects[138] may have implications for linguistic relativity.
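Iverson's and Graham's claims concern expressiveness rather than computability: any general-purpose language can compute the same results, but the notation changes which formulations are convenient to think in. The following Python sketch is our own minimal illustration, not an example from Iverson's lecture; the function names and the task (summing the squares of even numbers) are chosen purely for demonstration.

```python
def sum_even_squares_scalar(xs):
    """Word-at-a-time style: the reader simulates the machine step by step."""
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x * x
    return total

def sum_even_squares_expr(xs):
    """Whole-collection style, closer to the array notation Iverson advocated:
    the computation is a single expression the reader can reason about."""
    return sum(x * x for x in xs if x % 2 == 0)

# Both formulations agree; the difference lies in the thinking they invite.
assert sum_even_squares_scalar(range(10)) == sum_even_squares_expr(range(10)) == 120
```

In Graham's terms, a programmer fluent only in the first style can verify that the second is equivalent yet may never reach for it unprompted, which is the blub paradox in miniature.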
https://en.wikipedia.org/wiki/Linguistic_relativity
A bilingual pun is a pun created by a word or phrase in one language sounding similar to a different word or phrase in another language. The result of a bilingual pun can be a joke that makes sense in more than one language (a joke that can be translated) or a joke which requires understanding of both languages (a joke specifically for those who are bilingual). A bilingual pun can be made with a word from another language that has the same meaning, or an opposite meaning.

Typically, a bilingual pun pairs a word from one language with a word of the same or similar meaning in another language. The two words are often homophonic, whether by design or by accident.[1] Another feature of the bilingual pun is that a listener does not always need to be able to speak both languages in order to understand it. A bilingual pun can also demonstrate common ground with a person who speaks another language.[2]

There are what appear to be Biblical bilingual puns. In Exodus 10:10, Moses is warned by the Egyptian Pharaoh that evil awaits him. In Hebrew the word "ra" (רע) means evil, but in Egyptian "Ra" is the sun god. So the warning that "ra" stands in the way can be read as meaning either that evil does, or that the sun god does.[3]

Unintentional bilingual puns occur in translations of one of Shakespeare's plays, Henry V. The line spoken by Katherine, "I cannot speak your England", becomes political in French.[4][specify]

The famous paper "Fun with F1" is a French–English pun, as 1 is "un" in French.

Wario's name is a portmanteau of the name Mario and the Japanese word warui (悪い), meaning "bad", reflecting how he is a bad version of Mario. But in English, "Wario" can be seen as either a portmanteau of the name Mario and "war", or as a flip of the M in "Mario".
https://en.wikipedia.org/wiki/Bilingual_pun
A hybrid word or hybridism is a word that etymologically derives from at least two languages. Such words are a type of macaronic language.

The most common form of hybrid word in English combines Latin and Greek parts. Since many prefixes and suffixes in English are of Latin or Greek etymology, it is straightforward to add a prefix or suffix from one language to an English word that comes from a different language, thus creating a hybrid word.[citation needed] Hybridisms were formerly often considered to be barbarisms.[1][2]

Modern Hebrew abounds with non-Semitic derivational affixes, which are applied to words of both Semitic and non-Semitic descent. Hybrid words in Modern Hebrew may consist of a Hebrew-descent word with a non-Semitic suffix, a Hebrew word with an international prefix, a non-Hebrew word with a non-Hebrew suffix of different origin, or a non-Hebrew word with a Hebrew suffix.[15]

Modern Hebrew also has a productive derogatory prefixal shm-, which results in an 'echoic expressive'. For example, um shmum (או״ם־שמו״ם‎), literally 'United Nations shm-United Nations', was a pejorative description by Israel's first Prime Minister, David Ben-Gurion, of the United Nations, called in Modern Hebrew umot meukhadot (אומות מאוחדות‎) and abbreviated um (או״ם‎). Thus, when a Hebrew speaker would like to express their impatience with or disdain for philosophy, they can say filosófya-shmilosófya (פילוסופיה־שמילוסופיה‎). Modern Hebrew shm- is traceable back to Yiddish, and is found in English as well as shm-reduplication. This is comparable to the Turkic initial m-segment conveying a sense of 'and so on', as in Turkish dergi mergi okumuyor, literally 'magazine "shmagazine" read:NEGATIVE:PRESENT:3rd.person.singular', i.e. '(He) doesn't read magazines, journals or anything like that'.[15]

In Filipino, hybrid words are called siyokoy (literally "merman"). For example, the word concernado ("concerned") has "concern-" come from English and "-ado" come from Spanish.

In Japanese, hybrid words are common in kango (words formed from kanji characters), in which some of the characters may be pronounced using Chinese pronunciations (on'yomi, from Chinese morphemes), and others in the same word are pronounced using Japanese pronunciations (kun'yomi, from Japanese morphemes). These words are known as jūbako (重箱) or yutō (湯桶), which are themselves examples of this kind of compound (they are autological words): the first character of jūbako is read using on'yomi, the second kun'yomi, while it is the other way around with yutō. Other examples include 場所 basho "place" (kun-on), 金色 kin'iro "golden" (on-kun) and 合気道 aikidō "the martial art Aikido" (kun-on-on). Some hybrid words are neither jūbako nor yutō (縦中横 tatechūyoko (kun-on-kun)). Foreign words may also be hybridized with Chinese or Japanese readings in slang words such as 高層ビル kōsōbiru "high-rise building" (on-on-katakana) and 飯テロ meshitero "food terrorism" (kun-katakana).
https://en.wikipedia.org/wiki/Hybrid_word
An inkhorn term is a loanword, or a word coined from existing roots, which is deemed to be unnecessary or over-pretentious.[1]

An inkhorn is an inkwell made of horn. It was an important item for many scholars, and soon became symbolic of writers in general. Later, it became a byword for fussy or pedantic writers.[1] The phrase "inkhorn term" is found as early as 1553.[2]

And ere that we will suffer such a prince,
So kind a father of the commonweal,
To be disgracèd by an inkhorn mate

Controversy over inkhorn terms was rife from the mid-16th to the mid-17th century, when English competed with Latin as the main language of science and learning in England, having just displaced French.[3][1] Many words, often self-consciously borrowed from classical literature, were deemed useless by critics, who argued that understanding these redundant borrowings depends on knowledge of classical languages. Some borrowings filled a technical or scientific semantic gap, but others coexisted with Germanic words, often overtaking them. Writers such as Thomas Elyot and George Pettie were enthusiastic borrowers, whereas Thomas Wilson and John Cheke opposed borrowing.[4] Cheke wrote:

I am of this opinion that our own tung should be written cleane and pure, unmixt and unmangeled with borowing of other tunges; wherein if we take not heed by tiim, ever borowing and never paying, she shall be fain to keep her house as bankrupt.

Many of these so-called inkhorn terms, such as dismiss, celebrate, encyclopedia, commit, capacity and ingenious, stayed in the language. Many other neologisms faded soon after they were first used; for example, expede is now obsolete, although the synonym expedite and the similar word impede survive.

Faced with the influx of loanwords, writers as well known as Charles Dickens tried either to resurrect English words, e.g. gleeman for musician (see glee), sicker for certainly, inwit for conscience, yblent for confused, or to coin brand-new words from English's Germanic roots (endsay for conclusion, yeartide for anniversary, foresayer for prophet). Few of these words coined in opposition to inkhorn terms remained in common usage, and the writers who disdained the use of Latinate words often could not avoid using other loanwords.

Although the inkhorn controversy was over by the end of the 17th century, many writers sought to return to what they saw as the purer roots of the language. William Barnes coined words such as starlore for astronomy and speechcraft for grammar, but they were not widely accepted. George Orwell famously analysed and criticised the socio-political effects of the use of such words:

Bad writers, and especially scientific, political, and sociological writers, are nearly always haunted by the notion that Latin or Greek words are grander than Saxon ones, and unnecessary words like expedite, ameliorate, predict, extraneous, deracinated, clandestine, subaqueous, and hundreds of others constantly gain ground from their Anglo-Saxon opposite numbers.
https://en.wikipedia.org/wiki/Inkhorn_term
Language contact occurs when speakers of two or more languages or varieties interact with and influence each other. The study of language contact is called contact linguistics. Language contact can occur at language borders,[1] between adstratum languages, or as the result of migration, with an intrusive language acting as either a superstratum or a substratum.

When speakers of different languages interact closely, it is typical for their languages to influence each other. Intensive language contact may result in language convergence or relexification. In some cases a new contact language may be created as a result of the influence, such as a pidgin, creole, or mixed language. In many other cases, contact between speakers occurs with smaller-scale lasting effects on the language; these may include the borrowing of loanwords, calques, or other types of linguistic material.

Multilingualism has been common throughout much of human history, and today most people in the world are multilingual.[2] Multilingual speakers may engage in code-switching, the use of multiple languages in a single conversation. Methods from sociolinguistics[3] (the study of language use in society), from corpus linguistics and from formal linguistics are used in the study of language contact.

The most common way that languages influence each other is the exchange of words. Much is made of the contemporary borrowing of English words into other languages, but this phenomenon is not new, and it is not very large by historical standards. The large-scale importation of words from Latin, French and other languages into English in the 16th and the 17th centuries was more significant. Some languages have borrowed so much that they have become scarcely recognisable. Armenian borrowed so many words from Iranian languages, for example, that it was at first considered a divergent branch of the Indo-Iranian languages and was not recognised as an independent branch of the Indo-European languages for many decades.[4][5]

The influence can go deeper, extending to the exchange of even basic characteristics of a language such as morphology and grammar. Newar, for example, spoken in Nepal, is a Sino-Tibetan language distantly related to Chinese, but it has had so many centuries of contact with neighbouring Indo-Iranian languages that it has even developed noun inflection, a trait that is typical of the Indo-European family but rare in Sino-Tibetan. Newar has also absorbed grammatical features like verb tenses. Similarly, Romanian was influenced not only in vocabulary but also in phonology by the Slavic languages spoken by neighbouring tribes in the centuries after the fall of the Roman Empire.[citation needed] English has a few phrases, adapted from French, in which the adjective follows the noun: court-martial, attorney-general, Lake Superior.[citation needed]

A language's influence widens as its speakers grow in power. Chinese, Greek, Latin, Portuguese, French, Spanish, Arabic, Persian, Sanskrit, Russian, German and English have each seen periods of widespread importance and have had varying degrees of influence on the native languages spoken in the areas over which they have held sway. Especially during and since the 1990s, the internet, along with previous influences such as radio and television, telephone communication and printed materials,[6] has expanded and changed the many ways in which languages can be influenced by each other and by technology.

Change as a result of contact is often one-sided.
Chinese, for instance, has had a profound effect on the development of Japanese, but Chinese remains relatively free of Japanese influence other than some modern terms that were reborrowed after being coined in Japan from Chinese forms and written with Chinese characters. In India, Hindi and other native languages have been influenced by English, and loanwords from English are part of everyday vocabulary.

In some cases, language contact may lead to mutual exchange, but that may be confined to a particular geographic region. For example, in Switzerland, the local French has been influenced by German and vice versa. In Scotland, Scots has been heavily influenced by English, and many Scots terms have been adopted into the regional English dialect.

The result of the contact of two languages can be the replacement of one by the other. This is most common when one language has a higher social position (prestige). This sometimes leads to language endangerment or extinction. When language shift occurs, the language that is replaced (known as the substratum) can leave a profound impression on the replacing language (known as the superstratum) when people retain features of the substratum as they learn the new language and pass these features on to their children, which leads to the development of a new variety. For example, the Latin that came to replace local languages in present-day France during Roman times was influenced by Gaulish and Germanic. The distinct pronunciation of the Hiberno-English dialect, spoken in Ireland, comes partially from the influence of the substratum of Irish. Outside the Indo-European family, Coptic, the last stage of ancient Egyptian, is a substratum of Egyptian Arabic.

Language contact can also lead to the development of new languages when people without a common language interact closely. From this contact a pidgin may develop, which may eventually become a full-fledged creole language through the process of creolization (though some linguists assert that a creole need not emerge from a pidgin). Prime examples of this are Aukan and Saramaccan, spoken in Suriname, which have vocabulary mainly from Portuguese, English and Dutch.

A much rarer but still observed process, according to some linguists, is the formation of mixed languages. Whereas creoles are formed by communities lacking a common language, mixed languages are formed by communities fluent in both languages. They tend to inherit much more of the complexity (grammatical, phonological, etc.) of their parent languages, whereas creoles begin as simple languages and then develop in complexity more independently. The phenomenon is sometimes explained as arising in bilingual communities that no longer identify with the culture of either language they speak and seek to develop their own language as an expression of their own cultural uniqueness.

Some forms of language contact affect only a particular segment of a speech community. Consequently, change may be manifested only in particular dialects, jargons, or registers. South African English, for example, has been significantly affected by Afrikaans in terms of lexis and pronunciation, but the other dialects of English have remained almost totally unaffected by Afrikaans other than a few loanwords.[citation needed]

In some cases, a language develops an acrolect that contains elements of a more prestigious language.
For example, in England during a large part of the Middle Ages, upper-class speech was dramatically influenced by Norman to the point that it often resembled a dialect.[citation needed]

The broader study of contact varieties within a society is called linguistic ecology.[7]

Language contact can take place between two or more sign languages, and the expected contact phenomena occur: lexical borrowing, foreign "accent", interference, code switching, pidgins, creoles, and mixed systems. Language contact is extremely common in most deaf communities, which are almost always located within a dominant oral language culture. However, between a sign language and an oral language, even if lexical borrowing and code switching also occur, the interface between the oral and signed modes produces unique phenomena: fingerspelling, fingerspelling/sign combination, initialisation, CODA talk, TDD conversation, mouthing and contact signing.
https://en.wikipedia.org/wiki/Language_contact
Phono-semantic matching (PSM) is the incorporation of a word into one language from another, often creating a neologism, where the word's non-native quality is hidden by replacing it with phonetically and semantically similar words or roots from the adopting language. Thus the approximate sound and meaning of the original expression in the source language are preserved, though the new expression (the PSM – the phono-semantic match) in the target language may sound native.

Phono-semantic matching is distinct from calquing, which includes (semantic) translation but does not include phonetic matching (i.e., retention of the approximate sound of the borrowed word through matching it with a similar-sounding pre-existent word or morpheme in the target language). Phono-semantic matching is also distinct from homophonic translation, which retains the sound of a word but not the meaning.

The term "phono-semantic matching" was introduced by linguist and revivalist Ghil'ad Zuckermann.[1] It challenged Einar Haugen's classic typology of lexical borrowing (loanwords).[2] While Haugen categorized borrowing into either substitution or importation, camouflaged borrowing in the form of PSM is a case of "simultaneous substitution and importation." Zuckermann proposed a new classification of multisourced neologisms, words deriving from two or more sources at the same time. Examples of such mechanisms are phonetic matching, semanticized phonetic matching and phono-semantic matching.

Zuckermann concludes that language planners, for example members of the Academy of the Hebrew Language, employ the very same techniques used in folk etymology by laymen, as well as by religious leaders.[3] He urges lexicographers and etymologists to recognize the widespread phenomena of camouflaged borrowing and multisourced neologization and not to force one source on multi-parental lexical items.

Zuckermann analyses the evolution of the word artichoke.[4] Beginning in Arabic الخرشوف ('al-khurshūf) "the artichoke", it was adapted into Andalusian Arabic alxarshofa, then Old Spanish alcarchofa, then Italian alcarcioffo, then Northern Italian arcicioffo > arciciocco > articiocco, then phonetically realised in English as artichoke. The word was eventually phono-semantically matched back into colloquial Levantine Arabic (for example in Syria, Lebanon and Israel) as أرضي شوكي (arḍī shawkī), consisting of أرضي (arḍī) "earthly" and شوكي (shawkī) "thorny".

Arabic has made use of phono-semantic matching to replace blatantly imported new terminology with a word derived from an existing triliteral root.

A number of PSMs exist in Dutch as well. One notable example is hangmat ("hammock"), which is a modification of Spanish hamaca, also the source of the English word. Natively, the word is transparently analysed as a "hang-mat", which aptly describes the object.

A few PSMs exist in English. The French word chartreuse ("Carthusian monastery") was translated to the English charterhouse. The French word choupique, itself an adaptation of the Choctaw name for the bowfin, has likewise been Anglicized as shoepike,[7] although it is unrelated to the pikes.
The French name for the Osage orange, bois d'arc (lit. "bow-wood"), is sometimes rendered as "bowdark".[8] In Canada, the cloudberry is called "bakeapple" after the French phrase baie qu'appelle 'the what-do-you-call-it berry'.[dubious–discuss]

The second part of the word muskrat was altered to match rat, replacing the original form musquash, which derives from an Algonquian (possibly Powhatan[9][better source needed]) word, muscascus (literally "it is red"), or from the Abenaki native word mòskwas.

The use of runagates in Psalm 68 of the Anglican Book of Common Prayer derives from phono-semantic matching between Latin renegatus and English runagate.[citation needed]

The Finnish compound word for "jealous", mustasukkainen, literally means "black-socked" (musta "black" and sukka "sock"). However, the word is a case of a misunderstood loan translation from Swedish svartsjuk "black-sick": the Finnish word sukka provided a close phonological match for the Swedish sjuk. Similar cases are työmyyrä "hardworking person", literally "work mole", from arbetsmyra "work ant", matching myra "ant" to myyrä "mole"; and liikavarvas "clavus", literally "extra toe", from liktå < liktorn "dead thorn", matching liika "extra" to lik "dead (archaic)" and varvas "toe" to tå < torn "thorn".[10][11]

Mailhammer (2008) "applies the concepts of multisourced neologisation and, more generally, camouflaged borrowing, as established by Zuckermann (2003a), to Modern German, pursuing a twofold aim, namely to underline the significance of multisourced neologisation for language contact theory and secondly to demonstrate that together with other forms of camouflaged borrowing it remains an important borrowing mechanism in contemporary German."[12]

Sapir & Zuckermann (2008) demonstrate how Icelandic camouflages many English words by means of phono-semantic matching. For example, the Icelandic-looking word eyðni, meaning "AIDS", is a PSM of the English acronym AIDS, using the pre-existent Icelandic verb eyða, meaning "to destroy", and the Icelandic nominal suffix -ni.[13] Similarly, the Icelandic word tækni, meaning "technology, technique", derives from tæki, meaning "tool", combined with the nominal suffix -ni, but is, in fact, a PSM of the Danish teknik (or of another derivative of Greek τεχνικός tekhnikós), meaning "technology, technique". Tækni was coined in 1912 by Dr Björn Bjarnarson from Viðfjörður in the East of Iceland. It had been in little use until the 1940s, but has since become common, as a lexeme and as an element in new formations, such as raftækni, lit. "electrical technics", i.e. "electronics", tæknilegur "technical" and tæknir "technician".[14] Other PSMs discussed in the article are beygla, bifra–bifrari, brokkál, dapur–dapurleiki–depurð, fjárfesta–fjárfesting, heila, guðspjall, ímynd, júgurð, korréttur, Létt og laggott, musl, pallborð–pallborðsumræður, páfagaukur, ratsjá, setur, staða, staðall–staðla–stöðlun, toga–togari, uppi and veira.[15]

In modern Japanese, loanwords are generally represented phonetically via katakana. However, in earlier times loanwords were often represented by kanji (Chinese characters), a process called ateji when used for phonetic matching, or jukujikun when used for semantic matching. Some of these continue to be used; the characters chosen may correspond to the sound, the meaning, or both. In most cases the characters used were chosen only for their matching sound or only for their matching meaning.
For example, in the word 寿司 (sushi), the two characters are respectively read as su and shi, but the character 寿 means "one's natural life span" and 司 means "to administer", neither of which has anything to do with the food – this is ateji. Conversely, in the word 煙草 (tabako) for "tobacco", the individual kanji respectively mean "smoke" and "herb", which corresponds to the meaning, while none of their possible readings have a phonetic relationship to the word tabako – this is jukujikun.

In some cases, however, the kanji were chosen for both their semantic and phonetic values, a form of phono-semantic matching. A stock example is 倶楽部 (kurabu) for "club", where the characters can be interpreted loosely in sequence as "together-fun-place" (the word has since been borrowed into Chinese during the early 20th century with the same meaning, including the individual characters, but with a pronunciation that differs considerably from the original English and the Japanese, jùlèbù). Another example is 合羽 (kappa) for the Portuguese capa, a kind of raincoat. The characters can mean "wings coming together", as the pointed capa resembles a bird with wings folded together. (The sketch further below summarizes this three-way distinction.)

PSM is frequently used in Mandarin borrowings.[16][17] An example is the Taiwanese Mandarin word 威而剛 wēi'érgāng, which literally means "powerful and hard" and refers to Viagra, the drug for treating erectile dysfunction in men, manufactured by Pfizer.[18]

Another example is the Mandarin form of World Wide Web, which is wàn wéi wǎng (simplified Chinese: 万维网; traditional Chinese: 萬維網), which satisfies "www" and literally means "myriad dimensional net".[19] The English word hacker has been borrowed into Mandarin as 黑客 (hēikè, "dark/wicked visitor").[20]

Modern Standard Chinese 声纳/聲納 shēngnà "sonar" uses the characters 声/聲 shēng "sound" and 纳/納 nà "receive, accept". The pronunciations shēng and nà are phonetically somewhat similar to the two syllables of the English word. Chinese has a large number of homo/heterotonal homophonous morphemes that would have been a better phonetic fit than shēng, but not nearly as good semantically – consider the syllable song (cf. 送 sòng 'deliver, carry, give (as a present)', 松 sōng 'pine; loose, slack', 耸/聳 sǒng 'tower; alarm, attract' etc.), sou (cf. 搜 sōu 'search', 叟 sŏu 'old man', 馊/餿 sōu 'sour, spoiled' and many others) or shou (cf. 收 shōu 'receive, accept', 受 shòu 'receive, accept', 手 shǒu 'hand', 首 shǒu 'head', 兽/獸 shòu 'beast', 瘦 shòu 'thin' and so forth).[21]

According to Zuckermann, PSM in Mandarin is common in several lexical domains.

From a monolingual Chinese view, Mandarin PSM is the 'lesser evil' compared with Latin script (in digraphic writing) or code-switching (in speech). Zuckermann's exploration of PSM in Standard Chinese and Meiji-period Japanese concludes that the Chinese writing system is multifunctional: pleremic ("full" of meaning, e.g., logographic), cenemic ("empty" of meaning, e.g., phonographic – like a syllabary), and phono-logographic (simultaneously cenemic and pleremic). Zuckermann argues that Leonard Bloomfield's assertion that "a language is the same no matter what system of writing may be used"[24] is inaccurate. "If Chinese had been written using roman letters, thousands of Chinese words would not have been coined, or would have been coined with completely different forms".[25] Evidence of this can be seen in the Dungan language, a Chinese language that is closely related to Mandarin but written phonetically in Cyrillic, where words are directly borrowed, often from Russian, without PSM.[26]

A related practice is the translation of Western names into Chinese characters.
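As a compact restatement of the Japanese examples above, the three categories differ in which properties of the loanword the chosen characters match. The Python sketch below is our own illustration (the boolean encoding is a hypothetical device, not a standard linguistic tool), using the three words discussed in the text.

```python
# Each example maps to (characters match the sound?, characters match the meaning?),
# following the descriptions in the text above.
EXAMPLES = {
    "寿司 sushi":    (True,  False),  # 'life span' + 'administer': sound only
    "煙草 tabako":   (False, True),   # 'smoke' + 'herb': meaning only
    "倶楽部 kurabu": (True,  True),   # 'together-fun-place': sound and meaning
}

def classify(sound_match: bool, meaning_match: bool) -> str:
    """Name the borrowing strategy implied by the two matches."""
    if sound_match and meaning_match:
        return "phono-semantic matching"
    if sound_match:
        return "ateji (phonetic matching)"
    if meaning_match:
        return "jukujikun (semantic matching)"
    return "neither (not a kanji rendering of this kind)"

for word, (sound, meaning) in EXAMPLES.items():
    print(f"{word}: {classify(sound, meaning)}")
```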
Often in phono-semantic matching, the source language determines both the root word and the noun-pattern. This makes it difficult to determine the source language's influence on the target language's morphology. For example, "the phono-semantic matcher of English dock with Israeli Hebrew מבדוק‎ mivdók could have used – after deliberately choosing the phonetically and semantically suitable root b-d-q בדק‎ meaning 'check' (Rabbinic) or 'repair' (Biblical) – the noun-patterns mi⌂⌂a⌂á, ma⌂⌂e⌂á, mi⌂⌂é⌂et, mi⌂⌂a⌂áim etc. (each ⌂ represents a slot where a radical is inserted). Instead, mi⌂⌂ó⌂, which was not highly productive, was chosen because its [o] makes the final syllable of מבדוק‎ mivdók sound like English dock."[27] (The slot-filling mechanism is illustrated in the sketch at the end of this article.)

The Hebrew name יְרוּשָׁלַיִם‎ (Yərūšālayim) for Jerusalem is rendered as Ἱεροσόλυμα (Hierosóluma) in, e.g., Matthew 2:1. The first part corresponds to the Ancient Greek prefix ἱερo- (hiero-), meaning "sacred, holy".

Old High German widarlōn ("repayment of a loan") was rendered as widerdonum ("reward") in Medieval Latin. The last part corresponds to the Latin donum ("gift").[28][29]: 157

Viagra, a brand name which was suggested by Interbrand Wood (the consultancy firm hired by Pfizer), is itself a multisourced neologism, based on Sanskrit व्याघ्र vyāghráh ("tiger") but enhanced by the words vigour (i.e. strength) and Niagara (i.e. free/forceful flow).[18]

Other than through Sinoxenic borrowings, Vietnamese employs phono-semantic matching less commonly than Chinese. Examples include ma trận ("matrix", from the words for "magic" and "battle array"), áp dụng ("apply", from the words for "press down" and "use"), and Huỳnh Phi Long (Huey P. Long, from "yellow flying dragon", evoking the Huey P. Long Bridge).

According to Zuckermann, PSM has various advantages from the point of view of a puristic language planner,[1] and speakers may have further motivations of their own for it.

An expressive loan is a loanword incorporated into the expressive system of the borrowing language, making it resemble native words or onomatopoeia. Expressive loanwords are hard to identify, and by definition, they follow the common phonetic sound-change patterns poorly.[30] Likewise, there is a continuum between "pure" loanwords and "expressive" loanwords. The difference from a folk etymology (or an eggcorn) is that a folk etymology is based on misunderstanding, whereas an expressive loan is changed on purpose, the speaker taking the loanword knowing full well that the descriptive quality is different from the original sound and meaning.

South-eastern Finnish, for example, has many expressive loans. The main source language, Russian, does not use the vowels 'y', 'ä' or 'ö' [y æ ø]. Thus, it is common to add these to redescriptivized loans to remove the degree of foreignness that the loanword would otherwise have. For example, tytinä "brawn" evokes "wobbliness",[clarification needed] and superficially it looks like a native construction, as if derived from the verb tutista "to wobble" with front vowels added in accordance with vowel harmony. However, it is expressivized from tyyteni (which is a confusing word, as -ni is a possessive suffix), which in turn is a loanword from Russian stúden'.[31] A somewhat more obvious example is tökötti "sticky, tarry goo", which could be mistaken as a derivation from the onomatopoetic word tök (cf. the verb tökkiä "to poke"). However, it is an expressive loan of Russian d'ogot' "tar".[32]
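The root-and-pattern slot-filling quoted above lends itself to a short illustration. The Python sketch below is our own toy rendering: it uses "C" for each radical slot (standing in for ⌂) and a romanization chosen so the output matches the transliterations above, and it deliberately ignores real Hebrew morphophonology such as the spirantization that turns b into v.

```python
def fill_pattern(pattern: str, radicals: str) -> str:
    """Insert the consonants of a (romanized) triliteral root into the
    slots of a noun-pattern template, one radical per 'C' slot."""
    it = iter(radicals)
    return "".join(next(it) if ch == "C" else ch for ch in pattern)

# Root b-d-q, romanized here as "vdk" to reflect its pronunciation in these
# forms (a simplification; the sketch does not model spirantization rules).
print(fill_pattern("miCCóC", "vdk"))   # mivdók  - the chosen pattern, echoing English "dock"
print(fill_pattern("miCCaCá", "vdk"))  # mivdaká - one of the rejected alternative patterns
```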
https://en.wikipedia.org/wiki/Phono-semantic_matching
Reborrowing is the process where a word travels from one language to another and then back to the originating language in a different form or with a different meaning. A reborrowed word is sometimes called a Rückwanderer (German, a 'returner').

The result is generally a doublet, where the reborrowed word exists alongside the original word, though in other cases the original word may have died out. Alternatively, a specific sense of a borrowed word can be reborrowed as a semantic loan; for example, English pioneer was borrowed from Middle French in the sense of "digger, foot soldier, pedestrian", then acquired the sense of "early colonist, innovator" in English, which was reborrowed into French.[1] In other cases the term may be calqued (loan translated) at some stage, such as English ready-to-wear → French prêt-à-porter (1951) → English prêt-à-porter (1957).[1]

In some cases the borrowing process can be more complicated and the words might move through different languages before coming back to the originating language. The single move from one language to the other is called "loan" (see loanword). Reborrowing is the result of more than one loan, when the final recipient language is the same as the originating one.

A similar process occurs when a word is coined in a language based on roots from another language, and then the compound is borrowed into this other language or a modern descendant. In the West this primarily occurs with classical compounds, formed on Latin or Ancient Greek roots, which may then be borrowed into a Romance language or Modern Greek. Latin is sufficiently widespread that Latinate terms coined in a non-Romance language (such as English or German) and then borrowed by a Romance language (such as French or Spanish) are not conspicuous, but modern coinages on Ancient Greek roots borrowed into Modern Greek are, and include terms such as τηλεγράφημα tilegráfima ('telegram').[7] These are very common.

This process is particularly conspicuous in Chinese and Japanese, where in the late 19th and early 20th century many terms were coined in Japanese on Chinese roots (historically terms had often passed via Korea), known as wasei kango (和製漢語, Japanese-made Chinese words), then borrowed into modern Chinese (and often Korean) with corresponding pronunciation; from the mid-20th century such borrowings are much rarer. Often these words could have been coined in Chinese, but happened to be coined first in Japanese; notable examples include 文化 bunka ('culture') and 革命 kakumei ('revolution').[7]
https://en.wikipedia.org/wiki/Reborrowing
In linguistics, semantic loan is a process (or an instance or result) of borrowing semantic meaning (rather than lexical items) from another language. It is very similar to the formation of calques, except that in this case the complete word in the borrowing language already exists; the change is that its meaning is extended to include another meaning that is already possessed by its counterpart in the lending language. Semantic loans are often grouped roughly together with calques and loanwords under the general heading of borrowing. Semantic loans often occur when two languages are in close contact, and they take various forms. The source and target word may be cognates, which may or may not share any contemporary meaning in common; they may be an existing loan translation or parallel construction (compound of corresponding words); or they may be unrelated words that share an existing meaning.

A typical example is the French word souris, which means "mouse" (the animal). After the English word mouse acquired the additional sense of "computer mouse", when French speakers began speaking of computer mice, they did so by extending the meaning of their own word souris by analogy with how English speakers had extended the meaning of mouse. (Had French speakers started using the word mouse, that would have been a borrowing; had they created a new lexeme out of multiple French morphemes, as with disque dur for "hard disk", that would have been a calque.)

Another example, in this case propelled by speakers of the source language, is the English word already. The Yiddish word for the literal senses of "already" is שוין shoyn, which is also used as a tag to express impatience. Yiddish speakers who also spoke English began using the English word already to express this additional sense in English, and this usage came to be adopted in the larger English-speaking community (as in Enough already or Would you hurry up already?). This sense of already is therefore a semantic borrowing of that sense of shoyn.

Some examples arise from reborrowing. For example, English pioneer was borrowed from Middle French in the sense of "digger, foot soldier, pedestrian", then acquired the sense of "early colonist, innovator" in English, which was reborrowed into French, adding to the senses of the word pionnier.[1]

Typical semantic loans also include the German realisieren. The English verb "to realise" has more than one meaning: it means both "to make something happen/come true" and "to become aware of something". The German verb realisieren originally only meant the former: to make something real. However, German later borrowed the other meaning of "to realise" from English, and today, according to Duden,[2] realisieren also means "to become aware of something" (this meaning is still considered by many to be an Anglicism). The word realisieren itself already existed before the borrowing took place; the only thing borrowed was this second meaning. (Compare this with a calque, such as antibody, from the German Antikörper, where the word "antibody" did not exist in English before it was borrowed.)

A similar example is the German verb überziehen, which meant only to draw something across, before it took on the additional borrowed meaning of its literal English translation overdraw in the financial sense.[2] Note that the first halves of the terms are cognate (über/over), but the second halves are not (ziehen/draw).

Semantic loans may be adopted by many different languages, as [computer] mouse has been.
As another example, Hebrew כוכב kokháv, Russian звезда zvezdá, Polish gwiazda, Finnish tähti, Chinese 明星 míngxīng, and Vietnamese sao each originally meant "star" in the astronomical sense but have acquired from English the sememe "star", as in a famous entertainer.[3] In this case the words are unrelated (save for the Russian and Polish words), but share a base meaning, here extended metaphorically.
https://en.wikipedia.org/wiki/Semantic_loan
Abbreviations in music are of two kinds: abbreviations of terms related to musical expression, and the true musical abbreviations by the help of which certain passages, chords, etc., may be notated in a shortened form, to the greater convenience of both composer and performer. Abbreviations of the first kind are like most abbreviations in language; they consist for the most part of the initial letter or first syllable of the word employed—for instance, p or f for the dynamic markings piano and forte, cresc. for crescendo, Ob. for oboe, Fag. for bassoon (Italian: fagotto). The remainder of this article concerns abbreviations of the second kind, those used in music notation.

The continued repetition of a note or chord is expressed by a stroke or strokes across the stem, or above or below the note if it be a whole note or double whole note. The number of strokes denotes the subdivision of the written note into eighth notes, sixteenth notes, etc., unless the word tremolo or tremolando is added, in which case the repetition is as rapid as possible, without regard to the exact number of notes played. (When strokes are added to notes shorter than a quarter note, each beam counts as a stroke.) In the first bar of the example below, the half note with the single stroke across the stem in the written staff becomes 4 eighth notes in the played staff. Through the use of 2 strokes across the stem in the second bar, the whole note is expressed as a phrase of 16 sixteenth notes. (The arithmetic is made explicit in the short sketch following this passage.)

On bowed instruments the rapid reiteration of a single note is easy, but in piano music an octave or chord becomes necessary to produce a tremolo, the manner of writing and performing of which is seen below.

In the abbreviation expressed by strokes, as above, the passage to be abbreviated can contain no note of greater length than an eighth note, but it is possible also to divide a long note into quarter notes by means of dots (sometimes known as divisi dots) placed over it, as below. This is however seldom done, as only a small amount of space is saved.

When a long note has to be repeated in the form of triplets or sextuplets, the figure 3 or 6 is usually placed over it in addition to the stroke across the stem, and the note is sometimes, though not necessarily, written dotted.

The repetition of a group of two notes is abbreviated by two notes (most often half notes or whole notes) connected by the number of strokes ordinarily used to express eighth notes, sixteenth notes, etc., according to the rate of movement intended, as below. It will be observed that a passage lasting for the value of one half note requires two half notes to express it, on account of the group consisting of two notes. As seen above, half notes are often written with the strokes beaming the notes together (which is unambiguous, as white notes with beams are not otherwise used in music), but with quarter notes and shorter the strokes must be separated from the stems to prevent them being misread as a shorter note value.

A group of three, four, or more notes is abbreviated by the repetition of the cross strokes without the notes as many times as the group has to be repeated. Alternatively, the notes forming the group may be written as a chord with the necessary number of strokes across the stem. In this case the word simili or segue is added, to show that the order of notes in the first group (which must be written out in full) is to be repeated, and to prevent the possibility of mistaking the effect intended for the repetition of the chord as a whole.
Another sign of abbreviation of a group consists of an oblique line with two dots, one on each side; this serves to indicate the repetition of a group of any number of notes of any length. This can even apply to a passage composed of several groups, provided such passage is not more than two bars in length. A more usual method of abbreviating the repetition of a passage of that length is to write over it the word bis (twice), or in some cases ter (three times), or to enclose it between the dots of an ordinary repeat sign.

Passages intended to be played in octaves are often written as single notes with the words coll' ottava or coll' 8va placed above or below them, according to whether the upper or lower octave is to be added. The word 8va (or sometimes 8va alta or 8va bassa) written above or below a passage does not add octaves, but merely transposes the passage an octave higher or lower. In clarinet music the word chalumeau is used to signify that the passage is to be played an octave lower than written. All these alterations (which can scarcely be considered abbreviations except that they spare the use of ledger lines) are counteracted, and the passage restored to its usual position, by the ending of the enclosing bracket, the word loco, or in clarinet music clarinette.

In orchestral music it often happens that certain of the instruments play in unison; when this is the case the parts are sometimes not all written in the score, but the lines belonging to one or more of the instruments are left blank, and the words coi violini or col basso, etc., are added, to indicate that the instruments in question have to play in unison with the violins or basses, as the case may be. When two instruments of the same kind, such as first and second violins, have to play in unison, the word unisono or col primo is placed instead of the notes in the line belonging to the second. Where two parts are written on one staff in a score, the sign a 2 denotes that both play the same notes, and a 1 that the second of the two is resting. The indication a 3 or a 4 at the head of fugues indicates the number of parts or voices in which the fugue is written.

An abbreviation which is often very troublesome to the conductor occurs in manuscript scores, when a considerable part of the composition is repeated without alteration and the corresponding number of bars are left vacant with the remark come sopra (as above). This is not met with in printed scores.

There are also abbreviations relating to music analysis, some of which are of great value. In figured bass, for instance, the various chords are expressed by figures, and various authors in the nineteenth century invented or availed themselves of shorthand methods of expressing the different chords and intervals, particularly using Roman numeral analysis.

Gottfried Weber represents an interval by a number with one or two dots before it to express minor or diminished, and one or two after it for major or augmented.[clarification needed][citation needed] Johann Anton André makes use of a right triangle to express a triad, and a square for a seventh chord, the inversions being indicated by one, two, or three small vertical lines across their base, and the classification into major, minor, diminished, or augmented by the numbers 1, 2, 3, or 4 placed in the centre.[clarification needed][citation needed]
https://en.wikipedia.org/wiki/Abbreviation_(music)
In linguistics, a blend—also known as a blend word, lexical blend, or portmanteau[a]—is a word formed by combining the meanings, and parts of the sounds, of two or more words.[2][3][4] English examples include smog, coined by blending smoke and fog,[3][5] and motel, from motor (motorist) and hotel.[6]

A blend is similar to a contraction, but the two differ in how they arise. Mainstream blends tend to be formed at a particular historical moment followed by a rapid rise in popularity, whereas contractions are formed by the gradual drifting together of words over time due to the words commonly appearing together in sequence, such as do not naturally becoming don't (phonologically, /duːnɒt/ becoming /doʊnt/). A blend also differs from a compound, which fully preserves the stems of the original words.

The British lecturer Valerie Adams's 1973 Introduction to Modern English Word-Formation explains that "In words such as motel..., hotel is represented by various shorter substitutes – ‑otel... – which I shall call splinters. Words containing splinters I shall call blends".[7][n 1] Thus, at least one of the parts of a blend, strictly speaking, is not a complete morpheme, but instead a mere splinter or leftover word fragment. For instance, starfish is a compound, not a blend, of star and fish, as it includes both words in full. However, if it were called a "stish" or a "starsh", it would be a blend. Furthermore, when blends are formed by shortening established compounds or phrases, they can be considered clipped compounds, such as romcom for romantic comedy.[8]

Blends of two or more words may be classified from each of three viewpoints: morphotactic, morphonological, and morphosemantic.[9]

Blends may be classified morphotactically into two kinds: total and partial.[9]

In a total blend, each of the words creating the blend is reduced to a mere splinter.[9] Some linguists limit blends to these (perhaps with additional conditions): for example, Ingo Plag considers "proper blends" to be total blends that semantically are coordinate, the remainder being "shortened compounds".[10]

Commonly for English blends, the beginning of one word is followed by the end of another, as in smog (smoke + fog) or motel (motor + hotel); a short sketch at the end of this article illustrates the mechanics. Much less commonly in English, the beginning of one word may be followed by the beginning of another; some linguists do not regard such beginning+beginning concatenations as blends, instead calling them complex clippings,[11] clipping compounds[12] or clipped compounds.[13] Unusually in English, the end of one word may be followed by the end of another.

A splinter of one word may replace part of another, as in two words coined by Lewis Carroll in "Jabberwocky". Such forms are sometimes termed intercalative blends; these words are among the original "portmanteaus" for which this meaning of the word was created.[14]

In a partial blend, one entire word is concatenated with a splinter from another.[9] Some linguists do not recognize these as blends.[15] An entire word may be followed by a splinter, or a splinter may be followed by an entire word. An entire word may also replace part of another; such forms have been called sandwich words[16] and classed among intercalative blends.[14] (When two words are combined in their entirety, the result is considered a compound word rather than a blend. For example, bagpipe is a compound, not a blend, of bag and pipe.)

Morphonologically, blends fall into two kinds: overlapping and non-overlapping.[9]

Overlapping blends are those for which the ingredients' consonants, vowels or even syllables overlap to some extent.
The overlap can be of different kinds.[9] Overlapping blends are also called haplologic blends.[17]

There may be an overlap that is both phonological and orthographic with no other shortening, or the overlap may be both phonological and orthographic with some additional shortening of at least one of the ingredients; the latter are also termed imperfect blends.[18][19] Such an overlap may be discontinuous, and it can occur with three components. The phonological overlap need not also be orthographic; if the phonological but non-orthographic overlap encompasses the whole of the shorter ingredient, the effect depends on orthography alone (such forms are also called orthographic blends[20]). Conversely, an orthographic overlap need not also be phonological. For some linguists, an overlap is a condition for a blend.[21]

Non-overlapping blends (also called substitution blends) have no overlap, whether phonological or orthographic.

Morphosemantically, blends fall into two kinds: attributive and coordinate.[9]

Attributive blends (also called syntactic or telescope blends) are blends in which one of the ingredients is the head and the other is attributive. A porta-light is a portable light, not 'light-emitting' portability; in this instance, light is the head, while porta- is attributive. A snobject is a snobbery-satisfying object and not an objective or other kind of snob; object is the head.[9]

As is also true for (conventional, non-blend) attributive compounds (among which bathroom, for example, is a kind of room, not a kind of bath), the attributive blends of English are mostly head-final and mostly endocentric. As an example of an exocentric attributive blend, Fruitopia may metaphorically take the buyer to a fruity utopia (and not a utopian fruit); however, it is not a utopia but a drink.

Coordinate blends (also called associative or portmanteau blends) combine two words having equal status, and have two heads. Thus brunch is neither a breakfasty lunch nor a lunchtime breakfast but instead some hybrid of breakfast and lunch; Oxbridge is equally Oxford and Cambridge universities. This too parallels (conventional, non-blend) compounds: an actor–director is equally an actor and a director.[9] Two kinds of coordinate blends are particularly conspicuous: those that combine (near-)synonyms and those that combine (near-)opposites.

Blending can also apply to roots rather than words, for instance in Israeli Hebrew: "There are two possible etymological analyses for Israeli Hebrew כספר kaspár 'bank clerk, teller'. The first is that it consists of (Hebrew>) Israeli כסף késef 'money' and the (International/Hebrew>) Israeli agentive suffix ר- -ár. The second is that it is a quasi-portmanteau word which blends כסף késef 'money' and (Hebrew>) Israeli ספר √spr 'count'. Israeli Hebrew כספר kaspár started as a brand name but soon entered the common language. Even if the second analysis is the correct one, the final syllable ר- -ár apparently facilitated nativization, since it was regarded as the Hebrew suffix ר- -år (probably of Persian pedigree), which usually refers to craftsmen and professionals, for instance as in Mendele Mocher Sforim's coinage סמרטוטר smartutár 'rag-dealer'."[24]

Blending may occur with an error in lexical selection, the process by which a speaker uses semantic knowledge to choose words. Lewis Carroll's explanation, which gave rise to the use of 'portmanteau' for such combinations, was:

Humpty Dumpty's theory, of two meanings packed into one word like a portmanteau, seems to me the right explanation for all.
For instance, take the two words "fuming" and "furious." Make up your mind that you will say both words ... you will say "frumious."[25]

The errors are based on similarity of meanings, rather than phonological similarities, and the morphemes or phonemes stay in the same position within the syllable.[26]

Some languages, like Japanese, encourage the shortening and merging of borrowed foreign words (as in gairaigo), because they are long or difficult to pronounce in the target language. For example, karaoke, a combination of the Japanese word kara (meaning empty) and the clipped form oke of the English loanword "orchestra" (J. ōkesutora, オーケストラ), is a Japanese blend that has entered the English language. The Vietnamese language also encourages blend words formed from Sino-Vietnamese vocabulary. For example, the term Việt Cộng is derived from the first syllables of "Việt Nam" (Vietnam) and "Cộng sản" (communist).

Many corporate brand names, trademarks, and initiatives, and names of corporations and organizations themselves, are blends. For example, Wiktionary, one of Wikipedia's sister projects, is a blend of wiki and dictionary.

The word portmanteau was introduced in this sense by Lewis Carroll in the book Through the Looking-Glass (1871),[27] where Humpty Dumpty explains to Alice the coinage of unusual words used in "Jabberwocky".[28] Slithy means "slimy and lithe" and mimsy means "miserable and flimsy". Humpty Dumpty explains to Alice the practice of combining words in various ways, comparing it to the then-common type of luggage, which opens into two equal parts: You see it's like a portmanteau—there are two meanings packed up into one word.

In his introduction to his 1876 poem The Hunting of the Snark, Carroll again uses portmanteau when discussing lexical selection:[28] Humpty Dumpty's theory, of two meanings packed into one word like a portmanteau, seems to me the right explanation for all. For instance, take the two words "fuming" and "furious". Make up your mind that you will say both words, but leave it unsettled which you will say first … if you have the rarest of gifts, a perfectly balanced mind, you will say "frumious".

In then-contemporary English, a portmanteau was a suitcase that opened into two equal sections. According to the OED Online, a portmanteau is a "case or bag for carrying clothing and other belongings when travelling; (originally) one of a form suitable for carrying on horseback; (now esp.) one in the form of a stiff leather case hinged at the back to open into two equal parts".[29] According to The American Heritage Dictionary of the English Language (AHD), the etymology of the word is the French porte-manteau, from porter, "to carry", and manteau, "cloak" (from Old French mantel, from Latin mantellum).[30] According to the OED Online, the etymology of the word is the "officer who carries the mantle of a person in a high position (1507 in Middle French), case or bag for carrying clothing (1547), clothes rack (1640)".[29] In modern French, a porte-manteau is a clothes valet, a coat-tree or similar article of furniture for hanging up jackets, hats, umbrellas and the like.[31][32][33]

An occasional synonym for "portmanteau word" is frankenword, an autological word exemplifying the phenomenon it describes, blending "Frankenstein" and "word".[34]

Many neologisms are examples of blends, but many blends have become part of the lexicon.[28] In Punch in 1896, the word brunch (breakfast + lunch) was introduced as a "portmanteau word".[35] In 1964, the newly independent African republic of Tanganyika and Zanzibar chose the portmanteau word Tanzania as its name. Similarly Eurasia is a portmanteau of Europe and Asia.

Some city names are portmanteaus of the border regions they straddle: Texarkana spreads across the Texas–Arkansas–Louisiana border, while Calexico and Mexicali are respectively the American and Mexican sides of a single conurbation. A scientific example is a liger, which is a cross between a male lion and a female tiger (a tigon is a similar cross in which the male is a tiger). A more recent blend of "cat" and "rabbit" was coined in 2023 on X (formerly known as Twitter) to describe a circulating image of a mix between the two, producing the word "cabbit".

Many company or brand names are portmanteaus, including Microsoft, a portmanteau of microcomputer and software; the cheese Cambozola, which combines a rind similar to Camembert with the same mould used to make Gorgonzola; passenger rail company Amtrak, a portmanteau of America and track; Velcro, a portmanteau of the French velours (velvet) and crochet (hook); Verizon, a portmanteau of veritas (Latin for truth) and horizon; Viacom, a portmanteau of video and audio communications; and ComEd (a Chicago-area electric utility company), a portmanteau of Commonwealth and Edison.

Jeoportmanteau! is a recurring category on the American television quiz show Jeopardy! The category's name is itself a portmanteau of the words Jeopardy and portmanteau. Responses in the category are portmanteaus constructed by fitting two words together.

Portmanteau words may be produced by joining proper nouns with common nouns, such as "gerrymandering", which refers to the scheme of Massachusetts Governor Elbridge Gerry for politically contrived redistricting; the perimeter of one of the districts thereby created resembled a very curvy salamander in outline. The term gerrymander has itself contributed to portmanteau terms bjelkemander and playmander.

Oxbridge is a common portmanteau for the UK's two oldest universities, those of Oxford and Cambridge. In 2016, Britain's planned exit from the European Union became known as "Brexit".

The word refudiate was famously used by Sarah Palin when she misspoke, conflating the words refute and repudiate.
Though the word was a gaffe, it was recognized as the New Oxford American Dictionary's "Word of the Year" in 2010.[36]

The business lexicon includes words like "advertainment" (advertising as entertainment), "advertorial" (a blurred distinction between advertising and editorial), "infotainment" (information about entertainment or itself intended to entertain by its manner of presentation), and "infomercial" (informational commercial).

Company and product names may also use portmanteau words: examples include Timex (a portmanteau of Time [referring to Time magazine] and Kleenex),[37] Renault's Twingo (a combination of twist, swing and tango),[38] and Garmin (a portmanteau of company founders' first names Gary Burrell and Min Kao). "Desilu Productions" was a Los Angeles–based company jointly owned by actor couple Desi Arnaz and Lucille Ball. Miramax is the combination of the first names of the parents of the Weinstein brothers.

Two proper names can also be used in creating a portmanteau word in reference to the partnership between people, especially in cases where both persons are well-known, or sometimes to produce epithets such as "Billary" (referring to former United States president Bill Clinton and his wife, former United States Secretary of State Hillary Clinton). In this example of recent American political history, the purpose for blending is not so much to combine the meanings of the source words but "to suggest a resemblance of one named person to the other"; the effect is often derogatory, as linguist Benjamin Zimmer states.[39] For instance, Putler is used by critics of Vladimir Putin, merging his name with Adolf Hitler. By contrast, the public, including the media, use portmanteaus to refer to their favorite pairings as a way to "...giv[e] people an essence of who they are within the same name."[40] This is particularly seen in cases of fictional and real-life "supercouples". An early known example, Bennifer, referred to film stars Ben Affleck and Jennifer Lopez. Other examples include Brangelina (Brad Pitt and Angelina Jolie) and TomKat (Tom Cruise and Katie Holmes).[40] On Wednesday, 28 June 2017, The New York Times crossword included the quip, "How I wish Natalie Portman dated Jacques Cousteau, so I could call them 'Portmanteau'".[41]

Holidays are another example, as in Thanksgivukkah, a portmanteau neologism given to the convergence of the American holiday of Thanksgiving and the first day of the Jewish holiday of Hanukkah on Thursday, 28 November 2013.[42][43] Chrismukkah is another pop-culture portmanteau neologism popularized by the TV drama The O.C., merging the holidays of Christianity's Christmas and Judaism's Hanukkah.

The Disney film Big Hero 6 is set in a fictitious city called "San Fransokyo", a portmanteau of two real locations, San Francisco and Tokyo.[44]

Modern Hebrew abounds with blending. Along with CD, or simply דיסק (disk), Hebrew has the blend תקליטור (taklitór), which consists of תקליט (taklít 'phonograph record') and אור (or 'light'). Other blends in Hebrew include the following:[45]

Sometimes the root of the second word is truncated, giving rise to a blend that resembles an acrostic:

A few portmanteaus are in use in modern Irish, for example:

There is a tradition of linguistic purism in Icelandic, and neologisms are frequently created from pre-existing words. For example, tölva 'computer' is a portmanteau of tala 'digit, number' and völva 'oracle, seeress'.[53]

In Indonesian, portmanteaus and acronyms are very common in both formal and informal usage.
A common use of a portmanteau in the Indonesian language is to refer to locations and areas of the country. For example, Jabodetabek is a portmanteau that refers to the Jakarta metropolitan area or Greater Jakarta, which includes the regions of Jakarta, Bogor, Depok, Tangerang, and Bekasi. In the Malaysian national language of Bahasa Melayu, the word jadong was constructed out of three Malay words for evil (jahat), stupid (bodoh) and arrogant (sombong), to be used on the worst kinds of community and religious leaders who mislead naive, submissive and powerless folk under their thrall.[citation needed]

A very common type of portmanteau in Japanese forms one word from the beginnings of two others (that is, from two back-clippings).[54] The portion of each input word retained is usually two morae, which is tantamount to one kanji in most words written in kanji. The inputs to the process can be native words, Sino-Japanese words, gairaigo (later borrowings), or combinations thereof. A Sino-Japanese example is the name 東大 (Tōdai) for the University of Tokyo, in full 東京大学 (Tōkyō daigaku). With borrowings, typical results are words such as パソコン (pasokon), meaning personal computer (PC), which despite being formed of English elements does not exist in English; it is a uniquely Japanese contraction of the English personal computer (パーソナル・コンピュータ, pāsonaru konpyūta). Another example, Pokémon (ポケモン), is a contracted form of the English words pocket (ポケット, poketto) and monsters (モンスター, monsutā).[55] A famous example of a blend with mixed sources is karaoke (カラオケ, karaoke), blending the Japanese word for empty (空, kara) and the Greek word orchestra (オーケストラ, ōkesutora). The Japanese fad of egg-shaped keychain pet toys from the 1990s, Tamagotchi, is a portmanteau combining the two Japanese words tamago (たまご, 'egg') and uotchi (ウオッチ, 'watch'). The portmanteau can also be seen as a combination of tamago (たまご, 'egg') and tomodachi (友だち, 'friend'). Some titles also are portmanteaus, such as Hetalia (ヘタリア), which came from Hetare (ヘタレ, 'idiot') and Italia (イタリア, 'Italy'). Another example is Servamp, which came from the English words Servant (サーヴァント) and Vampire (ヴァンパイア).

In Brazilian Portuguese, portmanteaus are usually slang, including:

In European Portuguese, portmanteaus are also used. Some of them include:

Although traditionally uncommon in Spanish, portmanteaus are increasingly finding their way into the language, mainly for marketing and commercial purposes. Examples in Mexican Spanish include cafebrería, from combining cafetería 'coffee shop' and librería 'bookstore', and teletón 'telethon', from combining televisión and maratón. Portmanteaus are also frequently used to make commercial brands, such as "chocolleta" from "chocolate" + "galleta". They are also often used to create business company names, especially for small, family-owned businesses, where the owners' names are combined to create a unique name (such as Rocar, from "Roberto" + "Carlos", or Mafer, from "María" + "Fernanda"). These usages help to create distinguishable trademarks. It is a common occurrence for people with two names to combine them into a single nickname, like Juanca for Juan Carlos, or Marilú for María de Lourdes. Other examples:

A somewhat popular example in Spain is the word gallifante,[64] a portmanteau of gallo y elefante 'cockerel and elephant'. It was the prize on the Spanish version of the children's TV show Child's Play (Spanish: Juego de niños), which ran on the public television channel La 1 of Televisión Española (TVE) from 1988 to 1992.[65]

In linguistics, a blend is an amalgamation or fusion of independent lexemes, while a portmanteau or portmanteau morph is a single morph that is analyzed as representing two (or more) underlying morphemes.[66][67][68][69] For example, in the Latin word animalis, the ending -is is a portmanteau morph because it is an unanalysable combination of two morphemes: a morpheme for the singular number and one for the genitive case. In English, two separate morphs are used: of an animal. Other examples include French *à le ⇒ au [o] and *de le ⇒ du [dy].[66]
https://en.wikipedia.org/wiki/Blend_word
This is a selection of portmanteau words.
https://en.wikipedia.org/wiki/List_of_portmanteaus
In linguistics, clipping, also called truncation or shortening,[1] is word formation by removing some segments of an existing word to create a diminutive word or a clipped compound. Clipping differs from abbreviation, which is based on a shortening of the written, rather than the spoken, form of an existing word or phrase. Clipping is also different from back-formation, which proceeds by (pseudo-)morpheme rather than segment, and where the new word may differ in sense and word class from its source.[2] In English, clipping may extend to contraction, which mostly involves the elision of a vowel that is replaced by an apostrophe in writing.

According to Hans Marchand, clippings are not coined as words belonging to the core lexicon of a language.[3] They typically originate as synonyms[3] within the jargon or slang of an in-group, such as schools, the army, the police, and the medical profession. For example, exam(ination), math(ematics), and lab(oratory) originated in school slang; spec(ulation) and tick(et = credit) in stock-exchange slang; and vet(eran) and cap(tain) in army slang. Clipped forms can pass into common usage when they are widely useful, becoming part of standard language, which most speakers would agree has happened with math/maths, lab, exam, phone (from telephone), fridge (from refrigerator), and various others. When their usefulness is limited to narrower contexts, they remain outside the standard register. Many, such as mani and pedi for manicure and pedicure, or mic/mike for microphone, occupy a middle ground in which their appropriate register is a subjective judgment, but succeeding decades tend to see them become more widely used.

According to Irina Arnold, clipping mainly consists of the following types:[4]

Final and initial clipping may be combined into a sort of "bilateral clipping", resulting in curtailed words with the middle part of the prototype retained, which usually includes the syllable with primary stress. Examples: fridge (refrigerator), rizz (charisma), rona (coronavirus), shrink (head-shrinker), tec (detective); also flu (which omits the stressed syllable of influenza), jams (retaining the binary-noun -s of pajamas/pyjamas) or jammies (adding diminutive -ie).

Another common shortening in English will clip a word and then add some sort of suffix. That suffix can be either neutral or casual in nature, as in the -o of combo (combination) and convo (conversation), or else diminutive and/or hypocoristic, as in the -y or -ie of Sammy (Samantha) and selfie (self-portrait), and the -s of babes (baby, as a term of endearment) and Barbs (Barbara). Sometimes adding this suffix leaves the clipped word with the same number of syllables as the original longer form, e.g. choccy (chocolate) or Davy (David).

In a final clipping, the most common type in English, the beginning of the prototype is retained. The unclipped original may be either a simple or a composite word. Examples include ad and advert (advertisement), cable (cablegram), doc (doctor), exam (examination), fax (facsimile), gas (gasoline), gym (gymnastics, gymnasium), memo (memorandum), mutt (muttonhead), pub (public house), pop (popular music), and clit (clitoris).[5]: 109 An example of apocope in Israeli Hebrew is the word lehit, which derives from להתראות lehitraot, meaning "see you, goodbye".[5]: 155

Because final clippings are most common in English, this often leads to clipped forms from different sources which end up looking identical. For example, app can equally refer to an appetizer or an application depending on the context, while vet can be short for either veteran or veterinarian.
Initial (or fore) clipping retains the final part of the word. Examples: bot (robot), chute (parachute), roach (cockroach), gator (alligator), phone (telephone), pike (turnpike), varsity (university), net (Internet).

Words with the middle part of the word left out are few. They may be further subdivided into two groups: (a) words with a final-clipped stem retaining the functional morpheme: maths (mathematics), specs (spectacles); (b) contractions due to a gradual process of elision under the influence of rhythm and context. Thus, fancy (fantasy), ma'am (madam), and fo'c'sle (forecastle) may be regarded as accelerated forms.

Clipped forms are also used in compounds. One part of the original compound most often remains intact. Examples are: cablegram (cable telegram), op art (optical art), org-man (organization man), linocut (linoleum cut). Sometimes both halves of a compound are clipped, as in navicert (navigation certificate). In these cases it is difficult to know whether the resultant formation should be treated as a clipping or as a blend, for the border between the two types is not always clear. According to Bauer (1983),[6] the easiest way to draw the distinction is to say that those forms which retain compound stress are clipped compounds, whereas those that take simple word stress are not. By this criterion bodbiz, Chicom, Comsymp, Intelsat, midcult, pro-am, photo op, sci-fi, and sitcom are all compounds made of clippings.
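Stated mechanically, the clipping positions described above reduce to simple slicing. The Python sketch below is a toy illustration; the cut points are hand-chosen assumptions, since no rule predicts which segment speakers will keep (usually it is the stressed syllable).

def final_clip(word, keep):           # keep the beginning ("apocope")
    return word[:keep]

def initial_clip(word, drop):         # keep the end (fore-clipping)
    return word[drop:]

def medial_clip(word, start, end):    # keep the middle (bilateral clipping)
    return word[start:end]

print(final_clip("examination", 4))    # exam
print(initial_clip("telephone", 4))    # phone
print(medial_clip("influenza", 2, 5))  # flu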
https://en.wikipedia.org/wiki/Clipping_(morphology)
A gramogram, grammagram, or letteral word is a letter or group of letters which can be pronounced to form one or more words, as in "CU" for "see you".[1][2][3] They are a subset of rebuses, and are commonly used as abbreviations. They are sometimes used as a component of cryptic crossword clues.[1][4]

A poem reportedly appeared in the Woman's Home Companion of July 1903 using many gramograms: it was preceded by the line "ICQ out so that I can CU have fun translating the sound FX of this poem".[2]

The Marcel Duchamp "readymade" L.H.O.O.Q. is an example of a gramogram. Those letters, pronounced in French, sound like "Elle a chaud au cul", an idiom which translates to "she has a hot ass",[5] or in Duchamp's words "there is fire down below".

The William Steig books CDB! (1968) and CDC? (1984) use letters in the place of words.[6] Steig has been credited as being a founder of this literary technique.[7][8]

The suicide prevention charity R U OK?'s name is a gramogram, with supporters encouraged to text "R U OK?" to friends and family to see how that person's mental health is going.

A short gramogram dialogue opening with a customer asking "FUNEX" ("Have you any eggs?") appears in a 1949 book Hail fellow well met by Seymour Hicks[9] and was expanded into a longer sketch of phrasebook-style gramogram dialogue for the comedy sketch show The Two Ronnies, under the title Swedish made simple.[10][11]

The 1980s Canadian game show Bumper Stumpers required contestants to decode gramograms presented as fictional vanity licence plates.

Here Come the ABCs, a 2005 children's album by They Might Be Giants, contains the song "I C U", which is entirely made up of gramograms.
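A gramogram can be read as a substitution of letter names for words. The toy Python decoder below illustrates the idea; the letter-to-word table is a hand-picked assumption for these examples, and real gramograms also draw on digits ("GR8") and looser sound matches.

# Map each letter to the word its spoken name suggests (illustrative only).
LETTER_WORDS = {"B": "be", "C": "see", "K": "kay", "O": "oh",
                "R": "are", "U": "you", "Y": "why"}

def decode(gram):
    """Replace each letter with its letter-name word, ignoring spaces."""
    return " ".join(LETTER_WORDS.get(ch, ch) for ch in gram if ch != " ")

print(decode("CU"))      # see you
print(decode("R U OK"))  # are you oh kay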
https://en.wikipedia.org/wiki/Gramogram
This is a list of abbreviations used in medical prescriptions, including hospital orders (the patient-directed part of which is referred to as sig codes). This list does not include abbreviations for pharmaceuticals or drug name suffixes such as CD, CR, ER, XT (see Time release technology § List of abbreviations for those).

Capitalisation and the use of full stops are a matter of style. In the list, abbreviations in English are capitalized whereas those in Latin are not. These abbreviations can be verified in reference works, both recent[1] and older.[2][3][4] Some of those works (such as Wyeth 1901[4]) are so comprehensive that their entire content cannot be reproduced here. This list includes all that are frequently encountered in today's health care in English-speaking regions. Some of these are obsolete; others remain current.

There is a risk of serious consequences when abbreviations are misread or misinterpreted. In the United Kingdom, all prescriptions should be in English without abbreviation (apart from some units such as mg and mL; micrograms and nanograms should not be abbreviated).[5] In the United States, abbreviations which are deprecated by the Joint Commission are marked in red; those abbreviations which are deprecated by other organizations, such as the Institute for Safe Medication Practices (ISMP) and the American Medical Association (AMA), are marked in orange.

The Joint Commission is an independent, non-profit, non-governmental organization which offers accreditation to hospitals and other health care organizations in the United States. While their recommendations are not binding on U.S. physicians, they are required of organizations who wish accreditation by the Joint Commission.
https://en.wikipedia.org/wiki/List_of_abbreviations_used_in_medical_prescriptions
During most of the 20th century, photography depended mainly upon the photochemical technology of silver halide emulsions on glass plates or roll film.[1] Early in the 21st century this technology was displaced by the electronic technology of digital cameras. The development of digital image sensors, microprocessors, memory cards, miniaturised devices and image editing software enabled these cameras to offer their users a much wider range of operating options than was possible with the older silver halide technology.[2][3] This has led to a proliferation of new abbreviations, acronyms and initialisms. The commonest of these are listed below. Some are used in the related fields of optics and electronics, but many are specific to digital photography.
https://en.wikipedia.org/wiki/List_of_abbreviations_in_photography
An acronym is a type of abbreviation consisting of a phrase whose only pronounced elements are the initial letters or initial sounds of the words inside that phrase. Acronyms are often spelled with the initial letter of each word in all caps with no punctuation. For some, an initialism[1] or alphabetism connotes this general meaning, and an acronym is a subset with a narrower definition; an acronym is pronounced as a word rather than as a sequence of letters. In this sense, NASA (/ˈnæsə/) is an acronym, but USA (/ˌjuː.ɛsˈeɪ/) is not.[2][3]

The broader sense of acronym, ignoring pronunciation, is its original meaning[4] and is in common use.[5] Dictionary and style-guide editors dispute whether the term acronym can be legitimately applied to abbreviations which are not pronounced as words, and they do not agree on acronym spacing, casing, and punctuation.

The phrase that the acronym stands for is called its expansion. The meaning of an acronym includes both its expansion and the meaning of its expansion.

The word acronym is formed from the Greek roots akro-, meaning 'height, summit, or tip', and -nym, 'name'.[6][unreliable source] This neoclassical compound appears to have originated in German, with attestations for the German form Akronym appearing as early as 1921.[7][8] Citations in English date to a 1940 translation of a novel by the German writer Lion Feuchtwanger.[9]

It is an unsettled question in English lexicography and style guides whether it is legitimate to use the word acronym to describe forms that use initials but are not pronounced as a word. While there is plenty of evidence that acronym is used widely in this way, some sources do not acknowledge this usage, reserving the term acronym only for forms pronounced as a word, and using initialism or abbreviation for those that are not. Some sources acknowledge the usage, but vary in whether they criticize or forbid it, allow it without comment, or explicitly advocate it.

Some mainstream English dictionaries from across the English-speaking world affirm a sense of acronym which does not require being pronounced as a word. American English dictionaries such as Merriam-Webster,[10] Dictionary.com's Random House Webster's Unabridged Dictionary[11] and the American Heritage Dictionary[12] as well as the British Oxford English Dictionary[4] and the Australian Macquarie Dictionary[13] all include a sense in their entries for acronym equating it with initialism, although The American Heritage Dictionary criticizes it with the label "usage problem".[12] However, many English language dictionaries, such as the Collins COBUILD Advanced Dictionary,[14] Cambridge Advanced Learner's Dictionary,[15] Macmillan Dictionary,[16] Longman Dictionary of Contemporary English,[17] New Oxford American Dictionary,[18] Webster's New World Dictionary,[19] and Lexico from Oxford University Press[20] do not acknowledge such a sense.

Most of the dictionary entries and style-guide recommendations regarding the term acronym in the twentieth century did not explicitly acknowledge or support the expansive sense. The Merriam–Webster's Dictionary of English Usage from 1994 is one of the earliest publications to advocate for the expansive sense,[21] and all the major dictionary editions that include a sense of acronym equating it with initialism were first published in the twenty-first century.
The trend among dictionary editors appears to be towards including a sense defining acronym as initialism: the Merriam-Webster's Collegiate Dictionary added such a sense in its 11th edition in 2003,[22][23] and both the Oxford English Dictionary[24][4] and The American Heritage Dictionary[25][12] added such senses in their 2011 editions. The 1989 edition of the Oxford English Dictionary only included the exclusive sense for acronym, and its earliest citation was from 1943.[24] In early December 2010, Duke University researcher Stephen Goranson published a citation for acronym to the American Dialect Society e-mail discussion list which refers to PGN being pronounced "pee-gee-enn", antedating English-language usage of the word to 1940.[26] Linguist Ben Zimmer then mentioned this citation in his December 16, 2010 "On Language" column about acronyms in The New York Times Magazine.[27] By 2011, the publication of the 3rd edition of the Oxford English Dictionary added the expansive sense to its entry for acronym and included the 1940 citation.[4] As the Oxford English Dictionary structures the senses in order of chronological development,[28] it now gives the "initialism" sense first.

English-language usage and style guides which have entries for acronym generally criticize the usage that refers to forms that are not pronounceable words. Fowler's Dictionary of Modern English Usage says that acronym "denotes abbreviations formed from initial letters of other words and pronounced as a single word, such as NATO (as distinct from B-B-C)" but adds later "In everyday use, acronym is often applied to abbreviations that are technically initialisms, since they are pronounced as separate letters."[29] The Chicago Manual of Style acknowledges the complexity ("Furthermore, an acronym and initialism are occasionally combined (JPEG), and the line between initialism and acronym is not always clear") but still defines the terms as mutually exclusive.[30] Other guides outright deny any legitimacy to the usage: Bryson's Dictionary of Troublesome Words says "Abbreviations that are not pronounced as words (IBM, ABC, NFL) are not acronyms; they are just abbreviations."[31] Garner's Modern American Usage says "An acronym is made from the first letters or parts of a compound term. It's read or spoken as a single word, not letter by letter."[32] The New York Times Manual of Style and Usage says "Unless pronounced as a word, an abbreviation is not an acronym."[33]

In contrast, some style guides do support the expansive usage, whether explicitly or implicitly. The 1994 edition of Merriam-Webster's Dictionary of English Usage defends the usage on the basis of a claim that dictionaries do not make a distinction.[21] The BuzzFeed style guide describes CBS and PBS as "acronyms ending in S".[34]

Acronymy, like retronymy, is a linguistic process that has existed throughout history but for which there was little to no naming, conscious attention, or systematic analysis until relatively recent times. Like retronymy, it became much more common in the twentieth century than it had formerly been. Ancient examples of acronymy (before the term "acronym" was invented) include the following:

During the mid- to late nineteenth century, acronyms became a trend among American and European businessmen: abbreviating corporation names, such as on the sides of railroad cars (e.g., "Richmond, Fredericksburg and Potomac Railroad" → "RF&P"); on the sides of barrels and crates; and on ticker tape and newspaper stock listings (e.g. American Telephone and Telegraph Company → AT&T).
Some well-known commercial examples dating from the 1890s through 1920s include "Nabisco" ("National Biscuit Company"),[37] "Esso" (from "S.O.", from "Standard Oil"), and "Sunoco" ("Sun Oil Company").

Another field for the adoption of acronyms was modern warfare, with its many highly technical terms. While there is no recorded use of military acronyms dating from the American Civil War (acronyms such as "ANV" for "Army of Northern Virginia" post-date the war itself), they became somewhat common in World War I, and by World War II they were widespread even in the slang of soldiers,[38] who referred to themselves as G.I.s.

The widespread, frequent use of acronyms across the whole range of linguistic registers is relatively new in most languages, becoming increasingly evident since the mid-twentieth century. As literacy spread and technology produced a constant stream of new and complex terms, abbreviations became increasingly convenient. The Oxford English Dictionary (OED) records the first printed use of the word initialism as occurring in 1899, but it did not come into general use until 1965, well after acronym had become common.

In English, acronyms pronounced as words may be a twentieth-century phenomenon. Linguist David Wilton in Word Myths: Debunking Linguistic Urban Legends claims that "forming words from acronyms is a distinctly twentieth- (and now twenty-first-) century phenomenon. There is only one known pre-twentieth-century [English] word with an acronymic origin and it was in vogue for only a short time in 1886. The word is colinderies or colinda, an acronym for the Colonial and Indian Exposition held in London in that year."[39][40] However, although acronymic words seem not to have been employed in general vocabulary before the twentieth century (as Wilton points out), the concept of their formation is treated as effortlessly understood (and evidently not novel) in an Edgar Allan Poe story of the 1830s, "How to Write a Blackwood Article", which includes the contrived acronym "P.R.E.T.T.Y.B.L.U.E.B.A.T.C.H."

The use of Latin and Neo-Latin terms in vernaculars has been pan-European and pre-dates modern English. Some examples of acronyms in this class are:

The earliest example of a word derived from an acronym listed by the OED is "abjud" (now "abjad"), formed from the original first four letters of the Arabic alphabet in the late eighteenth century.[41] Some acrostics pre-date this, however, such as the Restoration witticism arranging the names of some members of Charles II's Committee for Foreign Affairs to produce the "CABAL" ministry.[42]

OK, a term of disputed origin, dates back at least to the early nineteenth century and is now used around the world.

Acronyms are used most often to abbreviate names of organizations and long or frequently referenced terms. The armed forces and government agencies frequently employ acronyms; some well-known examples from the United States are among the "alphabet agencies" (jokingly referred to as "alphabet soup") created under the New Deal by Franklin D. Roosevelt (himself known as "FDR"). Business and industry also coin acronyms prolifically. The rapid advance of science and technology also drives the usage, as new inventions and concepts with multiword names create a demand for shorter, more pronounceable names.[citation needed] One representative example, from the U.S. Navy, is "COMCRUDESPAC", which stands for "commander, cruisers destroyers Pacific"; it is also seen as "ComCruDesPac".
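The basic letter-initial pattern can be sketched mechanically. The Python below is illustrative only: the function-word list is an assumption, and, as discussed further on, real coinages keep or drop function words according to what is pronounceable (CORE keeps its "of"), not by rule.

FUNCTION_WORDS = {"a", "an", "and", "for", "in", "of", "or", "the", "to"}

def acronym(phrase, keep_function_words=False):
    """Join the initial letters of the words in a phrase."""
    words = phrase.replace("-", " ").split()
    if not keep_function_words:
        words = [w for w in words if w.lower() not in FUNCTION_WORDS]
    return "".join(w[0].upper() for w in words)

print(acronym("North Atlantic Treaty Organization"))             # NATO
print(acronym("self-contained underwater breathing apparatus"))  # SCUBA
print(acronym("Congress of Racial Equality",
              keep_function_words=True))                         # CORE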
Inventors are encouraged to anticipate the formation of acronyms by making new terms "YABA-compatible" ("yet another bloody acronym"), meaning the term's acronym can be pronounced and is not an offensive word: "When choosing a new name, be sure it is 'YABA-compatible'."[43]

Acronym use has been further popularized by text messaging on mobile phones with short message service (SMS) and instant messenger (IM). To fit messages into the 160-character SMS limit, and to save time, acronyms such as "GF" ("girlfriend"), "LOL" ("laughing out loud"), and "DL" ("download" or "down low") have become popular.[44] Some prescriptivists disdain texting acronyms and abbreviations as decreasing clarity, or as failure to use "pure" or "proper" English. Others point out that languages have always continually changed, and argue that acronyms should be embraced as inevitable, or as innovation that adapts the language to changing circumstances. In this view, the modern practice is just the "proper" English of the current generation of speakers, much like the earlier abbreviation of corporation names on ticker tape or newspapers.

Exact pronunciation of "word acronyms" (those pronounced as words rather than sounded out as individual letters) often varies by speaker population. The differences may be regional, occupational, or generational, or simply a matter of personal preference. For instance, there have been decades of online debate about how to pronounce GIF (/ɡɪf/ or /dʒɪf/) and BIOS (/ˈbaɪoʊs/, /ˈbaɪoʊz/, or /ˈbaɪɒs/). Similarly, some letter-by-letter initialisms may become word acronyms over time, especially in combining forms: IP for Internet Protocol is generally said as two letters, but IPsec for Internet Protocol Security is usually pronounced as /ˌaɪˈpiːsɛk/ or /ˈɪpsɛk/, along with variant capitalization like "IPSEC" and "Ipsec". Pronunciation may even vary within a single speaker's vocabulary, depending on narrow contexts. As an example, the database programming language SQL is usually said as three letters, but in reference to Microsoft's implementation it is traditionally pronounced like the word sequel.

In writing for a broad audience, the words of an acronym are typically written out in full at its first occurrence within a given text. Expansion At First Use (EAFU) benefits readers unfamiliar with the acronym.[45]

Another text aid is an abbreviation key which lists and expands all acronyms used, a reference for readers who skipped past the first use. (This is especially important for paper media, where no search utility is available to find the first use.) It also gives students a convenient review list to memorize the important acronyms introduced in a textbook chapter. Expansion at first use and abbreviation keys originated in the print era, but they are equally useful for electronic text.

While acronyms provide convenience and succinctness for specialists, they often degenerate into confusing jargon. This may be intentional, to exclude readers without domain-specific knowledge. New acronyms may also confuse when they coincide with an already existing acronym having a different meaning. Medical literature has been struggling to control the proliferation of acronyms, including efforts by the American Academy of Dermatology.[46]

Acronyms are often taught as mnemonic devices: for example, the colors of the rainbow are ROY G. BIV (red, orange, yellow, green, blue, indigo, violet). They are also used as mental checklists: in aviation, GUMPS stands for gas-undercarriage-mixture-propeller-seat belts.
Other mnemonic acronyms include CAN SLIM in finance, PAVPANIC in English grammar, and PEMDAS in mathematics.

It is not uncommon for acronyms to be cited in a kind of false etymology, called a folk etymology, for a word. Such etymologies persist in popular culture but have no factual basis in historical linguistics, and are examples of language-related urban legends. For example, "cop" is commonly cited as being derived, it is presumed, from "constable on patrol",[47] and "posh" from "port outward, starboard home".[48] With some of these specious expansions, the "belief" that the etymology is acronymic has clearly been tongue-in-cheek among many citers, as with "gentlemen only, ladies forbidden" for "golf", although many other (more credulous) people have uncritically taken it for fact.[48][49] Taboo words in particular commonly have such false etymologies: "shit" from "ship/store high in transit"[39][38] or "special high-intensity training", and "fuck" from "for unlawful carnal knowledge" or "fornication under consent/command of the king".[38]

In English, abbreviations have previously been marked by a wide variety of punctuation. Obsolete forms include using an overbar or colon to show the ellipsis of letters following the initial part. The forward slash is still common in many dialects for some fixed expressions—such as in w/ for "with" or A/C for "air conditioning"—while only infrequently being used to abbreviate new terms. The apostrophe is common for grammatical contractions (e.g. don't, y'all, and ain't) and for contractions marking unusual pronunciations (e.g. a'ight, cap'n, and fo'c'sle for "all right", "captain", and "forecastle").

By the early twentieth century, it was standard to use a full stop/period/point, especially in the cases of initialisms and acronyms. Previously, especially for Latin abbreviations, this was done with a full space between every full word (e.g. A. D., i. e., and e. g. for "Anno Domini", "id est", and "exempli gratia"). This even included punctuation after both Roman and Arabic numerals to indicate their use in place of the full names of each number (e.g. LII. or 52. in place of "fifty-two" and "1/4." or "1./4." to indicate "one-fourth"). Both conventions have fallen out of common use in all dialects of English, except in places where an Arabic decimal includes a medial decimal point.

Particularly in British and Commonwealth English, all such punctuation marking acronyms and other capitalized abbreviations is now uncommon and considered either unnecessary or incorrect. The presence of all-capital letters is now thought sufficient to indicate the nature of the UK, the EU, and the UN. Forms such as the U.S.A. for "the United States of America" are now considered to indicate American or North American English. Even within those dialects, such punctuation is becoming increasingly uncommon.[50]

Some style guides, such as that of the BBC, no longer require punctuation to show ellipsis; some even proscribe it. Larry Trask, American author of The Penguin Guide to Punctuation, states categorically that, in British English, "this tiresome and unnecessary practice is now obsolete."[51]

Nevertheless, some influential style guides, many of them American, still require periods in certain instances. For example, The New York Times Manual of Style and Usage recommends following each segment with a period when the letters are pronounced individually, as in "K.G.B.", but not when pronounced as a word, as in "NATO".[52] The logic of this style is that the pronunciation is reflected graphically by the punctuation scheme.
When a multiple-letter abbreviation is formed from a single word, periods are in general not used, although they may be common in informal usage. "TV", for example, may stand for a single word ("television" or "transvestite", for instance), and is in general spelled without punctuation (except in the plural). Although "PS" stands for the single English word "postscript" (or the Latin postscriptum), it is often spelled with periods ("P.S.") as if parsed as the Latin post scriptum instead.

The slash ('/', or solidus) is sometimes used to separate the letters in an acronym, as in "N/A" ("not applicable, not available") and "c/o" ("care of").

Inconveniently long words used frequently in related contexts can be represented according to their letter count as a numeronym. For example, "i18n" abbreviates "internationalization", a computer-science term for adapting software for worldwide use; the "18" represents the 18 letters that come between the first and the last in "internationalization". Similarly, "localization" can be abbreviated "l10n"; "multilingualization" "m17n"; and "accessibility" "a11y". (A short sketch of this counting rule follows at the end of this passage.) In addition to the use of a specific number replacing that many letters, the more general "x" can be used to replace an unspecified number of letters. Examples include "Crxn" for "crystallization" and the series familiar to physicians for history, diagnosis, and treatment ("hx", "dx", "tx"). Terms relating to a command structure may also sometimes use this formatting, for example gold, silver, and bronze levels of command in UK policing being referred to as Gx, Sx, and Bx.

There is a question about how to pluralize acronyms. Often a writer will add an 's' following an apostrophe, as in "PC's". However, Kate L. Turabian's A Manual for Writers of Research Papers, Theses, and Dissertations, writing about style in academic writings,[53] allows for an apostrophe to form plural acronyms "only when an abbreviation contains internal periods or both capital and lowercase letters". Turabian would therefore prefer "DVDs" and "URLs" but "Ph.D.'s". The style guides of the Modern Language Association[54] and American Psychological Association[55][56] prohibit apostrophes from being used to pluralize acronyms regardless of periods (so "compact discs" would be "CDs" or "C.D.s"), whereas The New York Times Manual of Style and Usage requires an apostrophe when pluralizing all abbreviations regardless of periods (preferring "PC's, TV's and VCR's").[57]

Possessive plurals that also include apostrophes for mere pluralization and periods appear especially complex: for example, "the C.D.'s' labels" (the labels of the compact discs). In some instances, however, an apostrophe may increase clarity: for example, if the final letter of an abbreviation is "S", as in "SOS's" (although abbreviations ending with S can also take "-es", e.g. "SOSes"), or when pluralizing an abbreviation that has periods.[58][59]

A particularly rich source of options arises when the plural of an acronym would normally be indicated in a word other than the final word if spelled out in full. A classic example is "Member of Parliament", which in plural is "Members of Parliament". It is possible then to abbreviate this as "M's P", which was fairly common in mid-twentieth-century Australian news writing[60][61] (or similar),[62] and was used by former Australian Prime Minister Ben Chifley.[63][64][65] This usage is less common than forms with "s" at the end, such as "MPs", and may appear dated or pedantic.
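As promised above, the numeronym counting rule is simple enough to state in a few lines of Python; the length threshold below is an assumption for illustration.

def numeronym(word):
    """Keep the first and last letters; replace the middle with its length."""
    if len(word) < 4:
        return word  # too short to be worth abbreviating (an assumption)
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
print(numeronym("accessibility"))         # a11y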
Returning to pluralization: in common usage, "weapons of mass destruction" becomes "WMDs", "prisoners of war" becomes "POWs", and "runs batted in" becomes "RBIs".[66]

Abbreviations that come from single, rather than multiple, words—such as "TV" ("television")—are usually pluralized without apostrophes ("two TVs"); most writers feel that the apostrophe should be reserved for the possessive ("the TV's antenna").[citation needed]

In some languages, the convention of doubling the letters in the acronym is used to indicate plural words: for example, the Spanish EE.UU., for Estados Unidos ('United States'). This old convention is still sometimes followed for a limited number of English abbreviations, such as SS. for Saints, pp. for the plural of 'pages', or mss. for manuscripts.[citation needed]

The most common capitalization scheme seen with acronyms is all-uppercase (all caps). Small caps are sometimes used to make the run of capital letters seem less jarring to the reader. For example, the style of some American publications, including the Atlantic Monthly and USA Today, is to use small caps for acronyms longer than three letters;[citation needed] thus "U.S." and "FDR" in normal caps, but "NATO" in small caps. The acronyms "AD" and "BC" are often smallcapped as well, as in "From 4004 BC to AD 525" (with "BC" and "AD" set in small caps).

Where an acronym has linguistically taken on an identity as a regular word, it may use normal case rules, e.g. it would appear generally in lower case, but with an initial capital when starting a sentence or when in a title. Once knowledge of the words underlying such an acronym has faded from common recall, the acronym may be termed an anacronym.[67] Examples of anacronyms are the words "scuba", "radar", and "laser". The word "anacronym" should not be confused with the word "anachronym", which is a type of misnomer.

Words derived from an acronym by affixing are typically expressed in mixed case, so the root acronym is clear. For example, "pre-WWII politics", "post-NATO world", "DNase". In some cases a derived acronym may also be expressed in mixed case. For example, "messenger RNA" and "transfer RNA" become "mRNA" and "tRNA".

Some publications choose to capitalize only the first letter of acronyms, reserving all-caps styling for initialisms, writing the pronounced acronyms "Nato" and "Aids" in mixed case, but the initialisms "USA" and "FBI" in all caps. For example, this is the style used in The Guardian,[68] and BBC News typically edits to this style (though its official style guide, dating from 2003, still recommends all-caps[69]). The logic of this style is that the pronunciation is reflected graphically by the capitalization scheme. However, it conflicts with conventional English usage of first-letter upper-casing as a marker of proper names in many cases; e.g. AIDS stands for acquired immuno-deficiency syndrome, which is not a proper name, while Aids is in the style of one.

Some style manuals also base the letters' case on their number. The New York Times, for example, keeps "NATO" in all capitals (while several guides in the British press may render it "Nato"), but uses lower case in "Unicef" (from "United Nations International Children's Emergency Fund") because it is more than four letters, and to style it in caps might look ungainly (flirting with the appearance of "shouting capitals").

While abbreviations typically exclude the initials of short function words (such as "and", "or", "of", or "to"), this is not always the case. Sometimes function words are included to make a pronounceable acronym, such as CORE (Congress of Racial Equality).
Sometimes the letters representing these words are written in lower case, as in the cases of "TfL" ("Transport for London") and LotR (The Lord of the Rings); this usually occurs when the acronym represents a multi-word proper noun.

Numbers (both cardinal and ordinal) in names are often represented by digits rather than initial letters, as in "4GL" ("fourth generation language") or "G77" ("Group of 77"). Large numbers may use metric prefixes, as with "Y2K" for "Year 2000". Exceptions using initials for numbers include "TLA" ("three-letter acronym/abbreviation") and "GoF" ("Gang of Four"). Abbreviations using numbers for other purposes include repetitions, such as "A2DP" ("Advanced Audio Distribution Profile"), "W3C" ("World Wide Web Consortium"), and T3 (Trends, Tips & Tools for Everyday Living); pronunciation, such as "B2B" ("business to business"); and numeronyms, such as "i18n" ("internationalization"; "18" represents the 18 letters between the initial "i" and the final "n").

Authors of expository writing will sometimes capitalize or otherwise distinctively format the initials of the expansion for pedagogical emphasis (for example, writing "the onset of Congestive Heart Failure (CHF)", or emphasizing the initial letters: "the onset of congestive heart failure (CHF)"). Such capitalization, however, conflicts with the convention of English orthography, which generally reserves capitals in the middle of sentences for proper nouns; when following the AMA Manual of Style, this would instead be rendered as "the onset of congestive heart failure (CHF)".[70]

Some apparent acronyms or other abbreviations do not stand for anything and cannot be expanded to some meaning. Such pseudo-acronyms may be pronunciation-based, such as "BBQ" (bee-bee-cue) for "barbecue", and "K9" (kay-nine) for "canine". Pseudo-acronyms also frequently develop as "orphan initialisms": an existing acronym is redefined as a non-acronymous name, severing its link to its previous meaning.[71][72] For example, the letters of the "SAT", a US college entrance test originally dubbed "Scholastic Aptitude Test", no longer officially stand for anything.[73][74] The US-based abortion-rights organization "NARAL" is another example of this; in that case, the organization changed its name three times, with the long form of the name always corresponding to the letters "NARAL", before eventually opting to simply be known by the short form, without being connected to a long form.

This is common with companies that want to retain brand recognition while moving away from an outdated image: American Telephone and Telegraph became AT&T,[71] and British Petroleum became BP.[72][75] Russia Today has rebranded itself as RT. American Movie Classics has simply rebranded itself as AMC. Genzyme Transgenics Corporation became GTC Biotherapeutics, Inc.; The Learning Channel became TLC; MTV dropped the name Music Television from its brand; and American District Telegraph became simply known as ADT. "Kentucky Fried Chicken" went partway, re-branding itself with its initialism "KFC" to de-emphasize the role of frying in the preparation of its signature dishes, though it has since returned to using both interchangeably.[76][a] The East Coast Hockey League became the ECHL when it expanded to include cities in the western United States prior to the 2003–2004 season.
Pseudo-acronyms may have advantages in international markets: for example, some national affiliates of International Business Machines are legally incorporated with "IBM" in their names (for example, IBM Canada) to avoid translating the full name into local languages.[citation needed] Likewise, UBS is the name of the merged Union Bank of Switzerland and Swiss Bank Corporation,[77] and HSBC has replaced the long name Hongkong and Shanghai Banking Corporation. Some companies which have a name giving a clear indication of their place of origin will choose to use acronyms when expanding to foreign markets: for example, Toronto-Dominion Bank sometimes continues to operate under its full name in Canada, but its U.S. subsidiary is known only as TD Bank, just as Royal Bank of Canada sometimes still uses its full name in Canada (a constitutional monarchy) while its U.S. subsidiary is always only called RBC Bank. The India-based JSW Group of companies is another example of the original name (Jindal South West Group) being re-branded into a pseudo-acronym while expanding into other geographical areas in and outside of India.

Rebranding can lead to redundant acronym syndrome, as when Trustee Savings Bank became TSB Bank, or when Railway Express Agency became REA Express. A few high-tech companies have taken the redundant acronym to the extreme: for example, ISM Information Systems Management Corp. and SHL Systemhouse Ltd. Examples in entertainment include the television shows CSI: Crime Scene Investigation and Navy: NCIS ("Navy" was dropped in the second season), where the redundancy was likely designed to educate new viewers as to what the initials stood for. The same reasoning was in evidence when the Royal Bank of Canada's Canadian operations rebranded to RBC Royal Bank, or when Bank of Montreal rebranded its retail banking subsidiary BMO Bank of Montreal.

Another common example is "RAM memory", which is redundant because "RAM" ("random-access memory") includes the initial of the word "memory". "PIN" stands for "personal identification number", obviating the second word in "PIN number"; in this case its retention may be motivated to avoid ambiguity with the homophonous word "pin". Other examples include "ATM machine", "EAB bank", "HIV virus", Microsoft's NT Technology, and the formerly redundant "SAT test" (now simply "SAT Reasoning Test"). TNN (The Nashville/National Network) also renamed itself "The New TNN" for a brief interlude.

In some cases, while the initials in an acronym may stay the same, what those letters stand for may change. Examples include the following:

A backronym (or bacronym) is a phrase that is constructed "after the fact" from a previously existing word. For example, the novelist and critic Anthony Burgess once proposed that the word "book" ought to stand for "box of organized knowledge".[83] A classic real-world example of this is the name of the predecessor to the Apple Macintosh, the Apple Lisa, which was said to refer to "Local Integrated Software Architecture", but was actually named after Steve Jobs' daughter, born in 1978.

Acronyms are sometimes contrived, that is, deliberately designed to be especially apt for the thing being named (by having a dual meaning or by borrowing the positive connotations of an existing word). Some examples of contrived acronyms are USA PATRIOT, CAN SPAM, CAPTCHA and ACT UP.[citation needed] The clothing company French Connection began referring to itself as fcuk, standing for "French Connection United Kingdom".
The company then created T-shirts and several advertising campaigns that exploit the acronym's similarity to the taboo word "fuck".

Contrived acronyms find frequent use as names of fictional agencies, a famous example being the recurring James Bond antagonist organization SPECTRE (SPecial Executive for Counterintelligence, Terrorism, Revenge and Extortion). The U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) is known for developing contrived acronyms to name projects, including RESURRECT, NIRVANA, and DUDE. In July 2010, Wired magazine reported that DARPA announced programs to "transform biology from a descriptive to a predictive field of science" named BATMAN and ROBIN, for "Biochronicity and Temporal Mechanisms Arising in Nature" and "Robustness of Biologically-Inspired Networks",[84] a reference to the comic-book superheroes Batman and Robin.

The short-form names of clinical trials and other scientific studies constitute a large class of acronyms that includes many contrived examples, as well as many with a partial rather than complete correspondence of letters to expansion components. These trials tend to have full names that are accurately descriptive of what the trial is about but are thus also too long to serve practically as names within the syntax of a sentence, so a short name is also developed, which can serve as a syntactically useful handle and also provide at least a degree of mnemonic reminder as to the full name. Examples widely known in medicine include the ALLHAT trial (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) and the CHARM trial (Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity). The fact that RAS syndrome is often involved, as well as that the letters often do not entirely match, has sometimes been pointed out by annoyed researchers preoccupied by the idea that because the archetypal form of acronyms originated with one-to-one letter matching, there must be some impropriety in their ever deviating from that form. However, the raison d'être of clinical trial acronyms, as with gene and protein symbols, is simply to have a syntactically usable and easily recalled short name to complement the long name that is often syntactically unusable and not memorized. It is useful for the short name to give a reminder of the long name, which supports the reasonable censure of "cutesy" examples that provide little to no hint of it. But beyond that reasonably close correspondence, the short name's chief utility is in functioning cognitively as a name, rather than being a cryptic and forgettable string, albeit faithful to the matching of letters.
However, other reasonable critiques have been (1) that it is irresponsible to mention trial acronyms without explaining them at least once by providing the long names somewhere in the document,[85] and (2) that the proliferation of trial acronyms has resulted in ambiguity, such as three different trials all called ASPECT, which is another reason why failing to explain them somewhere in the document is irresponsible in scientific communication.[85] At least one study has evaluated the citation impact and other traits of acronym-named trials compared with others,[86] finding both good aspects (mnemonic help, name recall) and potential flaws (connotatively driven bias).[86]

Some acronyms are chosen deliberately to avoid a name considered undesirable: for example, Verliebt in Berlin (ViB), a German telenovela, was first intended to be Alles nur aus Liebe ('All for Love'), but was changed to avoid the resultant acronym ANAL. Likewise, the Computer Literacy and Internet Technology qualification is known as CLaIT,[87] rather than CLIT. In Canada, the Canadian Conservative Reform Alliance (Party) was quickly renamed the "Canadian Reform Conservative Alliance" when its opponents pointed out that its initials spelled CCRAP (pronounced "see-crap"). Two Irish institutes of technology (Galway and Tralee) chose different acronyms from other institutes when they were upgraded from regional technical colleges: Tralee RTC became the Institute of Technology Tralee (ITT), as opposed to Tralee Institute of Technology (TIT), and Galway RTC became Galway-Mayo Institute of Technology (GMIT), as opposed to Galway Institute of Technology (GIT). The charity sports organization Team in Training is known as "TNT" and not "TIT". Technological Institute of Textile & Sciences, however, is still known as "TITS". George Mason University was planning to name its law school the "Antonin Scalia School of Law" (ASSOL) in honor of the late Antonin Scalia, only to change it to the "Antonin Scalia Law School" later.[88]

A macronym, or nested acronym, is an acronym in which one or more letters stand for acronyms (or abbreviations) themselves. The word "macronym" is a portmanteau of "macro-" and "acronym". Some examples of macronyms are:

Some macronyms can be multiply nested: the second-order acronym points to another one further down a hierarchy. VITAL, for example, which expands to "VHDL Initiative Towards ASIC Libraries", is a total of 15 words when fully expanded. In an informal competition run by the magazine New Scientist, a fully documented specimen was discovered that may be the most deeply nested of all: RARS is the "Regional ATOVS Retransmission Service"; ATOVS is "Advanced TOVS"; TOVS is "TIROS operational vertical sounder"; and TIROS is "Television infrared observational satellite".[89] Fully expanded, "RARS" might thus become "Regional Advanced Television Infrared Observational Satellite Operational Vertical Sounder Retransmission Service", which would produce the much more unwieldy acronym "RATIOSOVSRS". However, to say that "RARS" stands directly for that string of words, or can be interchanged with it in syntax (in the same way that "CHF" can be usefully interchanged with "congestive heart failure"), is a prescriptive misapprehension rather than a linguistically accurate description; the true nature of such a term is closer to anacronymic than to being interchangeable like simpler acronyms are.
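The nesting itself is mechanical, even if the result is not usable language. The toy Python expander below, with a table transcribed from the passage above, reproduces the full expansion the text quotes. Note that a genuinely recursive acronym such as MUNG (discussed below) would never terminate under this scheme, which is rather the point.

EXPANSIONS = {
    "RARS": "Regional ATOVS Retransmission Service",
    "ATOVS": "Advanced TOVS",
    "TOVS": "TIROS operational vertical sounder",
    "TIROS": "Television infrared observational satellite",
}

def expand(term):
    """Recursively replace every known acronym inside a definition."""
    definition = EXPANSIONS.get(term)
    if definition is None:
        return term
    return " ".join(expand(word) for word in definition.split())

print(expand("RARS"))
# Regional Advanced Television infrared observational satellite
# operational vertical sounder Retransmission Service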
The latter are fully reducible in an attempt to "spell everything out and avoid all abbreviations", but the former are irreducible in that respect; they can be annotated with parenthetical explanations, but they cannot be eliminated from speech or writing in any useful or practical way. Just as the words laser and radar function as words in syntax and cognition without a need to focus on their acronymic origins, terms such as "RARS" and "CHA2DS2–VASc score" are irreducible in natural language; if they are purged, the form of language that is left may conform to some imposed rule, but it cannot be described as remaining natural. Similarly, protein and gene nomenclature, which uses symbols extensively, includes such terms as the name of the NACHT protein domain, which reflects the symbols of some proteins that contain the domain – NAIP (NLR family apoptosis inhibitor protein), C2TA (major histocompatibility complex class II transcription activator), HET-E (incompatibility locus protein from Podospora anserina), and TP1 (telomerase-associated protein) – but is not syntactically reducible to them. The name is thus itself more symbol than acronym, and its expansion cannot replace it while preserving its function in natural syntax as a name within a clause clearly parsable by human readers or listeners.

A special type of macronym, the recursive acronym, has letters whose expansion refers back to the macronym itself. One of the earliest examples appears in The Hacker's Dictionary as MUNG, which stands for "MUNG Until No Good". Some examples of recursive acronyms are:

In English-language discussions of languages with syllabic or logographic writing systems (such as Chinese, Japanese, and Korean), "acronyms" describes the short forms that take selected characters from a multi-character word. For example, in Chinese, 'university' (大學/大学, lit. 'great learning') is usually abbreviated simply as 大 ('great') when used with the name of the institute. So 'Peking University' (北京大学) is commonly shortened to 北大 (lit. 'north-great') by also taking only the first character of Peking, the "northern capital" (北京; Beijing). In some cases, however, characters other than the first can be selected. For example, the local short form of 'Hong Kong University' (香港大學) uses Kong (港大) rather than Hong.

There are also cases where some longer phrases are abbreviated drastically, especially in Chinese politics, where proper nouns were initially translated from Soviet Leninist terms. For instance, the full name of China's highest ruling council, the Politburo Standing Committee (PSC), is 'Standing Committee of the Central Political Bureau of the Communist Party of China' (中国共产党中央政治局常务委员会). The name was then reduced in the 'Communist Party of China' part through acronyms, then in the 'Standing Committee' part, again through acronyms, to create 中共中央政治局常委. Alternatively, it omitted the 'Communist Party' part altogether, creating 'Politburo Standing Committee' (政治局常委会), and eventually just 'Standing Committee' (常委会). PSC members' full designations are 'Member of the Standing Committee of the Central Political Bureau of the Communist Party of China' (中国共产党中央政治局常务委员会委员); this was eventually drastically reduced to simply Changwei (常委), with the term Ruchang (入常) used increasingly for officials destined for a future seat on the PSC. In another example, the word 全国人民代表大会 ('National People's Congress') can be broken into four parts: 全国 = 'the whole nation', 人民 = 'people', 代表 = 'representatives', 大会 = 'conference'.
Yet, in its short form 人大 (literally 'man/people big'), only the first characters from the second and the fourth parts are selected; the first part (全国) and the third part (代表) are completely dropped. Many proper nouns become shorter and shorter over time. For example, the CCTV New Year's Gala, whose full name is literally read as 'China Central Television Spring Festival Joint Celebration Evening Gala' (中国中央电视台春节联欢晚会), was first shortened to 'Spring Festival Joint Celebration Evening Gala' (春节联欢晚会), but eventually came to be referred to as simply Chunwan (春晚). In the same vein, CCTV, or Zhongguo Zhongyang Dianshi Tai (中国中央电视台), was reduced to Yangshi (央视) in the mid-2000s.

Many aspects of academics in Korea follow similar acronym patterns as Chinese, owing to the two languages' commonalities, like using the word for 'big' or 'great', i.e. dae (대), to refer to universities (대학; daehak, literally 'great learning', although 'big school' is an acceptable alternate). They can be interpreted similarly to American university appellations, such as "UPenn" or "Texas Tech". Some acronyms are shortened forms of the school's name, like how Hongik University (홍익대학교, Hongik Daehakgyo) is shortened to Hongdae (홍대, 'Hong, the big [school]' or 'Hong-U'). Other acronyms can refer to the university's main subject, e.g. Korea National University of Education (한국교원대학교, Hanguk Gyowon Daehakgyo) is shortened to Gyowondae (교원대, 'Big Ed.' or 'Ed.-U'). Other schools use a Koreanized version of their English acronym. The Korea Advanced Institute of Science and Technology (한국과학기술원, Hanguk Gwahak Gisulwon) is referred to as KAIST (카이스트, Kaiseuteu) in both English and Korean. The three most prestigious schools in Korea are known as SKY (스카이, seukai), combining the first letters of their English names (Seoul National, Korea, and Yonsei Universities). In addition, the College Scholastic Ability Test (대학수학능력시험, Daehak Suhang Neungryeok Siheom) is shortened to Suneung (수능, 'S.A.').

The Japanese language makes extensive use of abbreviations, but only some of these are acronyms. Chinese-based words (Sino-Japanese vocabulary) use similar acronym formation to Chinese, like Tōdai (東大) for Tōkyō Daigaku (東京大学, Tokyo University). In some cases alternative pronunciations are used, as in Saikyō for 埼京, from Saitama + Tōkyō (埼玉 + 東京), rather than Saitō. Non-Chinese foreign borrowings (gairaigo) are instead frequently abbreviated as clipped compounds, rather than acronyms, using several initial sounds. This is visible in katakana transcriptions of foreign words, but is also found with native words (written in hiragana). For example, the Pokémon media franchise's name originally stood for "pocket monsters" (ポケット·モンスター [po-ke-tto-mon-su-tā], clipped to ポケモン), which is still the long form of the name in Japanese, and "wāpuro" stands for "word processor" (ワード·プロセッサー [wā-do-pu-ro-se-ssā], clipped to ワープロ).

To a greater degree than English does, German tends toward acronyms that use initial syllables rather than initial single letters, although it uses many of the latter type as well. Some examples of the syllabic type are Gestapo rather than GSP (for Geheime Staatspolizei, 'Secret State Police'); Flak rather than FAK (for Fliegerabwehrkanone, 'anti-aircraft gun'); and Kripo rather than KP (for Kriminalpolizei, 'detective division police'). The extension of such contraction to a pervasive or whimsical degree has been mockingly labeled Aküfi (for Abkürzungsfimmel, 'strange habit of abbreviating').
Examples of Aküfi include Vokuhila (for vorne kurz, hinten lang, 'short in the front, long in the back', i.e., a mullet haircut) and the mocking of Adolf Hitler's title as Gröfaz (Größter Feldherr aller Zeiten, 'Greatest General of all Time').

In Hebrew, it is common to take more than just one initial letter from each of the words composing the acronym; regardless of this, the abbreviation sign gershayim ⟨״⟩ is always written between the second-last and last letters of the non-inflected form of the acronym, even if it thereby separates letters of the same original word. Examples (keeping in mind that Hebrew reads right-to-left): ארה״ב (for ארצות הברית, the United States); ברה״מ (for ברית המועצות, the Soviet Union); ראשל״צ (for ראשון לציון, Rishon LeZion); ביה״ס (for בית הספר, the school). An example that takes only the initial letters from its component words is צה״ל (Tzahal, for צבא הגנה לישראל, Israel Defense Forces). In inflected forms, the abbreviation sign gershayim remains between the second-last and last letters of the non-inflected form of the acronym (e.g. 'report', singular: דו״ח, plural: דו״חות; 'squad commander', masculine: מ״כ, feminine: מ״כית).

There is also widespread use of acronyms in Indonesia in every aspect of social life. For example, the Golkar political party stands for Partai Golongan Karya, Monas stands for Monumen Nasional ('National Monument'), the Angkot public transport stands for Angkutan Kota ('city public transportation'), warnet stands for warung internet ('internet cafe'), and many others. Some acronyms are considered formal (or officially adopted), while many more are considered informal, slang, or colloquial. The capital's metropolitan area (Jakarta and its surrounding satellite regions), Jabodetabek, is another acronym, standing for Jakarta-Bogor-Depok-Tangerang-Bekasi. Many highways are also named by the acronym method; e.g. Jalan Tol ('Toll Road') Jagorawi (Jakarta-Bogor-Ciawi), Purbaleunyi (Purwakarta-Bandung-Cileunyi), and Joglo Semar (Jogja-Solo-Semarang).

In some languages, especially those that use certain alphabets, many acronyms come from governmental use, particularly in the military and law enforcement services. The Indonesian military (TNI – Tentara Nasional Indonesia) and Indonesian police (POLRI – Kepolisian Republik Indonesia) are known for heavy acronym use. Examples include Kopassus (Komando Pasukan Khusus; 'Special Forces Command'), Kopaska (Komando Pasukan Katak; 'Frogmen Command'), Kodim (Komando Distrik Militer; 'Military District Command' – one of the Indonesian army's administrative divisions), Serka (Sersan Kepala; 'Head Sergeant'), Akmil (Akademi Militer; 'Military Academy', in Magelang), and many other terms regarding ranks, units, divisions, procedures, etc.

Although not as common as in Indonesian, a number of Malay words are formed by merging two words, such as tadika from taman didikan kanak-kanak ('kindergarten') and pawagam from panggung wayang gambar. This, however, has been less prevalent in the modern era, in contrast to Indonesian. It is still common for names, such as organisation names, among the most famous being MARA, from Majlis Amanah Rakyat ('People's Trust Council'), a government agency in Malaysia.
Some acronyms are developed from the Jawi (Malay in Arabic script) spelling of the name and may not reflect their Latin counterparts, such as PAS, from Parti Islam Se-Malaysia ('Malaysian Islamic Party'), which originated from the Jawi acronym ڤاس from ڤرتي إسلام سمليسيا, with the same pronunciation, since the first letter of the word 'Islam' in Jawi uses the letter Aleph, which is pronounced like the letter A when in such a position as in the acronym.

Rules for writing initialisms in Malay differ based on the script. In its Latin form, an initialism is spelt much as in English, using capitals written without any spacing, such as TNB for Tenaga Nasional Berhad. In Jawi, however, initialisms differ depending on the source language. For Malay initialisms, the initial Jawi letters are written separated by a period, such as د.ب.ڤ for ديوان بهاس دان ڤوستاک.[90] If the initialism is from a different language, however, it is written by transliterating each letter from the original language, such as عيم.سي.عيم.سي. for MCMC, or الفا.ڤي.ثيتا for Α.Π.Θ.[91]

Acronyms that use parts of words (not necessarily syllables) are commonplace in Russian as well, e.g. Газпром (Gazprom), for Газовая промышленность (Gazovaya promyshlennost, 'gas industry'). There are also initialisms, such as СМИ ('SMI', for средства массовой информации, sredstva massovoy informatsii, 'means of mass informing'); ГУЛаг (GULag) combines two initials and three letters of the final word: it stands for Главное управление лагерей (Glavnoe upravlenie lagerey, 'Chief Administration of Camps'). Historically, OTMA was an acronym sometimes used by the daughters of Emperor Nicholas II of Russia and his consort, Alexandra Feodorovna, as a group nickname for themselves, built from the first letter of each girl's name in the order of their births: Olga, Tatiana, Maria, and Anastasia.

In Swahili, acronyms are common for naming organizations, such as TUKI, which stands for Taasisi ya Uchunguzi wa Kiswahili ('Institute for Swahili Research'). Multiple initial letters (often the initial syllables of words) are often drawn together, as seen more in some languages than others.

In Vietnamese, which has an abundance of compound words, initialisms are very commonly used for both proper and common nouns. Examples include TP.HCM (Thành phố Hồ Chí Minh, 'Ho Chi Minh City'), THPT (trung học phổ thông, 'high school'), CLB (câu lạc bộ, 'club'), CSDL (cơ sở dữ liệu, 'database'), NXB (nhà xuất bản, 'publisher'), ÔBACE (ông bà anh chị em, a general form of address), and CTTĐVN (các Thánh tử đạo Việt Nam, 'Vietnamese Martyrs'). Longer examples include CHXHCNVN (Cộng hòa Xã hội chủ nghĩa Việt Nam, 'Socialist Republic of Vietnam') and MTDTGPMNVN (Mặt trận Dân tộc Giải phóng miền Nam Việt Nam, 'Liberation Army of South Vietnam', or the National Liberation Front of South Vietnam). Long initialisms have become widespread in legal contexts in Vietnam, for example TTLT-VKSNDTC-TANDTC.[92] It is also common for a writer to coin an ad hoc initialism for repeated use in an article.

Each letter in an initialism corresponds to one morpheme, that is, one syllable. When the first letter of a syllable has a tone mark or other diacritic, the diacritic may be omitted from the initialism, for example ĐNA or ĐNÁ for Đông Nam Á ('Southeast Asia') and LMCA or LMCÂ for Liên minh châu Âu ('European Union'). The letter Ư is often replaced by W in initialisms to avoid confusion with U, for example UBTWMTTQVN or UBTƯMTTQVN for Ủy ban Trung ương Mặt trận Tổ quốc Việt Nam ('Central Committee of the Vietnamese Fatherland Front').
Initialisms are purely a written convenience, being pronounced the same way as their expansions. As the names of many Vietnamese letters are disyllabic, it would be less convenient to pronounce an initialism by its individual letters. Acronyms pronounced as words are rare in Vietnamese, occurring when an acronym itself is borrowed from another language. Examples include SIĐA (pronounced [s̪i˧ˀɗaː˧]), a respelling of the French acronym SIDA ('AIDS'); VOA (pronounced [vwaː˧]), a literal reading of the English initialism for 'Voice of America'; and NASA (pronounced [naː˧zaː˧]), borrowed directly from the English acronym.

As in Chinese, many compound words can be shortened to the first syllable when forming a longer word. For example, the term Việt Cộng is derived from the first syllables of Việt Nam ('Vietnam') and Cộng sản ('communist'). This mechanism is limited to Sino-Vietnamese vocabulary. Unlike with Chinese, such clipped compounds are considered to be portmanteau words or blend words rather than acronyms or initialisms, because the Vietnamese alphabet still requires each component word to be written as more than one character.

In languages where nouns are declined, various methods are used. An example is Finnish, where a colon is used to separate inflection from the letters: The process above is similar to the way that hyphens are used for clarity in English when prefixes are added to acronyms: thus pre-NATO policy (rather than preNATO).

In languages such as Scottish Gaelic and Irish, where lenition (initial consonant mutation) is commonplace, acronyms must also be modified in situations where case and context dictate it. In the case of Scottish Gaelic, a lower-case h is often added after the initial consonant; for example, 'BBC Scotland' in the genitive case would be written as BhBC Alba, with the acronym pronounced VBC. Likewise, the Gaelic acronym for telebhisean ('television') is TBh, pronounced TV, as in English.

Dictionary entries for acronym include the following:

acronym, n. Pronunciation: Brit. /ˈakrənɪm/, U.S. /ˈækrəˌnɪm/. Origin: Formed within English, by compounding; modelled on a German lexical item. Etymons: acro- comb. form, -onym comb. form. Etymology: < acro- comb. form + -onym comb. form, after German Akronym (1921 or earlier). Originally U.S. 1. A group of initial letters used as an abbreviation for a name or expression, each letter or part being pronounced separately; an initialism (such as ATM, TLS). In the O.E.D. the term initialism is used for this phenomenon. (See sense 2 for O.E.D. use of the word.) 2. A word formed from the initial letters of other words or (occasionally) from the initial parts of syllables taken from other words, the whole being pronounced as a single word (such as NATO, RADA).

acronym, noun, ac·ro·nym | \ˈa-krə-ˌnim\. Definition of acronym: a word (such as NATO, radar, or laser) formed from the initial letter or letters of each of the successive parts or major parts of a compound term; also: an abbreviation (such as FBI) formed from initial letters: initialism.

ac·ro·nym (ăk′rə-nĭm′), n. 1. A word formed by combining the initial letters of a multipart name, such as NATO from North Atlantic Treaty Organization, or by combining the initial letters or parts of a series of words, such as radar from radio detecting and ranging. 2. Usage Problem: An initialism. [acr(o)- + -onym.] ac′ro·nym′ic, a·cron′y·mous (ə-krŏn′ə-məs) adj. Usage Note: In strict usage, the term acronym refers to a word made from the initial letters or parts of other words, such as sonar from so(und) na(vigation and) r(anging). The distinguishing feature of an acronym is that it is pronounced as if it were a single word, in the manner of NATO and NASA.
Acronyms are often distinguished from initialisms like FBI and NIH, whose individual letters are pronounced as separate syllables. While observing this distinction has some virtue in precision, it may be lost on many people, for whom the term acronym refers to both kinds of abbreviations.

acronym /ˈækrənɪm/ (say 'akruhnim), noun: 1. a word formed from the initial letters of a sequence of words, as radar (from radio detection and ranging) or ANZAC (from Australian and New Zealand Army Corps). Compare initialism. 2. an initialism. [acro- + -(o)nym; modelled on synonym]

ac·ro·nym /ˈakrəˌnim/, n.: an abbreviation formed from the initial letters of other words and pronounced as a word (e.g. ASCII, NASA). Origin: 1940s, from Greek akron 'end, tip' + onoma 'name', on the pattern of homonym.

acronyms: A number of commentators (as Copperud 1970, Janis 1984, Howard 1984) believe that acronyms can be differentiated from other abbreviations in being pronounceable as words. Dictionaries, however, do not make this distinction because writers in general do not: "The powder metallurgy industry has officially adopted the acronym 'P/M Parts'" – Precision Metal Molding, January 1966. "Users of the term acronym make no distinction between those pronounced as words ... and those pronounced as a series of characters" – Jean Praninskas, Trade Name Creation, 1968. "It is not J.C.B.'s fault that its name, let alone its acronym, is not a household word among European scholars" – Times Literary Supp., 5 February 1970. "... the confusion in the Pentagon about abbreviations and acronyms – words formed from the first letters of other words" – Bernard Weinraub, N.Y. Times, 11 December 1978. Pyles & Algeo 1970 divide acronyms into "initialisms", which consist of initial letters pronounced with the letter names, and "word acronyms", which are pronounced as words. Initialism, an older word than acronym, seems to be too little known to the general public to serve as the customary term standing in contrast with acronym in a narrow sense.
https://en.wikipedia.org/wiki/Acronym
Lists of acronyms contain acronyms, a type of abbreviation formed from the initial components of the words of a longer name or phrase. They are organized alphabetically and by field.
https://en.wikipedia.org/wiki/List_of_acronyms
This is a list of abbreviations used in a business or financial context.
https://en.wikipedia.org/wiki/List_of_business_and_finance_abbreviations
The following list contains a selection from the Latin abbreviations that occur in the writings and inscriptions of the Romans.[1][2] A few other non-classical Latin abbreviations are added.
https://en.wikipedia.org/wiki/List_of_classical_abbreviations
Examples of sigla in use in the Middle Ages:
https://en.wikipedia.org/wiki/List_of_medieval_abbreviations
A numeronym is a word, usually an abbreviation, composed partially or wholly of numerals. The term can be used to describe several different number-based constructs, but it most commonly refers to a contraction in which all letters between the first and last of a word are replaced with the number of omitted letters (for example, "i18n" for "internationalization").[1] According to Anne H. Soukhanov, editor of the Microsoft Encarta College Dictionary, it originally referred to phonewords – words spelled out by the letter keys of a telephone pad.[2] A numeronym can also be called an alphanumeric acronym or alphanumeric abbreviation.

A number may be substituted into a word where its pronunciation matches that of the omitted letters. For example, "K9" is pronounced "kay-nine", which sounds like "canine" (relating to dogs). Examples of numeronyms based on homophones include:

Alternatively, letters between the first and last letters of a word may be replaced by the number of letters omitted. For example, the word "internationalization" can be abbreviated by replacing the eighteen middle letters ("nternationalizatio") with "18", leaving "i18n". Sometimes the last letter is also counted and omitted. These word shortenings are sometimes called numerical contractions. According to Tex Texin, the first numeronym of this kind was "S12n", the electronic mail account name given to Digital Equipment Corporation (DEC) employee Jan Scherpenhuizen by a system administrator because his surname was too long to be an account name. The use of such numeronyms became part of DEC corporate culture.[3] Examples of numerical contractions include:

Some numeronyms are composed entirely of numbers, such as "212" for "New Yorker", "4-1-1" for "information", "9-1-1" for "help", "101" for "basic introduction to a subject", and "420" for "cannabis". Words of this type have existed for decades, including those in 10-code, which has been in use since before World War II. Chapter or title numbers of some jurisdictions' statutes have become numeronyms, for example 5150 and 187 from California's penal code. Largely because the production of many American movies and television programs is based in California, usage of these terms has spread beyond their original location and user population. Examples of purely numeric words include:

A number may also denote how many times the character before or after it is repeated. This is typically used to represent a name or phrase in which several consecutive words start with the same letter, as in W3 (World Wide Web) or W3C (World Wide Web Consortium). Amazon Web Services uses this in naming some of its popular services, such as S3 (Simple Storage Service)[20] and EC2 (Elastic Compute Cloud).[21]

Numeronyms can also make use of SI prefixes, which are commonly used to abbreviate long numbers (e.g. "1k" for 1,000 or "1M" for 1,000,000). Examples of numeronyms using SI prefixes include:
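Since the most common contraction rule is purely mechanical (keep the first and last letters and count the rest), it can be sketched in a few lines of Python. The function name numeronym and the count_last flag below are illustrative, not from any standard library:

    def numeronym(word: str, count_last: bool = False) -> str:
        """Contract a word by replacing its interior letters with their count.

        numeronym("internationalization") -> "i18n"
        """
        if len(word) <= 3:
            return word  # too short to usefully contract
        if count_last:
            # variant mentioned above, where the last letter is also
            # counted and omitted
            return word[0] + str(len(word) - 1)
        return word[0] + str(len(word) - 2) + word[-1]

    assert numeronym("internationalization") == "i18n"
    assert numeronym("localization") == "l10n"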
https://en.wikipedia.org/wiki/Numeronym
RAS syndrome, where RAS stands for redundant acronym syndrome (making the phrase "RAS syndrome" autological), is the redundant use of one or more of the words that make up an acronym in conjunction with the abbreviated form. This means, in effect, repeating one or more words from the acronym. For example: PIN number (expanding to "personal identification number number") and ATM machine (expanding to "automated teller machine machine"). The term RAS syndrome was coined in 2001 in a light-hearted column in New Scientist.[1][2][3]

A person is said to "suffer" from RAS syndrome when they redundantly use one or more of the words that make up an acronym or initialism with the abbreviation itself. Usage commentators consider such redundant acronyms poor style that is best avoided in writing, especially in a formal context, though they are common in speech.[4] The degree to which there is a need to avoid pleonasms such as redundant acronyms depends on one's balance point of prescriptivism (ideas about how language should be used) versus descriptivism (the realities of how natural language is used).[5] For writing intended to persuade, impress, or avoid criticism, many usage guides advise writers to avoid pleonasm as much as possible, not because such usage is always wrong, but rather because most of one's audience may believe that it is always wrong.[6]

Although there are many instances in editing where removal of redundancy improves clarity,[7] the pure-logic ideal of zero redundancy is seldom maintained in human languages. Bill Bryson says: "Not all repetition is bad. It can be used for effect ..., or for clarity, or in deference to idiom. 'OPEC countries', 'SALT talks' and 'HIV virus' are all technically redundant because the second word is already contained in the preceding abbreviation, but only the ultra-finicky would deplore them. Similarly, in 'Wipe that smile off your face' the last two words are tautological—there is no other place a smile could be—but the sentence would not stand without them."[7]

A limited amount of redundancy can improve the effectiveness of communication, either for the whole readership or at least to offer help to those readers who need it. A phonetic example of that principle is the need for spelling alphabets in radiotelephony. Some instances of RAS syndrome can be viewed as syntactic examples of the principle. The redundancy may help the listener by providing context and decreasing the "alphabet soup quotient" (the cryptic overabundance of abbreviations and acronyms) of the communication.

Acronyms from foreign languages are often treated as unanalyzed morphemes when they are not translated. For example, in French, "le protocole IP" (the Internet Protocol protocol) is often used, and in English "please RSVP" (roughly "please respond please") is very common.[4][8] This occurs for the same linguistic reasons that cause many toponyms to be tautological. The tautology is not parsed by the mind in most instances of real-world use (in many cases because the foreign word's meaning is not known anyway; in others simply because the usage is idiomatic). Examples of RAS phrases include:
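As a rough illustration only, redundancies of this kind could be flagged mechanically by checking whether an acronym is immediately followed by the last word of its own expansion. The sketch below is hypothetical (the ACRONYMS table and the find_ras helper are invented for this example), and a real checker would need a far larger expansion dictionary:

    import re

    # Hypothetical mini-dictionary of acronym expansions.
    ACRONYMS = {
        "PIN": "personal identification number",
        "ATM": "automated teller machine",
        "HIV": "human immunodeficiency virus",
    }

    def find_ras(text: str) -> list[str]:
        """Flag phrases where an acronym is immediately followed by the
        final word of its own expansion, e.g. 'PIN number'."""
        hits = []
        for acro, expansion in ACRONYMS.items():
            last_word = expansion.split()[-1]
            pattern = rf"\b{acro}\s+{last_word}\b"
            hits += re.findall(pattern, text, flags=re.IGNORECASE)
        return hits

    print(find_ras("Enter your PIN number at the ATM machine."))
    # -> ['PIN number', 'ATM machine']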
https://en.wikipedia.org/wiki/RAS_syndrome
Short Message Service (SMS) language, or textese,[a] is the abbreviated language and slang commonly used in the late 1990s and early 2000s with mobile phone text messaging, and occasionally through Internet-based communication such as email and instant messaging.[1] Many call the words used in texting "textisms" or "internet slang".

Features of early mobile phone messaging encouraged users to use abbreviations. 2G technology made text entry difficult, requiring multiple key presses on a small keypad to generate each letter, and messages were generally limited to 160 bytes (or 1280 bits). Additionally, SMS language made text messages quicker to type, while also avoiding additional charges from mobile network providers for lengthy messages exceeding 160 characters.

SMS language is similar to telegraph language, where charges were by the word. It seeks to use the fewest letters to produce ultra-concise words and sentiments[2] in dealing with the space, time, and cost constraints of text messaging. It follows from how early SMS permitted only 160 characters and how carriers began charging a small fee for each message sent (and sometimes received). Together with the difficulty and inefficiency of creating messages, this led to the desire for a more economical language for the new medium.[3]

SMS language also shares some of these characteristics with Internet slang and Telex speak, as it evolved alongside the use of shorthand in Internet chat rooms. Likewise, such a change sought to accommodate the small number of characters allowed per message, and to increase convenience for the time-consuming and often small keyboards on mobile phones. Similar elliptical styles of writing can be traced to the days of telegraphese 120 years back, when telegraph operators were reported to use abbreviations similar to modern text when chatting amongst themselves in between the sending of official messages.[4] Faramerz Dabhoiwala wrote in The Guardian in 2016: "modern usages that horrify linguistic purists in fact have deep historical roots. 'OMG' was used by a septuagenarian naval hero, admiral of the fleet John Fisher, 1st Baron Fisher, in a letter to Winston Churchill, in 1917".[5][6]

In general, SMS language thus permits the sender to type less and communicate more quickly than one could without such shortcuts. One example is the use of "tmr" instead of "tomorrow". Nevertheless, there are no standard rules for the creation and use of SMS languages. Any word may be shortened (for example, "text" to "txt"). Words can also be combined with numbers to make them shorter (for example, "later" to "l8r"), using the numeral "8" for its homophonic quality.[7]

Some may view SMS language as a dialect of the English language,[2] that is, a dialect strongly if not completely derivative of the English language. This may not be so. Such a generalization may have arisen from the fact that mobile phones had only been able to support a limited number of default languages in the early stages of their conception and distribution.[8] A mobile operating system (OS) such as Symbian and language packs enable the linguistic localization of products that are equipped with such interfaces, where the final Symbian release (Symbian Belle) supported the scripts and orthographies of over 48 languages and dialects, though such provisions are by no means fully comprehensive as to the languages used by users all over the world.
Researcher Mohammad Shirali-Shahreza (2007)[8] further observes that mobile phone producers offer support "of local language of the country" within which their phone sets are to be distributed. Nevertheless, various factors contribute as additional constraints to the use of non-English languages and scripts in SMS. This motivates the anglicization of such languages, especially those using non-Latin orthographies (i.e. not using Latin alphabets), following, for instance, the even more limited message lengths involved when using, for example, Cyrillic or Greek letters.[9] On the other side, researcher Gillian Perrett observes the de-anglicization[10] of the English language following its use and incorporation into non-English linguistic contexts. As such, on top of the measures taken to minimize space, time and cost constraints in SMS language, further constraints arising from the varied nature and characteristics of languages worldwide add to the distinct properties and style of SMS language(s).

The primary motivation for the creation and use of SMS language was to convey a comprehensible message using the fewest characters possible. This was for two reasons: first of all, telecommunication companies limited the number of characters per SMS and charged the user per SMS sent. To keep costs down, users had to find a way of being concise while still communicating the desired message. Secondly, typing on a phone is normally slower than with a keyboard, and capitalization is even slower. As a result, punctuation, grammar, and capitalization are largely ignored. The advent of touchscreen phones with large screens, swipe-based input methods and increasingly advanced autocomplete and spelling suggestion functionality, as well as the increasing popularity of free-to-use instant messaging systems like WhatsApp over pay-per-message SMS,[11] has decreased the need to use SMS language.

Observations and classifications as to the linguistic and stylistic properties of SMS language have been made and proposed by Crispin Thurlow,[12] López Rúa,[13] and David Crystal.[9] Although they are by no means exhaustive, some of these properties involve the use of:

There are many examples of words or phrases that share the same abbreviations (e.g., lol could mean laugh out loud, lots of love, or little old lady, and cryn could mean crayon or cryin(g)). For words that have no common abbreviation, users most commonly remove the vowels from a word, and the reader is required to interpret a string of consonants by re-adding the vowels (e.g., dictionary becomes dctnry and keyboard becomes kybrd). Omission of words, especially function words (e.g., determiners like "a" and "the"), is also employed as part of the effort to overcome time and space constraints.[14]

The advent of predictive text input and smartphones featuring full QWERTY keyboards may contribute to a reduction in the use of shortenings in SMS language.[citation needed]

Recipients may have to interpret the abbreviated words depending on the context in which they are being used. For instance, should someone use "ttyl, lol" they may mean "talk to you later, lots of love" as opposed to "talk to you later, laugh out loud". In another instance, if someone were to use "omg, lol" they may mean "oh my god, laugh out loud" as opposed to "oh my god, lots of love". Therefore, context is crucial when interpreting textese, and it is precisely this shortfall that critics cite as a reason not to use it (although the English language in general, like many other languages, has many words that have different meanings in different contexts).
SMS language does not always obey or follow standard grammar, and additionally the words used are not usually found in standard dictionaries or recognized by language academies. A 2024 study found that using abbreviations in texting makes the sender seem less sincere and leads to fewer replies.[15]

The feature of "reactive tokens" that is ubiquitous in Internet Relay Chat (IRC) is also commonly found in SMS language. Reactive tokens include phrases or words like "yeah I know", which signify a reaction to a previous message. In SMS language, however, the difference is that many words are shortened, unlike in spoken speech.[16]

Some tokens of the SMS language can be likened to a rebus, using pictures and single letters or numbers to represent whole words (e.g., "i <3 u", which uses the pictogram of a heart for love, and the letter u to replace you). The dialect has a few hieroglyphs (codes comprehensible to initiates) and a range of face symbols.[17]

Prosodic features in SMS language aim to provide added semantic and syntactic information and context, which recipients can use to deduce a more contextually relevant and accurate interpretation. These may aim to convey the textual equivalent of verbal prosodic features such as facial expression and tone of voice.[18][19] Indeed, even though SMS language exists in the format of written text, it closely resembles normal speech in that it does not have a complicated structure and in that its meaning is greatly contextualised.

In the case of capitalization in SMS language, there are three scenarios:[20]

Most SMS messages have done away with capitalization. Use of capitalization on the first word of a message may, in fact, not be intentional, and may likely be due to the default capitalization setting of devices. Capitalization, too, may encode prosodic elements, where copious use may signify the textual equivalent of a raised voice to indicate heightened emotion.[18]

Just as body language and facial expressions can alter how speech is perceived, emoji and emoticons can alter the meaning of a text message, the difference being that the real tone of the SMS sender is less easily discerned merely from the emoticon. Using a smiling face can be perceived as being sarcastic rather than happy, so the reader has to decide which it is by looking at the whole message.[21]

Use of punctuation and capitalization to form emoticons distracts from the more traditional function of such features and symbols. Nevertheless, uses do differ across individuals and cultures. For example, overpunctuation may simply be used to communicate paralinguistic aspects of communication without the need to create an emoticon from it, like so: "Hello!!!!".[14]

While vowels and punctuation of words in SMS language are generally omitted, David Crystal observes that apostrophes occur unusually frequently. He cites an American study of 544 messages, where the occurrence of apostrophes in SMS language is approximately 35 percent.[9] This is unexpected, seeing that it is a hassle to input an apostrophe in a text message given the multiple steps involved. The use of apostrophes cannot be attributed to users attempting to disambiguate words that might otherwise be misunderstood without them. There are few cases in English where leaving out the apostrophe causes misunderstanding of the message. For example, "we're" without the apostrophe could be misread as "were".
Even so, these are mostly understood correctly despite being ambiguous, as readers can rely on other cues, such as the rest of the sentence and the context in which the word appears, to decide what the word should be. For many other words, like "Im" and "Shes", there is no ambiguity. Since users don't need to use apostrophes to ensure that their message is understood accurately, this phenomenon may in part be attributed to texters wanting to maintain clarity so that the message can be more easily understood in a shorter amount of time.[9] The widespread mobile phone auto-correct feature contributes to the frequency of the apostrophe in SMS messages, since, even without user awareness, it will insert an apostrophe in many common words, such as "I'm", "I'll", and "I'd".

Users may also use spellings that reflect their illocutionary force and intention rather than using the standard spelling. For example, the use of "haha" signifies "standard" laughter, while "muahaha" encodes a perhaps more raucous or evil sound of laughter.[16] In this, regional variations in spelling can also be observed. As such, SMS language, with its intergroup variations, also serves as an identity marker.[19]

SMS language has yet to be accepted as a conventional and stable form, either as a dialect or as a language. As a result (as much as it is also a consequence), notable lexicographical efforts and publications (e.g., dictionaries) dealing specifically with SMS language have yet to emerge.[22] Some experts have suggested that the usage of "ungrammatical" text message slang has enabled SMS to become a part of "normal language" for many children.[citation needed]

Many informal attempts at documenting SMS have been made. For example, service provider Vodacom provides its clients with an SMS dictionary as a supplement to their cell phone purchase.[22] Vodacom provides lists of abbreviations and acronyms with their meanings on its website.[23][22] Many other efforts have been made to provide SMS dictionaries on the Internet. Usually an alphabetical list of "words" used in SMS language is provided, along with their intended meanings.[24][25] Text messages can also be "translated" to standard language on certain websites as well, although the "translations" are not always universally accepted.[26]

Many people are likely to use these abbreviations in lower-case letters. Many of the abbreviations were used previously on the Internet, bulletin boards or minicom. Entire sounds within words would often be replaced by a letter or digit that would produce a similar sound when read by itself.[citation needed] (Notes accompanying such lists observe that "k" is sometimes considered passive-aggressive; that "kk" can also signal the end of a conversation;[citation needed] that using numbers phonetically is often intended to be sarcastic;[citation needed] and that the exclamation mark "!" is scalable depending on the amount of shock, the most common use being "!!!".) Combinations can shorten single or multiple words:

In one American study, researchers found that less than 20% of messages used SMS language. Looking at his own texting history, the study's author, linguist David Crystal, said that just 10% of his messages used SMS language.[49] According to research done by Dr. Nenagh Kemp of the University of Tasmania, the evolution of textese is inherently coupled to a strong grasp of grammar and phonetics.[50]

David Crystal has countered the claims that SMS has a deleterious effect on language with numerous scholarly studies. The findings are summarized in his book Txtng: the Gr8 Db8.
In his book, Crystal argues that: He further observes that this is by no means a cause of bad spelling, and that, in fact, texting may lead to an improvement in the literacy of the user.[9][51]

There are others who feel that the claims of SMS language being detrimental to English language proficiency are overstated. A study by Freudenberg of the written work of 100 students found that the actual amount of SMS language in the written work was not very significant. Some features of SMS language, such as the use of emoticons, were not observed in any of the written work by the students. Of all the errors found, quite a substantial amount could not be attributed to the use of SMS language. These included errors that had appeared before the advent of SMS language.[14]

There are also views that SMS language has little or no effect on grammar.[52] Proponents of this view feel that SMS language is merely another language, and since learning a new language does not affect students' proficiency in English grammar, it cannot be said that SMS language can affect their grammar. With proper instruction, students should be able to distinguish between slang, SMS language and standard English and use them in their appropriate contexts.[52]

According to one study, though SMS language is faster to write, more time is needed to read it compared with conventional English.[53]

Although various other research supports the use of SMS language, the popular notion that text messaging is damaging to the linguistic development of young people persists, and many view it as a corruption of the standard form of language.[54]

Welsh journalist and television reporter John Humphrys has criticized SMS language as "wrecking our language". The author cites ambiguity as one problem posed, illustrating with examples such as "lol", which may be interpreted to mean "laughing out loud", "lots of love", or "little old lady", depending on the context in which it is being used. Ambiguous words and statements have always been present within languages, however. In English, for example, the word "duck" can have more than one meaning, referring to either the bird or the action, and such words are usually disambiguated by looking at the context in which they were written.[55]

The proliferation of SMS language has been criticized for causing the deterioration of English language proficiency and its rich heritage. Opponents of SMS language feel that it undermines the properties of the English language that have lasted throughout its long history.
Furthermore, words within the SMS language that are very similar to their English-language counterparts can be mistaken by young users for the actual English spelling and can therefore increase the prevalence of spelling mistakes.[56]

Use of SMS language in schools tended to be seen as negative.[citation needed] There have been media reports of children using SMS language in school essays.[57] The New Zealand Qualifications Authority denied press reports that it had authorized the use of text abbreviations in exam answers; a spokesperson said "there had been no change to guidelines and there was no specific policy about text language".[58]

A study performed by Cingel and Sundar (2012) investigated the relationship between the use of SMS language and grammar in adolescents.[59] Using a self-report survey in which the 228 middle-school participants answered questions regarding their texting behaviors, as well as a ten-minute in-class grammar assessment, the study gathered information on how the amount of time a student spent online affected their writing.[59] Cingel and Sundar hypothesized that the more text messages a student received and sent, the more grammar "adaptations" their writing would contain.[59] The results reflected a negative relationship between text messaging and adolescent grammar skills. They concluded that the more time youth spend on technology, the more they become acquainted with "techspeak" or "textese", and thus allow their approach to grammar and academic writing to change.[59]

According to Sean Ó Cadhain, abbreviations and acronyms elicit a sense of group identity, as users must be familiar with the lingo of their group to be able to comprehend the SMS language used within the group.[60] The ability to use and understand these language short forms that are unique to each group indicates that an individual is part of the group, forging a group identity that excludes outsiders. SMS language is thus thought by some to be the "secret code of the youth".[60] The fact that shortened forms are sometimes used for reasons other than space constraints can be seen as interlocutors trying to establish solidarity with each other.[60]

According to Norwegian researcher Richard Ling, there are differences in the SMS language of females and males.[20] The lexical, morphological and syntactic choices of male and female SMS users[16] suggested to Ling that women are more "adroit"[b] and more "literary" texters.[9] Richard Ling observes:

Around 2005, advertisements became increasingly influenced by SMS language. The longer the message in an advertisement, the weaker the impression it leaves. Hence, short messages that are catchier and more cost- and space-saving are more commonly used.[22] The visual effect elicited by SMS language also lends a feeling of novelty that helps to make the advertisement more memorable. For example, an advertisement for a book uses the SMS language EAT RIGHT 4 YOUR TYPE.[22]

Companies focusing on the teen market tend to make use of SMS language in their advertising to capture the attention of their target audience.[62] Since teenagers tend to be the ones using SMS language, they are able to relate to advertisements that use it. Unilever's advertisement for its new range of deodorant for teenage girls uses the phrase "OMG! Moments".
David Lang, president of the team that created the advertisement, commented that they wanted to convey the impression that they identify with youth culture and discourse.[62] Many other companies, like McDonald's, have also attempted to pursue the teenage market by using SMS language abbreviations in their commercials. McDonald's in Korea has an online video commercial which concludes with "r u ready?".[62]
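Incidentally, the vowel-dropping shortening described earlier ("dictionary" becomes "dctnry") is simple enough to express directly in code. A minimal Python sketch, with the illustrative function name disemvowel, reproduces the examples given above:

    def disemvowel(word: str) -> str:
        """Drop vowels, a common ad-hoc SMS shortening:
        'dictionary' -> 'dctnry', 'keyboard' -> 'kybrd'."""
        return "".join(ch for ch in word if ch.lower() not in "aeiou")

    assert disemvowel("dictionary") == "dctnry"
    assert disemvowel("keyboard") == "kybrd"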
https://en.wikipedia.org/wiki/SMS_language
A three-letter acronym (TLA), or three-letter abbreviation, is, as the phrase suggests, an abbreviation consisting of three letters. The abbreviation for "three-letter acronym", TLA, has a special status among abbreviations and is to some humorous, since three-letter abbreviations are very common and TLA is, in fact, itself a TLA: the term is autological. Most TLAs are initialisms (the initial letter of each word of a phrase), but most are not acronyms in the strict sense, since they are pronounced by saying each letter, as in APA /ˌeɪpiːˈeɪ/ AY-pee-AY. Some are true acronyms (pronounced as a word), such as CAT (as in CAT scan), which is pronounced like the name of the animal.

The exact phrase three-letter acronym appeared in the sociology literature in 1975.[1] Three-letter acronyms were used as mnemonics in the biological sciences from 1977,[2] and their practical advantage was promoted by Weber in 1982.[3] They are used in many other fields, but the term TLA is particularly associated with computing.[4] In 1980, the manual for the Sinclair ZX81 home computer used and explained TLA.[5] The specific generation of three-letter acronyms in computing was mentioned in a JPL report of 1982.[6] In 1988, in a paper titled "On the Cruelty of Really Teaching Computing Science", eminent computer scientist Edsger W. Dijkstra wrote (disparagingly), "No endeavour is respectable these days without a TLA".[7] By 1992 it was in a Microsoft handbook.[8]

The number of possible three-letter abbreviations using the 26 letters of the alphabet from A to Z (AAA, AAB, ... to ZZY, ZZZ) is 26 × 26 × 26 = 17,576. Allowing a single digit 0-9 adds 26 × 26 × 10 = 6,760 strings for each of the three positions the digit might occupy, such as 2FA, P2P, or WW2, giving a total of 37,856 such three-character strings. Out of the 17,576 possible TLAs that can be created using three uppercase letters, at least 94% had been used at least once in a dataset of 18 million scientific article abstracts. Three-letter acronyms are the most common type of acronym in scientific research papers, with acronyms of length 3 being twice as common as those of length 2 or 4.[9]

In standard English, WWW is the TLA whose pronunciation requires the most syllables, typically nine. The usefulness of a TLA typically comes from its being quicker to say than the phrase it represents; however, saying "WWW" in English requires three times as many syllables as the phrase it is meant to abbreviate (World Wide Web). "WWW" is sometimes abbreviated to "dubdubdub" in speech.[10]
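The counting argument above is easy to verify mechanically. The following short Python snippet, included purely as a check of the arithmetic, reproduces the 17,576 and 37,856 figures:

    import string
    from itertools import product

    # Brute-force enumeration of all three-letter strings over A-Z.
    letters_only = sum(1 for _ in product(string.ascii_uppercase, repeat=3))
    assert letters_only == 26 ** 3 == 17_576

    # Allowing exactly one of the three positions to hold a digit 0-9
    # adds 26 * 26 * 10 strings per position.
    total = letters_only + 3 * 26 * 26 * 10
    assert total == 37_856
    print(letters_only, total)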
https://en.wikipedia.org/wiki/Three-letter_acronym
In Unicode, characters can have a unique name. A character can also have one or more alias names. An alias name can be an abbreviation, a C0 or C1 control name, a correction, an alternate name or a figment. An alias, too, is unique over all names and aliases, and is therefore identifying.

The formal, primary Unicode name is unique over all names, uses only certain characters in a fixed format, and is guaranteed never to change. The formal name consists of the characters A–Z (uppercase), 0–9, " " (space), and "-" (hyphen). In addition to this name, a character can have one or more formal (normative) alias names. Such an alias name also follows the rules of a name, in the characters used (A–Z, -, 0–9, <space>) and not used (a–z, %, $, etc.). Alias names are also unique in the full name set (that is, all names and alias names are unique in their combined set). Alias names are formally described in the Unicode Standard.[1][2] In this sense, an abbreviation is also considered a Unicode name.

There are five possible reasons to assign an alias name to a code point.[1] A character can have multiple aliases: for example, U+0008 <control-0008> has the control alias BACKSPACE and the abbreviation alias BS.

The Unicode standard also uses and publishes alternative names that are not formal and are not listed as normative alias names. These labels may not be unique and may use irregular characters in their name. They are used in Unicode code charts, for example U+070F SYRIAC ABBREVIATION MARK: SAM.[3]
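Python's standard unicodedata module exposes part of this machinery. The sketch below shows primary-name lookup with a caller-supplied default for nameless control characters, and name resolution via lookup(); note that lookup of formal aliases is documented only for Python 3.3 and later, so the BACKSPACE line assumes a sufficiently recent interpreter:

    import unicodedata

    # Formal, primary Unicode name of a character:
    print(unicodedata.name("A"))                      # LATIN CAPITAL LETTER A

    # Control characters have no primary name; name() raises ValueError
    # unless a default is supplied ("<control-0008>" is our own label).
    print(unicodedata.name("\x08", "<control-0008>"))

    # lookup() resolves primary names and, since Python 3.3, formal
    # aliases too, so the control alias should resolve to U+0008:
    print(unicodedata.lookup("BACKSPACE") == "\x08")  # expected: True
    print(hex(ord(unicodedata.lookup("SYRIAC ABBREVIATION MARK"))))  # 0x70f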
https://en.wikipedia.org/wiki/Unicode_alias_names_and_abbreviations
In statistics, additive smoothing, also called Laplace smoothing[1] or Lidstone smoothing, is a technique used to smooth count data, eliminating issues caused by certain values having 0 occurrences. Given a set of observation counts x = ⟨x_1, x_2, …, x_d⟩ from a d-dimensional multinomial distribution with N trials, a "smoothed" version of the counts gives the estimator

\hat{\theta}_i = \frac{x_i + \alpha}{N + \alpha d} \qquad (i = 1, \ldots, d),

where the smoothed count is \hat{x}_i = N \hat{\theta}_i, and the "pseudocount" α > 0 is a smoothing parameter, with α = 0 corresponding to no smoothing (this parameter is explained in § Pseudocount below). Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) x_i / N and the uniform probability 1/d. Common choices for α are 0 (no smoothing), 1/2 (the Jeffreys prior), or 1 (Laplace's rule of succession),[2][3] but the parameter may also be set empirically based on the observed data.

From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.

Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will rise tomorrow (known as the sunrise problem).[4]

A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero. It is so named because, roughly speaking, a pseudocount of value α weighs into the posterior distribution similarly to each category having an additional count of α. If the frequency of each item i is x_i out of N samples, the empirical probability of event i is

p_{i,\mathrm{empirical}} = \frac{x_i}{N},

but the posterior probability when additively smoothed is

p_{i,\alpha\text{-smoothed}} = \frac{x_i + \alpha}{N + \alpha d},

as if to increase each count x_i by α a priori.

Depending on the prior knowledge, which is sometimes a subjective value, a pseudocount may have any non-negative finite value. It may only be zero (or the possibility ignored) if impossible by definition, such as the possibility of a decimal digit of π being a letter, or a physical possibility that would be rejected and so not counted, such as a computer printing a letter when a valid program for π is run, or excluded and not counted because of no interest, such as if only interested in the zeros and ones. Generally, there is also a possibility that no value may be computable or observable in a finite time (see the halting problem). But at least one possibility must have a non-zero pseudocount, otherwise no prediction could be computed before the first observation. The relative values of pseudocounts represent the relative prior expected probabilities of their possibilities. The sum of the pseudocounts, which may be very large, represents the estimated weight of the prior knowledge compared with all the actual observations (one for each) when determining the expected probability.
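As a minimal sketch of the smoothed estimator just defined (the function name additive_smoothing is ours; NumPy is assumed):

    import numpy as np

    def additive_smoothing(counts, alpha=1.0):
        """Smoothed probabilities theta_i = (x_i + alpha) / (N + alpha * d)."""
        x = np.asarray(counts, dtype=float)
        return (x + alpha) / (x.sum() + alpha * x.size)

    counts = [9, 1, 0]                      # third category never observed
    print(additive_smoothing(counts, 0.0))  # empirical: [0.9, 0.1, 0.0]
    print(additive_smoothing(counts, 0.5))  # Jeffreys prior
    print(additive_smoothing(counts, 1.0))  # Laplace: [10/13, 2/13, 1/13]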
In any observed data set or sample there is the possibility, especially with low-probability events and with small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This oversimplification is inaccurate and often unhelpful, particularly in probability-based machine learning techniques such as artificial neural networks and hidden Markov models. By artificially adjusting the probability of rare (but not impossible) events so those probabilities are not exactly zero, zero-frequency problems are avoided. Also see Cromwell's rule.

One common approach is to add 1 to each observed number of events, including the zero-count possibilities. This is sometimes called Laplace's rule of succession. This approach is equivalent to assuming a uniform prior distribution over the probabilities for each possible event (spanning the simplex where each probability is between 0 and 1, and they all sum to 1). Using the Jeffreys prior approach, a pseudocount of one half should be added to each possible outcome.

Pseudocounts should be set to one or one-half only when there is no prior knowledge at all – see the principle of indifference. However, given appropriate prior knowledge, the sum should be adjusted in proportion to the expectation that the prior probabilities should be considered correct, despite evidence to the contrary – see further analysis. Higher values are appropriate inasmuch as there is prior knowledge of the true values (for a mint-condition coin, say); lower values inasmuch as there is prior knowledge that there is probable bias, but of unknown degree (for a bent coin, say).

One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, particularly a binomial proportion confidence interval. The best-known is due to Edwin Bidwell Wilson, in Wilson (1927): the midpoint of the Wilson score interval corresponding to z standard deviations on either side is

\frac{n_S + \frac{z^2}{2}}{n + z^2}.

Taking z = 2 standard deviations to approximate a 95% confidence interval (z ≈ 1.96) yields a pseudocount of 2 for each outcome, so 4 in total, colloquially known as the "plus four rule":

\tilde{p} = \frac{n_S + 2}{n + 4}.

This is also the midpoint of the Agresti–Coull interval (Agresti & Coull 1998).

Often the bias of an unknown trial population is tested against a control population with known parameters (incidence rates) μ = ⟨μ_1, μ_2, …, μ_d⟩. In this case the uniform probability 1/d should be replaced by the known incidence rate of the control population μ_i to calculate the smoothed estimator

\hat{\theta}_i = \frac{x_i + \mu_i \alpha d}{N + \alpha d}.

As a consistency check, if the empirical estimator happens to equal the incidence rate, i.e. μ_i = x_i / N, the smoothed estimator is independent of α and also equals the incidence rate.

Additive smoothing is commonly a component of naive Bayes classifiers. In a bag-of-words model of natural language processing and information retrieval, the data consists of the number of occurrences of each word in a document. Additive smoothing allows the assignment of non-zero probabilities to words which do not occur in the sample. Studies have shown that additive smoothing is more effective than other probability smoothing methods in several retrieval tasks such as language-model-based pseudo-relevance feedback and recommender systems.[5][6]
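The Wilson midpoint and the plus-four rule can be sketched in the same style (again, the function name is illustrative):

    def wilson_midpoint(successes, trials, z=2.0):
        """Midpoint of the Wilson score interval: (n_S + z**2/2) / (n + z**2).

        With z = 2 this is the "plus four rule": two pseudo-successes and
        two pseudo-failures are added to the observed data.
        """
        return (successes + z * z / 2.0) / (trials + z * z)

    # Zero observed successes no longer implies a zero estimate:
    print(wilson_midpoint(0, 10))   # 2/14 = 0.142857...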
https://en.wikipedia.org/wiki/Additive_smoothing
Feature engineering is a preprocessing step in supervised machine learning and statistical modeling[1] which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability.[2][3][4]

Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists construct dimensionless numbers such as the Reynolds number in fluid dynamics, the Nusselt number in heat transfer, and the Archimedes number in sedimentation. They also develop first approximations of solutions, such as analytical solutions for the strength of materials in mechanics.[5]

One application of feature engineering has been the clustering of feature-objects or sample-objects in a dataset. In particular, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These include Non-Negative Matrix Factorization (NMF),[6] Non-Negative Matrix-Tri Factorization (NMTF),[7] and Non-Negative Tensor Decomposition/Factorization (NTF/NTD),[8] among others. The non-negativity constraints on the coefficients of the feature vectors mined by these algorithms yield a part-based representation, and the different factor matrices exhibit natural clustering properties. Several extensions of these feature engineering methods have been reported in the literature, including orthogonality-constrained factorization for hard clustering, and manifold learning to overcome inherent issues with these algorithms.

Other classes of feature engineering algorithms leverage a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD),[2] which mines a common clustering scheme across multiple datasets; MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering). Coupled matrix and tensor decompositions are popular in multi-view feature engineering.[9]
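As a minimal sketch of the matrix-decomposition clustering described above, scikit-learn's NMF can factor a non-negative data matrix, with cluster membership read off the factor matrix by argmax (a common convention, not prescribed by the sources cited here); the data are randomly generated for illustration:

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 20))            # non-negative data: 100 samples, 20 features

# Factor X ≈ W H with W, H >= 0; rows of W give part-based sample representations.
model = NMF(n_components=3, init="nndsvd", max_iter=500)
W = model.fit_transform(X)
H = model.components_

labels = W.argmax(axis=1)            # assign each sample to its dominant factor
print(labels[:10])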
Feature engineering in machine learning and statistical modeling involves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods like Principal Components Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices.[10]

Features vary in significance.[11] Even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting).[12]

Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. It has several common causes, and can be limited via techniques such as regularization, kernel methods, and feature selection.[13]

Automation of feature engineering is a research topic that dates back to the 1990s.[14] Machine learning software that incorporates automated feature engineering has been commercially available since 2016.[15] Related academic literature can be roughly separated into two types.

Multi-relational Decision Tree Learning (MRDTL) extends traditional decision tree methods to relational databases, handling complex data relationships across tables. It innovatively uses selection graphs as decision nodes, refined systematically until a specific termination criterion is reached.[14] Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple id propagation.[16][17]

There are a number of open-source libraries and tools that automate feature engineering on relational data and time series. OneBM, for example, "helps data scientists reduce data exploration time allowing them to try and error many ideas in short time. On the other hand, it enables non-experts, who are not familiar with data science, to quickly extract value from their data with a little effort, time, and cost."[22] The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition.[32][33]

The feature store is where features are stored and organized for the explicit purpose of being used either to train models (by data scientists) or to make predictions (by applications that have a trained model). It is a central location where one can create or update groups of features built from multiple different data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features but just retrieve them when needed to make predictions.[34] A feature store includes the ability to store the code used to generate features, apply that code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used.[35] Feature stores can be standalone software tools or built into machine learning platforms.

Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error.[36][37] Deep learning algorithms may be used to process a large raw dataset without having to resort to feature engineering.[38] However, deep learning algorithms still require careful preprocessing and cleaning of the input data.[39] In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process.[40]
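To give a flavour of automated feature engineering on relational data (in the spirit of, though much simpler than, the tools cited above), one can mechanically aggregate a child table up to its parent entity; the tables and column names below are invented:

import pandas as pd

# Hypothetical relational data: one row per customer; many order rows per customer.
customers = pd.DataFrame({"customer_id": [1, 2, 3]})
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "amount":      [10.0, 25.0, 5.0, 7.5, 12.0, 40.0],
})

# Each (column, aggregation) pair yields one candidate feature on the parent entity.
agg = orders.groupby("customer_id")["amount"].agg(["count", "sum", "mean", "max"])
agg.columns = [f"orders_amount_{name}" for name in agg.columns]
features = customers.merge(agg.reset_index(), on="customer_id", how="left")
print(features)

Enumerating such aggregations across many tables and many functions is also one way feature explosion arises in practice, which is why the mitigation techniques mentioned above (regularization and feature selection in particular) usually accompany automated generation.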
https://en.wikipedia.org/wiki/Feature_extraction
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.[1] Within machine learning, advances in the subfield of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.[2]

ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine.[3][4] The application of ML to business problems is known as predictive analytics. Statistics and mathematical optimisation (mathematical programming) methods comprise the foundations of machine learning. Data mining is a related field of study, focusing on exploratory data analysis (EDA) via unsupervised learning.[6][7] From a theoretical viewpoint, probably approximately correct learning provides a framework for describing machine learning.

The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the fields of computer gaming and artificial intelligence.[8][9] The synonym self-teaching computers was also used in this time period.[10][11]

Although the earliest machine learning model was introduced in the 1950s, when Arthur Samuel invented a program that calculated the winning chance in checkers for each side, the history of machine learning traces back to decades of human desire and effort to study human cognitive processes.[12] In 1949, the Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells.[13] Hebb's model of neurons interacting with one another set a groundwork for how AIs and machine learning algorithms work under nodes, or artificial neurons used by computers to communicate data.[12] Other researchers who studied human cognitive systems contributed to modern machine learning technologies as well, including the logician Walter Pitts and Warren McCulloch, who proposed early mathematical models of neural networks to come up with algorithms that mirror human thought processes.[12]

By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and was equipped with a "goof" button to cause it to reevaluate incorrect decisions.[14] A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.[15] Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973.[16] In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.[17]
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."[18] This definition of the tasks with which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".[19]

Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other is to make predictions for future outcomes based on these models. For example, a hypothetical algorithm for classifying data may use computer vision of moles, coupled with supervised learning, to train it to classify cancerous moles, while a machine learning algorithm for stock trading may inform the trader of potential future movements.[20]

As a scientific endeavour, machine learning grew out of the quest for artificial intelligence (AI). In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalised linear models of statistics.[22] Probabilistic reasoning was also employed, especially in automated medical diagnosis.[23]: 488

However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[23]: 488 By 1980, expert systems had come to dominate AI, and statistics was out of favour.[24] Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming (ILP), but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.[23]: 708–710, 755 Neural network research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including John Hopfield, David Rumelhart, and Geoffrey Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.[23]: 25

Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory.[24]

There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history).
This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".[25][26][27]

In an alternative view, compression algorithms implicitly map strings into implicit feature-space vectors, and compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) one can define an associated vector space ℵ, such that C(.) maps an input string x to a vector whose norm is ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is impractical; instead, three representative lossless compression methods, LZW, LZ77, and PPM, have been examined.[28]

According to AIXI theory, a connection more directly explained in the Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you cannot unzip it without both, but there may be an even smaller combined form.

Examples of AI-powered audio/video compression software include NVIDIA Maxine and AIVC.[29] Examples of software that can perform AI-powered image compression include OpenCV, TensorFlow, MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression.[30]

In unsupervised machine learning, k-means clustering can be utilised to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such as image compression.[31]

Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by the centroid of its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial in image and signal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space.[32]
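A minimal sketch of k-means colour quantisation as just described; the "pixels" are randomly generated stand-ins for a real image:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(10_000, 3)).astype(float)  # stand-in RGB pixels

# Quantise the colour space to k centroids; storing one small label per pixel
# plus the k centroids takes far less space than the original 24-bit colours.
k = 16
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
compressed = kmeans.cluster_centers_[kmeans.labels_]  # reconstructed pixel colours
print(compressed.shape, kmeans.cluster_centers_.shape)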
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.

Machine learning also has intimate ties to optimisation: many learning problems are formulated as minimisation of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the preassigned labels of a set of examples).[35] Characterising the generalisation of various learning algorithms is an active topic of current research, especially for deep learning algorithms.
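As a minimal illustration of learning as loss minimisation, the following sketch fits a linear model by gradient descent on the mean squared loss; the data are synthetic and the learning rate is an arbitrary choice for this example:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Minimise the loss L(w) = mean((Xw - y)^2) by following its gradient downhill.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # dL/dw
    w -= lr * grad
print(w)  # approaches the generating weights [2.0, -1.0, 0.5]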
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalisable predictive patterns.[36] According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[37] He also suggested the term data science as a placeholder to call the overall field.[37]

Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be.[38]

Leo Breiman distinguished two statistical modelling paradigms: the data model and the algorithmic model,[39] wherein "algorithmic model" means more or less the machine learning algorithms like random forest. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.[40]

Analytical and computational techniques derived from the deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space of deep neural networks.[41] Statistical physics is thus finding applications in the area of medical diagnostics.[42]

A core objective of a learner is to generalise from its experience.[5][43] Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences), and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory, via the probably approximately correct learning model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalisation error.

For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalisation will be poorer.[44]

In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: positive results show that a certain class of functions can be learned in polynomial time, while negative results show that certain classes cannot be learned in polynomial time.

Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system. Although each algorithm has advantages and limitations, no single algorithm works for all problems.[45][46][47]

Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[48] The data, known as training data, consist of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data are represented by a matrix. Through iterative optimisation of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[49] An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[18]

Types of supervised-learning algorithms include active learning, classification and regression.[50] Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics, or forecasting future temperatures based on historical data.[51]

Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
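A minimal supervised-classification sketch: feature vectors in a matrix, labels as the supervisory signal, a model fitted by iterative optimisation, and evaluation on held-out inputs. The dataset is synthetic, generated purely for illustration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row of X is a feature vector; y holds the supervisory signal (labels).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)  # iterative loss optimisation
print(clf.score(X_test, y_test))                  # accuracy on unseen inputs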
Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering, dimensionality reduction,[7] and density estimation.[52]

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions about the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.

A special type of unsupervised learning called self-supervised learning involves training a model by generating the supervisory signal from the data itself.[53][54]

Semi-supervised learning falls between unsupervised learning (without any labelled training data) and supervised learning (with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy. In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.[55]

Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimisation, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process (MDP). Many reinforcement learning algorithms use dynamic programming techniques.[56] Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.

Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables.[57] In other words, it is a process of reducing the dimension of the feature set, also called the "number of features". Most dimensionality reduction techniques can be considered as either feature elimination or feature extraction. One popular method of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the areas of manifold learning and manifold regularisation.
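A minimal PCA sketch on synthetic data, projecting a 10-dimensional feature set onto its two principal components:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # 500 samples in a 10-dimensional space

pca = PCA(n_components=2)                # keep the two principal components
X_2d = pca.fit_transform(X)
print(X_2d.shape)                        # (500, 2)
print(pca.explained_variance_ratio_)     # fraction of variance kept per component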
Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system; examples include topic modelling and meta-learning.[58]

Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA).[59][60] It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as a state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.[61] The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes a machine learning routine: perform an action in the current situation, receive the consequence situation, compute the emotion of being in that consequence situation, and update the crossbar memory accordingly. It is a system with only one input (the situation) and only one output (the action, or behaviour). There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioural environment where it behaves, and the other is the genetic environment, wherefrom it initially, and only once, receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour in an environment that contains both desirable and undesirable situations.[62]

Several learning algorithms aim at discovering better representations of the inputs provided during training.[63] Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task.

Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labelled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorisation[64] and various forms of clustering.[65][66][67]

Manifold learning algorithms attempt to discover such representations under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors.[68] Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[69]

Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, attempts to define specific features algorithmically for real-world data such as images, video, and sensory data have not been successful. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions and is assumed to be a sparse matrix. The method is strongly NP-hard and difficult to solve approximately.[70] A popular heuristic method for sparse dictionary learning is the k-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen example belongs. For a dictionary where each class has already been built, a new example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.[71]

In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.[72] Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.[73]

In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless they have been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.[74]

Three broad categories of anomaly detection techniques exist.[75] Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least well to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involve training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance being generated by the model.
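A minimal sketch of the unsupervised category using scikit-learn's isolation forest (one of many possible detectors, chosen here for brevity); the data are synthetic, with a handful of injected outliers:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(300, 2))      # the bulk of the data
outliers = rng.uniform(-6, 6, size=(10, 2))   # a few anomalous points
X = np.vstack([normal, outliers])

# Unsupervised anomaly detection: points that differ significantly from the
# majority are isolated quickly; predict() returns -1 for anomalies, 1 otherwise.
detector = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = detector.predict(X)
print((labels == -1).sum(), "points flagged as anomalous")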
Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning and reinforcement learning,[76][77] through to meta-learning (e.g. MAML).

Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[78]

Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[79] Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.

Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.[80] For example, the rule {onions,potatoes}⇒{burger}{\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
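Two standard "interestingness" measures for the rule above, support and confidence, can be computed directly; the toy transactions below are invented for the example:

# Support and confidence of the rule {onions, potatoes} => {burger}.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

antecedent, consequent = {"onions", "potatoes"}, {"burger"}
n = len(transactions)
n_ante = sum(antecedent <= t for t in transactions)
n_both = sum((antecedent | consequent) <= t for t in transactions)

print("support:", n_both / n)          # fraction of transactions with the full itemset
print("confidence:", n_both / n_ante)  # estimated P(burger | onions and potatoes)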
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.[81]

Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting.[82][83][84] Shapiro built the first implementation (the Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[85] The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.

A machine learning model is a type of mathematical model that, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions.[86] By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned.[87] Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection.

Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.

An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.

Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[88]
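A minimal sketch of a single artificial neuron as described above: a non-linear function of the weighted sum of its inputs. The weights, bias, and sigmoid activation are arbitrary choices for the example:

import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a non-linear function of the weighted input sum."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation

x = np.array([0.5, -1.2, 3.0])        # signals arriving on the incoming edges
w = np.array([0.4, 0.1, -0.6])        # edge weights, adjusted during learning
print(neuron(x, w, bias=0.2))

Stacking layers of such units, with weights adjusted by backpropagation, yields the multi-layer networks that deep learning builds on.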
Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making.

Random forest regression (RFR) falls under the umbrella of decision-tree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and to avoid overfitting. To build the decision trees, RFR uses bootstrapped sampling: each decision tree is trained on a random sample, drawn with replacement, from the training set. This random selection of training data helps the model reduce bias in its predictions and improve accuracy. RFR generates independent decision trees, and it can handle single-output as well as multi-output regression tasks, which makes it applicable in a wide variety of settings.[89][90]

Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts which of the two categories a new example falls into.[91] An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularisation methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel[92]), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space.

Multivariate linear regression extends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting a multidimensional linear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images,[93] which are inherently multi-dimensional.
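As a minimal sketch of the regularised regression mentioned above, ridge regression has the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy; the data and regularisation strength below are arbitrary illustrations:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 0.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# Ridge regression closed form: the lambda * I term shrinks the coefficients,
# trading a little bias for lower variance and mitigating overfitting.
lam = 1.0
d = X.shape[1]
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(w)  # close to, but shrunk toward zero relative to, the generating weights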
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.

A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations. Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point. Gaussian processes are popular surrogate models in Bayesian optimisation, used to perform hyperparameter optimisation.
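A minimal sketch of the Gaussian process posterior mean just described, using a squared-exponential kernel (one common choice among many) and synthetic one-dimensional data:

import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel: covariance as a function of point locations."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)                     # observed inputs
y = np.sin(X) + 0.1 * rng.normal(size=20)     # noisy observed outputs
X_new = np.array([2.5, 6.0])                  # new, unobserved inputs

# Posterior mean: K(X_new, X) [K(X, X) + sigma^2 I]^{-1} y.
noise = 0.1 ** 2
K = rbf(X, X) + noise * np.eye(len(X))
K_star = rbf(X_new, X)
mean = K_star @ np.linalg.solve(K, y)
print(mean)  # prediction near sin(2.5); far from the data it reverts toward 0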
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[95][96] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[97]

The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a pmf-based Bayesian approach would combine probabilities.[98] However, there are many caveats to these belief functions when compared to Bayesian approaches in order to incorporate ignorance and uncertainty quantification. Belief function approaches implemented within the machine learning domain typically leverage a fusion of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving.[4][9] However, the computational complexity of these algorithms is dependent on the number of propositions (classes), and can lead to much higher computation times when compared to other machine learning approaches.

Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques include learning classifier systems,[99] association rule learning,[100] artificial immune systems,[101] and other similar models. These methods extract patterns from data and evolve rules over time.

Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or on the model's objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and is notably becoming integrated within machine learning engineering teams.

Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralises the training process, allowing users' privacy to be maintained by not needing to send their data to a centralised server. This also increases efficiency by decentralising the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[102]

Machine learning has many applications. In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research, in collaboration with the teams Big Chaos and Pragmatic Theory, built an ensemble model to win the Grand Prize in 2009 for $1 million.[105] Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and changed its recommendation engine accordingly.[106] In 2010 The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.[107] In 2012, Vinod Khosla, co-founder of Sun Microsystems, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[108] In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists.[109] In 2019 Springer Nature published the first research book created using machine learning.[110] In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19.[111] Machine learning has recently been applied to predict the pro-environmental behaviour of travellers,[112] and to optimise smartphones' performance and thermal behaviour based on the user's interaction with the phone.[113][114][115] When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like OLS.[116]

Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes.[117]

Machine learning is becoming a useful tool to investigate and predict evacuation decision making in large-scale and small-scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes.[118][119][120] Other applications have focused on pre-evacuation decisions in building fires.[121][122]

Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterisation.
Recent research emphasises a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns.[123]

Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[124][125][126] Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[127]

The "black box theory" poses yet another significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted from the data.[128] The House of Lords Select Committee claimed that such an "intelligence system", if it could have a "substantial impact on an individual's life", would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.[128]

In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[129] Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of effort and billions of dollars invested.[130][131] Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users.[132]

Machine learning has been used as a strategy to update the evidence related to systematic reviews and to address the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.[133]

Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI.[134] It contrasts with the "black box" concept in machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.[135] By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.

Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is.[136]

Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[137] A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationships between components of the picture, and they learn relationships between pixels that humans are oblivious to but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.[138][139]

Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations.
For some systems, it is possible to change the output by changing only a single adversarially chosen pixel.[140] Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning.[141]

Researchers have demonstrated how backdoors can be placed undetectably into machine learning classifiers (e.g., models that sort posts into categories such as "spam" and "not spam") that are often developed or trained by third parties. Parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access.[142][143][144]

Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training set and 1/3 test set designation) and evaluates the performance of the trained model on the test set. In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each respectively considering one subset for evaluation and the remaining K−1 subsets for training the model. In addition to the holdout and cross-validation methods, the bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[145]
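A minimal sketch of K-fold cross-validation as described above (K = 5 here, an arbitrary choice), on a synthetic dataset:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# K experiments: each holds out one fold for evaluation and trains on the
# remaining K-1 folds; the scores are then typically averaged.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print(scores, scores.mean())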
For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from data of previous admissions staff, and that this program had denied nearly 60 candidates who were found either to be women or to have non-European-sounding names.[150] Using job-hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[152][153] Another example is the predictive policing company Geolitica's predictive algorithm, which resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data.[154]

While responsible collection of data and documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame the lack of participation and representation of minority populations in the field of AI for machine learning's vulnerability to biases.[155] In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world.[156] Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI.[156]

Language models learned from data have been shown to contain human-like biases.[157][158] Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases.[159][160] In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.[161]

In an experiment carried out by ProPublica, an investigative journalism organisation, a machine learning algorithm used to assess recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants".[154] In 2015, Google Photos tagged two black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and as of 2023 the service still could not recognise gorillas.[162] Similar issues with recognising non-white people have been found in many other systems.[163]

Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[164] Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[165]

There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma of improving health care while also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes.
There is potential for machine learning in health care to provide professionals with an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated.[166]

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units.[167] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[168] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[169][170]

Tensor Processing Units (TPUs) are specialised hardware accelerators developed by Google specifically for machine learning workloads. Unlike general-purpose GPUs and FPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google DeepMind's AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency.[171] Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments.

Neuromorphic computing refers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures.[172]

A physical neural network is a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function of neural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses.[173][174]

Embedded machine learning is a sub-field of machine learning in which models are deployed on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers.[175][176][177][178] Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such as hardware acceleration,[179][180] approximate computing,[181] and model optimisation.[182][183] Common optimisation techniques include pruning, quantisation, knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing.

A variety of software suites provide implementations of many of these machine learning algorithms.
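The holdout and K-fold cross-validation procedures described earlier can be sketched in a few lines of Python using only the standard library. This is a minimal illustration, not a reference implementation: `fit` and `score` are hypothetical placeholders for a real learner and evaluation metric, and the toy usage at the end is purely for demonstration.

```python
# Minimal sketch of holdout and K-fold cross-validation (standard library only).
import random

def holdout_split(data, train_fraction=2/3, seed=0):
    items = data[:]
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_fraction)   # conventional 2/3 : 1/3 split
    return items[:cut], items[cut:]

def k_fold_scores(data, k, fit, score, seed=0):
    items = data[:]
    random.Random(seed).shuffle(items)
    folds = [items[i::k] for i in range(k)]  # K roughly equal partitions
    results = []
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        model = fit(train)                   # K-1 subsets train the model
        results.append(score(model, test))   # 1 subset is held out for evaluation
    return results

# Toy usage: the "model" is just the training mean; the score is mean squared error.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
fit = lambda train: sum(train) / len(train)
score = lambda m, test: sum((x - m) ** 2 for x in test) / len(test)
print(k_fold_scores(data, k=3, fit=fit, score=score))
```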
https://en.wikipedia.org/wiki/Machine_learning
In computer science and data mining, MinHash (or the min-wise independent permutations locality-sensitive hashing scheme) is a technique for quickly estimating how similar two sets are. The scheme was published by Andrei Broder in a 1997 conference paper,[1] and was initially used in the AltaVista search engine to detect duplicate web pages and eliminate them from search results.[2] It has also been applied in large-scale clustering problems, such as clustering documents by the similarity of their sets of words.[1]

The Jaccard similarity coefficient is a commonly used indicator of the similarity between two sets. Let U be a set and A and B be subsets of U; then the Jaccard index is defined to be the ratio of the number of elements of their intersection to the number of elements of their union:

$$J(A,B) = \frac{|A \cap B|}{|A \cup B|}.$$

This value is 0 when the two sets are disjoint, 1 when they are equal, and strictly between 0 and 1 otherwise. Two sets are more similar (i.e. have relatively more members in common) when their Jaccard index is closer to 1. The goal of MinHash is to estimate J(A,B) quickly, without explicitly computing the intersection and union.

Let h be a hash function that maps the members of U to distinct integers, let perm be a random permutation of the elements of the set U, and for any subset S of U define hmin(S) to be the minimal member of S with respect to h ∘ perm; that is, the member x of S with the minimum value of h(perm(x)). (In cases where the hash function used is assumed to have pseudo-random properties, the random permutation would not be used.)

Now, applying hmin to both A and B, and assuming no hash collisions, the values are equal (hmin(A) = hmin(B)) if and only if, among all elements of A ∪ B, the element with the minimum hash value lies in the intersection A ∩ B. The probability of this being true is exactly the Jaccard index, therefore:

$$\Pr[h_{\min}(A) = h_{\min}(B)] = J(A,B).$$

That is, the probability that hmin(A) = hmin(B) is true is equal to the similarity J(A,B), assuming perm is drawn from a uniform distribution. In other words, if r is the random variable that is one when hmin(A) = hmin(B) and zero otherwise, then r is an unbiased estimator of J(A,B). However, r has too high a variance to be a useful estimator for the Jaccard similarity on its own, because r is always zero or one. The idea of the MinHash scheme is to reduce this variance by averaging together several variables constructed in the same way.

The simplest version of the MinHash scheme uses k different hash functions, where k is a fixed integer parameter, and represents each set S by the k values of hmin(S) for these k functions. To estimate J(A,B) using this version of the scheme, let y be the number of hash functions for which hmin(A) = hmin(B), and use y/k as the estimate. This estimate is the average of k different 0–1 random variables, each of which is one when hmin(A) = hmin(B) and zero otherwise, and each of which is an unbiased estimator of J(A,B). Therefore, their average is also an unbiased estimator, and by the standard deviation of sums of 0–1 random variables, its expected error is O(1/√k).[3] Therefore, for any constant ε > 0 there is a constant k = O(1/ε²) such that the expected error of the estimate is at most ε. For example, 400 hashes would be required to estimate J(A,B) with an expected error less than or equal to .05.

It may be computationally expensive to compute multiple hash functions, but a related version of the MinHash scheme avoids this penalty by using only a single hash function, which it uses to select multiple values from each set rather than selecting only a single minimum value per hash function.
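Before turning to that single-hash variant, the k-hash-function scheme just described can be sketched in Python. The salted SHA-1 construction and the example sets below are illustrative assumptions, not part of the original scheme; in practice any family of independent hash functions could be used.

```python
# Sketch of the k-hash-function MinHash estimator: simulate k hash functions
# by salting one cryptographic hash, and estimate Jaccard similarity as y/k.
import hashlib

def h(salt, x):
    digest = hashlib.sha1(f"{salt}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash_signature(s, k):
    # hmin(S) for each of the k simulated hash functions
    return [min(h(i, x) for x in s) for i in range(k)]

def estimate_jaccard(sig_a, sig_b):
    k = len(sig_a)
    return sum(a == b for a, b in zip(sig_a, sig_b)) / k  # y / k

A = set("the quick brown fox".split())
B = set("the quick brown dog".split())
k = 400  # expected error O(1/sqrt(k)), roughly 0.05 here
print(estimate_jaccard(minhash_signature(A, k), minhash_signature(B, k)))
# true Jaccard index: |A ∩ B| / |A ∪ B| = 3/5 = 0.6
```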
Let h be a hash function, and let k be a fixed integer. If S is any set of k or more values in the domain of h, define h(k)(S) to be the subset of the k members of S that have the smallest values of h. This subset h(k)(S) is used as a signature for the set S, and the similarity of any two sets is estimated by comparing their signatures.

Specifically, let A and B be any two sets. Then X = h(k)(h(k)(A) ∪ h(k)(B)) = h(k)(A ∪ B) is a set of k elements of A ∪ B, and if h is a random function then any subset of k elements is equally likely to be chosen; that is, X is a simple random sample of A ∪ B. The subset Y = X ∩ h(k)(A) ∩ h(k)(B) is the set of members of X that belong to the intersection A ∩ B. Therefore, |Y|/k is an unbiased estimator of J(A,B). The difference between this estimator and the estimator produced by multiple hash functions is that X always has exactly k members, whereas the multiple hash functions may lead to a smaller number of sampled elements, because two different hash functions may have the same minima. However, when k is small relative to the sizes of the sets, this difference is negligible. By standard Chernoff bounds for sampling without replacement, this estimator has expected error O(1/√k), matching the performance of the multiple-hash-function scheme.

The estimator |Y|/k can be computed in time O(k) from the two signatures of the given sets, in either variant of the scheme. Therefore, when ε and k are constants, the time to compute the estimated similarity from the signatures is also constant. The signature of each set can be computed in linear time in the size of the set, so when many pairwise similarities need to be estimated this method can lead to substantial savings in running time compared to doing a full comparison of the members of each set. Specifically, for set size n the many-hash variant takes O(nk) time. The single-hash variant is generally faster, requiring O(n) time to maintain the queue of minimum hash values, assuming n ≫ k.[1]

A variety of techniques to introduce weights into the computation of MinHashes have been developed. The simplest extends the scheme to integer weights.[4] Extend the hash function h to accept both a set member and an integer, then generate multiple hashes for each item according to its weight: if item i occurs n times, generate the hashes $h(i,1), h(i,2), \ldots, h(i,n)$. Run the original algorithm on this expanded set of hashes. Doing so yields the weighted Jaccard index as the collision probability. Further extensions that achieve this collision probability on real weights with better runtime have been developed, one for dense data,[5] and another for sparse data.[6]

Another family of extensions uses exponentially distributed hashes. A uniformly random hash between 0 and 1 can be converted to follow an exponential distribution by CDF inversion. This method exploits the many convenient properties of the minimum of a set of exponential variables. It yields the probability Jaccard index as its collision probability.[7]

In order to implement the MinHash scheme as described above, one needs the hash function h to define a random permutation on n elements, where n is the total number of distinct elements in the union of all of the sets to be compared. But because there are n! different permutations, it would require Ω(n log n) bits just to specify a truly random permutation, an infeasibly large number for even moderate values of n.
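The single-hash (bottom-k) variant described at the start of this section lends itself to a similarly small sketch. As before, the SHA-1-based hash and the example sets are illustrative stand-ins for a random function, and k is chosen small only for demonstration.

```python
# Sketch of the single-hash variant: each set is summarised by its k smallest
# hash values, and |Y|/k estimates the Jaccard index.
import hashlib

def h(x):
    return int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:8], "big")

def bottom_k(s, k):
    return set(sorted(h(x) for x in s)[:k])    # h(k)(S): the k smallest values

def estimate_jaccard(s1, s2, k):
    sig_a, sig_b = bottom_k(s1, k), bottom_k(s2, k)
    X = set(sorted(sig_a | sig_b)[:k])         # h(k)(A ∪ B): a sample of the union
    Y = X & sig_a & sig_b                      # sampled elements in the intersection
    return len(Y) / k

A = set("the quick brown fox".split())
B = set("the quick brown dog".split())
print(estimate_jaccard(A, B, k=3))             # true Jaccard index is 3/5
```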
Because of this fact, by analogy to the theory of universal hashing, there has been significant work on finding a family of permutations that is "min-wise independent", meaning that for any subset of the domain, any element is equally likely to be the minimum. It has been established that a min-wise independent family of permutations must include at least $\operatorname{lcm}(1, 2, \ldots, n) \geq e^{n - o(n)}$ different permutations, and therefore that it needs Ω(n) bits to specify a single permutation, still infeasibly large.[2]

Because of the above impracticality, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k.[8] Approximate min-wise independence differs from full independence by at most a fixed probability ε.[9]

In 1999 Piotr Indyk proved[10] that any k-wise independent family of hash functions is also approximately min-wise independent for k large enough. In particular, there are constants $c, c' > 0$ such that if $k \geq c \log \tfrac{1}{\epsilon}$, then for all sets $|X| \leq \epsilon n c'$ and $x \notin X$,

$$\Pr\left[\min h(X \cup \{x\}) = h(x)\right] = (1 \pm \epsilon)\,\frac{1}{|X| + 1}.$$

(Note: here $(1 \pm \epsilon)$ means the probability is at most a factor $1 + \epsilon$ too big, and at most $1 - \epsilon$ too small.) This guarantee is, among other things, sufficient to give the Jaccard bound required by the MinHash algorithm. That is, if A and B are sets, then

$$\Pr\left[h_{\min}(A) = h_{\min}(B)\right] = J(A,B) \pm \epsilon.$$

Since k-wise independent hash functions can be specified using just $k \log n$ bits, this approach is much more practical than using completely min-wise independent permutations. Another practical family of hash functions that gives approximate min-wise independence is tabulation hashing.

The original applications for MinHash involved clustering and eliminating near-duplicates among web documents, represented as sets of the words occurring in those documents.[1][2][11] Similar techniques have also been used for clustering and near-duplicate elimination for other types of data, such as images: in the case of image data, an image can be represented as a set of smaller subimages cropped from it, or as sets of more complex image feature descriptions.[12]

In data mining, Cohen et al. (2001) use MinHash as a tool for association rule learning. Given a database in which each entry has multiple attributes (viewed as a 0–1 matrix with a row per database entry and a column per attribute), they use MinHash-based approximations to the Jaccard index to identify candidate pairs of attributes that frequently co-occur, and then compute the exact value of the index for only those pairs to determine the ones whose frequencies of co-occurrence are below a given strict threshold.[13]

The MinHash algorithm has been adapted for bioinformatics, where the problem of comparing genome sequences has a similar theoretical underpinning to that of comparing documents on the web. MinHash-based tools[14][15] allow rapid comparison of whole genome sequencing data with reference genomes (around 3 minutes to compare one genome with the 90,000 reference genomes in RefSeq), and are suitable for speciation and perhaps a limited degree of microbial sub-typing.
There are also applications for metagenomics,[14] and MinHash-derived algorithms have been used for genome alignment and genome assembly.[16] Accurate average nucleotide identity (ANI) values can be generated very efficiently with MinHash-based algorithms.[17]

The MinHash scheme may be seen as an instance of locality-sensitive hashing, a collection of techniques for using hash functions to map large sets of objects down to smaller hash values in such a way that, when two objects have a small distance from each other, their hash values are likely to be the same. In this instance, the signature of a set may be seen as its hash value. Other locality-sensitive hashing techniques exist for Hamming distance between sets and cosine distance between vectors; locality-sensitive hashing has important applications in nearest-neighbor search algorithms.[18] For large distributed systems, and in particular MapReduce, there exist modified versions of MinHash to help compute similarities with no dependence on the point dimension.[19]

A large-scale evaluation was conducted by Google in 2006[20] to compare the performance of the MinHash and SimHash[21] algorithms. In 2007 Google reported using SimHash for duplicate detection in web crawling[22] and using MinHash and LSH for Google News personalization.[23]
https://en.wikipedia.org/wiki/MinHash
In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence[1]), denoted $D_{\text{KL}}(P \parallel Q)$, is a type of statistical distance: a measure of how much a model probability distribution Q is different from a true probability distribution P.[2][3] Mathematically, it is defined as

$$D_{\text{KL}}(P \parallel Q) = \sum_{x \in \mathcal{X}} P(x) \, \log \frac{P(x)}{Q(x)}.$$

A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model instead of P when the actual distribution is P. While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually a metric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast to variation of information), and does not satisfy the triangle inequality. Instead, in terms of information geometry, it is a type of divergence,[4] a generalization of squared distance, and for certain classes of distributions (notably an exponential family), it satisfies a generalized Pythagorean theorem (which applies to squared distances).[5]

Relative entropy is always a non-negative real number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference; and practical, such as applied statistics, fluid mechanics, neuroscience, bioinformatics, and machine learning.

Consider two probability distributions P and Q. Usually, P represents the data, the observations, or a measured probability distribution. Distribution Q represents instead a theory, a model, a description or an approximation of P. The Kullback–Leibler divergence $D_{\text{KL}}(P \parallel Q)$ is then interpreted as the average difference in the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P. Note that the roles of P and Q can be reversed in some situations where that is easier to compute, such as with the expectation–maximization (EM) algorithm and evidence lower bound (ELBO) computations.

The relative entropy was introduced by Solomon Kullback and Richard Leibler in Kullback & Leibler (1951) as "the mean information for discrimination between $H_1$ and $H_2$ per observation from $\mu_1$",[6] where one is comparing two probability measures $\mu_1, \mu_2$, and $H_1, H_2$ are the hypotheses that one is selecting from measures $\mu_1, \mu_2$ (respectively).
They denoted this by $I(1:2)$, and defined the "'divergence' between $\mu_1$ and $\mu_2$" as the symmetrized quantity $J(1,2) = I(1:2) + I(2:1)$, which had already been defined and used by Harold Jeffreys in 1948.[7] In Kullback (1959), the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as the "directed divergences" between two distributions;[8] Kullback preferred the term discrimination information.[9] The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality.[10] Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in Kullback (1959, pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence.

For discrete probability distributions P and Q defined on the same sample space $\mathcal{X}$, the relative entropy from Q to P is defined[11] to be

$$D_{\text{KL}}(P \parallel Q) = \sum_{x \in \mathcal{X}} P(x) \, \log \frac{P(x)}{Q(x)},$$

which is equivalent to

$$D_{\text{KL}}(P \parallel Q) = -\sum_{x \in \mathcal{X}} P(x) \, \log \frac{Q(x)}{P(x)}.$$

In other words, it is the expectation of the logarithmic difference between the probabilities P and Q, where the expectation is taken using the probabilities P. Relative entropy is only defined in this way if, for all x, $Q(x) = 0$ implies $P(x) = 0$ (absolute continuity). Otherwise, it is often defined as $+\infty$,[1] but the value $+\infty$ is possible even if $Q(x) \neq 0$ everywhere,[12][13] provided that $\mathcal{X}$ is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. Whenever $P(x)$ is zero the contribution of the corresponding term is interpreted as zero because

$$\lim_{x \to 0^{+}} x \, \log(x) = 0.$$

For distributions P and Q of a continuous random variable, relative entropy is defined to be the integral[14]

$$D_{\text{KL}}(P \parallel Q) = \int_{-\infty}^{\infty} p(x) \, \log \frac{p(x)}{q(x)} \, dx,$$

where p and q denote the probability densities of P and Q.

More generally, if P and Q are probability measures on a measurable space $\mathcal{X}$, and P is absolutely continuous with respect to Q, then the relative entropy from Q to P is defined as

$$D_{\text{KL}}(P \parallel Q) = \int_{x \in \mathcal{X}} \log \frac{P(dx)}{Q(dx)} \, P(dx),$$

where $\frac{P(dx)}{Q(dx)}$ is the Radon–Nikodym derivative of P with respect to Q, i.e. the unique Q-almost-everywhere-defined function r on $\mathcal{X}$ such that $P(dx) = r(x)\,Q(dx)$, which exists because P is absolutely continuous with respect to Q. Also we assume the expression on the right-hand side exists. Equivalently (by the chain rule), this can be written as

$$D_{\text{KL}}(P \parallel Q) = \int_{x \in \mathcal{X}} \frac{P(dx)}{Q(dx)} \, \log \frac{P(dx)}{Q(dx)} \, Q(dx),$$

which is the entropy of P relative to Q.
Continuing in this case, if $\mu$ is any measure on $\mathcal{X}$ for which densities p and q with $P(dx) = p(x)\,\mu(dx)$ and $Q(dx) = q(x)\,\mu(dx)$ exist (meaning that P and Q are both absolutely continuous with respect to $\mu$), then the relative entropy from Q to P is given as

$$D_{\text{KL}}(P \parallel Q) = \int_{x \in \mathcal{X}} p(x) \, \log \frac{p(x)}{q(x)} \, \mu(dx).$$

Note that such a measure $\mu$ for which densities can be defined always exists, since one can take $\mu = \tfrac{1}{2}(P + Q)$, although in practice it will usually be one that applies in the context, like counting measure for discrete distributions, or Lebesgue measure or a convenient variant thereof, like Gaussian measure or the uniform measure on the sphere, Haar measure on a Lie group, etc., for continuous distributions.

The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits, or to base e if information is measured in nats. Most formulas involving relative entropy hold regardless of the base of the logarithm.

Various conventions exist for referring to $D_{\text{KL}}(P \parallel Q)$ in words. Often it is referred to as the divergence between P and Q, but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of P from Q or as the divergence from Q to P. This reflects the asymmetry in Bayesian inference, which starts from a prior Q and updates to the posterior P. Another common way to refer to $D_{\text{KL}}(P \parallel Q)$ is as the relative entropy of P with respect to Q or the information gain from P over Q.

Kullback[3] gives the following example (Table 2.1, Example 2.1). Let P and Q be the distributions shown in the table and figure. P is the distribution on the left side of the figure, a binomial distribution with $N = 2$ and $p = 0.4$. Q is the distribution on the right side of the figure, a discrete uniform distribution with the three possible outcomes x = 0, 1, 2 (i.e. $\mathcal{X} = \{0, 1, 2\}$), each with probability $p = 1/3$. Relative entropies $D_{\text{KL}}(P \parallel Q)$ and $D_{\text{KL}}(Q \parallel P)$ are calculated as follows.
This example uses the natural log with base e, designated ln, to get results in nats (see units of information):

$$\begin{aligned} D_{\text{KL}}(P \parallel Q) &= \sum_{x \in \mathcal{X}} P(x) \, \ln \frac{P(x)}{Q(x)} \\ &= \frac{9}{25} \ln \frac{9/25}{1/3} + \frac{12}{25} \ln \frac{12/25}{1/3} + \frac{4}{25} \ln \frac{4/25}{1/3} \\ &= \frac{1}{25} \left( 32 \ln 2 + 55 \ln 3 - 50 \ln 5 \right) \\ &\approx 0.0852996, \end{aligned}$$

$$\begin{aligned} D_{\text{KL}}(Q \parallel P) &= \sum_{x \in \mathcal{X}} Q(x) \, \ln \frac{Q(x)}{P(x)} \\ &= \frac{1}{3} \ln \frac{1/3}{9/25} + \frac{1}{3} \ln \frac{1/3}{12/25} + \frac{1}{3} \ln \frac{1/3}{4/25} \\ &= \frac{1}{3} \left( -4 \ln 2 - 6 \ln 3 + 6 \ln 5 \right) \\ &\approx 0.097455. \end{aligned}$$

In the field of statistics, the Neyman–Pearson lemma states that the most powerful way to distinguish between the two distributions P and Q based on an observation Y (drawn from one of them) is through the log of the ratio of their likelihoods: $\log P(Y) - \log Q(Y)$. The KL divergence is the expected value of this statistic if Y is actually drawn from P. Kullback motivated the statistic as an expected log likelihood ratio.[15]

In the context of coding theory, $D_{\text{KL}}(P \parallel Q)$ can be constructed by measuring the expected number of extra bits required to code samples from P using a code optimized for Q rather than the code optimized for P.

In the context of machine learning, $D_{\text{KL}}(P \parallel Q)$ is often called the information gain achieved if P would be used instead of Q, which is currently used. By analogy with information theory, it is called the relative entropy of P with respect to Q.

Expressed in the language of Bayesian inference, $D_{\text{KL}}(P \parallel Q)$ is a measure of the information gained by revising one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P.[16] In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P. In order to find a distribution Q that is closest to P, we can minimize the KL divergence and compute an information projection.

While it is a statistical distance, it is not a metric, the most familiar type of distance, but instead it is a divergence.[4] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general $D_{\text{KL}}(P \parallel Q)$ does not equal $D_{\text{KL}}(Q \parallel P)$, and the asymmetry is an important part of the geometry.[4] The infinitesimal form of relative entropy, specifically its Hessian, gives a metric tensor that equals the Fisher information metric; see § Fisher information metric.
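The worked example above can be checked numerically. The following minimal Python snippet (an illustration, not from the original text) reproduces both divergences and makes the asymmetry visible:

```python
# Numerical check of Kullback's worked example: P is Binomial(2, 0.4),
# Q is uniform on {0, 1, 2}; results are in nats and are asymmetric.
from math import log

P = [9/25, 12/25, 4/25]   # Binomial(2, 0.4): (0.6^2, 2*0.4*0.6, 0.4^2)
Q = [1/3, 1/3, 1/3]

def kl(p, q):
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl(P, Q))  # ≈ 0.0852996
print(kl(Q, P))  # ≈ 0.097455
```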
The Fisher information metric on a given probability distribution determines the natural gradient for information-geometric optimization algorithms.[17] Its quantum version is the Fubini–Study metric.[18] Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation.[5]

The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f-divergence. For probabilities over a finite alphabet, it is unique in being a member of both of these classes of statistical divergences. An application of the Bregman divergence can be found in mirror descent.[19]

Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a "horse race" in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds.[20] This is a special case of a much more general connection between financial returns and divergence measures.[21]

Financial risks are connected to $D_{\text{KL}}$ via information geometry.[22] Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on "opposite sides" relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example.[23]

In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities X can be seen as representing an implicit probability distribution $q(x_i) = 2^{-\ell_i}$ over X, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, relative entropy can be interpreted as the expected extra message length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P: it is the excess entropy.

$$\begin{aligned} D_{\text{KL}}(P \parallel Q) &= \sum_{x \in \mathcal{X}} p(x) \log \frac{1}{q(x)} - \sum_{x \in \mathcal{X}} p(x) \log \frac{1}{p(x)} \\ &= \mathrm{H}(P, Q) - \mathrm{H}(P), \end{aligned}$$

where $\mathrm{H}(P, Q)$ is the cross entropy of Q relative to P and $\mathrm{H}(P)$ is the entropy of P (which is the same as the cross-entropy of P with itself).

The relative entropy $D_{\text{KL}}(P \parallel Q)$ can be thought of geometrically as a statistical distance, a measure of how far the distribution Q is from the distribution P. Geometrically it is a divergence: an asymmetric, generalized form of squared distance.
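The identity $D_{\text{KL}}(P \parallel Q) = \mathrm{H}(P, Q) - \mathrm{H}(P)$ can be verified directly; the hypothetical snippet below reuses the same P and Q as the previous example and works in base-2 logarithms (bits):

```python
# Check of the coding-theory identity D_KL(P‖Q) = H(P,Q) − H(P), in bits.
from math import log2

P = [9/25, 12/25, 4/25]
Q = [1/3, 1/3, 1/3]

cross_entropy = -sum(p * log2(q) for p, q in zip(P, Q))   # H(P, Q)
entropy = -sum(p * log2(p) for p in P)                    # H(P)
kl = sum(p * log2(p / q) for p, q in zip(P, Q))           # D_KL directly

print(cross_entropy - entropy, kl)  # the two values agree
```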
The cross-entropy $\mathrm{H}(P, Q)$ is itself such a measurement (formally a loss function), but it cannot be thought of as a distance, since $\mathrm{H}(P, P) = \mathrm{H}(P)$ is not zero. This can be fixed by subtracting $\mathrm{H}(P)$ to make $D_{\text{KL}}(P \parallel Q)$ agree more closely with our notion of distance, as the excess loss. The resulting function is asymmetric, and while this can be symmetrized (see § Symmetrised divergence), the asymmetric form is more useful. See § Interpretations for more on the geometric interpretation.

Relative entropy is related to the "rate function" in the theory of large deviations.[24][25]

Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension of those appearing in a commonly used characterization of entropy.[26] Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence.

Relative entropy vanishes only when the two distributions coincide: in particular, if $P(dx) = p(x)\,\mu(dx)$ and $Q(dx) = q(x)\,\mu(dx)$, then $D_{\text{KL}}(P \parallel Q) = 0$ implies $p(x) = q(x)$ $\mu$-almost everywhere. The entropy $\mathrm{H}(P)$ thus sets a minimum value for the cross-entropy $\mathrm{H}(P, Q)$, the expected number of bits required when using a code based on Q rather than P; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from $\mathcal{X}$, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P.

Denote $f(\alpha) := D_{\text{KL}}((1 - \alpha)Q + \alpha P \parallel Q)$ and note that $D_{\text{KL}}(P \parallel Q) = f(1)$.
The first derivative of f may be derived and evaluated as follows:

$$\begin{aligned} f'(\alpha) &= \sum_{x \in \mathcal{X}} (P(x) - Q(x)) \left( \log \left( \frac{(1 - \alpha)Q(x) + \alpha P(x)}{Q(x)} \right) + 1 \right) \\ &= \sum_{x \in \mathcal{X}} (P(x) - Q(x)) \log \left( \frac{(1 - \alpha)Q(x) + \alpha P(x)}{Q(x)} \right), \\ f'(0) &= 0. \end{aligned}$$

Further derivatives may be derived and evaluated as follows:

$$\begin{aligned} f''(\alpha) &= \sum_{x \in \mathcal{X}} \frac{(P(x) - Q(x))^2}{(1 - \alpha)Q(x) + \alpha P(x)}, \\ f''(0) &= \sum_{x \in \mathcal{X}} \frac{(P(x) - Q(x))^2}{Q(x)}, \\ f^{(n)}(\alpha) &= (-1)^n (n - 2)! \sum_{x \in \mathcal{X}} \frac{(P(x) - Q(x))^n}{\left( (1 - \alpha)Q(x) + \alpha P(x) \right)^{n-1}}, \\ f^{(n)}(0) &= (-1)^n (n - 2)! \sum_{x \in \mathcal{X}} \frac{(P(x) - Q(x))^n}{Q(x)^{n-1}}. \end{aligned}$$

Hence solving for $D_{\text{KL}}(P \parallel Q)$ via the Taylor expansion of f about 0, evaluated at $\alpha = 1$, yields

$$\begin{aligned} D_{\text{KL}}(P \parallel Q) &= \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} \\ &= \sum_{n=2}^{\infty} \frac{1}{n(n-1)} \sum_{x \in \mathcal{X}} \frac{(Q(x) - P(x))^n}{Q(x)^{n-1}}. \end{aligned}$$

$P \leq 2Q$ almost surely is a sufficient condition for convergence of the series, by the following absolute convergence argument:

$$\begin{aligned} \sum_{n=2}^{\infty} \left\vert \frac{1}{n(n-1)} \sum_{x \in \mathcal{X}} \frac{(Q(x) - P(x))^n}{Q(x)^{n-1}} \right\vert &= \sum_{n=2}^{\infty} \frac{1}{n(n-1)} \sum_{x \in \mathcal{X}} \left\vert Q(x) - P(x) \right\vert \left\vert 1 - \frac{P(x)}{Q(x)} \right\vert^{n-1} \\ &\leq \sum_{n=2}^{\infty} \frac{1}{n(n-1)} \sum_{x \in \mathcal{X}} \left\vert Q(x) - P(x) \right\vert \\ &\leq \sum_{n=2}^{\infty} \frac{1}{n(n-1)} \\ &= 1. \end{aligned}$$

$P \leq 2Q$ almost surely is also a necessary condition for convergence of the series, by the following proof by contradiction. Assume that $P > 2Q$ with measure strictly greater than 0. It then follows that there must exist some values $\varepsilon > 0$, $\rho > 0$, and $U < \infty$ such that $P \geq 2Q + \varepsilon$ and $Q \leq U$ with measure $\rho$. The previous proof of sufficiency demonstrated that the measure-$(1 - \rho)$ component of the series where $P \leq 2Q$ is bounded, so we need only concern ourselves with the behavior of the measure-$\rho$ component of the series where $P \geq 2Q + \varepsilon$. The absolute value of the n-th term of this component of the series is then lower bounded by $\frac{1}{n(n-1)} \rho \left( 1 + \frac{\varepsilon}{U} \right)^n$, which is unbounded as $n \to \infty$, so the series diverges.

The following result, due to Donsker and Varadhan,[29] is known as Donsker and Varadhan's variational formula.
Theorem [Duality Formula for Variational Inference] — Let $\Theta$ be a set endowed with an appropriate $\sigma$-field $\mathcal{F}$, and two probability measures P and Q, which formulate two probability spaces $(\Theta, \mathcal{F}, P)$ and $(\Theta, \mathcal{F}, Q)$, with $Q \ll P$. ($Q \ll P$ indicates that Q is absolutely continuous with respect to P.) Let h be a real-valued integrable random variable on $(\Theta, \mathcal{F}, P)$. Then the following equality holds:

$$\log E_P[\exp h] = \sup_{Q \ll P} \left\{ E_Q[h] - D_{\text{KL}}(Q \parallel P) \right\}.$$

Further, the supremum on the right-hand side is attained if and only if

$$\frac{Q(d\theta)}{P(d\theta)} = \frac{\exp h(\theta)}{E_P[\exp h]}$$

holds almost surely with respect to the probability measure P, where $\frac{Q(d\theta)}{P(d\theta)}$ denotes the Radon–Nikodym derivative of Q with respect to P.

For a short proof assuming integrability of $\exp(h)$ with respect to P, let $Q^*$ have P-density $\frac{\exp h(\theta)}{E_P[\exp h]}$, i.e. $Q^*(d\theta) = \frac{\exp h(\theta)}{E_P[\exp h]} P(d\theta)$. Then

$$D_{\text{KL}}(Q \parallel Q^*) - D_{\text{KL}}(Q \parallel P) = -E_Q[h] + \log E_P[\exp h].$$

Therefore,

$$E_Q[h] - D_{\text{KL}}(Q \parallel P) = \log E_P[\exp h] - D_{\text{KL}}(Q \parallel Q^*) \leq \log E_P[\exp h],$$

where the last inequality follows from $D_{\text{KL}}(Q \parallel Q^*) \geq 0$, for which equality occurs if and only if $Q = Q^*$. The conclusion follows.

Suppose that we have two multivariate normal distributions, with means $\mu_0, \mu_1$ and with (non-singular) covariance matrices $\Sigma_0, \Sigma_1$. If the two distributions have the same dimension, k, then the relative entropy between the distributions is as follows:[30]

$$D_{\text{KL}}\left( \mathcal{N}_0 \parallel \mathcal{N}_1 \right) = \frac{1}{2} \left[ \operatorname{tr}\left( \Sigma_1^{-1} \Sigma_0 \right) - k + \left( \mu_1 - \mu_0 \right)^{\mathsf{T}} \Sigma_1^{-1} \left( \mu_1 - \mu_0 \right) + \ln \frac{\det \Sigma_1}{\det \Sigma_0} \right].$$

The logarithm in the last term must be taken to base e, since all terms apart from the last are base-e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by $\ln(2)$ yields the divergence in bits.

In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositions $L_0, L_1$ such that $\Sigma_0 = L_0 L_0^{\mathsf{T}}$ and $\Sigma_1 = L_1 L_1^{\mathsf{T}}$.
Then with M and y the solutions to the triangular linear systems $L_1 M = L_0$ and $L_1 y = \mu_1 - \mu_0$,

$$D_{\text{KL}}\left( \mathcal{N}_0 \parallel \mathcal{N}_1 \right) = \frac{1}{2} \left( \sum_{i,j=1}^{k} (M_{ij})^2 - k + |y|^2 + 2 \sum_{i=1}^{k} \ln \frac{(L_1)_{ii}}{(L_0)_{ii}} \right).$$

A special case, and a common quantity in variational inference, is the relative entropy between a diagonal multivariate normal and a standard normal distribution (with zero mean and unit variance):

$$D_{\text{KL}}\left( \mathcal{N}\left( (\mu_1, \ldots, \mu_k)^{\mathsf{T}}, \operatorname{diag}(\sigma_1^2, \ldots, \sigma_k^2) \right) \parallel \mathcal{N}(\mathbf{0}, \mathbf{I}) \right) = \frac{1}{2} \sum_{i=1}^{k} \left[ \sigma_i^2 + \mu_i^2 - 1 - \ln(\sigma_i^2) \right].$$

For two univariate normal distributions p and q the above simplifies to[31]

$$D_{\text{KL}}(p \parallel q) = \log \frac{\sigma_1}{\sigma_0} + \frac{\sigma_0^2 + (\mu_0 - \mu_1)^2}{2\sigma_1^2} - \frac{1}{2}.$$

In the case of co-centered normal distributions with $k = \sigma_1 / \sigma_0$, this simplifies[32] to

$$D_{\text{KL}}(p \parallel q) = \log_2 k + \frac{k^{-2} - 1}{2 \ln 2} \ \text{bits}.$$

Consider two uniform distributions, with the support of $p = [A, B]$ enclosed within that of $q = [C, D]$ (with $C \leq A < B \leq D$). Then the information gain is

$$D_{\text{KL}}(p \parallel q) = \log \frac{D - C}{B - A}.$$

Intuitively,[32] the information gain to a k-times narrower uniform distribution contains $\log_2 k$ bits. This connects with the use of bits in computing, where $\log_2 k$ bits would be needed to identify one element of a stream of length k.

The exponential family of distributions is given by

$$p_X(x \mid \theta) = h(x) \exp\left( \theta^{\mathsf{T}} T(x) - A(\theta) \right),$$

where $h(x)$ is the reference measure, $T(x)$ is the sufficient statistic, $\theta$ is the vector of canonical natural parameters, and $A(\theta)$ is the log-partition function. The KL divergence between two distributions $p(x \mid \theta_1)$ and $p(x \mid \theta_2)$ is given by[33]

$$D_{\text{KL}}(\theta_1 \parallel \theta_2) = (\theta_1 - \theta_2)^{\mathsf{T}} \mu_1 - A(\theta_1) + A(\theta_2),$$

where $\mu_1 = E_{\theta_1}[T(X)] = \nabla A(\theta_1)$ is the mean parameter of $p(x \mid \theta_1)$.

For example, for the Poisson distribution with mean $\lambda$, the sufficient statistic is $T(x) = x$, the natural parameter is $\theta = \log \lambda$, and the log-partition function is $A(\theta) = e^{\theta}$.
As such, the divergence between two Poisson distributions with means $\lambda_1$ and $\lambda_2$ is

$$D_{\text{KL}}(\lambda_1 \parallel \lambda_2) = \lambda_1 \log \frac{\lambda_1}{\lambda_2} - \lambda_1 + \lambda_2.$$

As another example, for a normal distribution with unit variance $N(\mu, 1)$, the sufficient statistic is $T(x) = x$, the natural parameter is $\theta = \mu$, and the log-partition function is $A(\theta) = \mu^2 / 2$. Thus, the divergence between two normal distributions $N(\mu_1, 1)$ and $N(\mu_2, 1)$ is

$$D_{\text{KL}}(\mu_1 \parallel \mu_2) = (\mu_1 - \mu_2)\mu_1 - \frac{\mu_1^2}{2} + \frac{\mu_2^2}{2} = \frac{(\mu_2 - \mu_1)^2}{2}.$$

As a final example, the divergence between a normal distribution with unit variance $N(\mu, 1)$ and a Poisson distribution with mean $\lambda$ is

$$D_{\text{KL}}(\mu \parallel \lambda) = (\mu - \log \lambda)\mu - \frac{\mu^2}{2} + \lambda.$$

While relative entropy is a statistical distance, it is not a metric on the space of probability distributions, but instead it is a divergence.[4] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric in general and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general $D_{\text{KL}}(P \parallel Q)$ does not equal $D_{\text{KL}}(Q \parallel P)$, and while this can be symmetrized (see § Symmetrised divergence), the asymmetry is an important part of the geometry.[4]

It generates a topology on the space of probability distributions. More concretely, if $\{P_1, P_2, \ldots\}$ is a sequence of distributions such that

$$\lim_{n \to \infty} D_{\text{KL}}(P_n \parallel Q) = 0,$$

then it is said that

$$P_n \xrightarrow{D} Q.$$

Pinsker's inequality entails that

$$P_n \xrightarrow{D} P \ \Rightarrow \ P_n \xrightarrow{TV} P,$$

where the latter stands for the usual convergence in total variation.

Relative entropy is directly related to the Fisher information metric. This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multi-dimensional) parameter $\theta$. Consider then two nearby values $P = P(\theta)$ and $Q = P(\theta_0)$, so that the parameter $\theta$ differs by only a small amount from the parameter value $\theta_0$. Specifically, up to first order one has (using the Einstein summation convention)

$$P(\theta) = P(\theta_0) + \Delta\theta_j \, P_j(\theta_0) + \cdots,$$

with $\Delta\theta_j = (\theta - \theta_0)_j$ a small change of $\theta$ in the j direction, and $P_j(\theta_0) = \frac{\partial P}{\partial \theta_j}(\theta_0)$ the corresponding rate of change in the probability distribution.
Since relative entropy has an absolute minimum 0 for $P = Q$, i.e. $\theta = \theta_0$, it changes only to second order in the small parameters $\Delta\theta_j$. More formally, as for any minimum, the first derivatives of the divergence vanish,

$$\left. \frac{\partial}{\partial \theta_j} \right|_{\theta = \theta_0} D_{\text{KL}}(P(\theta) \parallel P(\theta_0)) = 0,$$

and by the Taylor expansion one has, up to second order,

$$D_{\text{KL}}(P(\theta) \parallel P(\theta_0)) = \frac{1}{2} \, \Delta\theta_j \, \Delta\theta_k \, g_{jk}(\theta_0) + \cdots,$$

where the Hessian matrix of the divergence,

$$g_{jk}(\theta_0) = \left. \frac{\partial^2}{\partial \theta_j \, \partial \theta_k} \right|_{\theta = \theta_0} D_{\text{KL}}(P(\theta) \parallel P(\theta_0)),$$

must be positive semidefinite. Letting $\theta_0$ vary (and dropping the subindex 0), the Hessian $g_{jk}(\theta)$ defines a (possibly degenerate) Riemannian metric on the $\theta$ parameter space, called the Fisher information metric.

When $p_{(x,\rho)}$ satisfies the following regularity conditions: $\frac{\partial \log(p)}{\partial \rho}, \frac{\partial^2 \log(p)}{\partial \rho^2}, \frac{\partial^3 \log(p)}{\partial \rho^3}$ exist,

$$\begin{aligned} \left| \frac{\partial p}{\partial \rho} \right| &< F(x): \int_{x=0}^{\infty} F(x) \, dx < \infty, \\ \left| \frac{\partial^2 p}{\partial \rho^2} \right| &< G(x): \int_{x=0}^{\infty} G(x) \, dx < \infty, \\ \left| \frac{\partial^3 \log(p)}{\partial \rho^3} \right| &< H(x): \int_{x=0}^{\infty} p(x, 0) H(x) \, dx < \xi < \infty, \end{aligned}$$

where $\xi$ is independent of $\rho$, and

$$\left. \int_{x=0}^{\infty} \frac{\partial p(x,\rho)}{\partial \rho} \right|_{\rho=0} dx = \left. \int_{x=0}^{\infty} \frac{\partial^2 p(x,\rho)}{\partial \rho^2} \right|_{\rho=0} dx = 0,$$

then

$$\mathcal{D}(p(x,0) \parallel p(x,\rho)) = \frac{c\rho^2}{2} + \mathcal{O}\left(\rho^3\right) \quad \text{as } \rho \to 0.$$

Another information-theoretic metric is variation of information, which is roughly a symmetrization of conditional entropy. It is a metric on the set of partitions of a discrete probability space.

MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model.

Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases.

The self-information, also known as the information content of a signal, random variable, or event, is defined as the negative logarithm of the probability of the given outcome occurring.
When applied to a discrete random variable, the self-information can be represented as[citation needed]

$$\operatorname{I}(m) = D_{\text{KL}}\left( \delta_{im} \parallel \{p_i\} \right);$$

it is the relative entropy of the probability distribution $P(i)$ from a Kronecker delta representing certainty that $i = m$, i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution $P(i)$ is available to the receiver, not the fact that $i = m$.

The mutual information,

$$\begin{aligned} \operatorname{I}(X; Y) &= D_{\text{KL}}(P(X,Y) \parallel P(X)P(Y)) \\ &= \operatorname{E}_X \{ D_{\text{KL}}(P(Y \mid X) \parallel P(Y)) \} \\ &= \operatorname{E}_Y \{ D_{\text{KL}}(P(X \mid Y) \parallel P(X)) \}, \end{aligned}$$

is the relative entropy of the joint probability distribution $P(X,Y)$ from the product $P(X)P(Y)$ of the two marginal probability distributions, i.e. the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability $P(X,Y)$ is known, it is the expected number of extra bits that must on average be sent to identify Y if the value of X is not already known to the receiver.

The Shannon entropy,

$$\begin{aligned} \mathrm{H}(X) &= \operatorname{E}\left[ \operatorname{I}_X(x) \right] \\ &= \log N - D_{\text{KL}}\left( p_X(x) \parallel P_U(X) \right), \end{aligned}$$

is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the uniform distribution on the random variates of X, $P_U(X)$, from the true distribution $P(X)$, i.e. less the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution $P_U(X)$ rather than the true distribution $P(X)$.
This definition of Shannon entropy forms the basis of E. T. Jaynes's alternative generalization to continuous distributions, the limiting density of discrete points (as opposed to the usual differential entropy), which defines the continuous entropy as

$$\lim_{N \to \infty} H_N(X) = \log N - \int p(x) \log \frac{p(x)}{m(x)} \, dx,$$

which is equivalent to

$$\log(N) - D_{\text{KL}}(p(x) \parallel m(x)).$$

The conditional entropy,[34]

$$\begin{aligned} \mathrm{H}(X \mid Y) &= \log N - D_{\text{KL}}(P(X,Y) \parallel P_U(X) P(Y)) \\ &= \log N - D_{\text{KL}}(P(X,Y) \parallel P(X) P(Y)) - D_{\text{KL}}(P(X) \parallel P_U(X)) \\ &= \mathrm{H}(X) - \operatorname{I}(X; Y) \\ &= \log N - \operatorname{E}_Y \left[ D_{\text{KL}}\left( P(X \mid Y) \parallel P_U(X) \right) \right], \end{aligned}$$

is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the product distribution $P_U(X) P(Y)$ from the true joint distribution $P(X,Y)$, i.e. less the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution $P_U(X)$ rather than the conditional distribution $P(X \mid Y)$ of X given Y.

When we have a set of possible events, coming from the distribution p, we can encode them (with a lossless data compression) using entropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length, prefix-free code (e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distribution p in advance, we can devise an encoding that is optimal (e.g.: using Huffman coding), meaning that the messages we encode will have the shortest length on average (assuming the encoded events are sampled from p), which will be equal to Shannon's entropy of p (denoted as $\mathrm{H}(p)$). However, if we use a different probability distribution q when creating the entropy encoding scheme, then a larger number of bits will be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by the cross entropy between p and q.

The cross entropy between two probability distributions p and q measures the average number of bits needed to identify an event from a set of possibilities if a coding scheme is used based on a given probability distribution q, rather than the "true" distribution p. The cross entropy for two distributions p and q over the same probability space is thus defined as follows:

$$\mathrm{H}(p, q) = \operatorname{E}_p[-\log q] = \mathrm{H}(p) + D_{\text{KL}}(p \parallel q).$$

For an explicit derivation of this, see the Motivation section above. Under this scenario, relative entropies (KL divergences) can be interpreted as the extra number of bits, on average, that are needed (beyond $\mathrm{H}(p)$) for encoding the events, because of using q to construct the encoding scheme instead of p.

In Bayesian statistics, relative entropy can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution: $p(x) \to p(x \mid I)$.
If some new fact $Y = y$ is discovered, it can be used to update the posterior distribution for $X$ from $p(x \mid I)$ to a new posterior distribution $p(x \mid y, I)$ using Bayes' theorem:

$$p(x \mid y, I) = \frac{p(y \mid x, I)\, p(x \mid I)}{p(y \mid I)}.$$

This distribution has a new entropy,

$$\mathrm{H}\big(p(x \mid y, I)\big) = -\sum_x p(x \mid y, I) \log p(x \mid y, I),$$

which may be less than or greater than the original entropy $\mathrm{H}(p(x \mid I))$. However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on $p(x \mid I)$ instead of a new code based on $p(x \mid y, I)$ would have added an expected number of bits

$$D_{\text{KL}}\big(p(x \mid y, I) \parallel p(x \mid I)\big) = \sum_x p(x \mid y, I) \log \frac{p(x \mid y, I)}{p(x \mid I)}$$

to the message length. This therefore represents the amount of useful information, or information gain, about $X$ that has been learned by discovering $Y = y$.

If a further piece of data, $Y_2 = y_2$, subsequently comes in, the probability distribution for $x$ can be updated further, to give a new best guess $p(x \mid y_1, y_2, I)$. If one reinvestigates the information gain for using $p(x \mid y_1, I)$ rather than $p(x \mid I)$, it turns out that it may be either greater or less than previously estimated:

$$\sum_x p(x \mid y_1, y_2, I) \log \frac{p(x \mid y_1, y_2, I)}{p(x \mid I)} \;\text{ may be } \leq \text{ or } > \;\sum_x p(x \mid y_1, I) \log \frac{p(x \mid y_1, I)}{p(x \mid I)},$$

and so the combined information gain does not obey the triangle inequality:

$$D_{\text{KL}}\big(p(x \mid y_1, y_2, I) \parallel p(x \mid I)\big) \;\text{ may be } <, =, \text{ or } > \; D_{\text{KL}}\big(p(x \mid y_1, y_2, I) \parallel p(x \mid y_1, I)\big) + D_{\text{KL}}\big(p(x \mid y_1, I) \parallel p(x \mid I)\big).$$

All one can say is that on average, averaging using $p(y_2 \mid y_1, x, I)$, the two sides will average out.

A common goal in Bayesian experimental design is to maximise the expected relative entropy between the prior and the posterior.[35] When posteriors are approximated as Gaussian distributions, a design maximising the expected relative entropy is called Bayes d-optimal.

Relative entropy $D_{\text{KL}}\big(p(x \mid H_1) \parallel p(x \mid H_0)\big)$ can also be interpreted as the expected discrimination information for $H_1$ over $H_0$: the mean information per sample for discriminating in favor of a hypothesis $H_1$ against a hypothesis $H_0$, when hypothesis $H_1$ is true.[36] Another name for this quantity, given to it by I. J. Good, is the expected weight of evidence for $H_1$ over $H_0$ to be expected from each sample.
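As a sketch of this discrimination-information reading: the weight of evidence carried by a single observation $x$ is the log-likelihood ratio, and its expectation under $H_1$ is exactly $D_{\text{KL}}(p(x \mid H_1) \parallel p(x \mid H_0))$. The sampling distributions below are invented for illustration.

```python
import math

# Hypothetical sampling distributions under two hypotheses.
p_H1 = [0.7, 0.2, 0.1]  # p(x | H1)
p_H0 = [0.4, 0.4, 0.2]  # p(x | H0)

# Weight of evidence for H1 over H0 carried by one observation x,
# in bits: log2( p(x|H1) / p(x|H0) ).
woe = [math.log2(a / b) for a, b in zip(p_H1, p_H0)]

# Expected weight of evidence when H1 is true equals D_KL(p(x|H1) || p(x|H0)).
expected_woe = sum(a * w for a, w in zip(p_H1, woe))
print(woe, expected_woe)
```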
The expected weight of evidence for $H_1$ over $H_0$ is not the same as the information gain expected per sample about the probability distribution $p(H)$ of the hypotheses:

$$D_{\text{KL}}(p(x \mid H_1) \parallel p(x \mid H_0)) \neq IG = D_{\text{KL}}(p(H \mid x) \parallel p(H \mid I)).$$

Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate; but they will in general lead to rather different experimental strategies. On the entropy scale of information gain there is very little difference between near certainty and absolute certainty — coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous, perhaps infinite; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question.

The idea of relative entropy as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information (MDI): given new facts, a new distribution $f$ should be chosen which is as hard to discriminate from the original distribution $f_0$ as possible, so that the new data produces as small an information gain $D_{\text{KL}}(f \parallel f_0)$ as possible.

For example, if one had a prior distribution $p(x, a)$ over $x$ and $a$, and subsequently learnt the true distribution of $a$ was $u(a)$, then the relative entropy between the new joint distribution for $x$ and $a$, $q(x \mid a)u(a)$, and the earlier prior distribution would be

$$D_{\text{KL}}(q(x \mid a)u(a) \parallel p(x, a)) = \operatorname{E}_{u(a)}\left\{D_{\text{KL}}(q(x \mid a) \parallel p(x \mid a))\right\} + D_{\text{KL}}(u(a) \parallel p(a)),$$

i.e. the sum of the relative entropy of $p(a)$, the prior distribution for $a$, from the updated distribution $u(a)$, plus the expected value (using the probability distribution $u(a)$) of the relative entropy of the prior conditional distribution $p(x \mid a)$ from the new conditional distribution $q(x \mid a)$. (Note that the latter expected value is often called the conditional relative entropy (or conditional Kullback–Leibler divergence) and denoted by $D_{\text{KL}}(q(x \mid a) \parallel p(x \mid a))$.[3][34]) This is minimized if $q(x \mid a) = p(x \mid a)$ over the whole support of $u(a)$; and we note that this result incorporates Bayes' theorem, if the new distribution $u(a)$ is in fact a δ function representing certainty that $a$ has one particular value.

MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E. T. Jaynes.
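The decomposition just given is an exact identity, and a short numerical check makes the two pieces concrete. The distributions below are invented for illustration; the variable names are ours.

```python
import math

def kl(p, q):
    """D_KL(p || q) in bits for discrete distributions."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

p_a   = [0.5, 0.5]                # prior p(a)
u_a   = [0.8, 0.2]                # learned true distribution u(a)
p_x_a = [[0.7, 0.3], [0.4, 0.6]]  # prior conditionals p(x | a)
q_x_a = [[0.6, 0.4], [0.5, 0.5]]  # new conditionals q(x | a)

# Joint distributions q(x|a)u(a) and p(x,a) = p(x|a)p(a), flattened over (a, x).
q_joint = [q_x_a[i][j] * u_a[i] for i in range(2) for j in range(2)]
p_joint = [p_x_a[i][j] * p_a[i] for i in range(2) for j in range(2)]

lhs = kl(q_joint, p_joint)
rhs = sum(u_a[i] * kl(q_x_a[i], p_x_a[i]) for i in range(2)) + kl(u_a, p_a)
print(lhs, rhs)  # the two values agree
```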
In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the relative entropy continues to be just as relevant.

In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE), or Minxent for short. Minimising the relative entropy from $m$ to $p$ with respect to $m$ is equivalent to minimizing the cross-entropy of $p$ and $m$, since

$$\mathrm{H}(p, m) = \mathrm{H}(p) + D_{\text{KL}}(p \parallel m),$$

which is appropriate if one is trying to choose an adequate approximation to $p$. However, this is just as often not the task one is trying to achieve. Instead, just as often it is $m$ that is some fixed prior reference measure, and $p$ that one is attempting to optimise by minimising $D_{\text{KL}}(p \parallel m)$ subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\text{KL}}(p \parallel m)$, rather than $\mathrm{H}(p, m)$.[citation needed]

Surprisals[37] add where probabilities multiply. The surprisal for an event of probability $p$ is defined as $s = -k \ln p$. If $k$ is $\{1, 1/\ln 2, 1.38 \times 10^{-23}\}$ then surprisal is in {nats, bits, or J/K}, so that, for instance, there are $N$ bits of surprisal for landing all "heads" on a toss of $N$ coins.

Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal $S$ (entropy) for a given set of control parameters (like pressure $P$ or volume $V$). This constrained entropy maximization, both classically[38] and quantum mechanically,[39] minimizes Gibbs availability in entropy units[40] $A \equiv -k \ln Z$, where $Z$ is a constrained multiplicity or partition function.

When temperature $T$ is fixed, free energy ($T \times A$) is also minimized. Thus if $T, V$ and the number of molecules $N$ are constant, the Helmholtz free energy $F \equiv U - TS$ (where $U$ is energy and $S$ is entropy) is minimized as a system "equilibrates". If $T$ and $P$ are held constant (say during processes in your body), the Gibbs free energy $G = U + PV - TS$ is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature $T_o$ and pressure $P_o$ is $W = \Delta G = N k T_o \Theta(V/V_o)$, where $V_o = N k T_o / P_o$ and $\Theta(x) = x - 1 - \ln x \geq 0$ (see also Gibbs inequality).

More generally,[41] the work available relative to some ambient is obtained by multiplying ambient temperature $T_o$ by the relative entropy or net surprisal $\Delta I \geq 0$, defined as the average value of $k \ln(p/p_o)$, where $p_o$ is the probability of a given state under ambient conditions.
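As a rough numerical sketch of the ideal-gas expression $W = N k T_o \Theta(V/V_o)$ just given (all values below are illustrative, not from the article):

```python
import math

def theta(x):
    """Theta(x) = x - 1 - ln(x) >= 0, with equality only at x = 1 (Gibbs inequality)."""
    return x - 1 - math.log(x)

k   = 1.380649e-23  # Boltzmann constant, J/K
N   = 6.022e23      # illustrative: one mole of gas molecules
T_o = 298.0         # illustrative ambient temperature, K

# Available work for an ideal gas held at T_o, P_o whose volume is twice V_o.
W = N * k * T_o * theta(2.0)
print(W)  # roughly 7.6e2 J
```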
For instance, the work available in equilibrating a monatomic ideal gas to ambient values of $V_o$ and $T_o$ is thus $W = T_o \Delta I$, where the relative entropy is

$$\Delta I = N k \left[\Theta\left(\frac{V}{V_o}\right) + \frac{3}{2}\,\Theta\left(\frac{T}{T_o}\right)\right].$$

The resulting contours of constant relative entropy (plotted, for example, for a mole of argon at standard temperature and pressure) put limits on the conversion of hot to cold, as in flame-powered air-conditioning or in the unpowered device to convert boiling water to ice water discussed here.[42] Thus relative entropy measures thermodynamic availability in bits.

For density matrices $P$ and $Q$ on a Hilbert space, the quantum relative entropy from $Q$ to $P$ is defined to be

$$D_{\text{KL}}(P \parallel Q) = \operatorname{Tr}(P(\log P - \log Q)).$$

In quantum information science the minimum of $D_{\text{KL}}(P \parallel Q)$ over all separable states $Q$ can also be used as a measure of entanglement in the state $P$.

Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, how much the model has yet to learn.

Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via the Akaike information criterion is particularly well described in papers[43] and a book[44] by Burnham and Anderson. In a nutshell, the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like the mean squared deviation). Estimates of such divergence for models that share the same additive term can in turn be used to select among models.

When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such as maximum likelihood and maximum spacing estimators.[citation needed]

Kullback & Leibler (1951) also considered the symmetrized function[6]

$$D_{\text{KL}}(P \parallel Q) + D_{\text{KL}}(Q \parallel P),$$

which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see § Etymology for the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used by Harold Jeffreys in 1948;[7] it is accordingly called the Jeffreys divergence.

This quantity has sometimes been used for feature selection in classification problems, where $P$ and $Q$ are the conditional pdfs of a feature under two different classes. In the banking and finance industries, this quantity is referred to as the Population Stability Index (PSI), and is used to assess distributional shifts in model features through time.
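Read this way, the PSI is the Jeffreys divergence applied to binned relative frequencies of a feature. A minimal sketch (the bin values are invented for illustration; the names are ours):

```python
import math

def kl(p, q):
    """D_KL(p || q) in nats."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

def psi(expected, actual):
    """Population Stability Index as the symmetrized (Jeffreys) divergence
    D_KL(actual || expected) + D_KL(expected || actual)."""
    return kl(actual, expected) + kl(expected, actual)

# Hypothetical binned distribution of a model feature: development time vs. today.
expected = [0.25, 0.50, 0.25]
actual   = [0.20, 0.45, 0.35]
print(psi(expected, actual))  # larger values signal a bigger distributional shift
```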
An alternative is given via the $\lambda$-divergence,

$$D_\lambda(P \parallel Q) = \lambda D_{\text{KL}}(P \parallel \lambda P + (1 - \lambda)Q) + (1 - \lambda)\, D_{\text{KL}}(Q \parallel \lambda P + (1 - \lambda)Q),$$

which can be interpreted as the expected information gain about $X$ from discovering which probability distribution $X$ is drawn from, $P$ or $Q$, if they currently have probabilities $\lambda$ and $1 - \lambda$ respectively.[clarification needed][citation needed]

The value $\lambda = 0.5$ gives the Jensen–Shannon divergence, defined by

$$D_{\text{JS}} = \tfrac{1}{2} D_{\text{KL}}(P \parallel M) + \tfrac{1}{2} D_{\text{KL}}(Q \parallel M),$$

where $M$ is the average of the two distributions,

$$M = \tfrac{1}{2}(P + Q).$$

We can also interpret $D_{\text{JS}}$ as the capacity of a noisy information channel with two inputs giving the output distributions $P$ and $Q$. The Jensen–Shannon divergence, like all f-divergences, is locally proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M.[45][46]

There are many other important measures of probability distance, some of which are particularly connected with relative entropy. Other notable measures of distance include the Hellinger distance, histogram intersection, chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance.[49]

Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing: the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
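A short sketch of the Jensen–Shannon divergence for two discrete distributions (the distributions are illustrative; the names are ours):

```python
import math

def kl(p, q):
    """D_KL(p || q) in bits."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

def jensen_shannon(p, q):
    """D_JS = 1/2 D_KL(P || M) + 1/2 D_KL(Q || M), where M is the
    equal mixture of P and Q."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.9, 0.1]
q = [0.1, 0.9]
print(jensen_shannon(p, q))  # symmetric, finite, and at most 1 bit
```

Unlike $D_{\text{KL}}$ itself, this quantity stays finite even where one distribution assigns zero probability, because the mixture $M$ is nonzero wherever either input is.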
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
A noun phrase – or NP or nominal (phrase) – is a phrase that usually has a noun or pronoun as its head, and has the same grammatical functions as a noun.[1] Noun phrases are very common cross-linguistically, and they may be the most frequently occurring phrase type. Noun phrases often function as verb subjects and objects, as predicative expressions, and as complements of prepositions. One NP can be embedded inside another NP; for instance, some of his constituents has as a constituent the shorter NP his constituents.[2] In some theories of grammar, noun phrases with determiners are analyzed as having the determiner as the head of the phrase; see for instance Chomsky (1995) and Hudson (1990).[citation needed]

Noun phrases can be identified by the possibility of pronoun substitution: a string of words that can be replaced by a single pronoun without rendering the sentence grammatically unacceptable is a noun phrase. As to whether the string must contain at least two words, see the following section.

Traditionally, a phrase is understood to contain two or more words. The traditional progression in the size of syntactic units is word < phrase < clause, and in this approach a single word (such as a noun or pronoun) would not be referred to as a phrase. However, many modern schools of syntax – especially those that have been influenced by X-bar theory – make no such restriction.[3] Here many single words are judged to be phrases based on a desire for theory-internal consistency. A phrase is deemed to be a word or a combination of words that appears in a set syntactic position, for instance in subject position or object position. On this understanding, single nouns and pronouns are noun phrases (as well as nouns or pronouns): they are called phrases since they appear in the syntactic positions where multiple-word phrases (i.e. traditional phrases) can appear. This practice takes the constellation to be primitive rather than the words themselves. The word he, for instance, functions as a pronoun, but within the sentence it also functions as a noun phrase. The phrase structure grammars of the Chomskyan tradition (government and binding theory and the minimalist program) are primary examples of theories that apply this understanding of phrases. Other grammars, such as dependency grammars, are likely to reject this approach to phrases, since they take the words themselves to be primitive. For them, phrases must contain two or more words.

A typical noun phrase consists of a noun (the head of the phrase) together with zero or more dependents of various types; these dependents, since they modify a noun, are called adnominal. The allowability, form and position of these elements depend on the syntax of the language in question. In English, determiners, adjectives (and some adjective phrases) and noun modifiers precede the head noun, whereas the heavier units – phrases and clauses – generally follow it. This is part of a strong tendency in English to place heavier constituents to the right, making English more of a head-initial language. Head-final languages (e.g. Japanese and Turkish) are more likely to place all modifiers before the head noun. Other languages, such as French, often place even single-word adjectives after the noun.
Noun phrases can take different forms than that described above, for example when the head is a pronoun rather than a noun, or when elements are linked with a coordinating conjunction such as and, or, but. For more information about the structure of noun phrases in English, see English grammar § Phrases.

Noun phrases typically bear argument functions.[4] That is, the syntactic functions that they fulfill are those of the arguments of the main clause predicate, particularly those of subject, object and predicative expression. They also function as arguments in such constructs as participial phrases and prepositional phrases. Sometimes a noun phrase can also function as an adjunct of the main clause predicate, thus taking on an adverbial function.

In some languages, including English, noun phrases are required to be "completed" with a determiner in many contexts, and thus a distinction is made in syntactic analysis between phrases that have received their required determiner (such as the big house), and those in which the determiner is lacking (such as big house). The situation is complicated by the fact that in some contexts a noun phrase may nonetheless be used without a determiner (as in I like big houses); in this case the phrase may be described as having a "null determiner". (Situations in which this is possible depend on the rules of the language in question; for English, see English articles.)

In the original X-bar theory, the two respective types of entity are called noun phrase (NP) and N-bar (N′). Thus in the sentence Here is the big house, both house and big house are N-bars, while the big house is a noun phrase. In the sentence I like big houses, both houses and big houses are N-bars, but big houses also functions as a noun phrase (in this case without an explicit determiner).

In some modern theories of syntax, however, what are called "noun phrases" above are no longer considered to be headed by a noun, but by the determiner (which may be null), and they are thus called determiner phrases (DP) instead of noun phrases. (In some accounts that take this approach, the constituent lacking the determiner – that called N-bar above – may be referred to as a noun phrase.) This analysis of noun phrases is widely referred to as the DP hypothesis. It has been the preferred analysis of noun phrases in the minimalist program from its start (since the early 1990s), though the arguments in its favor tend to be theory-internal. By taking the determiner, a function word, to be head over the noun, a structure is established that is analogous to the structure of the finite clause, with a complementizer. Apart from the minimalist program, however, the DP hypothesis is rejected by most other modern theories of syntax and grammar, in part because these theories lack the relevant functional categories.[5] Dependency grammars, for instance, almost all assume the traditional NP analysis of noun phrases. For illustrations of different analyses of noun phrases depending on whether the DP hypothesis is rejected or accepted, see the next section.

The representation of noun phrases using parse trees depends on the basic approach to syntactic structure adopted. The layered trees of many phrase structure grammars grant noun phrases an intricate structure that acknowledges a hierarchy of functional projections. Dependency grammars, in contrast, since the basic architecture of dependency places a major limitation on the amount of structure that the theory can assume, produce simple, relatively flat structures for noun phrases.
The representation also depends on whether the noun or the determiner is taken to be the head of the phrase (see the discussion of the DP hypothesis in the previous section). The original article gives possible trees for the two noun phrases the big house and big houses (as in the sentences Here is the big house and I like big houses): phrase-structure trees, first using the original X-bar theory, then using the current DP approach; and dependency trees, first using the traditional NP approach, then using the DP approach.

Further trees there represent a more complex phrase; for simplicity, only dependency-based trees are given.[6] The first tree is based on the traditional assumption that nouns, rather than determiners, are the heads of phrases. The head noun picture has the four dependents the, old, of Fred, and that I found in the drawer. The tree shows how the lighter dependents appear as pre-dependents (preceding their head) and the heavier ones as post-dependents (following their head). The second tree assumes the DP hypothesis, namely that determiners serve as phrase heads, rather than nouns. The determiner the is now depicted as the head of the entire phrase, thus making the phrase a determiner phrase. There is still a noun phrase present (old picture of Fred that I found in the drawer), but this phrase is below the determiner.

An early conception of the noun phrase can be found in First Work in English by Alexander Murison.[7] In this conception a noun phrase is "the infinitive of the verb" (p. 146), which may appear "in any position in the sentence where a noun may appear". For example, to be just is more important than to be generous has two infinitives which may be replaced by nouns, as in justice is more important than generosity. This same conception can be found in subsequent grammars, such as 1878's A Tamil Grammar[8] or 1882's Murby's English grammar and analysis, where the conception of an X phrase is a phrase that can stand in for X.[9] By 1912, the concept of a noun phrase as being based around a noun can be found; for example, "an adverbial noun phrase is a group of words of which the noun is the base word, that tells the time or place of an action, or how long, how far, or how much".[10] By 1924, the idea of a noun phrase being a noun plus dependents seems to be established. For example, "Note order of words in noun-phrase--noun + adj. + genitive" suggests[11] a more modern conception of noun phrases.
https://en.wikipedia.org/wiki/Noun_phrase
The word count is the number of words in a document or passage of text. Word counting may be needed when a text is required to stay within certain numbers of words. This may particularly be the case in academia, legal proceedings, journalism and advertising. Word count is commonly used by translators to determine the price of a translation job. Word counts may also be used to calculate measures of readability and to measure typing and reading speeds (usually in words per minute). When converting character counts to words, a measure of 5 or 6 characters to a word is generally used for English.[1]

Modern web browsers support word counting via extensions, via a JavaScript bookmarklet, or via a script that is hosted in a website. Most word processors can also count words. Unix-like systems include a program, wc, specifically for word counting. There are a wide variety of word counting tools available online. Different word counting programs may give varying results, depending on the details of their text segmentation rules. The exact number of words often is not a strict requirement, so the variation is acceptable.

Novelist Jane Smiley suggests that length is an important quality of the novel.[2] However, novels can vary tremendously in length; Smiley lists novels as typically being between 100,000 and 175,000 words,[3] while National Novel Writing Month requires its novels to be at least 50,000 words. There are no firm rules: for example, the boundary between a novella and a novel is arbitrary, and a literary work may be difficult to categorise.[4] But while the length of a novel is mainly dependent on its writer,[5] lengths may also vary by subgenre; many chapter books for children start at a length of about 16,000 words,[6] and a typical mystery novel might be in the 60,000 to 80,000 word range while a thriller could be well over 100,000 words.[7]

The Science Fiction and Fantasy Writers of America specifies word lengths for each of its Nebula Award categories.[8]

The acceptable length of an academic dissertation varies greatly, dependent predominantly on the subject. Numerous American universities limit Ph.D. dissertations to 100,000 words, barring special permission for exceeding this limit.[9]
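As an illustration of the segmentation point above, here is a minimal Python sketch of the whitespace-splitting rule that utilities such as wc apply; it is one reasonable rule, not the definition of a word.

```python
import sys

def word_count(text):
    """Count words by splitting on runs of whitespace, as wc -w does."""
    return len(text.split())

if __name__ == "__main__":
    data = sys.stdin.read()
    # Report lines, words, and characters, in the style of wc.
    print(len(data.splitlines()), word_count(data), len(data))
```

A counter that treats hyphenated compounds or numbers differently will report different totals for the same text, which is exactly the variation between tools mentioned above.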
https://en.wikipedia.org/wiki/Word_count
The SMART (System for the Mechanical Analysis and Retrieval of Text) Information Retrieval System is an information retrieval system developed at Cornell University in the 1960s.[1] Many important concepts in information retrieval were developed as part of research on the SMART system, including the vector space model, relevance feedback, and Rocchio classification. Gerard Salton led the group that developed SMART. Other contributors included Mike Lesk. The SMART system also provides a set of corpora, queries and reference rankings, taken from different subjects.

To the legacy of the SMART system belongs the so-called SMART triple notation, a mnemonic scheme for denoting tf-idf weighting variants in the vector space model. The mnemonic for representing a combination of weights takes the form ddd.qqq, where the first three letters represent the term weighting of the collection document vector and the second three letters represent the term weighting for the query document vector. For example, ltc.lnn represents the ltc weighting applied to a collection document and the lnn weighting applied to a query document. The notation is established in a set of tables;[2] one lettering scheme was used by Salton and Buckley in their 1988 paper,[4] and a second scheme was used in experiments reported thereafter.
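Under the commonly given reading of the letters (l: logarithmic term frequency, 1 + log tf; t: idf weighting, log N/df; c: cosine normalization; n: none), an ltc.lnn scheme might be sketched as follows. This is an illustration under those assumptions, not SMART's reference implementation; the function names and counts are invented.

```python
import math

def ltc_weights(doc_tf, df, n_docs):
    """'ltc': logarithmic tf, idf weighting, cosine normalization."""
    w = {t: (1 + math.log10(tf)) * math.log10(n_docs / df[t])
         for t, tf in doc_tf.items() if tf > 0}
    norm = math.sqrt(sum(v * v for v in w.values()))
    return {t: v / norm for t, v in w.items()}

def lnn_weights(query_tf):
    """'lnn': logarithmic tf, no idf, no normalization."""
    return {t: 1 + math.log10(tf) for t, tf in query_tf.items() if tf > 0}

# Hypothetical counts: a collection document and a query.
doc = ltc_weights({"retrieval": 3, "vector": 1},
                  df={"retrieval": 100, "vector": 10}, n_docs=1000)
query = lnn_weights({"vector": 2, "model": 1})

# Retrieval score: dot product over shared terms.
print(sum(doc.get(t, 0.0) * w for t, w in query.items()))
```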
https://en.wikipedia.org/wiki/SMART_Information_Retrieval_System
In linguistics, an empty category, which may also be referred to as a covert category, is an element in the study of syntax that does not have any phonological content and is therefore unpronounced.[1] Empty categories exist in contrast to overt categories, which are pronounced.[1] When representing empty categories in tree structures, linguists use a null symbol (∅) to depict the idea that there is a mental category at the level being represented, even if the word(s) are being left out of overt speech. The phenomenon was named and outlined by Noam Chomsky in his 1981 LGB framework,[1][2] and serves to address apparent violations of locality of selection: there are different types of empty categories that each appear to account for locality violations in different environments.[3] Empty categories are present in most of the world's languages, although different languages allow for different categories to be empty. While the classical theory recognizes four types of null DPs (DP-trace, WH-trace, PRO, and pro), recent research has found evidence for null DPs that do not appear to fit the classical model, such as the distinction of null subjects and null objects.

In the classical theory model, empty (or null) DPs can be broken down into four main types: DP-trace, WH-trace, PRO, and pro. Each appears in a specific environment, and is further differentiated by two binding features: the anaphoric feature [a] and the pronominal feature [p]. The four possible interactions of plus or minus values for these features yield the four types of null DPs.[4] In the table, [+a] means that the particular element must be bound within its governing category. [+p] means that the empty category is taking the place of an overt pronoun. Having a negative value for a specific feature indicates that a particular type of null DP is not subject to the requirements of the feature.

Not all empty categories enter the derivation of a sentence at the same point. Both DP-trace and WH-trace, as well as all the null heads, are only generated as the result of movement operations. "-trace" refers to the position in the sentence that holds syntactic content in the deep structure, but that has undergone movement so that it is not present at the surface structure. Conversely, both PRO and pro are not the result of movement and must be generated in the deep structure.[1] In both the government and binding and minimalism frameworks, the only method of base-generation is lexical insertion. This means that both PRO and pro are held to be entries in the mental lexicon, whereas DP-trace, WH-trace, and null heads are not categories in the lexicon.
The empty category subclass called PRO, referred to orally as "big pro",[5] is a DP which appears in a caseless position.[6] PRO is a universal lexical element; it is said to be able to occur in every language, in an environment with a non-finite embedded clause.[6] However, its occurrence is limited: PRO must occupy the specifier position of the embedded, non-finite clause.[7] A sentence may instead place an overt pronoun (such as "you") in the specifier position of the embedded non-finite clause; a sentence uses PRO when, instead of an overt pronoun, there is an empty category, co-referenced with the matrix subject, in that position.

The example tree to the right is the tree structure for the sentence [He_i would like PRO_i to stay], and shows PRO surfacing in the specifier position of the TP in the embedded clause, co-referenced to (referring to the same being as) the subject of the matrix clause. We can interpret this as the DP subject [He] having control over PRO. In other words, the meaning of PRO is determined by the meaning of DP [He], as they are co-referenced.[8] This is an example of a subject control construction, where the pronominal subject [He] is selected for by both the main verb [like] and the embedded infinitive verb [stay], thus forcing the introduction of an unpronounced lexical item (PRO) at the subject of the embedded clause, in order to fulfil the selectional requirements of both verbs.[9] Alternatively, we see object control constructions when the object of the sentence controls the meaning of PRO.[10]

However, while the meaning of PRO can be determined by its controller (here, the subject of the matrix clause), it does not have to be. PRO can either be controlled ("obligatory control") or uncontrolled ("optional control").[11] The realization that PRO does not behave exactly like an R-expression, an anaphor, or a pronoun (it is, in fact, simultaneously an anaphor and a pronoun[12]) led to the conclusion that it must be a category in and of itself. It can sometimes be bound, is sometimes co-referenced in the sentence, and does not fit into binding theory.[11] Note that in modern theories, embedded clauses that introduce PRO as a subject are CPs.[12][13]

"Little pro" occurs in a subject position of a finite clause and has case.[14] The DP is 'dropped' from a sentence if its reference can be recovered from the context; pro is the silent counterpart of an overt pronoun.[15] Spanish is an example of a language with rich subject-verb morphology that can allow null subjects. The agreement-marking on the verb in Spanish allows the subject to be identified even if the subject is absent from the spoken form of the sentence. This does not happen in English because the agreement-markings in English are not sufficient for a listener to be able to deduce the meaning of a missing referent.[7][16]

Chinese is an example of a pro-drop language, where both subjects and objects can be dropped from the pronounced part of finite sentences, and their meaning remains clear from the context. In pro-drop languages, the covert pro is allowed to replace all overt pronouns, resulting in the grammaticality of sentences that have neither a subject nor an object overtly pronounced. This example illustrates how a Chinese question might be asked with "Zhangsan" as the subject and "Lisi" as the object:[17]

Zhangsan kanjian Lisi le ma?
Zhangsan see Lisi ASP Q
'Did Zhangsan see Lisi?'

Below is an example of a response to the question above. Both subject and object are optionally pronounced categories. The meaning of the sentence can be easily recovered, even though the pronouns are dropped. (Round brackets indicate an optional element.)[17]

(ta) kanjian (ta) le.
(he) saw (him) PRF
'(He) saw (him).'

The same point can be made with overt pronouns in English, as in the sentence "John said I saw him", where the chance of picking [John] as the antecedent for [him] is clearly greater than that of picking any other person. In the example below, the null object must be referring to the matrix clause subject [Zhangsan] but not the embedded subject [Lisi], since condition C of the Binding Theory states that it must be free. (Square brackets indicate that an element is covert (not pronounced), as in the English translation.)[17]

Zhangsan shuo Lisi hen xihuan.
Zhangsan say Lisi very like
'Zhangsan said that Lisi liked [him].'

In certain syntactic environments (e.g. the specifier of VP and the specifier position of a TP which introduces a non-finite verb), case features are unable to be "checked", and a determiner phrase must move throughout the phrase structure in order to check the case features.[18] When this happens, a movement rule is initiated, and the structure is altered so that we hear the DP overtly pronounced in the position of the sentence which it has been moved to; a DP-trace is an empty category that appears at the original spot (the underlying position) of the DP, and stands for the syntactic space in the tree that the DP previously occupied.[5] DP-trace is found in complementary distribution with PRO.[5]

Example 2 contrasts the underlying word order of the sentence "Cheri seems to like Tony." with its spoken form. (Square brackets throughout example 2 indicate an empty DP category.) This English example shows that the DP [Cheri] is originally introduced in the specifier position of the embedded infinitive clause, before moving to the specifier position of the matrix clause. This movement happens in order to check the features of the raising verb [seem],[19] and leaves behind a DP-trace (t_DP) in the original position of the DP. You can use the position of the DP-trace to identify where the DP is introduced in the underlying structure.

DPs can move for another reason: in the case of Wh-questions. In English, these are questions that begin with <wh> (e.g. who/whom, what, when, where, why, which, and how); words that serve the same function in other languages do not necessarily begin with <wh>, but are still treated as "Wh-items" under this framework. The responses to these questions cannot be yes or no; they must be answered using informative phrases.[20] Wh-items undergo Wh-movement to the specifier of CP, leaving a Wh-trace (t_WH) in their original position. Just like DP-movement, this movement is the result of feature checking, this time to check the [+WH] feature in C.[21]

To form a Wh-question in the example below, the DP [who] moves to the specifier of the CP position, leaving a Wh-trace in its original position. Due to the extended projection principle, there is DP movement to the specifier of TP position. There is also T to C movement, with the addition of do-support. These additional movement operations are not shown in the given example, for simplicity.
Example 5 contrasts the underlying order of words in the sentence "Who did Lucy see?" with its spoken form. (Square brackets throughout example 5 indicate an empty category.) You can see where "who" was in the initial word order by where the WH-trace appears in the spoken form. The tree to the right illustrates this example of WH-trace. Initially, the sentence is "[CP] did Lucy see who", which has an empty specifier position of CP, as indicated by square brackets. After the Wh-item [who] is relocated to the specifier position of CP, an empty position is left at the end, in the original position of [who]. What is left in its place is the WH-trace.

A special relationship holds between the WH-item and the complementizer of a sentence: either the complementizer or the WH-item may show up as null. However, they cannot both be null when the WH-item is the subject.[22] An important note to remember is that DP-trace and WH-trace are the result of movement operations, while pro and PRO must be base-generated.[1]

Null-subject languages, such as Chinese and Italian, allow the omission of an explicit subject in an independent clause by replacing it with a null subject. This is unlike languages like English or French, which require an explicit subject in such a sentence. This phenomenon is similar, but not identical, to that of pro-drop languages, which may omit subject, object or both pronouns. While all pro-drop languages are null-subject languages, not all null-subject languages are pro-drop. For example, in Italian the subject "she" can be either explicit or implicit:

Maria non vuole mangiare.
Maria not wants [to-]eat
'Maria does not want to eat.'

∅ Non vuole mangiare.
Subject not wants [to-]eat
'[(S)he] does not want to eat.'

Many languages, such as Portuguese, freely allow for the omission of the object of a transitive verb and use a variable empty category in its place.[23] Unlike pro (little pro), variable empty objects are R-expressions and must respect Principle C of Binding Theory. The following is an example of a null variable object construction in Portuguese:

a Joana viu ∅ na TV ontem.
the Joana saw him/her/it {on the} TV yesterday
'Joana saw him/her/it on TV yesterday.'

Not only can phrasal constituents such as DPs be empty; heads may be empty as well, including both lexical categories and functional categories. All null heads are the result of some movement operation on the underlying structure, forcing a lexical item out of its original position and leaving an empty category behind. There are many types of null functional categories, including determiners, complementizers and tense markers, which are the result of more recent research in the field of linguistics. Null heads are positions which end up being unpronounced at the surface level but are not included in the anaphoric and pronominal features chart that accounts for other types of empty categories.
Null determiners are used mainly when the theta assignment of a verb only allows an option for a DP as a phrase category in the sentence (with no option for a D head). Proper nouns and pronouns cannot grammatically have a determiner attached to them, though they are still part of the DP phrase.[24] In this case, one needs to include a null category to stand as the D of the phrase, its head. Since a DP phrase has a determiner as its head, but one can end up with NPs that are not preceded by an overt determiner, a null symbol is used to represent the null determiner at the beginning of the DP. Nouns that do not need a determiner include proper nouns, pronouns, and plurals, matching the subdivision below. The null determiners are subdivided into the same classes as overt determiners, since the different null determiners are thought to appear in different grammatical contexts:[25] Ø[+PROPER], Ø[+PRONOUN], and Ø[+PLURAL].

Cross-linguistically, complementizer-less environments (phrases which lack an overt C element) are often attested. In many cases, the complementizer is optional: a complement clause such as "the cat is cute" may be introduced by the overt complementizer "that", or C may be null, represented by the null symbol "Ø". The existence of null complementizers has led to theories that attempt to account for complementizer-less environments: the CP Hypothesis and the IP Hypothesis.

The CP Hypothesis states that finite subordinate clauses that lack an overt C at the surface level contain a CP layer that projects an empty (or unpronounced) C head.[26] Some evidence for this claim arises from cross-linguistic analyses of yes/no question formation, where the phenomenon of subject-auxiliary inversion (utilized in English) appears in complementary distribution with an overt complementizer question marker (for example, in Irish). Such work suggests that these are not two distinct mechanisms for yes/no question formation, but instead that a subject-auxiliary inversion construction simply contains a special type of silent question-marked complementizer. This claim is further supported by the fact that English does exhibit one environment, namely embedded questions, that utilizes the overt question-marked C "if", and that these phrases do not employ subject-auxiliary inversion.[27]

In addition to this, some compelling data from the Kansai dialect of Japanese, in which the same adverb can evoke different meanings depending on where it is attached in a clause, also points towards the existence of a null C. For example, in both complementizer-less and complementizer environments, the adverbial particle dake ("only") evokes the same phrasal meaning:[28]

(a) John-wa [Mary-ga okot-ta tte-dake] yuu-ta.
    John-TOP Mary-NOM get.angry-PAST that-only say-PAST
    'John said only that Mary got angry.'

(b) John-wa [Mary-ga okot-ta dake] yuu-ta.
    John-TOP Mary-NOM get.angry-PAST only say-PAST
    'John said only Mary got angry.'

The interpretation of both (a) and (b) is as follows: "among a number of things that John might have said, John said only that Mary got angry."

(c) John-wa [Mary-ga okot-ta-dake tte] yuu-ta.
    John-TOP Mary-NOM get.angry-PAST-only that say-PAST
    'John said that it is only the case that Mary got angry.'
The interpretation of (c) is as follows: "John said that among a number of people that might have gotten angry, only Mary did." As demonstrated by (c), the adverb should evoke a different meaning than in (a) if it is attached to any item other than a complementizer. Because (a) and (b) yield the same interpretation, this suggests that the adverbial particle must be attached at the same spot in both clauses. In (a), the adverb dake is clearly attached to a complementizer; so even in the complementizer-less environment (b), the adverb dake must still attach to a complementizer, thus pointing to the existence of a null complementizer in this phrase.[28]

The IP Hypothesis, on the other hand, asserts that the complementizer layer is simply nonexistent in complementizer-less clauses.[26] Literature arguing for this hypothesis is based upon the fact that there are some syntactic environments under which a null C head would violate the rules of government under the Empty Category Principle, and thus should be disallowed.[29][30] Other work focuses on differences in grammatical adjunction possibilities for "that" versus "that-less" clauses in English, for which the CP Hypothesis apparently cannot account. It states that under the CP Hypothesis, both clauses are CPs and thus should display the same adjunction possibilities; this is not what we find in the data. Instead, disparities in grammaticality emerge in environments of topicalization and adverbial adjunction.[30] The IP Hypothesis is said to make up for the shortcomings of the CP Hypothesis in these domains.

In Icelandic, for example, the verb "vonast til" selects for an infinitival complement:[31]

Strákarnir vonast til að PRO verða aðstoðaðir.
boy.PL.M.DEF hope.PL.3 for to {} become assisted.PRET.PTCP.PL.M
'The boys hoped that somebody would help them.'

While in Latvian, the equivalent verb "cerēt" takes an overt complementizer phrase:

Zēni cer, ka viņiem kāds palīdzēs.
boy.PL hope.3 that they.DAT.PL someone help.FUT.3
'The boys hope that somebody will help them.'

However, while both hypotheses possess some strong arguments, they also both have some crucial shortcomings. Further research is needed in order to concretely establish a more widely agreed-upon theory.

Tense markers are used to place events in time on a timeline in relation to a reference point, usually the moment of speech. A null tense marker arises when this indication of time undergoes a movement operation in the underlying structure and leaves an empty category behind. In rare cases, a null tense marker can also be the byproduct of a coordination operation, such as in Korean. For the case of Korean, some researchers suggest that in two adjacent conjuncts, the first will have a null tense morpheme.[32] For a proper tense interpretation of the first conjunct, it is necessary to posit a phonetically null tense inflection.

Verbs that select for three arguments cause an issue for X-bar theory, where ternary branching trees are not allowed. In order to overcome this, a second VP, called a "VP shell", is introduced in order to make room for the third argument.
As a consequence, a null V is created:[33] the verb "put" moves to the higher V in order to assign case to the second argument, "the key".[34]

In other cases, the selectional properties of a verb ("the towel" always being considered the subject of "wet") suggest the presence of a silent V contributing a causative meaning.[3] In other words, the head is responsible for the object's theta-role.[33]

One of the main questions that arises in linguistics when examining grammatical concepts is how children learn them. For empty categories, this is a particularly interesting consideration, since, when children ask for a certain object, their guardians usually respond in "motherese". An example of a motherese utterance which does not use empty categories is a response to a child's request for a certain object: a parent might respond "You want what?" instead of "What do you want?".[35] In this sentence, the wh-word does not move, and so in the sentence that the child hears, there is no wh-trace. A possible explanation for the eventual acquisition of the notion of empty categories is that the child learns that even when he or she does not hear a word in its original position, one is assumed to still be there, because the child is used to hearing a word. At the beginning of acquisition, children do not have a concrete concept of an empty category; it is simply a weaker version of the concept. It is noted that 'thematic government' may be all the child possesses at a young age, and this is enough to recognize the concept of an empty category. The proper amount of time must be given to learn certain aspects of an empty category (case marking, monotonicity properties, etc.).[35]
https://en.wikipedia.org/wiki/Empty_category
In linguistics, a grammatical category or grammatical feature is a property of items within the grammar of a language. Within each category there are two or more possible values (sometimes called grammemes), which are normally mutually exclusive. Frequently encountered grammatical categories include tense, aspect, mood, number, gender and case.

Although the use of terms varies from author to author, a distinction should be made between grammatical categories and lexical categories. Lexical categories (considered syntactic categories) largely correspond to the parts of speech of traditional grammar, and refer to nouns, adjectives, etc. A phonological manifestation of a category value (for example, a word ending that marks "number" on a noun) is sometimes called an exponent. Grammatical relations define relationships between words and phrases with certain parts of speech, depending on their position in the syntactic tree. Traditional relations include subject, object, and indirect object.

A given constituent of an expression can normally take only one value in each category. For example, a noun or noun phrase cannot be both singular and plural, since these are both values of the "number" category. It can, however, be both plural and feminine, since these represent different categories (number and gender).

Categories may be described and named with regard to the type of meanings that they are used to express. For example, the category of tense usually expresses the time of occurrence (e.g. past, present or future). However, purely grammatical features do not always correspond simply or consistently to elements of meaning, and different authors may take significantly different approaches in their terminology and analysis. For example, the meanings associated with the categories of tense, aspect and mood are often bound up in verb conjugation patterns that do not have separate grammatical elements corresponding to each of the three categories; see Tense–aspect–mood.

Categories may be marked on words by means of inflection. In English, for example, the number of a noun is usually marked by leaving the noun uninflected if it is singular, and by adding the suffix -s if it is plural (although some nouns have irregular plural forms). On other occasions, a category may not be marked overtly on the item to which it pertains, being manifested only through other grammatical features of the sentence, often by way of grammatical agreement. For example:

The bird can sing. / The birds can sing.

In the above sentences, the number of the noun is marked by the absence or presence of the ending -s.

The sheep is running. / The sheep are running.

In the above, the number of the noun is not marked on the noun itself (sheep does not inflect according to the regular pattern), but it is reflected in agreement between the noun and verb: singular number triggers is, and plural number triggers are.

The bird is singing. / The birds are singing.

In this case the number is marked overtly on the noun, and is also reflected by verb agreement. However:

The sheep can run.

In this case the number of the noun (or of the verb) is not manifested at all in the surface form of the sentence, and thus ambiguity is introduced (at least when the sentence is viewed in isolation).

Exponents of grammatical categories often appear in the same position or "slot" in the word (such as prefix, suffix or enclitic). An example of this is the Latin cases, which are all suffixal: rosa, rosae, rosae, rosam, rosa, rosā ("rose", in the nominative, genitive, dative, accusative, vocative and ablative).
Categories can also pertain to sentence constituents that are larger than a single word (phrases, or sometimes clauses). A phrase often inherits category values from its head word; for example, in the above sentences, the noun phrase the birds inherits plural number from the noun birds. In other cases such values are associated with the way in which the phrase is constructed; for example, in the coordinated noun phrase Tom and Mary, the phrase has plural number (it would take a plural verb), even though both the nouns from which it is built up are singular.

In traditional structural grammar, grammatical categories are semantic distinctions; this is reflected in a morphological or syntactic paradigm. But in generative grammar, which sees meaning as separate from grammar, they are categories that define the distribution of syntactic elements.[1] For structuralists such as Roman Jakobson, grammatical categories were lexemes that were based on binary oppositions of "a single feature of meaning that is equally present in all contexts of use". Another way to define a grammatical category is as a category that expresses meanings from a single conceptual domain, contrasts with other such categories, and is expressed through formally similar expressions.[2] Another definition distinguishes grammatical categories from lexical categories, such that the elements in a grammatical category have a common grammatical meaning; that is, they are part of the language's grammatical structure.[3]
https://en.wikipedia.org/wiki/Grammatical_category
In grammar, a part of speech or part-of-speech (abbreviated as POS or PoS, also known as word class[1] or grammatical category[2][a]) is a category of words (or, more generally, of lexical items) that have similar grammatical properties. Words that are assigned to the same part of speech generally display similar syntactic behavior (they play similar roles within the grammatical structure of sentences), sometimes similar morphological behavior in that they undergo inflection for similar properties, and even similar semantic behavior.

Commonly listed English parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, numeral, article, and determiner. Other terms than part of speech (particularly in modern linguistic classifications, which often make more precise distinctions than the traditional scheme does) include word class, lexical class, and lexical category. Some authors restrict the term lexical category to refer only to a particular type of syntactic category; for them the term excludes those parts of speech that are considered to be function words, such as pronouns. The term form class is also used, although this has various conflicting definitions.[3] Word classes may be classified as open or closed: open classes (typically including nouns, verbs and adjectives) acquire new members constantly, while closed classes (such as pronouns and conjunctions) acquire new members infrequently, if at all.

Almost all languages have the word classes noun and verb, but beyond these two there are significant variations among different languages.[4] Because of such variation in the number of categories and their identifying properties, analysis of parts of speech must be done for each individual language. Nevertheless, the labels for each category are assigned on the basis of universal criteria.[4]

The classification of words into lexical categories is found from the earliest moments in the history of linguistics.[5]

In the Nirukta, written in the 6th or 5th century BCE, the Sanskrit grammarian Yāska defined four main categories of words,[6] grouped into two larger classes: inflectable (nouns and verbs) and uninflectable (pre-verbs and particles).

The ancient work on the grammar of the Tamil language, Tolkāppiyam, argued to have been written around the 2nd century CE,[7] classifies Tamil words as peyar (பெயர்; noun), vinai (வினை; verb), idai (part of speech which modifies the relationships between verbs and nouns), and uri (word that further qualifies a noun or verb).[8]

A century or two after the work of Yāska, the Greek scholar Plato wrote in his Cratylus dialogue, "sentences are, I conceive, a combination of verbs [rhêma] and nouns [ónoma]".[9] Aristotle added another class, "conjunction" [sýndesmos], which included not only the words known today as conjunctions, but also other parts (the interpretations differ; in one interpretation it is pronouns, prepositions, and the article).[10]

By the end of the 2nd century BCE, grammarians had expanded this classification scheme into eight categories, seen in the Art of Grammar, attributed to Dionysius Thrax.[11] It can be seen that these parts of speech are defined by morphological, syntactic and semantic criteria.
The Latin grammarian Priscian (fl. 500 CE) modified the above eightfold system, excluding "article" (since the Latin language, unlike Greek, has no articles) but adding "interjection".[13][14] The Latin names for the parts of speech, from which the corresponding modern English terms derive, were nomen, verbum, participium, pronomen, praepositio, adverbium, conjunctio and interjectio. The category nomen included substantives (nomen substantivum, corresponding to what are today called nouns in English), adjectives (nomen adjectivum) and numerals (nomen numerale). This is reflected in the older English terminology noun substantive, noun adjective and noun numeral. Later[15] the adjective became a separate class, as often did the numerals, and the English word noun came to be applied to substantives only.

Works of English grammar generally follow the pattern of the European tradition as described above, except that participles are now usually regarded as forms of verbs rather than as a separate part of speech, and numerals are often conflated with other parts of speech: nouns (cardinal numerals, e.g. "one", and collective numerals, e.g. "dozen"), adjectives (ordinal numerals, e.g. "first", and multiplier numerals, e.g. "single") and adverbs (multiplicative numerals, e.g. "once", and distributive numerals, e.g. "singly"). Eight or nine parts of speech are commonly listed: noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, and article or determiner. Some traditional classifications consider articles to be adjectives, yielding eight parts of speech rather than nine, and some modern classifications define further classes in addition to these; for discussion see the sections below. Additionally, there are other parts of speech, including particles (yes, no)[b] and postpositions (ago, notwithstanding), although far fewer words belong to these categories. This classification, or slight expansions of it, is still followed in most dictionaries.

English words are not generally marked as belonging to one part of speech or another; this contrasts with many other European languages, which use inflection more extensively, meaning that a given word form can often be identified as belonging to a particular part of speech and as having certain additional grammatical properties. In English, most words are uninflected, while the inflected endings that exist are mostly ambiguous: -ed may mark a verbal past tense, a participle or a fully adjectival form; -s may mark a plural noun, a possessive noun, or a present-tense verb form; -ing may mark a participle, gerund, or pure adjective or noun. Although -ly is a frequent adverb marker, some adverbs (e.g. tomorrow, fast, very) do not have that ending, while many adjectives do have it (e.g. friendly, ugly, lovely), as do occasional words in other parts of speech (e.g. jelly, fly, rely); a toy illustration of this ambiguity appears at the end of this passage.

Many English words can belong to more than one part of speech. Words like neigh, break, outlaw, laser, microwave, and telephone might all be either verbs or nouns. In certain circumstances, even words with primarily grammatical functions can be used as verbs or nouns, as in "We must look to the hows and not just the whys." The process whereby a word comes to be used as a different part of speech is called conversion or zero derivation. Linguists recognize that the above list of eight or nine word classes is drastically simplified.[17] For example, "adverb" is to some extent a catch-all class that includes words with many different functions.
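To make the ambiguity of English endings concrete, here is a deliberately naive, hypothetical suffix-based word-class guesser. Every name in it is invented for the illustration; the point is that each ending yields a set of candidate classes rather than a unique answer, so form alone cannot classify a word.

# A deliberately naive suffix-based part-of-speech guesser, illustrating
# why English inflectional endings alone cannot identify a word class.

def guess_by_suffix(word):
    if word.endswith("ing"):
        return {"participle", "gerund", "adjective", "noun"}   # "running", "building"
    if word.endswith("ed"):
        return {"past-tense verb", "participle", "adjective"}  # "walked", "tired"
    if word.endswith("ly"):
        return {"adverb", "adjective"}                         # "quickly", "friendly"
    if word.endswith("s"):
        return {"plural noun", "possessive noun", "present-tense verb"}  # "cats", "runs"
    return {"unknown"}

for w in ["building", "tired", "runs", "friendly"]:
    print(w, "->", guess_by_suffix(w))
# Every word receives a set of candidate classes, never a unique answer:
# context, not form, must decide.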
Some have even argued that the most basic of category distinctions, that between nouns and verbs, is unfounded,[18] or not applicable to certain languages.[19][20] Modern linguists have proposed many different schemes whereby the words of English or other languages are placed into more specific categories and subcategories based on a more precise understanding of their grammatical functions. A common set of lexical categories defined by function may include the following (not all of them will necessarily be applicable in a given language): Within a given category, subgroups of words may be identified based on more precise grammatical properties. For example, verbs may be specified according to the number and type of objects or other complements which they take; this is called subcategorization (a minimal sketch appears at the end of this passage). Many modern descriptions of grammar include not only lexical categories or word classes but also phrasal categories, used to classify phrases, in the sense of groups of words that form units having specific grammatical functions. Phrasal categories may include noun phrases (NP), verb phrases (VP) and so on. Lexical and phrasal categories together are called syntactic categories.

Word classes may be either open or closed. An open class is one that commonly accepts the addition of new words, while a closed class is one to which new items are very rarely added. Open classes normally contain large numbers of words, while closed classes are much smaller. Typical open classes found in English and many other languages are nouns, verbs (excluding auxiliary verbs, if these are regarded as a separate class), adjectives, adverbs and interjections. Ideophones, though less familiar to English speakers, are often an open class as well,[21][22][c] and are often open to nonce words. Typical closed classes are prepositions (or postpositions), determiners, conjunctions, and pronouns.[24]

The open-closed distinction is related to the distinction between lexical and functional categories, and to that between content words and function words, and some authors consider these identical, but the connection is not strict. Open classes are generally lexical categories in the stricter sense, containing words with greater semantic content,[25] while closed classes are normally functional categories, consisting of words that perform essentially grammatical functions. This is not universal: in many languages verbs and adjectives[26][27][28] are closed classes, usually consisting of few members, and in Japanese the formation of new pronouns from existing nouns is relatively common, though to what extent these form a distinct word class is debated.

Words are added to open classes through such processes as compounding, derivation, coining, and borrowing. When a new word is added through some such process, it can subsequently be used grammatically in sentences in the same ways as other words in its class.[29] A closed class may obtain new items through these same processes, but such changes are much rarer and take much more time. A closed class is normally seen as part of the core language and is not expected to change. In English, for example, new nouns, verbs, etc. are being added to the language constantly (including by the common process of verbing and other types of conversion, where an existing word comes to be used in a different part of speech). However, it is very unusual for a new pronoun, for example, to become accepted in the language, even in cases where there may be felt to be a need for one, as in the case of gender-neutral pronouns.
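Returning to the notion of subcategorization flagged above: the idea that verbs are subclassified by the complements they take can be sketched as a simple lookup. The verb frames and the function name licensed below are hypothetical and purely illustrative.

# A minimal illustration of verb subcategorization: each verb lists the
# complement frames it accepts, and a checker rejects violations.

SUBCAT = {
    "sleep":  [[]],                # intransitive: no object
    "devour": [["NP"]],            # strictly transitive: needs an NP object
    "put":    [["NP", "PP"]],      # needs an NP object and a locative PP
    "eat":    [[], ["NP"]],        # optionally transitive
}

def licensed(verb, complements):
    """Return True if the verb admits this sequence of complement categories."""
    return list(complements) in SUBCAT.get(verb, [])

print(licensed("devour", ["NP"]))       # True:  "devour the cake"
print(licensed("devour", []))           # False: *"devour"
print(licensed("put", ["NP", "PP"]))    # True:  "put the book on the shelf"
print(licensed("put", ["NP"]))          # False: *"put the book"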
The open or closed status of word classes varies between languages, even assuming that corresponding word classes exist. Most conspicuously, in many languages verbs and adjectives form closed classes of content words. An extreme example is found in Jingulu, which has only three verbs, while even modern Indo-European Persian has no more than a few hundred simple verbs, many of which are archaic. (Some twenty Persian verbs are used as light verbs to form compounds; this lack of lexical verbs is shared with other Iranian languages.) Japanese is similar, having few lexical verbs.[30] Basque verbs are also a closed class, with the vast majority of verbal senses instead expressed periphrastically.

In Japanese, verbs and adjectives are closed classes,[31] though these are quite large, with about 700 adjectives,[32][33] and verbs have opened slightly in recent years. Japanese adjectives are closely related to verbs (they can predicate a sentence, for instance). New verbal meanings are nearly always expressed periphrastically by appending suru (する, to do) to a noun, as in undō suru (運動する, to (do) exercise), and new adjectival meanings are nearly always expressed by adjectival nouns, using the suffix -na (〜な) when an adjectival noun modifies a noun phrase, as in hen-na ojisan (変なおじさん, strange man). The closedness of verbs has weakened in recent years, and in a few cases new verbs are created by appending -ru (〜る) to a noun or using it to replace the end of a word. This occurs mostly in casual speech for borrowed words, the best-established example being sabo-ru (サボる, cut class; play hooky), from sabotāju (サボタージュ, sabotage).[34] This recent innovation aside, the huge contribution of Sino-Japanese vocabulary was almost entirely borrowed as nouns (often verbal nouns or adjectival nouns). Other languages where adjectives are a closed class include Swahili,[28] Bemba, and Luganda.

By contrast, Japanese pronouns are an open class, and nouns come to be used as pronouns with some frequency; a recent example is jibun (自分, self), now used by some as a first-person pronoun. The status of Japanese pronouns as a distinct class is disputed, however, with some considering it only a use of nouns, not a distinct class. The case is similar in languages of Southeast Asia, including Thai and Lao, in which, like Japanese, pronouns and terms of address vary significantly based on relative social standing and respect.[35] Some word classes are universally closed, however, including demonstratives and interrogative words.[35]
https://en.wikipedia.org/wiki/Lexical_category
Merge is one of the basic operations in the Minimalist Program, a leading approach to generative syntax, whereby two syntactic objects are combined to form a new syntactic unit (a set). Merge also has the property of recursion in that it may be applied to its own output: the objects combined by Merge are either lexical items or sets that were themselves formed by Merge. This recursive property of Merge has been claimed to be a fundamental characteristic that distinguishes language from other cognitive faculties. As Noam Chomsky (1999) puts it, Merge is "an indispensable operation of a recursive system ... which takes two syntactic objects A and B and forms the new object G={A,B}" (p. 2).[1] Within the Minimalist Program, syntax is derivational, and Merge is the structure-building operation. Merge is assumed to have certain formal properties constraining syntactic structure, and is implemented with specific mechanisms.

In terms of a merge-based theory of language acquisition, complements and specifiers are simply notations for first-merge (read as "complement-of" [head-complement]) and later second-merge (read as "specifier-of" [specifier-head]), with Merge always applying to a head. First-merge establishes only a set {a, b}, which is not an ordered pair. In its original formulation by Chomsky in 1995, Merge was defined as inherently asymmetric; Moro (2000) first proposed that Merge can generate symmetrical structures, provided that they are rescued by movement so that asymmetry is restored.[2] For example, an {N, N} compound such as 'boat-house' would allow the ambiguous readings of either 'a kind of house' or 'a kind of boat'. It is only with second-merge that order is derived out of a set {a, {a, b}}, which yields the recursive properties of syntax. For example, 'house-boat' {house, {house, boat}} now reads unambiguously only as a 'kind of boat'. It is this property of recursion that allows for projection and labeling of a phrase to take place;[2] in this case, the noun 'boat' is the head of the compound, and 'house' acts as a kind of specifier/modifier.

External-merge (first-merge) establishes substantive 'base structure' inherent to the VP, yielding theta/argument structure, and may go beyond the lexical-category VP to involve the functional-category light verb vP. Internal-merge (second-merge) establishes more formal aspects related to edge-properties of scope and discourse-related material pegged to CP. In a phase-based theory, this twin vP/CP distinction follows the "duality of semantics" discussed within the Minimalist Program, and is further developed into a dual distinction regarding a probe-goal relation.[3] As a consequence, at the "external/first-merge-only" stage, young children would show an inability to interpret readings from a given ordered pair, since they would only have access to the mental parsing of a non-recursive set (see Roeper for a full discussion of recursion in child language acquisition).[4] In addition to word-order violations, other more ubiquitous results of a first-merge stage would show that children's initial utterances lack the recursive properties of inflectional morphology, yielding a strict non-inflectional stage-1, consistent with an incremental structure-building model of child language.[5]

Merge takes two objects α and β and combines them, creating a binary structure. In some variants of the Minimalist Program Merge is triggered by feature checking, e.g.
the verb eat selects the noun cheesecake because the verb has an uninterpretable N-feature [uN] ("u" stands for "uninterpretable"), which must be checked (or deleted) due to full interpretation.[6] By saying that this verb has a nominal uninterpretable feature, we rule out such ungrammatical constructions as *eat beautiful (the verb selecting an adjective). Schematically it can be illustrated as:

There are three different accounts of how strong features force movement:[7][8]

1. Phonetic Form (PF) crash theory (Chomsky 1993) is conceptually motivated. The argument goes as follows: under the assumption that Logical Form (LF) is invariant, it must be the case that any parametric differences between languages reduce to morphological properties that are reflected at PF (Chomsky 1993:192). Two possible implementations of the PF crash theory are discussed by Chomsky.

2. Logical Form (LF) crash theory (Chomsky 1994) is empirically motivated by VP ellipsis.

3. Immediate elimination theory (Chomsky 1995).

Initially, the structures provided by Bare Phrase Structure (which contain labels and are constructed by Move) were indicated by the cooperation of Last Resort (LR) and the Uniformity Condition (UC), as well as by the impact of the Structure Preservation Hypothesis.[9]

When we consider the features of the word that provide the label when the word projects, we assume that the categorial feature of the word is always among the features that become the label of the newly created syntactic object.[11] In the example below, Cecchetto (2015) demonstrates how projection selects a head as the label. The verb "read" unambiguously labels the structure because "read" is a word, which means it is a probe by definition, and "read" selects "the book". The bigger constituent generated by merging the word with the syntactic object receives the label of the word itself, which allows us to label the tree as demonstrated. In this tree, the verb "read" is the head selecting the DP "the book", which makes the constituent a VP. Merge operates blindly, projecting labels in all possible combinations. The subcategorization features of the head act as a filter by admitting only labelled projections that are consistent with the selectional properties of the head. All other alternatives are eliminated.

Merge does nothing more than combine two syntactic objects (SOs) into a unit; it does not affect the properties of the combining elements in any way. This is called the No Tampering Condition (NTC). Therefore, if α (as a syntactic object) has some property before combining with β (which is likewise a syntactic object), it will still have this property after it has combined with β. This allows Merge to account for further merging, which enables structures with movement dependencies (such as wh-movement) to occur. All grammatical dependencies are established under Merge: this means that if α and β are grammatically linked, α and β must have merged.[12]

A major development of the Minimalist Program is Bare Phrase Structure (BPS), a theory of phrase structure (structure-building operations) developed by Noam Chomsky in 1994.[13] BPS is a representation of the structure of phrases in which syntactic units are not explicitly assigned to categories.[14] The introduction of BPS moves generative grammar towards dependency grammar (discussed below), which operates with significantly less structure than most phrase structure grammars.[15] The constitutive operation of BPS is Merge.
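To make the set-forming and labeling behavior concrete, here is a minimal Python sketch. It is illustrative only: the names SO and merge are invented, tuples stand in for the unordered sets of the formal definition, and labeling is simplified to projection from a designated head, echoing the "read the book" example above. No particular minimalist formalization is implied.

# An illustrative sketch of Merge as recursive structure formation
# with head labeling. Tuples are used (for display) in place of sets.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class SO:                       # a syntactic object: here, a lexical item
    form: str
    cat: str                    # category label, e.g. "V", "D", "N"

Tree = Union[SO, tuple]         # a derived object is a (label, left, right) tuple

def merge(a: Tree, b: Tree, head: Tree) -> Tree:
    """Combine two syntactic objects into a new unit. The label is
    projected from the head; the inputs themselves are left untouched,
    in the spirit of the No Tampering Condition."""
    assert head is a or head is b
    label = head.cat if isinstance(head, SO) else head[0]
    return (label, a, b)

read = SO("read", "V")
the  = SO("the", "D")
book = SO("book", "N")

dp = merge(the, book, head=the)     # first merge: {the, book}, labeled D
vp = merge(read, dp, head=read)     # recursion: {read, {the, book}}, labeled V
print(vp[0])                        # 'V': the verb projects, as in the text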
Bare phrase structure attempts to: (i) eliminate unnecessary elements; (ii) generate simpler trees; (iii) account for variation across languages.[16] Bare Phrase Structure defines projection levels according to the following features:[9]

The Minimalist Program brings into focus four fundamental properties that govern the structure of human language:[17][18]

Since the publication of bare phrase structure in 1994,[13] other linguists have continued to build on this theory. In 2002, Chris Collins continued research on Chomsky's proposal to eliminate labels, backing up Chomsky's suggestion of a simpler theory of phrase structure.[19] Collins proposed that economy features, such as Minimality, govern derivations and lead to simpler representations. In more recent work by John Lowe and John Lundstrand, published in 2020, minimal phrase structure is formulated as an extension to bare phrase structure and X-bar theory; however, it does not adopt all of the assumptions associated with the Minimalist Program (see above). Lowe and Lundstrand argue that any successful phrase structure theory should include the following seven features:[16]

Although Bare Phrase Structure includes many of these features, it does not include all of them; other theories have therefore attempted to incorporate all of them in order to present a successful phrase structure theory.

Chomsky (2001) distinguishes between external and internal Merge: if A and B are separate objects, we are dealing with external Merge; if either of them is part of the other, it is internal Merge.[20]

As it is commonly understood, standard Merge adopts three key assumptions about the nature of syntactic structure and the faculty of language: that structure is generated bottom-up, that Merge is strictly binary, and that syntactic structure is constituency-based (each is discussed in turn below). While these three assumptions are taken for granted for the most part by those working within the broad scope of the Minimalist Program, other theories of syntax reject one or more of them.

Merge is commonly seen as merging smaller constituents into greater constituents until the greatest constituent, the sentence, is reached. This bottom-up view of structure generation is rejected by representational (non-derivational) theories (e.g. Generalized Phrase Structure Grammar, Head-Driven Phrase Structure Grammar, Lexical Functional Grammar, most dependency grammars, etc.), and it is contrary to early work in Transformational Grammar. The phrase structure rules of context-free grammar, for instance, generated sentence structure top down.

The Minimalist view that Merge is strictly binary is justified with the argument that an n-ary Merge with n ≥ 3 would inevitably lead to both undergeneration and overgeneration, and as such Merge must be strictly binary.[21] More formally, the forms of undergeneration given in Marcolli et al. (2023) are such that for any n-ary Merge with n ≥ 3, only strings of length k(n−1)+1 for some k ≥ 1 can be generated (so sentences like "it rains" cannot be), and further, there are always strings of length k(n−1)+1 that are ambiguous when parsed with binary Merge, for which an n-ary Merge with n ≥ 3 would not be able to account. Further, an n-ary Merge with n ≥ 3 is also said to necessarily lead to overgeneration.
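Before turning to overgeneration, the undergeneration arithmetic can be spelled out; this is simply the length formula quoted above, |s| = k(n−1)+1, evaluated for the two smallest cases:

\begin{align*}
n = 2:\quad |s| &= k(2-1)+1 = k+1 \in \{2, 3, 4, \dots\} && \text{(every length of at least 2 is reachable)}\\
n = 3:\quad |s| &= k(3-1)+1 = 2k+1 \in \{3, 5, 7, \dots\} && \text{(odd lengths only)}
\end{align*}

Hence a two-word sentence such as "it rains" is derivable with binary Merge (take k = 1) but not with a purely ternary Merge, exactly as the text notes.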
If we take a binary tree and an n-ary tree with identical sets of leaves, then the binary tree will have a smaller number of accessible pairs of terms compared to the total number of accessible n-tuples of terms in the n-ary tree. This is responsible for the generation of ungrammatical sentences like "peanuts monkeys children will throw" (as opposed to "children will throw monkeys peanuts") with a ternary Merge.[22] Despite this, there have also been empirical arguments against strictly binary Merge, such as those coming from constituency tests,[23] and so some theories of grammar, such as Head-Driven Phrase Structure Grammar, still retain n-ary branching in the syntax.

Merge merges two constituents in such a manner that these constituents become sister constituents and are daughters of the newly created mother constituent. This understanding of how structure is generated is constituency-based (as opposed to dependency-based). Dependency grammars (e.g. Meaning-Text Theory, Functional Generative Description, Word Grammar) disagree with this aspect of Merge, since they take syntactic structure to be dependency-based.[24]

In other approaches to generative syntax, such as Head-Driven Phrase Structure Grammar, Lexical Functional Grammar and other types of unification grammar, the analogue to Merge is the unification operation of graph theory. In these theories, operations over attribute-value matrices (feature structures) are used to account for many of the same facts. Though Merge is usually assumed to be unique to language, the linguists Jonah Katz and David Pesetsky have argued that the harmonic structure of tonal music is also a result of the operation Merge.[25] This notion of 'merge' may in fact be related to Fauconnier's 'blending' notion in cognitive linguistics.

Phrase structure grammar (PSG) represents immediate constituency relations (i.e. how words group together) as well as linear precedence relations (i.e. how words are ordered). In a PSG, a constituent contains at least one member, but has no upper bound. In contrast, with Merge theory, a constituent contains at most two members. Specifically, in Merge theory, each syntactic object is a constituent.

X-bar theory is a template that claims that all lexical items project three levels of structure: X, X′, and XP. Consequently, there is a three-way distinction between head, complement, and specifier. While the first application of Merge is equivalent to the head-complement relation, the second application of Merge is equivalent to the specifier-head relation. However, the two theories differ in the claims they make about the nature of the specifier-head-complement (S-H-C) structure. In X-bar theory, S-H-C is a primitive; an example of this is Kayne's antisymmetry theory. In a Merge theory, S-H-C is derivative.
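The derivative status of S-H-C under Merge can be seen directly in the toy sketch given earlier: the configuration is nothing more than two successive binary merges. The snippet below continues that illustrative fragment (reusing its SO, merge, read, and dp names), and remains a hypothetical sketch rather than a formal implementation.

# Continuing the illustrative sketch above: specifier-head-complement
# falls out of two binary applications of merge.

she = SO("she", "D")                # the specifier, treated as a D element
hc  = merge(read, dp, head=read)    # head + complement: "read the book"
shc = merge(she, hc, head=hc)       # specifier + {head, complement}
print(shc[0])                       # 'V': the same head projects throughout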
https://en.wikipedia.org/wiki/Merge_(linguistics)
In grammar, a phrase (called an expression in some contexts) is a group of words, or a single word, acting as a grammatical unit. For instance, the English expression "the very happy squirrel" is a noun phrase which contains the adjective phrase "very happy". Phrases can consist of a single word or a complete sentence. In theoretical linguistics, phrases are often analyzed as units of syntactic structure such as a constituent.

There is a difference between the common use of the term phrase and its technical use in linguistics. In common usage, a phrase is usually a group of words with some special idiomatic meaning or other significance, such as "all rights reserved", "economical with the truth", "kick the bucket", and the like. It may be a euphemism, a saying or proverb, a fixed expression, a figure of speech, etc. In linguistics, these are known as phrasemes.

In theories of syntax, a phrase is any group of words, or sometimes a single word, which plays a particular role within the syntactic structure of a sentence. It does not have to have any special meaning or significance, or even exist anywhere outside of the sentence being analyzed, but it must function there as a complete grammatical unit. For example, in the sentence Yesterday I saw an orange bird with a white neck, the words an orange bird with a white neck form a noun phrase, or a determiner phrase in some theories, which functions as the object of the sentence.

Many theories of syntax and grammar illustrate sentence structure using phrase 'trees', which provide schematics of how the words in a sentence are grouped and relate to each other. A tree shows the words, phrases, and clauses that make up a sentence. Any word combination that corresponds to a complete subtree can be seen as a phrase. There are two competing principles for constructing trees, producing 'constituency' and 'dependency' trees, and both are illustrated here using an example sentence. The constituency-based tree is on the left and the dependency-based tree is on the right (with the labels adjective (A), determiner (D), noun (N), sentence (S), verb (V), noun phrase (NP), prepositional phrase (PP), and verb phrase (VP)). The tree on the left is of the constituency-based phrase structure grammar, and the tree on the right is of the dependency grammar. The node labels in the two trees mark the syntactic category of the different constituents, or word elements, of the sentence. In the constituency tree each phrase is marked by a phrasal node (NP, PP, VP), and there are eight phrases identified by phrase structure analysis in the example sentence. On the other hand, the dependency tree identifies a phrase by any node that exerts dependency upon, or dominates, another node; using dependency analysis, there are six phrases in the sentence. The trees and phrase counts demonstrate that different theories of syntax differ in the word combinations they qualify as a phrase. Here the constituency tree identifies three phrases that the dependency tree does not, namely: house at the end of the street, end of the street, and the end. Further analysis, including of the plausibility of the two grammars, can be pursued empirically by applying constituency tests.

In grammatical analysis, most phrases contain a head, which identifies the type and linguistic features of the phrase. The syntactic category of the head is used to name the category of the phrase;[1] for example, a phrase whose head is a noun is called a noun phrase. The remaining words in a phrase are called the dependents of the head.
The most common phrase types are named for their heads in this way, and, by the logic of heads and dependents, others can be routinely produced: for instance, the subordinator phrase. By linguistic analysis, this is a group of words that qualifies as a phrase, and the head word gives its syntactic name, "subordinator", to the grammatical category of the entire phrase. But this phrase, "before that happened", is more commonly classified in other grammars, including traditional English grammars, as a subordinate clause (or dependent clause), and it is then labelled not as a phrase but as a clause.

Most theories of syntax view most phrases as having a head, but some non-headed phrases are acknowledged. A phrase lacking a head is known as exocentric, and phrases with heads are endocentric.

Some modern theories of syntax introduce functional categories in which the head of a phrase is a functional lexical item. Some functional heads in some languages are not pronounced but are rather covert. For example, in order to explain certain syntactic patterns which correlate with the speech act a sentence performs, some researchers have posited force phrases (ForceP), whose heads are not pronounced in many languages, including English. Similarly, many frameworks assume that covert determiners are present in bare noun phrases such as proper names. Another type is the inflectional phrase, where (for example) a finite verb phrase is taken to be the complement of a functional, possibly covert head (denoted INFL) which is supposed to encode the requirements for the verb to inflect: for agreement with its subject (which is the specifier of INFL), for tense and aspect, and so on. If these factors are treated separately, then more specific categories may be considered: tense phrase (TP), where the verb phrase is the complement of an abstract "tense" element; aspect phrase; agreement phrase; and so on. Further examples of such proposed categories include topic phrase and focus phrase, which are argued to be headed by elements that encode the need for a constituent of the sentence to be marked as the topic or focus.

Theories of syntax differ in what they regard as a phrase. For instance, while most if not all theories of syntax acknowledge the existence of verb phrases (VPs), phrase structure grammars acknowledge both finite verb phrases and non-finite verb phrases, while dependency grammars only acknowledge non-finite verb phrases. The split between these views persists due to conflicting results from the standard empirical diagnostics of phrasehood such as constituency tests.[2] The distinction is illustrated with examples whose syntax trees are given next: The constituency tree on the left shows the finite verb string may nominate Newt as a constituent; it corresponds to VP1. In contrast, this same string is not shown as a phrase in the dependency tree on the right. However, both trees take the non-finite VP string nominate Newt to be a constituent.
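As a purely illustrative footnote to the constituency/dependency contrast above, the two notions of "phrase" can be rendered as node counts over toy trees. Everything below (the example sentence, the encodings, the function names) is invented for this sketch and does not reproduce the trees from the original figures.

# Constituency: every phrasal (non-leaf) node counts as a phrase.
# Dependency: every word that dominates at least one other word
# projects a phrase (its complete subtree).

constituency = ("S",
    ("NP", "the", "dog"),
    ("VP", "barked",
        ("PP", "at", ("NP", "the", "mailman"))))

def count_constituency_phrases(node):
    if isinstance(node, str):          # a bare word is not a phrasal node
        return 0
    return 1 + sum(count_constituency_phrases(child) for child in node[1:])

# Dependency arcs for the same toy sentence: head -> list of dependents.
# The two occurrences of "the" are distinguished artificially.
dependency = {
    "barked":  ["dog", "at"],
    "dog":     ["the(1)"],
    "at":      ["mailman"],
    "mailman": ["the(2)"],
}

def count_dependency_phrases(arcs):
    return sum(1 for deps in arcs.values() if deps)

print(count_constituency_phrases(constituency))  # 5: S, NP, VP, PP, NP
print(count_dependency_phrases(dependency))      # 4: barked, dog, at, mailman

The constituency count exceeds the dependency count for the same sentence, mirroring on a smaller example the eight-versus-six contrast described above.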
https://en.wikipedia.org/wiki/Phrase
In linguistics, syntax (/ˈsɪntæks/ SIN-taks)[1][2] is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, hierarchical sentence structure (constituency),[3] agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). Diverse approaches, such as generative grammar and functional grammar, offer unique perspectives on syntax, reflecting its complexity and centrality to understanding human language.

The word syntax comes from the ancient Greek word σύνταξις, meaning an orderly or systematic arrangement, which consists of σύν- (syn-, "together" or "alike") and τάξις (táxis, "arrangement"). In Hellenistic Greek, this also specifically developed a use referring to the grammatical order of words, with a slightly altered spelling: συντάσσειν. The English term, which first appeared in 1548, is partly borrowed from Latin (syntaxis) and Greek, though the Latin term itself developed from the Greek.[4]

The field of syntax contains a number of topics that a syntactic theory is often designed to handle. The relation between the topics is treated differently in different theories, and some of them may not be considered to be distinct but instead to be derived from one another (e.g. word order can be seen as the result of movement rules derived from grammatical relations).

One basic description of a language's syntax is the sequence in which the subject (S), verb (V), and object (O) usually appear in sentences. Over 85% of languages usually place the subject first, either in the sequence SVO or the sequence SOV. The other possible sequences are VSO, VOS, OVS, and OSV, the last three of which are rare. In most generative theories of syntax, the surface differences arise from a more complex clausal phrase structure, and each order may be compatible with multiple derivations. However, word order can also reflect the semantics or function of the ordered elements.[5]

Another description of a language considers the set of possible grammatical relations in a language or in general, and how they behave in relation to one another in the morphosyntactic alignment of the language. The description of grammatical relations can also reflect transitivity, passivization, and head-dependent marking or other agreement. Languages have different criteria for grammatical relations. For example, subjecthood criteria may have implications for how the subject is referred to from a relative clause or is coreferential with an element in a non-finite clause.[6]

Constituency is the feature of being a constituent and how words can work together to form a constituent (or phrase). Constituents are often moved as units, and the constituent can be the domain of agreement. Some languages allow discontinuous phrases, in which words belonging to the same constituent are not immediately adjacent but are broken up by other constituents. Constituents may be recursive, as they may consist of other constituents, potentially of the same type.

The Aṣṭādhyāyī of Pāṇini, from c. 4th century BC in Ancient India, is often cited as an example of a premodern work that approaches the sophistication of a modern syntactic theory, since works on grammar had been written long before modern syntax came about.[7] In the West, the school of thought that came to be known as "traditional grammar" began with the work of Dionysius Thrax.
For centuries, a framework known as grammaire générale, first expounded in 1660 by Antoine Arnauld and Claude Lancelot in a book of the same title, dominated work in syntax.[8] Its basic premise was the assumption that language is a direct reflection of thought processes, so that there is a single most natural way to express a thought.[9] However, in the 19th century, with the development of historical-comparative linguistics, linguists began to realize the sheer diversity of human language and to question fundamental assumptions about the relationship between language and logic. It became apparent that there was no such thing as the most natural way to express a thought, and so logic could no longer be relied upon as a basis for studying the structure of language.

The Port-Royal grammar modeled the study of syntax upon that of logic. (Indeed, large parts of Port-Royal Logic were copied or adapted from the Grammaire générale.[10]) Syntactic categories were identified with logical ones, and all sentences were analyzed in terms of "subject, copula, predicate". Initially, that view was adopted even by the early comparative linguists such as Franz Bopp. The central role of syntax within theoretical linguistics became clear only in the 20th century, which could reasonably be called the "century of syntactic theory" as far as linguistics is concerned. (For a detailed and critical survey of the history of syntax in the last two centuries, see the monumental work by Giorgio Graffi (2001).[11])

There are a number of theoretical approaches to the discipline of syntax. One school of thought, founded in the works of Derek Bickerton,[12] sees syntax as a branch of biology, since it conceives of syntax as the study of linguistic knowledge as embodied in the human mind. Other linguists (e.g., Gerald Gazdar) take a more Platonistic view, since they regard syntax to be the study of an abstract formal system.[13] Yet others (e.g., Joseph Greenberg) consider syntax a taxonomical device to reach broad generalizations across languages.

Syntacticians have attempted to explain the causes of word-order variation within individual languages and cross-linguistically. Much of such work has been done within the framework of generative grammar, which holds that syntax depends on a genetic endowment common to the human species. In that framework and in others, linguistic typology and universals have been primary explicanda.[14] Alternative explanations, such as those by functional linguists, have been sought in language processing. It is suggested that the brain finds it easier to parse syntactic patterns that are either right- or left-branching but not mixed. The most widely held approach is the performance-grammar correspondence hypothesis of John A. Hawkins, who suggests that language is a non-innate adaptation to innate cognitive mechanisms. Cross-linguistic tendencies are considered as being based on language users' preference for grammars that are organized efficiently and on their avoidance of word orderings that cause processing difficulty.
Some languages, however, exhibit regular inefficient patterning. These include the VO languages Chinese, with the adpositional phrase before the verb, and Finnish, which has postpositions; but there are few other profoundly exceptional languages.[15] More recently, it has been suggested that the left- versus right-branching patterns are cross-linguistically related only to the place of role-marking connectives (adpositions and subordinators), which links the phenomena with the semantic mapping of sentences.[16]

Dependency grammar is an approach to sentence structure in which syntactic units are arranged according to the dependency relation, as opposed to the constituency relation of phrase structure grammars. Dependencies are directed links between words. The (finite) verb is seen as the root of all clause structure, and all the other words in the clause are either directly or indirectly dependent on this root (i.e. the verb). Prominent dependency-based theories of syntax include Meaning-Text Theory, Functional Generative Description, and Word Grammar. Lucien Tesnière (1893–1954) is widely seen as the father of modern dependency-based theories of syntax and grammar. He argued strongly against the binary division of the clause into subject and predicate that is associated with the grammars of his day (S → NP VP) and remains at the core of most phrase structure grammars; in place of that division, he positioned the verb as the root of all clause structure.[17]

Categorial grammar is an approach in which constituents combine as function and argument, according to combinatory possibilities specified in their syntactic categories. For example, other approaches might posit a rule that combines a noun phrase (NP) and a verb phrase (VP), but CG would posit a syntactic category NP and another, NP\S, read as "a category that searches to the left (indicated by \) for an NP (the element on the left) and outputs a sentence (the element on the right)". Thus, the syntactic category for an intransitive verb is a complex formula representing the fact that the verb acts as a function word requiring an NP as an input and producing a sentence-level structure as an output. The complex category is notated as (NP\S) instead of V. The category of a transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence. That is notated as ((NP\S)/NP), which means "a category that searches to the right (indicated by /) for an NP (the object) and generates a function (equivalent to the VP) which is (NP\S), which in turn represents a function that searches to the left for an NP and produces a sentence" (a toy implementation of this combinatorics appears at the end of this passage). Tree-adjoining grammar is a categorial grammar that adds partial tree structures to the categories.

Theoretical approaches to syntax that are based upon probability theory are known as stochastic grammars. One common implementation of such an approach makes use of a neural network or connectionism.

Functionalist models of grammar study the form-function interaction by performing a structural and a functional analysis.

Generative syntax is the study of syntax within the overarching framework of generative grammar. Generative theories of syntax typically propose analyses of grammatical patterns using formal tools such as phrase structure grammars augmented with additional operations such as syntactic movement. Their goal in analyzing a particular language is to specify rules which generate all and only the expressions which are well-formed in that language.
In doing so, they seek to identify innate domain-specific principles of linguistic cognition, in line with the wider goals of the generative enterprise. Generative syntax is among the approaches that adopt the principle of the autonomy of syntax, by assuming that meaning and communicative intent are determined by the syntax rather than the other way around. Generative syntax was proposed in the late 1950s by Noam Chomsky, building on earlier work by Zellig Harris, Louis Hjelmslev, and others. Since then, numerous theories have been proposed under its umbrella, and other theories have found their origin in the generative paradigm. The Cognitive Linguistics framework stems from generative grammar but adheres to evolutionary, rather than Chomskyan, linguistics. Cognitive models often recognise the generative assumption that the object belongs to the verb phrase.
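Returning to the categorial grammar notation introduced above, its bookkeeping can be played out in a few lines of code. The encoding, the lexicon, and the function name combine below are invented for this illustration; the conventions follow the text, where NP\S seeks an NP to its left and (NP\S)/NP seeks an NP to its right.

# Categories are either a plain string ("NP", "S") or a triple:
#   (res, "/", arg):  seeks arg to its RIGHT, yields res
#   (arg, "\\", res): seeks arg to its LEFT, yields res

def combine(left, right):
    """Forward application (X/Y, Y -> X) or backward application (Y, Y\\X -> X)."""
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    if isinstance(right, tuple) and right[1] == "\\" and right[0] == left:
        return right[2]
    return None

NP_S = ("NP", "\\", "S")        # intransitive verb category NP\S
TV   = (NP_S, "/", "NP")        # transitive verb category (NP\S)/NP

lexicon = {"John": "NP", "cheesecake": "NP", "sleeps": NP_S, "eats": TV}

vp = combine(lexicon["eats"], lexicon["cheesecake"])  # (NP\S)/NP + NP -> NP\S
print(combine(lexicon["John"], vp))                   # NP + NP\S -> 'S'
print(combine(lexicon["John"], lexicon["sleeps"]))    # 'S' as well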
https://en.wikipedia.org/wiki/Syntax