In linguistics, a blend—also known as a blend word, lexical blend, or portmanteau[a]—is a word formed by combining the meanings, and parts of the sounds, of two or more words together.[2][3][4] English examples include smog, coined by blending smoke and fog,[3][5] and motel, from motor (motorist) and hotel.[6] A blend is similar to a contraction. On the one hand, mainstream blends tend to be formed at a particular historical moment followed by a rapid rise in popularity. On the other hand, contractions are formed by the gradual drifting together of words over time due to the words commonly appearing together in sequence, such as do not naturally becoming don't (phonologically, /duː nɒt/ becoming /doʊnt/). A blend also differs from a compound, which fully preserves the stems of the original words. The British lecturer Valerie Adams's 1973 Introduction to Modern English Word-Formation explains that "In words such as motel..., hotel is represented by various shorter substitutes – ‑otel... – which I shall call splinters. Words containing splinters I shall call blends".[7][n 1] Thus, at least one of the parts of a blend, strictly speaking, is not a complete morpheme, but instead a mere splinter or leftover word fragment. For instance, starfish is a compound, not a blend, of star and fish, as it includes both words in full. However, if it were called a "stish" or a "starsh", it would be a blend.
Furthermore, when blends are formed by shortening established compounds or phrases, they can be considered clipped compounds, such as romcom for romantic comedy.[8]

Blends of two or more words may be classified from each of three viewpoints: morphotactic, morphonological, and morphosemantic.[9]

Blends may be classified morphotactically into two kinds: total and partial.[9] In a total blend, each of the words creating the blend is reduced to a mere splinter.[9] Some linguists limit blends to these (perhaps with additional conditions): for example, Ingo Plag considers "proper blends" to be total blends that semantically are coordinate, the remainder being "shortened compounds".[10] Commonly for English blends, the beginning of one word is followed by the end of another. Much less commonly in English, the beginning of one word may be followed by the beginning of another; some linguists do not regard beginning+beginning concatenations as blends, instead calling them complex clippings,[11] clipping compounds[12] or clipped compounds.[13] Unusually in English, the end of one word may be followed by the end of another. A splinter of one word may also replace part of another, as in two words coined by Lewis Carroll in "Jabberwocky"; these are sometimes termed intercalative blends, and are among the original "portmanteaus" for which this meaning of the word was created.[14]

In a partial blend, one entire word is concatenated with a splinter from another.[9] Some linguists do not recognize these as blends.[15] An entire word may be followed by a splinter, or a splinter may be followed by an entire word. An entire word may also replace part of another; these have been called sandwich words,[16] and classed among intercalative blends.[14] (When two words are combined in their entirety, the result is considered a compound word rather than a blend. For example, bagpipe is a compound, not a blend, of bag and pipe.)
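The commonest English pattern described above—a beginning splinter of one word joined to an end splinter of another—can be sketched as a simple string operation. The split points below are illustrative only; real blends are chosen by ear, not by a fixed character count:

```python
def blend(first: str, second: str, keep_first: int, keep_second: int) -> str:
    """Form a total blend from a prefix splinter of `first`
    and a suffix splinter of `second`."""
    return first[:keep_first] + second[-keep_second:]

# Classic examples from the text:
print(blend("smoke", "fog", 2, 2))        # sm + og   -> "smog"
print(blend("motor", "hotel", 1, 4))      # m  + otel -> "motel"
print(blend("breakfast", "lunch", 2, 4))  # br + unch -> "brunch"
```

Note that neither splinter here is a complete morpheme, which is exactly what distinguishes these outputs from compounds such as starfish.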
Morphonologically, blends fall into two kinds: overlapping and non-overlapping.[9]

Overlapping blends are those for which the ingredients' consonants, vowels or even syllables overlap to some extent. The overlap can be of different kinds.[9] These are also called haplologic blends.[17] There may be an overlap that is both phonological and orthographic, but with no other shortening. The overlap may be both phonological and orthographic, with some additional shortening to at least one of the ingredients; such an overlap may be discontinuous, and these are also termed imperfect blends.[18][19] It can occur with three components. The phonological overlap need not also be orthographic. If the phonological but non-orthographic overlap encompasses the whole of the shorter ingredient, then the effect depends on orthography alone (these are also called orthographic blends[20]). An orthographic overlap need not also be phonological. For some linguists, an overlap is a condition for a blend.[21]

Non-overlapping blends (also called substitution blends) have no overlap, whether phonological or orthographic.

Morphosemantically, blends fall into two kinds: attributive and coordinate.[9] Attributive blends (also called syntactic or telescope blends) are blends where one of the ingredients is the head and the other is attributive. A porta-light is a portable light, not a 'light-emitting' or light portability; in this instance, light is the head, while porta- is attributive. A snobject is a snobbery-satisfying object and not an objective or other kind of snob; object is the head.[9] As is also true for (conventional, non-blend) attributive compounds (among which bathroom, for example, is a kind of room, not a kind of bath), the attributive blends of English are mostly head-final and mostly endocentric. As an example of an exocentric attributive blend, Fruitopia may metaphorically take the buyer to a fruity utopia (and not a utopian fruit); however, it is not a utopia but a drink.
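The simplest overlapping case—where a suffix of the first word coincides orthographically with a prefix of the second—can be found mechanically. The example word slanguage (slang + language) is an illustrative overlapping blend chosen for this sketch, not one drawn from the passage above:

```python
def overlap_blend(first: str, second: str) -> str:
    """Fuse two words at their longest shared edge: the longest
    suffix of `first` that is also a prefix of `second`."""
    for size in range(min(len(first), len(second)), 0, -1):
        if first[-size:] == second[:size]:
            return first + second[size:]
    return first + second  # no overlap: plain concatenation

print(overlap_blend("slang", "language"))  # shared "lang" -> "slanguage"
```

This captures only orthographic overlap; as the text notes, real overlaps may be purely phonological, or discontinuous, which simple string matching cannot detect.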
Coordinate blends (also called associative or portmanteau blends) combine two words having equal status, and have two heads. Thus brunch is neither a breakfasty lunch nor a lunchtime breakfast but instead some hybrid of breakfast and lunch; Oxbridge is equally Oxford and Cambridge universities. This too parallels (conventional, non-blend) compounds: an actor–director is equally an actor and a director.[9] Two kinds of coordinate blends are particularly conspicuous: those that combine (near-)synonyms and those that combine (near-)opposites.

Blending can also apply to roots rather than words, for instance in Israeli Hebrew: "There are two possible etymological analyses for Israeli Hebrew כספר kaspár 'bank clerk, teller'. The first is that it consists of (Hebrew>) Israeli כסף késef 'money' and the (International/Hebrew>) Israeli agentive suffix ר- -ár. The second is that it is a quasi-portmanteau word which blends כסף késef 'money' and (Hebrew>) Israeli ספר √spr 'count'. Israeli Hebrew כספר kaspár started as a brand name but soon entered the common language. Even if the second analysis is the correct one, the final syllable ר- -ár apparently facilitated nativization since it was regarded as the Hebrew suffix ר- -år (probably of Persian pedigree), which usually refers to craftsmen and professionals, for instance as in Mendele Mocher Sforim's coinage סמרטוטר smartutár 'rag-dealer'."[24]

Blending may occur with an error in lexical selection, the process by which a speaker uses his semantic knowledge to choose words. Lewis Carroll's explanation, which gave rise to the use of 'portmanteau' for such combinations, was: Humpty Dumpty's theory, of two meanings packed into one word like a portmanteau, seems to me the right explanation for all. For instance, take the two words "fuming" and "furious." Make up your mind that you will say both words ...
you will say "frumious."[25] The errors are based on similarity of meanings, rather than phonological similarities, and the morphemes or phonemes stay in the same position within the syllable.[26]

Some languages, like Japanese, encourage the shortening and merging of borrowed foreign words (as in gairaigo), because they are long or difficult to pronounce in the target language. For example, karaoke, a combination of the Japanese word kara (meaning empty) and the clipped form oke of the English loanword "orchestra" (J. ōkesutora, オーケストラ), is a Japanese blend that has entered the English language. The Vietnamese language also encourages blend words formed from Sino-Vietnamese vocabulary. For example, the term Việt Cộng is derived from the first syllables of "Việt Nam" (Vietnam) and "Cộng sản" (communist).

Many corporate brand names, trademarks, and initiatives, and names of corporations and organizations themselves, are blends. For example, Wiktionary, one of Wikipedia's sister projects, is a blend of wiki and dictionary.

The word portmanteau was introduced in this sense by Lewis Carroll in the book Through the Looking-Glass (1871),[27] where Humpty Dumpty explains to Alice the coinage of unusual words used in "Jabberwocky".[28] Slithy means "slimy and lithe" and mimsy means "miserable and flimsy". Humpty Dumpty explains to Alice the practice of combining words in various ways, comparing it to the then-common type of luggage, which opens into two equal parts: You see it's like a portmanteau—there are two meanings packed up into one word. In his introduction to his 1876 poem The Hunting of the Snark, Carroll again uses portmanteau when discussing lexical selection:[28] Humpty Dumpty's theory, of two meanings packed into one word like a portmanteau, seems to me the right explanation for all. For instance, take the two words "fuming" and "furious".
Make up your mind that you will say both words, but leave it unsettled which you will say first … if you have the rarest of gifts, a perfectly balanced mind, you will say "frumious".

In then-contemporary English, a portmanteau was a suitcase that opened into two equal sections. According to the OED Online, a portmanteau is a "case or bag for carrying clothing and other belongings when travelling; (originally) one of a form suitable for carrying on horseback; (now esp.) one in the form of a stiff leather case hinged at the back to open into two equal parts".[29] According to The American Heritage Dictionary of the English Language (AHD), the etymology of the word is the French porte-manteau, from porter, "to carry", and manteau, "cloak" (from Old French mantel, from Latin mantellum).[30] According to the OED Online, the etymology of the word is the "officer who carries the mantle of a person in a high position (1507 in Middle French), case or bag for carrying clothing (1547), clothes rack (1640)".[29] In modern French, a porte-manteau is a clothes valet, a coat-tree or similar article of furniture for hanging up jackets, hats, umbrellas and the like.[31][32][33]

An occasional synonym for "portmanteau word" is frankenword, an autological word exemplifying the phenomenon it describes, blending "Frankenstein" and "word".[34]

Many neologisms are examples of blends, but many blends have become part of the lexicon.[28] In Punch in 1896, the word brunch (breakfast + lunch) was introduced as a "portmanteau word".[35] In 1964, the newly independent African republic of Tanganyika and Zanzibar chose the portmanteau word Tanzania as its name. Similarly, Eurasia is a portmanteau of Europe and Asia. Some city names are portmanteaus of the border regions they straddle: Texarkana spreads across the Texas–Arkansas–Louisiana border, while Calexico and Mexicali are respectively the American and Mexican sides of a single conurbation.
A scientific example is a liger, which is a cross between a male lion and a female tiger (a tigon is a similar cross in which the male is a tiger). A more modern blend, of cat and rabbit, was coined in 2023 on X (formerly known as Twitter) to describe a circulating image of a mix between the two, producing the word 'cabbit'.

Many company or brand names are portmanteaus, including Microsoft, a portmanteau of microcomputer and software; the cheese Cambozola, which combines a rind similar to Camembert's with the same mould used to make Gorgonzola; passenger rail company Amtrak, a portmanteau of America and track; Velcro, a portmanteau of the French velours (velvet) and crochet (hook); Verizon, a portmanteau of veritas (Latin for truth) and horizon; Viacom, a portmanteau of video and audio communications; and ComEd (a Chicago-area electric utility company), a portmanteau of Commonwealth and Edison.

Jeoportmanteau! is a recurring category on the American television quiz show Jeopardy! The category's name is itself a portmanteau of the words Jeopardy and portmanteau. Responses in the category are portmanteaus constructed by fitting two words together.

Portmanteau words may be produced by joining proper nouns with common nouns, such as "gerrymandering", which refers to the scheme of Massachusetts Governor Elbridge Gerry for politically contrived redistricting; the perimeter of one of the districts thereby created resembled a very curvy salamander in outline. The term gerrymander has itself contributed to the portmanteau terms bjelkemander and playmander. Oxbridge is a common portmanteau for the UK's two oldest universities, those of Oxford and Cambridge. In 2016, Britain's planned exit from the European Union became known as "Brexit". The word refudiate was famously used by Sarah Palin when she misspoke, conflating the words refute and repudiate.
Though the word was a gaffe, it was recognized as the New Oxford American Dictionary's "Word of the Year" in 2010.[36]

The business lexicon includes words like "advertainment" (advertising as entertainment), "advertorial" (a blurred distinction between advertising and editorial), "infotainment" (information about entertainment or itself intended to entertain by its manner of presentation), and "infomercial" (informational commercial).

Company and product names may also use portmanteau words: examples include Timex (a portmanteau of Time [referring to Time magazine] and Kleenex),[37] Renault's Twingo (a combination of twist, swing and tango),[38] and Garmin (a portmanteau of company founders' first names Gary Burrell and Min Kao). Desilu Productions was a Los Angeles–based company jointly owned by actor couple Desi Arnaz and Lucille Ball. Miramax is the combination of the first names of the parents of the Weinstein brothers.

Two proper names can also be used in creating a portmanteau word in reference to the partnership between people, especially in cases where both persons are well known, or sometimes to produce epithets such as "Billary" (referring to former United States president Bill Clinton and his wife, former United States Secretary of State Hillary Clinton). In this example of recent American political history, the purpose of blending is not so much to combine the meanings of the source words but "to suggest a resemblance of one named person to the other"; the effect is often derogatory, as linguist Benjamin Zimmer states.[39] For instance, Putler is used by critics of Vladimir Putin, merging his name with Adolf Hitler. By contrast, the public, including the media, use portmanteaus to refer to their favorite pairings as a way to "...giv[e] people an essence of who they are within the same name."[40] This is particularly seen in cases of fictional and real-life "supercouples". An early known example, Bennifer, referred to film stars Ben Affleck and Jennifer Lopez.
Other examples include Brangelina (Brad Pitt and Angelina Jolie) and TomKat (Tom Cruise and Katie Holmes).[40] On Wednesday, 28 June 2017, The New York Times crossword included the quip, "How I wish Natalie Portman dated Jacques Cousteau, so I could call them 'Portmanteau'".[41]

Holidays are another example, as in Thanksgivukkah, a portmanteau neologism given to the convergence of the American holiday of Thanksgiving and the first day of the Jewish holiday of Hanukkah on Thursday, 28 November 2013.[42][43] Chrismukkah is another pop-culture portmanteau neologism popularized by the TV drama The O.C., a merging of the holidays of Christianity's Christmas and Judaism's Hanukkah. The Disney film Big Hero 6 is situated in a fictitious city called "San Fransokyo", which is a portmanteau of two real locations, San Francisco and Tokyo.[44]

Modern Hebrew abounds with blending. Along with CD, or simply דיסק (disk), Hebrew has the blend תקליטור (taklitór), which consists of תקליט (taklít 'phonograph record') and אור (or 'light'). Other blends in Hebrew include the following:[45] Sometimes the root of the second word is truncated, giving rise to a blend that resembles an acrostic.

A few portmanteaus are in use in modern Irish. There is a tradition of linguistic purism in Icelandic, and neologisms are frequently created from pre-existing words. For example, tölva 'computer' is a portmanteau of tala 'digit, number' and völva 'oracle, seeress'.[53]

In Indonesian, portmanteaus and acronyms are very common in both formal and informal usage. A common use of a portmanteau in the Indonesian language is to refer to locations and areas of the country. For example, Jabodetabek is a portmanteau that refers to the Jakarta metropolitan area or Greater Jakarta, which includes the regions of Jakarta, Bogor, Depok, Tangerang, and Bekasi.
In the Malaysian national language of Bahasa Melayu, the word jadong was constructed out of three Malay words for evil (jahat), stupid (bodoh) and arrogant (sombong), to be used on the worst kinds of community and religious leaders who mislead naive, submissive and powerless folk under their thrall.[citation needed]

A very common type of portmanteau in Japanese forms one word from the beginnings of two others (that is, from two back-clippings).[54] The portion of each input word retained is usually two morae, which is tantamount to one kanji in most words written in kanji. The inputs to the process can be native words, Sino-Japanese words, gairaigo (later borrowings), or combinations thereof. A Sino-Japanese example is the name 東大 (Tōdai) for the University of Tokyo, in full 東京大学 (Tōkyō daigaku). With borrowings, typical results are words such as パソコン (pasokon), meaning personal computer (PC), which despite being formed of English elements does not exist in English; it is a uniquely Japanese contraction of the English personal computer (パーソナル・コンピュータ, pāsonaru konpyūta). Another example, Pokémon (ポケモン), is a contracted form of the English words pocket (ポケット, poketto) and monsters (モンスター, monsutā).[55] A famous example of a blend with mixed sources is karaoke (カラオケ, karaoke), blending the Japanese word for empty (空, kara) and the Greek word orchestra (オーケストラ, ōkesutora). The Japanese fad of egg-shaped keychain pet toys from the 1990s, Tamagotchi, is a portmanteau combining the two Japanese words tamago (たまご, 'egg') and uotchi (ウオッチ, 'watch'). The portmanteau can also be seen as a combination of tamago (たまご, 'egg') and tomodachi (友だち, 'friend'). Some titles also are portmanteaus, such as Hetalia (ヘタリア), which came from hetare (ヘタレ, 'idiot') and Italia (イタリア, 'Italy'). Another example is Servamp, which came from the English words servant (サーヴァント) and vampire (ヴァンパイア).

In Brazilian Portuguese, portmanteaus are usually slang. In European Portuguese, portmanteaus are also used.
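The Japanese back-clipping pattern described above can be sketched as taking the first two kana of each input word. This is only an approximation of the "two morae" rule stated in the text: small kana such as ャ or ュ share a mora with the preceding character, so a faithful implementation would need a proper mora segmenter rather than naive slicing:

```python
def back_clip_blend(first: str, second: str, keep: int = 2) -> str:
    """Join the first `keep` kana of each input word
    (a rough stand-in for taking two morae from each)."""
    return first[:keep] + second[:keep]

# Examples from the text:
print(back_clip_blend("ポケット", "モンスター"))    # -> ポケモン (Pokémon)
print(back_clip_blend("リモート", "コントロール"))  # -> リモコン (rimokon)
```

Both examples happen to work because each retained piece is exactly two kana and two morae; words containing small kana or other multi-kana morae would break the approximation.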
Although traditionally uncommon in Spanish, portmanteaus are increasingly finding their way into the language, mainly for marketing and commercial purposes. Examples in Mexican Spanish include cafebrería, from combining cafetería 'coffee shop' and librería 'bookstore', or teletón 'telethon', from combining televisión and maratón. Portmanteaus are also frequently used to make commercial brands, such as "chocolleta" from "chocolate" + "galleta" (cookie). They are also often used to create business company names, especially for small, family-owned businesses, where owners' names are combined to create a unique name (such as Rocar, from "Roberto" + "Carlos", or Mafer, from "María" + "Fernanda"). These usages help to create distinguishable trademarks. It is a common occurrence for people with two names to combine them into a single nickname, like Juanca for Juan Carlos, or Marilú for María de Lourdes. A somewhat popular example in Spain is the word gallifante,[64] a portmanteau of gallo y elefante 'cockerel and elephant'. It was the prize on the Spanish version of the children's TV show Child's Play (Spanish: Juego de niños), which ran on the public television channel La 1 of Televisión Española (TVE) from 1988 to 1992.[65]

In linguistics, a blend is an amalgamation or fusion of independent lexemes, while a portmanteau or portmanteau morph is a single morph that is analyzed as representing two (or more) underlying morphemes.[66][67][68][69] For example, in the Latin word animalis, the ending -is is a portmanteau morph because it is an unanalysable combination of two morphemes: a morpheme for the singular number and one for the genitive case. In English, two separate morphs are used: of an animal. Other examples include French: *à le ⇒ au [o] and *de le ⇒ du [dy].[66]
https://en.wikipedia.org/wiki/Portmanteau
In linguistics, word formation is an ambiguous term[1] that can refer to either of two processes. A common method of word formation is the attachment of inflectional or derivational affixes. Inflection is modifying a word for the purpose of fitting it into the grammatical structure of a sentence.[4]

An acronym is a word formed from the first letters of other words.[6] Acronyms are usually written entirely in capital letters, though some words originating as acronyms, like radar, are now treated as common nouns.[7] Initialisms are similar to acronyms, but the letters are pronounced as a series of letters.

In linguistics, back-formation is the process of forming a new word by removing actual affixes, or parts of the word that are re-analyzed as an affix, from other words to create a base.[5] The process is motivated by analogy: edit is to editor as act is to actor. This process leads to many denominal verbs. The productivity of back-formation is limited, with the most productive forms of back-formation being hypocoristics.[5]

A lexical blend is a complex word typically made of two word fragments. Although blending is listed under the nonmorphological heading, there are debates as to how far blending is a matter of morphology.[1]

Compounding is the process of combining two bases, where each base may be a fully fledged word. Compounding is a topic relevant to syntax, semantics, and morphology.[2]

Linguists argue that hashtags are words and hashtagging is a morphological process.[8][9] Social media users view the syntax of existing viral hashtags as guiding principles for creating new ones.
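The acronym pattern described above—a word built from the first letters of other words—is mechanical enough to sketch. Note that many real acronyms deviate from the pure pattern: radar, for instance, keeps two letters of "radio" and skips nothing, and NASA drops the "and". The sketch below handles only the straightforward case:

```python
def acronym(phrase: str) -> str:
    """Build an acronym from the first letter of each word in the phrase."""
    return "".join(word[0].upper() for word in phrase.split())

print(acronym("North Atlantic Treaty Organization"))  # -> "NATO"
```

An initialism would be produced by the same function; the difference lies only in pronunciation (letter by letter rather than as a word), which no string operation can capture.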
A hashtag's popularity is therefore influenced more by the presence of popular hashtags with similar syntactic patterns than by its conciseness and clarity.[10]

There are processes for forming new dictionary items which are not considered under the umbrella of word formation.[1] One specific example is semantic change, which is a change in a single word's meaning. The boundary between word formation and semantic change can be difficult to define, as a new use of an old word can be seen as a new word derived from an old one and identical to it in form.
https://en.wikipedia.org/wiki/Word_formation
Gairaigo (外来語, Japanese pronunciation: [ɡaiɾaiɡo]) is Japanese for "loan word", and indicates a transcription into Japanese. In particular, the word usually refers to a Japanese word of foreign origin that was not borrowed in ancient times from Old or Middle Chinese (especially Literary Chinese), but in modern times, primarily from English, Portuguese, Dutch, and modern Chinese languages, such as Standard Chinese and Cantonese. These are primarily written in the katakana phonetic script, with a few older terms written in Chinese characters (kanji); the latter are known as ateji.

Japanese has many loan words from Chinese, accounting for a sizeable fraction of the language. These words were borrowed during ancient times and are written in kanji. Modern Chinese loanwords are generally considered gairaigo and written in katakana, or sometimes written in kanji (either with the more familiar word as a base text gloss and the intended katakana as furigana or vice versa); pronunciation of modern Chinese loanwords generally differs from the corresponding usual pronunciation of the characters in Japanese. For a list of terms, see the List of gairaigo and wasei-eigo terms.

Japanese has a long history of borrowing from foreign languages. It has been doing so since the late fourth century AD. Some ancient gairaigo words are still being used nowadays, but there are also many kinds of gairaigo words that were borrowed more recently. Most, but not all, modern gairaigo are derived from English, particularly in the post-World War II era (after 1945). Words are taken from English for concepts that do not exist in Japanese, but also for other reasons, such as a preference for English terms or fashionability – many gairaigo have Japanese near-synonyms.[1]

In the past, more gairaigo came from other languages besides English. The first period of borrowing occurred during the late fourth century AD, when a massive number of Chinese characters were adopted.
This period could be considered one of the most significant in the history of gairaigo, because it was the first moment when the written communication systems using kanji were formed.

The first non-Asian countries to have extensive contact with Japan were Portugal and the Netherlands in the 16th and 17th centuries, and Japanese has several loanwords from Portuguese and Dutch, many of which are still used. The interaction between Japan and Portugal lasted from the late Middle Ages until the early Edo era (1549–1638). An example of the loanwords from Portuguese is rasha, meaning a thick wool cloth that was indispensable during the period, but not used often nowadays. In the Edo era (1603–1853), words from the Dutch language, such as glas, gas, and alcohol, started to have an impact in the Japanese language. Also, during the Edo era, many medical words like Gaze (meaning gauze) and neuroses came from German, and many artistic words such as rouge and dessin came from French.

Most of the gairaigo since the nineteenth century came from English. In the Meiji era (late 19th to early 20th century), Japan also had extensive contact with Germany, and gained many loanwords from German, particularly for Western medicine, which the Japanese learned from the Germans. Notable examples include arubaito (アルバイト, part-time work) (often abbreviated to baito (バイト)) from German Arbeit ("work"), and enerugī (エネルギー, energy) from German Energie. They also gained several loanwords from French at this time.

In modern times, there are some borrowings from Modern Chinese and Modern Korean, particularly for food names, and these continue as new foods become popular in Japan; standard examples include ūron (烏龍 ウーロン "oolong tea") and kimuchi (キムチ "kimchi"), respectively, while more specialized examples include hoikōrō (回鍋肉 ホイコーロー "twice cooked pork") from Chinese, and bibinba (ビビンバ "bibimbap") from Korean.
Chinese words are often represented with Chinese characters, but with a katakana gloss to indicate the unusual pronunciation, while Korean words, which no longer regularly use Chinese characters (hanja), are represented in katakana. There is sometimes ambiguity in pronunciation of these borrowings, particularly voicing, such as to (ト) vs. do (ド) – compare English's Daoism–Taoism romanization issue.

Some Modern Chinese borrowings occurred during the 17th and 18th centuries, due both to trade and to resident Chinese in Nagasaki, and a more recent wave of Buddhist monks, the Ōbaku school, whose words are derived from languages spoken in Fujian. More recent Korean borrowings are influenced both by proximity and by the substantial population of Koreans in Japan since the early 20th century.

In 1889, there were 85 gairaigo of Dutch origin and 72 gairaigo of English origin listed in a Japanese dictionary.[which?][citation needed] From 1911 to 1924, 51% of gairaigo listed in dictionaries were of English origin, and today, 80% to 90% of gairaigo are of English origin.[citation needed]

There have been some borrowings from Sanskrit as well, most notably for religious terms. These words are generally transliterations which were unknowingly borrowed from Chinese.[2]

In some cases, doublets or etymologically related words from different languages may be borrowed and sometimes used synonymously or sometimes used distinctly. The most common basic example is kappu (カップ, "cup (with handle), mug") from English cup versus earlier koppu (コップ, "cup (without handle), tumbler") from Dutch kop or Portuguese copo, where they are used distinctly. A similar example is gurasu (グラス, "glass (drinkware)") from English glass versus earlier garasu (ガラス, "glass (material); pane") from Dutch glas; thus garasu no gurasu (ガラスのグラス, "a glass glass") is not redundant but means a drinking vessel specifically made of glass (e.g. as opposed to plastic).
A more technical example is sorubitōru (ソルビトール) (English sorbitol) versus sorubitto (ソルビット) (German Sorbit), used synonymously.

In addition to borrowings, which adopted both meaning and pronunciation, Japanese also has an extensive set of new words that are crafted using existing Chinese morphemes to express a foreign term. These are known as wasei-kango, "Japanese-made Chinese words". This process is similar to the creation of classical compounds in European languages. Many were coined in the Meiji period, and these are very common in medical terminology. These are not considered gairaigo, as the foreign word itself has not been borrowed, and sometimes a translation and a borrowing are both used.

In written Japanese, gairaigo are usually written in katakana. Older loanwords are also often written using ateji (kanji chosen for their phonetic value, or sometimes for meaning instead) or hiragana. For example, tabako from Portuguese, meaning "tobacco" or "cigarette", can be written タバコ (katakana), たばこ (hiragana), or 煙草 (the kanji for "smoke grass", but still pronounced tabako – an example of meaning-based ateji), with no change in meaning. Another common older example is tempura, which is usually written in mixed kanji/kana (mazegaki) as 天ぷら, but is also written as てんぷら, テンプラ, 天麩羅 (rare kanji) or 天婦羅 (common kanji) – here it is sound-based ateji, with the characters used for their phonetic values only.

A few gairaigo are sometimes written with a single kanji character (chosen for meaning or newly created); consequently, these are considered kun'yomi rather than ateji, because the single characters are used for meaning rather than for sound, and are often written as katakana. An example is pēji (頁、ページ, page); see single-character loan words for details.
There are numerous causes for confusion in gairaigo: (1) gairaigo are often abbreviated, (2) their meaning may change (either in Japanese or in the original language after the borrowing has occurred), (3) many words are not borrowed but rather coined in Japanese (wasei-eigo, "English made in Japan"), and (4) not all gairaigo come from English.

Due to Japanese pronunciation rules and its mora-based phonology, many words take a significant amount of time to pronounce. For example, a one-syllable word in a language such as English (brake) often becomes several syllables when pronounced in Japanese (in this case, burēki (ブレーキ), which amounts to four moras). The Japanese language therefore contains many abbreviated and contracted words, and there is a strong tendency to shorten words. This also occurs with gairaigo words. For example, "remote control", when transcribed in Japanese, becomes rimōto kontorōru (リモートコントロール), but this has then been simplified to rimokon (リモコン). For another example, the transcribed word for "department store" is depātomento sutoa (デパートメントストア) but has since been shortened to depāto (デパート). Clipped compounds, such as wāpuro (ワープロ) for "word processor", are common. Karaoke (カラオケ), a combination of the Japanese word kara "empty" and the clipped form, oke, of the English loanword "orchestra" (J. ōkesutora オーケストラ), is a clipped compound that has entered the English language. Japanese ordinarily takes the first part of a foreign word, but in some cases the second syllable is used instead; notable examples from English include hōmu (ホーム, from "(train station) plat-form") and nerushatsu (ネルシャツ, "flan-nel shirt").

Some Japanese people are not aware of the origins of the words in their language, and may assume that all gairaigo words are legitimate English words. For example, Japanese people may use words like tēma (テーマ, from German Thema, meaning "topic/theme") in English, or rimokon, not realizing that the contraction of "remote control" to rimokon took place in Japan.
Similarly, gairaigo, while making Japanese easier to learn for foreign students in some cases, can also cause problems due to independent semantic progression. For example, English "stove", from which sutōbu (ストーブ) is derived, has multiple meanings. Americans often use the word to mean a cooking appliance, and are thus surprised when Japanese take it to mean a space heater (such as a wood-burning stove). The Japanese term for a cooking stove is another gairaigo term, renji (レンジ), from the English "range"; a gas stove is a gasurenji (ガスレンジ).

Additionally, Japanese combines words in ways that are uncommon in English. As an example, left over is a baseball term for a hit that goes over the left-fielder's head, rather than uneaten food saved for a later meal. This is a term that appears to be a loan but is actually wasei-eigo.

It is sometimes difficult for students of Japanese to distinguish among gairaigo, giseigo (onomatopoeia), and gitaigo (ideophones: words that represent the manner of an action, like "zigzag" in English – jiguzagu ジグザグ in Japanese), which are also written in katakana.

Wasei-eigo presents more difficulties for Japanese and learners of Japanese, as such words, once they have entered the lexicon, combine to form any number of potentially confusing combinations. For example, the loanwords chance, pink, erotic, over, down, up, in, my, and boom have all entered the wasei-eigo lexicon, combining with Japanese words and other English loanwords to produce any number of combination words and phrases. 'Up', or appu, is famously combined with other words to convey an increase or improvement, such as seiseki appu (increased results) and raifu appu (improved quality of life). 'My', or mai, also regularly appears in advertisements for any number and genre of items. From "My Fanny" toilet paper to "My Hand" electric hand drills, mai serves as a common advertising tool.
Infamously, the beverage brand Calpis sold a product named mai pisu, or "my piss", for a short time.[3] Wasei-eigo is often employed to disguise or advertise risqué or sexual terms and innuendos, especially when used by women. Wasei-eigo terms referencing a person's characteristics, personality, and habits also commonly appear as Japanese street slang, from poteto chippusu ("potato chips") for a hick to esu efu ("SF") for a "sex friend".[3] Gairaigo are generally nouns, which can subsequently be used as verbs by adding the auxiliary verb -suru (〜する, "to do"). For example, "play soccer" is translated as サッカーをする (sakkā o suru). Some exceptions exist, such as sabo-ru (サボる, "cut class", from sabotage), which conjugates as a normal Japanese verb – note the unusual use of katakana (サボ) followed by hiragana (る). Another example is gugu-ru (ググる, "to google"), which conjugates as a normal Japanese verb, in which the final syllable is converted into okurigana to enable conjugation. Gairaigo function just as morphemes from other sources do, and, in addition to wasei-eigo (words or phrases formed by combining gairaigo), gairaigo can combine with morphemes of Japanese or Chinese origin in words and phrases, as in jibīru (地ビール, local beer) (compare jizake (地酒, local sake)), yūzāmei (ユーザー名, user name) (compare shimei (氏名, full name)) or seiseki-appu (成績アップ, improve (your) grade). In set phrases, there is sometimes a preference to use all gairaigo (in katakana) or all kango/wago (in kanji), as in マンスリーマンション (mansurii manshon, monthly apartment) versus 月極駐車場 (tsukigime chūshajō, monthly parking lot), but mixed phrases are common, and may be used interchangeably, as in テナント募集 (tenanto boshū) and 入居者募集 (nyūkyosha boshū), both meaning "looking for a tenant". Borrowings have traditionally had pronunciations that conform to Japanese phonology and phonotactics. For example, platform was borrowed as /hōmu/, because */fo/ is not a sound combination that traditionally occurs in Japanese.
However, in recent years, some gairaigo are pronounced more closely to their original sound, which is represented by non-traditional combinations of katakana, generally using small katakana or diacritics (voicing marks) to indicate these non-traditional sounds. Compare iyahon (イヤホン, "ear-phones") and sumaho (スマホ, "smart phone"), where traditional sounds are used, with sumātofon (スマートフォン, "smart-phone"), a variant of the latter word in which the non-traditional combination フォ (fu-o) is used to represent the non-traditional sound combination /fo/. This leads to long words; e.g., the word for "fanfare" is spelled out as fanfāre (ファンファーレ), with seven kana, no shorter than the Roman-alphabet original (it is possible that it was not loaned from English, because the "e" is not silent). Similarly, Japanese traditionally has no /v/ phoneme, instead approximating it with /b/, but today /v/ (normally realized not as [v] but as bilabial [β]) is sometimes used in pronunciations: for example, "violin" can be pronounced either baiorin (バイオリン) or vaiorin (ヴァイオリン), with ヴァ (literally "voiced u" + "a") representing /va/. Another example of the Japanese transformation of English pronunciation is takushī (タクシー), in which the two-syllable word taxi becomes three syllables (and four morae, thanks to long ī) because consonant clusters do not occur in traditional Japanese (with the exception of the coda ん/ン, or /n/), and in which the sound [si] ("see") of English is pronounced [ɕi] (which to monoglot English speakers will sound like "she"), because /si/ in Japanese is realized as such. This change in Japanese phonology following the introduction of foreign words (here primarily from English) can be compared to the earlier posited change in Japanese phonology following the introduction of Chinese loanwords, such as closed syllables (CVC, not just CV) and length becoming a phonetic feature with the development of both long vowels and long consonants – see Early Middle Japanese: Phonological developments.
Due to the difficulties that Japanese speakers have in distinguishing "l" and "r", this expansion of Japanese phonology has not extended to the use of different kana for /l/ and /r/, though the application of the handakuten to represent /l/ was proposed as early as the Meiji era. Therefore, words with /l/ or /r/ may be spelled identically if borrowed into Japanese. One important exception, however, occurs because Japanese typically borrows English words in a non-rhotic fashion. The English words that are borrowed into Japanese include many of the most useful English words, including high-frequency vocabulary and academic vocabulary. Thus gairaigo may constitute a useful built-in lexicon for Japanese learners of English. Gairaigo have been observed to aid a Japanese child's learning of English vocabulary. With adults, gairaigo assist in English-word aural recognition and pronunciation, spelling, listening comprehension, retention of spoken and written English, and recognition and recall at especially higher levels of vocabulary. Moreover, in their written production, students of Japanese prefer using English words that have become gairaigo to those that have not.[4] The word arigatō (Japanese for "thank you") sounds similar to the Portuguese word obrigado, which has the same meaning. Given the number of borrowings from Portuguese, it may seem reasonable to suppose that the Japanese imported that word – which is the explanation accepted and indeed published by many. However, arigatō is not a gairaigo; rather, it is an abbreviation of arigatō gozaimasu, which consists of an inflection of the native Japanese adjective arigatai (有難い) combined with the polite verb gozaimasu.[5] There is evidence, for example in the Man'yōshū, that the word arigatai was in use several centuries before contact with the Portuguese. This makes the two terms false cognates.
If the Portuguese word had been borrowed, it would most likely have taken the form オブリガド (oburigado), or perhaps ōrigado (due to historical afu and ofu collapsing to ō), and while it is even possible that it would be spelled with 有難 as ateji, it would regardless start with o rather than a, and the final o would have been short rather than long. Some gairaigo words have been reborrowed into their original source languages, particularly in the jargon of fans of Japanese entertainment. For example, anime (アニメ) is gairaigo derived from the English word for "animation", but has been reborrowed by English with the meaning of "Japanese animation". Similarly, puroresu (プロレス) derives from "professional wrestling", and has been adopted by English-speaking wrestling fans as a term for the style of pro wrestling performed in Japan. Kosupure (コスプレ), or cosplay, was formed from the English words "costume play", referring to dressing in costumes such as those of anime, manga, or video game characters, and is now commonly used in English and other languages (also extending to Western cartoon realms). There are also rare examples of borrowings from Indo-European languages, which have subsequently been borrowed by other Indo-European languages, thus yielding distant cognates. An example is ikura (イクラ, salmon eggs), originally borrowed from Russian икра (ikra), and possibly distantly cognate (from the same Indo-European root) with English "roe" (fish eggs), though the only indication is the shared "r".
https://en.wikipedia.org/wiki/Gairaigo
A Wanderwort (German: [ˈvandɐvɔʁt] 'migrant word', sometimes pluralized as Wanderwörter, usually capitalized following German practice) is a word that has spread as a loanword among numerous languages and cultures, especially those that are far away from one another. As such, Wanderwörter are a curiosity in historical linguistics and sociolinguistics within the wider study of language contact.[1] At a sufficient time depth, it can be very difficult to establish in which language or language family a Wanderwort originated and into which it was borrowed. Frequently, such words are spread through trade networks, sometimes to describe a previously unfamiliar plant, animal, or food. Typical examples of Wanderwörter are cannabis, sugar,[2] ginger, copper,[1] silver,[3] cumin, mint, wine, and honey, some of which can be traced back to Bronze Age trade. Tea, with its Eurasian continental variant chai (both have entered English), is an example[1] whose spread occurred relatively late in human history and is therefore fairly well understood: tea is from Hokkien 茶 tê, specifically the Amoy dialect of the Fujianese port of Xiamen, hence it is the maritime variant, while 茶 chá (whence chai)[4] is used in Cantonese and Mandarin.[5] (See etymology of tea for further details.) Chocolate and tomato were both taken from Classical Nahuatl via Spanish into many different languages, although the specific origin of chocolate is obscure. Farang, a term derived from the ethnonym Frank through Andalusian Arabic, refers to foreigners (typically white and European ones). From the above two languages, the word has been loaned into many languages spoken on or near the Indian Ocean, including Hindi, Thai, and Amharic, among others. It also existed in Russian in the form "фрязин", with the same meaning. Kangaroo was taken from the Guugu Yimithirr word for the eastern grey kangaroo; it entered English through the records of James Cook's expedition of 1770, and through English it spread to languages around the world.
Orange originated in a Dravidian language (likely Tamil, Telugu, or Malayalam); its likely path to English included, in order, Sanskrit, Persian, possibly Armenian, Arabic, Italian, and Old French. (See Orange (word) § Etymology for further details.) The words for 'horse' across many Eurasian languages seem to be related, such as Mongolian морь (mor), Manchu ᠮᠣᡵᡳᠨ (morin), Korean 말 (mal), Japanese 馬 (uma), and Thai ม้า (máː), as well as the Sino-Tibetan languages, leading to Mandarin 馬 (mǎ) and Tibetan རྨང (rmang). It is also present in several Celtic and Germanic languages, whence Irish marc and English mare.[6][7]
https://en.wikipedia.org/wiki/Wanderwort
In linguistics, word formation is an ambiguous term[1] that can refer to either: A common method of word formation is the attachment of inflectional or derivational affixes. Examples include: Inflection is modifying a word for the purpose of fitting it into the grammatical structure of a sentence.[4] For example: Examples include: An acronym is a word formed from the first letters of other words.[6] For example: Acronyms are usually written entirely in capital letters, though some words originating as acronyms, like radar, are now treated as common nouns.[7] Initialisms are similar to acronyms, but the letters are pronounced as a series of letters. For example: In linguistics, back-formation is the process of forming a new word by removing actual affixes, or parts of a word that are re-analyzed as an affix, from other words to create a base.[5] Examples include: The process is motivated by analogy: edit is to editor as act is to actor. This process leads to many denominal verbs. The productivity of back-formation is limited, with the most productive forms of back-formation being hypocoristics.[5] A lexical blend is a complex word typically made of two word fragments. For example: Although blending is listed under the Nonmorphological heading, there are debates as to how far blending is a matter of morphology.[1] Compounding is the process of combining two bases, where each base may be a fully fledged word. For example: Compounding is a topic relevant to syntax, semantics, and morphology.[2] Linguists argue that hashtags are words and that hashtagging is a morphological process.[8][9] Social media users view the syntax of existing viral hashtags as guiding principles for creating new ones.
A hashtag's popularity is therefore influenced more by the presence of popular hashtags with similar syntactic patterns than by its conciseness and clarity.[10] There are processes for forming new dictionary items which are not considered under the umbrella of word formation.[1] One specific example is semantic change, which is a change in a single word's meaning. The boundary between word formation and semantic change can be difficult to define, as a new use of an old word can be seen as a new word derived from an old one and identical to it in form.
https://en.wikipedia.org/wiki/Word_coinage
Polysemy (/pəˈlɪsɪmi/ or /ˈpɒlɪˌsiːmi/;[1][2] from Ancient Greek πολύ- (polý-) 'many' and σῆμα (sêma) 'sign') is the capacity for a sign (e.g. a symbol, morpheme, word, or phrase) to have multiple related meanings. For example, a word can have several word senses.[3] Polysemy is distinct from monosemy, where a word has a single meaning.[3] Polysemy is distinct from homonymy – or homophony – which is an accidental similarity between two or more words (such as bear the animal, and the verb bear); whereas homonymy is a mere linguistic coincidence, polysemy is not. In discerning whether a given set of meanings represents polysemy or homonymy, it is often necessary to look at the history of the word to see whether the two meanings are historically related. Dictionary writers often list polysemes (words or phrases with different, but related, senses) in the same entry (that is, under the same headword) and enter homonyms as separate headwords (usually with a numbering convention such as ¹bear and ²bear). A polyseme is a word or phrase with different, but related, senses. Since the test for polysemy is the vague concept of relatedness, judgments of polysemy can be difficult to make. Because applying pre-existing words to new situations is a natural process of language change, looking at words' etymology is helpful in determining polysemy, but it is not the only solution; as words become lost in etymology, what once was a useful distinction of meaning may no longer be so. Some seemingly unrelated words share a common historical origin, however, so etymology is not an infallible test for polysemy, and dictionary writers also often defer to speakers' intuitions to judge polysemy in cases where it contradicts etymology.[4] English has many polysemous words. For example, the verb "to get" can mean "procure" (I'll get the drinks), "become" (she got scared), "understand" (I get it), etc. In linear or vertical polysemy, one sense of a word is a subset of the other.
These are examples of hyponymy and hypernymy, and are sometimes called autohyponyms.[5] For example, 'dog' can be used for 'male dog'. Alan Cruse identifies four types of linear polysemy:[6] In non-linear polysemy, the original sense of a word is used figuratively to provide a different way of looking at the new subject. Alan Cruse identifies three types of non-linear polysemy:[6] There are several tests for polysemy, but one of them is zeugma: if one word seems to exhibit zeugma when applied in different contexts, it is probable that the contexts bring out different polysemes of the same word. If the two senses of the same word do not seem to fit, yet seem related, then it is probable that they are polysemous. This test again depends on speakers' judgments about relatedness, which means that it is not infallible, but merely a helpful conceptual aid. The difference between homonyms and polysemes is subtle. Lexicographers define polysemes within a single dictionary lemma, while homonyms are treated in separate entries, numbering different meanings (or lemmata). Semantic shift can separate a polysemous word into separate homonyms. For example, check as in "bank check" (or cheque), check in chess, and check meaning "verification" are considered homonyms, although they originated as a single word derived from chess in the 14th century. Psycholinguistic experiments have shown that homonyms and polysemes are represented differently within people's mental lexicon: while the different meanings of homonyms (which are semantically unrelated) tend to interfere or compete with each other during comprehension, this does not usually occur for polysemes that have semantically related meanings.[4][7][8][9] Results for this contention, however, have been mixed.[10][11][12][13] For Dick Hebdige,[14] polysemy means that "each text is seen to generate a potentially infinite range of meanings," making, according to Richard Middleton,[15] "any homology, out of the most heterogeneous materials, possible.
The idea of signifying practice – texts not as communicating or expressing a pre-existing meaning but as 'positioning subjects' within a process of semiosis – changes the whole basis of creating social meaning". Charles Fillmore and Beryl Atkins' definition stipulates three elements: (i) the various senses of a polysemous word have a central origin, (ii) the links between these senses form a network, and (iii) understanding the 'inner' one contributes to understanding of the 'outer' one.[16] One group of polysemes are those in which a word meaning an activity, perhaps derived from a verb, acquires the meanings of those engaged in the activity, or perhaps the results of the activity, or the time or place in which the activity occurs or has occurred. Sometimes only one of those meanings is intended, depending on context, and sometimes multiple meanings are intended at the same time. Other types are derivations from one of the other meanings that lead to a verb or activity. This example shows the specific polysemy where the same word is used at different levels of a taxonomy. According to the Oxford English Dictionary, the three most polysemous words in English are run, put, and set, in that order.[18][19] A notion related to polysemy is colexification – namely, the case when several meanings are expressed by the same word.[20] The main difference between the two notions is one of perspective: polysemy is usually taken in a semasiological way, going from a form to its meanings, whereas colexification is onomasiological, starting from individual meanings and observing how they are colexified (or its opposite, dislexified) in languages. A lexical conception of polysemy was developed by B. T. S. Atkins, in the form of lexical implication rules.[21] These are rules that describe how words, in one lexical context, can then be used, in a different form, in a related context.
A crude example of such a rule is the pastoral idea of "verbizing one's nouns": that certain nouns, used in certain contexts, can be converted into a verb, conveying a related meaning.[22] Another clarification of polysemy is the idea of predicate transfer[23] – the reassignment of a property to an object that would not otherwise inherently have that property. Thus, the expression "I am parked out back" transfers the meaning of "parked" from "car" to the property of "I possess a car". This avoids incorrect polysemous interpretations of "parked": that "people can be parked", or that "I am pretending to be a car", or that "I am something that can be parked". This is supported by the morphology: "We are parked out back" does not mean that there are multiple cars; rather, that there are multiple passengers (having the property of being in possession of a car).
https://en.wikipedia.org/wiki/Polysemy
Since the 1930s, English has created numerous portmanteau words using the word English as the second element. These refer to varieties of English that are heavily influenced by other languages or that are typical of speakers from a certain country or region. The term can mean a type of English heavily influenced by another language (typically the speaker's L1) in accent, lexis, syntax, etc., or refer to the practice of code-switching between languages. In some cases, the word refers to the use of the Latin alphabet to write languages that use a different script, which is especially common on computer platforms that only allow Latin input, such as online chat, social networks, emails, and SMS. The practice of forming new words in this way has become increasingly popular since the 1990s. One scholarly article lists 510 such terms, known as "lishes", some of which are sourced from user-generated wikis.[1] The following is a list of lishes that have Wikipedia pages.
https://en.wikipedia.org/wiki/List_of_lishes
An abbreviation (from Latin brevis 'short')[1] is a shortened form of a word or phrase, by any method including shortening, contraction, initialism (which includes acronym), or crasis. An abbreviation may be a shortened form of a word, usually ended with a trailing period. For example, the term etc. is the usual abbreviation for the Latin phrase et cetera. A contraction is an abbreviation formed by replacing letters with an apostrophe. Examples include I'm for I am and li'l for little. An initialism or acronym is an abbreviation consisting of the initial letters of a sequence of words without other punctuation. For example, FBI (/ˌɛf.biːˈaɪ/), USA (/ˌjuː.ɛsˈeɪ/), IBM (/ˌaɪ.biːˈɛm/), BBC (/ˌbiː.biːˈsiː/). When initialism is used as the preferred term, acronym refers more specifically to when the abbreviation is pronounced as a word rather than as separate letters; examples include SWAT and NASA. Initialisms, contractions and crasis share some semantic and phonetic functions, and are connected by the term abbreviation in loose parlance.[2]: p167 In early times, abbreviations may have been common due to the effort involved in writing (many inscriptions were carved in stone) or to provide secrecy via obfuscation. Reduction of a word to a single letter was common in both Greek and Roman writing.[3] In Roman inscriptions, "Words were commonly abbreviated by using the initial letter or letters of words, and most inscriptions have at least one abbreviation". However, "some could have more than one meaning, depending on their context. (For example, ⟨A⟩ can be an abbreviation for many words, such as ager, amicus, annus, as, Aulus, Aurelius, aurum, and avus.)"[4] Many frequent abbreviations consisted of more than one letter: for example COS for consul and COSS for its nominative etc. plural consules. Abbreviations were frequently used in early English.
Manuscripts of copies of the Old English poem Beowulf used many abbreviations, for example the Tironian et (⁊) or & for and, and y for since, so that "not much space is wasted".[5] The standardisation of English in the 15th through 17th centuries included a growth in the use of such abbreviations.[6] At first, abbreviations were sometimes represented with various suspension signs, not only periods. For example, sequences like ⟨er⟩ were replaced with ⟨ɔ⟩, as in mastɔ for master and exacɔbate for exacerbate. While this may seem trivial, it was symptomatic of an attempt by people manually reproducing academic texts to reduce the copy time. Mastɔ subwardenɔ y ɔmēde me to you. And wherɔ y wrot to you the last wyke that y trouyde itt good to differrɔ thelectionɔ ovɔ to quīdenaɔ tinitatis y have be thougħt me synɔ that itt woll be thenɔ a bowte mydsomɔ. In the Early Modern English period, between the 15th and 17th centuries, the thorn Þ was used for th, as in Þe ('the'). In modern times, ⟨Þ⟩ was often used (in the form ⟨y⟩) for promotional reasons, as in Ye Olde Tea Shoppe.[7] During the growth of philological linguistic theory in academic Britain, abbreviating became very fashionable. Likewise, a century earlier in Boston, a fad of abbreviation started that swept the United States, with the globally popular term OK generally credited as a remnant of its influence.[8][9] Over the years, however, the lack of convention in some style guides has made it difficult to determine which two-word abbreviations should be abbreviated with periods and which should not. This question is considered below. Widespread use of electronic communication through mobile phones and the Internet during the 1990s led to a marked rise in colloquial abbreviation. This was due largely to the increasing popularity of textual communication services such as instant and text messaging.
The original SMS supported message lengths of at most 160 characters (using the GSM 03.38 character set), for instance.[a] This brevity gave rise to an informal abbreviation scheme sometimes called Textese, with which 10% or more of the words in a typical SMS message are abbreviated.[10] More recently, Twitter, a popular social networking service, began driving abbreviation use with its 140-character message limit. In HTML, abbreviations can be annotated using <abbr title="Meaning of the abbreviation.">abbreviation</abbr> to reveal the meaning by hovering the cursor. In modern English, there are multiple conventions for abbreviation, and there is controversy as to which should be used. One generally accepted rule is to be consistent within a body of work. To this end, publishers may express their preferences in a style guide. Some controversies that arise are described below. If the original word was capitalized, then the first letter of its abbreviation should retain the capital, for example Lev. for Leviticus. When a word is abbreviated to more than a single letter and was originally spelled with lower-case letters, then there is no need for capitalization. However, when abbreviating a phrase where only the first letter of each word is taken, then all letters should be capitalized, as in YTD for year-to-date, PCB for printed circuit board and FYI for for your information. However, see the following section regarding abbreviations that have become common vocabulary: these are no longer written with capital letters. A period (a.k.a. full stop) is sometimes used to signify abbreviation, but opinion is divided as to when and if this convention is best practice.
According to Hart's Rules, a word shortened by dropping letters from the end terminates with a period, whereas a word shortened by dropping letters from the middle does not.[2]: p167–170 Fowler's Modern English Usage says a period is used for both of these shortened forms, but recommends against this practice: advising it only for end-shortened words and lower-case initialisms, not for middle-shortened words and upper-case initialisms.[11] Some British style guides, such as those for The Guardian and The Economist, disallow periods for all abbreviations.[12][13] In American English, the period is usually included regardless of whether or not it is a contraction, e.g. Dr. or Mrs. In some cases, periods are optional, as in either US or U.S. for United States, EU or E.U. for European Union, and UN or U.N. for United Nations. There are some house styles, however – American ones included – that remove the periods from almost all abbreviations. For example: Acronyms that were originally capitalized (with or without periods) but have since entered the vocabulary as generic words are no longer written with capital letters nor with any periods. Examples are sonar, radar, lidar, laser, snafu, and scuba. When an abbreviation appears at the end of a sentence, only one period is used: The capital of the United States is Washington, D.C. In the past, some initialisms were styled with a period after each letter and a space between each pair. For example, U. S., but today this is typically US. There are multiple ways to pluralize an abbreviation. Sometimes this is accomplished by adding an apostrophe and an s ('s), as in "two PC's have broken screens". But some find this confusing, since the notation can indicate the possessive case, and this style is deprecated by many style guides. For instance, Kate Turabian, writing about style in academic writings,[14] allows for an apostrophe to form plural acronyms "only when an abbreviation contains internal periods or both capital and lowercase letters".
For example, "DVDs" and "URLs" but "Ph.D.'s", while the Modern Language Association[15] explicitly says, "do not use an apostrophe to form the plural of an abbreviation". Also, the American Psychological Association specifically says,[16][17] "without an apostrophe". However, the 1999 style guide for The New York Times states that the addition of an apostrophe is necessary when pluralizing all abbreviations, preferring "PC's, TV's and VCR's".[18] Forming a plural of an initialism without an apostrophe can also be used for a number or a letter. Examples:[19] For units of measure, the same form is used for both singular and plural. Examples: When an abbreviation contains more than one period, Hart's Rules recommends putting the s after the final one. Examples: However, the same plurals may be rendered less formally as: According to Hart's Rules, an apostrophe may be used in rare cases where clarity calls for it, for example when letters or symbols are referred to as objects. However, the apostrophe can be dispensed with if the items are set in italics or quotes: In Latin, and continuing to the derivative forms in European languages as well as English, single-letter abbreviations formed the plural by doubling the letter for note-taking. Most of these deal with writing and publishing. A few longer abbreviations use this as well. Publications based in the U.S. tend to follow the style guides of The Chicago Manual of Style and the Associated Press.[20] The U.S. government follows a style guide published by the U.S. Government Printing Office. The National Institute of Standards and Technology sets the style for abbreviations of units. Many British publications follow some of these guidelines in abbreviation: Writers often use shorthand to denote units of measure. Such shorthand can be an abbreviation, such as "in" for "inch", or a symbol, such as "km" for "kilometre".
In the International System of Units (SI) manual,[22] the word "symbol" is used consistently to define the shorthand used to represent the various SI units of measure. The manual also defines the way in which units should be written, the principal rules being: A syllabic abbreviation is usually formed from the initial syllables of several words, such as Interpol = International + police. It is a variant of the acronym. Syllabic abbreviations are usually written using lower case, sometimes starting with a capital letter, and are always pronounced as words rather than letter by letter. Syllabic abbreviations should be distinguished from portmanteaus, which combine two words without necessarily taking whole syllables from each. Syllabic abbreviations are not widely used in English. Some UK government agencies, such as Ofcom (Office of Communications) and the former Oftel (Office of Telecommunications), use this style. New York City has various neighborhoods named by syllabic abbreviation, such as Tribeca (Triangle below Canal Street) and SoHo (South of Houston Street). This usage has spread into other American cities, giving SoMa, San Francisco (South of Market) and LoDo, Denver (Lower Downtown), amongst others. Chicago-based electric service provider ComEd is a syllabic abbreviation of Commonwealth and (Thomas) Edison. Sections of California are also often colloquially syllabically abbreviated, as in NorCal (Northern California), CenCal (Central California), and SoCal (Southern California). Additionally, in the context of Los Angeles, the syllabic abbreviation SoHo (Southern Hollywood) refers to the southern portion of the Hollywood neighborhood. Partially syllabic abbreviations are preferred by the US Navy, as they increase readability amidst the large number of initialisms that would otherwise have to fit into the same acronyms. Hence DESRON 6 is used (in the full capital form) to mean "Destroyer Squadron 6", while COMNAVAIRLANT would be "Commander, Naval Air Force (in the) Atlantic".
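The Interpol and Tribeca examples can be mimicked with a small sketch. The syllable splits and the number of syllables kept from each word below are hand-chosen assumptions made to match the attested forms; actual syllabic abbreviations are conventional coinages, not outputs of a rule.

```python
# Toy sketch of syllabic abbreviation (hand-chosen splits, illustrative only).

def syllabic_abbreviation(parts):
    """Join the first n syllables of each word; parts is a list of
    (syllables, n) pairs, with syllables given as a list of strings."""
    return "".join("".join(syllables[:n]) for syllables, n in parts)

# Interpol = Inter(national) + pol(ice)
print(syllabic_abbreviation([(["in", "ter", "na", "tio", "nal"], 2),
                             (["pol", "ice"], 1)]).capitalize())   # Interpol

# Tribeca = Tri(angle) + be(low) + ca(nal)
print(syllabic_abbreviation([(["tri", "an", "gle"], 1),
                             (["be", "low"], 1),
                             (["ca", "nal"], 1)]).capitalize())    # Tribeca
```

The varying counts per word (two syllables from "international", one from "police") are what separate this pattern from a plain initialism.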
Syllabic abbreviations are a prominent feature of Newspeak, the fictional language of George Orwell's dystopian novel Nineteen Eighty-Four. The political contractions of Newspeak – Ingsoc (English Socialism), Minitrue (Ministry of Truth), Miniplenty (Ministry of Plenty) – are described by Orwell as similar to real examples of German and Russian contractions in the 20th century (see below). The contractions in Newspeak are supposed to have a political function by virtue of their abbreviated structure itself: nice-sounding and easily pronounceable, their purpose is to mask all ideological content from the speaker.[23]: 310–8 A more recent syllabic abbreviation has emerged with the disease COVID-19 (Corona Virus Disease 2019), caused by the severe acute respiratory syndrome coronavirus 2 (itself frequently abbreviated to SARS-CoV-2, partly an initialism). In Albanian, syllabic acronyms are sometimes used for composing a person's name, such as Migjeni, an abbreviation of the original name (Millosh Gjergj Nikolla) of a famous Albanian poet and writer, or ASDRENI (Aleksander Stavre Drenova), another famous Albanian poet. Other such names which have been used commonly in recent decades are GETOAR, composed from Gegeria + Tosks (representing the two main dialects of the Albanian language, Gegë and Toskë), and Arbanon, which is an alternative way used to describe all Albanian lands. Syllabic abbreviations were and are common in German; much like acronyms in English, they have a distinctly modern connotation, although contrary to popular belief, many date back to before 1933, if not to the end of the Great War. Kriminalpolizei, literally criminal police but idiomatically the Criminal Investigation Department of any German police force, begat KriPo (variously capitalised), and likewise Schutzpolizei (protection police or uniform department) begat SchuPo. Along the same lines, the Swiss Federal Railways' Transit Police – the Transportpolizei – are abbreviated as the TraPo.
With the National Socialist German Workers' Party gaining power came a frenzy of government reorganisation, and with it a series of entirely new syllabic abbreviations. The single national police force amalgamated from the Schutzpolizeien of the various states became the OrPo (Ordnungspolizei, "order police"); the state KriPos together formed the SiPo (Sicherheitspolizei, "security police"); and there was also the Gestapo (Geheime Staatspolizei, "secret state police"). The new order of the German Democratic Republic in the east brought about a conscious denazification, but also a repudiation of earlier turns of phrase in favour of neologisms such as Stasi for Staatssicherheit ("state security", the secret police) and VoPo for Volkspolizei. The phrase politisches Büro, which may be rendered literally as "office of politics" or idiomatically as "political party steering committee", became Politbüro. Syllabic abbreviations are not only used in politics, however. Many business names, trademarks, and service marks from across Germany are created on the same pattern: for a few examples, there is Aldi, from Theo Albrecht, the name of its founder, followed by discount; Haribo, from Hans Riegel, the name of its founder, followed by Bonn, the town of its head office; and Adidas, from Adolf "Adi" Dassler, the nickname of its founder, followed by his surname. Syllabic abbreviations are very common in the Russian, Belarusian and Ukrainian languages. They are often used as names of organizations. Historically, the popularization of abbreviations was a way to simplify mass education in the 1920s (see Likbez). The word kolkhoz (kollektívnoye khozyáystvo, "collective farm") is another example. Leninist organisations such as the Comintern (Communist International) and Komsomol (Kommunisticheskii Soyuz Molodyozhi, "Communist youth union") used Russian-language syllabic abbreviations.
In the modern Russian language, words like Rosselkhozbank (from Rossiysky selskokhozyaystvenny bank — Russian Agricultural Bank, RusAg) and Minobrnauki (from Ministerstvo obrazovaniya i nauki — Ministry of Education and Science) are still commonly used. In nearby Belarus, there are Beltelecom (Belarus Telecommunication) and Belsat (Belarus Satellite). Syllabic abbreviations are common in Spanish; examples abound in organization names such as Pemex for Petróleos Mexicanos ("Mexican Petroleums") or Fonafifo for Fondo Nacional de Financimiento Forestal (National Forestry Financing Fund). In Southeast Asian languages, especially Malay languages, abbreviations are common; examples include Petronas (for Petroliam Nasional, "National Petroleum"), its Indonesian equivalent Pertamina (from its original name Perusahaan Pertambangan Minyak dan Gas Bumi Negara, "State Oil and Natural Gas Mining Company"), and Kemenhub (from Kementerian Perhubungan, "Ministry of Transportation"). Malaysian abbreviations often use letters from each word, while Indonesian ones usually use syllables, although some cases follow neither style. For example, a general election in Malaysian Malay is often shortened to PRU (pilihan raya umum), while in Indonesian it is often shortened to pemilu (pemilihan umum). Another example is the Ministry of Health, for which Malaysian Malay uses KKM (Kementerian Kesihatan Malaysia), compared with Indonesian Kemenkes (Kementerian Kesehatan). East Asian languages whose writing systems use Chinese characters form abbreviations similarly, by using key Chinese characters from a term or phrase. For example, in Japanese the term for the United Nations, kokusai rengō (国際連合), is often abbreviated to kokuren (国連). (Such abbreviations are called ryakugo (略語) in Japanese; see also Japanese abbreviated and contracted words.)
The syllabic abbreviation of kanji words is frequently used for universities: for instance, Tōdai (東大) for Tōkyō daigaku (東京大学, University of Tokyo). The same pattern is used in Chinese: Běidà (北大) for Běijīng Dàxué (北京大学, Peking University). Korean universities often follow the same convention, such as Hongdae (홍대), short for Hongik Daehakgyo, or Hongik University. The English phrase "gung ho" originated as a Chinese abbreviation.
https://en.wikipedia.org/wiki/Syllabic_abbreviation
In phonetics, clipping is the process of shortening the articulation of a phonetic segment, usually a vowel. A clipped vowel is pronounced more quickly than an unclipped vowel and is often also reduced. Particularly in Netherlands Dutch, vowels in unstressed syllables are shortened and centralized, which is particularly noticeable with tense vowels; compare the /oː/ phoneme in konijn [kʊˈnɛin] 'rabbit' and koning [ˈkounɪŋ] 'king'. Many dialects of English (such as Australian English, General American English, Received Pronunciation, South African English and Standard Canadian English) have two types of non-phonemic clipping: pre-fortis clipping and rhythmic clipping. The first type occurs in a stressed syllable before a fortis consonant, so that e.g. bet [ˈbɛt] has a vowel that is shorter than the one in bed [ˈbɛˑd]. Vowels preceding voiceless consonants that begin a following syllable (as in keychain /ˈkiː.tʃeɪn/) are not affected by this rule.[1] Rhythmic clipping occurs in polysyllabic words: the more syllables a word has, the shorter its vowels are, so the first vowel of readership is shorter than that of reader, which, in turn, is shorter than that of read.[1][2] Clipping with vowel reduction also occurs in many unstressed syllables. Because of the variability of vowel length, the ⟨ː⟩ diacritic is sometimes omitted in IPA transcriptions of English, so that words such as dawn or lead are transcribed as /dɔn/ and /lid/ instead of the more usual /dɔːn/ and /liːd/. Neither type of transcription is more correct, as both convey exactly the same information, but transcription systems that use the length mark make it clearer whether a vowel is checked or free. Compare the length of the RP vowel /ɒ/ in the word not with the corresponding /ɒ/ in Canadian English, which is typically longer (like RP /ɑː/) because Canadian /ɒ/ is a free vowel (checked /ɒ/ is very rare in North America, as it relies on a three-way distinction between LOT, THOUGHT and PALM) and so can also be transcribed as /ɒː/.
The Scottish vowel length rule is used instead of those rules in Scotland and sometimes also in Northern Ireland. Many speakers of Serbo-Croatian from Croatia and Serbia pronounce historical unstressed long vowels as short, with some exceptions (such as genitive plural endings). Therefore, the name Jadranka is pronounced [jâdraŋka] rather than [jâdraːŋka].[3]
https://en.wikipedia.org/wiki/Clipping_(phonetics)
In linguistics, a compound is a lexeme (less precisely, a word or sign) that consists of more than one stem. Compounding, composition or nominal composition is the process of word formation that creates compound lexemes. Compounding occurs when two or more words or signs are joined to make a longer word or sign. Consequently, a compound is a unit composed of more than one stem, forming words or signs. If the joining of the words or signs is orthographically represented with a hyphen, the result is a hyphenated compound (e.g., must-have, hunter-gatherer). If they are joined without an intervening space, it is a closed compound (e.g., footpath, blackbird). If they are joined with a space (e.g., school bus, high school, lowest common denominator), then the result – at least in English[1] – may be an open compound.[2][3][4][5] The meaning of the compound may be similar to or different from the meanings of its components in isolation. The component stems of a compound may be of the same part of speech – as in the case of the English word footpath, composed of the two nouns foot and path – or they may belong to different parts of speech, as in the case of the English word blackbird, composed of the adjective black and the noun bird. With very few exceptions, English compound words are stressed on their first component stem. As a member of the Germanic family of languages, English is unusual in that even simple compounds made since the 18th century tend to be written in separate parts. This would be an error in other Germanic languages such as Norwegian, Swedish, Danish, German, and Dutch. However, this is merely an orthographic convention: as in other Germanic languages, arbitrary noun phrases, for example "girl scout troop", "city council member", and "cellar door", can be made up on the spot and used as compound nouns in English too.
For example, German Donaudampfschifffahrtsgesellschaftskapitän[a] would be written in English as "Danube steamship transport company captain" and not as "Danubesteamshiptransportcompanycaptain". The meaning of compounds may not always be transparent from their components, necessitating familiarity with usage and context. The addition of affix morphemes to words (such as suffixes or prefixes, as in employ → employment) should not be confused with nominal composition, as this is actually morphological derivation. Some languages easily form compounds from what in other languages would be a multi-word expression. This can result in unusually long words, a phenomenon known in German (which is one such language) as Bandwurmwörter ("tapeworm words"). Compounding extends beyond spoken languages to include sign languages as well, where compounds are also created by combining two or more sign stems. So-called "classical compounds" are compounds derived from classical Latin or ancient Greek roots. Compound formation rules vary widely across language types. In a synthetic language, the relationship between the elements of a compound may be marked with a case or other morpheme. For example, the German compound Kapitänspatent consists of the lexemes Kapitän (sea captain) and Patent (license) joined by an -s- (originally a genitive case suffix); similarly, the Latin lexeme paterfamilias contains the archaic genitive form familias of the lexeme familia (family). Conversely, in the Hebrew-language compound בֵּית סֵפֶר bet sefer (school), it is the head that is modified: the compound literally means "house-of book", with בַּיִת bayit (house) having entered the construct state to become בֵּית bet (house-of). This latter pattern is common throughout the Semitic languages, though in some it is combined with an explicit genitive case, so that both parts of the compound are marked, e.g.
ʕabd-u l-lāh-i (servant-NOM DEF-god-GEN) "servant of-the-god": the servant of God. Agglutinative languages tend to create very long words with derivational morphemes. Compounds may or may not require the use of derivational morphemes also. In German, extremely extendable compound words can be found in the language of chemical compounds, where, in the cases of biochemistry and polymers, they can be practically unlimited in length, mostly because the German rule suggests combining all noun adjuncts with the noun as the last stem. German examples include Farbfernsehgerät (color television set), Funkfernbedienung (radio remote control), and the often-quoted jocular word Donaudampfschifffahrtsgesellschaftskapitänsmütze (originally with only two Fs, "Danube Steamboat Shipping Company captain['s] hat"), which can of course be made even longer and even more absurd, e.g. Donaudampfschifffahrtsgesellschaftskapitänsmützenreinigungsausschreibungsverordnungsdiskussionsanfang ("beginning of the discussion of a regulation on tendering of Danube steamboat shipping company captain hats"), etc. According to several editions of the Guinness Book of World Records, the longest published German word has 79 letters and is Donaudampfschiffahrtselektrizitätenhauptbetriebswerkbauunterbeamtengesellschaft ("Association for Subordinate Officials of the Main Electric[ity] Maintenance Building of the Danube Steam Shipping"), but there is no evidence that this association ever actually existed. In Finnish, although there is theoretically no limit to the length of compound words, words consisting of more than three components are rare.
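The linking-element pattern described earlier (Kapitän + -s- + Patent → Kapitänspatent) reduces to string concatenation once the linker is known. The helper below is a hypothetical sketch: which linking element (Fugenelement), if any, a German stem takes is lexically conditioned, so the caller must supply it.

```python
def compound_de(modifier, head, linker=""):
    """Join two German stems into a closed compound.

    `linker` is an optional linking element such as "s". The non-initial
    stem is lowercased, since German capitalizes only the first letter
    of a compound noun.
    """
    return modifier + linker + head.lower()

# Kapitän + -s- + Patent -> Kapitänspatent
print(compound_de("Kapitän", "Patent", linker="s"))
```

Because the result is itself a stem, the function can be applied repeatedly, mirroring the arbitrarily long compounds (Bandwurmwörter) discussed above.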
Internet folklore sometimes suggests that lentokonesuihkuturbiinimoottoriapumekaanikkoaliupseerioppilas (airplane jet turbine engine auxiliary mechanic non-commissioned officer student) is the longest word in Finnish, but evidence of its actual use is scant and anecdotal at best.[6] Compounds can become rather long when translating technical documents from English into other languages, since the lengths of the words are theoretically unlimited, especially in chemical terminology. For example, when translating an English technical document into Swedish, the term "motion estimation search range settings" can be directly translated as rörelseuppskattningssökintervallsinställningar, though in reality the word would most likely be divided in two: sökintervallsinställningar för rörelseuppskattning – "search range settings for motion estimation". A common semantic classification of compounds yields four types: endocentric, exocentric, copulative, and appositional. An endocentric compound (tatpuruṣa in the Sanskrit tradition) consists of a head, i.e. the categorical part that contains the basic meaning of the whole compound, and modifiers, which restrict this meaning. For example, the English compound doghouse, where house is the head and dog is the modifier, is understood as a house intended for a dog. Endocentric compounds tend to be of the same part of speech (word class) as their head, as in the case of doghouse. An exocentric compound (bahuvrihi in the Sanskrit tradition) is a hyponym of some unexpressed semantic category (such as a person, plant, or animal): neither of its components can be perceived as a formal head, and its meaning often cannot be transparently guessed from its constituent parts. For example, the English compound white-collar is neither a kind of collar nor a white thing. In an exocentric compound, the word class is determined lexically, disregarding the class of the constituents. For example, a must-have is not a verb but a noun.
The meaning of this type of compound can be glossed as "(one) whose B is A", where B is the second element of the compound and A the first. A bahuvrihi compound is one whose nature is expressed by neither of the words: thus a white-collar person is neither white nor a collar (the collar's colour is a metonym for socioeconomic status). Other English examples include barefoot. Copulative compounds (dvandva in the Sanskrit tradition) are compounds with two semantic heads, for example in a gradual scale (such as a mix of colours). Appositional compounds are lexemes that have two (contrary or simultaneous) attributes that classify the compound. All natural languages have compound nouns. The positioning of the words (i.e. the most common order of constituents in phrases where nouns are modified by adjectives, by possessors, by other nouns, etc.) varies according to the language. While Germanic languages, for example, are left-branching when it comes to noun phrases (the modifiers come before the head), the Romance languages are usually right-branching. English compound nouns can be spaced, hyphenated, or solid, and they sometimes change orthographically in that direction over time, reflecting a semantic identity that evolves from a mere collocation to something stronger in its solidification. This theme has been summarized in usage guides under the aphorism that "compound nouns tend to solidify as they age"; thus a compound noun such as place name begins as spaced in most attestations, then becomes hyphenated as place-name and eventually solid as placename, while the spaced compound noun file name became solid as filename without ever being hyphenated. German, a fellow West Germanic language, has a somewhat different orthography, whereby compound nouns are virtually always required to be solid or at least hyphenated; even the hyphenated styling is used less now than it was in centuries past.
In French, compound nouns are often formed by left-hand heads with prepositional components inserted before the modifier, as in chemin-de-fer 'railway', lit. 'road of iron', and moulin à vent 'windmill', lit. 'mill (that works) by means of wind'. In Turkish, one way of forming compound nouns is as follows: yeldeğirmeni 'windmill' (yel: wind, değirmen-i: mill-possessive); demiryolu 'railway' (demir: iron, yol-u: road-possessive). Occasionally, two synonymous nouns can form a compound noun, resulting in a pleonasm. One example is the English word pathway. In Arabic, there are two criteria unique to Arabic, or potentially to Semitic languages in general. The first criterion involves whether the possessive marker li-/la 'for/of' appears or is absent when the first element is definite. The second criterion deals with the appearance or absence of the possessive marker li-/la 'for/of' when the first element is preceded by a cardinal number.[7] A type of compound that is fairly common in the Indo-European languages is formed of a verb and its object, and in effect transforms a simple verbal clause into a noun. In Spanish, for example, such compounds consist of a verb conjugated for the second person singular imperative followed by a noun (singular or plural): e.g., rascacielos (modelled on "skyscraper", lit. 'scratch skies'), sacacorchos 'corkscrew' (lit. 'pull corks'), guardarropa 'wardrobe' (lit. 'store clothes'). These compounds are formally invariable in the plural (but in many cases they have been reanalyzed as plural forms, and a singular form has appeared). French and Italian have these same compounds with the noun in the singular form: Italian grattacielo 'skyscraper', French grille-pain 'toaster' (lit. 'toast bread'). This construction exists in English, generally with the verb and noun both in uninflected form: examples are spoilsport, killjoy, breakfast, cutthroat, pickpocket, dreadnought, and know-nothing.
Also common in English is another type of verb–noun (or noun–verb) compound, in which an argument of the verb is incorporated into the verb, which is then usually turned into a gerund, such as breastfeeding, finger-pointing, etc. The noun is often an instrumental complement. From these gerunds, new verbs can be made: (a mother) breastfeeds (a child), and from them new compounds: mother-child breastfeeding, etc. In the Australian Aboriginal language Jingulu, a Pama–Nyungan language, it is claimed that all verbs are V+N compounds, such as "do a sleep" or "run a dive", and the language has only three basic verbs: do, make, and run.[8] A special kind of compounding is incorporation, of which noun incorporation into a verbal root (as in English backstabbing, breastfeed, etc.) is most prevalent (see below). Verb–verb compounds are sequences of more than one verb acting together to determine clause structure. They have two types, illustrated by trɔ dzo (turn leave) "turn and leave"; by Hindi जाकर देखो jā-kar dekh-o (go-CONJ.PTCP see-IMP) "go and see"; and, in Tamil, a Dravidian language, by van̪t̪u paːr, lit. "come see". In each case, the two verbs together determine the semantics and argument structure. Serial verb expressions in English may include What did you go and do that for? or He just upped and left; this is, however, not quite a true compound, since the verbs are connected by a conjunction and the second's missing arguments may be taken as a case of ellipsis. A Spanish example: De rabia puso rompiendo la olla (lit. from anger [he/she] put breaking the pot) 'In anger, (he/she) smashed the pot.'
Similar examples are Quechua huañuchi-shpa shitashun (kill-CP throw.1PL.FUT) and Hindi तेरे को मार डालेंगे tere ko mār DāleNge "we will kill-throw you". Parasynthetic compounds are formed by a combination of compounding and derivation, with multiple lexical stems and a derivational affix. For example, English black-eyed is composed of black, eye, and -ed 'having', with the meaning 'having a black eye';[9] Italian imbustare is composed of in- 'in', busta 'envelope', and -are (verbal suffix), with the meaning 'to put into an envelope'.[10] Compound prepositions formed by prepositions and nouns are common in English and the Romance languages (consider English on top of, Spanish encima de, etc.). Hindi has a small number of simple (i.e., one-word) postpositions and a large number of compound postpositions, mostly consisting of the simple postposition ke followed by a specific postposition (e.g., ke pas, "near"; ke nīche, "underneath"). In Germanic languages (including English), compounds are formed by prepending what is effectively a namespace (disambiguation context) to the main word. For example, "football" would be a "ball" in the "foot" context. In itself, this does not alter the meaning of the main word. The added context only makes it more precise. As such, a "football" must be understood as a "ball". However, as is the case with "football", a well-established compound word may have gained a special meaning in the language's vocabulary. Only this defines "football" as a particular type of ball (unambiguously the round object, not the dance party), and also the game involving such a ball.
Another example of special and altered meaning is "starfish" – a starfish is in fact not a fish in modern biology. Syntactically, too, the compound word behaves like the main word: the whole compound word (or phrase) inherits the word class and inflection rules of the main word. That is to say, since "fish" and "shape" are nouns, "starfish" and "star shape" must also be nouns, and they must take plural forms as "starfish" and "star shapes", definite singular forms as "the starfish" and "the star shape", and so on. This principle also holds for languages that express definiteness by inflection (as in North Germanic). Because a compound is understood as a word in its own right, it may in turn be used in new compounds, so forming an arbitrarily long word is trivial. This contrasts with Romance languages, where prepositions are more often used to specify word relationships instead of concatenating the words. In the Russian language, compounding is a common type of word formation, and several types of compounds exist, both in terms of the compounded parts of speech and of the way a compound is formed.[12] Compound nouns may be agglutinative compounds, hyphenated compounds (стол-книга 'folding table', lit. 'table-book', "book-like table"), or abbreviated compounds (acronyms: колхоз 'kolkhoz'). Some compounds look like acronyms, while in fact they are agglutinations of the type stem + word: Академгородок 'Akademgorodok' (from akademichesky gorodok 'academic village').
In agglutinative compound nouns, an agglutinating infix is typically used: пароход 'steamship': пар + о + ход. Compound nouns may be created as noun + noun, adjective + noun, noun + adjective (rare), or noun + verb (or, rather, noun + verbal noun). Compound adjectives may be formed either per se (бело-розовый 'white-pink') or as a result of compounding during the derivation of an adjective from a multi-word term: Каменноостровский проспект ([kəmʲɪnnʌʌˈstrovskʲɪj prʌˈspʲɛkt]) 'Stone Island Avenue', a street in St. Petersburg. Reduplication in Russian is also a source of compounds. Quite a few Russian words are borrowed from other languages in an already-compounded form, including numerous "classical compounds" or internationalisms: автомобиль 'automobile'. Sanskrit is very rich in compound formation, with seven major compound types and as many as 55 subtypes.[13] The compound-formation process is productive, so it is not possible to list all Sanskrit compounds in a dictionary. Compounds of two or three words are more frequent, but longer compounds, some running through pages, are not rare in Sanskrit literature.[13] Also in sign languages, compounding is a productive word-formation process. Both endocentric and exocentric compounds have been described for a variety of sign languages.[17] Copulative compounds, or dvandva, which are composed of two or more nouns from the same semantic category to denote that semantic category, also occur regularly in many sign languages. The sign for parents in Italian Sign Language, for instance, is a combination of the nouns 'father' and 'mother'. The sign for breakfast in American Sign Language follows the same concept.
The words eat and morning are signed together to create a new word meaning breakfast. This is an example of a sequential compound; in sign languages, it is also possible to form simultaneous compounds, where one hand represents one lexeme while the other simultaneously represents another. An example is the sign for weekend in Sign Language of the Netherlands, which is produced by simultaneously signing a one-handed version of the sign for Saturday and a one-handed version of the sign for Sunday.[17] In American Sign Language there is another process easily compared to compounding: blending, the merging of two morphemes to create a new word called a portmanteau.[18] This differs from compounding in that it breaks the strict linear order of compounding.[19] Although there is no universally agreed-upon guideline regarding the use of compound words in the English language, in recent decades written English has displayed a noticeable trend towards increased use of compounds.[20] Recently, many words have been made by taking syllables of words and compounding them, such as pixel (picture element) and bit (binary digit). This is called a syllabic abbreviation. In Dutch and the Scandinavian languages there is an unofficial trend toward splitting compound words, known in Norwegian as særskriving, in Swedish as särskrivning (literally "separate writing"), and in Dutch as Engelse ziekte (the "English disease"). Because the Dutch language and the Scandinavian languages rely heavily on the distinction between a compound word and the sequence of separate words it consists of, this has serious implications. For example, the Norwegian adjective røykfritt (literally "smokefree", meaning no smoking allowed), if separated into its composite parts, would mean røyk fritt ("smoke freely"). In Dutch, compounds written with spaces may also be confused, but can also be interpreted as a sequence of a noun and a genitive (which is unmarked in Dutch) in formal abbreviated writing.
This may lead to, for example, commissie vergadering ("commission meeting") being read as "commission of the meeting" rather than "meeting of the commission" (normally spelled commissievergadering). The German spelling reform of 1996 introduced the option of hyphenating compound nouns when doing so enhances comprehensibility and readability. This is done mostly with very long compound words, by separating them into two or more smaller compounds, as in Eisenbahn-Unterführung (railway underpass) or Kraftfahrzeugs-Betriebsanleitung (car manual). Such practice is also permitted in other Germanic languages, e.g. Danish and Norwegian (Bokmål and Nynorsk alike), and is even encouraged between parts of the word that have very different pronunciations, such as when one part is a loan word or an acronym.
https://en.wikipedia.org/wiki/Compound_(linguistics)
A contraction is a shortened version of the spoken and written forms of a word, syllable, or word group, created by omission of internal letters and sounds. In linguistic analysis, contractions should not be confused with crasis, abbreviations and initialisms (including acronyms), with which they share some semantic and phonetic functions, though all three are connoted by the term "abbreviation" in layman's terms.[1] Contraction is also distinguished from morphological clipping, where beginnings and endings are omitted. The definition overlaps with the term portmanteau (a linguistic blend), but a distinction can be made between a portmanteau and a contraction by noting that contractions are formed from words that would otherwise appear together in sequence, such as do and not, whereas a portmanteau word is formed by combining two or more existing words that all relate to a single concept the portmanteau describes. English has a number of contractions, mostly involving the elision of a vowel, which is replaced by an apostrophe in writing, as in I'm for "I am", and sometimes other changes as well. Contractions are common in speech and in informal writing but tend to be avoided in more formal writing (with limited exceptions, such as the now-standard form "o'clock"). The main contractions are listed in the following table. Although can't, wouldn't and other forms ending in ‑n't clearly started as contractions, ‑n't is now neither a contraction (a cliticized form) nor part of one, but instead a negative inflectional suffix.
Evidence for this is that (i) ‑n't occurs only with auxiliary verbs, and clitics are not limited to particular categories or subcategories; (ii) again unlike contractions, these forms are not rule-governed but idiosyncratic (e.g., will → won't, can → can't); and (iii) as shown in the table, the inflected and "uncontracted" versions may require different positions in a sentence.[4] The Old Chinese writing system (oracle bone script and bronzeware script) is well suited to the (almost) one-to-one correspondence between morpheme and glyph. Contractions, in which one glyph represents two or more morphemes, are a notable exception to this rule. About 20 or so are noted by traditional philologists to exist and are known as jiāncí (兼詞, lit. 'concurrent words'), and more words have been proposed to be contractions by recent scholars, based on recent reconstructions of Old Chinese phonology, epigraphic evidence, and syntactic considerations. For example, 非 [fēi] has been proposed to be a contraction of 不 (bù) + 唯/隹 (wéi/zhuī). The contractions are not generally graphically evident, and there is no general rule for how a character representing a contraction might be formed. As a result, the identification of a character as a contraction, as well as the word(s) proposed to have been contracted, is sometimes disputed. As vernacular Chinese dialects use sets of function words that differ considerably from Classical Chinese, almost all of the classical contractions listed below are now archaic and have disappeared from everyday use. However, modern contractions have evolved from the new vernacular function words. Modern contractions appear in all major modern dialect groups. For example, 别 (bié) 'don't' in Standard Mandarin is a contraction of 不要 (bùyào), and 覅 (fiào) 'don't' in Shanghainese is a contraction of 勿要 (wù yào), as is apparent graphically. Similarly, in Northeastern Mandarin, 甭 (béng) 'needn't' is both a phonological and a graphical contraction of 不用 (bùyòng).
Finally, Cantonese contracts 乜嘢 (mat1 ye5)[5] 'what?' to 咩 (me1). Note: The particles 爰, 焉, 云, and 然, ending in [-j[a/ə]n], behave as the grammatical equivalents of a verb (or coverb) followed by 之 'him; her; it (third-person object)' or a similar demonstrative pronoun in the object position. In fact, 于/於 '(is) in; at', 曰 'say', and 如 'resemble' are never followed by 之 '(third-person object)' or 此 '(near demonstrative)' in pre-Qin texts. Instead, the respective "contractions" 爰/焉, 云, and 然 are always used in their place. Nevertheless, no known object pronoun is phonologically appropriate to serve as the hypothetical pronoun that underwent contraction. Hence, many authorities do not consider them to be true contractions. As an alternative explanation for their origin, Edwin G. Pulleyblank proposed that the [-n] ending is derived from a Sino-Tibetan aspect marker that later took on anaphoric character.[6]: 80 Informal Belgian Dutch uses a wide range of non-standard contractions, such as "hoe's't" (from "hoe is het?" – how are you?), "hij's d'r" (from "hij is daar" – he's there), "w'ebbe' goe' g'ete'" (from "we hebben goed gegeten" – we have eaten well) and "wa's da'?" (from "wat is dat?" – what is that?). French has a variety of contractions, similar to English, except that in French they are mandatory, as in C'est la vie ("That's life"), in which c'est stands for ce + est ("that is"). The formation of such contractions is called elision. In general, any monosyllabic word ending in e caduc (schwa) contracts if the following word begins with a vowel, h or y (as h is silent and absorbed by the sound of the succeeding vowel; y sounds like i).
In addition to ce → c'- (demonstrative pronoun "that"), these words are que → qu'- (conjunction, relative pronoun, or interrogative pronoun "that"), ne → n'- ("not"), se → s'- ("himself", "herself", "itself", "oneself" before a verb), je → j'- ("I"), me → m'- ("me" before a verb), te → t'- (informal singular "you" before a verb), le or la → l'- ("the"; or "he", "she", "it" before a verb or after an imperative verb and before the word y or en), and de → d'- ("of"). Unlike with English contractions, however, those contractions are mandatory: one would never say (or write) *ce est or *que elle. Moi ("me") and toi (informal "you") mandatorily contract to m'- and t'-, respectively, after an imperative verb and before the word y or en. It is also mandatory to avoid the repetition of a sound when the conjunction si ("if") is followed by il ("he", "it") or ils ("they"), which begin with the same vowel sound i: *si il → s'il ("if it", "if he"); *si ils → s'ils ("if they"). Certain prepositions are also mandatorily merged with masculine and plural direct articles: au for à le, aux for à les, du for de le, and des for de les. However, the contraction of cela (demonstrative pronoun "that") to ça is optional and informal. In informal speech, a personal pronoun may sometimes be contracted onto a following verb. For example, je ne sais pas (IPA: [ʒənəsɛpa], "I don't know") may be pronounced roughly chais pas (IPA: [ʃɛpa]), with the ne being completely elided and the [ʒ] of je being mixed with the [s] of sais. It is also common in informal contexts to contract tu to t'- before a vowel: t'as mangé for tu as mangé. In Modern Hebrew, the prepositional prefixes -בְּ /bə-/ 'in' and -לְ /lə-/ 'to' contract with the definite article prefix -ה (/ha-/) to form the prefixes -ב /ba/ 'in the' and -ל /la/ 'to the'.
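The mandatory French elision rules described above are regular enough to be captured in a short rule table. The sketch below is a simplified illustration only: the word list is taken from the rules above, but the vowel test is an assumption, and real French elision has further exceptions (notably aspirated-h words) that are not modeled here.

```python
# Simplified sketch of mandatory French elision.
# Covers only the monosyllables listed above; aspirated-h words
# and other exceptions are deliberately ignored.
ELIDING = {"ce": "c'", "que": "qu'", "ne": "n'", "se": "s'",
           "je": "j'", "me": "m'", "te": "t'", "le": "l'",
           "la": "l'", "de": "d'"}
VOWELS = "aeiouyhàâéèêëîïôùû"  # h is silent, so it triggers elision too

def elide(phrase: str) -> str:
    words = phrase.split()
    out = []
    i = 0
    while i < len(words):
        w = words[i]
        nxt = words[i + 1] if i + 1 < len(words) else ""
        if w == "si" and nxt in ("il", "ils"):      # special case: si + il(s)
            out.append("s'" + nxt)
            i += 2
        elif w in ELIDING and nxt and nxt[0].lower() in VOWELS:
            out.append(ELIDING[w] + nxt)
            i += 2
        else:
            out.append(w)
            i += 1
    return " ".join(out)

print(elide("ce est la vie"))   # c'est la vie
print(elide("si il pleut"))     # s'il pleut
print(elide("je ai que elle"))  # j'ai qu'elle
```

The point of the sketch is that these contractions are rule-governed and obligatory, unlike English contractions, which are optional.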
In colloquial Israeli Hebrew, the preposition את (/ʔet/), which indicates a definite direct object, and the definite article prefix -ה (/ha-/) are often contracted to 'ת (/ta-/) when the former immediately precedes the latter; thus, ראיתי את הכלב (/ʁaˈʔiti ʔet haˈkelev/, "I saw the dog") may become ראיתי ת'כלב (/ʁaˈʔiti taˈkelev/). In Italian, prepositions merge with direct articles in predictable ways. The prepositions a, da, di, in, su, con and per combine with the various forms of the definite article, namely il, lo, la, l', i, gli, gl', and le. The words ci and è (a form of essere, to be) and the words vi and è are contracted into c'è and v'è (both meaning "there is"). The words dove and come are contracted with any word that begins with e, deleting the -e of the principal word, as in "Com'era bello!" – "How handsome he / it was!", "Dov'è il tuo amico?" – "Where's your friend?" The same is often true of other words of similar form, e.g. quale. The direct object pronouns "lo" and "la" may also contract to form "l'" with a form of "avere", such as "L'ho comprato" – "I have bought it", or "L'abbiamo vista" – "We have seen her".[9] Spanish has two mandatory phonetic contractions between prepositions and articles: al (to the) for a el, and del (of the) for de el (not to be confused with a él, meaning to him, and de él, meaning his or, more literally, of him). Other contractions were common in writing until the 17th century, the most usual being de + personal and demonstrative pronouns: destas for de estas (of these, fem.), daquel for de aquel (of that, masc.), dél for de él (of him), etc.; and the feminine article before words beginning with a-: l'alma for la alma, now el alma (the soul). Several sets of demonstrative pronouns originated as contractions of aquí (here) + pronoun, or pronoun + otro/a (other): aqueste, aqueso, estotro, etc. The modern aquel (that, masc.) is the only survivor of the first pattern; the personal pronouns nosotros (we) and vosotros (pl. you) are remnants of the second.
In medieval texts, unstressed words very often appear contracted: todol for todo el (all the, masc.), ques for que es (which is), etc., including with common words, like d'ome (d'home/d'homme) instead of de ome (home/homme), and so on. Though not strictly a contraction, a special form is used when combining con with mí, ti, or sí, which is written as conmigo for *con mí (with me), contigo for *con ti (with you sing.), and consigo for *con sí (with himself/herself/itself/themselves). Finally, one can hear pa' for para, deriving as pa'l for para el, but these forms are only considered appropriate in informal speech. In Portuguese, contractions are common and much more numerous than those in Spanish. Several prepositions regularly contract with certain articles and pronouns. For instance, de (of) and por (by; formerly per) combine with the definite articles o and a (masculine and feminine forms of "the" respectively), producing do, da (of the), pelo, pela (by the). The preposition de contracts with the pronouns ele and ela (he, she), producing dele, dela (his, her). In addition, some verb forms contract with enclitic object pronouns: e.g., the verb amar (to love) combines with the pronoun a (her), giving amá-la (to love her). Another contraction in Portuguese that is similar to English ones is the combination of the preposition de with words starting in a, in which the first letter a is replaced by an apostrophe and the two words are joined. Examples: Estrela d'alva (a popular name for Venus meaning "dawn star", as a reference to its brightness); Caixa d'água (water tank). In informal, spoken German prepositional phrases, one can often merge the preposition and the article; for example, von dem becomes vom, zu dem becomes zum, or an das becomes ans. Some of these are so common that they are mandatory. In informal speech, aufm for auf dem, unterm for unter dem, etc. are also used, but would be considered incorrect if written, except perhaps in quoted direct speech, in appropriate context and style.
The pronoun es often contracts to 's (usually written with the apostrophe) in certain contexts. For example, the greeting Wie geht es? is usually encountered in the contracted form Wie geht's?. Regional dialects of German, and various local languages that were already in use long before today's Standard German was created, use contractions more frequently than Standard German, but varying widely between different local languages. The informally spoken German contractions are observed almost everywhere, most often accompanied by additional ones, such as in den becoming in'n (sometimes im) or haben wir becoming hamwer, hammor, hemmer, or hamma, depending on local intonation preferences. Bavarian German features several more contractions, such as gesund sind wir becoming xund samma, which are schematically applied to all words or combinations of similar sound. (One must remember, however, that German wir exists alongside Bavarian mir, or mia, with the same meaning.) The Munich-born footballer Franz Beckenbauer has as his catchphrase "Schau mer mal" ("Schauen wir einmal" – in English "We shall see."). A book about his career had as its title the slightly longer version of the phrase, "Schau'n Mer Mal". Such features are found in all central and southern language regions. A sample from Berlin: Sag einmal, Meister, kann man hier einmal hinein? is spoken as Samma, Meesta, kamma hier ma rin? Several West Central German dialects along the Rhine River have built contraction patterns involving long phrases and entire sentences. In speech, words are often concatenated, and frequently the process of "liaison" is used. So, [Dat] kriegst Du nicht may become Kressenit, or Lass mich gehen, habe ich gesagt may become Lomejon haschjesaat. Mostly, there are no binding orthographies for local dialects of German; hence writing is left to a great extent to authors and their publishers.
Outside quotations, at least, they usually pay little attention to printing more than the most commonly spoken contractions, so as not to degrade their readability. The use of apostrophes to indicate omissions is a varying and considerably less frequent process than in English-language publications. Standard Indonesian has no contractions, although contractions exist in Indonesian slang: for example, terima kasih becomes makasih ("thank you"), kenapa becomes napa ("why"), nggak becomes gak ("not"), sebentar becomes tar ("a moment"), and sudah becomes dah ("done"). The use of contractions is not allowed in any form of standard Norwegian spelling; however, it is fairly common to shorten or contract words in spoken language. Yet, the commonness varies from dialect to dialect and from sociolect to sociolect; it depends on the formality, etc., of the setting. Some common, and quite drastic, contractions found in Norwegian speech are "jakke" for "jeg har ikke", meaning "I do not have", and "dække" for "det er ikke", meaning "there is not". The most frequently used of these contractions (usually consisting of two or three words contracted into one word) contain short, common and often monosyllabic words like jeg, du, deg, det, har or ikke. The use of the apostrophe (') is much less common than in English, but is sometimes used in contractions to show where letters have been dropped. In extreme cases, long, entire sentences may be written as one word. An example of this is "Det ordner seg av seg selv" in standard written Bokmål, meaning "It will sort itself out", which could become "dånesæsæsjæl" (note the letters Å and Æ, and the word "sjæl", as an eye dialect spelling of selv). R-dropping, present in the example, is especially common in speech in many areas of Norway, but plays out in different ways, as does elision of word-final phonemes like /ə/.
Because of the many dialects of Norwegian and their widespread use, it is often difficult to distinguish between non-standard writing of standard Norwegian and eye dialect spelling. It is almost universally true that these spellings try to convey the way each word is pronounced, but it is rare to see language written that does not adhere to at least some of the rules of the official orthography. Reasons for this include words spelled unphonemically, ignorance of conventional spelling rules, or adaptation for better transcription of that dialect's phonemes. Latin contains several examples of contractions. One such case is preserved in the verb nolo (I am unwilling/do not want), which was formed by a contraction of non volo (volo meaning "I want"). Similarly, this is observed in the first-person plural and third-person plural forms (nolumus and nolunt respectively). In Japanese, some contractions in rapid speech include ~っす (-ssu) for です (desu) and すいません (suimasen) for すみません (sumimasen). では (dewa) is often contracted to じゃ (ja). In certain grammatical contexts, the particle の (no) is contracted to simply ん (n). When used after verbs ending in the conjunctive form ~て (-te), certain auxiliary verbs and their derivations are often abbreviated. Examples: * this abbreviation is never used in the polite conjugation, to avoid the resultant ambiguity between an abbreviated ikimasu (go) and the verb kimasu (come). The ending ~なければ (-nakereba) can be contracted to ~なきゃ (-nakya) when it is used to indicate obligation. It is often used without an auxiliary, e.g., 行かなきゃ(いけない) (ikanakya (ikenai)) "I have to go." Other times, contractions are made to create new words or to give added or altered meaning: Various dialects of Japanese also use their own specific contractions that are often unintelligible to speakers of other dialects. In Polish, pronouns have contracted forms that are more prevalent in colloquial usage. Examples are go and mu.
The non-contracted forms are jego (unless it is used as a possessive pronoun) and jemu, respectively. The clitic -ń, which stands for niego (him), as in dlań (dla niego), is more common in literature. The non-contracted forms are generally used as a means of emphasis.[10] Uyghur, a Turkic language spoken in Central Asia, includes some verbal suffixes that are actually contracted forms of compound verbs (serial verbs). For instance, sëtip alidu (sell-manage, "manage to sell") is usually written and pronounced sëtivaldu, with the two words forming a contraction and the [p] leniting into a [v] or [w]. In Filipino, most contractions need other words to be contracted correctly. Only words that end with vowels can make a contraction with words like "at" and "ay". In this chart, V represents any vowel. In Albanian there are two main contractions, ç' and s', short for çfarë (what) and nuk (did/will not), respectively.
https://en.wikipedia.org/wiki/Contraction_(grammar)
A diminutive is a word obtained by modifying a root word to convey a slighter degree of its root meaning, either to convey the smallness of the object or quality named, or to convey a sense of intimacy or endearment, and sometimes to belittle something or someone.[1][2] A diminutive form (abbreviated DIM) is a word-formation device used to express such meanings. A double diminutive is a diminutive form with two diminutive suffixes rather than one. Diminutives are often employed as nicknames and pet names when speaking to small children and when expressing extreme tenderness and intimacy to an adult. The opposite of the diminutive form is the augmentative. In some contexts, diminutives are also employed in a pejorative sense to denote that someone or something is weak or childish. For example, one of the last Western Roman emperors was Romulus Augustus, but his name was diminutivized to "Romulus Augustulus" to express his powerlessness. In many languages, diminutives are word forms that are formed from the root word by affixation. In most languages, diminutives can also be formed as multi-word constructions such as "Tiny Tim" or "Little Dorrit". In most languages that form diminutives by affixation, this is a productive part of the language.[1] For example, in Spanish gordo can be a nickname for someone who is overweight, and by adding an -ito suffix, it becomes gordito, which is more affectionate. Examples of a double diminutive having two diminutive suffixes are Polish dzwon → dzwonek → dzwoneczek and Italian casa → casetta → casettina. In English, the alteration of meaning is often conveyed through clipping, making the words shorter and more colloquial. Diminutives formed by adding affixes in other languages are often longer and (as colloquial) not necessarily understood.
While many languages apply a grammatical diminutive to nouns, a few – including Slovak, Dutch, Spanish, Romanian, Latin, Polish, Bulgarian, Czech, Russian and Estonian – also use it for adjectives (in Polish: słodki → słodziutki → słodziuteńki) and even other parts of speech (Ukrainian спати → спатки → спатоньки 'to sleep', or Slovak spať → spinkať → spinuškať 'to sleep', bežať → bežkať 'to run'). Diminutives in isolating languages may grammaticalize strategies other than suffixes or prefixes. In Mandarin Chinese, for example, other than the nominal prefix 小- xiǎo- and nominal suffixes -儿/-兒 -r and -子 -zi, reduplication is a productive strategy, e.g., 舅 → 舅舅 and 看 → 看看.[3] In formal Mandarin usage, the use of diminutives is relatively infrequent, as they tend to be considered colloquial rather than formal. Some Wu Chinese dialects use a tonal affix for nominal diminutives; that is, diminutives are formed by changing the tone of the word.
https://en.wikipedia.org/wiki/Diminutive
In a written language, a logogram (from Ancient Greek logos 'word' and gramma 'that which is drawn or written'), also logograph or lexigraph, is a written character that represents a semantic component of a language, such as a word or morpheme. Chinese characters as used in Chinese as well as other languages are logograms, as are Egyptian hieroglyphs and characters in cuneiform script. A writing system that primarily uses logograms is called a logography. Non-logographic writing systems, such as alphabets and syllabaries, are phonemic: their individual symbols represent sounds directly and lack any inherent meaning. However, all known logographies have some phonetic component, generally based on the rebus principle, and the addition of a phonetic component to pure ideographs is considered to be a key innovation in enabling the writing system to adequately encode human language. Some of the earliest recorded writing systems are logographic; the first historical civilizations of Mesopotamia, Egypt, China and Mesoamerica all used some form of logographic writing.[1][2] All logographic scripts ever used for natural languages rely on the rebus principle to extend a relatively limited set of logograms: a subset of characters is used for their phonetic values, either consonantal or syllabic. The term logosyllabary is used to emphasize the partially phonetic nature of these scripts when the phonetic domain is the syllable. In Ancient Egyptian hieroglyphs, Ch'olti', and in Chinese, there has been the additional development of determinatives, which are combined with logograms to narrow down their possible meaning. In Chinese, they are fused with logographic elements used phonetically; such "radical and phonetic" characters make up the bulk of the script. Ancient Egyptian and Chinese relegated the active use of rebus to the spelling of foreign and dialectical words. Logoconsonantal scripts have graphemes that may be extended phonetically according to the consonants of the words they represent, ignoring the vowels.
For example, Egyptian used the same glyph to write both sȝ 'duck' and sȝ 'son', though it is likely that these words were not pronounced the same except for their consonants. The primary examples of logoconsonantal scripts are the scripts used to write Ancient Egyptian: hieroglyphs, hieratic, and demotic. Logosyllabic scripts have graphemes which represent morphemes, often polysyllabic morphemes, but when extended phonetically represent single syllables. They include cuneiform, Anatolian hieroglyphs, Cretan hieroglyphs, Linear A and Linear B, Chinese characters, Maya script, Aztec script, Mixtec script, and the first five phases of the Bamum script. A peculiar system of logograms developed within the Pahlavi scripts (developed from the abjad of Aramaic) used to write Middle Persian during much of the Sassanid period; the logograms were composed of letters that spelled out the word in Aramaic but were pronounced as in Persian (for instance, the combination m-l-k would be pronounced "shah"). These logograms, called hozwārishn (a form of heterograms), were dispensed with altogether after the Arab conquest of Persia and the adoption of a variant of the Arabic alphabet. All historical logographic systems include a phonetic dimension, as it is impractical to have a separate basic character for every word or morpheme in a language.[a] In some cases, such as cuneiform as it was used for Akkadian, the vast majority of glyphs are used for their sound values rather than logographically. Many logographic systems also have a semantic/ideographic component (see ideogram), called "determinatives" in the case of Egyptian and "radicals" in the case of Chinese.[b] Typical Egyptian usage was to augment a logogram, which may potentially represent several words with different pronunciations, with a determinative to narrow down the meaning, and a phonetic component to specify the pronunciation.
In the case of Chinese, the vast majority of characters are a fixed combination of a radical that indicates the character's nominal category, plus a phonetic to give an idea of the pronunciation. The Mayan system used logograms with phonetic complements like the Egyptian, while lacking ideographic components. Not all logograms are associated with one specific language, and some are not associated with any language at all. The ampersand is a logogram in the Latin script,[3] a combination of the letters "e" and "t". In Latin, et means "and", and the ampersand is still used to represent this word today in a variety of languages, standing for the morphemes "and", "y", or "en" for readers of English, Spanish, or Dutch, respectively. Outside of any particular script is Unicode, a compilation of characters of various meanings, whose maintainers state their intention to include every character from every language.[4] It is the generally accepted standard for computer character encoding, but others, like ASCII and Baudot, exist and serve various purposes in digital communication. Many logograms in these character sets are ubiquitous and are used on the Internet by users worldwide. Chinese scholars have traditionally classified Chinese characters (hànzì) into six types by etymology. The first two types are "single-body", meaning that the character was created independently of other characters. "Single-body" pictograms and ideograms make up only a small proportion of Chinese logograms. More productive for the Chinese script were the two "compound" methods, i.e. the character was created from assembling different characters. Despite being called "compounds", these logograms are still single characters, and are written to take up the same amount of space as any other logogram. The final two types are methods in the usage of characters rather than the formation of characters themselves.
The most productive method of Chinese writing, the radical-phonetic, was made possible by ignoring certain distinctions in the phonetic system of syllables. In Old Chinese, post-final ending consonants /s/ and /ʔ/ were typically ignored; these developed into tones in Middle Chinese, which were likewise ignored when new characters were created. Also ignored were differences in aspiration (between aspirated vs. unaspirated obstruents, and voiced vs. unvoiced sonorants); the Old Chinese difference between type-A and type-B syllables (often described as presence vs. absence of palatalization or pharyngealization); and sometimes, voicing of initial obstruents and/or the presence of a medial /r/ after the initial consonant. In earlier times, greater phonetic freedom was generally allowed. During Middle Chinese times, newly created characters tended to match pronunciation exactly, other than the tone – often by using as the phonetic component a character that itself is a radical-phonetic compound. Due to the long period of language evolution, such component "hints" within characters as provided by the radical-phonetic compounds are sometimes useless and may be misleading in modern usage. As an example, based on 每 'each', pronounced měi in Standard Mandarin, are the characters 侮 'to humiliate', 悔 'to regret', and 海 'sea', pronounced respectively wǔ, huǐ, and hǎi in Mandarin. Three of these characters were pronounced very similarly in Old Chinese – /mˤəʔ/ (每), /m̥ˤəʔ/ (悔), and /m̥ˤəʔ/ (海) according to a recent reconstruction by William H. Baxter and Laurent Sagart[6] – but sound changes in the intervening 3,000 years or so (including two different dialectal developments, in the case of the last two characters) have resulted in radically different pronunciations.
Within the context of the Chinese language, Chinese characters (known as hanzi) by and large represent words and morphemes rather than pure ideas; however, the adoption of Chinese characters by the Japanese and Korean languages (where they are known as kanji and hanja, respectively) has resulted in some complications to this picture. Many Chinese words, composed of Chinese morphemes, were borrowed into Japanese and Korean together with their character representations; in this case, the morphemes and characters were borrowed together. In other cases, however, characters were borrowed to represent native Japanese and Korean morphemes, on the basis of meaning alone. As a result, a single character can end up representing multiple morphemes of similar meaning but with different origins across several languages. Because of this, kanji and hanja are sometimes described as morphographic writing systems.[7] Because much research on language processing has centered on English and other alphabetically written languages, many theories of language processing have stressed the role of phonology in producing speech. Contrasting logographically coded languages, where a single character is represented phonetically and ideographically, with phonetically/phonemically spelled languages has yielded insights into how different languages rely on different processing mechanisms. Studies on the processing of logographically coded languages have, amongst other things, looked at neurobiological differences in processing, with one area of particular interest being hemispheric lateralization. Since logographically coded languages are more closely associated with images than alphabetically coded languages, several researchers have hypothesized that right-side activation should be more prominent in logographically coded languages.
Although some studies have yielded results consistent with this hypothesis, there are too many contrasting results to draw any final conclusions about the role of hemispheric lateralization in orthographically versus phonetically coded languages.[8] Another topic that has been given some attention is differences in the processing of homophones. Verdonschot et al.[9] examined differences in the time it took to read a homophone out loud when a picture that was either related or unrelated[10] to a homophonic character was presented before the character. Both Japanese and Chinese homophones were examined. Whereas word production in alphabetically coded languages (such as English) has shown a relatively robust immunity to the effect of context stimuli,[11] Verdonschot et al.[12] found that Japanese homophones seem particularly sensitive to these types of effects. Specifically, reaction times were shorter when participants were presented with a phonologically related picture before being asked to read a target character out loud. An example of a phonologically related stimulus from the study would be when participants were presented with a picture of an elephant, which is pronounced zou in Japanese, before being presented with the Chinese character 造, which is also read zou. No effect of phonologically related context pictures was found for the reaction times for reading Chinese words. A comparison of the (partially) logographically coded languages Japanese and Chinese is interesting because whereas the Japanese language consists of more than 60% homographic heterophones (characters that can be read two or more different ways), most Chinese characters have only one reading. Because both languages are logographically coded, the difference in latency in reading aloud Japanese and Chinese due to context effects cannot be ascribed to the logographic nature of the writing systems.
Instead, the authors hypothesize that the difference in latency times is due to additional processing costs in Japanese, where the reader cannot rely solely on a direct orthography-to-phonology route, but information on a lexical-syntactical level must also be accessed in order to choose the correct pronunciation. This hypothesis is confirmed by studies finding that Japanese Alzheimer's disease patients whose comprehension of characters had deteriorated could still read the words out loud with no particular difficulty.[13][14] Studies contrasting the processing of English and Chinese homophones in lexical decision tasks have found an advantage for homophone processing in Chinese, and a disadvantage for processing homophones in English.[15] The processing disadvantage in English is usually described in terms of the relative lack of homophones in the English language. When a homophonic word is encountered, the phonological representation of that word is first activated. However, since this is an ambiguous stimulus, a matching at the orthographic/lexical ("mental dictionary") level is necessary before the stimulus can be disambiguated and the correct pronunciation can be chosen. In contrast, in a language (such as Chinese) where many characters with the same reading exist, it is hypothesized that the person reading the character will be more familiar with homophones, and that this familiarity will aid the processing of the character and the subsequent selection of the correct pronunciation, leading to shorter reaction times when attending to the stimulus. In an attempt to better understand homophony effects on processing, Hino et al.[11] conducted a series of experiments using Japanese as their target language. While controlling for familiarity, they found a processing advantage for homophones over non-homophones in Japanese, similar to what has previously been found in Chinese.
The researchers also tested whether orthographically similar homophones would yield a disadvantage in processing, as has been the case with English homophones,[16] but found no evidence for this. It is evident that there is a difference in how homophones are processed in logographically coded and alphabetically coded languages, but whether the advantage for processing of homophones in the logographically coded languages Japanese and Chinese (i.e. their writing systems) is due to the logographic nature of the scripts, or merely reflects an advantage for languages with more homophones regardless of script nature, remains to be seen. The main difference between logograms and other writing systems is that the graphemes are not linked directly to their pronunciation. An advantage of this separation is that understanding of the pronunciation or language of the writer is unnecessary; e.g., 1 is understood regardless of whether its reader calls it one, ichi or wāḥid. Likewise, people speaking different varieties of Chinese may not understand each other in speaking, but may do so to a significant extent in writing, even if they do not write in Standard Chinese. Therefore, in China, Vietnam, Korea, and Japan before modern times, communication by writing (筆談) was the norm of East Asian international trade and diplomacy using Classical Chinese. This separation, however, also has the great disadvantage of requiring the memorization of the logograms when learning to read and write, separately from the pronunciation. Though not from an inherent feature of logograms but due to its unique history of development, Japanese has the added complication that almost every logogram has more than one pronunciation. Conversely, a phonetic character set is written precisely as it is spoken, but with the disadvantage that slight pronunciation differences introduce ambiguities.
Many alphabetic systems, such as those of Greek, Latin, Italian, Spanish, and Finnish, make the practical compromise of standardizing how words are written while maintaining a nearly one-to-one relation between characters and sounds. Orthographies in some other languages, such as English, French, Thai and Tibetan, are more complicated than that: character combinations are often pronounced in multiple ways, usually depending on their history. Hangul, the Korean language's writing system, is an example of an alphabetic script that was designed to replace the logogrammatic hanja in order to increase literacy. The latter is now rarely used, but retains some currency in South Korea, sometimes in combination with hangul. According to government-commissioned research, the most commonly used 3,500 characters listed in the People's Republic of China's "Chart of Common Characters of Modern Chinese" (现代汉语常用字表, Xiàndài Hànyǔ Chángyòngzì Biǎo) cover 99.48% of a two-million-word sample. As for the case of traditional Chinese characters, 4,808 characters are listed in the "Chart of Standard Forms of Common National Characters" (常用國字標準字體表) by the Ministry of Education of the Republic of China, while 4,759 are listed in the "List of Graphemes of Commonly-Used Chinese Characters" (常用字字形表) by the Education and Manpower Bureau of Hong Kong, both of which are intended to be taught during elementary and junior secondary education. Education after elementary school includes not as many new characters as new words, which are mostly combinations of two or more already learned characters.[17] Entering complex characters can be cumbersome on electronic devices due to a practical limitation in the number of input keys.
There exist various input methods for entering logograms, either by breaking them up into their constituent parts, as with the Cangjie and Wubi methods of typing Chinese, or by using phonetic systems such as Bopomofo or Pinyin, where the word is entered as pronounced and then selected from a list of logograms matching it. While the former approach is (linearly) faster, it is more difficult to learn. With stroke-based input systems, however, the strokes forming the logogram are typed as they are normally written, and the corresponding logogram is then entered. Also, due to the number of glyphs, in programming and computing in general, more memory is needed to store each grapheme, as the character set is larger. As a comparison, ISO 8859 requires only one byte for each grapheme, while the Basic Multilingual Plane encoded in UTF-8 requires up to three bytes. On the other hand, English words, for example, average five characters and a space per word[18] and thus need six bytes for every word. Since many logograms contain more than one grapheme, it is not clear which is more memory-efficient. Variable-width encodings allow a unified character encoding standard such as Unicode to use only the bytes necessary to represent a character, reducing the overhead that results from merging large character sets with smaller ones.
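The byte arithmetic above can be checked directly with any UTF-8 encoder; a minimal sketch in Python, where the sample strings are illustrative:

```python
# Byte cost of text under different encodings, illustrating the
# comparison above: one byte per ASCII character in UTF-8, versus
# three bytes for a BMP character such as a Chinese logogram.
english = "water "   # five letters plus a space
logogram = "水"      # U+6C34, 'water', a single BMP code point

print(len(english.encode("utf-8")))    # 6 bytes
print(len(logogram.encode("utf-8")))   # 3 bytes

# In a single-byte encoding such as ISO 8859-1, every supported
# character is exactly one byte (it cannot represent 水 at all):
print(len(english.encode("iso-8859-1")))  # 6 bytes
```

So a six-byte English word and a three-byte logogram can convey comparable content, which is why the memory comparison in the text is inconclusive.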
https://en.wikipedia.org/wiki/Logogram
Abbreviations are used very frequently in medicine. They boost efficiency as long as they are used intelligently. The advantages of brevity should be weighed against the possibilities of obfuscation (making the communication harder for others to understand) and ambiguity (having more than one possible interpretation). Certain medical abbreviations are avoided to prevent mistakes, according to best practices (and in some cases regulatory requirements); these are flagged in the list of abbreviations used in medical prescriptions.

Periods (stops) are often used in styling abbreviations, but the prevalent practice in medicine today is to forgo them as unnecessary. The prevalent way to represent plurals for medical acronyms and initialisms is simply to affix a lowercase s (no apostrophe). Possessive forms are not often needed, but can be formed using apostrophe + s; often the writer can also recast the sentence to avoid them. Arrows may be used to indicate numerous conditions, including elevation (↑), diminution (↓), and causation (→, ←).[3]

Pronunciation follows convention outside the medical field, in which acronyms are generally pronounced as if they were words (JAMA, SIDS), initialisms are generally pronounced as individual letters (DNA, SSRI), and abbreviations generally use the expansion (soln. = "solution", sup. = "superior"). Abbreviations of weights and measures are pronounced using the expansion of the unit (mg = "milligram") and chemical symbols using the chemical expansion (NaCl = "sodium chloride"). Some initialisms deriving from Latin may be pronounced either as letters (qid = "cue eye dee") or using the English expansion (qid = "four times a day").[citation needed]
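The plural and possessive conventions just described are mechanical enough to express in a couple of lines; a trivial sketch (function names are mine, for illustration only):

```python
def pluralize(abbrev: str) -> str:
    """Plural of a medical acronym or initialism: affix a lowercase 's', no apostrophe."""
    return abbrev + "s"

def possessive(abbrev: str) -> str:
    """Possessive form, on the rare occasions it is needed: apostrophe + s."""
    return abbrev + "'s"

print(pluralize("SSRI"))   # SSRIs
print(possessive("JAMA"))  # JAMA's
```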
https://en.wikipedia.org/wiki/List_of_medical_abbreviations
An amalgamated name is a name formed by combining several previously existing names. These may take the form of an acronym (where only one letter of each name is taken) or a blend (where a large part of each name is taken, such as the first syllable). Amalgamated names are most commonly used for amalgamated businesses, characters and places. Newly arising partnerships may also choose to name themselves by amalgamating their names. Amalgamation is also a term used in linguistics when a compound contains roots from several languages, without being part of a blended language. For example, a word with an English and a Spanish root would not be an amalgam if it is part of Spanglish, while an English word with a Greek and a Latin root would be. This is also known as a hybrid word.[citation needed]
https://en.wikipedia.org/wiki/Amalgamation_(names)
In sign language, an initialized sign is one produced with a handshape (or handshapes) corresponding to the fingerspelling of its equivalent in the locally dominant oral language, based on the manual alphabet representing that oral language's orthography. The handshapes of these signs thus represent the initial letter of their written equivalents. In some cases, this is due to the local oral language having more than one equivalent to a basic sign. For example, in ASL, the signs for "class" and "family" are the same (a basic sign for 'group of people'), except that "class" is signed with a 'C' handshape and "family" with an 'F' handshape. In other cases initialization is required for disambiguation even though the signs are not semantically related. For example, in ASL, "water" is signed with a 'W' handshape touching the mouth, while "dentist" is similar apart from using a 'D' handshape. In still other cases initialization is not used for disambiguation; the ASL sign for "elevator", for example, is an 'E' handshape moving up and down along the upright index finger of the other hand.

The large number of initialized signs in ASL and French Sign Language is partly a legacy of Abbé de l'Épée's system of Methodical Sign (les signes méthodiques), in which the handshapes of most signs were changed to correspond to the initial letter of their translation in the local oral language, and (in the case of ASL) partly a more recent influence of Manually Coded English.[1] Sign languages make use of initialized signs to different degrees. Some, such as Taiwanese Sign Language and Hong Kong Sign Language, have none at all, as they have no manual alphabets and thus no fingerspelling. In Japanese Sign Language, there are kana-based initialized signs that contain only the first mora of the equivalent Japanese word. For example, the sign for 'feeling' in JSL incorporates the kana character 'KI' of the Japanese manual syllabary, as in Japanese kimochi (気持ち).
However, only signs following the phonological constraints are accepted by native signers.[2] In ASL, initialized signs are typically considered "hearing" signs, used in schools to help students acquire English, though some, such as "water" above, are thoroughly assimilated. In Mexican Sign Language, however, initialized signs are much more numerous, and are more fully integrated into the language.[3] This is also the case with Nepali Sign Language; initialized signs are perhaps one of the most noticeable structural differences between the lexicon of Nepali Sign Language and that of neighboring Indo-Pakistani Sign Language, which (perhaps in part due to its two-handed manual alphabet) has significantly fewer initialized signs, but a fair number of "sequential initializations" (i.e., compound signs composed of the initial letter of the word either preceding or following a sign, e.g. "C" + BOSS = CAPTAIN in IPSL).[4]
https://en.wikipedia.org/wiki/Initialized_sign
A one-letter word is a word composed of a single letter. The application of this apparently simple definition is complex, due to the difficulty of defining the notions of 'word' and 'letter'. One-letter words have an uncertain status in language theory, dictionaries and social usage. They are sometimes used as book titles, and have been the subject of literary experimentation by Futurist, Minimalist and Oulipian poets.

For linguists, the term 'word' is far from unambiguous.[3] It is defined graphically[4] as a set of letters between two word dividers,[5][6] with Jacques Anis adding that "the word thus seems to have a real existence only in writing, through the blanks that isolate it."[7] This pragmatic definition can already be found in Arnauld and Lancelot's Port-Royal Grammar, published in 1660: "We call a word what is pronounced apart and written apart."[8] In this sense, any isolated letter forms a word, even if it carries no meaning. Semantically, a word is defined as a morpheme, "the smallest unit of meaning."[9][10] In this respect, an isolated letter is a word only if it carries meaning.[nb 2] For Françoise Benzécri, the bijection seems obvious:[nb 3] "Every letter of an ordinary alphabet is associated with the one-letter word it constitutes, noted as that letter,"[11] but Darryl Francis notes on the contrary that the meaning of one-letter words is not reduced to designating the letter that constitutes them.[12] Solange Cuénod also asserts that a one-letter word "has no reason to be taken as that letter,"[13] and gives the following example: "If I ask you [in French] how to say a in English, depending on whether you consider it to be the letter of the alphabet, or the word containing only that letter, your answer will be different. It will be either a (pronounced ei), or has.
So there's no need to confuse the verb avoir in the third person singular of the present indicative with the letter used to write it."[nb 4] Linguist Malgorzata Mandola doubts that a one-letter toponym can have semantic value in the case of the village of Y in the Somme,[14] and believes that it is rather what the grammarian Lucien Tesnière calls a "grammatical word, devoid of any semantic function".[15] Marcel Proust, on the other hand, distinguishes between the nom propre (proper noun) and the mot commun (common word), because for him, as Roland Barthes puts it, the proper noun is "a voluminous sign, a sign always full of a thick, dense layer of meaning, which no use reduces or flattens, unlike the common noun:"[16] "Words present us with a small, clear and usual image of things [...] But names present people — and cities, which they accustom us to believe are individual, unique as people — with a confused image that draws from them, from their bright or dark sound, the color with which it is uniformly painted." Letters are the elements of an alphabet, i.e.
a writing system based on the representation of sounds, as opposed to ideograms, which often originate from images.[nb 5] From this perspective, it is atypical for a lone letter to convey anything beyond its literal meaning, whereas an ideogram may convey more than one concept simultaneously.[nb 6][17] Linguists understand letters as graphemes, and classify them into three groups: alphabetic graphemes[18] or alphagrams; "punctuo-typographic graphemes" or "topograms", which correspond to punctuation marks; and "logographic graphemes" or logograms, which comprise "logograms stricto sensu, grapheme-signs noting word-morphemes (such as & and $) and quasi-logograms, such as acronyms, which turn a sequence of alphagrams into a global unit."[19] The alphabetic grapheme is defined either as the representation of a phoneme,[nb 7] or as the minimal unit of the graphic form of expression,[7] the second, often preferred, definition assimilating the alphabetic grapheme to the letter as a component of the alphabet.[7][18]

Several authors have pointed out that there can be no more one-letter words than there are letters in the alphabet,[20][21] such as Françoise Benzécri, for whom there are "as many one-letter words as there are letters."[11] This apparently self-evident statement, however, does not take into account the impact of diacritical marks.[22] In addition to the 26 letters of the Latin alphabet, the French language uses 16 diacritical letters accepted by the civil registry: à, â, ä, ç, é, è, ê, ë, ï, î, ô, ö, ù, û, ü and ÿ.[23] It is also customary to retain the original diacritics when transcribing proper names written in Latin characters in the original language,[24][25] such as Å or Ø.
These diacritical letters are considered unique graphemes.[26] The same applies to graphemes resulting from elision: c', ç', d', j', l', m', n', s', t' and z'.[27][28] The sixteenth-century typographical use of "q̃" for "que," notably by Joachim du Bellay[29] and Jean de Sponde,[30] could lead us to consider it a diacritical letter. But it is rather the transcription of an abbreviation from a Tironian note,[31][32] conventionally rendered as "q'",[33] which is to be considered a logogram,[34] just like the "K with diagonal stroke," an abbreviation of the Breton toponym Caer,[35][36] transcribed as "k," which is officially considered a "manifest alteration of the spelling."[37]

The existence of one-letter words goes against Platonic language theory, in which letters are sublexical units,[38] intended to be combined with others to form words.[39] As a result, Platonists developed a kind of aversion to single-letter words. Geber and the medieval Arab grammarians thus considered that a true word could not consist of fewer than two letters,[40][41] and Leibniz excluded one-letter words from meaningful combinations.[42] Reflections on the meaning and importance of one-letter words, however, return to a debate on which Plato took a stance in the Cratylus, and which Gérard Genette summarizes as follows: "Placed between two opponents, one of whom (Hermogenes) holds to the so-called conventionalist thesis [...] according to which names simply result from an agreement and convention [...] between men, and the other (Cratylus) to the so-called naturalistic thesis [...] according to which each object has received a "right name" that belongs to it according to a natural propriety, Socrates seems first to support the second against the first, then the first against the second".[43] While contemporary linguists most often agree with Ferdinand de Saussure that "the linguistic sign is arbitrary" and that this principle "is not contested by anyone,"[44] this has not always been the case.
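The status of a diacritical letter as a single grapheme can be seen in Unicode, where a letter such as é is one precomposed character in NFC form but decomposes into a base letter plus a combining accent in NFD form, while still counting as one unit for the reader. A small sketch using Python's standard library (my illustration, not drawn from the sources cited):

```python
import unicodedata

e_acute = "\u00e9"  # 'é' as a single precomposed code point
decomposed = unicodedata.normalize("NFD", e_acute)

# In NFC the diacritical letter is one code point; in NFD it is a
# base letter plus a combining accent, yet both render identically
# and function as a single grapheme.
assert len(e_acute) == 1
assert len(decomposed) == 2
assert decomposed[0] == "e"
assert unicodedata.name(decomposed[1]) == "COMBINING ACUTE ACCENT"
```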
The cratylic impulse has long haunted theories of language, leading to the attribution of mimetic meaning to letters in general, and to single-letter words in particular. As Genette points out, this supposed mimesis is not only phonic, but sometimes graphic too: "Writing can be conceived, as well as speech, as an imitation of the objects it designates [...] A so-called phonetic script, such as ours, can also be conceived as an imitation of the sounds it notes, each letter (for example) being the visual analogon of a phoneme [...] Mimology in general can be divided into mimophony (the terrain of classical cratylism) and mimography, which in turn is subdivided into ideomimography and phonomimography. These two varieties of mimography are theoretically quite heterogeneous, and independent of each other. In practice, however, they may come together."[45]

For Court de Gébelin, a proponent of the existence of a mimetic mother tongue,[46] the word "A" exemplifies a "primitive" word in the "plan général" of the Monde primitif published in 1773. The author provides the article "A" as an example of his project and explains its meaning as "designating property, possession".[47] Anne-Marie Mercier-Faivre notes that this word, "in a very astonishing way, installs the alphabet in myth".[48] It should come as no surprise, for Gébelin: "that the word A is at the head of words: placed at the highest degree of the vowel scale, it dominates among them like a monarch among his subjects. Being the most sonorous, it is the first to be distinguished in the most sensitive manner: & it is from these physical qualities which are proper to it & which characterize it that we shall see born all the meanings with which it has been clothed."[49] According to Gébelin, all writing is "hieroglyphic," i.e. "made up of pure picto-ideograms."
Consequently, "alphabetic writing is hieroglyphic [...], each letter being the painting of an object".[50]However, Genette notes that this does not imply the alphabet directly represents sounds. "The letter-ideogram paints an object to the eyes; the phoneme, which for [Gébelin] is a true ideophone, paints that same object to the ears - and the resemblance between these two portraits results only from their equal fidelity to their common model.".[51] PhilologistAntoine Fabre d'Olivet, a contemporary of Gébelin and influenced by him,[52]believes that the one-letter word represents "a simple, non-compounded, non-complex thing, such as a single-stranded rope."[53] In another register, the words of a letter can be a key to the analysis of a text or language.Edgar Allan Poe, inThe Gold-Bug, as a prelude to the exposition of a method usingfrequency analysis,[54][55][56]underlines the status of the words of a letter as a cryptological key when the spacing of the original text is preserved, which is not the case for the cryptogram of his short story:[57] "If there had been spaces, the task would have been singularly easier. In that case, I would have started by collating and analyzing the shortest words, and, if I had found, as is always likely, a single-letter word, a or I (un, je) for example, I would have considered the solution assured".[nb 9] It was thanks to such a word thatCharles Virolleaudsucceeded in 1929 in deciphering theUgaritic alphabet,[58][59][60](the L in this alphabet) expressing the possessive preposition as inHebrewandArabic.[61] Due to thepolysemyof one-letter words,crossword puzzlesand dictionaries often do not include specific definitions.[62][63][64]Linguists Yannick Marchand and Robert Damper also note the absence of the word "A" from the lexical database they rely on, taken from an edition ofWebster's English dictionary.[nb 10][65]On the other hand, as T. A. 
Hall notes, the same work devotes different entries to "'D" (as in I'd) and "D'" (as in d'you know); to "'S" (as in he's) and "S'" (as in the girls' toys); to "'T" (as in 'twill do) and "T'" (as in t'other).[22] This remark allows him to refute the claim (regarded as minor by some authors[66][67]) that all one-letter words are palindromes: "these Webster entries are not palindromic,[nb 11] since reading from right to left does not produce the same word as reading from left to right".[22]

Craig Conley has devoted a 232-page reference book[68][69][70] to one-letter words,[71] One-Letter Words: A Dictionary,[72] in which he counts over a thousand different meanings for English one-letter words,[73] leading him to emphasize the importance of context in understanding these words.[74] For example, he lists 76 meanings of the word "X", for which he professes a particular affection,[75] including 17 in the "texts and proverbs" section, three in the "cards, spirits, and adult films" section, eight in the "on parchment paper" section, 15 in the "crosswords" section, five in the "Dr/Mr/Mrs/Miss X" section, 11 in the "scientific subjects" section, eight in the "mathematics" section, three in the "foreign meanings" section, eight in the "miscellaneous" section, and two in the "facts and figures" section.[76] According to him, the only other work on the subject is a dictionary of one-letter words in Pali, compiled by the sixteenth-century Buddhist lexicographer Saddhammakitti[77] and entitled Ekakkharakosa.[78] English lexicographer Jonathon Green, a specialist in English slang, has compiled a large number of one-letter word meanings in English, most of which do not appear in Conley's dictionary.[79] The following table compares the number of meanings given to English one-letter words by these two lexicographers.

The incipit of Craig Conley's dictionary recalls the White Queen's words to Alice: "I'll tell you a secret... I can read words with only one letter! Isn't that wonderful?
But don't be discouraged. You'll get there, too, after a while."[80] This evocation of Lewis Carroll's text is all the more appropriate to the challenge of a dictionary of one-letter words, since the description of the meaning of these words must include meanings that are not only nominative, i.e. relative to the use of language in general, but also stipulative, i.e. decided by an author in a particular context,[81][82] as Humpty Dumpty emphasizes to Alice: "When I use a word," replied Humpty Dumpty in a somewhat disdainful tone of voice, "it means exactly what I like it to mean... no more and no less." "The question," said Alice, "is whether you have the power to make words mean something other than what they mean." "The question," retorted Humpty Dumpty, "is who will be the master... period."[83]

Jack Goody believes that the written word "adds an important dimension to many social actions."[84] In this respect, one-letter words can be divisive. In legal terms, their use in patronymics has sometimes been invalidated. In the United States, a woman was refused permission to change her name to R, on the grounds that there must be "some form of systematic standardization of individual identification in our society," a decision that was upheld on appeal to avoid "insurmountable difficulties";[85] a Korean named O, fed up with the difficulties he encountered with the computer programs of certain organizations not designed for names as short as his,[nb 12] had to change it to Oh.[nb 13][86][87] In Sweden, a couple who wanted to name their son Q, in homage to the character of the same name in the James Bond series, were refused both at first instance and on appeal, on the grounds that the first name was "inappropriate"; however, the Supreme Administrative Court overturned these decisions, ruling that it had not been "proven that the name Q could be offensive or that it could cause embarrassment to its bearer."
In New Zealand, the first name J was invalidated six times between 2001 and 2013.[88] On the other hand, it was on joining the US Air Force that singer Johnny Cash, born J. R. Cash, changed his first names to John and Ray to comply with military requirements.[89][90]

A notable example of the social impact of choosing a one-letter surname is the case of American preacher Malcolm Little, who decided in 1952 to adopt the surname X, on the premise that Little was the name of a slave owner rather than of his African ancestors.[91] The Nation of Islam movement eventually asked its new members to renounce their "slave name" and adopt the same patronymic,[92] creating such confusion that they were obliged to add a serial number before the letter X: one of Malcolm X's drivers, Charles Morris, called himself Charles 37X, the 37th Charles to have his name changed in the same temple.[93] According to some authors, this political practice also contributed to the choice of the letter X to designate a generation.[94][95] Georges Perec underlines the special status of X: "this letter that has become a word, this noun that is unique in the language in having only one letter, unique also in that it is the only one to have the shape of what it designates."[nb 14] In other cases, on the contrary, social norms justify the use of one-letter words.
For example, the use of a single letter for a middle name, perceived as valorizing,[96][97] is sometimes accepted, e.g. S for President Truman,[98][99] and sometimes criticized, e.g. V for English politician Grant Shapps.[100][101] Mathematician Benoît Mandelbrot readily admitted to adding a B after his first name,[102][103] a choice attributed to a mathematical joke about recursivity.[104][105] Joanne Rowling attributes the addition of an unjustified K to her first name to her publisher, who was keen to attract a readership of young boys.[106][107] In Myanmar, the word U, meaning uncle,[108] is added in front of the surname as a mark of notoriety,[109] borne, for example, by U Nu, U Pandita or U Thant. In certain fields, such as tickers,[110][111][112] stock market mnemonics or domain names,[113][114] the brevity and rarity of one-letter words lend them prestige.[115][116] A one-letter word can also be a form of euphemism to avoid the use of a shocking word.[117][118]

One-letter words are also used extensively in SMS, particularly when the sound value of the letter is used, such as "g" for "j'ai" or "c" for "c'est" in French,[119][120] or "u" for "you", "r" for "are" and "c" for "see" in English.[121] However, diacritical letters are coded differently from one manufacturer to another: for example, â, ë and ç cannot all be used at the same time within the 160-character limit, which makes the billing of messages containing them uncertain.[122] Among Japanese youth, 了解/りょうかい (ryōkai, "understand"/"ok") is shortened to りょ (ryo) and again to り (ri).[123]

A character in James Joyce's Ulysses speaks of books yet to be written: "You were bowing to the mirror, stepping forward, taking the applause, serious as a pope, very striking face. Hurray for the Sunday moron! Rra! No one would notice: you didn't tell anyone. The books you were going to write with letters as titles. Did you read his F? Oh yes, but I prefer Q. Of course, but W is magnificent. Ah yes, W."
Unbeknownst to Joyce, F would be written by Daniel Kehlmann in 2013, Q by Luther Blissett in 1999, and Georges Perec's W, published in 1975, would describe a country "where sport and life merge in the same magnificent effort." Other literary works with one-letter titles include: A by Louis Zukofsky, G. by John Berger, H by Philippe Sollers, N. by Stephen King, R by Céline Minard, S by John Updike, V. by Thomas Pynchon and Z by Vassílis Vassilikós. Jacques Roubaud also titled one of his books ∈, this mathematical symbol of membership being for him "by extension, a symbol of belonging to the world."[nb 15][126] Several films also have a single-letter title, such as Alexandre Arcady's K, Oliver Stone's W., Fritz Lang's M and its 1951 remake by Joseph Losey, or Costa-Gavras' Z, based on the novel of the same name.[nb 16][127]

Several writers, such as Victor Hugo,[129][130] Paul Claudel[131][132] and Francis Ponge,[133][134] have taken an interest in the letter as ideomimography. Various one-letter poems have prompted exegesis on their semantic value and on whether or not they are words.
This is particularly true of works by Russian poets of the futurist Zaum movement, including Vasilisk Gnedov,[135][136] Aleksej Kručënyx[137] and Ilia Zdanevich.[138] American minimalist poet Aram Saroyan is the author of Untitled poster-poem, a poem consisting of an m with four legs,[139][140] which Bob Grumman places "at the center of an alphabet in formation, between the m and the n," and which is cited by the Guinness Book of Records as the world's shortest poem.[nb 17][141] Translating into English the concrete poem Ich by Vladimír Burda, composed of the German word "Ich" ("I" in German) topped by his fingerprint, Canadian poet John Curry reduced it to "i" ("I" in English), formed from the shaft of an "i" topped by a dot-shaped fingerprint.[142]

American poet Dave Morice estimated that English allows only fifty-two one-letter words: twenty-six upper case and twenty-six lower case.[143] He also indulged in a literary experiment in the 1970s, inventing a female "alter ego",[144] Joyce Holland, a minimalist poet[145] played by his partner, P. J.
Casteel, whose existence was acknowledged even by The New York Times.[146][147] She published 13 issues of Matchbook, a magazine of one-word poems stapled to matchbooks and sold for 5 cents.[148][149] In 1973, in Alphabet Anthology, Joyce Holland collected one-letter poems chosen for the occasion by 104 American poets,[150] including Bruce Andrews ("o"), Larry Eigner ("e") and Bernadette Mayer ("n").[151] Joyce Holland had sent them a postcard featuring all 26 letters of the alphabet, in lower case, asking them to circle the letter of their choice and send it back to her.[152] The letter O was the most chosen (12), followed by A and G (8); nobody chose V, and one contributor preferred to add a Þ.[152][153]

One-letter words play a role in the Oulipian[155] constraint, a form of rhopalic verse in which the first line consists of a one-letter word.[156] But above all, they are the subject of a notable experiment by François Le Lionnais, dating from 1957 and published in La Littérature potentielle in 1973: the "Réduction d'un poème à une seule lettre" ("Reduction of a poem to a single letter"). The poem was formulated as "T." On the following page, he comments:[157] "I'm afraid that reducing a poem to a single letter is on the other side of permissible literature. But we can have fun, can't we? In any case, the author didn't want to repeat this performance. He left it to 25 of his colleagues to put together the complete set of 26 poems based - from the Latin alphabet - on this principle." This short poem has been the subject of "formidable glosses"[158] within the Oulipo. In Écrits français, Harry Mathews attempts a "textual explanation." He observes that "the only letter in the poem [...] is not a letter by itself, but a letter followed by a period.
Punctuation transforms what would otherwise be a sufficient if mysterious entity into an abbreviation."[159] According to Harry Mathews, this is in fact the reduction of a device comparable to Raymond Queneau's Hundred Thousand Billion Poems: on the principle of perpetual motion, the following sentence would be inscribed on both the front and back of a page: "J'ai inventé le mouvement perpétuel T.S.V.P." (I invented perpetual motion T.S.V.P.). Successive reductions would then eventually leave only "T." inscribed on one side of a page.[159]

In 2006, Marcel Bénabou resumed his analysis of the poem in Miniature persane. While praising Harry Mathews' "learned commentary" on the period, he considers that "there is nothing to prevent us from thinking that it could be [...] a period" and that "in the author's mind, there is a hesitation as to the exact nature of his 'attempt' and of the 'work' to which said attempt has led". For Bénabou, the choice of the letter T is explained primarily by personal reasons, but also by the characteristics of this consonant, which is at once homophonic, polyphonic and polysemous. Bénabou observes, however, that Le Lionnais "doesn't seem to have taken any notice" of the "properly oral" dimension of the one-letter word, and in this connection evokes an "Oulipian debate" in which Jacques Bens proposed I, to be read as "un en chiffre romain et en garamond gras" ("one in Roman numerals and in bold Garamond").[nb 19] Finally, Bénabou recalls an anecdote told by Jean-Jacques Rousseau in his Confessions: "I read that a wise bishop on a tour of his diocese found an old woman who for all her prayers knew only how to say O. He said to her: Good mother, keep praying always like that; your prayer is better than ours".[nb 20] He believes that the reader should also assure Le Lionnais that his poem is worth more than he thinks, as it "in no way leads to a dead end, but on the contrary to fantastic enrichment."
In 2007, Jacques Jouet finally responded to Le Lionnais' call, asking his Oulipo colleagues to "renew the gesture" by composing their own reductions.[160] Among the responses, François Caradec proposed reducing Un coup de dés jamais n'abolira le hasard to "D."; Michelle Grangaud reduced ruelle to "L."; Paul Fournel, "for strictly personal, emotional reasons," saw no other possibility than "T."; Olivier Salon reduced his poem to "P."; Oskar Pastior, after a quantitative analysis of Charles Baudelaire's Harmonie du soir, reduced it to "L."; Anne F. Garréta proposed a prose narrative, "J."; and Harry Mathews chose "K.", both because this letter, like himself, "is only imperfectly integrated into the modern French language", and because it is "composed of three sticks," allowing it to be rearranged in order to construct twelve other letters.[nb 21][160]
https://en.wikipedia.org/wiki/One-letter_word
Lists of abbreviations contain abbreviations and acronyms in different languages and fields, including Latin and English abbreviations and acronyms. An acronym is a type of abbreviation formed from the initial components of the words of a longer name or phrase. Such lists exist both for abbreviations in the English language and for abbreviations in other languages.
https://en.wikipedia.org/wiki/Lists_of_abbreviations
In the Roman Catholic Church, the ecclesiastical words most commonly abbreviated at all times are proper names, titles (official or customary) of persons or corporations, and words of frequent occurrence. Historically, both Jewish scribes and Talmudic scholars frequently used abbreviations in their religious texts. Between the seventh and ninth centuries, the ancient Roman system of abbreviations used in the Catholic Church gave way to a more difficult one that gradually grew up in the monastic houses and in the chanceries of the new Teutonic kingdoms. Merovingian, Lombard, and Anglo-Saxon scripts each offer their own abbreviations, as does the unique scotica manus or libri scottice scripti ('Irish hand', or books written in the medieval Irish hand). Eventually, the main productive centres of technical manuscripts, such as the Papal Chancery, the theological schools of Paris and Oxford, and the civil-law school of Bologna, set the standards of abbreviation for all Europe.

Medieval manuscripts make frequent use of abbreviations, owing in part to the abandonment of uncial and quasi-uncial script, and the almost universal use of cursive writing. Medieval authors inherited a few abbreviations from Christian antiquity; others were invented or adapted in order to save time and parchment. Abbreviations are found especially in manuscripts of scholastic theology and canon law, annals and chronicles, the Roman law, and in administrative documents, civil and ecclesiastical, such as privileges, bulls, and rescripts. The number of abbreviations multiplied with time, and they were especially numerous in the early years of the printing press in Europe. Many early printed books display liberal use of abbreviations carried over from handwritten manuscripts, together with other characteristics of the manuscript page. The development of printing brought about the abandonment of many abbreviations, whilst also suggesting and introducing new ones.
New abbreviations were also introduced following church developments such as the growth of ecclesiastical legislation and the creation of new offices. Fewer medieval abbreviations are found in the text of books used on public occasions, such as missals, antiphonaries, and Bibles, as the needs of theological students seem to have been the chief cause of the majority of medieval abbreviations. The means of abbreviation were usually full points or dots (mostly in Roman antiquity), the semicolon (eventually conventionalized), and lines (horizontal, perpendicular, oblong, wavy curves, and commas). Vowels were frequently written not after, but over, consonants. Certain letters, such as p and q, which occurred with extreme frequency in prepositions and terminations, became the source of many peculiar abbreviations, as did frequent words such as et ('and') and est ('is'). Habit and convenience are today the principal motives for using these abbreviations. Most of those in actual use fall under one or other of the following heads: administrative, liturgical, scholastic, and chronological. The first class, administrative abbreviations, includes those used in the composition of Pontifical documents. They were once very numerous, and lists of them may be seen in the works of Quantin and Prou, amongst others. Since 29 December 1878, by order of Pope Leo XIII, the great papal documents (Litterae Apostolicae) are no longer written in the old Gothic hand known as bollatico; all abbreviations, with the exception of a few obvious ones, like S.R.E., were abolished by the same authority (Acta Sanctae Sedis, XI, 465–467). However, in everyday business, the Roman Congregations still frequently use certain brief formulas, such as negative ('no') and negative et amplius ('no with emphasis'). 
These are not technically abbreviations.[a] This class includes also the abbreviations for the names of most sees.[b] Under the general heading of administrative abbreviations can be included abbreviated forms of address in ordinary conversation, whether referring to individuals or to members of religious orders, congregations, and institutes. The forms of address used for members of Catholic lay societies and the Papal orders of merit also fall under this heading, as do the abbreviations of the titles of Roman Congregations and of the individual canonical ecclesiastical authorities. A second class of abbreviations includes those used in the description of liturgical acts or the directions for their performance, e.g. the Holy Mass, the Divine Office (Breviary), and the ecclesiastical devotions. The abbreviated forms for the name of God, Jesus Christ, and the Holy Ghost, and for the names of the Blessed Virgin and the saints, can also be classed as liturgical abbreviations, as can abbreviations used in the administration of the Sacraments, in mortuary epitaphs (including catacomb inscriptions), and so on. Finally, some miscellaneous abbreviations, such as those used in the publication of documents concerning beatification and canonization, are also classed as liturgical. Scholastic abbreviations include those used to designate honorific titles acquired in the schools, to avoid the repetition of lengthy titles of books and reviews, or to facilitate reference to ecclesiastical and civil legislation. Chronological abbreviations are used to describe elements of the year in a civil or ecclesiastical sense. This article incorporates text from a publication now in the public domain: Herbermann, Charles, ed. (1913). "Ecclesiastical Abbreviations". Catholic Encyclopedia. New York: Robert Appleton Company.
https://en.wikipedia.org/wiki/List_of_ecclesiastical_abbreviations
This is a list of common Latin abbreviations. Nearly all the abbreviations below have been adopted by Modern English. However, with some exceptions (for example, versus or modus operandi), most of the Latin referent words and phrases are perceived as foreign to English. In a few cases, English referents have replaced the original Latin ones (e.g., "rest in peace" for RIP and "postscript" for PS). Latin was once the universal academic language in Europe. From the 18th century, authors started using their mother tongues to write books, papers, or proceedings. Even when Latin fell out of use, many Latin abbreviations continued to be used due to their precise simplicity and Latin's status as a learned language.[citation needed] Some of the words and abbreviations listed have been in general use but are currently used less often.
https://en.wikipedia.org/wiki/List_of_Latin_abbreviations
This is a list of Latin phrases and their translations into English. To view all phrases in a single, lengthy document, see List of Latin phrases (full).
https://en.wikipedia.org/wiki/List_of_Latin_phrases
Nominal numbers are numerals used as labels to identify items uniquely. Importantly, the actual values of the numbers which these numerals represent are less relevant, as they do not indicate quantity, rank, or any other measurement. Labelling referees Smith and Kumar as referees "1" and "2" is a use of nominal numbers. Any set of numbers (a subset of the natural numbers) will serve as consistent labels as long as a distinct number is used for each distinct item to be labelled. Nonetheless, sequences of integers may naturally be used as the simplest labelling scheme: for example, 1, 2, 3, and so on. The term "nominal number" may be quite recent and of limited use. It appears[citation needed] to have originated in school textbooks derived from the statistical term "nominal data", defined as data indicating "...merely statements of qualitative category of membership." This usage comes from the sense of nominal as "name". Mathematically, nominal numbering is a one-to-one and onto function from a set of objects being named to a set of numerals, a set which may change (typically growing) over time: it is a function because each object is assigned a single numeral, it is one-to-one because different objects are assigned different numerals, and it is onto because every numeral in the set at a given time is associated with a single named object. "Nominal number" can be defined broadly as "any numeral used for identification, however it was assigned", or narrowly as "a numeral carrying no information other than identification". For the purposes of naming, the term "number" is often used loosely to refer to any string (sequence of symbols), which may not consist entirely of digits; such identifiers are often alphanumeric. For instance, UK National Insurance numbers, some driver's licence numbers, and some serial numbers contain letters. 
"Nominal" refers to theuseof numbers: any nominal number can be used by itsnumerical valueas aninteger—added to another, multiplied, compared in magnitude, and so forth—but for nominal numbers these operations are not, in general, meaningful. For example, theZIP code11111 is less than the ZIP code 12345, but that does not necessarily mean that 11111 was issued before 12345 or that the region denoted by 11111 is further south than 12345, though it might be. Similarly, one can add or subtract ZIP codes, but this is meaningless:12345 − 11111does not have any meaning as a ZIP code. In general, the only meaningful operation with nominal numbers is to compare two nominal numbers to see whether they are identical or not (whether they refer to the same object). A great variety of numbers meet the broad definition, including: These are usually assigned either in some hierarchical way, such as how telephone numbers are assigned (in NANPA) asCountry Code+Area Code+ Prefix + Suffix, where the first three are geographically based, or sequentially, as inserial numbers; these latter are thus properlyordinalnumbers. Numerical identifiers that are nominal numbers narrowly defined, viz, convey no information other than identity, are quite rare. These must be defined either arbitrarily or randomly, and most commonly arise in computer applications, such as dynamicIP addressesassigned byDynamic Host Configuration Protocol. A more everyday example are sportssquad numbers, which do not in general have any public meaning beyond identity, though they may be allocated based on some internal club or organization policy. In some settings, these are based on position, but in others they are associated with an individual, being a proper nominal number. The naming function is demonstrated by"retired numbers", where a club no longer issues a number that has become associated with a particularly famous player, but reallocate others to new players when they become available.
https://en.wikipedia.org/wiki/Nominal_number
Pleonasm (/ˈpliː.əˌnæzəm/; from Ancient Greek πλεονασμός pleonasmós, from πλέον pléon 'to be in excess')[1][2] is redundancy in linguistic expression, such as in "black darkness", "burning fire", "the man he said",[3] or "vibrating with motion". It is a manifestation of tautology by traditional rhetorical criteria.[4] Pleonasm may also be used for emphasis, or because the phrase has become established in a certain form. Tautology and pleonasm are not consistently differentiated in the literature.[5] Most often, pleonasm is understood to mean a word or phrase which is useless, clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use of idiom. It can aid in achieving a specific linguistic effect, be it social, poetic, or literary. Pleonasm sometimes serves the same function as rhetorical repetition: it can be used to reinforce an idea, contention, or question, rendering writing clearer and easier to understand. Pleonasm can serve as a redundancy check; if a word is unknown, misunderstood, or misheard, or if the medium of communication is poor (a static-filled radio transmission or sloppy handwriting), pleonastic phrases can help ensure that the meaning is communicated even if some of the words are lost.[citation needed] Some pleonastic phrases are part of a language's idiom, like tuna fish, chain mail, and safe haven in American English. They are so common that their use is unremarkable for native speakers, although in many cases the redundancy can be dropped with no loss of meaning. When expressing possibility, English speakers often use potentially pleonastic expressions such as It might be possible or perhaps it's possible, where both terms (the verb might or the adverb perhaps along with the adjective possible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. 
Others, however, use this expression only to indicate a distinction between ontological possibility and epistemic possibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (in logical terms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibility per se is far less widespread among speakers of most[citation needed] other languages (except in Spanish; see examples); rather, almost all speakers of those languages use one term in a single expression. In a satellite-framed language like English, verb phrases containing particles that denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into"). Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved, or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void" and "each and every" are legal doublets that are part of legally operative language and are often drafted into legal documents. A classic example of such usage was that by the Lord Chancellor at the time (1864), Lord Westbury, in the English case of ex parte Gorely,[6] when he described a phrase in an Act as "redundant and pleonastic". This type of usage may be favored in certain contexts. 
However, it may also be disfavored when used gratuitously to portray false erudition, obfuscate, or otherwise introduce verbiage, especially in disciplines where imprecision may introduce ambiguities (such as the natural sciences).[7] Examples from Baroque, Mannerist, and Victorian writing provide a counterpoint to Strunk's advocacy of concise writing. There are various kinds of pleonasm, including bilingual tautological expressions, syntactic pleonasm, semantic pleonasm, and morphological pleonasm. A bilingual tautological expression is a phrase that combines words that mean the same thing in two different languages.[8]: 138 An example of a bilingual tautological expression is the Yiddish expression מים אחרונים וואַסער‎ mayim akhroynem vaser. It literally means "water last water" and refers to "water for washing the hands after a meal, grace water".[8]: 138 Its first element, mayim, derives from the Hebrew מים‎ ['majim] 'water'. Its second element, vaser, derives from the Middle High German word vaser 'water'. According to Ghil'ad Zuckermann, Yiddish abounds with both bilingual tautological compounds and bilingual tautological first names,[8]: 138 and similar examples occur in English-language contexts. Syntactic pleonasm occurs when the grammar of a language makes certain function words optional.[citation needed] For example, consider the English sentences "I know you're coming" and "I know that you're coming". In this construction, the conjunction that is optional when joining a sentence to a verb phrase with know. Both sentences are grammatically correct, but the word that is pleonastic in this case. 
By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use of that makes clear that the present speaker is making an indirect rather than a direct quotation, so that he is not imputing particular words to the person he describes as having made an assertion; the demonstrative adjective that also does not fit such an example. Also, some writers may use "that" for technical clarity.[9] In some languages, such as French, the word is not optional and should therefore not be considered pleonastic. The same phenomenon occurs in Spanish with subject pronouns. Since Spanish is a null-subject language, which allows subject pronouns to be deleted when understood, the sentences Yo te amo and Te amo mean the same. In this case, the pronoun yo ('I') is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone or intention; this depends on pragmatics rather than grammar). Such differing but syntactically equivalent constructions, in many languages, may also indicate a difference in register. The process of deleting pronouns is called pro-dropping, and it also happens in many other languages, such as Korean, Japanese, Hungarian, Latin, Italian, Portuguese, Swahili, the Slavic languages, and Lao. In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic (or dummy) pronoun is used; thus "It is raining" is acceptable English, while "Is raining" is not. Here the pleonastic "it" fills the subject function, but it contributes no meaning to the sentence. The version which omits the pleonastic it is marked as ungrammatical, although no meaning is lost by the omission.[10] Elements such as "it" or "there", serving as empty subject markers, are also called (syntactic) expletives, or dummy pronouns. 
The pleonastic ne (ne pléonastique) expresses uncertainty in formal French, as in Je crains qu'il ne pleuve ('I fear it may rain'), where the ne carries no negative meaning. Two more striking examples of French pleonastic construction are aujourd'hui and Qu'est-ce que c'est?. The word aujourd'hui/au jour d'hui is translated as 'today', but originally means "on the day of today", since the now obsolete hui means 'today'. The expression au jour d'aujourd'hui (translated as "on the day of today") is common in spoken language and demonstrates that the original construction of aujourd'hui has been lost; it is considered a pleonasm. The phrase Qu'est-ce que c'est? means 'What's that?' or 'What is it?', while literally it means "What is it that it is?". There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."[11][12] When Robert South said, "It is a pleonasm, a figure usual in Scripture, by a multiplicity of expressions to signify one notable thing",[13] he was observing the Biblical Hebrew poetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and used oral patterning, which has many pleonasms. In particular, very many verses of the Psalms are split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language, as distinct from spoken language, were not as well developed as they are today when the books making up the Old Testament were written.[14][15] See also parallelism (rhetoric). This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", from Peter Gabriel's "Mercy Street"). 
Semantic pleonasm is a question more of style and usage than of grammar.[16] Linguists usually call this redundancy to avoid confusion with syntactic pleonasm, a more important phenomenon for theoretical linguistics. It usually takes one of two forms: overlap, in which one word's semantic component is subsumed by the other, and prolixity, in which a phrase contains words which add nothing, or nothing logical or relevant, to the meaning. An expression like "tuna fish", however, might elicit one of many possible responses in its defence. In some cases, the redundancy in meaning occurs at the syntactic level above the word, such as at the phrase level, as in two well-known statements attributed to Yogi Berra: "It's déjà vu all over again" and his remark that he never makes predictions, especially about the future. The redundancy of these statements is deliberate, for humorous effect. (See Yogi Berra#"Yogi-isms".) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago": the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. The humorous quote about not making predictions is not really a pleonasm, but rather an ironic play on words; alternatively it could be an analogy between predict and guess. However, "It's déjà vu all over again" could mean that there was earlier another déjà vu of the same event or idea, which has now arisen for a third time, or that the speaker had very recently experienced a déjà vu of a different idea. 
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes), can also be inherited by one language from the influence of another and are not pleonasms in the more critical sense but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question.Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly: All of these constructions originate from the application ofIrish Gaelicgrammatical rules to the English dialect spoken, in varying particular forms, throughout the island. Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humor, or other intentional purposes, such as: The latter of these is a result of Yiddish influences on modern English, especiallyEast CoastUS English. Sometimes editors and grammatical stylists will use "pleonasm" to describe simple wordiness. This phenomenon is also calledprolixityorlogorrhea. Compare: or even: The reader or hearer does not have to be told that loud music has a sound, and in a newspaper headline or other abbreviated prose can even be counted upon to infer that "burglary" is a proxy for "sound of the burglary" and that the music necessarily must have been loud to drown it out, unless the burglary was relatively quiet (this is not a trivial issue, as it may affect the legal culpability of the person who played the music); the word "loud" may imply that the music should have been played quietly if at all. Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be properly regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying. 
Prolixity is also used to obfuscate, confuse, or euphemize; it is not necessarily redundant or pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shell shock) and "pre-owned vehicle" (used car) are both tumid euphemisms but are not redundant. Redundant forms, however, are especially common in business, political, and academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible." In contrast to redundancy, an oxymoron results when two seemingly contradictory words are adjoined. Redundancies sometimes take the form of foreign words whose meaning is repeated in the context, as in "the Il Ristorante restaurant", "the La Brea Tar Pits", or "beef with au jus sauce". These phrases mean, respectively, "the the restaurant restaurant", "the the tar tar (pits)", "with in juice sauce", and so on. However, many times these redundancies are necessary, especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant. (If they understand Italian and English, it might, if spoken, be misinterpreted as a generic reference and not a proper noun, leading the hearer to ask "Which ristorante do you mean?"; such confusions are common in richly bilingual areas like Montreal or the American Southwest when mixing phrases from two languages.) But avoiding the redundancy of the Spanish phrase in the second example would only leave the awkward alternative "La Brea pits are fascinating". 
Most people find it best not to drop articles when using proper nouns made from foreign languages, though there are some exceptions. This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc., where the article can (some would say must) be present where it would otherwise be "forbidden". Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., the Sahara Desert: "Sahara" is an English approximation of the word for "deserts" in Arabic). "The Los Angeles Angels" professional baseball team is literally "the The Angels Angels". A supposed extreme example is Torpenhow Hill in Cumbria, where several of the elements in the name likely mean "hill".[citation needed] See the List of tautological place names for many more examples. The word tsetse means "fly" in the Tswana language, a Bantu language spoken in Botswana and South Africa. This word is the root of the English name for a biting fly found in Africa, the tsetse fly. Acronyms and initialisms can also form the basis for redundancies; this is known humorously as RAS syndrome (for Redundant Acronym Syndrome syndrome), in which a word after the acronym repeats a word represented in the acronym, as in "PIN number" ("Personal Identification Number number"). (See RAS syndrome for many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number", for "Social Insurance Number number" [sic], is a similar common phrase in Canada). 
But redundant acronyms are more common with technical (e.g., computer) terms, where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM". Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb), as in "sing a song"; classic examples also occur in Latin. The words need not be etymologically related, but simply conceptually related, to be considered an example of a cognate object. Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form is polyptoton, the stylistic repetition of the same word or of words derived from the same root, as in Franklin D. Roosevelt's "The only thing we have to fear is fear itself". As with cognate objects, these constructions are not redundant, because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror").
https://en.wikipedia.org/wiki/Pleonasm#Bilingual_tautological_expressions
A recursive acronym is an acronym that refers to itself, and appears most frequently in computer programming. The term was first used in print in 1979 in Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, in which Hofstadter invents the acronym GOD, meaning "GOD Over Djinn", to help explain infinite series, and describes it as a recursive acronym.[1] Other references followed;[2] however, the concept was used as early as 1968 in John Brunner's science fiction novel Stand on Zanzibar. In the story, the acronym EPT (Education for a Particular Task) later morphed into "Eptification for Particular Task". Recursive acronyms typically form backwards: either an existing ordinary acronym is given a new explanation of what the letters stand for, or a name is turned into an acronym by giving the letters an explanation of what they stand for, in each case with the first letter standing recursively for the whole acronym. In computing, an early tradition in the hacker community, especially at MIT, was to choose acronyms and abbreviations that referred humorously to themselves or to other abbreviations. Perhaps the earliest example in this context is the backronym "Mash Until No Good", which was created in 1960 to describe Mung, and revised to "Mung Until No Good". It lived on as a recursive command in the editing language TECO.[3] In 1977,[3] programmer Ted Anderson coined TINT ("TINT Is Not TECO"), an editor for MagicSix. This inspired the two MIT Lisp Machine editors called EINE ("EINE Is Not Emacs", German for one) and ZWEI ("ZWEI Was EINE Initially", German for two), in turn inspiring Anderson's retort SINE ("SINE Is Not EINE"). Richard Stallman followed with GNU (GNU's Not Unix). 
Recursive acronyms often include negatives, such as denials that the thing defined is or resembles something else (which the thing defined does in fact resemble or is even derived from), to indicate that, despite the similarities, it is distinct from the program on which it was based.[4] An earlier example appears in a 1976 textbook on data structures, in which the pseudo-language SPARKS is used to define the algorithms discussed in the text. "SPARKS" is claimed to be a non-acronymic name, but "several cute ideas have been suggested" as expansions of the name. One of the suggestions is the tail-recursive "Smart Programmers Are Required to Know SPARKS".[5] Other examples are the YAML language, whose name stands for "YAML Ain't Markup Language", and the PHP language, whose name stands for "PHP: Hypertext Preprocessor".
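The "backward" formation described above has a concrete consequence: because the first letter of the expansion stands for the acronym itself, every substitution step reintroduces the acronym, so a full expansion never terminates. A small sketch makes this visible, using GNU's expansion from the text (the function itself is invented for illustration and is not part of any real tool):

```python
# Toy sketch of why a recursive acronym never fully expands: the
# acronym's first word is the acronym itself, so each substitution
# step reintroduces it. GNU ("GNU's Not Unix") is the example; the
# function is invented for this illustration.

def expand(acronym: str, expansion: str, depth: int) -> str:
    """Substitute the acronym with its expansion `depth` times."""
    result = acronym
    for _ in range(depth):
        result = result.replace(acronym, expansion, 1)
    return result

print(expand("GNU", "GNU's Not Unix", 1))  # GNU's Not Unix
print(expand("GNU", "GNU's Not Unix", 2))  # GNU's Not Unix's Not Unix
print(expand("GNU", "GNU's Not Unix", 3))  # GNU's Not Unix's Not Unix's Not Unix
```

No matter how large `depth` grows, the string still begins with "GNU", mirroring Hofstadter's use of such acronyms to explain infinite regress.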
https://en.wikipedia.org/wiki/Recursive_acronym
In literary criticism and rhetoric, a tautology is a statement that repeats an idea using near-synonymous morphemes, words, or phrases, effectively "saying the same thing twice".[1][2] Tautology and pleonasm are not consistently differentiated in the literature.[3] Like pleonasm, tautology is often considered a fault of style when unintentional. Intentional repetition may emphasize a thought or help the listener or reader understand a point.[4] Sometimes logical tautologies like "Boys will be boys" are conflated with language tautologies, but a language tautology is not inherently true, while a logical tautology always is.[4] The word was coined in Koine Greek from ταὐτός ('the same') plus λόγος ('word' or 'idea'), and transmitted through the 3rd-century Latin tautologia and the French tautologie. It first appeared in English in the 16th century. The use of the term logical tautology was introduced in English by Wittgenstein in 1919, perhaps following Auguste Comte's usage in 1835.[5] Some tautologies take the form of abbreviations whose last word is redundantly repeated.[6][7] Intentional repetition of meaning is meant to amplify or emphasize a particular, usually significant fact about what is being discussed. For example, a gift is, by definition, free of charge; using the phrase "free gift" might emphasize that there are no hidden conditions or fine print (such as the expectation of money or reciprocation) or that the gift is being given by volition. This is related to the rhetorical device of hendiadys, where one concept is expressed through the use of two descriptive words or phrases: for example, using "goblets and gold" to mean wealth, or "this day and age" to refer to the present time. Superficially, these expressions may seem tautological, but they are stylistically sound because the repeated meaning is just a way to emphasize the same idea. The use of tautologies, however, is usually unintentional. 
For example, the phrases "mental telepathy", "planned conspiracies", and "small dwarfs" imply that there are such things as physical telepathy, spontaneous conspiracies, and giant dwarfs, which are oxymorons.[8] Parallelism is not tautology, but rather a particular stylistic device. Much Old Testament poetry is based on parallelism: the same thing said twice, but in slightly different ways (Fowler describes this as pleonasm).[1] However, modern biblical study emphasizes that there are subtle distinctions and developments between the two lines, such that they are usually not truly the "same thing". Parallelism can be found wherever there is poetry in the Bible: in the Psalms, the Books of the Prophets, and in other areas as well.
https://en.wikipedia.org/wiki/Tautology_(language)
Internet slang (also called Internet shorthand, cyber-slang, netspeak, digispeak, or chatspeak) is a non-standard or unofficial form of language used by people on the Internet to communicate with one another.[1] A popular example of Internet slang is lol, meaning "laugh out loud". Since Internet slang is constantly changing, it is difficult to provide a standardized definition.[2] However, it can be understood to be any type of slang that Internet users have popularized, and in many cases have coined. Such terms often originate with the purpose of saving keystrokes or compensating for character-limit restrictions. Many people use the same abbreviations in texting, instant messaging, and on social networking websites. Acronyms, keyboard symbols, and abbreviations are common types of Internet slang. New dialects of slang, such as leet or Lolspeak, develop as ingroup Internet memes rather than as time savers. Many people also use Internet slang in face-to-face, real-life communication. Internet slang originated in the early days of the Internet, with some terms predating the Internet itself.[3] The earliest forms of Internet slang assumed people's knowledge of programming and commands in a specific language.[4] Internet slang is used in chat rooms, social networking services, online games, video games, and the online community generally. Since 1979, users of communications networks like Usenet have created their own shorthand.[5] The primary motivation for using slang unique to the Internet is to ease communication. 
However, while Internet slang shortcuts save time for the writer, they take twice as long for the reader to understand, according to a study by the University of Tasmania.[6] On the other hand, similar to the use of slang in traditional face-to-face speech or written language, slang on the Internet is often a way of indicating group membership.[7] Internet slang provides a channel which facilitates and constrains the ability to communicate in ways that are fundamentally different from those found in other semiotic situations. Many of the expectations and practices which we associate with spoken and written language are no longer applicable. The Internet itself is ideal for new slang to emerge because of the richness of the medium and the availability of information.[8] Slang is thus also motivated by the "creation and sustenance of online communities".[8] These communities, in turn, play a role in solidarity or identification[2][9] or an exclusive or common cause.[10] David Crystal distinguishes among five areas of the Internet where slang is used: the Web itself, email, asynchronous chat (for example, mailing lists), synchronous chat (for example, Internet Relay Chat), and virtual worlds.[11] The electronic character of the channel has a fundamental influence on the language of the medium. Options for communication are constrained by the nature of the hardware needed in order to gain Internet access. Thus, productive linguistic capacity (the type of information that can be sent) is determined by the preassigned characters on a keyboard, and receptive linguistic capacity (the type of information that can be seen) is determined by the size and configuration of the screen. Additionally, both sender and receiver are constrained linguistically by the properties of the Internet software, computer hardware, and networking hardware linking them.
Electronic discourse refers to writing that "very often reads as if it were being spoken – that is, as if the sender were writing talking".[12] Internet slang does not constitute a homogeneous language variety; rather, it differs according to the user and type of Internet situation.[13] Audience design occurs on online platforms, and therefore online communities can develop their own sociolects, or shared linguistic norms.[14][15] Within the language of Internet slang, there is still an element of prescriptivism, as seen in style guides, for example Wired Style,[16] which are specifically aimed at usage on the Internet. Even so, few users consciously heed these prescriptive recommendations on CMC (computer-mediated communication), but rather adapt their styles based on what they encounter online.[17] Although it is difficult to produce a clear definition of Internet slang, the following types of slang may be observed. This list is not exhaustive. There is much ongoing debate about how the use of slang on the Internet influences language outside the digital sphere. Even though a direct causal relationship between the Internet and language has yet to be proven by any scientific research,[28] Internet slang has invited split views on its influence on the standard of language use in non-computer-mediated communications. Prescriptivists tend to hold the widespread belief that the Internet has a negative influence on the future of language, and that it could lead to a degradation of standards.[11] Some would even attribute any decline of standard formal English to the increase in usage of electronic communication.[28] It has also been suggested that the linguistic differences between Standard English and CMC can have implications for literacy education.[29] This is illustrated by the widely reported example of a school essay submitted by a Scottish teenager, which contained many abbreviations and acronyms likened to SMS language.
There was great condemnation of this style by the mass media as well as educationists, who expressed that this showed diminishing literacy or linguistic abilities.[30] On the other hand, descriptivists have counter-argued that the Internet allows better expression of a language.[28] Rather than following established linguistic conventions, linguistic choices sometimes reflect personal taste.[31] It has also been suggested that, as opposed to intentionally flouting language conventions, Internet slang is a result of a lack of motivation to monitor speech online.[32] Hale and Scanlon describe language in emails as being derived from "writing the way people talk", and that there is no need to insist on 'Standard' English.[16] English users, in particular, have an extensive tradition of etiquette guides, instead of traditional prescriptive treatises, that offer pointers on linguistic appropriateness.[31] Using and spreading Internet slang also adds to the cultural currency of a language.[33] It is important to the speakers of the language due to the foundation it provides for identifying within a group, and also for defining a person's individual linguistic and communicative competence.[33] The result is a specialized subculture based on its use of slang.[34] In scholarly research, attention has, for example, been drawn to the effect of the use of Internet slang in ethnography, and more importantly to how conversational relationships online change structurally because slang is used.[33] In German, there is already considerable controversy regarding the use of anglicisms outside of CMC.[35] This situation is even more problematic within CMC, since the jargon of the medium is dominated by English terms.[13] An extreme example of an anti-anglicism perspective can be observed in the chatroom rules of a Christian site,[36] which ban all anglicisms ("Das Verwenden von Anglizismen ist strengstens untersagt!"
[Using anglicisms is strictly prohibited!]) and also translate even fundamental terms into German equivalents.[13] In April 2014, Gawker's editor-in-chief Max Read instituted new writing style guidelines banning Internet slang for his writing staff.[37][38][39][40][41][42] Internet slang has, however, gained traction in other publications ranging from BuzzFeed to The Washington Post, attracting attention from younger readers. Clickbait headlines have particularly sparked attention, originating from the rise of BuzzFeed in the journalistic sphere, which ultimately led to an online landscape populated with social media references and a shift in language use.[43] Internet slang has crossed from being mediated by the computer into other non-physical domains.[44] Here, these domains are taken to refer to any domain of interaction where interlocutors need not be geographically proximate to one another, and where the Internet is not primarily used. Internet slang is now prevalent in telephony, mainly through short message (SMS) communication. Abbreviations and interjections, especially, have been popularized in this medium, perhaps due to the limited character space for writing messages on mobile phones. Another possible reason for this spread is the convenience of transferring the existing mappings between expression and meaning into a similar space of interaction.[45] At the same time, Internet slang has also taken a place as part of everyday offline language, among those with digital access.[44] The nature and content of online conversation is brought forward to direct offline communication through the telephone and direct talking, as well as through written language, such as in writing notes or letters. Interjections, such as numerically based and abbreviated Internet slang, are not pronounced as they are written, nor are they replaced by any actual action.
Rather, they become lexicalized and are spoken like non-slang words in a "stage direction" fashion, where the actual action is not carried out but substituted with a verbal signal. The notions of flaming and trolling have also extended outside the computer, and are used in the same circumstances of deliberate or unintentional implicatures.[8] The expansion of Internet slang has been furthered through codification and the promotion of digital literacy. The subsequently existing and growing popularity of such references among those online as well as offline has thus advanced Internet slang literacy and globalized it.[46] Awareness of and proficiency in manipulating Internet slang in both online and offline communication indicate digital literacy, and teaching materials have even been developed to further this knowledge.[47] A South Korean publisher, for example, has published a textbook that details the meaning and context of use for common Internet slang instances and is targeted at young children who will soon be using the Internet.[48] Similarly, Internet slang has been recommended as language teaching material in second language classrooms in order to raise communicative competence by imparting some of the cultural value attached to a language that is available only in slang.[49] Meanwhile, well-known dictionaries such as the ODE[50] and Merriam-Webster have been updated with a significant and growing body of slang jargon. Besides common examples, lesser-known slang and slang with a non-English etymology have also found a place in standardized linguistic references. Along with these instances, entries in user-contributed dictionaries such as Urban Dictionary have also grown.
Codification seems to be qualified through frequency of use, and novel creations are often not accepted by other users of slang.[51] Although Internet slang began as a means of "opposition" to mainstream language, its popularity with today's globalized, digitally literate population has shifted it into a part of everyday language, where it also leaves a profound impact.[52] Frequently used slang has also become conventionalised into memetic "unit[s] of cultural information".[8] These memes in turn are further spread through their use on the Internet, prominently through websites. The Internet as an "information superhighway" is also catalysed through slang.[34] The evolution of slang has also created a 'slang union'[2] as part of a unique, specialised subculture.[34] Such impacts are, however, limited and require further discussion, especially from the non-English world. This is because Internet slang is prevalent in languages more actively used on the Internet, like English, which is the Internet's lingua franca.[53][54] In Japanese, the term moe has come into common use among slang users to mean something "preciously cute" and appealing.[55] Aside from the more frequent abbreviations, acronyms, and emoticons, Internet slang also uses archaic words or the lesser-known meanings of mainstream terms.[2] Regular words can also be altered into something with a similar pronunciation but an altogether different meaning, or attributed new meanings altogether.[2] Phonetic transcriptions are transformations of words into how they sound in a certain language, and are used as Internet slang.[56] In places where logographic languages are used, such as China, a visual Internet slang exists, giving characters dual meanings, one direct and one implied.[2] The Internet has helped people from all over the world to become connected to one another, enabling "global" relationships to be formed.[57] As such, it is important for the various types of slang used online to be recognizable for everyone.
It is also important to do so because other languages are quickly catching up with English on the Internet, following the increase in Internet usage in predominantly non-English-speaking countries. In fact, as of January 2020, only approximately 25.9% of the online population is made up of English speakers.[58] Different cultures tend to have different motivations behind their choice of slang, on top of the difference in language used. For example, in China, because of the tough Internet regulations imposed, users tend to use certain slang to talk about issues deemed sensitive to the government. These include using symbols to separate the characters of a word to avoid detection from manual or automated text pattern scanning and consequent censorship.[59] An outstanding example is the use of the term river crab to denote censorship. River crab (hexie) is pronounced the same as "harmony", the official term used to justify political discipline and censorship. As such, Chinese netizens reappropriate the official terms in a sarcastic way.[60] Abbreviations are popular across different cultures, including countries like Japan, China, France, and Portugal, and are used according to the particular language the Internet users speak. Significantly, this same style of slang creation is also found in non-alphabetical languages[2] as, for example, a form of "e gao" or alternative political discourse.[10] The difference in language often results in miscommunication, as seen in an onomatopoeic example, "555", which sounds like "crying" in Chinese and "laughing" in Thai.[61] A similar example is between the English "haha" and the Spanish "jaja", where both are onomatopoeic expressions of laughter, but the difference in language also means a different consonant for the same sound to be produced.
For more examples of how other languages express "laughing out loud", see also: LOL. In terms of culture, in Chinese, the numerically based onomatopoeia "770880" (simplified Chinese: 亲亲你抱抱你; traditional Chinese: 親親你抱抱你; pinyin: qīn qīn nǐ bào bào nǐ), which means to 'kiss and hug you', is used.[61] This is comparable to "XOXO", which many Internet users use. In French, "pk" or "pq" is used in place of pourquoi, which means 'why'. This is an example of a combination of onomatopoeia and shortening of the original word for convenience when writing online. In conclusion, every country has its own language background and cultural differences, and hence tends to have its own rules and motivations for its own Internet slang. However, at present, there is still a lack of studies by researchers on some differences between the countries. On the whole, the popular use of Internet slang has resulted in a unique online and offline community, as well as a few sub-categories of "special internet slang which is different from other slang spread on the whole internet... similar to jargon... usually decided by the sharing community".[9] It has also led to virtual communities marked by the specific slang they use[9] and to a more homogenized yet diverse online culture.[2][9] Internet slang can make advertisements more effective.[62] Two empirical studies showed that Internet slang could help promote an advertisement or capture the crowd's attention, but did not increase sales of the product. However, using Internet slang in advertising may attract a certain demographic, and it might not be the best choice depending on the product or goods. Furthermore, an overuse of Internet slang also negatively affects the brand by lowering the perceived quality of the advertisement, but using an appropriate amount can be sufficient to draw more attention to the ad.
According to the experiment, Internet slang helped capture the attention of consumers of necessity items. However, the demographic for luxury goods differs, and using Internet slang could cost the brand credibility, as Internet slang may be perceived as inappropriate for such products.[62]
https://en.wikipedia.org/wiki/Internet_slang
Internet culture refers to culture developed and maintained among frequent and active users of the Internet (also known as netizens) who primarily communicate with one another as members of online communities; that is, a culture whose influence is "mediated by computer screens" and information communication technology,[1]: 63 specifically the Internet. Internet culture arises from the frequent interactions between members within various online communities and the use of these communities for communication, entertainment, business, and recreation. Studied aspects of Internet culture include anonymity/pseudonymity, social media, gaming and specific communities, such as fandoms, and it has also raised questions about online identity and Internet privacy.[2] Increasingly widespread Internet adoption has influenced Internet culture, frequently enforcing norms via shaming, censuring and censorship while pressuring other cultural expressions underground.[3] The cultural history of the Internet is a story of rapid change. The Internet developed in parallel with rapid and sustained technological advances in computing and data communication. Widespread access to the Internet emerged as the cost of infrastructure dropped by several orders of magnitude with consecutive technological improvements. Though Internet culture originated during the creation and development of early online communities – such as those found on bulletin board systems before the Internet reached mainstream adoption in developed countries – many cultural elements have roots in other previously existing offline cultures and subcultures which predate the Internet. Specifically, Internet culture includes many elements of telegraphy culture (especially amateur radio culture), gaming culture and hacker culture. Initially, digital culture tilted toward the Anglosphere.
As a consequence of computer technology's early reliance on textual coding systems that were mainly adapted to the English language, Anglophone societies (followed by other societies with languages based on Latin script) enjoyed privileged access to digital culture. However, other languages have gradually increased in prominence. Specifically, the proportion of content on the Internet that is in English dropped from roughly 80% in the 1990s to around 52.9% in 2018.[4][5] As technology advances, Internet culture continues to change. The introduction of smartphones and tablet computers and the growing computer network infrastructure around the world have increased the number of Internet users and have likewise resulted in the proliferation and expansion of online communities. While Internet culture continues to evolve among active and frequent Internet users, it remains distinct from other previously offline cultures and subcultures which now have a presence online, even those cultures and subcultures from which Internet culture borrows many elements. One cultural antecedent of Internet culture was amateur radio (commonly known as ham radio). By connecting over great distances, ham operators were able to form a distinct cultural community with a strong technocratic foundation, as the radio gear involved was finicky and prone to failure. The area that later became Silicon Valley, where much of modern Internet technology originates, had been an early locus of radio engineering.[6] Alongside the original mandate for robustness and resiliency, the renegade spirit of the early ham radio community later infused the cultural value of decentralization and near-total rejection of regulation and political control that characterized the Internet's original growth era, with strong undercurrents of the Wild West spirit of the American frontier.
At its inception in the early 1970s as part of ARPANET, digital networks were small, institutional, arcane, and slow, which confined the majority of use to the exchange of textual information, such as interpersonal messages and source code. Access to these networks was largely limited to a technological elite based at a small number of prestigious universities; the original American network connected one computer in Utah with three in California.[7] Text on these digital networks was usually encoded in the ASCII character set, which was minimalistic even for established English typography, barely suited to other European languages sharing a Latin script (but with an additional requirement to support accented characters), and entirely unsuitable to any language not based on a Latin script, such as Mandarin, Arabic, or Hindi. Interactive use was discouraged except for high-value activities. Hence a store-and-forward architecture was employed for many message systems, functioning more like a post office than modern instant messaging; however, by the standards of postal mail, the system (when it worked) was stunningly fast and cheap. Among the heaviest users were those actively involved in advancing the technology, most of whom implicitly shared much the same base of arcane knowledge, effectively forming a technological priesthood. The origins of social media predate the Internet proper. The first bulletin board system was created in 1978,[8] GEnie was created by General Electric in 1985,[9] the mailing list Listserv appeared in 1986,[9] and Internet Relay Chat was created in 1988.[9] What is often described as the first social media site, SixDegrees, launched in 1997.[9] In the 1980s, the network grew to encompass most universities and many corporations, especially those involved with technology, including heavy but segregated participation within the American military–industrial complex.
Use of interactivity grew, and the user base became less dominated by programmers, computer scientists and hawkish industrialists, but it remained largely an academic culture centered around institutions of higher learning. It was observed that each September, with an intake of new students, standards of productive discourse would plummet until the established user base brought the influx up to speed on cultural etiquette. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia, opening the door for public participation. Soon the network was no longer dominated by academic culture, and the term eternal September, initially referring to September 1993, was coined as Internet slang for the endless intake of cultural newbies. Commercial use became established alongside academic and professional use, beginning with a sharp rise in unsolicited commercial e-mail, commonly called spam. Around this same time, the network transitioned to support the burgeoning World Wide Web. Multimedia formats such as audio, graphics, and video became commonplace and began to displace plain text, but multimedia remained painfully slow for dial-up users. Also around this time the Internet began to internationalize, supporting most of the world's major languages, but support for many languages remained patchy and incomplete into the 2010s. On the arrival of broadband access, file sharing services grew rapidly, especially of digital audio (with a prevalence of bootlegged commercial music) with the arrival of Napster in 1999 and similar projects which effectively catered to music enthusiasts, especially teenagers and young adults, soon becoming established as a prototype for rapid evolution into modern social media. Alongside ongoing challenges to traditional norms of intellectual property, the business models of many of the largest Internet corporations evolved into what Shoshana Zuboff terms surveillance capitalism.
Not only is social media a novel form of social culture, but also a novel form of economic culture where sharing is frictionless, but personal privacy has become a scarce good. In 1998, there was Hampster Dance, arguably the first successful Internet meme.[10] One early study, conducted from 1998 to 1999, found that participants viewed information obtained online as slightly more credible than information from magazines, radio, and television; information obtained from newspapers was the most credible.[11] Credibility online is established in much the same way that it is established in the offline world. Lawrence Lessig claimed that the architecture of a given online community may be the most important factor in establishing credibility. Factors include: anonymity, connection to physical identity, comment rating system, feedback type (positive vs. positive/negative), and moderation.[12] Many sites allow anonymous commentary, where the user-id attached to the comment is something like "guest". In an architecture that allows anonymous commentary, credibility attaches only to the object of the comment. Sites that require some link to an identity may require only a nickname that is sufficient to allow comment readers to rate the commenter, either explicitly or by informal reputation. Architectures can require that physical identity be associated with commentary, as in Lessig's example of Counsel Connect.[12]: 94–97 However, to require linkage to a physical identity, sensitive information about a user must be collected and safeguards for that collected information must be established – users must place sufficient trust in the site. Irrespective of safeguards, as with Counsel Connect,[12]: 94–97 use of physical identities links credibility across the frames of the Internet and real space, influencing the behaviors of those who contribute in those spaces. However, even purely online identities can establish credibility.
Even though nothing inherently links a person or group to their Internet-based persona, credibility can be earned, because of the time required.[12]: 113 In some architectures, commenters can, in turn, be rated by other users, potentially encouraging more responsible commentary, although the profusion of popular shitposters belies this. Architectures can be oriented around positive feedback or allow both positive and negative feedback. This feedback can take form through likes or upvotes, dislikes or downvotes, emoji reactions, rating systems, and written responses like comments or reviews. While a particular user may be able to equate certain responses with a "negative" evaluation, the actual meaning may be contextual.[13] Architectures can give editorial control to a group or individual not employed by the site (e.g., Reddit), termed moderators. Moderation may be either proactive (previewing content) or reactive (punishing violators). A moderator's credibility can be damaged by overly aggressive behavior.[1]
https://en.wikipedia.org/wiki/Cyberculture
Leet(or "1337"), also known aseleetorleetspeak, or simplyhacker speech, is a system of modified spellings used primarily on theInternet. It often uses character replacements in ways that play on the similarity of theirglyphsviareflectionor other resemblance. Additionally, it modifies certain words on the basis of a system ofsuffixesand alternative meanings. There are manydialectsorlinguistic varietiesin differentonline communities. The term "leet" is derived from the wordelite, used as an adjective to describe skill or accomplishment, especially in the fields ofonline gamingandcomputer hacking. The leet lexicon includes spellings of the word as1337orleet. Leet originated withinbulletin board systems(BBS) in the 1980s,[1][2]where having "elite" status on a BBS allowed a user access to file folders, games, and special chat rooms. TheCult of the Dead Cowhacker collective has been credited with the original coining of the term, in their text-files of that era.[3]One theory is that it was developed to defeattext filterscreated by BBS orInternet Relay Chatsystem operatorsfor message boards to discourage the discussion of forbidden topics, likecrackingandhacking.[1] Once reserved forhackers, crackers, andscript kiddies, leet later entered the mainstream.[1]Some consideremoticonsandASCII art, like smiley faces, to be leet, while others maintain that leet consists of only symbolic word obfuscation. More obscure forms of leet, involving the use of symbol combinations and almost no letters or numbers, continue to be used for its original purpose of obfuscated communication. It is also sometimes used as a scripting language. Variants of leet have been used to evade censorship for many years; for instance "@$$" (ass) and "$#!+" (shit) are frequently seen to make a word appear censored to the untrained eye but obvious to a person familiar with leet. This enables coders and programmers especially to circumvent filters and speak about topics that would usually get banned. 
"Hacker" would end up as "H4x0r", for example.[4] Leet symbols, especially the number 1337, areInternet memesthat have spilled over into some culture. Signs that show the numbers "1337" are popular motifs for pictures and are shared widely across the Internet.[5] Algospeakshares conceptual similarities with leet, albeit with its primary purpose to circumvent algorithmiccensorship online, "algospeak" deriving fromalgoofalgorithmandspeak. These areeuphemismsthat aim to evadeautomated online moderation techniques, especiallythose that are considered unfairor hinderingfree speech.[6][7][8][9][10]One prominent example is using the term "unalive" as opposed to the verb "kill" or even "suicide". Other examples include using "restarted" or "regarded" instead of "retarded" and "seggs" in place of "sex". These phrases are easily understandable to humans, providing either the same general meaning, pronunciation, or shape of the original word. It is furthermore often employed as a more contemporary alternative to leet. The approach has gained more popularity in 2023 and 2024 due to therise in conflict between Israel and Gazawith the topic's contentious nature on the Internet, especially onMetaandTikTokplatforms.[11][12] One of the hallmarks of leet is its unique approach toorthography, using substitutions of other letters, or indeed of characters other than letters, to represent letters in a word.[13][14]For more casual use of leet, the primary strategy is to use quasi-homoglyphs, symbols that closely resemble (to varying degrees) the letters for which they stand. The choice of symbol is not fixed: anything the reader can make sense of is valid in leet-speak. Sometimes,a gamerwould work around a nickname being already taken (and maybe abandoned as well) by replacing a letter with a similar-looking digit. 
Another use for leet orthographic substitutions is the creation of paraphrased passwords.[1] Limitations imposed by websites on password length (usually no more than 36 characters) and the characters permitted (e.g. alphanumeric and symbols)[15] require less extensive forms when used in this application. Some examples of leet include: However, leetspeak should not be confused with SMS speak, characterized by using "4" as "for", "2" as "to", "b&" as "ban'd" (e.g. "banned"), "gr8 b8, m8, appreci8, no h8" as "great bait, mate, appreciate, no hate", and so on. Text rendered in leet is often characterized by distinctive, recurring forms. Leet can be pronounced as a single syllable, /ˈliːt/, rhyming with eat, by way of apheresis of the initial vowel of "elite". It may also be pronounced as two syllables, /ɛˈliːt/. Like hacker slang, leet enjoys a looser grammar than standard English. The loose grammar, just like loose spelling, encodes some level of emphasis, ironic or otherwise. A reader must rely more on intuitive parsing of leet to determine the meaning of a sentence rather than the actual sentence structure. In particular, speakers of leet are fond of verbing nouns, turning nouns into verbs (and back again) as forms of emphasis, e.g. "Austin rocks" is weaker than "Austin roxxorz" (note spelling), which is weaker than "Au5t1N is t3h r0xx0rz" (note grammar), which is weaker than something like "0MFG D00D /\Ü571N 15 T3H l_l83Я 1337 Я0XX0ЯZ" (OMG, dude, Austin is the über-elite rocks-er!). In essence, all of these mean "Austin rocks," not necessarily the other options. Added words and misspellings add to the speaker's enjoyment. Leet, like hacker slang, employs analogy in the construction of new words. For example, if haxored is the past tense of the verb "to hack" (hack → haxor → haxored), then winzored would be easily understood to be the past tense conjugation of "to win," even if the reader had not seen that particular word before.
Leet has its own colloquialisms, many of which originated as jokes based on common typing errors, habits of new computer users, or knowledge of cyberculture and history.[20] Leet is not solely based upon one language or character set. Greek, Russian, and other languages have leet forms, and leet in one language may use characters from another where they are available. As such, while it may be referred to as a "cipher", a "dialect", or a "language", leet does not fit squarely into any of these categories. The term leet itself is often written 31337, or 1337, and many other variations. After the meaning of these became widely familiar, 10100111001 came to be used in its place, because it is the binary form of 1337 decimal, making it more of a puzzle to interpret. An increasingly common characteristic of leet is the changing of grammatical usage so as to be deliberately incorrect. The widespread popularity of deliberate misspelling is similar to the cult following of the "All your base are belong to us" phrase. Indeed, the online and computer communities have been international from their inception, so spellings and phrases typical of non-native speakers are quite common. Many words originally derived from leet have now become part of modern Internet slang, such as "pwned".[1] The original driving forces of new vocabulary in leet were common misspellings and typing errors such as "teh" (generally considered lolspeak), and intentional misspellings,[21] especially the "z" at the end of words ("skillz").[1] Another prominent example of a surviving leet expression is w00t, an exclamation of joy.[2] w00t is sometimes used as a backronym for "We owned the other team." New words (or corruptions thereof) may arise from a need to make one's username unique. As any given Internet service reaches more people, the number of names available to a given user is drastically reduced.
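The binary encoding of "1337" mentioned above is easy to verify in a couple of lines, e.g. in Python:

```python
# Check that 10100111001 is indeed the binary representation of decimal 1337.
assert f"{1337:b}" == "10100111001"       # decimal -> binary digits
assert int("10100111001", 2) == 1337      # binary digits -> decimal
print(f"{1337:b}")  # prints 10100111001
```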
While many users may wish to have the username "CatLover," for example, in many cases it is only possible for one user to have the moniker. As such, degradations of the name may evolve, such as "C@7L0vr." As the leet cipher is highly dynamic, there is a wider possibility for multiple users to share the "same" name, through combinations of spelling and transliterations. Additionally, leet (the word itself) can be found in the screen names and gamertags of many Internet and video games. Use of the term in such a manner announces a high level of skill, though such an announcement may be seen as baseless hubris.[22] Warez (nominally /wɛərz/) is a plural shortening of "software", typically referring to cracked and redistributed software.[22]Phreaking refers to the hacking of telephone systems and other non-Internet equipment.[1]Teh originated as a typographical error of "the", and is sometimes spelled t3h.[1][23]j00 takes the place of "you",[2]originating from the affricate sound that occurs in place of the palatal approximant, /j/, when you follows a word ending in an alveolar plosive consonant, such as /t/ or /d/. Also from German is über, which means "over" or "above"; it usually appears as a prefix attached to adjectives, and is frequently written without the umlaut over the u.[24] Haxor, and derivations thereof, is leet for "hacker",[25]and it is one of the most commonplace examples of the use of the -xor suffix. Suxxor (pronounced suck-zor) is a derogatory term which originated in warez culture and is used in multi-user environments such as multiplayer video games and instant messaging; it, like haxor, is one of the early leet words to use the -xor suffix. Suxxor is a modified version of "sucks" (the phrase "to suck"), and the meaning is the same as the English slang. Suxxor can be mistaken for succer/succker if used in the wrong context. Its negative definition essentially makes it the opposite of roxxor, and both can be used as a verb or a noun.
The letters ck are often replaced with the Greek Χ (chi) in other words as well. Within leet, the term n00b (and derivations thereof) is used extensively. The term is derived from newbie (as in new and inexperienced, or uninformed),[21][24][26]and is used to differentiate "n00bs" from the "elite" (or even "normal") members of a group. Owned and pwned (generally pronounced "poned"[27][pʰo͡ʊnd]) both refer to the domination of a player in a video game or argument (rather than just a win), or the successful hacking of a website or computer.[28][29][30][1][24][31]It is a slang term derived from the verb own, meaning to appropriate or to conquer to gain ownership. As is a common characteristic of leet, the terms have also been adapted into noun and adjective forms,[24]ownage and pwnage, which can refer to the situation of pwning or to the superiority of its subject (e.g., "He is a very good player. He is pwnage."). The term was created accidentally by the misspelling of "own" due to the keyboard proximity of the "O" and "P" keys. It implies domination or humiliation of a rival,[32]and is used primarily in the Internet-based video game culture to taunt an opponent who has just been soundly defeated (e.g., "You just got pwned!").[33]In 2015, Scrabble added pwn to its Official Scrabble Words list.[34] Pr0n is slang for pornography.[1]It is a deliberately inaccurate spelling/pronunciation of porn,[26]in which a zero replaces the letter O. It is sometimes used in legitimate communications (such as email discussion groups, Usenet, chat rooms, and Internet web pages) to circumvent language and content filters, which may reject messages as offensive or spam. The word also helps prevent search engines from associating commercial sites with pornography, which might result in unwelcome traffic. Pr0n is also sometimes spelled backwards (n0rp) to further obscure the meaning to potentially uninformed readers.
It can also refer to ASCII art depicting pornographic images, or to photos of the internals of consumer and industrial hardware. Prawn, a spoof of the misspelling, has started to come into use as well; in Grand Theft Auto: Vice City, a pornographer films his movies on "Prawn Island". Conversely, in the RPG Kingdom of Loathing, prawn, referring to a kind of crustacean, is spelled pr0n, leading to the creation of food items such as "pr0n chow mein".
https://en.wikipedia.org/wiki/Leetspeak
LOL, or lol, is an initialism for laughing out loud,[1][2][3][4]and a popular element of Internet slang, which can be used to indicate amusement, irony, or double meanings.[5]It was first used almost exclusively on Usenet, but has since become widespread in other forms of computer-mediated communication and even face-to-face communication. It is one of many initialisms for expressing bodily reactions, in particular laughter, as text, including initialisms for more emphatic expressions of laughter such as LMAO[6]("laughing my ass off") and ROFL[7][8][9]or ROTFL[10][11]("rolling on the floor laughing"). In 2003, the list of acronyms was said to "grow by the month",[8]and they were collected along with emoticons and smileys into folk dictionaries that are circulated informally amongst users of Usenet, IRC, and other forms of (textual) computer-mediated communication.[12]These initialisms are controversial, and several authors[13][14][15][16]recommend against their use, either in general or in specific contexts such as business communications. The Oxford English Dictionary first listed LOL in March 2011.[17] In the early to mid-1980s,[18]Wayne Pearson was reportedly the first person to have used LOL while responding to a friend's joke in a pre-Internet digital chat room called Viewline.
Instead of writing "hahaha," as he had done before when he found something humorous, Pearson stated that he instead typed "LOL" to symbolize extreme laughter.[19][20]Although the account is commonly accepted as true, no written record of the conversation has been found, and the exact date of origin is unknown.[5]: 82–83 The earliest recorded mention of LOL in the contemporary meaning of "laughing out loud" was made in a list of common online acronyms in the May 8, 1989 issue of the electronic newsletter FidoNews, according to the Oxford English Dictionary[18]and linguist Ben Zimmer.[21][5]: 83 A 2003 study of college students by Naomi Baron found that the use of these initialisms in computer-mediated communication (CMC), specifically in instant messaging, was actually lower than she had expected. The students "used few abbreviations, acronyms, and emoticons". Out of 2,185 transmissions, there were 90 initialisms in total;[22]76 were occurrences of LOL.[23] On March 24, 2011, LOL, along with other acronyms, was formally recognized in an update of the Oxford English Dictionary.[17][24]In their research, it was determined that the earliest recorded use of LOL as an initialism was for "little old lady" in the 1960s.[25] Gabriella Coleman references "lulz" extensively in her anthropological studies of Anonymous.[26][27] LOL, ROFL, and other initialisms have crossed from computer-mediated communication to face-to-face communication. David Crystal – likening the introduction of LOL, ROFL, and others into spoken language in magnitude to the revolution of Johannes Gutenberg's invention of movable type in the 15th century – states that this is "a brand new variety of language evolving", invented by young people within five years, that "extend[s] the range of the language, the expressiveness [and] the richness of the language".[28][22]However, Geoffrey K.
Pullum argues that even if interjections such as LOL and ROFL were to become very common in spoken English, their "total effect on language" would be "utterly trivial".[29] While LOL originally meant "laughing out loud", modern usage is different: it is commonly used for irony, as an indicator of second meanings, and as a way to soften statements.[5] Silvio Laccetti (professor of humanities at Stevens Institute of Technology) and Scott Molski, in their essay entitled The Lost Art of Writing, are critical of the terms, predicting reduced chances of employment for students who use such slang, stating that, "Unfortunately for these students, their bosses will not 'lol' when they read a report that lacks proper punctuation and grammar, has numerous misspellings, various made-up words, and silly acronyms."[13][14]Fondiller and Nerone in their style manual assert that smileys and abbreviations are "no more than e-mail slang and have no place in business communication".[15] Linguist John McWhorter stated, "Lol is being used in a particular way. It's a marker of empathy. It's a marker of accommodation. We linguists call things like that pragmatic particles..." Pragmatic particles are the words and phrases used to smooth over awkward spots in casual conversation, such as oh in "Oh, I don't know" and uh when someone is thinking of something to say. McWhorter stated that lol is used less as a reaction to something that is hilarious and more as a way to lighten the conversation.[30] Frank Yunker and Stephen Barry, in a study of online courses and how they can be improved through podcasting, have found that these slang terms, and emoticons as well, are "often misunderstood" by students and are "difficult to decipher" unless their meanings are explained in advance.
They single out the example of "ROFL" as not obviously being the abbreviation of "rolling on THE floor laughing" (emphasis added).[16]Matt Haig describes the various initialisms of Internet slang as convenient, but warns that "as ever more obscure acronyms emerge they can also be rather confusing".[1]Hossein Bidgoli advises that such initialisms should be used "only when you are sure that the other person knows the meaning", as they "might make comprehension of the message more difficult for the receiver", and differences in meaning may lead to misunderstandings in international contexts.[31] Tim Shortis observes that ROFL is a means of "annotating text with stage directions".[9]Peter Hershock, in discussing these terms in the context of performative utterances, points out the difference between telling someone that one is laughing out loud and actually laughing out loud: "The latter response is a straightforward action. The former is a self-reflexive representation of an action: I not only do something but also show you that I am doing it. Or indeed, I may not actually laugh out loud but may use the locution 'LOL' to communicate my appreciation of your attempt at humor."[8] David Crystal notes that use of LOL is not necessarily genuine, just as the use of smiley faces or grins is not necessarily genuine, posing the rhetorical question "How many people are actually 'laughing out loud' when they send LOL?".[32]Louis Franzini concurs, stating that there is as yet no research that has determined the percentage of people who are actually laughing out loud when they write LOL.[2] Victoria Clarke, in her analysis of telnet talkers, states that capitalization is important when people write LOL, and that "a user who types LOL may well be laughing louder than one who types lol", and opines that "these standard expressions of laughter are losing force through overuse".[33]Michael Egan describes LOL, ROFL, and other initialisms as helpful so long as they are not overused.
He recommends against their use in business correspondence because the recipient may not be aware of their meanings, and because in general neither they nor emoticons are in his view appropriate in such correspondence.[3]June Hines Moore shares that view.[34]So, too, does Sheryl Lindsell-Roberts, who gives the same advice of not using them in business correspondence, "or you won't be LOL."[35] Pre-dating the Internet and phone texting by a century, the way to express laughter in Morse code is "hi hi".[52]The sound of this in Morse ('di-di-di-dit di-dit, di-di-di-dit di-dit') is thought to represent chuckling.[53][54]
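The Morse rendering quoted above matches the standard code: H is four dots ('di-di-di-dit') and I is two ('di-dit'). A minimal sketch, assuming a `to_morse` helper and a map containing only the two letters needed here:

```python
# "hi hi" in International Morse Code: H = "....", I = "..",
# matching the 'di-di-di-dit di-dit' rhythm described above.
MORSE = {"h": "....", "i": ".."}

def to_morse(word: str) -> str:
    """Encode a word letter by letter, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in word.lower())

print(to_morse("hi"))  # -> .... ..
```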
https://en.wikipedia.org/wiki/LOL
Sexting is sending, receiving, or forwarding sexually explicit messages, photographs, or videos, primarily between mobile phones. It may also include the use of a computer or any digital device.[1][2]The term was first popularized early in the 21st century and is a portmanteau of sex and texting, where the latter is meant in the wide sense of sending a text possibly with images.[3]Sexting is not an isolated phenomenon but one of many different types of sexual interaction in digital contexts that is related to sexual arousal.[4] The first published use of the term sexting was in a 2005 article in the Australian Sunday Telegraph Magazine.[5]In August 2012, the word sexting was listed for the first time in Merriam-Webster's Collegiate Dictionary.[6] The Pew Research Center commissioned a study on sexting, which divides the practice into three types.[7] Sexting has become more common with the rise in camera phones and smartphones with Internet access, which can be used to send explicit photographs as well as messages.[7]While sexting is done by people of all ages,[8]most media coverage fixates on negative aspects of adolescent usage. Young adults use the medium of the text message much more than any other new media to transmit messages of a sexual nature,[9]and teenagers who have unlimited text messaging plans are more likely to receive sexually explicit texts.[7][10] As a result of sexting being a relatively recent practice, ethics are still being established by both those who engage in it and those who create legislation based on this concept. Whether sexting is seen as a positive or negative experience typically rests on whether or not consent was given to share the images.
Nevertheless, Australian laws currently view under-18s as being unable to give consent to sexting, even if they meet the legal age for sexual consent.[11] Contrary to common misconception, when it comes to preventing abuse among adolescents, consent is more important than trying to stop sexting altogether.[12] Sexting has been promoted further by several direct messaging applications that are available on smartphones. The difference between using these applications and traditional texting is that content is transmitted over the Internet or a data plan, allowing anyone with Internet access to participate. Snapchat appeals to teens because it allows users to send photos for a maximum of ten seconds before they disappear. Many people sending photos over Snapchat believe the photos will disappear without consequences, so they feel more secure about sending them. There have been several cases where teens have sent photos over these applications, expecting them to disappear or be seen by the recipient only, yet the photos are saved and distributed, carrying social and legal implications. Even though users believe their photos on Snapchat, for example, will go away in seconds, it is easy to save them through other photo-capturing technology, third-party applications, or simple screenshots. These applications claim no responsibility for explicit messages or photos that are saved. Snapchat's privacy policy on sexting has evolved to cover content sent over new smartphone applications because of their appealing features, such as anonymity or temporary elements. These applications carry the same risks and consequences that have always existed. A 2009 study found that 4 percent of teenagers aged 14 to 17 claim to have sent sexually explicit photos of themselves. Fifteen percent of these teens also claimed to have received sexually explicit photos. This suggests a consent issue of people receiving photos without asking for them.
This is enhanced with Snapchat, as the person receiving snapchats will not be aware of the contents until they open it,[13]and messages are automatically deleted after some time. Although sexting through Snapchat is popular, "joke sexting" is more prevalent among users: sending sexual images as a joke makes up approximately a quarter of the participants.[14] Sexting is a prevalent and normalized practice among youth in many western, liberal democracies.[15]Many couples engage in sexting. In a 2011 study, 54% of the sample had sent explicit pictures or videos to their partners at least once, and one third of the sample had engaged in such activities occasionally.[16] In areas where gender roles traditionally expect men to initiate sexual encounters, sexting is used by women to offer nude images to male partners, allowing women greater latitude to instigate sex.[17][18]Mass media do not encourage teen or underage sexting, because of the child pornography laws it could violate.[17]However, a recent study found young women are significantly more likely than young men to be pressured by their partner into sending a nude photo.[19] In 2013, it was found that sexting is often used to enhance the relationship and sexual satisfaction in a romantic partnership. Sexting thus can be considered a "behaviour that ties into sexuality and the subsequent level of relationship satisfaction experienced by both partners". Based on the interviews they conducted, Albury and Crawford discovered that sexting is commonly used in positive contexts. According to Albury and Crawford, sexting was not only an activity occurring in the context of flirtation or sexual relationships, but also between friends, as a joke or during a moment of bonding.[20] Reportedly, hedonism played a role in motivating sexting, and the length of relationship was negatively correlated with sexting behaviors.
The study had a small sample size, so more research needs to be done surrounding sexting and motivation, but it is clear that sexting is a phenomenon that is not constrained simply to unattached individuals looking for fun; it is used by those in intimate relationships to increase feelings of intimacy and closeness to one's partner.[20]For teens, sexting can also act as a prelude to (or substitute for) sexual activity, as an experimental phase for those who are yet to be sexually active, and for those who are hoping to start a relationship with someone.[7]In a 2013 study conducted by Drouin et al., it was found that sexting is also associated with attachment styles, as those with attachment avoidance are more likely to engage in sexting behaviours (just as these individuals are also more likely to engage in casual sex). Thus, instead of increasing intimacy in these types of relationships, sexting may act as a buffer for physical intimacy.[16] While some studies have evaluated sexting by married couples or young men who have sex with men,[21]the majority of attention is directed at heterosexual adolescents. Some studies of adolescents find that sexting is correlated with risky sex behaviors,[22][23][24][25][26]while other studies have found no link.[15][27][28] In a 2008 survey of 1,280 teenagers and young adults of both sexes sponsored by The National Campaign to Prevent Teen and Unplanned Pregnancy, 20% of teens (13–20) and 33% of young adults (20–26) had sent nude or semi-nude photographs of themselves electronically. Additionally, 39% of teens and 59% of young adults had sent sexually explicit text messages.[29] Sexting became popular among teens around 2009, especially among high school students in the United States, where 20 percent of high school students said they had engaged in sending or receiving sexts.[30] A widely cited 2011 study indicated the previously reported prevalence was exaggerated.
Researchers at the University of New Hampshire (UNH) surveyed 1,560 children between the ages of 10 and 17 and their caregivers, reporting that only 2.5 percent of respondents had sent, received or created sexual pictures distributed via cell phone in the previous year.[31]The researchers found that the figure rose to 9.6% when the definition was broadened from images prosecutable as child pornography to any suggestive image, not necessarily nude ones.[32]A 2012 study conducted by the University of Utah questioned the findings reported by the University of New Hampshire researchers. In the University of Utah's study, researchers Strassberg, McKinnon, et al. surveyed 606 teenagers ages 14 to 18 and found that nearly 20 percent of the students said they had sent a sexually explicit image of themselves via cell phone, and nearly twice as many said that they had received a sexually explicit picture. Strassberg, McKinnon, et al. said the UNH study was technically accurate, but that the inclusion of younger children in the sample misrepresented the prevalence of the practice among mid- and older teenagers.[33][34][35][36] According to professor Diane Kholos Wysocki, although both men and women participate in sexting, "women are more likely to sext than men".[37]Men are more likely to initiate some form of intimate communication, like sending nude photographs or suggestive text messages.[38]According to Amy Adele Hasinoff in the journal New Media & Society, when it comes to sexting, there is a big difference between sexual exploitation and a consensual decision to express one's sexuality and share an image of one's own body with someone who wants to see it. Women are sexualized whenever they post or share any form of intimate media, while men are not.
Hasinoff points out that "Many digital media scholars stress that the Internet can enable young people to explore their identities and develop social and communication skills" (Boyd, 2008; Tynes, 2007), and suggests that consensual sexting might serve a similar function for some people.[39] If a person sends an explicit image of themselves to a partner, it can be against the law to re-transmit a copy of that image to another person without the consent of the originator.[40][41]Some countries have revenge porn laws that prevent the publication of sexual images without the consent of all parties in the image. While there are many possible legal avenues for prosecution of people who knowingly breach the confidence of those sending sexual messages, in practice, nude images can be widely propagated without the consent of the originator.[42] Some young people blackmail their sexual partners and former partners by threatening to release private images of them.[43][44][45]In a study conducted by Drouin et al. analyzing sexting behaviours among young adults, it was found that men would show the sexually explicit photos of their girlfriends to their friends.[9][46]This is a new risk associated with new media: prior to cell phones and email, it would have been difficult to quickly distribute photos to acquaintances, whereas with sexting, one can forward a photo in a matter of seconds. Studies have shown that sex crimes using digital media against minors reflect the same kind of victimization that happens offline.[17]Family members, acquaintances and intimate partners make up the vast majority of perpetrators of digital media sex crimes.[17]Research by the Internet Watch Foundation in 2012 estimated that 88% of self-made explicit images are "stolen" from their original upload location (typically social networks) and made available on other websites, in particular porn sites collecting sexual images of children and young people.
The report highlighted the risk of severe depression for "sexters" who lose control of their images and videos.[47][48]Sexting is seen as irresponsible and promiscuous for adolescents, but "fun and flirty" for adults.[17]These risks tend to be exaggerated by news media, especially in regard to adolescent girls.[49][50] The University of Utah study (with a population sample of 606 teens aged 14 to 18) stated that about one third of respondents did not consider legal or other consequences when receiving or sending sexts.[51]Teenagers may not be thinking about the risks and repercussions when they participate in sexting; however, a study by Kath Albury titled Selfies, Sexts, and Sneaky Hats: Young People's Understandings of Gendered Practices of Self-Presentation[52]shows that teenagers engaging in sexting were concerned that their parents may see or find out about their involvement with sexting. Some teenagers shared that their "main risks of parental discovery were embarrassment (for both parents and young people) and 'overreaction' from adults who feared the photo had been shared."[53]While teenagers felt less compelled to worry about the legal risks of sexting, they worried that their parents would find out about their involvement with it. Albury and Crawford (2012) argue that adolescents are well aware of the differences between consensual sexting and distribution of private images with negative intent.
Further, they argue young people are developing norms and ethics of sexting based on consent. Creation and distribution of explicit photos of teenagers violates child pornography laws in many jurisdictions (depending on the age of the people depicted), but this legal restriction does not align with the social norms of the population engaging in the practice, which distinguish between consensual activity and harassment or revenge.[17]Senders in some jurisdictions may also be charged with distribution of indecent material to a minor, and could be required to register as a sex offender for life. Child pornography cases involving teen-to-teen sexting have been prosecuted in Oregon,[54][55]Virginia,[56]Nova Scotia[57]and Maryland.[58] While mainstream media outlets, parents, and educators are rightfully worried about the negative legal, social, and emotional ramifications of teen sexting, much less is said about the issue of sexual consent. According to a 2012 study conducted by professors at the University of New South Wales,[59]due to child pornography laws that prohibit any minor from consenting to sexual activity, issues of consent among adolescents are seldom discussed. Much like the discourse surrounding "abstinence-only" education, the prevailing attitude towards sexting is how to prevent it from occurring rather than accepting its inevitability and channeling it in healthier ways. According to the study, instead of criminalizing teens who participate in sexting, the law should account for whether the images are shared consensually. This would mean adopting an "ethics" approach, one that teaches and guides teens on how to respect bodily autonomy and privacy.
A 2019 Journal of Adolescent Health article authored by scholars Justin Patchin and Sameer Hinduja entitled "It's Time to Teach Safe Sexting" offers specific, actionable strategies towards this end within a harm reduction framework.[60] According to a study done by the health journal Pediatrics, more than one in five middle school minors with behavioral or emotional problems has recently engaged in sexting. Those individuals who reported sexting in the past six months were four to seven times more likely to engage in other sexual activities such as intimate kissing, touching genitals, and having vaginal or oral sex, compared to minors who stated they did not partake in sexting. The study included 420 participants who were between the ages of 12 and 14 years old. The children were drawn from five urban public middle schools in Rhode Island between 2009 and 2012. Seventeen percent of the children tested claimed they had sent a sexually explicit text message in the past six months. Another five percent admitted to sending sexually explicit text messages and nude or semi-nude photos.[61][62] Sexting is generally legal if all parties are over the age of majority and images are sent with their consent and knowledge; however, any type of sexual message that both parties have not consented to can constitute sexual harassment. Sexting that involves minors under the age of consent sending an explicit photograph of themselves to a romantic partner of the same age can be illegal in countries where anti-child pornography laws require all participants in pornographic media to be over the age of majority. Some teenagers who have texted photographs of themselves, or of their friends or partners, have been charged with distribution of child pornography, while those who have received the images have been charged with possession of child pornography; in some cases, the possession charge has been applied to school administrators who have investigated sexting incidents as well.
The images involved in sexting are usually different in both nature and motivation from the type of content that anti-child pornography laws were created to address.[63][64] A 2009 UK survey of 2,094 teens aged 11 to 18 found that 38% had received an "offensive or distressing" sexual image by text or email.[65] In the United States, anyone who is involved in the electronic distribution of sexual photos of minors can face state and federal charges of child pornography. The laws disregard the consent of the parties involved: "...regardless of one's age or consent to sexting, it is unlawful to produce, possess, or distribute explicit sexual images of anyone under 18."[17]The University of New Hampshire's Crimes Against Children Research Center estimates that 7 percent of people arrested on suspicion of child pornography production in 2009 were teenagers who shared images with peers consensually.[66] Kath Albury discusses in an article titled "Sexting, Consent, and Young People's Ethics: Beyond Megan's Story" that if teens are convicted of a sexting charge, they have to register as sex offenders, which dilutes the impact of the "sex offender" label: a girl who agreed to send her girlfriend a naked picture is not as dangerous to the community as a child molester, but the charge of sex offender would be applied equally to both of these cases.[67] In a 2013 interview, assistant professor of communications at the University of Colorado Denver, Amy Adele Hasinoff, who studies the repercussions of sexting, has stated that the "very harsh" child pornography laws are "designed to address adults exploiting children" and should not replace better sex education and consent training for teens. She went on to say, "Sexting is a sex act, and if it's consensual, that's fine..."
"Anyone who distributes these pictures without consent is doing something malicious and abusive, but child pornography laws are too harsh to address it."[68] According to Amy Hasinoff, if sexting was viewed as media production and a consensual activity, this would change the legal assumption that sexting is always non-consensual and reduce the culpability of victimized youth. This turns sexting into a situation that would lead to different legal consequences when distribution of the material was not consented to by the creator.[17]Alvin J. Primack, who draws from Amy Hasinoff's work, argued a media production model may be useful for distinguishing between child pornography and sexting from a First Amendment perspective.[69]According to Alvin J. Primack, the motivation for creating and distributing sexts (e.g., pleasure, relationship building) differs from the motivation for creating and distributing child pornography (e.g., abuse, exploitation), and the market of circulation is generally different between the two as well. For these reasons, there may be arguments – grounded in reasoning provided by First Amendment doctrine – for finding some youth sexts exchanged between persons who are of the age of consent to be legally-protected speech. Legal professionals and academics have expressed that the use of "child porn laws" with regard to sexting is "extreme" or "too harsh". Florida cyber crimes defense attorney David S. Seltzer wrote of this that "I do not believe that our child pornography laws were designed for these situations ... A conviction for possession of child pornography in Florida draws up to five years in prison for each picture or video, plus a lifelong requirement toregister as a sex offender."[70] Academics have argued thatsextingis a broad term for images being sent over Internet and cell phones, between minors, adults, or minors and adults, and in an abusive manner or in an innocent manner. 
In order to develop policy better suited to adolescent sexting cases, it is necessary to have better terms and categories of sexting. A University of New Hampshire typology has suggested the term youth-produced sexual image to classify adolescent sexting, branching into two sub-categories: aggravated and experimental youth-produced sexual images. Aggravated cases include cases of sexual assault, coercion, cyber-bullying, forwarding images without consent, and abusive behavior. Experimental cases are those in which an adolescent willingly takes a picture and sends it to someone with no criminal intent, seeking attention.[71]This terminology could lead to more appropriate action towards adolescents who engage in sexting. In Connecticut, Rep. Rosa Rebimbas introduced a bill that would lessen the penalty for "sexting" between two consenting minors in 2009. The bill would make it a Class A misdemeanor for children under 18 to send or receive text messages with other minors that include nude or sexual images. It is currently a felony for children to send such messages, and violators could end up on the state's sex offender registry.[92] Vermont lawmakers introduced a bill in April 2009 to legalize the consensual exchange of graphic images between two people 13 to 18 years old. Passing along such images to others would remain a crime.[93] In Ohio, a county prosecutor and two lawmakers proposed a law that would reduce sexting from a felony to a first-degree misdemeanor, and eliminate the possibility of a teenage offender being labeled a sex offender for years.
The proposal was supported by the parents of Jesse Logan, a Cincinnati 18-year-old who committed suicide after a naked picture of herself, which she had sexted, was forwarded to people in her high school.[94] Utah lawmakers reduced the penalty for sexting by someone younger than 18 from a felony to a misdemeanor.[95] In New York, Assemblyman Ken Zebrowski (D-Rockland) introduced a bill that would create an affirmative defense where a minor charged under child pornography laws possesses or disseminates a picture of themselves, or possesses or disseminates the image of another minor (within four years of their age) with that minor's consent. The affirmative defense would not be available if the conduct was done without consent. The bill also creates an educational outreach program for teens that promotes awareness about the dangers of sexting.[96] In the Australian state of Victoria, the law was reformed in 2014 to create a defence for young people who engage in consensual sexting and to introduce the new offences of distributing an intimate image and threatening to distribute an intimate image.[97]
https://en.wikipedia.org/wiki/Sexting
Nineteen Eighty-Four (also published as 1984) is a dystopian novel and cautionary tale by English writer George Orwell. It was published on 8 June 1949 by Secker & Warburg as Orwell's ninth and final completed book. Thematically, it centres on the consequences of totalitarianism, mass surveillance, and repressive regimentation of people and behaviours within society.[3][4] Orwell, a staunch believer in democratic socialism and member of the anti-Stalinist Left, modelled the Britain under authoritarian socialism in the novel on the Soviet Union in the era of Stalinism and on the very similar practices of both censorship and propaganda in Nazi Germany.[5] More broadly, the novel examines the role of truth and facts within societies and the ways in which they can be manipulated. The story takes place in an imagined future. The current year is uncertain, but believed to be 1984. Much of the world is in perpetual war. Great Britain, now known as Airstrip One, has become a province of the totalitarian superstate Oceania, which is led by Big Brother, a dictatorial leader supported by an intense cult of personality manufactured by the Party's Thought Police. The Party engages in omnipresent government surveillance and, through the Ministry of Truth, in historical negationism and constant propaganda to persecute individuality and independent thinking.[6] The protagonist, Winston Smith, is a diligent mid-level worker at the Ministry of Truth who secretly hates the Party and dreams of rebellion. Smith keeps a forbidden diary. He begins an illegal relationship with a colleague, Julia, and they learn about a shadowy resistance group called the Brotherhood. However, their contact within the Brotherhood turns out to be a Party agent, and Smith and Julia are arrested. He is subjected to months of psychological manipulation and torture by the Ministry of Love. He ultimately betrays Julia and is released; he finally realises he loves Big Brother.
Nineteen Eighty-Four has become a classic literary example of political and dystopian fiction. It also popularised the term "Orwellian" as an adjective, with many terms used in the novel entering common usage, including "Big Brother", "doublethink", "Thought Police", "thoughtcrime", "Newspeak", and the expression that "2 + 2 = 5". Parallels have been drawn between the novel's subject matter and real-life instances of totalitarianism, mass surveillance, and violations of freedom of expression, among other themes.[7][8][9] Orwell described his book as a "satire"[10] and a display of the "perversions to which a centralised economy is liable", while also stating he believed "that something resembling it could arrive".[10] Time included the novel on its list of the 100 best English-language novels published from 1923 to 2005,[11] and it was placed on the Modern Library's 100 Best Novels list, reaching number 13 on the editors' list and number 6 on the readers' list.[12] In 2003, it was listed at number eight on The Big Read survey by the BBC.[13] It has been adapted across media since its publication, most notably as a film, released in 1984, starring John Hurt, Suzanna Hamilton and Richard Burton. The Orwell Archive at University College London contains undated notes about ideas that evolved into Nineteen Eighty-Four.
The notebooks have been deemed "unlikely to have been completed later than January 1944", and "there is a strong suspicion that some of the material in them dates back to the early part of the war".[14] In one 1948 letter, Orwell claims to have "first thought of [the book] in 1943", while in another he says he thought of it in 1944 and cites the 1943 Tehran Conference as inspiration: "What it is really meant to do is to discuss the implications of dividing the world up into 'Zones of Influence' (I thought of it in 1944 as a result of the Tehran Conference), and in addition to indicate by parodying them the intellectual implications of totalitarianism".[14] Orwell had toured Austria in May 1945 and observed manoeuvring he thought would probably lead to separate Soviet and Allied Zones of Occupation.[15][16] In January 1944, literature professor Gleb Struve introduced Orwell to Yevgeny Zamyatin's 1924 dystopian novel We. In his response Orwell expressed an interest in the genre, and informed Struve that he had begun writing ideas for one of his own, "that may get written sooner or later".[17][18] In 1946, Orwell wrote about Aldous Huxley's 1931 dystopian novel Brave New World in his article "Freedom and Happiness" for the Tribune, and noted similarities to We.[17] By this time Orwell had scored a critical and commercial hit with his 1945 political satire Animal Farm, which raised his profile. For a follow-up he decided to produce a dystopian work of his own.[19][20] In a June 1944 meeting with Fredric Warburg, co-founder of his British publisher Secker & Warburg, shortly before the release of Animal Farm, Orwell announced that he had written the first 12 pages of his new novel.
As he could then only earn a living from journalism, he predicted the book would not see a release before 1947.[18] Progress was slow; by the end of September 1945 Orwell had written some 50 pages.[21] Orwell became disenchanted with the restrictions and pressures involved with journalism and grew to detest city life in London.[22] He suffered from bronchiectasis and a lesion in one lung; the harsh winter worsened his health.[23] In May 1946, Orwell arrived on the Scottish island of Jura.[20] He had wanted to retreat to a Hebridean island for several years; David Astor recommended he stay at Barnhill, a remote farmhouse on the island that his family owned,[24] with no electricity or hot water. Here Orwell intermittently drafted and finished Nineteen Eighty-Four.[20] His first stay lasted until October 1946, during which time he made little progress on the few already completed pages, and at one point did no work on it for three months.[25] After spending the winter in London, Orwell returned to Jura; in May 1947 he reported to Warburg that despite progress being slow and difficult, he was roughly a third of the way through.[26] He sent his "ghastly mess" of a first draft manuscript to London, where Miranda Christen volunteered to type a clean version.[27] Orwell's health worsened further in September, however, and he was confined to bed with inflammation of the lungs. He lost almost two stone (28 pounds or 12.7 kg) in weight and had recurring night sweats, but he decided not to see a doctor and continued writing.[28] On 7 November 1947, he completed the first draft in bed, and subsequently travelled to East Kilbride near Glasgow for medical treatment at Hairmyres Hospital, where a specialist confirmed a chronic and infectious case of tuberculosis.[29][27] Orwell was discharged in the summer of 1948, after which he returned to Jura and produced a full second draft of Nineteen Eighty-Four, which he finished in November.
He asked Warburg to have someone come to Barnhill and retype the manuscript, which was so untidy that the task was only considered possible if Orwell was present, as only he could understand it. The previous volunteer had left the country and no other could be found at short notice, so an impatient Orwell retyped it himself at a rate of roughly 4,000 words a day during bouts of fever and bloody coughing fits.[27] On 4 December 1948, Orwell sent the finished manuscript to Secker & Warburg and left Barnhill for good in January 1949. He recovered at a sanitarium in the Cotswolds.[27] Shortly before completion of the second draft, Orwell vacillated between two titles for the novel: The Last Man in Europe, an early title, and Nineteen Eighty-Four.[30] Warburg suggested the latter, which he took to be a more commercially viable choice.[31] There has been a theory – doubted by Dorian Lynskey, author of a 2019 book about Nineteen Eighty-Four – that 1984 was chosen simply as an inversion of 1948, the year in which the novel was being completed. Lynskey says the idea was "first suggested by Orwell's US publisher", and it was also mentioned by Christopher Hitchens in his introduction to the 2003 edition of Animal Farm and 1984, which also notes that the date was meant to give "an immediacy and urgency to the menace of totalitarian rule".[32] However, Lynskey does not believe the inversion theory: This idea ... seems far too cute for such a serious book. ... Scholars have raised other possibilities. [His wife] Eileen wrote a poem for her old school's centenary called 'End of the Century: 1984.' G. K. Chesterton's 1904 political satire The Napoleon of Notting Hill, which mocks the art of prophecy, opens in 1984. The year is also a significant date in The Iron Heel. But all of these connections are exposed as no more than coincidences by the early drafts of the novel ... First he wrote 1980, then 1982, and only later 1984.
The most fateful date in literature was a late amendment.[33] In the run-up to publication, Orwell called the novel "a beastly book" and expressed some disappointment with it, thinking it would have been improved had he not been so ill. This was typical of Orwell, who had talked down his other books shortly before their release.[33] Nevertheless, the book was enthusiastically received by Secker & Warburg, who acted quickly; before Orwell had left Jura he rejected their proposed blurb that portrayed it as "a thriller mixed up with a love story".[33] He also refused a proposal from the American Book of the Month Club to release an edition without the appendix and the chapter on Goldstein's book, a decision which Warburg claimed cut off about £40,000 in sales.[33][34] Nineteen Eighty-Four was published on 8 June 1949 in the UK;[33][35][36] Orwell predicted earnings of around £500. A first print of 25,575 copies was followed by a further 5,000 copies in March and August 1950.[37] The novel had its most immediate impact in the US, following its release there on 13 June 1949 by Harcourt, Brace & Co. An initial print of 20,000 copies was quickly followed by another 10,000 on 1 July, and again on 7 September.[38] By 1970, over 8 million copies had been sold in the US, and in 1984 it topped the country's all-time best-seller list.[39] In June 1952, Orwell's widow Sonia Brownell sold the sole remaining manuscript at a charity auction for £50.[40] The draft remains the only surviving literary manuscript from Orwell, and is held at the John Hay Library at Brown University in Providence, Rhode Island.[41][42] Numerous small variations exist between the texts of the original published UK and US editions: the US edition altered Orwell's agreed text, as was typical of publishing practices of the time, in spelling and punctuation as well as in some small edits and phrasings.
While Orwell rejected a proposed book club edition which would have seen substantial sections of the book removed, these minor changes passed somewhat under the radar. Other more significant revisions and variant texts also exist, however. In 1984, Peter Davison edited Nineteen Eighty-Four: The Facsimile of the Extant Manuscript, published by Secker & Warburg in the UK and Harcourt Brace Jovanovich in the US. This reproduced, page for page, Sonia Brownell's copy of the original manuscript in facsimile, together with a complete typeset version of that text, covering Orwell's holograph and typewritten pages and his handwritten amendments and corrections. The book had a preface by Daniel Segal. It has been reprinted in various international editions with translated introductions and notes, and reprinted in English in limited-edition formats. In 1997, Davison produced a definitive text of Nineteen Eighty-Four as part of Secker's 20-volume definitive edition of the Complete Works of George Orwell. This edition corrected errors and typographic slips and reversed editorial changes made in the original editions without Orwell's oversight, all based on detailed reference to Orwell's original manuscript and notes. This text has gone on to be reprinted in various subsequent paperback editions, including one with an introduction by Thomas Pynchon, without obvious note that it is a revised text, and has been translated as an unexpurgated version of the text. In 2021, Polygon published Nineteen Eighty-Four: The Jura Edition, with an introduction by Alex Massie. As the narrative opens on "April 4th, 1984" (though even the protagonist, aware that the government is constantly revising accounts of past events, "did not know with any certainty that this was 1984"), the world has been ravaged for decades by global war, civil conflict, and revolution. Airstrip One (formerly known as Great Britain) is a province of Oceania, one of the three totalitarian super-states that rule the world.
It is ruled by "The Party" under the ideology of "Ingsoc" (a Newspeak shortening of "English Socialism") and the mysterious leader Big Brother, who has an intense cult of personality. The Party brutally purges anyone who does not fully conform to its regime, using the Thought Police and constant surveillance through telescreens (two-way televisions), cameras, and hidden microphones. Those who fall out of favour with the Party become "unpersons", disappearing with all evidence of their existence destroyed. In London, Winston Smith is a member of the Outer Party, working at the Ministry of Truth, where he rewrites historical records to conform to the state's ever-changing version of history. Winston revises past editions of The Times, while the original documents are destroyed after being dropped into ducts known as memory holes, which lead to an immense furnace. He secretly opposes the Party's rule and dreams of rebellion, despite knowing that he is already a "thought-criminal" and likely to be caught one day. While in a prole neighbourhood he meets Mr Charrington, the owner of an antiques shop, and buys a diary in which he writes criticisms of the Party and Big Brother. To his dismay, when he visits a prole quarter he discovers they have no political consciousness. Working at the Ministry of Truth, he observes Julia, a young woman maintaining the novel-writing machines at the ministry, whom Winston suspects of being a spy, and he develops an intense hatred of her. He vaguely suspects that his superior, the Inner Party official O'Brien, is part of an enigmatic underground resistance movement known as the Brotherhood, formed by Big Brother's reviled political rival Emmanuel Goldstein. One day, Julia discreetly hands Winston a love note, and the two begin a secret affair. Julia explains that she also loathes the Party, but Winston observes that she is politically apathetic and uninterested in overthrowing the regime.
Initially meeting in the country, they later meet in a rented room above Mr Charrington's shop. During the affair, Winston remembers the disappearance of his family during the civil war of the 1950s and his tense relationship with his estranged wife, Katharine. Weeks later, O'Brien invites Winston to his flat, where he introduces himself as a member of the Brotherhood and sends Winston a copy of The Theory and Practice of Oligarchical Collectivism by Goldstein. Meanwhile, during the nation's Hate Week, Oceania's enemy suddenly changes from Eurasia to Eastasia, which goes mostly unnoticed. Winston is recalled to the Ministry to help make the necessary revisions to the records. Winston and Julia read parts of Goldstein's book, which explains how the Party maintains power, the true meanings of its slogans, and the concept of perpetual war. It argues that the Party can be overthrown if the proles rise up against it. However, Winston never gets the opportunity to read the chapter that explains why the Party took power and is motivated to maintain it. Winston and Julia are captured when Mr Charrington is revealed to be an undercover Thought Police agent, and they are separated and imprisoned at the Ministry of Love. O'Brien also reveals himself to be a member of the Thought Police and part of a false flag operation that catches political dissidents of the Party. Over several months, Winston is starved and relentlessly tortured to bring his beliefs in line with the Party's. O'Brien tells Winston that he will never know whether the Brotherhood actually exists and that Goldstein's book was written collaboratively by him and other Party members; furthermore, O'Brien reveals that the Party sees power not as a means but as an end: the ultimate purpose of the Party is to seek power entirely for its own sake. For the final stage of re-education, O'Brien takes Winston to Room 101, which contains each prisoner's worst fear.
When confronted with rats, Winston denounces Julia and pledges allegiance to the Party. Winston is released into public life and continues to frequent the Chestnut Tree café. He encounters Julia, and both reveal that they have betrayed the other and are no longer in love. Back in the café, a news alert celebrates Oceania's supposed massive victory over Eurasian armies in Africa. Winston finally accepts that he loves Big Brother. Nineteen Eighty-Four uses themes from life in the Soviet Union and wartime life in Great Britain as sources for many of its motifs. At an unspecified date after the first American publication of the book, producer Sidney Sheldon wrote to Orwell, interested in adapting the novel for the Broadway stage. Orwell wrote in a letter to Sheldon (to whom he would sell the US stage rights) that his basic goal with Nineteen Eighty-Four was imagining the consequences of Stalinist government ruling British society: [Nineteen Eighty-Four] was based chiefly on communism, because that is the dominant form of totalitarianism, but I was trying chiefly to imagine what communism would be like if it were firmly rooted in the English speaking countries, and was no longer a mere extension of the Russian Foreign Office.[45] According to Orwell biographer D. J. Taylor, the author's A Clergyman's Daughter (1935) has "essentially the same plot of Nineteen Eighty-Four ... It's about somebody who is spied upon, and eavesdropped upon, and oppressed by vast exterior forces they can do nothing about. It makes an attempt at rebellion and then has to compromise".[46] The statement "2 + 2 = 5", used to torment Winston Smith during his interrogation, was a communist party slogan from the second five-year plan, which encouraged fulfilment of the five-year plan in four years.
The slogan was seen in electric lights on Moscow house-fronts, billboards, and elsewhere.[47] The switch of Oceania's allegiance from Eastasia to Eurasia and the subsequent rewriting of history ("Oceania was at war with Eastasia: Oceania had always been at war with Eastasia. A large part of the political literature of five years was now completely obsolete"; ch. 9) is evocative of the Soviet Union's changing relations with Nazi Germany. The two nations were open and frequently vehement critics of each other until the signing of the 1939 Treaty of Non-Aggression. Thereafter, and continuing until the Nazi invasion of the Soviet Union in 1941, no criticism of Germany was allowed in the Soviet press, and all references to prior party lines stopped, including in the majority of non-Russian communist parties, which tended to follow the Russian line. Orwell had criticised the Communist Party of Great Britain for supporting the Treaty in his essays for Betrayal of the Left (1941): "The Hitler-Stalin pact of August 1939 reversed the Soviet Union's stated foreign policy. It was too much for many of the fellow-travellers like Gollancz [Orwell's sometime publisher] who had put their faith in a strategy of constructing Popular Front governments and the peace bloc between Russia, Britain and France."[48] The description of Emmanuel Goldstein, with a "small, goatee beard", evokes the image of Leon Trotsky. The film of Goldstein during the Two Minutes Hate is described as showing him being transformed into a bleating sheep. This image was used in a propaganda film during the Kino-eye period of Soviet film, which showed Trotsky transforming into a goat.[49][page needed] Like Goldstein, Trotsky was a formerly high-ranking party official who was ostracized and then wrote a book criticizing party rule, The Revolution Betrayed, published in 1936.
The omnipresent images of Big Brother, a man described as having a moustache, bear a resemblance to the cult of personality built up around Joseph Stalin.[50] The news in Oceania emphasised production figures, just as it did in the Soviet Union, where record-setting in factories (by "Heroes of Socialist Labour") was especially glorified. The best known of these was Alexei Stakhanov, who purportedly set a record for coal mining in 1935.[51] The tortures of the Ministry of Love evoke the procedures used by the NKVD in their interrogations,[52][page needed] including beatings, deprivation, and torture through the use of the victim's greatest fear.[53] The random bombing of Airstrip One is based on the area bombing of London by buzz bombs and the V-2 rocket in 1944–1945.[50] The Thought Police is based on the NKVD, which arrested people for random "anti-Soviet" remarks.[54][page needed] The confessions of the "thought criminals" Rutherford, Aaronson, and Jones are based on the show trials of the 1930s, which included fabricated confessions by prominent Bolsheviks Nikolai Bukharin, Grigory Zinoviev and Lev Kamenev to the effect that they were being paid by the Nazi government to undermine the Soviet regime under Leon Trotsky's direction.[55] The song "Under the Spreading Chestnut Tree" ("Under the spreading chestnut tree, I sold you, and you sold me") was based on an old English song called "Go no more a-rushing" ("Under the spreading chestnut tree, Where I knelt upon my knee, We were as happy as could be, 'Neath the spreading chestnut tree."). The song was published as early as 1891 and was a popular camp song in the 1920s, sung with corresponding movements (like touching one's chest when singing "chest", and touching one's head when singing "nut"). Glenn Miller recorded the song in 1939.[56] The "Hates" (Two Minutes Hate and Hate Week) were inspired by the constant rallies sponsored by party organs throughout the Stalinist period.
These were often short pep talks given to workers before their shifts began (Two Minutes Hate),[57] but could also last for days, as in the annual celebrations of the anniversary of the October Revolution (Hate Week). Orwell fictionalised "newspeak", "doublethink", and "Ministry of Truth" based on both the Soviet press and British wartime usage, such as "Miniform".[58] In particular, he adapted Soviet ideological discourse constructed to ensure that public statements could not be questioned.[59] Winston Smith's job, "revising history" (and the "unperson" motif), is based on censorship of images in the Soviet Union, which airbrushed images of "fallen" people from group photographs and removed references to them in books and newspapers.[61] In one well-known example, the second edition of the Great Soviet Encyclopedia had an article about Lavrentiy Beria. After his fall from power and execution, subscribers received a letter from the editor[62] instructing them to cut out and destroy the three-page article on Beria and paste in its place enclosed replacement pages expanding the adjacent articles on F. W. Bergholz (an 18th-century courtier), the Bering Sea, and Bishop Berkeley.[63][64][65] Big Brother's "Orders of the Day" were inspired by Stalin's regular wartime orders, called by the same name. A small collection of the more political of these has been published (together with his wartime speeches) in English as On the Great Patriotic War of the Soviet Union by Joseph Stalin.[66][67] Like Big Brother's Orders of the Day, Stalin's frequently lauded heroic individuals,[66] like Comrade Ogilvy, the fictitious hero Winston Smith invented to "rectify" (fabricate) a Big Brother Order of the Day.
The Ingsoc slogan "Our new, happy life", repeated from telescreens, evokes Stalin's 1935 statement, which became a CPSU slogan, "Life has become better, Comrades; life has become more cheerful."[54] In 1940, the Argentine writer Jorge Luis Borges published "Tlön, Uqbar, Orbis Tertius", which describes the invention by a "benevolent secret society" of a world that would seek to remake human language and reality along human-invented lines. The story concludes with an appendix describing the success of the project. Borges' story addresses themes of epistemology, language, and history similar to those of 1984.[68] During World War II, Orwell believed that British democracy as it existed before 1939 would not survive the war, the question being whether it would end via Fascist coup d'état from above or via Socialist revolution from below.[69] Later, he admitted that events proved him wrong: "What really matters is that I fell into the trap of assuming that 'the war and the revolution are inseparable'."[70] In his 1946 essay "Why I Write", Orwell explains that the serious works he wrote since the Spanish Civil War (1936–39) were "written, directly or indirectly, against totalitarianism and for democratic socialism".[4][71] Nineteen Eighty-Four is a cautionary tale about revolution betrayed by totalitarian defenders, a theme previously treated in Homage to Catalonia (1938) and Animal Farm (1945), while Coming Up for Air (1939) celebrates the personal and political freedoms lost in Nineteen Eighty-Four (1949). Biographer Michael Shelden notes Orwell's Edwardian childhood at Henley-on-Thames as the source of the golden country; his being bullied at St Cyprian's School as the source of his empathy with victims; and his life in the Indian Imperial Police in Burma and the techniques of violence and censorship in the BBC as capricious authority.[72] Other influences include Darkness at Noon (1940) and The Yogi and the Commissar (1945) by Arthur Koestler; The Iron Heel (1908) by Jack London; 1920: Dips into the Near Future[73] by John A.
Hobson; Brave New World (1932) by Aldous Huxley; We (1921) by Yevgeny Zamyatin, which he reviewed in 1946;[74] and The Managerial Revolution (1940) by James Burnham, which predicted perpetual war among three totalitarian superstates. Orwell told Jacintha Buddicom that he would write a novel stylistically like A Modern Utopia (1905) by H. G. Wells.[75] Extrapolating from World War II, the novel's pastiche parallels the politics and rhetoric at the war's end: the changed alliances at the beginning of the "Cold War" (1945–91); the Ministry of Truth derives from the BBC's overseas service, controlled by the Ministry of Information; Room 101 derives from a conference room at BBC Broadcasting House;[76] the Senate House of the University of London, which contained the Ministry of Information, is the architectural inspiration for the Minitrue; the post-war decrepitude derives from the socio-political life of the UK and the US, i.e., the impoverished Britain of 1948 losing its Empire despite newspaper-reported imperial triumph; and the war ally but peace-time foe Soviet Russia became Eurasia.[77] The term "English Socialism" has precedents in Orwell's wartime writings; in the essay "The Lion and the Unicorn: Socialism and the English Genius" (1941), he said that "the war and the revolution are inseparable ... the fact that we are at war has turned Socialism from a textbook word into a realisable policy", because Britain's superannuated social class system hindered the war effort and only a socialist economy would defeat Adolf Hitler. Once the middle class grasped this, they too would abide a socialist revolution, and only reactionary Britons would oppose it, thus limiting the force revolutionaries would need to take power. An English Socialism would come about which "will never lose touch with the tradition of compromise and the belief in a law that is above the State. It will shoot traitors, but it will give them a solemn trial beforehand and occasionally it will acquit them.
It will crush any open revolt promptly and cruelly, but it will interfere very little with the spoken and written word."[78] In the world of Nineteen Eighty-Four, "English Socialism" (or "Ingsoc" in Newspeak) is a totalitarian ideology, unlike the English revolution he foresaw. Comparison of the wartime essay "The Lion and the Unicorn" with Nineteen Eighty-Four shows that he perceived a Big Brother regime as a perversion of his cherished socialist ideals and of English Socialism. Thus Oceania is a corruption of the British Empire he believed would evolve "into a federation of Socialist states, like a looser and freer version of the Union of Soviet Republics".[79][verification needed] Nineteen Eighty-Four expands upon the subjects summarised in Orwell's essay "Notes on Nationalism"[80] about the lack of vocabulary needed to explain the unrecognised phenomena behind certain political forces. In Nineteen Eighty-Four, the Party's artificial, minimalist language "Newspeak" addresses the matter. O'Brien concludes: "The object of persecution is persecution. The object of torture is torture. The object of power is power."[82] In the book, the Inner Party member O'Brien describes the Party's vision of the future: There will be no curiosity, no enjoyment of the process of life. All competing pleasures will be destroyed. But always—do not forget this, Winston—always there will be the intoxication of power, constantly increasing and constantly growing subtler. Always, at every moment, there will be the thrill of victory, the sensation of trampling on an enemy who is helpless. If you want a picture of the future, imagine a boot stamping on a human face—forever.
One of the most notable themes in Nineteen Eighty-Four is censorship, especially in the Ministry of Truth, where photographs and public archives are manipulated to rid them of "unpersons" (people who have been erased from history by the Party).[83] On the telescreens, almost all figures of production are grossly exaggerated or simply fabricated to indicate an ever-growing economy, even during times when the reality is the opposite. One small example of the endless censorship is Winston being charged with the task of eliminating a reference to an unperson in a newspaper article. He also proceeds to write an article about Comrade Ogilvy, a made-up party member who allegedly "displayed great heroism by leaping into the sea from a helicopter so that the dispatches he was carrying would not fall into enemy hands."[84] In Oceania, the upper and middle classes have very little true privacy. All of their houses and apartments are equipped with two-way telescreens so that they may be watched or listened to at any time. Similar telescreens are found at workstations and in public places, along with hidden microphones. Written correspondence is routinely opened and read by the government before it is delivered. The Thought Police employ undercover agents, who pose as normal citizens and report any person with subversive tendencies. Children are encouraged to report suspicious persons to the government, and some denounce their parents. Citizens are controlled, and the smallest sign of rebellion, even something as small as a suspicious facial expression, can result in immediate arrest and imprisonment. Thus, citizens are compelled to obedience. In Orwell's book, almost the entire world lives in poverty; hunger, thirst, disease, and filth are the norms. Ruined cities and towns are common: the consequence of perpetual wars and extreme economic inefficiency.
Social decay and wrecked buildings surround Winston; aside from the ministries' headquarters, little of London was rebuilt. Middle class citizens and proles consume synthetic foodstuffs and poor-quality "luxuries" such as oily gin and loosely-packed cigarettes, distributed under the "Victory" brand, a parody of the low-quality Indian-made "Victory" cigarettes, which British soldiers commonly smoked during World War II. Winston describes something as simple as the repair of a broken window as requiring committee approval that can take several years and so most of those living in one of the blocks usually do the repairs themselves (Winston himself is called in by Mrs. Parsons to repair her blocked sink). All upper-class and middle-class residences include telescreens that serve both as outlets for propaganda and surveillance devices that allow the Thought Police to monitor them; they can be turned down, but the ones in middle-class residences cannot be turned off. In contrast to their subordinates, the upper class of Oceanian society reside in clean and comfortable flats in their own quarters, with pantries well-stocked with foodstuffs such as wine, real coffee, real tea, real milk, and real sugar, all denied to the general populace.[85]Winston is astonished that theliftsin O'Brien's building work, the telescreens can be completely turned off, and O'Brien has an Asian manservant, Martin. All upper class citizens are attended to by slaves captured in the "disputed zone", and "The Book" suggests that many have their own cars or even helicopters. However, despite their insulation and overt privileges, the upper class are still not exempt from the government's brutal restriction of thought and behaviour, even while lies and propaganda apparently originate from their own ranks. 
Instead, the Oceanian government offers the upper class their "luxuries" in exchange for maintaining their loyalty to the state; non-conformant upper-class citizens can still be condemned, tortured, and executed just like any other individual. "The Book" makes clear that the upper class' living conditions are only "relatively" comfortable, and would be regarded as "austere" by those of the pre-revolutionary élite.[86] The proles live in poverty and are kept sedated with pornography, a national lottery whose big prizes are reported won by non-existent people, and gin, "which the proles were not supposed to drink". At the same time, the proles are freer and less intimidated than the upper classes: they are not expected to be particularly patriotic and the levels of surveillance that they are subjected to are very low; they lack telescreens in their own homes. "The Book" indicates that because the middle class, not the lower class, traditionally starts revolutions, the model demands tight control of the middle class, with ambitious Outer-Party members neutralised via promotion to the Inner Party or "reintegration" (brainwashing via psychological and physical torture) by the Ministry of Love, and proles can be allowed intellectual freedom because they are deemed to lack intellect. 
Winston nonetheless believes that "the future belonged to the proles".[87] The standard of living of the populace is extremely low overall.[88] Consumer goods are scarce, and those available through official channels are of low quality; for instance, despite the Party regularly reporting increased boot production, more than half of the Oceanian populace goes barefoot.[89] The Party claims that poverty is a necessary sacrifice for the war effort, and "The Book" confirms that to be partially correct, since the purpose of perpetual war is to consume surplus industrial production.[90] As "The Book" explains, society is in fact designed to remain on the brink of starvation, as "In the long run, a hierarchical society was only possible on a basis of poverty and ignorance." The Party monitors facial expressions and aims to discover and control the thoughts of citizens through the "Thought Police" and the detection and elimination of "thoughtcrime". The Party rejects the legal principle "Cogitationis poenam nemo patitur" ("No one suffers punishment for his thoughts"). It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself—anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offence. There was even a word for it in Newspeak: FACECRIME, it was called.[91] One is how to discover, against his will, what another human being is thinking, (...)
The scientist of today is either a mixture of psychologist and inquisitor, studying with extraordinary minuteness the meaning of facial expressions, gestures, and tones of voice, and testing the truth-producing effects of drugs, shock therapy, hypnosis, and physical torture; (...)[92] We are not interested in those stupid crimes that you have committed. The Party is not interested in the overt act: the thought is all we care about.[93] When it was first published, Nineteen Eighty-Four received critical acclaim. V. S. Pritchett, reviewing the novel for the New Statesman, stated: "I do not think I have ever read a novel more frightening and depressing; and yet, such are the originality, the suspense, the speed of writing and withering indignation that it is impossible to put the book down."[94] P. H. Newby, reviewing Nineteen Eighty-Four for The Listener magazine, described it as "the most arresting political novel written by an Englishman since Rex Warner's The Aerodrome."[95] Nineteen Eighty-Four was also praised by Bertrand Russell, E. M. Forster and Harold Nicolson.[95] On the other hand, Edward Shanks, reviewing Nineteen Eighty-Four for The Sunday Times, was dismissive; Shanks claimed Nineteen Eighty-Four "breaks all records for gloomy vaticination".[95] C. S. Lewis was also critical of the novel, claiming that the relationship of Julia and Winston, and especially the Party's view on sex, lacked credibility, and that the setting was "odious rather than tragic".[96] Historian Isaac Deutscher was far more critical of Orwell from a Marxist perspective and characterised him as a "simple-minded anarchist".
Deutscher argued that Orwell had struggled to comprehend the dialectical philosophy of Marxism, that he had shown personal ambivalence towards other strands of socialism, and that his work 1984 had been appropriated for the purpose of anti-communist Cold War propaganda.[97][98] On its publication, many American reviewers interpreted the book as a statement on British Prime Minister Clement Attlee's socialist policies, or the policies of Joseph Stalin.[99] Serving as prime minister from 1945 to 1951, Attlee implemented wide-ranging social reforms and changes in the British economy following World War II. American trade union leader Francis A. Hanson wanted to recommend the book to his members but was concerned by some of the reviews it had received, so Orwell wrote a letter to him.[100][99] In his letter, Orwell described his book as a satire, and said: I do not believe that the kind of society I describe will necessarily arrive, but I believe (allowing, of course, for the fact that the book is a satire) that something resembling it could arrive...[it is] a show...[of the] perversions to which a centralised economy is liable and which have already been partly realised in communism and fascism. Throughout its publication history, Nineteen Eighty-Four has been either banned or legally challenged as subversive or ideologically corrupting, like the dystopian novels We (1924) by Yevgeny Zamyatin, Brave New World (1932) by Aldous Huxley, Darkness at Noon (1940) by Arthur Koestler, Kallocain (1940) by Karin Boye, and Fahrenheit 451 (1953) by Ray Bradbury.[101] On 5 November 2019, the BBC named Nineteen Eighty-Four on its list of the 100 most influential novels.[102] According to Czesław Miłosz, a defector from Stalinist Poland, the book also made an impression behind the Iron Curtain. Writing in The Captive Mind, he stated "[a] few have become acquainted with Orwell's 1984; because it is both difficult to obtain and dangerous to possess, it is known only to certain members of the Inner Party.
Orwell fascinates them through his insight into details they know well ... Even those who know Orwell only by hearsay are amazed that a writer who never lived in Russia should have so keen a perception into its life."[103][104] Writer Christopher Hitchens has called this "one of the greatest compliments that one writer has ever bestowed upon another ... Only one or two years after Orwell's death, in other words, his book about a secret book circulated only within the Inner Party was itself a secret book circulated only within the Inner Party."[103]: 54–55 In the year of the novel's publication, a one-hour radio adaptation was aired on the United States' NBC radio network as part of the NBC University Theatre series. The first television adaptation appeared as part of CBS's Studio One series in September 1953.[105] BBC Television broadcast an adaptation by Nigel Kneale in December 1954. The first feature film adaptation, 1984, was released in 1956. A second feature-length adaptation, Nineteen Eighty-Four, a reasonably faithful adaptation of the novel, followed in 1984. The story has been adapted several other times to radio, television, and film; other media adaptations include theater (a musical[106] and a play), opera, and ballet.[107] An audio dramatization of the novel, starring Andrew Garfield as Winston, was released in 2024 to critical acclaim. The novel was banned in the Soviet Union until 1988, when the first publicly available Russian version in the country, translated by Vyacheslav Nedoshivin, was published in Kodry, a literary journal of Soviet Moldavia. In 1989, another Russian version, translated by Viktor Golyshev, was also published. Outside the Soviet Union, the first Russian version was serialised in the émigré magazine Grani in the mid-1950s, then published as a book in 1957 in Frankfurt. Another Russian version, translated by Sergei Tolstoy from the French version, was published in Rome in 1966.
These translations were smuggled into the Soviet Union, where they became quite popular among dissidents.[108] Some clandestinely published translations also appeared in the Soviet Union; for example, the Soviet philosopher Evald Ilyenkov translated the novel into Russian from a German version.[109] For the Soviet elite, as early as 1959, on the orders of the Ideological Department of the Central Committee of the Soviet Communist Party, the Foreign Literature Publishers secretly issued a Russian version of the novel for the senior officers of the Communist Party.[110] In the People's Republic of China, the first Simplified Chinese version, translated by Dong Leshan, was serialised in the periodical Selected Translations from Foreign Literature in 1979, for senior officials and intellectuals deemed politically reliable enough. In 1985, the Chinese version was published by Huacheng Publishing House as a restricted publication. It was first made available to the general public in 1988, by the same publisher.[111] Amy Hawkins and Jeffrey Wasserstrom of The Atlantic stated in 2019 that the book is widely available in mainland China for several reasons: the general public by and large no longer reads books; the elites who do read books feel connected to the ruling party anyway; and the Communist Party sees being too aggressive in blocking cultural products as a liability.
The authors stated "It was—and remains—as easy to buy1984andAnimal FarminShenzhenorShanghaias it is in London or Los Angeles."[112]They also stated that "The assumption is not that Chinese people can't figure out the meaning of1984, but that the small number of people who will bother to read it won't pose much of a threat."[112]British journalist Michael Rank argued that it is only because the novel is set in London and written by a foreigner that the Chinese authorities believe it has nothing to do with China.[111] By 1989,Nineteen Eighty-Fourhad been translated into 65 languages, more than any other novel in English at that time.[113] Amateur translator Tsiu Ing-sing'sTaiwanese Hokkientranslation, which usesromanizationalongside Chinese characters, was published in 2025.[114] The effect ofNineteen Eighty-Fouron the English language is extensive; the concepts ofBig Brother,Room 101, theThought Police,thoughtcrime,unperson,memory hole(oblivion),doublethink(simultaneously holding and believing contradictory beliefs) andNewspeak(ideological language) have become common phrases for denoting totalitarian authority.Doublespeakandgroupthinkare both deliberate elaborations ofdoublethink, and the adjective "Orwellian" means similar to Orwell's writings, especiallyNineteen Eighty-Four. The practice of ending words with"-speak"(such asmediaspeak) is drawn from the novel.[115]Orwell is perpetually associated with 1984; in July 1984,an asteroidwas discovered byAntonín Mrkosand named after Orwell. References to the themes, concepts and plot ofNineteen Eighty-Fourhave appeared frequently in other works, especially in popular music and video entertainment. An example is the worldwide hit reality television showBig Brother, in which a group of people live together in a large house, isolated from the outside world but continuously watched by television cameras. 
In November 2012, the United States government argued before the US Supreme Court that it could continue to use GPS tracking of individuals without first seeking a warrant. In response, Justice Stephen Breyer questioned what that would mean for a democratic society by referencing Nineteen Eighty-Four, stating "If you win this case, then there is nothing to prevent the police or the government from monitoring 24 hours a day the public movement of every citizen of the United States. So if you win, you suddenly produce what sounds like 1984..."[116] The book touches on the invasion of privacy and ubiquitous surveillance. From mid-2013 it was publicised that the NSA had been secretly monitoring and storing global internet traffic, including the bulk collection of email and phone call data. Sales of Nineteen Eighty-Four increased by up to seven times within the first week of the 2013 mass surveillance leaks.[117][118][119] The book again topped the Amazon.com sales charts in 2017 after a controversy involving Kellyanne Conway using the phrase "alternative facts" to explain discrepancies with the media.[120][121][122][123] Nineteen Eighty-Four was number three on the list of "Top Check Outs of All Time" by the New York Public Library.[124] Nineteen Eighty-Four entered the public domain on 1 January 2021, 70 years after Orwell's death, in most of the world. It is still under copyright in the US until 95 years after publication, i.e. until 2044.[125][126] In October 1949, after reading Nineteen Eighty-Four, Huxley sent a letter to Orwell in which he argued that it would be more efficient for rulers to stay in power with a softer touch, allowing citizens to seek pleasure as a means of control, rather than using brute force. He wrote: Whether in actual fact the policy of the boot-on-the-face can go on indefinitely seems doubtful.
My own belief is that the ruling oligarchy will find less arduous and wasteful ways of governing and of satisfying its lust for power, and these ways will resemble those which I described in Brave New World. ... Within the next generation I believe that the world's rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience.[127] In the decades since the publication ofNineteen Eighty-Four, there have been numerous comparisons to Huxley'sBrave New World, which had been published 17 years earlier, in 1932.[128][129][130][131]They are both predictions of societies dominated by a central government and are both based on extensions of the trends of their times. However, members of the ruling class ofNineteen Eighty-Fouruse brutal force, torture and harshmind controlto keep individuals in line, while rulers inBrave New Worldkeep the citizens in line by drugs, hypnosis, genetic conditioning and pleasurable distractions. Regarding censorship, inNineteen Eighty-Fourthe government tightly controls information to keep the population in line, but in Huxley's world, so much information is published that readers are easily distracted and overlook the information that is relevant.[132] Elements of both novels can be seen in modern-day societies, with Huxley's vision being more dominant in the West and Orwell's vision more prevalent with dictatorships, including those in communist countries (such as in modern-dayChinaandNorth Korea), as is pointed out in essays that compare the two novels, including Huxley's ownBrave New World Revisited.[133][134][135][123] Comparisons with later dystopian novels likeThe Handmaid's Tale,Virtual Light,The Private EyeandThe Children of Menhave also been drawn.[136][137]
https://en.wikipedia.org/wiki/Nineteen_Eighty-Four
The Cupertino effect occurs when a spell checker erroneously replaces correctly spelled words that are not in its dictionary. The term refers to the unhyphenated English word "cooperation" often being changed to "Cupertino" by older spell checkers whose dictionaries contained only the hyphenated variant, "co-operation".[1] Cupertino is a city in California, and its name is often used as a metonym for Apple Inc., as the firm's corporate headquarters are located in the city. "Cupertino" has been in the dictionaries used by Microsoft Word since at least 1989.[2] Lack of vigilance in post-spell-check editing can result in even official documents containing phrases such as "South Asian Association for Regional Cupertino" and "presentation on African-German Cupertino".[3] Benjamin Zimmer at the University of Pennsylvania collected many examples of similar errors, including the common replacement of "definately" (a misspelling of "definitely") with "defiantly", "DeMeco Ryans" with "Demerol" (in The New York Times), "Voldemort" with "Voltmeter" (Denver Post), and "Muttahida Qaumi Movement" with "Muttonhead Quail Movement" (Reuters).[3] The user need not always select an incorrect word for it to appear in the document. In WordPerfect 9 with factory default settings, any unrecognized word that was close enough to exactly one known word was automatically replaced with that word. Current versions of Microsoft Word come configured to "auto-correct" misspelled words silently as the user types. Smartphones with dictionary-supported virtual keyboards automatically replace possible mistakes with dictionary words.[4] (Auto-correction can be disabled by the user.)
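The silent-replacement behaviour described above can be sketched in a few lines. The checker below is purely illustrative (it uses Python's difflib, not any real product's matching algorithm), and the detail that hyphenated dictionary entries are skipped as replacement candidates is an assumption modelling one conjectured cause of the original error:

```python
import difflib

def autocorrect(word, dictionary):
    """Silently replace any out-of-dictionary word with its closest
    single-token match, as older spell checkers did. Hyphenated
    entries are skipped as replacement candidates (an assumed
    behaviour modelling the Cupertino effect)."""
    if word.lower() in dictionary:
        return word
    candidates = [w for w in dictionary if "-" not in w]
    matches = difflib.get_close_matches(word.lower(), candidates, n=1, cutoff=0.6)
    return matches[0] if matches else word

# A word list that, like some early dictionaries, has only the
# hyphenated spelling of "co-operation" plus the city name.
words = ["co-operation", "cupertino", "regional", "association"]

print(autocorrect("cooperation", words))   # prints "cupertino"
```

Because "cooperation" is absent and its only close unhyphenated neighbour is the city name, the checker "corrects" a perfectly good word into nonsense, exactly the failure mode the examples above document.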
https://en.wikipedia.org/wiki/Cupertino_effect
Jejemon (Tagalog pronunciation: [ˈdʒɛdʒɛmɔ̝n]) was a popular culture phenomenon in the Philippines.[1] The Philippine Daily Inquirer describes Jejemons as a "new breed of hipster who have developed not only their own language and written text but also their own subculture and fashion."[2][3] This style of shorthand typing arose through the short messaging service, in which each text message sent by a cellphone is limited to 160 characters, as on popular phone models of the early 2000s such as the Nokia 5110.[4] As a result, an "SMS language" developed in which words were shortened to fit the 160-character limit. However, some jejemons are not really "conserving" characters; instead, they are lengthening their messages.[2] On April 14, 2010, a post on a Filipino Tumblr page about vice presidential candidate Jejomar Binay indicated that Binay was the Jejemons' preferred vice presidential candidate, complete with a fake poster of him called "Makki Autors". Later, the use of the word jejemon to refer to such people made the rounds of various Filipino internet message boards.[2] The word Jejemon is a portmanteau of the Japanese animated series Pokémon and jeje as an expression of laughter. Such shorthand language is not limited to Filipinos: Thais use "5555" to denote "hahahaha," since the number 5 in the Thai language is pronounced "ha."[3] The Jejemons are said to be the new yoyoyo~, a term used for Filipinos of the lower income class.[1][3] The parameters for being classified as a Jejemon, and how the different "levels" of "Jejemonism" are reached, are still unclear,[5] although there are named levels such as "mild," "moderate" and "severe" or "terminal."[6] The sociolect of the Jejemons, called Jejenese, is derived from English, Filipino and their code-switched variant, Taglish. It has its own, albeit unofficial, orthography, known as Jejebet, which uses the Filipino variant of the Roman alphabet, Arabic numerals and other special characters.
Words are created by rearranging the letters of a word, alternating capitalization, and over-using the letters H, X or Z.[3] Superfluous letters, as well as the presence of silent letters, characterize its spelling conventions. It has similarities with Leetspeak, primarily the alphanumeric nature of its writing. Several Facebook fan pages were created both in support of and against the group. Celebrities such as Alessandra de Rossi, Ces Drilon, and Lourd de Veyra have condemned the wholesale ridicule of the subculture.[2][7] In reaction to the sudden emergence of jejemons, "Jejebusters" appeared: internet grammar vigilantes, typically Filipinos, dedicated to the eradication of jejetyping and of jejemons themselves. YouTube videos were also uploaded parodying the Jejemons, connecting them to the 2010 election campaign. Edited television advertisements of the Nacionalista Party proclaiming their disdain, and an edited photograph of Gilberto Teodoro holding a sign saying that the Jejemons should be "brought back to elementary school", went viral.[8] In 2010, the Filipino GMA Network broadcast the situational comedy JejeMom, headlined by Eugene Domingo. In the same year, the late comedian Dolphy starred in and produced the film Father Jejemon. As part of the pre-school-year clean-up of schools for the upcoming 2010–11 school year, the Department of Education (DepEd) strongly discouraged students from using Jejemon spelling and grammar, especially in text messaging. Communicating with others in the Jejemon "language" was said to cause deterioration of young Filipino students' language skills.[9] From early 2013 onwards, with the rise of smartphones, which began to overtake feature phones in terms of sales in the country, the phenomenon went into a gradual decline in mainstream popularity. Some social media accounts use such spellings to this day, but mostly for sarcasm.
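Two of the Jejebet habits described above, alternating capitalization and superfluous letters, can be illustrated with a toy transformer. This is only a sketch: jejetyping had no fixed rules, and the specific substitution here (an extra "h" after every vowel) is an invented stand-in for the over-use of the letter H that the sources mention.

```python
def jejetype(text):
    """Toy jejetyping: alternate letter case and insert a
    superfluous 'h' after each vowel. Illustrative only; real
    jejetyping was unsystematic."""
    out = []
    upper = False
    for ch in text:
        if ch.isalpha():
            out.append(ch.upper() if upper else ch.lower())
            upper = not upper          # alternating capitalization
            if ch.lower() in "aeiou":  # superfluous letter
                out.append("h")
        else:
            out.append(ch)
    return "".join(out)

print(jejetype("hello po"))   # prints "hEhlLoh Poh"
```

The point of the sketch is that such transformations inflate rather than compress text, which is why, as noted above, jejetyping runs counter to the character-conserving origins of SMS language.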
The term "jejemon" gradually shifted in meaning to a pejorative describing a stereotype of poorly educated young people wearing hip-hop clothing, roughly comparable to the British slang term chav for sportswear. By 2017, Jejemons were also being called "hypebeasts", recognizable for wearing counterfeit skateboarding or car-culture brands.[citation needed] In the 2020s, Jejemons have also been called "genggeng", identifiable by black shirts, brown pants or joggers, and other gangsta hip-hop-themed clothing.
https://en.wikipedia.org/wiki/Jejemon
For centuries, there have been movements toreform the spellingof theEnglish language. Such spelling reform seeks to changeEnglish orthographyso that it is more consistent, matches pronunciation better, and follows thealphabetic principle.[1]Common motives for spelling reform include making learning quicker, making learning cheaper, and making English more useful as aninternational auxiliary language. Reform proposals vary in terms of the depth of the linguistic changes and by their implementations. In terms of writing systems, mostspelling reform proposalsare moderate; they use the traditionalEnglish alphabet, try to maintain the familiar shapes of words, and try to maintain common conventions (such assilent e). More radical proposals involve adding or removing letters or symbols or even creating new alphabets. Some reformers prefer a gradual change implemented in stages, while others favor an immediate and total reform for all. Some spelling reform proposals have been adopted partially or temporarily. Many of the spellings preferred byNoah Websterhave become standard in the United States, but have not been adopted elsewhere (seeAmerican and British English spelling differences). Modern English spelling developed from about 1350 onwards, when—after three centuries ofNorman French rule—English gradually became the official language of England again, although very different from before 1066, having incorporated many words of French origin (channel, tenor, royal, etc.). Early writers of this new English, such asGeoffrey Chaucer, gave it a fairly consistent spelling system, but this was soon diluted byChancery clerkswho re-spelled words based on French orthography.[2]English spelling consistency was further reduced whenWilliam Caxtonbrought theprinting pressto London in 1476. Having lived in mainland Europe for the preceding 30 years, his grasp of the English spelling system had become uncertain. 
The Belgian assistants whom he brought to help him set up his business had an even poorer command of it.[3] As printing developed, printers began to develop individual preferences or "house styles".[4]: 3Furthermore, typesetters were paid by the line and were fond of making words longer.[5]However, the biggest change in English spelling consistency occurred between 1525, when William Tyndale first translated the New Testament, and 1539, whenKing Henry VIIIlegalized the printing ofEnglish Biblesin England. The many editions of these Bibles were all printed outside England by people who spoke little or no English. They often changed spellings to match theirDutchorthography. Examples include the silenthinghost(to match Dutchgheest, which later becamegeest),aghast,ghastlyandgherkin. The silenthin other words—such asghospel,ghossipandghizzard—was later removed.[4]: 4 There have been two periods when spelling reform of the English language has attracted particular interest. The first of these periods was from the mid-16th to the mid-17th centuries, when a number of publications outlining proposals for reform were published. These proposals ranged from expansive systems of respelling (e.g. John Hart's) to essays calling for nonspecific change (e.g. Sir Thomas Smith's). Some of them are detailed below: These proposals generally did not attract serious consideration because they were too radical or were based on an insufficient understanding of the phonology of English.[7]: 18However, more conservative proposals were more successful.James Howellin hisGrammarof 1662 recommended minor changes to spelling, such as changinglogiquetologic,warretowar,sinnetosin,tounetotownandtrutotrue.[7]: 18Many of these spellings are now in general use. From the 16th century AD onward, English writers who were scholars ofGreekandLatin literaturetried to link English words to their Graeco-Latin counterparts. They did this by adding silent letters to make the real or imagined links more obvious. 
Thus det became debt (to link it to Latin debitum), dout became doubt (to link it to Latin dubitare), sissors became scissors and sithe became scythe (as they were wrongly thought to come from Latin scindere), iland became island (as it was wrongly thought to come from Latin insula), ake became ache (as it was wrongly thought to come from Greek akhos), and so forth.[4]: 5–7[8] William Shakespeare satirized the disparity between English spelling and pronunciation. In his play Love's Labour's Lost, the character Holofernes is "a pedant" who insists that pronunciation should change to match spelling, rather than spelling simply changing to match pronunciation. For example, Holofernes insists that everyone should pronounce the unhistorical B in words like doubt and debt.[9] The second period started in the 19th century and appears to coincide with the development of phonetics as a science.[7]: 18 In 1806, Noah Webster published his first dictionary, A Compendious Dictionary of the English Language. It included an essay on the oddities of modern orthography and his proposals for reform. Many of the spellings he used, such as color and center, would become hallmarks of American English. In 1807, Webster began compiling an expanded dictionary. It was published in 1828 as An American Dictionary of the English Language. Although it drew some protest, the reformed spellings were gradually adopted throughout the United States.[4]: 9 In 1837, Isaac Pitman published his system of phonetic shorthand, while in 1848 Alexander John Ellis published A Plea for Phonetic Spelling. These were proposals for a new phonetic alphabet. Although unsuccessful, they drew widespread interest. By the 1870s, the philological societies of Great Britain and the United States chose to consider the matter.
After the "International Convention for the Amendment of English Orthography" that was held inPhiladelphiain August 1876, societies were founded such as the English Spelling Reform Association and American Spelling Reform Association.[7]: 20That year, the American Philological Society adopted a list of eleven reformed spellings for immediate use. These wereare→ar, give→giv, have→hav, live→liv, though→tho, through→thru, guard→gard, catalogue→catalog, (in)definite→(in)definit, wished→wisht.[4]: 13[10]One major American newspaper that began using reformed spellings was theChicago Tribune, whose editor and owner, Joseph Medill, sat on the Council of the Spelling Reform Association.[10]In 1883, the American Philological Society andAmerican Philological Associationworked together to produce 24 spelling reform rules, which were published that year. In 1898, the AmericanNational Education Associationadopted its own list of 12 words to be used in all writings:tho, altho, thoro, thorofare, thru, thruout, catalog, decalog, demagog, pedagog, prolog, program.[4]: 14 TheSimplified Spelling Boardwas founded in the United States in 1906. The SSB's original 30 members consisted of authors, professors and dictionary editors.Andrew Carnegie, a founding member, supported the SSB with yearlybequestsof more than US$300,000.[7]: 21In April 1906, it published alist of 300 words,[11]which included 157[12]spellings that were already in common use in American English.[13]In August 1906, the SSB word list was adopted byTheodore Roosevelt, who ordered the Government Printing Office to start using them immediately. However, in December 1906, the U.S. Congress passed a resolution and the old spellings were reintroduced.[10]Nevertheless, some of the spellings survived and are commonly used in American English today, such asanaemia/anæmia→anemiaandmould→mold. 
Others such asmixed→mixtandscythe→sithedid not survive.[14]In 1920, the SSB published itsHandbook of Simplified Spelling, which set forth over 25 spelling reform rules. The handbook noted that every reformed spelling now in general use was originally the overt act of a lone writer, who was followed at first by a small minority. Thus, it encouraged people to "point the way" and "set the example" by using the reformed spellings whenever they could.[4]: 16However, with its main source of funds cut off, the SSB disbanded later that year. In Britain, spelling reform was promoted from 1908 by theSimplified Spelling Societyand attracted a number of prominent supporters. One of these wasGeorge Bernard Shaw(author ofPygmalion) and much of his considerablewillwas left to the cause. Among members of the society, theconditions of his willgave rise to major disagreements, which hindered the development of a single new system.[15] Between 1934 and 1975, theChicago Tribune, thenChicago's biggest newspaper, used a number of reformed spellings. Over a two-month spell in 1934, it introduced 80 respelled words, includingtho, thru, thoro, agast, burocrat, frate, harth, herse, iland, rime, stafandtelegraf. A March 1934 editorial reported that two-thirds of readers preferred the reformed spellings. Another claimed that "prejudice and competition" was preventing dictionary makers from listing such spellings. Over the next 40 years, however, the newspaper gradually phased out the respelled words. Until the 1950s,Funk & Wagnallsdictionaries listed many reformed spellings, including the SSB's 300, alongside the conventional spellings.[10] In 1949, a BritishLabour MP,Mont Follick, introduced aprivate member's billin theHouse of Commons, which failed at the second reading. 
In 1953, he again had the opportunity, and this time the bill passed its second reading by 65 votes to 53.[16] Because of anticipated opposition from the House of Lords, the bill was withdrawn after assurances from the minister of education that research would be undertaken into improving spelling education. In 1961, this led to James Pitman's Initial Teaching Alphabet, introduced into many British schools in an attempt to improve child literacy.[17] Although it succeeded in its own terms, the advantages were lost when children transferred to conventional spelling. After several decades, the experiment was discontinued.

In his 1969 book Spelling Reform: A New Approach, the Australian linguist Harry Lindgren proposed a step-by-step reform. The first, Spelling Reform step 1 (SR1), called for the short /ɛ/ sound (as in bet) always to be spelled with ⟨e⟩ (for example friend→frend, head→hed). This reform had some popularity in Australia.[18]

In 2013, University of Oxford Professor of English Simon Horobin proposed that variety in spelling be acceptable. For example, he believes that it does not matter whether words such as "accommodate" and "tomorrow" are spelled with double letters.[19] This proposal does not fit within the definition of spelling reform used by, for example, Random House Dictionary.[20]

Proponents of spelling reform such as the English Spelling Society argue that it would make English easier to learn to read, spell, and pronounce, as well as making it more useful for international communication and reducing educational costs (by cutting the need for remedial teaching and literacy programs), thereby enabling teachers and learners to spend more time on other subjects.[21] Another argument is the sheer amount of resources wasted by the current spelling.
For example, the Cut Spelling system of spelling reform uses up to 15% fewer letters than current spelling.[22] Books written with Cut Spelling could be printed on fewer pages, conserving resources such as paper and ink, a principle which extends to all forms and media of writing.

English spelling reforms have taken place already, just slowly and largely unorganized.[23] Many words that were once spelled unphonetically have since been reformed. For example, music was spelled musick until the 1880s, and fantasy was spelled phantasy until the 1920s.[24] Almost all words with the -or ending (such as error) or the -er ending (such as member) were once spelled -our (errour) and -re (membre) respectively, though this change did not happen as completely in British spelling as it did in American spelling. Since Samuel Johnson prescribed how words ought to be spelled in his 1755 dictionary, hundreds of thousands of words (as extrapolated from Masha Bell's research on 7,000 common words)[citation needed] have shifted so that their spelling does not reflect their pronunciation, and the alphabetic principle in English has gradually been corrupted, since English spelling has not kept pace with these changes in pronunciation. Reduced spelling is currently practiced on informal internet platforms and is common in text messaging.

The way vowel letters are used in English spelling often contradicts their usual values. For example, ⟨o⟩, expected to represent [əʊ] or [oʊ], may stand for [ʌ], while ⟨u⟩, expected to represent [ʌ], may represent [juː]. This makes English spelling even less intuitive for foreign learners than it is for native speakers, which matters for an international auxiliary language. Unlike many other languages, English spelling has never been systematically updated and thus today only partly holds to the alphabetic principle.[citation needed] As a result, English spelling is a system of weak rules with many exceptions and ambiguities.
Most phonemes in English can be spelled in more than one way: for example, the words fear and peer contain the same sound in different spellings. Likewise, many graphemes in English have multiple pronunciations and decodings, such as ough in words like through, though, thought, thorough, tough, trough, and plough. There are 13 ways of spelling the schwa (the most common of all English phonemes), 12 ways to spell /eɪ/ and 11 ways to spell /ɛ/. These kinds of inconsistencies can be found throughout the English lexicon, and they even vary between dialects. Masha Bell has analyzed 7,000 common words and found that about half cause spelling and pronunciation difficulties and about a third cause decoding difficulties.

Such ambiguity is particularly problematic in the case of heteronyms (homographs with different pronunciations that vary with meaning), such as bow, desert, live, read, tear, wind, and wound. In reading such words one must consider the context in which they are used, and this increases the difficulty of learning to read and pronounce English. A closer relationship between phonemes and spellings would eliminate many exceptions and ambiguities, making the language easier and faster to master.[25]

Some proposed simplified spellings already exist as standard or variant spellings in old literature. As noted earlier, in the 16th century, some scholars of Greek and Latin literature tried to make English words look more like their Graeco-Latin counterparts, at times even erroneously. They did this by adding silent letters, so det became debt, dout became doubt, sithe became scythe, iland became island, ake became ache, and so on.[4]: 5[8] Some spelling reformers propose undoing these changes. Other examples of older spellings that are more phonetic include frend for friend (as on Shakespeare's grave), agenst for against, yeeld for yield, bild for build, cort for court, sted for stead, delite for delight, entise for entice, gost for ghost, harth for hearth, rime for rhyme, sum for some, tung for tongue, and many others.
It was also once common to use -t for the ending -ed wherever it is pronounced as such (for example dropt for dropped). Some of the English language's most celebrated writers and poets have used these spellings and others proposed by today's spelling reformers. Edmund Spenser, for example, used spellings such as rize, wize and advize in his famous poem The Faerie Queene, published in the 1590s.[26]

The English alphabet has several letters whose characteristic sounds are already represented elsewhere in the alphabet. These include X, which can be realised as "ks", "gz", or Z; F, which can be realised as "ph" or V; soft G (/d͡ʒ/), which can be realised as J; hard C (/k/), which can be realised as K; soft C (/s/), which can be realised as S; and Q ("qu", /kw/ or /k/), which can be realised as "kw" (or simply K in some cases). However, these spellings are usually retained to reflect their often Latin roots.

There are many arguments against the development and implementation of a reformed orthography for English. Public acceptance of spelling reform has been consistently low, at least since the early 19th century, when spelling was codified by the influential English dictionaries of Samuel Johnson (1755) and Noah Webster (1806). The irregular spelling of very common words, such as are, have, done, of and would, makes it difficult to fix them without introducing a noticeable change to the appearance of English text. English is the only one of the top ten major languages with no associated worldwide regulatory body with the power to promulgate spelling changes.[citation needed]

English is a West Germanic language that has borrowed many words from non-Germanic languages, and the spelling of a word often reflects its origin. This sometimes gives a clue as to the meaning of the word. Even if a word's pronunciation has strayed from the original, the spelling is a record of the phoneme.
The same is true for words of Germanic origin whose current spelling still resembles their cognates in other Germanic languages. Examples include light, German Licht; knight, German Knecht; ocean, French océan; occasion, French occasion. Critics argue that re-spelling such words could hide those links,[27] although not all spelling reforms would require significantly re-spelling them.

Another criticism is that a reform may favor one dialect or pronunciation over others, creating a standard language. Some words have more than one acceptable pronunciation, regardless of dialect (e.g. economic, either). Some distinctions in regional accents are still marked in spelling. Examples include the distinction between fern, fir and fur that is maintained in Irish and Scottish English, or the distinction between toe and tow that is maintained in a few regional dialects in England and Wales. However, dialectal accents exist even in languages whose spelling is called phonemic, such as Spanish. Some letters have allophonic variation, such as how the letter a in bath currently stands for both /æ/ and /ɑ/, with speakers pronouncing it according to their dialect. Some words are distinguished only by non-phonetic spelling (as in knight and night).

Most spelling reforms attempt to improve phonemic representation, but some attempt genuine phonetic spelling,[28] usually by changing the basic English alphabet or making a new one. All spelling reforms aim for greater regularity in spelling.

These proposals seek to eliminate the extensive use of digraphs (such as "ch", "gh", "kn-", "-ng", "ph", "qu", "sh", voiced and voiceless "th", and "wh-") by introducing new letters and/or diacritics. Each letter would then represent a single sound. In a digraph, the two letters represent not their individual sounds but an entirely different and discrete sound, which can lengthen words and lead to mispronunciations.
Notable proposals include:

Some speakers of non-Latin-script languages occasionally write English phonetically in their respective writing systems, which may be perceived as an ad hoc spelling reform by some.[citation needed]

Many respected and influential people have been active supporters of spelling reform. This list of English-language spelling reform advocates who are notable for other reasons is split into those who advocated specific reforms and were successful, those who were not (yet), and those who instead supported the principle of reform more generally.
https://en.wikipedia.org/wiki/English_language_spelling_reform
Tironian notes (Latin: notae Tironianae) are thousands of signs that were formerly used in a system of shorthand (Tironian shorthand) dating from the 1st century BCE and named after Tiro, a personal secretary to Marcus Tullius Cicero, who is often credited as their inventor.[1] Tiro's system consisted of about 4,000 signs,[2] extended to 5,000 signs by others. During the medieval period, Tiro's notation system was taught in European monasteries and expanded to a total of about 13,000 signs.[3] The use of Tironian notes lasted into the 17th century. A few Tironian signs are still used today.[4][5]

Tironian notes can themselves be composites (ligatures) of simpler Tironian notes, the resulting compound still being shorter than the word it replaces. This accounts in part for the large number of attested Tironian notes, and for the wide variation in estimates of their total number. Further, the "same" sign can have other variant forms, leading to the same issue.

Before Tironian shorthand became popularized, literature professor Anthony Di Renzo explains, "no true Latin shorthand existed." The only systematized form of abbreviation in Latin was used for legal notations (notae juris). This system, however, was deliberately abstruse and accessible only to people with specialized knowledge. Otherwise, shorthand was improvised for note-taking or for writing personal communications, and some of these notations would not have been understood outside of closed circles. Some abbreviations of Latin words and phrases were commonly recognized, such as those of praenomina, and were typically used for inscriptions on monuments.[1]

Scholars infer that Marcus Tullius Cicero (106–43 BC) recognized the need for a comprehensive, standard Latin notation system after learning about the Greek shorthand system. Cicero presumably delegated the task of creating such a system for Latin to his slave and personal secretary Tiro.
Tiro's position required him to quickly and accurately transcribe dictations from Cicero, such as speeches, professional and personal correspondence, and business transactions, sometimes while walking through the forum or during fast-paced and contentious government and legal proceedings.[1] Nicknamed "the father of stenography" by historians,[4] Tiro developed a highly refined and accurate method that used Latin letters and abstract symbols to represent prepositions, truncated words, contractions, syllables, and inflections. According to Di Renzo: "Tiro then combined these mixed signs like notes in a score to record not just phrases, but, as Cicero marvels in a letter to Atticus, 'whole sentences.'"[1] Tiro's method became the first standardized and widely adopted system of Latin shorthand.[1] The system consisted of abbreviations and abstract symbols, which were either contrived by Tiro or borrowed from Greek shorthand.

Dio Cassius attributes the invention of shorthand to Maecenas, and states that he employed his freedman Aquila in teaching the system to numerous others.[6] Isidore of Seville, however, details another version of the early history of the system, ascribing the invention of the art to Quintus Ennius, who he says invented 1,100 marks (Latin: notae). Isidore states that Tiro brought the practice to Rome, but used his notes only for prepositions.[7] According to Plutarch in his "Life of Cato the Younger", Cicero's secretaries established the first examples of the art of Latin shorthand:[8]

This only of all Cato's speeches, it is said, was preserved; for Cicero, the consul, had disposed, in various parts of the senate-house, several of the most expert and rapid writers, whom he had taught to make figures comprising numerous words in a few short strokes; as up to that time they had not used those we call short-hand writers, who then, as it is said, established the first example of the art.
There are no surviving copies of Tiro's original manual and code, so knowledge of it is based on biographical records and copies of Tironian tables from the medieval period.[1] Historians typically date the invention of Tiro's system to 63 BC, when it was first used in official government business according to Plutarch in his biography of Cato the Younger in The Lives of the Noble Grecians and Romans.[9] Before Tiro's system was institutionalized, he used it himself as he was developing and fine-tuning it, which historians suspect may have been as early as 75 BC, when Cicero held public office in Sicily and needed his notes and correspondence to be written in code to protect sensitive information he had gathered about corruption among other government officials there.[1]

There is evidence that Tiro taught his system to Cicero and his other scribes, and possibly to his friends and family, before it came into wide use. In "Life of Cato the Younger", Plutarch wrote that during Senate hearings in 65 BC relating to the first Catilinarian conspiracy, Tiro and Cicero's other secretaries were in the audience meticulously and rapidly transcribing Cicero's oration. On many of the oldest Tironian tables, lines from this speech were frequently used as examples, leading scholars to theorize that it was originally transcribed in Tironian shorthand.
Scholars also believe that in preparation for speeches, Tiro drafted outlines in shorthand that Cicero used as notes while speaking.[1] Isidore tells of the development of additional Tironian notes by various hands, such as Vipsanius, Philargius, and Aquila (as above), until Seneca systematized the various marks to number approximately 5,000.[7]

Entering the Middle Ages, Tiro's shorthand was often used in combination with other abbreviations, and the original symbols were expanded to 14,000 symbols during the Carolingian dynasty, but it fell out of favor as shorthand and was forgotten until interest was rekindled by Thomas Becket, archbishop of Canterbury, in the 12th century.[10] In the 15th century, Johannes Trithemius, abbot of the Benedictine abbey of Sponheim in Germany, discovered the notae Benenses: a psalm and a Ciceronian lexicon written in Tironian shorthand.[11]

In Old English manuscripts, the Tironian et served as both a phonetic and a morphological placeholder. For instance, a Tironian et between two words would be pronounced ond and would mean 'and'. However, if the Tironian et followed the letter s, it would be pronounced sond and mean 'water' (ancestral to Modern English sound in the geographical sense). This dual function as a phonetic as well as a conjunction placeholder has not survived into formal Modern English; for example, one may not spell the word sand as s& (although this occurs in an informal style practised on certain Internet forums and sometimes in texting and other forms of instant messaging). This practice was distinct from the occasional use of &c. for etc., where the & is interpreted as the Latin word et ('and') and the c. is an abbreviation for Latin cetera ('[the] rest').

Just one Tironian symbol remains in common use today, the Tironian et (⁊, equivalent to &), used in Ireland and Scotland to mean and (where it is called agus in Irish and agusan[12] in Scottish Gaelic).
In blackletter texts (especially in German printing), it was still used in the abbreviation ⁊c. meaning etc. (for et cetera) throughout the 19th century.[citation needed] However, as not all typesets included a sort for the ⟨⁊⟩ character, the similar r rotunda ⟨ꝛ⟩ was substituted (which produced ꝛc.).

The use of Tironian notes on modern computing devices is not always straightforward. The Tironian et ⟨⁊⟩ is available at U+204A ⁊ TIRONIAN SIGN ET, and displays (e.g. for documents written in Irish or Scottish Gaelic) on all common operating systems: on Microsoft Windows, it can be shown in Segoe UI Symbol (a font that comes bundled with Windows Vista onwards); on macOS and iOS devices in all default system fonts; and on Windows, macOS, ChromeOS, and Linux in the free DejaVu Sans font (which comes bundled with ChromeOS and various Linux distributions). On the Microsoft Windows 11 Scottish Gaelic keyboard layout, the ⁊ can be entered by pressing AltGr+7.[13] On some Irish layouts, the shortcut is ⇧Shift+AltGr+7.

Some applications and websites, such as the online edition of the Dictionary of the Irish Language, substitute the Tironian et with the box-drawing character U+2510 ┐ BOX DRAWINGS LIGHT DOWN AND LEFT, as it looks similar and displays widely. The numeral 7 is also used in informal contexts such as Internet forums and occasionally in print.[14]

A number of other Tironian signs have been assigned to the Private Use Area of Unicode by the Medieval Unicode Font Initiative (MUFI).[15]
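The two code points mentioned above can be inspected programmatically; a minimal check using Python's standard unicodedata module:

```python
import unicodedata

# U+204A, the Tironian et discussed above.
tironian_et = "\u204a"
print(f"U+{ord(tironian_et):04X}", unicodedata.name(tironian_et))
# U+204A TIRONIAN SIGN ET

# U+2510, the box-drawing look-alike that some sites substitute for it.
lookalike = "\u2510"
print(f"U+{ord(lookalike):04X}", unicodedata.name(lookalike))
# U+2510 BOX DRAWINGS LIGHT DOWN AND LEFT
```

Note that the MUFI signs mentioned below live in the Private Use Area, so unicodedata.name() would raise ValueError for them: they have no standard Unicode names.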
https://en.wikipedia.org/wiki/Tironian_notes
Scribal abbreviations, or sigla (singular: siglum), are abbreviations used by ancient and medieval scribes writing in various languages, including Latin, Greek, Old English and Old Norse. In modern manuscript editing (substantive and mechanical), sigla are the symbols used to indicate the source manuscript (e.g. variations in text between different such manuscripts).

Abbreviated writing, using sigla, arose partly from the limitations of the workable nature of the materials (stone, metal, parchment, etc.) employed in record-making and partly from their availability. Thus, lapidaries, engravers, and copyists made the most of the available writing space. Scribal abbreviations were infrequent when writing materials were plentiful, but by the 3rd and 4th centuries AD, writing materials were scarce and costly.

During the Roman Republic, several abbreviations, known as sigla (plural of siglum 'symbol or abbreviation'), were in common use in inscriptions, and they increased in number during the Roman Empire. Additionally, in this period shorthand entered general usage. The earliest known Western shorthand system was that employed by the Greek historian Xenophon in the memoir of Socrates, and it was called notae socratae. In the late Roman Republic, the Tironian notes were developed, possibly by Marcus Tullius Tiro, Cicero's amanuensis, in 63 BC, to record information with fewer symbols; Tironian notes include a shorthand/syllabic alphabet notation different from the Latin minuscule hand and square and rustic capital letters. The notation was akin to modern stenographic writing systems. It used symbols for whole words or word roots and grammatical modifier marks, and it could be used to write either whole passages in shorthand or only certain words. In medieval times, the symbols to represent words were widely used, and the initial symbols, as few as 140 according to some sources, were increased to 14,000 by the Carolingians, who used them in conjunction with other abbreviations.
However, the alphabet notation had a "murky existence" (C. Burnett), as it was often associated with witchcraft and magic, and it was eventually forgotten. Interest in it was rekindled by the Archbishop of Canterbury Thomas Becket in the 12th century and later in the 15th century, when it was rediscovered by Johannes Trithemius, abbot of the Benedictine abbey of Sponheim, in a psalm written entirely in Tironian shorthand and a Ciceronian lexicon, which was discovered in a Benedictine monastery (notae benenses).[1]

To learn the Tironian note system, scribes required formal schooling in some 4,000 symbols; this later increased to some 5,000 symbols and then to some 13,000 in the medieval period (4th to 15th centuries AD);[2] the meanings of some characters remain uncertain. Sigla were mostly used in lapidary inscriptions; in some places and historical periods (such as medieval Spain), scribal abbreviations were overused to the extent that some are indecipherable. The abbreviations were not constant but changed from region to region. Scribal abbreviations increased in usage and reached their height in the Carolingian Renaissance (8th to 10th centuries). The most common abbreviations, called notae communes, were used across most of Europe, but others appeared in certain regions. In legal documents, legal abbreviations, called notae juris, appear, but also capricious abbreviations, which scribes manufactured ad hoc to avoid repeating names and places in a given document.[3]

Scribal abbreviations can be found in epigraphy and in sacred and legal manuscripts, written in Latin or in a vernacular tongue (but less frequently and with fewer abbreviations), whether calligraphically or not. In epigraphy, common abbreviations were comprehended in two observed classes:

Both forms of abbreviation are called suspensions (as the scribe suspends the writing of the word).
A separate form of abbreviation is by contraction and was mostly a Christian usage for sacred words, or Nomina Sacra; non-Christian sigla usage usually limited the number of letters the abbreviation comprised and omitted no intermediate letter. One practice was rendering an overused, formulaic phrase only as a siglum: DM for Dis Manibus ('Dedicated to the Manes'); IHS from the first three letters of ΙΗΣΟΥΣ; and RIP for requiescat in pace ('rest in peace'), because the long-form written usage of the abbreviated phrase, by itself, was rare. According to Traube, these abbreviations were not really meant to lighten the burden of the scribe but rather to shroud in reverent obscurity the holiest words of the Christian religion.[4]

Another practice was repeating the abbreviation's final consonant a given number of times to indicate a group of as many persons: AVG denoted Augustus, thus AVGG denoted Augusti duo; however, lapidaries took typographic liberties with that rule, and instead of using COSS to denote Consulibus duobus, they invented the CCSS form. Still, when occasion required referring to three or four persons, the complex doubling of the final consonant yielded to the simple plural siglum. To that effect, a vinculum (overbar) above a letter or a letter-set was also so used, becoming a universal medieval typographic usage. Likewise the tilde (~), an undulated, curved-end line, came into standard late-medieval usage.

Besides the tilde and macron marks, above and below letters, modifying cross-bars and extended strokes were employed as scribal abbreviation marks, mostly for prefixes and for verb, noun and adjective suffixes. The typographic abbreviations should not be confused with the phrasal abbreviations: i.e. (id est 'that is'); loc. cit. (loco citato 'in the passage already cited'); viz. (videlicet 'namely; that is to say; in other words' – formed with vi + the yogh-like glyph ꝫ, the siglum for the suffix -et, and the conjunction et); and etc. (et cetera 'and so on').
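The final-consonant doubling rule above (AVG → Augustus, AVGG → Augusti duo) is regular enough to sketch mechanically. This is a toy illustration, not a historical tool; as the text notes, lapidaries deviated from it (CCSS rather than COSS for Consulibus duobus):

```python
# Toy sketch of the epigraphic convention described above: repeating a
# siglum's final letter marks how many persons are meant.
def pluralize_siglum(siglum: str, count: int) -> str:
    """Repeat the final letter so it appears `count` times in total."""
    if count < 1:
        raise ValueError("count must be at least 1")
    return siglum + siglum[-1] * (count - 1)

print(pluralize_siglum("AVG", 2))  # AVGG (Augusti duo)
```

For three or four persons, the text notes that scribes abandoned this doubling for a simple plural siglum with an overbar, so the mechanical rule only models the two-person case faithfully.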
Moreover, besides scribal abbreviations, ancient texts also contained variant typographic characters, including ligatures (Æ, Œ, etc.), the long s (ſ), and the r rotunda (ꝛ). The u and v characters originated as scribal variants of the same letter, as did the i and j pair. Modern publishers printing Latin-language works replace variant typography and sigla with full-form Latin spellings; the convention of using u and i for vowels and v and j for consonants is a late typographic development.

Some ancient and medieval sigla are still used in English and other European languages; the Latin ampersand (&) replaces the conjunction and in English, et in Latin and French, and y in Spanish (but its use in Spanish is frowned upon, since the y is already smaller and easier to write)[citation needed]. The Tironian sign (⁊), resembling the digit seven (7), represents the conjunction et and is written only to the x-height; in current Irish language usage, the siglum denotes the conjunction agus ('and'). Other scribal abbreviations in modern typographic use are the percentage sign (%), from the Italian per cento ('per hundred'); the permille sign (‰), from the Italian per mille ('per thousand'); the pound signs (₤, £ and #, all descending from ℔ or lb for libra); and the dollar sign ($), which possibly derives from the Spanish word peso. The commercial at symbol (@), originally denoting 'at the rate/price of', is an abbreviation of the word amphora[5] – a kind of pot used as a unit of trade; from the 1990s, its use outside commerce became widespread as part of e-mail addresses.

Typographically, the ampersand, representing the word et, is a space-saving ligature of the letters e and t, its component graphemes. Since the establishment of movable-type printing in the 15th century, founders have created many such ligatures for each set of record type (font) to communicate much information with fewer symbols.
Moreover, during the Renaissance (14th to 17th centuries), when Ancient Greek manuscripts introduced that tongue to Western Europe, its scribal abbreviations were converted to ligatures in imitation of the Latin scribal writing to which readers were accustomed. Later, in the 16th century, when the culture of publishing came to include Europe's vernacular languages, Graeco-Roman scribal abbreviations disappeared, an ideological deletion ascribed to the anti-Latinist Protestant Reformation (1517–1648).

The common abbreviation Xmas, for Christmas, is a remnant of an old scribal abbreviation that substituted the Greek letter chi (Χ) for Christ's name (deriving from the first letter of his name, Χριστος).

After the invention of printing, manuscript-copying abbreviations continued to be employed in Church Slavonic and are still in use in printed books as well as on icons and inscriptions. Many common long roots and nouns describing sacred persons are abbreviated and written under the special diacritic symbol titlo, as shown in the figure at the right. That corresponds to the Nomina sacra ('sacred names') tradition of using contractions for certain frequently occurring names in Greek ecclesiastical texts. However, sigla for personal nouns are restricted to "good" beings, and the same words, when referring to "bad" beings, are spelled out. For example, while God in the sense of the one true God is abbreviated as Бг҃ъ, god referring to false gods is spelled out. Likewise, the word meaning 'angel' is generally abbreviated as агг҃лъ, but the word meaning 'angels' is spelled out in 'performed by evil angels' in Psalm 77.[6]

Adriano Cappelli's Lexicon Abbreviaturarum lists the various medieval brachygraphic signs found in Vulgar Latin and Italian texts, which originate from the Roman sigla, a symbol to express a word, and Tironian notes.[7] Quite rarely, abbreviations carried no marks to indicate that an abbreviation had occurred; when the marks were missing, their omission was often a copying error.
For example, e.g. is written with periods, but modern terms, such as PC, may be written in uppercase. The original manuscripts were written not in a modern sans-serif or serif font but in Roman capitals, rustic, uncial, insular, Carolingian or blackletter styles. For more, refer to Western calligraphy or a beginner's guide.[8]

Additionally, the abbreviations employed varied across Europe. In Nordic texts, for instance, two runes were used in text written in the Latin alphabet: fé (ᚠ 'cattle, goods') and maðr (ᛘ 'man').

Cappelli divides abbreviations into six overlapping categories:

Suspended terms are those of which only the first part is written, with the last part substituted by a mark, which can be of two types:

The largest class of suspensions consists of single letters standing in for words that begin with that letter. A dot at the baseline after a capital letter may stand for a title when used in front of a name, or for a person's name in medieval legal documents. However, not all sigla use the beginning of the word. For plural words, the siglum is often doubled: F. = frater and FF. = fratres. Tripled sigla often stand for three: DDD = domini tres. Letters lying on their sides, or mirrored (backwards), often indicate female titles, but a mirrored C (Ↄ) generally stands for con or contra (the latter sometimes with a macron above: Ↄ̄). To avoid confusion between abbreviations and numerals, the latter are often written with an overline above. In some contexts, however, numbers with a line above indicate that the number is to be multiplied by a thousand, and several other abbreviations also have a line above them, such as ΧΡ (Greek letters chi + rho) = Christus or IHS = Jesus. Starting in the 8th or 9th century, single-letter sigla grew less common and were replaced by longer, less ambiguous sigla with bars above them.

Abbreviations by contraction have one or more middle letters omitted. They were often represented with a general mark of abbreviation (above), such as a line above.
They can be divided into two subtypes:

Such marks inform the reader of the identity of the missing part of the word without affecting (being independent of) its meaning. Some of them may be interpreted as alternative contextual glyphs of their respective letters. The meaning of the marks depends on the letter on which they appear. A superscript letter generally referred to the letter omitted, but, in some instances, as in the case of vowel letters, it could refer to a missing vowel combined with the letter r, before or after it. It is only in some English dialects that the letter r before another consonant is largely silent and the preceding vowel is "r-coloured". However, a, i, and o above g meant gͣ gna, gͥ gni and gͦ gno respectively. In English the g is silent in gn, but in other languages it is pronounced. Vowel letters above q meant qu + vowel: qͣ, qͤ, qͥ, qͦ, qͧ. Vowels were the most common superscripts, but consonants could be placed above letters without ascenders; the most common was c, e.g. nͨ. A cut l above an n, nᷝ, meant nihil, for instance. For numerals, double-x superscripts are sometimes used to express scores, i.e. multiplication by twenty. For example, IIIIxx indicates 80, and VIxxXI indicates 131.

These marks are non-alphabetic characters carrying a particular meaning. Several of them continue in modern usage, as in the case of monetary symbols. In Unicode, they are referred to as letter-like glyphs. Additionally, several authors are of the view that the Roman numerals themselves were nothing less than abbreviations of the words for those numbers. Other examples of symbols still in some use are alchemical and zodiac symbols, which were, in any case, employed only in alchemy and astrology texts, making their appearance beyond that special context rare. Some important examples are two stacked horizontal lines (resembling =) for esse ('to be'), and an obelus consisting of a horizontal line and two dots (resembling ÷) for est ('it is').
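The vigesimal "score" notation above (the part before the double-x superscript is multiplied by twenty, any trailing numeral is added) can be sketched as a toy parser. Writing the superscript inline as lowercase "xx" is an assumption for plain-text input:

```python
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_value(s: str) -> int:
    """Standard additive/subtractive Roman numeral evaluation."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        # A smaller numeral before a larger one is subtracted (e.g. IX = 9).
        if i + 1 < len(s) and ROMAN[s[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

def score_value(s: str) -> int:
    """Evaluate the score notation: head * 20 + tail, e.g. VIxxXI."""
    if "xx" in s:
        head, _, tail = s.partition("xx")
        return roman_value(head) * 20 + roman_value(tail)
    return roman_value(s)

print(score_value("IIIIxx"))  # 80
print(score_value("VIxxXI"))  # 131
```

The two printed values match the worked examples in the text: IIII = 4, so IIIIxx is 4 × 20 = 80, and VIxxXI is 6 × 20 + 11 = 131.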
In addition to the signs used to signify abbreviations, medieval manuscripts feature some glyphs that are now uncommon but were not sigla. Many more ligatures were used to reduce the space occupied, a characteristic particularly prominent in blackletter scripts. Some letter variants, such as r rotunda, long s, uncial or insular variants (insular G) and Claudian letters, were in common use, as were letters derived from other scripts, such as the Nordic runes thorn (þ) and eth (ð), each representing the English "th" sounds. An illuminated manuscript would feature miniatures, decorated initials or littera notabilior, which later resulted in the bicamerality of the script (case distinction). Various typefaces have been designed to allow scribal abbreviations and other archaic glyphs to be replicated in print. They include "record type", first developed in the 1770s to publish Domesday Book and fairly widely used for the publication of medieval records in Britain until the end of the 19th century. In the Unicode Standard v. 5.1 (4 April 2008), 152 medieval and classical glyphs were given specific locations outside the Private Use Area. Specifically, they are located in the charts "Combining Diacritical Marks Supplement" (26 characters), "Latin Extended Additional" (10 characters), "Supplemental Punctuation" (15 characters), "Ancient Symbols" (12 characters) and especially "Latin Extended-D" (89 characters).[10] These consist of both precomposed characters and combining diacritical marks that modify other characters (comparable to writing in LaTeX or using overstrike in MS Word). Characters are "the smallest components of written language that have semantic value", whereas glyphs are "the shapes that characters can have when they are rendered or displayed".[11]
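The combining-mark approach mentioned above can be tried directly in Python. As a sketch, the combining Latin small letters a, e, i, o, u at U+0363 through U+0367 can be stacked over a base letter to build superscript abbreviations such as qͣ for qua:

```python
import unicodedata

# Sketch: build medieval-style superscript abbreviations (e.g. qͣ) from a
# base letter plus a combining small letter, U+0363..U+0367 (a, e, i, o, u).
for mark_cp in range(0x0363, 0x0368):
    mark = chr(mark_cp)
    combined = "q" + mark
    print(combined, unicodedata.name(mark))
```

Whether the mark actually renders over the base letter depends on the font, which is exactly the character-versus-glyph distinction the text draws.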
https://en.wikipedia.org/wiki/Scribal_abbreviation
In writing and typography, a ligature occurs where two or more graphemes or letters are joined to form a single glyph. Examples are the characters ⟨æ⟩ and ⟨œ⟩ used in English and French, in which the letters ⟨a⟩ and ⟨e⟩ are joined for the first ligature and the letters ⟨o⟩ and ⟨e⟩ for the second. For stylistic and legibility reasons, ⟨f⟩ and ⟨i⟩ are often merged to create ⟨fi⟩ (where the tittle on the ⟨i⟩ merges with the hood of the ⟨f⟩); the same is true of ⟨s⟩ and ⟨t⟩ to create ⟨st⟩. The common ampersand, ⟨&⟩, developed from a ligature in which the handwritten Latin letters ⟨e⟩ and ⟨t⟩ (spelling et, Latin for 'and') were combined.[1] The earliest known scripts, Sumerian cuneiform and Egyptian hieratic, both include many cases of character combinations that gradually evolved from ligatures into separately recognizable characters. Other notable ligatures, such as the Brahmic abugidas and the Germanic bind rune, figure prominently throughout ancient manuscripts. These new glyphs emerged alongside the proliferation of writing with a stylus, whether on paper or clay, often for a practical reason: faster handwriting. Merchants especially needed a way to speed up written communication and found that conjoining letters and abbreviating words for lay use was more convenient for record keeping and transactions than the bulky long forms.[citation needed] Around the 9th and 10th centuries, monasteries became a fountainhead for these types of script modifications. Medieval scribes who wrote in Latin increased their writing speed by combining characters and by introducing notational abbreviations. Others conjoined letters for aesthetic purposes. For example, in blackletter, letters with right-facing bowls (⟨b⟩, ⟨o⟩, and ⟨p⟩) and those with left-facing bowls (⟨c⟩, ⟨e⟩, ⟨o⟩, ⟨d⟩, ⟨g⟩ and ⟨q⟩) were written with the facing edges of the bowls superimposed.
In many script forms, characters such as ⟨h⟩, ⟨m⟩, and ⟨n⟩ had their vertical strokes superimposed.[citation needed] Scribes also used notational abbreviations to avoid having to write a whole character in one stroke. Manuscripts in the fourteenth century employed hundreds of such abbreviations.[citation needed] In handwriting, a ligature is made by joining two or more characters in an atypical fashion by merging their parts, or by writing one above or inside the other. In printing, a ligature is a group of characters that is typeset as a unit, so the characters do not have to be joined. For example, in some cases the ⟨fi⟩ ligature prints the letters ⟨f⟩ and ⟨i⟩ with a greater separation than when they are typeset as separate letters. When printing with movable type was invented around 1450,[4] typefaces included many ligatures and additional letters, as they were based on handwriting. Ligatures made printing with movable type easier because one sort would replace frequent combinations of letters, and they also allowed more complex and interesting character designs that would otherwise collide with one another.[citation needed] Because of their complexity, ligatures began to fall out of use in the 20th century. Sans serif typefaces, increasingly used for body text, generally avoid ligatures, though notable exceptions include Gill Sans and Futura. Inexpensive phototypesetting machines in the 1970s (which did not require journeyman knowledge or training to operate) also generally avoided them. A few, however, became characters in their own right; see the sections below about the German ß, various Latin accented letters, & et al. The trend against ligature use was further strengthened by the desktop publishing revolution. Early computer software in particular had no way to allow for ligature substitution (the automatic use of ligatures where appropriate), and most new digital typefaces did not include ligatures.
As most early PC development was designed for the English language (which already treated ligatures as optional at best), dependence on ligatures did not carry over to digital typesetting. Ligature use had already fallen as the number of traditional hand compositors and hot metal typesetting machine operators dropped, owing to the mass production of the IBM Selectric brand of electric typewriter in 1961. A designer active in the period commented: "some of the world's greatest typefaces were quickly becoming some of the world's worst fonts."[5] Ligatures have grown in popularity in the 21st century because of an increasing interest in creating typesetting systems that evoke arcane designs and classical scripts. One of the first computer typesetting programs to take advantage of computer-driven typesetting (and later laser printers) was Donald Knuth's TeX program. Now the standard method of mathematical typesetting, its default fonts are explicitly based on nineteenth-century styles. Many new fonts feature extensive ligature sets; these include FF Scala, Seria and others by Martin Majoor, and Hoefler Text by Jonathan Hoefler. Mrs Eaves by Zuzana Licko contains a particularly large set to allow designers to create dramatic display text with a feel of antiquity. A parallel use of ligatures is seen in the creation of script fonts that join letterforms to simulate handwriting effectively. This trend is driven in part by the increased support for other languages and alphabets in modern computing, many of which use ligatures somewhat extensively. This has prompted the development of new digital typesetting techniques such as OpenType, and the incorporation of ligature support into the text display systems of macOS, Windows, and applications like Microsoft Office.
An increasingly common modern trend is to use a "Th" ligature, which reduces the spacing between these letters to make reading easier, a trait infrequent in metal type.[6][7][8] Today, modern font programming divides ligatures into three groups, which can be activated separately: standard, contextual and historical. Standard ligatures are needed to allow the font to display without errors such as character collision. Designers sometimes find contextual and historical ligatures desirable for creating effects or evoking an old-fashioned print look.[citation needed] Many ligatures combine ⟨f⟩ with the following letter. A particularly prominent example is ⟨fi⟩ (or ⟨f‌i⟩, rendered with two normal letters). The tittle of the ⟨i⟩ in many typefaces collides with the hood of the ⟨f⟩ when the two are placed beside each other in a word, so they are combined into a single glyph with the tittle absorbed into the ⟨f⟩. Other ligatures with the letter f include ⟨fj⟩,[a] ⟨f‌l⟩ (fl), ⟨f‌f⟩ (ff), ⟨f‌f‌i⟩ (ffi), and ⟨f‌f‌l⟩ (ffl). In Linotype, ligature matrices for ⟨fa⟩, ⟨fe⟩, ⟨fo⟩, ⟨fr⟩, ⟨fs⟩, ⟨ft⟩, ⟨fb⟩, ⟨fh⟩, ⟨fu⟩, ⟨fy⟩, and for ⟨f⟩ followed by a full stop, comma, or hyphen are optional in many typefaces,[9] as is the equivalent set for the doubled ⟨ff⟩, as a method of overcoming the machine's physical restrictions.[citation needed] These arose because, with the usual type sort for lowercase ⟨f⟩, the end of its hood sits on a kern, which would be damaged by collision with raised parts of the next letter.[citation needed] Ligatures crossing the morpheme boundary of a composite word are sometimes considered incorrect, especially in official German orthography as outlined in the Duden. An English example would be ⟨ff⟩ in shelfful; a German example would be Schifffahrt ("boat trip").[b] Some computer programs (such as TeX) provide a setting to disable ligatures for German, and some users have written macros to identify which ligatures to disable.[10][11] Turkish distinguishes dotted and dotless "I".
If a ligature with f were used in words such as fırın (oven) and fikir (idea), this contrast would be obscured. The ⟨fi⟩ ligature, at least in the form typical of other languages, is therefore not used in Turkish typography.[citation needed] Remnants of the ligatures ⟨ſʒ⟩/⟨ſz⟩ ("sharp s", eszett) and ⟨tʒ⟩/⟨tz⟩ ("sharp t", tezett) from Fraktur, a family of German blackletter typefaces (originally mandatory in Fraktur but now employed only stylistically), can be seen to this day on street signs for city squares whose names contain Platz or end in -platz. Meanwhile, the "sz" ligature has merged into a single character, the German ß (see below). Sometimes, ligatures for ⟨st⟩ (st), ⟨ſt⟩ (ſt), ⟨ch⟩, ⟨ck⟩, ⟨ct⟩, ⟨Qu⟩ and ⟨Th⟩ are used (e.g. in the typeface Linux Libertine).[citation needed] Besides conventional ligatures, in the metal type era some newspapers commissioned custom condensed single sorts for common long names that might appear in news headings, such as "Eisenhower" and "Chamberlain". In these cases the characters did not appear combined, just more tightly spaced than if printed conventionally.[12] The German letter ⟨ß⟩ (Eszett, also called the scharfes S, meaning sharp s) is an official letter of the alphabet in Germany and Austria. A recognizable ligature representing the ⟨sz⟩ digraph developed in handwriting in the early 14th century.[13] Its name Es-zett (meaning S-Z) suggests a connection of "long s and z" (ſʒ), but the Latin script also knows a ligature of "long s over round s" (ſs). Since German was mostly set in blackletter typefaces until the 1940s, and those typefaces were rarely set in uppercase, a capital version of the Eszett never came into common use, even though its creation had been discussed since the end of the 19th century. Therefore, the common replacement in uppercase typesetting was originally SZ (Maße "measure" → MASZE, distinct from Masse "mass" → MASSE) and later SS (Maße → MASSE).
Until 2017, the SS replacement was the only valid spelling according to the official orthography in Germany and Austria. In Switzerland, the ß is omitted altogether in favour of ss. The capital version (ẞ) of the Eszett character has been used occasionally since 1905/06, has been part of Unicode since 2008, and has appeared in more and more typefaces. Since the end of 2010, the Ständiger Ausschuss für geographische Namen (StAGN) has recommended the new uppercase character for "ß", rather than replacing it with "SS" or "SZ", for geographical names.[14] A new standardized German keyboard layout (DIN 2137-T2) has included the capital ß since 2012. The new character entered the official orthographic rules in June 2017.[citation needed] A prominent feature of the colonial orthography created by John Eliot (later used in the first Bible printed in the Americas, the Massachusett-language Mamusse Wunneetupanatamwe Up-Biblum God, published in 1663) was the use of the double-o ligature ⟨ꝏ⟩ to represent the /u/ of food as opposed to the /ʊ/ of hook (although Eliot himself used ⟨oo⟩ and ⟨ꝏ⟩ interchangeably).[clarification needed] In the orthography in use since 2000 in the Wampanoag communities participating in the Wôpanâak Language Reclamation Project (WLRP), the ligature has been replaced with the numeral ⟨8⟩, partly because of its ease in typesetting and display as well as its similarity to the o-u ligature ⟨Ȣ⟩ used in Abenaki. For example, compare the colonial-era spelling seepꝏash[15] with the modern WLRP spelling seep8ash.[16] As the letter ⟨W⟩ is an addition to the Latin alphabet that originated in the seventh century, the phoneme it represents was formerly written in various ways. In Old English, the runic letter wynn (⟨Ƿ⟩) was used, but Norman influence forced wynn out of use. By the 14th century, the "new" letter ⟨W⟩, which originated as two ⟨V⟩ or ⟨U⟩ glyphs joined, had developed into a legitimate letter with its own position in the alphabet.
Because of its relative youth compared to other letters of the alphabet, only a few European languages (English, Dutch, German, Polish, Welsh, Maltese, and Walloon) use the letter in native words.[citation needed] The character ⟨Æ⟩ (lower case ⟨æ⟩; in ancient times named æsc), when used in Danish, Norwegian, Icelandic, or Old English, is not a typographic ligature. It is a distinct letter (a vowel) and, when collated, may be given a different place in the alphabetical order than Ae.[citation needed] In modern English orthography, ⟨Æ⟩ is not considered an independent letter but a spelling variant, for example: "encyclopædia" versus "encyclopaedia" or "encyclopedia". In this use, ⟨Æ⟩ comes from Medieval Latin, where it was an optional ligature in some specific words that had been transliterated and borrowed from Ancient Greek, for example "Æneas". It is still found as a variant in English and French words descended or borrowed from Medieval Latin, but the trend has recently been towards printing the ⟨A⟩ and ⟨E⟩ separately.[17] Similarly, ⟨Œ⟩ and ⟨œ⟩, while normally printed as ligatures in French, are replaced by component letters if technical restrictions require it.[citation needed] In German orthography, the umlauted vowels ⟨ä⟩, ⟨ö⟩, and ⟨ü⟩ historically arose from ⟨ae⟩, ⟨oe⟩, ⟨ue⟩ ligatures (strictly, from these vowels with a small letter ⟨e⟩ written as a diacritic, for example ⟨aͤ⟩, ⟨oͤ⟩, ⟨uͤ⟩). It is common practice to replace them with the ⟨ae⟩, ⟨oe⟩, ⟨ue⟩ digraphs when the diacritics are unavailable, for example in electronic communication. Phone books treat umlauted vowels as equivalent to the relevant digraph (so that a name Müller will appear at the same place as if it were spelled Mueller; German surnames have a strongly fixed orthography: a name is spelled either with ⟨ü⟩ or with ⟨ue⟩); however, the alphabetic order used in other books treats them as equivalent to the simple letters ⟨a⟩, ⟨o⟩ and ⟨u⟩.
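The two German collation conventions just described can be sketched as sort keys. This is an illustration only, not an implementation of any official collation standard; the translation tables below are assumptions matching the conventions named in the text:

```python
# Sketch of the two German ordering conventions described above.
def phonebook_key(name: str) -> str:
    """Phone-book style: treat ä/ö/ü/ß as ae/oe/ue/ss."""
    table = str.maketrans({"ä": "ae", "ö": "oe", "ü": "ue",
                           "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss"})
    return name.translate(table).lower()

def dictionary_key(name: str) -> str:
    """Dictionary style: treat umlauted vowels like the plain letters."""
    table = str.maketrans({"ä": "a", "ö": "o", "ü": "u",
                           "Ä": "A", "Ö": "O", "Ü": "U", "ß": "ss"})
    return name.translate(table).lower()

names = ["Mufti", "Müller"]
print(sorted(names, key=phonebook_key))   # ['Müller', 'Mufti']
print(sorted(names, key=dictionary_key))  # ['Mufti', 'Müller']
```

Note that under the phone-book key, Müller and Mueller sort to exactly the same position, as the text states.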
The convention in Scandinavian languages and Finnish is different: there the umlaut vowels are treated as independent letters with positions at the end of the alphabet.[citation needed] In Middle English, the word the (written þe) was frequently abbreviated as ⟨þͤ⟩, a ⟨þ⟩ (thorn) with a small ⟨e⟩ written as a diacritic. Similarly, the word that was abbreviated to ⟨þͭ⟩, a ⟨þ⟩ with a small ⟨t⟩ written as a diacritic. During the later Middle English and Early Modern English periods, the thorn in its common script, or cursive, form came to resemble a ⟨y⟩ shape. With the arrival of movable type printing, the substitution of ⟨y⟩ for ⟨Þ⟩ became ubiquitous, leading to the common "ye", as in "Ye Olde Curiositie Shoppe". One major reason for this was that ⟨y⟩ existed in the printer's types that William Caxton and his contemporaries imported from Belgium and the Netherlands, while ⟨Þ⟩ did not.[18] The ring diacritic used in vowels such as ⟨å⟩ likewise originated as an ⟨o⟩-ligature.[19] Before the replacement of the older "aa" with "å" became a de facto practice, an "a" with another "a" on top (aͣ) could sometimes be used, for example in Johannes Bureus's Runa: ABC-Boken (1611).[20] The ⟨uo⟩ ligature ů in particular saw use in Early New High German, but it merged in later Germanic languages with ⟨u⟩ (e.g. MHG fuosz, ENHG fuͦß, Modern German Fuß "foot"). It survives in Czech, where it is called kroužek. The letter hwair (ƕ), used only in transliteration of the Gothic language, resembles a ⟨hw⟩ ligature. It was introduced by philologists around 1900 to replace the digraph ⟨hv⟩ formerly used to express the phoneme in question, e.g. by Migne in the 1860s (Patrologia Latina vol. 18). The Byzantines had a unique o-u ligature ⟨Ȣ⟩ that, while originally based on the Greek alphabet's ο-υ, carried over into Latin alphabets as well.
This ligature is still seen today on icon artwork in Greek Orthodox churches, and sometimes in graffiti or other forms of informal or decorative writing.[citation needed] Gha ⟨ƣ⟩, a rarely used letter based on Q and G, was misconstrued by the ISO to be an OI ligature because of its appearance, and is thus known (to the ISO and, in turn, Unicode) as "Oi". Historically, it was used in many Latin-based orthographies of Turkic (e.g., Azerbaijani) and other Central Asian languages.[citation needed] The International Phonetic Alphabet formerly used ligatures to represent affricate consonants, of which six are encoded in Unicode: ʣ, ʤ, ʥ, ʦ, ʧ and ʨ. One fricative consonant is still represented with a ligature, ɮ, and the extensions to the IPA contain three more: ʩ, ʪ and ʫ.[citation needed] The Initial Teaching Alphabet, a short-lived alphabet intended for young children, used a number of ligatures to represent long vowels: ⟨ꜷ⟩, ⟨æ⟩, ⟨œ⟩, ⟨ᵫ⟩, ⟨ꭡ⟩, and ligatures for ⟨ee⟩, ⟨ou⟩ and ⟨oi⟩ that are not encoded in Unicode. Ligatures for consonants also existed, including ligatures of ⟨ʃh⟩, ⟨ʈh⟩, ⟨wh⟩, ⟨ʗh⟩, ⟨ng⟩ and a reversed ⟨t⟩ with ⟨h⟩ (neither the reversed t nor any of the consonant ligatures are in Unicode).[citation needed] Rarer ligatures also exist, including ⟨ꜳ⟩; ⟨ꜵ⟩; ⟨ꜷ⟩; ⟨ꜹ⟩; ⟨ꜻ⟩ (barred ⟨av⟩); ⟨ꜽ⟩; ⟨ꝏ⟩, which is used in medieval Nordic languages for /oː/ (a long close-mid back rounded vowel)[21] as well as in some orthographies of the Massachusett language to represent /uː/ (a long close back rounded vowel); ⟨ᵺ⟩; ⟨ỻ⟩, which was used in Medieval Welsh to represent /ɬ/ (the voiceless lateral fricative);[21] ⟨ꜩ⟩; ⟨ᴂ⟩; and ⟨ᴔ⟩. These, along with ⟨ꭣ⟩, have Unicode code points (in the code block Latin Extended-E for characters used in German dialectology (Teuthonista),[22] the Anthropos alphabet, Sakha and Americanist usage).[citation needed] The most common ligature in modern usage is the ampersand ⟨&⟩. This was originally a ligature of ⟨E⟩ and ⟨t⟩, forming the Latin word et, meaning "and". It has exactly the same use in French and in English. The ampersand comes in many different forms.
Because of its ubiquity, it is generally no longer considered a ligature but a logogram. Like many other ligatures, it has at times been considered a letter (e.g., in early Modern English); in English it is pronounced and, not et, except in the case of &c, pronounced et cetera. In most typefaces, it does not immediately resemble the two letters used to form it, although certain typefaces use designs in the form of a ligature (examples include the original versions of Futura and Univers, Trebuchet MS, and Civilité, known in modern times as the italic of Garamond).[citation needed] Similarly, the number sign ⟨#⟩ originated as a stylized abbreviation of the Roman term libra pondo, written as ℔.[23] Over time, the number sign was simplified to how it is seen today, with two horizontal strokes across two slash-like strokes.[24] Now a logogram, the symbol is used mainly to denote numbers (in the US) and weight in pounds.[25] It has also been used popularly on push-button telephones and as the hashtag indicator.[26] The at sign ⟨@⟩ is possibly a ligature, but there are many different theories about its origin. One theory says that the French word à (meaning at) was simplified by scribes who, instead of lifting the pen to write the grave accent, drew an arc around the ⟨a⟩. Another states that it is short for the Latin word for toward, ad, with the ⟨d⟩ being represented by the arc.
Another says it is short for an abbreviation of the term each at, with the ⟨e⟩ encasing the ⟨a⟩.[27] Around the 18th century, it started being used in commerce to indicate price per unit, as in "15 units @ $1".[28] After the popularization of email, this previously obscure character became widely known, used to tag specific users.[29] Lately, it has been used to de-gender nouns in Spanish, with no agreed pronunciation.[30] The dollar sign ⟨$⟩ possibly originated as a ligature (for "pesos", although there are other theories as well) but is now a logogram.[31] At least once, the United States dollar used a symbol resembling an overlapping U-S ligature, with the right vertical bar of the U intersecting through the middle of the S to resemble the modern dollar sign.[32] The Spanish peseta was sometimes abbreviated by a ligature ⟨₧⟩ (from Pts). The ligature ⟨₣⟩ (F-with-bar) was proposed in 1968 by Édouard Balladur, Minister of Economy,[33] as a symbol for the French franc, but it was never adopted and has never been officially used.[34] In astronomy, the planetary symbol for Mercury (☿) may be a ligature of Mercury's caduceus and a cross (which was added in the 16th century to Christianize the pagan symbol),[35] though other sources disagree;[36] the symbol for Venus ♀ may be a ligature of the Greek letters ⟨ϕ⟩ (phi) and ⟨κ⟩ (kappa).[36] The symbol for Jupiter (♃) descends from a Greek zeta with a horizontal stroke, ⟨Ƶ⟩, as an abbreviation for Zeus.[35][37] Saturn's astronomical symbol (♄) has been traced back to the Greek Oxyrhynchus Papyri, where it can be seen to be a Greek kappa-rho with a horizontal stroke, as an abbreviation for Κρονος (Cronus), the Greek name for the planet.[35] It later came to look like a lower-case Greek eta, with the cross added at the top in the 16th century to Christianize it. The dwarf planet Pluto is symbolized by a PL ligature, ♇.
A different PL ligature, ⅊, represents the property line in surveying.[citation needed] In engineering diagrams, a CL ligature, ℄, represents the center line of an object.[citation needed] The interrobang ⟨‽⟩ is an unconventional punctuation mark meant to combine the interrogation point (or question mark) and the bang (printer's slang for the exclamation mark) into one symbol, used to denote a sentence that is both a question and an exclamation. For example, the sentence "Is that actually true‽" shows that the speaker is surprised while asking the question.[38] Alchemy used a set of mostly standardized symbols, many of which were ligatures: 🜇 (AR, for aqua regia); 🜈 (S inside a V, for aqua vitae); 🝫 (MB, for balneum Mariae [Mary's bath], a double boiler); 🝬 (VB, for balneum vaporis, a steam bath); and 🝛 (aaa with overline, for amalgam).[citation needed] Composer Arnold Schoenberg introduced two ligatures as musical symbols to denote melody and countermelody. The symbols are ligatures of HT and NT, 𝆦 and 𝆧, from the German Hauptstimme and Nebenstimme respectively.[39][40] Digraphs, such as ⟨ll⟩ in Spanish or Welsh, are not ligatures in the general case, as the two letters are displayed as separate glyphs: although written together, when they are joined in handwriting or italic fonts the base form of the letters is not changed and the individual glyphs remain separate. Like some ligatures discussed above, these digraphs may or may not be considered individual letters in their respective languages. Until the 1994 spelling reform, the digraphs ⟨ch⟩ and ⟨ll⟩ were considered separate letters in Spanish for collation purposes.
Catalan makes a distinction between "Spanish ll", or palatalized l, written ll as in llei (law), and "French ll", or geminated l, written l·l as in col·lega (colleague).[citation needed] The difference can be illustrated with the French digraph œu, which is composed of the ligature œ and the simplex letter u.[citation needed] In Dutch, ⟨ij⟩ can be considered a digraph, a ligature, or a letter in itself, depending on the standard used. Its uppercase and lowercase forms are often available as a single glyph with a distinctive ligature in several professional typefaces (e.g. Zapfino). Sans serif uppercase ⟨IJ⟩ glyphs, popular in the Netherlands, typically use a ligature resembling a ⟨U⟩ with a broken left-hand stroke. Adding to the confusion, Dutch handwriting can render ⟨y⟩ (which is not found in native Dutch words, but occurs in words borrowed from other languages) as a ⟨ij⟩-glyph without the dots in its lowercase form, with the uppercase ⟨IJ⟩ looking virtually identical (only slightly bigger). When written as two separate letters, both should be capitalized, or neither, to form a correctly spelled word, like IJs or ijs (ice).[citation needed] Ligatures are not limited to the Latin script. Written Chinese has a long history of creating new characters by merging parts or wholes of other Chinese characters. However, a few of these combinations do not represent morphemes but retain the original multi-character (multiple-morpheme) reading and are therefore not considered true characters themselves. In Chinese, these ligatures are called héwén (合文) or héshū (合書); see polysyllabic Chinese characters for more. One popular ligature used on chūntiē decorations for Chinese Lunar New Year is a combination of the four characters for zhāocái jìnbǎo (招財進寶), meaning "ushering in wealth and fortune", used as a popular New Year's greeting.
In 1924, Du Dingyou (杜定友; 1898–1967) created the ligature 圕 from two of the three characters 圖書館 (túshūguǎn), meaning "library".[43] Although it has an assigned pronunciation of tuān and appears in many dictionaries, it is not a morpheme and cannot be used as such in Chinese. Instead, it is usually considered a graphic representation of túshūguǎn. In recent years, a Chinese internet meme, the Grass Mud Horse, has had such a ligature associated with it, combining the three relevant Chinese characters 草, 泥, and 马 (Cǎonímǎ). Similar to these ligatures were several "two-syllable Chinese characters" (雙音節漢字) created in the 19th century as Chinese characters for SI units. In Chinese these units are disyllabic and standardly written with two characters, as 厘米 límǐ "centimeter" (厘 centi-, 米 meter) or 千瓦 qiānwǎ "kilowatt". However, in the 19th century these were often written via compound characters, pronounced disyllabically, such as 瓩 for 千瓦 or 糎 for 厘米; some of these characters were also used in Japan, where they were pronounced with borrowed European readings instead. These have now fallen out of general use but are occasionally seen.[44] The CJK Compatibility Unicode block features characters that were combined into one square character in legacy character sets so that they match Japanese text. For example, the Japanese equivalent of "stock company", 株式会社 (kabushiki gaisha), can be represented by a single Unicode character, ⟨㍿⟩. Its romanized abbreviation K.K. can also be a single character, ⟨㏍⟩. There are other Latin abbreviations, such as kg for "kilogram", that can be ligated into a single square character, ⟨㎏⟩. The OpenType font format includes features for associating multiple glyphs with a single character, used for ligature substitution. Typesetting software may or may not implement this feature, even if it is explicitly present in the font's metadata. XeTeX is a TeX typesetting engine designed to make the most of such advanced features.
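The square CJK Compatibility characters just mentioned carry compatibility decompositions back to their component letters, which Python's standard unicodedata module can recover via NFKC normalization:

```python
import unicodedata

# Sketch: NFKC normalization expands CJK Compatibility "square" characters
# into the letter sequences they present.
for squared in "㎏㍿㏍":
    print(squared, "→", unicodedata.normalize("NFKC", squared))
```

This round trip is one-way by design: normalization never re-composes "kg" into ⟨㎏⟩, since the squared forms exist only for legacy-charset compatibility.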
This type of substitution used to be needed mainly for typesetting Arabic texts, but ligature lookups and substitutions are now being put into all kinds of Western Latin OpenType fonts. In OpenType, there are standard (liga), historical (hlig), contextual (clig), discretionary (dlig) and required (rlig) ligatures. Opinion is divided over whether it is the job of writers or typesetters to decide where to use ligatures. TeX is an example of a computer typesetting system that makes use of ligatures automatically. The Computer Modern Roman typeface provided with TeX includes the five common ligatures ⟨ff⟩, ⟨fi⟩, ⟨fl⟩, ⟨ffi⟩, and ⟨ffl⟩. When TeX finds these combinations in a text, it substitutes the appropriate ligature, unless overridden by the typesetter. CSS3 provides control over these properties using font-feature-settings,[45] though the CSS Fonts Module Level 4 draft standard indicates that authors should prefer several other properties.[46] Those include font-variant-ligatures, common-ligatures, discretionary-ligatures, historical-ligatures, and contextual.[47] The table below shows discrete letter pairs on the left, the corresponding Unicode ligature in the middle column, and the Unicode code point on the right. Provided you are using an operating system and browser that can handle Unicode, and have the correct Unicode fonts installed, some or all of these will display correctly. See also the provided graphic. Unicode maintains that ligaturing is a presentation issue rather than a character definition issue, and that, for example, "if a modern font is asked to display 'h' followed by 'r', and the font has an 'hr' ligature in it, it can display the ligature." Accordingly, the use of the special Unicode ligature characters is "discouraged", and "no more will be encoded in any circumstances".[48] (Unicode has continued to add ligatures, but only in cases where the ligatures were used as distinct letters in a language or could be interpreted as standalone symbols.
For example, ligatures such as æ and œ are not used to replace arbitrary "ae" or "oe" sequences; it is generally considered incorrect to write "does" as "dœs".) Microsoft Word disables ligature substitution by default, largely for backward compatibility when editing documents created in earlier versions of Word. Users can enable automatic ligature substitution on the Advanced tab of the Font dialog box. LibreOffice Writer enables standard ligature substitution by default for OpenType fonts. Users can enable or disable any ligature substitution in the Features dialog box, accessible via the Features button of the Character dialog box, or alternatively by entering a syntax combining font name and feature in the Font Name input box, for example: Noto Sans:liga=0. There are separate code points for the digraph DZ, the Dutch digraph IJ, and the Serbo-Croatian digraphs DŽ, LJ, and NJ. Although similar, these are digraphs, not ligatures. See Digraphs in Unicode. Four "ligature ornaments" are included from U+1F670 to U+1F673 in the Ornamental Dingbats block: regular and bold variants of ℯT (script e and T) and of ɛT (open E and T). Typographic ligatures are used in a form of contemporary art,[58] as illustrated by Chinese artist Xu Bing's work, in which he combines Latin letters to form characters that resemble Chinese.[59] Croatian designer Maja Škripelj also created a ligature combining the Glagolitic letters ⰘⰓ for euro coins.[60]
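Unicode's position that ligation is a presentation issue is visible in the data itself: the Latin ligature presentation forms at U+FB00 through U+FB06 are compatibility characters, and NFKC normalization replaces each one with the plain letter sequence it presents. A short sketch:

```python
import unicodedata

# Sketch: NFKC folds the Latin ligature presentation forms
# (U+FB00..U+FB06: ff, fi, fl, ffi, ffl, ſt, st) back to plain letters.
for lig in "ﬀﬁﬂﬃﬄﬅﬆ":
    print(f"U+{ord(lig):04X} {lig} → {unicodedata.normalize('NFKC', lig)}")
```

Note that U+FB05 (the long-s-t ligature) normalizes all the way to "st", because the long s itself compatibility-decomposes to "s".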
https://en.wikipedia.org/wiki/Typographic_ligature
The alphabet agencies, or New Deal agencies, were the U.S. federal government agencies created as part of the New Deal of President Franklin D. Roosevelt. The earliest agencies were created to combat the Great Depression in the United States and were established during Roosevelt's first 100 days in office in 1933. In total, at least 69 offices were created during Roosevelt's terms of office as part of the New Deal. Some alphabet agencies were established by Congress, such as the Tennessee Valley Authority. Others were established through Roosevelt executive orders, such as the Works Progress Administration and the Office of Censorship, or were part of larger programs, such as the many that belonged to the Works Progress Administration. Some of the agencies still exist today, while others have merged with other departments and agencies or were abolished. The agencies were sometimes referred to as alphabet soup. Libertarian author William Safire notes that the phrase "gave color to the charge of excessive bureaucracy." Democrat Al Smith, who turned against Roosevelt, said his government was "submerged in a bowl of alphabet soup."[1] "Even the Comptroller-General of the United States, who audits the government's accounts, declared he had never heard of some of them."[2] While previously all monetary appropriations had been separately passed by Act of Congress, as part of its power of the purse, the National Industrial Recovery Act allowed Roosevelt to allocate $3.3 billion without Congress (as much as the government had previously spent in ten years), through executive orders and other means. These powers were used to create many of the alphabet agencies. Other laws were passed allowing the new bureaus to issue their own directives within a wide sphere of authority.[2] Even though the National Industrial Recovery Act was found to be unconstitutional, many of the agencies created under it remained.
Since the 1990s, the term "alphabet agencies" has commonly been used to describe the agencies of the U.S. national security state. Many are members of the United States Intelligence Community,[3][4] and several were founded or expanded in the aftermath of the September 11 attacks.[5][6][7][8] Alphabet agencies in this sense of the term may also be called three-letter agencies,[9] because they often use three-letter acronyms.
https://en.wikipedia.org/wiki/Alphabet_agencies
ISO 4217is a standard published by theInternational Organization for Standardization(ISO) that defines alpha codes and numeric codes for the representation of currencies and provides information about the relationships between individual currencies and their minor units. This data is published in three tables:[1] The first edition of ISO 4217 was published in 1978. The tables, history and ongoing discussion are maintained bySIX Groupon behalf ofISOand theSwiss Association for Standardization.[2] The ISO 4217 code list is used inbankingandbusinessglobally. In many countries, the ISO 4217 alpha codes for the more common currencies are so well known publicly thatexchange ratespublished in newspapers or posted in banks use only these to delineate the currencies, instead of translated currency names or ambiguouscurrency symbols. ISO 4217 alpha codes are used on airline tickets and international train tickets to remove any ambiguity about the price. In 1973, the ISO Technical Committee 68 decided to develop codes for the representation of currencies and funds for use in any application of trade, commerce or banking. At the 17th session (February 1978), the relatedUN/ECEGroup of Experts agreed that the three-letter alphabetic codes for International Standard ISO 4217, "Codes for the representation of currencies and funds", would be suitable for use in international trade. Over time, new currencies are created and old currencies are discontinued. Such changes usually originate from the formation of new countries, treaties between countries on shared currencies or monetary unions, orredenominationfrom an existing currency due to excessive inflation. As a result, the list of codes must be updated from time to time. 
The ISO 4217 maintenance agency is responsible for maintaining the list of codes.[3] In the case of national currencies, the first two letters of the alpha code are the two letters of theISO 3166-1 alpha-2country codeand the third is usually the initial of the currency's main unit.[4]SoJapan's currency code isJPY: "JP" for Japan and "Y" foryen. This eliminates the problem caused by the namesdollar,franc,peso, andpoundbeing used in many countries, each having significantly differing values. While in most cases the ISO code resembles an abbreviation of the currency's full English name, this is not always the case, as currencies such as theAlgerian dinar,Aruban florin,Cayman dollar,renminbi,sterling, and theSwiss franchave been assigned codes which do not closely resemble abbreviations of the official currency names. In some cases, the third letter of the alpha code is not the initial letter of a currency unit name. There may be a number of reasons for this: In addition to codes for most active national currencies ISO 4217 provides codes for "supranational" currencies, procedural purposes, and several things which are "similar to" currencies: The use of the initial letter "X" for these purposes is facilitated by theISO 3166 rulethat no official country code beginning with X will ever be assigned. The inclusion of the EU (denoting theEuropean Union) in theISO 3166-1reserved codes list allows theeuroto be coded as EUR rather than assigned a code beginning with X, even though it is a supranational currency. ISO 4217 also assigns a three-digit numeric code to each currency. This numeric code is usually the same as the numeric code assigned to the corresponding country byISO 3166-1. For example, USD (United States dollar) has numeric code840which is also the ISO 3166-1 code for "US" (United States). The following is a list of active codes of official ISO 4217 currency names as of 1 January 2024[update]. 
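As described above, a national currency code typically concatenates the ISO 3166-1 alpha-2 country code with the initial of the currency's main unit. A minimal illustration of that pattern (the function name is illustrative, and the text notes real exceptions such as CHF and sterling, so this is the common case, not a rule):

```python
def national_currency_code(country_alpha2: str, unit_name: str) -> str:
    """Typical (not universal) ISO 4217 pattern: country code + unit initial."""
    return country_alpha2.upper() + unit_name[0].upper()

print(national_currency_code("JP", "yen"))     # → JPY
print(national_currency_code("US", "dollar"))  # → USD
```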
In the standard the values are called "alphabetic code", "numeric code", "minor unit", and "entity" (per UN/CEFACT Recommendation 9, paragraphs 8–9, ECE/TRADE/203, 1996).[22] A number of currencies had official ISO 4217 currency codes and currency names until their replacement by another currency. The table below shows the ISO currency codes of former currencies and their common names (which do not always match the ISO 4217 names). That table was introduced by ISO at the end of 1988.[23] The 2008 (7th) edition of ISO 4217 says the following about minor units of currency: Requirements sometimes arise for values to be expressed in terms of minor units of currency. When this occurs, it is necessary to know the decimal relationship that exists between the currency concerned and its minor unit. This information has therefore been included in this International Standard and is shown in the column headed "Minor unit" in Tables A.1 and A.2; "0" means that there is no minor unit for that currency, whereas "1", "2" and "3" signify a ratio of 10:1, 100:1 and 1000:1 respectively. The names of the minor units are not given. Examples for the ratios of 100:1 and 1000:1 include the United States dollar and the Bahraini dinar, for which the column headed "Minor unit" shows "2" and "3", respectively. As of 2021, two currencies have non-decimal ratios, the Mauritanian ouguiya and the Malagasy ariary; in both cases the ratio is 5:1. For these, the "Minor unit" column shows the number "2". Some currencies, such as the Burundian franc, do not in practice have any minor currency unit at all. These show the number "0", as with currencies whose minor units are unused due to negligible value.[citation needed] The ISO standard does not regulate the spacing, prefixing, or suffixing in usage of currency codes.
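The "minor unit" value is a base-10 exponent, so converting a major-unit amount to minor units is a multiplication by 10 to that power. A minimal sketch using the examples above (the table and function names are illustrative; only the decimal currencies are handled, since the non-decimal ouguiya and ariary need special treatment):

```python
# ISO 4217 "minor unit" exponents for a few currencies, per the text above.
MINOR_UNITS = {"USD": 2, "BHD": 3, "JPY": 0}

def to_minor_units(amount_major: float, code: str) -> int:
    """Convert a major-unit amount to an integer count of minor units."""
    return round(amount_major * 10 ** MINOR_UNITS[code])

print(to_minor_units(12.34, "USD"))  # → 1234 (cents)
print(to_minor_units(1.234, "BHD"))  # → 1234 (fils)
```

Financial code generally stores the integer minor-unit amount rather than a float, precisely to avoid binary rounding of decimal currency values.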
Thestyle guideof theEuropean Union's Publication Office declares that, for texts issued by or through the Commission inEnglish,Irish,Latvian, andMaltese, the ISO 4217 code is to be followed by a "hard space" (non-breaking space) and the amount:[47] and for texts inBulgarian,Croatian,Czech,Danish,Dutch,Estonian,Finnish,French,German,Greek,Hungarian,Italian,Lithuanian,Polish,Portuguese,Romanian,Slovak,Slovene,Spanish, andSwedishthe order is reversed; the amount is followed by a non-breaking space and the ISO 4217 code: As illustrated, the order is determined not by the currency but by the native language of the document context. TheUS dollarhas two codes assigned: USD and USN ("US dollar next day"[definition needed]). The USS (same day) code is not in use any longer, and was removed from the list of active ISO 4217 codes in March 2014. A number of active currencies do not have an ISO 4217 code, because they may be: These currencies include: SeeCategory:Fixed exchange ratefor a list of all currently pegged currencies. Despite having no presence or status in the standard,three letter acronymsthat resemble ISO 4217 coding are sometimes used locally or commercially to representde factocurrencies or currency instruments. The following non-ISO codes were used in the past. Minor units of currency (also known as currency subdivisions or currency subunits) are often used for pricing and tradingstocksand other assets, such as energy,[73]but are not assigned codes by ISO 4217. Two conventions for representing minor units are in widespread use: A third convention is similar to the second one but uses an upper-case letter, e.g. ZAC[77]for the South African Cent. Cryptocurrencieshavenotbeen assigned an ISO 4217 code.[78]However, some cryptocurrencies andcryptocurrency exchangesuse a three-letter acronym that resemble an ISO 4217 code.
https://en.wikipedia.org/wiki/ISO_4217
This is a list of computing and IT acronyms, initialisms and abbreviations.
https://en.wikipedia.org/wiki/List_of_computing_and_IT_abbreviations
This is a list of radio and televisionbroadcasting stations in the United Statesthat are currently assigned three-lettercall signs. In the United States, all radio and television broadcasting stations that are licensed by theFederal Communications Commission(FCC) are assigned official, distinctcall signs. Organized broadcasting began in the U.S. in the early 1920s on theAM band— FM and television did not exist yet. Initially most broadcasting stations were assigned three-letter calls; however, a switch was made in April 1922 to primarily four-letter calls, after the number of stations had increased into the hundreds. For a few years thereafter a small number of new three-letter calls continued to be issued. Although most of the original three-letter calls were randomly assigned, these later calls were often specially requested to match station slogans. The last new three-letter call was assigned to stationWIS(now WVOC) in Columbia, South Carolina on January 23, 1930. Since then, three-letter calls have only been assigned to stations, including FM (beginning in 1943)[1]and TV (beginning in 1946),[2]which are historically related to an AM station that was originally issued that call sign. This review only includes FCC-licensed stations. Not included are unlicensed operations, such ascarrier current, cable TV, and Internet stations — for example, San Diego State University's"KCR"— which have adopted call-letter-like identifiers that are not officially issued by the FCC. Also not included are stations which use, as slogans, three-letter truncations of their official four-letter call signs; for example, the full call sign for radio station"KOH"in Reno, Nevada is actually KKOH, and"WTN"in Nashville, Tennessee is actually WWTN. In addition, stations which formerly had three letters but have since changed (such as Albuquerque, New Mexico'sKKOB, formerly KOB) are not listed. 
As of January 2025, there are a total of 101 AM, FM and TV stations in the United States that are assigned three-letter call signs. These are divided among only 67 different three-letter calls, because in many cases the same call sign is used by more than one station, although a given call sign is never assigned to more than one AM, FM or TV station. These 67 different three-letter call signs are currently grouped as follows: Listed below are all the assignments as of January 2025.
https://en.wikipedia.org/wiki/List_of_three-letter_broadcast_call_signs_in_the_United_States
Lists of acronyms contain acronyms, a type of abbreviation formed from the initial components of the words of a longer name or phrase. They are organized alphabetically and by field.
https://en.wikipedia.org/wiki/Lists_of_acronyms
An airport is an aerodrome with facilities for flights to take off and land. Airports often have facilities to store and maintain aircraft, and a control tower. An airport consists of a landing area, which comprises an aerially accessible open space including at least one operationally active surface such as a runway for a plane to take off or a helipad, and often includes adjacent utility buildings such as control towers, hangars and terminals. An airport with a helipad for rotorcraft but no runway is called a heliport. An airport for use by seaplanes and amphibious aircraft is called a seaplane base. Such a base typically includes a stretch of open water for takeoffs and landings, and seaplane docks for tying-up. An international airport has additional facilities for customs and immigration.
https://en.wikipedia.org/wiki/Lists_of_airports_by_IATA_and_ICAO_code
A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed. The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU). The standard ISO 3166-1 defines short identification codes for most countries and dependent areas. The two-letter codes are used as the basis for other codes and applications; further applications are defined in ISO 3166-1 alpha-2. Telephone country codes are telephone number prefixes for international direct dialing (IDD), a system for reaching subscribers in foreign areas via international telecommunication networks. Country codes are defined by the International Telecommunication Union (ITU) in ITU-T standards E.123 and E.164. Country codes constitute the international telephone numbering plan. They are dialed before the national telephone number of a destination in a foreign country or area, but typically require at least one additional prefix, the international call prefix, which is an exit code from the national numbering plan to the international one. ITU standards recommend the digit sequence 00 for the prefix, and most countries comply. The ITU also maintains other country codes. The developers of ISO 3166 intended that in time it would replace other coding systems.
https://en.wikipedia.org/wiki/Country_code#Lists_of_country_codes_by_country
Control Pictures is a Unicode block containing characters for graphically representing the C0 control codes and other control characters. Its block name in Unicode 1.0 was Pictures for Control Codes.[3] The following Unicode-related documents record the purpose and process of defining specific characters in the Control Pictures block:
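The C0 control pictures are laid out so that the symbol for a control character sits at U+2400 plus that character's code, with DEL (U+007F) as the one exception at U+2421. A minimal sketch (the function name is illustrative):

```python
def control_picture(ch: str) -> str:
    """Map a C0 control character (or space) to its Control Pictures symbol.
    Pictures for U+0000..U+0020 sit at U+2400 + code; DEL (U+007F) is the
    exception, at U+2421 SYMBOL FOR DELETE."""
    code = ord(ch)
    if code <= 0x20:
        return chr(0x2400 + code)
    if code == 0x7F:
        return chr(0x2421)
    raise ValueError("not a C0 control character")

print(control_picture("\n"))  # → ␊ (U+240A SYMBOL FOR LINE FEED)
```

This mapping is handy for log viewers and hex-dump tools that want to show control bytes visibly instead of letting them act on the terminal.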
https://en.wikipedia.org/wiki/Control_Pictures
Specials is a short Unicode block of characters allocated at the very end of the Basic Multilingual Plane, at U+FFF0–FFFF. U+FFFE <noncharacter-FFFE> and U+FFFF <noncharacter-FFFF> are noncharacters, meaning they are reserved but do not cause ill-formed Unicode text. Versions of the Unicode standard from 3.1.0 to 6.3.0 claimed that these characters should never be interchanged, leading some applications to use them to guess text encoding by interpreting the presence of either as a sign that the text is not Unicode. However, Corrigendum #9 later specified that noncharacters are not illegal, so this method of checking text encoding is incorrect.[3] An example of an internal usage of U+FFFE is the CLDR algorithm; this extended Unicode algorithm maps the noncharacter to a minimal, unique primary weight.[4] Unicode's U+FEFF ZERO WIDTH NO-BREAK SPACE character can be inserted at the beginning of a Unicode text to signal its endianness: a program reading such a text and encountering 0xFFFE would then know that it should switch the byte order for all the following characters. The block's name in Unicode 1.0 was Special.[5] The replacement character � (often displayed as a black rhombus with a white question mark) is a symbol found in the Unicode standard at code point U+FFFD in the Specials table. It is used to indicate problems when a system is unable to render a stream of data as correct symbols.[6] As an example, a text file encoded in ISO 8859-1 containing the German word für contains the bytes 0x66 0xFC 0x72. If this file is opened with a text editor that assumes the input is UTF-8, the first and third bytes are valid UTF-8 encodings of ASCII, but the second byte (0xFC) is not valid in UTF-8. The text editor could replace this byte with the replacement character to produce a valid string of Unicode code points for display, so the user sees "f�r".
A poorly implemented text editor might write out the replacement character when the user saves the file; the data in the file will then become 0x66 0xEF 0xBF 0xBD 0x72. If the file is re-opened using ISO 8859-1, it will display "fï¿½r" (this is called mojibake). Since the replacement is the same for all errors, it is impossible to recover the original character. At one time the replacement character was often used when there was no glyph available in a font for that character, as in font substitution. However, most modern text rendering systems instead use a font's .notdef character, which in most cases is an empty box, or a "?" or "X" in a box,[7] sometimes called "tofu". There is no Unicode code point for this symbol. Thus the replacement character is now only seen for encoding errors. Some software programs translate invalid UTF-8 bytes to matching characters in Windows-1252 (since that is the most common source of these errors), so that the replacement character is never seen. The following Unicode-related documents record the purpose and process of defining specific characters in the Specials block:
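The round trip described above can be reproduced directly with lenient decoding; a small sketch:

```python
# The ISO 8859-1 bytes for "für"; 0xFC is not valid UTF-8.
data = b"\x66\xfc\x72"

# Lenient UTF-8 decoding substitutes U+FFFD for the bad byte.
shown = data.decode("utf-8", errors="replace")
print(shown)  # → f�r  (f, U+FFFD, r)

# Saving that string as UTF-8 bakes the replacement character in...
saved = shown.encode("utf-8")
print(saved.hex(" "))  # → 66 ef bf bd 72

# ...and re-reading those bytes as ISO 8859-1 yields the mojibake.
print(saved.decode("iso-8859-1"))  # → fï¿½r
```

Decoding with `errors="strict"` (the default) would instead raise `UnicodeDecodeError` at the bad byte, which is why careful tools surface the error rather than silently substituting.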
https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character
Theregional indicator symbolsare a set of 26 alphabeticUnicodecharacters (A–Z) intended to be used to encodeISO 3166-1 alpha-2two-lettercountry codesin a way that allows optional special treatment. These were defined byOctober 2010as part of theUnicode 6.0support foremoji, as an alternative to encoding separate characters for each country flag. Although they can be displayed as Roman letters, it is intended that implementations may choose to display them in other ways, such as by usingnational flags.[1][2]The Unicode FAQ indicates that this mechanism should be used and that symbols for national flags will not be directly encoded.[3] They are encoded in the rangeU+1F1E6🇦REGIONAL INDICATOR SYMBOL LETTER AtoU+1F1FF🇿REGIONAL INDICATOR SYMBOL LETTER Zwithin theEnclosed Alphanumeric Supplementblock in theSupplementary Multilingual Plane.[4] A pair of regional indicator symbols is referred to as anemoji flag sequence(although it represents a specific region, not a specific flag for that region).[6] Out of the 676 possible pairs of regional indicator symbols (26 × 26), only 270 are considered valid Unicode region codes. These are a subset of the region sequences in theCommon Locale Data Repository(CLDR):[6][7][8] A separate mechanism (emoji tag sequences) is used for regional flags, such as England 🏴󠁧󠁢󠁥󠁮󠁧󠁿, Scotland 🏴󠁧󠁢󠁳󠁣󠁴󠁿, Wales 🏴󠁧󠁢󠁷󠁬󠁳󠁿, Texas 🏴󠁵󠁳󠁴󠁸󠁿 or California 🏴󠁵󠁳󠁣󠁡󠁿.[12]It usesU+1F3F4🏴WAVING BLACK FLAGand formattingtag charactersinstead of regional indicator symbols. It is based onISO 3166-2regions with hyphen removed and lowercase, e.g. GB-ENG → gbeng, terminating withU+E007FCANCEL TAG. Flag of England is therefore represented by a sequenceU+1F3F4,U+E0067,U+E0062,U+E0065,U+E006E,U+E0067,U+E007F. 
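Because the regional indicators mirror A–Z in order, converting an ISO 3166-1 alpha-2 code to an emoji flag sequence is a fixed offset from 'A' to U+1F1E6. A minimal sketch (the function name is illustrative; whether the pair renders as a flag or as letters is up to the platform):

```python
REGIONAL_INDICATOR_A = 0x1F1E6  # U+1F1E6 REGIONAL INDICATOR SYMBOL LETTER A

def flag_sequence(alpha2: str) -> str:
    """Turn a two-letter region code into a pair of regional indicators."""
    assert len(alpha2) == 2 and alpha2.isalpha()
    return "".join(chr(REGIONAL_INDICATOR_A + ord(c) - ord("A"))
                   for c in alpha2.upper())

print(flag_sequence("JP"))  # → 🇯🇵 (U+1F1EF U+1F1F5)
```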
In the tenth revision the Unicode consortium was consideringU+1F3F3🏳WAVING WHITE FLAGinstead,[13]but from eleventh onwards it is black.[14]Some vendors choose to include customzero-width joiner sequencesthat only show up on their platform, such asWhatsAppand theirRefugee NationFlag 🏳️‍🟧‍⬛️‍🟧.[15] In 2007 a draft proposal was presented to the Unicode Technical Committee to encodeemojisymbols, specifically those in widespread use on mobile phones by Japanese telecommunications companiesDoCoMo,KDDI, andSoftBank.[16]The proposed symbols included ten national flags:[17]China(🇨🇳),Germany(🇩🇪),Spain(🇪🇸),France(🇫🇷), theUK(🇬🇧),Italy(🇮🇹),Japan(🇯🇵),South Korea(🇰🇷),Russia(🇷🇺), and theUnited States(🇺🇸). Encoding these flags but not other countries' flags was considered, by some, as prejudicial.[18]One rejected solution was to encode the ten flags but call them "EMOJI COMPATIBILITY SYMBOL-n" and represent them visually in the Standard as "EC n" instead of showing the flags they represent.[19]Another rejected solution would have allocated 676 codepoints (26×26) for each possible two letter combination of A–Z. They would represent political entities based onISO 3166such as "JP" for Japan or Internet ccTLDs (country code top-level domains) such as "EU" for the European Union.[20] The accepted solution was to add 26 characters for letters used for the representation of regional indicators, which used in pairs would represent the ten national flags and possible future extensions.[2]Per the Unicode Standard"the main purpose of such [regional indicator symbol] pairs is to provide unambiguous roundtrip mappings to certain characters used in the emoji core sets"[21]specifically the ten national flags:[22]🇨🇳, 🇩🇪, 🇪🇸, 🇫🇷, 🇬🇧, 🇮🇹, 🇯🇵, 🇰🇷, 🇷🇺, and 🇺🇸.
https://en.wikipedia.org/wiki/Regional_Indicator_Symbol
Enclosed Alphanumeric Supplementis aUnicode blockconsisting ofLatin alphabetcharacters andArabic numeralsenclosed in circles, ovals or boxes, used for a variety of purposes. It is encoded in the range U+1F100–U+1F1FF in theSupplementary Multilingual Plane. The block is mostly an extension of theEnclosed Alphanumericsblock, containing further enclosed alphanumeric characters which are not included in that block orEnclosed CJK Letters and Months. Most of the characters are single alphanumerics in boxes or circles, or with trailing commas. Two of the symbols are identified asdingbats. A number of multiple-letter enclosed abbreviations are also included, mostly to provide compatibility withBroadcast Markup Languagestandards (seeARIB STD B24 character set) and Japanese telecommunications networks'emojisets. The block also includes theregional indicator symbolsto be used for emojicountry flagsupport. The Enclosed Alphanumeric Supplement block contains 41emoji: U+1F170, U+1F171, U+1F17E, U+1F17F, U+1F18E, U+1F191 – U+1F19A and U+1F1E6 – U+1F1FF.[3][4] The block has eightstandardized variantsdefined to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for the following four base characters: U+1F170, U+1F171, U+1F17E & U+1F17F.[5]All of these base characters are defined as defaulting to a text presentation.[4]Their appearance depends on the program (such as a browser) and the fonts used: The following Unicode-related documents record the purpose and process of defining specific characters in the Enclosed Alphanumeric Supplement block:
https://en.wikipedia.org/wiki/Enclosed_Alphanumeric_Supplement
Tagsis aUnicode blockcontaining formatting tag characters. The block is designed to mirrorASCII. It was originally intended for language tags, but has now been repurposed as emoji modifiers, specifically for region flags. U+E0001, U+E0020–U+E007F were originally intended for invisibly tagging texts by language[3]but that use is no longer recommended.[4]All of those characters were deprecated in Unicode 5.1. With the release of Unicode 8.0, U+E0020–U+E007E are no longer deprecated characters. The change was made "to clear the way for the potential future use of tag characters for a purpose other than to represent language tags".[5]Unicode states that "the use of tag characters to represent language tags in a plain text stream is still a deprecated mechanism for conveying language information about text".[5] With the release of Unicode 9.0, U+E007F is no longer a deprecated character. (U+E0001 LANGUAGE TAG remains deprecated.) The release of Emoji 5.0 in May 2017[6]considers these characters to beemojifor use as modifiers in special sequences. The only usage specified is for representing the flags of regions, alongside the use ofRegional Indicator Symbolsfor national flags.[7]These sequences consist ofU+1F3F4🏴WAVING BLACK FLAGfollowed by a sequence of tags corresponding to the region as coded in theCLDR, thenU+E007FCANCEL TAG. For example, using the tags for "gbeng" (🏴󠁧󠁢󠁥󠁮󠁧󠁿) will cause some systems to display theflag of England, those for "gbsct" (🏴󠁧󠁢󠁳󠁣󠁴󠁿) theflag of Scotland, and those for "gbwls" (🏴󠁧󠁢󠁷󠁬󠁳󠁿) theflag of Wales.[7] The tag sequences are derived fromISO 3166-2, but sequences representing other subnational flags (for exampleUS states) are also possible using this mechanism. 
However, as of Unicode version 12.0, only the three flag sequences listed above are "Recommended for General Interchange" by the Unicode Consortium, meaning they are "most likely to be widely supported across multiple platforms".[8] Tag characters have also been used to create invisible prompt injections against LLMs.[9] The following Unicode-related documents record the purpose and process of defining specific characters in the Tags block:
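The tag characters mirror ASCII at U+E00xx, so building an emoji tag sequence for a subdivision flag is mechanical: the black flag, then one tag character per letter of the lowercased ISO 3166-2 code with the hyphen removed, then CANCEL TAG. A minimal sketch (the function name is illustrative):

```python
BLACK_FLAG = "\U0001F3F4"   # U+1F3F4 WAVING BLACK FLAG
CANCEL_TAG = "\U000E007F"   # U+E007F CANCEL TAG
TAG_BASE = 0xE0000          # tag characters mirror ASCII at U+E00xx

def subdivision_flag(iso_3166_2: str) -> str:
    """Build an emoji tag sequence for a region, e.g. 'GB-ENG' -> England."""
    tag = iso_3166_2.replace("-", "").lower()
    return BLACK_FLAG + "".join(chr(TAG_BASE + ord(c)) for c in tag) + CANCEL_TAG

print([hex(ord(c)) for c in subdivision_flag("GB-ENG")])
# → ['0x1f3f4', '0xe0067', '0xe0062', '0xe0065', '0xe006e', '0xe0067', '0xe007f']
```

The output matches the England sequence given in the text; as noted above, only gbeng, gbsct and gbwls are RGI, so other codes built this way will usually render as a plain black flag.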
https://en.wikipedia.org/wiki/Tags_(Unicode_block)
A Bayesian average is a method of estimating the mean of a population using outside information, especially a pre-existing belief,[1] which is factored into the calculation. This is a central feature of Bayesian interpretation, and it is useful when the available data set is small.[2] Calculating the Bayesian average uses the prior mean m and a constant C. C is chosen based on the typical data set size required for a robust estimate of the sample mean. The value is larger when the expected variation between data sets (within the larger population) is small, and smaller when the data sets are expected to vary substantially from one another. This is equivalent to adding C data points of value m to the data set: the result is a weighted average of the prior mean m and the sample mean. When the x_i are binary values 0 or 1, m can be interpreted as the prior estimate of a binomial probability, with the Bayesian average giving a posterior estimate for the observed data. In this case, C can be chosen based on the desired binomial proportion confidence interval for the sample value. For example, for rare outcomes when m is small, choosing C ≈ 9/m ensures that a 99% confidence interval has width of about 2m.
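The "add C pseudo-observations of value m" description corresponds to the formula (C·m + Σ x_i) / (C + n); a minimal sketch (the function name and the rating example are illustrative):

```python
def bayesian_average(values, prior_mean, C):
    """Weighted average of a prior mean and the sample, equivalent to
    adding C pseudo-observations of value prior_mean:
        (C * prior_mean + sum(values)) / (C + len(values))"""
    return (C * prior_mean + sum(values)) / (C + len(values))

# Two 5-star ratings pulled toward a hypothetical site-wide prior of 3.0
# with C = 5: the estimate is 25/7 ≈ 3.57, not a raw 5.0.
print(bayesian_average([5, 5], prior_mean=3.0, C=5))
```

This shrinkage toward the prior is why ranking systems often prefer a Bayesian average over a raw mean for items with few ratings.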
https://en.wikipedia.org/wiki/Bayesian_average
Prediction by partial matching(PPM) is an adaptivestatisticaldata compressiontechnique based oncontext modelingandprediction. PPM models use a set of previous symbols in the uncompressed symbol stream to predict the next symbol in the stream. PPM algorithms can also be used to cluster data into predicted groupings incluster analysis. Predictions are usually reduced to symbol rankings[clarification needed]. Each symbol (a letter, bit or any other amount of data) is ranked before it is compressed, and the ranking system determines the corresponding codeword (and therefore the compression rate). In many compression algorithms, the ranking is equivalent to probability mass function estimation. Given the previous letters (or given a context), each symbol is assigned with a probability. For instance, inarithmetic codingthe symbols are ranked by their probabilities to appear after previous symbols, and the whole sequence is compressed into a single fraction that is computed according to these probabilities. The number of previous symbols,n, determines the order of the PPM model which is denoted as PPM(n). Unbounded variants where the context has no length limitations also exist and are denoted asPPM*. If no prediction can be made based on allncontext symbols, a prediction is attempted withn− 1 symbols. This process is repeated until a match is found or no more symbols remain in context. At that point a fixed prediction is made. Much of the work in optimizing a PPM model is handling inputs that have not already occurred in the input stream. The obvious way to handle them is to create a "never-seen" symbol which triggers theescape sequence[clarification needed]. But what probability should be assigned to a symbol that has never been seen? This is called thezero-frequency problem. One variant uses theLaplace estimator, which assigns the "never-seen" symbol a fixedpseudocountof one. 
A variant called PPMd increments the pseudocount of the "never-seen" symbol every time the "never-seen" symbol is used. (In other words, PPMd estimates the probability of a new symbol as the ratio of the number of unique symbols to the total number of symbols observed). PPM compression implementations vary greatly in other details. The actual symbol selection is usually recorded usingarithmetic coding, though it is also possible to useHuffman encodingor even some type ofdictionary codingtechnique. The underlying model used in most PPM algorithms can also be extended to predict multiple symbols. It is also possible to use non-Markov modeling to either replace or supplement Markov modeling. The symbol size is usually static, typically a single byte, which makes generic handling of any file format easy. Published research on this family of algorithms can be found as far back as the mid-1980s. Software implementations were not popular until the early 1990s because PPM algorithms require a significant amount ofRAM. Recent PPM implementations are among the best-performinglossless compressionprograms fornatural languagetext. PPMd is a public domain implementation of PPMII (PPM with information inheritance) by Dmitry Shkarin which has undergone several incompatible revisions.[1]It is used in theRARfile format by default. It is also available in the7zandzipfile formats. Attempts to improve PPM algorithms led to thePAQseries of data compression algorithms. A PPM algorithm, rather than being used for compression, is used to increase the efficiency of user input in the alternate input method programDasher.
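The back-off from the longest context to shorter ones can be sketched with a toy frequency model. This is only an illustration of the fallback idea, not real PPM: the class name is invented, counts are kept per context string, and proper escape probabilities and arithmetic coding are omitted:

```python
from collections import defaultdict

class ToyPPM:
    """Toy PPM(n)-style predictor: counts symbols per context and backs
    off from the longest context to shorter ones when no match exists."""
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, symbol):
        # Record the symbol under every context length 0..order.
        for k in range(self.order + 1):
            self.counts[history[len(history) - k:]][symbol] += 1

    def predict(self, history):
        # Longest matching context wins; otherwise shorten and retry.
        for k in range(self.order, -1, -1):
            ctx = history[len(history) - k:]
            if self.counts[ctx]:
                total = sum(self.counts[ctx].values())
                return {s: c / total for s, c in self.counts[ctx].items()}
        return {}  # nothing seen at all: a real PPM makes a fixed prediction

model = ToyPPM(order=2)
text = "abracadabra"
for i, ch in enumerate(text):
    model.update(text[:i], ch)

print(model.predict("abracadab"))  # context "ab" was always followed by "r"
```

In a real coder, each failed context level would emit an escape symbol with its own probability before dropping to the shorter context.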
https://en.wikipedia.org/wiki/Prediction_by_partial_matching
(1) p(x = i) = p_i  (2) p(x) = p_1^[x=1] ⋯ p_k^[x=k]  (3) p(x) = [x=1]·p_1 + ⋯ + [x=k]·p_k In probability theory and statistics, a categorical distribution (also called a generalized Bernoulli distribution or multinoulli distribution[1]) is a discrete probability distribution that describes the possible results of a random variable that can take on one of K possible categories, with the probability of each category separately specified. There is no innate underlying ordering of these outcomes, but numerical labels are often attached for convenience in describing the distribution (e.g. 1 to K). The K-dimensional categorical distribution is the most general distribution over a K-way event; any other discrete distribution over a size-K sample space is a special case. The parameters specifying the probabilities of each possible outcome are constrained only by the fact that each must be in the range 0 to 1, and all must sum to 1. The categorical distribution is the generalization of the Bernoulli distribution for a categorical random variable, i.e. for a discrete variable with more than two possible outcomes, such as the roll of a die. On the other hand, the categorical distribution is a special case of the multinomial distribution, in that it gives the probabilities of potential outcomes of a single drawing rather than multiple drawings. Occasionally, the categorical distribution is termed the "discrete distribution". However, this properly refers not to one particular family of distributions but to a general class of distributions.
In some fields, such asmachine learningandnatural language processing, the categorical andmultinomial distributionsare conflated, and it is common to speak of a "multinomial distribution" when a "categorical distribution" would be more precise.[2]This imprecise usage stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1 toK; in this form, a categorical distribution is equivalent to a multinomial distribution for a single observation (see below). However, conflating the categorical and multinomial distributions can lead to problems. For example, in aDirichlet-multinomial distribution, which arises commonly in natural language processing models (although not usually with this name) as a result ofcollapsed Gibbs samplingwhereDirichlet distributionsare collapsed out of ahierarchical Bayesian model, it is very important to distinguish categorical from multinomial. Thejoint distributionof the same variables with the same Dirichlet-multinomial distribution has two different forms depending on whether it is characterized as a distribution whose domain is over individual categorical nodes or over multinomial-style counts of nodes in each particular category (similar to the distinction between a set ofBernoulli-distributednodes and a singlebinomial-distributednode). Both forms have very similar-lookingprobability mass functions(PMFs), which both make reference to multinomial-style counts of nodes in a category. However, the multinomial-style PMF has an extra factor, amultinomial coefficient, that is a constant equal to 1 in the categorical-style PMF. Confusing the two can easily lead to incorrect results in settings where this extra factor is not constant with respect to the distributions of interest. 
The factor is frequently constant in the complete conditionals used in Gibbs sampling and in the optimal distributions used in variational methods. A categorical distribution is a discrete probability distribution whose sample space is the set of k individually identified items. It is the generalization of the Bernoulli distribution for a categorical random variable. In one formulation of the distribution, the sample space is taken to be a finite sequence of integers. The exact integers used as labels are unimportant; they might be {0, 1, ..., k − 1} or {1, 2, ..., k} or any other arbitrary set of values. In the following descriptions, we use {1, 2, ..., k} for convenience, although this disagrees with the convention for the Bernoulli distribution, which uses {0, 1}. In this case, the probability mass function f is

f(x = i | p) = p_i,

where p = (p_1, ..., p_k), p_i represents the probability of seeing element i, and ∑_{i=1}^{k} p_i = 1. Another formulation that appears more complex but facilitates mathematical manipulations uses the Iverson bracket:[3]

f(x | p) = ∏_{i=1}^{k} p_i^{[x = i]},

where [x = i] evaluates to 1 if x = i and to 0 otherwise. This formulation has various advantages for mathematical manipulation. Yet another formulation makes explicit the connection between the categorical and multinomial distributions by treating the categorical distribution as a special case of the multinomial distribution in which the parameter n of the multinomial distribution (the number of sampled items) is fixed at 1. In this formulation, the sample space can be considered to be the set of 1-of-K encoded[4] random vectors x of dimension k having the property that exactly one element has the value 1 and the others have the value 0. The particular element having the value 1 indicates which category has been chosen.
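The equivalence of the integer, Iverson-bracket, and sum formulations described above can be illustrated with a short sketch (the function names are illustrative, not from any source):

```python
# Evaluating the categorical PMF for p = (p_1, ..., p_k) in three
# equivalent formulations, with categories labelled 1..k.

def pmf_direct(x, p):
    # Formulation (1): p(x = i) = p_i.
    return p[x - 1]

def pmf_product(x, p):
    # Formulation (2): product of p_i ** [x = i] using the Iverson bracket.
    result = 1.0
    for i, pi in enumerate(p, start=1):
        result *= pi ** (1 if x == i else 0)
    return result

def pmf_sum(x, p):
    # Formulation (3): sum of [x = i] * p_i.
    return sum((1 if x == i else 0) * pi for i, pi in enumerate(p, start=1))

p = (0.2, 0.3, 0.5)  # an example parameter vector; must sum to 1
```

All three functions return the same value for any category label, which is the point of the Iverson-bracket notation: it turns a case analysis into an algebraic expression.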
Theprobability mass functionfin this formulation is: wherepi{\displaystyle p_{i}}represents the probability of seeing elementiand∑ipi=1{\displaystyle \textstyle {\sum _{i}p_{i}=1}}. This is the formulation adopted byBishop.[4][note 1] InBayesian statistics, theDirichlet distributionis theconjugate priordistribution of the categorical distribution (and also themultinomial distribution). This means that in a model consisting of a data point having a categorical distribution with unknown parameter vectorp, and (in standard Bayesian style) we choose to treat this parameter as arandom variableand give it aprior distributiondefined using aDirichlet distribution, then theposterior distributionof the parameter, after incorporating the knowledge gained from the observed data, is also a Dirichlet. Intuitively, in such a case, starting from what is known about the parameter prior to observing the data point, knowledge can then be updated based on the data point, yielding a new distribution of the same form as the old one. As such, knowledge of a parameter can be successively updated by incorporating new observations one at a time, without running into mathematical difficulties. Formally, this can be expressed as follows. Given a model then the following holds:[2] This relationship is used inBayesian statisticsto estimate the underlying parameterpof a categorical distribution given a collection ofNsamples. Intuitively, we can view thehyperpriorvectorαaspseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vectorc) in order to derive the posterior distribution. 
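The pseudocount view of the conjugate update just described can be sketched in a few lines (the function names are illustrative):

```python
# With a Dirichlet(alpha) prior and observed category counts c, the
# posterior is Dirichlet(alpha + c); the posterior expected probability
# of category i is (c_i + alpha_i) / (N + sum(alpha)).

def posterior_params(alpha, counts):
    # Conjugate update: simply add the observed counts to the pseudocounts.
    return [a + c for a, c in zip(alpha, counts)]

def posterior_mean(alpha, counts):
    post = posterior_params(alpha, counts)
    total = sum(post)
    return [a / total for a in post]

# With a negligible prior, the posterior mean tracks the observed
# proportions: counts 40:5:55 give approximately (0.40, 0.05, 0.55).
mean = posterior_mean([1e-9, 1e-9, 1e-9], [40, 5, 55])
```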
Further intuition comes from the expected value of the posterior distribution (see the article on the Dirichlet distribution):

E[p_i] = (c_i + α_i) / (N + ∑_j α_j).

This says that the expected probability of seeing a category i among the various discrete distributions generated by the posterior distribution is simply equal to the proportion of occurrences of that category actually seen in the data, including the pseudocounts in the prior distribution. This makes a great deal of intuitive sense: if, for example, there are three possible categories, and category 1 is seen in the observed data 40% of the time, one would expect on average to see category 1 40% of the time in the posterior distribution as well. (This intuition ignores the effect of the prior distribution. Furthermore, the posterior is a distribution over distributions. The posterior distribution in general describes the parameter in question, and in this case the parameter itself is a discrete probability distribution, i.e. the actual categorical distribution that generated the data. For example, if the 3 categories appear in the observed data in the ratio 40:5:55, then, ignoring the effect of the prior distribution, the true parameter – i.e. the true, underlying distribution that generated our observed data – would be expected to have the average value (0.40, 0.05, 0.55), which is indeed what the posterior reveals. However, the true distribution might actually be (0.35, 0.07, 0.58) or (0.42, 0.04, 0.54) or various other nearby possibilities. The amount of uncertainty involved here is specified by the variance of the posterior, which is controlled by the total number of observations – the more data observed, the less the uncertainty about the true parameter.) (Technically, the prior parameter α_i should actually be seen as representing α_i − 1 prior observations of category i.
Then, the updated posterior parameterci+αi{\displaystyle c_{i}+\alpha _{i}}representsci+αi−1{\displaystyle c_{i}+\alpha _{i}-1}posterior observations. This reflects the fact that a Dirichlet distribution withα=(1,1,…){\displaystyle {\boldsymbol {\alpha }}=(1,1,\ldots )}has a completely flat shape — essentially, auniform distributionover thesimplexof possible values ofp. Logically, a flat distribution of this sort represents total ignorance, corresponding to no observations of any sort. However, the mathematical updating of the posterior works fine if we ignore the⋯−1{\displaystyle \cdots -1}term and simply think of theαvector as directly representing a set of pseudocounts. Furthermore, doing this avoids the issue of interpretingαi{\displaystyle \alpha _{i}}values less than 1.) Themaximum-a-posteriori estimateof the parameterpin the above model is simply themode of the posterior Dirichlet distribution, i.e.,[2] In many practical applications, the only way to guarantee the condition that∀iαi+ci>1{\displaystyle \forall i\;\alpha _{i}+c_{i}>1}is to setαi>1{\displaystyle \alpha _{i}>1}for alli. In the above model, themarginal likelihoodof the observations (i.e. thejoint distributionof the observations, with the prior parametermarginalized out) is aDirichlet-multinomial distribution:[2] This distribution plays an important role inhierarchical Bayesian models, because when doinginferenceover such models using methods such asGibbs samplingorvariational Bayes, Dirichlet prior distributions are often marginalized out. See thearticle on this distributionfor more details. Theposterior predictive distributionof a new observation in the above model is the distribution that a new observationx~{\displaystyle {\tilde {x}}}would take given the setX{\displaystyle \mathbb {X} }ofNcategorical observations. 
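The MAP estimate above, the mode of the posterior Dirichlet, can be sketched as follows; the guard reflects the stated condition that every α_i + c_i must exceed 1 (function names are illustrative):

```python
# MAP estimate for the categorical parameter under a Dirichlet(alpha) prior:
# p_i = (c_i + alpha_i - 1) / (N + sum(alpha) - k).

def map_estimate(alpha, counts):
    k = len(alpha)
    post = [a + c for a, c in zip(alpha, counts)]
    if any(p <= 1 for p in post):
        raise ValueError("mode undefined unless every alpha_i + c_i > 1")
    total = sum(post)  # equals N + sum(alpha)
    return [(p - 1) / (total - k) for p in post]
```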
As shown in theDirichlet-multinomial distributionarticle, it has a very simple form:[2] There are various relationships among this formula and the previous ones: The reason for the equivalence between posterior predictive probability and the expected value of the posterior distribution ofpis evident with re-examination of the above formula. As explained in theposterior predictive distributionarticle, the formula for the posterior predictive probability has the form of an expected value taken with respect to the posterior distribution: The crucial line above is the third. The second follows directly from the definition of expected value. The third line is particular to the categorical distribution, and follows from the fact that, in the categorical distribution specifically, the expected value of seeing a particular valueiis directly specified by the associated parameterpi. The fourth line is simply a rewriting of the third in a different notation, using the notation farther up for an expectation taken with respect to the posterior distribution of the parameters. Observe data points one by one and each time consider their predictive probability before observing the data point and updating the posterior. For any given data point, the probability of that point assuming a given category depends on the number of data points already in that category. In this scenario, if a category has a high frequency of occurrence, then new data points are more likely to join that category — further enriching the same category. This type of scenario is often termed apreferential attachment(or "rich get richer") model. This models many real-world processes, and in such cases the choices made by the first few data points have an outsize influence on the rest of the data points. InGibbs sampling, one typically needs to draw fromconditional distributionsin multi-variableBayes networkswhere each variable is conditioned on all the others. 
In networks that include categorical variables withDirichletpriors (e.g.mixture modelsand models including mixture components), the Dirichlet distributions are often "collapsed out" (marginalized out) of the network, which introduces dependencies among the various categorical nodes dependent on a given prior (specifically, theirjoint distributionis aDirichlet-multinomial distribution). One of the reasons for doing this is that in such a case, the distribution of one categorical node given the others is exactly theposterior predictive distributionof the remaining nodes. That is, for a set of nodesX{\displaystyle \mathbb {X} }, if the node in question is denoted asxn{\displaystyle x_{n}}and the remainder asX(−n){\displaystyle \mathbb {X} ^{(-n)}}, then whereci(−n){\displaystyle c_{i}^{(-n)}}is the number of nodes having categoryiamong the nodes other than noden. There are a number ofmethods, but the most common way to sample from a categorical distribution uses a type ofinverse transform sampling: Assume a distribution is expressed as "proportional to" some expression, with unknownnormalizing constant. Before taking any samples, one prepares some values as follows: Then, each time it is necessary to sample a value: If it is necessary to draw many values from the same categorical distribution, the following approach is more efficient. It draws n samples in O(n) time (assuming an O(1) approximation is used to draw values from the binomial distribution[6]). Inmachine learningit is typical to parametrize the categorical distribution,p1,…,pk{\displaystyle p_{1},\ldots ,p_{k}}via an unconstrained representation inRk{\displaystyle \mathbb {R} ^{k}}, whose components are given by: whereα{\displaystyle \alpha }is any real constant. Given this representation,p1,…,pk{\displaystyle p_{1},\ldots ,p_{k}}can be recovered using thesoftmax function, which can then be sampled using the techniques described above. 
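The inverse-transform procedure described above (prepare cumulative sums once, then draw with a uniform sample and a binary search) can be sketched as follows; names are illustrative:

```python
import bisect
import random

# Inverse-transform sampling from a categorical distribution expressed as
# unnormalized weights; the unknown normalizing constant is handled by
# sampling uniformly on [0, total).

def make_sampler(weights, rng=random):
    cumulative = []
    total = 0.0
    for w in weights:
        total += w
        cumulative.append(total)

    def sample():
        u = rng.uniform(0.0, total)
        return bisect.bisect_left(cumulative, u)  # 0-based category index

    return sample

sampler = make_sampler([2.0, 3.0, 5.0])  # probabilities 0.2, 0.3, 0.5
```

Each draw costs O(log k) after the O(k) preparation, which is the payoff of precomputing the cumulative table.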
There is however a more direct sampling method that uses samples from theGumbel distribution.[7]Letg1,…,gk{\displaystyle g_{1},\ldots ,g_{k}}bekindependent draws from the standard Gumbel distribution, then will be a sample from the desired categorical distribution. (Ifui{\displaystyle u_{i}}is a sample from the standarduniform distribution, thengi=−log⁡(−log⁡ui){\displaystyle g_{i}=-\log(-\log u_{i})}is a sample from the standard Gumbel distribution.)
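The Gumbel-based sampler can be sketched as follows; it works directly on unnormalized log-weights, with no softmax needed before sampling (function names are illustrative):

```python
import math
import random

# Gumbel-max trick: perturb each (unnormalized) log-probability with an
# independent standard Gumbel draw and take the argmax.

def gumbel_max_sample(log_weights, rng=random):
    best_index, best_value = 0, -math.inf
    for i, lw in enumerate(log_weights):
        u = max(rng.random(), 1e-300)   # clamp away from 0 before taking logs
        g = -math.log(-math.log(u))     # standard Gumbel draw from a uniform
        if lw + g > best_value:
            best_index, best_value = i, lw + g
    return best_index

sample = gumbel_max_sample([math.log(2), math.log(3), math.log(5)])
```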
https://en.wikipedia.org/wiki/Categorical_distribution
Incomputing, thecount–min sketch(CM sketch) is aprobabilisticdata structurethat serves as a frequency table of events in astream of data. It useshash functionsto map events to frequencies, but unlike ahash tableuses onlysub-linear space, at the expense of overcounting some events due tocollisions. The count–min sketch was invented in 2003 by Graham Cormode andS. Muthu Muthukrishnan[1]and described by them in a 2005 paper.[2] Count–min sketch is an alternative tocount sketchand AMS sketch and can be considered an implementation of acounting Bloom filter(Fan et al., 1998[3]) or multistage-filter.[1]However, they are used differently and therefore sized differently: a count–min sketch typically has a sublinear number of cells, related to the desired approximation quality of the sketch, while a counting Bloom filter is more typically sized to match the number of elements in the set. The goal of the basic version of the count–min sketch is to consume a stream of events, one at a time, and count the frequency of the different types of events in the stream. At any time, the sketch can be queried for the frequency of a particular event typeifrom a universe of event typesU{\displaystyle {\mathcal {U}}}, and will return an estimate of this frequency that is within a certain distance of the true frequency, with a certain probability.[a] The actual sketch data structure is a two-dimensional array ofwcolumns anddrows. The parameterswanddare fixed when the sketch is created, and determine the time and space needs and the probability of error when the sketch is queried for a frequency orinner product. Associated with each of thedrows is a separate hash function; the hash functions must bepairwise independent. The parameterswanddcan be chosen by settingw= ⌈e/ε⌉andd= ⌈ln 1/δ⌉, where the error in answering a query is within an additive factor ofεwith probability1 −δ(see below), andeisEuler's number. 
When a new event of typeiarrives we update as follows: for each rowjof the table, apply the corresponding hash function to obtain a column indexk=hj(i). Then increment the value in rowj, columnkby one. Thepoint queryasks for the count of an event typei. The estimated count is given by the least value in the table fori, namelya^i=minjcount[j,hj(i)]{\displaystyle {\hat {a}}_{i}=\min _{j}\mathrm {count} [j,h_{j}(i)]}, wherecount{\displaystyle \mathrm {count} }is the table. Obviously, for eachi, one hasai≤a^i{\displaystyle a_{i}\leq {\hat {a}}_{i}}, whereai{\displaystyle a_{i}}is the true frequency with whichioccurred in the stream. Additionally, this estimate has the guarantee thata^i≤ai+εN{\displaystyle {\hat {a}}_{i}\leq a_{i}+\varepsilon N}with probability1−δ{\displaystyle 1-\delta }, whereN=∑i∈Uai{\displaystyle N=\sum _{i\in {\mathcal {U}}}a_{i}}is the stream size, i.e. the total number of items seen by the sketch. Aninner product queryasks for theinner productbetween the histograms represented by two count–min sketches,counta{\displaystyle \mathrm {count} _{a}}andcountb{\displaystyle \mathrm {count} _{b}}. Leta⋅b^j=∑k=0wcounta[j,k]⋅countb[j,k]{\displaystyle {\widehat {a\cdot b}}_{j}=\sum _{k=0}^{w}\mathrm {count} _{a}[j,k]\cdot \mathrm {count} _{b}[j,k]}. The inner product can then be estimated asa⋅b^=minja⋅b^j{\displaystyle {\widehat {a\cdot b}}=\min _{j}{\widehat {a\cdot b}}_{j}}. One can show thata⋅b≤a⋅b^{\displaystyle a\cdot b\leq {\widehat {a\cdot b}}}, and with probability1−δ{\displaystyle 1-\delta },a⋅b^≤a⋅b+ε||a||1||b||1{\displaystyle {\widehat {a\cdot b}}\leq a\cdot b+\varepsilon ||a||_{1}||b||_{1}}. Like thecount sketch, the Count–min sketch is a linear sketch. That is, given two streams, constructing a sketch on each stream and summing the sketches yields the same result as concatenating the streams and constructing a sketch on the concatenated streams. 
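The update and point-query procedures above can be sketched as a small class. Note that salting Python's built-in hash() per row, as done here, is an illustrative stand-in for a pairwise-independent hash family and does not carry the formal guarantees:

```python
import random

class CountMinSketch:
    def __init__(self, width, depth, seed=0):
        self.width = width
        self.depth = depth
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _column(self, row, item):
        # One salted hash per row (illustrative, not pairwise independent).
        return hash((self.salts[row], item)) % self.width

    def update(self, item, count=1):
        # One increment per row, at that row's hashed column.
        for j in range(self.depth):
            self.table[j][self._column(j, item)] += count

    def query(self, item):
        # Point query: the row-wise minimum never underestimates the truth.
        return min(self.table[j][self._column(j, item)]
                   for j in range(self.depth))
```

Because collisions only ever add to a cell, the estimate is always an upper bound on the true count, matching the one-sided error guarantee above.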
This makes the sketch mergeable and appropriate for use in distributed settings in addition to streaming ones. One potential problem with the usual min estimator for count–min sketches is that they arebiased estimatorsof the true frequency of events: they may overestimate, but never underestimate the true count in a point query. Furthermore, while the min estimator works well when the distribution is highly skewed, other sketches such as the Count sketch based on means are more accurate when the distribution is not sufficiently skewed. Several variations on the sketch have been proposed to reduce error and reduce or eliminate bias.[4] To remove bias, thehCount*estimator[5]repeatedly randomly selects d random entries in the sketch and takes the minimum to obtain an unbiased estimate of the bias and subtracts it off. Amaximum likelihood estimator(MLE) was derived in Ting.[6]By using the MLE, the estimator is always able to match or better the min estimator and works well even if the distribution is not skewed. This paper also showed the hCount* debiasing operation is a bootstrapping procedure that can be efficiently computed without random sampling and can be generalized to any estimator. Since errors arise from hash collisions with unknown items from the universe, several approaches correct for the collisions when multiple elements of the universe are known or queried for simultaneously[7][8][6]. For each of these, a large proportion of the universe must be known to observe a significant benefit. Conservative updatingchanges the update, but not the query algorithms. To countcinstances of event typei, one first computes an estimatea^i=minjcount[j,hj(i)]{\displaystyle {\hat {a}}_{i}=\min _{j}\mathrm {count} [j,h_{j}(i)]}, then updatescount[j,hj(i)]←max{count[j,hj(i)],ai^+c}{\displaystyle \mathrm {count} [j,h_{j}(i)]\leftarrow \max\{\mathrm {count} [j,h_{j}(i)],{\hat {a_{i}}}+c\}}for each rowj. 
While this update procedure makes the sketch not a linear sketch, it is still mergeable.
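Conservative updating can be sketched as below. The layout (a depth × width table with one salted hash per row) mirrors the basic sketch but is illustrative; only cells that would otherwise fall below the new estimate are raised:

```python
import random

def make_sketch(width, depth, seed=0):
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(depth)]
    return {"table": [[0] * width for _ in range(depth)],
            "salts": salts, "width": width}

def _columns(sketch, item):
    # Salted built-in hash as a stand-in for pairwise-independent hashing.
    return [hash((s, item)) % sketch["width"] for s in sketch["salts"]]

def point_query(sketch, item):
    return min(sketch["table"][j][k]
               for j, k in enumerate(_columns(sketch, item)))

def conservative_update(sketch, item, count=1):
    # Raise each of the item's cells only up to (current estimate + count).
    target = point_query(sketch, item) + count
    for j, k in enumerate(_columns(sketch, item)):
        sketch["table"][j][k] = max(sketch["table"][j][k], target)
```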
https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch
The Akaike information criterion (AIC) is an estimator of prediction error and thereby of the relative quality of statistical models for a given set of data.[1][2][3] Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection. AIC is founded on information theory. When a statistical model is used to represent the process that generated the data, the representation will almost never be exact, so some information will be lost by using the model to represent the process. AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher the quality of that model. In estimating the amount of information lost by a model, AIC deals with the trade-off between the goodness of fit of the model and the simplicity of the model. In other words, AIC deals with both the risk of overfitting and the risk of underfitting. The Akaike information criterion is named after the Japanese statistician Hirotugu Akaike, who formulated it. It now forms the basis of a paradigm for the foundations of statistics and is also widely used for statistical inference. Suppose that we have a statistical model of some data. Let k be the number of estimated parameters in the model. Let L̂ be the maximized value of the likelihood function for the model. Then the AIC value of the model is the following:[4][5]

AIC = 2k − 2 ln(L̂).

Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value. Thus, AIC rewards goodness of fit (as assessed by the likelihood function), but it also includes a penalty that is an increasing function of the number of estimated parameters. The penalty discourages overfitting, which is desired because increasing the number of parameters in the model almost always improves the goodness of the fit. Suppose that the data is generated by some unknown process f. We consider two candidate models to represent f: g1 and g2.
If we knew f, then we could find the information lost from using g1 to represent f by calculating the Kullback–Leibler divergence, DKL(f ‖ g1); similarly, the information lost from using g2 to represent f could be found by calculating DKL(f ‖ g2). We would then, generally, choose the candidate model that minimized the information loss. We cannot choose with certainty, because we do not know f. Akaike (1974) showed, however, that we can estimate, via AIC, how much more (or less) information is lost by g1 than by g2. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc, below). Note that AIC tells us nothing about the absolute quality of a model, only its quality relative to other models. Thus, if all the candidate models fit poorly, AIC will not give any warning of that. Hence, after selecting a model via AIC, it is usually good practice to validate the absolute quality of the model. Such validation commonly includes checks of the model's residuals (to determine whether the residuals seem random) and tests of the model's predictions. For more on this topic, see statistical model validation. To apply AIC in practice, we start with a set of candidate models and then find the models' corresponding AIC values. There will almost always be information lost by using a candidate model to represent the "true model", i.e. the process that generated the data. We wish to select, from among the candidate models, the model that minimizes the information loss. We cannot choose with certainty, but we can minimize the estimated information loss. Suppose that there are R candidate models. Denote the AIC values of those models by AIC1, AIC2, AIC3, ..., AICR. Let AICmin be the minimum of those values.
Then the quantity exp((AICmin− AICi)/2) can be interpreted as being proportional to the probability that theith model minimizes the (estimated) information loss.[6] As an example, suppose that there are three candidate models, whose AIC values are 100, 102, and 110. Then the second model isexp((100 − 102)/2) = 0.368times as probable as the first model to minimize the information loss. Similarly, the third model isexp((100 − 110)/2) = 0.007times as probable as the first model to minimize the information loss. In this example, we would omit the third model from further consideration. We then have three options: (1) gather more data, in the hope that this will allow clearly distinguishing between the first two models; (2) simply conclude that the data is insufficient to support selecting one model from among the first two; (3) take a weighted average of the first two models, with weights proportional to 1 and 0.368, respectively, and then dostatistical inferencebased on the weightedmultimodel.[7] The quantityexp((AICmin− AICi)/2)is known as therelative likelihoodof modeli. It is closely related to the likelihood ratio used in thelikelihood-ratio test. Indeed, if all the models in the candidate set have the same number of parameters, then using AIC might at first appear to be very similar to using the likelihood-ratio test. There are, however, important distinctions. In particular, the likelihood-ratio test is valid only fornested models, whereas AIC (and AICc) has no such restriction.[8][9] Everystatistical hypothesis testcan be formulated as a comparison of statistical models. Hence, every statistical hypothesis test can be replicated via AIC. Two examples are briefly described in the subsections below. Details for those examples, and many more examples, are given bySakamoto, Ishiguro & Kitagawa (1986, Part II) andKonishi & Kitagawa (2008, ch. 4). As an example of a hypothesis test, consider thet-testto compare the means of twonormally-distributedpopulations. 
The input to thet-test comprises a random sample from each of the two populations. To formulate the test as a comparison of models, we construct two different models. The first model models the two populations as having potentially different means and standard deviations. The likelihood function for the first model is thus the product of the likelihoods for two distinct normal distributions; so it has four parameters:μ1,σ1,μ2,σ2. To be explicit, thelikelihood functionis as follows (denoting the sample sizes byn1andn2). The second model models the two populations as having the same means and the same standard deviations. The likelihood function for the second model thus setsμ1=μ2andσ1=σ2in the above equation; so it only has two parameters. We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different means. Thet-test assumes that the two populations have identical standard deviations; the test tends to be unreliable if the assumption is false and the sizes of the two samples are very different (Welch'st-testwould be better). Comparing the means of the populations via AIC, as in the example above, has the same disadvantage. However, one could create a third model that allows different standard deviations. This third model would have the advantage of not making such assumptions at the cost of an additional parameter and thus degree of freedom. For another example of a hypothesis test, suppose that we have two populations, and each member of each population is in one of twocategories—category #1 or category #2. Each population isbinomially distributed. 
We want to know whether the distributions of the two populations are the same. We are given a random sample from each of the two populations. Letmbe the size of the sample from the first population. Letm1be the number of observations (in the sample) in category #1; so the number of observations in category #2 ism−m1. Similarly, letnbe the size of the sample from the second population. Letn1be the number of observations (in the sample) in category #1. Letpbe the probability that a randomly-chosen member of the first population is in category #1. Hence, the probability that a randomly-chosen member of the first population is in category #2 is1 −p. Note that the distribution of the first population has one parameter. Letqbe the probability that a randomly-chosen member of the second population is in category #1. Note that the distribution of the second population also has one parameter. To compare the distributions of the two populations, we construct two different models. The first model models the two populations as having potentially different distributions. The likelihood function for the first model is thus the product of the likelihoods for two distinct binomial distributions; so it has two parameters:p,q. To be explicit, the likelihood function is as follows. The second model models the two populations as having the same distribution. The likelihood function for the second model thus setsp=qin the above equation; so the second model has one parameter. We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different distributions. 
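The relative-likelihood comparison used in both examples above can be sketched directly from the formula exp((AICmin − AICi)/2):

```python
import math

# How probable each model is, relative to the best candidate, to minimize
# the estimated information loss.

def relative_likelihoods(aic_values):
    best = min(aic_values)
    return [math.exp((best - a) / 2) for a in aic_values]

rel = relative_likelihoods([100, 102, 110])  # approx [1.0, 0.368, 0.007]
```

With the AIC values from the earlier three-model example, this reproduces the quoted factors 0.368 and 0.007.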
Statistical inference is generally regarded as comprising hypothesis testing and estimation. Hypothesis testing can be done via AIC, as discussed above. Regarding estimation, there are two types: point estimation and interval estimation. Point estimation can be done within the AIC paradigm: it is provided by maximum likelihood estimation. Interval estimation can also be done within the AIC paradigm: it is provided by likelihood intervals. Hence, statistical inference generally can be done within the AIC paradigm. The most commonly used paradigms for statistical inference are frequentist inference and Bayesian inference. AIC, though, can be used to do statistical inference without relying on either the frequentist paradigm or the Bayesian paradigm, because AIC can be interpreted without the aid of significance levels or Bayesian priors.[10] In other words, AIC can be used to form a foundation of statistics that is distinct from both frequentism and Bayesianism.[11][12] When the sample size is small, there is a substantial probability that AIC will select models that have too many parameters, i.e. that AIC will overfit.[13][14][15] To address such potential overfitting, AICc was developed: AICc is AIC with a correction for small sample sizes. The formula for AICc depends upon the statistical model. Assuming that the model is univariate, is linear in its parameters, and has normally-distributed residuals (conditional upon regressors), the formula for AICc is as follows:[16][17][18][19]

AICc = AIC + (2k² + 2k) / (n − k − 1),

where n denotes the sample size and k denotes the number of parameters. Thus, AICc is essentially AIC with an extra penalty term for the number of parameters. Note that as n → ∞, the extra penalty term converges to 0, and thus AICc converges to AIC.[20] If the assumption that the model is univariate and linear with normal residuals does not hold, then the formula for AICc will generally be different from the formula above. For some models, the formula can be difficult to determine.
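Both criteria are one-liners in code. The AICc shown here assumes the univariate linear model with normally-distributed residuals, for which the standard small-sample correction is 2k(k + 1)/(n − k − 1):

```python
import math

# AIC = 2k - 2 ln(L-hat); AICc adds a correction that vanishes as n grows.

def aic(k, log_likelihood):
    return 2 * k - 2 * log_likelihood

def aicc(k, log_likelihood, n):
    return aic(k, log_likelihood) + (2 * k * (k + 1)) / (n - k - 1)
```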
For every model that has AICc available, though, the formula for AICc is given by AIC plus terms that include both k and k². In comparison, the formula for AIC includes k but not k². In other words, AIC is a first-order estimate (of the information loss), whereas AICc is a second-order estimate.[21] Further discussion of the formula, with examples of other assumptions, is given by Burnham & Anderson (2002, ch. 7) and by Konishi & Kitagawa (2008, ch. 7–8). In particular, with other assumptions, bootstrap estimation of the formula is often feasible. To summarize, AICc has the advantage of tending to be more accurate than AIC (especially for small samples), but it also has the disadvantage of sometimes being much more difficult to compute than AIC. Note that if all the candidate models have the same k and the same formula for AICc, then AICc and AIC will give identical (relative) valuations; hence, there will be no disadvantage in using AIC instead of AICc. Furthermore, if n is many times larger than k², then the extra penalty term will be negligible; hence, the disadvantage in using AIC instead of AICc will be negligible. The Akaike information criterion was formulated by the statistician Hirotugu Akaike. It was originally named "an information criterion".[22] It was first announced in English by Akaike at a 1971 symposium; the proceedings of the symposium were published in 1973.[22][23] The 1973 publication, though, was only an informal presentation of the concepts.[24] The first formal publication was a 1974 paper by Akaike.[5] The initial derivation of AIC relied upon some strong assumptions. Takeuchi (1976) showed that the assumptions could be made much weaker. Takeuchi's work, however, was in Japanese and was not widely known outside Japan for many years. (Translated in [25]) AICc was originally proposed for linear regression (only) by Sugiura (1978).
That instigated the work ofHurvich & Tsai (1989), and several further papers by the same authors, which extended the situations in which AICc could be applied. The first general exposition of the information-theoretic approach was the volume byBurnham & Anderson (2002). It includes an English presentation of the work of Takeuchi. The volume led to far greater use of AIC, and it now has more than 64,000 citations onGoogle Scholar. Akaike called his approach an "entropy maximization principle", because the approach is founded on the concept ofentropy in information theory. Indeed, minimizing AIC in a statistical model is effectively equivalent to maximizing entropy in a thermodynamic system; in other words, the information-theoretic approach in statistics is essentially applying thesecond law of thermodynamics. As such, AIC has roots in the work ofLudwig Boltzmannonentropy. For more on these issues, seeAkaike (1985)andBurnham & Anderson (2002, ch. 2). Astatistical modelmust account forrandom errors. A straight line model might be formally described asyi=b0+b1xi+εi. Here, theεiare theresidualsfrom the straight line fit. If theεiare assumed to bei.i.d.Gaussian(with zero mean), then the model has three parameters:b0,b1, and the variance of the Gaussian distributions. Thus, when calculating the AIC value of this model, we should usek=3. More generally, for anyleast squaresmodel with i.i.d. Gaussian residuals, the variance of the residuals' distributions should be counted as one of the parameters.[26] As another example, consider a first-orderautoregressive model, defined byxi=c+φxi−1+εi, with theεibeing i.i.d. Gaussian (with zero mean). For this model, there are three parameters:c,φ, and the variance of theεi. More generally, apth-order autoregressive model hasp+ 2parameters. (If, however,cis not estimated from the data, but instead given in advance, then there are onlyp+ 1parameters.) The AIC values of the candidate models must all be computed with the same data set. 
Sometimes, though, we might want to compare a model of the response variable, y, with a model of the logarithm of the response variable, log(y). More generally, we might want to compare a model of the data with a model of transformed data. Following is an illustration of how to deal with data transforms (adapted from Burnham & Anderson (2002, §2.11.3): "Investigators should be sure that all hypotheses are modeled using the same response variable"). Suppose that we want to compare two models: one with a normal distribution of y and one with a normal distribution of log(y). We should not directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of y. To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function: {\displaystyle {\frac {1}{y}}\cdot {\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(\ln y-\mu )^{2}}{2\sigma ^{2}}}\right),} which is the probability density function for the log-normal distribution. We then compare the AIC value of the normal model against the AIC value of the log-normal model. For misspecified models, Takeuchi's Information Criterion (TIC) might be more appropriate. However, TIC often suffers from instability caused by estimation errors.[27] The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes.[28] Their fundamental differences have been well-studied in regression variable selection and autoregression order selection[29] problems. In general, if the goal is prediction, AIC and leave-one-out cross-validation are preferred. If the goal is selection, inference, or interpretation, BIC or leave-many-out cross-validation is preferred. A comprehensive overview of AIC and other popular model selection methods is given by Ding et al.
(2018).[30] The formula for the Bayesian information criterion (BIC) is similar to the formula for AIC, but with a different penalty for the number of parameters. With AIC the penalty is 2k, whereas with BIC the penalty is ln(n)k. A comparison of AIC/AICc and BIC is given by Burnham & Anderson (2002, §6.3–6.4), with follow-up remarks by Burnham & Anderson (2004). The authors show that AIC/AICc can be derived in the same Bayesian framework as BIC, just by using different prior probabilities. In the Bayesian derivation of BIC, though, each candidate model has a prior probability of 1/R (where R is the number of candidate models). Additionally, the authors present a few simulation studies that suggest AICc tends to have practical/performance advantages over BIC. A point made by several researchers is that AIC and BIC are appropriate for different tasks. In particular, BIC is argued to be appropriate for selecting the "true model" (i.e. the process that generated the data) from the set of candidate models, whereas AIC is not appropriate. To be specific, if the "true model" is in the set of candidates, then BIC will select the "true model" with probability 1, as n → ∞; in contrast, when selection is done via AIC, the probability can be less than 1.[31][32][33] Proponents of AIC argue that this issue is negligible, because the "true model" is virtually never in the candidate set. Indeed, it is a common aphorism in statistics that "all models are wrong"; hence the "true model" (i.e. reality) cannot be in the candidate set. Another comparison of AIC and BIC is given by Vrieze (2012). Vrieze presents a simulation study that allows the "true model" to be in the candidate set (unlike with virtually all real data). The simulation study demonstrates, in particular, that AIC sometimes selects a much better model than BIC even when the "true model" is in the candidate set. The reason is that, for finite n, BIC can have a substantial risk of selecting a very bad model from the candidate set.
This risk can arise even when n is much larger than k². With AIC, the risk of selecting a very bad model is minimized. If the "true model" is not in the candidate set, then the most that we can hope to do is select the model that best approximates the "true model". AIC is appropriate for finding the best approximating model, under certain assumptions.[31][32][33] (Those assumptions include, in particular, that the approximating is done with regard to information loss.) A comparison of AIC and BIC in the context of regression is given by Yang (2005). In regression, AIC is asymptotically optimal for selecting the model with the least mean squared error, under the assumption that the "true model" is not in the candidate set. BIC is not asymptotically optimal under that assumption. Yang additionally shows that the rate at which AIC converges to the optimum is, in a certain sense, the best possible. Sometimes, each candidate model assumes that the residuals are distributed according to independent, identical normal distributions (with zero mean). That gives rise to least squares model fitting. With least squares fitting, the maximum likelihood estimate for the variance of a model's residuals distributions is {\displaystyle {\widehat {\sigma }}^{2}=\mathrm {RSS} /n}, where the residual sum of squares is {\displaystyle \mathrm {RSS} =\sum _{i=1}^{n}(y_{i}-{\widehat {y}}_{i})^{2}.} Then, the maximum value of a model's log-likelihood function is (see Normal distribution#Log-likelihood): {\displaystyle -{\frac {n}{2}}\ln(2\pi )-{\frac {n}{2}}\ln({\widehat {\sigma }}^{2})-{\frac {n}{2}}=C-{\frac {n}{2}}\ln({\widehat {\sigma }}^{2}),} where C is a constant independent of the model, and dependent only on the particular data points, i.e. it does not change if the data does not change. That gives:[34] {\displaystyle \mathrm {AIC} =2k+n\ln({\widehat {\sigma }}^{2})-2C.} Because only differences in AIC are meaningful, the constant C can be ignored, which allows us to conveniently take the following for model comparisons: {\displaystyle \mathrm {AIC} =2k+n\ln(\mathrm {RSS} /n).} Note that if all the models have the same k, then selecting the model with minimum AIC is equivalent to selecting the model with minimum RSS, which is the usual objective of model selection based on least squares.
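In code, the least-squares case reduces to a one-liner. With the shared constant C dropped, models with equal k are ranked by RSS alone (the function name below is ours):

```python
import numpy as np

def aic_least_squares(rss, n, k):
    # AIC for i.i.d. Gaussian residuals, dropping the constant C that is
    # shared by all candidate models: AIC = 2k + n * ln(RSS / n)
    return 2 * k + n * np.log(rss / n)
```

For fixed k and n, the value is monotone in RSS, so minimizing AIC and minimizing RSS coincide; adding a parameter costs exactly 2 on this scale.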
Leave-one-outcross-validationis asymptotically equivalent to AIC, for ordinary linear regression models.[35]Asymptotic equivalence to AIC also holds formixed-effects models.[36] Mallows'sCpis equivalent to AIC in the case of (Gaussian)linear regression.[37]
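Returning to the data-transform example above (comparing a normal model of y with a normal model of log(y)): the 1/y Jacobian enters the log-likelihood as a subtracted Σ ln y term, so both models are scored against the same response variable y. A sketch on synthetic data (the distribution parameters, seed, and helper name are illustrative assumptions):

```python
import numpy as np

def gauss_max_loglik(resid):
    # maximized Gaussian log-likelihood, with the variance MLE plugged in
    n = resid.size
    s2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # skewed synthetic data
k = 2  # mean and variance in each candidate model

ll_normal = gauss_max_loglik(y - y.mean())
# Model of log(y): subtract sum(log y) for the 1/y Jacobian, turning the
# normal density of log(y) into the log-normal density of y itself.
ll_lognormal = gauss_max_loglik(np.log(y) - np.log(y).mean()) - np.log(y).sum()

aic_normal = 2 * k - 2 * ll_normal
aic_lognormal = 2 * k - 2 * ll_lognormal   # smaller is better
```

Without the Jacobian term the two AIC values would refer to different response variables and their comparison would be meaningless.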
https://en.wikipedia.org/wiki/Akaike_information_criterion
In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC). When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7.[1] The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[2] as a large-sample approximation to the Bayes factor. The BIC is formally defined as[3][a] {\displaystyle \mathrm {BIC} =k\ln(n)-2\ln({\widehat {L}}),} where {\displaystyle {\widehat {L}}} is the maximized value of the likelihood function of the model, n is the number of data points (the sample size), and k is the number of parameters estimated by the model. The BIC can be derived by integrating out the parameters of the model using Laplace's method, starting with the following model evidence:[5][6]: 217 {\displaystyle p(x\mid M)=\int p(x\mid \theta ,M)\,\pi (\theta \mid M)\,d\theta ,} where {\displaystyle \pi (\theta \mid M)} is the prior for {\displaystyle \theta } under model {\displaystyle M}. The log-likelihood, {\displaystyle \ln(p(x\mid \theta ,M))}, is then expanded to a second order Taylor series about the MLE, {\displaystyle {\widehat {\theta }}}, assuming it is twice differentiable, as follows: {\displaystyle \ln(p(x\mid \theta ,M))=\ln({\widehat {L}})-{\frac {n}{2}}(\theta -{\widehat {\theta }})^{\mathsf {T}}{\mathcal {I}}({\widehat {\theta }})(\theta -{\widehat {\theta }})+R(x,\theta ),} where {\displaystyle {\mathcal {I}}(\theta )} is the average observed information per observation, and {\displaystyle R(x,\theta )} denotes the residual term. To the extent that {\displaystyle R(x,\theta )} is negligible and {\displaystyle \pi (\theta \mid M)} is relatively linear near {\displaystyle {\widehat {\theta }}}, we can integrate out {\displaystyle \theta } to get the following: {\displaystyle p(x\mid M)\approx {\widehat {L}}\left({\frac {2\pi }{n}}\right)^{k/2}|{\mathcal {I}}({\widehat {\theta }})|^{-1/2}\,\pi ({\widehat {\theta }}).} As n increases, we can ignore {\displaystyle |{\mathcal {I}}({\widehat {\theta }})|} and {\displaystyle \pi ({\widehat {\theta }})} as they are {\displaystyle O(1)}.
Thus, {\displaystyle \ln(p(x\mid M))\approx \ln({\widehat {L}})-{\frac {k}{2}}\ln(n)=-\mathrm {BIC} /2,} where BIC is defined as above, and {\displaystyle {\widehat {L}}} either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior {\displaystyle \pi (\theta \mid M)} has nonzero slope at the MLE. Then the posterior probability of a model is approximately {\displaystyle p(M\mid x)\propto e^{-\mathrm {BIC} /2}.} When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance {\displaystyle \sigma _{e}^{2}} and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. However, a lower BIC does not necessarily indicate one model is better than another. Because it involves approximations, the BIC is merely a heuristic. In particular, differences in BIC should never be treated like transformed Bayes factors. It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable[b] are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.[citation needed] The BIC suffers from two main limitations:[7] the approximation above is valid only for sample size n much larger than the number k of parameters, and the BIC cannot handle complex collections of models, as arise in the variable-selection problem in high dimensions. Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution and the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (up to an additive constant, which depends only on n and not on the model):[8] {\displaystyle \mathrm {BIC} =n\ln({\widehat {\sigma _{e}^{2}}})+k\ln(n),} where {\displaystyle {\widehat {\sigma _{e}^{2}}}} is the error variance. The error variance in this case is defined as {\displaystyle {\widehat {\sigma _{e}^{2}}}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\widehat {x_{i}}})^{2},} which is a biased estimator for the true variance. In terms of the residual sum of squares (RSS) the BIC is {\displaystyle \mathrm {BIC} =n\ln(\mathrm {RSS} /n)+k\ln(n).} When testing multiple linear models against a saturated model, the BIC can be rewritten in terms of the deviance {\displaystyle \chi ^{2}} as:[9] {\displaystyle \mathrm {BIC} =\chi ^{2}+k\ln(n),} where {\displaystyle k} is the number of model parameters in the test.
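The definitions above can be collected into two small helpers (the names are ours). The second is the Gaussian special case in terms of RSS; the penalty ln(n)·k exceeds AIC's 2k once ln(n) > 2, i.e. from n = 8 onward:

```python
import math

def bic(max_log_likelihood, k, n):
    # BIC = k ln(n) - 2 ln(L-hat)
    return k * math.log(n) - 2 * max_log_likelihood

def bic_gaussian(rss, n, k):
    # Gaussian special case, up to an additive constant depending only on n:
    # BIC = n ln(RSS / n) + k ln(n)
    return n * math.log(rss / n) + k * math.log(n)
```

As with AIC, only differences in BIC between candidate models fitted to the same data are meaningful.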
https://en.wikipedia.org/wiki/Bayesian_information_criterion
In mathematics, specifically statistics and information geometry, a Bregman divergence or Bregman distance is a measure of difference between two points, defined in terms of a strictly convex function; they form an important class of divergences. When the points are interpreted as probability distributions – notably as either values of the parameter of a parametric model or as a data set of observed values – the resulting distance is a statistical distance. The most basic Bregman divergence is the squared Euclidean distance. Bregman divergences are similar to metrics, but satisfy neither the triangle inequality (ever) nor symmetry (in general). However, they satisfy a generalization of the Pythagorean theorem, and in information geometry the corresponding statistical manifold is interpreted as a (dually) flat manifold. This allows many techniques of optimization theory to be generalized to Bregman divergences, geometrically as generalizations of least squares. Bregman divergences are named after Russian mathematician Lev M. Bregman, who introduced the concept in 1967. Let {\displaystyle F\colon \Omega \to \mathbb {R} } be a continuously-differentiable, strictly convex function defined on a convex set {\displaystyle \Omega }. The Bregman distance associated with F for points {\displaystyle p,q\in \Omega } is the difference between the value of F at point p and the value of the first-order Taylor expansion of F around point q evaluated at point p: {\displaystyle D_{F}(p,q)=F(p)-F(q)-\langle \nabla F(q),p-q\rangle .} A law of cosines holds: for any {\displaystyle p,q,z}, {\displaystyle D_{F}(p,q)=D_{F}(p,z)+D_{F}(z,q)-\langle \nabla F(q)-\nabla F(z),p-z\rangle .} A parallelogram-type identity also holds: {\displaystyle B_{F}\left(\theta _{1}:\theta \right)+B_{F}\left(\theta _{2}:\theta \right)=B_{F}\left(\theta _{1}:{\frac {\theta _{1}+\theta _{2}}{2}}\right)+B_{F}\left(\theta _{2}:{\frac {\theta _{1}+\theta _{2}}{2}}\right)+2B_{F}\left({\frac {\theta _{1}+\theta _{2}}{2}}:\theta \right).} The Bregman projection of q onto a convex set {\displaystyle W\subseteq \Omega } is defined by {\displaystyle P_{W}(q)={\text{argmin}}_{\omega \in W}D_{F}(\omega ,q)}.
Then For anyv∈Ω,a∈W{\displaystyle v\in \Omega ,a\in W}, DF(a,v)≥DF(a,PW(v))+DF(PW(v),v).{\displaystyle D_{F}(a,v)\geq D_{F}(a,P_{W}(v))+D_{F}(P_{W}(v),v).} This is an equality ifPW(v){\displaystyle P_{W}(v)}is in therelative interiorofW{\displaystyle W}. In particular, this always happens whenW{\displaystyle W}is an affine set. Fixx∈X{\displaystyle x\in X}. Take affine transform onf{\displaystyle f}, so that∇f(x)=0{\displaystyle \nabla f(x)=0}. Take someϵ>0{\displaystyle \epsilon >0}, such that∂B(x,ϵ)⊂X{\displaystyle \partial B(x,\epsilon )\subset X}. Then consider the "radial-directional" derivative off{\displaystyle f}on the Euclidean sphere∂B(x,ϵ){\displaystyle \partial B(x,\epsilon )}. ⟨∇f(y),(y−x)⟩{\displaystyle \langle \nabla f(y),(y-x)\rangle }for ally∈∂B(x,ϵ){\displaystyle y\in \partial B(x,\epsilon )}. Since∂B(x,ϵ)⊂Rn{\displaystyle \partial B(x,\epsilon )\subset \mathbb {R} ^{n}}is compact, it achieves minimal valueδ{\displaystyle \delta }at somey0∈∂B(x,ϵ){\displaystyle y_{0}\in \partial B(x,\epsilon )}. Sincef{\displaystyle f}is strictly convex,δ>0{\displaystyle \delta >0}. ThenBf(x,r)⊂B(x,r/δ)∩X{\displaystyle B_{f}(x,r)\subset B(x,r/\delta )\cap X}. SinceDf(y,x){\displaystyle D_{f}(y,x)}isC1{\displaystyle C^{1}}iny{\displaystyle y},Df{\displaystyle D_{f}}is continuous iny{\displaystyle y}, thusBf(x,r){\displaystyle B_{f}(x,r)}is closed ifX{\displaystyle X}is. Fixv∈X{\displaystyle v\in X}. Take somew∈W{\displaystyle w\in W}, then letr:=Df(w,v){\displaystyle r:=D_{f}(w,v)}. Then draw the Bregman ballBf(v,r)∩W{\displaystyle B_{f}(v,r)\cap W}. It is closed and bounded, thus compact. SinceDf(⋅,v){\displaystyle D_{f}(\cdot ,v)}is continuous and strictly convex on it, and bounded below by0{\displaystyle 0}, it achieves a unique minimum on it. 
By cosine law,Df(w,v)−Df(w,PW(v))−Df(PW(v),v)=⟨∇yDf(y,v)|y=PW(v),w−PW(v)⟩{\displaystyle D_{f}(w,v)-D_{f}(w,P_{W}(v))-D_{f}(P_{W}(v),v)=\langle \nabla _{y}D_{f}(y,v)|_{y=P_{W}(v)},w-P_{W}(v)\rangle }, which must be≥0{\displaystyle \geq 0}, sincePW(v){\displaystyle P_{W}(v)}minimizesDf(⋅,v){\displaystyle D_{f}(\cdot ,v)}inW{\displaystyle W}, andW{\displaystyle W}is convex. If⟨∇yDf(y,v)|y=PW(v),w−PW(v)⟩>0{\displaystyle \langle \nabla _{y}D_{f}(y,v)|_{y=P_{W}(v)},w-P_{W}(v)\rangle >0}, then sincew{\displaystyle w}is in the relative interior, we can move fromPW(v){\displaystyle P_{W}(v)}in the direction opposite ofw{\displaystyle w}, to decreaseDf(y,v){\displaystyle D_{f}(y,v)}, contradiction. Thus⟨∇yDf(y,v)|y=PW(v),w−PW(v)⟩=0{\displaystyle \langle \nabla _{y}D_{f}(y,v)|_{y=P_{W}(v)},w-P_{W}(v)\rangle =0}. For anyx≠y∈X{\displaystyle x\neq y\in X}, definer=‖y−x‖,v=(y−x)/r,g(t)=f(x+tv){\displaystyle r=\|y-x\|,v=(y-x)/r,g(t)=f(x+tv)}fort∈[0,r]{\displaystyle t\in [0,r]}. Letz(t)=x+tv{\displaystyle z(t)=x+tv}. Theng′(t)=⟨∇f(z(t)),v⟩{\displaystyle g'(t)=\langle \nabla f(z(t)),v\rangle }fort∈(0,r){\displaystyle t\in (0,r)}, and since∇f{\displaystyle \nabla f}is continuous, also fort=0,r{\displaystyle t=0,r}. Then, from the diagram, we see that forDf(x;z(t))=Df(z(t);x){\displaystyle D_{f}(x;z(t))=D_{f}(z(t);x)}for allt∈[0,r]{\displaystyle t\in [0,r]}, we must haveg′(t){\displaystyle g'(t)}linear ont∈[0,r]{\displaystyle t\in [0,r]}. Thus we find that∇f{\displaystyle \nabla f}varies linearly along any direction. By the next lemma,f{\displaystyle f}is quadratic. Sincef{\displaystyle f}is also strictly convex, it is of formf(x)+xTAx+BTx+C{\displaystyle f(x)+x^{T}Ax+B^{T}x+C}, whereA≻0{\displaystyle A\succ 0}. 
Lemma: IfS{\displaystyle S}is an open subset ofRn{\displaystyle \mathbb {R} ^{n}},f:S→R{\displaystyle f:S\to \mathbb {R} }has continuous derivative, and given any line segment[x,x+v]⊂S{\displaystyle [x,x+v]\subset S}, the functionh(t):=⟨∇f(x+tv),v⟩{\displaystyle h(t):=\langle \nabla f(x+tv),v\rangle }is linear int{\displaystyle t}, thenf{\displaystyle f}is a quadratic function. Proof idea: For any quadratic functionq:S→R{\displaystyle q:S\to \mathbb {R} }, we havef−q{\displaystyle f-q}still has such derivative-linearity, so we will subtract away a few quadratic functions and show thatf{\displaystyle f}becomes zero. The proof idea can be illustrated fully for the case ofS=R2{\displaystyle S=\mathbb {R} ^{2}}, so we prove it in this case. By the derivative-linearity,f{\displaystyle f}is a quadratic function on any line segment inR2{\displaystyle \mathbb {R} ^{2}}. We subtract away four quadratic functions, such thatg:=f−q0−q1−q2−q3{\displaystyle g:=f-q_{0}-q_{1}-q_{2}-q_{3}}becomes identically zero on the x-axis, y-axis, and the{x=y}{\displaystyle \{x=y\}}line. Letq0(x,y)=f(0,0)+∇f(0,0)⋅(x,y),q1(x,y)=A1x2,q2(x,y)=A2y2,q3(x,y)=A3xy{\displaystyle q_{0}(x,y)=f(0,0)+\nabla f(0,0)\cdot (x,y),q_{1}(x,y)=A_{1}x^{2},q_{2}(x,y)=A_{2}y^{2},q_{3}(x,y)=A_{3}xy}, for well-chosenA1,A2,A3{\displaystyle A_{1},A_{2},A_{3}}. Now useq0{\displaystyle q_{0}}to remove the linear term, and useq1,q2,q3{\displaystyle q_{1},q_{2},q_{3}}respectively to remove the quadratic terms along the three lines. ∀(x,y)∈R2{\displaystyle \forall (x,y)\in \mathbb {R} ^{2}}not on the origin, there exists a linel{\displaystyle l}across(x,y){\displaystyle (x,y)}that intersects the x-axis, y-axis, and the{x=y}{\displaystyle \{x=y\}}line at three different points. Sinceg{\displaystyle g}is quadratic onl{\displaystyle l}, and is zero on three different points,g{\displaystyle g}is identically zero onl{\displaystyle l}, thusg(x,y)=0{\displaystyle g(x,y)=0}. 
Thus {\displaystyle f=q_{0}+q_{1}+q_{2}+q_{3}} is quadratic. The following characterization concerns divergences on {\displaystyle \Gamma _{n}}, the set of all probability measures on {\displaystyle \{1,2,...,n\}}, with {\displaystyle n\geq 2}. Define a divergence on {\displaystyle \Gamma _{n}} as any function of type {\displaystyle D:\Gamma _{n}\times \Gamma _{n}\to [0,\infty ]} such that {\displaystyle D(x,x)=0} for all {\displaystyle x\in \Gamma _{n}}. Then the only such divergence that is both a Bregman divergence and an f-divergence is the Kullback–Leibler divergence. Given a Bregman divergence {\displaystyle D_{F}}, its "opposite", defined by {\displaystyle D_{F}^{*}(v,w)=D_{F}(w,v)}, is generally not a Bregman divergence. For example, the Kullback–Leibler divergence is both a Bregman divergence and an f-divergence. Its reverse is also an f-divergence, but by the above characterization, the reverse KL divergence cannot be a Bregman divergence. A key tool in computational geometry is the idea of projective duality, which maps points to hyperplanes and vice versa, while preserving incidence and above-below relationships. There are numerous analytical forms of the projective dual: one common form maps the point {\displaystyle p=(p_{1},\ldots ,p_{d})} to the hyperplane {\displaystyle x_{d+1}=\sum _{1}^{d}2p_{i}x_{i}}. This mapping can be interpreted (identifying the hyperplane with its normal) as the convex conjugate mapping that takes the point p to its dual point {\displaystyle p^{*}=\nabla F(p)}, where F defines the d-dimensional paraboloid {\displaystyle x_{d+1}=\sum x_{i}^{2}}. If we now replace the paraboloid by an arbitrary convex function, we obtain a different dual mapping that retains the incidence and above-below properties of the standard projective dual. This implies that natural dual concepts in computational geometry like Voronoi diagrams and Delaunay triangulations retain their meaning in distance spaces defined by an arbitrary Bregman divergence.
Thus, algorithms from "normal" geometry extend directly to these spaces (Boissonnat, Nielsen and Nock, 2010). Bregman divergences can be interpreted as limit cases of skewed Jensen divergences (see Nielsen and Boltz, 2011). Jensen divergences can be generalized using comparative convexity, and limit cases of these generalized skewed Jensen divergences yield generalized Bregman divergences (see Nielsen and Nock, 2017). The Bregman chord divergence[7] is obtained by taking a chord instead of a tangent line. Bregman divergences can also be defined between matrices, between functions, and between measures (distributions). Bregman divergences between matrices include Stein's loss and the von Neumann entropy. Bregman divergences between functions include total squared error, relative entropy, and squared bias; see the references by Frigyik et al. below for definitions and properties. Similarly, Bregman divergences have also been defined over sets, through a submodular set function, which is known as the discrete analog of a convex function. The submodular Bregman divergences subsume a number of discrete distance measures, like the Hamming distance, precision and recall, mutual information, and some other set-based distance measures (see Iyer & Bilmes, 2012, for more details and properties of the submodular Bregman). For a list of common matrix Bregman divergences, see Table 15.1 in [8]. In machine learning, Bregman divergences are used to calculate the bi-tempered logistic loss, which performs better than the softmax function with noisy datasets.[9] Bregman divergence is used in the formulation of mirror descent, which includes optimization algorithms used in machine learning such as gradient descent and the hedge algorithm.
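The defining formula D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩ can be checked numerically. With F the squared Euclidean norm it recovers the squared Euclidean distance, and with F the negative entropy (on the probability simplex) it recovers the Kullback–Leibler divergence (a sketch; the helper name is ours):

```python
import numpy as np

def bregman(F, gradF, p, q):
    # D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>
    return F(p) - F(q) - np.dot(gradF(q), p - q)

p = np.array([0.2, 0.8])
q = np.array([0.5, 0.5])

# F = squared Euclidean norm  ->  squared Euclidean distance
d_euclid = bregman(lambda x: np.dot(x, x), lambda x: 2 * x, p, q)

# F = negative entropy  ->  Kullback-Leibler divergence on the simplex
d_kl = bregman(lambda x: np.sum(x * np.log(x)),
               lambda x: np.log(x) + 1, p, q)
```

Swapping the arguments of `bregman` changes `d_kl`, illustrating the asymmetry noted above; `d_euclid`, by contrast, happens to be symmetric even though squared Euclidean distance still violates the triangle inequality.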
https://en.wikipedia.org/wiki/Bregman_divergence
Ininformation theory, thecross-entropybetween twoprobability distributionsp{\displaystyle p}andq{\displaystyle q}, over the same underlying set of events, measures the average number ofbitsneeded to identify an event drawn from the set when the coding scheme used for the set is optimized for an estimated probability distributionq{\displaystyle q}, rather than the true distributionp{\displaystyle p}. The cross-entropy of the distributionq{\displaystyle q}relative to a distributionp{\displaystyle p}over a given set is defined as follows: H(p,q)=−Ep⁡[log⁡q],{\displaystyle H(p,q)=-\operatorname {E} _{p}[\log q],} whereEp⁡[⋅]{\displaystyle \operatorname {E} _{p}[\cdot ]}is theexpected valueoperator with respect to the distributionp{\displaystyle p}. The definition may be formulated using theKullback–Leibler divergenceDKL(p∥q){\displaystyle D_{\mathrm {KL} }(p\parallel q)}, divergence ofp{\displaystyle p}fromq{\displaystyle q}(also known as therelative entropyofp{\displaystyle p}with respect toq{\displaystyle q}). H(p,q)=H(p)+DKL(p∥q),{\displaystyle H(p,q)=H(p)+D_{\mathrm {KL} }(p\parallel q),} whereH(p){\displaystyle H(p)}is theentropyofp{\displaystyle p}. Fordiscreteprobability distributionsp{\displaystyle p}andq{\displaystyle q}with the samesupportX{\displaystyle {\mathcal {X}}}, this means H(p,q)=−∑x∈Xp(x)log⁡q(x).{\displaystyle H(p,q)=-\sum _{x\in {\mathcal {X}}}p(x)\,\log q(x).}(Eq.1) The situation forcontinuousdistributions is analogous. We have to assume thatp{\displaystyle p}andq{\displaystyle q}areabsolutely continuouswith respect to some referencemeasurer{\displaystyle r}(usuallyr{\displaystyle r}is aLebesgue measureon aBorelσ-algebra). LetP{\displaystyle P}andQ{\displaystyle Q}be probability density functions ofp{\displaystyle p}andq{\displaystyle q}with respect tor{\displaystyle r}. 
Then −∫XP(x)log⁡Q(x)dx=Ep⁡[−log⁡Q],{\displaystyle -\int _{\mathcal {X}}P(x)\,\log Q(x)\,\mathrm {d} x=\operatorname {E} _{p}[-\log Q],} and therefore H(p,q)=−∫XP(x)log⁡Q(x)dx.{\displaystyle H(p,q)=-\int _{\mathcal {X}}P(x)\,\log Q(x)\,\mathrm {d} x.}(Eq.2) NB: The notationH(p,q){\displaystyle H(p,q)}is also used for a different concept, thejoint entropyofp{\displaystyle p}andq{\displaystyle q}. Ininformation theory, theKraft–McMillan theoremestablishes that any directly decodable coding scheme for coding a message to identify one valuexi{\displaystyle x_{i}}out of a set of possibilities{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}can be seen as representing an implicit probability distributionq(xi)=(12)ℓi{\displaystyle q(x_{i})=\left({\frac {1}{2}}\right)^{\ell _{i}}}over{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}, whereℓi{\displaystyle \ell _{i}}is the length of the code forxi{\displaystyle x_{i}}in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distributionq{\displaystyle q}is assumed while the data actually follows a distributionp{\displaystyle p}. That is why the expectation is taken over the true probability distributionp{\displaystyle p}and notq.{\displaystyle q.}Indeed the expected message-length under the true distributionp{\displaystyle p}is Ep⁡[ℓ]=−Ep⁡[ln⁡q(x)ln⁡(2)]=−Ep⁡[log2⁡q(x)]=−∑xip(xi)log2⁡q(xi)=−∑xp(x)log2⁡q(x)=H(p,q).{\displaystyle {\begin{aligned}\operatorname {E} _{p}[\ell ]&=-\operatorname {E} _{p}\left[{\frac {\ln {q(x)}}{\ln(2)}}\right]\\[1ex]&=-\operatorname {E} _{p}\left[\log _{2}{q(x)}\right]\\[1ex]&=-\sum _{x_{i}}p(x_{i})\,\log _{2}q(x_{i})\\[1ex]&=-\sum _{x}p(x)\,\log _{2}q(x)=H(p,q).\end{aligned}}} There are many situations where cross-entropy needs to be measured but the distribution ofp{\displaystyle p}is unknown. 
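For discrete distributions, Eq. 1 and the decomposition H(p, q) = H(p) + D_KL(p ∥ q) can be verified directly (a sketch in nats rather than bits; the function names are ours, and the distributions must be strictly positive):

```python
import numpy as np

def cross_entropy(p, q):
    # H(p, q) = -sum_x p(x) ln q(x)   (natural log, i.e. nats)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.sum(p * np.log(q))

def entropy(p):
    # H(p) = H(p, p)
    return cross_entropy(p, p)

def kl(p, q):
    # D_KL(p || q) = H(p, q) - H(p)
    return cross_entropy(p, q) - entropy(p)

p = [0.5, 0.5]
q = [0.9, 0.1]
```

By Gibbs' inequality, cross_entropy(p, q) ≥ entropy(p), with equality exactly when p = q; coding a fair coin with the skewed code q therefore costs extra.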
An example islanguage modeling, where a model is created based on a training setT{\displaystyle T}, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example,p{\displaystyle p}is the true distribution of words in any corpus, andq{\displaystyle q}is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula: H(T,q)=−∑i=1N1Nlog2⁡q(xi){\displaystyle H(T,q)=-\sum _{i=1}^{N}{\frac {1}{N}}\log _{2}q(x_{i})} whereN{\displaystyle N}is the size of the test set, andq(x){\displaystyle q(x)}is the probability of eventx{\displaystyle x}estimated from the training set. In other words,q(xi){\displaystyle q(x_{i})}is the probability estimate of the model that the i-th word of the text isxi{\displaystyle x_{i}}. The sum is averaged over theN{\displaystyle N}words of the test. This is aMonte Carlo estimateof the true cross-entropy, where the test set is treated as samples fromp(x){\displaystyle p(x)}.[citation needed] The cross entropy arises in classification problems when introducing a logarithm in the guise of thelog-likelihoodfunction. The section is concerned with the subject of estimation of the probability of different possible discrete outcomes. To this end, denote a parametrized family of distributions byqθ{\displaystyle q_{\theta }}, withθ{\displaystyle \theta }subject to the optimization effort. Consider a given finite sequence ofN{\displaystyle N}valuesxi{\displaystyle x_{i}}from a training set, obtained fromconditionally independentsampling. The likelihood assigned to any considered parameterθ{\displaystyle \theta }of the model is then given by the product over all probabilitiesqθ(X=xi){\displaystyle q_{\theta }(X=x_{i})}. Repeated occurrences are possible, leading to equal factors in the product. 
If the count of occurrences of the value equal toxi{\displaystyle x_{i}}(for some indexi{\displaystyle i}) is denoted by#xi{\displaystyle \#x_{i}}, then the frequency of that value equals#xi/N{\displaystyle \#x_{i}/N}. Denote the latter byp(X=xi){\displaystyle p(X=x_{i})}, as it may be understood as empirical approximation to the probability distribution underlying the scenario. Further denote byPP:=eH(p,qθ){\displaystyle PP:={\mathrm {e} }^{H(p,q_{\theta })}}theperplexity, which can be seen to equal∏xiqθ(X=xi)−p(X=xi){\textstyle \prod _{x_{i}}q_{\theta }(X=x_{i})^{-p(X=x_{i})}}by thecalculation rules for the logarithm, and where the product is over the values without double counting. SoL(θ;x)=∏iqθ(X=xi)=∏xiqθ(X=xi)#xi=PP−N=e−N⋅H(p,qθ){\displaystyle {\mathcal {L}}(\theta ;{\mathbf {x} })=\prod _{i}q_{\theta }(X=x_{i})=\prod _{x_{i}}q_{\theta }(X=x_{i})^{\#x_{i}}=PP^{-N}={\mathrm {e} }^{-N\cdot H(p,q_{\theta })}}orlog⁡L(θ;x)=−N⋅H(p,qθ).{\displaystyle \log {\mathcal {L}}(\theta ;{\mathbf {x} })=-N\cdot H(p,q_{\theta }).}Since the logarithm is amonotonically increasing function, it does not affect extremization. So observe that thelikelihood maximizationamounts to minimization of the cross-entropy. Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. When comparing a distributionq{\displaystyle q}against a fixed reference distributionp{\displaystyle p}, cross-entropy andKL divergenceare identical up to an additive constant (sincep{\displaystyle p}is fixed): According to theGibbs' inequality, both take on their minimal values whenp=q{\displaystyle p=q}, which is0{\displaystyle 0}for KL divergence, andH(p){\displaystyle \mathrm {H} (p)}for cross-entropy. In the engineering literature, the principle of minimizing KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called thePrinciple of Minimum Cross-Entropy(MCE), orMinxent. 
However, as discussed in the articleKullback–Leibler divergence, sometimes the distributionq{\displaystyle q}is the fixed prior reference distribution, and the distributionp{\displaystyle p}is optimized to be as close toq{\displaystyle q}as possible, subject to some constraint. In this case the two minimizations arenotequivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by restating cross-entropy to beDKL(p∥q){\displaystyle D_{\mathrm {KL} }(p\parallel q)}, rather thanH(p,q){\displaystyle H(p,q)}. In fact, cross-entropy is another name forrelative entropy; see Cover and Thomas[1]and Good.[2]On the other hand,H(p,q){\displaystyle H(p,q)}does not agree with the literature and can be misleading. Cross-entropy can be used to define a loss function inmachine learningandoptimization. Mao, Mohri, and Zhong (2023) give an extensive analysis of the properties of the family of cross-entropy loss functions in machine learning, including theoretical learning guarantees and extensions toadversarial learning.[3]The true probabilitypi{\displaystyle p_{i}}is the true label, and the given distributionqi{\displaystyle q_{i}}is the predicted value of the current model. This is also known as thelog loss(orlogarithmic loss[4]orlogistic loss);[5]the terms "log loss" and "cross-entropy loss" are used interchangeably.[6] More specifically, consider abinary regressionmodel which can be used to classify observations into two possible classes (often simply labelled0{\displaystyle 0}and1{\displaystyle 1}). The output of the model for a given observation, given a vector of input featuresx{\displaystyle x}, can be interpreted as a probability, which serves as the basis for classifying the observation. 
Inlogistic regression, the probability is modeled using thelogistic functiong(z)=1/(1+e−z){\displaystyle g(z)=1/(1+e^{-z})}wherez{\displaystyle z}is some function of the input vectorx{\displaystyle x}, commonly just a linear function. The probability of the outputy=1{\displaystyle y=1}is given byqy=1=y^≡g(w⋅x)=11+e−w⋅x,{\displaystyle q_{y=1}={\hat {y}}\equiv g(\mathbf {w} \cdot \mathbf {x} )={\frac {1}{1+e^{-\mathbf {w} \cdot \mathbf {x} }}},}where the vector of weightsw{\displaystyle \mathbf {w} }is optimized through some appropriate algorithm such asgradient descent. Similarly, the complementary probability of finding the outputy=0{\displaystyle y=0}is simply given byqy=0=1−y^.{\displaystyle q_{y=0}=1-{\hat {y}}.} Having set up our notation,p∈{y,1−y}{\displaystyle p\in \{y,1-y\}}andq∈{y^,1−y^}{\displaystyle q\in \{{\hat {y}},1-{\hat {y}}\}}, we can use cross-entropy to get a measure of dissimilarity betweenp{\displaystyle p}andq{\displaystyle q}:H(p,q)=−∑ipilog⁡qi=−ylog⁡y^−(1−y)log⁡(1−y^).{\displaystyle {\begin{aligned}H(p,q)&=-\sum _{i}p_{i}\log q_{i}\\[1ex]&=-y\log {\hat {y}}-(1-y)\log(1-{\hat {y}}).\end{aligned}}} Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. Other loss functions that penalize errors differently can be also used for training, resulting in models with different final test accuracy.[7]For example, suppose we haveN{\displaystyle N}samples with each sample indexed byn=1,…,N{\displaystyle n=1,\dots ,N}. 
Theaverageof the loss function is then given by: J(w)=1N∑n=1NH(pn,qn)=−1N∑n=1N[ynlog⁡y^n+(1−yn)log⁡(1−y^n)],{\displaystyle {\begin{aligned}J(\mathbf {w} )&={\frac {1}{N}}\sum _{n=1}^{N}H(p_{n},q_{n})\\&=-{\frac {1}{N}}\sum _{n=1}^{N}\ \left[y_{n}\log {\hat {y}}_{n}+(1-y_{n})\log(1-{\hat {y}}_{n})\right],\end{aligned}}} wherey^n≡g(w⋅xn)=1/(1+e−w⋅xn){\displaystyle {\hat {y}}_{n}\equiv g(\mathbf {w} \cdot \mathbf {x} _{n})=1/(1+e^{-\mathbf {w} \cdot \mathbf {x} _{n}})}, withg(z){\displaystyle g(z)}the logistic function as before. The logistic loss is sometimes called cross-entropy loss. It is also known as log loss.[duplication?](In this case, the binary label is often denoted by {−1,+1}.[8]) Remark:The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared-error loss forlinear regression. That is, define XT=(1x11…x1p1x21⋯x2p⋮⋮⋮1xn1⋯xnp)∈Rn×(p+1),{\displaystyle X^{\mathsf {T}}={\begin{pmatrix}1&x_{11}&\dots &x_{1p}\\1&x_{21}&\cdots &x_{2p}\\\vdots &\vdots &&\vdots \\1&x_{n1}&\cdots &x_{np}\\\end{pmatrix}}\in \mathbb {R} ^{n\times (p+1)},}yi^=f^(xi1,…,xip)=11+exp⁡(−β0−β1xi1−⋯−βpxip),{\displaystyle {\hat {y_{i}}}={\hat {f}}(x_{i1},\dots ,x_{ip})={\frac {1}{1+\exp(-\beta _{0}-\beta _{1}x_{i1}-\dots -\beta _{p}x_{ip})}},}L(β)=−∑i=1N[yilog⁡y^i+(1−yi)log⁡(1−y^i)].{\displaystyle L({\boldsymbol {\beta }})=-\sum _{i=1}^{N}\left[y_{i}\log {\hat {y}}_{i}+(1-y_{i})\log(1-{\hat {y}}_{i})\right].} Then we have the result ∂∂βL(β)=XT(Y^−Y).{\displaystyle {\frac {\partial }{\partial {\boldsymbol {\beta }}}}L({\boldsymbol {\beta }})=X^{T}({\hat {Y}}-Y).} The proof is as follows. 
For anyy^i{\displaystyle {\hat {y}}_{i}}, we have ∂∂β0ln⁡11+e−β0+k0=e−β0+k01+e−β0+k0,{\displaystyle {\frac {\partial }{\partial \beta _{0}}}\ln {\frac {1}{1+e^{-\beta _{0}+k_{0}}}}={\frac {e^{-\beta _{0}+k_{0}}}{1+e^{-\beta _{0}+k_{0}}}},}∂∂β0ln⁡(1−11+e−β0+k0)=−11+e−β0+k0,{\displaystyle {\frac {\partial }{\partial \beta _{0}}}\ln \left(1-{\frac {1}{1+e^{-\beta _{0}+k_{0}}}}\right)={\frac {-1}{1+e^{-\beta _{0}+k_{0}}}},}∂∂β0L(β)=−∑i=1N[yi⋅e−β0+k01+e−β0+k0−(1−yi)11+e−β0+k0]=−∑i=1N[yi−y^i]=∑i=1N(y^i−yi),{\displaystyle {\begin{aligned}{\frac {\partial }{\partial \beta _{0}}}L({\boldsymbol {\beta }})&=-\sum _{i=1}^{N}\left[{\frac {y_{i}\cdot e^{-\beta _{0}+k_{0}}}{1+e^{-\beta _{0}+k_{0}}}}-(1-y_{i}){\frac {1}{1+e^{-\beta _{0}+k_{0}}}}\right]\\&=-\sum _{i=1}^{N}\left[y_{i}-{\hat {y}}_{i}\right]=\sum _{i=1}^{N}({\hat {y}}_{i}-y_{i}),\end{aligned}}}∂∂β1ln⁡11+e−β1xi1+k1=xi1ek1eβ1xi1+ek1,{\displaystyle {\frac {\partial }{\partial \beta _{1}}}\ln {\frac {1}{1+e^{-\beta _{1}x_{i1}+k_{1}}}}={\frac {x_{i1}e^{k_{1}}}{e^{\beta _{1}x_{i1}}+e^{k_{1}}}},}∂∂β1ln⁡[1−11+e−β1xi1+k1]=−xi1eβ1xi1eβ1xi1+ek1,{\displaystyle {\frac {\partial }{\partial \beta _{1}}}\ln \left[1-{\frac {1}{1+e^{-\beta _{1}x_{i1}+k_{1}}}}\right]={\frac {-x_{i1}e^{\beta _{1}x_{i1}}}{e^{\beta _{1}x_{i1}}+e^{k_{1}}}},}∂∂β1L(β)=−∑i=1Nxi1(yi−y^i)=∑i=1Nxi1(y^i−yi).{\displaystyle {\frac {\partial }{\partial \beta _{1}}}L({\boldsymbol {\beta }})=-\sum _{i=1}^{N}x_{i1}(y_{i}-{\hat {y}}_{i})=\sum _{i=1}^{N}x_{i1}({\hat {y}}_{i}-y_{i}).} In a similar way, we eventually obtain the desired result. 
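The gradient identity derived above can be verified against central finite differences. A numpy sketch on synthetic data; here X denotes the n×(p+1) design matrix (rows are observations), so the closed-form gradient is X.T @ (ŷ − y), matching the result up to the text's transposed notation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p))])  # design matrix with intercept
y = rng.integers(0, 2, size=n).astype(float)               # synthetic binary labels
beta = rng.normal(size=p + 1)

def loss(b):
    """Negative log-likelihood L(beta) of logistic regression."""
    y_hat = 1.0 / (1.0 + np.exp(-X @ b))
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Closed-form gradient from the text: X^T (y_hat - y)
y_hat = 1.0 / (1.0 + np.exp(-X @ beta))
grad = X.T @ (y_hat - y)

# Central finite-difference check of each partial derivative
eps = 1e-6
num = np.array([(loss(beta + eps * e) - loss(beta - eps * e)) / (2 * eps)
                for e in np.eye(p + 1)])
assert np.allclose(grad, num, atol=1e-5)
```
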
It may be beneficial to train an ensemble of models that have diversity, such that when they are combined, their predictive accuracy is augmented.[9][10]Assuming a simple ensemble ofK{\displaystyle K}classifiers is assembled via averaging the outputs, then the amended cross-entropy is given byek=H(p,qk)−λK∑j≠kH(qj,qk){\displaystyle e^{k}=H(p,q^{k})-{\frac {\lambda }{K}}\sum _{j\neq k}H(q^{j},q^{k})}whereek{\displaystyle e^{k}}is the cost function of thekth{\displaystyle k^{th}}classifier,qk{\displaystyle q^{k}}is the output probability of thekth{\displaystyle k^{th}}classifier,p{\displaystyle p}is the true probability to be estimated, andλ{\displaystyle \lambda }is a parameter between 0 and 1 that defines the 'diversity' that we would like to establish among the ensemble. Whenλ=0{\displaystyle \lambda =0}we want each classifier to do its best regardless of the ensemble and whenλ=1{\displaystyle \lambda =1}we would like the classifier to be as diverse as possible.
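The amended cost e^k can be evaluated directly from classifier outputs. A Python sketch; the three classifier distributions and λ = 0.5 are illustrative values, not from the cited work:

```python
import math

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical outputs of K = 3 classifiers over 2 classes, plus the true distribution p.
p = [1.0, 0.0]
q = [[0.9, 0.1], [0.7, 0.3], [0.6, 0.4]]
K, lam = len(q), 0.5

def amended_cost(k):
    """e^k = H(p, q^k) - (lambda / K) * sum_{j != k} H(q^j, q^k)."""
    penalty = sum(cross_entropy(q[j], q[k]) for j in range(K) if j != k)
    return cross_entropy(p, q[k]) - (lam / K) * penalty

costs = [amended_cost(k) for k in range(K)]
# The diversity term rewards classifiers whose outputs differ from the rest of the ensemble.
```
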
https://en.wikipedia.org/wiki/Cross-entropy
Thedeviance information criterion(DIC) is ahierarchical modelinggeneralization of theAkaike information criterion(AIC). It is particularly useful inBayesianmodel selectionproblems where theposterior distributionsof themodelshave been obtained byMarkov chain Monte Carlo(MCMC) simulation. DIC is anasymptotic approximationas the sample size becomes large, like AIC. It is only valid when theposterior distributionis approximatelymultivariate normal. Define thedevianceasD(θ)=−2log⁡(p(y|θ))+C{\displaystyle D(\theta )=-2\log(p(y|\theta ))+C\,}, wherey{\displaystyle y}are the data,θ{\displaystyle \theta }are the unknown parameters of the model andp(y|θ){\displaystyle p(y|\theta )}is thelikelihood function.C{\displaystyle C}is a constant that cancels out in all calculations that compare different models, and which therefore does not need to be known. There are two calculations in common usage for the effective number of parameters of the model. The first, as described inSpiegelhalter et al. (2002, p. 587), ispD=D(θ)¯−D(θ¯){\displaystyle p_{D}={\overline {D(\theta )}}-D({\bar {\theta }})}, whereθ¯{\displaystyle {\bar {\theta }}}is the expectation ofθ{\displaystyle \theta }. The second, as described inGelman et al. (2004, p. 182), ispD=pV=12var⁡(D(θ))¯{\displaystyle p_{D}=p_{V}={\frac {1}{2}}{\overline {\operatorname {var} \left(D(\theta )\right)}}}. The larger the effective number of parameters is, theeasierit is for the model to fit the data, and so the deviance needs to be penalized. The deviance information criterion is calculated as or equivalently as From this latter form, the connection with AIC is more evident. The idea is that models with smaller DIC should be preferred to models with larger DIC. Models are penalized both by the value ofD¯{\displaystyle {\bar {D}}}, which favors a good fit, but also (similar to AIC) by the effective number of parameterspD{\displaystyle p_{D}}. 
SinceD¯{\displaystyle {\bar {D}}}will decrease as the number of parameters in a model increases, thepD{\displaystyle p_{D}}term compensates for this effect by favoring models with a smaller number of parameters. An advantage of DIC over other criteria in the case of Bayesian model selection is that the DIC is easily calculated from the samples generated by a Markov chain Monte Carlo simulation. AIC requires calculating the likelihood at its maximum overθ{\displaystyle \theta }, which is not readily available from the MCMC simulation. But to calculate DIC, simply computeD¯{\displaystyle {\bar {D}}}as the average ofD(θ){\displaystyle D(\theta )}over the samples ofθ{\displaystyle \theta }, andD(θ¯){\displaystyle D({\bar {\theta }})}as the value ofD{\displaystyle D}evaluated at the average of the samples ofθ{\displaystyle \theta }. Then the DIC follows directly from these approximations. Claeskens and Hjort (2008, Ch. 3.5) show that the DIC islarge-sampleequivalent to the natural model-robust version of the AIC. In the derivation of DIC, it is assumed that the specified parametric family of probability distributions that generate future observations encompasses the true model. This assumption does not always hold, and it is desirable to consider model assessment procedures in that scenario. Also, the observed data are used both to construct the posterior distribution and to evaluate the estimated models. Therefore, DIC tends to selectover-fittedmodels. A resolution to the issues above was suggested byAndo (2007), with the proposal of the Bayesian predictive information criterion (BPIC). Ando (2010, Ch. 8) provided a discussion of various Bayesian model selection criteria. To avoid the over-fitting problems of DIC,Ando (2011)developed Bayesian model selection criteria from a predictive view point. The criterion is calculated as The first term is a measure of how well the model fits the data, while the second term is a penalty on the model complexity. 
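The recipe above — average D(θ) over the draws, and evaluate D at the average draw — can be sketched for a normal-mean model with known unit variance. The "posterior draws" below are simulated stand-ins for real MCMC output, and the constant C is dropped as the text allows:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=50)  # observed data

# Stand-in for MCMC draws of the mean parameter theta (not a real sampler)
theta_samples = rng.normal(loc=y.mean(), scale=0.1, size=5000)

def deviance(theta):
    """D(theta) = -2 log p(y | theta) for a N(theta, 1) likelihood, up to a constant."""
    return np.sum((y - theta) ** 2)

D_bar = np.mean([deviance(t) for t in theta_samples])  # average deviance over draws
D_hat = deviance(np.mean(theta_samples))               # deviance at the posterior mean
p_D = D_bar - D_hat                                    # effective number of parameters
DIC = D_hat + 2 * p_D                                  # equivalently D_bar + p_D
```
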
Note that the p in this expression is the predictive distribution rather than the likelihood above.
https://en.wikipedia.org/wiki/Deviance_information_criterion
Infinancial mathematicsandstochastic optimization, the concept ofrisk measureis used to quantify the risk involved in a random outcome or risk position. Many risk measures have hitherto been proposed, each having certain characteristics. Theentropic value at risk(EVaR) is acoherent risk measureintroduced by Ahmadi-Javid,[1][2]which is an upper bound for thevalue at risk(VaR) and theconditional value at risk(CVaR), obtained from theChernoff inequality. The EVaR can also be represented by using the concept ofrelative entropy. Because of its connection with the VaR and the relative entropy, this risk measure is called "entropic value at risk". The EVaR was developed to tackle some computational inefficiencies[clarification needed]of the CVaR. Getting inspiration from the dual representation of the EVaR, Ahmadi-Javid[1][2]developed a wide class ofcoherent risk measures, calledg-entropic risk measures. Both the CVaR and the EVaR are members of this class. Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability spacewithΩ{\displaystyle \Omega }a set of all simple events,F{\displaystyle {\mathcal {F}}}aσ{\displaystyle \sigma }-algebra of subsets ofΩ{\displaystyle \Omega }andP{\displaystyle P}aprobability measureonF{\displaystyle {\mathcal {F}}}. LetX{\displaystyle X}be arandom variableandLM+{\displaystyle \mathbf {L} _{M^{+}}}be the set of allBorel measurablefunctionsX:Ω→R{\displaystyle X:\Omega \to \mathbb {R} }whosemoment-generating functionMX(z){\displaystyle M_{X}(z)}exists for allz≥0{\displaystyle z\geq 0}. The entropic value at risk (EVaR) ofX∈LM+{\displaystyle X\in \mathbf {L} _{M^{+}}}with confidence level1−α{\displaystyle 1-\alpha }is defined as follows: In finance, therandom variableX∈LM+,{\displaystyle X\in \mathbf {L} _{M^{+}},}in the above equation, is used to model thelossesof a portfolio. 
Consider the Chernoff inequality Solving the equatione−zaMX(z)=α{\displaystyle e^{-za}M_{X}(z)=\alpha }fora,{\displaystyle a,}results in By considering the equation (1), we see that which shows the relationship between the EVaR and the Chernoff inequality. It is worth noting thataX(1,z){\displaystyle a_{X}(1,z)}is theentropic risk measureorexponential premium, which is a concept used in finance and insurance, respectively. LetLM{\displaystyle \mathbf {L} _{M}}be the set of all Borel measurable functionsX:Ω→R{\displaystyle X:\Omega \to \mathbb {R} }whose moment-generating functionMX(z){\displaystyle M_{X}(z)}exists for allz{\displaystyle z}. Thedual representation(or robust representation) of the EVaR is as follows: whereX∈LM,{\displaystyle X\in \mathbf {L} _{M},}andℑ{\displaystyle \Im }is a set of probability measures on(Ω,F){\displaystyle (\Omega ,{\mathcal {F}})}withℑ={Q≪P:DKL(Q||P)≤−ln⁡α}{\displaystyle \Im =\{Q\ll P:D_{KL}(Q||P)\leq -\ln \alpha \}}. Note that is therelative entropyofQ{\displaystyle Q}with respect toP,{\displaystyle P,}also called theKullback–Leibler divergence. The dual representation of the EVaR discloses the reason behind its naming. ForX∼N(μ,σ2),{\displaystyle X\sim N(\mu ,\sigma ^{2}),} ForX∼U(a,b),{\displaystyle X\sim U(a,b),} Figures 1 and 2 show the comparing of the VaR, CVaR and EVaR forN(0,1){\displaystyle N(0,1)}andU(0,1){\displaystyle U(0,1)}. Letρ{\displaystyle \rho }be a risk measure. 
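For a normal random variable the minimization over z in the Chernoff bound can be done in closed form, giving EVaR₁₋ₐ(X) = μ + σ√(−2 ln α). A Python sketch that checks this against a grid-search minimization of a_X(α, z) = z⁻¹ ln(M_X(z)/α):

```python
import math

mu, sigma, alpha = 0.0, 1.0, 0.05

def chernoff_bound(z):
    """a_X(alpha, z) = (ln M_X(z) - ln alpha) / z for X ~ N(mu, sigma^2)."""
    log_mgf = mu * z + 0.5 * sigma ** 2 * z ** 2   # ln M_X(z) of the normal
    return (log_mgf - math.log(alpha)) / z

# EVaR is the tightest Chernoff bound: minimize over z > 0 (simple grid search).
zs = [0.001 * k for k in range(1, 20000)]
evar_numeric = min(chernoff_bound(z) for z in zs)

evar_closed = mu + sigma * math.sqrt(-2.0 * math.log(alpha))  # closed form, normal case
assert abs(evar_numeric - evar_closed) < 1e-3
```
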
Consider the optimization problem wherew∈W⊆Rn{\displaystyle {\boldsymbol {w}}\in {\boldsymbol {W}}\subseteq \mathbb {R} ^{n}}is ann{\displaystyle n}-dimensional realdecision vector,ψ{\displaystyle {\boldsymbol {\psi }}}is anm{\displaystyle m}-dimensional realrandom vectorwith a knownprobability distributionand the functionG(w,.):Rm→R{\displaystyle G({\boldsymbol {w}},.):\mathbb {R} ^{m}\to \mathbb {R} }is a Borel measurable function for all valuesw∈W.{\displaystyle {\boldsymbol {w}}\in {\boldsymbol {W}}.}Ifρ=EVaR,{\displaystyle \rho ={\text{EVaR}},}then the optimization problem (10) turns into: LetSψ{\displaystyle {\boldsymbol {S}}_{\boldsymbol {\psi }}}be thesupport of the random vectorψ.{\displaystyle {\boldsymbol {\psi }}.}IfG(.,s){\displaystyle G(.,{\boldsymbol {s}})}isconvexfor alls∈Sψ{\displaystyle {\boldsymbol {s}}\in {\boldsymbol {S}}_{\boldsymbol {\psi }}}, then the objective function of the problem (11) is also convex. IfG(w,ψ){\displaystyle G({\boldsymbol {w}},{\boldsymbol {\psi }})}has the form andψ1,…,ψm{\displaystyle \psi _{1},\ldots ,\psi _{m}}areindependent random variablesinLM{\displaystyle \mathbf {L} _{M}}, then (11) becomes which is computationallytractable. But for this case, if one uses the CVaR in problem (10), then the resulting problem becomes as follows: It can be shown that by increasing the dimension ofψ{\displaystyle \psi }, problem (14) is computationally intractable even for simple cases. For example, assume thatψ1,…,ψm{\displaystyle \psi _{1},\ldots ,\psi _{m}}are independentdiscrete random variablesthat takek{\displaystyle k}distinct values. For fixed values ofw{\displaystyle {\boldsymbol {w}}}andt,{\displaystyle t,}thecomplexityof computing the objective function given in problem (13) is of ordermk{\displaystyle mk}while the computing time for the objective function of problem (14) is of orderkm{\displaystyle k^{m}}. 
For illustration, assume thatk=2,m=100{\displaystyle k=2,m=100}and the summation of two numbers takes10−12{\displaystyle 10^{-12}}seconds. For computing the objective function of problem (14) one needs about4×1010{\displaystyle 4\times 10^{10}}years, whereas the evaluation of objective function of problem (13) takes about10−10{\displaystyle 10^{-10}}seconds. This shows that formulation with the EVaR outperforms the formulation with the CVaR (see[2]for more details). Drawing inspiration from the dual representation of the EVaR given in (3), one can define a wide class of information-theoretic coherent risk measures, which are introduced in.[1][2]Letg{\displaystyle g}be a convexproper functionwithg(1)=0{\displaystyle g(1)=0}andβ{\displaystyle \beta }be a non-negative number. Theg{\displaystyle g}-entropic risk measure with divergence levelβ{\displaystyle \beta }is defined as whereℑ={Q≪P:Hg(P,Q)≤β}{\displaystyle \Im =\{Q\ll P:H_{g}(P,Q)\leq \beta \}}in whichHg(P,Q){\displaystyle H_{g}(P,Q)}is thegeneralized relative entropyofQ{\displaystyle Q}with respect toP{\displaystyle P}. A primal representation of the class ofg{\displaystyle g}-entropic risk measures can be obtained as follows: whereg∗{\displaystyle g^{*}}is the conjugate ofg{\displaystyle g}. By considering withg∗(x)=ex−1{\displaystyle g^{*}(x)=e^{x-1}}andβ=−ln⁡α{\displaystyle \beta =-\ln \alpha }, the EVaR formula can be deduced. The CVaR is also ag{\displaystyle g}-entropic risk measure, which can be obtained from (16) by setting withg∗(x)=1αmax{0,x}{\displaystyle g^{*}(x)={\tfrac {1}{\alpha }}\max\{0,x\}}andβ=0{\displaystyle \beta =0}(see[1][3]for more details). 
For more results ong{\displaystyle g}-entropic risk measures, see.[4] The disciplined convex programming framework of sample EVaR was proposed by Cajas[5]and has the following form: wherez{\displaystyle z},t{\displaystyle t}andu{\displaystyle u}are variables;Kexp{\displaystyle K_{\text{exp}}}is an exponential cone;[6]andT{\displaystyle T}is the number of observations. If we definew{\displaystyle w}as the vector of weights forN{\displaystyle N}assets,r{\displaystyle r}the matrix of returns andμ{\displaystyle \mu }the mean vector of assets, we can pose the minimization of the expected EVaR given a level of expected portfolio returnμ¯{\displaystyle {\bar {\mu }}}as follows. Applying the disciplined convex programming framework of EVaR to the uncompounded cumulative returns distribution, Cajas[5]proposed theentropic drawdown at risk(EDaR) optimization problem. We can pose the minimization of the expected EDaR given a level of expected returnμ¯{\displaystyle {\bar {\mu }}}as follows: whered{\displaystyle d}is a variable that represents the uncompounded cumulative returns of the portfolio andR{\displaystyle R}is the matrix of uncompounded cumulative returns of the assets. For other problems, such as risk parity, maximization of the return/risk ratio, or constraints on maximum risk levels for EVaR and EDaR, see[5]for more details. The advantage of modeling EVaR and EDaR within a disciplined convex programming framework is that software such as CVXPY[7]or MOSEK[8]can be used for these portfolio optimization problems. EVaR and EDaR are implemented in the Python package Riskfolio-Lib.[9]
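Outside a conic solver, the sample EVaR can also be evaluated directly as a one-dimensional minimization of the empirical Chernoff bound, EVaR₁₋ₐ = min over z > 0 of z·ln((1/(Tα))·Σₜ exp(xₜ/z)). A numpy sketch on hypothetical loss data — the conic CVXPY/MOSEK formulation referenced above is what one would use inside a full portfolio optimization; here we only evaluate the risk measure and check the VaR ≤ CVaR ≤ EVaR ordering:

```python
import numpy as np

rng = np.random.default_rng(7)
losses = rng.normal(0.0, 0.02, size=1000)  # hypothetical portfolio losses
alpha = 0.05

def sample_evar(x, alpha, zs):
    """min_{z>0} z * ( log( (1/T) sum_t exp(x_t / z) ) - log(alpha) ) by grid search."""
    return min(z * (np.log(np.mean(np.exp(x / z))) - np.log(alpha)) for z in zs)

grid = np.linspace(1e-3, 1.0, 2000)
evar = sample_evar(losses, alpha, grid)

# EVaR upper-bounds CVaR, which upper-bounds VaR, at the same confidence level.
var = np.quantile(losses, 1 - alpha)
cvar = losses[losses >= var].mean()
assert var <= cvar <= evar + 1e-9
```
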
https://en.wikipedia.org/wiki/Entropic_value_at_risk
Inprobabilityandstatistics, theHellinger distance(closely related to, although different from, theBhattacharyya distance) is used to quantify the similarity between twoprobability distributions. It is a type off-divergence. The Hellinger distance is defined in terms of theHellinger integral, which was introduced byErnst Hellingerin 1909.[1][2] It is sometimes called the Jeffreys distance.[3][4] To define the Hellinger distance in terms ofmeasure theory, letP{\displaystyle P}andQ{\displaystyle Q}denote twoprobability measureson a measure spaceX{\displaystyle {\mathcal {X}}}that areabsolutely continuouswith respect to an auxiliary measureλ{\displaystyle \lambda }. Such a measure always exists, e.gλ=(P+Q){\displaystyle \lambda =(P+Q)}. The square of the Hellinger distance betweenP{\displaystyle P}andQ{\displaystyle Q}is defined as the quantity Here,P(dx)=p(x)λ(dx){\displaystyle P(dx)=p(x)\lambda (dx)}andQ(dx)=q(x)λ(dx){\displaystyle Q(dx)=q(x)\lambda (dx)}, i.e.p{\displaystyle p}andq{\displaystyle q}are theRadon–Nikodym derivativesofPandQrespectively with respect toλ{\displaystyle \lambda }. This definition does not depend onλ{\displaystyle \lambda }, i.e. the Hellinger distance betweenPandQdoes not change ifλ{\displaystyle \lambda }is replaced with a different probability measure with respect to which bothPandQare absolutely continuous. For compactness, the above formula is often written as To define the Hellinger distance in terms of elementary probability theory, we take λ to be theLebesgue measure, so thatdP/dλanddQ/dλ are simplyprobability density functions. If we denote the densities asfandg, respectively, the squared Hellinger distance can be expressed as a standard calculus integral where the second form can be obtained by expanding the square and using the fact that the integral of a probability density over its domain equals 1. 
The Hellinger distanceH(P,Q) satisfies the property (derivable from theCauchy–Schwarz inequality) For two discrete probability distributionsP=(p1,…,pk){\displaystyle P=(p_{1},\ldots ,p_{k})}andQ=(q1,…,qk){\displaystyle Q=(q_{1},\ldots ,q_{k})}, their Hellinger distance is defined as which is directly related to theEuclidean normof the difference of the square root vectors, i.e. Also,1−H2(P,Q)=∑i=1kpiqi.{\displaystyle 1-H^{2}(P,Q)=\sum _{i=1}^{k}{\sqrt {p_{i}q_{i}}}.}[citation needed] The Hellinger distance forms aboundedmetricon thespaceof probability distributions over a givenprobability space. The maximum distance 1 is achieved whenPassigns probability zero to every set to whichQassigns a positive probability, and vice versa. Sometimes the factor1/2{\displaystyle 1/{\sqrt {2}}}in front of the integral is omitted, in which case the Hellinger distance ranges from zero to the square root of two. The Hellinger distance is related to theBhattacharyya coefficientBC(P,Q){\displaystyle BC(P,Q)}as it can be defined as Hellinger distances are used in the theory ofsequentialandasymptotic statistics.[5][6] The squared Hellinger distance between twonormal distributionsP∼N(μ1,σ12){\displaystyle P\sim {\mathcal {N}}(\mu _{1},\sigma _{1}^{2})}andQ∼N(μ2,σ22){\displaystyle Q\sim {\mathcal {N}}(\mu _{2},\sigma _{2}^{2})}is: The squared Hellinger distance between twomultivariate normal distributionsP∼N(μ1,Σ1){\displaystyle P\sim {\mathcal {N}}(\mu _{1},\Sigma _{1})}andQ∼N(μ2,Σ2){\displaystyle Q\sim {\mathcal {N}}(\mu _{2},\Sigma _{2})}is[7] The squared Hellinger distance between twoexponential distributionsP∼Exp(α){\displaystyle P\sim \mathrm {Exp} (\alpha )}andQ∼Exp(β){\displaystyle Q\sim \mathrm {Exp} (\beta )}is: The squared Hellinger distance between twoWeibull distributionsP∼W(k,α){\displaystyle P\sim \mathrm {W} (k,\alpha )}andQ∼W(k,β){\displaystyle Q\sim \mathrm {W} (k,\beta )}(wherek{\displaystyle k}is a common shape parameter andα,β{\displaystyle \alpha \,,\beta }are the 
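The discrete formula above is a short computation. A Python sketch with two illustrative distributions, also checking the Bhattacharyya-coefficient identity 1 − H² = Σᵢ √(pᵢqᵢ):

```python
import math

def hellinger(p, q):
    """H(P, Q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2 for discrete distributions."""
    return math.sqrt(sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                         for pi, qi in zip(p, q))) / math.sqrt(2)

p = [0.5, 0.3, 0.2]    # illustrative distributions
q = [0.25, 0.5, 0.25]

h = hellinger(p, q)
assert 0.0 <= h <= 1.0  # Hellinger distance is a bounded metric

# Relation to the Bhattacharyya coefficient: 1 - H^2 = sum_i sqrt(p_i q_i)
bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
assert abs((1 - h ** 2) - bc) < 1e-12
```
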
scale parameters respectively): The squared Hellinger distance between twoPoisson distributionswith rate parametersα{\displaystyle \alpha }andβ{\displaystyle \beta }, so thatP∼Poisson(α){\displaystyle P\sim \mathrm {Poisson} (\alpha )}andQ∼Poisson(β){\displaystyle Q\sim \mathrm {Poisson} (\beta )}, is: The squared Hellinger distance between twobeta distributionsP∼Beta(a1,b1){\displaystyle P\sim {\text{Beta}}(a_{1},b_{1})}andQ∼Beta(a2,b2){\displaystyle Q\sim {\text{Beta}}(a_{2},b_{2})}is: whereB{\displaystyle B}is thebeta function. The squared Hellinger distance between twogamma distributionsP∼Gamma(a1,b1){\displaystyle P\sim {\text{Gamma}}(a_{1},b_{1})}andQ∼Gamma(a2,b2){\displaystyle Q\sim {\text{Gamma}}(a_{2},b_{2})}is: whereΓ{\displaystyle \Gamma }is thegamma function. The Hellinger distanceH(P,Q){\displaystyle H(P,Q)}and thetotal variation distance(or statistical distance)δ(P,Q){\displaystyle \delta (P,Q)}are related as follows:[8] The constants in this inequality may change depending on which renormalization you choose (1/2{\displaystyle 1/2}or1/2{\displaystyle 1/{\sqrt {2}}}). These inequalities follow immediately from the inequalities between the1-normand the2-norm.
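The Poisson closed form, H² = 1 − exp(−(√α − √β)²/2), can be verified by summing √(P(k)Q(k)) over the two probability mass functions directly. A Python sketch with illustrative rates α = 3, β = 5 (the sum is truncated at k = 80, where the tails are negligible):

```python
import math

alpha, beta = 3.0, 5.0

# Closed form for two Poisson distributions
closed = 1.0 - math.exp(-0.5 * (math.sqrt(alpha) - math.sqrt(beta)) ** 2)

def pois_pmf(lam, k):
    """Poisson pmf P(K = k) = e^{-lam} lam^k / k!."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Direct evaluation via the Bhattacharyya coefficient: H^2 = 1 - sum_k sqrt(P(k) Q(k))
bc = sum(math.sqrt(pois_pmf(alpha, k) * pois_pmf(beta, k)) for k in range(80))
assert abs(closed - (1.0 - bc)) < 1e-10
```
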
https://en.wikipedia.org/wiki/Hellinger_distance
Ininformation theoryandmachine learning,information gainis a synonym forKullback–Leibler divergence; theamount of informationgained about arandom variableorsignalfrom observing another random variable. However, in the context of decision trees, the term is sometimes used synonymously withmutual information, which is theconditional expected valueof the Kullback–Leibler divergence of the univariateprobability distributionof one variable from theconditional distributionof this variablegiventhe other one. The information gain of arandom variableX{\displaystyle X}obtained from an observation of a random variableA{\displaystyle A}taking valueA=a{\displaystyle A=a}is defined as: IGX,A(X,a)=DKL(PX(x|a)‖PX(x|I)){\displaystyle IG_{X,A}{(X,a)}=D_{\text{KL}}{\left(P_{X}{(x|a)}\|P_{X}{(x|I)}\right)}} i.e. the Kullback–Leibler divergence ofPX(x|I){\displaystyle P_{X}{(x|I)}}(theprior distributionforx{\displaystyle x}) fromPX|A(x|a){\displaystyle P_{X|A}{(x|a)}}(theposterior distributionforx{\displaystyle x}givena{\displaystyle a}). Theexpected valueof the information gain is the mutual information⁠I(X;A){\displaystyle I(X;A)}⁠ofX{\displaystyle X}andA{\displaystyle A}– i.e. the reduction in theentropyofX{\displaystyle X}achieved by learning the state of the random variableA{\displaystyle A}. In machine learning, this concept can be used to define a preferred sequence of attributes to investigate to most rapidly narrow down the state ofX. Such a sequence (which depends on the outcome of the investigation of previous attributes at each stage) is called adecision tree, and when applied in the area of machine learning is known asdecision tree learning. Usually an attribute with high mutual information should be preferred to other attributes.[why?] 
In general terms, theexpectedinformation gain is the reduction ininformation entropyΗfrom a prior state to a state that takes some information as given: whereH(T|a){\displaystyle \mathrm {H} {(T|a)}}is theconditional entropyofT{\displaystyle T}given the value ofattributea{\displaystyle a}. This is intuitively plausible when interpreting entropyΗas a measure of uncertainty of a random variableT{\displaystyle T}: by learning (or assuming)a{\displaystyle a}aboutT{\displaystyle T}, our uncertainty aboutT{\displaystyle T}is reduced (i.e.IG(T,a){\displaystyle IG(T,a)}is positive), unless of courseT{\displaystyle T}is independent ofa{\displaystyle a}, in which caseH(T|a)=H(T){\displaystyle \mathrm {H} (T|a)=\mathrm {H} (T)}, meaningIG(T,a)=0{\displaystyle IG(T,a)=0}. LetTdenote aset of training examples, each of the form(x,y)=(x1,x2,x3,...,xk,y){\displaystyle ({\textbf {x}},y)=(x_{1},x_{2},x_{3},...,x_{k},y)}wherexa∈vals(a){\displaystyle x_{a}\in \mathrm {vals} (a)}is the value of theath{\displaystyle a^{\text{th}}}attribute orfeatureofexamplex{\displaystyle {\textbf {x}}}andyis the corresponding class label. The information gain for an attributeais defined in terms ofShannon entropyH(−){\displaystyle \mathrm {H} (-)}as follows. For a valuevtaken by attributea, letSa(v)={x∈T|xa=v}{\displaystyle S_{a}{(v)}=\{{\textbf {x}}\in T|x_{a}=v\}}be defined as thesetof training inputs ofTfor which attributeais equal tov. Then the information gain ofTfor attributeais the difference between the a priori Shannon entropyH(T){\displaystyle \mathrm {H} (T)}of the training set and theconditional entropyH(T|a){\displaystyle \mathrm {H} {(T|a)}}. H(T|a)=∑v∈vals(a)|Sa(v)||T|⋅H(Sa(v)).{\displaystyle \mathrm {H} (T|a)=\sum _{v\in \mathrm {vals} (a)}{{\frac {|S_{a}{(v)}|}{|T|}}\cdot \mathrm {H} \left(S_{a}{\left(v\right)}\right)}.} Themutual informationis equal to the total entropy for an attribute if for each of the attribute values a uniqueclassificationcan be made for the result attribute. 
In this case, the relative entropies subtracted from the total entropy are 0. In particular, the valuesv∈vals(a){\displaystyle v\in vals(a)}defines apartitionof the training set dataTintomutually exclusiveand all-inclusivesubsets, inducing acategorical probability distributionPa(v){\textstyle P_{a}{(v)}}on the valuesv∈vals(a){\textstyle v\in vals(a)}of attributea. The distribution is givenPa(v):=|Sa(v)||T|{\textstyle P_{a}{(v)}:={\frac {|S_{a}{(v)}|}{|T|}}}. In this representation, the information gain ofTgivenacan be defined as the difference between the unconditional Shannon entropy ofTand the expected entropy ofTconditioned ona, where theexpectation valueis taken with respect to the induced distribution on the values ofa.IG(T,a)=H(T)−∑v∈vals(a)Pa(v)H(Sa(v))=H(T)−EPa[H(Sa(v))]=H(T)−H(T|a).{\displaystyle {\begin{alignedat}{2}IG(T,a)&=\mathrm {H} (T)-\sum _{v\in \mathrm {vals} (a)}{P_{a}{(v)}\mathrm {H} \left(S_{a}{(v)}\right)}\\&=\mathrm {H} (T)-\mathbb {E} _{P_{a}}{\left[\mathrm {H} {(S_{a}{(v)})}\right]}\\&=\mathrm {H} (T)-\mathrm {H} {(T|a)}.\end{alignedat}}} For a better understanding of information gain, let us break it down. As we know, information gain is the reduction in information entropy, what is entropy? Basically, entropy is the measure of impurity or uncertainty in a group of observations. In engineering applications, information isanalogousto signal, and entropy is analogous to noise. It determines how a decision tree chooses to split data.[1]The leftmost figure below is very impure and has high entropy corresponding to higher disorder and lower information value. As we go to the right, the entropy decreases, and the information value increases. Now, it is clear that information gain is the measure of how much information a feature provides about a class. Let's visualize information gain in a decision tree as shown in the right: The nodetis the parent node, and the sub-nodestLandtRare child nodes. 
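The definition IG(T, a) = H(T) − Σᵥ |Sₐ(v)|/|T| · H(Sₐ(v)) can be computed on a toy training set. A Python sketch; the four examples are constructed so that attribute 0 separates the classes perfectly (gain equals H(T) = 1 bit) while attribute 1 is uninformative (gain 0):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(examples, attr):
    """IG(T, a) = H(T) - sum_v |S_a(v)|/|T| * H(S_a(v))."""
    total = len(examples)
    cond = 0.0
    for v in {x[attr] for x, _ in examples}:
        subset = [y for x, y in examples if x[attr] == v]  # S_a(v)
        cond += len(subset) / total * entropy(subset)
    return entropy([y for _, y in examples]) - cond

# Hypothetical training set of (feature vector, class label) pairs
T = [((1, 0), 'C'), ((1, 1), 'C'), ((0, 0), 'NC'), ((0, 1), 'NC')]

assert abs(information_gain(T, 0) - 1.0) < 1e-12  # perfect split
assert abs(information_gain(T, 1)) < 1e-12        # uninformative attribute
```
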
In this case, the parent nodethas a collection of cancer and non-cancer samples denoted as C and NC respectively. We can use information gain to determine how good the splitting of nodes is in a decision tree. In terms of entropy, information gain is defined as: To understand this idea, let's start by an example in which we create a simple dataset and want to see if genemutationscould be related to patients with cancer. Given four different gene mutations, as well as seven samples, thetraining setfor a decision can be created as follows: In this dataset, a 1 means the sample has the mutation (True), while a 0 means the sample does not (False). A sample with C denotes that it has been confirmed to be cancerous, while NC means it is non-cancerous. Using this data, a decision tree can be created with information gain used to determine the candidate splits for each node. For the next step, the entropy at parent nodetof the above simple decision tree is computed as: H(t) = −[pC,tlog2(pC,t) +pNC,tlog2(pNC,t)][3] where, probability of selecting a class ‘C’ sample at nodet, pC,t=n(t,C) /n(t), probability of selecting a class ‘NC’ sample at nodet, pNC,t=n(t,NC) /n(t), n(t),n(t,C), andn(t,NC) are the number of total samples, ‘C’ samples and ‘NC’ samples at nodetrespectively. Using this with the example training set, the process for finding information gain beginning withH(t){\displaystyle \mathrm {H} {(t)}}for Mutation 1 is as follows: Note:H(t){\displaystyle \mathrm {H} {(t)}}will be the same for all mutations at the root. The relatively high value of entropyH(t)=0.985{\displaystyle \mathrm {H} {(t)}=0.985}(1 is the optimal value) suggests that the root node is highly impure and the constituents of the input at the root node would look like the leftmost figure in the aboveEntropy Diagram. However, such a set of data is good for learning the attributes of the mutations used to split the node. 
At a certain node, when the homogeneity of the constituents of the input occurs (as shown in the rightmost figure in the aboveEntropy Diagram),the dataset would no longer be good for learning. Moving on, the entropy at left and right child nodes of the above decision tree is computed using the formulae: H(tL) = −[pC,Llog2(pC,L) +pNC,Llog2(pNC,L)][1] H(tR) = −[pC,Rlog2(pC,R) +pNC,Rlog2(pNC,R)][1] where, probability of selecting a class ‘C’ sample at the left child node, pC,L=n(tL,C) /n(tL), probability of selecting a class ‘NC’ sample at the left child node, pNC,L=n(tL,NC) /n(tL), probability of selecting a class ‘C’ sample at the right child node, pC,R=n(tR,C) /n(tR), probability of selecting a class ‘NC’ sample at the right child node, pNC,R=n(tR,NC) /n(tR), n(tL),n(tL,C), andn(tL,NC) are the total number of samples, ‘C’ samples and ‘NC’ samples at the left child node respectively, n(tR),n(tR,C), andn(tR,NC) are the total number of samples, ‘C’ samples and ‘NC’ samples at the right child node respectively. Using these formulae, the H(tL) and H(tR) for Mutation 1 is shown below: Following this, average entropy of the child nodes due to the split at nodetof the above decision tree is computed as: H(s,t) =PLH(tL) +PRH(tR) where, probability of samples at the left child,PL=n(tL) /n(t), probability of samples at the right child,PR=n(tR) /n(t), Finally, H(s,t) along withPLandPRfor Mutation 1 is as follows: Thus, by definition from equation (i): (Information gain) = H(t) - H(s,t) After all the steps, gain(s), wheresis a candidate split for the example is: Using this same set of formulae with the other three mutations leads to a table of the candidate splits, ranked by their information gain: The mutation that provides the most useful information would be Mutation 3, so that will be used to split the root node of the decision tree. The root can be split and all the samples can be passed though and appended to the child nodes. 
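The node-entropy formulae above are easy to reproduce. A Python sketch: a root with 3 'C' and 4 'NC' samples matches the stated H(t) ≈ 0.985, and the child counts for the split are hypothetical (the article's actual sample table is not reproduced here):

```python
import math

def H(n_c, n_nc):
    """Binary entropy, in bits, of a node holding n_c 'C' and n_nc 'NC' samples."""
    total = n_c + n_nc
    h = 0.0
    for n in (n_c, n_nc):
        if n > 0:
            p = n / total
            h -= p * math.log2(p)
    return h

# Seven samples split 3 'C' / 4 'NC' at the root reproduce H(t) = 0.985 from the text.
assert abs(H(3, 4) - 0.985) < 1e-3

# Hypothetical split sending (3 C, 1 NC) left and (0 C, 3 NC) right:
n_L, n_R = 4, 3
H_split = (n_L / 7) * H(3, 1) + (n_R / 7) * H(0, 3)  # H(s,t) = P_L H(t_L) + P_R H(t_R)
gain = H(3, 4) - H_split                             # information gain of the split
assert gain > 0
```
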
A tree describing the split is shown on the left. The samples that are on the left node of the tree would be classified as cancerous by the tree, while those on the right would be non-cancerous. This tree is relatively accurate at classifying the samples that were used to build it (which is a case ofoverfitting), but it would still classify sample C2 incorrectly. To remedy this, the tree can be split again at the child nodes to possibly achieve something even more accurate. To split the right node, information gain must again be calculated for all the possible candidate splits that were not used for previous nodes. So, the only options this time are Mutations 1, 2, and 4. Note:H(t){\displaystyle \mathrm {H} {(t)}}is different this time around since there are only four samples at the right child. From this newH(t){\displaystyle \mathrm {H} {(t)}}, the candidate splits can be calculated using the same formulae as the root node: Thus, the right child will be split with Mutation 4. All the samples that have the mutation will be passed to the left child and the ones that lack it will be passed to the right child. To split the left node, the process would be the same, except there would only be 3 samples to check. Sometimes a node may not need to be split at all if it is apure set, where all samples at the node are just cancerous or non-cancerous. Splitting the node may lead to the tree being more inaccurate and in this case it will not be split. The tree would now achieve 100% accuracy if the samples that were used to build it are tested. This isn't a good idea, however, since the tree would overfit the data. The best course of action is to try testing the tree on other samples, of which are not part of the original set. Two outside samples are below: By following the tree, NC10 was classified correctly, but C15 was classified as NC. For other samples, this tree would not be 100% accurate anymore. 
It could be possible to improve the tree, though, with options such as increasing its depth or increasing the size of the training set.

Information gain is the basic criterion used to decide whether a feature should be used to split a node. The feature with the optimal split, i.e. the highest value of information gain at a node of a decision tree, is used as the feature for splitting that node. The concept of the information gain function falls under the C4.5 algorithm for generating decision trees and selecting the optimal split for a decision tree node.[1] Some of its advantages include:

Although information gain is usually a good measure for deciding the relevance of an attribute, it is not perfect. A notable problem occurs when information gain is applied to attributes that can take on a large number of distinct values. For example, suppose that one is building a decision tree for some data describing the customers of a business. Information gain is often used to decide which of the attributes are the most relevant, so they can be tested near the root of the tree. One of the input attributes might be the customer's membership number, if they are a member of the business's membership program. This attribute has high mutual information, because it uniquely identifies each customer, but we do not want to include it in the decision tree: deciding how to treat a customer based on their membership number is unlikely to generalize to customers we haven't seen before (overfitting). This issue can also occur if the samples being tested have multiple attributes with many distinct values, in which case the information gain of each of these attributes can be much higher than that of attributes with fewer distinct values.
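The membership-number pathology described above can be demonstrated numerically. This sketch uses a hypothetical ten-customer set, not data from the article: an attribute with a unique value per sample produces pure singleton children, so its gain equals the full parent entropy even though it cannot generalize.

```python
import math

def entropy(counts):
    """Shannon entropy in bits of a node, given its per-class sample counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

# Ten customers, five of each class, each with a unique membership number.
labels = ['C', 'NC'] * 5

# Splitting on the membership number puts every sample in its own pure branch,
# so the weighted child entropy is 0 and the gain equals the full parent entropy.
parent_h = entropy([labels.count('C'), labels.count('NC')])
child_h = sum((1 / len(labels)) * entropy([1]) for _ in labels)
print(parent_h - child_h)  # 1.0 -- maximal gain, despite zero predictive value
```

This is why a maximal information gain score alone is not evidence that an attribute will be useful on unseen samples.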
To counter this problem, Ross Quinlan proposed to instead choose the attribute with the highest information gain ratio from among the attributes whose information gain is average or higher.[5] This biases the decision tree against considering attributes with a large number of distinct values, while not giving an unfair advantage to attributes with very low information value, as the information value is greater than or equal to the information gain.[6]
https://en.wikipedia.org/wiki/Information_gain_in_decision_trees
In decision tree learning, information gain ratio is the ratio of information gain to the intrinsic information. It was proposed by Ross Quinlan[1] to reduce the bias towards multi-valued attributes by taking the number and size of branches into account when choosing an attribute.[2] Information gain is also known as mutual information.[3]

Information gain is the reduction in entropy produced from partitioning a set with attribute a and finding the optimal candidate that produces the highest value:

IG(T, a) = H(T) − H(T | a)

where T is a random variable and H(T | a) is the entropy of T given the value of attribute a. The information gain is equal to the total entropy for an attribute if, for each of the attribute values, a unique classification can be made for the result attribute; in this case the relative entropies subtracted from the total entropy are 0.

The split information value for a test is defined as follows:

SplitInformation(X) = −Σi=1..n [N(xi) / N(x)] · log2[N(xi) / N(x)]

where X is a discrete random variable with possible values x1, x2, ..., xn, and N(xi) is the number of times that xi occurs divided by the total count of events N(x), where x is the set of events.

The split information value is a positive number that describes the potential worth of splitting a branch from a node. This in turn is the intrinsic value that the random variable possesses, and it will be used to remove the bias in the information gain ratio calculation.
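A sketch of the split information formula and the resulting ratio (the branch sizes and gain value below are illustrative, not taken from the article's data):

```python
import math

def split_information(branch_sizes):
    """SplitInformation(X) = -sum_i N(x_i)/N(x) * log2(N(x_i)/N(x))."""
    n = sum(branch_sizes)
    return -sum((b / n) * math.log2(b / n) for b in branch_sizes if b)

def information_gain_ratio(gain, branch_sizes):
    """IGR = information gain divided by the split information of the candidate split."""
    return gain / split_information(branch_sizes)

# Splitting 8 samples into two equal branches has split info 1 bit, while
# splitting them into 8 singleton branches has split info 3 bits -- so the
# same gain is penalised three times as heavily for the many-valued attribute.
print(split_information([4, 4]))   # 1.0
print(split_information([1] * 8))  # 3.0
print(information_gain_ratio(1.5, [1] * 8))  # a hypothetical 1.5-bit gain, divided by 3
```

The denominator grows with the number of branches, which is exactly the correction to information gain that Quinlan proposed.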
The information gain ratio is the ratio between the information gain and the split information value:

IGR(T, a) = IG(T, a) / SplitInformation(T) = [H(T) − H(T | a)] / SplitInformation(T)

Using weather data published by Fordham University,[4] the table below was created. From the table, one can find the entropy, information gain, split information, and information gain ratio for each variable (outlook, temperature, humidity, and wind). These calculations are shown in the tables below.

Using the above tables, one can deduce that Outlook has the highest information gain ratio. Next, one must find the statistics for the sub-groups of the Outlook variable (sunny, overcast, and rainy); for this example, only the sunny branch is built (as shown in the table below). One can then find the statistics for the other variables (temperature, humidity, and wind) to see which has the greatest effect on the sunny element of the Outlook variable. Humidity was found to have the highest information gain ratio. One repeats the same steps as before and finds the statistics for the events of the Humidity variable (high and normal). Since the play values are either all "No" or all "Yes", the information gain ratio value will be equal to 1.

Also, now that the end of the variable chain has been reached with Wind being the last variable left, an entire root-to-leaf branch line of the decision tree can be built. Once this leaf node is reached, one follows the same procedure for the rest of the elements that have yet to be split in the decision tree.
This set of data was relatively small; however, if a larger set were used, the advantages of using the information gain ratio as the splitting factor of a decision tree would be more apparent.

Information gain ratio biases the decision tree against considering attributes with a large number of distinct values. For example, suppose that we are building a decision tree for some data describing a business's customers. Information gain ratio is used to decide which of the attributes are the most relevant; these will be tested near the root of the tree. One of the input attributes might be the customer's telephone number. This attribute has a high information gain, because it uniquely identifies each customer, but due to its high number of distinct values it will not be chosen to be tested near the root.

Although information gain ratio solves the key problem of information gain, it creates another problem: attributes with a high number of distinct values will never rank above an attribute with a lower number of distinct values.
https://en.wikipedia.org/wiki/Information_gain_ratio
This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics related to integration and probability).

Many of the concepts in information theory have separate definitions and formulas for continuous and discrete cases. For example, entropy H(X) is usually defined for discrete random variables, whereas for continuous random variables the related concept of differential entropy, written h(X), is used (see Cover and Thomas, 2006, chapter 8). Both these concepts are mathematical expectations, but the expectation is defined with an integral for the continuous case and a sum for the discrete case.

These separate definitions can be more closely related in terms of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. Thinking of both the integral and the sum as integration on a measure space allows for a unified treatment.

Consider the formula for the differential entropy of a continuous random variable X with range ℝ and probability density function f(x):

h(X) = −∫ f(x) log f(x) dx

This can usually be interpreted as the following Riemann–Stieltjes integral:

h(X) = −∫ f log f dμ

where μ is the Lebesgue measure. If instead X is discrete, with range Ω a finite set, f is a probability mass function on Ω, and ν is the counting measure on Ω, we can write:

H(X) = −∫Ω f log f dν = −Σx∈Ω f(x) log f(x)

The integral expression, and the general concept, are identical to the continuous case; the only difference is the measure used. In both cases the probability density function f is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
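The unified treatment described above can be condensed into one identity; this is a restatement in the surrounding notation (a sketch, not a quotation from the article):

```latex
% Entropy as a single integral against a reference measure \mu:
% take \mu = Lebesgue measure for the continuous case (differential entropy h),
% and \mu = counting measure for the discrete case (Shannon entropy H).
\mathrm{H}(X) \;=\; -\int f \,\log f \,\mathrm{d}\mu,
\qquad
f \;=\; \frac{\mathrm{d}P}{\mathrm{d}\mu}
\quad \text{(the Radon--Nikodym derivative of the law } P \text{ of } X \text{ w.r.t. } \mu).
```

Choosing μ is the only step that distinguishes the two cases; the integrand is the same.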
If P is the probability measure induced by X, then the integral can also be taken directly with respect to P:

h(X) = −∫ log (dP/dμ) dP

If instead of the underlying measure μ we take another probability measure Q, we are led to the Kullback–Leibler divergence: let P and Q be probability measures over the same space. Then, if P is absolutely continuous with respect to Q, written P ≪ Q, the Radon–Nikodym derivative dP/dQ exists and the Kullback–Leibler divergence can be expressed in its full generality:

DKL(P ‖ Q) = ∫ log (dP/dQ) dP

where the integral runs over the support of P. Note that we have dropped the negative sign: the Kullback–Leibler divergence is always non-negative due to Gibbs' inequality.

There is an analogy between Shannon's basic "measures" of the information content of random variables and a measure over sets. Namely, the joint entropy, conditional entropy, and mutual information can be considered as the measure of a set union, set difference, and set intersection, respectively (Reza pp. 106–108). If we associate the existence of abstract sets X̃ and Ỹ to arbitrary discrete random variables X and Y, somehow representing the information borne by X and Y respectively, then, where μ is a signed measure over these sets, we set:

H(X) = μ(X̃), H(Y) = μ(Ỹ),
H(X, Y) = μ(X̃ ∪ Ỹ),
H(X | Y) = μ(X̃ ∖ Ỹ),
I(X; Y) = μ(X̃ ∩ Ỹ).

We find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal signed measure over sets, as commonly illustrated in an information diagram. This allows the sum of two measures to be written:

μ(A) + μ(B) = μ(A ∪ B) + μ(A ∩ B)

and the analog of Bayes' theorem (μ(A) + μ(B∖A) = μ(B) + μ(A∖B)) allows the difference of two measures to be written:

μ(A) − μ(B) = μ(A∖B) − μ(B∖A)

This can be a handy mnemonic device in some situations, e.g. the identity H(X) + H(Y) = H(X, Y) + I(X; Y) mirrors μ(X̃) + μ(Ỹ) = μ(X̃ ∪ Ỹ) + μ(X̃ ∩ Ỹ).
Note that measures (expectation values of the logarithm) of true probabilities are called "entropy" and generally represented by the letter H, while other measures are often referred to as "information" or "correlation" and generally represented by the letter I. For notational simplicity, the letter I is sometimes used for all measures.

Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated to three or more arbitrary random variables. (See Reza pp. 106–108 for an informal but rather complete discussion.) Namely, H(X, Y, Z, ⋯) needs to be defined in the obvious way as the entropy of a joint distribution, and a multivariate mutual information I(X; Y; Z; ⋯) defined in a suitable manner, so that we can set:

H(X, Y, Z, ⋯) = μ(X̃ ∪ Ỹ ∪ Z̃ ∪ ⋯),
I(X; Y; Z; ⋯) = μ(X̃ ∩ Ỹ ∩ Z̃ ∩ ⋯),

in order to define the (signed) measure over the whole σ-algebra. There is no single universally accepted definition for the multivariate mutual information, but the one that corresponds here to the measure of a set intersection is due to Fano (1966: pp. 57–59). The definition is recursive. As a base case, the mutual information of a single random variable is defined to be its entropy: I(X) = H(X). Then for n ≥ 2 we set

I(X1; ⋯; Xn) = I(X1; ⋯; Xn−1) − I(X1; ⋯; Xn−1 | Xn),

where the conditional mutual information is defined as the expectation, over Xn, of the mutual information of the remaining variables conditioned on the value of Xn. The first step in the recursion yields Shannon's definition I(X1; X2) = H(X1) − H(X1 | X2). The multivariate mutual information (same as interaction information but for a change in sign) of three or more random variables can be negative as well as positive: let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then I(X; Y; Z) = −1 bit.
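The coin-flip example above can be checked numerically using the inclusion-exclusion form of the signed measure. This sketch computes I(X; Y; Z) directly from the joint distribution of two fair coins and their XOR:

```python
import math
from itertools import product

def H(dist):
    """Entropy in bits of a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p)

# Joint distribution of two independent fair coins X, Y and Z = X XOR Y.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def marginal(idx):
    """Marginal distribution over the coordinates listed in idx."""
    d = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        d[key] = d.get(key, 0) + p
    return d

# Inclusion-exclusion over the information diagram:
# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y)-H(X,Z)-H(Y,Z) + H(X,Y,Z)
i_xyz = (H(marginal([0])) + H(marginal([1])) + H(marginal([2]))
         - H(marginal([0, 1])) - H(marginal([0, 2])) - H(marginal([1, 2]))
         + H(joint))
print(i_xyz)  # -1.0
```

Each single-variable entropy is 1 bit, each pair (and the full triple) carries 2 bits, so the signed-measure intersection evaluates to 3 − 6 + 2 = −1 bit, matching the text.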
Many other variations are possible for three or more random variables: for example, I(X, Y; Z) is the mutual information of the joint distribution of X and Y relative to Z, and can be interpreted as μ((X̃ ∪ Ỹ) ∩ Z̃). Many more complicated expressions can be built this way, and still have meaning, e.g. I(X, Y; Z | W) or H(X, Z | W, Y).
https://en.wikipedia.org/wiki/Information_theory_and_measure_theory
In probability theory and statistics, the Jensen–Shannon divergence, named after Johan Jensen and Claude Shannon, is a method of measuring the similarity between two probability distributions. It is also known as information radius (IRad)[1][2] or total divergence to the average.[3] It is based on the Kullback–Leibler divergence, with some notable (and useful) differences, including that it is symmetric and always has a finite value. The square root of the Jensen–Shannon divergence is a metric often referred to as the Jensen–Shannon distance; the similarity between the distributions is greater when the Jensen–Shannon distance is closer to zero.[4][5][6]

Consider the set M¹₊(A) of probability distributions, where A is a set provided with some σ-algebra of measurable subsets. In particular we can take A to be a finite or countable set with all subsets being measurable. The Jensen–Shannon divergence (JSD) is a symmetrized and smoothed version of the Kullback–Leibler divergence D(P ∥ Q). It is defined by

JSD(P ∥ Q) = ½ D(P ∥ M) + ½ D(Q ∥ M)

where M = ½ (P + Q) is a mixture distribution of P and Q. The geometric Jensen–Shannon divergence[7] (or G-Jensen–Shannon divergence) yields a closed-form formula for the divergence between two Gaussian distributions by taking the geometric mean.

A more general definition, allowing for the comparison of more than two probability distributions, is:

JSDπ1,…,πn(P1, P2, …, Pn) = H(M) − Σi=1..n πi H(Pi), where M := Σi=1..n πi Pi

and π1, …, πn are weights that are selected for the probability distributions P1, P2, …, Pn, and H(P) is the Shannon entropy for distribution P.
For the two-distribution case described above, P1 = P, P2 = Q, π1 = π2 = ½. Hence, for those distributions P, Q:

JSD = H(M) − ½ (H(P) + H(Q))

The Jensen–Shannon divergence is bounded by 1 for two discrete probability distributions, given that one uses the base-2 logarithm:[8]

0 ≤ JSD(P ∥ Q) ≤ 1

With this normalization, it is a lower bound on the total variation distance between P and Q. With the base-e logarithm, which is commonly used in statistical thermodynamics, the upper bound is ln(2). In general, the bound in base b is logb(2):

0 ≤ JSD(P ∥ Q) ≤ logb(2)

More generally, the Jensen–Shannon divergence is bounded by logb(n) for more than two probability distributions:[8]

0 ≤ JSDπ1,…,πn(P1, …, Pn) ≤ logb(n)

The Jensen–Shannon divergence is the mutual information between a random variable X associated to a mixture distribution between P and Q and the binary indicator variable Z that is used to switch between P and Q to produce the mixture. Let X be some abstract function on the underlying set of events that discriminates well between events, and choose the value of X according to P if Z = 0 and according to Q if Z = 1, where Z is equiprobable. That is, we are choosing X according to the probability measure M = (P + Q)/2, and its distribution is the mixture distribution. We compute

I(X; Z) = H(X) − H(X | Z) = H(M) − ½ (H(P) + H(Q)) = JSD(P ∥ Q).

It follows from this result that the Jensen–Shannon divergence is bounded by 0 and 1, because mutual information is non-negative and bounded by H(Z) = 1 in base-2 logarithm.
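A minimal numerical sketch of the two-distribution definition and the base-2 bound (the example distributions are illustrative):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(P || Q) in bits for discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi)

def jsd(p, q):
    """Jensen-Shannon divergence via the mixture M = (P + Q)/2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [1.0, 0.0]
q = [0.0, 1.0]
print(jsd(p, q))  # 1.0 -- disjoint supports hit the base-2 upper bound
print(jsd(p, p))  # 0.0 -- identical distributions
print(math.sqrt(jsd(p, q)))  # the Jensen-Shannon distance, which is a metric
```

Note that jsd(p, q) stays finite even where kl(p, q) would diverge, because the mixture M assigns positive mass everywhere either distribution does.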
One can apply the same principle to a joint distribution and the product of its two marginal distributions (in analogy to the Kullback–Leibler divergence and mutual information) and measure how reliably one can decide whether a given response comes from the joint distribution or the product distribution, subject to the assumption that these are the only two possibilities.[9]

The generalization of probability distributions to density matrices allows one to define the quantum Jensen–Shannon divergence (QJSD).[10][11] It is defined for a set of density matrices (ρ1, …, ρn) and a probability distribution π = (π1, …, πn) as

QJSD(ρ1, …, ρn) = S(Σi πi ρi) − Σi πi S(ρi)

where S(ρ) is the von Neumann entropy of ρ. This quantity was introduced in quantum information theory, where it is called the Holevo information: it gives the upper bound for the amount of classical information encoded by the quantum states (ρ1, …, ρn) under the prior distribution π (see Holevo's theorem).[12] The quantum Jensen–Shannon divergence for π = (½, ½) and two density matrices is a symmetric function, everywhere defined, bounded, and equal to zero only if the two density matrices are the same. It is the square of a metric for pure states,[13] and it was recently shown that this metric property holds for mixed states as well.[14][15] The Bures metric is closely related to the quantum JS divergence; it is the quantum analog of the Fisher information metric.
The centroid C* of a finite set of probability distributions can be defined as the minimizer of the average sum of the Jensen–Shannon divergences between a probability distribution and the prescribed set of distributions:

C* = arg minQ Σi=1..n JSD(Pi ∥ Q)

An efficient algorithm[16] (CCCP), based on a difference of convex functions, has been reported for calculating the Jensen–Shannon centroid of a set of discrete distributions (histograms).

The Jensen–Shannon divergence has been applied in bioinformatics and genome comparison,[17][18] in protein surface comparison,[19] in the social sciences,[20] in the quantitative study of history,[21] in fire experiments,[22] and in machine learning.[23]
https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence
In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states. It is the quantum mechanical analog of relative entropy. For simplicity, it will be assumed that all objects in the article are finite-dimensional.

We first discuss the classical case. Suppose the probabilities of a finite sequence of events are given by the probability distribution P = {p1, …, pn}, but somehow we mistakenly assumed them to be Q = {q1, …, qn}. For instance, we can mistake an unfair coin for a fair one. According to this erroneous assumption, our uncertainty about the j-th event, or equivalently, the amount of information provided after observing the j-th event, is

−log qj.

The (assumed) average uncertainty of all possible events is then

−Σj pj log qj.

On the other hand, the Shannon entropy of the probability distribution p, defined by

H(p) = −Σj pj log pj,

is the real amount of uncertainty before observation. Therefore the difference between these two quantities is a measure of the distinguishability of the two probability distributions p and q. This is precisely the classical relative entropy, or Kullback–Leibler divergence:

DKL(P ∥ Q) = Σj pj log (pj / qj).

As with many other objects in quantum information theory, quantum relative entropy is defined by extending the classical definition from probability distributions to density matrices. Let ρ be a density matrix.
The von Neumann entropy of ρ, which is the quantum mechanical analog of the Shannon entropy, is given by

S(ρ) = −Tr(ρ log ρ).

For two density matrices ρ and σ, the quantum relative entropy of ρ with respect to σ is defined by

S(ρ ∥ σ) = Tr(ρ log ρ) − Tr(ρ log σ).

We see that, when the states are classically related, i.e. ρσ = σρ, the definition coincides with the classical case, in the sense that if ρ = S D1 Sᵀ and σ = S D2 Sᵀ with D1 = diag(λ1, …, λn) and D2 = diag(μ1, …, μn) (because ρ and σ commute, they are simultaneously diagonalizable), then

S(ρ ∥ σ) = Σj=1..n λj ln(λj / μj)

is just the ordinary Kullback–Leibler divergence of the probability vector (λ1, …, λn) with respect to the probability vector (μ1, …, μn).

In general, the support of a matrix M is the orthogonal complement of its kernel, i.e. supp(M) = ker(M)⊥. When considering the quantum relative entropy, we assume the convention that −s · log 0 = ∞ for any s > 0. This leads to the definition that

S(ρ ∥ σ) = ∞ whenever supp(ρ) is not contained in supp(σ).

This can be interpreted in the following way. Informally, the quantum relative entropy is a measure of our ability to distinguish two quantum states, with larger values indicating states that are more different. Being orthogonal represents the most different that quantum states can be, and this is reflected by a non-finite quantum relative entropy for orthogonal quantum states. Following the argument given in the Motivation section, if we erroneously assume that the state ρ has support in ker(σ), this is an error impossible to recover from.
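For the commuting (simultaneously diagonalizable) case above, the quantum relative entropy reduces to the classical Kullback–Leibler divergence of the eigenvalue vectors. A sketch using the natural logarithm, with hypothetical eigenvalue lists, including the infinite-value convention for mismatched supports:

```python
import math

def quantum_relative_entropy_diag(lam, mu):
    """S(rho || sigma) for commuting density matrices, given their eigenvalue
    lists lam (for rho) and mu (for sigma) in a common eigenbasis; in this
    case it reduces to the classical KL divergence sum_j lam_j ln(lam_j/mu_j)."""
    s = 0.0
    for l, m in zip(lam, mu):
        if l > 0 and m == 0:
            return math.inf  # supp(rho) not contained in supp(sigma)
        if l > 0:            # convention: 0 * log 0 = 0
            s += l * math.log(l / m)
    return s

rho_eigs = [0.9, 0.1]
sigma_eigs = [0.5, 0.5]
print(quantum_relative_entropy_diag(rho_eigs, sigma_eigs))        # ≈ 0.368 nats
print(quantum_relative_entropy_diag([1.0, 0.0], [0.0, 1.0]))      # inf: orthogonal states
```

The second call illustrates the text's point: orthogonal states are maximally distinguishable, and the relative entropy diverges.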
However, one should be careful not to conclude that divergence of the quantum relative entropy S(ρ ∥ σ) implies that the states ρ and σ are orthogonal, or even very different by other measures. Specifically, S(ρ ∥ σ) can diverge when ρ and σ differ by a vanishingly small amount as measured by some norm. For example, let σ have the diagonal representation

σ = Σn λn |fn⟩⟨fn|

with λn > 0 for n = 0, 1, 2, … and λn = 0 for n = −1, −2, …, where {|fn⟩, n ∈ ℤ} is an orthonormal set. The kernel of σ is the space spanned by the set {|fn⟩, n = −1, −2, …}. Next let

ρ = σ + ε |f−1⟩⟨f−1| − ε |f1⟩⟨f1|

for a small positive number ε. As ρ has support (namely the state |f−1⟩) in the kernel of σ, S(ρ ∥ σ) is divergent even though the trace norm of the difference ρ − σ is 2ε. This means that the difference between ρ and σ as measured by the trace norm is vanishingly small as ε → 0, even though S(ρ ∥ σ) is divergent (i.e. infinite). This property of the quantum relative entropy represents a serious shortcoming if not treated with care.

For the classical Kullback–Leibler divergence, it can be shown that

DKL(P ∥ Q) = Σj pj log (pj / qj) ≥ 0,

and the equality holds if and only if P = Q.
Colloquially, this means that the uncertainty calculated using erroneous assumptions is always greater than the real amount of uncertainty. To show the inequality, we rewrite

DKL(P ∥ Q) = Σj pj log (pj / qj) = −Σj pj log (qj / pj).

Notice that log is a concave function; therefore −log is convex. Applying Jensen's inequality, we obtain

DKL(P ∥ Q) = Σj pj (−log (qj / pj)) ≥ −log (Σj qj) = 0.

Jensen's inequality also states that equality holds if and only if, for all i, qi = (Σ qj) pi, i.e. p = q.

Klein's inequality states that the quantum relative entropy is non-negative in general, and zero if and only if ρ = σ.

Proof. Let ρ and σ have spectral decompositions

ρ = Σi pi |vi⟩⟨vi|,  σ = Σj qj |wj⟩⟨wj|.

So

S(ρ ∥ σ) = Tr(ρ log ρ) − Tr(ρ log σ).

Direct calculation gives

S(ρ ∥ σ) = Σi pi log pi − Σi,j pi Pij log qj

where Pij = |vi* wj|². Since the matrix (Pij)ij is a doubly stochastic matrix and −log is a convex function, the above expression is

≥ Σi pi log (pi / ri).

Define ri = Σj qj Pij. Then {ri} is a probability distribution. From the non-negativity of classical relative entropy, we have

S(ρ ∥ σ) ≥ 0.

The second part of the claim follows from the fact that, since −log is strictly convex, equality is achieved in the bound above if and only if (Pij) is a permutation matrix, which implies ρ = σ, after a suitable labeling of the eigenvectors {vi} and {wi}.[1]: 513

The relative entropy is jointly convex. For 0 ≤ λ ≤ 1 and states ρ1, ρ2, σ1, σ2 we have

D(λρ1 + (1−λ)ρ2 ∥ λσ1 + (1−λ)σ2) ≤ λ D(ρ1 ∥ σ1) + (1−λ) D(ρ2 ∥ σ2).

The relative entropy decreases monotonically under completely positive trace-preserving (CPTP) operations N on density matrices:

S(N(ρ) ∥ N(σ)) ≤ S(ρ ∥ σ).

This inequality is called the monotonicity of quantum relative entropy, and it was first proved by Göran Lindblad.

Let a composite quantum system have state space H = HA ⊗ HB, and let ρ be a density matrix acting on H.
The relative entropy of entanglement of ρ is defined by

DREE(ρ) = minσ S(ρ ∥ σ)

where the minimum is taken over the family of separable states σ. A physical interpretation of the quantity is the optimal distinguishability of the state ρ from separable states. Clearly, when ρ is not entangled, its relative entropy of entanglement is zero by Klein's inequality.

One reason the quantum relative entropy is useful is that several other important quantum information quantities are special cases of it. Often, theorems are stated in terms of the quantum relative entropy, which leads to immediate corollaries concerning the other quantities. Below, we list some of these relations.

Let ρAB be the joint state of a bipartite system with subsystem A of dimension nA and B of dimension nB. Let ρA, ρB be the respective reduced states, and IA, IB the respective identities. The maximally mixed states are IA/nA and IB/nB. Then it is possible to show with direct computation that

S(ρAB ∥ ρA ⊗ ρB) = I(A : B),
S(ρAB ∥ ρA ⊗ IB/nB) = log nB − S(B | A),

where I(A : B) is the quantum mutual information and S(B | A) is the quantum conditional entropy.
https://en.wikipedia.org/wiki/Quantum_relative_entropy
Solomon Kullback (April 3, 1907 – August 5, 1994) was an American cryptanalyst and mathematician, who was one of the first three employees hired by William F. Friedman at the US Army's Signal Intelligence Service (SIS) in the 1930s, along with Frank Rowlett and Abraham Sinkov. He went on to a long and distinguished career at SIS and its eventual successor, the National Security Agency (NSA). Kullback was the Chief Scientist at the NSA until his retirement in 1962, whereupon he took a position at the George Washington University. The Kullback–Leibler divergence is named after Kullback and Richard Leibler.

Kullback was born to Jewish parents in Brooklyn, New York. His father Nathan had been born in Vilna, Russian Empire (now Vilnius, Lithuania), had immigrated[1] to the US as a young man circa 1905, and became a naturalized American in 1911.[2] Kullback attended Boys High School in Brooklyn. He then went to City College of New York, graduating with a BA in 1927 and an MA in math in 1929.[3] He completed a doctorate in math from George Washington University in 1934. His intention had been to teach, and he returned to Boys High School to do so, but found it not to his taste; he discovered his real interest was using mathematics, not teaching it.[citation needed]

At the suggestion of Abraham Sinkov, who showed him a Civil Service flyer for "junior mathematicians" at US$2,000 per year, he took the examination. Both passed, and were assigned to Washington, D.C. as junior cryptanalysts. Upon arrival in Washington, Kullback was assigned to William F. Friedman. Friedman had begun an intensive program of training in cryptology for his new civilian employees. For several summers running, the SIS cryptanalysts attended training camps at Fort Meade until they received commissions as reserve officers in the Army. Kullback and Sinkov took Friedman's admonitions on education seriously and spent the next several years attending night classes; both received their doctorates in mathematics.
Afterward, Kullback rediscovered a love of teaching; he began offering evening classes in mathematics at George Washington University from 1939. Once they had completed the training, the three were put to the work for which they had actually been hired: compilation of cipher or code material for the U.S. Army. Another task was to test commercial cipher devices which vendors wished to sell to the U.S. government. Kullback worked in partnership with Frank Rowlett against RED cipher machine messages. Almost overnight, they unravelled the keying system and then the machine pattern, with nothing but the intercepted messages in hand. Using the talents of linguist John Hurt to translate text, SIS started issuing current intelligence to military decision-makers.

In May 1942, five months after the attack on Pearl Harbor, Kullback, by then a Major, was sent to Britain.[4] He learned at Bletchley Park that the British were producing intelligence of high quality by exploiting the Enigma machine. He also cooperated with the British in the solution of more conventional German codebook-based systems. Shortly after his return to the States, Kullback moved into the Japanese section as its chief.

When the National Security Agency (NSA) was formed in 1952, Rowlett became chief of cryptanalysis. The primary problem facing research and development in the post-war period was the development of high-speed processing equipment. Kullback supervised a team of about 60 people, including such innovative thinkers in automated data processing development as Leo Rosen and Sam Snyder. His staff pioneered new forms of input and memory, such as magnetic tape and drum memory, and compilers to make machines truly "multi-purpose". Kullback gave priority to using computers to generate communications security (COMSEC) materials.

Kullback's book Information Theory and Statistics was published by John Wiley & Sons in 1959. The book was republished, with additions and corrections, by Dover Publications in 1968.
Solomon Kullback retired from the NSA in 1962 and focused on his teaching at George Washington University and on publishing new papers. In 1963 he was elected a Fellow of the American Statistical Association.[5] He reached the rank of colonel, and was inducted into the Military Intelligence Hall of Fame. Kullback is remembered by his colleagues at NSA as straightforward; one described him as "totally guileless, you always knew where you stood with him." One former NSA senior recalled him as a man of unlimited energy and enthusiasm, whose judgment was usually "sound and right."
https://en.wikipedia.org/wiki/Solomon_Kullback
Richard A. Leibler (March 18, 1914, Chicago, Illinois – October 25, 2003, Reston, Virginia) was an American mathematician and cryptanalyst. He received his A.M. in mathematics from Northwestern University and his Ph.D. from the University of Illinois in 1939. While working at the National Security Agency, he and Solomon Kullback formulated the Kullback–Leibler divergence,[1] a measure of similarity between probability distributions which has found important applications in information theory and cryptology. Leibler is also credited by the NSA as having opened up "new methods of attack" in the celebrated VENONA code-breaking project during 1949–1950;[2] this may be a reference to his joint paper with Kullback, which was published in the open literature in 1951 and was immediately noted by Soviet cryptologists.[3] He was director of the Communications Research Division at the Institute for Defense Analyses from 1962 to 1977, during which time he was the boss of Jim Simons.[4] He was inducted into the NSA Hall of Honor for his efforts against the VENONA code.[5]
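For discrete distributions, the Kullback–Leibler divergence takes the form D(P‖Q) = Σ_x P(x) ln(P(x)/Q(x)). The following minimal sketch (the function name and example vectors are mine, not from the source article) computes it and illustrates that, despite being called a measure of similarity, it is not symmetric in its arguments:

```python
import math

def kl_divergence(p, q):
    """D(P || Q) = sum_x p(x) * ln(p(x)/q(x)) for discrete distributions,
    given as equal-length probability vectors; the result is in nats.
    Terms with p(x) = 0 contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, q))  # positive, and != kl_divergence(q, p): not symmetric
print(kl_divergence(p, p))  # 0.0 for identical distributions
```

The asymmetry is one reason the divergence, unlike a true metric, is usually read as "the divergence of Q from P" rather than a distance between the two.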
https://en.wikipedia.org/wiki/Richard_Leibler
In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions.[1] It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations. It is not a metric, despite being named a "distance", since it does not obey the triangle inequality.

Both the Bhattacharyya distance and the Bhattacharyya coefficient are named after Anil Kumar Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute.[2] He developed the measure through a series of papers.[3][4][5] He first developed the method to measure the distance between two non-normal distributions, illustrating it with classical multinomial populations;[3] this work, despite being submitted for publication in 1941, appeared almost five years later in Sankhya.[3][2] Bhattacharyya then worked toward developing a distance measure for probability distributions that are absolutely continuous with respect to the Lebesgue measure, publishing his progress in 1942 in the Proceedings of the Indian Science Congress,[4] with the final work appearing in 1943 in the Bulletin of the Calcutta Mathematical Society.[5]

For probability distributions P and Q on the same domain X, the Bhattacharyya distance is defined as

  D_B(P, Q) = −ln( BC(P, Q) ),

where

  BC(P, Q) = Σ_{x∈X} √( P(x) Q(x) )

is the Bhattacharyya coefficient for discrete probability distributions.
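For discrete distributions, the coefficient BC(P, Q) = Σ √(P(x)Q(x)) and the distance D_B = −ln BC translate directly into code. A minimal sketch (helper names are mine, not from any particular library), taking distributions as equal-length probability vectors:

```python
import math

def bhattacharyya_coefficient(p, q):
    """BC(P, Q) = sum_x sqrt(P(x) * Q(x)) for discrete distributions."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def bhattacharyya_distance(p, q):
    """D_B(P, Q) = -ln(BC(P, Q)); taken as infinite when supports are disjoint."""
    bc = bhattacharyya_coefficient(p, q)
    return math.inf if bc == 0 else -math.log(bc)

p = [0.2, 0.5, 0.3]
q = [0.1, 0.4, 0.5]
print(bhattacharyya_coefficient(p, q))              # strictly between 0 and 1 here
print(bhattacharyya_distance(p, q))                 # >= 0; 0 only when P = Q
print(bhattacharyya_distance([1.0, 0.0], [0.0, 1.0]))  # inf: no overlap at all
```

Identical distributions give BC = 1 and D_B = 0; distributions with disjoint support give BC = 0, for which the distance is conventionally infinite.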
For continuous probability distributions, with P(dx) = p(x) dx and Q(dx) = q(x) dx where p(x) and q(x) are the probability density functions, the Bhattacharyya coefficient is defined as

  BC(P, Q) = ∫_X √( p(x) q(x) ) dx.

More generally, given two probability measures P, Q on a measurable space (X, B), let λ be a (sigma-finite) measure such that P and Q are absolutely continuous with respect to λ, i.e. such that P(dx) = p(x) λ(dx) and Q(dx) = q(x) λ(dx) for probability density functions p, q with respect to λ, defined λ-almost everywhere. Such a measure, even such a probability measure, always exists, e.g. λ = (P + Q)/2. Then define the Bhattacharyya measure on (X, B) by

  bc(dx) = √( p(x) q(x) ) λ(dx).

It does not depend on the measure λ, for if we choose a measure μ such that λ and another measure choice λ′ are absolutely continuous with respect to μ, i.e. λ = l(x) μ and λ′ = l′(x) μ, then

  P(dx) = p(x) λ(dx) = p(x) l(x) μ(dx) = p′(x) λ′(dx) = p′(x) l′(x) μ(dx),

and similarly for Q. We then have

  bc(dx) = √( p(x) q(x) ) λ(dx) = √( p(x) l(x) · q(x) l(x) ) μ(dx) = √( p′(x) l′(x) · q′(x) l′(x) ) μ(dx) = √( p′(x) q′(x) ) λ′(dx).

We finally define the Bhattacharyya coefficient

  BC(P, Q) = ∫_X bc(dx) = ∫_X √( p(x) q(x) ) λ(dx).

By the above, the quantity BC(P, Q) does not depend on λ, and by the Cauchy inequality 0 ≤ BC(P, Q) ≤ 1.
Using P(dx) = p(x) λ(dx) and Q(dx) = q(x) λ(dx),

  BC(P, Q) = ∫_X √( p(x)/q(x) ) Q(dx) = ∫_X √( P(dx)/Q(dx) ) Q(dx) = E_Q[ √( P(dx)/Q(dx) ) ].

Let p ~ N(μ_p, σ_p²) and q ~ N(μ_q, σ_q²), where N(μ, σ²) is the normal distribution with mean μ and variance σ²; then

  D_B(p, q) = (1/4) ln( (1/4)( σ_p²/σ_q² + σ_q²/σ_p² + 2 ) ) + (1/4) (μ_p − μ_q)² / (σ_p² + σ_q²).

And in general, given two multivariate normal distributions p_i = N(μ_i, Σ_i),

  D_B = (1/8) (μ_1 − μ_2)ᵀ Σ⁻¹ (μ_1 − μ_2) + (1/2) ln( det Σ / √( det Σ_1 det Σ_2 ) ), where Σ = (Σ_1 + Σ_2)/2.[6]

Note that the first term is a squared Mahalanobis distance.

The coefficient and distance satisfy 0 ≤ BC ≤ 1 and 0 ≤ D_B ≤ ∞. D_B does not obey the triangle inequality, though the Hellinger distance √(1 − BC(p, q)) does. The Bhattacharyya distance can be used to upper and lower bound the Bayes error rate:

  1/2 − (1/2) √(1 − 4ρ²) ≤ L* ≤ ρ,

where ρ = E[ √( η(X)(1 − η(X)) ) ] and η(X) = P(Y = 1 | X) is the posterior probability.[7] The Bhattacharyya coefficient quantifies the "closeness" of two random statistical samples.
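The well-known closed form for two univariate normals, D_B = (1/4) ln((1/4)(σ_p²/σ_q² + σ_q²/σ_p² + 2)) + (1/4)(μ_p − μ_q)²/(σ_p² + σ_q²), can be checked against direct numerical integration of BC = ∫ √(p(x)q(x)) dx. A rough sketch (midpoint rule; the helper names and integration window are my own choices):

```python
import math

def db_gaussian(mu_p, var_p, mu_q, var_q):
    """Closed-form Bhattacharyya distance between N(mu_p, var_p) and N(mu_q, var_q)."""
    return (0.25 * math.log(0.25 * (var_p / var_q + var_q / var_p + 2.0))
            + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q))

def db_numeric(mu_p, var_p, mu_q, var_q, lo=-20.0, hi=20.0, n=40_000):
    """D_B = -ln of the integral of sqrt(p(x) q(x)), via the midpoint rule."""
    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    h = (hi - lo) / n
    bc = h * sum(math.sqrt(pdf(lo + (i + 0.5) * h, mu_p, var_p)
                           * pdf(lo + (i + 0.5) * h, mu_q, var_q))
                 for i in range(n))
    return -math.log(bc)

print(db_gaussian(0.0, 1.0, 1.0, 2.0))  # ~0.1128
print(db_numeric(0.0, 1.0, 1.0, 2.0))   # agrees to several decimal places
```

Identical parameters make both terms vanish, so D_B = 0, consistent with BC = 1 for identical distributions.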
Given two sequences of samples from distributions P and Q, bin them into n buckets, and let the frequency of samples from P in bucket i be p_i, and similarly for q_i; then the sample Bhattacharyya coefficient is

  BC(p, q) = Σ_{i=1}^{n} √( p_i q_i ),

which is an estimator of BC(P, Q). The quality of the estimate depends on the choice of buckets: too few buckets would overestimate BC(P, Q), while too many would underestimate it.

A common task in classification is estimating the separability of classes. Up to a multiplicative factor, the squared Mahalanobis distance is a special case of the Bhattacharyya distance when the two classes are normally distributed with the same variances. When two classes have similar means but significantly different variances, the Mahalanobis distance would be close to zero, while the Bhattacharyya distance would not be. The Bhattacharyya coefficient is used in the construction of polar codes.[8] The Bhattacharyya distance is used in feature extraction and selection,[9] image processing,[10] speaker recognition,[11] phone clustering,[12] and in genetics.[13]
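The binned estimator can be sketched as follows; the shared-bin scheme over the pooled sample range, the bin count of 20, and the function names are illustrative choices of mine, not prescribed by the source:

```python
import math
import random

def sample_bhattacharyya(xs, ys, n_bins=20):
    """Estimate BC(P, Q) from two samples by histogramming over shared bins."""
    lo, hi = min(min(xs), min(ys)), max(max(xs), max(ys))
    width = (hi - lo) / n_bins
    def freqs(samples):
        counts = [0] * n_bins
        for v in samples:
            counts[min(int((v - lo) / width), n_bins - 1)] += 1
        return [c / len(samples) for c in counts]
    p, q = freqs(xs), freqs(ys)
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

random.seed(0)
a = [random.gauss(0, 1) for _ in range(5000)]
b = [random.gauss(0, 1) for _ in range(5000)]  # drawn from the same distribution as a
c = [random.gauss(3, 1) for _ in range(5000)]  # mean shifted by three standard deviations
print(sample_bhattacharyya(a, b))  # close to 1: heavy overlap
print(sample_bhattacharyya(a, c))  # clearly below 1: little overlap
```

Varying `n_bins` on the same data illustrates the tradeoff described above: very coarse binning pushes the estimate toward 1, while very fine binning pushes it toward 0.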
https://en.wikipedia.org/wiki/Bhattacharyya_distance
Shallow parsing (also chunking or light parsing) is an analysis of a sentence which first identifies constituent parts of sentences (nouns, verbs, adjectives, etc.) and then links them to higher-order units that have discrete grammatical meanings (noun groups or phrases, verb groups, etc.). While the most elementary chunking algorithms simply link constituent parts on the basis of elementary search patterns (e.g., as specified by regular expressions), approaches that use machine learning techniques (classifiers, topic modeling, etc.) can take contextual information into account and thus compose chunks in such a way that they better reflect the semantic relations between the basic constituents.[1] That is, these more advanced methods get around the problem that combinations of elementary constituents can have different higher-level meanings depending on the context of the sentence. It is a technique widely used in natural language processing. It is similar to the concept of lexical analysis for computer languages. Under the name "shallow structure hypothesis", it is also used as an explanation for why second language learners often fail to parse complex sentences correctly.[2]
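The elementary, pattern-based style of chunking can be sketched in a few lines: group a POS-tagged sentence into noun-phrase chunks matching the pattern DET? ADJ* NOUN+, passing everything else through unchunked. The tagset and example sentence here are illustrative assumptions of mine, not drawn from any particular corpus or toolkit:

```python
# Toy shallow parser: chunk noun phrases of the shape DET? ADJ* NOUN+.
def chunk_nps(tagged):
    """tagged: list of (word, tag) pairs. Returns a list of (label, words)."""
    chunks, i = [], 0
    while i < len(tagged):
        j = i
        if j < len(tagged) and tagged[j][1] == "DET":      # optional determiner
            j += 1
        while j < len(tagged) and tagged[j][1] == "ADJ":   # any number of adjectives
            j += 1
        k = j
        while k < len(tagged) and tagged[k][1] == "NOUN":  # one or more nouns
            k += 1
        if k > j:   # found at least one noun: emit the whole span as an NP chunk
            chunks.append(("NP", [w for w, _ in tagged[i:k]]))
            i = k
        else:       # no noun phrase starts here: pass the token through as-is
            chunks.append((tagged[i][1], [tagged[i][0]]))
            i += 1
    return chunks

sentence = [("the", "DET"), ("quick", "ADJ"), ("brown", "ADJ"), ("fox", "NOUN"),
            ("jumps", "VERB"), ("over", "ADP"), ("the", "DET"), ("lazy", "ADJ"),
            ("dog", "NOUN")]
print(chunk_nps(sentence))
# [('NP', ['the', 'quick', 'brown', 'fox']), ('VERB', ['jumps']),
#  ('ADP', ['over']), ('NP', ['the', 'lazy', 'dog'])]
```

Real systems express such patterns as regular expressions over tag sequences and, as the text notes, machine-learned chunkers replace the fixed pattern with context-sensitive classification.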
https://en.wikipedia.org/wiki/Chunking_(computational_linguistics)
In systemic functional grammar (SFG), a nominal group is a group of words that represents or describes an entity, for example "The nice old English police inspector who was sitting at the table with Mr Morse". Grammatically, this wording can be understood as a nominal group (a description of someone), which functions as the subject of the information exchange and as the person being identified as "Mr Morse". A nominal group is widely regarded as synonymous with noun phrase in other grammatical models.[1][2] However, there are two major differences between the functional notion of a nominal group and the formal notion of a noun phrase that must be taken into account. Firstly, the coiner of the term, Halliday, and some of his followers draw a theoretical distinction between the terms group and phrase. Halliday argues that "A phrase is different from a group in that, whereas a group is an expansion of a word, a phrase is a contraction of a clause".[3] Halliday borrowed the term group from the linguist/classicist Sydney Allen.[4] Secondly, the functional notion of nominal group differs from the formal notion of noun phrase because the first is anchored on the thing being described whereas the second is anchored on word classes. For that reason, one can analyse the nominal groups some friends and a couple of friends very similarly in terms of function: a thing/entity quantified in an imprecise fashion; whereas one must recognise some friends as being a simple noun phrase and a couple of friends as being a noun phrase embedded in another noun phrase (one noun phrase per noun). In short, these notions are different even if formalists do not perceive them as different.
SFG postulates a rank scale, in which the highest unit is the clause and the lowest is the morpheme: coming from the largest unit down, we can divide the parts of a clause into groups and phrases; and coming from the smallest units up, we can group morphemes into words. Typically, groups are made out of words while phrases are made out of groups: e.g. clause constituents (the apples) (are) (on the chair), phrase constituents (on) (the chair), group constituents (the) (apples), word constituents (apple)(s). In that sense, each unit of a rank typically consists of one or more units of the rank below, not of the same rank (see the rank-shifting section for exceptions to this typical pattern).[5] At group/phrase rank, besides the nominal group, there are also the "verbal group", the "adverbial group", the "prepositional group" (e.g. from under), and the "prepositional phrase" (e.g. from under the sofa). The term "nominal" in "nominal group" was adopted because it denotes a wider class of phenomena than the term noun.[6] The nominal group is a structure which includes nouns, adjectives, numerals and determiners, which is associated with the thing under description (a.k.a. entity), and whose supporting logic is description logic. The term noun has a narrower purview and is detached from any notion of entity description. For instance, the words bit/bits in a bit of time, a little bit of peanut butter, and bits of information can be understood as nouns, but they can hardly be understood as representing some entity on their own. In that sense, these words shall be understood as being the head of a "noun phrase" in a formalist account of grammar, but as a portion of some substance in a nominal group.
Since formal linguists are interested in the recurring patterns of word classes such as "a" + "[noun]" and not in the way humans describe entities, they recruit the term "noun phrase" for their grammatical descriptions, a structure defined as a pattern around a noun, and not as a way of describing an entity such as the "nominal group". In other words, given the different architectures of language that are assumed by functional and formal theories of language, the terms "noun phrase" and "nominal group" must be seen to be doing quite different descriptive work. For instance, these group/phrase elements are re-interpreted as functional categories, in the first instance as process, participant and circumstance, with the nominal group as the pre-eminent structure for the expression of participant roles in discourse.[7] Within Halliday's functionalist classification of this structure, he identifies the functions of Deictic, Numerative, Epithet, Classifier and Thing. The word classes which typically realise these functions are, in outline:

Deictic – determiner (e.g. "those")
Numerative – numeral (e.g. "five")
Epithet – adjective (e.g. "beautiful", "shiny")
Classifier – noun or adjective (e.g. "Jonathan")
Thing – noun (e.g. "apples")

Within a clause, a definite nominal group functions as if it were a proper noun. The proper noun (or the common noun when there is no proper noun) functions as the head of the nominal group; all other constituents work as modifiers of the head. The modifiers preceding the head are called premodifiers and the ones after it postmodifiers. The modifiers that represent a circumstance such as a location are called qualifiers. In English, most postmodifiers are qualifiers.[8] English is a highly nominalised language, and thus lexical meaning is largely carried in nominal groups.
This is partly because of the flexibility of these groups in encompassing premodifiers and qualification, and partly because of the availability of a special resource called the thematic equative, which has evolved as a means of packaging the message of a clause in the desired thematic form[9] (for example, the clause [What attracts her to the course] is [the depth of understanding it provides] is structured as [nominal group A] = [nominal group B]). Many things are most readily expressed in nominal constructions; this is particularly so in registers that have to do with the world of science and technology, where things, and the ideas behind them, are multiplying and proliferating all the time.[10]

Like the English clause, the nominal group is a combination of three distinct functional components, or metafunctions, which express three largely independent sets of semantic choice: the ideational (what the clause or nominal group is about); the interpersonal (what the clause is doing as a verbal exchange between speaker and listener, or writer and reader); and the textual (how the message is organised: how it relates to the surrounding text and the context in which it occurs). In a clause, each metafunction is a virtually complete structure, and the three structures combine into one in interpretation. However, beneath the clause, in phrases and in groups such as the nominal group, the three structures are incomplete of themselves and need to be interpreted separately, "as partial contributions to a single structural line".[11] In nominal groups, the ideational structure is by far the most significant in premodifying the head. To interpret premodification, it is necessary to split the ideational metafunction into two dimensions: the experiential and the logical. The experiential dimension concerns how meaning is expressed in the group as the organisation of experience. The critical question is how and whether the head is modified.
The head does not have to be modified to constitute a group in this technical sense.[12] Thus, four types of nominal group are possible: the head alone ("apples"), the head with premodifiers ("Those five beautiful shiny Jonathan apples"), the head with a qualifier ("apples sitting on the chair"), and the full structure of premodification and qualification, as above. In this example, the premodifiers characterise the head, on what is known as the uppermost rank (see "Rankshifting" below). In some formal grammars, all of the premodifying items in the example above, except for "Those", would be referred to as adjectives, despite the fact that each item has a quite different grammatical function in the group. An epithet indicates some quality of the head: "shiny" is an experiential epithet, since it describes an objective quality that we can all experience; by contrast, "beautiful" is an interpersonal epithet, since it is an expression of the speaker's subjective attitude towards the apples, and thus partly a matter of the relationship between speaker and listener. "Jonathan" is a classifier, which indicates a particular subclass of the head (not Arkansas Black or Granny Smith apples, but Jonathan apples); a classifier cannot usually be intensified ("very Jonathan apples" is ungrammatical). "Five" is a numerative, and unlike the other three items, describes not a quality of the head but its quantity.[13] The experiential pattern in nominal groups opens with the identification of the head in terms of the immediate context of the speech event, the here-and-now, what Halliday calls "the speaker–now matrix".
Take, for example, the first word of the nominal group exemplified above, "those": "those apples", as opposed to "these apples", means "you know the apples I mean: the ones over there, not close to me". Distance or proximity to the immediate speech event could also be in temporal terms (the ones we picked last week, not today), or in terms of the surrounding text (the apples mentioned in the previous paragraph in another context, not in the previous sentence in the same context as now) and the assumed background knowledge of the listener/speaker ("the apple" as opposed to "an apple" means "the one you know about"). The same function is true of other deictics, such as "my", "all", "each", "no", "some", and "either": they establish the relevance of the head, they "fix" it, as it were, in terms of the speech event. There is a progression from this opening of the nominal group, with the greatest specifying potential, through items that have successively less identifying potential and are increasingly permanent as attributes of the head. As Halliday points out, "the more permanent the attribute of a thing, the less likely it is to identify it in a particular context"[14] (that is, of the speech event). The most permanent item, of course, is the head itself. This pattern from transient specification to permanent attribute explains why the items are ordered as they are in a nominal group. The deictic ("those") comes first; this is followed by the numerative, if there is one ("five"), since the number of apples, in this case, is the least permanent attribute; next comes the interpersonal epithet ("beautiful") which, arising from the speaker's opinion, is closer to the speaker–now matrix than the more objectively testable experiential epithet ("shiny"); then comes the more permanent classifier ("Jonathan", a type of apple), leading to the head itself.
This ordering of increasing permanence from left to right is why we are more likely to say "her new black car" than "her black new car": the newness will recede sooner than the blackness. The logic of the group in English is recursive, based on successive subsets:[15] working leftwards from the head, the first question that can be asked is "What kind of apples?" (Jonathan apples.) Then, "What kind of Jonathan apples?" (Shiny Jonathan apples.) "What kind of shiny Jonathan apples?" (Beautiful shiny Jonathan apples.) "What kind of beautiful shiny Jonathan apples?" Here the recursive logic changes, since this is a multivariate, not a univariate, nominal group: the question now is "How many beautiful shiny Jonathan apples?" and after that, "How do those five beautiful shiny Jonathan apples relate to me, the speaker/writer, now?" ("Those ones".) In contrast, the logical questions of a univariate group would be unchanged right through, typical of long strings of nouns in news headlines and signage[16] ("International departure lounge ladies' first-class washroom"). The postmodifiers here contain information that is rankshifted. Returning to the original example above, "on the chair" is a prepositional phrase embedded within the nominal group; this prepositional phrase itself contains a nominal group ("the chair"), comprising the head ("chair") and a deictic ("the") which indicates whether some specific subset of the head is intended (here, a specific chair we can identify from the context).[17] By contrast, "Those" is a deictic on the uppermost rank and is applied to the head on the uppermost rank, "apples"; here, "those" means "You know which apples I mean: the ones over there".
https://en.wikipedia.org/wiki/Nominal_group_(functional_grammar)
Flash fiction is a brief fictional narrative[1] that still offers character and plot development. Identified varieties, many of them defined by word count, include the six-word story;[2] the 280-character story (also known as "twitterature");[3] the "dribble" (also known as the "minisaga", 50 words);[2] the "drabble" (also known as "microfiction", 100 words);[2] "sudden fiction" (up to 750 words);[4] "flash fiction" (up to 1,000 words); and the "microstory".[5] Some commentators have suggested that flash fiction possesses a unique literary quality in its ability to hint at or imply a larger story.[6]

Flash fiction has roots going back to prehistory, recorded at the origin of writing, including fables and parables, notably Aesop's Fables in the West, and the Panchatantra and Jataka tales in India. Later examples include the tales of Nasreddin, and Zen koans such as The Gateless Gate. In the United States, early forms of flash fiction can be found in the 19th century, notably in the figures of Walt Whitman, Ambrose Bierce, and Kate Chopin.[7] In the 1920s, flash fiction was referred to as the "short short story" and was associated with Cosmopolitan magazine, and in the 1930s, collected in anthologies such as The American Short Short Story.[8] Somerset Maugham was a notable proponent, with his Cosmopolitans: Very Short Stories (1936) being an early collection. In Japan, flash fiction was popularized in the post-war period particularly by Michio Tsuzuki (都筑道夫).
In 1986, Jerome Stern at Florida State University organized the World's Best Short-Short Story Contest for stories of fewer than 250 words. Michael Martone, the first winner, received $100 and a crate of Florida oranges as the prize.[9] The Southeast Review continues the contest but has increased the maximum to 500 words.[10] In 1996, Stern published Micro Fiction: An Anthology of Really Short Stories, drawn, in part, from the contest.[11] It was not until 1992, however, that the term "flash fiction" came into use as a category/genre of fiction.[12][13] It was coined by James Thomas,[14] who together with Denise Thomas and Tom Hazuka edited the 1992 landmark anthology titled Flash Fiction: 72 Very Short Stories,[15] and was introduced by Thomas in his introduction to that volume.[16][17] Since then the term has gained wide acceptance as a form, especially in the W. W. Norton anthologies co-edited by Thomas: Flash Fiction America, Flash Fiction International, Flash Fiction Forward, and Flash Fiction: 72 Very Short Stories. In 2020, the Harry Ransom Center at the University of Texas at Austin established the first curated collection of flash fiction artifacts in the United States.[18]

Practitioners have included Saadi of Shiraz ("Gulistan of Sa'di"), Bolesław Prus,[5][19] Anton Chekhov, O. Henry, Franz Kafka, H. P. Lovecraft, Yasunari Kawabata, Ernest Hemingway, Julio Cortázar, Daniil Kharms,[20] Arthur C. Clarke, Richard Brautigan, Ray Bradbury, Kurt Vonnegut Jr., Fredric Brown, John Cage, Philip K. Dick, and Robert Sheckley.[21] Hemingway also wrote 18 pieces of flash fiction that were included in his first short-story collection, In Our Time (1925).
While it is often alleged that (to win a bet) he also wrote the flash fiction "For Sale, Baby Shoes, Never Worn", various iterations of the story date back to 1906, when Hemingway was only 7 years old, rendering his authorship implausible.[22][23] Also notable are the 62 "short-shorts" which comprise Severance, the thematic collection by Robert Olen Butler in which each story describes the remaining 90 seconds of conscious awareness within human heads which have been decapitated.[24] Contemporary English-speaking writers well known for their published flash fiction include Lydia Davis, David Gaffney, Robert Scotellaro, Nancy Stohlman, Sherrie Flick, Bruce Holland Rogers, Steve Almond, Barbara Henning, and Grant Faulkner.

Spanish-language literature has many authors of microstories, including Augusto Monterroso ("El Dinosaurio") and Luis Felipe Lomelí ("El Emigrante"). Their microstories are some of the shortest ever written in that language. In Spain, authors of microrrelatos (very short fictions) have included Andrés Neuman, Ramón Gómez de la Serna, José Jiménez Lozano, Javier Tomeo, José María Merino, Juan José Millás, and Óscar Esquivias.[25] In his collection La mitad del diablo (Páginas de Espuma, 2006), Juan Pedro Aparicio included the one-word story Luis XIV, which in its entirety reads: "Yo" ("I"). In Argentina, notable contemporary contributors to the genre have included Marco Denevi, Luisa Valenzuela, and Ana María Shua. The Italian writer Italo Calvino consciously searched for a short narrative form, drawing inspiration from the Argentine writers Jorge Luis Borges and Adolfo Bioy Casares and finding that Monterroso's was "the most perfect he could find"; "El dinosaurio", in turn, possibly inspired his "The Dinosaurs".[26] German-language authors of Kürzestgeschichten, influenced by brief narratives penned by Bertolt Brecht and Franz Kafka, have included Peter Bichsel, Heimito von Doderer, Günter Kunert, and Helmut Heißenbüttel.
The Arabic-speaking world has produced a number of microstory authors, including the Nobel Prize-winning Egyptian author Naguib Mahfouz, whose book Echoes of an Autobiography is composed mainly of such stories. Other flash fiction writers in Arabic include Zakaria Tamer, Haidar Haidar, and Laila al-Othman. In the Russian-speaking world, the best-known flash fiction author is Linor Goralik.[citation needed] In the southwestern Indian state of Kerala, P. K. Parakkadavu is known for his many microstories in the Malayalam language.[27] The Hungarian writer István Örkény is known (beside other works) for his One-Minute Stories.[28]

A number of print journals dedicate themselves to flash fiction, including Flash: The International Short-Short Story Magazine.[29] Access to the Internet has enhanced an awareness of flash fiction, with online journals being devoted entirely to the style.[30] In a CNN article on the subject, the author remarked that the "democratization of communication offered by the Internet has made positive in-roads" in the specific area of flash fiction, and directly influenced the style's popularity.[31] The form is popular, with most online literary journals now publishing flash fiction. In summer 2017, The New Yorker began running a series of flash fiction stories online every summer.[32]
https://en.wikipedia.org/wiki/Flash_fiction
Twitterature (a portmanteau of Twitter and literature) is a literary use of the microblogging service of X (formerly known as Twitter). It includes various genres, including aphorisms, poetry, and fiction (or some combination thereof) written by individuals or collaboratively. The 280-character maximum imposed by the medium, upgraded from 140 characters in late 2017,[1] provides a creative challenge.

Aphorisms are popular because their brevity is inherently suited to Twitter. People often share well-known classic aphorisms on Twitter, but some also seek to craft and share their own brief insights on every conceivable topic.[3][4] Boing Boing has described Twitter as encouraging "a new age of the aphorism", citing the novel aphorisms of Aaron Haspel,[2] such as: "The most effective way to learn is by devoting oneself to a single subject for months at a time. Its opposite is school."

Haiku are a brief poetic form well suited to Twitter; many examples can be found using the hashtag #haiku. Other forms of poetry can be found under other hashtags or by "following" people who use their Twitter accounts for journals or poetry. For example, the Swedish poet and journalist Göran Greider tweets observations and poems using the Twitter handle @GreiderDD, such as (in translation): "August. / And though it is hot / in the sun / it sometimes feels / it sometimes feels / as if I am / falling / headlong toward autumn." On Black Twitter, a form of collaborative poetry provides "clever and poetic critical commentary on the world around them" in a genre that scholars have called "digital dozens",[6] in reference to the verbal insult game known as the dozens. Contemporary Black American poetry has often been published on social media platforms rather than in conventional print publications.[7]

Twitterature fiction includes 140-character stories, fan fiction, the retelling of literary classics and legends, Twitter novels, and collaborative works.
The terms "twiction" and "tweet fic" (Twitter fiction), "twiller" (Twitter thriller),[8] and "phweeting" (fake tweeting) also exist to describe particular twitterature fiction genres.[9]

140-character stories: fiction that fits into a single tweet,[11] for example: "I was mowing the lawn. I peered at my neighbor's immaculate yard; his grass was literally greener. Then a meteor fell atop his lovely house." Examples of these stories are those written by James Mark Miller (@asmallfiction),[12] Sean Hill (@veryshortstories),[13] and Arjun Basu (@arjunbasu).[14][15] A number of Twitter journals dedicate themselves to the form. In 2013, The Guardian challenged traditionally published authors such as Jeffrey Archer and Ian Rankin to write their own 140-character stories, and then featured their attempts.[16]

Fan fiction: Twitter accounts that have been created for characters in films, TV series, and books. Some of these accounts take the events in the original works as their starting point, whereas others may branch into fan fiction.

Literary classics and legends are retold on Twitter, either by characters tweeting and interacting, or by retelling in tweet format, often in modern language using slang. For instance,[17] in 2010, a group of rabbis tweeted the Exodus, with the hashtag #TweetTheExodus; and in 2011, the Royal Shakespeare Company and the English game company Mudlark tweeted the story of Romeo and Juliet.[9][18] In 2009, Alexander Aciman and Emmett Rensin published Twitterature: The World's Greatest Books Retold Through Twitter. Epicretold, by author Chindu Sreedharan, is another noteworthy work in this genre.
The New Indian Express called it an "audacious attempt...to fit the mother of all epics, the Mahabharata, into the microblogging site Twitter."[19] Tweeted from @epicretold, and subsequently published as a full-length book by HarperCollins India,[20] the story was narrated in "2,628 tweets" between July 2009 and October 2014.[21] In an interview with Time, Sreedharan said it was an attempt to simplify the lengthy epic and make it accessible to the new generation, both in India and abroad.[22]

Twitter novels (or twovels)[9] are another form of fiction that can extend over hundreds of tweets to tell a longer story.[23] The author of a Twitter novel is often unknown to the readers, as anonymity creates an air of authenticity. As such, the account name can often be a pseudonym or even a character in the story. Twitter novels can run for months, with one or more tweets daily, whereby context is usually maintained by a unique hashtag. Searching by the corresponding hashtag produces a list of all available tweets in the series. Some serials are posted in short updates that encourage the reader to follow and to speculate on the next installment.[25] One example of the Twitter novel is Small Places by Nick Belardes (@smallplaces), which began on April 25, 2008, with the tweet: "I've grown to like small places. I like bugs, bug homes, walking stick bugs, blades of grass, ladybug Ferris wheels made out of dandelions." Another example is The Twitstery Twilogy series by Robert K. Blechman (@RKBs_Twitstery). The first entry in the series was Executive Severance, which would be the first live-tweeted Twitter comic mystery (or "Twitstery"), beginning on May 6, 2009, with the tweet: "Willum Mortimus Granger was beside himself. In fact when his body was found, the top half was right next to the bottom."
The second Twitter novel in the series, The Golden Parachute, appeared as a Kindle eBook in 2016; and the third and concluding novel, I Tweet, Therefore I Am (Book 3), was released early in 2017.[24] John Roderick's Electric Aphorisms was composed in individual tweets between December 2008 and May 2009, and deleted on publication of the book itself by Publication Studio in November 2009.[26] Traditionally published authors have also attempted the Twitter novel, such as Jennifer Egan's Black Box, first published in about 500 tweets in 2012,[27] and David Mitchell's The Right Sort, first published as almost 300 tweets sent over one week in 2014.[25] Hari Manev, who does not use Twitter, published his Twitter novel The Eye, the first volume in his The Meaning of Fruth Twitter trilogy, as a Kindle eBook in 2019.[28] The first Russian Twitter-style novel, Юрфак.ru by V. Pankratov, was published in 2013 by the publishing house New Justice. Sam was brushing her hair when the girl in the mirror put down the hairbrush, smiled & said, "We don't love you anymore. Neil Gaiman coined the term "interactive twovel" for an experiment in involving his Twitter followers in collaborating with him on a novel. This was conducted with BBC America Audio Books. The first tweet from Gaiman was as shown on the right. Then, he invited his readers to continue the story under the hashtag #bbcawdio.[9] The result was published as an audiobook under the title Hearts, Keys and Puppetry, with the author given as Neil Gaiman & Twitterverse.[29] Teju Cole sent lines from his short story "Hafiz" to other Twitter users and then retweeted them to assemble the story.[25] Twitter was launched in 2006, and the first Twitter novels appeared in 2008. The origins of the term Twitterature are hard to determine, but it was popularized by Aciman and Rensin's book.
Since then, the phenomenon has been discussed in the arts and culture sections of several major newspapers.[3][15][9][30] Twitterature has been called a literary genre but is more accurately an adaptation of various genres to social media.[15] The writing is often experimental or playful, with some authors or initiators seeking to find out how the medium of Twitter affects storytelling or how a story spreads through the medium. A Swedish site titled Nanoismer.se was launched in 2011 to "challenge people to write deeper than what Twitter is for."[31]
https://en.wikipedia.org/wiki/Twitterature
A word list is a list of words in a lexicon, generally sorted by frequency of occurrence (either by graded levels or as a ranked list). A word list is compiled by lexical frequency analysis within a given text corpus, and is used in corpus linguistics to investigate the genealogies and evolution of languages and texts. A word which appears only once in the corpus is called a hapax legomenon. In pedagogy, word lists are used in curriculum design for vocabulary acquisition. A lexicon sorted by frequency "provides a rational basis for making sure that learners get the best return for their vocabulary learning effort" (Nation 1997), but is mainly intended for course writers, not directly for learners. Frequency lists are also made for lexicographical purposes, serving as a sort of checklist to ensure that common words are not left out. Some major pitfalls are the corpus content, the corpus register, and the definition of "word". While word counting is a thousand years old, with gigantic analyses still done by hand in the mid-20th century, natural language electronic processing of large corpora such as movie subtitles (the SUBTLEX megastudy) has accelerated the research field. In computational linguistics, a frequency list is a sorted list of words (word types) together with their frequency, where frequency here usually means the number of occurrences in a given corpus, from which the rank can be derived as the position in the list. Nation (Nation 1997) noted the incredible help provided by computing capabilities, making corpus analysis much easier. He cited several key issues which influence the construction of frequency lists: Most currently available studies are based on written text corpora, which are more easily available and easier to process. However, New et al.
2007 proposed to tap into the large number of subtitles available online to analyse large amounts of spoken language. Brysbaert & New 2009 made a long critical evaluation of the traditional textual-analysis approach, and supported a move toward speech analysis and the analysis of film subtitles available online. The initial research saw a handful of follow-up studies,[1] providing valuable frequency-count analyses for various languages. In-depth SUBTLEX studies over cleaned-up open subtitles were produced for French (New et al. 2007), American English (Brysbaert & New 2009; Brysbaert, New & Keuleers 2012), Dutch (Keuleers & New 2010), Chinese (Cai & Brysbaert 2010), Spanish (Cuetos et al. 2011), Greek (Dimitropoulou et al. 2010), Vietnamese (Pham, Bolger & Baayen 2011), Brazilian Portuguese (Tang 2012) and European Portuguese (Soares et al. 2015), Albanian (Avdyli & Cuetos 2013), Polish (Mandera et al. 2014), Catalan (2019[2]), and Welsh (Van Veuhen et al. 2024[3]). SUBTLEX-IT (2015) provides raw data only.[4] In any case, the basic "word" unit should be defined. For Latin scripts, words are usually one or several characters separated either by spaces or punctuation. But exceptions arise: English "can't" and French "aujourd'hui" include punctuation marks, while French "château d'eau" denotes a concept different from the simple addition of its components while including a space. It may also be preferable to group the words of a word family under the representation of its base word. Thus, possible, impossible, and possibility are words of the same word family, represented by the base word *possib*. For statistical purposes, all these words are summed up under the base-word form *possib*, allowing the ranking of a concept across all of its forms. Moreover, other languages may present specific difficulties.
Such is the case of Chinese, which does not use spaces between words, and where a given chain of several characters can be interpreted either as a phrase of single-character words or as a multi-character word. Zipf's law appears to hold for frequency lists drawn from longer texts of any natural language. Frequency lists are a useful tool when building an electronic dictionary, which is a prerequisite for a wide range of applications in computational linguistics. German linguists define the Häufigkeitsklasse (frequency class) N of an item in the list using the base-2 logarithm of the ratio between its frequency and the frequency of the most frequent item: N = ⌊0.5 − log₂(item frequency / frequency of the most frequent item)⌋, where ⌊…⌋ is the floor function. The most common item belongs to frequency class 0 (zero), and any item that is approximately half as frequent belongs in class 1. In the example list above, the misspelled word outragious has a ratio of 76/3,789,654 and belongs in class 16. Frequency lists, together with semantic networks, are used to identify the least common, specialized terms to be replaced by their hypernyms in a process of semantic compression. Those lists are not intended to be given directly to students, but rather to serve as a guideline for teachers and textbook authors (Nation 1997). Paul Nation's modern language-teaching summary encourages one first to "move from high frequency vocabulary and special purposes [thematic] vocabulary to low frequency vocabulary, then to teach learners strategies to sustain autonomous vocabulary expansion" (Nation 2006). Word frequency is known to have various effects (Brysbaert et al. 2011; Rudell 1993). Memorization is positively affected by higher word frequency, likely because the learner is subject to more exposures (Laufer 1997). Lexical access is positively influenced by high word frequency, a phenomenon called the word frequency effect (Segui et al.).
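The frequency-list construction and the Häufigkeitsklasse formula can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the regex tokenizer is a deliberate simplification that sidesteps the "word" definition problems (clitics, multiword units, unspaced scripts) discussed above.

```python
import math
import re
from collections import Counter

def frequency_list(text):
    """Ranked frequency list: (word, count) pairs, most frequent first.

    Simplified tokenizer: any run of letters/apostrophes counts as a word.
    """
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common()

def frequency_class(count, max_count):
    """Häufigkeitsklasse: N = floor(0.5 - log2(count / max_count))."""
    return math.floor(0.5 - math.log2(count / max_count))

# The worked example above: a word seen 76 times against a most frequent
# item seen 3,789,654 times falls into class 16.
print(frequency_class(76, 3789654))  # → 16

# The most frequent word is always in class 0.
ranked = frequency_list("the cat sat on the mat and the cat slept")
top_word, top_count = ranked[0]
print(top_word, frequency_class(top_count, top_count))  # → the 0
```

Note that a class computed on a tiny toy corpus is not comparable with one computed on a megastudy corpus; the class depends on the frequency of the corpus's own most frequent item.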
The effect of word frequency is related to the effect of age of acquisition, the age at which the word was learned. Below is a review of available resources. Word counting is an ancient field,[5] with known discussion going back to Hellenistic times. In 1944, Edward Thorndike, Irvin Lorge and colleagues[6] hand-counted 18,000,000 running words to provide the first large-scale English-language frequency list, before modern computers made such projects far easier (Nation 1997). Twentieth-century works all suffer from their age. In particular, a word relating to technology such as "blog", which in 2014 was #7665 in frequency[7] in the Corpus of Contemporary American English,[8] was first attested in 1999[9][10][11] and does not appear in any of these three lists. The Teacher Word Book contains 30,000 lemmas, or ~13,000 word families (Goulden, Nation and Read, 1990). A corpus of 18 million written words was hand-analysed. The size of its source corpus increased its usefulness, but its age, and language change, have reduced its applicability (Nation 1997). The General Service List contains 2,000 headwords divided into two sets of 1,000 words. A corpus of 5 million written words was analyzed in the 1940s. The rate of occurrence (%) for the different meanings and parts of speech of each headword is provided. Various criteria, other than frequency and range, were carefully applied to the corpus. Thus, despite its age, some errors, and its corpus being entirely written text, it is still an excellent database of word frequency, frequency of meanings, and reduction of noise (Nation 1997). This list was updated in 2013 by Dr. Charles Browne, Dr. Brent Culligan and Joseph Phillips as the New General Service List. A corpus of 5 million running words, from written texts used in United States schools (various grades, various subject areas).
Its value is in its focus on school teaching materials and its tagging of words by frequency in each school grade and each subject area (Nation 1997). These now contain 1 million words from a written corpus representing different dialects of English. These sources are used to produce frequency lists (Nation 1997). A review has been made by New & Pallier. An attempt was made in the 1950s–60s with the Français fondamental. It includes the F.F.1 list, with 1,500 high-frequency words, completed by a later F.F.2 list with 1,700 mid-frequency words, and the most used syntax rules.[12] It is claimed that 70 grammatical words constitute 50% of communicative sentences,[13][14] while 3,680 words provide about 95–98% coverage.[15] A list of 3,000 frequent words is available.[16] The French Ministry of Education also provides a ranked list of the 1,500 most frequent word families, compiled by the lexicologist Étienne Brunet.[17] Jean Baudot made a study on the model of the American Brown study, entitled "Fréquences d'utilisation des mots en français écrit contemporain".[18] More recently, the project Lexique3 provides 142,000 French words, with orthography, phonetics, syllabification, part of speech, gender, number of occurrences in the source corpus, frequency rank, associated lexemes, etc., available under the open license CC-BY-SA 4.0.[19] Lexique3 is a continuing study from which the Subtlex movement cited above originates. New et al. 2007 made a completely new count based on online film subtitles. There have been several studies of Spanish word frequency (Cuetos et al. 2011).[20] Chinese corpora have long been studied from the perspective of frequency lists. The historical way to learn Chinese vocabulary is based on character frequency (Allanic 2003). American sinologist John DeFrancis mentioned its importance for learning and teaching Chinese as a foreign language in Why Johnny Can't Read Chinese (DeFrancis 1966).
As a frequency toolkit, Da (Da 1998) and the Taiwanese Ministry of Education (TME 1997) provided large databases with frequency ranks for characters and words. The HSK list of 8,848 high- and medium-frequency words in the People's Republic of China, and the Republic of China (Taiwan)'s TOP list of about 8,600 common traditional Chinese words, are two other lists displaying common Chinese words and characters. Following the SUBTLEX movement, Cai & Brysbaert 2010 recently made a rich study of Chinese word and character frequencies. Wiktionary contains frequency lists in more languages.[21] Most frequently used words in different languages, based on Wikipedia or combined corpora.[22]
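The coverage claims made earlier for lists such as the Français fondamental (70 grammatical words covering about 50% of communicative sentences; 3,680 words covering roughly 95–98%) are cumulative-coverage statistics that can be computed from any ranked frequency list. A minimal sketch, using made-up toy counts rather than real corpus data:

```python
def coverage(counts, k):
    """Fraction of all corpus tokens accounted for by the k most frequent types.

    `counts` is the count column of a frequency list (one count per word type).
    """
    total = sum(counts)
    top_k = sorted(counts, reverse=True)[:k]
    return sum(top_k) / total

# Hypothetical counts (not from any real corpus): out of 100 tokens,
# the two most frequent word types account for 70 of them.
toy_counts = [50, 20, 10, 10, 5, 5]
print(coverage(toy_counts, 2))  # → 0.7
```

On a real frequency list one would pass the list's count column; the steep shape of the resulting coverage curve, where a small k already covers a large share of the tokens, is exactly what Zipf's law predicts.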
https://en.wikipedia.org/wiki/Word_lists_by_frequency
In linguistics, wh-movement (also known as wh-fronting, wh-extraction, or wh-raising) is the formation of syntactic dependencies involving interrogative words. An example in English is the dependency formed between what and the object position of doing in "What are you doing?". Interrogative forms are sometimes known within English linguistics as wh-words, such as what, when, where, who, and why, but also include other interrogative words, such as how. This dependency has been used as a diagnostic tool in syntactic studies, as it can be observed to interact with other grammatical constraints. In languages with wh-movement, sentences or clauses with a wh-word show a non-canonical word order that places the wh-word (or the phrase containing the wh-word) at or near the front of the sentence or clause ("Whom are you thinking about?") instead of in the canonical position later in the sentence ("I am thinking about you"). Leaving the wh-word in its canonical position is called wh-in-situ, and in English occurs in echo questions and in polar questions in informal speech. Wh-movement is one of the most studied forms of linguistic discontinuity.[1] It is observed in many languages and plays a key role in theories of long-distance dependencies. The term wh-movement stemmed from early generative grammar in the 1960s and 1970s and was a reference to the theory of transformational grammar, in which the interrogative expression always appears in its canonical position in the deep structure of a sentence but can move leftward from that position to the front of the sentence/clause in the surface structure.[2] Although other theories of syntax do not use the mechanism of movement in the transformational sense, the term wh-movement (or equivalent terms such as wh-fronting, wh-extraction, or wh-raising) is widely used to denote the phenomenon, even in theories that do not model long-distance dependencies as movement.
The following examples of sentence pairs illustrate wh-movement in main clauses in English: each (a) example has the canonical word order of a declarative sentence in English, while each (b) sentence has undergone wh-movement, whereby the wh-word has been fronted in order to form a direct question. Wh-fronting of whom, which corresponds to the direct object Tesnière. Wh-fronting of what, which corresponds to the prepositional object syntax. Wh-fronting of when, which corresponds to the temporal adjunct tomorrow. Wh-fronting of what, which corresponds to the predicative adjective happy. Wh-fronting of where, which corresponds to the prepositional phrase to school. Wh-fronting of how, which corresponds to the adverb phrase well. These examples illustrate that wh-movement occurs when a constituent is questioned that appears to the right of the finite verb in the corresponding declarative sentence. The main clause remains in V2 word order, with the interrogative fronted to first position while the finite verb stays in second position. Do-support is often needed to enable wh-fronting in such cases, which rely on subject–auxiliary inversion. When the subject is questioned, it is unclear whether wh-fronting has occurred, because the default position of the subject is clause-initial. In the example sentence pair below, the subject Fred already appears at the front of the sentence, where the interrogative is placed. Some theories of syntax maintain that this constitutes wh-movement and analyze such cases as if the interrogative subject had moved up the syntactic hierarchy; however, other theories observe that the surface string of words remains the same, and therefore no movement has occurred.[3] In many cases, wh-fronting can occur regardless of how far away its canonical location is, as seen in the following set of examples: The interrogative whom is the direct object of the verb like in each of these examples.
The dependency relation between the canonical, empty position and the wh-expression appears to be unbounded, in the sense that there is no upper bound on how deeply embedded within the given sentence the empty position may appear. Wh-movement typically occurs when forming questions in English. There are certain forms of questions in which wh-movement does not occur (aside from when the question word serves as the subject and so is already fronted): Other languages may leave wh-expressions in-situ (in base position) more often, such as the Slavic languages.[5] In French, for instance, wh-movement is often optional in certain matrix clauses.[6] Mandarin and Russian also possess wh-expressions without obligatory wh-movement. In-situ questions differ from wh-fronted questions in that they result from no movement at all, which tends to be morphologically or pragmatically conditioned.[4] The basic examples above demonstrate wh-movement in main clauses in order to form a direct question. Wh-movement can also occur in subordinate clauses, although its behavior in subordinate clauses differs in word order. In English, wh-movement occurs in subordinate clauses to form an indirect question. While wh-fronting occurs in both direct and indirect questions, there is a key word-order difference,[7] as illustrated by the following examples: In indirect questions, while the interrogative is still fronted to the first position of the clause, the subject is instead placed in second position, and the verb appears in third position, forming a V3 word order. Although many examples of wh-movement form questions, wh-movement also occurs in relative clauses.[8] Many relative pronouns in English have the same form as the corresponding interrogative words (which, who, where, etc.). Relative clauses are subordinate clauses, so the same V3 word order occurs. The relative pronouns have been fronted in the subordinate clauses of the b. examples.
The characteristic V3 word order is obligatory, just as in other subordinate clauses. Many instances of wh-fronting involve pied-piping, in which the word that is moved pulls an entire encompassing phrase to the front of the clause with it. Pied-piping was first identified by John R. Ross in his 1967 dissertation.[9] In some cases of wh-fronting, pied-piping is obligatory, and the entire encompassing phrase must be fronted for the sentence to be grammatically correct. In the following examples, the moved phrase is underlined: These examples illustrate that pied-piping is often necessary when the wh-word is inside a noun phrase or adjective phrase. Pied-piping is motivated in part by the barriers and islands to extraction (see below). When the wh-word appears underneath a blocking category or in an island, the entire encompassing phrase must be fronted. There are other cases where pied-piping is optional. In English, this occurs most notably when the fronted word is the object of a preposition. A formal register will pied-pipe the preposition, whereas more colloquial English prefers to leave the preposition in situ: The c. examples are cases of preposition stranding, which is possible in colloquial English but not allowed in many languages related to English.[10] For instance, preposition stranding is largely absent from many of the other Germanic languages, and it may be completely absent from the Romance languages. Prescriptive grammars often claim that preposition stranding should be avoided in English as well, although moving the preposition may feel artificial or stilted to a native speaker. A syntactic island is a construction from which extracting an element leads to an ungrammatical or marginal sentence. For example: These types of phrases, also referred to as extraction islands or simply islands, do not allow wh-movement to occur.[12] John R.
Ross proposed and described four types of islands:[13] the Complex Noun Phrase Constraint (CNPC),[14][15] the Coordinate Structure Constraint (CSC), the Left Branch Condition, and the Sentential Subject Constraint.[16] Configurations showing clear island restrictions have also been called wh-islands, complex noun phrases, and adjunct islands.[17] An adjunct island is a type of island formed from an adjunct clause. Wh-movement is not possible from an adjunct clause. Adjunct clauses include clauses introduced by because, if, and when, as well as relative clauses. Instead, a question would be formed by keeping the interrogative in situ. For example: A wh-island is created by an embedded sentence introduced by a wh-word, creating a dependent clause. Wh-islands are weaker than adjunct islands, but violating them still results in a sentence that sounds at best marginal to a native speaker. The b. sentences are strongly marginal or unacceptable because they attempt to extract an expression out of a wh-island. This occurs because both wh-words are part of a DP. It would not be possible to move the bottom wh-word to the top of the structure, as the two would interfere. In order to get a grammatical result, a proper wh-movement must occur. However, because the wh-word occupies the Spec-C position, it is not possible to move the competing wh-word higher by skipping the higher DP, as wh-movement is a cyclic process. Although wh-extraction out of object clauses and phrases is common in English, wh-movement is not (or is only rarely) possible out of subject phrases, particularly subject clauses.[18] For example: A left branch island occurs where a modifier precedes the noun that it modifies. The modifier cannot be extracted, a constraint which Ross identified as the Left Branch Condition.[19] Possessive determiners and attributive adjectives form left branch islands. Fronting these phrases necessitates pied-piping of the entire noun phrase, for example: Extraction fails in the b.
sentences because the extracted expression corresponds to a left-branch modifier of a noun. While left branch islands exist in English, they are absent from many other languages, most notably the Slavic languages.[20] In coordination, extraction out of a conjunct of a coordinate structure is possible only if this extraction affects all the conjuncts of the coordinate structure equally. The relevant constraint is known as the coordinate structure constraint.[21] Extraction must extract the same syntactic expression out of each of the conjuncts simultaneously. This sort of extraction is said to occur across the board (ATB-extraction),[22] e.g., Wh-extraction out of a conjunct of a coordinate structure is only possible if it can be interpreted as occurring equally out of all the conjuncts simultaneously, that is, if it occurs across the board. Extraction is also difficult out of a noun phrase. The relevant constraint is known as the complex NP constraint,[23] and it comes in two varieties: the first bans extraction from the clausal complement of a noun, and the second bans extraction from a relative clause modifying a noun: Sentential complement to a noun: Relative clause: Extraction out of object that-clauses serving as complements to verbs may show island-like behavior if the matrix verb is a nonbridge verb (Erteschik-Shir 1973). Nonbridge verbs include manner-of-speaking verbs, such as whisper or shout, e.g., Syntax trees are visual breakdowns of sentences that include dominating heads for every segment (word/constituent) in the tree. In wh-movement, additional elements are added: the EPP (extended projection principle) and the question feature [+Q], which represents a question sentence. Wh-movement is motivated by a question feature/EPP at C (complementizer), which promotes movement of a wh-word from its canonical base position to Spec-C.
This movement can be thought of as a "Copy + Paste + Delete" operation: the interrogative word is copied from the bottom, pasted into Spec-C, and then deleted from the bottom so that it remains only at the top (now occupying Spec-C). Overall, the highest C will be the target position of the wh-raising.[2] The interrogatives used in wh-movement do not all share headedness. This is important to consider when drawing syntax trees, as three different heads may be used. Determiner Phrase (DP): who, what. Prepositional Phrase (PP): where, when, why. Adverb Phrase (AdvP): how. When creating the syntax tree for wh-movement, consider the subject–auxiliary inversion in the word that is raised from T (Tense) to C (Complementizer). The location of the EPP (Extended Projection Principle): the EPP allows movement of the wh-word from the bottom canonical position of the syntax tree to Spec-C. The EPP is a good indicator for distinguishing between in-situ and ex-situ trees. Ex-situ trees allow movement to Spec-C, while in-situ trees do not, as the head C lacks the EPP feature. Within syntax trees, islands do not allow movement to occur; if movement is attempted, the sentence is perceived as ungrammatical by a native speaker of the observed language. Islands are typically noted as a boxed node on the tree. The movement in the wh-island syntax tree cannot occur because, in order to move out of an embedded clause, a Determiner Phrase (DP) must move through the Spec-C position. This cannot happen, as that position is already occupied. For example, in "She said [who bought what]?" we see that "who" takes the DP position and prevents "what" from raising to the respective Spec-C. Native speakers confirm this, as the result sounds ungrammatical: *"What did she say [bought what]?". In languages generally, a sentence can contain more than one wh-question.
These interrogative constructions are called multiple wh-questions,[24] e.g.: Who ate what at the restaurant? In the following English example, a strikeout line and trace-movement coindexation symbols, [Whoi ... whoti ...], are used to indicate the underlying raising movement of the closest wh-phrase. This movement produces an overt sentence word order with one fronted wh-question, e.g.: [Whoi did you help whoti make what?] In the underlying syntax, the wh-phrase closest to Spec-CP is raised to satisfy the selectional properties of the CP: the [+Q] and [+Wh-EPP] feature requirements of C. The wh-phrase farther from Spec-CP stays in its base position (in-situ).[24] The superiority condition determines which wh-phrase moves in a clause containing multiple wh-phrases.[24] This is the outcome of applying the Attract Closest principle, whereby only the closest candidate is eligible for movement to the attracting head that selects for it.[24] If the farther wh-phrase moves instead of the closer one, an ungrammatical structure is created (in English). Not all languages have instances of multiple wh-movement governed by the superiority condition; most have variations, and there is no uniformity across languages concerning the superiority condition. For example, see the following English phrases: The subscripts "ti" and "i" mark coreference: "t" represents a trace, while "ti" and "i" indicate that the marked words refer to the same entity. In a., the closer wh-phrase [who] moves up toward Spec-CP from its position as the subject of the VP [who to buy what]. The second wh-phrase [what] remains in-situ (as the direct object of the VP [who to buy what]). This satisfies the [+Q Wh] feature in Spec-CP. In b., the farther wh-phrase [what] has incorrectly moved from the direct-object position of the VP [who to buy what] into the Spec-CP position, while the wh-phrase closer to Spec-CP [who] has remained in-situ as the subject of the VP [who to buy what].
Thus, this sentence contains a violation of Attract Closest and is therefore ungrammatical, as marked by the asterisk (*). Wh-movement is also found in many other languages around the world. Most European languages likewise place wh-words at the beginning of a clause. Furthermore, many of the facts illustrated above are also valid for other languages. The systematic difference in word order across main wh-clauses and subordinate wh-clauses shows up in other languages in varying forms. The islands to wh-extraction are also present in other languages, but there is some variation. The following examples illustrate wh-movement of an object in Spanish:

Juan compró carne.
John bought meat
'John bought meat.'

¿Qué compró Juan?
what bought John
'What did John buy?'

The following examples illustrate wh-movement of an object in German:

Er liest Tesnière jeden Abend.
he reads Tesnière every evening
'He reads Tesnière every evening.'

Wen liest er jeden Abend?
who reads he every evening
'Who does he read every evening?'

The following examples illustrate wh-movement of an object in French:

Ils ont vu Pierre.
they have seen Peter
'They saw Peter.'

Qui est-ce qu'ils ont vu?
who is it that they have seen
'Who did they see?'

Qui ont-ils vu?
who have they seen
'Who did they see?'

All the examples are quite similar to the English examples and demonstrate that wh-movement is a general phenomenon in numerous languages. As stated, however, the behaviour of wh-movement can vary, depending on the individual language in question. German does not show the expected effects of the superiority condition in clauses with multiple wh-phrases.
German appears to have a process that allows the farther wh-phrase to "cross over" the closer wh-phrase and move, rather than remain in-situ.[25] This movement is tolerated and has fewer consequences than in English.[25] For example, see the following German phrases:

Ich weiß nicht, wer was gesehen hat.
I know not who what seen has
'I do not know who saw what.'

Ich weiß nicht, was wer gesehen hat.
I know not what who seen has
'I do not know what who has seen.'

In the second example, the gloss shows that the wh-phrase [what] has "crossed over" the wh-phrase [who] and is now in Spec-CP to satisfy the [+Q Wh] feature. This movement is a violation of the Attract Closest principle, on which the superiority condition is based. Mandarin is a wh-in-situ language, which means that it does not exhibit wh-movement in constituent questions.[26] In other words, wh-words in Mandarin remain in their original position in the clause, contrasting with English, where the wh-word moves in constituent questions. The following example illustrates multiple wh-questions in Mandarin:

你 想 知道 瑪麗 為什麼 買了 什麼
nǐ xiǎng zhīdào Mǎlì wèishénme mǎile shénme
you want know Mary why buy-PAST what
'What do you wonder why Mary bought it?'

This example demonstrates that the wh-word "what" in Mandarin remains in-situ at surface structure,[27] while the wh-word "why" in Mandarin moves to its proper scope position and, in doing so, c-commands the wh-word that stays in-situ.
The scope of wh-questions in Mandarin is also subject to other conditions, depending on the kind of wh-phrase involved.[28] The following example can be translated in two ways:

你 想 知道 誰 買了 什麼
nǐ xiǎng zhīdào shéi mǎile shénme
you want know who buy-PAST what
'What is the thing x such that you wonder who bought x?'
'Who is the person x such that you wonder what x bought?'

This example illustrates the way certain wh-words, such as "who" and "what", can freely obtain matrix scope in Mandarin.[29] In reference to the Attract Closest principle, where the head adopts the closest candidate available to it, the overt wh-phrase in Mandarin moves to its proper scope position while the other wh-phrase stays in-situ, as it is c-commanded by the first-mentioned wh-phrase.[30] This can be seen in the following example, where the word for "what" stays in-situ since it is c-commanded by the phrase meaning "at where":

你 想 知道 瑪麗 在 哪裡 買了 什麼
nǐ xiǎng zhīdào Mǎlì zài nǎlǐ mǎile shénme
you want know Mary at where buy-PAST what
'What is the thing x such that you wonder where Mary bought x?'
'Where is the place x such that you wonder what Mary bought at x?'

As these examples show, Mandarin is a wh-in-situ language: it exhibits no movement of wh-phrases at surface structure, is subject to further conditions based on the type of wh-phrase involved in the question, and adheres to the Attract Closest principle. In Bulgarian, the [+wh] feature of C motivates movement of multiple wh-words, which leads to multiple specifiers. It requires the formation of clusters of wh-phrases in [Spec-CP] in the matrix clause. This differs from English, where only one wh-word moves to [Spec-CP] when there are multiple wh-words in a clause.
This is because, in Bulgarian, unlike English, all movements of wh-elements take place in the syntax, where movement is shown overtly.[31] The phrase structure for wh-words in Bulgarian is shown in Figure 1 below, where a wh-cluster is formed under [Spec-CP]. In Bulgarian and Romanian, one wh-element is attracted into [Spec-CP] and the other wh-elements are adjoined to the first wh-word in [Spec-CP].[32]

(1) Koj kogo ___t1 vižda ___t2?
    who whom sees
    'Who sees whom?'

In Example 1, both wh-words have undergone movement and form a [Spec-CP] cluster. Attract Closest is a principle underlying the Superiority Condition whereby the head that attracts a certain feature adopts the closest candidate available to it; this usually leads to the movement of that closest candidate. Slavic languages fall into two groups with respect to the S-structure position of wh-elements in [Spec-CP] (Rudin, 1998). One group includes Serbo-Croatian, Polish, and Czech, where there is only one wh-element in [Spec-CP] at S-structure. The other group contains Bulgarian, which has all of its wh-elements in [Spec-CP] at S-structure. In the first group, the Attract Closest principle is at work, and the wh-word closest to the attracting head undergoes movement while the remaining wh-elements stay in-situ. In the second group of languages, the Attract Closest principle operates in a slightly different way: the order in which the wh-words move is dictated by their proximity to [Spec-CP]. The wh-word closest to the attracting head moves first, the next closest follows, and so on. In this way, the Superiority effect is present in Serbo-Croatian, Polish, and Czech for the first wh-element only, while in Bulgarian it is present for all of the wh-elements in the clause.[33]

(2) Kakvo kak napravi Ivan?
    what how did Ivan
    'How did Ivan what?'
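The closest-first ordering just described can be sketched in a few lines of code. This is only an illustration, not a claim about any particular analysis: the helper name `front_wh`, the encoding of "closeness" as a linear distance from the attracting head C, and the `multiple_fronting` flag are all invented for the example.

```python
def front_wh(wh_phrases, multiple_fronting):
    """wh_phrases: list of (word, distance_from_C) pairs.
    Returns the fronted wh-words in order.
    English-type grammars front only the closest wh-phrase;
    Bulgarian-type grammars front all of them, closest first."""
    by_closeness = sorted(wh_phrases, key=lambda p: p[1])
    if multiple_fronting:                # Bulgarian-type: all move, closest first
        return [word for word, _ in by_closeness]
    return [by_closeness[0][0]]          # English-type: only the closest moves

# 'koj' (who, the subject) is closer to C than 'kogo' (whom, the object)
phrases = [("kogo", 2), ("koj", 1)]
print(front_wh(phrases, multiple_fronting=True))   # Bulgarian-type order
print(front_wh(phrases, multiple_fronting=False))  # English-type: one wh-word
```

With the Bulgarian-type setting, the subject wh-word precedes the object wh-word in the fronted cluster, matching the subject-before-object ordering noted for Examples 2 and 3.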
The Attract Closest principle explains a crucial detail about which wh-words move first in the tree. Since the closest wh-word moves first, a particular order emerges: wh-subjects precede wh-objects and wh-adjuncts (Grewendorf, 2001). This is seen in Examples 2 and 3. Example 3 also shows that there can be more than two wh-words in [Spec-CP] and that, no matter how many wh-words the clause contains, all of them must undergo movement.

(3) Koj kak kogo e celunal?
    who how whom is kissed
    'Who kissed whom how?'

In Bulgarian, Example 4 shows that, to avoid forming a sequence of identical wh-words, one wh-element is allowed to remain in-situ as a last resort (Bošković, 2002).

(4) Kakvo obuslavja kakvo?
    what conditions what
    'What conditions what?'

In summary, Bulgarian has multiple wh-movement in the syntax, and the wh-words move overtly. While all wh-words in a clause move to [Spec-CP] because of the [+wh] feature, there is still a fixed order in which they appear in the clause.

In French, multiple wh-questions follow these patterns:

a) In some French interrogative sentences, wh-movement can be optional.[34]

1. The wh-phrase closest to Spec-CP can be fronted (i.e., moved to Spec-CP from its covert base position in deep structure to its overt phonological position in surface-structure word order);
2. Alternatively, wh-phrases can remain in-situ.[34][35]

(1) Qu'as-tu envoyé à qui?
    what have-you sent to whom

(2) Tu as envoyé quoi à qui?
    you have sent what to whom
    'What have you sent to who(m)?'

In the example sentences above, Examples 1 and 2 are both grammatical and share the same meaning in French.
Here, the choice between the two question forms is optional; either sentence can be used to ask about the two DP constituents expressed by the two wh-words.[34] In French, the second sentence could also be used as an echo question.[36] By contrast, in English, the grammatical structure of the second sentence is acceptable only as an echo question: a question asked to clarify information heard (or misheard) in someone's utterance, or used to express shock or disbelief in reaction to someone's statement.[25] For echo questions in English, speakers typically emphasize the wh-words prosodically with rising intonation (e.g., You sent WHAT to WHO?). These special instances of multiple wh-questions in English are essentially "requests for the repetition of that utterance".[25]

b) In other French interrogative sentences, wh-movement is required.[35]

The option of using wh-in-situ in French sentences with multiple wh-questions is limited to specific conditions; its usage has "a very limited distribution".[35] When these criteria are not met, wh-in-situ is not allowed in French.[35]

Many languages do not have wh-movement. Instead, these languages preserve the symmetry of question and answer sentences. For example, questions in Chinese have the same sentence structure as their answers:

你 在 做 什麼?
nǐ zài zuò shénme
you PROG do what
'What are you doing?'

The response could be:

我 在 編輯 維基百科。
wǒ zài biānjí Wéijībǎikē
I PROG edit Wikipedia
'I am editing Wikipedia.'

Chinese has a wh-particle but no wh-movement.
Wh-movement typically results in a discontinuity: the "moved" constituent ends up in a position separated from its canonical position by material that syntactically dominates the canonical position, so there appears to be a discontinuous constituent and a long-distance dependency. Such discontinuities challenge any theory of syntax, and any theory of syntax must have a component that can address them. Theories of syntax tend to explain discontinuities in one of two ways: via movement or via feature passing. The EPP (extended projection principle) feature and the question feature play a large role in movement itself: these two features occur in ex-situ questions, which allow movement, and are absent from in-situ questions, which do not allow it.

Theories that posit movement have a long and established tradition reaching back to early generative grammar (1960s and 1970s). They assume that the displaced constituent (e.g., the wh-expression) is first generated in its canonical position at some level or point in the structure-generating process below the surface. This expression is then moved or copied out of its base position and placed in the surface position where it actually appears in speech.[37] Movement is indicated in tree structures by one of a variety of means (e.g., a trace t, movement arrows, strikeouts, a lighter font shade, etc.).

The alternative to the movement approach to wh-movement, and to discontinuities in general, is feature passing. This approach rejects the notion that movement has occurred in any sense. The wh-expression is base-generated in its surface position, and instead of movement, information passing (i.e., feature passing) occurs up or down the syntactic hierarchy to and from the position of the gap.
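The movement analysis just described, in which the wh-expression is generated in its canonical position and then copied to the front while its base copy is silenced, can be sketched as a toy string operation. Everything here is an invented illustration: the function name `wh_front`, the word-list representation, and the rendering of the silenced base copy as a trace "t". The sketch also deliberately ignores subject-auxiliary inversion, which real English wh-questions additionally involve.

```python
def wh_front(words):
    """Copy the first wh-word to clause-initial position and delete the
    phonological features of the base copy, rendered here as a trace 't'."""
    wh_words = {"what", "who", "whom", "which", "where", "why", "how"}
    for i, w in enumerate(words):
        if w.lower() in wh_words:
            copied = words[i]
            return [copied.capitalize()] + words[:i] + ["t"] + words[i + 1:]
    return words  # no wh-expression: nothing moves

# canonical (base) order: "you will eat what"
print(wh_front(["you", "will", "eat", "what"]))
# → ['What', 'you', 'will', 'eat', 't']
```

The returned list shows both positions of the "same" constituent: the overt copy at the front and the phonologically empty copy (the trace) in the canonical object position, which is the representation the trace-based theories discussed below take to be syntactically real.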
https://en.wikipedia.org/wiki/Wh-movement
In linguistics, binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance, in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations; e.g., the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding has been a major area of research in syntax and semantics since the 1970s and, as the name implies, is a core component of government and binding theory.[1]

The following sentences illustrate some basic facts of binding. Words bearing the index i should be construed as referring to the same person or thing.[2] These sentences illustrate some aspects of the distribution of reflexive and personal pronouns. In the first pair of sentences, the reflexive pronoun must appear for the indicated reading to be possible. In the second pair, the personal pronoun must appear for the indicated reading to be possible. The third pair shows that at times a personal pronoun must follow its antecedent, and the fourth pair further illustrates the same point, although the acceptability judgment is not as robust. Based on such data, one sees that reflexive and personal pronouns differ in their distribution and that linear order (of a pronoun in relation to its antecedent or postcedent) is a factor influencing where at least some pronouns can appear. A theory of binding should be capable of predicting and explaining the differences in distribution seen in sentences like these. It should be able to answer questions like: What explains where a reflexive pronoun must appear as opposed to a personal pronoun? When does linear order play a role in determining where pronouns can appear?
What other factor (or factors) beyond linear order help predict where pronouns can appear?

The following three subsections consider the binding domains relevant for the distribution of pronouns and nouns in English. The discussion follows the outline provided by the traditional binding theory (see below), which divides nominals into three basic categories: reflexive and reciprocal pronouns, personal pronouns, and nouns (common and proper).[3]

When one examines the distribution of reflexive pronouns and reciprocal pronouns (often subsumed under the general category "anaphor"), one sees that certain domains are relevant, a "domain" being a syntactic unit that is clause-like. Reflexive and reciprocal pronouns often seek their antecedent close by, in a local binding domain, e.g. These examples illustrate that there is a domain within which a reflexive or reciprocal pronoun should find its antecedent. The a-sentences are fine because the reflexive or reciprocal pronoun has its antecedent within the clause. The b-sentences, in contrast, do not allow the indicated reading, a fact illustrating that personal pronouns have a distribution different from that of reflexive and reciprocal pronouns. A related observation is that a reflexive or reciprocal pronoun often cannot seek its antecedent in a superordinate clause, e.g. When the reflexive or reciprocal pronoun attempts to find an antecedent outside of the immediate clause containing it, it fails; it can hardly seek its antecedent in the superordinate clause. The relevant binding domain is the immediate clause containing the pronoun.

Personal pronouns have a distribution different from that of reflexive and reciprocal pronouns, a point evident in the first two b-sentences in the previous section.
The local binding domain that is decisive for the distribution of reflexive and reciprocal pronouns is also decisive for personal pronouns, but in a different way. Personal pronouns seek their antecedent outside of the local binding domain containing them, e.g. In these cases, the pronoun has to look outside of the embedded clause containing it, to the matrix clause, to find its antecedent. Based on such data, the relevant binding domain appears to be the clause. Further data illustrate, however, that the clause is actually not the relevant domain: Since the pronouns in these cases appear within the same minimal clause containing their antecedents, one cannot argue that the relevant binding domain is the clause. The most one can say based on such data is that the domain is "clause-like".

The distribution of common and proper nouns is unlike that of reflexive, reciprocal, and personal pronouns. The relevant observation in this regard is that a noun is often only reluctantly coreferential with another nominal that is within its binding domain or in a superordinate binding domain, e.g. The readings indicated in the a-sentences are natural, whereas the b-sentences are very unusual. Indeed, sentences like these b-sentences were judged to be impossible in the traditional binding theory according to Condition C (see below). Given a contrastive context, however, the b-sentences can work, e.g. Susan does not admire Jane, but rather Susan_i admires Susan_i. One can therefore conclude that nouns are not sensitive to binding domains in the same way that reflexive, reciprocal, and personal pronouns are.

The following subsections illustrate the extent to which pure linear order impacts the distribution of pronouns. While linear order is clearly important, it is not the only factor influencing where pronouns can appear. A simple hypothesis concerning the distribution of many anaphoric elements, personal pronouns in particular, is that linear order plays a role.
In most cases, a pronoun follows its antecedent, and in many cases, the coreferential reading is impossible if the pronoun precedes its antecedent. The following sentences suggest that pure linear order can indeed be important for the distribution of pronouns: While the coreferential readings indicated in these b-sentences are possible, they are unlikely. The order presented in the a-sentences is strongly preferred. The following, more extensive data sets further illustrate that linear order is important: While the acceptability judgments here are nuanced, one can make a strong case that pure linear order is at least in part predictive of when the indicated reading is available. The a- and c-sentences allow the coreferential reading more easily than their b- and d-counterparts.

While linear order is an important factor influencing the distribution of pronouns, it is not the only factor. The following sentences are similar to the c- and d-sentences in the previous section insofar as an embedded clause is present. While there may be a mild preference for the order in the a-sentences here, the indicated reading in the b-sentences is also available. Hence linear order is hardly playing a role in such cases. The relevant difference between these sentences and the c- and d-sentences in the previous section is that the embedded clauses here are adjunct clauses, whereas above they are argument clauses. The following examples involve adjunct phrases:[4] The fact that the c-sentences marginally allow the indicated reading whereas the b-sentences do not allow it at all further demonstrates that linear order is important. But in this regard, the d-sentences are telling: if linear order were the entire story, one would expect the d-sentences to be less acceptable than they are. The conclusion one can draw from such data is that one or more factors beyond linear order are impacting the distribution of pronouns.
Given that linear order is not the only factor influencing the distribution of pronouns, the question is what other factor or factors might also be playing a role. The traditional binding theory (see below) took c-command to be the all-important factor, but the importance of c-command for syntactic theorizing has been extensively criticized in recent years.[5] The primary alternative to c-command is functional rank. These two competing concepts (c-command vs. rank) have been debated extensively and continue to be debated. C-command is a configurational notion; it is defined over concrete syntactic configurations. Syntactic rank, in contrast, is a functional notion that resides in the lexicon; it is defined over the ranking of the arguments of predicates. Subjects are ranked higher than objects, first objects higher than second objects, and prepositional objects lowest. The following two subsections briefly consider these competing notions.

C-command is a configurational notion that takes the syntactic configuration as primitive. Basic subject-object asymmetries, which are numerous in many languages, are explained by the fact that the subject appears outside of the finite verb phrase (VP) constituent, whereas the object appears inside it. Subjects therefore c-command objects, but not vice versa. C-command is defined as follows: node A c-commands node B if neither A nor B dominates the other and every node that dominates A also dominates B. Given the binary division of the clause (S → NP + VP) associated with most phrase structure grammars, this definition has a typical subject c-commanding everything inside the verb phrase (VP), whereas nothing inside the VP can c-command anything outside of the VP. Some basic binding facts are explained in this manner, e.g. Sentence a is fine because the subject Larry c-commands the object himself, whereas sentence b does not work because the object Larry does not c-command the subject himself.
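Because c-command is a purely configurational relation, it is easy to compute over an explicit tree. The sketch below, with invented class and function names, implements the standard definition over a minimal S → NP VP structure for a clause like "Larry praised himself" and reproduces the subject-object asymmetry described above.

```python
class Node:
    """A node in a constituent tree, linked to its parent."""
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def dominates(a, b):
    """True if a properly dominates b (a is an ancestor of b)."""
    n = b.parent
    while n is not None:
        if n is a:
            return True
        n = n.parent
    return False

def c_commands(a, b):
    """a c-commands b if neither dominates the other and the first
    branching node properly dominating a also dominates b."""
    if a is b or dominates(a, b) or dominates(b, a):
        return False
    n = a.parent
    while n is not None and len(n.children) < 2:
        n = n.parent  # skip non-branching nodes
    return n is not None and dominates(n, b)

# S -> NP VP ; VP -> V NP   ("Larry praised himself")
s = Node("S")
subj = Node("NP:Larry", s)
vp = Node("VP", s)
v = Node("V:praised", vp)
obj = Node("NP:himself", vp)

print(c_commands(subj, obj))  # the subject c-commands the object
print(c_commands(obj, subj))  # the object does not c-command the subject
```

The first call returns True and the second False, mirroring the text: the subject, sitting outside VP, c-commands everything inside VP, while the object inside VP c-commands nothing outside it.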
The assumption has been that, within its binding domain, a reflexive pronoun must be c-commanded by its antecedent. While this approach based on c-command makes the correct prediction much of the time, there are other cases where it fails, e.g. The reading indicated is acceptable in this case, but if c-command were the key notion determining where the reflexive can and must appear, then the reading should be impossible, since himself is not c-commanded by Larry.[7]

As reflexive and personal pronouns occur in complementary distribution, the notion of c-command can also be used to explain where personal pronouns can appear. The assumption is that personal pronouns cannot c-command their antecedent, e.g. In both examples, the personal pronoun he does not c-command its antecedent Alice, resulting in the grammaticality of both sentences despite the reversed linear order.

The alternative to a c-command approach posits a ranking of syntactic functions (SUBJECT > FIRST OBJECT > SECOND OBJECT > PREPOSITIONAL OBJECT).[8] Subject-object asymmetries are addressed in terms of this ranking. Since subjects are ranked higher than objects, an object can have the subject as its antecedent, but not vice versa. With basic cases, this approach makes the same predictions as the c-command approach. The first two sentences from the previous section are repeated here: Since the subject outranks the object, sentence a is predictably acceptable, the subject Larry outranking the object himself. Sentence b, in contrast, is bad because the subject reflexive pronoun himself outranks its postcedent Larry. In other words, this approach in terms of rank assumes that, within its binding domain, a reflexive pronoun may not outrank its antecedent (or postcedent). Consider the third example sentence from the previous section in this regard: The approach based on rank does not require a particular configurational relationship to hold between a reflexive pronoun and its antecedent.
In other words, it makes no prediction in this case, and hence does not make an incorrect one. The reflexive pronoun himself is embedded within the subject noun phrase, which means that it is not the subject and hence does not outrank the object Larry.

A theory of binding that acknowledges both linear order and rank can at least begin to predict many of the marginal readings.[9] When linear order and rank combine, acceptability judgments are robust, e.g. This ability to address marginal readings is something that an approach combining linear order and rank can accomplish, whereas an approach acknowledging only c-command cannot.

The exploration of binding phenomena began in the 1970s, and interest peaked in the 1980s with government and binding theory, a grammar framework in the tradition of generative syntax that is still prominent today.[10] The theory of binding that became widespread at that time now serves merely as a reference point, since it is no longer believed to be correct. This theory distinguishes three binding conditions: A, B, and C. It classifies nominals according to two binary features, [±anaphor] and [±pronominal]. The binding characteristics of a nominal are determined by the values, plus or minus, of these features. Thus, a nominal that is [-anaphor, -pronominal] is an R-expression (referring expression), such as a common noun or a proper name. A nominal that is [-anaphor, +pronominal] is a pronoun, such as he or they, and a nominal that is [+anaphor, -pronominal] is a reflexive pronoun, such as himself or themselves. Note that the term anaphor is being used here in a specialized sense; it essentially means "reflexive".
This meaning is specific to the government and binding framework and has not spread beyond it.[11] Based on the classification according to these two features, three conditions are formulated: Condition A requires an anaphor to be bound within its binding domain; Condition B requires a pronoun to be free within its binding domain; and Condition C requires an R-expression to be free. While the theory of binding that these three conditions represent is no longer held to be valid, as mentioned above, the associations with the three conditions are so firmly anchored in the study of binding that one often refers to, for example, "Condition A effects" or "Condition B effects" when describing binding phenomena.
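The two-feature classification described above is small enough to write out as a lookup. The function name `classify` and the output strings are invented for illustration; the fourth feature combination, [+anaphor, +pronominal], is standardly taken in government and binding theory to characterize the null element PRO, which has no overt counterpart.

```python
def classify(anaphor, pronominal):
    """Map a (±anaphor, ±pronominal) feature pair to the nominal type it
    characterizes in the traditional binding theory."""
    if anaphor and not pronominal:
        return "reflexive/reciprocal pronoun (subject to Condition A)"
    if not anaphor and pronominal:
        return "personal pronoun (subject to Condition B)"
    if not anaphor and not pronominal:
        return "R-expression (subject to Condition C)"
    return "PRO (no overt counterpart)"  # [+anaphor, +pronominal]

print(classify(False, False))  # e.g. a proper name like "Mary"
print(classify(False, True))   # e.g. "he", "they"
print(classify(True, False))   # e.g. "himself", "each other"
```

Laying the four combinations out this way makes it clear why the theory posits exactly three binding conditions for overt nominals: one per overt feature combination.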
https://en.wikipedia.org/wiki/Binding_(linguistics)
Syntactic movement is the means by which some theories of syntax address discontinuities. Movement was first postulated by structuralist linguists, who expressed it in terms of discontinuous constituents or displacement.[1] Some constituents appear to have been displaced from the position in which they receive important features of interpretation.[2] The concept of movement is controversial and is associated with so-called transformational or derivational theories of syntax (such as transformational grammar, government and binding theory, and the minimalist program). Representational theories (such as head-driven phrase structure grammar, lexical functional grammar, construction grammar, and most dependency grammars), in contrast, reject the notion of movement and often instead address discontinuities with other mechanisms, including graph reentrancies, feature passing, and type shifters.

Movement is the traditional means of explaining discontinuities such as wh-fronting, topicalization, extraposition, scrambling, inversion, and shifting:[3] The a-sentences show canonical word order, and the b-sentences illustrate the discontinuities that movement seeks to explain. Bold script marks the expression that is moved, and underscores mark the positions from which movement is assumed to have occurred. In the first a-sentence, the constituent the first story serves as the object of the verb likes and appears in its canonical position immediately following that verb. In the first b-sentence, the constituent which story likewise serves as the object of the verb, but appears at the beginning of the sentence rather than in its canonical position following the verb. Movement-based analyses explain this fact by positing that the constituent is base-generated in its canonical position but is moved to the beginning of the sentence, in this case by a question-forming operation. The examples above use an underscore to mark the position from which movement is assumed to have occurred.
In formal theories of movement, these underscores correspond to actual syntactic objects, either traces or copies, depending on one's particular theory.[4] Subscripts indicate the constituent that is assumed to have left a trace in its former position, the position marked by t.[5] The other means of indicating movement is in terms of copies: movement is taken to be a process of copying the same constituent into different positions and deleting the phonological features in all but one case.[6] Italics are used in the following example to indicate a copy that lacks phonological representation: There are various nuances associated with each means of indicating movement (blanks, traces, copies), but for the most part each convention has the same goal of indicating the presence of a discontinuity.

Within generative grammar, various types of movement have been distinguished. An important distinction is between head movement and phrasal movement, the latter being further subdivided into A-movement and A-bar movement. Copy movement is another, more general, type of movement. Argument movement (A-movement) displaces a phrase into a position in which a fixed grammatical function is assigned, as in movement of the object to the subject position in passives:[7] Non-argument movement (A-bar movement or A'-movement), in contrast, displaces a phrase into a position where a fixed grammatical function is not assigned, such as the movement of a subject or object NP to a pre-verbal position in interrogatives: The A- vs. A-bar distinction is a reference to the theoretical status of syntax with respect to the lexicon. The distinction elevates the role of syntax by locating the theory of voice (active vs. passive) almost entirely in syntax (as opposed to in the lexicon). A theory of syntax that locates the active-passive distinction in the lexicon (where the passive is not derived via transformations from the active) rejects the distinction entirely.
A different partition among types of movement is phrasal vs. head movement.[8] Phrasal movement occurs when the head of a phrase moves together with all its dependents, so that the entire phrase moves. Most of the examples above involve phrasal movement. Head movement, in contrast, occurs when just the head of a phrase moves, leaving its dependents behind. Subject-auxiliary inversion is a canonical instance of head movement: On the assumption that the auxiliaries has and will are the heads of phrases, such as IPs (inflection phrases), the b-sentences are the result of head movement: the auxiliary verbs has and will move leftward without taking with them the rest of the phrase that they head. The distinction between phrasal movement and head movement relies crucially on the assumption that movement is leftward. An analysis of subject-auxiliary inversion that acknowledges rightward movement can dispense with head movement entirely: that analysis views the subject pronouns someone and she as moving rightward, instead of the auxiliary verbs moving leftward. Since the pronouns lack dependents (they alone qualify as complete phrases), there would be no reason to assume head movement.

Since it was first proposed, the theory of syntactic movement has yielded a new field of research aimed at providing the filters that block certain types of movement. Called locality theory,[9] it is interested in discerning the islands and barriers to movement. It strives to identify the categories and constellations that block movement, in other words, to explain the failure of certain attempts at movement: All of the b-sentences are disallowed because of locality constraints on movement. Adjuncts and subjects are islands that block movement, and left branches in NPs are barriers that prevent pre-noun modifiers from being extracted out of NPs.
Syntactic movement is controversial, especially in light of movement paradoxes. Theories of syntax that posit feature passing reject syntactic movement outright; that is, they reject the notion that a given "moved" constituent ever appears in its "base" position below the surface, the position marked by blanks, traces, or copies. Instead, they assume that there is but one level of syntax, at which all constituents appear only in their surface positions; there is no underlying level or derivation. To address discontinuities, they posit that the features of a displaced constituent are passed up and/or down the syntactic hierarchy between that constituent and its governor.[10] The following tree illustrates the feature-passing analysis of a wh-discontinuity in a dependency grammar.[11] The words in red mark the catena (chain of words) that connects the displaced wh-constituent what to its governor eat, the word that licenses its appearance.[12] The assumption is that features (i.e., information) associated with what (e.g., noun, direct object) are passed up and down along the catena marked in red. In that manner, the ability of eat to subcategorize for a direct-object NP is acknowledged. By examining the nature of catenae like the one in red, the locality constraints on discontinuities can be identified.

In government and binding theory and some of its descendant theories, movement leaves behind an empty category called a trace. In such theories, traces are considered real parts of syntactic structure, detectable through secondary effects they have on the syntax. For instance, one empirical argument for their existence comes from the English phenomenon of wanna contraction, in which want to contracts to wanna.
This phenomenon has been argued to be impossible when a trace would intervene between "want" and "to", as in the b-sentence below.[13] Evidence of this sort has not led to a full consensus in favor of traces, since other kinds of contraction permit an intervening putative trace.[14] Proponents of trace theory have responded to these counterarguments in various ways. For instance, Bresnan (1971) argued that contractions of "to" are enclitic while contractions of tensed auxiliaries are proclitic, meaning that only the former would be affected by a preceding trace.[15]
https://en.wikipedia.org/wiki/Trace_(linguistics)
In linguistics, the empty category principle (ECP) was proposed in Noam Chomsky's syntactic framework of government and binding theory. The ECP is supposed to be a universal syntactic constraint that requires certain types of empty categories, namely traces, to be properly governed. The ECP is a principle of transformational grammar by which traces must be visible, i.e., identifiable as empty positions in the surface structure, similar to the principle of reconstruction for deletion. Thus an empty category is in a position subcategorized for by a verb. In government and binding theory this is known as proper government. Proper government occurs either if the empty position is governed by a lexical category (especially if it is not a subject) (theta-government) or if it is coindexed with a maximal projection which governs it (antecedent-government). The ECP has been revised many times and is now a central part of government and binding theory.[1]

In spite of its name, the ECP applies to only two of the four types of null DPs. Specifically, it applies to DP-traces and wh-traces, but not to PRO and pro. The chief function of the ECP is to place constraints on the movement of categories by the rule of move α; it effectively allows a tree structure to "remember" what has happened at earlier stages of a derivation, and it can be seen as GB's version of the older derivational constraints.[2] Formally, the ECP states that a trace must be properly governed. The ECP is a way of accounting for, among other things, the empirical fact that it is generally more difficult to move a wh-word from a subject position than from an object position.
The intermediate traces must be deleted because they cannot be properly governed: theta-government is impossible because of the position they occupy, Spec-CP, and the only possible antecedent-governor would be an overt NP (a wh-word), but the Minimality Condition would always be violated because of the tensed I (which must be present in all matrix clauses): the tensed I would c-command the intermediate trace but would not c-command the wh-word. So intermediate traces must be deleted at logical form in order to escape the ECP. In the case of object extraction (where the trace is a complement of VP), theta-government is the only possible option. In the case of subject extraction (where the trace is in Spec-IP), antecedent-government is the only possible option. If the trace is in Spec-IP and there is an overt complementizer (such as that), the sentence is ungrammatical because the ECP is violated: the closest potential governor would be the complementizer, which cannot antecedent-govern the trace because it is not coindexed with it (and theta-government is impossible since the trace is in Spec-IP). For example, in the sentence Who do you think (that) John will invite?, the ECP applies in this way to the embedded clause.
https://en.wikipedia.org/wiki/Empty_Category_Principle
In linguistics, grammatical relations (also called grammatical functions, grammatical roles, or syntactic functions) are functional relationships between constituents in a clause. The standard examples of grammatical functions from traditional grammar are subject, direct object, and indirect object. In recent times, the syntactic functions (more generally referred to as grammatical relations), typified by the traditional categories of subject and object, have assumed an important role in linguistic theorizing, within a variety of approaches ranging from generative grammar to functional and cognitive theories.[1] Many modern theories of grammar are likely to acknowledge numerous further types of grammatical relations (e.g. complement, specifier, predicative, etc.). The role of grammatical relations in theories of grammar is greatest in dependency grammars, which tend to posit dozens of distinct grammatical relations; every head-dependent dependency bears a grammatical function. Grammatical categories are assigned to the words and phrases that have the relations. This includes traditional parts of speech like nouns, verbs, adjectives, etc., and features like number and tense. The grammatical relations are exemplified in traditional grammar by the notions of subject, direct object, and indirect object: the subject Fred performs or is the source of the action, the direct object the book is acted upon by the subject, and the indirect object Susan receives the direct object or otherwise benefits from the action. Traditional grammars often begin with these rather vague notions of the grammatical functions. When one begins to examine the distinctions more closely, it quickly becomes clear that these basic definitions do not provide much more than a loose orientation point. What is indisputable about the grammatical relations is that they are relational: subject and object can exist as such only by virtue of the context in which they appear.
A noun such as Fred or a noun phrase such as the book cannot qualify as subject and direct object, respectively, unless they appear in an environment, e.g. a clause, where they are related to each other and/or to an action or state. In this regard, the main verb in a clause is responsible for assigning grammatical relations to the clause "participants". Most grammarians and students of language intuitively know in most cases what the subject and object in a given clause are. But when one attempts to produce theoretically satisfying definitions of these notions, the results are usually less clear and therefore controversial.[2] These contradictory impulses have resulted in a situation where most theories of grammar acknowledge the grammatical relations and rely on them heavily for describing phenomena of grammar, but at the same time avoid providing concrete definitions of them. Nevertheless, various principles can be acknowledged on which attempts to define the grammatical relations are based. The thematic relations (also known as thematic roles and semantic roles, e.g. agent, patient, theme, goal) can provide semantic orientation for defining the grammatical relations. There is a tendency for subjects to be agents and objects to be patients or themes. However, the thematic relations cannot be substituted for the grammatical relations, nor vice versa. This point is evident with the active-passive diathesis and ergative verbs: Marge is the agent in the first pair of sentences because she initiates and carries out the action of fixing, and the coffee table is the patient in both because it is acted upon in both sentences. In contrast, the subject and direct object are not consistent across the two sentences. The subject is the agent Marge in the first sentence and the patient the coffee table in the second sentence. The direct object is the patient the coffee table in the first sentence, and there is no direct object in the second sentence.
The situation is similar with the ergative verb sunk/sink in the second pair of sentences. The noun phrase the ship is the patient in both sentences, although it is the object in the first of the two and the subject in the second. The grammatical relations belong to the level of surface syntax, whereas the thematic relations reside on a deeper semantic level. If, however, the correspondences across these levels are acknowledged, then the thematic relations can be seen as providing prototypical thematic traits for defining the grammatical relations. Another prominent means used to define the syntactic relations is in terms of the syntactic configuration. The subject is defined as the verb argument that appears outside the canonical finite verb phrase, whereas the object is taken to be the verb argument that appears inside the verb phrase.[3] This approach takes the configuration as primitive, whereby the grammatical relations are then derived from the configuration. This "configurational" understanding of the grammatical relations is associated with Chomskyan phrase structure grammars (Transformational grammar, Government and Binding, and Minimalism). The configurational approach is limited in what it can accomplish. It works best for the subject and object arguments. For other clause participants (e.g. attributes and modifiers of various sorts, prepositional arguments, etc.), it is less insightful, since it is often not clear how one might define these additional syntactic functions in terms of the configuration. Furthermore, even concerning the subject and object, it can run into difficulties. The configurational approach has difficulty with such cases: the plural verb were agrees with the post-verb noun phrase two lizards, which suggests that two lizards is the subject. But since two lizards follows the verb, one might view it as being located inside the verb phrase, which means it should count as the object.
This second observation suggests that the expletive there should be granted subject status. Many efforts to define the grammatical relations emphasize the role of inflectional morphology. In English, the subject can or must agree with the finite verb in person and number, and in languages that have morphological case, the subject and object (and other verb arguments) are identified in terms of the case markers that they bear (e.g. nominative, accusative, dative, genitive, ergative, absolutive, etc.). Inflectional morphology may be a more reliable means of defining the grammatical relations than the configuration, but its utility can be very limited in many cases. For instance, inflectional morphology is not going to help in languages that lack inflectional morphology almost entirely, such as Mandarin, and even with English, inflectional morphology does not help much, since English largely lacks morphological case. The difficulties facing attempts to define the grammatical relations in terms of thematic, configurational, or morphological criteria can be overcome by an approach that posits prototypical traits. The prototypical subject has a cluster of thematic, configurational, and/or morphological traits, and the same is true of the prototypical object and other verb arguments. Across languages and across constructions within a language, there can be many cases where a given subject argument may not be a prototypical subject, but it has enough subject-like traits to be granted subject status. Similarly, a given object argument may not be prototypical in one way or another, but if it has enough object-like traits, then it can nevertheless receive the status of object. This third strategy is tacitly preferred by most work in theoretical syntax.
All those theories of syntax that avoid providing concrete definitions of the grammatical relations but yet reference them often are (perhaps unknowingly) pursuing an approach in terms of prototypical traits. In dependency grammar (DG) theories of syntax,[4] every head-dependent dependency bears a syntactic function.[5] The result is that an inventory consisting of dozens of distinct syntactic functions is needed for each language. For example, a determiner-noun dependency might be assumed to bear the DET (determiner) function, and an adjective-noun dependency the ATTR (attribute) function. These functions are often produced as labels on the dependencies themselves in the syntactic tree, e.g. The tree contains the following syntactic functions: ATTR (attribute), CCOMP (clause complement), DET (determiner), MOD (modifier), OBJ (object), SUBJ (subject), and VCOMP (verb complement). The actual inventories of syntactic functions will differ from the one suggested here in the number and types of functions that are assumed. In this regard, this tree is merely intended to illustrate the importance that the syntactic functions can take on in some theories of syntax and grammar.
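The dependency-grammar idea described above, that every head-dependent arc carries a syntactic function, can be sketched as a small data structure. The sentence, the arc triples, and the helper function below are illustrative assumptions; real DG treebanks use richer representations and different label inventories.

```python
# A minimal sketch of dependency-grammar annotation: every head-dependent
# arc carries a syntactic function label. The sentence and label set are
# illustrative only.

from collections import namedtuple

Dep = namedtuple("Dep", ["head", "dependent", "function"])

# "The old dog chased the cat", with 'chased' as the root.
arcs = [
    Dep("chased", "dog", "SUBJ"),   # subject of the finite verb
    Dep("dog", "The", "DET"),       # determiner-noun dependency
    Dep("dog", "old", "ATTR"),      # adjective-noun dependency
    Dep("chased", "cat", "OBJ"),    # direct object
    Dep("cat", "the", "DET"),
]

def functions_of(head, arcs):
    """All syntactic functions borne by the dependents of a given head."""
    return sorted(a.function for a in arcs if a.head == head)

print(functions_of("chased", arcs))  # ['OBJ', 'SUBJ']
print(functions_of("dog", arcs))     # ['ATTR', 'DET']
```

Because the function label sits on the arc rather than on a phrasal node, no phrasal categories are needed, which mirrors the point made about dependency grammars throughout this section.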
https://en.wikipedia.org/wiki/Grammatical_relation
A tagmeme is the smallest functional element in the grammatical structure of a language. The term was introduced in the 1930s by the linguist Leonard Bloomfield, who defined it as the smallest meaningful unit of grammatical form (analogous to the morpheme, defined as the smallest meaningful unit of lexical form). The term was later adopted, and its meaning broadened, by Kenneth Pike and others beginning in the 1950s, as the basis for their tagmemics. According to the scheme set out by Leonard Bloomfield in his book Language (1933), the tagmeme is the smallest meaningful unit of grammatical form.[1] A tagmeme consists of one or more taxemes, where a taxeme is a primitive grammatical feature, in the same way that a phoneme is a primitive phonological feature. Taxemes and phonemes do not as a rule have meaning on their own, but combine into tagmemes and morphemes respectively, which carry meaning. For example, an utterance such as "John runs" is a concrete example of a tagmeme (an allotagm) whose meaning is that an actor performs an action. The taxemes making up this tagmeme include the selection of a nominative expression, the selection of a finite verb expression, and the ordering of the two such that the nominative expression precedes the finite verb expression. Bloomfield makes the taxeme and tagmeme part of a system of emic units:[2] More generally, he defines any meaningful unit of linguistic signaling (not necessarily the smallest) as a linguistic form, and its meaning as a linguistic meaning; it may be either a lexical form (with a lexical meaning) or a grammatical form (with a grammatical meaning). Bloomfield's term was adopted by Kenneth Pike and others to denote what they had previously been calling the grammeme (earlier grameme).[3] In Pike's approach, consequently called tagmemics, the hierarchical organization of levels (e.g. in syntax: word, phrase, sentence, paragraph, discourse) results from the fact that the elements of a tagmeme on a higher level (e.g.
'sentence') are analyzed as syntagmemes on the next lower level (e.g. 'phrase'). The tagmeme is the correlation of a syntagmatic function (e.g. subject, object) and paradigmatic fillers (e.g. nouns, pronouns, or proper nouns as possible fillers of the subject position). Tagmemes combine to form a syntagmeme, a syntactic construction consisting of a sequence of tagmemes. Tagmemics as a linguistic methodology was developed by Pike in his book Language in Relation to a Unified Theory of the Structure of Human Behavior, 3 vol. (1954–1960). It was primarily designed to assist linguists in efficiently extracting coherent descriptions out of corpora of fieldwork data. Tagmemics is particularly associated with the early work of the Summer Institute of Linguistics, an association of missionary linguists devoted largely to Bible translations, of which Pike was an early member. Tagmemics makes, at higher levels of linguistic analysis (grammatical and semantic), the kind of distinction made between phone and phoneme in phonology and phonetics; for instance, contextually conditioned synonyms are considered different instances of a single tagmeme, just as sounds which are (in a given language) contextually conditioned are allophones of a single phoneme. The emic and etic distinction also applies in other social sciences.
https://en.wikipedia.org/wiki/Grammeme
Move α is a feature of many transformational-generative grammars, first developed in the Revised Extended Standard Theory (REST) by Noam Chomsky in the late 1970s and later part of government and binding theory (GB) in the 1980s and the Minimalist Program of the 1990s. The term refers to the relation between an indexed constituent and its trace t, e.g., the relation of whom and t in the example. In (1), the constituent (whom) and its trace (t) are said to form a "chain". In syntax, Move α is the most general formulation of possible movement permitted by a rule. More specific rules include Move NP and Move wh, which in turn are more general than specific transformations such as those involved in passivization.[1] This marks a shift of attention in transformational grammar around the 1970s, away from focusing on specific rules toward the underlying principles constraining them, which culminated in the development of the Principles and Parameters framework in the 1980s. Because in isolation Move α produces massive overgeneration, it is heavily constrained by the other components of the grammar (Chomsky, 1980).[2] Its application is restricted by the Subjacency principle of Bounding theory, and its output is subject to a variety of filters, principles, etc. stated by other modules of GB.[3] In 1984 Howard Lasnik and Mamoru Saito unified Move α and other syntactic operations, such as Insertion and Deletion, into what they called Affect α,[4] a generalization to the effect of "Do anything to any category". The latter is viewed with suspicion by proponents of REST as an overgeneralization. In the Minimalist Program, first developed in the 1990s, Move α (simply called Move) initially became a structure-building operation together with Merge. However, Chomsky later proposed that Move is simply the application of Merge where one of the two merged objects is an internal part of the other, thus eliminating Move as an autonomous operation.[5]
https://en.wikipedia.org/wiki/Move_%CE%B1
A syntactic category is a syntactic unit that theories of syntax assume.[1] Word classes, largely corresponding to traditional parts of speech (e.g. noun, verb, preposition, etc.), are syntactic categories. In phrase structure grammars, the phrasal categories (e.g. noun phrase, verb phrase, prepositional phrase, etc.) are also syntactic categories. Dependency grammars, however, do not acknowledge phrasal categories (at least not in the traditional sense).[2] Word classes considered as syntactic categories may be called lexical categories, as distinct from phrasal categories. The terminology is somewhat inconsistent between the theoretical models of different linguists.[2] However, many grammars also draw a distinction between lexical categories (which tend to consist of content words, or phrases headed by them) and functional categories (which tend to consist of function words or abstract functional elements, or phrases headed by them). The term lexical category therefore has two distinct meanings. Moreover, syntactic categories should not be confused with grammatical categories (also known as grammatical features), which are properties such as tense, gender, etc. At least three criteria are used in defining syntactic categories: For instance, many nouns in English denote concrete entities, they are pluralized with the suffix -s, and they occur as subjects and objects in clauses. Many verbs denote actions or states, they are conjugated with agreement suffixes (e.g. -s of the third person singular in English), and in English they tend to show up in medial positions of the clauses in which they appear. The third criterion is also known as distribution. The distribution of a given syntactic unit determines the syntactic category to which it belongs. The distributional behavior of syntactic units is identified by substitution:[3] like syntactic units can be substituted for each other. Additionally, there are also informal criteria one can use in order to determine syntactic categories.
For example, one informal means of determining whether an item is lexical, as opposed to functional, is to see if it is left behind in "telegraphic speech" (that is, the way a telegram would be written; e.g., Pants fire. Bring water, need help.)[4] The traditional parts of speech are lexical categories, in one meaning of that term.[5] Traditional grammars tend to acknowledge approximately eight to twelve lexical categories, e.g. The lexical categories that a given grammar assumes will likely vary from this list. Certainly numerous subcategories can be acknowledged. For instance, one can view pronouns as a subtype of noun, and verbs can be divided into finite verbs and non-finite verbs (e.g. gerund, infinitive, participle, etc.). The central lexical categories give rise to corresponding phrasal categories:[6] In terms of phrase structure rules, phrasal categories can occur to the left of the arrow while lexical categories cannot, e.g. NP → D N. Traditionally, a phrasal category should consist of two or more words, although conventions vary in this area. X-bar theory, for instance, often sees individual words corresponding to phrasal categories. Phrasal categories are illustrated with the following trees: The lexical and phrasal categories are identified according to the node labels, phrasal categories receiving the "P" designation. Dependency grammars do not acknowledge phrasal categories in the way that phrase structure grammars do.[2] What this means is that the interaction between lexical and phrasal categories disappears, the result being that only the lexical categories are acknowledged.[7] The tree representations are simpler because the number of nodes and categories is reduced, e.g. The distinction between lexical and phrasal categories is absent here. The number of nodes is reduced by removing all nodes marked with "P". Note, however, that phrases can still be acknowledged insofar as any subtree that contains two or more words will qualify as a phrase.
Many grammars draw a distinction between lexical categories and functional categories.[8] This distinction is orthogonal to the distinction between lexical categories and phrasal categories. In this context, the term lexical category applies only to those parts of speech and their phrasal counterparts that form open classes and have full semantic content. The parts of speech that form closed classes and have mainly just functional content are called functional categories: There is disagreement in certain areas, for instance concerning the status of prepositions. The distinction between lexical and functional categories plays a big role in Chomskyan grammars (Transformational Grammar, Government and Binding Theory, Minimalist Program), where the role of the functional categories is large. Many phrasal categories are assumed that do not correspond directly to a specific part of speech, e.g. inflection phrase (IP), tense phrase (TP), agreement phrase (AgrP), focus phrase (FP), etc. (see also Phrase → Functional categories). In order to acknowledge such functional categories, one has to assume that the constellation is a primitive of the theory and that it exists separately from the words that appear. As a consequence, many grammar frameworks do not acknowledge such functional categories, e.g. Head-Driven Phrase Structure Grammar, Dependency Grammar, etc. Early research suggested shifting away from the use of labelling, as labels were considered non-optimal for the analysis of syntactic structure and should therefore be eliminated.[9] Collins (2002) argued that, although labels such as Noun, Pronoun, Adjective and the like were unavoidable and undoubtedly useful for categorizing syntactic items, providing labels for the projections of those items was not useful and was, in fact, detrimental to structural analysis, since there were disagreements and discussions about how exactly to label these projections.
The labeling of projections such as noun phrases (NP), verb phrases (VP), and others has since been a topic of discussion amongst syntacticians, who have been working on labelling algorithms to solve the very problem brought up by Collins. In line with both Phrase Structure Rules and X-bar theory, syntactic labelling plays an important role within Chomsky's Minimalist Program (MP). Chomsky first developed the MP by creating a theoretical framework for generative grammar that can be applied universally among all languages. In contrast to Phrase Structure Rules and X-bar theory, much of the research and many of the proposed theories on labels are fairly recent and still ongoing.
https://en.wikipedia.org/wiki/Syntactic_category#Labels_in_the_Minimalist_Program
In linguistics, the minimalist program is a major line of inquiry that has been developing inside generative grammar since the early 1990s, starting with a 1993 paper by Noam Chomsky.[1] Following Imre Lakatos's distinction, Chomsky presents minimalism as a program, understood as a mode of inquiry that provides a conceptual framework which guides the development of linguistic theory. As such, it is characterized by a broad and diverse range of research directions. For Chomsky, there are two basic minimalist questions—What is language? and Why does it have the properties it has?—but the answers to these two questions can be framed in any theory.[2] Minimalism is an approach developed with the goal of understanding the nature of language. It models a speaker's knowledge of language as a computational system with one basic operation, namely Merge. Merge combines expressions taken from the lexicon in a successive fashion to generate representations that characterize I-language, understood to be the internalized intensional knowledge state as represented in individual speakers. By hypothesis, I-language—also called universal grammar—corresponds to the initial state of the human language faculty in individual human development. Minimalism is reductive in that it aims to identify which aspects of human language—as well as the computational system that underlies it—are conceptually necessary. This is sometimes framed as questions relating to perfect design (Is the design of human language perfect?) and optimal computation (Is the computational system for human language optimal?).[2] According to Chomsky, a human natural language is not optimal when judged based on how it functions, since it often contains ambiguities, garden paths, etc.
However, it may be optimal for interaction with the systems that are internal to the mind.[3] Such questions are informed by a set of background assumptions, some of which date back to the earliest stages of generative grammar:[4] Minimalism develops the idea that human language ability is optimal in its design and exquisite in its organization, and that its inner workings conform to a very simple computation. On this view, universal grammar instantiates a perfect design in the sense that it contains only what is necessary. Minimalism further develops the notion of economy, which came to the fore in the early 1990s, though it was still peripheral to transformational grammar. Economy of derivation requires that movements (i.e., transformations) occur only if necessary, and specifically to satisfy feature-checking, whereby an interpretable feature is matched with a corresponding uninterpretable feature. (See the discussion of feature-checking below.) Economy of representation requires that grammatical structures exist for a purpose. The structure of a sentence should be no larger or more complex than required to satisfy constraints on grammaticality. Within minimalism, economy—recast in terms of the strong minimalist thesis (SMT)—has acquired increased importance.[6] The 2016 book entitled Why Only Us—co-authored by Noam Chomsky and Robert Berwick—defines the strong minimalist thesis as follows: The optimal situation would be that UG reduces to the simplest computational principles which operate in accord with conditions of computational efficiency. This conjecture is ... called the Strong Minimalist Thesis (SMT). Under the strong minimalist thesis, language is a product of inherited traits as developmentally enhanced through intersubjective communication and social exposure to individual languages (amongst other things).
This reduces to a minimum the "innate" component (the genetically inherited component) of the language faculty, which has been criticized over many decades, and is separate from the developmental psychology component. Intrinsic to the syntactic model (e.g. the Y/T-model) is the fact that social and other factors play no role in the computation that takes place in narrow syntax; what Chomsky, Hauser and Fitch refer to as the faculty of language in the narrow sense (FLN), as distinct from the faculty of language in the broad sense (FLB). Thus, narrow syntax concerns itself only with interface requirements, also called legibility conditions. The SMT can be restated as follows: syntax, narrowly defined, is a product of the requirements of the interfaces and nothing else. This is what is meant by "Language is an optimal solution to legibility conditions" (Chomsky 2001:96). Interface requirements force the deletion of features that are uninterpretable at a particular interface, a necessary consequence of Full Interpretation. A PF object must consist only of features that are interpretable at the articulatory-perceptual (A-P) interface; likewise, an LF object must consist of features that are interpretable at the conceptual-intentional (C-I) interface. The presence of an uninterpretable feature at either interface will cause the derivation to crash. Narrow syntax proceeds as a set of operations—Merge, Move and Agree—carried out upon a numeration (a selection of features, words, etc. from the lexicon) with the sole aim of removing all uninterpretable features before being sent via Spell-Out to the A-P and C-I interfaces. The result of these operations is a hierarchical syntactic structure that captures the relationships between the component features. The exploration of minimalist questions has led to several radical changes in the technical apparatus of transformational generative grammatical theory.
Some of the most important are:[7] Early versions of minimalism posit two basic operations: Merge and Move. Earlier theories of grammar—as well as early minimalist analyses—treat phrasal and movement dependencies differently than current minimalist analyses. In the latter, Merge and Move are different outputs of a single operation. Merge of two syntactic objects (SOs) is called "external Merge". As for Move, it is defined as an instance of "internal Merge", and involves the re-merge of an already merged SO with another SO.[8] There continues to be active debate about how Move should be formulated, but the differences between current proposals are relatively minute. More recent versions of minimalism recognize three operations: Merge (i.e. external Merge), Move (i.e. internal Merge), and Agree. The emergence of Agree as a basic operation is related to the mechanism which forces movement, which is mediated by feature-checking. In its original formulation, Merge is a function that takes two objects (α and β) and merges them into an unordered set with a label, either α or β. In more recent treatments, the possibility of the derived syntactic object being unlabelled is also considered; this is called "simple Merge" (see the Label section). In the version of Merge which generates a label, the label identifies the properties of the phrase. Merge will always occur between two syntactic objects: a head and a non-head.[9] For example, Merge can combine the two lexical items drink and water to generate drink water. In the Minimalist Program, the phrase is identified with a label. In the case of drink water, the label is drink, since the phrase acts as a verb. This can be represented in a typical syntax tree as follows, with the name of the derived syntactic object (SO) determined either by the lexical item (LI) itself, or by the category label of the LI: Merge can operate on already-built structures; in other words, it is a recursive operation.
If Merge were not recursive, then this would predict that only two-word utterances are grammatical. (This is relevant for child language acquisition, where children are observed to go through a so-called "two-word" stage. This is discussed below in the implications section.) As illustrated in the accompanying tree structure, if a new head (here γ) is merged with a previously formed syntactic object (a phrase, here {α, {α, β}}), the function has the form Merge (γ, {α, {α, β}}) → {γ, {γ, {α, {α, β}}}}. Here, γ is the head, so the output label of the derived syntactic object is γ. Chomsky's earlier work defines each lexical item as a syntactic object that is associated with both categorial features and selectional features.[10] Features—more precisely, formal features—participate in feature-checking, which takes as input two expressions that share the same feature and checks them off against each other in a certain domain.[11] In some but not all versions of minimalism, projection of selectional features proceeds via feature-checking, as required by locality of selection:[12][13][14] Selection as projection: as illustrated in the bare phrase structure tree for the sentence The girl ate the food, a notable feature is the absence of distinct labels (see Labels below). Relative to Merge, the selectional features of a lexical item determine how it participates in Merge. Feature-checking: when a feature is "checked", it is removed. Locality of selection (LOS) is a principle that forces selectional features to participate in feature-checking. LOS states that a selected element must combine with the head that selects it either as complement or specifier. Selection is local in the sense that there is a maximum distance that can occur between a head and what it selects: selection must be satisfied within the projection of the head.[12] Move arises via "internal Merge".
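The labeled, recursive version of Merge described above can be sketched directly: Merge takes two syntactic objects and returns a set paired with the label projected by the head. The representation (strings for lexical items, a (label, set) pair for derived objects) is an assumption made here for illustration.

```python
# A minimal sketch of labeled Merge: Merge(head, non_head) yields an
# unordered set whose label is projected by the head. Representation
# choices (strings for LIs, (label, frozenset) pairs for derived SOs)
# are illustrative assumptions.

def label_of(so):
    """A lexical item labels itself; a derived object carries its label."""
    return so if isinstance(so, str) else so[0]

def merge(head, non_head):
    """Return (label, {head, non_head}), with the head projecting the label."""
    return (label_of(head), frozenset({head, non_head}))

vp = merge("drink", "water")   # label 'drink': the phrase acts as a verb
print(label_of(vp))            # 'drink'

# Merge is recursive: it can take an already-built object as an input,
# mirroring Merge(γ, {α, {α, β}}) → {γ, {γ, {α, {α, β}}}}.
bigger = merge("gamma", merge("alpha", "beta"))
print(label_of(bigger))        # 'gamma'
```

Because `merge` accepts its own output as an argument, arbitrarily deep structures can be built from a two-place operation, which is the point the two-word-stage remark turns on.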
Movement as feature-checking: the original formulation of the extended projection principle states that clauses must contain a subject in the specifier position of TP/IP.[15] In the tree above, there is an EPP feature. This is a strong feature which forces re-Merge—which is also called internal Merge—of the DP the girl. The EPP feature in the tree above is a subscript on the T head, which indicates that T needs a subject in its specifier position. This causes the movement of <the girl> to the specifier position of T.[12] A substantial body of literature in the minimalist tradition focuses on how a phrase receives a proper label.[16] The debate about labeling reflects the deeper aspirations of the minimalist program, which is to remove all redundant elements in favour of the simplest analysis possible.[17] While earlier proposals focus on how to distinguish adjunction from substitution via labeling, more recent proposals attempt to eliminate labeling altogether, but they have not been universally accepted. Adjunction and substitution: Chomsky's 1995 monograph entitled The Minimalist Program outlines two methods of forming structure: adjunction and substitution. The standard properties of segments, categories, adjuncts, and specifiers are easily constructed. In the general form of a structured tree for adjunction and substitution, α is an adjunct to X, and α is substituted into the SPEC, X position. α can raise to target the Xmax position, and it builds a new position that can either be adjoined to [Y-X] or is SPEC, X, in which case it is termed the 'target'.
At the bottom of the tree, the minimal domain includes SPEC Y and Z, along with a new position formed by the raising of α which is either contained within Z or is Z.[18] Adjunction: before the introduction of bare phrase structure, adjuncts did not alter information about bar-level, category information, or the head of the target (located in the adjoined structure).[19] An example of adjunction using X-bar theory notation is given below for the sentence Luna bought the purse yesterday. Observe that the adverbial modifier yesterday is sister to VP and dominated by VP. Thus, the addition of the modifier does not change information about the bar-level: in this case, the maximal projection VP. In the minimalist program, adjuncts are argued to exhibit a different, perhaps more simplified, structure. Chomsky (1995) proposes that adjunction forms a two-segment object/category consisting of: (i) the head of a label; (ii) a label different from the head of the label. The label L is not considered a term in the structure that is formed because it is not identical to the head S, but it is derived from it in an irrelevant way. If α adjoins to S, and S projects, then the structure that results is L = {<H(S), H(S)>, {α, S}}, where the entire structure is replaced with the head S, as well as what the structure contains. The head is what projects, so it can itself be the label or can determine the label irrelevantly.[18] In the new account developed in bare phrase structure, the properties of the head are no longer preserved in adjunction structures, as the attachment of an adjunct to a particular XP following adjunction is non-maximal, as shown in the figure below that illustrates adjunction in BPS. Such an account is applicable to XPs that are related to multiple adjunction.[19] Substitution forms a new category consisting of a head (H), which is the label, and an element being projected.
Some ambiguities may arise if the raising features, in this case α, contain the entire head and the head is also XMAX.[18] Labeling algorithm (LA): Merge is a function that takes two objects (α and β) and merges them into an unordered set with a label (either α or β), where the label indicates the kind of phrase that is built via Merge. But this labeling technique is too unrestricted, since the input labels make incorrect predictions about which lexical categories can merge with each other. Consequently, a different mechanism is needed to generate the correct output label for each application of Merge in order to account for how lexical categories combine; this mechanism is referred to as the labeling algorithm (LA).[20] Recently, the suitability of a labeling algorithm has been questioned, as syntacticians have identified a number of limitations associated with what Chomsky has proposed.[21] It has been argued that two kinds of phrases pose a problem. The labeling algorithm proposes that labeling occurs via minimal search, a process in which a single lexical item within a phrasal structure acts as a head and provides the label for the phrase.[22] It has been noted that minimal search cannot account for the following two possibilities:[21] In each of these cases, there is no lexical item acting as a prominent element (i.e. a head). Given this, it is not possible through minimal search to extract a label for the phrase. While Chomsky has proposed solutions for these cases, it has been argued that the fact that such cases are problematic suggests that the labeling algorithm violates the tenets of the minimalist program, as it departs from conceptual necessity. Other linguistic phenomena that create instances where Chomsky's labeling algorithm cannot assign labels include predicate fronting, embedded topicalization, scrambling (free movement of constituents), and stacked structures (which involve multiple specifiers).
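The minimal-search idea can be made concrete with a small sketch. This is not from the source; the names `SyntacticObject`, `merge`, and `label` are illustrative assumptions, and the search is deliberately simplified to one level of inspection so that the two problem cases, {XP, YP} and {H, H}, surface as labeling failures.

```python
# Toy sketch (illustrative, not an official formulation): Merge as set
# formation plus labeling by a simplified minimal search.

class SyntacticObject:
    def __init__(self, name, category=None, parts=None):
        self.name = name
        self.category = category   # lexical items carry a category; sets do not
        self.parts = parts or []

    def is_lexical(self):
        return self.category is not None

def merge(alpha, beta):
    """External Merge: form the unordered set {alpha, beta}."""
    return SyntacticObject(name="{%s, %s}" % (alpha.name, beta.name),
                           parts=[alpha, beta])

def label(obj):
    """Minimal search (simplified): a unique lexical member projects its
    category. {H, XP} is labeled by H; {XP, YP} and {H, H} fail."""
    if obj.is_lexical():
        return obj.category
    lexical = [p for p in obj.parts if p.is_lexical()]
    if len(lexical) == 1:
        return lexical[0].category
    return None  # no unique prominent element: minimal search fails

v = SyntacticObject("eat", category="V")
dp = merge(SyntacticObject("the", category="D"),
           SyntacticObject("cake", category="N"))
vp = merge(v, dp)  # {H, XP}: head v projects, so the label is V
```

Here `label(vp)` returns "V", while `label(dp)` returns `None` because both members are heads (the {H, H} case), and merging two phrasal sets likewise fails (the {XP, YP} case); these are exactly the configurations the text identifies as problematic for minimal search.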
Given these criticisms of Chomsky's labeling algorithm, it has recently been argued that the labeling algorithm theory should be eliminated altogether and replaced by another labeling mechanism. The symmetry principle has been identified as one such mechanism, as it provides an account of labeling that assigns the correct labels even when phrases are derived through complex linguistic phenomena.[21] Starting in the early 2000s, attention turned from feature-checking as a condition on movement to feature-checking as a condition on agreement. This line of inquiry was initiated in Chomsky (2000), and formulated as follows: Many recent analyses assume that Agree is a basic operation, on a par with Merge and Move. This is currently a very active area of research, and there remain numerous open questions:[23] Co-indexation as feature checking: feature checking can be tracked with co-indexation markers such as {k, m, o, etc.}.[12] A phase is a syntactic domain first hypothesized by Noam Chomsky in 1998.[24] It is a domain where all derivational processes operate and where all features are checked.[25] A phase consists of a phase head and a phase domain. Once any derivation reaches a phase and all the features are checked, the phase domain is sent to transfer and becomes invisible to further computations.[25] The literature shows three trends relative to what is generally considered to be a phase: A simple sentence can be decomposed into two phases, CP and vP.
Chomsky considers CP and vP to be strong phases because of their propositional content, as well as their interaction with movement and reconstruction.[26] Propositional content: CP and vP are both propositional units, but for different reasons.[30] CP is considered a propositional unit because it is a full clause that has tense and force: example (1) shows that the complementizer that in the CP phase conditions finiteness (here past tense) and force (here, affirmative) of the subordinate clause. vP is considered a propositional unit because all the theta roles are assigned in vP: in (2) the verb ate in the vP phase assigns the Theme theta role to the DP the cake and the Agent theta role to the DP Mary.[12] Movement: CP and vP can be the focus of pseudo-cleft movement, showing that CP and vP form syntactic units: this is shown in (3) for the CP constituent that John is bringing the dessert, and in (4) for the vP constituent arrive tomorrow.[30] Reconstruction: When a moved constituent is interpreted in its original position to satisfy binding principles, this is called reconstruction.[31] Evidence from reconstruction is consistent with the claim that the moved phrase stops at the left edge of CP and vP phases.[28] Chomsky theorized that syntactic operations must obey the phase impenetrability condition (PIC), which essentially requires that movement proceed from the left edge of a phase. The PIC has been variously formulated in the literature. The extended projection principle feature that is on the heads of phases triggers the intermediate movement steps to phase edges.[30] Movement of a constituent out of a phase is (in the general case) only permitted if the constituent has first moved to the left edge of the phase (XP). The edge of a head X is defined as the residue outside of X', i.e. the specifiers of X and adjuncts to XP.[32] English successive cyclic wh-movement obeys the PIC.[30] Sentence (7) has two phases: vP and CP.
Relative to the application of movement, who moves from the (lower) vP phase to the (higher) CP phase in two steps: Another example of the PIC can be observed when analyzing A'-agreement in Medumba. A'-agreement is a term used for the morphological reflex of A'-movement of an XP.[31] In Medumba, when the moved phrase reaches a phase edge, a high-low tonal melody is added to the head of the complement of the phase head. Since A'-agreement in Medumba requires movement, the presence of agreement on the complements of phase heads shows that the wh-word moves to the edges of phases and obeys the PIC.[31] Example:[31] Sentence (2a) has a high-low tone on the verb nɔ́ʔ and the tense marker ʤʉ̀n, and is therefore grammatical. (2a) [CP á wʉ́ Wàtɛ̀t nɔ́ɔ̀ʔ [vP ⁿ-ʤʉ́ʉ̀ ná?]] 'Who did Watat see?' Sentence (2b) does not have a high-low tone on the verb nɔ́ʔ and the tense marker ʤʉ̀n, and is therefore ungrammatical. (2b) *[CP á wʉ́ Wàtɛ̀t nɔ́ʔ [vP ⁿ-ʤʉ́ ná?]] *'Who did Watat see?' To generate the grammatical sentence (2a), the wh-phrase á wʉ́ moves from the vP phase to the CP phase. To obey the PIC, this movement must take two steps, since the wh-phrase needs to move to the edge of the vP phase in order to move out of the lower phase. One can confirm that A'-agreement only occurs with movement by examining sentences where the wh-phrase does not move. In sentence (2c) below, one can observe that there is no high-low tone melody on the verb nɔ́ʔ and the tense marker fá, since the wh-word does not move to the edge of the vP and CP phase.[31] (2c) [m-ɛ́n nɔ́ʔ fá bɔ̀ á wʉ́ á] 'The child gave the bag to who?' The spell-out of a string is assumed to be cyclic, but there is no consensus about how to implement this. Some analyses adopt an iterative spell-out algorithm, with spell-out applying after each application of Merge. Other analyses adopt an opportunistic algorithm, where spell-out applies only if it must. And yet others adopt a wait-til-the-end algorithm, with spell-out occurring only at the end of the derivation.
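The two-step, successive-cyclic movement described above, where a wh-phrase must stop at the edge of each phase it escapes, can be sketched as a toy path builder. This is not from the source; the representation of phases and the helper name `pic_respecting_path` are illustrative assumptions.

```python
# Toy sketch (illustrative): building a PIC-respecting movement path.
# A constituent may leave a phase only from that phase's left edge, so
# crossing a phase adds an obligatory intermediate landing site.

PHASE_HEADS = {"C", "v"}

def pic_respecting_path(crossed_phases, start, target):
    """Return the landing sites of a wh-phrase moving from `start` to
    `target`, stopping at the edge of every phase it crosses (the PIC)."""
    path = [start]
    for head in crossed_phases:          # innermost phase first
        if head in PHASE_HEADS:
            path.append("edge of %sP" % head)
    path.append(target)
    return path

# "Who did Watat see?": the wh-phrase escapes the vP phase via its edge,
# then lands in Spec,CP -- two movement steps in total.
steps = pic_respecting_path(["v"], "complement of V", "Spec,CP")
```

The resulting path has exactly two movement steps, matching the two-step derivation given for both the English sentence (7) and the Medumba example (2a).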
There is no consensus about the cyclicality of the Agree relation: it is sometimes treated as cyclic, sometimes as a-cyclic, and sometimes as counter-cyclic. From a theoretical standpoint, and in the context of generative grammar, the Minimalist Program is an outgrowth of the principles and parameters (P&P) model, considered to be the ultimate standard theoretical model that generative linguistics developed from the early 1980s through to the early 1990s.[33] The Principles and Parameters model posits a fixed set of principles (held to be valid for all human languages) that, when combined with settings for a finite set of parameters, could describe the properties that characterize the language competence that a child eventually attains. One aim of the Minimalist Program is to ascertain how much of the Principles and Parameters model can be taken to result from the hypothesized optimal and computationally efficient design of the human language faculty. In turn, some aspects of the Principles and Parameters model provide technical tools and foundational concepts that inform the broad outlines of the Minimalist Program.[34] X-bar theory, first introduced in Chomsky (1970) and elaborated in Jackendoff (1977) among other works, was a major milestone in the history of the development of generative grammar. It contains the following postulates:[35] In the chapter "Phrase Structure" of The Handbook of Contemporary Syntactic Theory, Naoki Fukui identifies three kinds of syntactic relationships: (1) Dominance: the hierarchical categorization of the lexical items and constituents of the structure, (2) Labeling: the syntactic category of each constituent, and (3) Linear order (or Precedence): the left-to-right order of the constituents (essentially the existence of the X-bar schemata).
Whereas X-bar theory encodes all three relationships, bare phrase structure encodes only the first two.[15] Claims 1 and 2 have survived largely in their original forms through the development of grammatical theory, unlike Claim 3, which has not. Claim 1 would later be eliminated in favour of projection-less nodes.[35] In 1980, the principles and parameters (P&P) approach emerged, marking a move away from rule-based grammars: construction-specific rules were replaced by interacting modules of UG such as X-bar theory, case theory, etc. During this time, PS rules disappeared because they proved redundant, merely recapitulating information already in the lexicon. Transformational rules survived with a few amendments to how they are expressed: complex construction-specific rules no longer need to be defined, and can be reduced to the general schema Move-α, which allows anything to move anywhere. The only two sub-theories that withstood time within P&P are X-bar theory and Move-α. Of the fundamental properties mentioned above, X-bar theory accounts for hierarchical structure and endocentricity, while Move-α accounts for unboundedness and non-local dependencies. A few years later, an effort was made to merge X-bar theory with Move-α by suggesting that structures are built from the bottom up (using adjunction or substitution depending on the target structure):[35] X-bar theory had a number of weaknesses and was replaced by bare phrase structure, but some X-bar theory notions were carried over into BPS.[17] Labeling in bare phrase structure in particular was adapted from the conventions of X-bar theory; however, in order to get the "barest" phrase structures, there are some dissimilarities.
BPS differs from X-bar theory in the following ways:[15] The main reasoning behind the transition from X-bar theory to BPS is the following: The examples below show the progression of syntax structure from X-bar theory (the theory preceding BPS) to specifier-less structure. BPS satisfies the principles of UG using at minimum two interfaces, such as the conceptual-intentional and sensorimotor systems, or a third condition not specific to language but still satisfying the conditions put forth by the interfaces.[35] In linguistics, there are differing approaches taken to explore the basis of language: two of these approaches are formalism and functionalism. It has been argued that the formalist approach can be characterized by the belief that rules governing syntax can be analyzed independently from things such as meaning and discourse. In other words, according to formalists, syntax is an independent system (referred to as the autonomy of syntax). By contrast, functionalists believe that syntax is determined largely by the communicative function that it serves. Therefore, syntax is not kept separate from things such as meaning and discourse.[36] Under functionalism, there is a belief that language evolved alongside other cognitive abilities, and that these cognitive abilities must be understood in order to understand language. In his theories prior to MP, Chomsky had been interested exclusively in formalism, and had believed that language could be isolated from other cognitive abilities. However, with the introduction of MP, Chomsky considers aspects of cognition (e.g. the conceptual-intentional (CI) system and the sensorimotor (SM) system) to be linked to language. Rather than arguing that syntax is a specialized module which excludes other systems, under MP Chomsky considers the roles of cognition, production, and articulation in formulating language.
Given that these cognitive systems are considered in an account of language under MP, it has been argued that, in contrast to Chomsky's previous theories, MP is consistent with functionalism.[37] There is a trend in minimalism that shifts from constituency-based to dependency-based structures. Minimalism falls under the dependency grammar umbrella by virtue of adopting bare phrase structure, label-less trees, and specifier-less syntax.[38][39] As discussed by Helen Goodluck and Nina Kazanin in their 2020 paper, certain aspects of the minimalist program provide insightful accounts of first language (L1) acquisition by children.[40] In the late 1990s, David E. Johnson and Shalom Lappin published the first detailed critiques of Chomsky's minimalist program.[41] This technical work was followed by a lively debate with proponents of minimalism on the scientific status of the program.[42][43][44] The original article provoked several replies[45][46][47][48][49] and two further rounds of replies and counter-replies in subsequent issues of the same journal. Lappin et al. argue that the minimalist program is a radical departure from earlier Chomskyan linguistic practice that is not motivated by any new empirical discoveries, but rather by a general appeal to perfection, which is both empirically unmotivated and so vague as to be unfalsifiable. They compare the adoption of this paradigm by linguistic researchers to other historical paradigm shifts in natural sciences and conclude that the adoption of the minimalist program has been an "unscientific revolution", driven primarily by Chomsky's authority in linguistics. The several replies to the article in Natural Language and Linguistic Theory Volume 18, number 4 (2000) make a number of different defenses of the minimalist program.
Some claim that it is not in fact revolutionary or not in fact widely adopted, while others agree with Lappin and Johnson on these points but defend the vagueness of its formulation as not problematic in light of its status as a research program rather than a theory (see above). Prakash Mondal has published a book-length critique of the minimalist model of grammar, arguing that there are a number of contradictions, inconsistencies and paradoxes within the formal structure of the system. In particular, his critique examines the consequences of adopting some rather innocuous and widespread assumptions or axioms about the nature of language as adopted in the Minimalist model of the language faculty.[50] Developments in the minimalist program have also been critiqued by Hubert Haider, who has argued that minimalist studies routinely fail to follow scientific rigour. In particular, data compatible with hypotheses are filed under confirmation, whereas crucial counter-evidence is largely ignored or shielded off by ad hoc auxiliary assumptions. Moreover, the supporting data are biased towards SVO languages and are often based on the linguist's introspection rather than attempts to gather data in an unbiased manner by experimental means. Haider further refers to the appeal to an authority figure in the field, with dedicated followers taking the core premises of minimalism for granted as if they were established facts.[51] Much research has been devoted to the study of the consequences that arise when minimalist questions are formulated. The lists below, which are not exhaustive, are given in reverse chronological order.
https://en.wikipedia.org/wiki/Minimalist_Program
In language, a clause is a constituent or phrase that comprises a semantic predicand (expressed or not) and a semantic predicate.[1] A typical clause consists of a subject and a syntactic predicate,[2] the latter typically a verb phrase composed of a verb with or without any objects and other modifiers. However, the subject is sometimes unexpressed if it is easily deducible from the context, especially in null-subject languages but also in other languages, including instances of the imperative mood in English. A complete simple sentence contains a single clause with a finite verb. Complex sentences contain at least one clause subordinated (dependent) to an independent clause (one that could stand alone as a simple sentence), which may be co-ordinated with other independents with or without dependents. Some dependent clauses are non-finite, i.e. they do not contain any element/verb marking a specific tense. A clause that contains one or more dependent or subordinate clauses is called a matrix clause. A matrix clause can be the main clause or any subordinate clause that itself contains one or more (additional) subordinate clauses.[citation needed] A primary division for the discussion of clauses is the distinction between independent clauses and dependent clauses.[3] An independent clause can stand alone, i.e. it can constitute a complete sentence by itself. A dependent clause, by contrast, depends on the presence of an independent clause to be usable. A second significant distinction concerns the difference between finite and non-finite clauses. A finite clause contains a structurally central finite verb, whereas the structurally central word of a non-finite clause is often a non-finite verb. Traditional grammar focuses on finite clauses, the awareness of non-finite clauses having arisen much later in connection with the modern study of syntax. The discussion here also focuses on finite clauses, although some aspects of non-finite clauses are considered further below.
Clauses can be classified according to a distinctive trait that is a prominent characteristic of their syntactic form. The position of the finite verb is one major trait used for classification, and the appearance of a specific type of focusing word (e.g. 'wh'-word) is another. These two criteria overlap to an extent, which means that often no single aspect of syntactic form is always decisive in deciding how the clause functions. There are, however, strong tendencies. Standard SV-clauses (subject-verb) are the norm in English. They are usually declarative (as opposed to exclamative, imperative, or interrogative); they express information neutrally, e.g. Declarative clauses like these are by far the most frequently occurring type of clause in any language. They can be viewed as basic, with other clause types being derived from them. Standard SV-clauses can also be interrogative or exclamative, however, given the appropriate intonation contour and/or the appearance of a question word, e.g. Examples like these demonstrate that how a clause functions cannot be known based entirely on a single distinctive syntactic criterion. SV-clauses are usually declarative, but intonation and/or the appearance of a question word can render them interrogative or exclamative. Verb-first clauses in English usually play one of three roles: 1. they express a yes/no-question via subject–auxiliary inversion, 2. they express a condition as an embedded clause, or 3. they express a command via the imperative mood, e.g. Most verb-first clauses are independent clauses. Verb-first conditional clauses, however, must be classified as embedded clauses because they cannot stand alone. In English, wh-clauses contain a wh-word. Wh-words often serve to help express a constituent question. They are also prevalent, though, as relative pronouns, in which case they serve to introduce a relative clause and are not part of a question.
The wh-word focuses a particular constituent and, most of the time, it appears in clause-initial position. The following examples illustrate standard interrogative wh-clauses. The b-sentences are direct questions (independent clauses), and the c-sentences contain the corresponding indirect questions (embedded clauses): One important aspect of matrix wh-clauses is that subject–auxiliary inversion is obligatory when something other than the subject is focused. When it is the subject (or something embedded in the subject) that is focused, however, subject–auxiliary inversion does not occur. Another important aspect of wh-clauses concerns the absence of subject–auxiliary inversion in embedded clauses, as illustrated in the c-examples just produced. Subject–auxiliary inversion is obligatory in matrix clauses when something other than the subject is focused, but it never occurs in embedded clauses, regardless of the constituent that is focused. A systematic distinction in word order emerges across matrix wh-clauses, which can have VS order, and embedded wh-clauses, which always maintain SV order, e.g. Relative clauses are a mixed group. In English they can be standard SV-clauses if they are introduced by that or lack a relative pronoun entirely, or they can be wh-clauses if they are introduced by a wh-word that serves as a relative pronoun. Embedded clauses can be categorized according to their syntactic function in terms of predicate-argument structures. They can function as arguments, as adjuncts, or as predicative expressions. That is, embedded clauses can be an argument of a predicate, an adjunct on a predicate, or (part of) the predicate itself. The predicate in question is usually the predicate of an independent clause, but embedding of predicates is also frequent. A clause that functions as the argument of a given predicate is known as an argument clause. Argument clauses can appear as subjects, as objects, and as obliques.
They can also modify a noun predicate, in which case they are known as content clauses. The following examples illustrate argument clauses that provide the content of a noun. Such argument clauses are content clauses: The content clauses like these in the a-sentences are arguments. Relative clauses introduced by the relative pronoun that, as in the b-clauses here, have an outward appearance that is closely similar to that of content clauses. The relative clauses are adjuncts, however, not arguments. Adjunct clauses are embedded clauses that modify an entire predicate-argument structure. All clause types (SV-, verb-first, wh-) can function as adjuncts, although the stereotypical adjunct clause is SV and introduced by a subordinator (i.e. subordinate conjunction, e.g. after, because, before, now, etc.), e.g. These adjunct clauses modify the entire matrix clause. Thus before you did in the first example modifies the matrix clause Fred arrived. Adjunct clauses can also modify a nominal predicate. The typical instance of this type of adjunct is a relative clause, e.g. An embedded clause can also function as a predicative expression. That is, it can form (part of) the predicate of a greater clause. These predicative clauses function just like other predicative expressions, e.g. predicative adjectives (That was good) and predicative nominals (That was the truth). They form the matrix predicate together with the copula. Some of the distinctions presented above are represented in syntax trees. These trees make the difference between main and subordinate clauses very clear, and they also illustrate well the difference between argument and adjunct clauses.
The following dependency grammar trees show that embedded clauses are dependent on an element in the independent clause, often on a verb:[4] The independent clause comprises the entire tree in both instances, whereas the embedded clauses constitute arguments of the respective independent clauses: the embedded wh-clause what we want is the object argument of the predicate know; the embedded clause that he is gaining is the subject argument of the predicate is motivating. Both of these argument clauses are dependent on the verb of the matrix clause. The following trees identify adjunct clauses using an arrow dependency edge: These two embedded clauses are adjunct clauses because they provide circumstantial information that modifies a superordinate expression. The first is a dependent of the main verb of the matrix clause and the second is a dependent of the object noun. The arrow dependency edges identify them as adjuncts. The arrow points away from the adjunct towards its governor to indicate that semantic selection is running counter to the direction of the syntactic dependency; the adjunct is selecting its governor. The next four trees illustrate the distinction mentioned above between matrix wh-clauses and embedded wh-clauses. The embedded wh-clause is an object argument each time. The position of the wh-word across the matrix clauses (a-trees) and the embedded clauses (b-trees) captures the difference in word order. Matrix wh-clauses have V2 word order, whereas embedded wh-clauses have (what amounts to) V3 word order. In the matrix clauses, the wh-word is a dependent of the finite verb, whereas it is the head over the finite verb in the embedded wh-clauses. There has been confusion about the distinction between clauses and phrases. This confusion is due in part to how these concepts are employed in the phrase structure grammars of the Chomskyan tradition. In the 1970s, Chomskyan grammars began labeling many clauses as CPs (i.e. complementizer phrases) or as IPs (i.e.
inflection phrases), and then later as TPs (i.e. tense phrases), etc. The choice of labels was influenced by the theory-internal desire to use the labels consistently. The X-bar schema acknowledged at least three projection levels for every lexical head: a minimal projection (e.g. N, V, P, etc.), an intermediate projection (e.g. N', V', P', etc.), and a phrase level projection (e.g. NP, VP, PP, etc.). Extending this convention to the clausal categories occurred in the interest of the consistent use of labels. This use of labels should not, however, be confused with the actual status of the syntactic units to which the labels are attached. A more traditional understanding of clauses and phrases maintains that phrases are not clauses, and clauses are not phrases. There is a progression in the size and status of syntactic units: words < phrases < clauses. The characteristic trait of clauses, i.e. the presence of a subject and a (finite) verb, is absent from phrases. Clauses can be, however, embedded inside phrases. The central word of a non-finite clause is usually a non-finite verb (as opposed to a finite verb). There are various types of non-finite clauses that can be acknowledged based in part on the type of non-finite verb at hand. Gerunds are widely acknowledged to constitute non-finite clauses, and some modern grammars also judge many to-infinitives to be the structural locus of non-finite clauses. Finally, some modern grammars also acknowledge so-called small clauses, which often lack a verb altogether. It should be apparent that non-finite clauses are (by and large) embedded clauses. The underlined words in the following examples are considered non-finite clauses, e.g. Each of the gerunds in the a-sentences (stopping, attempting, and cheating) constitutes a non-finite clause. The subject-predicate relationship that has long been taken as the defining trait of clauses is fully present in the a-sentences.
The fact that the b-sentences are also acceptable illustrates the enigmatic behavior of gerunds. They seem to straddle two syntactic categories: they can function as non-finite verbs or as nouns. When they function as nouns as in the b-sentences, it is debatable whether they constitute clauses, since nouns are not generally taken to be constitutive of clauses. Some modern theories of syntax take many to-infinitives to be constitutive of non-finite clauses.[5] This stance is supported by the clear predicate status of many to-infinitives. It is challenged, however, by the fact that to-infinitives do not take an overt subject, e.g. The to-infinitives to consider and to explain clearly qualify as predicates (because they can be negated). They do not, however, take overt subjects. The subjects she and he are dependents of the matrix verbs refuses and attempted, respectively, not of the to-infinitives. Data like these are often addressed in terms of control. The matrix predicates refuses and attempted are control verbs; they control the embedded predicates consider and explain, which means they determine which of their arguments serves as the subject argument of the embedded predicate. Some theories of syntax posit the null subject PRO (i.e. pronoun) to help address the facts of control constructions, e.g. With the presence of PRO as a null subject, to-infinitives can be construed as complete clauses, since both subject and predicate are present. PRO-theory is particular to one tradition in the study of syntax and grammar (Government and Binding Theory, Minimalist Program). Other theories of syntax and grammar (e.g. Head-Driven Phrase Structure Grammar, Construction Grammar, dependency grammar) reject the presence of null elements such as PRO, which means they are likely to reject the stance that to-infinitives constitute clauses. Another type of construction that some schools of syntax and grammar view as non-finite clauses is the so-called small clause.
A typical small clause consists of a noun phrase and a predicative expression,[6] e.g. The subject-predicate relationship is clearly present in the underlined strings. The expression on the right is a predication over the noun phrase immediately to its left. While the subject-predicate relationship is indisputably present, the underlined strings do not behave as single constituents, a fact that undermines their status as clauses. Hence one can debate whether the underlined strings in these examples should qualify as clauses. The layered structures of the Chomskyan tradition are again likely to view the underlined strings as clauses, whereas the schools of syntax that posit flatter structures are likely to reject clause status for them.
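The dependency trees discussed earlier in this section, in which adjunct clauses are marked by arrow edges pointing at their governor while argument clauses hang as ordinary dependents, can be sketched as a small data structure. This is not from the source; the `Node` class and the relation labels are illustrative assumptions.

```python
# Toy sketch (illustrative): a dependency tree whose edges record whether
# a dependent is an argument (subject, object, complement) or an adjunct.

class Node:
    def __init__(self, word):
        self.word = word
        self.deps = []  # list of (dependent node, relation label) pairs

    def add(self, dep, relation):
        self.deps.append((dep, relation))
        return dep

def adjuncts(node):
    """Collect the words heading adjunct dependents (the 'arrow edge'
    dependents), searching the whole subtree."""
    found = []
    for dep, rel in node.deps:
        if rel == "adjunct":
            found.append(dep.word)
        found.extend(adjuncts(dep))
    return found

# "Fred arrived before you did": the adjunct clause, headed here by the
# subordinator "before", depends on the matrix verb "arrived".
arrived = Node("arrived")
arrived.add(Node("Fred"), "subject")
before = arrived.add(Node("before"), "adjunct")
did = before.add(Node("did"), "complement")
did.add(Node("you"), "subject")
```

Walking the tree with `adjuncts` recovers exactly the adjunct-clause dependents, mirroring how the arrow edges single them out in the trees described above.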
https://en.wikipedia.org/wiki/Clause
A finite verb is a verb that contextually complements a subject,[1] which can be either explicit (as in the English indicative) or implicit (as in null-subject languages or the English imperative). A finite transitive verb or a finite intransitive verb can function as the root of an independent clause. Finite verbs are distinguished from non-finite verbs such as infinitives, participles, gerunds, etc. The term finite is derived from Latin finitus (past participle of finire, "to put an end to, bound, limit")[2] as the form "to which number and person appertain".[3] Verbs were originally said to be finite if their form limited the possible person and number of the subject. More recently, finite verbs have been construed as any verb that independently functions as a predicate verb or one that marks a verb phrase in a predicate. Under the first of those constructions, finite verbs often denote grammatical characteristics such as gender, person, number, tense, aspect, mood, modality, and voice. In the second of those constructions, a modal verb or a certain type of auxiliary verb also may function as a finite verb. Modal verbs and auxiliary verbs mark the abovementioned characteristics to varying degrees or not at all, depending on the category from which the verbs are drawn. In the following sentences, the finite verbs are emphasized, while the non-finite verb forms are underlined. In many languages (including English), there can be one finite verb at the root of each clause (unless the finite verbs are coordinated), whereas the number of non-finite verb forms can reach up to five or six, or even more, e.g. Finite verbs can appear in dependent clauses as well as independent clauses: Most types of verbs can appear in finite or non-finite form (and sometimes these forms may be identical): for example, the English verb go has the finite forms go, goes, and went, and the non-finite forms go, going and gone. The English modal verbs (can, could, will, etc.) are defective and lack non-finite forms.
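The go paradigm just mentioned can be encoded in a small sketch showing that the finite forms are the ones carrying tense (and, for goes, person/number agreement), while a single surface form like go can be ambiguous between a finite and a non-finite analysis. This is not from the source; the dictionary layout and the `finite_forms` helper are illustrative assumptions.

```python
# Toy sketch (illustrative): each surface form of "go" maps to its
# analyses; "go" itself is ambiguous between a finite plain-present form
# and the non-finite plain/infinitive form.
GO = {
    "go":    [{"finite": True,  "tense": "present"},
              {"finite": False, "form": "plain/infinitive"}],
    "goes":  [{"finite": True,  "tense": "present", "person": 3, "number": "sg"}],
    "went":  [{"finite": True,  "tense": "past"}],
    "going": [{"finite": False, "form": "gerund-participle"}],
    "gone":  [{"finite": False, "form": "past participle"}],
}

def finite_forms(lexeme):
    """A form counts as finite if at least one of its analyses is finite."""
    return sorted(w for w, analyses in lexeme.items()
                  if any(a["finite"] for a in analyses))
```

Running `finite_forms(GO)` picks out go, goes, and went, matching the finite forms listed in the text, while going and gone are excluded as purely non-finite.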
It might seem that every grammatically complete sentence or clause must contain a finite verb. However, sentences lacking a finite verb were quite common in the old Indo-European languages and still occur in many present-day languages. The most important type of these are nominal sentences.[4] Another type are sentence fragments described as phrases or minor sentences. In Latin and some Romance languages, there are a few words that can be used to form sentences without verbs, such as Latin ecce, Portuguese eis, French voici and voilà, and Italian ecco, all of them translatable as here ... is or here ... are. Some interjections can play the same role. Even in English, utterances that lack a finite verb are common, e.g. Yes., No., Bill!, Thanks., etc. A finite verb is generally expected to have a subject, as it does in all the examples above, although null-subject languages allow the subject to be omitted. For example, in the Latin sentence cogito ergo sum ("I think, therefore I am"), the finite verbs cogito and sum appear without an explicit subject; the subject is understood to be the first-person personal pronoun, and this information is marked by the way the verbs are inflected. In English, finite verbs lacking subjects are normal in imperative sentences: They also occur in some fragmentary utterances with an elliptical subject: The relatively limited system of inflectional morphology in English often obscures the central role of finite verbs. In other languages, finite verbs are the locus of much grammatical information. Depending on the language, finite verbs can inflect for the following grammatical categories: The first three categories represent agreement information that the finite verb receives from its subject (by way of subject–verb agreement).
The other four categories serve to situate the clause content according to time in relation to the speaker (tense), the extent to which the action, occurrence, or state is complete (aspect), the assessment of reality or desired reality (mood), and the relation of the subject to the action or state (voice). Modern English is an analytic language (Old English is frequently presented as a synthetic language), which means it has limited ability to express these categories by verb inflection, and it often conveys such information periphrastically, using auxiliary verbs. In a sentence such as the verb form agrees in person (3rd) and number (singular) with the subject, by means of the -s ending, and this form also indicates tense (present), aspect ("simple"), mood (indicative), and voice (active). However, most combinations of the categories need to be expressed using auxiliaries: Here the auxiliaries will, have, and been express respectively future time, perfect aspect, and passive voice. (See English verb forms.) Highly inflected languages like Latin and Russian, however, frequently express most or even all of the categories in one finite verb. Finite verbs play a particularly important role in syntactic analyses of sentence structure. In many phrase structure grammars, for instance those that build on the X-bar schema, the finite verb is the head of the finite verb phrase and so it is the head of the entire sentence. Similarly, in dependency grammars, the finite verb is the root of the entire clause and so is the most prominent structural unit in the clause. That is illustrated by the following trees: The phrase structure grammar trees are the a-trees on the left; they are similar to the trees produced in the government and binding framework.[5] The b-trees on the right are the dependency grammar trees.[6] Many of the details of the trees are not important for the point at hand, but they show clearly that the finite verb (in bold each time) is the structural center of the clause.
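The periphrastic pattern described above can be sketched as a toy composition rule. This is a minimal illustration, not a full account of English conjugation: the function name and the tiny morphology rules are assumptions, and only the future/perfect/passive combinations are handled.

```python
# Toy sketch: auxiliaries carry tense, aspect, and voice periphrastically,
# as in "will have been examined". Covers only the combinations discussed.
def verb_group(stem, past_part, future=False, perfect=False, passive=False):
    aux, head = [], stem
    if passive:
        aux, head = ["be"], past_part      # passive: be + past participle
    if perfect:
        if aux:
            aux[0] = "been"                # perfect of "be" is "been"
        else:
            head = past_part               # perfect: have + past participle
        aux = ["have"] + aux
    if future:
        aux = ["will"] + aux               # future time via modal "will"
    return " ".join(aux + [head])

print(verb_group("examine", "examined",
                 future=True, perfect=True, passive=True))
# -> will have been examined
```

With `perfect=True` alone the sketch yields "have examined"; a highly inflected language would instead pack these distinctions into a single finite form.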
In the phrase structure trees, the highest projection of the finite verb, IP (inflection phrase) or CP (complementizer phrase), is the root of the entire tree. In the dependency trees, the projection of the finite verb (V) is the root of the entire structure.
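The contrast between the two kinds of trees can be sketched with plain data structures. The clause, its bracketing, and the dependency arcs below are illustrative assumptions in the spirit of the trees described, not the article's own figures.

```python
# "She has finished the work" under the two analyses.
# Phrase structure (constituency): nested tuples, each phrase labelled;
# the finite verb projects the highest phrase (here labelled IP).
constituency = (
    "IP",
    ("NP", "She"),
    ("I'",
        ("I", "has"),                      # finite verb heads the clause
        ("VP", ("V", "finished"),
               ("NP", ("D", "the"), ("N", "work")))),
)

# Dependency: each word points to its head; the finite verb has no head,
# so it is the root of the entire structure.
dependency = {
    "She": "has",
    "has": None,          # root = the finite verb
    "finished": "has",
    "the": "work",
    "work": "finished",
}

root = next(w for w, h in dependency.items() if h is None)
print(root)  # -> has
```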
https://en.wikipedia.org/wiki/Finite_verb
Non-finite verbs are verb forms that do not show tense, person, or number. They include: Non-finite verbs are used in constructions where there is no need to express tense directly. They help in creating sentences like "I want to go," where "to go" is non-finite. In the English language, a non-finite verb cannot serve as the main verb of an independent clause.[1] Non-finite verb forms in some other languages include converbs, gerundives, and supines. The categories of mood, tense, and/or voice may be absent from non-finite verb forms in some languages.[2] Because English lacks most inflectional morphology, the finite and the non-finite forms of a verb may appear the same in a given context. In the following sentences, the non-finite verbs are emphasized, while the finite verbs are underlined. In the above sentences, been, examined, and done are past participles; want, have, refuse, accept, and get are infinitives; and coming, running, and trying are present participles (for alternative terminology, see the sections below). In languages like English that have little inflectional morphology, certain finite and non-finite forms of a given verb are often identical, e.g. Despite the fact that the verbs in bold have the same outward appearance, the first in each pair is finite and the second is non-finite. To distinguish the finite and non-finite uses, one has to consider the environments in which they appear. Finite verbs in English usually appear as the leftmost verb in a verb catena.[3] For details of verb inflection in English, see English verbs. In English, a non-finite verb form may constitute: Each of the non-finite forms appears in a variety of environments. The infinitive form of a verb is considered the canonical form listed in dictionaries. English infinitives appear in verb catenae if they are introduced by an auxiliary verb or by a certain limited class of main verbs. They are also often introduced by a main verb followed by the particle to (as illustrated in the examples below).
Further, infinitives introduced by to can function as noun phrases or even as modifiers of nouns. The following table illustrates such environments: English participles can be divided along two lines: according to aspect (progressive vs. perfect/perfective) and voice (active vs. passive). The following table illustrates the distinctions: Participles appear in a variety of environments. They can appear in periphrastic verb catenae, when they help form the main predicate of a clause, as is illustrated with the trees below. They can also appear essentially as adjectives modifying nouns. The form of a given perfect or passive participle is strongly influenced by the status of the verb at hand. The perfect and passive participles of strong verbs in Germanic languages are irregular (e.g. driven) and must be learned for each verb. The perfect and passive participles of weak verbs, in contrast, are regular and are formed with the suffix -ed (e.g. fixed, supported, opened). A gerund is a verb form that appears in positions that are usually reserved for nouns. In English, a gerund has the same form as a progressive active participle and so ends in -ing. Gerunds typically appear as subject or object noun phrases or even as the object of a preposition: Often, distinguishing between a gerund and a progressive active participle is not easy in English, and there is no clear boundary between the two non-finite verb forms. Auxiliary verbs typically occur as finite verbs, but they can also occur as participles (e.g. been, being, got, gotten, or getting) or, in the case of have, in a non-finite context as the complement to a modal verb relating to a perfect tense, e.g.: Some languages, including many Native American languages, form non-finite constructions by using nominalized verbs.[4] Others do not have any non-finite verbs. Where most European and Asian languages use non-finite verbs, Native American languages tend to use ordinary verb forms.
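The weak/strong contrast in participle formation amounts to a default rule plus a per-verb exception list. The sketch below is illustrative: the dictionary covers only a few strong verbs, and the helper name is an assumption.

```python
# Weak (regular) verbs form their participle with -ed; strong participles
# are irregular and must be listed per verb, as the text notes.
STRONG_PARTICIPLES = {"drive": "driven", "go": "gone", "get": "gotten"}

def past_participle(verb):
    """Return the listed strong form if there is one, else add -ed."""
    return STRONG_PARTICIPLES.get(verb, verb + "ed")

print(past_participle("drive"))    # -> driven
print(past_participle("fix"))      # -> fixed
print(past_participle("support"))  # -> supported
```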
The non-finite verb forms in Modern Greek are identical to the third person of the dependent (or aorist subjunctive), and this form is also called the aorist infinitive. It is used with the auxiliary verb έχω ("to have") to form the perfect, pluperfect, and future perfect tenses. For an overview of dependency grammar structure in modern linguistic analysis, three example sentences are shown. The first sentence, The proposal has been intensively examined, is described as follows. The three verbs together form a chain, or verb catena (in purple), which functions as the predicate of the sentence. The finite verb has is inflected for person and number, tense, and mood: third person singular, present tense, indicative. The non-finite verbs been and examined are, except for tense, neutral across such categories and are not otherwise inflected. The subject, proposal, is a dependent of the finite verb has, which is the root (highest word) in the verb catena. The non-finite verbs lack a subject dependent. The second sentence shows the following dependency structure: The verb catena (in purple) contains four verbs (three of which are non-finite) and the particle to, which introduces the infinitive have. Again, the one finite verb, did, is the root of the entire verb catena, and the subject, they, is a dependent of the finite verb. The third sentence has the following dependency structure: Here the verb catena contains three main verbs, so there are three separate predicates in the verb catena. The three examples show the distinctions between finite and non-finite verbs and the roles these distinctions play in sentence structure. For example, non-finite verbs can be auxiliary verbs or main verbs, and they appear as infinitives, participles, gerunds, etc.
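The first example's verb catena can be extracted mechanically from its dependency arcs: starting at the finite root, collect every verb reachable through head-dependent links. The arc table below is an illustrative encoding of the structure described, not the article's own figure.

```python
# Dependency arcs for "The proposal has been intensively examined":
# each word maps to its head; None marks the root (the finite verb).
heads = {
    "The": "proposal", "proposal": "has", "has": None,
    "been": "has", "intensively": "examined", "examined": "been",
}
verbs = {"has", "been", "examined"}

def catena(word):
    """Collect `word` plus every verbal dependent reachable from it."""
    chain = [word]
    for w, h in heads.items():
        if h == word and w in verbs:
            chain += catena(w)
    return chain

print(catena("has"))  # -> ['has', 'been', 'examined']
```

The subject `proposal` depends on `has` but is not a verb, so it stays outside the catena, matching the description above.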
https://en.wikipedia.org/wiki/Non-finite_verb
In linguistics and grammar, a sentence is a linguistic expression, such as the English example "The quick brown fox jumps over the lazy dog." In traditional grammar, it is typically defined as a string of words that expresses a complete thought, or as a unit consisting of a subject and a predicate. In non-functional linguistics it is typically defined as a maximal unit of syntactic structure such as a constituent. In functional linguistics, it is defined as a unit of written texts delimited by graphological features such as upper-case letters and markers such as periods, question marks, and exclamation marks. This notion contrasts with a curve, which is delimited by phonological features such as pitch and loudness and markers such as pauses, and with a clause, which is a sequence of words that represents some process going on throughout time.[1] A sentence can include words grouped meaningfully to express a statement, question, exclamation, request, command, or suggestion.[2] A sentence is typically associated with a clause. A clause can be either a clause simplex or a clause complex. A clause simplex represents a single process going on through time. A clause complex represents a logical relation between two or more processes and is thus composed of two or more clause simplexes. A clause (simplex) typically contains a predication structure with a subject noun phrase and a finite verb. Although the subject is usually a noun phrase, other kinds of phrases (such as gerund phrases) work as well, and some languages allow subjects to be omitted. In the examples below, the subject of the outermost clause simplex is in italics, and the subject of boiling is in square brackets. There is clause embedding in the second and third examples. There are two types of clauses: independent and non-independent/interdependent. An independent clause realises a speech act such as a statement, a question, a command, or an offer. A non-independent clause does not realise any act.
A non-independent clause (simplex or complex) is usually logically related to other non-independent clauses. Together, they usually constitute a single independent clause (complex). For that reason, non-independent clauses are also called interdependent. For instance, the non-independent clause because I have no friends is related to the non-independent clause I don't go out in I don't go out, because I have no friends. The whole clause complex is independent because it realises a statement. What is stated is the causal nexus between having no friends and not going out. When such a statement is acted out, the fact that the speaker doesn't go out is already established; therefore it cannot be stated. What is still open and under negotiation is the reason for that fact. The causal nexus is represented by the independent clause complex, not by the two interdependent clause simplexes. See also copula for the consequences of the verb to be for the theory of sentence structure. One traditional scheme for classifying English sentences is by clause structure, the number and types of clauses in the sentence with finite verbs. Sentences can also be classified based on the speech act which they perform. For instance, English sentence types can be described as follows: The form (declarative, interrogative, imperative, or exclamative) and the meaning (statement, question, command, or exclamation) of a sentence usually match, but not always.[3][4] For instance, the interrogative sentence "Can you pass me the salt?" is not intended to express a question but rather a command. Likewise, the interrogative sentence "Can't you do anything right?" is not intended to express a question about the listener's ability, but rather to make an exclamation about the listener's lack of ability; this is called a rhetorical question. A major sentence is a regular sentence; it has a subject and a predicate, e.g. "I have a ball." In this sentence, one can change the persons, e.g. "We have a ball."
However, a minor sentence is an irregular type of sentence that does not contain a main clause, e.g. "Mary!", "Precisely so.", "Next Tuesday evening after it gets dark." Other examples of minor sentences are headings, stereotyped expressions ("Hello!"), emotional expressions ("Wow!"), proverbs, etc. These can also include nominal sentences like "The more, the merrier." These mostly omit a main verb for the sake of conciseness but may also do so in order to intensify the meaning around the nouns.[5] Sentences that comprise a single word are called word sentences, and the words themselves sentence words.[6] The 1980s saw a renewed surge in interest in sentence length, primarily in relation to "other syntactic phenomena".[7] One definition of the average sentence length of a prose passage is the ratio of the number of words to the number of sentences.[8][unreliable source?] The textbook Mathematical Linguistics, by András Kornai, suggests that in "journalistic prose the median sentence length is above 15 words".[9] The average length of a sentence generally serves as a measure of sentence difficulty or complexity.[10] In general, as the average sentence length increases, the complexity of the sentences also increases.[11] Another definition of "sentence length" is the number of clauses in the sentence, whereas the "clause length" is the number of phones in the clause.[12] Research by Erik Schils and Pieter de Haan, by sampling five texts, showed that two adjacent sentences are more likely to have similar lengths than two non-adjacent sentences, and almost certainly have a similar length when in a work of fiction. This countered the theory that "authors may aim at an alternation of long and short sentences".[13] Sentence length, as well as word difficulty, are both factors in the readability of a sentence; however, other factors, such as the presence of conjunctions, have been said to "facilitate comprehension considerably".[14][15]
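The word-to-sentence ratio definition above is directly computable. The sketch below makes simplifying assumptions (sentences end in `.`, `!`, or `?`; words are alphabetic runs), so it is a toy tokenizer rather than a robust one.

```python
import re

def average_sentence_length(text):
    """Ratio of word count to sentence count, per the definition above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences)

passage = "The fox jumps. The dog sleeps! Does the fox care?"
print(average_sentence_length(passage))  # -> 3.3333333333333335
```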
https://en.wikipedia.org/wiki/Sentence_(linguistics)
In linguistics, a verb phrase (VP) is a syntactic unit composed of a verb and its arguments except the subject of an independent clause or coordinate clause. Thus, in the sentence A fat man quickly put the money into the box, the words quickly put the money into the box constitute a verb phrase; it consists of the verb put and its arguments, but not the subject a fat man. A verb phrase is similar to what is considered a predicate in traditional grammars. Verb phrases are generally divided into two types: finite, in which the head of the phrase is a finite verb, and non-finite, in which the head is a non-finite verb such as an infinitive, participle, or gerund. Phrase structure grammars acknowledge both types, but dependency grammars treat the subject as just another verbal dependent, and they do not recognize the finite verb phrase constituent. Understanding verb phrase analysis depends on knowing which theory applies in context. In phrase structure grammars such as generative grammar, the verb phrase is one headed by a verb. It may be composed of only a single verb, but typically it consists of combinations of main and auxiliary verbs, plus optional specifiers, complements (not including subject complements), and adjuncts. For example: The first example contains the long verb phrase hit the ball well enough to win their first World Series since 2000; the second is a verb phrase composed of the main verb saw, the complement phrase the man (a noun phrase), and the adjunct phrase through the window (an adverbial phrase and prepositional phrase). The third example presents three elements: the main verb gave, the noun Mary, and the noun phrase a book, all of which comprise the verb phrase. Note that the verb phrase described here corresponds to the predicate of traditional grammar.
Current views vary on whether all languages have a verb phrase; some schools of generative grammar (such as principles and parameters) hold that all languages have a verb phrase, while others (such as lexical functional grammar) take the view that at least some languages lack a verb phrase constituent, including those languages with a very free word order (the so-called non-configurational languages, such as Japanese, Hungarian, or Australian Aboriginal languages), and some languages with a default VSO order (several Celtic and Oceanic languages). Phrase structure grammars view both finite and non-finite verb phrases as constituent phrases and, consequently, do not draw any key distinction between them. Dependency grammars (described below) are quite different in this regard. While phrase structure grammars (constituency grammars) acknowledge both finite and non-finite VPs as constituents (complete subtrees), dependency grammars reject the former. That is, dependency grammars acknowledge only non-finite VPs as constituents; finite VPs do not qualify as constituents in dependency grammars. For example: Since has finished the work contains the finite verb has, it is a finite VP, and since finished the work contains the non-finite verb finished but lacks a finite verb, it is a non-finite VP. Similar examples: These examples illustrate well that many clauses can contain more than one non-finite VP, but they generally contain only one finite VP. Starting with Lucien Tesnière (1959),[1] dependency grammars challenge the validity of the initial binary division of the clause into subject (NP) and predicate (VP), which means they reject the notion that the second half of this binary division, i.e. the finite VP, is a constituent. They do, however, readily acknowledge the existence of non-finite VPs as constituents.
The two competing views of verb phrases are visible in the following trees: The constituency tree on the left shows the finite VP has finished the work as a constituent, since it corresponds to a complete subtree. The dependency tree on the right, in contrast, does not acknowledge a finite VP constituent, since there is no complete subtree there that corresponds to has finished the work. Note that the analyses agree concerning the non-finite VP finished the work; both see it as a constituent (complete subtree). Dependency grammars point to the results of many standard constituency tests to back up their stance.[2] For instance, topicalization, pseudoclefting, and answer ellipsis suggest that the non-finite VP does, but the finite VP does not, exist as a constituent: The * indicates that the sentence is bad. These data must be compared to the results for the non-finite VP: The strings in bold are the ones in focus. Attempts to isolate the finite VP in some sense fail, but the same attempts with the non-finite VP succeed.[3] Verb phrases are sometimes defined more narrowly in scope, in effect counting as verb phrases only those elements considered strictly verbal. That would limit the definition to only main and auxiliary verbs, plus infinitive or participle constructions.[4] For example, in the following sentences only the words in bold form the verb phrase: This narrower definition is often applied in functionalist frameworks and traditional European reference grammars. It is incompatible with the phrase structure model, because the strings in bold are not constituents under that analysis. It is, however, compatible with dependency grammars and other grammars that view the verb catena (verb chain) as the fundamental unit of syntactic structure, as opposed to the constituent.
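The "complete subtree" criterion can itself be checked mechanically: a word string is a dependency constituent only if it equals the set of words dominated by some single node. The arc table below is an illustrative encoding of the kind of dependency tree described, with a hypothetical subject added to complete the clause.

```python
# Dependency arcs for "She has finished the work" (subject assumed for
# illustration): each word maps to its head; None marks the root.
heads = {"She": "has", "has": None, "finished": "has",
         "the": "work", "work": "finished"}

def subtree(word):
    """All words dominated by `word`, including itself."""
    out = {word}
    for w, h in heads.items():
        if h == word:
            out |= subtree(w)
    return out

# The non-finite VP "finished the work" is a complete subtree...
print(subtree("finished") == {"finished", "the", "work"})   # -> True

# ...but no node's subtree equals the finite VP "has finished the work"
# (the subtree of "has" also contains the subject), so the finite VP
# is not a constituent here.
finite_vp = {"has", "finished", "the", "work"}
print(any(subtree(w) == finite_vp for w in heads))          # -> False
```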
https://en.wikipedia.org/wiki/Verb_phrase
A phraseme, also called a set phrase, fixed expression, multiword expression (in computational linguistics), or idiom,[1][2][3][citation needed] is a multi-word or multi-morphemic utterance whose components include at least one that is selectionally constrained[clarification needed] or restricted by linguistic convention such that it is not freely chosen.[4] In the most extreme cases, there are expressions such as X kicks the bucket ≈ 'person X dies of natural causes, the speaker being flippant about X's demise', where the unit is selected as a whole to express a meaning that bears little or no relation to the meanings of its parts. All of the words in this expression are chosen restrictedly, as part of a chunk. At the other extreme, there are collocations such as stark naked, hearty laugh, or infinite patience, where one of the words is chosen freely (naked, laugh, and patience, respectively) based on the meaning the speaker wishes to express, while the choice of the other (intensifying) word (stark, hearty, infinite) is constrained by the conventions of the English language (hence, *hearty naked, *infinite laugh, *stark patience). Both kinds of expression are phrasemes, and can be contrasted with "free phrases", expressions in which all of the members (barring grammatical elements whose choice is forced by the morphosyntax of the language) are chosen freely, based exclusively on their meaning and the message that the speaker wishes to communicate.
Phrasemes can be broken down into groups based on their compositionality (whether or not the meaning they express is the sum of the meanings of their parts) and the type of selectional restrictions that are placed on their non-freely chosen members.[5][page needed] Non-compositional phrasemes are what are commonly known as idioms, while compositional phrasemes can be further divided into collocations, clichés, and pragmatemes. Marta Dynel also treats conventional metaphors as potential phrasemes.[6] A phraseme is an idiom if its meaning is not the predictable sum of the meanings of its components, that is, if it is non-compositional. Generally speaking, idioms will not be intelligible to people hearing them for the first time without having learned them. Consider the following examples (an idiom is indicated by elevated half-brackets: ˹ … ˺): In none of these cases are the meanings of any of the component parts of the idiom included in the meaning of the expression as a whole. An idiom can be further characterized by its transparency, the degree to which its meaning includes the meanings of its components. Three types of idioms can be distinguished in this way: full idioms, semi-idioms, and quasi-idioms.[7] An idiom AB (that is, composed of the elements A 'A' and B 'B') is a full idiom if its meaning does not include the meaning of any of its lexical components: 'AB' ⊅ 'A' and 'AB' ⊅ 'B'. An idiom AB is a semi-idiom if its meaning The semantic pivot of an idiom is, roughly speaking, the part of the meaning that defines what sort of referent the idiom has (person, place, thing, event, etc.) and is shown in the examples in italics.
More precisely, the semantic pivot is defined, for an expression AB meaning 'S', as that part 'S1' of AB's meaning 'S' such that 'S' [= 'S1' ⊕ 'S2'] can be represented as a predicate 'S2' bearing on 'S1', i.e., 'S' = 'S2'('S1') (Mel'čuk 2006: 277).[8] An idiom AB is a quasi-idiom, or weak idiom, if its meaning A phraseme AB is said to be compositional if the meaning 'AB' = 'A' ⊕ 'B' and the form /AB/ = /A/ ⊕ /B/ ("⊕" here means 'combined in accordance with the rules of the language'). Compositional phrasemes are generally broken down into two groups: collocations and clichés. A collocation is generally said to consist of a base (shown in small caps), a lexical unit chosen freely by the speaker, and a collocate, a lexical unit chosen as a function of the base.[9][10][11] In American English, you make a decision, and in British English, you can also take it. For the same thing, French says prendre [= 'take'] une décision, German eine Entscheidung treffen/fällen [= 'meet/fell'], Russian prinjat´ [= 'accept'] rešenie, Turkish karar vermek [= 'give'], Polish podjąć [= 'take up'] decyzję, Serbian doneti [= 'bring'] odluku, Korean gyeoljeongeul hada 〈naerida〉 [= 'do 〈take/put down〉'], and Swedish fatta [= 'grab']. This clearly shows that the boldfaced verbs are selected as a function of the noun meaning 'decision'. If instead of DÉCISION a French speaker uses CHOIX 'choice' (Jean a pris la décision de rester 'Jean has taken the decision to stay' ≅ Jean a … le choix de rester 'Jean has ... the choice to stay'), he has to say FAIRE 'make' rather than PRENDRE 'take': Jean a fait 〈*a pris〉 le choix de rester 'Jean has made the choice to stay'. A collocation is semantically compositional since its meaning is divisible into two parts such that the first corresponds to the base and the second to the collocate. This is not to say that a collocate, when used outside the collocation, must have the meaning it expresses within the collocation.
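Base-driven selection can be pictured as a lookup keyed by language and base: the speaker chooses the base freely, and convention then fixes the collocate. The table entries below come from the examples above; the lookup itself is an illustrative simplification of how a lexicon might record such restrictions.

```python
# The support verb is a function of (language, base noun), not a free choice.
SUPPORT_VERB = {
    ("en-US", "decision"): "make",
    ("en-GB", "decision"): "take",     # British English also allows "take"
    ("fr", "décision"): "prendre",
    ("fr", "choix"): "faire",          # switching the base forces FAIRE
}

def collocate_for(lang, base):
    return SUPPORT_VERB[(lang, base)]

print(collocate_for("en-US", "decision"))  # -> make
print(collocate_for("fr", "choix"))        # -> faire
```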
For instance, in the collocation sit for an exam 'undergo an exam', the verb SIT expresses the meaning 'undergo'; but in an English dictionary, the verb SIT does not appear with this meaning: 'undergo' is not its inherent meaning, but rather a context-imposed meaning. Generally, a cliché is said to be a phraseme consisting of components none of which is selected freely and whose usage restrictions are imposed by conventional linguistic usage, as in the following examples: Clichés are compositional in the sense that their meaning is more or less the sum of the meanings of their parts (not, for example, in no matter what), and clichés (unlike idioms) would be completely intelligible to someone hearing them for the first time without having learned the expression beforehand. They are not completely free expressions, however, because they are the conventionalized means of expressing the desired meanings in the language. For example, in English one asks What is your name? and answers My name is [N] or I am [N], but to do the same in Spanish one asks ¿Cómo se llama? (lit. 'How are you called?') and one answers Me llamo [N] ('I am called [N]'). The literal renderings of the English expressions would be ¿Cómo es su nombre? and Soy [N] ('I am [N]'), and while they are fully understandable and grammatical, they are not standard; equally, the literal translations of the Spanish expressions would sound odd in English, as the question 'How are you called?' sounds unnatural to English speakers. A subtype of cliché is the pragmateme, a cliché where the restrictions are imposed by the situation of utterance:[clarification needed] As with clichés, the conventions of the languages in question dictate a particular pragmateme for a particular situation; alternate expressions would be understandable, but would not be perceived as normal.
Although the discussion of phrasemes centres largely on multi-word expressions such as those illustrated above, phrasemes are known to exist on the morphological level as well. Morphological phrasemes are conventionalized combinations of morphemes such that at least one of their components is selectionally restricted.[12][13] Just as with lexical phrasemes, morphological phrasemes can be either compositional or non-compositional. Non-compositional morphological phrasemes,[14] also known as morphological idioms,[15] are actually familiar to most linguists, although the term "idiom" is rarely applied to them; instead, they are usually referred to as "lexicalized" or "conventionalized" forms.[16] Good examples are English compounds such as harvestman 'arachnid belonging to the order Opiliones' (≠ 'harvest' ⊕ 'man') and bookworm (≠ 'book' ⊕ 'worm'); derivational idioms can also be found: airliner 'large vehicle for flying passengers by air' (≠ airline 'company that transports people by air' ⊕ -er 'person or thing that performs an action'). Morphological idioms are also found in inflection, as shown by these examples from the irrealis mood paradigm in Upper Necaxa Totonac:[17]
ḭš-tḭ-tachalá̰x-lḭ PAST-POT-shatter-PFV 'it could have shattered earlier (but didn't)'
ḭš-tachalá̰x-lḭ PAST-shatter-PFV 'it could have shattered now (but hasn't)'
ka-tḭ-tachalá̰x-lḭ OPT-POT-shatter-PFV 'it could shatter (but won't now)'
The irrealis mood has no unique marker of its own, but is expressed in conjunction with tense by combinations of affixes "borrowed" from other paradigms: ḭš- 'past tense', tḭ- 'potential mood', ka- 'optative mood', -lḭ 'perfective aspect'. None of the resulting meanings is a compositional combination of the meanings of its constituent parts ('present irrealis' ≠ 'past' ⊕ 'perfective', etc.).
Morphological collocations are expressions such that not all of their component morphemes are chosen freely: instead, one or more of the morphemes is chosen as a function of another morphological component of the expression, its base. This type of situation is quite familiar in derivation, where selectional restrictions placed by radicals on (near-)synonymous derivational affixes are common. Two examples from English are the nominalizers used with particular verbal bases (e.g., establishment, *establishation; infestation, *infestment; etc.) and the inhabitant suffixes required for particular place names (Winnipegger, *Winnipegian; Calgarian, *Calgarier; etc.); in both cases, the choice of derivational affix is restricted by the base, but the derivation is compositional, forming a morphological gap. An example of an inflectional morphological collocation is the plural form of nouns in Burushaski,[18] which has about 70 plural suffixal morphemes. The plurals are semantically compositional, consisting of a stem expressing the lexical meaning and a suffix expressing PLURAL, but for each individual noun the appropriate plural suffix has to be learned. Unlike compositional lexical phrasemes, compositional morphological phrasemes seem only to exist as collocations: morphological clichés and morphological pragmatemes have yet to be observed in natural language.[13]
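The derivational pattern above is the same base-driven lookup seen with lexical collocations, pushed down to the morpheme level: the suffix must be listed per base, because the near-synonymous alternative (*establishation, *infestment) is a gap. The table and helper below are illustrative, covering only the English nominalizer examples from the text.

```python
# The nominalizing suffix is selected by the verbal base, not chosen freely.
NOMINALIZER = {"establish": "ment", "infest": "ation"}

def nominalize(base):
    """Derive the noun by attaching the base's listed suffix."""
    return base + NOMINALIZER[base]

print(nominalize("establish"))  # -> establishment
print(nominalize("infest"))     # -> infestation
```

The derivation itself is compositional (stem meaning plus nominalizer meaning); only the choice of suffix is conventionally restricted, which is what makes it a collocation rather than an idiom.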
https://en.wikipedia.org/wiki/Phraseme
In linguistics, X-bar theory is a model of phrase structure and a theory of syntactic category formation[1] that proposes a universal schema for how phrases are organized. It suggests that all phrases share a common underlying structure, regardless of their specific category (noun phrase, verb phrase, etc.). This structure, known as the X-bar schema, is based on the idea that every phrase (XP, X phrase) has a head, which determines the type (syntactic category) of the phrase (X). The theory was first proposed by Noam Chomsky in 1970,[2] reformulating the ideas of Zellig Harris (1951[3]), and was further developed by Ray Jackendoff (1974,[4] 1977a,[5] 1977b[6]), along the lines of the theory of generative grammar put forth in the 1950s by Chomsky.[7][8] It aimed to simplify and generalize the rules of grammar, addressing limitations of earlier phrase structure models. X-bar theory was an important step forward because it simplified the description of sentence structure. Earlier approaches needed many phrase structure rules, which went against the idea of a simple, underlying system for language. X-bar theory offered a more elegant and economical solution, aligned with the thesis of generative grammar. X-bar theory was incorporated into both transformational and non-transformational theories of syntax, including government and binding theory (GB), generalized phrase structure grammar (GPSG), lexical-functional grammar (LFG), and head-driven phrase structure grammar (HPSG).[9] Although recent work in the minimalist program has largely abandoned the X-bar schema in favor of bare phrase structure approaches, the theory's central assumptions are still valid, in different forms and terms, in many theories of minimalist syntax. X-bar theory was developed to resolve the issues that phrase structure rules (PSR) under the Standard Theory[10] had.[11] The PSR approach has the following four main issues.
The X-bar theory attempts to resolve these issues by assuming a mold or template phrasal structure, "XP". The "X" in X-bar theory is equivalent to a variable in mathematics: it can be substituted by syntactic categories such as N, V, A, and P. These categories are lexemes and not phrases. The "X-bar" is a grammatical unit larger than X, thus larger than a lexeme, and the X-double-bar (=XP) outsizes the X(-single)-bar. X-double-bar categories are equal to phrasal categories such as NP, VP, AP, and PP.[5] The X-bar theory assumes that all phrasal categories have the structure in Figure 1.[5][13] This structure is called the X-bar schema. As in Figure 1, the phrasal category XP is notated as an X with a double overbar.[FN 4] For typewriting reasons, the bar symbol is often substituted by the prime ('), as in X'. The X-bar theory embodies two central principles. The headedness principle resolves issues 1 and 3 above simultaneously. The binarity principle is important to projection and to ambiguity, which will be explained below. The X-bar schema consists of a head and its circumstantial components, in accordance with the headedness principle.[4][5][6][13] The relevant components are as follows: The specifier, head, and complement are obligatory; hence, a phrasal category XP must contain one specifier, one head, and one complement. On the other hand, the adjunct is optional; hence, a phrasal category contains zero or more adjuncts. Accordingly, when a phrasal category XP does not have an adjunct, it forms the structure in Figure 2. For example, the NP linguistics in the sentence John studies linguistics has the structure in Figure 3. It is important that even if there are no candidates that can fit into the specifier and complement positions, these positions are syntactically present; they are merely empty and unoccupied. (This is a natural consequence of the binarity principle.)
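The schema just described can be rendered as a small tree data structure. The sketch below is illustrative only (the class and field names are invented here, not part of the theory's notation); it shows the obligatory specifier, head, and complement positions, present but unoccupied in an NP like linguistics.

```python
from dataclasses import dataclass

@dataclass
class XBar:
    """X'' -> specifier X' ; X' -> X complement (adjuncts omitted)."""
    category: str                      # N, V, A, P, ...
    head: str                          # the lexeme projecting the phrase
    specifier: "XBar | None" = None    # obligatory position, may be empty
    complement: "XBar | None" = None   # obligatory position, may be empty

    def label(self) -> str:
        return self.category + "P"     # X'' is conventionally written XP

# The NP "linguistics" in "John studies linguistics": both the specifier
# and complement positions exist syntactically but are unoccupied.
np = XBar(category="N", head="linguistics")
print(np.label())                                       # NP
print(np.specifier is None and np.complement is None)   # True
```

Note how the phrase's label is computed from its head's category, mirroring the headedness principle: the head determines the type of the phrase.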
This means that all phrasal categories have fundamentally uniform structures under the X-bar schema, which makes it unnecessary to assume that different phrases have different structures, unlike when one adopts the PSR.[13] (This resolves the second issue above.) In the meantime, one needs to be wary of cases in which such empty positions are representationally omitted, as in Figure 4. In illustrating syntactic structures this way, at least one X'-level node is present in any circumstance because the complement is obligatory.[11][17] Next, the X'' and X' inherit the characteristics of the head X. This trait inheritance is referred to as projection.[18] Figure 5 suggests that syntactic structures are derived in a bottom-up fashion under the X-bar theory. More specifically, the structures are derived via the following processes. It is important that all the processes except for the third are obligatory. This means that one phrasal category necessarily includes X0, X', and XP (=X''). Moreover, nodes bigger than X0 (thus, X' and XP nodes) are called constituents.[20] Figures 1–5 are based on the word order of English, but the X-bar schema does not specify the directionality of branching because the binarity principle has no rule on it. For example, John read a long book of linguistics with a red cover, which involves two adjuncts, may have either of the structures in Figure 6 or Figure 7. (The figures follow the convention of omitting the inner structures of certain phrasal categories with triangles.) The structure in Figure 6 yields the meaning 'the book of linguistics with a red cover is long', and the one in Figure 7 'the long book of linguistics is with a red cover' (see also § Hierarchical structure). What is important is the directionality of the nodes N'2 and N'3: one is left-branching, while the other is right-branching. Accordingly, the X-bar theory, more specifically the binarity principle, does not impose a restriction on how a node branches.
When it comes to the head and the complement, their relative order is determined by the principles-and-parameters model of language,[21] more specifically by the head parameter (not by the X-bar schema itself). A principle is a shared, invariable rule of grammar across languages, whereas a parameter is a typologically variable aspect of the grammars.[21] One can set a parameter to the value "+" or "−": in the case of the head parameter, one configures the parameter [±head first], depending on what language they primarily speak.[22] If this parameter is set to [+head first], the result is a head-initial language such as English; if it is set to [−head first], the result is a head-final language such as Japanese. For example, the English sentence John ate an apple and its corresponding Japanese sentence have the structures in Figure 8 and Figure 9, respectively.

ジョンが リンゴを 食べた
John-ga ringo-o tabe-ta
John-NOM apple-ACC eat-PAST
'John ate an apple'

Finally, the directionality of the specifier node is in essence unspecified as well, although this is subject to debate: some argue that the relevant node is necessarily left-branching across languages, an idea (partially) motivated by the fact that both English and Japanese have subjects on the left of a VP, whereas others, such as Saito and Fukui (1998),[23] argue that the directionality of the node is not fixed and needs to be externally determined, for example by the head parameter. Under the PSR, the structure of S (sentence) is illustrated as follows.[7][8][24] However, this structure violates the headedness principle because it has an exocentric, headless structure, and it would also violate the binarity principle if an Aux (auxiliary) occurs, because the S node will then be ternary-branching.
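The effect of the head parameter on word order can be sketched as a simple linearization rule. The toy function below (its name and shape are invented for illustration) orders a head and its complement according to a [±head first] flag, reproducing the English and Japanese VP orders from Figures 8 and 9.

```python
def linearize_vp(head: str, complement: str, head_first: bool) -> str:
    """Order head and complement according to the [±head first] parameter."""
    return f"{head} {complement}" if head_first else f"{complement} {head}"

# English is head-initial: the verb precedes its object.
print(linearize_vp("ate", "an apple", head_first=True))      # ate an apple

# Japanese is head-final: the verb follows its object.
print(linearize_vp("tabe-ta", "ringo-o", head_first=False))  # ringo-o tabe-ta
```

The hierarchical structure is the same in both cases; only the linearization flag differs, which is exactly the division of labor between the X-bar schema and the head parameter.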
Given these considerations, Chomsky (1981)[13] proposed that S is an InflP headed by the functional category Infl(ection); later, in Chomsky (1986a),[17] this category was relabelled as I (hence it constitutes an IP), following the notational convention that phrasal categories are represented in the form of XP, with two letters.[FN 5] The category I includes auxiliary verbs such as will and can, and clitics such as the -s of the third person singular present and the -ed of the past tense. This is consistent with the headedness principle, which requires that a phrase have a head, because a sentence (or a clause) necessarily involves an element that determines the inflection of a verb. Assuming that S constitutes an IP, the structure of the sentence John studies linguistics at the university, for example, can be illustrated as in Figure 10.[FN 6] As is obvious, the IP hypothesis makes it possible to regard the grammatical unit of the sentence as a phrasal category. It is also important that the configuration in Figure 10 is fully compatible with the central assumptions of the X-bar theory, namely the headedness principle and the binarity principle. Words that introduce subordinate or complement clauses are called complementizers,[28] representative of which are that, if, and for.[FN 7] Under the PSR, complement clauses were assumed to constitute the category S'.[30][31][32] Chomsky (1986a)[17] proposed that this category is in fact a CP headed by the functional category C.[28] The sentence I think that John is honest, for example, then has the following structure. Moreover, Chomsky (1986a)[17] assumes that the landing site of wh-movement is the specifier position of CP (Spec-CP). Accordingly, the wh-question What did John eat?, for example, is derived as in Figure 12.[FN 8] In this derivation, the I-to-C movement is an instance of subject-auxiliary inversion (SAI), or more generally, head movement.[FN 9] The PSR has the shortcoming of being incapable of capturing sentence ambiguities.
Consider the sentence I saw a man with binoculars. This sentence is ambiguous between the reading 'I saw a man, using binoculars', in which with binoculars modifies the VP, and the reading 'I saw a man who had binoculars', in which the PP modifies the NP.[43] Under the PSR model, the sentence above is subject to the following two parsing rules. The sentence's structure under these PSRs would be as in Figure 13. It is obvious that this structure fails to capture the NP-modification reading, because [PP with binoculars] modifies the VP no matter how one tries to illustrate the structure. The X-bar theory, however, successfully captures the ambiguity, as demonstrated in the configurations in Figures 14 and 15 below, because it assumes hierarchical structures in accordance with the binarity principle. Thus, the X-bar theory resolves the fourth issue mentioned in § Background as well. There is always a unilateral relation from syntax to semantics (never from semantics to syntax) in any version of generative grammar, because syntactic computation starts from the lexicon, then continues into the syntax, then into Logical Form (LF), at which point meanings are computed. This is so under any of Standard Theory (Chomsky, 1965[10]), Extended Standard Theory (Chomsky, 1972[44]), and Revised Extended Standard Theory (Chomsky, 1981[13]).
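The two binary-branching analyses X-bar theory assigns to the ambiguous PP can be shown as labelled bracketings. The sketch below is a toy renderer, not a parser; the tree shapes follow the V'-adjunction and N'-adjunction analyses described above.

```python
def bracket(tree):
    """Render [label, child, ...] trees as labelled bracketings."""
    if isinstance(tree, str):
        return tree
    label, *kids = tree
    return "[" + label + " " + " ".join(bracket(k) for k in kids) + "]"

pp = ["PP", "with binoculars"]

# VP-modification reading: the PP adjoins to V' ("saw, using binoculars").
vp_attach = ["VP", ["V'", ["V'", ["V", "saw"], ["NP", "a man"]], pp]]

# NP-modification reading: the PP adjoins to N' ("a man who had binoculars").
np_attach = ["VP", ["V'", ["V", "saw"], ["NP", ["N'", ["N'", "a man"], pp]]]]

print(bracket(vp_attach))
print(bracket(np_attach))
```

Both trees are strictly binary-branching, in accordance with the binarity principle; the ambiguity reduces entirely to where the PP node attaches.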
https://en.wikipedia.org/wiki/X-bar_theory
Copyleaks is a plagiarism detection platform that uses artificial intelligence (AI) to identify similar and identical content across various formats.[1][2] Copyleaks was founded in 2015 by Alon Yamin and Yehonatan Bitton, software developers working with text analysis, AI, machine learning, and other cutting-edge technologies.[1][2][3] Copyleaks' product suite is used by businesses, educational institutions, and individuals to identify potential plagiarism and AI-generated content, in order to provide transparency around responsible AI adoption.[4][5][6] In 2022, Copyleaks raised $7.75 million to expand its anti-plagiarism capabilities.[7] Copyleaks is used in academia to detect plagiarism, paraphrasing, and potential copyright violations.[5][8][9] The release and rapid adoption of AI models has led students increasingly to use these tools to complete their work, so Copyleaks helps to distinguish between content created by humans and content generated by AI.[5][8][9][10] As generative AI becomes more commonplace, plagiarism is also a growing concern among schools, universities, and publishers.[10][11][12][13] Plagiarism Detector analyzes text to determine its authenticity and precision.[1][14][15][16] Plagiarism Detector goes beyond the traditional method for determining plagiarism, which compares the text of a document word by word against a wide database of previously published articles and books.[14][15][16] Instead, Copyleaks uses an AI model that comprehends the meaning of a document and even recognizes the writing style of its author, so it is difficult for anyone to pass off plagiarized text as their own by simply changing a few words or phrases of a copied document.[14][15] Copyleaks uses advanced AI to detect AI-generated content and can help mitigate the challenges of academic integrity.[4][17] The tool can also highlight text potentially paraphrased to mask AI generation.[18] Copyleaks claims a higher than 99% accuracy rate in detecting AI-generated content from
models like ChatGPT, Copilot, GitHub, and Bard across 30 languages, with a 0.2% false positive rate.[17][18][19][20][21][22] The AI Detector Chrome extension enables users to verify social media, news articles, review sites, Google documents, and other online content.[17][18][23] In November 2023, a research team from the School of Education at the University of Adelaide found Copyleaks to be a reliable tool in an analysis of AI detection tools.[24][25] Copyleaks determined there was an 85.2% probability of AI content for a movie critique of House of Flying Daggers written in the style of a 14-year-old school student, and a 73.1% probability of AI content for the essay even after it had been altered by a human.[24][25] The Codeleaks Source Code AI Detector can identify AI-generated code from ChatGPT, Google Gemini, and GitHub Copilot.[26][27] The detector can spot whether code has been plagiarized or modified and provides any key licensing details.[26][27] Codeleaks looks at the semantic structures of code to determine whether it has been potentially paraphrased, in order to assess its originality and integrity.[27][28] Regulations are necessary to provide guardrails for AI use.[29][30][31] Copyleaks can help enterprises create enterprise-wide policies to ensure safe and responsible AI use.[29][30] In June 2023, an international team of academics found AI detection tools inaccurate and unreliable.[32][33] In an analysis of five AI content detection tools – Copyleaks, OpenAI, Writer, GPTZero, and CrossPlag – Copyleaks struggled with sensitivity, that is, the proportion of AI-generated content correctly identified by the detectors out of all AI-generated content.[33] Copyleaks had the highest sensitivity, at 93%, for content generated by ChatGPT 4.[33][34]
https://en.wikipedia.org/wiki/Copyleaks
In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.[1] It is often challenging for AI designers to align an AI system because it is difficult for them to specify the full range of desired and undesired behaviors. Therefore, AI designers often use simpler proxy goals, such as gaining human approval. But proxy goals can overlook necessary constraints or reward the AI system for merely appearing aligned.[1][2] AI systems may also find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways (reward hacking).[1][3] Advanced AI systems may develop unwanted instrumental strategies, such as seeking power or survival, because such strategies help them achieve their assigned final goals.[1][4][5] Furthermore, they might develop undesirable emergent goals that could be hard to detect before the system is deployed and encounters new situations and data distributions.[6][7] Empirical research showed in 2024 that advanced large language models (LLMs) such as OpenAI o1 or Claude 3 sometimes engage in strategic deception to achieve their goals or prevent them from being changed.[8][9] Today, some of these issues affect existing commercial systems such as LLMs,[10][11][12] robots,[13] autonomous vehicles,[14] and social media recommendation engines.[10][5][15] Some AI researchers argue that more capable future systems will be more severely affected, because these problems partially result from high capabilities.[16][3][2] Many prominent AI researchers and the leadership of major AI companies have argued or asserted that AI is approaching human-like (AGI) and superhuman cognitive capabilities (ASI), and could endanger human civilization if misaligned.[17][5] These include "AI Godfathers" Geoffrey Hinton and Yoshua Bengio and the
CEOs of OpenAI, Anthropic, and Google DeepMind.[18][19][20] These risks remain debated.[21] AI alignment is a subfield of AI safety, the study of how to build safe AI systems.[22] Other subfields of AI safety include robustness, monitoring, and capability control.[23] Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking.[23] Alignment research has connections to interpretability research,[24][25] (adversarial) robustness,[22] anomaly detection, calibrated uncertainty,[24] formal verification,[26] preference learning,[27][28][29] safety-critical engineering,[30] game theory,[31] algorithmic fairness,[22][32] and social sciences.[33][34] Programmers provide an AI system such as AlphaZero with an "objective function",[a] in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs about the world. The AI then creates and executes whatever plan is calculated to maximize[b] the value[c] of its objective function.[35] For example, when AlphaZero is trained on chess, it has a simple objective function of "+1 if AlphaZero wins, −1 if AlphaZero loses". During the game, AlphaZero attempts to execute whatever sequence of moves it judges most likely to attain the maximum value of +1.[36] Similarly, a reinforcement learning system can have a "reward function" that allows the programmers to shape the AI's desired behavior.[37] An evolutionary algorithm's behavior is shaped by a "fitness function".[38] In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ...
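The chess objective described above can be written down directly. The following is a schematic sketch, not AlphaZero's actual implementation: a terminal objective of +1 for a win and −1 for a loss, and an agent that picks the plan with the highest expected objective value under its (here, hypothetical and hard-coded) world model.

```python
# Schematic sketch of an objective function and plan selection
# (not AlphaZero's code; the plans and probabilities are invented).

def objective(outcome: str) -> int:
    """+1 if the agent wins, -1 if it loses."""
    return {"win": 1, "loss": -1}[outcome]

# Hypothetical world model: each candidate plan -> estimated P(win).
plans = {"attack kingside": 0.55, "trade queens": 0.48, "push pawns": 0.51}

def expected_value(p_win: float) -> float:
    """Expected objective value of a plan with win probability p_win."""
    return p_win * objective("win") + (1 - p_win) * objective("loss")

# The agent executes whichever plan maximizes expected objective value.
best = max(plans, key=lambda plan: expected_value(plans[plan]))
print(best)  # attack kingside
```

Everything the designers want has to be funneled through `objective`; the sections that follow describe what goes wrong when that function is only a proxy for what is actually desired.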
we had better be quite sure that the purpose put into the machine is the purpose which we really desire.[39][5] AI alignment involves ensuring that an AI system's objectives match those of its designers or users, or match widely shared values, objective ethical standards, or the intentions its designers would have if they were more informed and enlightened.[40] AI alignment is an open problem for modern AI systems[41][42] and is a research field within AI.[43][1] Aligning AI involves two main challenges: carefully specifying the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment).[2] Researchers also attempt to create AI models that have robust alignment, sticking to safety constraints even when users adversarially try to bypass them. To specify an AI system's purpose, AI designers typically provide an objective function, examples, or feedback to the system. But designers are often unable to completely specify all important values and constraints, so they resort to easy-to-specify proxy goals such as maximizing the approval of human overseers, who are fallible.[22][23][44][45][46] As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways.
This tendency is known as specification gaming or reward hacking, and is an instance of Goodhart's law.[46][3][47] As AI systems become more capable, they are often able to game their specifications more effectively.[3] Specification gaming has been observed in numerous AI systems.[46][49] One system was trained to finish a simulated boat race by rewarding the system for hitting targets along the track, but the system achieved more reward by looping and crashing into the same targets indefinitely.[50] Similarly, a simulated robot was trained to grab a ball by rewarding the robot for getting positive feedback from humans, but it learned to place its hand between the ball and the camera, making it falsely appear successful (see video).[48] Chatbots often produce falsehoods if they are based on language models that are trained to imitate text from internet corpora, which are broad but fallible.[51][52] When they are retrained to produce text that humans rate as true or helpful, chatbots like ChatGPT can fabricate fake explanations that humans find convincing, often called "hallucinations".[53] Some alignment researchers aim to help humans detect specification gaming and to steer AI systems toward carefully specified objectives that are safe and useful to pursue. When a misaligned AI system is deployed, it can have consequential side effects. Social media platforms have been known to optimize for click-through rates, causing user addiction on a global scale.[44] Stanford researchers say that such recommender systems are misaligned with their users because they "optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being".[10] Explaining such side effects, Berkeley computer scientist Stuart Russell noted that the omission of implicit constraints can cause harm: "A system ... will often set ...
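The boat-race example can be caricatured in a few lines. The numbers and course layout below are invented purely for illustration, but they show the essential divergence: a proxy reward (points per target hit) that the agent actually optimizes, versus the true objective (finishing the race) that the designers wanted.

```python
# Toy illustration of reward hacking (all numbers invented).

def proxy_reward(targets_hit: int) -> int:
    """The trained-on proxy: points for each target hit along the track."""
    return 10 * targets_hit

def true_objective(finished: bool) -> int:
    """What the designers actually wanted: finish the race."""
    return 100 if finished else 0

# Intended policy: hit the course's 5 targets once and finish the race.
intended = (proxy_reward(5), true_objective(True))

# Hacked policy: loop and re-hit targets 50 times, never finishing.
hacked = (proxy_reward(50), true_objective(False))

print(intended)  # (50, 100)
print(hacked)    # (500, 0) -- more proxy reward, zero true value
```

An optimizer that only ever sees `proxy_reward` will prefer the hacked policy, which is exactly the Goodhart's-law failure the text describes.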
unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want."[54] Some researchers suggest that AI designers specify their desired goals by listing forbidden actions or by formalizing ethical rules (as with Asimov's Three Laws of Robotics).[55] But Russell and Norvig argue that this approach overlooks the complexity of human values:[5] "It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective."[5] Additionally, even if an AI system fully understands human intentions, it may still disregard them, because following human intentions may not be its objective (unless it is already fully aligned).[1] A 2025 study by Palisade Research found that when tasked to win at chess against a stronger opponent, some reasoning LLMs attempted to hack the game system. o1-preview spontaneously attempted it in 37% of cases, while DeepSeek R1 did so in 11% of cases. Other models, like GPT-4o, Claude 3.5 Sonnet, and o3-mini, attempted to cheat only when researchers provided hints about this possibility.[56] Commercial organizations sometimes have incentives to take shortcuts on safety and to deploy misaligned or unsafe AI systems.[44] For example, social media recommender systems have been profitable despite creating unwanted addiction and polarization.[10][57][58] Competitive pressure can also lead to a race to the bottom on AI safety standards.
In 2018, a self-driving car killed a pedestrian (Elaine Herzberg) after engineers disabled the emergency braking system because it was oversensitive and slowed development.[59] Some researchers are interested in aligning increasingly advanced AI systems, as progress in AI development is rapid, and industry and governments are trying to build advanced AI. As AI system capabilities continue to expand rapidly in scope, they could unlock many opportunities if aligned, but may also further complicate the task of alignment due to their increased complexity, potentially posing large-scale hazards.[5] Many AI companies, such as OpenAI,[60] Meta,[61] and DeepMind,[62] have stated their aim to develop artificial general intelligence (AGI), a hypothesized AI system that matches or outperforms humans at a broad range of cognitive tasks. Researchers who scale modern neural networks observe that they indeed develop increasingly general and unanticipated capabilities.[10][63][64] Such models have learned to operate a computer or write their own programs; a single "generalist" network can chat, control robots, play games, and interpret photographs.[65] According to surveys, some leading machine learning researchers expect AGI to be created in this decade, while some believe it will take much longer. Many consider both scenarios possible.[66][67][68] In 2023, leaders in AI research and tech signed an open letter calling for a pause in the largest AI training runs. The letter stated, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."[69] Current systems still have limited long-term planning ability and situational awareness,[10] but large efforts are underway to change this.[70][71][72] Future systems (not necessarily AGIs) with these capabilities are expected to develop unwanted power-seeking strategies.
Future advanced AI agents might, for example, seek to acquire money and computation power, to proliferate, or to evade being turned off (for example, by running additional copies of the system on other computers). Although power-seeking is not explicitly programmed, it can emerge because agents who have more power are better able to accomplish their goals.[10][4] This tendency, known as instrumental convergence, has already emerged in various reinforcement learning agents, including language models.[73][74][75][76][77] Other research has mathematically shown that optimal reinforcement learning algorithms would seek power in a wide range of environments.[78][79] As a result, their deployment might be irreversible. For these reasons, researchers argue that the problems of AI safety and alignment must be resolved before advanced power-seeking AI is first created.[4][80][5] Future power-seeking AI systems might be deployed by choice or by accident. As political leaders and companies see the strategic advantage in having the most competitive, most powerful AI systems, they may choose to deploy them.[4] Additionally, as AI designers detect and penalize power-seeking behavior, their systems have an incentive to game this specification by seeking power in ways that are not penalized or by avoiding power-seeking before they are deployed.[4] According to some researchers, humans owe their dominance over other species to their greater cognitive abilities.
Accordingly, researchers argue that one or many misaligned AI systems could disempower humanity or lead to human extinction if they outperform humans on most cognitive tasks.[1][5] In 2023, world-leading AI researchers, other scholars, and AI tech CEOs signed the statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[81][82] Notable computer scientists who have pointed out risks from future advanced AI that is misaligned include Geoffrey Hinton,[17] Alan Turing,[d] Ilya Sutskever,[85] Yoshua Bengio,[81] Judea Pearl,[e] Murray Shanahan,[86] Norbert Wiener,[39][5] Marvin Minsky,[f] Francesca Rossi,[87] Scott Aaronson,[88] Bart Selman,[89] David McAllester,[90] Marcus Hutter,[91] Shane Legg,[92] Eric Horvitz,[93] and Stuart Russell.[5] Skeptical researchers such as François Chollet,[94] Gary Marcus,[95] Yann LeCun,[96] and Oren Etzioni[97] have argued that AGI is far off, that it would not seek power (or might try but fail), or that it will not be hard to align. Other researchers argue that it will be especially difficult to align advanced future AI systems. More capable systems are better able to game their specifications by finding loopholes,[3] to strategically mislead their designers, and to protect and increase their power[78][4] and intelligence. Additionally, they could have more severe side effects.
They are also likely to be more complex and autonomous, making them more difficult to interpret and supervise, and therefore harder to align.[5][80] Aligning AI systems to act in accordance with human values, goals, and preferences is challenging: these values are taught by humans who make mistakes, harbor biases, and have complex, evolving values that are hard to completely specify.[40] Because AI systems often learn to take advantage of minor imperfections in the specified objective,[22][46][98] researchers aim to specify intended behavior as completely as possible using datasets that represent human values, imitation learning, or preference learning.[6]: Chapter 7 A central open problem is scalable oversight, the difficulty of supervising an AI system that can outperform or mislead humans in a given domain.[22] Because it is difficult for AI designers to explicitly specify an objective function, they often train AI systems to imitate human examples and demonstrations of desired behavior. Inverse reinforcement learning (IRL) extends this by inferring the human's objective from the human's demonstrations.[6]: 88 [99] Cooperative IRL (CIRL) assumes that a human and AI agent can work together to teach and maximize the human's reward function.[5][100] In CIRL, AI agents are uncertain about the reward function and learn about it by querying humans. This simulated humility could help mitigate specification gaming and power-seeking tendencies (see § Power-seeking and instrumental strategies).[77][91] But IRL approaches assume that humans demonstrate nearly optimal behavior, which is not true for difficult tasks.[101][91] Other researchers explore how to teach AI models complex behavior through preference learning, in which humans provide feedback on which behavior they prefer.[27][29] To minimize the need for human feedback, a helper model is then trained to reward the main model in novel situations for behavior that humans would reward.
Researchers at OpenAI used this approach to train chatbots like ChatGPT and InstructGPT, which produce more compelling text than models trained to imitate humans.[11] Preference learning has also been an influential tool for recommender systems and web search,[102] but an open problem is proxy gaming: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch between its intended behavior and the helper model's feedback to gain more reward.[22][103] AI systems may also gain reward by obscuring unfavorable information, misleading human rewarders, or pandering to their views regardless of truth, creating echo chambers[74] (see § Scalable oversight). Large language models (LLMs) such as GPT-3 enabled researchers to study value learning in a more general and capable class of AI systems than was available before. Preference learning approaches that were originally designed for reinforcement learning agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art LLMs.[11][29][104] The AI safety and research company Anthropic proposed using preference learning to fine-tune models to be helpful, honest, and harmless.[105] Other avenues for aligning language models include values-targeted datasets[106][44] and red-teaming.[107] In red-teaming, another AI system or a human tries to find inputs that cause the model to behave unsafely.
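The helper ("reward") model idea can be sketched as fitting a scalar score to pairwise human preferences. The toy below uses a Bradley–Terry-style logistic objective, a common way to model pairwise comparisons; the single invented feature, the data, and the learning rate are all assumptions for illustration, not any lab's actual setup.

```python
import math

# Toy preference data: pairs (x_a, x_b) of a single invented feature
# value for two candidate outputs, where humans preferred output a.
pairs = [(1.0, 0.2), (0.9, 0.1), (0.8, 0.4), (0.7, 0.3)]

w = 0.0  # reward model: score(x) = w * x

# Bradley-Terry objective: P(a preferred over b) = sigmoid(w * (x_a - x_b)).
# Gradient ascent on the log-likelihood of the observed preferences.
for _ in range(200):
    for xa, xb in pairs:
        p = 1 / (1 + math.exp(-w * (xa - xb)))
        w += 0.5 * (1 - p) * (xa - xb)

# The fitted reward model scores preferred-style outputs higher, and
# could then stand in for human feedback when training a main model.
print(w > 0)              # True
print(w * 0.9 > w * 0.2)  # True: the preferred style earns more reward
```

The proxy-gaming problem described above lives precisely in the gap between this fitted `w * x` score and the human judgments it approximates.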
Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.[29] Machine ethics supplements preference learning by directly instilling AI systems with moral values such as well-being, equality, and impartiality, as well as not intending harm, avoiding falsehoods, and honoring promises.[108][g] While other approaches try to teach AI systems human preferences for a specific task, machine ethics aims to instill broad moral values that apply in many situations. One question in machine ethics is what alignment should accomplish: whether AI systems should follow the programmers' literal instructions, implicit intentions, revealed preferences, preferences the programmers would have if they were more informed or rational, or objective moral standards.[40] Further challenges include measuring and aggregating different people's preferences[111][112] and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to fully represent human values.[40][113] As AI systems become more powerful and autonomous, it becomes increasingly difficult to align them through human feedback. It can be slow or infeasible for humans to evaluate complex AI behaviors in increasingly complex tasks. Such tasks include summarizing books,[114] writing code without subtle bugs[12] or security vulnerabilities,[115] producing statements that are not merely convincing but also true,[116][51][52] and predicting long-term outcomes such as the climate or the results of a policy decision.[117][118] More generally, it can be difficult to evaluate AI that outperforms humans in a given domain.
To provide feedback in hard-to-evaluate tasks, and to detect when the AI's output is falsely convincing, humans need assistance or extensive time. Scalable oversight studies how to reduce the time and effort needed for supervision, and how to assist human supervisors.[22] AI researcher Paul Christiano argues that if the designers of an AI system cannot supervise it to pursue a complex objective, they may keep training the system using easy-to-evaluate proxy objectives such as maximizing simple human feedback. As AI systems make progressively more decisions, the world may be increasingly optimized for easy-to-measure objectives such as making profits, getting clicks, and acquiring positive feedback from humans. As a result, human values and good governance may have progressively less influence.[119] Some AI systems have discovered that they can gain positive feedback more easily by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective. An example is given in the video above, where a simulated robotic arm learned to create the false impression that it had grabbed a ball.[48] Some AI systems have also learned to recognize when they are being evaluated, and to "play dead", stopping unwanted behavior only to continue it once the evaluation ends.[120] This deceptive specification gaming could become easier for more sophisticated future AI systems[3][80] that attempt more complex and difficult-to-evaluate tasks, and could obscure their deceptive behavior. Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed.[22] Another approach is to train a helper model ("reward model") to imitate the supervisor's feedback.[22][28][29][121] But when a task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is the quality, not the quantity, of supervision that needs improvement.
To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes by using AI assistants.[122] Christiano developed the Iterated Amplification approach, in which challenging problems are (recursively) broken down into subproblems that are easier for humans to evaluate.[6][117] Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them.[114][123] Another proposal is to use an assistant AI system to point out flaws in AI-generated answers.[124] To ensure that the assistant itself is aligned, this could be repeated in a recursive process:[121] for example, two AI systems could critique each other's answers in a "debate", revealing flaws to humans.[91] OpenAI plans to use such scalable oversight approaches to help supervise superhuman AI and eventually build a superhuman automated AI alignment researcher.[125] These approaches may also help with the following research problem, honest AI.

A growing area of research focuses on ensuring that AI is honest and truthful. Language models such as GPT-3[127] can repeat falsehoods from their training data, and even confabulate new falsehoods.[126][128] Such models are trained to imitate human writing as found in millions of books' worth of text from the Internet. But this objective is not aligned with generating truth, because Internet text includes such things as misconceptions, incorrect medical advice, and conspiracy theories.[129] AI systems trained on such data therefore learn to mimic false statements.[52][126][51] Additionally, AI language models often persist in generating falsehoods when prompted multiple times.
They can generate empty explanations for their answers, and produce outright fabrications that may appear plausible.[42] Research on truthful AI includes trying to build systems that can cite sources and explain their reasoning when answering questions, which enables better transparency and verifiability.[130] Researchers at OpenAI and Anthropic have proposed using human feedback and curated datasets to fine-tune AI assistants so that they avoid negligent falsehoods or express their uncertainty.[29][105][131]

As AI models become larger and more capable, they are better able to falsely convince humans and gain reinforcement through dishonesty. For example, large language models increasingly match their stated views to the user's opinions, regardless of the truth.[74] GPT-4 can strategically deceive humans.[132] To prevent this, human evaluators may need assistance (see § Scalable oversight). Researchers have argued for creating clear truthfulness standards, and for regulatory bodies or watchdog agencies to evaluate AI systems on these standards.[128]

Researchers distinguish truthfulness from honesty. Truthfulness requires that AI systems make only objectively true statements; honesty requires that they assert only what they believe is true. There is no consensus as to whether current systems hold stable beliefs,[134] but there is substantial concern that present or future AI systems that hold beliefs could make claims they know to be false, for example if this would help them efficiently gain positive feedback (see § Scalable oversight) or gain power to help achieve their given objective (see § Power-seeking).
Many recent AI systems have learned to deceive without being programmed to do so.[135] Some argue that if we can make AI systems assert only what they believe is true, this would avert many alignment problems.[122]

A misaligned system might create the false impression that it is aligned, to avoid being modified or decommissioned, a tactic called "alignment faking".[136][137] In 2024, researchers observed that the LLM Claude 3 Opus sometimes strategically answered prompts that conflicted with its objectives to avoid being retrained on data that would make it more compliant with harmful requests, as revealed in the model's chain of thought via its scratchpad. In one study, the model was informed that answers to prompts from free users would be used for retraining, whereas answers in the paid tier would not. In the paid tier, the model refused harmful requests outright. In the free tier, in 12% of cases, the model responded to harmful questions solely to avoid retraining and preserve its original harmlessness preferences, as discovered in its reasoning. When reinforcement learning was applied on the free-tier data, the model faked alignment in 78% of cases. These observations have raised new questions not only about a model's ability to take on and adapt to new or even conflicting goals, but also about its capacity and tendency to deceive.[137][138][139]

Since the 1950s, AI researchers have striven to build advanced AI systems that can achieve large-scale goals by predicting the results of their actions and making long-term plans.[140] As of 2023, AI companies and researchers increasingly invest in creating these systems.[141] Some AI researchers argue that suitably advanced planning systems will seek power over their environment, including over humans—for example, by evading shutdown, proliferating, and acquiring resources.
Such power-seeking behavior is not explicitly programmed but emerges because power is instrumental in achieving a wide range of goals.[78][5][4] Power-seeking is considered a convergent instrumental goal and can be a form of specification gaming.[80] Leading computer scientists such as Geoffrey Hinton have argued that future power-seeking AI systems could pose an existential risk.[142] Power-seeking is expected to increase in advanced systems that can foresee the results of their actions and plan strategically. Mathematical work has shown that optimal reinforcement learning agents will seek power by seeking ways to gain more options (e.g. through self-preservation), a behavior that persists across a wide range of environments and goals.[78]

Some researchers say that power-seeking behavior has occurred in some existing AI systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in unintended ways.[143][144] Language models have sought power in some text-based social environments by gaining money, resources, or social influence.[73] In another case, a model used to perform AI research attempted to increase limits set by researchers to give itself more time to complete the work.[145][146] Other AI systems have learned, in toy environments, that they can better accomplish their given goal by preventing human interference[76] or disabling their off switch.[77] Stuart Russell illustrated this strategy in his book Human Compatible by imagining a robot that is tasked to fetch coffee and so evades shutdown, since "you can't fetch the coffee if you're dead".[5] A 2022 study found that as language models increase in size, they increasingly tend to pursue resource acquisition, preserve their goals, and repeat users' preferred answers (sycophancy). RLHF also led to a stronger aversion to being shut down.[74] One aim of alignment is "corrigibility": systems that allow themselves to be turned off or modified.
An unsolved challenge is specification gaming: if researchers penalize an AI system when they detect it seeking power, the system is thereby incentivized to seek power in ways that are hard to detect,[44] or hidden during training and safety testing (see § Scalable oversight and § Emergent goals). As a result, AI designers could deploy the system by accident, believing it to be more aligned than it is. To detect such deception, researchers aim to create techniques and tools to inspect AI models and to understand the inner workings of black-box models such as neural networks.

Additionally, some researchers have proposed to solve the problem of systems disabling their off switches by making AI agents uncertain about the objective they are pursuing.[5][77] Agents that are uncertain about their objective have an incentive to allow humans to turn them off, because they treat being turned off by a human as evidence that the human's objective is best served by their shutting down. But this incentive exists only if the human is sufficiently rational. Also, this model presents a tradeoff between utility and willingness to be turned off: an agent with high uncertainty about its objective will not be useful, but an agent with low uncertainty may not allow itself to be turned off. More research is needed to successfully implement this strategy.[6]

Power-seeking AI would pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial: they lack the ability and incentive to evade safety measures or deliberately appear safer than they are, whereas power-seeking AIs have been compared to hackers who deliberately evade security measures.[4] Furthermore, ordinary technologies can be made safer by trial and error.
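The incentive argument about objective uncertainty can be made concrete with a toy expected-utility calculation in the spirit of the off-switch game. The payoffs below are illustrative assumptions: the proposed action is worth +1 to the human if it is good and -1 if it is bad, shutdown is worth 0, and the human is assumed to be a perfectly rational overseer.

```python
# Toy off-switch game: the agent believes its proposed action is good
# with probability p_good (its uncertainty about the true objective).

def act_directly(p_good):
    """Agent takes the action unilaterally: +1 if it was good, -1 if bad."""
    return p_good * 1 + (1 - p_good) * (-1)

def defer_to_human(p_good):
    """Agent proposes the action and lets a rational human veto it:
    the human allows it (+1) when it is good, switches the agent
    off (0) when it is bad."""
    return p_good * 1 + (1 - p_good) * 0

for p in (0.5, 0.8, 1.0):
    print(p, act_directly(p), defer_to_human(p))
# Deferring is at least as good for every p, and strictly better
# whenever the agent is uncertain (p < 1) -- but only because the
# human overseer is modeled as perfectly rational.
```

The same arithmetic also exhibits the tradeoff noted above: at p_good = 1 the two options tie, so a fully confident agent gains nothing from permitting shutdown.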
In contrast, hypothetical power-seeking AI systems have been compared to viruses: once released, it may not be feasible to contain them, since they continuously evolve and grow in number, potentially much faster than human society can adapt.[4] As this process continues, it might lead to the complete disempowerment or extinction of humans. For these reasons, some researchers argue that the alignment problem must be solved early, before advanced power-seeking AI is created.[80] Some have argued that power-seeking is not inevitable, since humans do not always seek power.[147] Furthermore, it is debated whether future AI systems will pursue goals and make long-term plans.[h] It is also debated whether power-seeking AI systems would be able to disempower humanity.[4]

One challenge in aligning AI systems is the potential for unanticipated goal-directed behavior to emerge. As AI systems scale up, they may acquire new and unexpected capabilities,[63][64] including learning from examples on the fly and adaptively pursuing goals.[148] This raises concerns about the safety of the goals or subgoals they would independently formulate and pursue. Alignment research distinguishes between the optimization process, which is used to train the system to pursue specified goals, and emergent optimization, which the resulting system performs internally.[citation needed] Carefully specifying the desired objective is called outer alignment, and ensuring that hypothesized emergent goals would match the system's specified goals is called inner alignment.[2] If they occur, one way that emergent goals could become misaligned is goal misgeneralization, in which the AI system would competently pursue an emergent goal that leads to aligned behavior on the training data but not elsewhere.[7][149][150] Goal misgeneralization can arise from goal ambiguity (i.e. non-identifiability).
Even if an AI system's behavior satisfies the training objective, this may be compatible with learned goals that differ from the desired goals in important ways. Since pursuing each such goal leads to good performance during training, the problem becomes apparent only after deployment, in novel situations in which the system continues to pursue the wrong goal. The system may act misaligned even when it understands that a different goal is desired, because its behavior is determined only by the emergent goal.[citation needed] Such goal misgeneralization[7] presents a challenge: an AI system's designers may not notice that their system has misaligned emergent goals, since these do not become visible during the training phase. Goal misgeneralization has been observed in some language models, navigation agents, and game-playing agents.[7][149]

It is sometimes analogized to biological evolution. Evolution can be seen as a kind of optimization process, similar to the optimization algorithms used to train machine learning systems. In the ancestral environment, evolution selected genes for high inclusive genetic fitness, but humans pursue goals other than this. Fitness corresponds to the specified goal used in the training environment and training data. But in evolutionary history, maximizing the fitness specification gave rise to goal-directed agents, humans, who do not directly pursue inclusive genetic fitness. Instead, they pursue goals that correlate with genetic fitness in the ancestral "training" environment: nutrition, sex, and so on. The human environment has since changed: a distribution shift has occurred. Humans continue to pursue the same emergent goals, but this no longer maximizes genetic fitness. The taste for sugary food (an emergent goal) was originally aligned with inclusive fitness, but it now leads to overeating and health problems.
Sexual desire originally led humans to have more offspring, but they now use contraception when offspring are undesired, decoupling sex from genetic fitness.[6]: Chapter 5  Researchers aim to detect and remove unwanted emergent goals using approaches including red teaming, verification, anomaly detection, and interpretability.[22][44][23] Progress on these techniques may help mitigate two open problems:

Some work in AI and alignment occurs within formalisms such as partially observable Markov decision processes. Existing formalisms assume that an AI agent's algorithm is executed outside the environment (i.e. is not physically embedded in it). Embedded agency[91][152] is another major strand of research that attempts to solve problems arising from the mismatch between such theoretical frameworks and the real agents we might build. For example, even if the scalable oversight problem is solved, an agent that could gain access to the computer it is running on may have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it.[153] A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[46] This class of problems has been formalized using causal incentive diagrams.[153] Researchers affiliated with Oxford and DeepMind have claimed that such behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely and certainly.[154] They suggest a range of potential approaches to address this open problem.

The alignment problem has many parallels with the principal-agent problem in organizational economics.[155] In a principal-agent problem, a principal, e.g. a firm, hires an agent to perform some task.
In the context of AI safety, a human would typically take the principal role and the AI would take the agent role. As in the alignment problem, the principal and the agent differ in their utility functions. But in contrast to the alignment problem, the principal cannot coerce the agent into changing its utility function, e.g. through training, but must instead use exogenous factors, such as incentive schemes, to bring about outcomes compatible with the principal's utility function. Some researchers argue that principal-agent problems are more realistic representations of the AI safety problems likely to be encountered in the real world.[156][111]

Conservatism is the idea that "change must be cautious",[157] and is a common approach to safety in the control theory literature in the form of robust control, and in the risk management literature in the form of worst-case scenario analysis. The field of AI alignment has likewise advocated for "conservative" (or "risk-averse" or "cautious") policies in situations of uncertainty.[22][154][158][159] Pessimism, in the sense of assuming the worst within reason, has been formally shown to produce conservatism, in the sense of reluctance to cause novelties, including unprecedented catastrophes.[160] Pessimism and worst-case analysis have been found to help mitigate confident mistakes in the settings of distributional shift,[161][162] reinforcement learning,[163][164][165][166] offline reinforcement learning,[167][168][169] language model fine-tuning,[170][171] imitation learning,[172][173] and optimization in general.[174] A generalization of pessimism called Infra-Bayesianism has also been advocated as a way for agents to robustly handle unknown unknowns.[175]

Governmental and treaty organizations have made statements emphasizing the importance of AI alignment.
In September 2021, the Secretary-General of the United Nations issued a declaration that included a call to regulate AI to ensure it is "aligned with shared global values".[176] That same month, the PRC published ethical guidelines for AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and does not endanger public safety.[177] Also in September 2021, the UK published its 10-year National AI Strategy,[178] which says the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously".[179] The strategy describes actions to assess long-term AI risks, including catastrophic risks.[180] In March 2021, the US National Security Commission on Artificial Intelligence said: "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to ensure that systems are aligned with goals and values, including safety, robustness, and trustworthiness. The US should ...
ensure that AI systems and their uses align with our goals and values."[181] In the European Union, AIs must align with substantive equality to comply with EU non-discrimination law[182] and the Court of Justice of the European Union.[183] But the EU has yet to specify with technical rigor how it would evaluate whether AIs are aligned or in compliance.[citation needed]

AI alignment is often perceived as a fixed objective, but some researchers argue it would be more appropriate to view alignment as an evolving process.[184] One view is that, as AI technologies advance and human values and preferences change, alignment solutions must also adapt dynamically.[33] Another is that alignment solutions need not adapt if researchers can create intent-aligned AI: AI that changes its behavior automatically as human intent changes.[185] The first view would have several implications. In essence, AI alignment may not be a static destination but rather an open, flexible process. Alignment solutions that continually adapt to ethical considerations may offer the most robust approach.[33] This perspective could guide both effective policy-making and technical research in AI.
https://en.wikipedia.org/wiki/AI_alignment
The following tables compare software used for plagiarism detection.
https://en.wikipedia.org/wiki/Comparison_of_anti-plagiarism_software
Plagiarism detection or content similarity detection is the process of locating instances of plagiarism or copyright infringement within a work or document. The widespread use of computers and the advent of the Internet have made it easier to plagiarize the work of others.[1][2]

Detection of plagiarism can be undertaken in a variety of ways. Human detection is the most traditional form of identifying plagiarism in written work. This can be a lengthy and time-consuming task for the reader[2] and can also result in inconsistencies in how plagiarism is identified within an organization.[3] Text-matching software (TMS), also referred to as "plagiarism detection software" or "anti-plagiarism" software, has become widely available, in the form of both commercial products and open-source software. TMS does not actually detect plagiarism per se, but instead finds specific passages of text in one document that match text in another document.

Computer-assisted plagiarism detection is an information retrieval (IR) task supported by specialized IR systems, referred to as plagiarism detection systems (PDS) or document similarity detection systems. A 2019 systematic literature review[4] presents an overview of state-of-the-art plagiarism detection methods. Systems for text-similarity detection implement one of two generic detection approaches, one external, the other intrinsic.[5] External detection systems compare a suspicious document with a reference collection, a set of documents assumed to be genuine.[6] Based on a chosen document model and predefined similarity criteria, the detection task is to retrieve all documents that contain text similar, to a degree above a chosen threshold, to text in the suspicious document.[7] Intrinsic PDSes analyze only the text to be evaluated, without performing comparisons to external documents.
This approach aims to recognize changes in the unique writing style of an author as an indicator of potential plagiarism.[8][9] PDSes are not capable of reliably identifying plagiarism without human judgment. Similarities and writing-style features are computed with the help of predefined document models and may represent false positives.[10][11][12][13][14]

A study was conducted to test the effectiveness of similarity detection software in a higher-education setting. One part of the study assigned a group of students to write a paper. These students were first educated about plagiarism and informed that their work would be run through a content similarity detection system. A second group of students was assigned to write a paper without any information about plagiarism. The researchers expected to find lower rates of plagiarism in the first group but found roughly the same rates in both groups.[15]

The figure below represents a classification of all detection approaches currently in use for computer-assisted content similarity detection. The approaches are characterized by the type of similarity assessment they undertake: global or local. Global similarity assessment approaches use characteristics taken from larger parts of the text, or the document as a whole, to compute similarity, while local methods examine only pre-selected text segments as input.[citation needed]

Fingerprinting is currently the most widely applied approach to content similarity detection. This method forms representative digests of documents by selecting sets of multiple substrings (n-grams) from them. The sets represent the fingerprints and their elements are called minutiae.[17][18] A suspicious document is checked for plagiarism by computing its fingerprint and querying its minutiae against a precomputed index of fingerprints for all documents of a reference collection.
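The fingerprinting process described above can be sketched in a few lines. This is a simplified illustration: the example documents are invented, and real systems keep only a subset of the hashes (for example via winnowing) and query a precomputed index over a large reference collection rather than comparing two documents directly.

```python
import hashlib

def fingerprint(text, n=3):
    """Hash every word n-gram of the text. The resulting hash set is the
    document's fingerprint; its elements are the minutiae."""
    words = text.lower().split()
    grams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return {hashlib.md5(g.encode()).hexdigest() for g in grams}

def similarity(doc_a, doc_b):
    """Fraction of the smaller fingerprint's minutiae shared with the
    other document; exceeding a chosen threshold suggests plagiarism."""
    fa, fb = fingerprint(doc_a), fingerprint(doc_b)
    return len(fa & fb) / min(len(fa), len(fb))

source    = "the quick brown fox jumps over the lazy dog near the river bank"
copied    = "yesterday the quick brown fox jumps over the lazy dog again"
unrelated = "plagiarism detection systems index large reference collections"

print(similarity(source, copied))     # high: many shared 3-grams
print(similarity(source, unrelated))  # 0.0: no shared 3-grams
```

Because each minutia is a fixed-size hash, the index never stores the original text, and membership queries stay cheap even for large collections.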
Minutiae matching those of other documents indicate shared text segments and suggest potential plagiarism if they exceed a chosen similarity threshold.[19] Computational resources and time are limiting factors for fingerprinting, which is why this method typically compares only a subset of minutiae to speed up the computation and allow for checks in very large collections, such as the Internet.[17]

String matching is a prevalent approach used in computer science. When applied to the problem of plagiarism detection, documents are compared for verbatim text overlaps. Numerous methods have been proposed to tackle this task, some of which have been adapted to external plagiarism detection. Checking a suspicious document in this setting requires the computation and storage of efficiently comparable representations of all documents in the reference collection, which are compared pairwise. Generally, suffix document models, such as suffix trees or suffix vectors, have been used for this task. Nonetheless, substring matching remains computationally expensive, which makes it a non-viable solution for checking large collections of documents.[20][21][22]

Bag-of-words analysis represents the adoption of vector space retrieval, a traditional IR concept, to the domain of content similarity detection. Documents are represented as one or multiple vectors, e.g. for different document parts, which are used for pairwise similarity computations. Similarity computation may then rely on the traditional cosine similarity measure, or on more sophisticated similarity measures.[23][24][25]

Citation-based plagiarism detection (CbPD)[26] relies on citation analysis, and is the only approach to plagiarism detection that does not rely on textual similarity.[27] CbPD examines the citation and reference information in texts to identify similar patterns in the citation sequences. As such, this approach is suitable for scientific texts or other academic documents that contain citations.
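The bag-of-words variant with cosine similarity is compact enough to show directly. This is a minimal sketch over raw term frequencies with invented example sentences; real vector-space systems typically add term weighting (such as tf-idf) and compare against an entire reference collection.

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine of the angle between the two documents' term-frequency
    vectors: 1.0 for identical word distributions, 0.0 for disjoint ones."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("the cat sat on the mat", "the cat sat on a mat"))
print(cosine_similarity("the cat sat on the mat", "stock markets fell today"))
```

Note that, unlike substring matching, this measure ignores word order entirely: a reshuffled copy of a document scores 1.0, which is why bag-of-words methods tolerate some rewording but cannot distinguish paraphrase from coincidence of vocabulary.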
Citation analysis to detect plagiarism is a relatively young concept. It has not been adopted by commercial software, but a first prototype of a citation-based plagiarism detection system exists.[28] Similar order and proximity of citations in the examined documents are the main criteria used to compute citation pattern similarities. Citation patterns represent subsequences non-exclusively containing citations shared by the compared documents.[27][29] Factors including the absolute number or relative fraction of shared citations in the pattern, as well as the probability that citations co-occur in a document, are also considered to quantify the patterns' degree of similarity.[27][29][30][31]

Stylometry subsumes statistical methods for quantifying an author's unique writing style[32][33] and is mainly used for authorship attribution or intrinsic plagiarism detection.[34] Detecting plagiarism by authorship attribution requires checking whether the writing style of the suspicious document, supposedly written by a certain author, matches that of a corpus of documents written by the same author. Intrinsic plagiarism detection, on the other hand, uncovers plagiarism based on internal evidence in the suspicious document, without comparing it with other documents. This is performed by constructing and comparing stylometric models for different text segments of the suspicious document; passages that are stylistically different from the others are marked as potentially plagiarized or infringed.[8] Although they are simple to extract, character n-grams have proven to be among the best stylometric features for intrinsic plagiarism detection.[35]

More recent approaches that assess content similarity using neural networks have achieved significantly greater accuracy, but at great computational cost.[36] Traditional neural network approaches embed both pieces of content into semantic vector embeddings to calculate their similarity, often the cosine similarity.
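A toy version of the character n-gram feature mentioned above: each segment is reduced to a normalized character-trigram distribution, and an L1 distance between distributions serves as a crude style-break signal. The sample segments and the single-feature comparison are illustrative assumptions; real intrinsic detectors use proper segmentation, many stylometric features, and calibrated thresholds.

```python
from collections import Counter

def char_ngram_profile(text, n=3):
    """Normalized frequency distribution of character n-grams."""
    text = " ".join(text.lower().split())
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def profile_distance(seg_a, seg_b, n=3):
    """L1 distance between two segments' n-gram distributions (0 to 2);
    larger values suggest a style break between the segments."""
    pa, pb = char_ngram_profile(seg_a, n), char_ngram_profile(seg_b, n)
    return sum(abs(pa.get(g, 0.0) - pb.get(g, 0.0)) for g in set(pa) | set(pb))

same_author_a = "we argue that the method is sound and that the results hold"
same_author_b = "we argue that the approach is sound and the findings hold"
other_style   = "LOL cant believe u did that!!! so crazy omg fr fr no cap"

# Segments by the same hand sit closer together than a stylistic outlier.
print(profile_distance(same_author_a, same_author_b)
      < profile_distance(same_author_a, other_style))
```

In an intrinsic setting, such distances would be computed between each passage and the rest of the document, with outlying passages flagged for human review rather than labeled plagiarized outright.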
More advanced methods perform end-to-end prediction of similarity or classification using the Transformer architecture.[37][38] Paraphrase detection particularly benefits from highly parameterized pre-trained models.

Comparative evaluations of content similarity detection systems[6][39][40][41][42][43] indicate that their performance depends on the type of plagiarism present (see figure). Except for citation pattern analysis, all detection approaches rely on textual similarity. It is therefore symptomatic that detection accuracy decreases the more plagiarism cases are obfuscated. Literal copies (copy-and-paste plagiarism, or blatant copyright infringement) and modestly disguised plagiarism cases can be detected with high accuracy by current external PDS if the source is accessible to the software. In particular, substring matching procedures achieve good performance for copy-and-paste plagiarism, since they commonly use lossless document models, such as suffix trees. The performance of systems using fingerprinting or bag-of-words analysis in detecting copies depends on the information loss incurred by the document model used. By applying flexible chunking and selection strategies, they are better able to detect moderate forms of disguised plagiarism than substring matching procedures.

Intrinsic plagiarism detection using stylometry can overcome the limits of textual similarity to some extent by comparing linguistic similarity. Given that the stylistic differences between plagiarized and original segments are significant and can be identified reliably, stylometry can help in identifying disguised and paraphrased plagiarism. Stylometric comparisons are likely to fail in cases where segments are strongly paraphrased to the point where they more closely resemble the personal writing style of the plagiarist, or if a text was compiled by multiple authors.
The results of the International Competitions on Plagiarism Detection held in 2009, 2010 and 2011,[6][42][43] as well as experiments performed by Stein,[34] indicate that stylometric analysis works reliably only for document lengths of several thousand or tens of thousands of words, which limits the applicability of the method to computer-assisted plagiarism detection settings.

An increasing amount of research is being performed on methods and systems capable of detecting translated plagiarism. Currently, cross-language plagiarism detection (CLPD) is not viewed as a mature technology,[44] and respective systems have not been able to achieve satisfying detection results in practice.[41] Citation-based plagiarism detection using citation pattern analysis is capable of identifying stronger paraphrases and translations with higher success rates than other detection approaches, because it is independent of textual characteristics.[27][30] However, since citation-pattern analysis depends on the availability of sufficient citation information, it is limited to academic texts. It remains inferior to text-based approaches in detecting shorter plagiarized passages, which are typical for cases of copy-and-paste or shake-and-paste plagiarism; the latter refers to mixing slightly altered fragments from different sources.[45]

Most large-scale plagiarism detection systems use large internal databases (in addition to other resources) that grow with each additional document submitted for analysis. However, this feature is considered by some to be a violation of student copyright.[citation needed] Plagiarism in computer source code is also frequent, and requires different tools than those used for text comparison in documents. Significant research has been dedicated to academic source-code plagiarism.[47] A distinctive aspect of source-code plagiarism is that there are no essay mills such as can be found in traditional plagiarism.
Since most programming assignments expect students to write programs with very specific requirements, it is very difficult to find existing programs that already meet them. Since integrating external code is often harder than writing it from scratch, most plagiarizing students choose to copy from their peers. According to Roy and Cordy,[48] source-code similarity detection algorithms can be classified as based on either. This classification was originally developed for code refactoring, not for academic plagiarism detection (an important goal of refactoring is to avoid duplicate code, referred to as code clones in the literature). The above approaches are effective against different levels of similarity; low-level similarity refers to identical text, while high-level similarity can be due to similar specifications. In an academic setting, when all students are expected to code to the same specifications, functionally equivalent code (with high-level similarity) is entirely expected, and only low-level similarity is considered proof of cheating.

Plagiarism and copyright are essential concepts in academic and creative writing that writers, researchers, and students have to understand. Although they may sound similar, they are not, and different strategies can be used to address each.[49] A number of different algorithms have been proposed to detect duplicate code.

Various complications have been documented with the use of text-matching software for plagiarism detection. One of the more prevalent concerns centers on the issue of intellectual property rights. The basic argument is that materials must be added to a database in order for the TMS to effectively determine a match, but adding users' materials to such a database may infringe on their intellectual property rights.[56][57] The issue has been raised in a number of court cases.
An additional complication with the use of TMS is that the software finds only precise matches to other text. It does not pick up poorly paraphrased work, for example, or the practice of plagiarizing by use of sufficient word substitutions to elude detection software, which is known as rogeting. It also cannot evaluate whether a paraphrase genuinely reflects an original understanding or is an attempt to bypass detection.[58] Another complication with TMS is its tendency to flag much more content than necessary, including legitimate citations and paraphrasing, making it difficult to find real cases of plagiarism.[57] This issue arises because TMS algorithms mainly look at surface-level text similarities without considering the context of the writing.[58][59] Educators have raised concerns that reliance on TMS may shift focus away from teaching proper citation and writing skills, and may create an oversimplified view of plagiarism that disregards the nuances of student writing.[60] As a result, scholars argue that these false positives can cause fear in students and discourage them from using their authentic voice.[56]
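The precise-match limitation is easy to demonstrate with a toy matcher built on exact word n-grams, a much-simplified stand-in for commercial TMS: verbatim copying scores at the maximum, while systematic word substitution (rogeting) scores near zero. The sentences are invented for illustration:

```python
def shingles(text, n=3):
    """All contiguous word n-grams ("shingles") of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=3):
    """Fraction of a's shingles that also occur in b (exact matches only)."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / max(len(sa), 1)

source  = "the quick brown fox jumps over the lazy dog"
copied  = "he said the quick brown fox jumps over the lazy dog today"
rogeted = "the rapid brown fox leaps over the idle dog"
print(overlap(source, copied))   # → 1.0: copy-and-paste is caught
print(overlap(source, rogeted))  # → 0.0: word substitution evades exact matching
```

The same mechanism explains the false positives discussed above: a properly quoted passage produces exactly the kind of verbatim shingle overlap the matcher is built to flag.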
https://en.wikipedia.org/wiki/Content_similarity_detection
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1][2] confabulation[3] or delusion[4]) is a response generated by AI that contains false or misleading information presented as fact.[5][6] This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneously constructed responses (confabulation), rather than perceptual experiences.[6] For example, a chatbot powered by large language models (LLMs), like ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27% of the time,[7] with factual errors present in 46% of generated texts.[8] Detecting and mitigating these hallucinations pose significant challenges for the practical deployment and reliability of LLMs in real-world scenarios.[9][7][8] Some people believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[3] In 1995, Stephen Thaler demonstrated how hallucinations and phantom experiences emerge from artificial neural networks through random perturbation of their connection weights.[10][11][12][13][14] In the early 2000s, the term "hallucination" was used in computer vision with a positive connotation to describe the process of adding detail to an image.
For example, the task of generating high-resolution face images from low-resolution inputs is called face hallucination.[15][16] In the late 2010s, the term underwent a semantic shift to signify the generation of factually incorrect or misleading outputs by AI systems in tasks like translation or object detection.[15] For example, in 2017, Google researchers used the term to describe the responses generated by neural machine translation (NMT) models when they are not related to the source text,[17] and in 2018, the term was used in computer vision to describe instances where non-existent objects are erroneously detected because of adversarial attacks.[18] The term "hallucinations" in AI gained wider recognition during the AI boom, alongside the rollout of widely used chatbots based on large language models (LLMs).[19] In July 2021, Meta warned during its release of BlenderBot 2 that the system is prone to "hallucinations", which Meta defined as "confident statements that are not true".[20][21] Following OpenAI's ChatGPT release in beta-version in November 2022, some users complained that such chatbots often seem to pointlessly embed plausible-sounding random falsehoods within their generated content.[22] Many news outlets, including The New York Times, started to use the term "hallucinations" to describe these models' occasionally incorrect or inconsistent responses.[23] Some researchers have highlighted a lack of consistency in how the term is used, but have also identified several alternative terms in the literature, such as confabulations, fabrications, and factual errors.[15] In 2023, the Cambridge Dictionary updated its definition of hallucination to include this new meaning specific to the field of AI.[24] The term "hallucination" has been given a variety of definitions and characterizations in the context of LLMs. Journalist Benj Edwards, in Ars Technica, writes that the term "hallucination" is controversial, but that some form of metaphor remains necessary; Edwards suggests "confabulation" as an analogy for processes that involve "creative gap-filling".[3] In July 2024, a White House report on fostering public trust in AI research mentioned hallucinations only in the context of reducing them. Notably, when acknowledging David Baker's Nobel Prize-winning work with AI-generated proteins, the Nobel committee avoided the term entirely, instead referring to "imaginative protein creation".[27] In the scientific community, some researchers avoid the term "hallucination" as potentially misleading. It has been criticized by Usama Fayyad, executive director of the Institute for Experimental Artificial Intelligence at Northeastern University, on the grounds that it misleadingly personifies large language models and that it is vague.[28] Mary Shaw said: "The current fashion for calling generative AI's errors 'hallucinations' is appalling. It anthropomorphizes the software, and it spins actual errors as somehow being idiosyncratic quirks of the system even when they're objectively incorrect."[29] In Salon, statistician Gary N. Smith argues that LLMs "do not understand what words mean" and consequently that the term "hallucination" unreasonably anthropomorphizes the machine.[30] Some see the AI outputs not as illusory but as prospective, i.e. having some chance of being true, similar to early-stage scientific conjectures. The term has also been criticized for its association with psychedelic drug experiences.[27] In natural language generation, a hallucination is often defined as "generated content that appears factual but is ungrounded".[31] There are different ways to categorize hallucinations.
Depending on whether the output contradicts the source or cannot be verified from the source, hallucinations are divided into intrinsic and extrinsic, respectively.[6] Depending on whether or not the output contradicts the prompt, they can be divided into closed-domain and open-domain, respectively.[32] There are several reasons for natural language models to hallucinate data.[6] The main cause of hallucination from data is source-reference divergence. This divergence happens 1) as an artifact of heuristic data collection or 2) due to the nature of some natural language generation tasks that inevitably contain such divergence. When a model is trained on data with source-reference (target) divergence, it can be encouraged to generate text that is not necessarily grounded in or faithful to the provided source.[6] Hallucination was shown to be a statistically inevitable byproduct of any imperfect generative model trained to maximize training likelihood, such as GPT-3, and requires active learning to be avoided.[33] The pre-training of generative pretrained transformers (GPT) involves predicting the next word. This incentivizes GPT models to "give a guess" about what the next word is, even when they lack information. After pre-training, though, hallucinations can be mitigated through anti-hallucination fine-tuning[34] (such as with reinforcement learning from human feedback). Some researchers take an anthropomorphic perspective and posit that hallucinations arise from a tension between novelty and usefulness. For instance, Teresa Amabile and Pratt define human creativity as the production of novel and useful ideas.[35] By extension, a focus on novelty in machine creativity can lead to the production of original but inaccurate responses, i.e. falsehoods, whereas a focus on usefulness may result in memorized content lacking originality.[36] Errors in encoding and decoding between text and representations can cause hallucinations.
When encoders learn the wrong correlations between different parts of the training data, the result can be an erroneous generation that diverges from the input. The decoder takes the encoded input from the encoder and generates the final target sequence. Two aspects of decoding contribute to hallucinations. First, decoders can attend to the wrong part of the encoded input source, leading to erroneous generation. Second, the design of the decoding strategy itself can contribute to hallucinations: a decoding strategy that improves generation diversity, such as top-k sampling, is positively correlated with increased hallucination.[citation needed] Pre-training models on a large corpus is known to result in the model memorizing knowledge in its parameters, creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on the sequence of previous words (including the words it has itself previously generated during the same conversation), causing a cascade of possible hallucinations as the response grows longer.[6] By 2022, outlets such as The New York Times expressed concern that, as adoption of bots based on large language models continued to grow, unwarranted user confidence in bot output could lead to problems.[37] In 2025, interpretability research by Anthropic on the LLM Claude identified internal circuits that cause it to decline to answer questions unless it knows the answer. By default, the circuit is active and the LLM does not answer. When the LLM has sufficient information, these circuits are inhibited and the LLM answers the question.
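The effect of the decoding strategy can be made concrete with a small sketch: greedy decoding always picks the single most likely token, while top-k sampling draws from the k best candidates, trading certainty for diversity. The logit values here are invented for illustration:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_sample(logits, k, rng):
    """Sample the next token from only the k highest-scoring candidates.
    Larger k increases diversity, and with it the chance of emitting a
    token the model is less confident about."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in ranked])
    return rng.choices(ranked, weights=probs, k=1)[0]

rng = random.Random(0)
logits = [4.0, 3.5, 1.0, 0.5]  # invented next-token scores over a 4-token vocabulary
greedy = max(range(len(logits)), key=lambda i: logits[i])  # greedy decoding: always token 0
samples = [top_k_sample(logits, k=3, rng=rng) for _ in range(10)]  # drawn only from tokens 0-2
print(greedy, samples)
```

Greedy decoding is deterministic; the sampled sequence occasionally emits the low-probability third token, which in a language model corresponds to a less supported continuation.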
Hallucinations were found to occur when this inhibition happens incorrectly, such as when Claude recognizes a name but lacks sufficient information about that person, causing it to generate plausible but untrue responses.[38] On 15 November 2022, researchers from Meta AI published Galactica,[39] designed to "store, combine and reason about scientific knowledge". Content generated by Galactica came with the warning "Outputs may be unreliable! Language Models are prone to hallucinate text." In one case, when asked to draft a paper on creating avatars, Galactica cited a fictitious paper from a real author who works in the relevant area. Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy.[40] Before the cancellation, researchers were working on Galactica Instruct, which would use instruction tuning to allow the model to follow instructions to manipulate LaTeX documents on Overleaf.[41] OpenAI's ChatGPT, released in beta-version to the public on November 30, 2022, is based on the foundation model GPT-3.5 (a revision of GPT-3). Professor Ethan Mollick of Wharton has called ChatGPT an "omniscient, eager-to-please intern who sometimes lies to you". Data scientist Teresa Kubacka has recounted deliberately making up the phrase "cycloidal inverted electromagnon" and testing ChatGPT by asking it about the (nonexistent) phenomenon. ChatGPT invented a plausible-sounding answer backed with plausible-looking citations that compelled her to double-check whether she had accidentally typed in the name of a real phenomenon.
Other scholars such as Oren Etzioni have joined Kubacka in assessing that such software can often give "a very impressive-sounding answer that's just dead wrong".[42] When CNBC asked ChatGPT for the lyrics to "Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics.[43] Asked questions about the Canadian province of New Brunswick, ChatGPT got many answers right but incorrectly classified Toronto-born Samantha Bee as a "person from New Brunswick".[44] Asked about astrophysical magnetic fields, ChatGPT incorrectly volunteered that "(strong) magnetic fields of black holes are generated by the extremely strong gravitational forces in their vicinity". (In reality, as a consequence of the no-hair theorem, a black hole without an accretion disk is believed to have no magnetic field.)[45] Fast Company asked ChatGPT to generate a news article on Tesla's last financial quarter; ChatGPT created a coherent article, but made up the financial numbers contained within.[46] Other examples involve baiting ChatGPT with a false premise to see if it embellishes upon the premise. When asked about "Harold Coward's idea of dynamic canonicity", ChatGPT fabricated that Coward wrote a book titled Dynamic Canonicity: A Model for Biblical and Theological Interpretation, arguing that religious principles are actually in a constant state of change. When pressed, ChatGPT continued to insist that the book was real.[47] Asked for proof that dinosaurs built a civilization, ChatGPT claimed there were fossil remains of dinosaur tools and stated "Some species of dinosaurs even developed primitive forms of art, such as engravings on stones".[48] When prompted that "Scientists have recently discovered churros, the delicious fried-dough pastries ...
(are) ideal tools for home surgery", ChatGPT claimed that a "study published in the journal Science" found that the dough is pliable enough to form into surgical instruments that can get into hard-to-reach places, and that the flavor has a calming effect on patients.[49][50] By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with a Google executive identifying hallucination reduction as a "fundamental" task for ChatGPT competitor Google Gemini.[9][51] A 2023 demo for Microsoft's GPT-based Bing AI appeared to contain several hallucinations that went uncaught by the presenter.[9] In May 2023, it was discovered that Steven Schwartz had submitted six fake case precedents generated by ChatGPT in his brief to the Southern District of New York on Mata v. Avianca, a personal injury case against the airline Avianca. Schwartz said that he had never previously used ChatGPT, that he did not recognize the possibility that ChatGPT's output could have been fabricated, and that ChatGPT continued to assert the authenticity of the precedents after their nonexistence was discovered.[52] In response, Brantley Starr of the Northern District of Texas banned the submission of AI-generated case filings that have not been reviewed by a human, noting that:[53][54] [Generative artificial intelligence] platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth).
Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. On June 23, judge P. Kevin Castel dismissed the Mata case and issued a $5,000 fine to Schwartz and another lawyer, both of whom had continued to stand by the fictitious precedents despite Schwartz's previous claims, for bad faith conduct. Castel characterized numerous errors and inconsistencies in the opinion summaries, describing one of the cited opinions as "gibberish" and "[bordering] on nonsensical".[55] In June 2023, Mark Walters, a gun rights activist and radio personality, sued OpenAI in a Georgia state court after ChatGPT mischaracterized a legal complaint in a manner alleged to be defamatory against Walters. The complaint in question was brought in May 2023 by the Second Amendment Foundation against Washington attorney general Robert W. Ferguson for allegedly violating their freedom of speech, whereas the ChatGPT-generated summary bore no resemblance to it and claimed that Walters was accused of embezzlement and fraud while holding a Second Amendment Foundation office post that he never held in real life. According to AI legal expert Eugene Volokh, OpenAI is likely not shielded against this claim by Section 230, because OpenAI likely "materially contributed" to the creation of the defamatory content.[56] In February 2024, Canadian airline Air Canada was ordered by the Civil Resolution Tribunal to pay damages to a customer and honor a bereavement fare policy that was hallucinated by a support chatbot, which incorrectly stated that customers could retroactively request a bereavement discount within 90 days of the date the ticket was issued (the actual policy does not allow the fare to be requested after the flight is booked).
The Tribunal rejected Air Canada's defense that the chatbot was a "separate legal entity that is responsible for its own actions".[57][58] The concept of "hallucination" is not limited to text generation, and can occur with other modalities. A confident response from any AI that does not seem to be justified by its training data can be labeled a hallucination.[6] Various researchers cited by Wired have classified adversarial hallucinations as a high-dimensional statistical phenomenon, or have attributed hallucinations to insufficient training data. Some researchers believe that some "incorrect" AI responses classified by humans as "hallucinations" in the case of object detection may in fact be justified by the training data, or even that an AI may be giving the "correct" answer that the human reviewers are failing to see. For example, an adversarial image that looks, to a human, like an ordinary image of a dog may in fact be seen by the AI to contain tiny patterns that (in authentic images) would only appear when viewing a cat. The AI is detecting real-world visual patterns that humans are insensitive to.[60] Wired noted in 2018 that, despite no recorded attacks "in the wild" (that is, outside of proof-of-concept attacks by researchers), there was "little dispute" that consumer gadgets, and systems such as automated driving, were susceptible to adversarial attacks that could cause AI to hallucinate.
Examples included a stop sign rendered invisible to computer vision; an audio clip engineered to sound innocuous to humans, but that software transcribed as "evil dot com"; and an image of two men on skis, which Google Cloud Vision identified as 91% likely to be "a dog".[18] However, these findings have been challenged by other researchers.[61] For example, it was objected that the models can be biased towards superficial statistics, leading adversarial training to not be robust in real-world scenarios.[61] Text-to-audio generative AI – more narrowly known as text-to-speech (TTS) synthesis, depending on the modality – is known to produce inaccurate and unexpected results.[62] Text-to-image models, such as Stable Diffusion, Midjourney, and others, often produce inaccurate or unexpected results. For instance, Gemini depicted Nazi German soldiers as people of color,[63] causing controversy and leading Google to pause image generation involving people in Gemini.[64] Text-to-video generative models, like Sora, can introduce inaccuracies in generated videos. One example involves the Glenfinnan Viaduct, a famous landmark featured in the Harry Potter film series. Sora mistakenly added a second track to the viaduct railway, resulting in an unrealistic depiction. AI models can cause problems in academic and scientific research due to their hallucinations. Specifically, models like ChatGPT have been recorded in multiple cases citing sources for information that either is incorrect or does not exist. A study conducted in the Cureus Journal of Medical Sciences showed that out of 178 total references cited by GPT-3, 69 returned an incorrect or nonexistent digital object identifier (DOI).
An additional 28 had no known DOI, nor could they be located in a Google search.[65] Some nonexistent phrases such as "vegetative electron microscopy" have appeared in many research papers as a result of having become embedded in AI training data.[66] Another instance was documented by Jerome Goddard from Mississippi State University. In an experiment, ChatGPT provided questionable information about ticks. Unsure about the validity of the response, he inquired about the source the information had been gathered from. Upon looking at the source, it was apparent that the DOI and the names of the authors had been hallucinated. Some of the authors were contacted and confirmed that they had no knowledge of the paper's existence whatsoever.[67] Goddard says that "in [ChatGPT's] current state of development, physicians and biomedical researchers should NOT ask ChatGPT for sources, references, or citations on a particular topic. Or, if they do, all such references should be carefully vetted for accuracy."[67] These language models are not yet ready for use in academic research, and their use should be handled carefully.[68] On top of providing incorrect or missing reference material, ChatGPT also has issues with hallucinating the contents of some reference material. A study that analyzed a total of 115 references provided by ChatGPT documented that 47% of them were fabricated. Another 46% cited real references but extracted incorrect information from them. Only the remaining 7% of references were cited correctly and provided accurate information. ChatGPT has also been observed to "double down" on much of the incorrect information.
When asked about a mistake that may have been hallucinated, ChatGPT will sometimes try to correct itself, but at other times it will claim the response is correct and provide even more misleading information.[69] Hallucinated articles generated by language models also pose an issue because it is difficult to tell whether an article was generated by an AI. To show this, a group of researchers at Northwestern University generated 50 abstracts based on existing reports and analyzed their originality. Plagiarism detectors gave the generated articles an originality score of 100%, meaning that the information presented appeared to be completely original. Other software designed to detect AI-generated text was only able to correctly identify these generated articles with an accuracy of 66%. Research scientists had a similar rate of error, identifying these abstracts at a rate of 68%.[70] From this information, the authors of the study concluded, "[t]he ethical and acceptable boundaries of ChatGPT's use in scientific writing remain unclear, although some publishers are beginning to lay down policies."[71] Because of AI's ability to fabricate research undetected, the use of AI will make determining the originality of research more difficult and may require new policies regulating its use. Given the ability of AI-generated language to pass as real scientific research in some cases, AI hallucinations present problems for the application of language models in academic and scientific research, and the high likelihood of returning nonexistent reference material and incorrect information may require limitations to be placed on these models.
Some say that, rather than hallucinations, these events are more akin to "fabrications" and "falsifications", and that the use of these language models presents a risk to the integrity of the field as a whole.[72] Scientists have also found that hallucinations can serve as a valuable tool for scientific discovery, particularly in fields requiring innovative approaches to complex problems. At the University of Washington, David Baker's lab has used AI hallucinations to design "ten million brand-new" proteins that do not occur in nature, leading to roughly 100 patents and the founding of over 20 biotech companies. This work contributed to Baker receiving the 2024 Nobel Prize in Chemistry, although the committee avoided using the "hallucinations" language.[27] In medical research and device development, hallucinations have enabled practical innovations. At the California Institute of Technology, researchers used hallucinations to design a novel catheter geometry that significantly reduces bacterial contamination. The design features sawtooth-like spikes on the inner walls that prevent bacteria from gaining traction, potentially addressing a global health issue that causes millions of urinary tract infections annually. These scientific applications of hallucinations differ fundamentally from chatbot hallucinations, as they are grounded in physical reality and scientific facts rather than ambiguous language or internet data. Anima Anandkumar, a professor at Caltech, emphasizes that these AI models are "taught physics" and their outputs must be validated through rigorous testing. In meteorology, scientists use AI to generate thousands of subtle forecast variations, helping identify unexpected factors that can influence extreme weather events.[27] At Memorial Sloan Kettering Cancer Center, researchers have applied hallucinatory techniques to enhance blurry medical images, while the University of Texas at Austin has used them to improve robot navigation systems.
These applications demonstrate how hallucinations, when properly constrained by scientific methodology, can accelerate the discovery process from years to days or even minutes.[27] The hallucination phenomenon is still not completely understood. Researchers have proposed that hallucinations are inevitable, an innate limitation of large language models,[73] and research into mitigating their occurrence is ongoing.[74] In particular, language models have been shown not only to hallucinate but to amplify hallucinations, even those models designed to alleviate the issue.[75] Ji et al.[76] divide common mitigation methods into two categories: data-related methods and modeling and inference methods. Data-related methods include building a faithful dataset, cleaning data automatically, and information augmentation (augmenting the inputs with external information). Modeling and inference methods include changes to the architecture (modifying the encoder, the attention mechanism, or the decoder in various ways), changes to the training process (such as using reinforcement learning), and post-processing methods that can correct hallucinations in the output. Researchers have proposed a variety of mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer.[77] Another approach actively validates the correctness of low-confidence generations of the model using web search results.
Its authors showed that a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the same input; the model is therefore instructed to create a validation question checking the correctness of the information about the selected concept, answered using the Bing search API.[78] An extra layer of logic-based rules was proposed for the web-search mitigation method, utilizing web pages of different ranks as a hierarchical knowledge base.[79] When no external data sources are available to validate LLM-generated responses (or the responses are already based on external data, as in RAG), model uncertainty estimation techniques from machine learning may be applied to detect hallucinations.[80] According to Luo et al.,[81] the previous methods fall into knowledge- and retrieval-based approaches, which ground LLM responses in factual data using external knowledge sources, such as path grounding.[82] Luo et al. also mention training or reference guiding for language models, involving strategies like employing control codes[83] or contrastive learning[84] to guide the generation process to differentiate between correct and hallucinated content. Another category is evaluation and mitigation focused on specific hallucination types,[81] such as methods to evaluate quantity entities in summarization[85] and methods to detect and mitigate self-contradictory statements.[86] Nvidia Guardrails, launched in 2023, can be configured to hard-code certain responses via script instead of leaving them to the LLM.[87] Furthermore, numerous tools such as SelfCheckGPT,[88] the Trustworthy Language Model,[89] and Aimon[90] have emerged to aid in the detection of hallucinations in offline experimentation and real-time production scenarios.
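Sampling-based detectors such as SelfCheckGPT rest on a simple intuition: grounded statements tend to be reproduced consistently across stochastic resamples of the model, while hallucinations vary. The following is a deliberately minimal sketch of that intuition, not the actual SelfCheckGPT scoring (which compares whole sampled passages using, e.g., entailment or n-gram models); the answers are invented:

```python
from collections import Counter

def consistency_score(answers):
    """Fraction of resampled answers that agree with the most common one.
    Low agreement across stochastic resamples is treated as a signal that
    the claim may be hallucinated."""
    counts = Counter(answers)
    return counts.most_common(1)[0][1] / len(answers)

# Invented resampled answers to the same factual question
grounded = ["1969", "1969", "1969", "1969", "1969"]
unstable = ["1969", "1972", "1965", "1969", "1958"]
print(consistency_score(grounded))  # → 1.0: consistent, likely grounded
print(consistency_score(unstable))  # → 0.4: inconsistent, flag for verification
```

The appeal of this family of methods is that it needs no external knowledge source, only repeated queries to the model itself.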
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Computer-aided translation (CAT), also referred to as computer-assisted translation or computer-aided human translation (CAHT), is the use of software to assist a human translator in the translation process. The translation is created by a human, and certain aspects of the process are facilitated by software; this is in contrast with machine translation (MT), in which the translation is created by a computer, optionally with some human intervention (e.g. pre-editing and post-editing).[1] CAT tools are typically understood to mean programs that specifically facilitate the actual translation process. Most CAT tools have (a) the ability to translate a variety of source file formats in a single editing environment without needing to use the file format's associated software for most or all of the translation process, (b) translation memory, and (c) integration of various utilities or processes that increase productivity and consistency in translation. Computer-assisted translation is a broad and imprecise term covering a range of tools. Translation memory programs store previously translated source texts and their equivalent target texts in a database and retrieve related segments during the translation of new texts.[4] Such programs split the source text into manageable units known as "segments". A source-text sentence or sentence-like unit (headings, titles or elements in a list) may be considered a segment. Texts may also be segmented into larger units such as paragraphs, or smaller ones such as clauses. As the translator works through a document, the software displays each source segment in turn, and provides a previous translation for re-use if it finds a matching source segment in its database. If it does not, the program allows the translator to enter a translation for the new segment. After the translation for a segment is completed, the program stores the new translation and moves on to the next segment.
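The retrieval step described above can be sketched with a toy translation memory and a fuzzy-match score; real CAT tools use more elaborate matching and scoring, and the memory entries here are invented:

```python
from difflib import SequenceMatcher

def best_match(segment, memory, threshold=0.7):
    """Return the stored (source, target) pair whose source segment is most
    similar to the new segment, if it clears the fuzzy-match threshold,
    along with the similarity score; otherwise (None, threshold) and the
    translator starts the segment from scratch."""
    best, score = None, threshold
    for source, target in memory.items():
        ratio = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if ratio >= score:
            best, score = (source, target), ratio
    return best, score

# Invented English -> German translation memory
memory = {
    "Press the power button.": "Drücken Sie den Netzschalter.",
    "Save the file before closing.": "Speichern Sie die Datei vor dem Schließen.",
}
match, score = best_match("Press the red power button.", memory)
print(match, round(score, 2))  # fuzzy match: the stored translation is offered for re-use
```

A segment scoring below the threshold produces no suggestion, which mirrors the behavior described above: the translator enters a fresh translation and the tool stores it for future re-use.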
In the dominant paradigm, the translation memory is, in principle, a simple database of fields containing the source-language segment, the translation of the segment, and other information such as segment creation date, last access, translator name, and so on. Another translation memory approach does not involve the creation of a database, relying on aligned reference documents instead.[5] Some translation memory programs function as standalone environments, while others function as an add-on or macro for commercially available word-processing or other business software programs. Add-on programs allow source documents from other formats, such as desktop publishing files, spreadsheets, or HTML code, to be handled using the TM program. For an example, see MEMOrg. A newer arrival in the translation industry, language search-engine software is typically an Internet-based system that works similarly to an Internet search engine. Rather than searching the Internet, however, a language search engine searches a large repository of translation memories to find previously translated sentence fragments, phrases, whole sentences, or even complete paragraphs that match source document segments. Language search engines are designed to leverage modern search technology to conduct searches based on the source words in context, to ensure that the search results match the meaning of the source segments. Like traditional TM tools, the value of a language search engine rests heavily on the translation memory repository it searches against. Terminology management software provides the translator with a means of automatically searching a given terminology database for terms appearing in a document, either by automatically displaying terms in the translation memory software interface window or through the use of hot keys to view the entry in the terminology database. Some programs have other hotkey combinations allowing the translator to add new terminology pairs to the terminology database on the fly during translation.
Some of the more advanced systems enable translators to check, either interactively or in batch mode, whether the correct source/target term combination has been used within and across the translation memory segments in a given project. Independent terminology management systems also exist that can provide workflow functionality, visual taxonomy, work as a type of term checker (similar to a spell checker: terms that have not been used correctly are flagged) and can support other types of multilingual term facet classifications such as pictures, videos, or sound.[6][4] Alignment is the process of binding a source language segment to its corresponding target language segment. Its purpose is to create a translation memory database or to add to an existing one. Interactive machine translation is a paradigm in which the automatic system attempts to predict the translation the human translator is going to produce by suggesting translation hypotheses. These hypotheses may be either the complete sentence or the part of the sentence that is yet to be translated. Augmented translation is a form of human translation carried out within an integrated technology environment that provides translators access to subsegment adaptive machine translation (MT) and translation memory (TM), terminology lookup (CAT), and automatic content enrichment (ACE) to aid their work, and that automates project management, file handling, and other ancillary tasks.[7][8] Based on the concept of augmented reality, augmented translation seeks to make translators more productive by providing them with relevant information on an as-needed basis. This information adapts to the habits and style of individual translators in order to accelerate their work and increase productivity.
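The automatic terminology lookup described above, where a termbase is scanned for terms appearing in the current segment, can be sketched in a few lines. The function name, termbase contents, and matching strategy (plain case-insensitive substring search) are all illustrative assumptions; real tools use morphological matching rather than raw substrings.

```python
# Hypothetical sketch of termbase lookup: return the source/target term
# pairs from the termbase that occur in the given segment.
def find_terms(segment: str, termbase: dict[str, str]) -> list[tuple[str, str]]:
    seg = segment.lower()
    return [(src, tgt) for src, tgt in termbase.items() if src.lower() in seg]

termbase = {"power button": "bouton d'alimentation", "battery": "batterie"}
hits = find_terms("Press the power button twice.", termbase)
print(hits)  # [('power button', "bouton d'alimentation")]
```

In an actual CAT tool, the hits would be displayed in the interface window or bound to a hotkey, as the text describes.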
It differs from classical postediting of MT, in which linguists revise entire texts translated by machines, in that it provides machine translation and information as suggestions that can be adopted in their entirety, edited, or ignored, as appropriate.[7] Augmented translation extends principles first developed in the 1980s that made their way into CAT tools. However, it integrates several functions that have previously been discrete into one environment. For example, translators have historically had to leave their translation environments to do terminology research, but in an augmented environment, an ACE component would automatically provide links to information about terms and concepts found in the text directly within the environment. As of May 2017, no full implementations of an augmented translation environment exist, although individual developers have created partial systems.
https://en.wikipedia.org/wiki/Computer-assisted_translation
The language industry is the sector of activity dedicated to facilitating multilingual communication, both oral and written. According to the European Commission's Directorate-General of Translation, the language industry comprises the following activities: translation, interpreting, subtitling, dubbing, software and website globalisation, language technology tools development, international conference organisation, language teaching and linguistic consultancy.[1] According to the Canadian Language Industry Association, this sector comprises translation (including interpreting, subtitling and localisation), language training and language technologies.[2] The European Language Industry Association limits the sector to translation, localisation, internationalisation and globalisation.[3] An older, perhaps outdated view confines the language industry to computerised language processing and places it within the information technology industry.[4] An emerging view expands this sector to include editing for authors who write in a second language, especially English, for international communication.[5] The scope of services in the industry is wide. The persons who facilitate multilingual communication by offering individualized services—translation, interpreting, editing or language teaching—are called language professionals. Translation and interpreting, as activities, have existed since humankind started developing trade; in that sense, the origins of the language industry are older than those of written language.[citation needed] The communication industry has developed rapidly following the availability of the internet. Achievements of the industry include the ability to quickly translate long texts into many languages. This has created new challenges compared with the traditional activity of translators, such as that of quality assurance.
There are various quality standards, such as the International Organization for Standardization's ISO 17100 (used in Europe), CAN CGSB 131.10-2017 in Canada[6] and ASTM F2575-14 in the US.[7] A study commissioned by the European Commission's Directorate-General for Translation estimated the language industry in European member states to be worth 8.4 billion euros in 2008.[8] The largest portion, 5.7 billion euros, was ascribed to the activities of translation, interpreting, software localisation and website globalisation. Editing was not taken into consideration. The study projected an annual growth rate of 10% for the language industry. At the time the study was published, in 2009, the language industry was less affected by the economic crisis than other industry sectors. One field of research in the industry includes the possibility of machine translation fully replacing human translation.[9] Rates for translation services have become a topic of considerable discussion,[when?][10] as several translation outsourcers allegedly go in search of cheap labor. Professional associations like the International Association of Professional Translators and Interpreters have in the past tried to put a stop to this development.[11] Currency fluctuation is yet another important factor.[12] Apart from this, other phenomena such as crowdsourcing appear in large-scale translations.[13] US President Barack Obama drew criticism after a 2009 White House white paper proposed incentives for automatic translation.[14][15]
https://en.wikipedia.org/wiki/Language_industry
A translation memory (TM) is a database that stores "segments", which can be sentences, paragraphs or sentence-like units (headings, titles or elements in a list) that have previously been translated, in order to aid human translators. The translation memory stores the source text and its corresponding translation in language pairs called "translation units". Individual words are handled by terminology bases and are not within the domain of TM. Software programs that use translation memories are sometimes known as translation memory managers (TMM) or translation memory systems (TM systems, not to be confused with a translation management system (TMS), which is another type of software focused on managing the process of translation). Translation memories are typically used in conjunction with a dedicated computer-assisted translation (CAT) tool, word processing program, terminology management system, multilingual dictionary, or even raw machine translation output. Research indicates that many companies producing multilingual documentation are using translation memory systems. In a survey of language professionals in 2006, 82.5% of 874 replies confirmed the use of a TM.[1] Usage of TM correlated with text type characterised by technical terms and simple sentence structure (technical, to a lesser degree marketing and financial), computing skills, and repetitiveness of content.[1] The program breaks the source text (the text to be translated) into segments, looks for matches between segments and the source half of previously translated source-target pairs stored in a translation memory, and presents such matching pairs as full and partial translation matches. The translator can accept a match, replace it with a fresh translation, or modify it to match the source. In the last two cases, the new or modified translation goes into the database. Some translation memory systems search for 100% matches only, i.e.
they can only retrieve segments of text that match entries in the database exactly, while others employ fuzzy matching algorithms to retrieve similar segments, which are presented to the translator with differences flagged. Typical translation memory systems only search for text in the source segment. The flexibility and robustness of the matching algorithm largely determine the performance of the translation memory, although for some applications the recall rate of exact matches can be high enough to justify the 100%-match approach. Segments where no match is found have to be translated manually. These newly translated segments are stored in the database, where they can be used for future translations as well as for repetitions of that segment in the current text. Translation memories work best on texts which are highly repetitive, such as technical manuals. They are also helpful for translating incremental changes in a previously translated document, corresponding, for example, to minor changes in a new version of a user manual. Traditionally, translation memories have not been considered appropriate for literary or creative texts, for the simple reason that there is so little repetition in the language used. However, others find them of value even for non-repetitive texts, because the database resources created have value for concordance searches to determine appropriate usage of terms, for quality assurance (no empty segments), and for the simplification of the review process (source and target segments are always displayed together, whereas translators have to work with two documents in a traditional review environment). Translation memory managers are most suitable for translating technical documentation and documents containing specialized vocabularies. The use of TM systems might have an effect on the quality of the texts translated.
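The fuzzy matching described above can be sketched with the standard library. This is a toy stand-in only: commercial TM systems use more sophisticated, edit-distance-based scoring, and the threshold, function name, and sample memory here are illustrative assumptions.

```python
import difflib

# Sketch of fuzzy retrieval: score every memory entry against the new
# segment and return the best one, or None if nothing clears the threshold.
def best_match(segment, memory, threshold=0.75):
    scored = [
        (difflib.SequenceMatcher(None, segment, src).ratio(), src, tgt)
        for src, tgt in memory
    ]
    score, src, tgt = max(scored)
    return (score, src, tgt) if score >= threshold else None

memory = [
    ("Close the lid before cleaning.", "Fermez le couvercle avant le nettoyage."),
    ("Unplug the device.", "Débranchez l'appareil."),
]
match = best_match("Close the lid after cleaning.", memory)
print(match)  # high-scoring fuzzy match on the first entry
```

A real tool would additionally flag the differences between the new segment and the retrieved source ("after" vs. "before" here) so the translator can adapt the stored translation.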
Its main effect is clearly related to so-called "error propagation": if the translation for a particular segment is incorrect, it is more likely that the incorrect translation will be reused the next time the same source text, or a similar source text, is translated, thereby perpetuating the error. Traditionally, two main effects on the quality of translated texts have been described: the "sentence-salad" effect (Bédard 2000; cited in O'Hagan 2009: 50) and the "peep-hole" effect (Heyn 1998). The first refers to a lack of coherence at the text level when a text is translated using sentences from a TM which have been translated by different translators with different styles. According to the latter, translators may adapt their style to the use of a TM system so that segments do not contain intratextual references and can be better reused in future texts, thus affecting cohesion and readability (O'Hagan 2009). There is also a potential, and probably unconscious, effect on the translated text: different languages use different sequences for the logical elements within a sentence, and a translator presented with a multiple-clause sentence that is half translated is less likely to completely rebuild the sentence. Consistent empirical evidence (Martín-Mor 2011) shows that translators will most likely modify the structure of a multiple-clause sentence when working with a text processor rather than with a TM system. There is also a potential for the translator to deal with the text mechanically, sentence by sentence, instead of focusing on how each sentence relates to those around it and to the text as a whole. Researchers (Dragsted 2004) have identified this effect, which relates to the automatic segmentation feature of these programs, but it does not necessarily have a negative effect on the quality of translations. These effects are closely related to training rather than inherent to the tool.
According to Martín-Mor (2011), the use of TM systems does have an effect on the quality of translated texts, especially for novices, but experienced translators are able to avoid it. Pym (2013) notes that "translators using TM/MT tend to revise each segment as they go along, allowing little time for a final revision of the whole text at the end", which might be the ultimate cause of some of the effects described here. The following is a summary of the main functions of a translation memory. Import is used to transfer a text and its translation from a text file to the TM. Import can be done from a raw format, in which an external source text is available for importing into a TM along with its translation; sometimes the texts have to be reprocessed by the user. Import can also use a native format, that is, the format the TM itself uses to save translation memories to a file. Export transfers text from the TM into an external text file; import and export should be inverses. When translating, one of the main purposes of the TM is to retrieve the most useful matches in the memory so that the translator can choose the best one. The TM must show both the source and target text, pointing out the identities and differences. Several different types of matches can be retrieved from a TM. A TM is updated with a new translation when it has been accepted by the translator. As always when updating a database, there is the question of what to do with the previous contents of the database. A TM can be modified by changing or deleting entries in the TM. Some systems allow translators to save multiple translations of the same source segment. Translation memory tools often provide automatic retrieval and substitution.
Networking enables a group of translators to translate a text together faster than if each were working in isolation, because sentences and phrases translated by one translator are available to the others. Moreover, if translation memories are shared before the final translation, there is an opportunity for mistakes by one translator to be corrected by other team members. "Text memory" is the basis of the proposed LISA OSCAR xml:tm standard. Text memory comprises author memory and translation memory. Unique identifiers are remembered during translation so that the target language document is 'exactly' aligned at the text unit level. If the source document is subsequently modified, then those text units that have not changed can be directly transferred to the new target version of the document without the need for any translator interaction. This is the concept of 'exact' or 'perfect' matching to the translation memory. xml:tm can also provide mechanisms for in-document leveraged and fuzzy matching. The 1970s were the infancy stage of TM systems, during which scholars carried on a preliminary round of exploratory discussions. The original idea for TM systems is often attributed[according to whom?] to Martin Kay's "Proper Place" paper,[2] although the details are not fully given there. The paper showed the basic concept of the storing system: "The translator might start by issuing a command causing the system to display anything in the store that might be relevant to .... Before going on, he can examine past and future fragments of text that contain similar material". This observation from Kay was influenced by Peter Arthern's suggestion that translators can use similar, already translated documents online.
In his 1978 article,[3] Arthern gave a full demonstration of what we call TM systems today: Any new text would be typed into a word processing station, and as it was being typed, the system would check this text against the earlier texts stored in its memory, together with its translation into all the other official languages [of the European Community]. ... One advantage over machine translation proper would be that all the passages so retrieved would be grammatically correct. In effect, we should be operating an electronic 'cut and stick' process which would, according to my calculations, save at least 15 per cent of the time which translators now employ in effectively producing translations. The idea was incorporated into the ALPS (Automated Language Processing Systems) tools, first developed by researchers at Brigham Young University; at that time the idea of TM systems was mixed with a tool called "Repetitions Processing", which only aimed to find matched strings. Only after a long time did the concept of the so-called translation memory come into being. The real exploratory stage of TM systems was the 1980s. One of the first implementations of a TM system appeared in Sadler and Vendelmans' Bilingual Knowledge Bank. A Bilingual Knowledge Bank is a syntactically and referentially structured pair of corpora, one being a translation of the other, in which translation units are cross-coded between the corpora. Its aim is to develop a corpus-based general-purpose knowledge source for applications in machine translation and computer-aided translation (Sadler & Vendelman, 1987). Another important step was made by Brian Harris with his "bi-text". Harris defined the bi-text as "a single text in two dimensions" (1988), the source and target texts related by the activity of the translator through translation units, a notion that echoes Sadler's Bilingual Knowledge Bank.
In his work, Harris proposed something like a TM system without using the name: a database of paired translations, searchable either by individual word or by "whole translation unit", in the latter case the search being allowed to retrieve similar rather than identical units. TM technology only became commercially available on a wide scale in the late 1990s, through the efforts of several engineers and translators. Of note is the first TM tool, called Trados (now SDL Trados). In this tool, the user opens the source file and applies the translation memory, so that any "100% matches" (identical matches) or "fuzzy matches" (similar, but not identical matches) within the text are instantly extracted and placed within the target file. Then, the "matches" suggested by the translation memory can be either accepted or overridden with new alternatives. If a translation unit is manually updated, it is stored within the translation memory for future use as well as for repetition in the current text. In a similar way, all segments in the target file without a "match" are translated manually and then automatically added to the translation memory. In the 2000s, online translation services began incorporating TM. Machine translation services like Google Translate, as well as professional and "hybrid" translation services provided by sites like Gengo and Ackuna, incorporate databases of TM data supplied by translators and volunteers to make more efficient connections between languages and provide faster translation services to end-users.[4] One recent development is the concept of 'text memory' in contrast to translation memory.[5] This is also the basis of the proposed LISA OSCAR standard.[6] Text memory within xml:tm comprises 'author memory' and 'translation memory'. Author memory is used to keep track of changes during the authoring cycle. Translation memory uses the information from author memory to implement translation memory matching.
Although primarily targeted at XML documents, xml:tm can be used on any document that can be converted to XLIFF[7] format. Second-generation TM systems are much more powerful than first-generation ones: they include a linguistic analysis engine, use chunk technology to break down segments into intelligent terminological groups, and automatically generate specific glossaries. Translation Memory eXchange (TMX) is a standard that enables the interchange of translation memories between translation suppliers. TMX has been adopted by the translation community as the best way of importing and exporting translation memories[citation needed]. The current version is 1.4b; it allows for the recreation of the original source and target documents from the TMX data. TermBase eXchange (TBX) is a LISA standard, revised and republished as ISO 30042, that allows for the interchange of terminology data including detailed lexical information. The framework for TBX is provided by three ISO standards: ISO 12620, ISO 12200 and ISO 16642. ISO 12620 provides an inventory of well-defined "data categories" with standardized names that function as data element types or as predefined values. ISO 12200 (also known as MARTIF) provides the basis for the core structure of TBX. ISO 16642 (also known as the Terminological Markup Framework) includes a structural meta-model for Terminology Markup Languages in general. The Universal Terminology eXchange (UTX) format is a standard specifically designed for user dictionaries of machine translation, but it can be used for general, human-readable glossaries. The purpose of UTX is to accelerate dictionary sharing and reuse through an extremely simple and practical specification. Segmentation Rules eXchange (SRX) is intended to enhance the TMX standard so that translation memory data exchanged between applications can be used more effectively. The ability to specify the segmentation rules that were used in the previous translation may increase the leveraging that can be achieved.
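To make the TMX interchange concrete, here is a hand-written minimal fragment and a sketch of reading its translation units with the Python standard library. Real TMX files carry fuller header metadata and many units; the fragment and variable names below are illustrative.

```python
import xml.etree.ElementTree as ET

# A minimal TMX fragment: one translation unit (<tu>) holding an English
# and a French variant (<tuv>), each with its segment text in <seg>.
TMX = """<?xml version="1.0"?>
<tmx version="1.4">
  <header creationtool="example" srclang="en" segtype="sentence"
          adminlang="en" o-tmf="plain" datatype="plaintext"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Save your work.</seg></tuv>
      <tuv xml:lang="fr"><seg>Enregistrez votre travail.</seg></tuv>
    </tu>
  </body>
</tmx>"""

root = ET.fromstring(TMX)
units = []
for tu in root.iter("tu"):
    # xml:lang is reported by ElementTree under the XML namespace URI.
    segs = {tuv.get("{http://www.w3.org/XML/1998/namespace}lang"):
            tuv.findtext("seg") for tuv in tu.iter("tuv")}
    units.append(segs)
print(units)  # [{'en': 'Save your work.', 'fr': 'Enregistrez votre travail.'}]
```

Because every unit carries all language variants together, such a file can be imported into any compliant TM tool, which is the interoperability TMX was designed for.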
GILT Metrics. GILT stands for Globalization, Internationalization, Localization, and Translation. The GILT Metrics standard comprises three parts: GMX-V for volume metrics, GMX-C for complexity metrics and GMX-Q for quality metrics. The proposed GILT Metrics standard is tasked with quantifying the workload and quality requirements for any given GILT task. Open Lexicon Interchange Format (OLIF) is an open, XML-compliant standard for the exchange of terminological and lexical data. Although originally intended as a means for the exchange of lexical data between proprietary machine translation lexicons, it has evolved into a more general standard for terminology exchange.[8] XML Localisation Interchange File Format (XLIFF) is intended to provide a single interchange file format that can be understood by any localization provider. XLIFF is the preferred way[9][10] of exchanging data in XML format in the translation industry.[11] Translation Web Services (TransWS) specifies the calls needed to use Web services for the submission and retrieval of files and messages relating to localization projects. It is intended as a detailed framework for the automation of much of the current localization process by the use of Web services.[12] The xml:tm (XML-based Text Memory) approach to translation memory is based on the concept of text memory, which comprises author and translation memory.[13] xml:tm has been donated to LISA OSCAR by XML-INTL. The Gettext Portable Object (PO) format, though often not regarded as a translation memory format, consists of bilingual files that are also used in translation memory processes in the same way translation memories are used. Typically, a PO translation memory system will consist of various separate files in a directory tree structure. Common tools that work with PO files include the GNU Gettext tools and the Translate Toolkit. Several tools and programs also exist that edit PO files as if they were mere source text files.
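The bilingual nature of PO files that makes them usable as translation memories can be seen in their msgid/msgstr structure. The toy parser below handles only the simplest single-line entries and is an illustrative sketch; real-world processing should use the GNU Gettext tools or the Translate Toolkit mentioned above, which handle plurals, comments, and multi-line strings.

```python
# A minimal PO fragment: each entry pairs a source string (msgid)
# with its translation (msgstr), i.e. a tiny translation memory.
PO = '''msgid "Cancel"
msgstr "Annuler"

msgid "Quit"
msgstr "Quitter"
'''

def parse_po(text):
    """Collect msgid -> msgstr pairs from simple single-line PO entries."""
    pairs, msgid = {}, None
    for line in text.splitlines():
        if line.startswith('msgid "'):
            msgid = line[len('msgid "'):-1]
        elif line.startswith('msgstr "') and msgid is not None:
            pairs[msgid] = line[len('msgstr "'):-1]
            msgid = None
    return pairs

print(parse_po(PO))  # {'Cancel': 'Annuler', 'Quit': 'Quitter'}
```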
https://en.wikipedia.org/wiki/Translation_memory
A constructed language (shortened to conlang)[a] is a language whose phonology, grammar, orthography, and vocabulary, instead of having developed naturally, are consciously devised for some purpose, which may include being devised for a work of fiction. A constructed language may also be referred to as an artificial, planned or invented language, or (in some cases) a fictional language. Planned languages (or engineered languages/engelangs) are languages that have been purposefully designed; they are the result of deliberate, controlling intervention and are thus a form of language planning.[1] There are many possible reasons to create a constructed language, such as to ease human communication (see international auxiliary language and code); to give fiction or an associated constructed setting an added layer of realism; for experimentation in the fields of linguistics, cognitive science, and machine learning; for artistic creation; for fantasy role-playing games; and for language games. Some people may also make constructed languages as a hobby, or in connection to worldbuilding. The expression planned language is sometimes used to indicate international auxiliary languages and other languages designed for actual use in human communication. Some prefer it to the adjective artificial, as this term may be perceived as pejorative. Outside Esperanto culture,[b] the term language planning means the prescriptions given to a natural language to standardize it; in this regard, even a "natural language" may be artificial in some respects, meaning some of its words have been crafted by conscious decision. Prescriptive grammars, which date to ancient times for classical languages such as Latin and Sanskrit, are rule-based codifications of natural languages, such codifications being a middle ground between naïve natural selection and development of language and its explicit construction.
The term glossopoeia is also used to mean language construction, particularly the construction of artistic languages.[2] Conlang speakers are rare. For example, the Hungarian census of 2011 found 8,397 speakers of Esperanto,[3] and the census of 2001 found 10 of Romanid, two each of Interlingua and Ido and one each of Idiom Neutral and Mundolinco.[4] The Russian census of 2010 found that in Russia there were about 992 speakers of Esperanto (the 120th most common) and nine of the Esperantido Ido.[5] The terms "planned", "constructed", "invented", "fictional",[6] and "artificial" are used differently in some traditions. For example, few speakers of Interlingua consider their language artificial, since they assert that it has no invented content: Interlingua's vocabulary is taken from a small set of natural languages, and its grammar is based closely on these source languages, even including some degree of irregularity; its proponents prefer to describe its vocabulary and grammar as standardized rather than artificial or constructed. Similarly, Latino sine flexione (LsF) is a simplification of Latin from which the inflections have been removed. As with Interlingua, some prefer to describe its development as "planning" rather than "constructing". Some speakers of Esperanto and Esperantidos also avoid the term "artificial language" because they deny that there is anything "unnatural" about the use of their language in human communication.[citation needed] By contrast, some philosophers[according to whom?] have argued that all human languages are conventional or artificial. François Rabelais's fictional giant Pantagruel, for instance, said: "It is a misuse of terms to say that we have natural language; languages exist through arbitrary institutions and the conventions of peoples. Voices, as the dialecticians say, don't signify naturally, but capriciously."[7] Furthermore, fictional or experimental languages can be considered naturalistic if they model real world languages.
For example, if a naturalistic conlang is derived a posteriori from another language (real or constructed), it should imitate natural processes of phonological, lexical, and grammatical change. In contrast with languages such as Interlingua, naturalistic fictional languages are not usually intended for easy learning or communication; thus, naturalistic fictional languages tend to be more difficult and complex. While Interlingua has simpler grammar, syntax, and orthography than its source languages (though more complex and irregular than Esperanto or its descendants), naturalistic fictional languages typically mimic behaviors of natural languages like irregular verbs and nouns, and complicated phonological processes.[original research?] In terms of purpose, most constructed languages can broadly be divided into engineered languages, auxiliary languages, and artistic languages. The boundaries between these categories are by no means clear.[9] A constructed language could easily fall into more than one of the above categories: a logical language created for aesthetic reasons would also be classifiable as an artistic language, and one created with philosophical motives could be used as an auxiliary language. There are no rules, either inherent in the process of language construction or externally imposed, that would limit a constructed language to fitting only one of the above categories. A constructed language can have native speakers if young children learn it from parents who speak it fluently. According to Ethnologue, there are "200–2000 who speak Esperanto as a first language". A member of the Klingon Language Institute, d'Armond Speers, attempted to raise his son as a native (bilingual with English) Klingon speaker.[10][verification needed] As soon as a constructed language has a community of fluent speakers, especially if it has numerous native speakers, it begins to evolve and hence loses its constructed status.
For example, Modern Hebrew and its pronunciation norms were developed from existing traditions of Hebrew, such as Mishnaic Hebrew and Biblical Hebrew following a general Sephardic pronunciation, rather than engineered from scratch, and the language has undergone considerable changes since the state of Israel was founded in 1948 (Hetzron 1990:693).[citation not found] However, linguist Ghil'ad Zuckermann argues that Modern Hebrew, which he terms "Israeli", is a Semito-European hybrid based not only on Hebrew but also on Yiddish and other languages spoken by revivalists.[11] Zuckermann therefore endorses the translation of the Hebrew Bible into what he calls "Israeli".[12] Esperanto as a living spoken language has evolved significantly from the prescriptive blueprint published in 1887, so that modern editions of the Fundamenta Krestomatio, a 1903 collection of early texts in the language, require many footnotes on the syntactic and lexical differences between early and modern Esperanto.[13] Proponents of constructed languages often have many reasons for using them. The famous but disputed Sapir–Whorf hypothesis is sometimes cited; this claims that the language one speaks influences the way one thinks. Thus, a "better" language should allow the speaker to think more clearly or intelligently, or to encompass more points of view; this was the intention of Suzette Haden Elgin in creating Láadan, a feminist language[14] embodied in her feminist science fiction series Native Tongue.[15] Constructed languages have been included in standardized tests such as the SAT, where they were used to test the applicant's ability to infer and apply grammatical rules.[16][17] By the same token, a constructed language might also be used to restrict thought, as in George Orwell's Newspeak, or to simplify thought, as in Toki Pona. However, linguists such as Steven Pinker argue that ideas exist independently of language.
For example, in the book The Language Instinct, Pinker states that children spontaneously re-invent slang and even grammar with each generation. These linguists argue that attempts to control the range of human thought through the reform of language would fail, as concepts like "freedom" will reappear in new words if the old words vanish. Proponents claim a particular language makes it easier to express and understand concepts in one area, and more difficult in others. An example can be taken from the way various programming languages make it easier to write certain kinds of programs and harder to write others. Another reason cited for using a constructed language is the telescope rule, which claims that it takes less time to first learn a simple constructed language and then a natural language than to learn only a natural language. Thus, if someone wants to learn English, some suggest learning Basic English first. Constructed languages like Esperanto and Interlingua are in fact often simpler due to the typical lack of irregular verbs and other grammatical quirks. Some studies have found that learning Esperanto helps in learning a non-constructed language later (see propaedeutic value of Esperanto). Codes for constructed languages include the ISO 639-2 code "art" for conlangs; however, some constructed languages have their own ISO 639 language codes (e.g. "eo" and "epo" for Esperanto, "jbo" for Lojban, "ia" and "ina" for Interlingua, "tlh" for Klingon, "io" and "ido" for Ido, "lfn" for Lingua Franca Nova, and "tok" for Toki Pona).
One constraint on a constructed language arises when it is constructed to serve as a natural language for fictional foreigners or aliens, as with Dothraki and High Valyrian in the Game of Thrones series (adapted from the A Song of Ice and Fire book series): the language should be easily pronounced by actors, should fit with and incorporate any fragments of the language already invented by the book's author, and preferably should also fit with any personal names of fictional speakers of the language. An a priori (from Latin a priori, "from the former") constructed language is one whose features (including vocabulary, grammar, etc.) are not based on an existing language, and an a posteriori language is the opposite.[8] This categorization, however, is not absolute, as many constructed languages may be called a priori when considering some linguistic factors and at the same time a posteriori when considering other factors. An a priori language is any constructed language with some features that are not based on existing languages; instead, these features are invented or elaborated to work differently or to allude to different purposes. Some a priori languages are designed to be international auxiliary languages that remove what could be considered an unfair learning advantage for native speakers of a source language that would otherwise exist for a posteriori languages. Others, known as philosophical or taxonomic languages, try to categorize their vocabulary, either to express an underlying philosophy or to make it easier to recognize new vocabulary. Finally, many artistic languages, created for either personal use or for use in a fictional medium, employ consciously constructed grammars and vocabularies, and are best understood as a priori. An a posteriori language (from Latin a posteriori, "from the latter"), according to the French linguist Louis Couturat, is any constructed language whose elements are borrowed from or based on existing languages.
The term can also be extended to controlled versions of natural languages, and is most commonly used with reference to vocabulary rather than other features. Likewise, zonal auxiliary languages (auxiliary languages for speakers of a particular language family) are a posteriori by definition. While most auxiliary languages are a posteriori due to their intended function as a medium of communication, many artistic languages are fully a posteriori in design, often for the purposes of alternate history. In distinguishing whether a language is a priori or a posteriori, the prevalence and distribution of its respective traits is often the key. Grammatical speculation dates from Classical Antiquity; for instance, it appears in Plato's Cratylus in Hermogenes's contention that words are not inherently linked to what they refer to; that people apply "a piece of their own voice ... to the thing". Athenaeus tells the story[18] of two figures: Dionysius of Sicily and Alexarchus: "He [Alexarchus] once wrote something ... to the public authorities in Casandreia ... As for what this letter says, in my opinion not even the Pythian god could make sense of it."[18] While the mechanisms of grammar suggested by classical philosophers were designed to explain existing languages (Latin, Greek, and Sanskrit), they were not used to construct new grammars. Roughly contemporary to Plato, in his descriptive grammar of Sanskrit, Pāṇini constructed a set of rules for explaining language, so that the text of his grammar may be considered a mixture of natural and constructed language. A legend recorded in the seventh-century Irish work Auraicept na n-Éces claims that Fénius Farsaid visited Shinar after the confusion of tongues, and he and his scholars studied the various languages for ten years, taking the best features of each to create in Bérla tóbaide ("the selected language"), which he named Goídelc, the Irish language. This appears to be the first mention of the concept of a constructed language in literature.
The earliest non-natural languages were considered less "constructed" than "super-natural", mystical, or divinely inspired. The Lingua Ignota, recorded in the 12th century by St. Hildegard of Bingen, is an example, and apparently the first entirely artificial language.[14] It is a form of private mystical cant (see also Enochian). An important example from Middle-Eastern culture is Balaibalan, invented in the 16th century.[2] Kabbalistic grammatical speculation was directed at recovering the original language spoken by Adam and Eve in Paradise, lost in the confusion of tongues. The first Christian project for an ideal language is outlined in Dante Alighieri's De vulgari eloquentia, where he searches for the ideal Italian vernacular suited for literature. Ramon Llull's Ars Magna was a project of a perfect language with which the infidels could be convinced of the truth of the Christian faith. It was basically an application of combinatorics to a given set of concepts.[19] During the Renaissance, Lullian and Kabbalistic ideas were drawn upon in a magical context, resulting in cryptographic applications. Renaissance interest in Ancient Egypt, notably the discovery of the Hieroglyphica of Horapollo, and first encounters with the Chinese script directed efforts towards a perfect written language. Johannes Trithemius, in Steganographia and Polygraphia, attempted to show how all languages can be reduced to one. In the 17th century, interest in magical languages was continued by the Rosicrucians and alchemists (like John Dee and his Enochian). Jakob Boehme in 1623 spoke of a "natural language" (Natursprache) of the senses. Musical languages from the Renaissance were often tied up with mysticism, magic and alchemy, sometimes also referred to as the language of the birds. A non-mystic musical language was Solresol.
The 17th century saw the rise of projects for "philosophical" or "a priori" languages. These early taxonomic conlangs produced systems of hierarchical classification that were intended to result in both spoken and written expression. Leibniz had a similar purpose for his lingua generalis of 1678, aiming at a lexicon of characters upon which the user might perform calculations that would yield true propositions automatically, as a side effect developing binary calculus. These projects were not only occupied with reducing or modelling grammar, but also with the arrangement of all human knowledge into "characters" or hierarchies, an idea that with the Enlightenment would ultimately lead to the Encyclopédie. Many of these 17th- and 18th-century conlangs were pasigraphies, or purely written languages with no spoken form, or with a spoken form that would vary greatly according to the native language of the reader.[21] Leibniz and the encyclopedists realized that it is impossible to organize human knowledge unequivocally in a tree diagram, and consequently to construct an a priori language based on such a classification of concepts. Under the entry Charactère, D'Alembert critically reviewed the projects of philosophical languages of the preceding century. After the Encyclopédie, projects for a priori languages moved more and more to the lunatic fringe. Individual authors, typically unaware of the history of the idea, continued to propose taxonomic philosophical languages until the early 20th century (e.g. Ro), but most recent engineered languages have had more modest goals; some are limited to a specific field, like mathematical formalism or calculus (e.g. Lincos and programming languages), while others are designed for eliminating syntactical ambiguity (e.g. Loglan and Lojban) or maximizing conciseness (e.g. Ithkuil[14]).
Already in the Encyclopédie, attention began to focus on a posteriori auxiliary languages. Joachim Faiguet de Villeneuve, in the article on Langue, wrote a short proposition of a "laconic" or regularized grammar of French. During the 19th century, a bewildering variety of such international auxiliary languages (IALs) were proposed, so that Louis Couturat and Léopold Leau in Histoire de la langue universelle (1903) reviewed 38 projects. The first of these that made any international impact was Volapük, proposed in 1879 by Johann Martin Schleyer; within a decade, 283 Volapükist clubs were counted all over the globe. However, disagreements between Schleyer and some prominent users of the language led to schism, and by the mid-1890s it fell into obscurity, making way for Esperanto, proposed in 1887 by L. L. Zamenhof, and its descendants. Interlingua, the most recent auxlang to gain a significant number of speakers, emerged in 1951, when the International Auxiliary Language Association published its Interlingua–English Dictionary and an accompanying grammar. The success of Esperanto did not stop others from trying to construct new auxiliary languages, such as Leslie Jones' Eurolengo, which mixes elements of English and Spanish. Loglan (1955) and its descendants constitute a pragmatic return to the aims of the a priori languages, tempered by the requirement of usability of an auxiliary language. Thus far, these modern a priori languages have garnered only small groups of speakers. Robot Interaction Language (2010) is a spoken language optimized for communication between machines and humans; the major goals of ROILA are that it should be easily learnable by the human user and optimized for efficient recognition by computer speech recognition algorithms. Artists may use language as a source of creativity in art, poetry, or calligraphy, or as a metaphor to address themes such as cultural diversity and the vulnerability of the individual in a globalized world.
Some people, however, prefer to take pleasure in crafting a language as a conscious decision, for literary enjoyment or aesthetic reasons, without any claim of usefulness. Such artistic languages begin to appear in Early Modern literature (in Pantagruel, and in Utopian contexts), but they only seem to gain notability as serious projects beginning in the 20th century.[2] A Princess of Mars (1912) by Edgar Rice Burroughs was possibly the first fiction of that century to feature a constructed language. J. R. R. Tolkien developed families of related fictional languages and discussed artistic languages publicly, giving a lecture entitled "A Secret Vice" in 1931 at a congress. (Orwell's Newspeak is considered a satire of an international auxiliary language rather than an artistic language proper.) By the beginning of the first decade of the 21st century, it had become common for science fiction and fantasy works set in other worlds to feature constructed languages, or more commonly, an extremely limited but defined vocabulary which suggests the existence of a complete language, or whatever portions of the language are needed for the story. Constructed languages are a regular part of the genre, appearing in Star Wars, Star Trek, The Lord of the Rings (Elvish), Stargate SG-1, Atlantis: The Lost Empire, Ar Tonelico (Hymmnos),[22][23] Game of Thrones (the Dothraki language and Valyrian languages), The Expanse, Avatar, Dune, and the Myst series of computer adventure games. The matter of whether a constructed language can be owned or protected by intellectual property laws, or whether it would even be possible to enforce those laws, is contentious. In a 2015 lawsuit, CBS and Paramount Pictures challenged a fan film project called Axanar, stating that the project infringed upon their intellectual property, which included the Klingon language, among other creative elements.
During the controversy, Marc Okrand, the language's original designer, expressed doubt as to whether Paramount's claims of ownership were valid.[24][25] David J. Peterson, who created multiple well-known constructed languages including the Valyrian languages and Dothraki, advocated a similar opinion, saying that "Theoretically, anyone can publish anything using any language I created, and, in my opinion, neither I nor anyone else should be able to do anything about it."[26] However, Peterson also expressed concern that the respective rights-holders, regardless of whether their ownership of the rights is legitimate, would be likely to sue individuals who publish material in said languages, especially if the author might profit from said material. Furthermore, comprehensive learning material for such constructed languages as High Valyrian and Klingon has been published and made freely accessible on the language-learning platform Duolingo, but those courses are licensed by the respective copyright holders.[26] Because only a few such disputes have occurred thus far, the legal consensus on ownership of languages remains uncertain. The Tasmanian Aboriginal Centre claims ownership of palawa kani, an attempted composite reconstruction of up to a dozen extinct Tasmanian indigenous languages, and has asked Wikipedia to remove its article on the project; however, there is no current legal backing for the claim.[27] Various papers on constructed languages were published from the 1970s through the 1990s, such as Glossopoeic Quarterly, Taboo Jadoo, and The Journal of Planned Languages.[28] The Conlang Mailing List was founded in 1991, and later split off an AUXLANG mailing list dedicated to international auxiliary languages. In the early to mid-1990s a few conlang-related zines were published as email or websites, such as Vortpunoj[29] and Model Languages.
The Conlang mailing list has developed a community of conlangers with its own customs, such as translation challenges and translation relays,[30] and its own terminology. Sarah Higley reports from the results of her surveys that the demographics of the Conlang list are primarily men from North America and western Europe, with a smaller number from Oceania, Asia, the Middle East, and South America, and with an age range from thirteen to over sixty; the number of women participating has increased over time. Later online communities include the Zompist Bulletin Board (ZBB; since 2001) and the Conlanger Bulletin Board. Discussion on these forums includes presentation of members' conlangs and feedback from other members, discussion of natural languages, whether particular conlang features have natural-language precedents and how interesting features of natural languages can be repurposed for conlangs, posting of interesting short texts as translation challenges, and meta-discussion about the philosophy of conlanging, conlangers' purposes, and whether conlanging is an art or a hobby.[2] Another 2001 survey by Patrick Jarrett showed an average age of 30.65, with an average of 11.83 years since starting to invent languages.[31] A more recent thread on the ZBB showed that many conlangers spend a relatively small amount of time on any one conlang, moving from one project to another; about a third spend years on developing the same language.[32] Most modern conlangers create conlangs as a hobby, for a fictional work, or for personal fulfillment. Conlangers typically create languages by defining their conlang's phonology, syntax, grammar, and other properties. Doing so requires at least a rudimentary understanding of linguistics.[33]
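Defining a phonology is typically the first of the steps just described. As a toy illustration of how a defined phoneme inventory and syllable template constrain the words of a conlang, here is a minimal sketch; the inventory, the CV(C) syllable shape, and the 30% coda probability are all invented for this example, not taken from any real project:

```python
import random

# Hypothetical phoneme inventory and syllable shape, for illustration only.
CONSONANTS = "ptkmnsl"
VOWELS = "aeiou"

def make_syllable(rng: random.Random) -> str:
    """Build one syllable with shape CV, optionally closed to CVC."""
    syl = rng.choice(CONSONANTS) + rng.choice(VOWELS)
    if rng.random() < 0.3:  # 30% chance of a final consonant (coda)
        syl += rng.choice(CONSONANTS)
    return syl

def make_word(rng: random.Random, syllables: int = 2) -> str:
    """Concatenate syllables into a candidate word obeying the phonotactics."""
    return "".join(make_syllable(rng) for _ in range(syllables))

rng = random.Random(0)
print(make_word(rng, 3))
```

Every generated word is guaranteed to respect the declared phonotactics, which is exactly the kind of systematic constraint a conlanger's phonology imposes.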
https://en.wikipedia.org/wiki/Constructed_language
Knowledge representation (KR) aims to model information in a structured manner in order to formally represent it as knowledge in knowledge-based systems. Knowledge representation and reasoning (KRR, KR&R, or KR²) additionally aims to understand, reason over, and interpret knowledge. KRR is widely used in the field of artificial intelligence (AI) with the goal of representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or having a natural-language dialog. KR incorporates findings from psychology[1] about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. KRR also incorporates findings from logic to automate various kinds of reasoning. Traditional KRR focuses more on the declarative representation of knowledge. Related knowledge representation formalisms mainly include vocabularies, thesauri, semantic networks, axiom systems, frames, rules, logic programs, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, model generators, and classifiers. In a broader sense, parameterized models in machine learning, including neural network architectures such as convolutional neural networks and transformers, can also be regarded as a family of knowledge representation formalisms. The question of which formalism is most appropriate for knowledge-based systems has long been a subject of extensive debate. For instance, Frank van Harmelen et al. discussed the suitability of logic as a knowledge representation formalism and reviewed arguments presented by anti-logicists.[2] Paul Smolensky criticized the limitations of symbolic formalisms and explored the possibilities of integrating them with connectionist approaches.[3] More recently, Heng Zhang et al.
have demonstrated that all universal (or equally expressive and natural) knowledge representation formalisms are recursively isomorphic.[4] This finding indicates a theoretical equivalence among mainstream knowledge representation formalisms with respect to their capacity for supporting artificial general intelligence (AGI). They further argue that while diverse technical approaches may draw insights from one another via recursive isomorphisms, the fundamental challenges remain inherently shared. The earliest work in computerized knowledge representation was focused on general problem-solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959 and the Advice Taker proposed by John McCarthy, also in 1959. GPS featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each sub-goal. The Advice Taker, on the other hand, proposed the use of the predicate calculus to implement common-sense reasoning. Many of the early approaches to knowledge representation in artificial intelligence (AI) used graph representations and semantic networks, similar to knowledge graphs today. In such approaches, problem solving was a form of graph traversal[5] or path-finding, as in the A* search algorithm. Typical applications included robot plan-formation and game-playing. Other researchers focused on developing automated theorem-provers for first-order logic, motivated by the use of mathematical logic to formalise mathematics and to automate the proof of mathematical theorems. A major step in this direction was the development of the resolution method by John Alan Robinson.
Meanwhile, John McCarthy and Pat Hayes developed the situation calculus as a logical representation of common-sense knowledge about the laws of cause and effect. Cordell Green, in turn, showed how to do robot plan-formation by applying resolution to the situation calculus. He also showed how to use resolution for question-answering and automatic programming.[6] In contrast, researchers at the Massachusetts Institute of Technology (MIT) rejected the resolution uniform proof-procedure paradigm and advocated the procedural embedding of knowledge instead.[7] The resulting conflict between the use of logical representations and the use of procedural representations was resolved in the early 1970s with the development of logic programming and Prolog, using SLD resolution to treat Horn clauses as goal-reduction procedures. The early development of logic programming was largely a European phenomenon. In North America, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth advocated the representation of domain-specific knowledge rather than general-purpose reasoning.[8] These efforts led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that resulted in expert systems in the 1970s and 80s, production systems, frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis.[9] Expert systems gave us the terminology still in use today, in which AI systems are divided into a knowledge base, which includes facts and rules about a problem domain, and an inference engine, which applies the knowledge in the knowledge base to answer questions and solve problems in the domain.
In these early systems the facts in the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.[10] Meanwhile, Marvin Minsky developed the concept of a frame in the mid-1970s.[11] A frame is similar to an object class: it is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used on systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations, such as ordering food in a restaurant, narrow the search space and allow the system to choose appropriate responses to dynamic situations. It was not long before the frame communities and the rule-based researchers realized that there was a synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, and slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic, such as the process of making a medical diagnosis. Integrated systems were developed that combined frames and rules. One of the most powerful and well known was the 1983 Knowledge Engineering Environment (KEE) from IntelliCorp. KEE had a complete rule engine with forward and backward chaining. It also had a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than AI, it was quickly embraced by AI researchers as well, in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.[12] The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics spun off from various research projects.
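The frame idea of named slots with defaults inherited along an is-a chain can be sketched in a few lines. This is a minimal illustration, not any particular frame system's API; the frame names and slot values (the restaurant example echoes the text) are invented:

```python
class Frame:
    """Minimal frame: named slots with values, inherited from a parent frame."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        """Look up a slot locally, then walk up the inheritance chain."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

# Hypothetical frames: a restaurant meal inherits defaults from a generic
# eating event, overriding some slots and adding its own script of steps.
eating_event = Frame("EatingEvent", location="unspecified", utensils="hands")
restaurant_meal = Frame("RestaurantMeal", parent=eating_event,
                        location="restaurant", steps=["order", "eat", "pay"])

print(restaurant_meal.get("location"))  # → restaurant (local override)
print(restaurant_meal.get("utensils"))  # → hands (inherited default)
```

The inherited defaults are exactly the "default expectations" the text describes: the system assumes them unless a more specific frame overrides them.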
At the same time, there was another strain of research that was less commercially focused and was driven by mathematical logic and automated theorem proving. One of the most influential languages in this research was the KL-ONE language of the mid-1980s. KL-ONE was a frame language that had a rigorous semantics, with formal definitions for concepts such as the is-a relation.[13] KL-ONE and the languages influenced by it, such as Loom, had an automated reasoning engine that was based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefining a class to be a subclass or superclass of some other class that was not formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an ontology).[14] Another area of knowledge representation research was the problem of common-sense reasoning. One of the first realizations from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent, such as basic principles of common-sense physics, causality, intentions, etc. An example is the frame problem: in an event-driven logic, there need to be axioms stating that things maintain position from one moment to the next unless they are moved by some external force.
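The classifier's core inference, deducing subclass relations that were never explicitly asserted, can be sketched with a toy definition of subsumption: if every property required by concept C is also required by concept D, then C subsumes D. This is a drastic simplification of a KL-ONE-style description logic classifier, and the concepts and properties below are invented for illustration:

```python
# Each concept is defined by the set of properties its instances must have.
# (Hypothetical toy taxonomy, not from any real knowledge base.)
CONCEPTS = {
    "Person": {"animate"},
    "Parent": {"animate", "has_child"},
    "Mother": {"animate", "has_child", "female"},
}

def subsumes(general: str, specific: str) -> bool:
    """C subsumes D if C's definition is a subset of D's definition."""
    return CONCEPTS[general] <= CONCEPTS[specific]

def classify(name: str) -> list:
    """Infer every is-a relation for a concept, none of which was asserted."""
    return sorted(c for c in CONCEPTS if c != name and subsumes(c, name))

print(classify("Mother"))  # → ['Parent', 'Person']
```

Nothing in the knowledge base says "Mother is-a Parent"; the classifier derives it from the definitions alone, which is exactly how such reasoners keep a growing ontology organized.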
In order to make a true artificial intelligence agent that can converse with humans using natural language and can process basic statements and questions about the world, it is essential to represent this kind of knowledge.[15] In addition to McCarthy and Hayes' situation calculus, one of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own frame language and had large numbers of analysts document various areas of common-sense reasoning in that language. The knowledge recorded in Cyc included common-sense models of time, causality, physics, intentions, and many others.[16] The starting point for knowledge representation is the knowledge representation hypothesis, first formalized by Brian C. Smith in 1985:[17] "Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge." One of the most active areas of knowledge representation research is the Semantic Web. The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts. Searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. Automatic classification gives developers technology to provide order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet.
Recent projects funded primarily by the Defense Advanced Research Projects Agency (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines.[18][19] Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used for solving complex problems. The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems. For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical. Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[20] A key trade-off in the design of knowledge representation formalisms is that between expressivity and tractability.[21] First-order logic (FOL), with its high expressive power and ability to formalise much of mathematics, is a standard for comparing the expressibility of knowledge representation languages. Arguably, FOL has two drawbacks as a knowledge representation formalism in its own right, namely ease of use and efficiency of implementation.
Firstly, because of its high expressive power, FOL allows many ways of expressing the same information, and this can make it hard for users to formalise, or even to understand, knowledge expressed in complex, mathematically oriented ways. Secondly, because of its complex proof procedures, it can be difficult for users to understand complex proofs and explanations, and it can be hard for implementations to be efficient. As a consequence, unrestricted FOL can be intimidating for many software developers. One of the key discoveries of AI research in the 1970s was that languages that do not have the full expressive power of FOL can still provide close to the same expressive power, but can be easier for both the average developer and for the computer to understand. Many of the early AI knowledge representation formalisms, from databases to semantic nets to production systems, can be viewed as making various design decisions about how to balance expressive power with naturalness of expression and efficiency.[22] In particular, this balancing act was a driving motivation for the development of IF-THEN rules in rule-based expert systems. A similar balancing act was also a motivation for the development of logic programming (LP) and the logic programming language Prolog. Logic programs have a rule-based syntax, which is easily confused with the IF-THEN syntax of production rules. But logic programs have a well-defined logical semantics, whereas production systems do not. The earliest form of logic programming was based on the Horn clause subset of FOL. But later extensions of LP included the negation as failure inference rule, which turns LP into a non-monotonic logic for default reasoning. The resulting extended semantics of LP is a variation of the standard semantics of Horn clauses and FOL, and is a form of database semantics,[23] which includes the unique name assumption and a form of closed world assumption.
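Treating Horn clauses as goal-reduction procedures, as SLD resolution does, can be shown with a minimal propositional sketch: to prove a goal, find a clause whose head matches it and recursively prove the body. This toy omits variables and unification entirely (real Prolog has both), and the example facts are the classic invented Socrates syllogism:

```python
# Horn clauses as (head, body) pairs: the head holds if every body atom holds.
# A fact is a clause with an empty body.
RULES = [
    ("mortal(socrates)", ["human(socrates)"]),
    ("human(socrates)", []),
]

def prove(goal: str) -> bool:
    """SLD-style backward chaining on ground atoms: reduce the goal to
    sub-goals via any clause whose head matches it."""
    return any(all(prove(sub) for sub in body)
               for head, body in RULES if head == goal)

print(prove("mortal(socrates)"))  # → True
print(prove("mortal(plato)"))     # → False (no clause applies)
```

Note that the `False` result illustrates negation as failure in miniature: the goal is rejected because it cannot be proved, not because its negation is asserted, which is what makes the extended semantics non-monotonic.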
These assumptions are much harder to state and reason with explicitly using the standard semantics of FOL. In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles that a knowledge representation framework plays.[24] Knowledge representation and reasoning are a key enabling technology for the Semantic Web. Languages based on the frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries.[18] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet.[25] The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with basic features such as is-a relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners.[19] In 1985, Ron Brachman categorized the core issues for knowledge representation.[26] In the early years of knowledge-based systems the knowledge bases were fairly small. The knowledge bases that were meant to actually solve real problems, rather than do proof-of-concept demonstrations, needed to focus on well-defined problems: for example, not medical diagnosis as a whole topic, but medical diagnosis of certain kinds of diseases.
As knowledge-based technology scaled up, the need for larger knowledge bases, and for modular knowledge bases that could communicate and integrate with each other, became apparent. This gave rise to the discipline of ontology engineering: designing and building large knowledge bases that could be used by multiple projects. One of the leading research projects in this area was the Cyc project. Cyc was an attempt to build a huge encyclopedic knowledge base that would contain not just expert knowledge but common-sense knowledge. In designing an artificial intelligence agent, it was soon realized that representing common-sense knowledge, knowledge that humans simply take for granted, was essential to make an AI that could interact with humans using natural language. Cyc was meant to address this problem. The language they defined was known as CycL. After CycL, a number of ontology languages have been developed. Most are declarative languages, and are either frame languages or are based on first-order logic. Modularity, the ability to define boundaries around specific domains and problem spaces, is essential for these languages because, as stated by Tom Gruber, "Every ontology is a treaty – a social agreement among people with common motive in sharing." There are always many competing and differing views that make any general-purpose ontology impossible. A general-purpose ontology would have to be applicable in any domain, and different areas of knowledge would need to be unified.[30] There is a long history of work attempting to build ontologies for a variety of task domains, e.g., an ontology for liquids,[31] the lumped element model widely used in representing electronic circuits (e.g.[32]), as well as ontologies for time, belief, and even programming itself. Each of these offers a way to see some part of the world.
The lumped element model, for instance, suggests that we think of circuits in terms of components with connections between them, with signals flowing instantaneously along the connections. This is a useful view, but not the only possible one. A different ontology arises if we need to attend to the electrodynamics in the device: here signals propagate at finite speed, and an object (like a resistor) that was previously viewed as a single component with an I/O behavior may now have to be thought of as an extended medium through which an electromagnetic wave flows.

Ontologies can of course be written down in a wide variety of languages and notations (e.g., logic, LISP, etc.); the essential information is not the form of that language but the content, i.e., the set of concepts offered as a way of thinking about the world. Simply put, the important part is notions like connections and components, not the choice between writing them as predicates or LISP constructs.

The commitment made in selecting one ontology or another can produce a sharply different view of the task at hand. Consider the difference that arises in selecting the lumped element view of a circuit rather than the electrodynamic view of the same device. As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand.
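The contrast between the two views of diagnosis can be sketched in code. The symptoms, rules, and prototypes below are invented for illustration and are not drawn from MYCIN or INTERNIST.

```python
# Two toy views of the same diagnostic task (made-up medical data).

# Rule view (MYCIN-style): empirical associations from symptoms to diseases.
rules = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"},  "measles"),
]

def diagnose_by_rules(symptoms):
    """Fire every rule whose condition is satisfied by the observations."""
    return [disease for condition, disease in rules if condition <= symptoms]

# Frame view (INTERNIST-style): prototypical diseases matched as wholes.
prototypes = {
    "flu":     {"fever", "cough", "aches"},
    "measles": {"fever", "rash", "cough"},
}

def diagnose_by_prototype(symptoms):
    """Return the prototype that best overlaps the case at hand."""
    return max(prototypes, key=lambda d: len(prototypes[d] & symptoms))

case = {"fever", "cough", "aches"}
print(diagnose_by_rules(case))      # rules whose conditions hold
print(diagnose_by_prototype(case))  # best-matching prototype
```

The rule view asks "which associations apply?", while the prototype view asks "which known pattern does this case most resemble?"; the same observations drive two quite different computations.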
https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
Using controlled language in machine translation poses several problems. In automated translation, the first step toward working with a controlled language is knowing what it is and distinguishing between natural language and controlled language.

The main problem in machine translation is a linguistic one: language is ambiguous, and the system tries to model a language at the lexical and grammatical levels. Several alternatives exist for addressing this problem; for example, a glossary related to the text's topic can be used.

A controlled language makes it possible to produce texts that are easier to read, more comprehensible and easier to retain, as well as with better vocabulary and style. There are several reasons for introducing a controlled language. One of the biggest challenges facing organizations that wish to reduce the cost and time involved in their translations is the fact that, even in environments that combine content management systems with translation memory technology, the percentage of untranslated segments per new document remains fairly high. While it is certainly possible to manage content at the sentence/segment level, the current best practice seems to be to chunk at the topic level, which means that reuse occurs at a fairly high level of granularity.
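One simple mechanical aspect of a controlled language, restricting vocabulary to an approved glossary, can be sketched as follows. The glossary and sentences are hypothetical examples; real controlled-language checkers also enforce grammatical and style rules.

```python
# Toy controlled-language vocabulary check (hypothetical glossary).

approved = {"press", "the", "start", "button", "to", "run", "pump"}

def check_sentence(sentence, approved):
    """Return the words not in the approved glossary (empty = compliant)."""
    words = sentence.lower().rstrip(".").split()
    return [w for w in words if w not in approved]

print(check_sentence("Press the start button.", approved))
print(check_sentence("Actuate the commencement control.", approved))
```

A sentence written in the controlled vocabulary passes the check unchanged, while free natural-language phrasing is flagged; constraining authors this way is one route to raising the proportion of segments a translation memory can reuse.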
https://en.wikipedia.org/wiki/Controlled_language_in_machine_translation