The private language argument argues that a language understandable by only a single individual is incoherent. It was introduced by Ludwig Wittgenstein in his later work, especially in the Philosophical Investigations.[1] The argument was central to philosophical discussion in the second half of the 20th century. In the Investigations, Wittgenstein does not present his arguments in a succinct and linear fashion; instead, he describes particular uses of language and prompts the reader to contemplate the implications of those uses. This technique gives rise to considerable dispute about both the nature of the argument and its implications. Indeed, it has become common to talk of private language arguments. Historians of philosophy see precursors of the private language argument in a variety of sources, notably in the work of Gottlob Frege and John Locke.[2] Locke is also a prominent exponent of the view targeted by the argument, since he proposed in his An Essay Concerning Human Understanding that the referent of a word is the idea it stands for.
The private language argument is of central importance to debates about the nature of language. One compelling theory about language is that language maps words to ideas, concepts, or representations in each person's mind. On this account, the concepts in one's head are distinct from the concepts in another's head. One can match one's concepts to a word in a common language, and then speak the word to another. The listener can then match the word to a concept in their mind. So the shared concepts, in effect, form a private language that one can translate into a common language and thereby share. This account is found, for example, in Locke's An Essay Concerning Human Understanding and more recently in Jerry Fodor's language of thought theory.
In his later work, Wittgenstein argues that this account of private language is inconsistent. If the idea of a private language is inconsistent, then a logical conclusion would be that all language serves a social function. This would have profound implications for other areas of philosophical and psychological study. For example, if one cannot have a private language, it might not make any sense to talk of private experiences or of private mental states.
The argument is found in part one of the Philosophical Investigations. This part consists of a series of "remarks" numbered sequentially. The core of the argument is generally thought to be presented in §256 and onward, though the idea is first introduced in §243. If someone were to behave as if they understood a language of which no one else can make sense, we might call this an example of a private language.[3] It is not sufficient here, however, for the language simply to be one that has not yet been translated. In order to count as a private language in Wittgenstein's sense, it must be in principle incapable of translation into an ordinary language – if, for example, it were to describe those inner experiences supposed to be inaccessible to others.[4] The private language being considered is not simply a language in fact understood by one person, but a language that in principle can only be understood by one person. So the last speaker of a dying language would not be speaking a private language, since the language remains in principle learnable. A private language must be unlearnable and untranslatable, and yet it must appear that the speaker is able to make sense of it.
Wittgenstein sets up a thought experiment in which someone is imagined to associate some recurrent sensation with a symbol by writing S in their calendar when the sensation occurs.[5] Such a case would be a private language in the Wittgensteinian sense. Furthermore, it is presupposed that S cannot be defined using other terms, for example, "the feeling I get when the manometer rises"; for to do so would be to give S a place in our public language, in which case S could not be a statement in a private language.[6]
It might be supposed that one might use "a kind of ostensive definition" for S by focusing on the sensation and on the symbol. Early in Philosophical Investigations, Wittgenstein attacks the usefulness of ostensive definition.[7] He considers the example of someone pointing to two nuts while saying "This is called two". He considers how it comes about that the listener associates this with the number of items, rather than the type of nut, their colour, or even a compass direction. One conclusion of this is that to participate in an ostensive definition presupposes an understanding of the process and context involved, of the form of life.[8] Another is that "an ostensive definition can be variously interpreted in every case".[9]
In the case of the sensation S, Wittgenstein argues that no criterion exists for the correctness of such an ostensive definition, since whatever seems right will be right, "and that only means that here we cannot talk about 'right'".[5] The exact reason for the rejection of private language has been contentious. One interpretation, which has been called memory scepticism, is that one might remember the sensation wrongly, and as a result one might misuse the term S. The other, called meaning scepticism, is that one can never be sure of the meaning of a term defined in this way.
One common interpretation is that the possibility exists that one might misremember the sensation, and therefore one does not have any firm criterion for using S in each case.[10] So, for example, one might focus on one particular sensation one day and link it to the symbol S; but the next day, one would have no criteria for knowing that the sensation one has then is the same as the sensation one had the previous day, except for one's memory; and since one's memory might fail, one has no firm criteria for knowing that the sensation one has is indeed the sensation S.
However, memory scepticism has been criticized as applying to public language as well. If one person can misremember, it is entirely possible that several people can misremember. So memory scepticism could be applied with equal effect to ostensive definitions given in a public language. For example, Jim and Jenny might one day decide to call some particular tree T; but the next day both misremember which tree it was they named. If they were depending entirely on their memory and had not written down the location of the tree or told anyone else its location, then they would appear to have the same difficulties as the individual who defined S ostensively. And so, if this is the case, the argument presented against private language would apply equally to public language. This interpretation (and the criticism of Wittgenstein that arises from it) is based on a complete misreading, however, because Wittgenstein's argument has nothing to do with the fallibility of human memory but rather concerns the intelligibility of remembering something for which there is no external criterion of correctness.
It is not that we will not, in fact, remember the sensation correctly, but rather that it makes no sense to talk about our memory being either correct or incorrect in this case. The point, as Diego Marconi puts it, is not so much that private language is "a game at which we can't win"; it is "a game we can't lose". Wittgenstein makes this clear in §258: "A definition surely serves to establish the meaning of a sign.—Well, that is done precisely by the concentrating of my attention; for in this way I impress on myself the connexion between the sign and the sensation.—But 'I impress it on myself' can only mean: this process brings it about that I remember the connexion right in the future. But in the present case, I have no criterion of correctness." This absence of any criterion of correctness is not a problem because it makes it more difficult for the private linguist to remember his sensation correctly; it is a problem because it undermines the intelligibility of such a concept as remembering the sensation, whether correctly or incorrectly.
Wittgenstein explains this unintelligibility with a series of analogies. For example, in §265 he observes the pointlessness of a dictionary that exists only in the imagination. Since the idea of a dictionary is to justify the translation of one word by another, and thus to constitute the reference of justification for such a translation, all this is lost the moment we talk of a dictionary in the imagination; for "justification consists in appealing to something independent". Hence, to appeal to a private ostensive definition as the standard of correct use of a term would be "as if someone were to buy several copies of the morning paper to assure himself that what it said was true."
Another interpretation, found for example in the account presented by Anthony Kenny,[11] has it that the problem with a private ostensive definition is not just that it might be misremembered, but that such a definition cannot lead to a meaningful statement.
Let us first consider a case of ostensive definition in a public language. Jim and Jenny might one day decide to call some particular tree T; but the next day misremember which tree it was they named. In this ordinary-language case, it makes sense to ask questions such as "is this the tree we named T yesterday?" and to make statements such as "This is not the tree we named T yesterday". So one can appeal to other parts of the form of life, perhaps arguing: "this is the only oak in the forest; T was an oak; therefore this is T". An everyday ostensive definition is embedded in a public language, and so in the form of life in which that language occurs. Participation in a public form of life enables correction to occur. That is, in the case of a public language there are other ways to check the use of a term that has been ostensively defined. We can justify our use of the new name T by making the ostensive definition more or less explicit. But this is not the case with S. Recall that because S is part of a private language, it is not possible to provide an explicit definition of S. The only possible definition is the private, ostensive one of associating S with that feeling. But this is the very thing being questioned. "Imagine someone saying: 'But I know how tall I am!' and laying his hand on top of his head to prove it."[12]
A recurrent theme in Wittgenstein's work is that for some term or utterance to have a sense, it must be conceivable that it be doubted.
For Wittgenstein, tautologies do not have sense, do not say anything, and so do not admit of doubt. But furthermore, if any other sort of utterance does not admit of doubt, it must be senseless. Rush Rhees, in his notes on lectures given by Wittgenstein, while discussing the reality of physical objects, has him say: "We get something similar when we write a tautology like 'p → p'. We formulate such expressions to get something in which there is no doubt – even though the sense has vanished with the doubt."[13]
As Kenny put it, "Even to think falsely that something is S, I must know the meaning of S; and this is what Wittgenstein argues is impossible in the private language."[14] Because there is no way to check the meaning (or use) of S apart from that private ostensive act of definition, it is not possible to know what S means. The sense has vanished with the doubt. Wittgenstein uses the further analogy of the left hand giving the right hand money.[15] The physical act might take place, but the transaction could not count as a gift. Similarly, one might say S while focusing on a sensation, but no act of naming has occurred.
The beetle-in-a-box is a famous thought experiment that Wittgenstein introduces in the context of his investigation of pains.[16] Pains occupy a distinct and vital place in the philosophy of mind for several reasons.[17] One is that pains seem to collapse the appearance/reality distinction.[18] If an object appears to be red it might not be so in reality, but if one seems to oneself to be in pain, it must be so: there can be no case here of seeming at all. At the same time, one cannot feel another person's pain, but only infer it from their behavior and their reports of it. If we accept pains as special qualia known absolutely but exclusively by the solitary minds that perceive them, this may be taken to ground a Cartesian view of the self and consciousness. Our consciousness, of pains anyway, would seem unassailable. Against this, one might acknowledge the absolute fact of one's own pain but claim skepticism about the existence of anyone else's pains. Alternatively, one might take a behaviorist line and claim that our pains are merely neurological stimulations accompanied by a disposition to behave.[19]
Wittgenstein invites readers to imagine a community in which the individuals each have a box containing a "beetle". "No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle."[16] If the "beetle" had a use in the language of these people, it could not be as the name of something – because it is entirely possible that each person had something completely different in their box, or even that the thing in the box constantly changed, or that each box was in fact empty. The content of the box is irrelevant to whatever language game it is used in. By analogy, it does not matter that one cannot experience another's subjective sensations. Unless talk of such subjective experience is learned through public experience, the actual content is irrelevant; all we can discuss is what is available in our public language. By offering the "beetle" as an analogy to pains, Wittgenstein suggests that the case of pains is not really amenable to the uses philosophers would make of it.
"That is to say: if we construe the grammar of the expression of sensation on the model of 'object and designation', the object drops out of consideration as irrelevant."[16] It is common to describe language use in terms of the rules that one follows, and Wittgenstein considers rules in some detail. He famously suggests that any act can be made out to follow from a given rule.[20]He does this in setting up a dilemma: This was our paradox: no course of action could be determined by a rule, because every course of action can be made out to accord with the rule. The answer was: if everything can be made out to accord with the rule, then it can also be made out to conflict with it. And there would be neither accord nor conflict here.[21] One can give an explanation of why one followed a particular rule in a particular case. But any explanation for rule following behaviour cannot be given in terms of following a rule, without involving circularity. One can say something like "She did X because of the rule R" but if you say "She followed R because of the rule R1" one can then ask "but why did she follow rule R1?" and so potentially become involved in a regression. Explanation must have an end.[22] His conclusion: What this shows is that there is a way of grasping a rule which isnotaninterpretation, but which is exhibited in what we call "obeying the rule" and "going against it" in actual cases.[23] So following a rule is a practice. And furthermore, since one can think one is following a rule and yet be mistaken,thinkingone is following a rule is not the same as following it. Therefore, following a rule cannot be a private activity.[24] In 1982Saul Kripkepublished a new and innovative account of the argument in his bookWittgenstein on Rules and Private Language.[25]Kripke takes the paradox discussed in §201 to be the central problem of thePhilosophical Investigations. He develops the paradox into aGrue-likeproblem, arguing that it similarly results in skepticism, but aboutmeaningrather than aboutinduction.[26]He supposes a new form of addition, which he callsquus, which is identical withplusin all cases except those in which either of the numbers to be added is greater than 57, thus: x quus y={x + yforx,y<575otherwise{\displaystyle {\text{x quus y}}={\begin{cases}{\text{x + y}}&{\text{for }}x,y<57\\[12pt]5&{\text{otherwise}}\end{cases}}} He then asks if anyone could know that previously when they thought he had meantplus, he had not meantquus. He claims that his argument shows that "Each new application we make is a leap in the dark; any present intention could be interpreted to accord with anything we may choose to do. So there can be neither accord nor conflict."[27] Kripke's account is considered by some commentators to be unfaithful to Wittgenstein,[28]and as a result has been referred to as "Kripkenstein". Even Kripke himself suspected that many aspects of the account were inconsistent with Wittgenstein's original intent, leading him to urge that the book "should be thought of as expounding neither 'Wittgenstein's' argument nor 'Kripke's': rather Wittgenstein's argument as it struck Kripke, as it presented a problem for him."[29] Remarks in Part I ofInvestigationsare preceded by the symbol"§". Remarks in Part II are referenced by their Roman numeral or their page number in the third edition.
https://en.wikipedia.org/wiki/Private_language_argument
Semiofest is the main worldwide conference series and event on commercial semiotics.[1] Its focus is on the methods of semiotic analysis which are helpful in solving interpretational conflicts and providing tools for better design of social meaning-making spaces. The topics covered include the applications of semiotics in marketing, brand development, design, advertising, applied aspects of social semiotics, ecosemiotics, etc.[2] The conference subtitle is "A Celebration of Semiotic Thinking". Commercial semiotics applies results from semiotic anthropology, cultural semiotics, ecosemiotics and biosemiotics for multi-sided analysis of meaning-making. The initiators and first organisers of Semiofest were several British companies and agencies that provide semiotic consultancy.[3] An aim of Semiofest is to bring together the specialists practicing semiotics in marketing and social consultancy, and academics researching and teaching semiotics in universities.[4] The conference has been organised in the following centres (and themes):
https://en.wikipedia.org/wiki/Semiofest
Charles Sanders Peirce began writing on semiotics, which he also called semeiotics, meaning the philosophical study of signs, in the 1860s, around the time that he devised his system of three categories. During the 20th century, the term "semiotics" was adopted to cover all tendencies of sign research, including Ferdinand de Saussure's semiology, which began in linguistics as a completely separate tradition.
Peirce adopted the term semiosis (or semeiosis) and defined it to mean an "action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this trirelative influence not being in any way resolvable into actions between pairs."[1] This specific type of triadic relation is fundamental to Peirce's understanding of logic as formal semiotic.[2] By "logic" he meant philosophical logic. He eventually divided (philosophical) logic, or formal semiotics, into (1) speculative grammar, or stechiology,[3] on the elements of semiosis (sign, object, interpretant), how signs can signify and, in relation to that, what kinds of signs, objects, and interpretants there are, how signs combine, and how some signs embody or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative rhetoric, or methodeutic, the philosophical theory of inquiry, including his form of pragmatism. His speculative grammar, or stechiology, is this article's subject.
Peirce conceives of and discusses things like representations, interpretations, and assertions broadly and in terms of philosophical logic, rather than in terms of psychology, linguistics, or social studies. He places philosophy at a level of generality between mathematics and the special sciences of nature and mind, such that it draws principles from mathematics and supplies principles to special sciences.[4] On the one hand, his semiotic theory does not resort to special experiences or special experiments in order to settle its questions. On the other hand, he draws continually on examples from common experience, and his semiotics is not contained in a mathematical or deductive system and does not proceed chiefly by drawing necessary conclusions about purely hypothetical objects or cases. As philosophical logic, it is about the drawing of conclusions: deductive, inductive, or hypothetically explanatory. Peirce's semiotics, in its classifications, its critical analysis of kinds of inference, and its theory of inquiry, is philosophical logic studied in terms of signs and their triadic relations as positive phenomena in general.
Peirce's semiotic theory is different from Saussure's conceptualization in the sense that it rejects the dualist view of the Cartesian self. He believed that semiotics is a unifying and synthesizing discipline.[5] More importantly, he included the element of the "interpretant" in the fundamental understanding of the sign.[5]
Here is Peirce's definition of the triadic sign relation that formed the core of his definition of logic: "Namely, a sign is something, A, which brings something, B, its interpretant sign determined or created by it, into the same sort of correspondence with something, C, its object, as that in which itself stands to C." (Peirce 1902, NEM 4, 20–21).[6]
This definition, together with Peirce's definitions of correspondence and determination, is sufficient to derive all of the statements that are necessarily true for all sign relations. Yet there is much more to the theory of signs than simply proving universal theorems about generic sign relations.
There is also the task of classifying the various species and subspecies of sign relations. As a practical matter, of course, familiarity with the full range of concrete examples is indispensable to theory and application both.
In Peirce's theory of signs, a sign is something that stands in a well-defined kind of relation to two other things, its object and its interpretant sign.[7] Although Peirce's definition of a sign is independent of psychological subject matter and his theory of signs covers more ground than linguistics alone, it is nevertheless the case that many of the more familiar examples and illustrations of sign relations will naturally be drawn from linguistics and psychology, along with our ordinary experience of their subject matters.
For example, one way to approach the concept of an interpretant is to think of a psycholinguistic process. In this context, an interpretant can be understood as a sign's effect on the mind, or on anything that acts like a mind, what Peirce calls a quasi-mind. An interpretant is a process of interpretation, one of the types of activity that falls under the heading of semiosis. One usually says that a sign stands for an object to an agent, an interpreter. In the upshot, however, it is the sign's effect on the agent that is paramount. This effect is what Peirce called the interpretant sign, or the interpretant for short. An interpretant in its barest form is a sign's meaning, implication, or ramification, and especial interest attaches to the types of semiosis that proceed from obscure signs to relatively clear interpretants. In logic and mathematics the most clarified and most succinct signs for an object are called canonical forms or normal forms. The interpretant, in Peirce's conceptualization, is not the user of the sign but the "proper significate effect", or the mental concept produced by both the sign and by the user's experience of the object.[8]
Peirce argued that logic is the formal study of signs in the broadest sense, not only signs that are artificial, linguistic, or symbolic, but also signs that are semblances or are indexical, such as reactions. Peirce held that "all this universe is perfused with signs, if it is not composed exclusively of signs",[9] along with their representational and inferential relations. He argued that, since all thought takes time, all thought is in signs:
To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way of saying that every thought must be interpreted in another, or that all thought is in signs. (Peirce, 1868[10])
Thought is not necessarily connected with a brain. It appears in the work of bees, of crystals, and throughout the purely physical world; and one can no more deny that it is really there, than that the colors, the shapes, etc., of objects are really there. Consistently adhere to that unwarrantable denial, and you will be driven to some form of idealistic nominalism akin to Fichte's. Not only is thought in the organic world, but it develops there. But as there cannot be a General without Instances embodying it, so there cannot be thought without Signs. We must here give "Sign" a very wide sense, no doubt, but not too wide a sense to come within our definition. Admitting that connected Signs must have a Quasi-mind, it may further be declared that there can be no isolated sign.
Moreover, signs require at least two Quasi-minds; a Quasi-utterer and a Quasi-interpreter; and although these two are at one (i.e., are one mind) in the sign itself, they must nevertheless be distinct. In the Sign they are, so to say, welded. Accordingly, it is not merely a fact of human Psychology, but a necessity of Logic, that every logical evolution of thought should be dialogic. (Peirce, 1906[11])
Signhood is a way of being in relation, not a way of being in itself. Anything is a sign, not as itself, but in some relation to another. The role of sign is constituted as one role among three: object, sign, and interpretant sign. It is an irreducible triadic relation; the roles are distinct even when the things that fill them are not. The roles are but three: a sign of an object leads to interpretants, which, as signs, lead to further interpretants. In various relations, the same thing may be sign or semiotic object.
The question of what a sign is depends on the concept of a sign relation, which depends on the concept of a triadic relation. This, in turn, depends on the concept of a relation itself. Peirce depended on mathematical ideas about the reducibility of relations: dyadic, triadic, tetradic, and so forth. According to Peirce's Reduction Thesis,[12] (a) triads are necessary because genuinely triadic relations cannot be completely analyzed in terms of monadic and dyadic predicates, and (b) triads are sufficient because there are no genuinely tetradic or larger polyadic relations: all higher-arity n-adic relations can be analyzed in terms of triadic and lower-arity relations and are reducible to them. Peirce and others, notably Robert W. Burch (1991) and Joachim Hereth Correia and Reinhard Pöschel (2006), have offered proofs of the Reduction Thesis.[13] According to Peirce, a genuinely monadic predicate characteristically expresses quality; a genuinely dyadic predicate, reaction or resistance; a genuinely triadic predicate, representation or mediation. Thus Peirce's theory of relations underpins his philosophical theory of three basic categories (see below).
Extension × intension = information.[14] Two traditional approaches to the sign relation, necessary though insufficient, are the way of extension (a sign's objects, also called breadth, denotation, or application) and the way of intension (the objects' characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or connotation). Peirce adds a third, the way of information, including change of information, in order to integrate the other two approaches into a unified whole.[15] For example, because of the equation above, if a term's total amount of information stays the same, then the more that the term 'intends' or signifies about objects, the fewer are the objects to which the term 'extends' or applies. A proposition's comprehension consists in its implications.[16]
Determination. A sign depends on its object in such a way as to represent its object: the object enables and, in a sense, determines the sign. A physically causal sense of this stands out especially when a sign consists in an indicative reaction. The interpretant depends likewise on both the sign and the object: the object determines the sign to determine the interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign determination is triadic.
For example, an interpretant does not merely represent something which represented an object; instead an interpretant represents something as a sign representing an object. It is an informational kind of determination, a rendering of something more determinately representative.[17] Peirce used the word "determine" not in a strictly deterministic sense, but in a sense of "specializes", bestimmt,[17] involving variation in measure, like an influence. Peirce came to define sign, object, and interpretant by their (triadic) mode of determination, not by the idea of representation, since that is part of what is being defined.[18] The object determines the sign to determine another sign, the interpretant, to be related to the object as the sign is related to the object; hence the interpretant, fulfilling its function as sign of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is definitive of sign, object, and interpretant in general.[19] In semiosis, every sign is an interpretant in a chain stretching both fore and aft. The relation of informational or logical determination which constrains object, sign, and interpretant is more general than the special cases of causal or physical determination. In general terms, any information about one of the items in the sign relation tells you something about the others, although the actual amount of this information may be nil in some species of sign relations.
Peirce held that there are exactly three basic semiotic elements, the sign, object, and interpretant, as outlined above and fleshed out here in a bit more detail. Some of the understanding needed by the mind depends on familiarity with the object. In order to know what a given sign denotes, the mind needs some experience of that sign's object collaterally to that sign or sign system, and in this context Peirce speaks of collateral experience, collateral observation, and collateral acquaintance, all in much the same terms.[22]
"Representamen" (properly with the "a" long and stressed: /rɛprɪzɛnˈteɪmən/) was adopted (not coined) by Peirce as his blanket technical term for any and every sign or sign-like thing covered by his theory. It is a question of whether the theoretically defined "representamen" covers only the cases covered by the popular word "sign". The word "representamen" is there in case a divergence should emerge. Peirce's example was this: sign action always involves a mind. If a sunflower, by doing nothing more than turning toward the sun, were thereby to become fully able to reproduce a sunflower turning in just the same way toward the sun, then the first sunflower's turning would be a representamen of the sun yet not a sign of the sun.[23] Peirce eventually stopped using the word "representamen".[24]
Peirce made various classifications of his semiotic elements, especially of the sign and the interpretant. Of particular concern in understanding the sign-object-interpretant triad is this: in relation to a sign, its object and its interpretant are either immediate (present in the sign) or mediate.
The immediate object is, from the viewpoint of a theorist, really a kind of sign of the dynamic object; but phenomenologically it is the object until there is reason to go beyond it, and somebody analyzing (critically but not theoretically) a given semiosis will consider the immediate object to be the object until there is reason to do otherwise.[26] Peirce preferred phrases like dynamic object over real object, since the object might be fictive: Hamlet, for instance, to whom one grants a fictive reality, a reality within the universe of discourse of the play Hamlet.[20]
It is initially tempting to regard immediate, dynamic, and final interpretants as forming a temporal succession in an actual process of semiosis, especially since their conceptions refer to the beginning, midstages, and end of a semiotic process. But instead their distinctions from each other are modal or categorial. The immediate interpretant is a quality of impression which a sign is fitted to produce, a special potentiality. The dynamic interpretant is an actuality. The final interpretant is a kind of norm or necessity unaffected by actual trends of opinion or interpretation. One does not actually obtain a final interpretant per se; instead one may successfully coincide with it.[27] Peirce, a fallibilist, holds that one has no guarantees that one has done so, but only compelling reasons, sometimes very compelling, to think so and, in practical matters, must sometimes act with complete confidence of having done so. (Peirce said that it is often better in practical matters to rely on instinct, sentiment, and tradition than on theoretical inquiry.[28]) In any case, insofar as truth is the final interpretant of a pursuit of truth, one believes, in effect, that one coincides with a final interpretant of some question about what is true, whenever and to whatever extent one believes that one reaches a truth.
Peirce proposes several typologies and definitions of the signs. At least 76 definitions of what a sign is have been collected throughout Peirce's work.[29] Some canonical typologies can nonetheless be observed, one crucial one being the distinction between "icons", "indices" and "symbols" (CP 2.228, CP 2.229 and CP 5.473). The icon-index-symbol typology is chronologically the first but structurally the second of three typologies that fit together as a trio of three-valued parameters in a regular scheme of nine kinds of sign. (The three "parameters" (not Peirce's term) are not independent of one another, and the result is a system of ten classes of sign, which are shown further down in this article.)
Peirce's three basic phenomenological categories come into central play in these classifications. The 1-2-3 numeration used further below in the exposition of sign classes represents Peirce's associations of sign classes with the categories. The categories are as follows:
*Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process.
The three sign typologies depend respectively on (I) the sign itself, (II) how the sign stands for its denoted object, and (III) how the sign stands for its object to its interpretant. Each of the three typologies is a three-way division, a trichotomy, via Peirce's three phenomenological categories. Every sign falls under one class or another within (I) and within (II) and within (III). Thus each of the three typologies is a three-valued parameter for every sign.
The three parameters are not independent of each other; many co-classifications are not found.[36] The result is not 27 but instead ten classes of signs fully specified at this level of analysis. In later years, Peirce attempted a finer level of analysis, defining sign classes in terms of relations not just to sign, object, and interpretant, but to sign, immediate object, dynamic object, immediate interpretant, dynamic interpretant, and final or normal interpretant. He aimed at 10 trichotomies of signs, with the above three trichotomies interspersed among them, and issuing in 66 classes of signs. He did not bring that system into a finished form. In any case, in that system, icon, index, and symbol were classed by the category of how they stood for the dynamic object, while rheme, dicisign, and argument were classed by the category of how they stood to the final or normal interpretant.[37]
These conceptions are specific to Peirce's theory of signs and are not exactly equivalent to general uses of the notions of "icon", "index", "symbol", "tone", "token", "type", "term" (or "rheme"), "proposition" (or "pheme"), "argument".
Qualisign, sinsign, legisign (also called tone, token, type; and also called potisign, actisign, famisign). This is the typology of the sign as distinguished by the sign's own phenomenological category (set forth in 1903, 1904, etc.). A replica (also called an instance) of a legisign is a sign, often an actual individual one (a sinsign), which embodies that legisign. A replica is a sign for the associated legisign, and therefore is also a sign for the legisign's object. All legisigns need sinsigns as replicas, for expression. Some but not all legisigns are symbols. All symbols are legisigns. Different words with the same meaning are symbols which are replicas of that symbol which consists in their meaning but doesn't prescribe the qualities of its replicas.[38] The replica of a rhematic symbol, for instance, calls up a mental image which, owing to the habits and dispositions of such a mind, often produces a general concept.[39] Here, the replica is interpreted as a sign of the object, which is then considered an instance of that concept.[39]
Icon, index, symbol. This is the typology of the sign as distinguished by the phenomenological category of its way of denoting the object (set forth in 1867 and many times in later years). This typology emphasizes the different ways in which the sign refers to its object: the icon by a quality of its own, the index by a real connection to its object, and the symbol by a habit or rule for its interpretant. The modes may be compounded, for instance, in a sign that displays a forking line iconically for a fork in the road and stands indicatively near a fork in the road.
*Note: In "On a New List of Categories" (1867) Peirce gave the unqualified term "sign" as an alternate expression for "index", and gave "general sign" as an alternate expression for "symbol". "Representamen" was his blanket technical term for any and every sign or signlike thing covered by his theory.[48] Peirce soon reserved "sign" to its broadest sense, for index, icon, and symbol alike. He also eventually decided that the symbol is not the only sign which can be called a "general sign" in some sense, and that indices and icons can be generals, generalities, too. The general sign as such, the generality as a sign, he eventually called, at various times, the "legisign" (1903, 1904), the "type" (1906, 1908), and the "famisign" (1908).
Rheme, dicisign, argument. This is the typology of the sign as distinguished by the phenomenological category which the sign's interpretant attributes to the sign's way of denoting the object (set forth in 1902, 1903, etc.).
*Note: In his "Prolegomena to an Apology for Pragmaticism" (The Monist, v. XVI, no. 4, Oct. 1906), Peirce uses the words "seme", "pheme", and "delome" (pp. 506, 507, etc.) for the rheme-dicisign-argument typology, but retains the word "rheme" for the predicate (p. 530) in his system of Existential Graphs. Also note that Peirce once offered "seme" as an alternate expression for "index" in 1903.[43]
The three typologies, labeled "I.", "II.", and "III.", are shown together in the table below. As parameters, they are not independent of one another. As previously said, many co-classifications are not found.[36] The slanting and vertical lines show the options for co-classification of a given sign (and appear in MS 339, August 7, 1904, viewable at the Lyris Peirce Archive[54]). The result is ten classes of sign. Words in parentheses in the table are alternate names for the same kinds of signs.
*Note: As noted above, in "On a New List of Categories" (1867) Peirce gave the unqualified word "sign" as an alternate expression for "index", and gave "general sign" as an alternate expression for "symbol". Peirce soon reserved "sign" to its broadest sense, for index, icon, and symbol alike, and eventually decided that symbols are not the only signs which can be called "general signs" in some sense. See the note at the end of section "II. Icon, index, symbol" for details. A term (in the conventional sense) is not just any rheme; it is a kind of rhematic symbol. Likewise a proposition (in the conventional sense) is not just any dicisign; it is a kind of dicent symbol.
In the study of photography and film studies Peirce's work is widely cited.[56] He has also been influential in the field of art history.[57]
I define a Sign as anything which is so determined by something else, called its Object, and so determines an effect upon a person, which effect I call its Interpretant, that the latter is thereby mediately determined by the former. My insertion of "upon a person" is a sop to Cerberus, because I despair of making my own broader conception understood.
Now logical terms are of three grand classes. The first embraces those whose logical form involves only the conception of quality, and which therefore represent a thing simply as "a —." These discriminate objects in the most rudimentary way, which does not involve any consciousness of discrimination. They regard an object as it is in itself as such (quale); for example, as horse, tree, or man. These are absolute terms. (Peirce, 1870. But also see "Quale-Consciousness", 1898, in CP 6.222–237.)
For abbreviations of his works see Abbreviations.
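As a rough illustration of why the three three-valued parameters yield ten classes of sign rather than 27, here is a small Python sketch. It encodes the co-classification constraint as it is commonly reconstructed (this compact "may not exceed" rule is a reading of the restrictions described above, not Peirce's own wording): the category under trichotomy II may not exceed the sign's category under trichotomy I, and the category under III may not exceed that under II.

    from itertools import product

    # Peirce's three trichotomies, graded by his categories 1, 2, 3.
    SIGN_ITSELF  = {1: "qualisign", 2: "sinsign", 3: "legisign"}   # I: the sign itself
    TO_OBJECT    = {1: "icon", 2: "index", 3: "symbol"}            # II: relation to object
    TO_INTERPRET = {1: "rheme", 2: "dicisign", 3: "argument"}      # III: relation to interpretant

    # Keep only the combinations allowed by the reconstructed constraint:
    # each later trichotomy's category may not exceed the earlier one's.
    classes = [
        (i, j, k)
        for i, j, k in product((1, 2, 3), repeat=3)
        if j <= i and k <= j
    ]

    for i, j, k in classes:
        print(f"{SIGN_ITSELF[i]:9} {TO_OBJECT[j]:7} {TO_INTERPRET[k]}")

    print(len(classes))  # 10 of the 27 combinations survive

Run as-is, this enumerates the familiar ten classes, from the rhematic iconic qualisign up to the argument (a symbolic legisign), matching the count stated above.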
https://en.wikipedia.org/wiki/Semiotic_theory_of_Charles_Sanders_Peirce
Social semiotics (also social semantics)[1] is a branch of the field of semiotics which investigates human signifying practices in specific social and cultural circumstances, and which tries to explain meaning-making as a social practice. Semiotics, as originally defined by Ferdinand de Saussure, is "the science of the life of signs in society". Social semiotics expands on Saussure's founding insights by exploring the implications of the fact that the "codes" of language and communication are formed by social processes. The crucial implication here is that meanings and semiotic systems are shaped by relations of power, and that as power shifts in society, our languages and other systems of socially accepted meanings can and do change.
Social semiotics is the study of the social dimensions of meaning, and of the power of human processes of signification and interpretation (known as semiosis) in shaping individuals and societies. Social semiotics focuses on social meaning-making practices of all types, whether visual, verbal or aural in nature.[2] These different systems for meaning-making, or possible "channels" (e.g. speech, writing, images), are known as semiotic modes (or semiotic registers). Semiotic modes can include visual, verbal, written, gestural and musical resources for communication. They also include various "multimodal" ensembles of any of these modes.[3]
Social semiotics can include the study of how people design and interpret meanings, the study of texts, and the study of how semiotic systems are shaped by social interests and ideologies, and how they are adapted as society changes (Hodge and Kress, 1988). Structuralist semiotics in the tradition of Ferdinand de Saussure focused primarily on theorising semiotic systems or structures (termed langue by de Saussure), which change diachronically, i.e. over longer periods of time. In contrast, social semiotics tries to account for the variability of semiotic practices, termed parole by Saussure. This altered focus shows how individual creativity, changing historical circumstances, and new social identities and projects can all change patterns of usage and design (Hodge and Kress, 1988). From a social semiotic perspective, rather than being fixed into unchanging "codes", signs are considered to be resources which people use and adapt (or "design") to make meaning. In these respects, social semiotics was influenced by, and shares many of the preoccupations of, pragmatics (Charles W. Morris) and sociolinguistics, and has much in common with cultural studies and critical discourse analysis. The main task of social semiotics is to develop analytical and theoretical frameworks which can explain meaning-making in a social context.[2]
The linguistic theorist Michael Halliday introduced the term "social semiotics" into linguistics when he used the phrase in the title of his book, Language as Social Semiotic. This work argues against the traditional separation between language and society, and exemplifies the start of a "semiotic" approach, which broadens the narrow focus on written language in linguistics (1978). For Halliday, languages evolve as systems of "meaning potential" (Halliday, 1978:39), or as sets of resources which influence what the speaker can do with language in a particular social context.
For example, for Halliday, the grammar of the English language is a system organised for the following three purposes (areas or "metafunctions"): Any sentence in English is composed like a musical composition, with one strand of its meaning coming from each of the three semiotic areas or metafunctions. Bob Hodge generalises Halliday's essays[4] on social semiotics into five premises:[5]
Robert Hodge and Gunther Kress's Social Semiotics (1988) focused on the uses of semiotic systems in social practice. They explain that the social power of texts in society depends on interpretation: "Each producer of a message relies on its recipients for it to function as intended." (1988:4) This process of interpretation (semiosis) situates individual texts within discourses, the exchanges of interpretative communities. The work of interpretation can contest the power of hegemonic discourses. Hodge and Kress give the example of feminist activists defacing a sexist advertising billboard and spray-painting it with a new, feminist message. "Text is only a trace of discourses, frozen and preserved, more or less reliable or misleading. Yet discourse disappears too rapidly, surrounding a flow of texts." (1988:8)
Hodge and Kress built on a range of traditions from linguistics (including Noam Chomsky, Michael Halliday, Benjamin Lee Whorf and sociolinguistics), but the major impetus for their work is the critical perspective on ideology and society that originates with Marx. Hodge and Kress build a notion of semiosis as a dynamic process, where meaning is not determined by rigid structures or predefined cultural codes. They argue that Ferdinand de Saussure's structuralist semiotics avoided addressing questions about creativity, movement, and change in language, possibly in reaction to the diachronic linguistic traditions of his time (the focus on the historical development from Indo-European). This created a "problematic" legacy, with linguistic change relegated to the "contents of Saussure's rubbish bin" (1988:16-17). Instead, Hodge and Kress propose to account for change in semiosis through the work of Charles Sanders Peirce. Meaning is a process, in their interpretation of Peirce. They refer to Peirce's triadic model of semiosis, which depicts the "action" of a sign as a limitless process of infinite semiosis, where one "interpretant" (or idea linked to a sign) generates another. The flow of these infinite processes of interpretation is constrained in Peirce's model, they claim, by the material world (the "object") and by cultural rules of thought, or "habit" (1988:20).
Social semiotics revisits de Saussure's doctrine of the "arbitrariness of the linguistic sign". This notion rests on the argument that the signifier has only an arbitrary relationship to the signified; in other words, that there is nothing about the sound or appearance of (verbal) signifiers (as, for example, the words "dog" or "chien") to suggest what they signify. Hodge and Kress point out that questions of the referent become more complicated when semiotics moves beyond verbal language. On the one hand, there is the need to account for the continuum of relationships between the referent and the representation. Here, they draw on Peirce's differentiation between iconic signification (e.g. a colour photograph of smoke, where the signifier recreates the perceptual experience of the signified), indexical signification (e.g.
a column of smoke, where there is a causal relationship between the physical signifier and the fire it might signify), and symbolic signification (e.g. the word "smoke", where the arbitrary link between signifier and signified is maintained by social convention). Social semiotics also addresses the question of how societies and cultures maintain or shift these conventional bonds between signifier and signified. De Saussure was unwilling to answer this question, Hodge and Kress claim. This leaves the socially determinist implication that meanings and interpretations are dictated from above, by "the whims of an inscrutably powerful collective being, Society." For Hodge and Kress, social semiotics must respond to this question and explain how the social shaping of meanings works in practice (1988:22).
Social semiotics is currently extending this general framework beyond its linguistic origins to account for the growing importance of sound and visual images, and for how modes of communication are combined in both traditional and digital media (semiotics of social networking) (see, for example, Kress and van Leeuwen, 1996), thus approaching semiotics of culture (Randviir 2004). Theorists such as Gunther Kress and Theo van Leeuwen have built on Halliday's framework by providing new "grammars" for other semiotic modes. Like language, these grammars are seen as socially formed and changeable sets of available "resources" for making meaning, which are also shaped by the semiotic metafunctions originally identified by Halliday. The visual and aural modes have received particular attention. Accounting for multimodality (communication in and across a range of semiotic modes: verbal, visual, and aural) is considered a particularly important ongoing project, given the importance of the visual mode in contemporary communication.
In the field of graphic design, the multimodal and social semiotic viewpoint can be seen as connecting our sensory abilities and opening up new opportunities for deeper visual interaction. Using this design process, graphic designers create content which helps improve meaningful visual communication between content creators and their audiences.
https://en.wikipedia.org/wiki/Social_semiotics
Universal language may refer to a hypothetical or historical language spoken and understood by all or most of the world's people. In some contexts, it refers to a means of communication said to be understood by all humans. It may be the idea of an international auxiliary language for communication between groups speaking different primary languages. A similar concept can be found in pidgin language, which is actually used to facilitate understanding between two or more people with no common language. In other conceptions, it may be the primary language of all speakers, or the only existing language. Some religious and mythological traditions state that there was once a single universal language among all people, or shared by humans and supernatural beings. In other traditions, there is less interest in, or a general deflection of, the question.
The written Classical Chinese language is still read widely but pronounced differently by readers in China, Vietnam, Korea and Japan; for centuries it was a de facto universal literary language for a broad-based culture. In something of the same way, Sanskrit in India and Nepal, Pali in Sri Lanka and in Theravada countries of South-East Asia (Burma, Thailand, Cambodia), and Old Tamil in South India and Sri Lanka were literary languages for many for whom they were not their mother tongue. Comparably, the Latin language (qua Medieval Latin) was in effect a universal language of literati in the Middle Ages, and the language of the Vulgate Bible in the area of Catholicism, which covered most of Western Europe and parts of Northern Europe and Central Europe. In a more practical fashion, trade languages, such as ancient Koine Greek, may be seen as a kind of real universal language that was used for commerce.
In historical linguistics, monogenesis refers to the idea that all spoken human languages are descended from a single ancestral language spoken many thousands of years ago. Various religious texts, myths, and legends describe a state of humanity in which originally only one language was spoken. In Jewish and Christian beliefs, the story of the Tower of Babel tells of a consequent "confusion of tongues" (the splintering of numerous languages from an original Adamic language) as a punishment from God. Myths exist in other cultures describing the creation of multiple languages as an act of a god as well, such as the destruction of a "knowledge tree" by Brahma in Indic tradition, or as a gift from the god Hermes in Greek myth. Other myths describe the creation of different languages as concurrent with the creation of different tribes of people, or due to supernatural events.
Recognizable strands in the contemporary ideas on universal languages took form only in Early Modern Europe. In the early 17th century, some believed that a universal language would facilitate greater unity among mankind, largely due to the subsequent spread of religion, specifically Christianity, as espoused in the works of Comenius. But there were ideas of a universal language apart from religion as well. A lingua franca or trade language was nothing very new; but an international auxiliary language was a natural wish in light of the gradual decline of Latin. Literature in vernacular languages became more prominent with the Renaissance. Over the course of the 18th century, learned works largely ceased to be written in Latin. According to Colton Booth (Origin and Authority in Seventeenth-Century England (1994), p. 174), "The Renaissance had no single view of Adamic language and its relation to human understanding."
The question was more exactly posed in the work of Francis Bacon. In the vast writings of Gottfried Leibniz can be found many elements relating to a possible universal language, specifically a constructed language, a concept that gradually came to replace that of a rationalized Latin as the natural basis for a projected universal language. Leibniz conceived of a characteristica universalis (also see mathesis universalis), an "algebra" capable of expressing all conceptual thought. This algebra would include rules for symbolic manipulation, what he called a calculus ratiocinator. His goal was to put reasoning on a firmer basis by reducing much of it to a matter of calculation that many could grasp. The characteristica would build on an alphabet of human thought.
Leibniz's work is bracketed by some earlier mathematical ideas of René Descartes and the satirical attack of Voltaire on Panglossianism. Descartes's ambitions were far more modest than Leibniz's, and also far more successful, as shown by his wedding of algebra and geometry to yield what we now know as analytic geometry. Decades of research on symbolic artificial intelligence have not brought Leibniz's dream of a characteristica any closer to fruition. Other 17th-century proposals for a "philosophical" (i.e. universal) language include those by Francis Lodwick, Thomas Urquhart (possibly parodic), George Dalgarno (Ars signorum, 1661), and John Wilkins (An Essay towards a Real Character and a Philosophical Language, 1668). The classification scheme in Roget's Thesaurus ultimately derives from Wilkins's Essay.
Candide, a satire written by Voltaire, took aim at Leibniz as Dr. Pangloss, with the choice of name clearly putting universal language in his sights, but satirizing mainly the optimism of the projector as much as the project. The argument takes the universal language itself no more seriously than the ideas of the speculative scientists and virtuosi of Jonathan Swift's Laputa. For the like-minded of Voltaire's generation, universal language was tarred as fool's gold with the same brush as philology with little intellectual rigour, and universal mythography, as futile and arid directions.
In the 18th century, some rationalist natural philosophers sought to recover a supposed Edenic language. It was assumed that education inevitably took people away from an innate state of goodness they possessed, and therefore there was an attempt to see what language a human child brought up in utter silence would speak. This was assumed to be the Edenic tongue, or at least the lapsarian tongue. Others attempted to find a common linguistic ancestor to all tongues; there were, therefore, multiple attempts to relate esoteric languages to Hebrew (e.g. Basque and Irish), as well as the beginnings of comparative linguistics. The constructed language movement produced such languages as Esperanto (1887), Latino sine flexione (1903), Ido (1907), Interlingue (1922), and Interlingua (1951).[1]
English remains the dominant language of international business and global communication through the influence of global media and the former British Empire, which had established the use of English in regions around the world such as North America, Africa, Australia and New Zealand. However, English is not the only language used in major international organizations, because many countries do not recognize English as a universal language. For instance, the United Nations uses six languages: Arabic, Chinese, English, French, Russian, and Spanish.
The early ideas of a universal language with complete conceptual classification by categories are still debated on various levels.Michel Foucaultbelieved such classifications to be subjective, citingBorges' fictionalCelestial Emporium of Benevolent Knowledge's Taxonomyas an illustrative example.
https://en.wikipedia.org/wiki/Universal_language
Anacademic conferenceorscientific conference(alsocongress,symposium,workshop, ormeeting) is aneventforresearchers(not necessarilyacademics) to present and discuss their scholarly work. Together withacademicorscientific journalsandpreprintarchives, conferences provide an important channel for exchange of information between researchers. Further benefits of participating in academic conferences include learning effects in terms of presentation skills and "academichabitus", receiving feedback from peers for one's own research, the possibility to engage in informal communication with peers about work opportunities and collaborations, and getting an overview of current research in one or moredisciplines.[1][2] The first international academic conferences and congresses appeared in the 19th century.[3] Conferences usually encompass variouspresentations. They tend to be short and concise, with a time span of about 10 to 30 minutes;presentationsare usually followed by adiscussion. The work may be bundled in written form asacademic papersandpublishedas the conferenceproceedings. Usually a conference will includekeynote speakers(often, scholars of some standing, but sometimes individuals from outside academia). The keynote lecture is often longer, lasting sometimes up to an hour and a half, particularly if there are several keynote speakers on apanel. In addition to presentations, conferences also featurepanel discussions,round tableson various issues,poster sessionsand workshops. Some conferences take more interactive formats, such as the participant-driven "unconference" or various conversational formats.[4] Academic conferences have been held in three general formats: in-person,virtual or onlineandhybrid(in-person and virtual). Conferences have traditionally been organized in-person. Since theCOVID-19 pandemic, many conferences have either temporarily or permanently switched to a virtual or hybrid format. Some virtual conferences involve bothasynchronous and synchronousformats. For example, there is a mix of pre-recorded and live presentations.[5] Because virtual or hybrid events allow people from different time zones to participate simultaneously, some will have to participate during their night-time. Some virtual conferences try to mitigate this issue by alternating their schedule so that everyone has the chance to participate during daytime at least once.[6][7] Prospectivepresentersare usually asked to submit a shortabstractof their presentation, which will be reviewed before the presentation is accepted for the meeting. Some organizers, and therefore disciplines, require presenters to submit a paper, which ispeer reviewedby members of theprogram committeeor referees chosen by them. In some disciplines, such as English and other languages, it is common for presenters to read from a prepared script. In other disciplines such as thesciences, presenters usually base their talk around a visual presentation that displays key figures and research results. A large meeting will usually be called a conference, while a smaller one is termed a workshop. They might besingle trackormultiple track, where the former has only one session at a time, while a multiple track meeting has several parallel sessions with speakers in separate rooms speaking at the same time. However, there are no commonly shared definitions even within disciplines for each event type. There might be no clear difference between a symposium, a congress or a conference.
The larger the conference, the more likely it is thatacademic publishing housesmay set up displays. Large conferences may also have career and job search and interview activities. At some conferences, social or entertainment activities such as tours and receptions can be part of the program. Business meetings forlearned societies,interest groups, oraffinity groups[8]can also be part of the conference activities. Academic conferences typically fall into three categories: Increasing numbers ofamplified conferencesare being provided that exploit the potential of WiFi networks and mobile devices in order to enable remote participants to contribute to discussions and listen to ideas. Technology for meeting previously unknown people at a conference has also been implemented using active RFID electronic tags, which can identify participants who have willingly opted in and indicate their relative location as they approach one another. Conferences are usually organized either by a scientific society or by a group of researchers with a common interest. Larger meetings may be handled on behalf of the scientific society by aProfessional Conference Organiseror PCO.[9] The meeting is announced by way of a Call For Papers (CFP) or a Call For Abstracts, which is sent to prospective presenters and explains how to submit their abstracts or papers. It describes the broad theme and lists the meeting's topics and formalities such as what kind of abstract or paper has to be submitted, to whom, and by whatdeadline. A CFP is usually distributed using a mailing list or on specialized online services such as Call for Papers[10](CFPs) Index. Contributions are usually submitted using an onlineabstract or paper managementservice such asSubmit A Manuscript[11]orConference Submissionsystem. Predatory conferences or predatory meetings are meetings set up to appear as legitimatescientific conferencesbut which are exploitative as they do not provide proper editorial control over presentations, and advertising can include claims of involvement of prominent academics who are, in fact, uninvolved. They are an expansion of thepredatory publishingbusiness model, which involves the creation of academic publications built around an exploitative business model that generally involves charging publication fees to authors without providing the editorial and publishing services associated with legitimate journals.[12][13]BIT Life SciencesandSCIgenare some of the conferences labeled as predatory. Academic conferences are criticized for being environmentally unfriendly, due to the amount of airplane traffic generated by them.[14]A correspondence onNature.compoints out the "paradox of needing to fly to conferences" despite increased calls for sustainability by environmental scientists.[15][16]The academic community'scarbon footprintconsists in large part of emissions caused by air travel.[17]Few conferences enacted practices to reduce their environmental impact by 2017, despite guidelines being widely available: An analysis of academic conferences taking place in 2016 showed that only 4% of 116 conferences sampled offeredcarbon offsetoptions and only 9% of these conferences implemented any form of action to reduce their environmental impact.[16]More conferences included the use ofteleconferencingafter the COVID-19 pandemic.
In-person conferences suffer from a number of issues.[18]Most importantly, they foster existing social inequality in academia because they are inaccessible to researchers from low-income countries, researchers with care duties, and researchers facing visa restrictions.
https://en.wikipedia.org/wiki/Academic_conference
Academic writingorscholarly writingrefers primarily tononfictionwriting that is produced as part of academic work in accordance with the standards of a particular academic subject or discipline, including: as well asundergraduateversions of all of these.[1] Academic writing typically uses a more formal tone and follows specific conventions. Central to academic writing is its intertextuality, or an engagement with existing scholarly conversations through meticulous citing or referencing of other academic work, which underscores the writer's participation in the broader discourse community. However, the exact style, content, and organization of academic writing can vary depending on the specific genre and publication method. Despite this variation, all academic writing shares some common features,[2][page needed]including a commitment to intellectual integrity, the advancement of knowledge, and the rigorous application of disciplinary methodologies. Academic writing often features proseregisterthat is conventionally characterized by "evidence...that the writer(s) have been persistent, open-minded and disciplined in the study"; that prioritizes "reason over emotion or sensual perception"; and that imagines a reader who is "coolly rational, reading for information, and intending to formulate a reasoned response."[3] Three linguistic patterns[4][page needed]that correspond to these goals across fields and genres, include the following: The stylistic means of achieving these conventions will differ by academic discipline, seen, for example, in the distinctions between writing in history versus engineering, or writing in physics versus philosophy.[8][page needed]Biber and Gray propose further differences in the complexity of academic writing between disciplines, seen, for example, in the distinctions between writing in thehumanitiesversus writing in thesciences. In the humanities, academic style is often seen in elaborated complex texts, while in the sciences, academic style is often seen in highly structured concise texts. These stylistic differences are thought to be related to the types of knowledge and information being communicated in these two broad fields.[9] One theory that attempts to account for these differences in writing is known as "discourse communities".[10] Academic style has often been criticized for being too full ofjargonand hard to understand by the general public.[11][12]In 2022, Joelle Renstrom argued that theCOVID-19 pandemichas had a negative impact on academic writing and that many scientific articles now "contain more jargon than ever, which encourages misinterpretation, political spin, and a declining public trust in the scientific process."[13] Adiscourse communityis a group of people that shares mutual interests and beliefs. "It establishes limits and regularities...who may speak, what may be spoken, and how it is to be said; in addition, [rules] prescribe what is true and false, what is reasonable and what foolish, and what is meant and what not."[14] The concept of a discourse community is vital to academic writers across all disciplines, for the academic writer's purpose is to influence how their community understands its field of study: whether by maintaining, adding to, revising, or contesting what that community regards as "known" or "true." To effectively communicate and persuade within their field, academic writers are motivated to adhere to the conventions and standards set forth by their discourse community. 
Such adherence ensures that their contributions are intelligible and recognized as legitimate. Constraints are the discourse community's accepted rules and norms of writing that determine what can and cannot be said in a particular field or discipline. They define what constitutes an acceptable argument. Every discourse community expects to see writers construct their arguments using the community's conventional style of language, vocabulary, and sources, which are the building blocks of any argument in that community.[15] STEM writing often follows strict formats, like the IMRD structure. This format helps organize ideas clearly and also makes research easier to repeat and check.[16] For writers to become familiar with some of the constraints of the discourse community they are writing for, across most discourse communities, writers must: The structure and presentation of arguments can vary based on the discourse community the writer is a part of. For example, a high school student would typically present arguments differently than a college student. It is important for academic writers to familiarize themselves with the conventions of their discourse community by analyzing existing literature within the field. Such an in-depth understanding will enable writers to convey their ideas and arguments more effectively, ensuring that their contributions resonate with and are valued by their peers in the discourse community. Writing Across the Curriculum(WAC) is a comprehensive educational initiative designed not only to enhance student writing proficiency across diverse disciplinary contexts but also to foster faculty development and interdisciplinary dialogue.[17][18][7]The Writing Across the Curriculum Clearinghouse provides resources for such programs at all levels of education. Collaboration between writing centers and STEM faculty help students follow writing rules in their fields. Programs like WATTS train peer tutors to give better feedback on academic papers. These programs focus on writing strategies rather than subject knowledge.[19] In a discourse community, academic writers build on the ideas of previous writers to establish their own claims. Successful writers know the importance of conducting research within their community and applying the knowledge gained to their own work. By synthesizing and expanding upon existing ideas, writers are able to make novel contributions to the discourse. Students improve by analyzing scientific data and solving real-world problems and hence create new ideas. It also helps them connect ideas from different subjects.[20] Intertextualityis the combining of past writings into original, new pieces of text. According toJulia Kristeva, all texts are part of a larger network of intertextuality, meaning they are connected to prior texts through various links, such as allusions, repetitions, and direct quotations, whether they are acknowledged or not.[17]Writers (often unwittingly) make use of what has previously been written and thus some degree of borrowing is inevitable. One of the key characteristics of academic writing across disciplines is the use of explicit conventions for acknowledging intertextuality, such ascitationand bibliography. The conventions for marking intertextuality vary depending on the discourse community, with examples including MLA, APA, IEEE, and Chicago styles. 
Summarizing and integrating other texts in academic writing is often metaphorically described as "entering the conversation," as described by Kenneth Burke:[18] "Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending on the quality of your ally's assistance. However, the discussion is interminable. The hour grows late, you must depart, with the discussion still vigorously in progress." In science writing, writers must connect their work to past research. This keeps their arguments relevant and also shows how ideas grow and change in science.[21] While the need for appropriate references and the avoidance of plagiarism are undisputed in academic and scholarly writing, the appropriate style is still a matter of debate. Some aspects of writing are universally accepted as important, while others are more subjective and open to interpretation. Academic writing encompasses many different genres, indicating the many different kinds of authors, audiences and activities engaged in the academy and the variety of kinds of messages sent among various people engaged in the academy. The partial list below indicates the complexity of academic writing and the academic world it is part of. STEM papers often focus on showing methods and data, but some teachers now ask students to explain their ideas better which helps students write like scientists.[26] These are acceptable to some academic disciplines, e.g.Cultural studies,Fine art,Feminist studies,Queer theory,Literary studies Participating in higher education writing can entail high stakes. For instance, one's GPA may be influenced by writing performance in a class and the consequent grade received, potentially stirring negative emotions such as confusion and anxiety. Research on emotions and writing indicates that there is a relationship between writing identity and displaying emotions within an academic atmosphere. Instructors cannot simply read off one's identity and determine how it should be formatted. The structure of higher education, particularly within universities, is in a state of continual evolution, shaping and developing student writing identities.[27]Nevertheless, this dynamic can lead to a positive contribution to one's academic writing identity in higher education.[28]Unfortunately, higher education does not value mistakes, which makes it difficult for students to discover an academic identity. This can lead to a lack of confidence when submitting assignments. A student must learn to be confident enough to adapt and refine previous writing styles to succeed.[29] Academic writing can be seen as stressful, uninteresting, and difficult. When placed in the university setting, these emotions can contribute to student dropout. 
However, academic writing development can prevent fear and anxiety from developing if self-efficacy is high and anxiety is low.[30]External factors can also prevent enjoyment in academic writing, including difficulty finding time and space to complete assignments. Studies have shown that core members of a "community of practice" concerned with writing reports have a more positive experience than those who are not.[31]Overall emotions, lack of confidence, and prescriptive notions about what an academic writing identity should resemble can hinder a student's ability to succeed. Confidence in writing helps students do better. Programs with feedback and teamwork lower anxiety; these programs also help students feel more comfortable with their writing.[32] A commonly recognized format for presenting original research in the social and applied sciences is known asIMRD, an initialism that refers to the usual ordering of subsections: Introduction, Methods, Results, and Discussion. Standalone methods sections are atypical in presenting research in the humanities; other common formats in the applied and social sciences are IMRAD (which offers an "Analysis" section separate from the implications presented in the "Discussion" section) and IRDM (found in some engineering subdisciplines, which features Methods at the end of the document). Other common sections in academic documents are:
https://en.wikipedia.org/wiki/Academic_writing
Journalology(also known aspublication science) is the scholarly study of all aspects of theacademic publishingprocess.[1][2]The field seeks to improve the quality of scholarly research by implementingevidence-based practicesin academic publishing.[3]The term "journalology" was coined byStephen Lock, the formereditor-in-chiefofthe BMJ. The first Peer Review Congress, held in 1989 inChicago,Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[3]The field of journalology has been influential in pushing for studypre-registrationin science, particularly inclinical trials.Clinical trial registrationis now expected in most countries.[3]Journalology researchers also work to reform thepeer reviewprocess. The earliest scientific journals were founded in the seventeenth century. While most early journals usedpeer review, peer review did not become common practice in medical journals until afterWorld War II.[4]The scholarly publishing process (including peer review) did not arise by scientific means and still suffers from problems with reliability (consistency and dependability),[5]such as a lack of uniform standards, and with validity (whether its practices are well-founded and efficacious).[6][7]Attempts to reform academic publishing practice began to gain traction in the late twentieth century.[8]The field of journalology was formally established in 1989.[3]
https://en.wikipedia.org/wiki/Journalology
"Publish or perish" is anaphorismdescribing the pressure topublish academic workin order to succeed in anacademic career.[1][2][3]Such institutional pressure is generally strongest atresearch universities.[4]Some researchers have identified the publish or perish environment as a contributing factor to thereplication crisis.[5] Successful publications bring attention to scholars and their sponsoring institutions, which can help continued funding and their careers. In popular academic perception, scholars who publish infrequently, or who focus on activities that do not result in publications, such as instructingundergraduates, may lose ground in competition for available tenure-track positions. The pressure to publish has been cited as a cause of poor work being submitted toacademic journals.[6]The value of published work is often determined by the prestige of the academic journal it is published in. Journals can be measured by theirimpact factor (IF), which is the average number of citations to articles published in a particular journal over the last two years.[7] The pressure to publish has been strongly criticized on the basis that over-emphasis on publishing may decrease the value of resulting scholarship, as scholars must spend more time scrambling to publish whatever they can get into print, rather than spending time developing significant research agendas.[8]Similarly, humanities scholarCamille Pagliahas described the publish or perish paradigm as "tyranny" and further writes that "The [academic] profession has become obsessed with quantity rather than quality. ... One brilliant article should outweigh one mediocre book."[9] The pressure to publish or perish also detracts from the time and effort professors can devote to teaching undergraduate courses and mentoring graduate students. The rewards for exceptional teaching rarely match the rewards for exceptional research, which encourages faculty to favor the latter whenever they conflict.[10] Also, publish-or-perish is linked toscientific misconductor at least questionable ethics.[11]It has also been argued that the quality of scientific work has suffered due to publication pressures. PhysicistPeter Higgs, namesake of theHiggs boson, was quoted in 2013 as saying that academic expectations since the 1990s would likely have prevented him from both making his groundbreaking research contributions and attainingtenure: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I didin 1964... Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."[12] According to some researchers, the publish or perish culture might also perpetuate bias in academic institutions. 
Overall, women publish less frequently than men, and when they do publish, their work receives fewer citations than that of their male counterparts, even when it is published in journals with significantly higherimpact factors.[13]Furthermore, one study pointed out that gaps in the promotion and progress of women in academic medicine may be significantly influenced by gender-based variances in article citations.[14] Research-oriented universities may attempt to manage the unhealthy aspects of publish or perish practices, but their administrators often argue that some pressure to produce cutting-edge research is necessary to motivate scholars early in their careers to focus on research advancement, and learn to balance its achievement with the other responsibilities of the professorial role. The call to abolishtenureis very much a minority opinion in such settings.[15] TheMIT Media Lab's directorNicholas Negroponteinstituted the motto "demo or die", privilegingdemonstrationsover publication.[16]Another director,Joi Ito, modified this to "deploy or die", emphasizing the adoption of the technology.[17] In 2024, the card game "Publish or Perish" raised more than 280,000 dollars in funding onKickstarter.[18]In this game, players aim to publish articles of moderate quality in order to accumulate citations.[19] The earliest known use of the term in an academic context was in a 1928 journal article.[20][21]The phrase appeared in a non-academic context in the 1932 book,Archibald Cary Coolidge: Life and Letters,by Harold Jefferson Coolidge.[22]In 1938, the phrase appeared in a college-related publication.[23]According to Eugene Garfield, the expression first appeared in an academic context in Logan Wilson's book, "The Academic Man: A Study in the Sociology of a Profession", published in 1942.[24]Others have attributed the phrase toColumbia UniversitygeneticistKimball C. Atwood III.[25][26][27]
https://en.wikipedia.org/wiki/Publish_or_perish
Athesis(pl.:theses), ordissertation[note 1](abbreviateddiss.),[2]is a document submitted in support of candidature for anacademic degreeor professional qualification presenting the author's research and findings.[3]In some contexts, the wordthesisor a cognate is used for part of abachelor'sormaster'scourse, whiledissertationis normally applied to adoctorate. This is the typical arrangement inAmerican English. In other contexts, such as within most institutions of theUnited Kingdom,South Africa, theCommonwealth Countries, and Brazil, the reverse is true.[4][5]The termgraduate thesisis sometimes used to refer to both master's theses and doctoral dissertations.[6] The required complexity or quality of research of a thesis or dissertation can vary by country, university, or program, and the required minimum study period may thus vary significantly in duration. The worddissertationcan at times be used to describe atreatisewithout relation to obtaining an academic degree. The termthesisis also used to refer to the general claim of an essay or similar work. The termthesiscomes from the Greek wordθέσις, meaning "something put forth", and refers to an intellectualproposition.Dissertationcomes from theLatindissertātiō, meaning "discussion".Aristotlewas the first philosopher to define the term thesis. A 'thesis' is a supposition of some eminent philosopher that conflicts with the general opinion... for to take notice when any ordinary person expresses views contrary to men's usual opinions would be silly.[7] For Aristotle, a thesis would therefore be a supposition that is stated in contradiction with general opinion or express disagreement with other philosophers (104b33-35). A supposition is a statement or opinion that may or may not be true depending on the evidence and/or proof that is offered (152b32).[relevant?]The purpose of the dissertation is thus to outline the proofs of why the author disagrees with other philosophers or the general opinion.[original research?] A thesis (or dissertation) may be arranged as athesis by publicationor amonograph, with or without appended papers, respectively, though many graduate programs allow candidates to submit a curatedcollection of articles. An ordinary monograph has atitle page, anabstract, atable of contents, comprising the various chapters like introduction, literature review, methodology, results, discussion, andbibliographyor more usually a references section. They differ in their structure in accordance with the many different areas of study (arts, humanities, social sciences, technology, sciences, etc.) and the differences between them. In a thesis by publication, the chapters constitute an introductory and comprehensive review of the appended published and unpublished article documents. Dissertations normally report on a research project or study, or an extended analysis of a topic. The structure of a thesis or dissertation explains the purpose, the previous research literature impinging on the topic of the study, the methods used, and the findings of the project. Most world universities use a multiple chapter format: Degree-awarding institutions often define their ownhouse stylethat candidates have to follow when preparing a thesis document. 
In addition to institution-specific house styles, there exist a number of field-specific, national, and international standards and recommendations for the presentation of theses, for instanceISO 7144.[3]Other applicable international standards includeISO 2145on section numbers,ISO 690on bibliographic references, andISO 31or its revisionISO 80000on quantities or units. Some older house styles specify thatfront matter(title page, abstract, table of content, etc.) must use a separate page number sequence from the main text, usingRoman numerals. The relevant international standard[3]and many newer style guides recognize that thisbook designpractice can cause confusion where electronic document viewers number all pages of a document continuously from the first page, independent of any printed page numbers. They, therefore, avoid the traditional separate number sequence for front matter and require a single sequence ofArabic numeralsstarting with 1 for the first printed page (therectoof the title page). Presentation requirements, including pagination, layout, type and color of paper, use ofacid-free paper(where a copy of the dissertation will become a permanent part of the library collection),paper size, order of components, and citation style, will be checked page by page by the accepting officer before the thesis is accepted and a receipt is issued. However, strict standards are not always required. Most Italian universities, for example, have only general requirements on the character size and the page formatting, and leave much freedom for the actual typographic details.[10] Increasingly, academic institutions are accepting digital andmultimodaldissertations that include elements such as video, audio, or interactive software.[11] Thethesis committee(ordissertation committee) is a committee that supervises a student's dissertation. In the US, these committees usually consist of a primary supervisor oradvisorand two or more committee members, who supervise the progress of the dissertation and may also act as the examining committee, or jury, at the oral examination of the thesis (see§ Thesis examinations). At most universities, the committee is chosen by the student in conjunction with their primary adviser, usually after completion of thecomprehensive examinationsor prospectus meeting, and may consist of members of the comps committee. The committee members are doctors in their field (whether a PhD or other designation) and have the task of reading the dissertation, making suggestions for changes and improvements, and sitting in on the defense. Sometimes, at least one member of the committee must be a professor in a department that is different from that of the student. The role of the thesis supervisor is to assist and support a student in their studies, and to determine whether a thesis is ready for examination.[12]The thesis is authored by the student, not the supervisor. 
The duties of the thesis supervisor also include checking for copyright compliance and ensuring that the student has included in/with the thesis a statement attesting that he/she is the sole author of the thesis.[13] In theLatin American docta, the academic dissertation may correspond to different stages within the academic program that the student is pursuing at a recognizedArgentine University; in all cases, students must make an original contribution to their chosen field through a body of papers and essays that together comprise thethesis.[14]Corresponding to the academic degree, the last phase of an academic thesis is called in Spanish adefensa de grado,defensa magistralordefensa doctoralin cases in which the university candidate is finalizing theirlicentiate,master's, orPhD program, respectively. Following a committee resolution, the dissertation can be approved or rejected by an academic committee consisting of the thesis director and at least one evaluator. All the dissertation referees must already have achieved at least the academic degree that the candidate is trying to reach.[15] At English-speakingCanadian universities, writings presented in fulfillment ofundergraduatecoursework requirements are normally calledpapers,term papersoressays. A longer paper or essay presented for completion of a 4-year bachelor's degree is sometimes called amajor paper. High-quality research papers presented as the empirical study of a "postgraduate" consecutive bachelorwith Honoursor Baccalaureatus Cum Honore degree are calledthesis(Honours Seminar Thesis). Major papers presented as the final project for a master's degree are normally calledthesis; and major papers presenting the student's research towards adoctoral degreeare calledthesesordissertations. At French-language universities, for the fulfillment of a master's degree, students can present a "mémoire" or a shorter "essai" (the latter requires the student to take more courses).[16]For the fulfillment of a doctoral degree, they may present a"thèse"or an"essai doctoral"(here too, the latter requires more courses).[17]All these documents are usually synthetic monographs related to the student's research work. A typical undergraduate paper or essay might be forty pages. Master's theses are approximately one hundred pages. PhD theses are usually over two hundred pages. This may vary greatly by discipline, program, college, or university. A study published in 2021 found that inQuébec universities, between 2000 and 2020, master's and PhD theses averaged 127.4 and 245.6 pages respectively.[18] Theses Canada acquires and preserves a comprehensive collection of Canadian theses atLibrary and Archives Canada(LAC) through a partnership with Canadian universities that participate in the program.[19]Most theses can also be found in the institutional repository of the university the student graduated from.[20] At most universityfacultiesin Croatia, a degree is obtained by defending a thesis after having passed all the classes specified in the degree programme. In theBolognasystem, the bachelor's thesis, calledzavršni rad(literally "final work" or "concluding work"), is defended after 3 years of study and is about 30 pages long. Most students with bachelor's degrees continue on to master's programmes, which end with a master's thesis calleddiplomski rad(literally "diploma work" or "graduate work"). The term dissertation is used for a doctoral degree paper (doktorska disertacija).
In the Czech Republic, higher education is completed by passing all classes remaining to the educational compendium for given degree and defending a thesis. Forbachelorsprogramme the thesis is calledbakalářská práce(bachelor's thesis), for master's degrees and also doctor of medicine or dentistry degrees it is thediplomová práce(master's thesis), and for Philosophiae doctor (PhD.) degree it is dissertationdizertační práce. Thesis for so called Higher-Professional School (Vyšší odborná škola, VOŠ) is calledabsolventská práce. The following types of thesis are used in Finland (names in Finnish/Swedish): In France, the academic dissertation or thesis is called athèseand it is reserved for the final work of doctoral candidates. The minimum page length is generally (and not formally) 100 pages (or about 400,000 characters), but is usually several times longer (except for technical theses and for "exact sciences" such as physics and maths). To complete a master's degree in research, a student is required to write amémoire, the French equivalent of a master's thesis in other higher education systems. The worddissertationin French is reserved for shorter (1,000–2,000 words), more generic academic treatises. The defense is called asoutenance. Since 2023, at the end of the admission process, the doctoral student takes an oath of commitment to the principles of scientific integrity[22] In the presence of my peers. With the completion of my doctorate in [research field], in my quest for knowledge, I have carried out demanding research, demonstrated intellectual rigour, ethical reflection, and respect for the principles of research integrity. As I pursue my professional career, whatever my chosen field, I pledge, to the greatest of my ability, to continue to maintain integrity in my relationship to knowledge, in my methods and in my results. In Germany, an academic thesis is calledAbschlussarbeitor, more specifically, the basic name of the degree complemented by-arbeit(rough translation:-work; e.g.,Diplomarbeit,Masterarbeit,Doktorarbeit). For bachelor's and master's degrees, the name can alternatively be complemented by-thesisinstead (e.g.,Bachelorthesis). Length is often given in page count and depends upon departments, faculties, and fields of study. A bachelor's thesis is often 40–60 pages long, adiploma thesisand a master's thesis usually 60–100. The required submission for a doctorate is called aDissertationorDoktorarbeit. The submission for aHabilitation, which is an academic qualification, not an academic degree, is calledHabilitationsschrift, notHabilitationsarbeit.[23][24] A doctoral degree is often earned with multiple levels of aLatin honorsremark for the thesis ranging fromsumma cum laude(best) torite(duly). A thesis can also be rejected with a Latin remark (non-rite,non-sufficitor worst assub omni canone). Bachelor's and master's theses receivenumerical gradesfrom 1.0 (best) to 5.0 (failed). In India the thesis defense is called aviva voce(Latinfor "by live voice") examination (vivain short). Involved in thevivaare two examiners, one guide (student guide) and the candidate. One examiner is an academic from the candidate's own university department (but not one of the candidate's supervisors) and the other is anexternal examinerfrom a different university.[25] In India, PG Qualifications such as MSc Physics accompanies submission of dissertation in Part I and submission of a Project (a working model of an innovation) in Part II. 
Engineering and design qualifications such as BTech, B.E., B.Des, MTech, M.E. or M.Des also involve submission of a dissertation. In all cases, the dissertation can be extended through a summer internship at certainresearch and developmentorganizations or used as a PhD synopsis. InIndonesia, the termthesisis used specifically to refer to master's theses. Theundergraduatethesis is calledskripsi, while the doctoral dissertation is calleddisertasi. In general, those three terms are usually referred to astugas akhir(final assignment), which is mostly mandatory for the completion of adegree. Undergraduate students usually begin to write their final assignment in their third, fourth or fifth enrollment year, depending on the requirements of their respective disciplines and universities. In some universities, students are required to write aproposal skripsiorproposal tesis(thesis proposal) before they can write their final assignment. If the academic examiners consider that the thesis proposal fulfills the qualification requirements, students may then proceed to write their final assignment. InIran, students are usually required to present a thesis (Persian:پایان‌نامهpāyān-nāmeh) in their master's degree and a dissertation (رسالهresāleh) in their Doctorate degree, both of which require the student to defend their research before a committee and gain its approval. Most of the norms and rules of writing a thesis or a dissertation are influenced by the French higher education system.[citation needed] In Italy there are normally three types of thesis. In order of complexity: one for the Laurea (equivalent to the UK Bachelor's Degree), another one for the Laurea Magistrale (equivalent to the UK Master's Degree) and then a thesis to complete the Dottorato di Ricerca (PhD). Thesis requirements vary greatly between degrees and disciplines, ranging from as low as 3–4 ECTS credits to more than 30. Thesis work is mandatory for the completion of a degree. In Kazakhstan, a bachelor's degree typically requires abachelor's diploma work(kz "бакалаврдың дипломдық жұмысы"), while master's and PhD degrees require amaster's/doctoral dissertation(kz "магистрлік/докторлық диссертация"). All the works are publicly presented at the end of the training to a special council, which thoroughly examines the work. PhD candidates may be allowed to present their work without a written thesis if they provide enough publications in leading journals of the field, one of which should specifically be a review article.[26] Malaysian universities often follow the British model for dissertations and degrees. However, a few universities follow the United States model for theses and dissertations. Some public universities have both British and US style PhD programs. Branch campuses of British, Australian and Middle East universities in Malaysia use the respective models of the home campuses. In Pakistan, at the undergraduate level the thesis is usually called a final year project, as it is completed in the senior year of the degree; the name project usually implies that the work carried out is less extensive than a thesis and carries fewer credit hours too. The undergraduate level project is presented through an elaborate written report and a presentation to the advisor, a board of faculty members and students. At the graduate level, however, i.e.
in MS, some universities allow students to accomplish a project of 6 credits or a thesis of 9 credits, at least one publication[citation needed]is normally considered enough for the awarding of the degree with project and is considered mandatory for the awarding of a degree with thesis. A written report and a public thesis defense is mandatory, in the presence of a board of senior researchers, consisting of members from an outside organization or a university. A PhD candidate is supposed to accomplish extensive research work to fulfill the dissertation requirements with international publications being a mandatory requirement. The defense of the research work is done publicly. In the Philippines, an academic thesis is named by the degree, such as bachelor/undergraduate thesis or masteral thesis. However, inPhilippine English, the termdoctorateis typically replaced withdoctoral(as in the case of "doctoral dissertation"), though in official documentation the former is still used. The termsthesisanddissertationare commonly used interchangeably in everyday language yet it is generally understood that athesisrefers to bachelor/undergraduate and master academic work while adissertationis named for doctorate work. The Philippine system is influenced by American collegiate system, in that it requires aresearch projectto be submitted before being allowed to write a thesis. This project is mostly given as a prerequisite writing course to the actual thesis and is accomplished in the term period before; supervision is provided by one professor assigned to a class. This project is later to be presented in front of an academic panel, often the entire faculty of an academic department, with their recommendations contributing to the acceptance, revision, or rejection of the initial topic. In addition, the presentation of the research project will help the candidate choose their primary thesis adviser. An undergraduate thesis is completed in the final year of the degree alongside existing seminar (lecture) or laboratory courses, and is often divided into two presentations:proposalandthesispresentations (though this varies across universities), whereas a master thesis or doctorate dissertation is accomplished in the last term alone and is defended once. In most universities, a thesis is required for the bestowment of a degree to a candidate alongside a number of units earned throughout their academic period of stay, though for practice and skills-based degrees apracticumand a written report can be achieved instead. The examination board often consists of three to five examiners, often professors in a university (with a Masters or PhD degree) depending on the university's examination rules. Required word length, complexity, and contribution to scholarship varies widely across universities in the country. In Poland, a bachelor's degree usually requires apraca licencjacka(bachelor's thesis) or the similar level degree in engineering requires apraca inżynierska(engineer's thesis/bachelor's thesis), the master's degree requires apraca magisterska(master's thesis). The academic dissertation for a PhD is called adysertacjaorpraca doktorska. The submission for theHabilitationis calledpraca habilitacyjnaordysertacja habilitacyjna. Thus the termdysertacjais reserved for PhD and Habilitation degrees. All the theses need to be "defended" by the author during a special examination for the given degree. Examinations for PhD and Habilitation degrees are public. 
In Portugal and Brazil, a dissertation (dissertação) is required for completion of a master's degree. The defense is done in a public presentation in which teachers, students, and the general public can participate. For the PhD, a thesis (tese) is presented for defense in a public exam. The exam typically extends over 3 hours. The examination board typically involves 5 to 6 scholars (including the advisor) or other experts with a PhD degree (generally at least half of them must be external to the university where the candidate defends the thesis, but it may depend on the university). Each university / faculty defines the length of these documents, and it can vary also in respect to the domains (a thesis in fields like philosophy, history, geography, etc., usually has more pages than a thesis in mathematics, computer science, statistics, etc.) but typical numbers of pages are around 60–80 for MSc and 150–250 for PhD.[citation needed] In Brazil the Bachelor's Thesis is calledTCCorTrabalho de Conclusão de Curso(Final Term / Undergraduate Thesis / Final Paper).[27] In Russia, Belarus, and Ukraine an academic dissertation or thesis is called what can be literally translated as a "master's degree work" (thesis), whereas the worddissertationis reserved for doctoral theses (Candidate of Sciences). To complete both bachelor's and master's degrees, a student is required to write a thesis and to then defend the work publicly. The length of this manuscript is usually given in page count and depends upon the educational institution, its departments, faculties, and fields of study.[citation needed] At universities in Slovenia, an academic thesis calleddiploma thesisis a prerequisite for completing undergraduate studies. The thesis used to be 40–60 pages long, but has been reduced to 20–30 pages in newBologna processprogrammes. To complete Master's studies, a candidate must write amagistrsko delo(Master's thesis), which is longer and more detailed than the undergraduate thesis. The required submission for the doctorate is calleddoktorska disertacija(doctoral dissertation). In pre-Bologna programmes, students were able to skip the preparation and presentation of a Master's thesis and continue straight on towards the doctorate. In Slovakia, higher education is completed by defending a thesis, which is called abachelor's thesis("bakalárska práca") for bachelor's programmes, a master's thesis ("diplomová práca") for master's degrees and also for doctor of medicine or dentistry degrees, and a dissertation ("dizertačná práca") for the Philosophiae doctor (PhD.) degree. In Sweden, there are different types of theses. Practices and definitions vary between fields but commonly include theC thesis/Bachelor thesis, which corresponds to 15 HP or 10 weeks of independent studies; theD thesis/Magister/one-year master's thesis, which corresponds to 15 HP or 10 weeks of independent studies; and theE thesis/two-year master's thesis, which corresponds to 30 HP or 20 weeks of independent studies. The undergraduate theses are calleduppsats("essay"), sometimesexamensarbete, especially at technical programmes. After that there are two types of postgraduate theses:licentiatethesis (licentiatuppsats) and PhD dissertation (doktorsavhandling). A licentiate degree is approximately "half a PhD" in terms of the size and scope of the thesis. Swedish PhD studies should in theory last for four years, including course work and thesis work, but as many PhD students also teach, the PhD often takes longer to complete.
The thesis can be written as amonographor as acompilation thesis; in the latter case, the introductory chapters are called thekappa(literally "coat").[28] Outside the academic community, the termsthesisanddissertationare interchangeable. At universities in the United Kingdom, the termthesisis usually associated with PhD/EngD(doctoral) and research master's degrees, whiledissertationis the more common term for a substantial project submitted as part of a taught master's degree or anundergraduate degree(e.g.MSc,BA, BSc,BMus, BEd,BEngetc.). Thesis word lengths may differ by faculty/department and are set by individual universities. A wide range of supervisory arrangements can be found in the British academy, from single supervisors (more usual for undergraduate and Masters level work) to supervisory teams of up to three supervisors. In teams, there will often be a Director of Studies, usually someone with broader experience (perhaps having passed some threshold of successful supervisions). The Director may be involved with regular supervision along with the other supervisors, or may have more of an oversight role, with the other supervisors taking on the more day-to-day responsibilities of supervision. In some U.S. doctoral programs, the "dissertation" can take up the major part of the student's total time spent (along with two or three years of classes) and may take years of full-time work to complete. At most universities,dissertationis the term for the required submission for the doctorate, andthesisrefers only to the master's degree requirement. Thesisis also used to describe a cumulative project for a bachelor's degree and is more common at selective colleges and universities, or for those seeking admittance to graduate school or to obtain anhonorsacademic designation. These projects are called "senior projects" or "senior theses"; they are generally done in the senior year near graduation after having completed other courses, the independent study period, and the internship or student teaching period (the completion of most of the requirements before the writing of the paper ensures adequate knowledge and aptitude for the challenge). Unlike a dissertation or master's thesis, they are not as long and they do not require a novel contribution to knowledge or even a very narrow focus on a set subtopic. Like them, they can be lengthy and require months of work, they require supervision by at least one professor adviser, they must be focused on a certain area of knowledge, and they must use an appreciable amount of scholarly citations. They may or may not be defended before a committee but usually are not; there is generally no preceding examination before the writing of the paper, except for at very few colleges. Because of the nature of the graduate thesis or dissertation having to be more narrow and more novel, the result of original research, these usually have a smaller proportion of the work that is cited from other sources, though the fact that they are lengthier may mean they still have more total citations. Specific undergraduate courses, especially writing-intensive courses or courses taken by upperclassmen, may also require one or more extensive written assignments referred to variously as theses, essays, or papers. Increasingly, high schools are requiring students to complete a senior project or senior thesis on a chosen topic during the final year as a prerequisite for graduation. 
Theextended essaycomponent of theInternational Baccalaureate Diploma Programme, offered in a growing number of American high schools, is another example of this trend. Generally speaking, a dissertation is judged as to whether it makes an original and unique contribution to scholarship. Lesser projects (a master's thesis, for example) are judged by whether they demonstrate mastery of available scholarship in the presentation of an idea.[dubious–discuss] The required complexity or quality of research of a thesis may vary significantly among universities or programs. One of the requirements for certain advanced degrees is often an oral examination (called aviva voceexamination or justvivain the UK and certain other English-speaking countries). This examination normally occurs after the dissertation is finished but before it is submitted to the university, and may comprise a presentation (often public) by the student and questions posed by an examining committee or jury. In North America, an initial oral examination in the field of specialization may take place just before the student settles down to work on the dissertation. An additional oral exam may take place after the dissertation is completed and is known as athesis defenseordissertation defense, which at some universities may be a mere formality and at others may result in the student being required to make significant revisions. The result of the examination may be given immediately following deliberation by theexamination committee(in which case the candidate may immediately be considered to have received their degree), or at a later date, in which case the examiners may prepare a defense report that is forwarded to a Board or Committee of Postgraduate Studies, which then officially recommends the candidate for the degree. Potential decisions (or "verdicts") include: At most North American institutions the latter two verdicts are extremely rare, for two reasons. First, to obtain the status of doctoral candidates, graduate students typically pass a qualifying examination or comprehensive examination, which often includes an oral defense. Students who pass the qualifying examination are deemed capable of completing scholarly work independently and are allowed to proceed with working on a dissertation. Second, since the thesis supervisor (and the other members of the advisory committee) will normally have reviewed the thesis extensively before recommending the student to proceed to the defense, such an outcome would be regarded as a major failure not only on the part of the candidate but also by the candidate's supervisor (who should have recognized the substandard quality of the dissertation long before the defense was allowed to take place). It is also fairly rare for a thesis to be accepted without any revisions; the most common outcome of a defense is for the examiners to specify minor revisions (which the candidate typically completes in a few days or weeks). At universities on the British pattern it is not uncommon for theses at thevivastage to be subject to major revisions in which a substantial rewrite is required, sometimes followed by a newviva. Very rarely, the thesis may be awarded the lesser degree of M.Phil. (Master of Philosophy) instead, preventing the candidate from resubmitting the thesis. 
In Australia, doctoral theses are usually examined by three examiners (although some, like theAustralian Catholic University, theUniversity of New South Wales, andWestern Sydney Universityhave shifted to using only two examiners) without a live defense except in extremely rare exceptions. In the case of a master's degree by research, the thesis is usually examined by only two examiners. Typically, one of these examiners will be from within the candidate's own department; the other(s) will usually be from other universities and often from overseas. Following submission of the thesis, copies are sent by mail to examiners and then reports sent back to the institution. Similar to a thesis for a master's degree by research, a thesis for the research component of a master's degree by coursework is also usually examined by two examiners, one from the candidate's department and one from another university. For an Honours year, which is a fourth year in addition to the usual three-year bachelor's degree, the thesis is also examined by two examiners, though both are usually from the candidate's own department. Honours and Master's theses sometimes require an oral defense before they are accepted. In Germany, a thesis is usually examined with an oral examination. This applies to almost allDiplom,Magister, master's and doctoral degrees as well as to most bachelor's degrees. However, a process that allows for revisions of the thesis is usually only implemented for doctoral degrees. There are several different kinds of oral examinations used in practice. TheDisputation, also calledVerteidigung("defense"), is usually public (at least to members of the university) and is focused on the topic of the thesis. In contrast, theRigorosum(oral exam) is not held in public and also encompasses fields in addition to the topic of the thesis. TheRigorosumis only common for doctoral degrees. Another term for an oral examination isKolloquium, which generally refers to a usually public scientific discussion and is often used synonymously withVerteidigung. In each case, what exactly is expected differs between universities and between faculties. Some universities also demand a combination of several of these forms. Like the British model, the PhD or MPhil student is required to submit their thesis or dissertation for examination by two or three examiners. The first examiner is from the university concerned, the second examiner is from another local university and the third examiner is from a suitable foreign university (usually from Commonwealth countries). The choice of examiners must be approved by the university senate. In some public universities, a PhD or MPhil candidate may also have to show a number of publications in peer reviewed academic journals as part of the requirement. An oral viva is conducted after the examiners have submitted their reports to the university. The oral viva session is attended by the Oral Viva chairman, a rapporteur with a PhD qualification, the first examiner, the second examiner and sometimes the third examiner. Branch campuses of British, Australian and Middle East universities in Malaysia use the respective models of the home campuses to examine their PhD or MPhil candidates. In the Philippines, a thesis is followed by an oral defense. In most universities, this applies to all bachelor, master, and doctorate degrees. 
However, the oral defense is held once per semester (usually in the middle or by the end) with a presentation of revisions (a so-called "plenary presentation") at the end of each semester. The oral defense is typically not held in public for bachelor's and master's degrees; however, a colloquium is held for doctorate degrees. In Portugal, a thesis is examined with an oral defense, which includes an initial presentation by the candidate followed by an extensive question-and-answer session. In North America, the thesis defense or oral defense is the final examination for doctoral candidates, and sometimes for master's candidates. The examining committee normally consists of the thesis committee, usually a given number of professors mainly from the student's university plus their primary supervisor, an external examiner (someone not otherwise connected to the university), and a chairperson. Each committee member will have been given a completed copy of the dissertation prior to the defense, and will come prepared to ask questions about the thesis itself and the subject matter. In many schools, master's thesis defenses are restricted to the examinee and the examiners, but doctoral defenses are open to the public. The typical format will see the candidate giving a short (20–40-minute) presentation of their research, followed by one to two hours of questions. At some U.S. institutions, a longer public lecture (known as a "thesis talk" or "thesis seminar") by the candidate will accompany the defense itself, in which case only the candidate, the examiners, and other members of the faculty may attend the actual defense. In Norway, the final examination for a PhD student is called a disputas.[29] The examination consists of a public lecture by the candidate where the audience can ask questions. Then two opponents will ask the candidate questions. Prior to the examination, the candidate must pass a trial lecture on a topic typically not part of the student's research. This is used to assess the candidate's ability to find the relevant literature and to disseminate knowledge to an audience. A student in Russia or Ukraine has to complete a thesis and then defend it in front of their department. Sometimes the defense meeting is made up of the learning institute's professionals, and sometimes the student's peers are allowed to view or join in. After the presentation and defense of the thesis, the final conclusion of the department should be that none of them have reservations about the content and quality of the thesis. A conclusion on the thesis has to be approved by the rector of the educational institute. This conclusion (the final grade, so to speak) of the thesis can be defended/argued not only at the thesis council, but also in any other thesis council of Russia or Ukraine. In Spain, the former Diploma de estudios avanzados (DEA) lasted two years and candidates were required to complete coursework and demonstrate their ability to research the specific topics they had studied. From 2011 on, these courses were replaced by academic Master's programmes that include specific training on epistemology and scientific methodology. After its completion, students are able to enroll in a specific PhD programme (programa de doctorado) and begin a dissertation on a set topic for a maximum time of three years (full-time) and five years (part-time). All students must have a full professor as an academic advisor (director de tesis) and a tutor, who is usually the same person.
A dissertation (tesis doctoral), with an average of 250 pages, is the main requisite along with typically one previously published journal article. Once candidates have published their written dissertations, they will be evaluated by two external academics (evaluadores externos) and subsequently it is usually exhibited publicly for fifteen natural days. After its approval, candidates must defend publicly their research before a three-member committee (tribunal) with at least one visiting academic: chair, secretary and member (presidente,secretarioyvocal). A typical public Thesis Defence (defensa) lasts 45 minutes and all attendants holding a doctoral degree are eligible to ask questions. In Hong Kong, Ireland and the United Kingdom, the thesis defense is called aviva voce(Latinfor 'by live voice') examination (vivafor short). A typicalvivalasts for approximately 3 hours, though there is no formal time limit. Involved in thevivaare two examiners and the candidate. Usually, one examiner is an academic from the candidate's own university department (but not one of the candidate's supervisors) and the other is anexternal examinerfrom a different university. Increasingly, the examination may involve a third academic, the 'chair'; this person, from the candidate's institution, acts as an impartial observer with oversight of the examination process to ensure that the examination is fair. The 'chair' does not ask academic questions of the candidate.[30] In the United Kingdom, there are only two or at most three examiners, and in many universities the examination is held in private. The candidate's primary supervisor is not permitted to ask or answer questions during the viva, and their presence is not necessary. However, some universities permit members of the faculty or the university to attend. At the University of Oxford, for instance, any member of the university may attend a DPhil viva (the university's regulations require that details of the examination and its time and place be published formally in advance) provided they attend in full academic dress.[31] A submission of the thesis is the last formal requirement for most students after the defense. By the finaldeadline, the student must submit a complete copy of the thesis to the appropriate body within the accepting institution, along with the appropriate forms, bearing the signatures of the primary supervisor, the examiners, and in some cases, the head of the student's department. Other required forms may include library authorizations (giving the university library permission to make the thesis available as part of its collection) andcopyrightpermissions (in the event that the student has incorporated copyrighted materials in the thesis). Many large scientific publishing houses (e.g.Taylor & Francis,Elsevier) use copyright agreements that allow the authors to incorporate their published articles into dissertations without separate authorization. Once all the paperwork is in order, copies of the thesis may be made available in one or more universitylibraries. Specialist abstracting services exist to publicize the content of these beyond the institutions in which they are produced. Many institutions now insist on submission of digitized as well as printed copies of theses; the digitized versions of successful theses are often made available online.
https://en.wikipedia.org/wiki/Thesis
A thesis as a collection of articles[1] or series of papers,[2] also known as a thesis by published works[1] or article thesis,[3] is a doctoral dissertation that, as opposed to a coherent monograph, is a collection of research papers with an introductory section consisting of summary chapters. Other, less used, terms are "sandwich thesis" and "stapler thesis". It is composed of already-published journal articles, conference papers and book chapters and, occasionally, not-yet-published manuscripts. A thesis by publication is a form of compilation thesis (a term used in the Nordic countries). Another form of compilation thesis is the essay thesis, which is composed of previously unpublished independent essays.[3] Today, article theses are the standard format in the natural, medical, and engineering sciences (e.g., in the Nordic countries), while in the social and cultural sciences there is a strong but decreasing tradition of producing coherent monographs, i.e., the thesis as a series of linked chapters. In other cases, doctoral students may have a choice between writing a monograph or a compilation thesis.[4][5] The thesis by published works format is chosen in cases where the student intends to first publish the thesis in parts in international journals. It often results in a higher number of publications during doctoral studies than a monograph, and may result in a higher number of citations in other research publications, something that may be advantageous from a research funding point of view and may facilitate readership appointment after the dissertation.[clarification needed] A further reason for writing a compilation thesis is that some of the articles can be written together with other authors, which may be especially helpful for new doctoral students. A majority of the articles should be reviewed by referees outside of the student's own department, supplementing the audit carried out by the supervisory staff and dissertation opponent, thus assuring international standards.[4] The introductory or summary chapters of a thesis by published works should be written independently by the student. They should include an extensive annotated bibliography or literature review, placing the scope and results of the articles in the wider context of the current state of international research. They constitute a comprehensive summary of the appended papers, and should clarify the contribution of the doctoral student if the papers are written by several authors. They should not provide new results, but may provide a synthesis of new conclusions by combining results from several of the papers. They may supplement the articles with a motivation of the chosen scope, research problems, objectives and methods, and a strengthening of the theoretical framework, analysis and conclusions, since the length of the articles normally does not allow this kind of longer discussion.[3][6][7]
https://en.wikipedia.org/wiki/Collection_of_articles
Atreatiseis aformaland systematic writtendiscourseon some subject concerned with investigating or exposing the main principles of the subject and its conclusions.[1]Amonographis a treatise on a specialized topic.[2] The word "treatise" has its origins in the early 14th century, derived from the Anglo-French termtretiz, which itself comes from the Old Frenchtraitis, meaning "treatise" or "account." This Old French term is rooted in the verbtraitier, which means "to deal with" or "to set forth in speech or writing".[3] The etymological lineage can be traced further back to the Latin wordtractatus, which is a form of the verbtractare, meaning "to handle," "to manage," or "to deal with".[4][5]The Latin roots suggest a connotation of engaging with or discussing a subject in depth, which aligns with the modern understanding of a treatise as a formal and systematic written discourse on a specific topic.[6] The works presented here have been identified as influential by scholars on the development of human civilization. Euclid'sElementshas appeared in more editions than any other books except theBibleand is one of the most important mathematical treatises ever. It has been translated to numerous languages and remains continuously in print since the beginning of printing. Before the invention of the printing press, it was manually copied and widely circulated. When scholars recognized its excellence, they removed inferior works from circulation in its favor. Many subsequent authors, such asTheon of Alexandria, made their own editions, with alterations, comments, and new theorems or lemmas. Many mathematicians were influenced and inspired by Euclid's masterpiece. For example,Archimedes of SyracuseandApollonius of Perga, the greatest mathematicians of their time, received their training from Euclid's students and hisElementsand were able to solve many open problems at the time of Euclid. It is a prime example of how to write a text in pure mathematics, featuring simple and logical axioms, precise definitions, clearly stated theorems, and logical deductive proofs. TheElementsconsists of thirteen books dealing with geometry (including the geometry of three-dimensional objects such as polyhedra), number theory, and the theory of proportions. It was essentially a compilation of all mathematics known to the Greeks up until Euclid's time.[10] Drawing on the work of his predecessors, especially the experimental research ofMichael Faraday, the analogy with heat flow byWilliam Thomson(later Lord Kelvin) and the mathematical analysis ofGeorge Green, James Clerk Maxwell synthesized all that was known about electricity and magnetism into a single mathematical framework,Maxwell's equations. Originally, there were 20 equations in total. In hisTreatise on Electricity and Magnetism(1873), Maxwell reduced them to eight.[11]Maxwell used his equations to predict the existence of electromagnetic waves, which travel at the speed of light. In other words, light is but one kind of electromagnetic wave. Maxwell's theory predicted there ought to be other types, with different frequencies. After some ingenious experiments, Maxwell's prediction was confirmed byHeinrich Hertz. In the process, Hertz generated and detected what are now called radio waves and built crude radio antennas and the predecessors of satellite dishes.[12]Hendrik Lorentzderived, using suitable boundary conditions,Fresnel's equationsfor the reflection and transmission of light in different media from Maxwell's equations. 
He also showed that Maxwell's theory succeeded in illuminating the phenomenon of light dispersion where other models failed.John William Strutt(Lord Rayleigh) andJosiah Willard Gibbsthen proved that the optical equations derived from Maxwell's theory are the only self-consistent description of the reflection, refraction, and dispersion of light consistent with experimental results.Opticsthus found a new foundation inelectromagnetism.[11] Hertz's experimental work in electromagnetism stimulated interest in the possibility of wireless communication, which did not require long and expensive cables and was faster than even the telegraph.Guglielmo Marconiadapted Hertz's equipment for this purpose in the 1890s. He achieved the first international wireless transmission between England and France in 1900 and by the following year, he succeeded in sending messages inMorse codeacross the Atlantic. Seeing its value, the shipping industry adopted this technology at once.Radio broadcastingbecame extremely popular in the twentieth century and remains in common use in the early twenty-first.[12]But it wasOliver Heaviside, an enthusiastic supporter of Maxwell's electromagnetic theory, who deserves most of the credit for shaping how people understood and applied Maxwell's work for decades to come; he was responsible for considerable progress in electrical telegraphy, telephony, and the study of the propagation of electromagnetic waves. Independent of Gibbs, Heaviside assembled a set of mathematical tools known asvector calculusto replace thequaternions, which were in vogue at the time but which Heaviside dismissed as "antiphysical and unnatural."[13]
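For reference, a standard modern statement of these equations in the vector-calculus form that Heaviside popularized (SI units; this is today's conventional presentation rather than the notation of the 1873 Treatise):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

In vacuum (with charge density ρ and current density J set to zero), taking the curl of the last two equations yields wave equations whose propagation speed is c = 1/√(μ₀ε₀), the result that identified light as an electromagnetic wave.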
https://en.wikipedia.org/wiki/Treatise
In computer programming, a naming convention is a set of rules for choosing the character sequence to be used for identifiers which denote variables, types, functions, and other entities in source code and documentation. There are several reasons for using a naming convention rather than allowing programmers to choose any character sequence. The choice of naming conventions can be a controversial issue, with partisans of each holding theirs to be the best and others to be inferior. Colloquially, this is said to be a matter of dogma.[2] Many companies have also established their own set of conventions. A naming convention can bring several benefits. The choice of naming conventions (and the extent to which they are enforced) is often a contentious issue, with partisans holding their viewpoint to be the best and others to be inferior. Moreover, even with known and well-defined naming conventions in place, some organizations may fail to consistently adhere to them, causing inconsistency and confusion. These challenges may be exacerbated if the naming convention rules are internally inconsistent, arbitrary, difficult to remember, or otherwise perceived as more burdensome than beneficial. Well-chosen identifiers make it significantly easier for developers and analysts to understand what the system is doing and how to fix or extend the source code to meet new needs. For example, a terse, single-character name may be syntactically correct, but its purpose is not evident; contrast this with a descriptive name, which conveys the intent and meaning of the source code, at least to those familiar with the context of the statement (a short sketch appears below). Experiments suggest that identifier style affects recall and precision and that familiarity with a style speeds recall.[3] The exact rules of a naming convention depend on the context in which they are employed. Nevertheless, there are several common elements that influence most if not all naming conventions in common use today. Fundamental elements of all naming conventions are the rules related to identifier length (i.e., the finite number of individual characters allowed in an identifier). Some rules dictate a fixed numerical bound, while others specify less precise heuristics or guidelines. Identifier length rules are routinely contested in practice, and subject to much debate academically. For example, it is an open research issue whether some programmers prefer shorter identifiers because they are easier to type, or think up, than longer identifiers, or because in many situations a longer identifier simply clutters the visible code and provides no perceived additional benefit. Brevity in programming can be attributed in part to several factors. Some naming conventions limit whether letters may appear in uppercase or lowercase. Other conventions do not restrict letter case, but attach a well-defined interpretation based on letter case. Some naming conventions specify whether alphabetic, numeric, or alphanumeric characters may be used, and if so, in what sequence. A common recommendation is "Use meaningful identifiers." A single word may not be as meaningful, or specific, as multiple words. Consequently, some naming conventions specify rules for the treatment of "compound" identifiers containing more than one word. As most programming languages do not allow whitespace in identifiers, a method of delimiting each word is needed (to make it easier for subsequent readers to interpret which characters belong to which word).
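A minimal sketch tying together the two points above, the "use meaningful identifiers" recommendation and the need to delimit words once an identifier contains more than one; Python is used only as a representative language, and all names and values are invented for illustration:

```python
# Syntactically valid, but the purpose of the computation is opaque.
a = 38 * 25.0

# The same computation with meaningful, multiword identifiers; because spaces are
# not allowed in identifiers, the words are delimited here with underscores.
hours_worked = 38
hourly_rate = 25.0
weekly_pay = hours_worked * hourly_rate

print(a, weekly_pay)  # both print 950.0
```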
Historically some early languages, notablyFORTRAN(1955) andALGOL(1958), allowed spaces within identifiers, determining the end of identifiers by context. This was abandoned in later languages due to the difficulty oftokenization. It is possible to write names by simply concatenating words, and this is sometimes used, as inmypackagefor Java package names,[4]though legibility suffers for longer terms, so usually some form of separation is used. One approach is todelimitseparate words with anon-alphanumericcharacter. The two characters commonly used for this purpose are thehyphen("-") and theunderscore("_"); e.g., the two-word name "two words" would be represented as "two-words" or "two_words". The hyphen is used by nearly all programmers writingCOBOL(1959),Forth(1970), andLisp(1958); it is also common inUnixfor commands and packages, and is used inCSS.[5]This convention has no standard name, though it may be referred to aslisp-caseorCOBOL-CASE(comparePascal case),kebab-case,brochette-case, or other variants.[6][7][8][9]Of these,kebab-case, dating at least to 2012,[10]has achieved some currency since.[11][12] By contrast, languages in the FORTRAN/ALGOL tradition, notably languages in theCandPascalfamilies, used the hyphen for thesubtractioninfixoperator, and did not wish to require spaces around it (asfree-form languages), preventing its use in identifiers. An alternative is to use underscores; this is common in the C family (including Python), with lowercase words, being found for example inThe C Programming Language(1978), and has come to be known assnake caseorsnail case. Underscores with uppercase, as in UPPER_CASE, are commonly used forC preprocessormacros, hence known as MACRO_CASE, and forenvironment variablesin Unix, such as BASH_VERSION inbash. Sometimes this is humorously referred to as SCREAMING_SNAKE_CASE (alternatively SCREAMING_SNAIL_CASE). Another approach is to indicate word boundaries using medial capitalization, called "camelCase", "PascalCase", and many other names, thus respectively rendering "two words" as "twoWords" or "TwoWords". This convention is commonly used inPascal,Java,C#, andVisual Basic. Treatment of initialisms in identifiers (e.g. the "XML" and "HTTP" inXMLHttpRequest) varies. Some dictate that they be lowercase (e.g.XmlHttpRequest) to ease typing, readability and ease ofsegmentation, whereas others leave them uppercased (e.g.XMLHTTPRequest) for accuracy. Some naming conventions represent rules or requirements that go beyond the requirements of a specific project or problem domain, and instead reflect a greater overarching set of principles defined by thesoftware architecture, underlyingprogramming languageor other kind of cross-project methodology. Perhaps the most well-known isHungarian notation, which encodes either the purpose ("Apps Hungarian") or thetype("Systems Hungarian") of a variable in its name.[17]For example, the prefix "sz" for the variable szName indicates that the variable is a null-terminated string. A style used for very short (eight characters and less) could be: LCCIIL01, where LC would be the application (Letters of Credit), C for COBOL, IIL for the particular process subset, and the 01 a sequence number. This sort of convention is still in active use in mainframes dependent uponJCLand is also seen in the 8.3 (maximum eight characters with period separator followed by three character file type) MS-DOS style. IBM's "OF Language" was documented in an IMS (Information Management System) manual. 
It detailed the PRIME-MODIFIER-CLASS word scheme, which consisted of names like "CUST-ACT-NO" to indicate "customer account number". PRIME words were meant to indicate major "entities" of interest to a system. MODIFIER words were used for additional refinement, qualification and readability. CLASS words ideally would be a very short list of data types relevant to a particular application. Common CLASS words might be: NO (number), ID (identifier), TXT (text), AMT (amount), QTY (quantity), FL (flag), CD (code), W (work) and so forth. In practice, the available CLASS words would be a list of less than two dozen terms. CLASS words, typically positioned on the right (suffix), served much the same purpose asHungarian notationprefixes. The purpose of CLASS words, in addition to consistency, was to specify to the programmer thedata typeof a particular data field. Prior to the acceptance of BOOLEAN (two values only) fields, FL (flag) would indicate a field with only two possible values. Adobe's Coding Conventions and Best Practices suggests naming standards forActionScriptthat are mostly consistent with those ofECMAScript.[citation needed]The style of identifiers is similar to that ofJavascript. InAda, the only recommended style of identifiers isMixed_Case_With_Underscores.[18] InAPLdialects, the delta (Δ) is used between words, e.g. PERFΔSQUARE (no lowercase traditionally existed in older APL versions). If the name used underscored letters, then the delta underbar (⍙) would be used instead. InCandC++,keywordsandstandard libraryidentifiers are mostly lowercase. In theC standard library, abbreviated names are the most common (e.g.isalnumfor a function testing whether a character is alphanumeric), while theC++ standard libraryoften uses an underscore as a word separator (e.g.out_of_range). Identifiers representingmacrosare, by convention, written using only uppercase letters and underscores, for exampleNULLandEINVAL(this is related to the convention in many programming languages of using all-upper-case identifiers for constants). Names containing double underscore or beginning with an underscore and a capital letter are reserved for implementation (compiler,standard library) and should not be used (e.g.__reservedor_Reserved).[19][20]This is superficially similar tostropping, but the semantics differ: the underscores are part of the value of the identifier, rather than being quoting characters (as is stropping): the value of__foois__foo(which is reserved), notfoo(but in a different namespace). C#naming conventions generally follow the guidelines published by Microsoft for all .NET languages[21](see the .NET section, below), but no conventions are enforced by the C# compiler. The Microsoft guidelines recommend the exclusive use of onlyPascalCaseandcamelCase, with the latter used only for method parameter names and method-local variable names (including method-localconstvalues). A special exception to PascalCase is made for two-letter acronyms that begin an identifier; in these cases, both letters are capitalized (for example,IOStream); this is not the case for longer acronyms (for example,XmlStream). The guidelines further recommend that the name given to aninterfacebePascalCasepreceded by the capital letterI, as inIEnumerable. 
The Microsoft guidelines for naming fields are specific tostatic,public, andprotectedfields; fields that are notstaticand that have other accessibility levels (such asinternalandprivate) are explicitly not covered by the guidelines.[22]The most common practice is to usePascalCasefor the names of all fields, except for those which areprivate(and neitherconstnorstatic), which are given names that usecamelCasepreceded by a single underscore; for example,_totalCount. Any identifier name may be prefixed by the commercial-at symbol (@), without any change in meaning. That is, bothfactorand@factorrefer to the same object. By convention, this prefix is only used in cases when the identifier would otherwise be either a reserved keyword (such asforandwhile), which may not be used as an identifier without the prefix, or a contextual keyword (such asfromandwhere), in which cases the prefix is not strictly required (at least not at its declaration; for example, although the declarationdynamic dynamic;is valid, this would typically be seen asdynamic @dynamic;to indicate to the reader immediately that the latter is a variable name). In theDartlanguage, used in theFlutter SDK, the conventions are similar to those of Java, except that constants are written in lowerCamelCase. Dart imposes the syntactic rule that non-local identifiers beginning with an underscore (_) are treated as private (since the language does not have explicit keywords for public or private access). Additionally, source file names do not follow Java's "one public class per source file, name must match" rule, instead using snake_case for filenames.[23] InGo, the convention is to useMixedCapsormixedCapsrather than underscores to write multiword names. When referring to structs or functions, the first letter specifies the visibility for external packages. Making the first letter uppercase exports that piece of code, while lowercase makes it only usable within the current scope.[24] InJava, naming conventions for identifiers have been established and suggested by various Java communities such as Sun Microsystems,[25]Netscape,[26]AmbySoft,[27]etc. A sample of naming conventions set by Sun Microsystems are listed below, where a name in "CamelCase" is one composed of a number of words joined without spaces, with each word's -- excluding the first word's -- initial letter in capitals – for example "camelCase". Variable names should be short yet meaningful. The choice of a variable name should bemnemonic— that is, designed to indicate to the casual observer the intent of its use. One-character variable names should be avoided except for temporary "throwaway" variables. Common names for temporary variables are i, j, k, m, and n for integers; c, d, and e for characters. Java compilers do not enforce these rules, but failing to follow them may result in confusion and erroneous code. For example,widget.expand()andWidget.expand()imply significantly different behaviours:widget.expand()implies an invocation to methodexpand()in an instance namedwidget, whereasWidget.expand()implies an invocation to static methodexpand()in classWidget. One widely used Java coding style dictates thatUpperCamelCasebe used forclassesandlowerCamelCasebe used forinstancesandmethods.[25]Recognising this usage, someIDEs, such asEclipse, implement shortcuts based on CamelCase. 
For instance, in Eclipse's content assist feature, typing just the upper-case letters of a CamelCase word will suggest any matching class or method name (for example, typing "NPE" and activating content assist could suggest NullPointerException). Initialisms of three or more letters are CamelCase instead of uppercase (e.g., parseDbmXmlFromIPAddress instead of parseDBMXMLFromIPAddress). One may also set the boundary at two or more letters (e.g., parseDbmXmlFromIpAddress). The built-in JavaScript libraries use the same naming conventions as Java. Data types and constructor functions use upper camel case (RegExp, TypeError, XMLHttpRequest, DOMObject) and methods use lower camel case (getElementById, getElementsByTagNameNS, createCDATASection). In order to be consistent, most JavaScript developers follow these conventions[28] (see also Douglas Crockford's conventions). Common practice in most Lisp dialects is to use dashes to separate words in identifiers, as in with-open-file and make-hash-table. Dynamic variable names conventionally start and end with asterisks: *map-walls*. Constant names are marked by plus signs: +map-size+.[29][30] Microsoft .NET recommends UpperCamelCase, also known as PascalCase, for most identifiers (lowerCamelCase is recommended for parameters and variables); this is a shared convention for the .NET languages.[31] Microsoft further recommends that no type prefix hints (also known as Hungarian notation) are used.[32] Instead of using Hungarian notation, it is recommended to end the name with the base class's name: LoginButton instead of BtnLogin.[33] Objective-C has a common coding style that has its roots in Smalltalk. Top-level entities, including classes, protocols, categories, as well as C constructs that are used in Objective-C programs like global variables and functions, are in UpperCamelCase with a short all-uppercase prefix denoting namespace, like NSString, UIAppDelegate, NSApp or CGRectMake. Constants may optionally be prefixed with a lowercase letter "k", like kCFBooleanTrue. Instance variables of an object use lowerCamelCase prefixed with an underscore, like _delegate and _tableView. Method names use multiple lowerCamelCase parts separated by colons that delimit arguments, like application:didFinishLaunchingWithOptions:, stringWithFormat: and isRunning. The Wirthian languages Pascal, Modula-2 and Oberon generally use Capitalized or UpperCamelCase identifiers for programs, modules, constants, types and procedures, and lowercase or lowerCamelCase identifiers for math constants, variables, formal parameters and functions.[34] While some dialects support underscores and dollar signs in identifiers, snake case and macro case are more likely confined to use within foreign API interfaces.[35] Perl takes some cues from its C heritage for conventions. Locally scoped variables and subroutine names are lowercase with infix underscores. Subroutines and variables meant to be treated as private are prefixed with an underscore. Package variables are title cased. Declared constants are all caps. Package names are camel case, excepting pragmata (e.g., strict and mro), which are lowercase.[36][37] PHP recommendations are contained in PSR-1 (PHP Standard Recommendation 1) and PSR-12.[38] According to PSR-1, class names should be in PascalCase, class constants should be in MACRO_CASE, and function and method names should be in camelCase.[39] Python and Ruby both recommend UpperCamelCase for class names, CAPITALIZED_WITH_UNDERSCORES for constants, and snake_case for other names.
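A minimal sketch of the Python/Ruby-style conventions just described (UpperCamelCase classes, CAPITALIZED_WITH_UNDERSCORES constants, snake_case elsewhere); the class and function names here are invented for illustration:

```python
MAX_RETRIES = 3                               # constant: CAPITALIZED_WITH_UNDERSCORES

class HttpClient:                             # class name: UpperCamelCase
    def send_request(self, request_url):      # method and parameter names: snake_case
        retry_count = 0                       # local variable: snake_case
        return f"GET {request_url} (retries used: {retry_count}/{MAX_RETRIES})"

client = HttpClient()
print(client.send_request("https://example.org"))
```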
In Python, if a name is intended to be "private", it is prefixed by one or two underscores; such privacy is enforced only by convention. Names can also be suffixed with an underscore to prevent conflict with Python keywords. Prefixing with double underscores changes behaviour in classes with regard to name mangling. Names that are both prefixed and suffixed with double underscores (the so-called "dunder", or "double under", methods in Python) are reserved for "magic names" which fulfill special behaviour in Python objects.[40] While there is no official style guide for R, the tidyverse style guide from Hadley Wickham sets the standard for most users.[41] This guide recommends using only numbers, lowercase letters and underscores for file, variable and function names, e.g., fit_models.R. The Bioconductor style guide recommends UpperCamelCase for class names and lowerCamelCase for variable and function names. R's predecessors S and S-PLUS did not allow underscores in variable and function names, but instead used the period as a delimiter. As a result, many base functions in R still have a period as a delimiter, e.g., as.data.frame(). Hidden objects can be created with the dot prefix, e.g., .hidden_object; these objects do not appear in the global environment. The dot prefix is often used by package developers for functions that are purely internal and are not supposed to be used by end users. It is similar to the underscore prefix in Python. Raku follows more or less the same conventions as Perl, except that it allows an infix hyphen (-) or an apostrophe (') within an identifier (but not two in a row), provided that it is followed by an alphabetic character. Raku programmers thus often use kebab case in their identifiers; for example, fish-food and don't-do-that are valid identifiers.[42] Rust recommends UpperCamelCase for type aliases and struct, trait, enum, and enum variant names, SCREAMING_SNAKE_CASE for constants or statics, and snake_case for variable, function and struct member names.[43] Swift has shifted its naming conventions with each individual release. However, a major update with Swift 3.0 stabilised the naming conventions for lowerCamelCase across variable and function declarations. Constants are usually defined by enum types or constant parameters that are also written this way. Class and other object type declarations are UpperCamelCase. As of Swift 3.0, clear naming guidelines have been established for the language in an effort to standardise API naming and declaration conventions across all third-party APIs.[44]
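Returning to the Python conventions described at the start of this passage, a minimal sketch of the underscore idioms (the class and attribute names are invented for illustration):

```python
class Account:
    def __init__(self, owner, balance=0):
        self._balance = balance   # single leading underscore: "internal" by convention only
        self.__owner = owner      # double leading underscore: name-mangled to _Account__owner
        self.class_ = "savings"   # trailing underscore: avoids clashing with the keyword `class`

    def __repr__(self):           # a "dunder" method: special behaviour (printable representation)
        return f"Account(owner={self.__owner!r}, balance={self._balance})"

acct = Account("Ada", 100)
print(acct)                   # print falls back to __repr__ here
print(acct._balance)          # still accessible; the underscore is purely advisory
print(acct._Account__owner)   # the mangled name produced by the double underscore
```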
https://en.wikipedia.org/wiki/Naming_convention_(programming)
Inlinguistic typology,active–stative alignment(alsosplit intransitive alignmentorsemantic alignment) is a type ofmorphosyntactic alignmentin which the soleargument("subject") of anintransitiveclause (often symbolized asS) is sometimes marked in the same way as anagentof atransitive verb(that is, like asubjectsuch as "I" or "she" inEnglish) but other times in the same way as a direct object (such as "me" or "her" in English). Languages with active–stative alignment are often calledactive languages. Thecaseoragreementof the intransitive argument (S) depends on semantic or lexical criteria particular to each language. The criteria tend to be based on the degree ofvolition, or control over the verbal action exercised by the participant. For example, if one tripped and fell, an active–stative language might require one to say the equivalent of "fell me." To say "I fell" would mean that the person had done it on purpose, such as taking a fall in boxing. Another possibility is empathy; for example, if someone's dog were run over by a car, one might say the equivalent of "died her." To say "she died" would imply that the person was not affected emotionally. If the core arguments of a transitive clause are termedA(agentof a transitive verb) andP(patientof a transitive verb), active–stative languages can be described as languages that align intransitiveSasS = P/O∗∗("fell me") orS = A("I fell"), depending on the criteria described above. Active–stative languages contrast withaccusative languagessuch as English that generally alignSasS = A, and withergative languagesthat generally alignSasS = P/O. From this we can deduce that there are two types ofSin Active languages. On the other hand, in Ergative languages some types ofO/Pcan beO/P=A, and in this respect, we have to consider that there are also two types ofOin Ergative languages. Active languages can be said to be a phenomenon at the intersection of these complex issues. For most such languages, the case of the intransitive argument is lexically fixed for each verb, regardless of the actual degree of volition of the subject, but often corresponding to the most typical situation. For example, the argument ofswimmay always be treated like the transitive subject (agent-like), and the argument ofsleeplike the transitive direct object (patient-like). InDakota, arguments of active verbs such asto runare marked like transitive agents, as in accusative languages, and arguments of inactive verbs such asto standare marked like transitive objects, as in ergative languages. In such language, if the subject of a verb likerunorswallowis defined as agentive, it will be always marked so even if the action of swallowing is involuntary. This subtype is sometimes known assplit-S. In other languages, the marking of the intransitive argument is decided by the speaker, based on semantic considerations. For any given intransitive verb, the speaker may choose whether to mark the argument as agentive or patientive. In some of these languages, agentive marking encodes a degree ofvolitionor control over the action, with thepatientiveused as the default case; in others, patientive marking encodes a lack of volition or control, suffering from or being otherwise affected by the action, or sympathy on the part of the speaker, with the agentive used as the default case. These two subtypes (patientive-defaultandagentive-default) are sometimes known asfluid-S. 
If the language hasmorphologicalcase, the arguments of atransitive verbare marked by using the agentive case for the subject and the patientive case for the object. The argument of anintransitive verbmay be marked as either.[1] Languages lacking caseinflectionsmay indicate case by differentword orders,verb agreement, usingadpositions, etc. For example, the patientive argument might precede theverb, and the agentive argument might follow the verb. Cross-linguistically, the agentive argument tends to be marked, and the patientive argument tends to be unmarked. That is, if one case is indicated by zero-inflection, it is often the patientive. Additionally, active languages differ from ergative languages in how split case marking intersects with Silverstein's (1976) nominal hierarchy: Specifically, ergative languages with split case marking are more likely to use ergative rather than accusative marking for NPs lower down the hierarchy (to the right), whereas active languages are more likely to use active marking for NPs higher up the hierarchy (to the left), like first and second person pronouns.[2]Dixon states that "In active languages, if active marking applies to an NP type a, it applies to every NP type to the left of a on the nominal hierarchy." Active languages are a relatively new field of study. Activemorphosyntactic alignmentused to be not recognized as such, and it was treated mostly as an interesting deviation from the standard alternatives (nominative–accusative and ergative–absolutive). Also, active languages are few and often show complications and special cases ("pure" active alignment is an ideal).[3] Thus, the terminology used is rather flexible. The morphosyntactic alignment of active languages is also termedactive–stative alignmentorsemantic alignment. The termsagentive caseandpatientive caseused above are sometimes replaced by the termsactiveandinactive. (†) = extinct language According to Castro Alves (2010), a split-S alignment can be safely reconstructed for Proto-Northern Jê finite clauses. Clauses headed by a non-finite verb, on the contrary, would have been alignedergativelyin this reconstructed language. The reconstructedPre-Proto-Indo-Europeanlanguage,[7]not to be confused with theProto-Indo-European language, its direct descendant, shows many features known to correlate with active alignment like the animate vs. inanimate distinction, related to the distinction between active and inactive or stative verb arguments. Even in its descendant languages, there are traces of a morphological split between volitional and nonvolitional verbs, such as a pattern in verbs of perception and cognition where the argument takes an oblique case (calledquirky subject), a relic of which can be seen inMiddle Englishmethinksor in the distinction betweenseevs.lookorhearvs.listen. Other possible relics from a structure, in descendant languages of Indo-European, include conceptualization of possession and extensive use of particles.
https://en.wikipedia.org/wiki/Active%E2%80%93stative_alignment
Antecedent-contained deletion (ACD), also called antecedent-contained ellipsis, is a phenomenon whereby an elided verb phrase appears to be contained within its own antecedent. For instance, in the sentence "I read every book that you did", the verb phrase in the main clause appears to license ellipsis inside the relative clause which modifies its object. ACD is a classic puzzle for theories of the syntax-semantics interface, since it threatens to introduce an infinite regress. It is commonly taken as motivation for syntactic transformations such as quantifier raising, though some approaches explain it using semantic composition rules or by adopting more flexible notions of what it means to be a syntactic unit. To understand the issue, it is necessary to understand how VP-ellipsis works. Consider pairs of sentences in which a VP is elided in the second clause and its antecedent stands in the first clause. In each such sentence, the elided VP should be essentially identical to the antecedent in the first clause: if the antecedent is wash the dishes, the missing VP can mean only wash the dishes, and if the antecedent is wash the dishes on Tuesday, the missing VP can mean only wash the dishes on Tuesday. Assuming the missing VP must be essentially identical to an antecedent VP leads to a problem in ACD sentences, first noticed by Bouton (1970): since the elided VP must be essentially identical to its antecedent, and assuming that the antecedent is a full VP, an infinite regress occurs. That is, if we substitute the antecedent VP into the position of the ellipsis, we must repeat the substitution process ad infinitum. The difficulty can also be seen in the tree for such a sentence: because the antecedent constituent contains the ellipsis itself, resolution of the ellipsis necessitates an infinite regress as the antecedent is substituted ad infinitum into the ellipsis site. To avoid this problem, Sag (1976) proposed that the NP every book that Mary did undergoes quantifier raising (QR) to a position above the verb.[1] After QR, the antecedent VP no longer contains the raised object NP, so the analysis can assume that the elided VP corresponds to just read. The infinite regress is now avoided because after QR, the antecedent VP contains just the verb read (a schematic version of the two stages is sketched below).
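The two stages can be sketched with labelled brackets (a schematic reconstruction of the analysis just described, not the article's original trees; ⟨VP⟩ marks the ellipsis site and t a trace left by the raised NP):

```
Before QR (the ellipsis site sits inside its own antecedent):
  [TP I [VP read [NP every book that Mary did ⟨VP⟩ ]]]
  Resolving ⟨VP⟩ to the antecedent VP copies in another ⟨VP⟩, and so on without end.

After QR of the object NP:
  [NP every book that Mary did ⟨VP⟩ ]i  [TP I [VP read ti ]]
  The antecedent VP is now just [VP read ti ], so ⟨VP⟩ resolves to "read ti" with no regress.
```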
https://en.wikipedia.org/wiki/Antecedent-contained_deletion
In linguistics, coercion is a term applied to a process of reinterpretation triggered by a mismatch between the semantic properties of a selector and the semantic properties of the selected element.[1] As Catalina Ramírez explains it, this phenomenon is called coercion because the process forces meaning into a lexical phrase where there is otherwise a discrepancy between the semantic aspects of the phrase.[2] The term was first used in the semantic literature in 1988 by Marc Moens and Mark Steedman, who adopted it due to its "loose analogy with type-coercion in programming languages".[3] In his framework of the generative lexicon (a formal compositional approach to lexical semantics), Pustejovsky (1995:111) defines coercion as "a semantic operation that converts an argument to the type which is expected by a function, where it would otherwise result in a type error." Coercion in the Pustejovsky framework covers both complement coercion and aspectual coercion. Complement coercion involves a mismatch of semantic meaning between lexical items, while aspectual coercion involves a mismatch of temporality between lexical items.[4] A commonly used example of complement coercion is the sentence "I began the book." The phrase "I began" is assumed to be a selector which requires the following complement to denote an event, but "the book" denotes a noun phrase, not an event. So, as a result of coercion, "I began" forces "the book" from a simple noun phrase into an event involving that noun, causing the sentence to be interpreted to mean (most likely) "I began to read the book" or "I began to write the book."[4] An example of aspectual coercion involving temporal connectives is "Let's leave after dessert" (Pustejovsky 1995:230). Another example of aspectual coercion from psycholinguistics research is the sentence "The tiger jumped for an hour," where the prepositional phrase "for an hour" coerces the lexical meaning of "jumped" to be iterative across the entire duration, instead of having occurred only once.[5] Coercion is a well-discussed topic in the field of linguistics, especially in semantics and construction grammar.[6] It is also explored in cognitive linguistics. An example is Yao-Ying Lai's 2017 study on the effects of coercion on mental processing; results showed that phrases involving aspectual words (such as "start") required longer reading times to understand than did phrases with psychological words (such as "enjoy" and "love").[7] Currently, there is debate surrounding the proper approach to coercion in linguistics, including systemic coercion versus language-user coercion and a semantic versus a pragmatic perspective, among others.[1]
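For readers unfamiliar with the programming notion the term alludes to, a minimal Python sketch of type coercion (offered purely to illustrate the analogy; linguistic coercion operates on meanings, not data types):

```python
# The integer 2 is implicitly converted ("coerced") to a float so that the
# addition with a float operand can go through.
result = 2 + 0.5
print(type(result), result)   # <class 'float'> 2.5

# Where no coercion rule exists, a type error results, much as Pustejovsky's
# definition predicts for semantic mismatches that coercion cannot repair.
try:
    "two" + 0.5
except TypeError as error:
    print(error)
```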
https://en.wikipedia.org/wiki/Coercion_(linguistics)
David Roach Dowty(born 1945[1]) is a linguist known primarily for his work insemanticandsyntactictheory, and especially inMontague grammarandCategorial grammar. Dowty is a professor emeritus oflinguisticsat theOhio State University, and his research interests mainly lie in Semantic and Syntactic Theory,Lexical semanticsandThematic roles,Categorial grammar, and Semantics ofTenseandAspect. David Dowty received his PhD from theUniversity of Texas at Austin, with a thesis supervised by Robert Wall andEmmon Bachon the temporal semantics of verbs.[2] Dowty was editor-in-chief of the journalLinguistics and Philosophyfrom 1988 to 1992, and associate editor ofLanguage. For several years he was chairman of the Department of Linguistics at the Ohio State University. A one-day symposium was held at theUniversity of Groningenin honour of his sixtieth birthday, subsequently published asTheory and Evidence in Semantics.[2]
https://en.wikipedia.org/wiki/David_Dowty
Inlinguistics, aform-meaning mismatchis a natural mismatch between thegrammatical formand its expectedmeaning. Such form-meaning mismatches happen everywhere in language.[1]Nevertheless, there is often an expectation of a one-to-one relationship between meaning and form, and indeed, manytraditionaldefinitions are based on such an assumption. For example, Verbscome in threetenses:past,present, andfuture. The past is used to describe things that have already happened (e.g.,earlier in the day, yesterday, last week, three years ago). The present tense is used to describe things that are happening right now, or things that are continuous. The future tense describes things that have yet to happen (e.g.,later, tomorrow, next week, next year, three years from now).[2] While this accurately captures the typical behaviour of these three tenses, it's not unusual for a futurate meaning to have a present tense form (I'll see you before Igo) or a past tense form (If youcouldhelp, that would be great). There are three types of mismatch.[3] Syncretism is "the relation between words which have different morphosyntactic features but are identical in form."[4]For example, the English first persongenitivepronounsare distinct for dependentmyand independentmine, but forhe, there is syncretism: the dependent and independent pronouns share the formhis(e.g.,that'shisbook;it'shis). As a result, there is no consistent match between the form and function of the word. Similarly,Slovaknouns typically markcaseas in the word for "dog", which ispesinnominativecase butpsainaccusative. Butslovo"word" the nominative and accusative have come to share the same form, which means that it does not reliably indicate whether it is a subject or an object.[5] Thesubjectof a sentence is often defined as a noun phrase that denotes the semanticagentor "the doer of the action".[6][p. 69] a noun, noun phrase, or pronoun that usually comes before a main verb and represents the person or thing that performs the action of the verb, or about which something is stated.[7] But in many cases, the subject does not express the expected meaning of doer.[6][p. 69] Dummythereinthere's a book on the table, is the grammatical subject, butthereisn't the doer of the action or the thing about which something is stated. In fact it has no semantic role at all. The same is true ofitinit's cold today.[6][p. 252] In the case of object raising, theobjectof one verb can be the agent of another verb. For example, inwe expectJJto arrive at 2:00,JJis the object ofexpect, butJJis also the person who will be doing the arriving.[6][p. 221]Similarly, in Japanese, the potential form of verbs can raise the object of the main verb to the subject position. For example, in the sentence 私は寿司が食べられる (Watashi wa sushi ga taberareru, "I can eat sushi"), 寿司 ("sushi") is the object of the verb 食べる ("eat") but functions as the subject of the potential form verb 食べられる ("be able to eat").[8] From a semantic point of view, a definitenoun phraseis one that is identifiable and activated in the minds of thefirst personand the addressee. From a grammatical point of view in English, definiteness is typically marked by definitedeterminers, such asthis. “The theoretical distinction between grammatical definiteness and cognitive identifiability has the advantage of enabling us to distinguish between a discrete (grammatical) and a non-discrete (cognitive) category”[9][p. 
84] So, in a case such as I met this guy from Heidelberg on the train, the noun phrase "this guy from Heidelberg" is grammatically definite but semantically indefinite;[9][p. 82] there is a form-meaning mismatch. Grammatical number is typically marked on nouns in English, and present-tense verbs show agreement with the subject. But there are cases of mismatch, such as with a singular collective noun as the subject and plural agreement on the verb (e.g., The team are working hard).[6][p. 89] The pronoun you also triggers plural agreement regardless of whether it refers to one person or more (e.g., You are the only one who can do this).[10] This is similar to the use of honorific constructions in the Toda language, where subject-verb agreement for number is generally marked by different verb conjugations, but there are exceptions with certain honorific forms. For example, consider the verb forms for the verb "to give" in Toda: in the case of the honorific form kwēśt-, there is a form-meaning mismatch regarding number, as the same form is used to show respect to a single person or to multiple people.[11] In some cases, the mismatch may be apparent rather than real due to a poorly chosen term. For example, "plural" in English suggests more than one, but "non-singular" may be a better term. We use plural marking for things less than one (e.g., 0.5 calories) or even for nothing at all (e.g., zero degrees).[12] In some cases, the grammatical gender of a word appears to be a mismatch with its meaning. For example, in German, das Fräulein means the unmarried woman. A woman is naturally feminine in terms of social gender, but the word here is neuter gender.[13] Also, in Chichewa, a Bantu language, the word for "child" is mwaná (class 1) in the singular and aná (class 2) in the plural. When referring to a group of mixed-gender children, the plural form, aná, is used even though it belongs to a different noun class from that of the singular form, mwaná.[14] German and English compounds are quite different syntactically, but not semantically.[15] Form-meaning mismatches can lead to language change. An example of this is the split of the nominal gerund construction in English and a new "non-nominal" reference type becoming the most dominant function of the verbal gerund construction.[16] The syntax-semantics interface is one of the most vulnerable aspects in L2 acquisition. Therefore, L2 speakers are often found to have either incomplete grammar or highly variable syntactic-semantic awareness and performance.[17] In morphology, a morpheme can get trapped and eliminated. Consider this example: the Old Norwegian for "horse's" was hest-s, and the way to mark that as definite and genitive ("the" + GEN) was -in-s. When those went together, the genitive of hest-s was lost, and the result is hest-en-s ("the horse" + GEN) in modern Norwegian.[18][p. 90] The result is a form-meaning mismatch.
https://en.wikipedia.org/wiki/Form-meaning_mismatch
Inlinguistics,morphosyntactic alignmentis the grammatical relationship betweenarguments—specifically, between the two arguments (in English, subject and object) oftransitive verbslikethe dog chased the cat, and the single argument ofintransitive verbslikethe cat ran away. English has asubject,which merges the more active argument of transitive verbs with the argument of intransitive verbs, leaving theobjectin transitive verbs distinct; other languages may have different strategies, or, rarely, make no distinction at all. Distinctions may be mademorphologically(throughcaseandagreement),syntactically(throughword order), or both. The following notations will be used to discuss the various types of alignment:[1][2] Note that while the labels S, A, O/P originally stood for subject,agent, object, andpatient, respectively, the concepts of S, A, and O/P are distinct both from thegrammatical relationsandthematic relations. In other words, an A or S need not be an agent or subject, and an O need not be a patient. Note, however, that these semantic macro-roles in Dixon's model differ from those in Klimov's model (1983), which uses five macro-roles (with both S and O divided into two categories).[3] In a nominative–accusative system, S and A are grouped together, contrasting O. In an ergative–absolutive system, S and O are one group and contrast with A. TheEnglish languagerepresents a typical nominative–accusative system (accusativefor short). The name derived from thenominativeandaccusativecases.Basqueis an ergative–absolutive system (or simplyergative). The name stemmed from theergativeandabsolutivecases. S is said toalign witheither A (as in English) or O (as in Basque) when they take the same form. Listed below are argument roles used by Bickel and Nichols for the description of alignment types.[4]Their taxonomy is based onsemantic rolesandvalency(the number of arguments controlled by apredicate). The termlocusrefers to a location where the morphosyntacticmarkerreflecting the syntactic relations is situated. The markers may be located on theheadof a phrase, adependent, andbothornoneof them.[5][6][further explanation needed] The direct, tripartite, and transitive alignment types are all quite rare. The alignment types other than Austronesian alignment can be shown graphically like this: In addition, in some languages, bothnominative–accusativeand ergative–absolutive systems may be used, split between different grammatical contexts, calledsplit ergativity. The split may sometimes be linked toanimacy, as in manyAustralian Aboriginal languages, or toaspect, as inHindustaniandMayan languages. A few Australian languages, such asDiyari, are split among accusative, ergative, and tripartite alignment, depending on animacy. A popular idea, introduced in Anderson (1976),[8]is that some constructions universally favor accusative alignment while others are more flexible. In general, behavioral constructions (control,raising,relativization) are claimed to favor nominative–accusative alignment while coding constructions (especially case constructions) do not show any alignment preferences. This idea underlies early notions of ‘deep’ vs. ‘surface’ (or ‘syntactic’ vs. ‘morphological’) ergativity (e.g. Comrie 1978;[2]Dixon 1994[1]): many languages have surface ergativity only (ergative alignments only in their coding constructions, like case or agreement) but not in their behavioral constructions or at least not in all of them. 
Languages with deep ergativity (with ergative alignment in behavioral constructions) appear to be less common. The arguments can be symbolized as follows: The S/A/O terminology avoids the use of terms like "subject" and "object", which are not stable concepts from language to language. Moreover, it avoids the terms "agent" and "patient", which are semantic roles that do not correspond consistently to particular arguments. For instance, the A might be an experiencer or a source, semantically, not just an agent. The relationship between ergative and accusative systems can be schematically represented as the following: The following Basque examples demonstrate the ergative–absolutive case-marking system:[9]

gizona-∅ etorri da
the.man-ABS has.arrived
'The man has arrived.'

gizona-k mutila-∅ ikusi du
the.man-ERG boy-ABS saw
'The man saw the boy.'

In Basque, gizona is "the man" and mutila is "the boy". In a sentence like mutila gizonak ikusi du, you know who is seeing whom because -k is added to the one doing the seeing, so the sentence means "the man saw the boy". If you want to say "the boy saw the man", add the -k instead to the word meaning "the boy": mutilak gizona ikusi du. With a verb like etorri, "come", there is no need to distinguish who is doing the coming, so no -k is added. "The boy came" is mutila etorri da. Japanese, by contrast, marks nouns by following them with different particles which indicate their function in the sentence:

kodomo ga tsuita
child NOM arrived
'The child arrived.'

otoko ga kodomo o mita
man NOM child ACC saw
'The man saw the child.'

In this language, in the sentence "the man saw the child", the one doing the seeing ("man") may be marked with ga, which works like Basque -k (and the one who is being seen may be marked with o). However, in sentences like "the child arrived", ga can still be used even though the situation involves only a "doer" and not a "done-to". This is unlike Basque, where -k is completely forbidden in such sentences.
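To make the grouping of S, A, and O concrete, the following is a minimal Python sketch; it is not drawn from any linguistic source, and the function name, role labels, and toy case tags are illustrative assumptions rather than real Basque or Japanese morphology.

```python
# A minimal sketch of how the two alignment systems described above assign
# case labels to the core argument roles S, A, and O. Toy labels only.

def mark_arguments(alignment, clause):
    """Assign a case label to each core argument of a clause.
    `clause` maps each word to its role: "S", "A", or "O"."""
    if alignment == "nominative-accusative":   # English/Japanese-style: S groups with A
        case = {"S": "NOM", "A": "NOM", "O": "ACC"}
    elif alignment == "ergative-absolutive":   # Basque-style: S groups with O
        case = {"S": "ABS", "A": "ERG", "O": "ABS"}
    else:
        raise ValueError(f"unknown alignment: {alignment}")
    return {word: case[role] for word, role in clause.items()}

# Intransitive clause: the single argument S.
print(mark_arguments("ergative-absolutive", {"gizona": "S"}))
# {'gizona': 'ABS'} -> the sole argument is marked like a transitive object.

# Transitive clause: agent-like A and object-like O.
print(mark_arguments("nominative-accusative", {"otoko": "A", "kodomo": "O"}))
# {'otoko': 'NOM', 'kodomo': 'ACC'} -> S and A share the same marking.
```

Under the ergative–absolutive setting, the sole argument of the intransitive clause receives the same tag as a transitive object, mirroring the Basque pattern above.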
https://en.wikipedia.org/wiki/Morphosyntactic_alignment
Role and reference grammar (RRG) is a model of grammar developed by William A. Foley and Robert Van Valin, Jr. in the 1980s, which incorporates many of the points of view of current functional grammar theories. In RRG, the description of a sentence in a particular language is formulated in terms of (a) its logical (semantic) structure and communicative functions, and (b) the grammatical procedures that are available in the language for the expression of these meanings. Among the main features of RRG are the use of lexical decomposition, based upon the predicate semantics of David Dowty (1979), an analysis of clause structure, and the use of a set of thematic roles organized into a hierarchy in which the highest-ranking roles are 'Actor' (for the most active participant) and 'Undergoer'. RRG's practical approach to language is demonstrated in the multilingual Natural Language Understanding (NLU) system of cognitive scientist John Ball. In 2012, Ball integrated his Patom Theory with Role and Reference Grammar, producing a language-independent NLU breaking down language by meaning.
https://en.wikipedia.org/wiki/Role_and_reference_grammar
In linguistics,selectiondenotes the ability ofpredicatesto determine the semantic content of theirarguments.[1]Predicates select their arguments, which means they limit the semantic content of their arguments. A distinction may sometimes be drawn between types of selection; viz.,s(emantic)-selectionversusc(ategory)-selection. Selection in general stands in contrast tosubcategorization:[2]selection is a semantic concept, whereas subcategorization is a syntactic one;[3]predicates bothselectandsubcategorizefor theircomplementarguments, but onlyselecttheir subject arguments. Selection is closely related tovalency, a term used in grammars other than the Chomskian generative grammar for a similar phenomenon. The following pairs of sentences illustrate the concept of selection; the # indicates semantic deviance: The predicateis wiltingselects a subject argument that is a plant or is plant-like. Similarly, the predicatedrankselects an object argument that is a liquid or is liquid-like. A building cannot normally be understood as wilting, just as a car cannot normally be interpreted as a liquid. The b-sentences are possible only given an unusual context that establishes appropriate metaphorical meaning. The deviance of the b-sentences is thus attributed to violation of those selectional restrictions determined by the predicatesis wiltinganddrank. When a mismatch between a selector and a selected element triggers reinterpretation of the meaning of those elements, that process is referred to ascoercion.[4] One sometimes encounters the termss(emantic)-selectionandc(ategory)-selection.[5]The concept of c-selection overlaps to an extent with subcategorization. Predicates c-select thesyntactic categoryof their complement arguments—e.g., noun (phrase), verb (phrase), adjective (phrase), etc.; that is, they determine thesyntactic categoryof their complements. In contrast, predicates s-select thesemantic contentof their arguments; thus, s-selection is a semantic concept, whereas c-selection is a syntactic one. (Note that when the termsselectionandselectional restrictionsappear without thec-ors-prefixes, they are usually understood to refer to s-selection.)[6][7] The b-sentences above do not contain violations of the c-selectional restrictions of the predicatesis wiltinganddrank; they are, rather, well-formed from a syntactic point of view (hence #, not *), for the argumentsthe buildinganda carsatisfy the c-selectional restrictions of their respective predicates (i.e., in this case, the arguments are required to be nouns or noun phrases). Only the s-selectional restrictions of the predicatesis wiltinganddrankare violated in the b-sentences. Selectional constraintsorselectional preferencesdescribe the degree of s-selection, in contrast toselectional restrictions, which treat s-selection as a binary yes-or-no.[8]Selectional preferences have often been used as a source of linguistic information innatural language processingapplications.[9]Thematic fitis a measure of how much a particular word in a particular role (like subject or direct object) matches the selectional preference of a particular predicate. For example, the wordcakehas a high thematic fit as a direct object forcut.[10] The concepts of c-selection and subcategorization overlap in meaning and use to a significant degree.[11]If there is a difference between these concepts, it resides with the status of the subject argument. 
Traditionally, predicates are interpreted as NOT subcategorizing for their subject argument, because the subject argument appears outside of the minimal VP containing the predicate.[12]Predicates do, however, c-select their subject arguments; e.g.: The predicateeatsc-selects both its subject argumentFredand its object argumentbeans, but as far as subcategorization is concerned,eatssubcategorizes for only its object argument,beans. This difference between c-selection and subcategorization depends, crucially, upon the understanding of subcategorization: an approach to subcategorization that sees predicates as subcategorizing for their subject argumentsas well asfor their object arguments will draw no distinction between c-selection and subcategorization; the two concepts are then synonymous. Selection can be closely associated withthematic relations(e.g. agent, patient, theme, goal, etc.).[13]By limiting the semantic content of their arguments, predicates are determining the thematic relations/roles that their arguments bear. Several linguistic theories make explicit use of selection. These include:
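As an illustration of the idea that selectional preferences are graded rather than binary, here is a minimal Python sketch. The counts are invented toy data, and the scoring method (relative co-occurrence frequency) is only a crude stand-in for the corpus-based measures of thematic fit used in natural language processing.

```python
# A toy selectional-preference model: the predicate "cut" prefers some
# direct objects more than others. Counts below are invented, not corpus data.

from collections import Counter

# Toy counts of nouns observed as the direct object of "cut".
object_counts = Counter({"cake": 30, "paper": 25, "rope": 20, "idea": 1})
total = sum(object_counts.values())

def thematic_fit(noun):
    """Relative frequency of `noun` as direct object of the predicate,
    used here as a crude stand-in for thematic fit."""
    return object_counts[noun] / total

print(round(thematic_fit("cake"), 2))  # high fit: 0.39
print(round(thematic_fit("idea"), 2))  # low fit:  0.01
print(thematic_fit("sky"))             # unseen object: 0.0
```

A violation of a selectional restriction would then correspond to an argument whose fit score is at or near zero for the predicate in question.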
https://en.wikipedia.org/wiki/Selection_(linguistics)
A semantic class contains words that share a semantic feature. For example, within nouns there are two subclasses: concrete nouns and abstract nouns. Concrete nouns include people, plants, animals, materials and objects, while abstract nouns refer to concepts such as qualities, actions, and processes. According to the nature of the noun, nouns are categorized into different semantic classes. Semantic classes may intersect: the intersection of female and young can be girl.
https://en.wikipedia.org/wiki/Semantic_class
A semantic feature is a component of the concept associated with a lexical item ('female' + 'performer' = 'actress'). More generally, it can also be a component of the concept associated with any grammatical unit, whether composed or not ('female' + 'performer' = 'the female performer' or 'the actress').[1] An individual semantic feature constitutes one component of a word's intension, which is the inherent sense or concept evoked.[2] The linguistic meaning of a word is proposed to arise from contrasts and significant differences with other words. Semantic features enable linguists to explain how words that share certain features may be members of the same semantic domain. Correspondingly, the contrast in meanings of words is explained by diverging semantic features. For example, father and son share the common components of "human", "kinship", "male" and are thus part of a semantic domain of male family relations. They differ in terms of "generation" and "adulthood", which is what gives each its individual meaning.[3] The analysis of semantic features is utilized in the field of linguistic semantics, more specifically the subfields of lexical semantics[4] and lexicology.[5] One aim of these subfields is to explain the meaning of a word in terms of its relationships with other words.[6] In order to accomplish this aim, one approach is to analyze the internal semantic structure of a word as composed of a number of distinct and minimal components of meaning.[7] This approach is called componential analysis, also known as semantic decomposition.[8] Semantic decomposition allows any given lexical item to be defined based on minimal elements of meaning, which are called semantic features. The term semantic feature is usually used interchangeably with the term semantic component.[9] Additionally, semantic features/semantic components are also often referred to as semantic properties.[10] The theory of componential analysis and semantic features is not the only approach to analyzing the semantic structure of words. An alternative direction of research that contrasts with componential analysis is prototype semantics.[9] The semantic features of a word can be notated using a binary feature notation common to the framework of componential analysis.[11] A semantic property is specified in square brackets, and a plus or minus sign indicates the existence or non-existence of that property.[12] Intersecting semantic classes share the same features. Some features need not be specifically mentioned, as their presence or absence is obvious from another feature. This is a redundancy rule.
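The binary feature notation lends itself to a simple computational illustration. The following Python sketch is not a published analysis; the feature inventory and lexical entries are illustrative assumptions. It encodes a few lexical items as plus/minus feature bundles and extracts their shared and contrasting components, in the spirit of componential analysis.

```python
# Toy lexicon: each word is a bundle of binary semantic features.
# True stands for [+feature], False for [-feature]. Illustrative values only.
LEXICON = {
    "man":   {"human": True, "male": True,  "adult": True},
    "woman": {"human": True, "male": False, "adult": True},
    "boy":   {"human": True, "male": True,  "adult": False},
    "girl":  {"human": True, "male": False, "adult": False},
}

def shared_features(w1, w2):
    """Features on which two lexical items agree (their common components)."""
    f1, f2 = LEXICON[w1], LEXICON[w2]
    return {f: v for f, v in f1.items() if f in f2 and f2[f] == v}

def contrast(w1, w2):
    """Features on which the two items differ, which distinguish their meanings."""
    f1, f2 = LEXICON[w1], LEXICON[w2]
    return {f: (f1[f], f2[f]) for f in f1 if f in f2 and f1[f] != f2[f]}

print(shared_features("man", "boy"))  # {'human': True, 'male': True}
print(contrast("man", "boy"))         # {'adult': (True, False)}
```

The shared features place two words in the same semantic domain, while the contrasting feature (here [±adult]) is what gives each its individual meaning.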
https://en.wikipedia.org/wiki/Semantic_feature
Natural semantic metalanguage(NSM) is a linguistic theory that reduces lexicons down to a set ofsemantic primitives. It is based on the conception of Polish professorAndrzej Bogusławski. The theory was formally developed byAnna WierzbickaatWarsaw Universityand later at theAustralian National Universityin the early 1970s,[1]andCliff GoddardatAustralia'sGriffith University.[2] The natural semantic metalanguage (NSM) theory attempts to reduce the semantics of all lexicons down to a restricted set of semantic primitives, or primes. Primes are universal in that they have the same translation in every language, and they are primitive in that they cannot be defined using other words. Primes are ordered together to formexplications, which are descriptions of semantic representations consisting solely of primes.[1] Research in the NSM approach deals extensively with language andcognition, and language andculture. Key areas of research includelexical semantics,grammatical semantics,phraseologyandpragmatics, as well ascross-cultural communication. Dozens of languages, including representatives of 16 language groups, have been studied using the NSM framework. They includeEnglish,Russian,Polish,French,Spanish,Italian,Swedish,Danish,Finnish,Malay,Japanese,Chinese,Korean,Ewe,Wolof,East Cree,Koromu, at least 16Australian languages, and a number ofcreole languagesincludingTrinidadian creole,Roper River Kriol,BislamaandTok Pisin.[3] Apart from the originatorsAnna WierzbickaandCliff Goddard, a number of other scholars have participated in NSM semantics, most notablyBert Peeters,Zhengdao Ye,Felix Ameka,Jean Harkins,Marie-Odile Junker,Anna Gladkova,Jock Wong,Carsten Levisen,Helen Bromhead,Karen Stollznow,Adrian Tien,Carol Priestley,Yuko Asano-CavanaghandGian Marco Farese. Semantic primes (also known as semantic primitives) are concepts that areuniversal, meaning that they can be translated literally into any known language and retain their semantic representation, andprimitive, as they are proposed to be the most simple linguistic concepts and are unable to be defined using simpler terms.[1] Proponents of the NSM theory argue that every language shares a core vocabulary of concepts. In 1994 and 2002, Goddard and Wierzbicka studied languages across the globe and found strong evidence supporting this argument.[1] Wierzbicka's 1972 study[4]proposed 14 semantic primes. That number was expanded to 60 in 2002 by Wierzbicka and Goddard, and the current agreed-upon number is 65.[5][6] Each language's translations of the semantic primes are called exponents. Below is a list of English exponents, or the English translation of the semantic primes. It is important to note that some of the exponents in the following list arepolysemousand can be associated with meanings in English (and other languages) that are not shared. However, when used as an exponent in the Natural semantic metalanguage, it is only the prime concept which is identified as universal. The following is a list ofEnglishexponents of semantic primes adapted fromLevisenand Waters (eds.) 2017.[7] NSM primes can be combined in a limited set ofsyntactic framesthat are also universal.[8]Thesevalencyoptions specify the specific types of grammatical functions that can be combined with the primes. While these combinations can be realized differently in other languages, it is believed that the meanings expressed by these syntactic combinations are universal. 
Examples of valency frames for the "say" semantic prime: A semantic analysis in the NSM approach results in a reductive paraphrase called an explication that captures the meaning of the concept explicated.[8]An ideal explication can be substituted for the original expression in context without change of meaning. For example:Someone X broke something Y: Semantic molecules are intermediary words used in explications and cultural scripts. While not semantic primes, they can be defined exclusively using primes. Semantic molecules can be determined as words that are necessary to build upon to explicate other words.[7]These molecules are marked by the notation [m] in explications and cultural scripts. Some molecules are proposed to be universal or near-universal, while others are culture- or area-specific.[10] Examples of proposed universal molecules: Minimal English is a derivative of the natural semantic metalanguage research, with the first major publication in 2018.[11]It is a reduced form of English designed for non-specialists to use when requiring clarity of expression or easily translatable materials.[12]Minimal English uses an expanded set of vocabulary to the semantic primes. It includes the proposed universal and near-universal molecules, as well as non-universal words which can assist in clarity.[13]As such, it already has counterparts targeted at speakers of other natural languages, e.g.Minimal French,[14]Minimal Polish,[15]65 Sanaa(Minimal Finnish)[11]: 225–258and so on. Minimal English differs from other simple Englishes (such asBasic English) as it has been specifically designed for maximal cross-translatability. Applications of NSM have also been proposed fornatural-language processing,natural-language understandingandartificial intelligence.[16] Ghil'ad Zuckermannsuggests that NSM can be of benefit inrevivalistics(language revitalization) as it "can neutralize the Western semantic bias involved in reconnecting with ancient Aboriginal traditions using English, and may allow a fuller understanding of the original meaning of the Aboriginal lexical items."[17]: 217
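One way to see what "reductive paraphrase" demands is to check an explication mechanically against an inventory of primes and molecules. The sketch below is not part of the NSM framework; the word lists are tiny illustrative subsets (including a few inflected exponent forms for convenience), and real NSM work handles allolexy, valency frames, and syntax far more carefully.

```python
# Toy check that an explication uses only semantic primes or approved molecules.
# PRIMES here is a small illustrative subset of English exponents, not the full 65.
PRIMES = {"someone", "something", "do", "did", "happen", "happened",
          "because", "this", "bad", "good", "not", "want", "wanted"}
MOLECULES = {"hands"}   # molecules are marked [m] in real explications

def non_reductive_words(explication):
    """Return words that are neither primes nor molecules
    (and so would themselves need further explication)."""
    words = explication.lower().replace(",", " ").split()
    return [w for w in words if w not in PRIMES and w not in MOLECULES]

line = "someone did something bad because this someone wanted something"
print(non_reductive_words(line))                       # [] -> fully reductive here
print(non_reductive_words("someone broke something"))  # ['broke'] -> not a prime
```

A word such as "broke" flagged by the check is exactly the kind of item that an NSM explication would have to paraphrase away into primes (or define as a molecule).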
https://en.wikipedia.org/wiki/Semantic_primes
Semantic properties or meaning properties are those aspects of a linguistic unit, such as a morpheme, word, or sentence, that contribute to the meaning of that unit. Basic semantic properties include being meaningful or meaningless – for example, whether a given word is part of a language's lexicon with a generally understood meaning; polysemy, having multiple, typically related, meanings; ambiguity, having meanings which aren't necessarily related; and anomaly, where the elements of a unit are semantically incompatible with each other, although possibly grammatically sound. Beyond the expression itself, there are higher-level semantic relations that describe the relationship between units: these include synonymy, antonymy, and hyponymy.[1][2][3] Besides these basic properties of semantics, the term semantic property is also sometimes used to describe the semantic components of a word, such as man assuming that the referent is human, male, and adult, or female being a common component of girl, woman, and actress. In this sense, semantic properties are used to define the semantic field of a word or set of words.[4][5] Semantic properties of nouns/entities can be divided into eight classes: specificity, boundedness, animacy, gender, kinship, social status, physical properties, and function.[6] Physical properties refer to how an entity exists in space; they can include shape, size, and material, for example.[7] The function class of semantic properties refers to noun class markers that indicate the purpose of an entity or how humans utilize an entity. For example, in the Dyirbal language, the morpheme balam marks each entity in its noun class with the semantic property of edibility,[8] and Burmese encodes the semantic property for the ability to cut or pierce. Encoding functional properties for transportation, housing, and speech is also attested in the world's languages.[9]
https://en.wikipedia.org/wiki/Semantic_property
Insyntax,shiftingoccurs when two or moreconstituentsappearing on the same side of their commonheadexchange positions in a sense to obtain non-canonical order. The most widely acknowledged type of shifting isheavy NP shift,[1]but shifting involving a heavy NP is just one manifestation of the shifting mechanism. Shifting occurs in most if not all European languages, and it may in fact be possible in all natural languages includingsign languages.[2]Shifting is notinversion, and inversion is not shifting, but the two mechanisms are similar insofar as they are both present in languages like English that have relatively strict word order. The theoretical analysis of shifting varies in part depending on the theory of sentence structure that one adopts. If one assumes relatively flat structures, shifting does not result in adiscontinuity.[3]Shifting is often motivated by the relative weight of the constituents involved. The weight of a constituent is determined by a number of factors: e.g., number of words, contrastive focus, andsemanticcontent. Shifting is illustrated with the following pairs of sentences. The first sentence of each pair shows what can be considered canonical order, whereas the second gives an alternative order that results from shifting: The first sentence with canonical order, where the object noun phrase (NP) precedes the oblique prepositional phrase (PP), is marginal due to the relative 'heaviness' of the NP compared to the PP. The second sentence, which shows shifting, is better because it has the lighter PP preceding the much heavier NP. The following examples illustrate shifting with particle verbs: When the object of the particle verb is a pronoun, the pronoun must precede the particle, whereas when the object is an NP, the particle can precede the NP. Each of the two constituents involved is said to shift, whereby this shifting is motivated by the weight of the two relative to each other. In English verb phrases, heavier constituents tend to follow lighter constituents. The following examples illustrate shifting using pronouns, clauses, and PPs: When the pronoun appears, it is much lighter than the PP, so it precedes the PP. But if the full clause appears, it is heavier than the PP and can therefore follow it. Thesyntactic categoryof the constituents involved in shifting is not limited; they can even be of the same type, e.g. In the first pair, the shifted constituents are PPs, and in the second pair, the shifted constituents are NPs. The second pair illustrates again that shifting is often motivated by the relative weight of the constituents involved; the NPanyone who used Wikipediais heavier than the NPa cheater. The examples so far have shifting occurring in verb phrases. Shifting is not restricted to verb phrases. It can also occur, for instance, in NPs: These examples again illustrate shifting that is motivated by the relative weight of the constituents involved. The heavier of the two constituents prefers to appear further to the right. The example sentences above all have the shifted constituents appearing after theirhead(see below). Constituents that precede their head can also shift, e.g. Since the finite verb is viewed as the head of the clause in each case, these data allow an analysis in terms of shifting. The subject and modal adverb have swapped positions. In other languages that have manyhead-finalstructures, shifting in the pre-head domain is a common occurrence. 
If one assumes relatively flat structures, the analysis of many canonical instances of shifting is straightforward. Shifting occurs among two or more sister constituents that appear on the same side of their head.[4]The following trees illustrate the basic shifting constellation in aphrase structure grammar(= constituency grammar) first and in adependency grammarsecond: The two constituency-based trees show a flat VP that allows n-ary branching (as opposed to just binary branching). The two dependency-based trees show the same VP. Regardless of whether one chooses the constituency- or the dependency-based analysis, the important thing about these examples is the relative flatness of the structure. This flatness results in a situation where shifting does not necessitate adiscontinuity(i.e. no long distance dependency), for there can be no crossing lines in the trees. The following trees further illustrate the point: Again due to the flatness of structure, shifting does not result in a discontinuity. In this example, both orders are acceptable because there is little difference in the relative weight between the two constituents that switch positions. An alternative analysis of shifting is necessary in a constituency grammar that posits strictly binary branching structures.[5]The more layered binary branching structures would result in crossing lines in the tree, which means movement (or copying) is necessary to avoid these crossing lines. The following trees are (merely) representative of the type of analysis that one might assume given strictly binary branching structures: The analysis shown with the trees assumes binary branching and leftward movement only. Given these restrictions, two instances of movement might be necessary to accommodate the surface order seen in tree b. The material in light gray represents copies that must be deleted in the phonological component. This sort of analysis of shifting has been criticized byRay Jackendoff, among others.[6]Jackendoff and Culicover argue for an analysis like that shown with the flatter trees above, whereby heavy NP shift does not result from movement, but rather from a degree of optionality in the ordering of a verb's complements. The preferred order in English is for the direct object to follow the indirect object in a double-object construction, and for adjuncts to follow objects of all kinds; but if the direct object is "heavy", the opposite order may be preferred (since this leads to a more right-branching tree structure which is easier to process). From a generativist's perspective, amysteriousproperty of shifting is that in the case of ditransitive verbs, a shifted direct object prevents extraction of the indirect object viawh-movement: Some generativists use this example to argue against the hypothesis that shifting merely results from choice between alternative complement orders, a hypothesis that does not imply movement. Their analysis in terms of a strictly binary branching tree resulting from leftward movements would in turn be able to explain this restriction. 
However, there are at least two ways of countering this argument. First, if the aim is to explain choice, and choice is assumed to operate over possible orders, then the impossible order is simply not part of the linguistic potential and so cannot be chosen. Second, if the aim is to explain generation, that is, how a linguistic potential comes to exist in a situation, the phenomenon can be explained in terms of generation and avoidance rules: for instance, one reason for avoiding a potential wording would be an ordering in which a preposition accidentally ends up before a nominal group, as in the example above. In other words, from a functional perspective, one either recognises that such ill-formed clauses are not among the clauses available for choice (choice among possible clauses), or one says that they are generated but avoided because they might cause listeners to misunderstand what the speaker is saying (generation and avoidance).
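Since the weight-based preference described earlier is essentially an ordering heuristic, it can be sketched in a few lines of Python. This is an illustration only: "weight" is approximated here by string length, whereas real constituent weight also reflects word count, prosody, contrastive focus, and information structure.

```python
# Toy heuristic: when two sister constituents follow their head,
# the lighter one tends to come first (heavy constituents shift rightward).

def weight(constituent):
    # Crude proxy for weight: number of characters.
    return len(constituent)

def preferred_order(head, const_a, const_b):
    """Return the head followed by its two sister constituents, lighter first."""
    light, heavy = sorted([const_a, const_b], key=weight)
    return f"{head} {light} {heavy}"

print(preferred_order("put", "it", "off"))
# 'put it off' -> the light pronoun precedes the particle

print(preferred_order("put", "the meeting that had been scheduled for Friday", "off"))
# 'put off the meeting that had been scheduled for Friday'
# -> the heavy NP follows the light particle (heavy NP shift)
```

The heuristic reproduces the particle-verb pattern discussed above: a pronoun object stays next to the verb, while a heavy noun phrase is shifted to the end of the verb phrase.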
https://en.wikipedia.org/wiki/Shifting_(syntax)
Inlinguistic typology,active–stative alignment(alsosplit intransitive alignmentorsemantic alignment) is a type ofmorphosyntactic alignmentin which the soleargument("subject") of anintransitiveclause (often symbolized asS) is sometimes marked in the same way as anagentof atransitive verb(that is, like asubjectsuch as "I" or "she" inEnglish) but other times in the same way as a direct object (such as "me" or "her" in English). Languages with active–stative alignment are often calledactive languages. Thecaseoragreementof the intransitive argument (S) depends on semantic or lexical criteria particular to each language. The criteria tend to be based on the degree ofvolition, or control over the verbal action exercised by the participant. For example, if one tripped and fell, an active–stative language might require one to say the equivalent of "fell me." To say "I fell" would mean that the person had done it on purpose, such as taking a fall in boxing. Another possibility is empathy; for example, if someone's dog were run over by a car, one might say the equivalent of "died her." To say "she died" would imply that the person was not affected emotionally. If the core arguments of a transitive clause are termedA(agentof a transitive verb) andP(patientof a transitive verb), active–stative languages can be described as languages that align intransitiveSasS = P/O∗∗("fell me") orS = A("I fell"), depending on the criteria described above. Active–stative languages contrast withaccusative languagessuch as English that generally alignSasS = A, and withergative languagesthat generally alignSasS = P/O. From this we can deduce that there are two types ofSin Active languages. On the other hand, in Ergative languages some types ofO/Pcan beO/P=A, and in this respect, we have to consider that there are also two types ofOin Ergative languages. Active languages can be said to be a phenomenon at the intersection of these complex issues. For most such languages, the case of the intransitive argument is lexically fixed for each verb, regardless of the actual degree of volition of the subject, but often corresponding to the most typical situation. For example, the argument ofswimmay always be treated like the transitive subject (agent-like), and the argument ofsleeplike the transitive direct object (patient-like). InDakota, arguments of active verbs such asto runare marked like transitive agents, as in accusative languages, and arguments of inactive verbs such asto standare marked like transitive objects, as in ergative languages. In such language, if the subject of a verb likerunorswallowis defined as agentive, it will be always marked so even if the action of swallowing is involuntary. This subtype is sometimes known assplit-S. In other languages, the marking of the intransitive argument is decided by the speaker, based on semantic considerations. For any given intransitive verb, the speaker may choose whether to mark the argument as agentive or patientive. In some of these languages, agentive marking encodes a degree ofvolitionor control over the action, with thepatientiveused as the default case; in others, patientive marking encodes a lack of volition or control, suffering from or being otherwise affected by the action, or sympathy on the part of the speaker, with the agentive used as the default case. These two subtypes (patientive-defaultandagentive-default) are sometimes known asfluid-S. 
If the language hasmorphologicalcase, the arguments of atransitive verbare marked by using the agentive case for the subject and the patientive case for the object. The argument of anintransitive verbmay be marked as either.[1] Languages lacking caseinflectionsmay indicate case by differentword orders,verb agreement, usingadpositions, etc. For example, the patientive argument might precede theverb, and the agentive argument might follow the verb. Cross-linguistically, the agentive argument tends to be marked, and the patientive argument tends to be unmarked. That is, if one case is indicated by zero-inflection, it is often the patientive. Additionally, active languages differ from ergative languages in how split case marking intersects with Silverstein's (1976) nominal hierarchy: Specifically, ergative languages with split case marking are more likely to use ergative rather than accusative marking for NPs lower down the hierarchy (to the right), whereas active languages are more likely to use active marking for NPs higher up the hierarchy (to the left), like first and second person pronouns.[2]Dixon states that "In active languages, if active marking applies to an NP type a, it applies to every NP type to the left of a on the nominal hierarchy." Active languages are a relatively new field of study. Activemorphosyntactic alignmentused to be not recognized as such, and it was treated mostly as an interesting deviation from the standard alternatives (nominative–accusative and ergative–absolutive). Also, active languages are few and often show complications and special cases ("pure" active alignment is an ideal).[3] Thus, the terminology used is rather flexible. The morphosyntactic alignment of active languages is also termedactive–stative alignmentorsemantic alignment. The termsagentive caseandpatientive caseused above are sometimes replaced by the termsactiveandinactive. (†) = extinct language According to Castro Alves (2010), a split-S alignment can be safely reconstructed for Proto-Northern Jê finite clauses. Clauses headed by a non-finite verb, on the contrary, would have been alignedergativelyin this reconstructed language. The reconstructedPre-Proto-Indo-Europeanlanguage,[7]not to be confused with theProto-Indo-European language, its direct descendant, shows many features known to correlate with active alignment like the animate vs. inanimate distinction, related to the distinction between active and inactive or stative verb arguments. Even in its descendant languages, there are traces of a morphological split between volitional and nonvolitional verbs, such as a pattern in verbs of perception and cognition where the argument takes an oblique case (calledquirky subject), a relic of which can be seen inMiddle Englishmethinksor in the distinction betweenseevs.lookorhearvs.listen. Other possible relics from a structure, in descendant languages of Indo-European, include conceptualization of possession and extensive use of particles.
https://en.wikipedia.org/wiki/Split_intransitivity
In certain theories oflinguistics,thematic relations, also known assemantic rolesorthematic roles, are the various roles that anoun phrasemay play with respect to the action or state described by a governing verb, commonly the sentence's main verb. For example, in the sentence "Susan ate an apple",Susanis the doer of the eating, so she is anagent;[1]an appleis the item that is eaten, so it is apatient. Since their introduction in the mid-1960s by Jeffrey Gruber andCharles Fillmore,[2][3]semantic roles have been a core linguistic concept and ground of debate between linguist approaches, because of their potential in explaining the relationship between syntax and semantics (also known as thesyntax-semantics interface),[3]that is how meaning affects the surface syntactic codification of language. The notion of semantic roles play a central role especially infunctionalistand language-comparative (typological) theories of language and grammar. While most modern linguistic theories make reference to such relations in one form or another, the general term, as well as the terms for specific relations, varies: "participant role", "semantic role", and "deep case" have also been employed with similar sense. The notion of semantic roles was introduced into theoretical linguistics in the 1960s, by Jeffrey Gruber andCharles Fillmore,[3][2][4]and alsoJackendoffdid some early work on it in 1972.[3][5][6] The focus of these studies on semantic aspects, and how they affect syntax, was part of a shift away fromChomsky's syntactic-centered approach, and in particular the notion of theautonomy of syntax, and his recentAspects of the Theory of Syntax(1965). The following major thematic relations have been identified:[7] There are not always clear boundaries between these relations. For example, in "the hammer broke the window",hammermight be labeled anagent, aninstrument, aforce, or possibly acause. Nevertheless, somethematic relationlabels are more logically plausible than others. In many functionally oriented linguistic approaches, the above thematic roles have been grouped into the two macroroles (also called generalized semantic roles or proto-roles) ofactorandundergoer. This notion of semantic macroroles was introduced byVan Valin's Ph.D. thesis in 1977, developed inrole and reference grammar, and then adapted in several linguistic approaches.[8][9] According to Van Valin, while thematic roles define semantic relations, and relations like subject and direct object are syntactic ones, the semantic macroroles of actor and undergoer are relations that lie at theinterface between semantics and syntax.[10] Linguistic approaches that have adopted, in various forms, this notion of semantic macroroles include: the Generalized Semantic Roles ofFoleyand Van Valin Role and reference grammar (1984),David Dowty’s 1991 theory of thematic proto-roles,[11]Kibrik's Semantic hyperroles (1997),Simon Dik's 1989Functional discourse grammar, and some late 1990s versions ofHead-driven phrase structure grammar.[3][8] In Dowty’s theory of thematic proto-roles, semantic roles are considered asprototype notions, in which there is a prototypical agent role that has those traits characteristically associated to it, while other thematic roles have less of those traits and are accordingly proportionally more distant to the prototypical agent.[6]The same goes for the opposite pole of the continuum, the patient proto-role. 
In many languages, such asFinnish,HungarianandTurkish, thematic relations may be reflected in thecase-marking on the noun. For instance, Hungarian has aninstrumental caseending (-val/-vel), which explicitly marks the instrument of a sentence. Languages like English often mark such thematic relations with prepositions. The termthematic relationis frequently confused withtheta role. Many linguists (particularlygenerative grammarians) use the terms interchangeably. This is because theta roles are typically named by the most prominent thematic relation that they are associated with. Different theoretical approaches often closely tie differentgrammatical relationsofsubjectandobject, etc., to semantic relations. In thetypological tradition, for example, agents/actors (or "agent-like" arguments) frequently overlap with the notion of subject (S). These ideas, when they are used distinctly, can be distinguished as follows: Thematic relations concern the nature of the relationship between themeaningof the verb and themeaningof the noun. Theta roles are about thenumber of argumentsthat a verb requires (which is a purely syntactic notion). Theta roles are syntactic relations that refers to the semantic thematic relations. For example, take the sentence "Reggie gave the kibble to Fergus on Friday."
https://en.wikipedia.org/wiki/Thematic_relation
In formal semantics, a type shifter is an interpretation rule that changes an expression's semantic type. For instance, the English expression "John" might ordinarily denote John himself, but a type shifting rule called Lift can raise its denotation to a function which takes a property and returns "true" if John himself has that property. Lift can be seen as mapping an individual onto the principal ultrafilter that it generates.[1][2][3] Type shifters were proposed by Barbara Partee and Mats Rooth in 1983 to allow for systematic type ambiguity. Work of the period assumed that syntactic categories corresponded directly with semantic types, and researchers thus had to "generalize to the worst case" when particular uses of particular expressions from a given category required an especially high type. Moreover, Partee argued that evidence, in fact, supported expressions having different types in different contexts. Thus, she and Rooth proposed type shifting as a principled mechanism for generating the ambiguity.[1][2][3] Type shifters remain a standard tool in formal semantic work, particularly in categorial grammar and related frameworks. Type shifters have also been used to interpret quantifiers in object position and to capture scope ambiguities. In that regard, they serve as an alternative to syntactic operations such as quantifier raising used in mainstream generative approaches to semantics.[4][5] Type shifters have also been used to generate and compose alternative sets without the need to fully adopt an alternative-based semantics.[6][7]
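The Lift rule has a direct functional rendering. The following Python sketch is a toy model, not an implementation from the literature; the entity names and the property function are illustrative assumptions. It shifts an individual of type e to a generalized quantifier of type ⟨⟨e,t⟩,t⟩, i.e. a function over the properties that individual has.

```python
# Toy model of the Lift type shifter: an individual is mapped to the function
# that takes a property and returns True iff the individual has that property
# (the principal ultrafilter it generates).

def lift(individual):
    """Shift an entity-denoting expression (type e) to a generalized
    quantifier (type <<e,t>,t>)."""
    return lambda prop: prop(individual)

# A toy property of type <e,t>: the set of sleepers in this tiny model.
def sleeps(x):
    return x in {"John"}

john_lifted = lift("John")
print(john_lifted(sleeps))   # True: John has the property of sleeping
print(lift("Mary")(sleeps))  # False: Mary lacks it in this model
```

The lifted denotation composes with a property the way a quantifier phrase does, which is why Lift lets a proper name occupy positions analyzed as quantificational.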
https://en.wikipedia.org/wiki/Type_shifter
AnEnglish writing styleis a combination of features in anEnglish languagecompositionthat has become characteristic of a particular writer, a genre, a particular organization, or a profession more broadly (e.g.,legal writing). An individual's writing style may be distinctive for particular themes, personal idiosyncrasies of phrasing and/oridiolect; recognizable combinations of these patterns may be defined metaphorically as a writer's "voice." Organizations that employ writers or commission written work from individuals may require that writers conform to a "house style" defined by the organization. This conformity enables a more consistent readability of composite works produced by many authors and promotes usability of, for example, references to other cited works. In many kinds of professional writing aiming for effective transfer of information, adherence to a standardised style can facilitate the comprehension of readers who are already accustomed to it.[1]Many of these standardised styles are documented instyle guides. All writing has some style, even if the author is not thinking about a personal style. It is important to understand that style reflects meaning. For instance, if a writer wants to express a sense of euphoria, he or she might write in a style overflowing with expressive modifiers. Some writers use styles that are very specific, for example in pursuit of an artistic effect. Stylistic rule-breaking is exemplified by the poet. An example isE. E. Cummings, whose writing consists mainly of onlylower caseletters, and often uses unconventionaltypography,spacing, andpunctuation. Even in non-artistic writing, every person who writes has his or her own personal style. Many large publications define a house style to be used throughout the publication, a practice almost universal among newspapers and well-known magazines. These styles can cover the means of expression and sentence structures, such as those adopted byTime. They may also include features peculiar to a publication; the practice atThe Economist, for example, is that articles are rarely attributed to an individual author. General characteristics have also been prescribed for different categories of writing, such as injournalism, the use ofSI units, orquestionnaire construction. University students, especially graduate students, are encouraged to write papers in an approved style. This practice promotes readability and ensures that references to cited works are noted in a uniform way. Typically, students are encouraged to use a style commonly adopted by journals publishing articles in the field of study. The list ofStyle Manuals & Guides, from theUniversity of MemphisLibraries, includes thirty academic style manuals that are currently in print, and twelve that are available on-line.[2]Citation of referenced works is a key element in academic style.[3] The requirements for writing and citing articles accessed on-line may sometimes differ from those for writing and citing printed works. Some of the details are covered inThe Columbia Guide to Online Style.[4]
https://en.wikipedia.org/wiki/English_writing_style
An idiom is a syntactical, grammatical, or phonological structure peculiar to a language that is actually realized, as opposed to possible but unrealized structures that could have developed to serve the same semantic functions but did not (the quality of being idiomatic is known as idiomaticness or idiomaticity).[1] The grammar of a language (its morphology, phonology, and syntax) is inherently arbitrary and peculiar to a specific language (or group of related languages). For example, although in English it is idiomatic (accepted as structurally correct) to say "cats are associated with agility", other forms could have developed, such as "cats associate toward agility" or "cats are associated of agility".[2] Unidiomatic constructions sound wrong to fluent speakers, although they are often entirely comprehensible. For example, the title of the classic book English as She Is Spoke is easy to understand (its idiomatic counterpart is English as It Is Spoken), but it deviates from English idiom in the gender of the pronoun and the inflection of the verb. Lexical gaps are another key example of idiom.
https://en.wikipedia.org/wiki/Idiom_(language_structure)
This list comprises widespread modern beliefs aboutEnglishlanguage usagethat are documented by a reliable source to be misconceptions. With no authoritativelanguage academy,guidanceon English language usage can come from many sources. This can create problems, as described by Reginald Close: Teachers and textbook writers ofteninventrules which their students and readers repeat and perpetuate. These rules are usually statements about English usage which the authors imagine to be,as a rule, true. But statements of this kind are extremely difficult to formulate both simply and accurately. They are rarely altogether true; often only partially true; sometimes contradicted by usage itself. Sometimes the contrary to them is also true.[1] Manyusageforms are commonly perceived asnonstandardorerrorsdespite being either widely used or endorsed by authoritative descriptions.[2][a] Perceived violations of correct English usage elicit visceral reactions in many people, or may lead to a perception of a writer as careless, uneducated, or lacking attention to detail. For example, respondents to a 1986BBCpoll were asked to submit "the three points of grammatical usage they most disliked". Participants said their points "'made their blood boil', 'gave a pain to their ear', 'made them shudder', and 'appalled' them".[3] Mignon Fogartywrites that "nearly all grammarians agree that it's fine to end sentences with prepositions, at least in some cases."[7]Fowler's Modern English Usagesays, "One of the most persistent myths about prepositions in English is that they properly belong before the word or words they govern and should not be placed at the end of a clause or sentence."[8]Preposition strandingwas in use long before any English speakersconsidered it incorrect. This idea probably began in the 17th century, owing to an essay by the poetJohn Dryden, and it is still taught in schools at the beginning of the 21st century.[4]But "every major grammarian for more than a century has tried to debunk" this idea; "it's perfectly natural to put a preposition at the end of a sentence, and it has been since Anglo-Saxon times."[9]Many examples of terminal prepositions occur in classic works of literature, including the plays ofShakespeare.[5]The saying "This is the sort of nonsense up with which I will not put"[10][5][b]satirizes the awkwardness that can result from prohibiting sentence-ending prepositions.Associated Press styleandChicago Styleboth allow this usage. "There is no such rule" against splitting an infinitive, according toThe Oxford Guide to Plain English,[11]and it has "never been wrong to 'split' an infinitive".[12]In some cases it may be preferable to split an infinitive.[11][13]In his grammar bookA Plea for the Queen's English(1864),Henry Alfordclaimed that because "to" was part of the infinitive, the parts were inseparable.[14]This was in line with a 19th-century movement among grammarians to transfer Latin rules to the English language. In Latin, infinitives are single words (e.g., "amare, cantare, audire"), making split infinitives impossible.[11] Those who impose this rule on themselves or their students are following a modern English "rule" that was neither used historically nor universally followed in professional writing. 
Jeremy Butterfield described this perceived prohibition as one of "the folk commandments of English usage".[15]TheChicago Manual of Stylesays: There is a widespread belief—one with no historical or grammatical foundation—that it is an error to begin a sentence with a conjunction such as "and", "but", or "so". In fact, a substantial percentage (often as many as 10 percent) of the sentences in first-rate writing begin with conjunctions. It has been so for centuries, and even the most conservative grammarians have followed this practice.[16][c] Regarding the word "and",Fowler's Modern English Usagestates, "There is a persistent belief that it is improper to begin a sentence withAnd, but this prohibition has been cheerfully ignored by standard authors from Anglo-Saxon times onwards."[17]Garner's Modern American Usageadds, "It is rank superstition that this coordinating conjunction cannot properly begin a sentence."[18]The word "but" suffers from similar misconceptions.Garnersays, "It is a gross canard that beginning a sentence withbutis stylistically slipshod. In fact, doing so is highly desirable in any number of contexts, as many style books have said (many correctly pointing out thatbutis more effective thanhoweverat the beginning of a sentence)".[19]Fowler'sechoes this sentiment: "The widespread public belief thatButshould not be used at the beginning of a sentence seems to be unshakeable. Yet it has no foundation."[20] It is a misconception that the passive voice is always incorrect in English.[21]Some "writing tutors" believe that the passive voice is to be avoided in all cases,[22]but "there are legitimate uses for the passive voice", says Paul Brians.[23]Mignon Fogartyalso points out that "passive sentences aren't incorrect"[24]and "If you don't know who is responsible for an action, passive voice can be the best choice".[25][d]When the active or passive voice can be used without much awkwardness, there arediffering opinionsabout which is preferable.Bryan A. Garnernotes, "Many writers talk about passive voice without knowing exactly what it is. In fact, many think that any BE-VERB signals passive voice."[26] Some proscriptions of passive voice stem from its use to avoid accountability or asweasel words, rather than from its supposed ungrammaticality. Some style guides use the termdouble negativeto refer exclusively to thenonstandarduse of reinforcing negations (negative concord, which is considered standard in some other languages), e.g., using "I don't know nothing" to mean "I know nothing". But the term "double negative" can sometimes refer to the standard English constructions calledlitotesor nested negatives, e.g., using "He is not unhealthy" to mean "He is healthy". In some cases, nested negation is used to convey nuance, uncertainty, or the possibility ofa third optionother than a statement or its negation. 
For example, an author may write "I'm not unconvinced by his argument" to imply they find an argument persuasive, but not definitive.[27] Some writers suggest avoiding nested negatives as arule of thumbfor clear and concise writing.[28]Overuse of nested negatives can result in sentences that are difficult to parse, as in the sentence "I am not sure whether it is not true to say that the Milton who once seemed not unlike a seventeenth-century Shelley had not become[...]" Richard Nordquist writes, "no rule exists regarding the number of sentences that make up a paragraph", noting that professional writers use "paragraphs as short as a single word".[29]According to theOxford Guide to Plain English: If you can say what you want to say in a single sentence that lacks a direct connection with any other sentence, just stop there and go on to a new paragraph. There's no rule against it. A paragraph can be a single sentence, whether long, short, or middling.[30] According to theUniversity of North Carolina at Chapel Hill'sWriting Center's website, "Many students define paragraphs in terms of length: a paragraph is a group of at least five sentences, a paragraph is half a page long, etc." The website explains, "Length and appearance do not determine whether a section in a paper is a paragraph. For instance, in some styles of writing, particularly journalistic styles, a paragraph can be just one sentence long."[31] Writers such asShakespeare,Samuel Johnson, and others since Anglo-Saxon days have been "shrinking English". Some opinion makers in the 17th and 18th century eschewed contractions, but beginning in the 1920s, usage guides have mostly allowed them.[32]Most writing handbooks now recommend using contractions to create more readable writing,[33]but many schools continue to teach that contractions are prohibited in academic and formal writing,[34][35][36]contributing to this misconception. Common examples of words described as "not real" include "funnest", "impactful", and "mentee",[37][38]all of which are in common use, appear in numerous dictionaries as English words,[39][40][41][42]and follow standard rules for constructing English words frommorphemes. Many linguists follow adescriptiveapproach to language, where some usages are labeled merely nonstandard, not improper or incorrect. The word "inflammable" can be derived by two different constructions, both following standard rules of English grammar: appending the suffix-ableto the wordinflamecreates a word meaning "able to be inflamed", while adding the prefixin-to the wordflammablecreates a word meaning "not flammable". Thus "inflammable" is anauto-antonym, a word that can be its own antonym, depending on context. Because of the risk of confusion, style guides sometimes recommend using the unambiguous terms "flammable" and "not flammable".[43] It is sometimes claimed that "nauseous" means "causing nausea" (nauseating), not suffering from it (nauseated). This prescription is contradicted by vast evidence from English usage, and Merriam-Webster finds no source for the rule before a published letter by a physician, Deborah Leary, in 1949.[44] It is true that the adjective "healthful" has been pushed out in favor of "healthy" in recent times.[45]But the distinction between the words dates only to the 19th century. Before that, the words were used interchangeably; some examples date to the 16th century.[46]The use of "healthful" in place of "healthy" is now regarded as unusual enough that it may be consideredhypercorrected.[47]
https://en.wikipedia.org/wiki/Common_English_usage_misconceptions
Some English words are often used in ways that are contentious among writers on usage and prescriptive commentators. The contentious usages are especially common in spoken English, and academic linguists point out that they are accepted by many listeners. While in some circles the usages below may make the speaker sound uneducated or illiterate, in other circles the more standard or more traditional usage may make the speaker sound stilted or pretentious. For a list of disputes more complicated than the usage of a single word or phrase, see English usage controversies. On the word unique, for example, one usage guide notes: "Those who insist that unique cannot be modified by such adverbs as more, most, and very are clearly wrong: our evidence shows that it can be and frequently is modified by such adverbs."[126]
https://en.wikipedia.org/wiki/List_of_English_words_with_disputed_usage
Īhām(ایهام) inPersian,Urdu,KurdishandArabic poetryis a literary device in which an author uses a word, or an arrangement of words, that can be read in several ways. Each of the meanings may be logically sound, equally true and intended.[1] In the 12th century,Rashid al-Din Vatvatdefinedīhāmas follows: "Īhāmin Persian means to create doubt. This is a literary device, also calledtakhyīl[to make one suppose and fancy], whereby a writer (dabīr), in prose, or a poet, in verse, employs a word with two different meanings, one direct and immediate (qarīb) and the other remote and strange (gharīb), in such a manner that the listener, as soon as he hears that word, thinks of its direct meaning while in actuality the remote meaning is intended."[1] Amir Khusrow(1253–1325 CE) introduced the notion that any of the several meanings of a word, or phrase, might be equally true and intended, creating a multilayered text.[2]Discerning the various layers of meanings would be a challenge to the reader, who has to focus on and keep turning over the passage in his mind, applying his erudition and imagination to perceive alternative meanings.[1] Another idea associated withīhāmis that a verse may function as a mirror of the reader's condition, as expressed by the 14th-century authorShaykh Maneri: "A verse by itself has no fixed meaning. It is the reader/listener who picks up an idea consistent with the subjective condition of his mind."[1]The 15th-century poetFawhr-e Din Nizamiconsideredīhāman essential element of any good work of poetry: "A poem that doesn't have dual-meaning words, such a poem does not attract anyone at all—a poem without words of two senses."[3] Īhāmis an important stylistic device inSufiliterature, perfected by writers such asHafez(1325/1326–1389/1390 CE).[1][4]Nalîis an example of another poet who has usedīhāmwidely in his poetry. Applications of this "art of ambiguity" or "amphibology" include texts that can be read as descriptions of earthly or divine love.[4][5][6] Haleh Pourafzal and Roger Montgomery, writing inHaféz: Teachings of the Philosopher of Love(1998), discussīhāmin terms of "biluminosity", simultaneous illumination from two directions, describing it as "a technique of comparison involving wordplay, sound association, and double entendre, keeping the reader in doubt as to the 'right' meaning of the word. Biluminosity removes the burden of choice and invites the reader to enter a more empowering dimension ofīhāmthat embraces the quality of amphibians [...]—beings capable of living equally well in two radically different environments. As a result, the reader is freed from the obsession to find the 'right answer' through speculation and instead can concentrate on enjoying nuances and being awed by how the slightest shift in perception creates a new meaning. [...] From the perspective of Haféz as the composer of poetry, biluminosity allows two different points of view to shed light upon each other."[7]
https://en.wikipedia.org/wiki/%C4%AAh%C4%81m
Thesuffix-onym(fromAncient Greek:ὄνυμα,lit.'name') is abound morpheme, that is attached to the end of aroot word, thus forming a newcompound wordthat designates a particularclassofnames. Inlinguisticterminology, compound words that are formed with suffix -onym are most commonly used as designations for variousonomasticclasses. Most onomastic terms that are formed with suffix -onym areclassical compounds, whose word roots are taken fromclassical languages(Greek and Latin).[1][2] For example, onomastic terms liketoponymandlinguonymare typical classical (or neoclassical) compounds, formed from suffix-onymand classical (Greek and Latin) root words (Ancient Greek:τόπος/ place;Latin:lingua/ language). In some compounds, the-onymmorpheme has been modified by replacing (or dropping) the "o". In the compounds likeananymandmetanym, the correct forms (anonymandmetonym) were pre-occupied by other meanings. Other, late 20th century examples, such ashypernymandcharacternym, are typically redundantneologisms, for which there are more traditional words formed with the full-onym(hyperonymandcharactonym). The English suffix-onymis from theAncient Greeksuffix-ώνυμον(ōnymon), neuter of the suffixώνυμος(ōnymos), having a specified kind of name, from the Greekὄνομα(ónoma),Aeolic Greekὄνυμα (ónyma), "name". The form-ōnymosis that taken byónomawhen it is the end component of abahuvrihicompound, but in English its use is extended totatpuruṣacompounds. The suffix is found in many modern languages with various spellings. Examples are:Dutchsynoniem,GermanSynonym,Portuguesesinónimo,Russianсиноним (sinonim),Polishsynonim,Finnishsynonyymi,Indonesiansinonim,Czechsynonymum. According to a 1988 study[3]of words ending in-onym, there are four discernible classes of-onymwords: (1) historic, classic, or, for want of better terms, naturally occurring or common words; (2) scientific terminology, occurring in particular in linguistics, onomastics, etc.; (3) language games; and (4)nonce words. Older terms are known to gain new, sometimes contradictory, meanings (e.g.,eponymandcryptonym). In many cases, two or more words describe the same phenomenon, but no precedence is discernible (e.g.,necronymandpenthonym). New words are sometimes created, the meaning of which duplicating existing terms. On occasion, new words are formed with little regard to historical principles.
https://en.wikipedia.org/wiki/-onym
Semantic heterogeneity arises when database schemas or datasets for the same domain are developed by independent parties, resulting in differences in the meaning and interpretation of data values.[1] Beyond structured data, the problem of semantic heterogeneity is compounded by the flexibility of semi-structured data and the various tagging methods applied to documents or unstructured data. Semantic heterogeneity is one of the more important sources of differences in heterogeneous datasets. Yet, for multiple data sources to interoperate with one another, it is essential to reconcile these semantic differences. Decomposing the various sources of semantic heterogeneity provides a basis for understanding how to map and transform data to overcome these differences. One of the first known classification schemes applied to data semantics is from William Kent more than two decades ago.[2] Kent's approach dealt more with structural mapping issues than with differences in meaning, for which he pointed to data dictionaries as a potential solution. One of the most comprehensive classifications is from Pluempitiwiriyawej and Hammer, "Classification Scheme for Semantic and Schematic Heterogeneities in XML Data Sources".[3] They classify heterogeneities into three broad classes; moreover, mismatches or conflicts can occur between set elements (a "population" mismatch) or attributes (a "description" mismatch). Michael Bergman expanded upon this schema by adding a fourth major explicit category of language, and also added examples of each kind of semantic heterogeneity, resulting in about 40 distinct potential categories.[4][5] Among the sources of semantic heterogeneity collected in that table are: language and encoding mismatches (for example, ASCII v UTF-8); ambiguous sentence references (such as "I'm glad I'm a man, and so is Lola", from "Lola" by Ray Davies and the Kinks); synonyms, acronyms and homonyms; cases in which two types (classes or sets) are asserted as being the same when their scope and reference are not (for example, Berlin the city v Berlin the official city-state); cases in which two individuals are asserted as being the same when they are actually distinct (for example, John F. Kennedy the president v John F. Kennedy the aircraft carrier); domain mismatches; data-representation conflicts (confusion often arises in the use of literals v URIs v object types); and data conflicts (a common problem, more acute with closed-world approaches than with open-world ones). A different approach toward classifying semantics and integration approaches is taken by Sheth et al.[6] Under their concept, they split semantics into three forms: implicit, formal and powerful. Implicit semantics are what is either largely present or can easily be extracted; formal languages, though relatively scarce, occur in the form of ontologies or other description logics; and powerful (soft) semantics are fuzzy and not limited to rigid set-based assignments. Sheth et al.'s main point is that first-order logic (FOL) or description logic alone is inadequate to properly capture the needed semantics. Besides data interoperability, relevant areas in information technology that depend on reconciling semantic heterogeneities include data mapping, semantic integration, and enterprise information integration, among many others. From the conceptual level down to the actual data, there are differences in perspective, vocabularies, measures and conventions once any two data sources are brought together. Explicit attention to these semantic heterogeneities is one means to get the information to integrate or interoperate.
A mere twenty years ago, information technology systems expressed and stored data in a multitude of formats and systems. The Internet and Web protocols have done much to overcome these sources of differences. While there is a large number of categories of semantic heterogeneity, these categories are also patterned and can be anticipated and corrected. These patterned sources inform what kind of work must be done to overcome semantic differences where they still reside.
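To make a few of the heterogeneities listed above concrete, the following is a minimal Python sketch, not drawn from any of the cited classifications, of how field-name synonyms, differing measurement units, and differing character encodings might be reconciled when merging two records; all field names and values are invented for illustration.

# Hypothetical records from two independently developed sources describing the same entity.
source_a = {"patient_name": "Müller", "weight": 72.5, "weight_unit": "kg"}
source_b = {"name": "Müller".encode("utf-8"), "weight": 159.8, "weight_unit": "lb"}

FIELD_SYNONYMS = {"patient_name": "name"}          # naming (synonym) heterogeneity
UNIT_TO_KG = {"kg": 1.0, "lb": 0.45359237}         # measurement-unit heterogeneity

def normalize(record):
    """Map a source record onto a single target vocabulary, encoding and unit system."""
    out = {}
    for field, value in record.items():
        field = FIELD_SYNONYMS.get(field, field)   # reconcile field-name synonyms
        if isinstance(value, bytes):               # reconcile encoding differences
            value = value.decode("utf-8")
        out[field] = value
    if "weight_unit" in out:                       # reconcile unit differences
        out["weight"] = round(out["weight"] * UNIT_TO_KG[out.pop("weight_unit")], 2)
    return out

print(normalize(source_a))  # {'name': 'Müller', 'weight': 72.5}
print(normalize(source_b))  # {'name': 'Müller', 'weight': 72.48}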
https://en.wikipedia.org/wiki/Semantic_heterogeneity
Semantic integration is the process of interrelating information from diverse sources, for example calendars and to-do lists, email archives, presence information (physical, psychological, and social), documents of all sorts, contacts (including social graphs), search results, and advertising and marketing relevance derived from them. In this regard, semantics focuses on the organization of and action upon information by acting as an intermediary between heterogeneous data sources, which may conflict not only in structure but also in context or value. In enterprise application integration (EAI), semantic integration can facilitate or even automate the communication between computer systems using metadata publishing. Metadata publishing potentially offers the ability to automatically link ontologies. One approach to (semi-)automated ontology mapping requires the definition of a semantic distance or its inverse, semantic similarity, and appropriate rules. Other approaches include so-called lexical methods, as well as methodologies that rely on exploiting the structures of the ontologies. For explicitly stating similarity or equality, most ontology languages provide special properties or relationships; OWL, for example, has "owl:equivalentClass", "owl:equivalentProperty" and "owl:sameAs". Eventually system designs may see the advent of composable architectures where published semantic-based interfaces are joined together to enable new and meaningful capabilities[citation needed]. These could predominantly be described by means of design-time declarative specifications that could ultimately be rendered and executed at run-time[citation needed]. Semantic integration can also be used to facilitate design-time activities of interface design and mapping. In this model, semantics are only explicitly applied at design time, and the run-time systems work at the syntax level[citation needed]. This "early semantic binding" approach can improve overall system performance while retaining the benefits of semantic-driven design[citation needed]. In industry use cases, it has been observed that semantic mappings are performed only within the scope of an ontology class or a datatype property. These identified semantic integrations are: (1) integration of ontology class instances into another ontology class without any constraint, (2) integration of selected instances in one ontology class into another ontology class subject to a range constraint on the property value, and (3) integration of ontology class instances into another ontology class with a value transformation of the instance property. Each of them requires a particular mapping relationship, respectively: (1) an equivalence or subsumption mapping relationship, (2) a conditional mapping relationship that constrains the value of a property (data range), and (3) a transformation mapping relationship that transforms the value of a property (unit transformation). Each identified mapping relationship can accordingly be defined as either (1) a direct mapping type, (2) a data range mapping type, or (3) a unit transformation mapping type.
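The three mapping relationship types identified above lend themselves to a compact illustration. Below is a minimal, hypothetical Python sketch, not taken from any cited system, of (1) a direct equivalence mapping, (2) a conditional mapping constrained by the data range of a property value, and (3) a transformation mapping that converts a property's unit; the class names, property names and values are invented.

# Hypothetical source-ontology instances.
instances = [
    {"class": "Vehicle", "name": "Car A", "top_speed_mph": 120},
    {"class": "Vehicle", "name": "Car B", "top_speed_mph": 90},
]

def direct_mapping(instance):
    # (1) Equivalence/subsumption: every source instance becomes a target-class instance.
    return {"class": "Automobile", **{k: v for k, v in instance.items() if k != "class"}}

def conditional_mapping(instance, min_speed=100):
    # (2) Conditional: integrate only instances whose property value falls in the required data range.
    return direct_mapping(instance) if instance["top_speed_mph"] >= min_speed else None

def transformation_mapping(instance):
    # (3) Transformation: convert the property value (here miles per hour to km/h).
    mapped = direct_mapping(instance)
    mapped["top_speed_kmh"] = round(mapped.pop("top_speed_mph") * 1.609344, 1)
    return mapped

print([m for i in instances if (m := conditional_mapping(i))])
print([transformation_mapping(i) for i in instances])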
In the case of integrating a supplemental data source, a semantic query against the integrated ontology, for example:

SELECT ?medication
WHERE {
  ?diagnosis a example:Diagnosis .
  ?diagnosis example:name "TB of vertebra" .
  ?medication example:canTreat ?diagnosis .
}

can be answered by an equivalent query against the underlying relational source:

SELECT DRUG.medID
FROM DIAGNOSIS, DRUG, DRUG_DIAGNOSIS
WHERE DIAGNOSIS.diagnosisID = DRUG_DIAGNOSIS.diagnosisID
  AND DRUG.medID = DRUG_DIAGNOSIS.medID
  AND DIAGNOSIS.name = "TB of vertebra"

The Pacific Symposium on Biocomputing has been a venue for the popularization of the ontology mapping task in the biomedical domain, and a number of papers on the subject can be found in its proceedings.
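As a runnable illustration of the semantic query above, the following Python sketch uses the rdflib library to build a tiny graph under a hypothetical http://example.org/ namespace (standing in for the "example:" prefix) and evaluates essentially the same SPARQL query; the diagnosis and drug identifiers are invented.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")          # hypothetical namespace for "example:"
g = Graph()

g.add((EX.d1, RDF.type, EX.Diagnosis))         # a diagnosis instance
g.add((EX.d1, EX.name, Literal("TB of vertebra")))
g.add((EX.m1, EX.canTreat, EX.d1))             # a drug asserted to treat that diagnosis

query = """
PREFIX example: <http://example.org/>
SELECT ?medication
WHERE {
  ?diagnosis a example:Diagnosis .
  ?diagnosis example:name "TB of vertebra" .
  ?medication example:canTreat ?diagnosis .
}
"""
for row in g.query(query):
    print(row.medication)                      # -> http://example.org/m1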
https://en.wikipedia.org/wiki/Semantic_integration
Semantic matching is a technique used in computer science to identify information that is semantically related. Given any two graph-like structures, e.g. classifications, taxonomies, database or XML schemas, and ontologies, matching is an operator which identifies those nodes in the two structures which semantically correspond to one another. For example, applied to file systems, it can determine that a folder labeled "car" is semantically equivalent to another folder "automobile" because they are synonyms in English. This information can be taken from a linguistic resource like WordNet. In recent years many such matching operators have been proposed.[1] S-Match is an example of a semantic matching operator.[2] It works on lightweight ontologies,[3] namely graph structures where each node is labeled by a natural language sentence, for example in English. These sentences are translated into a formal logical formula (according to an artificial unambiguous language) codifying the meaning of the node, taking into account its position in the graph. For example, if the folder "car" is under another folder "red", we can say that the meaning of the folder "car" is "red car" in this case. This is translated into the logical formula "red AND car". The output of S-Match is a set of semantic correspondences called mappings, attached with one of the following semantic relations: disjointness (⊥), equivalence (≡), more specific (⊑) and less specific (⊒). In our example, the algorithm will return a mapping between "car" and "automobile" attached with an equivalence relation. Information semantically matched can also be used as a measure of relevance through a mapping of near-term relationships. Such use of S-Match technology is prevalent in the career space, where it is used to gauge depth of skills through relational mapping of information found in applicant resumes. Semantic matching represents a fundamental technique in many applications in areas such as resource discovery, data integration, data migration, query translation, peer-to-peer networks, agent communication, and schema and ontology merging. Its use is also being investigated in other areas such as event processing.[4] In fact, it has been proposed as a valid solution to the semantic heterogeneity problem, namely managing the diversity in knowledge. Interoperability among people of different cultures and languages, having different viewpoints, and using different terminology has always been a huge problem. Especially with the advent of the Web and the consequential information explosion, the problem seems to be emphasized. People face the concrete problem of retrieving, disambiguating, and integrating information coming from a wide variety of sources.
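A minimal Python sketch of the matching idea described above is shown below; the tiny synonym and hypernym tables stand in for a linguistic resource such as WordNet, and the labels and relations are invented for illustration rather than taken from the S-Match implementation.

# Toy linguistic resource: synonym sets and "is a kind of" (hypernym) links.
SYNONYMS = [{"car", "automobile"}, {"colour", "color"}]
HYPERNYM = {"car": "vehicle", "automobile": "vehicle"}

def relation(a, b):
    """Return a semantic relation between two node labels: '≡', '⊑', '⊒' or '⊥'."""
    if a == b or any({a, b} <= s for s in SYNONYMS):
        return "≡"                       # equivalence
    if HYPERNYM.get(a) == b:
        return "⊑"                       # a is more specific than b
    if HYPERNYM.get(b) == a:
        return "⊒"                       # a is less specific than b
    return "⊥"                           # unrelated labels are treated as disjoint in this toy sketch

print(relation("car", "automobile"))     # ≡
print(relation("car", "vehicle"))        # ⊑
print(relation("vehicle", "automobile")) # ⊒
print(relation("car", "banana"))         # ⊥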
https://en.wikipedia.org/wiki/Semantic_matching
Asemantic network, orframe networkis aknowledge basethat representssemanticrelations betweenconceptsin a network. This is often used as a form ofknowledge representation. It is adirectedorundirected graphconsisting ofvertices, which representconcepts, andedges, which representsemantic relationsbetweenconcepts,[1]mapping or connectingsemantic fields. A semantic network may be instantiated as, for example, agraph databaseor aconcept map. Typical standardized semantic networks are expressed assemantic triples. Semantic networks are used inneurolinguisticsandnatural language processingapplications such assemantic parsing[2]andword-sense disambiguation.[3]Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., ofsocial mediaposts), to reveal biases (e.g., in news coverage), or even to map an entire research field.[4] Examples of the use of semantic networks inlogic,directed acyclic graphsas a mnemonic tool, dates back centuries, the earliest documented use being the Greek philosopherPorphyry's commentary onAristotle'scategoriesin the third century AD. Incomputing history, "Semantic Nets" for thepropositional calculuswere firstimplementedforcomputersbyRichard H. Richensof theCambridge Language Research Unitin 1956 as an "interlingua" formachine translationofnatural languages,[5]although the importance of this work and the Cambridge Language Research Unit was only belatedly realized. Semantic networks were also independently implemented by Robert F. Simmons[6]and Sheldon Klein, using thefirst-order predicate calculusas a base, after being inspired by a demonstration ofVictor Yngve. The "line of research was originated by the first President of theAssociation for Computational Linguistics, Victor Yngve, who in 1960 had published descriptions ofalgorithmsfor using aphrase structure grammarto generate syntactically well-formed nonsense sentences. Sheldon Klein and I about 1962–1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text."[7]Other researchers, most notablyM. Ross Quillian[8]and others atSystem Development Corporationhelped contribute to their work in the early 1960s as part of the SYNTHEX project. It's these publications at System Development Corporation that most modern derivatives of the term "semantic network" cite as their background. Later prominent works were done byAllan M. Collinsand Quillian (e.g., Collins and Quillian;[9][10]Collins and Loftus[11]Quillian[12][13][14][15]). Still later in 2006, Hermann Helbig fully describedMultiNet.[16] In the late 1980s, two universities in theNetherlands,GroningenandTwente, jointly began a project calledKnowledge Graphs, which are semantic networks but with the added constraint that edges are restricted to be from a limited set of possible relations, to facilitatealgebras on the graph.[17]In the subsequent decades, the distinction between semantic networks andknowledge graphswas blurred.[18][19]In 2012,Googlegave their knowledge graph the nameKnowledge Graph. The semantic link network was systematically studied as asemantic social networkingmethod. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links and reasoning rules on semantic links. 
The systematic theory and model was published in 2004.[20]This research direction can trace to the definition of inheritance rules for efficient model retrieval in 1998[21]and the Active Document Framework ADF.[22]Since 2003, research has developed toward social semantic networking.[23]This work is a systematic innovation at the age of theWorld Wide Weband global social networking rather than an application or simple extension of the Semantic Net (Network). Its purpose and scope are different from that of the Semantic Net (or network).[24]The rules for reasoning and evolution and automatic discovery of implicit links play an important role in the Semantic Link Network.[25][26]Recently it has been developed to support Cyber-Physical-Social Intelligence.[27]It was used for creating a general summarization method.[28]The self-organised Semantic Link Network was integrated with a multi-dimensional category space to form a semantic space to support advanced applications with multi-dimensional abstractions and self-organised semantic links[29][30]It has been verified that Semantic Link Network play an important role in understanding and representation throughtext summarisationapplications.[31][32]Semantic Link Network has been extended from cyberspace to cyber-physical-social space. Competition relation and symbiosis relation as well as their roles in evolving society were studied in the emerging topic: Cyber-Physical-Social Intelligence[33] More specialized forms of semantic networks has been created for specific use. For example, in 2008, Fawsy Bendeck's PhD thesis formalized theSemantic Similarity Network(SSN) that contains specialized relationships and propagation algorithms to simplify thesemantic similarityrepresentation and calculations.[34] A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based. They consist of arcs (spokes) and nodes (hubs) which can be organized into a taxonomic hierarchy. Different semantic networks can also be connected by bridge nodes. Semantic networks contributed to the ideas ofspreading activation,inheritance, and nodes as proto-objects. One process of constructing semantic networks, known also asco-occurrence networks, includes identifying keywords in the text, calculating the frequencies of co-occurrences, and analyzing the networks to find central words and clusters of themes in the network.[35] In the field oflinguistics, semantic networks represent how the human mind handles associated concepts. Typically, concepts in a semantic network can have one of two different relationships: either semantic or associative. If semantic in relation, the two concepts are linked by any of the following semantic relationships:synonymy,antonymy,hypernymy,hyponymy,holonymy,meronymy,metonymy, orpolysemy. These are not the only semantic relationships, but some of the most common. If associative in relation, the two concepts are linked based on their frequency to occur together. These associations are accidental, meaning that nothing about their individual meanings requires them to be associated with one another, only that they typically are. Examples of this would be pig and farm, pig and trough, or pig and mud. While nothing about the meaning of pig forces it to be associated with farms, as pigs can be wild, the fact that pigs are so frequently found on farms creates an accidental associated relationship. 
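The co-occurrence construction mentioned above can be sketched in a few lines of Python; the sentences and stopword list are invented, and a real analysis would add keyword filtering and network statistics such as centrality and clustering.

from collections import Counter
from itertools import combinations

sentences = [
    "the pig lives on the farm",
    "the pig eats from the trough",
    "mud covers the pig on the farm",
]
STOPWORDS = {"the", "on", "from"}

edges = Counter()
for sentence in sentences:
    words = {w for w in sentence.split() if w not in STOPWORDS}
    for pair in combinations(sorted(words), 2):   # words co-occurring within one sentence
        edges[pair] += 1

# Edge weights of the resulting co-occurrence (semantic) network; "pig -- farm" gets weight 2.
for (a, b), weight in edges.most_common():
    print(a, "--", b, weight)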
These thematic relationships are common within semantic networks and are notable results infree associationtests. As the initial word is given, activation of the most closely related concepts begin, spreading outward to the lesser associated concepts. An example of this would be the initial word pig prompting mammal, then animal, and then breathes. This example shows that taxonomic relationships are inherent within semantic networks. The most closely related concepts typically sharesemantic features, which are determinants of semantic similarity scores. Words with higher similarity scores are more closely related, thus have higher probability of being a close word in the semantic network. These relationships can be suggested into the brain throughpriming, where previous examples of the same relationship are shown before the target word is shown. The effect of priming on a semantic network linking can be seen through the speed of the reaction time to the word. Priming can help to reveal the structure of a semantic network and which words are most closely associated with the original word. Disruption of a semantic network can lead to a semantic deficit (not to be confused with assemantic dementia). There exists physical manifestation of semantic relationships in the brain as well. Category-specific semantic circuits show that words belonging to different categories are processed in circuits differently located throughout the brain. For example, the semantic circuits for a word associated with the face or mouth (such as lick) is located in a different place of the brain than a word associated with the leg or foot (such as kick). This is a primary result of a 2013 study published byFriedemann Pulvermüller[citation needed]. These semantic circuits are directly tied to their sensorimotor areas of the brain. This is known as embodied semantics, a subtopic ofembodied language processing. If brain damage occurs, the normal processing of semantic networks could be disrupted, leading to preference into what kind of relationships dominate the semantic network in the mind. The following code shows an example of a semantic network in theLisp programming languageusing anassociation list. To extract all the information about the "canary" type, one would use theassocfunction with a key of "canary".[36] An example of a semantic network isWordNet, alexicaldatabase ofEnglish. It groups English words into sets of synonyms calledsynsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined aremeronymy(A is a meronym of B if A is part of B),holonymy(B is a holonym of A if B contains A),hyponymy(ortroponymy) (A is subordinate of B; A is kind of B),hypernymy(A is superordinate of B),synonymy(A denotes the same as B) andantonymy(A denotes the opposite of B). WordNet properties have been studied from anetwork theoryperspective and compared to other semantic networks created fromRoget's Thesaurusandword associationtasks. From this perspective the three of them are asmall world structure.[37] It is also possible to represent logical descriptions using semantic networks such as theexistential graphsofCharles Sanders Peirceor the relatedconceptual graphsofJohn F. Sowa.[1]These have expressive power equal to or exceeding standardfirst-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. 
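The Lisp listing referred to earlier in this article is not reproduced in this excerpt; the following Python sketch shows the same association-list idea, with invented facts about the "canary" type and a small lookup in the spirit of Lisp's assoc, plus inheritance along "is-a" links.

# A semantic network stored as an association list: each concept maps to (relation, value) pairs.
semantic_net = {
    "canary": [("is-a", "bird"), ("can", "sing"), ("color", "yellow")],
    "bird":   [("is-a", "animal"), ("has-part", "wings"), ("can", "fly")],
    "animal": [("can", "breathe")],
}

def assoc(concept):
    """Return the (relation, value) pairs stored for a concept, like Lisp's assoc."""
    return semantic_net.get(concept, [])

def all_facts(concept):
    """Follow 'is-a' links so inherited properties are also reported."""
    facts = list(assoc(concept))
    for rel, value in assoc(concept):
        if rel == "is-a":
            facts += all_facts(value)
    return facts

print(all_facts("canary"))   # direct facts about the canary plus those inherited from bird and animal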
Some automated reasoners exploit the graph-theoretic features of the networks during processing. Other examples of semantic networks areGellishmodels.Gellish Englishwith itsGellish English dictionary, is aformal languagethat is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, whereas multiple languages share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type. Each relation type itself is a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish Dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable. SciCrunchis a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource IDentifiers or RRIDs) for software, lab tools etc. and it also provides options to create links between RRIDs and from communities. Another example of semantic networks, based oncategory theory, isologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function.Commutative diagramsalso are prescribed to constrain the semantics. In the social sciences people sometimes use the term semantic network to refer toco-occurrence networks.[38][39]The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks. This process includes identifying keywords in the text, constructing co-occurrence networks, and analyzing the networks to find central words and clusters of themes in the network. It is a particularly useful method to analyze large text andbig data.[40] There are also elaborate types of semantic networks connected with corresponding sets of software tools used forlexicalknowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro[41]or theMultiNetparadigm of Hermann Helbig,[42]especially suited for the semantic representation of natural language expressions and used in severalNLPapplications. Semantic networks are used in specialized information retrieval tasks, such asplagiarism detection. They provide information on hierarchical relations in order to employsemantic compressionto reduce language diversity and enable the system to match word meanings, independently from sets of words used. The Knowledge Graphproposed by Google in 2012 is actually an application of semantic network in search engine. Modeling multi-relational data like semantic networks in low-dimensional spaces through forms ofembeddinghas benefits in expressing entity relationships as well as extracting relations from mediums like text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE[43](NeurIPS2013). Applications of embedding knowledge base data includeSocial network analysisandRelationship extraction.
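The translation-based embedding approach mentioned above (TransE) can be illustrated with a short sketch; the vectors below are random stand-ins rather than trained embeddings, and the scoring function simply follows the published idea that a true triple (head, relation, tail) should satisfy head + relation ≈ tail.

import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Untrained, randomly initialised embeddings for a toy vocabulary (illustration only).
entities = {name: rng.normal(size=dim) for name in ["Dresden", "Germany", "Berlin"]}
relations = {"located_in": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    """Higher (less negative) scores mean the triple is judged more plausible."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# After training, a true triple such as (Dresden, located_in, Germany) should outscore corrupted ones.
print(transe_score("Dresden", "located_in", "Germany"))
print(transe_score("Dresden", "located_in", "Berlin"))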
https://en.wikipedia.org/wiki/Semantic_networks
TheSemantic Web, sometimes known asWeb 3.0, is an extension of theWorld Wide Webthrough standards[1]set by theWorld Wide Web Consortium(W3C). The goal of the Semantic Web is to makeInternetdatamachine-readable. To enable the encoding ofsemanticswith the data, technologies such asResource Description Framework(RDF)[2]andWeb Ontology Language(OWL)[3]are used. These technologies are used to formally representmetadata. For example,ontologycan describeconcepts, relationships betweenentities, and categories of things. These embedded semantics offer significant advantages such asreasoningover data and operating with heterogeneous data sources.[4]These standards promote common data formats and exchange protocols on the Web, fundamentally the RDF. According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries."[5]The Semantic Web is therefore regarded as an integrator across different content and information applications and systems. The term was coined byTim Berners-Leefor a web of data (ordata web)[6]that can be processed by machines[7]—that is, one in which much of themeaningismachine-readable. While its critics have questioned its feasibility, proponents argue that applications inlibraryandinformation science, industry,biologyandhuman sciencesresearch have already proven the validity of the original concept.[8] Berners-Lee originally expressed his vision of the Semantic Web in 1999 as follows: I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A "Semantic Web", which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.[9] The 2001Scientific Americanarticle by Berners-Lee,Hendler, andLassiladescribed an expected evolution of the existing Web to a Semantic Web.[10]In 2006, Berners-Lee and colleagues stated that: "This simple idea…remains largely unrealized".[11]In 2013, more than four million Web domains (out of roughly 250 million total) contained Semantic Web markup.[12] In the following example, the text "Paul Schuster was born in Dresden" on a website will be annotated, connecting a person with their place of birth. The followingHTMLfragment shows how a small graph is being described, inRDFa-syntax using aschema.orgvocabulary and aWikidataID: The example defines the following fivetriples(shown inTurtlesyntax). Each triple represents one edge in the resulting graph: the first element of the triple (thesubject) is the name of the node where the edge starts, the second element (thepredicate) the type of the edge, and the last and third element (theobject) either the name of the node where the edge ends or a literal value (e.g. a text, a number, etc.). The triples result in the graph shown inthe given figure. One of the advantages of usingUniform Resource Identifiers (URIs)is that they can be dereferenced using theHTTPprotocol. According to the so-calledLinked Open Dataprinciples, such a dereferenced URI should result in a document that offers further data about the given URI. 
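The RDFa fragment and the Turtle listing referred to above are not reproduced in this excerpt. As a stand-in, the following Python sketch uses the rdflib library to build a graph consistent with the description, connecting a person named Paul Schuster to Dresden's Wikidata entity (Q1731) via schema.org terms, and prints it in Turtle syntax; the subject URI is invented for illustration.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
person = URIRef("http://example.org/paul-schuster")          # hypothetical subject URI
dresden = URIRef("http://www.wikidata.org/entity/Q1731")     # Wikidata ID for Dresden

g = Graph()
g.bind("schema", SCHEMA)
g.add((person, RDF.type, SCHEMA.Person))                     # each add() is one subject-predicate-object triple
g.add((person, SCHEMA.name, Literal("Paul Schuster")))
g.add((person, SCHEMA.birthPlace, dresden))

print(g.serialize(format="turtle"))                          # the same small graph in Turtle syntax (rdflib 6+ returns a string)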
In this example, all URIs, both for edges and nodes (e.g.http://schema.org/Person,http://schema.org/birthPlace,http://www.wikidata.org/entity/Q1731) can be dereferenced and will result in further RDF graphs, describing the URI, e.g. that Dresden is a city in Germany, or that a person, in the sense of that URI, can be fictional. The second graph shows the previous example, but now enriched with a few of the triples from the documents that result from dereferencinghttps://schema.org/Person(green edge) andhttps://www.wikidata.org/entity/Q1731(blue edges). Additionally to the edges given in the involved documents explicitly, edges can be automatically inferred: the triple from the original RDFa fragment and the triple from the document athttps://schema.org/Person(green edge in the figure) allow to infer the following triple, givenOWLsemantics (red dashed line in the second Figure): The concept of thesemantic networkmodel was formed in the early 1960s by researchers such as thecognitive scientistAllan M. Collins,linguistRoss QuillianandpsychologistElizabeth F. Loftusas a form to represent semantically structured knowledge. When applied in the context of the modern internet, it extends the network ofhyperlinkedhuman-readableweb pagesby inserting machine-readable metadata about pages and how they are related to each other. This enablesautomated agentsto access the Web more intelligently and perform more tasks on behalf of users. The term "Semantic Web" was coined byTim Berners-Lee,[7]the inventor of the World Wide Web and director of the World Wide Web Consortium ("W3C"), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as "a web of data that can be processed directly and indirectly by machines". Many of the technologies proposed by the W3C already existed before they were positioned under the W3C umbrella. These are used in various contexts, particularly those dealing with information that encompasses a limited and defined domain, and where sharing data is a common necessity, such as scientific research or data exchange among businesses. In addition, other technologies with similar goals have emerged, such asmicroformats. Many files on a typical computer can be loosely divided into either human-readable documents, or machine-readable data. Examples of human-readable document files are mail messages, reports, and brochures. Examples of machine-readable data files are calendars, address books, playlists, and spreadsheets, which are presented to a user using an application program that lets the files be viewed, searched, and combined. Currently, the World Wide Web is based mainly on documents written inHypertext Markup Language(HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags provide a method by which computers can categorize the content of web pages. In the examples below, the field names "keywords", "description" and "author" are assigned values such as "computing", and "cheap widgets for sale" and "John Doe". Because of this metadata tagging and categorization, other computer systems that want to access and share this data can easily identify the relevant values. With HTML and a tool to render it (perhapsweb browsersoftware, perhaps anotheruser agent), one can create and present a page that lists items for sale. 
The HTML of this catalog page can make simple, document-level assertions such as "this document's title is 'Widget Superstore'", but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text "X586172" is something that should be positioned near "Acme Gizmo" and "€199", etc. There is no way to say "this is a catalog" or even to establish that "Acme Gizmo" is a kind of title or that "€199" is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page. Semantic HTMLrefers to the traditional HTML practice of markup following intention, rather than specifying layout details directly. For example, the use of<em>denoting "emphasis" rather than<i>, which specifiesitalics. Layout details are left up to the browser, in combination withCascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices. Microformats extend HTML syntax to createmachine-readablesemantic markup about objects including people, organizations, events and products.[13]Similar initiatives includeRDFa,MicrodataandSchema.org. The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data:Resource Description Framework(RDF),Web Ontology Language(OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts. These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessibledatabases,[14]or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to humandeductive reasoningandinference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research. An example of a tag that would be used in a non-semantic web page: Encoding similar information in a semantic web page might look like this: Tim Berners-Lee calls the resulting network ofLinked DatatheGiant Global Graph, in contrast to the HTML-based World Wide Web. Berners-Lee posits that if the past was document sharing, the future isdata sharing. His answer to the question of "how" provides three points of instruction. One, a URL should point to the data. Two, anyone accessing the URL should get data back. Three, relationships in the data should point to additional URLs with data. Tags, including hierarchical categories and tags that are collaboratively added and maintained (e.g. 
withfolksonomies) can be considered part of, of potential use to or a step towards the semantic Web vision.[15][16][17] Uniqueidentifiers, including hierarchical categories and collaboratively added ones, analysis tools andmetadata, including tags, can be used to create forms of semantic webs – webs that are to a certain degree semantic.[18]In particular, such has been used for structuring scientific research i.a. by research topics andscientific fieldsby the projectsOpenAlex,[19][20][21]WikidataandScholiawhich are under development and provideAPIs, Web-pages, feeds and graphs for varioussemantic queries. Tim Berners-Lee has described the Semantic Web as a component of Web 3.0.[22] People keep asking what Web 3.0 is. I think maybe when you've got an overlay ofscalable vector graphics– everything rippling and folding and looking misty – onWeb 2.0and access to a semantic Web integrated across a huge space of data, you'll have access to an unbelievable data resource … "Semantic Web" is sometimes used as a synonym for "Web 3.0",[23]though the definition of each term varies. The next generation of the Web is often termed Web 4.0, but its definition is not clear. According to some sources, it is a Web that involvesartificial intelligence,[24]theinternet of things,pervasive computing,ubiquitous computingand theWeb of Thingsamong other concepts.[25]According to the European Union, Web 4.0 is "the expected fourth generation of the World Wide Web. Using advanced artificial and ambient intelligence, the internet of things, trusted blockchain transactions, virtual worlds and XR capabilities, digital and real objects and environments are fully integrated and communicate with each other, enabling truly intuitive, immersive experiences, seamlessly blending the physical and digital worlds".[26] Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty, inconsistency, and deceit.Automated reasoning systemswill have to deal with all of these issues in order to deliver on the promise of the Semantic Web. This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to the "unifying logic" and "proof" layers of the Semantic Web. The World Wide Web Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web[27](URW3-XG) final report lumps these problems together under the single heading of "uncertainty".[28]Many of the techniques mentioned here will require extensions to the Web Ontology Language (OWL) for example to annotate conditional probabilities. This is an area of active research.[29] Standardization for Semantic Web in the context of Web 3.0 is under the care of W3C.[30] The term "Semantic Web" is often used more specifically to refer to the formats and technologies that enable it.[5]The collection, structuring and recovery of linked data are enabled by technologies that provide aformal descriptionof concepts, terms, and relationships within a givenknowledge domain. These technologies are specified as W3C standards and include: TheSemantic Web Stackillustrates the architecture of the Semantic Web. The functions and relationships of the components can be summarized as follows:[31] Well-established standards: Not yet fully realized: The intent is to enhance theusabilityand usefulness of the Web and its interconnectedresourcesby creatingsemantic web services, such as: Such services could be useful to public search engines, or could be used forknowledge managementwithin an organization. 
Business applications include: In a corporation, there is a closed group of users and the management is able to enforce company guidelines like the adoption of specific ontologies and use ofsemantic annotation. Compared to the public Semantic Web there are lesser requirements onscalabilityand the information circulating within a company can be more trusted in general; privacy is less of an issue outside of handling of customer data. Critics question the basic feasibility of a complete or even partial fulfillment of the Semantic Web, pointing out both difficulties in setting it up and a lack of general-purpose usefulness that prevents the required effort from being invested. In a 2003 paper, Marshall and Shipman point out the cognitive overhead inherent in formalizing knowledge, compared to the authoring of traditional webhypertext:[46] While learning the basics of HTML is relatively straightforward, learning a knowledge representation language or tool requires the author to learn about the representation's methods of abstraction and their effect on reasoning. For example, understanding the class-instance relationship, or the superclass-subclass relationship, is more than understanding that one concept is a "type of" another concept. [...] These abstractions are taught to computer scientists generally and knowledge engineers specifically but do not match the similar natural language meaning of being a "type of" something. Effective use of such a formal representation requires the author to become a skilled knowledge engineer in addition to any other skills required by the domain. [...] Once one has learned a formal representation language, it is still often much more effort to express ideas in that representation than in a less formal representation [...]. Indeed, this is a form of programming based on the declaration of semantic data and requires an understanding of how reasoning algorithms will interpret the authored structures. According to Marshall and Shipman, thetacitand changing nature of much knowledge adds to theknowledge engineeringproblem, and limits the Semantic Web's applicability to specific domains. A further issue that they point out are domain- or organization-specific ways to express knowledge, which must be solved through community agreement rather than only technical means.[46]As it turns out, specialized communities and organizations for intra-company projects have tended to adopt semantic web technologies greater than peripheral and less-specialized communities.[47]The practical constraints toward adoption have appeared less challenging where domain and scope is more limited than that of the general public and the World-Wide Web.[47] Finally, Marshall and Shipman see pragmatic problems in the idea of (Knowledge Navigator-style) intelligent agents working in the largely manually curated Semantic Web:[46] In situations in which user needs are known and distributed information resources are well described, this approach can be highly effective; in situations that are not foreseen and that bring together an unanticipated array of information resources, the Google approach is more robust. Furthermore, the Semantic Web relies on inference chains that are more brittle; a missing element of the chain results in a failure to perform the desired action, while the human can supply missing pieces in a more Google-like approach. [...] 
cost-benefit tradeoffs can work in favor of specially-created Semantic Web metadata directed at weaving together sensible well-structured domain-specific information resources; close attention to user/customer needs will drive these federations if they are to be successful. Cory Doctorow's critique ("metacrap")[48]is from the perspective of human behavior and personal preferences. For example, people may include spurious metadata into Web pages in an attempt to mislead Semantic Web engines that naively assume the metadata's veracity. This phenomenon was well known with metatags that fooled theAltavistaranking algorithm into elevating the ranking of certain Web pages: the Google indexing engine specifically looks for such attempts at manipulation.Peter GärdenforsandTimo Honkelapoint out that logic-based semantic web technologies cover only a fraction of the relevant phenomena related to semantics.[49][50] Enthusiasm about the semantic web could be tempered by concerns regardingcensorshipandprivacy. For instance,text-analyzingtechniques can now be easily bypassed by using other words, metaphors for instance, or by using images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that, with the use ofFOAFfiles and geolocationmeta-data, there would be very little anonymity associated with the authorship of articles on things such as a personal blog. Some of these concerns were addressed in the "Policy Aware Web" project[51]and is an active research and development topic. Another criticism of the semantic web is that it would be much more time-consuming to create and publish content because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism. Another argument in defense of the feasibility of semantic web is the likely falling price of human intelligence tasks in digital labor markets, such asAmazon'sMechanical Turk.[citation needed] Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. TheGRDDL(Gleaning Resource Descriptions from Dialects of Language) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML. The first research group explicitly focusing on the Corporate Semantic Web was the ACACIA team atINRIA-Sophia-Antipolis, founded in 2002. Results of their work include theRDF(S)based Corese[52]search engine, and the application of semantic web technology in the realm ofdistributed artificial intelligencefor knowledge management (e.g. 
ontologies and multi-agent systems for the corporate semantic Web)[53] and e-learning.[54] Since 2008, the Corporate Semantic Web research group, located at the Free University of Berlin, has focused on three building blocks: Corporate Semantic Search, Corporate Semantic Collaboration, and Corporate Ontology Engineering.[55] Ontology engineering research includes the question of how to involve non-expert users in creating ontologies and semantically annotated content,[56] and how to extract explicit knowledge from the interaction of users within enterprises. Tim O'Reilly, who coined the term Web 2.0, proposed a long-term vision of the Semantic Web as a web of data, where sophisticated applications navigate and manipulate it.[57] The data web transforms the World Wide Web from a distributed file system into a distributed database.[58]
https://en.wikipedia.org/wiki/Semantic_Web
Inlinguistics, acalque(/kælk/) orloan translationis awordorphraseborrowed from anotherlanguagebyliteralword-for-word or root-for-roottranslation. When used as averb, "to calque" means to borrow a word or phrase from another language while translating its components, so as to create a new word or phrase (lexeme) in the target language. For instance, the English wordskyscraperhas been calqued in dozens of other languages,[1]combining words for "sky" and "scrape" in each language, as for exampleWolkenkratzerin German,arranha-céuin Portuguese,wolkenkrabberin Dutch,rascacieloin Spanish,grattacieloin Italian,gökdelenin Turkish, andmatenrō(摩天楼)in Japanese. Calques, like direct borrowings, often function as linguistic gap-fillers, emerging when a language lacks existing vocabulary to express new ideas, technologies, or objects. This phenomenon is widespread and is often attributed to the shared conceptual frameworks across human languages. Speakers of different languages tend to perceive the world through common categories such as time, space, and quantity, making the translation of concepts across languages both possible and natural.[2] Calquing is distinct fromphono-semantic matching: while calquing includessemantictranslation, it does not consist ofphoneticmatching—i.e., of retaining the approximatesoundof the borrowed word by matching it with a similar-sounding pre-existing word ormorphemein the target language.[3] Proving that a word is a calque sometimes requires more documentation than does an untranslated loanword because, in some cases, a similar phrase might have arisen in both languages independently. This is less likely to be the case when the grammar of the proposed calque is quite different from that of the borrowing language, or when the calque contains less obvious imagery. One system classifies calques into five groups. This terminology is not universal:[4] Some linguists refer to aphonological calque, in which the pronunciation of a word is imitated in the other language.[8]For example, the English word "radar" becomes the similar-sounding Chinese word雷达(pinyin:léidá),[8]which literally means "to arrive (as fast) as thunder". Partial calques, or loan blends, translate some parts of a compound but not others.[9]For example, the name of the Irish digital television serviceSaorviewis a partial calque of that of the UK service "Freeview", translating the first half of the word from English to Irish but leaving the second half unchanged. Other examples include "liverwurst" (< GermanLeberwurst)[10]and "apple strudel" (< GermanApfelstrudel).[11] The "computer mouse" was named in English for its resemblance to theanimal. Many other languages use their word for "mouse" for the "computer mouse", sometimes using adiminutiveor, inChinese, adding the word "cursor" (标), makingshǔbiāo"mouse cursor" (simplified Chinese:鼠标;traditional Chinese:鼠標;pinyin:shǔbiāo).[citation needed]Another example is the Spanish wordratónthat means both the animal and the computer mouse.[12] The common English phrase "flea market" is a loan translation of the Frenchmarché aux puces("market with fleas").[13]At least 22 other languages calque the French expression directly or indirectly through another language. The wordloanwordis a calque of theGermannounLehnwort. 
In contrast, the term calque is a loanword, from the French noun calque ("tracing, imitation, close copy").[14] Another example of a common morpheme-by-morpheme loan-translation is of the English word "skyscraper", a kenning-like term which may be calqued using the word for "sky" or "cloud" and the word, variously, for "scrape", "scratch", "pierce", "sweep", "kiss", etc. At least 54 languages have their own versions of the English word. Some Germanic and Slavic languages derived their words for "translation" from words meaning "carrying across" or "bringing across", calquing from the Latin translātiō or trādūcō.[15] The Latin weekday names came to be associated by ancient Germanic speakers with their own gods following a practice known as interpretatio germanica: the Latin "Day of Mercury", Mercurii dies (later mercredi in modern French), was borrowed into Late Proto-Germanic as the "Day of Wōđanaz" (Wodanesdag), which became Wōdnesdæg in Old English, then "Wednesday" in Modern English.[16] Since at least 1894, according to the Trésor de la langue française informatisé, the French term calque has been used in its linguistic sense, namely in a publication by Louis Duvau:[17] "Un autre phénomène d'hybridation est la création dans une langue d'un mot nouveau, dérivé ou composé à l'aide d'éléments existant déja dans cette langue, et ne se distinguant en rien par l'aspect extérieur des mots plus anciens, mais qui, en fait, n'est que le calque d'un mot existant dans la langue maternelle de celui qui s'essaye à un parler nouveau. [...] nous voulons rappeler seulement deux ou trois exemples de ces calques d'expressions, parmi les plus certains et les plus frappants." In translation: "Another phenomenon of hybridization is the creation in a language of a new word, derived or composed with the help of elements already existing in that language, and which is not distinguished in any way by the external aspect of the older words, but which, in fact, is only the copy (calque) of a word existing in the mother tongue of the one who tries out a new language. [...] we want to recall only two or three examples of these copies (calques) of expressions, among the most certain and the most striking." Since at least 1926, the term calque has been attested in English, through a publication by the linguist Otakar Vočadlo.[18]
https://en.wikipedia.org/wiki/Calque
A dead metaphor is a figure of speech which has lost the original imagery of its meaning through extensive, repetitive, and popular usage, or because it refers to an obsolete technology or forgotten custom. Because dead metaphors have a conventional meaning that differs from the original, they can be understood without knowing their earlier connotation. Dead metaphors are generally the result of a semantic shift in the evolution of a language,[1] a process called the literalization of a metaphor.[2] A distinction is often made between those dead metaphors whose origins are entirely unknown to the majority of people using them (such as the expression "to kick the bucket") and those whose source is widely known or whose symbolism is easily understood but not often thought about (the idea of "falling in love"). The long-standing metaphorical application of a term can similarly lose its metaphorical quality, coming simply to denote a larger application of the term. The wings of a plane no longer seem to refer metaphorically to a bird's wings; rather, the term "wing" was expanded to include non-living things. Similarly, the legs of a chair are no longer a metaphor but an expansion of the term "leg" to include any supporting pillar. There is debate among literary scholars whether so-called "dead metaphors" are dead or are metaphors. Literary scholar R. W. Gibbs noted that for a metaphor to be dead, it would necessarily lose the metaphorical qualities that it comprises. These qualities, however, still remain. A person can understand the expression "falling head-over-heels in love" even if they have never encountered that variant of the phrase "falling in love". Analytic philosopher Max Black argued that the dead metaphor should not be considered a metaphor at all, but rather classified as a separate vocabulary item.[3] In addition, philosophers such as Colin Murray Turbayne and Kendall Walton have outlined the manner in which "dead metaphors" may continue to exert influence upon a user's thoughts long after their metaphorical properties have seemingly vanished. Their research illustrates the manner in which "dead metaphors" have often become incorporated into accepted scientific and philosophical theories, while also contributing to considerable obfuscation of thought over time.[4][5][6][7]
https://en.wikipedia.org/wiki/Dead_metaphor
Aeuphemism(/ˈjuːfəmɪzəm/YOO-fə-miz-əm) is an innocuous word or expression used in place of one that is deemedoffensiveor suggests something unpleasant.[1]Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer totopics some considertaboosuch as mental or physical disability, sexual intercourse, bodily excretions, pain, violence, illness, or death in a polite way.[2] Euphemismcomes from theGreekwordeuphemia(εὐφημία) which refers to the use of 'words of good omen'; it is a compound ofeû(εὖ), meaning 'good, well', andphḗmē(φήμη), meaning 'prophetic speech; rumour, talk'.[3]Euphemeis a reference to the female Greek spirit of words of praise and positivity, etc. The termeuphemismitself was used as a euphemism by theancient Greeks; with the meaning "to keep a holy silence" (speaking well by not speaking at all).[4] Reasons for using euphemisms vary by context and intent. Commonly, euphemisms are used to avoid directly addressing subjects that might be deemed negative or embarrassing, such asdeath,sex, and excretory bodily functions. They may be created for innocent, well-intentioned purposes or nefariously and cynically, intentionally to deceive, confuse ordeny. Euphemisms which emerge as dominant social euphemisms are often created to serve progressive causes.[5][6]TheOxford University Press'sDictionary of Euphemismsidentifies "late" as an occasionally ambiguous term, whose nature as a euphemism for dead and an adjective meaning overdue, can cause confusion in listeners.[7] Euphemisms are also used to mitigate, soften or downplay the gravity of large-scale injustices,war crimes, or other events that warrant a pattern of avoidance in official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations atAuschwitz, relative to their sheer number, is "directives for the extermination process obscured in bureaucratic euphemisms".[8]Another example of this is during the 2022Russian invasion of Ukraine, where Russian PresidentVladimir Putin, in his speech starting the invasion, called the invasion a "special military operation".[9] Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguistGhil'ad Zuckermann, Israeli Prime MinisterBenjamin Netanyahuused the neutral Hebrew lexical itemפעימותpeimót(literally 'beatings (of the heart)'), rather thanנסיגהnesigá('withdrawal'), to refer to the stages in the Israeli withdrawal from theWest Bank(seeWye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move.[10]Peimótwas thus used as a euphemism for 'withdrawal'.[10]: 181 Euphemism may be used as arhetorical strategy, in which case its goal is to change thevalenceof a description.[clarification needed] Using a euphemism can in itself be controversial, as in the following examples: The use of euphemism online is known as "algospeak" when used to evade automated online moderation techniques used on Meta and TikTok's platforms.[13][14][15][16][17]Algospeak has been used in debate about theIsraeli–Palestinian conflict.[18][19] Phonetic euphemism is used to replace profanities and blasphemies, diminishing their intensity. To alter the pronunciation or spelling of a taboo word (such asprofanity) to form a euphemism is known astaboo deformation, or aminced oath. 
Such modifications include minced oaths such as "darn" for "damn" and "heck" for "hell". Euphemisms formed from understatements include asleep for dead and drinking for consuming alcohol. "Tired and emotional" is a notorious British euphemism for "drunk", one of many recurring jokes popularized by the satirical magazine Private Eye; it has been used by MPs to avoid unparliamentary language. Pleasant, positive, worthy, neutral, or nondescript terms are often substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical movements, marketing, public relations, or advertising initiatives. Some examples of Cockney rhyming slang may serve the same purpose: to call a person a berk sounds less offensive than to call a person a cunt, though berk is short for Berkeley Hunt,[20] which rhymes with cunt.[21] A euphemism may also use a term with a softer connotation that shares the same meaning: for instance, screwed up is a euphemism for 'fucked up'; hook-up and laid are euphemisms for 'sexual intercourse'. Expressions or words from a foreign language may be imported for use as euphemism. For example, the French word enceinte was sometimes used instead of the English word pregnant;[22] abattoir for slaughterhouse, although in French the word retains its explicit violent meaning 'a place for beating down', conveniently lost on non-French speakers. Entrepreneur for businessman adds glamour; douche (French for 'shower') for vaginal irrigation device; bidet ('little pony') for a vessel for anal washing. Ironically, although in English physical "handicaps" are almost always described with euphemism, in French the English word handicap is used as a euphemism for their problematic words infirmité or invalidité.[23] Periphrasis, or circumlocution, is one of the most common devices: to "speak around" a given word, implying it without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas. Bureaucracies frequently spawn euphemisms intentionally, as doublespeak expressions. For example, in the past, the US military used the term "sunshine units" for contamination by radioactive isotopes.[24] The United States Central Intelligence Agency refers to systematic torture as "enhanced interrogation techniques".[25] An effective death sentence in the Soviet Union during the Great Purge often used the clause "imprisonment without right to correspondence": the person sentenced would be shot soon after conviction.[26] As early as 1939, Nazi official Reinhard Heydrich used the term Sonderbehandlung ("special treatment") to mean summary execution of persons viewed as "disciplinary problems" by the Nazis even before commencing the systematic extermination of the Jews. Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be "guided" (to their deaths) through the slave-labor and extermination camps[27] after having been "evacuated" to their doom.
Such was part of the formulation ofEndlösung der Judenfrage(the "Final Solution to the Jewish Question"), which became known to the outside world during theNuremberg Trials.[28] Frequently, over time, euphemisms themselves become taboo words, through the linguistic process ofsemantic changeknown aspejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the "euphemism cycle" in 1974,[29]also frequently referred to as the "euphemism treadmill", as worded bySteven Pinker.[30]For instance, the place of human defecation is a needy candidate for a euphemism in all eras.Toiletis an 18th-century euphemism, replacing the older euphemismhouse-of-office, which in turn replaced the even older euphemismsprivy-houseandbog-house.[31]In the 20th century, where the old euphemismslavatory(a place where one washes) andtoilet(a place where one dresses[32]) had grown from widespread usage (e.g., in the United States) to being synonymous with the crude act they sought to deflect, they were sometimes replaced withbathroom(a place where one bathes),washroom(a place where one washes), orrestroom(a place where one rests) or even by the extreme formpowder room(a place where one applies facial cosmetics).[citation needed]The formwater closet, often shortened toW.C., is a less deflective form.[citation needed]The wordshitappears to have originally been a euphemism for defecation in Pre-Germanic, as theProto-Indo-European root*sḱeyd-, from which it was derived, meant 'to cut off'.[33] Another example in American English is the replacement of "colored people" with "Negro" (euphemism by foreign language), which itself came to be replaced by either "African American" or "Black".[34]Also in the United States the term "ethnic minorities" in the 2010s has been replaced by "people of color".[34] Venereal disease, which associated shameful bacterial infection with a seemingly worthy ailment emanating fromVenus, the goddess of love, soon lost its deflective force in the post-classical education era, as "VD", which was replaced by thethree-letter initialism"STD" (sexually transmitted disease); later, "STD" was replaced by "STI" (sexually transmitted infection).[35] Intellectually-disabled people were originally defined with words such as "morons" or "imbeciles", which then became commonly used insults. The medical diagnosis was changed to "mentally retarded", which morphed into the pejorative, "retard", against those with intellectual disabilities. To avoid the negative connotations of their diagnoses, students who need accommodations because of such conditions are often labeled as "special needs" instead, although the words "special" or "SPED" (short for "special education") have long been schoolyard insults.[36][better source needed]As of August 2013, theSocial Security Administrationreplaced the term "mental retardation" with "intellectual disability".[37]Since 2012, that change in terminology has been adopted by theNational Institutes of Healthand the medical industry at large.[38]There are numerousdisability-related euphemisms that have negative connotations.
https://en.wikipedia.org/wiki/Euphemism_treadmill
In linguistics, a false friend is a word in a different language that looks or sounds similar to a word in a given language, but differs significantly in meaning. Examples of false friends include English embarrassed and Spanish embarazado ('pregnant'); English parents versus Portuguese parentes and Italian parenti (the latter two both meaning 'relatives'); English demand and French demander ('ask'); and English gift, German Gift ('poison'), and Norwegian gift (both 'married' and 'poison'). The term was introduced by a French book, Les faux amis: ou, Les trahisons du vocabulaire anglais (False friends: or, the betrayals of English vocabulary), published in 1928. As well as producing completely false friends, the use of loanwords often results in the use of a word in a restricted context, which may then develop new meanings not found in the original language. For example, angst means 'fear' in a general sense (as well as 'anxiety') in German, but when it was borrowed into English in the context of psychology, its meaning was restricted to a particular type of fear described as "a neurotic feeling of anxiety and depression".[1] Also, gymnasium meant both 'a place of education' and 'a place for exercise' in Latin, but its meaning became restricted to the former in German and to the latter in English, making the expressions into false friends in those languages as well as in Ancient Greek, where it started out as 'a place for naked exercise'.[2] False friends are bilingual homophones or bilingual homographs,[3] i.e., words in two or more languages that look similar (homographs) or sound similar (homophones), but differ significantly in meaning.[3][4] The origin of the term is as a shortened version of the expression "false friend of a translator", the English translation of a French expression (French: faux amis du traducteur) introduced by Maxime Kœssler and Jules Derocquigny in their 1928 book,[5] with a sequel, Autres Mots anglais perfides. From the etymological point of view, false friends can be created in several ways. If language A borrowed a word from language B, or both borrowed the word from a third language or inherited it from a common ancestor, and later the word shifted in meaning or acquired additional meanings in at least one of these languages, a native speaker of one language will face a false friend when learning the other. Sometimes, presumably both senses were present in the common ancestor language, but the cognate words took on different restricted senses in language A and language B.[6] Actual, which in English is usually a synonym of real, has a different meaning in other European languages, in which it means 'current' or 'up-to-date', and has a logically derived verb meaning 'to make current' or 'to update'. Actualise (or actualize) in English means 'to make a reality of'.[7] The Italian word confetti ('sugared almonds') has acquired a new meaning in English, French and Dutch; in Italian, the corresponding word is coriandoli.[8] English and Spanish, both of which have borrowed from Ancient Greek and Latin, have multiple false friends. English and Japanese also have diverse false friends, many of them being wasei-eigo and gairaigo words.[9] The word friend itself has cognates in the other Germanic languages, but the Scandinavian ones (like Swedish frände, Danish frænde) predominantly mean 'relative'.
The original Proto-Germanic word meant simply 'someone whom one cares for' and could therefore refer to both a friend and a relative, but it lost various degrees of the 'friend' sense in the Scandinavian languages, while it mostly lost the sense of 'relative' in English (the plural friends is still, rarely, used for "kinsfolk", as in the Scottish proverb Friends agree best at a distance, quoted in 1721). The Estonian and Finnish languages are related, which gives rise to false friends, such as the swapped forms for 'south' and 'south-west',[4] Estonian vaim ('spirit' or 'ghost') and Finnish vaimo ('wife'),[3] or Estonian koristaja ('a cleaner') and Finnish koristaja ('a decorator'). A high level of lexical similarity exists between German and Dutch,[10] but shifts in meaning of words with a shared etymology have in some instances resulted in 'bi-directional false friends';[11][12] German die See, however, means 'sea', just like Dutch zee, and is thus not a false friend. The meanings can diverge significantly. For example, the Proto-Malayo-Polynesian word *qayam ('domesticated animal') became specialized in descendant languages: Malay/Indonesian ayam ('chicken'), Cebuano ayam ('dog'), and Gaddang ayam ('pig').[6] In Swedish, the word rolig means 'fun': ett roligt skämt 'a funny joke', while in the closely related languages Danish and Norwegian it means 'calm' (as in "he was calm despite all the commotion around him"). However, the Swedish original meaning of 'calm' is retained in some related words such as ro 'calmness' and orolig 'worrisome, anxious', literally 'un-calm'.[13] The Danish and Norwegian word semester means 'term' (as in a school term), but the Swedish word semester means 'holiday'. The Danish word frokost means 'lunch', while the Norwegian word frokost and the Swedish word frukost both mean 'breakfast'. Pseudo-anglicisms are new words formed from English morphemes independently from an analogous English construct and with a different intended meaning.[14] Japanese is notable for its pseudo-anglicisms, known as wasei-eigo ('Japan-made English').[15][16] In bilingual situations, false friends often result in a semantic change—a real new meaning that is then commonly used in a language. For example, the Portuguese humoroso ('capricious') changed its meaning in American Portuguese to 'humorous', owing to the English surface-cognate humorous.[17] The American Italian fattoria lost its original meaning, 'farm', in favor of 'factory', owing to the phonetically similar surface-cognate English factory (cf. Standard Italian fabbrica, 'factory'). Instead of the original fattoria, the phonetic adaptation American Italian farma became the new signifier for 'farm' (Weinreich 1963: 49; see "one-to-one correlation between signifiers and referents").[full citation needed] Due to the closeness between Italian terra rossa ('red soil') and Portuguese terra roxa ('purple soil'), Italian farmers in Brazil used terra roxa to describe a type of soil similar to the red Mediterranean soil.[18] The actual Portuguese word for 'red' is vermelha. Nevertheless, terra roxa and terra vermelha are still used interchangeably in Brazilian agriculture.[19] Quebec French is also known for shifting the meanings of some words toward those of their English cognates, but such words are considered false friends in European French. For example, éventuellement is commonly used as 'eventually' in Quebec but means 'perhaps' in Europe. This phenomenon is analyzed by Ghil'ad Zuckermann as "(incestuous) phono-semantic matching".[20]
https://en.wikipedia.org/wiki/False_friend
A thought disorder (TD) is a disturbance in cognition which affects language, thought and communication.[1][2] Psychiatric and psychological glossaries in 2015 and 2017 identified thought disorders as encompassing poverty of ideas, paralogia (a reasoning disorder characterized by expression of illogical or delusional thoughts), word salad, and delusions—all disturbances of thought content and form. Two specific terms have been suggested—content thought disorder (CTD) and formal thought disorder (FTD). CTD has been defined as a thought disturbance characterized by multiple fragmented delusions, and the term thought disorder is often used to refer to an FTD:[3] a disruption of the form (or structure) of thought.[4] Also known as disorganized thinking, FTD results in disorganized speech and is recognized as a major feature of schizophrenia and other psychoses[5][6] (including mood disorders, dementia, mania, and neurological diseases).[7][5][8] Disorganized speech leads to an inference of disorganized thought.[9] Thought disorders include derailment,[10] pressured speech, poverty of speech, tangentiality, verbigeration, and thought blocking.[8] One of the first known public presentations of a thought disorder, specifically what is known today as OCD, was in 1691, when Bishop John Moore gave a speech before Queen Mary II about "religious melancholy".[11] Formal thought disorder affects the form (rather than the content) of thought.[12] Unlike hallucinations and delusions, it is an observable, objective sign of psychosis.[12] FTD is a common core symptom of a psychotic disorder, and may be seen as a marker of severity and as an indicator of prognosis.[8][13] It reflects a cluster of cognitive, linguistic, and affective disturbances that have generated research interest in the fields of cognitive neuroscience, neurolinguistics, and psychiatry.[8] Eugen Bleuler, who named schizophrenia, said that TD was its defining characteristic.[14] Disturbances of thinking and speech, such as clanging or echolalia, may also be present in Tourette syndrome;[15] other symptoms may be found in delirium.[16] A clinical difference exists between these two groups. Patients with psychoses are less likely to show awareness or concern about disordered thinking, and those with other disorders are aware and concerned about not being able to think clearly.[17] Thought content is the subject of an individual's thoughts, or the types of ideas expressed by the individual.[18] Mental health professionals define normal thought content as the absence of significant abnormalities, distortions, or harmful thoughts.[19] Normal thought content aligns with reality, is appropriate to the situation, and does not cause significant distress or impair functioning.[19] A person's cultural background must be considered when assessing thought content. Abnormalities in thought content differ across cultures.[20] Specific types of abnormal thought content can be features of different psychiatric illnesses.[21] Examples of disordered thought content include delusions and obsessions. Thought process is a person's form, flow, and coherence of thinking.[23] This is how they use language and put ideas together.
A normal thought process is logical, linear, meaningful, and goal-directed.[18] A logical, linear thought process is one that demonstrates rational connections between thoughts in a sequential way that allows others to understand.[18][23] Thought process is not what a person thinks, but rather how a person expresses their thoughts.[25] Formal thought disorder (FTD), also known as disorganized speech or disorganized thinking, is a disorder of a person's thought process in which they are unable to express their thoughts in a logical and linear fashion.[26] To be considered FTD, disorganized thinking must be severe enough that it impairs effective communication.[27] Disorganized speech is a core symptom of psychosis, and therefore can be a feature of any condition that has a potential to cause psychosis, including schizophrenia, mania, major depressive disorder, delirium, postpartum psychosis, major neurocognitive disorder, and substance induced psychosis.[18] FTD reflects a cluster of cognitive, linguistic, and affective disturbances, and has generated research interest from the fields of cognitive neuroscience, neurolinguistics, and psychiatry.[8] It can be subdivided into clusters of positive and negative symptoms and objective (rather than subjective) symptoms.[13] On the scale of positive and negative symptoms, FTD symptoms have been grouped into positive formal thought disorder (posFTD) and negative formal thought disorder (negFTD).[13][12] Positive subtypes were pressure of speech, tangentiality, derailment, incoherence, and illogicality;[13] negative subtypes were poverty of speech and poverty of content.[12][13] The two groups were posited to be at either end of a spectrum of normal speech, but later studies showed them to be poorly correlated.[12] A comprehensive measure of FTD is the Thought and Language Disorder (TALD) Scale.[28] The Kiddie Formal Thought Disorder Rating Scale (K-FTDS) can be used to assess the presence of formal thought disorder in children.[29] Although it is very extensive and time-consuming, its results are detailed and reliable.[30] Nancy Andreasen preferred to identify TDs as thought-language-communication disorders (TLC disorders).[31][32] Up to seven domains of FTD have been described on the Thought, Language, Communication (TLC) Scale, with most of the variance accounted for by two or three domains.[12] Some TLC disorders are more suggestive of severe disorder, and are listed with the first 11 items.[32] The DSM-5 categorizes FTD as "a psychotic symptom, manifested as bizarre speech and communication." FTD may include incoherence, peculiar words, disconnected ideas, or a lack of unprompted content expected from normal speech.[33] Clinical psychologists typically assess FTD by initiating an exploratory conversation with patients and observing the patient's verbal responses.[34] FTD is often used to establish a diagnosis of schizophrenia; in cross-sectional studies, 27 to 80 percent of patients with schizophrenia present with FTD. A hallmark feature of schizophrenia, it is also widespread amongst other psychiatric disorders; up to 60 percent of those with schizoaffective disorder and 53 percent of those with clinical depression demonstrate FTD, suggesting that it is not exclusive to schizophrenia. About six percent of healthy subjects exhibit a mild form of FTD.[35] The DSM-5-TR mentions that less severe FTD may happen during the initial (prodromal) and residual periods of schizophrenia.[27] The characteristics of FTD vary amongst disorders.
A number of studies indicate that FTD inmaniais marked by irrelevant intrusions and pronounced combinatory thinking, usually with a playfulness and flippancy absent from patients with schizophrenia.[36][37][38]The FTD present in patients with schizophrenia was characterized by disorganization,neologism, and fluid thinking, and confusion with word-finding difficulty.[38] There is limited data on thelongitudinalcourse of FTD.[39]The most comprehensive longitudinal study of FTD by 2023 found a distinction in the longitudinal course of thought-disorder symptoms between schizophrenia and other psychotic disorders. The study also found an association between pre-index assessments[clarification needed]of social, work and educational functioning and the longitudinal course of FTD.[40] Several theories have been developed to explain the causes of formal thought disorder. It has been proposed that FTD relates toneurocognitionviasemantic memory.[41]Semantic networkimpairment in people with schizophrenia—measured by the difference between fluency (e.g. the number of animals' names produced in 60 seconds) and phonological fluency (e.g. the number of words beginning with "F" produced in 60 seconds)—predicts the severity of formal thought disorder, suggesting that verbal information (throughsemantic priming) is unavailable.[41]Otherhypothesesincludeworking memorydeficit (being confused about what has already been said in a conversation) and attentional focus.[41] FTD in schizophrenia has been found to be associated with structural and functional abnormalities in the language network, where structural studies have found bilateralgrey matterdeficits; deficits in the bilateralinferior frontal gyrus, bilateralinferior parietal lobuleand bilateralsuperior temporal gyrusare FTDcorrelates.[35]Other studies did not find an association between FTD and structural aberrations of the language network, however, and regions not included in the language network have been associated with FTD.[35]Future research is needed to clarify whether there is an association with FTD in schizophrenia and neural abnormalities in the language network.[35] Transmitter systemswhich might cause FTD have also been investigated. Studies have found thatglutamatedysfunction, due to ararefactionof glutamatergicsynapsesin the superior temporal gyrus in patients with schizophrenia, is a major cause of positive FTD.[35] The heritability of FTD has been demonstrated in a number of family and twin studies.Imaging geneticsstudies, using a semantic verbal-fluency task performed by the participants duringfunctional MRIscanning, revealed thatalleleslinked to glutamatergic transmission contribute to functional aberrations in typical language-related brain areas.[35]FTD is not solelygenetically determined, however; environmental influences, such as allusive thinking in parents during childhood, and environmental risk factors for schizophrenia (including childhood abuse, migration, social isolation, andcannabisuse) also contribute to the pathophysiology of FTD.[42] The origins of FTD have been theorised from asocial-learningperspective. Singer and Wynne said that familial communication patterns play a key role in shaping the development of FTD; dysfunctional social interactions undermine a child's development of cohesive, stable mental representations of the world, increasing their risk of developing FTD.[43] Antipsychoticmedication is often used to treat FTD. 
Although the vast majority of studies of the efficacy of antipsychotic treatment do not report effects on syndromes or symptoms, six older studies report the effects of antipsychotic treatment on FTD.[44][45][46][47][48][49] These studies and clinical experience indicate that antipsychotics are often an effective treatment for patients with positive or negative FTD, but not all patients respond to them. Cognitive behavioural therapy (CBT) is another treatment for FTD, but its effectiveness has not been well-studied.[35] Large randomised controlled trials evaluating the effectiveness of CBT for treating psychosis often exclude individuals with severe FTD because it reduces the therapeutic alliance required by the therapy.[50] However, provisional evidence suggests that FTD may not preclude the effectiveness of CBT.[50] Kircher and colleagues have suggested specific methods that should be used in CBT for patients with FTD.[35] Language abnormalities exist in the general population, and do not necessarily indicate a condition.[51] They can occur in schizophrenia and other disorders (such as mania or depression), or in anyone who may be tired or stressed.[1][52] To distinguish thought disorder, patterns of speech, severity of symptoms, their frequency, and any resulting functional impairment can be considered.[32] Symptoms of FTD include derailment,[10] pressured speech, poverty of speech, tangentiality, and thought blocking.[8] The most common forms of FTD observed are tangentiality and circumstantiality.[53] FTD is a hallmark feature of schizophrenia, but is also associated with other conditions that can cause psychosis (including mood disorders, dementia, mania, and neurological diseases).[4][7][52] Impaired attention, poor memory, and difficulty formulating abstract concepts may also reflect TD, and can be observed and assessed with mental-status tests such as serial sevens or memory tests.[54] Thirty symptoms (or features) of TD have been described.[55][12] Psychiatric and psychological glossaries in 2015 and 2017 defined thought disorder as disturbed thinking or cognition which affects communication, language, or thought content, including poverty of ideas, neologisms, paralogia, word salad, and delusions[7][87] (disturbances of thought content and form), and suggested the more-specific terms content thought disorder (CTD) and formal thought disorder (FTD).[2] CTD was defined as a TD characterized by multiple fragmented delusions,[88][87] and FTD was defined as a disturbance in the form or structure of thinking.[89][90] The 2013 DSM-5 only used the term FTD, primarily as a synonym for disorganized thinking and speech.[91] This contrasts with the 1992 ICD-10 (which only used the word "thought disorder", always accompanied with "delusion" and "hallucination")[92] and a 2002 medical dictionary which generally defined thought disorders similarly to the psychiatric glossaries[93] and used the word in other entries as the ICD-10 did.[94] A 2017 psychiatric text described thought disorder as a "disorganization syndrome" in the context of schizophrenia: "Thought disorder" here refers to disorganization of the form of thought and not content. An older use of the term "thought disorder" included the phenomena of delusions and sometimes hallucinations, but this is confusing and ignores the clear differences in the relationships between symptoms that have become apparent over the past 30 years.
Delusions and hallucinations should be identified as psychotic symptoms, and thought disorder should be taken to mean formal thought disorders or a disorder of verbal cognition. The text said that some clinicians use the term "formal thought disorder" broadly, referring to abnormalities in thought form with psychotic cognitive signs or symptoms,[95] and studies of cognition and subsyndromes in schizophrenia may refer to FTD as conceptual disorganization or disorganization factor.[82] Some disagree: Unfortunately, "thought disorder" is often invoked rather loosely to refer to both FTD and delusional content. For the sake of clarity, the unqualified use of the phrase "thought disorder" should be discarded from psychiatric communication. Even the designation "formal thought disorder" covers too wide a territory. It should always be made clear whether one is referring to derailment or loose associations, flight of ideas, or circumstantiality. It was believed that TD occurred only in schizophrenia, but later findings indicate that it may occur in other psychiatric conditions (including mania) and in people without mental illness.[97] Not all people with schizophrenia have a TD; the condition is not specific to the disease.[98] When defining thought-disorder subtypes and classifying them as positive or negative symptoms, Nancy Andreasen found[98] that different subtypes of TD occur at different frequencies in those with mania, depression, and schizophrenia. People with mania have pressured speech as the most prominent symptom, and have rates of derailment, tangentiality, and incoherence as prominent as in those with schizophrenia. They are likelier to have pressured speech, distractibility, and circumstantiality.[98][99] People with schizophrenia have more negative TD, including poverty of speech and poverty of content of speech, but also have relatively high rates of some positive TD.[98] Derailment, loss of goal, poverty of content of speech, tangentiality and illogicality are particularly characteristic of schizophrenia.[100] People with depression have relatively fewer TDs; the most prominent are poverty of speech, poverty of content of speech, and circumstantiality. Andreasen noted the diagnostic usefulness of dividing the symptoms into subtypes; negative TDs without full affective symptoms suggest schizophrenia.[98][99] She also cited the prognostic value of negative-positive-symptom divisions. In manic patients, most TDs resolve six months after evaluation; this suggests that TDs in mania, although as severe as in schizophrenia, tend to improve.[98] In people with schizophrenia, however, negative TDs remain after six months and sometimes worsen; positive TDs somewhat improve. A negative TD is a good predictor of some outcomes; patients with prominent negative TDs are worse in social functioning six months later.[98] More prominent negative symptoms generally suggest a worse outcome; however, some people may do well, respond to medication, and have normal brain function. Positive symptoms vary similarly.[101] A prominent TD at illness onset suggests a worse prognosis.[82] TD which is unresponsive to treatment predicts a worse illness course.[82] In schizophrenia, TD severity tends to be more stable than hallucinations and delusions.
Prominent TDs are more unlikely to diminish in middle age, compared with positive symptoms.[82]Less-severe TD may occur during theprodromaland residual periods of schizophrenia.[102]Treatment for thought disorder may include psychotherapy, such as cognitive behavior therapy (CBT), and psychotropic medications.[103] TheDSM-5includes delusions, hallucinations, disorganized thought process (formal thought disorder), and disorganized or abnormal motor behavior (includingcatatonia) as key symptoms of psychosis.[6]Schizophrenia-spectrum disorders such as schizoaffective disorder and schizophreniform disorder typically consist of prominent hallucinations, delusions and FTD; the latter presents as severely disorganized, bizarre, and catatonic behavior.[4][6]Psychotic disorders due to medical conditions and substance use typically consist of delusions and hallucinations.[6][104]The rarer delusional disorder and shared psychotic disorder typically present with persistent delusions.[104]FTDs are commonly found in schizophrenia and mood disorders, with poverty of speech content more common in schizophrenia.[105] Psychoses such as schizophrenia andbipolar maniaare distinguishable frommalingering, when an individual fakes illness for other gains, by clinical presentations; malingerers feign thought content with no irregularities in form such as derailment or looseness of association.[106]Negative symptoms, including alogia, may be absent, and chronic thought disorder is typically distressing.[106] Autism spectrum disorders (ASD) whose diagnosis requires the onset of symptoms before three years of age can be distinguished from early-onset schizophrenia; schizophrenia under age 10 is extremely rare, and ASD patients do not display FTDs.[107]However, it has been suggested that individuals with ASD display language disturbances like those found in schizophrenia; a 2008 study found that children and adolescents with ASD showed significantly more illogical thinking and loose associations than control subjects.[108]The illogical thinking was related to cognitive functioning and executive control; the loose associations were related to communication symptoms and parent reports of stress and anxiety.[108] Rorschach testshave been useful for assessing TD in disturbed patients.[109][1]A series of inkblots are shown, and patient responses are analyzed to determine disturbances of thought.[1]The nature of the assessment offers insight into the cognitive processes of another, and how they respond to equivocal stimuli.[110]Hermann Rorschachdeveloped this test to diagnose schizophrenia after realizing that people with schizophrenia gave drastically different interpretations ofKlecksographieinkblots from others whose thought processes were considered normal,[111]and it has become one of the most widely used assessment tools for diagnosing TDs.[1] The Thought Disorder Index (TDI), also known as the Delta Index, was developed to help further determine the severity of TD in verbal responses.[1]TDI scores are primarily derived from verbally-expressed interpretations of the Rorschach test, but TDI can also be used with other verbal samples (including theWechsler Adult Intelligence Scale).[1]TDI has a twenty-three-category scoring index; each category scores the level of severity on a scale from 0 to 1, with .25 being mild and 1.00 being most severe (0.25, 0.50, 0.75, 1.00).[1] TD has been criticized as being based on circular or incoherent definitions.[112][need quotation to verify]Symptoms of TD are inferred from disordered 
speech, based on the assumption that disordered speech arises from disordered thought. Although TD is typically associated with psychosis, similar phenomena can appear in different disorders, which can lead to misdiagnosis.[113] A criticism related to the separation of symptoms of schizophrenia into negative or positive symptoms, including TD, is that it oversimplifies the complexity of TD and its relationship to other positive symptoms.[114] Factor analysis has found that negative symptoms tend to correlate with one another, but positive symptoms tend to separate into two groups.[114] The resulting three clusters became known as negative symptoms, psychotic symptoms, and disorganization symptoms.[101] Alogia, a TD traditionally classified as a negative symptom, can be separated into two types: poverty of speech content (a disorganization symptom) and poverty of speech, response latency, and thought blocking (negative symptoms).[115] Positive-negative-symptom diametrics, however, may enable a more accurate characterization of schizophrenia.[116]
https://en.wikipedia.org/wiki/Formal_thought_disorder
Ageneric trademark, also known as agenericized trademarkorproprietary eponym, is atrademarkorbrand namethat, because of its popularity or significance, has become thegeneric termfor, or synonymous with, a general class ofproductsorservices, usually against the intentions of the trademark's owner. A trademark is prone to genericization, or "genericide",[1][2]when a brand name acquires substantialmarket dominanceormind share, becoming so widely used for similar products or services that it is no longer associated with the trademark owner, e.g.,linoleum,bubble wrap,thermos, andaspirin.[3]A trademark thus popularized is at risk of being challenged or revoked, unless the trademark owner works sufficiently to correct and prevent such broad use.[4][5][6] Trademark owners can inadvertently contribute to genericization by failing to provide an alternative generic name for their product or service or using the trademark in similar fashion togeneric terms.[7]In one example, theOtis Elevator Company's trademark of the word "escalator" was cancelled following a petition fromToledo-basedHaughton Elevator Company. In rejecting an appeal from Otis, an examiner from theUnited States Patent and Trademark Officecited the company's own use of the term "escalator" alongside the generic term "elevator" in multiple advertisements without any trademark significance.[8]Therefore, trademark owners go to extensive lengths to avoid genericization and trademark erosion. Genericization may be specific to certain professions and other subpopulations. For example,Luer-Lok (Luer lock),[9]Phoroptor (phoropter),[10]andPort-a-Cath (portacath)[11]have genericizedmind shareamongphysiciansdue to a lack of alternative names in common use: as a result, consumers may not realize that the term is a brand name rather than amedical eponymor generic term. Pharmaceuticaltrade namesare somewhat protected from genericization due to the modern practice of assigningnonproprietary namesbased on a drug's chemical structure.[12]This circumvents the problem of a trademarked name entering common use by providing a generic name as soon as a novel pharmaceutical enters the market. For example,aripiprazole, the nonproprietary name for Abilify, was well-documented since its invention.[13][14][15]Warfarin, originally introduced as arat poison, was approved for human use under the brand name Coumadin.[16] Examples of genericization before the modern system ofgeneric drugsincludeaspirin, introduced to the market in 1897, andheroin, introduced in 1898. Both were originally trademarks ofBayer AG. However, U.S. court rulings in 1918 and 1921 found the terms to be genericized, stating the company's failure to reinforce the brand's connection with their product as the reason.[17] A different sense of the wordgenericizedin the pharmaceutical industry refers to products whose patent protection has expired. For example,Lipitorwas genericized in the U.S. when the first competing generic version was approved by the FDA in November 2011. In this same context, the termgenericizationrefers to the process of a brand drug losing market exclusivity to generics. Trademark erosion, orgenericization, is a special case ofantonomasiarelated totrademarks. 
It happens when a trademark becomes so common that it starts being used as a common name,[18][3][19]most often occurring when the original company has failed to prevent such use.[18][19]Once it has become an appellative, the word cannot be registered any more; this is why companies try hard not to let their trademark become too common, a phenomenon that could otherwise be considered a successful move since it would mean that the company gained exceptional recognition. An example of trademark erosion is the verb "to hoover" (used with the meaning of "vacuum cleaning"), which originated from theHoovercompany brand name. Nintendois an example of a brand that successfully fought trademark erosion, having managed to replace excessive use of its name with the term "game console", at that time aneologism.[18][20] Whether or not a mark is popularly identified as genericized, the owner of the mark may still be able to enforce theproprietaryrights that attach to the use or registration of the mark, as long as the mark continues to exclusively identify the owner as the commercial origin of the applicable products or services. If the mark does not perform this essential function and it is no longer possible to legally enforce rights in relation to the mark, the mark may have become generic. In many legal systems (e.g., in theUnited Statesbut not inGermany) a generic mark forms part of thepublic domainand can be commercially exploited by anyone. Nevertheless, there exists the possibility of a trademark becoming a revocable generic term in German (and European) trademark law. The process by which trademark rights are diminished or lost as a result of common use in the marketplace is known asgenericization. This process typically occurs over a period of time in which a mark is not used as a trademark (i.e., where it is not used to exclusively identify the products or services of a particular business), where a mark falls into disuse entirely, or where the trademark owner does not enforce its rights throughactionsforpassing offortrademark infringement. One risk factor that may lead to genericization is the use of a trademark as averb,pluralorpossessive, unless the mark itself is possessive or plural (e.g., "Friendly's" restaurants).[21] However, in highly inflected languages, a tradename may have to carry case endings in usage. An example isFinnish, where "Microsoftin" is thegenitive caseand "Facebookista" is theelative case.[22] Generic use of a trademark presents an inherent risk to the effective enforcement of trademark rights and may ultimately lead to genericization. Trademark owners may take various steps to reduce the risk, including educating businesses and consumers on appropriate trademark use, avoiding use of their marks in a generic manner, and systematically and effectively enforcing their trademark rights. If a trademark is associated with a newinvention, the trademark owner may also consider developing a generic term for the product to be used in descriptive contexts, to avoid inappropriate use of the "house" mark. Such a term is called ageneric descriptorand is frequently used immediately after the trademark to provide a description of the product or service. For example, "Kleenextissues" ("facial tissues" being the generic descriptor) or "Velcro-brand fasteners" for Velcro brand name hook-and-loop fasteners. 
Another common practice among trademark owners is to follow their trademark with the wordbrandto help define the word as a trademark.Johnson & Johnsonchanged the lyrics of theirBand-Aidtelevision commercial jingle from, "I am stuck on Band-Aids, 'cause Band-Aid's stuck on me" to "I am stuck on Band-Aidbrand, 'cause Band-Aid's stuck on me."[23]Googlehas gone to lengths to prevent this process, discouraging publications from using the term 'googling' in reference to Web searches. In 2006, both theOxford English Dictionary[24]and theMerriam Webster Collegiate Dictionary[25]struck a balance between acknowledging widespread use of the verb coinage and preserving the particular search engine's association with the coinage, defininggoogle(all lower case, with -leending) as a verb meaning "use the Google search engine to obtain information on the Internet". TheSwedish Language Councilreceived a complaint from Google for its inclusion ofogooglebar(meaning 'ungoogleable') on its list of new Swedish words from 2012. The Language Council chose to remove the word to avoid a legal process, but in return wrote that "[w]e decide together which words should be and how they are defined, used and spelled".[26] Where a trademark is used generically, a trademark owner may need to take aggressive measures to retainexclusive rightsto the trademark.Xerox Corporationattempted to prevent the genericization of its core trademark through an extensive public relations campaign advising consumers to "photocopy" instead of "xerox" documents.[27] TheLego Companyhas worked to prevent the genericization of its plasticbuilding blocksfollowing the expiration of Lego's last major patents in 1978.[28]Lego manuals and catalogs throughout the 1980s included a message imploring customers to preserve the brand name by "referring to [their] bricks as 'LEGO Bricks or Toys', and not just 'LEGOS'."[29][30]In the early 2000s, the company acquired the Legos.comURLin order to redirect customers to the Lego.com website and deliver a similar message.[31]Despite these efforts, many children and adults in the United States continue to use "Legos" as the plural form of "Lego," but competing and interchangeable products, such as those manufactured byMega Brands, are often referred to simply as building blocks or construction blocks.[32]The company has successfully put legal pressure on theSwedish Academyand theInstitute for Language and Folkloreto remove the nounlegofrom their dictionaries.[33] Adobe Inc.has experienced mixed success with preventing the genericization of their trademarked software,Adobe Photoshop. This is shown via recurring use of "photoshop" as a noun, verb, or general adjective for allphoto manipulationthroughout the Internet and mass media.[34] Since 2003, theEuropean Unionhas actively sought to restrict the use ofgeographical indicationsby third parties outside the EU by enforcing laws regarding "protected designation of origin".[35]Although a geographical indication for specialty food or drink may be generic, it is not a trademark because it does not serve to identify exclusively a specific commercial enterprise and therefore cannot constitute a genericized trademark. The extension of protection for geographical indications is somewhat controversial. A geographical indication may have been registered as a trademark elsewhere; for example, if "Parma Ham" was part of a trademark registered inCanadaby a Canadian manufacturer, then ham manufacturers inParma, Italy, might be unable to use this name in Canada. 
Wines (such as Bordeaux, Port and Champagne), cheeses (such as Roquefort, Parmesan, Gouda, and Feta), Pisco liquor, and Scotch whisky are examples of geographical indications. Compare Russian use of "Шампанское" (= Shampanskoye) for champagne-type wine made in Russia. In the 1990s, the Parma consortium successfully sued the Asda supermarket chain to prevent it using the description "Parma ham" on prosciutto produced in Parma but sliced outside the Parma region.[36] The European Court ruled that pre-packaged ham must be produced, sliced, and packaged in Parma in order to be labeled for sale as "Parma ham".[37] A trademark is said to fall somewhere along a scale from being "distinctive" to "generic" (used primarily as a common name for the product or service rather than an indication of source). Among distinctive trademarks, the scale runs from strong to weak.[38]
https://en.wikipedia.org/wiki/Genericized_trademark
Language changeis the process of alteration in the features of a singlelanguage, or of languages in general, over time. It is studied in several subfields oflinguistics:historical linguistics,sociolinguistics, andevolutionary linguistics. Traditional theories of historical linguistics identify three main types of change: systematic change in the pronunciation ofphonemes, orsound change;borrowing, in which features of a language or dialect are introduced or altered as a result of influence from another language or dialect; andanalogical change, in which the shape or grammatical behavior of a word is altered to more closely resemble that of another word. Research on language change generally assumes theuniformitarian principle—the presumption that language changes in the past took place according to the same general principles as language changes visible in the present.[1] Language change usually does not occur suddenly, but rather takes place via an extended period ofvariation, during which new and old linguistic features coexist. All living languages are continually undergoing change. Some commentators use derogatory labels such as "corruption" to suggest that language change constitutes a degradation in the quality of a language, especially when the change originates fromhuman erroror is aprescriptivelydiscouraged usage.[2]Modern linguistics rejects this concept, since from a scientific point of view such innovations cannot be judged in terms of good or bad.[3][4]John Lyonsnotes that "any standard of evaluation applied to language-change must be based upon a recognition of the various functions a language 'is called upon' to fulfil in the society which uses it".[5] Over enough time, changes in a language can accumulate to such an extent that it is no longer recognizable as the same language. For instance,modern Englishis the result of centuries of language change applying toOld English, even though modern English is extremely divergent from Old English in grammar, vocabulary, and pronunciation. The two may be thought of as distinct languages, but Modern English is a "descendant" of its "ancestor" Old English. When multiple languages are all descended from the same ancestor language, as theRomance languagesare fromVulgar Latin, they are said to form alanguage familyand be "genetically" related. According toGuy Deutscher, the tricky question is "Why are changes not brought up short and stopped in their tracks? At first sight, there seem to be all the reasons in the world why society should never let the changes through." He sees the reason for tolerating change in the fact that we already are used to "synchronic variation", to the extent that we are hardly aware of it. For example, when we hear the word "wicked", we automatically interpret it as either "evil" or "wonderful", depending on whether it is uttered by an elderly lady or a teenager. Deutscher speculates that "[i]n a hundred years' time, when the original meaning of 'wicked' has all but been forgotten, people may wonder how it was ever possible for a word meaning 'evil' to change its sense to 'wonderful' so quickly."[6] Sound change—i.e., change in the pronunciation ofphonemes—can lead tophonological change(i.e., change in the relationships between phonemes within the structure of a language). For instance, if the pronunciation of one phoneme changes to become identical to that of another phoneme, the two original phonemes can merge into a single phoneme, reducing the total number of phonemes the language contains. 
Determining the exact course of sound change in historical languages can pose difficulties, since the technology of sound recording dates only from the 19th century, and thus sound changes before that time must be inferred from written texts. The orthographical practices of historical writers provide the main (indirect) evidence of how language sounds have changed over the centuries. Poetic devices such as rhyme and rhythm can also provide clues to earlier phonetic and phonological patterns. A principal axiom of historical linguistics, established by the linguists of the Neogrammarian school of thought in the 19th century, is that sound change is "regular"—i.e., a given sound change simultaneously affects all words in which the relevant set of phonemes appears, rather than each word's pronunciation changing independently. The degree to which the Neogrammarian hypothesis is an accurate description of how sound change takes place, rather than a useful approximation, is controversial; but it has proven extremely valuable to historical linguistics as a heuristic, and enabled the development of methodologies of comparative reconstruction and internal reconstruction that allow linguists to extrapolate backwards from known languages to the properties of earlier, unattested languages and to hypothesize sound changes that may have taken place in them. The study of lexical changes forms the diachronic portion of the science of onomasiology. The ongoing influx of new words into the English language (for example) helps make it a rich field for investigation into language change, despite the difficulty of defining precisely and accurately the vocabulary available to speakers of English. Throughout its history, English has not only borrowed words from other languages but has re-combined and recycled them to create new meanings, whilst losing some old words. Dictionary-writers try to keep track of the changes in languages by recording (and, ideally, dating) the appearance in a language of new words, or of new usages for existing words. By the same token, they may eventually tag some words as "archaic" or "obsolete". Standardisation of spelling originated centuries ago.[vague][citation needed] Differences in spelling often catch the eye of a reader of a text from a previous century. The pre-print era had fewer literate people: languages lacked fixed systems of orthography, and the manuscripts that survived often show words spelled according to regional pronunciation and to personal preference. Semantic changes are shifts in the meanings of existing words. Basic types of semantic change include pejoration, amelioration, narrowing, and broadening. After a word enters a language, its meaning can change through a shift in the valence of its connotations. As an example, when "villain" entered English it meant 'peasant' or 'farmhand', but acquired the connotation 'low-born' or 'scoundrel', and today only the negative use survives. Thus 'villain' has undergone pejoration. Conversely, the word "wicked" is undergoing amelioration in colloquial contexts, shifting from its original sense of 'evil' to the much more positive one, as of 2009[update], of 'brilliant'. Words' meanings may also change in terms of the breadth of their semantic domain. Narrowing a word limits its alternative meanings, whereas broadening associates new meanings with it. For example, "hound" (Old English hund) once referred to any dog, whereas in modern English it denotes only a particular type of dog.
On the other hand, the word "dog" itself has been broadened from its Old English root 'dogge', the name of a particular breed, to become the general term for all domestic canines.[11] Syntactic change is the evolution of the syntactic structure of a natural language. Over time, syntactic change is the greatest modifier of a particular language.[citation needed] Massive changes – attributable either to creolization or to relexification – may occur both in syntax and in vocabulary. Syntactic change can also be purely language-internal, whether independent within the syntactic component or the eventual result of phonological or morphological change.[citation needed] The sociolinguist Jennifer Coates, following William Labov, describes linguistic change as occurring in the context of linguistic heterogeneity. She explains that "[l]inguistic change can be said to have taken place when a new linguistic form, used by some sub-group within a speech community, is adopted by other members of that community and accepted as the norm."[12] The sociolinguist William Labov recorded the change in pronunciation in a relatively short period in the American resort of Martha's Vineyard and showed how this resulted from social tensions and processes.[13] Even in the relatively short time that broadcast media have recorded their work, one can observe the difference between the pronunciation of the newsreaders of the 1940s and the 1950s and the pronunciation of today. The greater acceptance and fashionability of regional accents in media may[original research?] also reflect a more democratic, less formal society — compare the widespread adoption of language policies. Can and Patton (2010) provide a quantitative analysis of twentieth-century Turkish literature using forty novels of forty authors. Using weighted least squares regression and a sliding window approach, they show that, as time passes, words, in terms of both tokens (in text) and types (in vocabulary), have become longer. They indicate that the increase in word lengths over time can be attributed to the government-initiated language "reform" of the 20th century. This reform aimed at replacing foreign words used in Turkish, especially Arabic- and Persian-based words (since they were in the majority when the reform was initiated in the early 1930s), with newly coined pure Turkish neologisms created by adding suffixes to Turkish word stems (Lewis, 1999). Can and Patton (2010) also observed a change in a specific word choice: in newer works ama is preferred over fakat (both borrowed from Arabic and meaning "but"), and the inverse correlation in their usage is statistically significant. Based on this, they speculate that the increase in word length can influence authors' common word-choice preferences. Kadochnikov (2016) analyzes the political and economic logic behind the development of the Russian language. Ever since the emergence of the unified Russian state in the 15th and 16th centuries, the government has played a key role in standardizing the Russian language and developing its prescriptive norms, with the fundamental goal of ensuring that it can be efficiently used as a practical tool in all sorts of legal, judicial, administrative and economic affairs throughout the country.[14] Altintas, Can, and Patton (2007) introduce a systematic approach to language change quantification by studying unconsciously used language features in time-separated parallel translations.
For this purpose, they use objective style markers such as vocabulary richness and lengths of words, word stems and suffixes, and employ statistical methods to measure their changes over time. Languages perceived to be "higher status" stabilise or spread at the expense of other languages perceived by their own speakers to be "lower-status". Historical examples are the early Welsh and Lutheran Bible translations, leading to the liturgical languages Welsh and High German thriving today, unlike other Celtic or German variants.[15] For prehistory, Forster and Renfrew (2011)[16]argue that in some cases there is a correlation of language change with intrusive male Y chromosomes but not with female mtDNA. They then speculate that technological innovation (transition from hunting-gathering to agriculture, or from stone to metal tools) or military prowess (as in the abduction of British women by Vikings toIceland) causes immigration of at least some males, and perceived status change. Then, in mixed-language marriages with these males, prehistoric women would often have chosen to transmit the "higher-status" spouse's language to their children, yielding the language/Y-chromosome correlation seen today.
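The kind of quantitative, corpus-based approach described above in connection with Can and Patton (2010) and Altintas, Can, and Patton (2007) can be illustrated with a rough sketch. The following Python fragment is not the authors' code or data; it is a minimal, hypothetical example assuming a toy corpus of dated text samples, a sliding window over consecutive samples, and a weighted least-squares trend fit on mean word length.

```python
# Illustrative sketch only: not the code or data of Can and Patton (2010) or
# Altintas, Can, and Patton (2007). It shows, under simplified assumptions, how a
# style marker (here, mean word length) can be tracked over time with a sliding
# window and summarized with a weighted least-squares trend line.
import numpy as np

# Hypothetical corpus: (publication year, text sample) pairs.
corpus = [
    (1930, "short plain words fill the early sample text"),
    (1950, "slightly longer wording appears in this later sample"),
    (1970, "derived and suffixed vocabulary becomes increasingly common here"),
    (1990, "extended morphologically complex constructions characterize newer samples"),
    (2010, "contemporary passages demonstrate comparatively lengthier vocabulary overall"),
]

def mean_word_length(text: str) -> float:
    """Average number of characters per word in a text sample."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

years = np.array([year for year, _ in corpus], dtype=float)
marker = np.array([mean_word_length(text) for _, text in corpus])

# Sliding window: average the marker over consecutive samples to smooth noise.
window = 2
n_windows = len(corpus) - window + 1
win_years = np.array([years[i:i + window].mean() for i in range(n_windows)])
win_marker = np.array([marker[i:i + window].mean() for i in range(n_windows)])

# Weight each window by the number of words it covers (a stand-in for sample size).
weights = np.array([sum(len(corpus[j][1].split()) for j in range(i, i + window))
                    for i in range(n_windows)], dtype=float)

# Weighted least-squares fit of mean word length against time.
slope, intercept = np.polyfit(win_years, win_marker, deg=1, w=weights)
print(f"Estimated change in mean word length per year: {slope:+.4f}")
```

In an actual study, the choice of style markers, window size, and weights would depend on the corpus; the sketch only conveys the general shape of such an analysis.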
https://en.wikipedia.org/wiki/Language_change
Lexicologyis the branch oflinguisticsthat analyzes thelexiconof a specificlanguage. A word is the smallest meaningful unit of alanguagethat can stand on its own, and is made up of small components calledmorphemesand even smaller elements known asphonemes, or distinguishing sounds. Lexicology examines every feature of a word – includingformation,spelling,origin,usage, anddefinition.[1] Lexicology also considers the relationships that exist between words. In linguistics, thelexiconof a language is composed oflexemes, which are abstract units of meaning that correspond to a set of related forms of a word. Lexicology looks at how words can be broken down as well as identifies common patterns they follow.[2] Lexicology is associated withlexicography, which is the practice of compilingdictionaries.[3] The termlexicologyderives from theGreekword λεξικόνlexicon(neuter of λεξικόςlexikos, "of or for words",[4]from λέξιςlexis, "speech" or "word"[5]) and -λογία-logia, "the study of" (asuffixderived from λόγοςlogos, amongst others meaning "learning, reasoning, explanation, subject-matter").[6]Etymology as a science is actually a focus of lexicology. Since lexicology studies the meaning of words and their semantic relations, it often explores the history and development of a word. Etymologists analyze related languages using thecomparative method, which is a set of techniques that allow linguists to recover the ancestral phonological, morphological, syntactic, etc., components of modern languages by comparing theircognatematerial.[7]This means manyword rootsfrom different branches of the Indo-Europeanlanguage familycan be traced back to single words from theProto-Indo-European language. TheEnglish language, for instance, contains moreborrowed words(or loan words) in itsvocabularythan native words.[8]Examples includeparkourfromFrench,karaokefromJapanese,coconutfromPortuguese,mangofromHindi, etc. A lot ofmusic terminology, likepiano,solo, andopera, is borrowed fromItalian. These words can be further classified according to the linguistic element that is borrowed: phonemes, morphemes, and semantics.[7] General lexicologyis the broad study of words regardless of a language's specific properties. It is concerned with linguistic features that are common among all languages, such as phonemes and morphemes.Special lexicology, on the other hand, looks at what a particular language contributes to its vocabulary, such asgrammars.[2]Altogether lexicological studies can be approached two ways: These complementary perspectives were proposed bySwisslinguistFerdinand de Saussure.[10]Lexicology can have both comparative and contrastive methodologies.Comparative lexicologysearches for similar features that are shared among two or more languages.Contrastive lexicologyidentifies the linguistic characteristics which distinguish between related and unrelated languages.[9] Thesubfieldof semantics that pertains especially to lexicological work is calledlexical semantics. In brief, lexical semantics contemplates the significance of words and their meanings through several lenses, includingsynonymy,antonymy,hyponymy, andpolysemy, among others. 
Semantic analysis of lexical material may involve both thecontextualizationof the word(s) andsyntactic ambiguity.Semasiologyandonomasiologyare relevant linguistic disciplines associated with lexical semantics.[9] A word can have two kinds of meaning: grammatical and lexical.Grammatical meaningrefers to a word's function in a language, such astenseorplurality, which can be deduced fromaffixes.Lexical meaningis not limited to a single form of a word, but rather what the word denotes as a base word. For example, theverbto walkcan becomewalks,walked, andwalking –each word has a different grammatical meaning, but the same lexical meaning ("to move one's feet at a regular pace").[11] Another focus of lexicology isphraseology, which studies multi-word expressions, oridioms, like 'raining cats and dogs.' The meaning of the phrase as a whole has a different meaning than each word does on its own and is often unpredictable when considering its components individually. Phraseology examines how and why such meanings exist, and analyzes the laws that govern these word combinations.[12] Idioms and other phraseological units can be classified according to content and/ or meaning. They are difficult to translate word-for-word from one language to another.[13] Lexicographyis the study oflexiconsand the art of compiling dictionaries.[14]It is divided into two separateacademic disciplines:
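The contrast between grammatical and lexical meaning in forms such as walks, walked, and walking can be illustrated with a deliberately naive sketch that strips a few regular English inflectional suffixes; real morphological analysis would need a lemmatiser and a lexicon, and the suffix list here is only illustrative.

```python
# Naive suffix stripping: the suffix carries grammatical meaning (tense, number,
# aspect), while the remaining base carries the shared lexical meaning.
SUFFIXES = ("ing", "ed", "es", "s")

def split_inflection(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)], suffix
    return word, ""

for form in ["walk", "walks", "walked", "walking"]:
    base, marker = split_inflection(form)
    print(f"{form:8} -> lexical base {base!r}, grammatical marker {marker!r}")
```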
https://en.wikipedia.org/wiki/Lexicology
Lexical semantics(also known aslexicosemantics), as a subfield oflinguisticsemantics, is the study of word meanings.[1][2]It includes the study of how words structure their meaning, how they act ingrammarandcompositionality,[1]and the relationships between the distinct senses and uses of a word.[2] The units of analysis in lexical semantics are lexical units which include not only words but also sub-words or sub-units such asaffixesand evencompound wordsandphrases. Lexical units include the catalogue of words in a language, thelexicon. Lexical semantics looks at how the meaning of the lexical units correlates with the structure of the language orsyntax. This is referred to assyntax-semantics interface.[3] The study of lexical semantics concerns: Lexical units, also referred to as syntactic atoms, can be independent such as in the case of root words or parts of compound words or they require association with other units, as prefixes and suffixes do. The former are termedfree morphemesand the latterbound morphemes.[4]They fall into a narrow range of meanings (semantic fields) and can combine with each other to generate new denotations. Cognitive semanticsis the linguistic paradigm/framework that since the 1980s has generated the most studies in lexical semantics, introducing innovations likeprototype theory,conceptual metaphors, andframe semantics.[5] Lexical items contain information about category (lexical and syntactic), form and meaning. The semantics related to these categories then relate to each lexical item in thelexicon.[6]Lexical items can also be semantically classified based on whether their meanings are derived from single lexical units or from their surrounding environment. Lexical items participate in regular patterns of association with each other. Some relations between lexical items includehyponymy, hypernymy,synonymy, andantonymy, as well ashomonymy.[6] Hyponymy and hypernymyrefer to a relationship between a general term and the more specific terms that fall under the category of the general term. For example, the colorsred,green,blueandyelloware hyponyms. They fall under the general term ofcolor, which is the hypernym. Hyponyms and hypernyms can be described by using ataxonomy, as seen in the example. Synonymrefers to words that are pronounced and spelled differently but contain the same meaning. Antonymrefers to words that are related by having the opposite meanings to each other. There are three types of antonyms:graded antonyms,complementary antonyms, andrelational antonyms. Homonymyrefers to the relationship between words that are spelled or pronounced the same way but hold different meanings. Polysemyrefers to a word having two or more related meanings. Lexical semantics also explores whether the meaning of a lexical unit is established by looking at its neighbourhood in thesemantic network,[7](words it occurs with in natural sentences), or whether the meaning is already locally contained in the lexical unit. In English,WordNetis an example of a semantic network. It contains English words that are grouped intosynsets. Some semantic relations between these synsets aremeronymy,hyponymy,synonymy, andantonymy. First proposed by Trier in the 1930s,[8]semantic fieldtheory proposes that a group of words with interrelated meanings can be categorized under a larger conceptual domain. This entire entity is thereby known as a semantic field. The wordsboil,bake,fry, androast, for example, would fall under the larger semantic category ofcooking. 
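Relations such as hypernymy and hyponymy, and semantic groupings like the cooking verbs just mentioned, can be explored directly in WordNet. The sketch below assumes NLTK is installed and its WordNet data has been downloaded; the synset identifiers used are the conventional WordNet names and may differ slightly between WordNet versions.

```python
from nltk.corpus import wordnet as wn  # assumes: pip install nltk; nltk.download('wordnet')

# Hypernyms of a colour term: 'red' sits under a more general colour synset.
red = wn.synset("red.n.01")
print("hypernyms of red:", [s.name() for s in red.hypernyms()])

# Hyponyms of 'color': the specific colour terms that fall under the general term.
color = wn.synset("color.n.01")
print("some hyponyms of color:", [s.name() for s in color.hyponyms()[:5]])

# Cooking verbs form a small semantic field; each has its own hypernym chain.
for verb in ["boil", "bake", "fry", "roast"]:
    synset = wn.synsets(verb, pos=wn.VERB)[0]
    print(verb, "->", [s.name() for s in synset.hypernyms()])
```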
Semantic field theory asserts that lexical meaning cannot be fully understood by looking at a word in isolation, but by looking at a group of semantically related words.[9]Semantic relations can refer to any relationship in meaning betweenlexemes, including synonymy(bigandlarge),antonymy(bigandsmall),hypernymy and hyponymy(roseandflower),converseness(buyandsell),and incompatibility. Semantic field theory does not have concrete guidelines that determine the extent of semantic relations between lexemes. The abstract validity of the theory is a subject of debate.[8] Knowing the meaning of a lexical item therefore means knowing the semantic entailments the word brings with it. However, it is also possible to understand only one word of a semantic field without understanding other related words. Take, for example, a taxonomy of plants and animals: it is possible to understand the wordsroseandrabbitwithout knowing what amarigoldor amuskratis. This is applicable to colors as well, such as understanding the wordredwithout knowing the meaning ofscarlet,but understandingscarletwithout knowing the meaning ofredmay be less likely. A semantic field can thus be very large or very small, depending on the level of contrast being made between lexical items. While cat and dog both fall under the larger semantic field of animal, including the breed of dog, likeGerman shepherd,would require contrasts between other breeds of dog (e.g.corgi, orpoodle), thus expanding the semantic field further.[10] Event structure is defined as the semantic relation of a verb and its syntactic properties.[11]Event structure has three primary components:[12] Verbs can belong to one of three types: states, processes, or transitions. (1a) defines the state of the door being closed; there is no opposition in thispredicate. (1b) and (1c) both have predicates showing transitions of the door going from being implicitlyopentoclosed. (1b) gives theintransitiveuse of the verb close, with no explicit mention of the causer, but (1c) makes explicit mention of theagentinvolved in the action. The analysis of these different lexical units had a decisive role in the field of "generative linguistics" during the 1960s.[13]The termgenerativewas proposed by Noam Chomsky in his bookSyntactic Structurespublished in 1957. The termgenerative linguisticswas based on Chomsky'sgenerative grammar, a linguistic theory that states systematic sets of rules (X' theory) can predict grammatical phrases within a natural language.[14]Generative Linguistics is also known as Government-Binding Theory. Generative linguists of the 1960s, includingNoam ChomskyandErnst von Glasersfeld, believed semantic relations betweentransitive verbsandintransitive verbswere tied to their independent syntactic organization.[13]This meant that they saw a simple verb phrase as encompassing a more complex syntactic structure.[13] Lexicalist theories became popular during the 1980s, and emphasized that a word's internal structure was a question ofmorphologyand not ofsyntax.[15]Lexicalist theories emphasized that complex words (resulting from compounding and derivation ofaffixes) have lexical entries that are derived from morphology, rather than resulting from overlapping syntactic and phonological properties, as Generative Linguistics predicts. 
The distinction between Generative Linguistics and Lexicalist theories can be illustrated by considering the transformation of the worddestroytodestruction: Alexical entrylists the basic properties of either the whole word, or the individual properties of the morphemes that make up the word itself. The properties oflexical itemsinclude their category selectionc-selection, selectional propertiess-selection, (also known as semantic selection),[13]phonological properties, and features. The properties of lexical items are idiosyncratic, unpredictable, and contain specific information about the lexical items that they describe.[13] The following is an example of a lexical entry for the verbput: Lexicalist theories state that a word's meaning is derived from its morphology or a speaker's lexicon, and not its syntax. The degree of morphology's influence on overall grammar remains controversial.[13]Currently, the linguists that perceive one engine driving both morphological items and syntactic items are in the majority. By the early 1990s, Chomsky'sminimalist frameworkon language structure led to sophisticated probing techniques for investigating languages.[16]These probing techniques analyzed negative data overprescriptive grammars, and because of Chomsky's proposed Extended Projection Principle in 1986, probing techniques showed where specifiers of a sentence had moved to in order to fulfill the EPP. This allowed syntacticians to hypothesize that lexical items with complex syntactic features (such asditransitive,inchoative, andcausativeverbs), could select their own specifier element within asyntax treeconstruction. (For more on probing techniques, see Suci, G., Gammon, P., & Gamlin, P. (1979)). This brought the focus back on thesyntax-lexical semantics interface; however, syntacticians still sought to understand the relationship between complex verbs and their related syntactic structure, and to what degree the syntax was projected from the lexicon, as the Lexicalist theories argued. In the mid 1990s, linguistsHeidi Harley,Samuel Jay Keyser, andKenneth Haleaddressed some of the implications posed by complex verbs and a lexically-derived syntax. Their proposals indicated that the predicates CAUSE and BECOME, referred to as subunits within a Verb Phrase, acted as a lexical semantic template.[17]Predicatesare verbs and state or affirm something about the subject of the sentence or the argument of the sentence. For example, the predicateswentandis herebelow affirm the argument of the subject and the state of the subject respectively. The subunits of Verb Phrases led to the Argument Structure Hypothesis and Verb Phrase Hypothesis, both outlined below.[18]The recursion found under the "umbrella" Verb Phrase, the VP Shell, accommodated binary-branching theory; another critical topic during the 1990s.[19]Current theory recognizes the predicate in Specifier position of a tree in inchoative/anticausativeverbs (intransitive), or causative verbs (transitive) is what selects thetheta roleconjoined with a particular verb.[13] Kenneth HaleandSamuel Jay Keyserintroduced their thesis on lexical argument structure during the early 1990s.[20]They argue that a predicate's argument structure is represented in the syntax, and that the syntactic representation of the predicate is a lexical projection of its arguments. Thus, the structure of a predicate is strictly a lexical representation, where each phrasal head projects its argument onto a phrasal level within the syntax tree. 
The selection of this phrasal head is based on Chomsky's Empty Category Principle. This lexical projection of the predicate's argument onto the syntactic structure is the foundation for the Argument Structure Hypothesis.[20] This idea coincides with Chomsky's Projection Principle, because it forces a VP to be selected locally and be selected by a Tense Phrase (TP). Based on the interaction between lexical properties, locality, and the properties of the EPP (where a phrasal head selects another phrasal element locally), Hale and Keyser claim that the Specifier position or a complement are the only two semantic relations that project a predicate's argument. In 2003, Hale and Keyser put forward this hypothesis and argued that a lexical unit must have one or the other, Specifier or Complement, but cannot have both.[21] Morris Halle and Alec Marantz introduced the notion of distributed morphology in 1993.[22] This theory views the syntactic structure of words as a result of morphology and semantics, instead of the morpho-semantic interface being predicted by the syntax. Essentially, the idea is that under the Extended Projection Principle there is a local boundary under which a special meaning occurs. This meaning can only occur if a head-projecting morpheme is present within the local domain of the syntactic structure.[23] The following is an example of the tree structure proposed by distributed morphology for the sentence "John's destroying the city". Destroy is the root, V-1 represents verbalization, and D represents nominalization.[23] In her 2008 book, Verb Meaning and The Lexicon: A First-Phase Syntax, linguist Gillian Ramchand acknowledges the roles of lexical entries in the selection of complex verbs and their arguments.[24] 'First-Phase' syntax proposes that event structure and event participants are directly represented in the syntax by means of binary branching. This branching ensures that the Specifier is consistently the subject, even when investigating the projection of a complex verb's lexical entry and its corresponding syntactic construction. This generalization is also present in Ramchand's theory that the complement of a head for a complex verb phrase must co-describe the verb's event. Ramchand also introduced the concept of Homomorphic Unity, which refers to the structural synchronization between the head of a complex verb phrase and its complement. According to Ramchand, Homomorphic Unity is "when two event descriptors are syntactically Merged, the structure of the complement must unify with the structure of the head."[24] The unaccusative hypothesis was put forward by David Perlmutter in 1987, and describes how two classes of intransitive verbs have two different syntactic structures. These are unaccusative verbs and unergative verbs.[25] These classes of verbs are defined by Perlmutter only in syntactic terms. They have the following structures underlyingly: The following is an example from English: In (2a) the verb underlyingly takes a direct object, while in (2b) the verb underlyingly takes a subject. The change-of-state property of Verb Phrases (VP) is a significant observation for the syntax of lexical semantics because it provides evidence that subunits are embedded in the VP structure, and that the meaning of the entire VP is influenced by this internal grammatical structure. (For example, the VP the vase broke carries a change-of-state meaning of the vase becoming broken, and thus has a silent BECOME subunit within its underlying structure.)
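A minimal sketch can make the silent subunits concrete: below, inchoative and causative readings are represented as nested tuples labelled BECOME and CAUSE, following the decomposition described above; the helper functions and string labels are purely illustrative.

```python
# Event decomposition with silent subunits rendered as explicit labels.
def become(state, theme):
    return ("BECOME", (state, theme))

def cause(agent, event):
    return ("CAUSE", agent, event)

# "The vase broke"      -> BECOME(broken(vase))              (no causer expressed)
inchoative = become("broken", "the vase")

# "John broke the vase" -> CAUSE(John, BECOME(broken(vase)))
causative = cause("John", become("broken", "the vase"))

print(inchoative)
print(causative)
```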
There are two types of change-of-state predicates: inchoative and causative. Inchoative verbs are intransitive, meaning that they occur without a direct object, and these verbs express that their subject has undergone a certain change of state. Inchoative verbs are also known as anticausative verbs.[27] Causative verbs are transitive, meaning that they occur with a direct object, and they express that the subject causes a change of state in the object. Linguist Martin Haspelmath classifies inchoative/causative verb pairs under three main categories: causative, anticausative, and non-directed alternations.[28] Non-directed alternations are further subdivided into labile, equipollent, and suppletive alternations. English tends to favour labile alternations,[29] meaning that the same verb is used in the inchoative and causative forms.[28] This can be seen in the following example: broke is an intransitive inchoative verb in (3a) and a transitive causative verb in (3b). As seen in the underlying tree structure for (3a), the silent subunit BECOME is embedded within the Verb Phrase (VP), resulting in the inchoative change-of-state meaning (y become z). In the underlying tree structure for (3b), the silent subunits CAUS and BECOME are both embedded within the VP, resulting in the causative change-of-state meaning (x cause y become z).[13]

English change-of-state verbs are often de-adjectival, meaning that they are derived from adjectives. We can see this in the following example: In example (4a) we start with a stative intransitive adjective, and derive (4b) where we see an intransitive inchoative verb. In (4c) we see a transitive causative verb.

Some languages (e.g., German, Italian, and French) have multiple morphological classes of inchoative verbs.[31] Generally speaking, these languages separate their inchoative verbs into three classes: verbs that are obligatorily unmarked (they are not marked with a reflexive pronoun, clitic, or affix), verbs that are optionally marked, and verbs that are obligatorily marked. The causative verbs in these languages remain unmarked. Haspelmath refers to this as the anticausative alternation. For example, inchoative verbs in German are classified into three morphological classes. Class A verbs necessarily form inchoatives with the reflexive pronoun sich, Class B verbs form inchoatives necessarily without the reflexive pronoun, and Class C verbs form inchoatives optionally with or without the reflexive pronoun. In example (5), the verb zerbrach is an unmarked inchoative verb from Class B, which also remains unmarked in its causative form.[31]

(5) a. Die Vase zerbrach.
       the vase broke
       'The vase broke.'
    b. Hans zerbrach die Vase.
       John broke the vase
       'John broke the vase.'

In contrast, the verb öffnete is a Class A verb which necessarily takes the reflexive pronoun sich in its inchoative form, but remains unmarked in its causative form.

    Die Tür öffnete sich.
    the door opened REFL
    'The door opened.'

    Hans öffnete die Tür.
    John opened the door
    'John opened the door.'

There has been some debate as to whether the different classes of inchoative verbs are purely based in morphology, or whether the differentiation is derived from the lexical-semantic properties of each individual verb.
While this debate is still unresolved in languages such as Italian, French, and Greek, it has been suggested by the linguist Florian Schäfer that there are semantic differences between marked and unmarked inchoatives in German: specifically, only unmarked inchoative verbs allow an unintentional causer reading (meaning that they can take on an "x unintentionally caused y" reading).[31]

Causative morphemes are present in the verbs of many languages (e.g., Tagalog, Malagasy, Turkish, etc.), usually appearing in the form of an affix on the verb.[27] This can be seen in the following examples from Tagalog, where the causative prefix pag- (realized here as nag) attaches to the verb tumba to derive a causative transitive verb in (7b), but the prefix does not appear in the inchoative intransitive verb in (7a). Haspelmath refers to this as the causative alternation.

(7) a. Tumumba ang bata.
       fell the child
       'The child fell.'
    b. Nagtumba ng bata si Rosa.
       CAUS-fall of child DET Rosa
       'Rosa knocked the child down.'

Richard Kayne proposed the idea of unambiguous paths as an alternative to c-commanding relationships, which is the type of structure seen in examples (8). The idea of unambiguous paths stated that an antecedent and an anaphor should be connected via an unambiguous path. This means that the line connecting an antecedent and an anaphor cannot be broken by another argument.[32] When applied to ditransitive verbs, this hypothesis introduces the structure in diagram (8a). In this tree structure it can be seen that the same path can be traced from either DP to the verb. Tree diagram (7b) illustrates this structure with an example from English. This analysis was a step toward binary-branching trees, a theoretical change that was furthered by Larson's VP-shell analysis.[33]

Larson posited his Single Complement Hypothesis, in which he stated that every complement is introduced with one verb. The Double Object Construction presented in 1988 gave clear evidence of a hierarchical structure using asymmetrical binary branching.[33] Sentences with double objects occur with ditransitive verbs, as we can see in the following example: It appears as if the verb send has two objects, or complements (arguments): both Mary, the recipient, and parcel, the theme. The argument structure of ditransitive verb phrases is complex and has undergone different structural hypotheses. The original structural hypothesis was that of ternary branching, seen in (9a) and (9b), but following from Kayne's 1981 analysis, Larson maintained that each complement is introduced by a verb.[32][33] This hypothesis shows that there is a lower verb embedded within a VP shell that combines with an upper verb (which can be invisible), thus creating a VP shell (as seen in the tree diagram to the right). Most current theories no longer allow the ternary tree structure of (9a) and (9b), so the theme and the goal/recipient are seen in a hierarchical relationship within a binary-branching structure.[35]

Following are examples of Larson's tests to show that the hierarchical (superior) order of any two objects aligns with a linear order, so that the second is governed (c-commanded) by the first.[33] This is in keeping with the X-bar theory of Phrase Structure Grammar, with Larson's tree structure using the empty Verb to which the V is raised.
Reflexives and reciprocals (anaphors) show this relationship in which they must be c-commanded by their antecedents, such that the (10a) is grammatical but (10b) is not: A pronoun must have a quantifier as its antecedent: Question words follow this order: The effect of negative polarity means that "any" must have a negative quantifier as an antecedent: These tests with ditransitive verbs that confirm c-command also confirm the presence of underlying or invisible causative verbs. In ditransitive verbs such asgive someone something,send someone something,show someone somethingetc. there is an underlying causative meaning that is represented in the underlying structure. As seen in example in (9a) above,John sent Mary a package, there is the underlying meaning that 'John "caused" Mary to have a package'. Larson proposed that both sentences in (9a) and (9b) share the same underlying structure and the difference on the surface lies in that the double object construction "John sent Mary a package" is derived by transformation from a NP plus PP construction "John sent a package to Mary". Beck and Johnson, however, give evidence that the two underlying structures are not the same.[36]In so doing, they also give further evidence of the presence of two VPs where the verb attaches to a causative verb. In examples (14a) and (b), each of the double object constructions are alternated with NP + PP constructions. Beck and Johnson show that the object in (15a) has a different relation to the motion verb as it is not able to carry the meaning of HAVING which the possessor (9a) and (15a) can. In (15a), Satoshi is an animate possessor and so is caused to HAVE kisimen. The PPfor Satoshiin (15b) is of a benefactive nature and does not necessarily carry this meaning of HAVE either. The underlying structures are therefore not the same. The differences lie in the semantics and the syntax of the sentences, in contrast to the transformational theory of Larson. Further evidence for the structural existence of VP shells with an invisible verbal unit is given in the application of the adjunct or modifier "again". Sentence (16) is ambiguous and looking into the two different meanings reveals a difference in structure. However, in (17a), it is clear that it was Sally who repeated the action of opening the door. In (17b), the event is in the door being opened and Sally may or may not have opened it previously. To render these two different meanings, "again" attaches to VPs in two different places, and thus describes two events with a purely structural change.
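Using the same tuple notation as the earlier decomposition sketch, the structural ambiguity of "again" can be rendered by letting the adjunct attach either to the outer (causing) VP or to the inner (result) VP; the labels are illustrative and correspond to the two readings just described.

```python
# Two structures for "Sally opened the door again", differing only in where
# the adjunct "again" attaches.
open_event = ("BECOME", ("open", "the door"))

# Reading 1: Sally repeated the act of opening (again modifies the whole causing event).
repetitive = ("again", ("CAUSE", "Sally", open_event))

# Reading 2: the door returns to an open state (again modifies only the result event).
restitutive = ("CAUSE", "Sally", ("again", open_event))

print(repetitive)
print(restitutive)
```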
https://en.wikipedia.org/wiki/Lexical_semantics
Acalque/kælk/or loan translation is awordorphraseborrowed from anotherlanguagebyliteral, word-for-word (Latin: "verbum pro verbo") translation. This list contains examples ofcalquesin various languages. Latin calques many terms from Greek,[58][59]many of which have beenborrowed by English. Examples of Romance language expressions calqued from foreign languages include: Many calques found in Southwestern US Spanish come from English: Also technological terms calqued from English are used throughout the Spanish-speaking world: Note:From a technical standpoint,Danishand thebokmålstandard ofNorwegianare the same language, with minor spelling and pronunciation differences (equivalent to British and American English). For this reason, they will share a section. In more recent times, the Macedonian language has calqued new words from otherprestige languagesincludingGerman,FrenchandEnglish. Some words were originally calqued intoRussianand then absorbed into Macedonian, considering the close relatedness of the two languages. Therefore, many of these calques can also be consideredRussianisms. The poetAleksandr Pushkin(1799–1837) was perhaps the most influential among the Russian literary figures who would transform the modern Russian language and vastly expand its ability to handle abstract and scientific concepts by importing the sophisticated vocabulary of Western intellectuals.[citation needed] Although some Western vocabulary entered the language as loanwords – e.g., Italiansalvietta, "napkin", was simply Russified in sound and spelling to салфетка (salfetka) – Pushkin and those he influenced most often preferred to render foreign borrowings into Russian by calquing. Compound words were broken down to their component roots, which were then translated piece-by-piece to their Slavic equivalents. But not all of the coinages caught on and became permanent additions to the lexicon; for example, любомудрие (ljubomudrie) was promoted by 19th-century Russian intellectuals as a calque of "philosophy", but the word eventually fell out of fashion, and modern Russian instead uses the loanword философия (filosofija). SinceFinnish, aUralic language, differs radically in pronunciation and orthography from Indo-European languages, most loans adopted in Finnish either are calques or soon become such as foreign words are translated into Finnish. Examples include: When Jewsimmigrate to Israel, they oftenHebraizetheir surnames. One approach to doing so was by calque from the original (often German or Yiddish) surname. For instance, Imi Lichtenfield (itself a half-calque[definition needed]), founder of the martial artKrav Maga, became Imi Sde-Or. Both last names mean "light field". For more examples and other approaches, see the article onHebraization of surnames. According to linguistGhil'ad Zuckermann, the more contributing languages have a structurally identical expression, the more likely it is to be calqued into the target language. In Israeli (his term for "Modern Hebrew") one usesmá nishmà, lit. "what's heard?", with the meaning of "what's up?". Zuckermann argues that this is a calque not only of the Yiddish expression ?וואָס הערט זיך (vos hert zikh?), but also of the parallel expressions in Polish, Russian and Romanian. Whereas most revivalists were native Yiddish-speakers, many first speakers of Modern Hebrew spoke Russian and Polish too. 
So a Polish speaker in the 1930s might have usedmá nishmànot (only) due to Yiddishvos hert zikh?but rather (also) due to PolishCo słychać?A Russian Jew might have usedma nishmadue toЧто слышно?(pronouncedchto slyshno) and a Romanian Israeli would echoce se aude.[78]According to Zuckermann, such multi-sourced calquing is a manifestation of theCongruence principle.[79] ModernMalayalamis replete with calques from English. The calques manifest themselves as idioms and expressions and many have gone on to become clichés. However standalone words are very few. The following is a list of commonly used calque phrases/expressions.All of these are exact translations of the corresponding English phrases.
https://en.wikipedia.org/wiki/List_of_calques
In thedystopiannovelNineteen Eighty-Four(also published as1984), byGeorge Orwell,Newspeakis thefictional languageofOceania, atotalitariansuperstate. To meet the ideological requirements ofIngsoc(English Socialism) in Oceania, the Party created Newspeak, which is acontrolled languageof simplified grammar and limited vocabulary designed to limit a person's ability forcritical thinking. The Newspeak language thus limits the person's ability to articulate and communicate abstract concepts, such aspersonal identity, self-expression, andfree will,[1][2]which arethoughtcrimes, acts of personal independence that contradict theideological orthodoxyof Ingsoccollectivism.[3][4] In the appendix to the novel, "The Principles of Newspeak", Orwell explains that Newspeak follows most rules of English grammar, yet is a language characterised by a continually diminishing vocabulary; complete thoughts are reduced to simple terms of simplistic meaning. The political contractions of Newspeak —Ingsoc(English Socialism),Minitrue(Ministry of Truth),Miniplenty(Ministry of Plenty) — are similar to Nazi and Soviet contractions in the 20th century, such asGestapo(Geheime Staatspolizei),politburo(Political Bureau of the Central Committee of the Communist Party of the Soviet Union),Comintern(Communist International),kolkhoz(collective farm), andKomsomol(communist youth union). Newspeak contractions usually aresyllabic abbreviationsmeant to conceal the speaker's ideology from the speaker and the listener.[1]: 310–8 As aconstructed language, Newspeak is a language of plannedphonology, limited grammar, and finite vocabulary, much like the phonology, grammar, and vocabulary ofBasic English(British American Scientific International Commercial English), which was proposed by the British linguistCharles Kay Ogdenin 1930. As acontrolled languagewithout complex constructions or ambiguous usages, Basic English was designed to be easy to learn, to sound, and to speak, with a vocabulary of 850 words composed specifically to facilitate the communication of facts, not the communication of abstract thought. While employed as a propagandist byBBCduring the Second World War (1939–1945), Orwell grew to believe that the constructions of Basic English, as a controlled language, imposed functional limitations upon the speech, the writing, and the thinking of the users.[5] In the essay "Politics and the English Language" (1946)[6]and in "The Principles of Newspeak" appendix toNineteen Eighty-Four(1949), Orwell discusses the communication function of English and contemporary ideological changes in usage during the 1940s. In the novel, the linguistic decadence of English is the central theme about language-as-communication.[7]: 171In the essay,Standard Englishwas characterised by dying metaphors, pretentious diction, and high-flown rhetoric. Orwell concludes: "I said earlier that the decadence of our language is probably curable. 
Those who deny this [decadence] may argue that language merely reflects existing social conditions, and that we cannot influence its development, by any direct tinkering with words or constructions."[6] Orwell argued that the decline of English went hand-in-hand with the decline ofintellectualismamong society, and thus facilitated the manipulation of listeners and speakers and writers into consequent political chaos.[7]The story ofNineteen Eighty-Fourportrays the connection betweenauthoritarianrégimes and doublespeak language, earlier discussed in "Politics and the English Language": When the general atmosphere is bad, language must suffer. I should expect to find — this is a guess, which I have not sufficient knowledge to verify — that the German, Russian and Italian languages have all deteriorated in the last ten or fifteen years, as a result of dictatorship. But if thought corrupts language, language can also corrupt thought.[6] In contemporary political usage, the termNewspeakis used to impugn an opponent who introduces new definitions of words to suit their political agenda.[8][9] To eliminate the expression of ambiguity and nuance from Oldspeak (Standard English) in order to reduce the English language's communication functions, Newspeak uses simplistic constructions of language, such as the dichotomies ofpleasurevs.painandhappinessvs.sadness. Such dichotomies produced the linguistic and political concepts ofgoodthinkandcrimethinkthat reinforce thetotalitarianismof The Party over the people ofOceania. The long-term goal of The Party is that, by 2050, Newspeak would be the universal language of every member of The Party and of Oceanian society, except for the Proles, the working class of Oceania.[1]: 309 In Newspeak, English root-words function both as nouns and as verbs, which reduces the vocabulary available for the speaker to communicate meaning; e.g. as a noun and as a verb, the wordthinkeliminates the wordthoughtto functionally communicatethoughts, which are the products ofintellectualism. As a form of personal communication, Newspeak is spoken instaccatorhythm, using short words that are easy to pronounce, so that speech is physically automatic and intellectually unconscious, by which mental habits the user of Newspeak avoidscritical thinking. English words ofcomparative and superlativemeanings and irregular spellings were simplified; thus,betterbecomesgooderandbestbecomesgoodest. The Newspeak prefixesplus–anddoubleplus–are used for emphasis, e.g.pluscoldmeans "very cold" anddoublepluscoldmeans "very very cold".[original research?]Newspeak forms adjectives by appending the suffix–fulto a root-word, e.g.goodthinkfulmeans "orthodox in thought"; whilst adverbs are formed by adding the suffix–wise, e.g.goodthinkwisemeans "in an orthodox manner". The intellectual purpose of Newspeak is to make all anti-Ingsocthoughts "literally unthinkable" as speech. As constructed, Newspeak vocabulary communicates the exact expression of sense and meaning that a member of the Party could wish to express, while excluding secondary denotations and connotations, eliminating the ways oflateral thinking(indirect thinking), which allow a word to have additional meanings. The linguistic simplification of Oldspeak into Newspeak was realised with neologisms, the elimination of ideologically undesirable words, and the elimination of the politically unorthodox meanings of words.[1]: 310 The wordfreestill existed in Newspeak, but only to communicate the absence of something, e.g. 
"The dog is free from lice" or "This field is free of weeds". The word could not denotefree will, because intellectual freedom was no longer supposed to exist in Oceania. The limitations of Newspeak's vocabulary enabled the Party to effectively control the population's minds, by allowing the user only a very narrow range of spoken and written thought; hence, words such as:crimethink(thought crime),doublethink(accepting contradictory beliefs), andIngsoccommunicated only their surface meanings.[1]: 309–10 In the story ofNineteen Eighty-Four, thelexicologistcharacterSymediscusses his editorial work on the latest edition of theNewspeak Dictionary: By 2050—earlier, probably—all real knowledge of Oldspeak will have disappeared. The whole literature of the past will have been destroyed.Chaucer,Shakespeare,Milton,Byron—they'll exist only in Newspeak versions, not merely changed into something different, but actually contradictory of what they used to be. Even the literature of The Party will change. Even the slogans will change. How could you have a slogan likeFreedom is Slaverywhen the concept of freedom has been abolished? The whole climate of thought will be different. In fact, there willbeno thought, as we understand it now. Orthodoxy means not thinking—not needing to think. Orthodoxy is unconsciousness.[1] Newspeak words are classified by three distinct classes: the A, B, and C vocabularies. The words of the A vocabulary describe the functional concepts of daily life (e.g. eating and drinking, working and cooking). It consists mostly of English words, but they are very small in number compared to English, and each word's meanings are "far more rigidly defined" than in English. The words of the B vocabulary are deliberately constructed for political purposes to convey complex ideas in a simple form. They are compound words and noun-verbs with political significance that are meant to impose and instill in Oceania's citizens the correct mental attitudes required by the Party. In the appendix, Orwell explains that the very structure of the B vocabulary (the fact that they are compound words) carries ideological weight.[1]: 310The large number of contractions in the B vocabulary—for example, the Ministry of Truth being called Minitrue, the Records Department being called Recdep, the Fiction Department being called Ficdep, the Teleprogrammes Department being called Teledep—is not done simply to save time. As with examples of compound words in the political language of the 20th century—Nazi,Gestapo,Politburo,Comintern,Inprecor,Agitprop, and many others—Orwell remarks that the Party believed that abbreviating a name could "narrowly and subtly" alter a word's meaning. Newspeak is supposed to make this effort a conscious purpose: [...]Cominternis a word that can be uttered almost without taking thought, whereasCommunist Internationalis a phrase over which one is obliged to linger at least momentarily. In the same way, the associations called up by a word likeMinitrueare fewer and more controllable than those called up by Ministry of Truth. This accounted not only for the habit of abbreviating whenever possible, but also for the almost exaggerated care that was taken to make every word easily pronounceable.[1]: 318 The B words in Newspeak are supposed to sound pleasant, while also being easily pronounceable, in an attempt to make speech on anything political "staccato and monotonous" and, ultimately, mask from the speaker all ideological content. 
The words of the C vocabulary are scientific and technical terms that supplement the linguistic functions of the A and B vocabularies. These words are the same scientific terms as in English, but many of them have had their meanings rigidified to attempt, as with the A vocabulary, to prevent speakers from being able to express anti-government thoughts. Distribution of the C vocabulary is limited, because the Party does not want citizens to know more than a select few ways of life or techniques of production. Hence, the Oldspeak word science has no equivalent term in Newspeak; instead, these words are simply treated as specific technical words for speaking of technical fields.[1]: 309–323

Newspeak's grammar is greatly simplified compared to English. It also has two "outstanding" characteristics: almost completely interchangeable linguistic functions between the parts of speech (any word can function as a verb, noun, adjective, or adverb), and heavy inflectional regularity in the construction of usages and of words.[1]: 311 Inflectional regularity means that most irregular words are replaced with regular words combined with prefixes and suffixes. For example, the preterite and the past participle constructions of verbs are alike, with both ending in –ed. Hence, the Newspeak preterite of the English word steal is stealed, and that of the word think is thinked. Likewise, the past participles of swim, give, bring, speak, and take are, respectively, swimmed, gived, bringed, speaked, and taked, with all irregular forms (such as swam, gave, and brought) being eliminated. The auxiliaries (including to be), pronouns, demonstratives, and relatives still inflect irregularly. They mostly follow their use in English, but the word whom and the shall and should tenses are dropped, whom being replaced by who, and shall and should by will and would. In spoken and written Newspeak, suffixes are also used in the elimination of irregular conjugations. Therefore, the Oldspeak sentence "He ran extremely quickly" would become "He runned doubleplusspeedwise".

This is a list of Newspeak words known from the novel. It does not include words carried over directly from English with no change in meaning, nor does it include regular uses of the listed affixes (e.g. unbellyfeel) unless they are particularly significant. The novel says that the Ministry of Truth uses a jargon "not actually Newspeak, but consisting largely of Newspeak words" for its internal memos. As many of the words in this list (e.g. "bb", "upsub") come from such memos, it is not certain whether those words are actually Newspeak.
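The regularised morphology described above invites a toy implementation. The sketch below covers only the affixation rules mentioned in this article (the –ed preterite, gooder/goodest comparison, the plus–/doubleplus– intensifiers, and the –ful and –wise derivations); the function names and example words are illustrative only.

```python
# Toy generators for the regularised Newspeak morphology described above.
def preterite(verb):
    return verb + "ed"                      # steal -> stealed, think -> thinked

def compare(adjective, superlative=False):
    return adjective + ("est" if superlative else "er")   # good -> gooder / goodest

def intensify(word, double=False):
    return ("doubleplus" if double else "plus") + word     # cold -> pluscold, doublepluscold

def to_adjective(root):
    return root + "ful"                     # goodthink -> goodthinkful

def to_adverb(root):
    return root + "wise"                    # goodthink -> goodthinkwise

print(preterite("think"), compare("good"), compare("good", superlative=True))
print(intensify("cold"), intensify("cold", double=True), to_adjective("goodthink"), to_adverb("speed"))
```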
https://en.wikipedia.org/wiki/Newspeak
In the Neo-Gricean approach to semantics and pragmatics championed by Yale linguist Laurence Horn, the Q-principle ("Q" for "Quantity") is a reformulation of Paul Grice's maxim of quantity (see Gricean maxims) combined with the first two sub-maxims of manner.[1] The Q-principle states: "Say as much as you can (given R)." As such it interacts with the R-principle, which states: "Say no more than you must (given Q)."[2][3] The Q-principle leads to the implicature (or narrowing) that if the speaker did not make a stronger statement (or say more), then its denial is (implied to be) true. For instance, the inference from "He entered a house" to "He did not enter his own house" is a Q-based inference, i.e. deriving from the Q-principle.[2]
https://en.wikipedia.org/wiki/Q-based_narrowing
Aretronymis a newer name for something thatdifferentiatesit from something else that is newer, similar, or seen in everyday life; thus, avoiding confusion between the two.[1][2] The termretronym, aneologismcomposed of thecombining formsretro-(from Latinretro,[3]"before") +-nym(from Greekónoma, "name"), was coined byFrank Mankiewiczin 1980 and popularized byWilliam SafireinThe New York Times Magazine.[4][5] In 2000,The American Heritage Dictionary(4th edition) became the first major dictionary to include the wordretronym.[6] The global war from 1914 to 1918 was referred to at the time as theGreat War. However, after the subsequent global war erupted in 1939, the phraseGreat Warwas gradually deprecated. The first came to be known asWorld War Iand the second asWorld War II. The first bicycles with two wheels of equal size were called "safety bicycles" because they were easier to handle than the then-dominant style that had one large wheel and one small wheel, which then became known as an "ordinary" bicycle.[7]Since the end of the 19th century, most bicycles have been expected to have two equal-sized wheels, and the other type has been renamed "penny-farthing" or "high-wheeler" bicycle.[8] TheAtari Video Computer Systemplatform was rebranded the "Atari 2600" (after its product code, CX-2600) in 1982 following the launch of its successor, theAtari 5200, and all hardware and software related to the platform were released under this new branding from that point on. Prior to that time, Atari often used the initialism "VCS" in official literature and other media, but colloquially the Video Computer System was often simply called "the Atari."[9] The first film in theStar Warsfranchise released in 1977 was simply titledStar Wars. It was given the subtitle "Episode IV: A New Hope" for its 1981 theatrical re-release, shortly after the release of its sequelThe Empire Strikes Backin 1980.[10]Initially, this subtitle was limited to the opening text crawl, as all three films in theoriginal Star Wars trilogy(Star Wars,The Empire Strikes Back, andReturn of the Jedi) were still sold under their original theatrical titles on home media formats (such as VHS and Laserdisc). It was not until their 2004 DVD releases that the titles of the individual three films were changed to follow the same titling pattern as theStar Wars prequel trilogy(e.g.Star Wars Episode IV - A New Hope). In the 1990s, when the Internet became widely popular andemailaccounts' instant delivery common, mail carried by thepostal servicecame to be called "snail mail" for its slower delivery and email sometimes just "mail."[citation needed] Advances in technology and science are often responsible for the coinage of retronyms. For example, the termacoustic guitarwas coined with the advent of theelectric guitar,[4]analog watchwas introduced to distinguish from thedigital watch,[5]push bikewas created to distinguish from themotorized bicycle, andfeature phonewas coined to distinguish from thesmartphone. Likewise,visible lightrefers toelectromagnetic radiationon the narrowvisible spectrum, andwater icewas coined to distinguish the solid state ofwater(including exotic forms) from the solid state of othervolatilessuch as carbon dioxide and argon.
https://en.wikipedia.org/wiki/Retronym
Inlinguistics, asemantic fieldis a related set of words groupedsemantically(bymeaning) that refers to a specific subject.[1][2]The term is also used inanthropology,[3]computational semiotics,[4]and technicalexegesis.[5] Brinton (2000: p. 112) defines "semantic field" or "semantic domain" and relates the linguistic concept tohyponymy: Related to the concept of hyponymy, but more loosely defined, is the notion of a semantic field or domain. A semantic field denotes a segment of reality symbolized by a set of related words. The words in a semantic field share a commonsemantic property.[6] A general and intuitive description is that words in a semantic field are not necessarilysynonymous, but are all used to talk about the same general phenomenon.[7]Synonymy requires the sharing of asememeorseme, but the semantic field is a larger area surrounding those. A meaning of a word is dependent partly on its relation to other words in the same conceptual area.[8]The kinds of semantic fields vary from culture to culture and anthropologists use them to study belief systems and reasoning across cultural groups.[7] Andersen (1990: p.327) identifies the traditional usage of "semantic field" theory as: Traditionally, semantic fields have been used for comparing the lexical structure of different languages and different states of the same language.[9] The origin of the field theory of semantics is thelexical field theoryintroduced byJost Trierin the 1930s,[10]: 31although according toJohn Lyonsit has historical roots in the ideas ofWilhelm von HumboldtandJohann Gottfried Herder.[1]In the 1960sStephen Ullmannsaw semantic fields as crystallising and perpetuating the values of society.[10]: 32For John Lyons in the 1970s words related in any sense belonged to the same semantic field,[10]: 32and the semantic field was simply alexical category, which he described as alexical field.[10]: 31Lyons emphasised the distinction between semantic fields andsemantic networks.[10]: 31In the 1980sEva Kittaydeveloped a semantic field theory ofmetaphor. This approach is based on the idea that the items in a semantic field have specific relations to other items in the same field, and that a metaphor works by re-ordering the relations of a field by mapping them on to the existing relations of another field.[11]Sue AtkinsandCharles J. Fillmorein the 1990s proposedframe semanticsas an alternative to semantic field theory.[12] The semantic field of a given word shifts over time. TheEnglishword "man" used to mean "human being" exclusively, while today it predominantly means "adult male," but its semantic field still extends in some uses to the generic "human" (seeMannaz). Overlapping semantic fields are problematic, especially intranslation. Words that have multiple meanings (calledpolysemouswords) are often untranslatable, especially with all their connotations. Such words are frequentlyloanedinstead of translated. Examples include "chivalry" (literally "horsemanship", related to "cavalry"), "dharma" (literally, "support"), and "taboo". Semantic field theory has informed the discourse of Anthropology as Ingold (1996: p. 127) relates: Semiology is not, of course, the same as semantics. Semiology is based on the idea that signs have meaning in relation to each other, such that a whole society is made up of relationally held meanings. But semantic fields do not stand in relations of opposition to each other, nor do they derive their distinctiveness in this way, nor indeed are they securely bounded at all. 
Rather, semantic fields are constantly flowing into each other. I may define a field of religion, but it soon becomes that of ethnic identity and then of politics and selfhood, and so on. In the very act of specifying semantic fields, people engage in an act of closure whereby they become conscious of what they have excluded and what they must therefore include.[3]
https://en.wikipedia.org/wiki/Semantic_field
An irrelevant conclusion,[1] also known as ignoratio elenchi (Latin for 'ignoring refutation') or missing the point, is the informal fallacy of presenting an argument whose conclusion fails to address the issue in question. It falls into the broad class of relevance fallacies.[2] The irrelevant conclusion should not be confused with formal fallacy, an argument whose conclusion does not follow from its premises; an irrelevant conclusion may be formally consistent and yet not relevant to the subject being talked about. Ignoratio elenchi is one of the fallacies identified by Aristotle in his Organon. In a broader sense he asserted that all fallacies are a form of ignoratio elenchi.[3][4]

Ignoratio Elenchi, according to Aristotle, is a fallacy that arises from "ignorance of the nature of refutation". To refute an assertion, Aristotle says we must prove its contradictory; the proof, consequently, of a proposition which stood in any other relation than that to the original, would be an ignoratio elenchi. Since Aristotle, the scope of the fallacy has been extended to include all cases of proving the wrong point ... "I am required to prove a certain conclusion; I prove, not that, but one which is likely to be mistaken for it; in that lies the fallacy ... For instance, instead of proving that 'this person has committed an atrocious fraud', you prove that 'this fraud he is accused of is atrocious'"; ... The nature of the fallacy, then, consists in substituting for a certain issue another which is more or less closely related to it and arguing the substituted issue. The fallacy does not take into account whether the arguments do or do not really support the substituted issue, it only calls attention to the fact that they do not constitute proof of the original one… It is a particularly prevalent and subtle fallacy and it assumes a great variety of forms. But whenever it occurs and whatever form it takes, it is brought about by an assumption that leads the person guilty of it to substitute for a definite subject of inquiry another which is in close relation with it.[5]

●Example 1: A and B are debating as to whether criticizing indirectly has any merit in general. A attempts to support their position with an argument that politics ought not to be criticized on social media because the message is not directly being heard by the head of state; this would make them guilty of ignoratio elenchi, as people such as B may be criticizing politics because they have a strong message for their peers, or because they wish to bring attention to political matters, rather than ever intending that their views would be directly read by the president.

●Example 2: A and B are debating about the law. B misses the point: the question is not whether B's neighbour believes that the law should allow something, but whether the law does in fact allow it.

Samuel Johnson's unique "refutation" of Bishop Berkeley's immaterialism, his claim that matter did not actually exist but only seemed to exist,[6] has been described as ignoratio elenchi:[7] during a conversation with Boswell, Johnson powerfully kicked a nearby stone and proclaimed of Berkeley's theory, "I refute it thus!"[8] (See also argumentum ad lapidem.)

A related concept is that of the red herring, which is a deliberate attempt to divert a process of enquiry by changing the subject.[2] Ignoratio elenchi is sometimes confused with the straw man argument.[2] The phrase ignoratio elenchi is from Latin 'an ignoring of a refutation'.
Hereelenchiis thegenitivesingular of the Latin nounelenchus, which is fromAncient Greekἔλεγχος(elenchos)'an argument of disproof or refutation'.[9]The translation in English of the Latin expression has varied somewhat.Hamblinproposed "misconception of refutation" or "ignorance of refutation" as a literal translation,[10]John Arthur Oesterle preferred "ignoring the issue", and[10]Irving Copi,Christopher Tindaleand others used "irrelevant conclusion".[10][11]
https://en.wikipedia.org/wiki/Irrelevant_conclusion
GNU Bison, commonly known as Bison, is a parser generator that is part of the GNU Project. Bison reads a specification in Bison syntax (described as "machine-readable BNF"[3]), warns about any parsing ambiguities, and generates a parser that reads sequences of tokens and decides whether the sequence conforms to the syntax specified by the grammar. The generated parsers are portable: they do not require any specific compilers. Bison by default generates LALR(1) parsers, but it can also generate canonical LR, IELR(1) and GLR parsers.[4]

In POSIX mode, Bison is compatible with Yacc, but it also has several extensions over this earlier program. Flex, an automatic lexical analyser, is often used with Bison to tokenise input data and provide Bison with tokens.[5]

Bison was originally written by Robert Corbett in 1985.[1] Later, in 1989, Robert Corbett released another parser generator named Berkeley Yacc. Bison was made Yacc-compatible by Richard Stallman.[6] Bison is free software and is available under the GNU General Public License, with an exception (discussed below) allowing its generated code to be used without triggering the copyleft requirements of the licence.

One delicate issue with LR parser generators is the resolution of conflicts (shift/reduce and reduce/reduce conflicts). With many LR parser generators, resolving conflicts requires the analysis of the parser automaton, which demands some expertise from the user. To aid the user in understanding conflicts more intuitively, Bison can instead automatically generate counterexamples. For ambiguous grammars, Bison can often even produce counterexamples that show the grammar is ambiguous; for instance, on a grammar suffering from the infamous dangling else problem, Bison reports a counterexample.

Reentrancy is a feature which has been added to Bison and does not exist in Yacc. Normally, Bison generates a parser which is not reentrant. In order to achieve reentrancy the declaration %define api.pure must be used. More details on Bison reentrancy can be found in the Bison manual.[7]

Bison can generate code for C, C++, D and Java.[8] For using the Bison-generated parser from other languages, a language binding tool such as SWIG can be used.

Because Bison generates source code that in turn gets added to the source code of other software projects, it raises some simple but interesting copyright questions. The code generated by Bison includes significant amounts of code from the Bison project itself. The Bison package is distributed under the terms of the GNU General Public License (GPL), but an exception has been added so that the GPL does not apply to output.[9][10] Earlier releases of Bison stipulated that parts of its output were also licensed under the GPL, due to the inclusion of the yyparse() function from the original source code in the output.

Free software projects that use Bison may have a choice of whether to distribute the source code which their project feeds into Bison, or the resulting C code that Bison outputs. Both are sufficient for a recipient to be able to compile the project source code. However, distributing only the input carries the minor inconvenience that the recipients must have a compatible copy of Bison installed so that they can generate the necessary C code when compiling the project. Distributing only the generated C code creates the problem of making it very difficult for the recipients to modify the parser, since this code was written neither by a human nor for humans – its purpose is to be fed directly into a C compiler.
These problems can be avoided by distributing both the input files and the generated code. Most people will compile using the generated code, no differently from any other software package, but anyone who wants to modify the parser component can modify the input files first and regenerate the generated files before compiling. Projects distributing both usually do not keep the generated files in their version control systems; the files are only generated when making a release. Some licenses, such as the GPL, require that the source code be in "the preferred form of the work for making modifications to it". GPL'd projects using Bison must thus distribute the files which are the input for Bison; of course, they can also include the generated files. Because Bison was written as a replacement for Yacc, and is largely compatible, the code from many projects using Bison could equally be fed into Yacc. This makes it difficult to determine whether a project "uses" Bison-specific source code or not. In many cases, the "use" of Bison could be trivially replaced by the equivalent use of Yacc or one of its other derivatives. Bison does have features not found in Yacc, however, so some projects can truly be said to "use" Bison, since Yacc would not suffice. Many projects are known to "use" Bison in the looser sense: they use free software development tools and distribute code which is intended to be fed into Bison or a Bison-compatible package. A standard example shows how to use Bison and flex to write a simple calculator program (only addition and multiplication) and a program for creating an abstract syntax tree; two further files provide the definition and implementation of the syntax tree functions. The tokens needed by the Bison parser are generated using flex. The names of the tokens are typically neutral: "TOKEN_PLUS" and "TOKEN_STAR", not "TOKEN_ADD" and "TOKEN_MULTIPLY". For instance, if we were to support the unary "+" (as in "+1"), it would be wrong to name this "+" "TOKEN_ADD". Likewise, in a language such as C, "int *ptr" denotes the definition of a pointer, not a product, so it would be wrong to name this "*" "TOKEN_MULTIPLY". Since the tokens are provided by flex, we must provide the means to communicate between the parser and the lexer.[23] The data type used for communication, YYSTYPE, is set using Bison's %union declaration. Since this sample uses the reentrant version of both flex and Bison, we are forced to provide parameters for the yylex function when it is called from yyparse.[23] This is done through Bison's %lex-param and %parse-param declarations.[24] The example is completed by the code needed to obtain the syntax tree using the parser generated by Bison and the scanner generated by flex, and by a simple makefile to build the project.
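The example files themselves are not reproduced in this extract, so the grammar below is only a much-reduced sketch of what such a Bison input file can look like, not the example described above: it evaluates expressions directly instead of building an abstract syntax tree, it is not reentrant, and it uses a small hand-written yylex() in place of a flex-generated scanner so that the sketch stays self-contained. The token names TOKEN_PLUS and TOKEN_STAR come from the description above; TOKEN_NUMBER, the rule names, and everything else are assumptions made for this sketch.

```c
/* calc.y -- a hypothetical, much-reduced Bison grammar for sums and products.
 * Build (assuming Bison and a C compiler are installed):
 *   bison -d calc.y && cc -o calc calc.tab.c
 */
%{
#include <stdio.h>
int yylex(void);
void yyerror(const char *msg);
%}

/* YYSTYPE: the semantic-value type shared by the lexer and the parser. */
%union {
    int value;
}

%token <value> TOKEN_NUMBER      /* an integer literal (name assumed here)     */
%token TOKEN_PLUS TOKEN_STAR     /* "+" and "*", named neutrally as in the text */
%type  <value> expr term

%%

input:
      expr                         { printf("= %d\n", $1); }
    ;

/* "+" binds less tightly than "*" because expr is built from terms. */
expr:
      expr TOKEN_PLUS term         { $$ = $1 + $3; }
    | term
    ;

term:
      term TOKEN_STAR TOKEN_NUMBER { $$ = $1 * $3; }
    | TOKEN_NUMBER
    ;

%%

void yyerror(const char *msg) {
    fprintf(stderr, "parse error: %s\n", msg);
}

/* Hand-written stand-in for the flex-generated scanner: reads integers,
 * '+' and '*' from standard input; a newline or end of file ends the parse. */
int yylex(void) {
    int c = getchar();
    while (c == ' ' || c == '\t')
        c = getchar();
    if (c == EOF || c == '\n')
        return 0;                  /* 0 tells the parser the input is finished */
    if (c >= '0' && c <= '9') {
        int n = 0;
        while (c >= '0' && c <= '9') {
            n = n * 10 + (c - '0');
            c = getchar();
        }
        ungetc(c, stdin);          /* the non-digit belongs to the next token */
        yylval.value = n;
        return TOKEN_NUMBER;
    }
    if (c == '+') return TOKEN_PLUS;
    if (c == '*') return TOKEN_STAR;
    return c;                      /* anything else provokes a syntax error */
}

int main(void) {
    return yyparse();
}
```

Running bison on such a file produces a C parser (calc.tab.c) that any C compiler can build; the fuller example described above would instead declare %define api.pure and pass extra arguments through %lex-param and %parse-param so that the flex-generated scanner and the parser can communicate without global state.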
https://en.wikipedia.org/wiki/GNU_Bison
An algorithm is fundamentally a set of rules or defined procedures that is typically designed and used to solve a specific problem or a broad set of problems. Broadly, algorithms define processes, sets of rules, or methodologies to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations. With the increasing automation of services, more and more decisions are being made by algorithms. Some general examples are risk assessments, anticipatory policing, and pattern recognition technology.[1] The remainder of the article is a list of well-known algorithms along with one-line descriptions for each.
https://en.wikipedia.org/wiki/List_of_algorithms#Parsing
Adependent-marking languagehas grammatical markers ofagreementandcase governmentbetween the words ofphrasesthat tend to appear more ondependentsthan onheads. The distinction betweenhead-markingand dependent-marking was first explored byJohanna Nicholsin 1986,[1]and has since become a central criterion in language typology in which languages are classified according to whether they are more head-marking or dependent-marking. Many languages employ both head and dependent-marking, but some employdouble-marking, and yet others employzero-marking. However, it is not clear that the head of a clause has anything to do with the head of a noun phrase, or even what the head of a clause is. Englishhas few inflectional markers of agreement and so can be construed as zero-marking much of the time. Dependent-marking, however, occurs when a singular or plural noun demands the singular or plural form of the demonstrative determinerthis/theseorthat/thoseand when a verb or preposition demands the subject or object form of a personal pronoun:I/me,he/him,she/her,they/them,who/whom. The following representations ofdependency grammarillustrate some cases:[2] Plural nouns in English require the plural form of a dependent demonstrative determiner, and prepositions require the object form of a dependent personal pronoun. Such instances of dependent-marking are a relatively rare occurrence in English, but dependent-marking occurs much more frequently in related languages, such asGerman. There, for instance, dependent-marking is present in most noun phrases. A noun marks its dependent determiner: The noun marks the dependent determiner in gender (masculine, feminine, or neuter) and number (singular or plural). In other words, the gender and number of the noun determine the form of the determiner that must appear. Nouns in German also mark their dependent adjectives in gender and number, but the markings vary across determiners and adjectives. Also, a head noun in German can mark a dependent noun with the genitive case.
https://en.wikipedia.org/wiki/Dependent-marking_language
A double-marking language is one in which the grammatical marks showing relations between different constituents of a phrase tend to be placed both on the heads (or nuclei) of the phrase in question and on the modifiers or dependents. Pervasive double-marking is rather rare, but instances of double-marking occur in many languages. For example, in Turkish, in a genitive construction involving two definite nouns, both the possessor and the possessed are marked: the possessor appears in the genitive case, and the possessed noun carries a possessive suffix agreeing with the possessor (corresponding to a possessive adjective in English). For example, 'brother' is kardeş and 'dog' is köpek, but 'brother's dog' is kardeşin köpeği. (The consonant change is part of a regular consonant lenition.) Another example is a language in which endings that mark gender or case are used to indicate the role of both nouns and their associated modifiers (such as adjectives) in a sentence (such as Russian and Spanish), or in which case endings are supplemented by verb endings marking the subject, direct object and/or indirect object of a sentence. Proto-Indo-European had double-marking in both verb phrases (verbs were marked for person and number, nominals for case) and noun-adjective phrases (both marked with the same case-and-number endings) but not in possessive phrases (only the dependent was marked).
https://en.wikipedia.org/wiki/Double-marking_language
In grammar and theoretical linguistics, government or rection refers to the relationship between a word and its dependents. One can distinguish at least three concepts of government: the traditional notion of case government, the highly specialized definition of government in some generative models of syntax, and a much broader notion in dependency grammars. In traditional Latin and Greek (and other) grammars, government is the control by verbs and prepositions of the selection of grammatical features of other words. Most commonly, a verb or preposition is said to "govern" a specific grammatical case if its complement must take that case in a grammatically correct structure (see: case government).[1] For example, in Latin, most transitive verbs require their direct object to appear in the accusative case, while the dative case is reserved for indirect objects. Thus, the phrase I see you would be rendered as Te video in Latin, using the accusative form te for the second-person pronoun, and I give a present to you would be rendered as Tibi donum do, using both an accusative (donum) for the direct object and a dative (tibi, the dative of the second-person pronoun) for the indirect object; the phrase I help you, however, would be rendered as Tibi faveo, using only the dative form tibi. The verb favere (to help), like many others, is an exception to this default government pattern: its one and only object must be in the dative. Although no direct object in the accusative is controlled by such a verb, this object is traditionally considered to be an indirect one, mainly because passivization is unavailable except perhaps in an impersonal manner and for certain verbs of this type. A semantic alternation may also be achieved when different case constructions are available with a verb: Id credo (id is an accusative) means I believe this, I have this opinion, whereas Ei credo (ei is a dative) means I trust this, I confide in this. Prepositions (and postpositions and circumpositions, i.e. adpositions) are like verbs in their ability to govern the case of their complement, and like many verbs, many adpositions can govern more than one case, with distinct interpretations. For example, in Italy would be in Italia, Italia being an ablative case form, but towards Italy would be in Italiam, Italiam being an accusative case form. The abstract syntactic relation of government in government and binding theory, a phrase structure grammar, is an extension of the traditional notion of case government.[2] Verbs govern their objects, and more generally, heads govern their dependents. In that framework, A governs B if and only if a specific set of structural conditions holds;[3] this definition is explained in more detail in the government section of the article on government and binding theory. One sometimes encounters definitions of government that are much broader than the one just produced. Government is understood as the property that regulates which words can or must appear with the referenced word.[4] This broader understanding of government is part of many dependency grammars. The notion is that many individual words in a given sentence can appear only by virtue of the fact that some other word appears in that sentence. According to this definition, government occurs between any two words connected by a dependency, the dominant word opening slots for subordinate words. The dominant word is the governor, and the subordinates are its governees. Consider a dependency tree for a sentence beginning Fred has ordered ...: the word has governs Fred and ordered; in other words, has is governor over its governees Fred and ordered.
Similarly, ordered governs dish and for; that is, ordered is governor over its governees dish and for, and so on. This understanding of government is widespread among dependency grammars.[5] The distinction between the terms governor and head is a source of confusion, given the definitions of government produced above. Indeed, governor and head are overlapping concepts. The governor and the head of a given word will often be one and the same other word. The understanding of these concepts becomes difficult, however, when discontinuities are involved. The following example of a w-fronting discontinuity from German illustrates the difficulty:

Wem denkst du haben sie geholfen?
who-DAT think you have they helped
'Who do you think they helped?'

Two of the criteria mentioned above for identifying governors (and governees) are applicable to the interrogative pronoun wem 'whom'. This pronoun receives dative case from the verb geholfen 'helped' (= case government), and it can appear by virtue of the fact that geholfen appears (= licensing). Given these observations, one can make a strong argument that geholfen is the governor of wem, even though the two words are separated from each other by the rest of the sentence. In such constellations, one sometimes distinguishes between head and governor.[6] So while the governor of wem is geholfen, the head of wem is taken to be the finite verb denkst 'think'. In other words, when a discontinuity occurs, one assumes that the governor and the head (of the relevant word) are distinct; otherwise they are the same word. Exactly how the terms head and governor are used can depend on the particular theory of syntax that is employed.
https://en.wikipedia.org/wiki/Government_(linguistics)
Government and binding (GB, GBT) is a theory of syntax and a phrase structure grammar in the tradition of transformational grammar developed principally by Noam Chomsky in the 1980s.[1][2][3] This theory is a radical revision of his earlier theories[4][5][6] and was later revised in The Minimalist Program (1995)[7] and several subsequent papers, the latest being Three Factors in Language Design (2005).[8] Although there is a large literature on government and binding theory which is not written by Chomsky, Chomsky's papers have been foundational in setting the research agenda. The name refers to two central subtheories of the theory: government, which is an abstract syntactic relation applicable, among other things, to the assignment of case; and binding, which deals chiefly with the relationships between pronouns and the expressions with which they are co-referential. GB was the first theory to be based on the principles and parameters model of language, which also underlies the later developments of the minimalist program. The main application of the government relation concerns the assignment of case. Government is defined roughly as follows: A governs B if and only if A is a governor, A m-commands B, and no barrier intervenes between A and B. Governors are heads of the lexical categories (V, N, A, P) and tensed I (T). A m-commands B if A does not dominate B and B does not dominate A and the first maximal projection of A dominates B, where the maximal projection of a head X is XP. In a typical configuration, then, a head A m-commands a phrase B contained in its maximal projection, while B does not m-command A. In addition, a barrier is defined as a node that blocks government under certain structural conditions.[9] The government relation makes case assignment unambiguous: DPs are governed and assigned case by their governing heads. Another important application of the government relation constrains the occurrence and identity of traces, as the Empty Category Principle requires them to be properly governed. Binding can be defined as follows: A binds B if and only if A c-commands B and A and B are coreferential. Consider the sentence "Johnᵢ saw hisᵢ mother", which can be diagrammed using simple phrase structure trees. The NP "John" c-commands "his" because the first parent of the NP, S, contains "his". "John" and "his" are also coreferential (they refer to the same person); therefore "John" binds "his". On the other hand, in the ungrammatical sentence "*The mother of Johnᵢ likes himselfᵢ", "John" does not c-command "himself", so they have no binding relationship despite the fact that they corefer. The importance of binding is shown in the grammaticality or ungrammaticality of sentences in which these relations hold or fail to hold. Binding is used, along with particular binding principles, to explain such ungrammaticality. The applicable rules are called Binding Principle A, Binding Principle B, and Binding Principle C. Principle A is violated when a reflexive such as "himself" is not c-commanded by its antecedent, as in the example above; Principle B is violated when a pronoun such as "him" is bound by an antecedent such as "John"; and Principle C is violated when a referring expression such as "John" is itself bound, for instance by an earlier instance of "John". Note that Principles A and B refer to "governing categories", domains which limit the scope of binding. The definition of a governing category laid out in Lectures on Government and Binding[1] is complex, but in most cases the governing category is essentially the minimal clause or complex NP.
https://en.wikipedia.org/wiki/Government_and_binding_theory
Inlinguistics, theheadornucleusof aphraseis the word that determines thesyntacticcategory of that phrase. For example, the head of thenoun phrase"boiling hot water" is thenoun(head noun) "water". Analogously, the head of acompoundis thestemthat determines the semantic category of that compound. For example, the head of the compound noun "handbag" is "bag", since a handbag is a bag, not a hand. The other elements of the phrase or compoundmodifythe head, and are therefore the head'sdependents.[1]Headed phrases and compounds are calledendocentric, whereasexocentric("headless") phrases and compounds (if they exist) lack a clear head. Heads are crucial to establishing the direction ofbranching. Head-initial phrases are right-branching, head-final phrases are left-branching, and head-medial phrases combine left- and right-branching. Examine the following expressions: The worddogis theheadofbig red dogsince it determines that the phrase is anoun phrase, not anadjective phrase. Because the adjectivesbigandredmodify this head noun, they are itsdependents.[2]Similarly, in the compound nounbirdsong,the stemsongis the head since it determines the basic meaning of the compound. The stembirdmodifies this meaning and is therefore dependent onsong.Birdsongis a kind of song, not a kind of bird. Conversely, asongbirdis a type of bird since the stembirdis the head in this compound. The heads of phrases can often be identified by way ofconstituency tests. For instance, substituting a single word in place of the phrasebig red dogrequires the substitute to be a noun (or pronoun), not an adjective. Many theories of syntax represent heads by means of tree structures. These trees tend to be organized in terms of one of two relations: either in terms of theconstituencyrelation ofphrase structure grammarsor thedependencyrelation ofdependency grammars. Both relations are illustrated with the following trees:[3] The constituency relation is shown on the left and the dependency relation on the right. The a-trees identify heads by way of category labels, whereas the b-trees use the words themselves as the labels.[4]The nounstories(N) is the head over the adjectivefunny(A). In the constituency trees on the left, the noun projects its category status up to the mother node, so that the entire phrase is identified as a noun phrase (NP). In the dependency trees on the right, the noun projects only a single node, whereby this node dominates the one node that the adjective projects, a situation that also identifies the entirety as an NP. The constituency trees are structurally the same as their dependency counterparts, the only difference being that a different convention is used for marking heads and dependents. The conventions illustrated with these trees are just a couple of the various tools that grammarians employ to identify heads and dependents. While other conventions abound, they are usually similar to the ones illustrated here. The four trees above show a head-final structure. The following trees illustrate head-final structures further as well as head-initial and head-medial structures. The constituency trees (= a-trees) appear on the left, and dependency trees (= b-trees) on the right. Henceforth the convention is employed where the words appear as the labels on the nodes. 
The next four trees are additional examples of head-final phrases: The following six trees illustrate head-initial phrases: And the following six trees are examples of head-medial phrases: The head-medial constituency trees here assume a more traditional n-ary branching analysis. Since some prominent phrase structure grammars (e.g. most work inGovernment and binding theoryand theMinimalist Program) take all branching to be binary, these head-medial a-trees may be controversial. Trees that are based on theX-bar schemaalso acknowledge head-initial, head-final, and head-medial phrases, although the depiction of heads is less direct. The standard X-bar schema for English is as follows: This structure is both head-initial and head-final, which makes it head-medial in a sense. It is head-initial insofar as the head X0precedes its complement, but it is head-final insofar as the projection X' of the head follows its specifier. Some language typologists classify languagesyntaxaccording to ahead directionality parameterinword order, that is, whether a phrase ishead-initial(= right-branching) orhead-final(= left-branching), assuming that it has a fixed word order at all. English is more head-initial than head-final, as illustrated with the following dependency tree of the first sentence ofFranz Kafka'sThe Metamorphosis: The tree shows the extent to which English is primarily a head-initial language. On the broadest level, the verb phrase "discovered that he had been changed into a monstrous verminous bug" begins with the verb headword "discovered". Structure is descending as speech and processing move (visually in writing) from left to right. Most dependencies have the head preceding its dependent(s), although there are also head-final dependencies in the tree. For instance, the determiner-noun and adjective-noun dependencies are head-final as well as the subject-verb dependencies. Most other dependencies in English are, however, head-initial as the tree shows. The mixed nature of head-initial and head-final structures is common across languages. In fact purely head-initial or purely head-final languages probably do not exist, although there are some languages that approach purity in this respect, for instance Japanese. The following tree is of the same sentence from Kafka's story. The glossing conventions are those established byLehmann. One can easily see the extent to which Japanese is head-final: A large majority of head-dependent orderings in Japanese are head-final. This fact is obvious in this tree, since structure is strongly ascending as speech and processing move from left to right. Thus the word order of Japanese is in a sense the opposite of English. It is also common to classify languagemorphologyaccording to whether a phrase ishead-markingordependent-marking. A given dependency is head-marking, if something about the dependent influences the form of the head, and a given dependency is dependent-marking, if something about the head influences the form of the dependent. For instance, in the Englishpossessive case, possessive marking ('s) appears on the dependent (the possessor), whereas inHungarianpossessive marking appears on the head noun:[5] In aprosodic unit, the head is the part that extends from the first stressed syllable up to (but not including) the tonic syllable. A high head is the stressed syllable that begins the head and is high in pitch, usually higher than the beginning pitch of the tone on the tonic syllable. For example: The↑bus was late. 
A low head is the syllable that begins the head and is low in pitch, usually lower than the beginning pitch of the tone on the tonic syllable. The ↓bus was late.
https://en.wikipedia.org/wiki/Head_(linguistics)
Head-driven phrase structure grammar(HPSG) is a highly lexicalized,constraint-based grammar[1][2]developed byCarl PollardandIvan Sag.[3][4]It is a type ofphrase structure grammar, as opposed to adependency grammar, and it is the immediate successor togeneralized phrase structure grammar. HPSG draws from other fields such ascomputer science(data type theoryandknowledge representation) and usesFerdinand de Saussure's notion of thesign. It uses a uniform formalism and is organized in a modular way which makes it attractive fornatural language processing. An HPSG includes principles and grammar rules andlexiconentries which are normally not considered to belong to a grammar. The formalism is based on lexicalism. This means that the lexicon is more than just a list of entries; it is in itself richly structured. Individual entries are marked with types. Types form a hierarchy. Early versions of the grammar were very lexicalized with few grammatical rules (schema). More recent research has tended to add more and richer rules, becoming more likeconstruction grammar.[5] The basic type HPSG deals with is the sign.Wordsandphrasesare two different subtypes of sign. A word has two features:[PHON](the sound, thephoneticform) and[SYNSEM](thesyntacticandsemanticinformation), both of which are split into subfeatures. Signs and rules are formalized astypedfeature structures. HPSG generates strings by combining signs, which are defined by their location within a type hierarchy and by their internal feature structure, represented byattribute value matrices(AVMs).[4][6]Features take types or lists of types as their values, and these values may in turn have their own feature structure. Grammatical rules are largely expressed through the constraints signs place on one another. A sign's feature structure describes its phonological, syntactic, and semantic properties. In common notation, AVMs are written with features in upper case and types in italicized lower case. Numbered indices in an AVM represent token identical values. In the simplified AVM for the word (in this case the verb, not the noun as in "nice walks for the weekend") "walks" below, the verb's categorical information (CAT) is divided into features that describe it (HEAD) and features that describe its arguments (VALENCE). "Walks" is a sign of typewordwith a head of typeverb. As an intransitive verb, "walks" has no complement but requires a subject that is a third person singular noun. The semantic value of the subject (CONTENT) is co-indexed with the verb's only argument (the individual doing the walking). The following AVM for "she" represents a sign with a SYNSEM value that could fulfill those requirements. Signs of typephraseunify with one or more children and propagate information upward. The following AVM encodes theimmediate dominance rulefor ahead-subj-phrase, which requires two children: the head child (a verb) and a non-head child that fulfills the verb's SUBJ constraints. The end result is a sign with a verb head, empty subcategorization features, and a phonological value that orders the two children. Although the actual grammar of HPSG is composed entirely of feature structures, linguists often use trees to represent the unification of signs where the equivalent AVM would be unwieldy. Variousparsersbased on the HPSG formalism have been written and optimizations are currently being investigated. 
An example of a system analyzingGermansentencesis provided by theFreie Universität Berlin.[7]In addition the CoreGram[8]project of the Grammar Group of theFreie Universität Berlinprovides open source grammars that were implemented in the TRALE system. Currently there are grammars forGerman,[9]Danish,[10]Mandarin Chinese,[11]Maltese,[12]andPersian[13]that share a common core and are publicly available. Large HPSG grammars of various languages are being developed in the Deep Linguistic Processing with HPSG Initiative (DELPH-IN).[14]Wide-coverage grammars of English,[15]German,[16]andJapanese[17]are available under an open-source license. These grammars can be used with a variety of inter-compatible open-source HPSG parsers:LKB, PET,[18]Ace,[19]andagree.[20]All of these produce semantic representations in the format of “Minimal Recursion Semantics,” MRS.[21]The declarative nature of the HPSG formalism means that these computational grammars can typically be used for bothparsingand generation (producing surface strings from semantic inputs). Treebanks, also distributed byDELPH-IN, are used to develop and test the grammars, as well as to train ranking models to decide on plausible interpretations when parsing (or realizations when generating). Enjuis a freely available wide-coverage probabilistic HPSG parser for English developed by the Tsujii Laboratory atThe University of TokyoinJapan.[22]
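The attribute-value matrices referred to above do not survive in this extract, so the following toy program is offered only as a loose illustration of the unification idea behind them: a head's valence requirement is checked against a dependent's own feature values, and the combination fails on a clash. It is a deliberately minimal sketch in plain C, not HPSG's actual formalism and not the API of any of the parsers named above (LKB, PET, Ace, agree, Enju); the PERSON and NUMBER features and the "she"/"walks" pairing merely echo the discussion above, and real HPSG feature structures are typed, recursive, and re-entrant in ways this toy ignores.

```c
/* Toy unification of flat agreement features, in the spirit of (but far
 * simpler than) HPSG attribute-value matrices. */
#include <stdio.h>

enum { NUM_UNSPEC = 0, SG, PL };        /* NUMBER: unspecified, singular, plural */
enum { PER_UNSPEC = 0, P1, P2, P3 };    /* PERSON: unspecified, 1st, 2nd, 3rd    */

typedef struct {
    int person;    /* PER_UNSPEC means "no constraint" */
    int number;    /* NUM_UNSPEC means "no constraint" */
} Agr;

/* Unify one atomic feature: equal values unify, and an unspecified value
 * unifies with anything; otherwise the values clash. */
static int unify_feature(int a, int b, int *out) {
    if (a == 0) { *out = b; return 1; }
    if (b == 0 || a == b) { *out = a; return 1; }
    return 0;
}

/* Unify two agreement structures feature by feature. */
static int unify(const Agr *a, const Agr *b, Agr *out) {
    return unify_feature(a->person, b->person, &out->person)
        && unify_feature(a->number, b->number, &out->number);
}

int main(void) {
    Agr she   = { P3, SG };   /* "she": third person singular          */
    Agr they  = { P3, PL };   /* "they": third person plural           */
    Agr walks = { P3, SG };   /* "walks" constrains its subject to 3sg */
    Agr result;

    printf("she + walks:  %s\n", unify(&she,  &walks, &result) ? "unifies" : "clashes");
    printf("they + walks: %s\n", unify(&they, &walks, &result) ? "unifies" : "clashes");
    return 0;
}
```

In an HPSG grammar the analogous effect falls out of unifying the verb's SUBJ (valence) value with the subject sign's own SYNSEM value inside a head-subj-phrase; what the sketch keeps is only the core idea that combination succeeds when the constraints the signs place on one another can be satisfied.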
https://en.wikipedia.org/wiki/Head-driven_phrase_structure_grammar
A language ishead-markingif thegrammaticalmarks showingagreementbetween different words of aphrasetend to be placed on theheads(or nuclei) of phrases, rather than on themodifiersordependents. Many languages employ both head-marking anddependent-marking, and some languages double up and are thusdouble-marking. The concept of head/dependent-marking was proposed byJohanna Nicholsin 1986 and has come to be widely used as a basic category inlinguistic typology.[1] The concepts of head-marking and dependent-marking are commonly applied to languages that have richer inflectional morphology thanEnglish. There are, however, a few types of agreement in English that can be used to illustrate those notions. The following graphic representations of aclause, anoun phrase, and aprepositional phraseinvolve agreement. The three tree structures shown are those of adependency grammar, as opposed to those of aphrase structure grammar:[2] Heads and dependents are identified by the actual hierarchy of words, and the concepts of head-marking and dependent-marking are indicated with the arrows. Subject-verb agreement, shown in the tree on the left, is a case of head-marking because the singular subjectJohnrequires the inflectional suffix-sto appear on the finite verbcheats, the head of the clause. The determiner-noun agreement, shown in the tree in the middle, is a case of dependent-marking because the plural nounhousesrequires the dependent determiner to appear in its plural form,these, not in its singular form,this. The preposition-pronoun agreement ofcase government, shown in the tree on the right, is also an instance of dependent-marking because the head prepositionwithrequires the dependent pronoun to appear in its object form,him, not in its subject form,he. The distinction between head-marking and dependent-marking shows the most in noun phrases and verb phrases, which have significant variation among and within languages.[3] Languages may be head-marking in verb phrases and dependent-marking in noun phrases, such as mostBantu languages, or vice versa, and it has been argued that the subject rather than the verb is the head of a clause so "head-marking" is not necessarily a coherent typology. Still, languages that are head-marking in both noun and verb phrases are common enough to make the term useful for typological description. Head-marked possessive noun phrases are common in the Americas, Melanesia,Afro-Asiatic languages(status constructus) andTurkic languagesand infrequent elsewhere. Dependent-marked noun phrases have a complementary distribution and are frequent inAfrica,Eurasia,Australia, andNew Guinea, the only area in which both types overlap appreciably. Double-marked possession is rare but found in languages around the Eurasian periphery such asFinnish, in theHimalayas, and along thePacific CoastofNorth America.Zero-markedpossession is also uncommon, with instances mostly found near theequator, but it does not form any true clusters.[4] The head-markedclauseis common in theAmericas, Australia, New Guinea, and the Bantu languages but is very rare elsewhere. The dependent-marked clause is common in Eurasia andNorthern Africa, sparse inSouth America, and rare in North America. In New Guinea, it clusters in the Eastern Highlands and in Australia in the south, east, and interior with the very oldPama-Nyunganfamily. 
Double-marking is moderately well attested in the Americas, Australia, and New Guinea, and the southern fringe of Eurasia (chiefly in theCaucasian languagesand Himalayan mountain enclaves), and it is particularly favored in Australia and the westernmost Americas. The zero-marked object is unsurprisingly common inSoutheast AsiaandWestern Africa, two centers ofmorphologicalsimplicity, but it is also very common in New Guinea and moderately common inEastern AfricaandCentral Americaand South America, among languages of average or higher morphological complexity.[5][6] ThePacific Rimdistribution of head-marking may reflectpopulation movements beginning tens of thousands of years agoandfounder effects.Kusundahas traces in the Himalayas, and there are Caucasian enclaves, both of which are perhaps remnants oftypologypreceding the spreads ofinterior Eurasian language families. The dependent-marking type is found everywhere but rare in the Americas, possibly another result of founder effects. In the Americas, all four types are found along the Pacific Coast, but in the East, only head-marking is common. Whether the diversity of types along the Pacific Coast reflects a great age or an overlay of more recent Eurasian colonizations on an earlier American stratum remains to be seen.[7]
https://en.wikipedia.org/wiki/Head-marking_language
Minimalist grammars are a class of formal grammars that aim to provide a more rigorous, usually proof-theoretic, formalization of the Chomskyan Minimalist program than is normally provided in the mainstream Minimalist literature. A variety of particular formalizations exist, most of them developed by Edward Stabler, Alain Lecomte, Christian Retoré, or combinations thereof. Lecomte and Retoré (2001)[1] introduce a formalism that modifies the core of the Lambek Calculus to allow movement-like processes to be described without resort to the combinatorics of Combinatory categorial grammar. The formalism is presented in proof-theoretic terms. Differing only slightly in notation from Lecomte and Retoré (2001), we can define a minimalist grammar as a 3-tuple G = (C, F, L), where C is a set of "categorial" features, F is a set of "functional" features (which come in two flavors, "weak", denoted simply f, and "strong", denoted f*), and L is a set of lexical atoms, denoted as pairs w : t, where w is some phonological/orthographic content and t is a syntactic type defined recursively as follows: We can now define 6 inference rules: The first rule merely makes it possible to use lexical items with no extra assumptions. The second rule is just a means of introducing assumptions into the derivation. The third and fourth rules just perform directional feature checking, combining the assumptions required to build the subparts that are being combined. The entropy rule presumably allows the ordered sequents to be broken up into unordered sequents. And finally, the last rule implements "movement" by means of assumption elimination. The last rule can be given a number of different interpretations in order to fully mimic movement of the normal sort found in the Minimalist Program. The account given by Lecomte and Retoré (2001) is that if one of the product types is a strong functional feature, then the phonological/orthographic content associated with that type on the right is substituted with the content of the a, and the other is substituted with the empty string; whereas if neither is strong, then the phonological/orthographic content is substituted for the category feature, and the empty string is substituted for the weak functional feature. That is, we can rephrase the rule as two sub-rules as follows: Another alternative would be to construct pairs in the /E and \E steps, and use the ∘E rule as given, substituting the phonological/orthographic content a into the highest of the substitution positions, and the empty string in the rest of the positions. This would be more in line with the Minimalist Program, given that multiple movements of an item are possible, where only the highest position is "spelled out". As a simple example of this system, we can show how to generate the sentence who did John see with the following toy grammar: Let G = ({N, S}, {W}, L), where L contains the following words: The proof for the sentence who did John see is therefore:
https://en.wikipedia.org/wiki/Minimalist_grammar
Inlinguistics,transformational grammar(TG) ortransformational-generative grammar(TGG) was the earliestmodelofgrammarproposed within the research tradition ofgenerative grammar.[1]Like current generative theories, it treated grammar as a system offormal rulesthat generate all and onlygrammaticalsentences of a given language. What was distinctive about transformational grammar was that it positedtransformation rulesthat mapped a sentence'sdeep structureto its pronounced form. For example, in many variants of transformational grammar, theEnglishactivevoice sentence "Emma saw Daisy" and itspassivecounterpart "Daisy was seen by Emma" share a common deep structure generated byphrase structure rules, differing only in that the latter's structure is modified by a passivization transformation rule. Transformational grammar was a species ofgenerative grammarand shared many of its goals and postulations, including the notion of linguistics as acognitive science, the need forformal explicitness, and thecompetence-performancedistinction.[2]Transformational grammar included two kinds of rules: phrase-structure rules and transformational rules. In transformational grammar, each sentence in a language has two levels of representation: a deep structure and a surface structure.[3]The deep structure represents a sentence's coresemantic relationsand is mapped onto the surface structure, which follows the sentence'sphonological systemvery closely, viatransformations. Deep structures are generated byphrase structure grammarsusingrewrite rules. Transformations are rules that map a deep structure to a surface structure. For example, a typical transformation in TG issubject-auxiliary inversion(SAI). That rule takes as its input a declarative sentence with an auxiliary, such as "John has eaten all the heirloom tomatoes", and transforms it into "Has John eaten all the heirloom tomatoes?" In the original formulation (Chomsky 1957), those rules held over strings of terminals, constituent symbols or both. (NP = Noun Phrase and AUX = Auxiliary) In the 1970s, by the time of the Extended Standard Theory, following Joseph Emonds's work on structure preservation, transformations came to be viewed as holding over trees. By the end of government and binding theory, in the late 1980s, transformations were no longer structure-changing operations at all; instead, they added information to already existing trees by copying constituents. The earliest conceptions of transformations were that they were construction-specific devices. For example, there was a transformation that turned active sentences into passive ones. A different transformation raised embedded subjects into main clause subject position in sentences such as "John seems to have gone", and a third reordered arguments in the dative alternation. With the shift from rules to principles and constraints in the 1970s, those construction-specific transformations morphed into general rules (all the examples just mentioned are instances of NP movement), which eventually changed into the single general rulemove alphaor Move. Transformations actually come in two types: the post-deep structure kind mentioned above, which are string- or structure-changing, and generalized transformations (GTs). GTs were originally proposed in the earliest forms of generative grammar (such as in Chomsky 1957). They take small structures, either atomic or generated by other rules, and combine them. 
For example, the generalized transformation of embedding would take the kernel "Dave said X" and the kernel "Dan likes smoking" and combine them into "Dave said Dan likes smoking." GTs are thus structure-building rather than structure-changing. In the Extended Standard Theory andgovernment and binding theory, GTs were abandoned in favor of recursive phrase structure rules, but they are still present intree-adjoining grammaras the Substitution and Adjunction operations, and have recently reemerged in mainstream generative grammar in Minimalism, as the operations Merge and Move. In generativephonology, another form of transformation is thephonological rule, which describes a mapping between anunderlying representation(thephoneme) and the surface form that is articulated duringnatural speech.[4] An important feature of all transformational grammars is that they are more powerful thancontext-free grammars.[5]Chomsky formalized this idea in theChomsky hierarchy. He argued that it is impossible to describe the structure of natural languages with context-free grammars.[6]His general position on the context-dependency of natural language has held up, though his specific examples of the inadequacy of CFGs in terms of their weak generative capacity were disproved.[7][8] Using a term such as "transformation" may give the impression that theories of transformational generative grammar are intended as a model of the processes by which the human mind constructs and understands sentences, but Chomsky clearly stated that a generative grammar models only the knowledge that underlies the human ability to speak and understand, arguing that because most of that knowledge is innate, a baby can have a large body of knowledge about the structure of language in general and so need tolearnonly the idiosyncratic features of the language(s) to which it is exposed.[citation needed] Chomsky is not the first person to suggest that all languages have certain fundamental things in common. He quoted philosophers who posited the same basic idea several centuries ago. But Chomsky helped make the innateness theory respectable after a period dominated by more behaviorist attitudes towards language. He made concrete and technically sophisticated proposals about the structure of language as well as important proposals about how grammatical theories' success should be evaluated.[9] Chomsky argued that "grammatical" and "ungrammatical" can be meaningfully and usefully defined. In contrast, an extreme behaviorist linguist would argue that language can be studied only through recordings or transcriptions of actual speech and that the role of the linguist is to look for patterns in such observed speech, not to hypothesize about why such patterns might occur or to label particular utterances grammatical or ungrammatical. Few linguists in the 1950s actually took such an extreme position, but Chomsky was on the opposite extreme, defining grammaticality in an unusuallymentalisticway for the time.[10]He argued that the intuition of anative speakeris enough to define the grammaticality of a sentence; that is, if a particular string of English words elicits a double-take or a feeling of wrongness in a native English speaker, with various extraneous factors affecting intuitions controlled for, it can be said that the string of words is ungrammatical. That, according to Chomsky, is entirely distinct from the question of whether a sentence is meaningful or can be understood. 
It is possible for a sentence to be both grammatical and meaningless, as in Chomsky's famous example, "colorless green ideas sleep furiously".[11]But such sentences manifest a linguistic problem that is distinct from that posed by meaningful but ungrammatical (non)-sentences such as "man the bit sandwich the", the meaning of which is fairly clear, but which nonative speakerwould accept as well-formed. In the 1960s, Chomsky introduced two central ideas relevant to the construction and evaluation of grammatical theories. One was the distinction betweencompetenceandperformance. Chomsky noted that when people speak in the real world, they often make linguistic errors, such as starting a sentence and then abandoning it midway through. He argued that such errors in linguisticperformanceare irrelevant to the study of linguisticcompetence, the knowledge that allows people to construct and understand grammatical sentences. Consequently, the linguist can study an idealised version of language, which greatly simplifies linguistic analysis. The other idea related directly to evaluation of theories of grammar. Chomsky distinguished between grammars that achievedescriptive adequacyand those that go further and achieveexplanatory adequacy. A descriptively adequate grammar for a particular language defines the (infinite) set of grammatical sentences in that language; that is, it describes the language in its entirety. A grammar that achieves explanatory adequacy has the additional property that it gives insight into the mind's underlying linguistic structures. In other words, it does not merely describe the grammar of a language, but makes predictions about how linguistic knowledge is mentally represented. For Chomsky, such mental representations are largely innate and so if a grammatical theory has explanatory adequacy, it must be able to explain different languages' grammatical nuances as relatively minor variations in the universal pattern of human language. Chomsky argued that even though linguists were still a long way from constructing descriptively adequate grammars, progress in descriptive adequacy would come only if linguists held explanatory adequacy as their goal: real insight into individual languages' structure can be gained only by comparative study of a wide range of languages, on the assumption that they are all cut from the same cloth.[citation needed] Chomsky developed transformational grammar in the late 1950s, drawing on older work including that of thestructuralists.[12][2]Its central ideas are maintained to varying degrees in present-day approaches to syntax such asMinimalism, while others such asCombinatory categorial grammarare distinctly non-transformational.[13]
https://en.wikipedia.org/wiki/Transformational_grammar
In linguistics, word order (also known as linear order) is the order of the syntactic constituents of a language. Word order typology studies it from a cross-linguistic perspective, and examines how languages employ different orders. Correlations between orders found in different syntactic sub-domains are also of interest. The primary word orders that are of interest are the constituent order of a clause (the relative order of subject, object, and verb), the order of modifiers in a noun phrase, and the order of adverbials. Some languages use relatively fixed word order, often relying on the order of constituents to convey grammatical information. Other languages, often those that convey grammatical information through inflection, allow more flexible word order, which can be used to encode pragmatic information, such as topicalisation or focus. However, even languages with flexible word order have a preferred or basic word order,[1] with other word orders considered "marked".[2] Constituent word order is defined in terms of a finite verb (V) in combination with two arguments, namely the subject (S) and the object (O).[3][4][5][6] Subject and object are here understood to be nouns, since pronouns often tend to display different word order properties.[7][8] Thus, a transitive sentence has six logically possible basic word orders. Listed from most common to rarest, and using "she" as the subject, "loves" as the verb, and "him" as the object, they are: SOV ("She him loves"), SVO ("She loves him"), VSO ("Loves she him"), VOS ("Loves him she"), OVS ("Him loves she"), and OSV ("Him she loves"). Sometimes patterns are more complex: some Germanic languages have SOV (Subject-Object-Verb) in subordinate clauses, but V2 word order in main clauses, SVO word order being the most common. Using the guidelines above, the unmarked word order is then SVO. Many synthetic languages such as Arabic,[11] Latin, Greek, Persian, Romanian, Assyrian, Assamese, Russian, Turkish, Korean, Japanese, Finnish, and Basque have no strict word order; rather, the sentence structure is highly flexible and reflects the pragmatics of the utterance. However, even in languages of this kind there is usually a pragmatically neutral constituent order that is most commonly encountered in each language. Topic-prominent languages organize sentences to emphasize their topic–comment structure. Nonetheless, there is often a preferred order; in Latin and Turkish, SOV is the most frequent outside of poetry, and in Finnish SVO is both the most frequent and obligatory when case marking fails to disambiguate argument roles. Just as languages may have different word orders in different contexts, so may they have both fixed and free word orders. For example, Russian has a relatively fixed SVO (Subject-Verb-Object) word order in transitive clauses, but a much freer SV / VS order in intransitive clauses.[citation needed] Cases like this can be addressed by encoding transitive and intransitive clauses separately, with the symbol "S" being restricted to the argument of an intransitive clause, and "A" for the actor/agent of a transitive clause. ("O" for object may be replaced with "P" for "patient" as well.) Thus, Russian is fixed AVO (Agent-Verb-Object) but flexible SV/VS. In such an approach, the description of word order extends more easily to languages that do not meet the criteria in the preceding section. For example, Mayan languages have been described with the rather uncommon VOS word order. However, they are ergative–absolutive languages, and the more specific word order is intransitive VS, transitive VOA (Verb-Object-Agent), where the S and O arguments both trigger the same type of agreement on the verb.
Indeed, many languages that some thought had a VOS (Verb-Object-Subject) word order turn out to be ergative like Mayan. Every language falls under one of the six word order types; the unfixed type is somewhat disputed in the community, as the languages where it occurs have one of the dominant word orders but every word order type is grammatically correct. The table below displays the word order surveyed byDryer. The 2005 study[12]surveyed 1228 languages, and the updated 2013 study[8]investigated 1377 languages. Percentage was not reported in his studies. Hammarström (2016)[13]calculated the constituent orders of 5252 languages in two ways. His first method, counting languages directly, yielded results similar to Dryer's studies, indicating both SOV and SVO have almost equal distribution. However, when stratified bylanguage families, the distribution showed that the majority of the families had SOV structure, meaning that a small number of families contain SVO structure. Fixed word order is one out of many ways to ease the processing of sentence semantics and reducing ambiguity. One method of making the speech stream less open to ambiguity (complete removal of ambiguity is probably impossible) is a fixed order ofargumentsand other sentenceconstituents. This works because speech is inherently linear. Another method is to label the constituents in some way, for example withcase marking,agreement, or anothermarker. Fixed word order reduces expressiveness but added marking increases information load in the speech stream, and for these reasons strict word order seldom occurs together with strict morphological marking, one counter-example beingPersian.[1] Observing discourse patterns, it is found that previously given information (topic) tends to precede new information (comment). Furthermore, acting participants (especially humans) are more likely to be talked about (to be topic) than things simply undergoing actions (like oranges being eaten). If acting participants are often topical, and topic tends to be expressed early in the sentence, this entails that acting participants have a tendency to be expressed early in the sentence. This tendency can thengrammaticalizeto a privileged position in the sentence, the subject. The mentioned functions of word order can be seen to affect the frequencies of the various word order patterns: The vast majority of languages have an order in which S precedes O and V. Whether V precedes O or O precedes V, however, has been shown to be a very telling difference with wide consequences on phrasal word orders.[14] In many languages, standard word order can be subverted in order to form questions or as a means of emphasis. In languages such as O'odham and Hungarian, which are discussed below, almost all possible permutations of a sentence are grammatical, but not all of them are used.[15]In languages such as English and German, word order is used as a means of turning declarative into interrogative sentences: A:'Wen liebt Kate?' / 'Kate liebtwen?' [Whom does Kate love? / Kate loveswhom?] (OVS/SVO) B:'Sie liebt Mark' / 'Mark ist der, den sie liebt' [She loves Mark / It isMarkwhom she loves.] (SVO/OSV) C:'Liebt Kate Mark?' [Does Kate love Mark?] (VSO) In (A), the first sentence shows the word order used for wh-questions in English and German. The second sentence is anecho question; it would be uttered only after receiving an unsatisfactory or confusing answer to a question. 
One could replace the wordwen[whom] (which indicates that this sentence is a question) with an identifier such asMark: 'Kate liebtMark?' [Kate lovesMark?]. In that case, since no change in word order occurs, it is only by means ofstressandtonethat we are able to identify the sentence as a question. In (B), the first sentence is declarative and provides an answer to the first question in (A). The second sentence emphasizes that Kate does indeed loveMark, and not whomever else we might have assumed her to love. However, a sentence this verbose is unlikely to occur in everyday speech (or even in written language), be it in English or in German. Instead, one would most likely answer the echo question in (A) simply by restating:Mark!. This is the same for both languages. In yes–no questions such as (C), English and German usesubject-verb inversion. But, whereas English relies ondo-supportto form questions from verbs other than auxiliaries, German has no such restriction and uses inversion to form questions, even from lexical verbs. Despite this, English, as opposed to German, has very strict word order. In German, word order can be used as a means to emphasize a constituent in an independent clause by moving it to the beginning of the sentence. This is a defining characteristic of German as a V2 (verb-second) language, where, in independent clauses, the finite verb always comes second and is preceded by one and only one constituent. In closed questions, V1 (verb-first) word order is used. And lastly, dependent clauses use verb-final word order. However, German cannot be called an SVO language since no actual constraints are imposed on the placement of the subject and object(s), even though a preference for a certain word-order over others can be observed (such as putting the subject after the finite verb in independent clauses unless it already precedes the verb[clarification needed]). The order of constituents in aphrasecan vary as much as the order of constituents in aclause. Normally, thenoun phraseand theadpositional phraseare investigated. Within the noun phrase, one investigates whether the followingmodifiersoccur before and/or after thehead noun. Within the adpositional clause, one investigates whether the languages makes use of prepositions (in London), postpositions (London in), or both (normally with different adpositions at both sides) either separately (For whom?orWhom for?) or at the same time (from her away; Dutch example:met hem meemeaningtogether with him). There are several common correlations between sentence-level word order and phrase-level constituent order. For example, SOV languages generally putmodifiersbefore heads and usepostpositions. VSO languages tend to place modifiers after their heads, and useprepositions. For SVO languages, either order is common. For example, French (SVO) uses prepositions(dans la voiture, à gauche),and places adjectives after(une voiture spacieuse).However, a small class of adjectives generally go before their heads(une grande voiture). On the other hand, in English (also SVO) adjectives almost always go before nouns(a big car),and adverbs can go either way, but initially is more common(greatly improved).(English has a very small number of adjectives that go after the heads, such asextraordinaire, which kept its position when borrowed from French.) Russian places numerals after nouns to express approximation (шесть домов=six houses, домов шесть=circa six houses). 
Some languages do not have a fixed word order and often use a significant amount of morphological marking to disambiguate the roles of the arguments. However, the degree of marking alone does not indicate whether a language uses a fixed or free word order: some languages may use a fixed order even when they provide a high degree of marking, while others (such as some varieties ofDatooga) may combine a free order with a lack of morphological distinction between arguments. Typologically, there is a trend that high-animacy actors are more likely to be topical than low-animacy undergoers; this trend can come through even in languages with free word order, giving a statistical bias for SO order (or OS order in ergative systems; however, ergative systems do not always extend to the highest levels of animacy, sometimes giving way to an accusative system (seesplit ergativity).[16] Most languages with a high degree of morphological marking have rather flexible word orders, such asPolish,Hungarian,Spanish,Latin,Albanian, andO'odham. In some languages, a general word order can be identified, but this is much harder in others.[17]When the word order is free, different choices of word order can be used to help identify thethemeand therheme. Word order in Hungarian sentences can change according to the speaker's communicative intentions. Hungarian word order is not free in the sense that it must reflect the information structure of the sentence, distinguishing the emphatic part that carries new information (rheme) from the rest of the sentence that carries little or no new information (theme). The position of focus in a Hungarian sentence is immediately before the verb, that is, nothing can separate the emphatic part of the sentence from the verb. For "Kateatea piece of cake", the possibilities are: The only freedom in Hungarian word order is that the order of parts outside the focus position and the verb may be freely changed without any change to the communicative focus of the sentence, as seen in sentences 2 and 3 as well as in sentences 6 and 7 above. These pairs of sentences have the same information structure, expressing the same communicative intention of the speaker, because the part immediately preceding the verb is left unchanged. The emphasis can be on the action (verb) itself, as seen in sentences 1, 6 and 7, or it can be on parts other than the action (verb), as seen in sentences 2, 3, 4 and 5. If the emphasis is not on the verb, and the verb has a co-verb (in the above example 'meg'), then the co-verb is separated from the verb, and always follows the verb. Also the enclitic-tmarks the direct object: 'torta' (cake) + '-t' -> 'tortát'. Hindi-Urdu(Hindustani) is essentially a verb-final (SOV) language, with relatively free word order since in most cases postpositions explicitly mark the relationships of noun phrases to the other sentence constituents.[18]Word order in Hindustani does not usually encode grammatical functions.[19]Constituents can be scrambled to express different information structural configurations, or for stylistic reasons. The first syntactic constituent in a sentence is usually the topic,[20][19]which may under certain conditions be marked by the particle "to" (तो / تو), similar in some respects to Japanese topic markerは(wa).[21][22][23][24]Some rules governing the position of words in a sentence are as follows: Some of all the possible word order permutations of the sentence "The girlreceived a giftfrom the boyon her birthday." are shown below. 
In Portuguese,cliticpronouns andcommasallow many different orders:[citation needed] Braces ({ }) are used above to indicate omitted subject pronouns, which may be implicit in Portuguese. Because ofconjugation, thegrammatical personis recovered. In Classical Latin, the endings of nouns, verbs, adjectives, and pronouns allow for extremely flexible order in most situations. Latin lacks articles. The subject, verb, and object can come in any order in a Latin sentence, although most often (especially in subordinate clauses) the verb comes last.[26]Pragmatic factors, such as topic and focus, play a large part in determining the order. Thus the following sentences each answer a different question:[27] Latin prose often follows the word order "Subject, Direct Object, Indirect Object, Adverb, Verb",[28]but this is more of a guideline than a rule. Adjectives in most cases go before the noun they modify,[29]but some categories, such as those that determine or specify (e.g.Via Appia"Appian Way"), usually follow the noun. In Classical Latin poetry, lyricists followed word order very loosely to achieve a desiredscansion. Due to the presence of grammatical cases (nominative, genitive, dative, accusative, ablative, and in some cases or dialects vocative and locative) applied to nouns, pronouns and adjectives, Albanian permits a large variety of word order combinations. In the spoken language, an alternative word order to the most common S-V-O helps the speaker to emphasise a word and hence make a nuanced change to the meaning. For example: In these examples, "(mua)" can be omitted when not in first position, causing a perceivable change in emphasis; the latter being of different intensity. "Më" is always followed by the verb. Thus, a sentence consisting of a subject, a verb and two objects (a direct and an indirect one), can be expressed in six ways without "mua", and in twenty-four ways with "mua", adding up to thirty possible combinations. O'odham is a language that is spoken in southern Arizona and Northern Sonora, Mexico. It has free word order, with only theauxiliary bound to one spot. Here is an example in literal translation:[15] Those examples are all grammatically valid variations on the sentence "The cowboy is branding the calves," but some are rarely found in natural speech, as is discussed in Grammaticality. Languages change over time. When language change involves a shift in a language's syntax, this is calledsyntactic change. An example of this is found in Old English, which at one point had flexible word order, before losing it over the course of its evolution.[30]In Old English, both of the following sentences would be considered grammatically correct: This flexibility continues into early Middle English, where it seems to drop out of usage.[31]Shakespeare's plays use OV word order frequently, as can be seen from this example: A modern speaker of English would possibly recognise this as a grammatically comprehensible sentence, but nonetheless archaic. There are some verbs, however, that are entirely acceptable in this format: This is acceptable to a modern English speaker and is not considered archaic. This is due to the verb "to be", which acts as bothauxiliaryand main verb. Similarly, other auxiliary andmodal verbsallow for VSO word order ("Must he perish?"). Non-auxiliary and non-modal verbs require insertion of an auxiliary to conform to modern usage ("Did he buy the book?"). 
Shakespeare's usage of word order is not indicative of English at the time, which had dropped OV order at least a century before.[34] This variation between archaic and modern can also be seen in the change from VSO to SVO in Coptic, the language of the Christian Church in Egypt.[35] There are some languages which have different preferred word orders in different dialects. One such case is Andean Spanish, spoken in Peru. While Spanish is classified as an SVO language,[36] Peruvian Spanish has been influenced by Quechua and Aymara, both SOV languages.[37] This has led some first-language (L1) Spanish speakers to use OV word order in more sentences than would be expected; L2 speakers in Peru also use this word order. Poetry and stories can use different word orders to emphasize certain aspects of the sentence. In English, this is called anastrophe. Here is an example: "Kate loves Mark." "Mark Kate loves." Here SVO is changed to OSV to emphasize the object. Differences in word order complicate translation and language education – in addition to changing individual words, the order must be changed. The area of linguistics concerned with translation and education is language acquisition. The reordering of words can cause problems when transcribing stories: rhyme schemes can change, as can the meaning behind the words. This can be especially problematic when translating poetry.
https://en.wikipedia.org/wiki/Word_order
A zero-marking language is one with no grammatical marks on either the dependents (or the modifiers) or the heads (or the nuclei) that would show the relationship between the different constituents of a phrase. Pervasive zero marking is very rare, but instances of zero marking in various forms occur in quite a number of languages. Vietnamese and Indonesian are two national languages listed in the World Atlas of Language Structures as having zero marking. In many East and Southeast Asian languages, such as Thai and Chinese, the head verb and its dependents are not marked for any arguments or for the nouns' roles in the sentence. On the other hand, possession is marked in such languages by the use of clitic particles between possessor and possessed. Some languages, such as many dialects of Arabic, use a similar process, called juxtaposition, to indicate possessive relationships. In Arabic, two nouns next to each other can indicate a possessed–possessor construction: كتب مريم kutub Maryam "Maryam's books" (literally "books Maryam"). In Classical and Modern Standard Arabic, however, the second noun is in the genitive case, as in كتبُ مريمٍ kutub-u Maryam-a. Zero marking, when it occurs, tends to show a strong relationship with word order. Languages in which zero marking is widespread are almost all subject–verb–object (SVO), perhaps because verb-medial order allows two or more nouns to be recognized as such much more easily than subject–object–verb, object–subject–verb, verb–subject–object, or verb–object–subject order, in which two nouns might be adjacent and their roles in the sentence thus confused.[citation needed] It has been suggested that verb-final languages may be likely to develop verb-medial order if marking on nouns is lost.[citation needed]
https://en.wikipedia.org/wiki/Zero-marking_language
Infix notation Polish notation(PN), also known asnormal Polish notation(NPN),[1]Łukasiewicz notation,Warsaw notation,Polish prefix notation,Eastern Notationor simplyprefix notation, is a mathematical notation in whichoperatorsprecedetheiroperands, in contrast to the more commoninfix notation, in which operators are placedbetweenoperands, as well asreverse Polish notation(RPN), in which operatorsfollowtheir operands. It does not need any parentheses as long as each operator has a fixednumber of operands. The description "Polish" refers to thenationalityoflogicianJan Łukasiewicz,[2]: 24[3]: 78[4]who invented Polish notation in 1924.[5]: 367, Footnote 3[6]: 180, Footnote 3 The termPolish notationis sometimes taken (as the opposite ofinfix notation) to also include reverse Polish notation.[7] When Polish notation is used as a syntax for mathematical expressions byprogramming languageinterpreters, it is readily parsed intoabstract syntax treesand can, in fact, define aone-to-one representationfor the same. Because of this,Lisp(see below) and related programming languages define their entire syntax in prefix notation (and others use postfix notation). A quotation from a paper byJan Łukasiewiczin 1931[5]: 367, Footnote 3[6]: 180, Footnote 3states how the notation was invented: I came upon the idea of a parenthesis-free notation in 1924. I used that notation for the first time in my article Łukasiewicz (1), p. 610, footnote. The reference cited by Łukasiewicz, i.e., Łukasiewicz (1),[8]is apparently a lithographed report inPolish. The referring paper[5]by Łukasiewicz was reviewed byHenry A. Pogorzelskiin theJournal of Symbolic Logicin 1965.[9]Heinrich Behmann, editor in 1924 of the article ofMoses Schönfinkel,[10]already had the idea of eliminating parentheses in logic formulas. In one of his papers Łukasiewicz stated that his notation is the most compact and the first linearly written parentheses-free notation, but not the first one asGottlob Fregeproposed his parentheses-freeBegriffsschriftnotation in 1879 already.[11] Alonzo Churchmentions this notation in his classic book onmathematical logicas worthy of remark in notational systems even contrasted toAlfred WhiteheadandBertrand Russell's logical notational exposition and work inPrincipia Mathematica.[12] In Łukasiewicz's 1951 book,Aristotle's Syllogistic from the Standpoint of Modern Formal Logic, he mentions that the principle of his notation was to write thefunctorsbefore theargumentsto avoid brackets and that he had employed his notation in his logical papers since 1929.[3]: 78He then goes on to cite, as an example, a 1930 paper he wrote withAlfred Tarskion thesentential calculus.[13] While no longer used much in logic,[14]Polish notation has since found a place incomputer science. The expression for adding the numbers 1 and 2 is written in Polish notation as+ 1 2(prefix), rather than as1 + 2(infix). In more complex expressions, the operators still precede their operands, but the operands may themselves be expressions including again operators and their operands. For instance, the expression that would be written in conventional infix notation as can be written in Polish notation as Assuming a givenarityof all involved operators (here the "−" denotes the binary operation of subtraction, not the unary function of sign-change), any well-formed prefix representation is unambiguous, and brackets within the prefix expression are unnecessary. 
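Because every operator has a fixed arity, a prefix expression can be turned into an abstract syntax tree in a single left-to-right pass, with no parentheses and no precedence rules. The following Python sketch illustrates this; the arity table and function name are illustrative choices, and the token string corresponds to the running example (5 − 6) × 7 discussed in the surrounding text.

```python
# Minimal parser from prefix (Polish) notation to a nested-tuple syntax tree.
# All operators here are binary; the table is an illustrative assumption.
ARITY = {"+": 2, "-": 2, "*": 2, "/": 2}

def parse_prefix(tokens, pos=0):
    """Parse one expression starting at tokens[pos]; return (tree, next_pos)."""
    tok = tokens[pos]
    if tok not in ARITY:                 # operand: a leaf of the tree
        return float(tok), pos + 1
    pos += 1
    args = []
    for _ in range(ARITY[tok]):          # read exactly `arity` sub-expressions
        subtree, pos = parse_prefix(tokens, pos)
        args.append(subtree)
    return (tok, *args), pos

tree, end = parse_prefix("* - 5 6 7".split())   # prefix form of (5 - 6) * 7
print(tree)                                     # ('*', ('-', 5.0, 6.0), 7.0)
assert end == 5                                 # every token consumed: well formed
```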
As such, the above expression can be further simplified to The processing of the product is deferred until its two operands are available (i.e., 5 minus 6, and 7). As withanynotation, the innermost expressions are evaluated first, but in Polish notation this "innermost-ness" can be conveyed by the sequence of operators and operands rather than by bracketing. In the conventional infix notation, parentheses are required to override the standardprecedence rules, since, referring to the above example, moving them or removing them changes the meaning and the result of the expression. This version is written in Polish notation as When dealing with non-commutative operations, like division or subtraction, it is necessary to coordinate the sequential arrangement of the operands with the definition of how the operator takes its arguments, i.e., from left to right. For example,÷ 10 5, with 10 to the left of 5, has the meaning of 10 ÷ 5 (read as "divide 10 by 5"), or− 7 6, with 7 left to 6, has the meaning of 7 − 6 (read as "subtract from 7 the operand 6"). Prefix/postfix notation is especially popular for its innate ability to express the intended order of operations without the need for parentheses and other precedence rules, as are usually employed withinfix notation. Instead, the notation uniquely indicates which operator to evaluate first. The operators are assumed to have a fixedarityeach, and all necessary operands are assumed to be explicitly given. A valid prefix expression always starts with an operator and ends with an operand. Evaluation can either proceed from left to right, or in the opposite direction. Starting at the left, the input string, consisting of tokens denoting operators or operands, is pushed token for token on astack, until the top entries of the stack contain the number of operands that fits to the top most operator (immediately beneath). This group of tokens at the stacktop (the last stacked operator and the according number of operands) is replaced by the result of executing the operator on these/this operand(s). Then the processing of the input continues in this manner. The rightmost operand in a valid prefix expression thus empties the stack, except for the result of evaluating the whole expression. When starting at the right, the pushing of tokens is performed similarly, just the evaluation is triggered by an operator, finding the appropriate number of operands that fits its arity already at the stacktop. Now the leftmost token of a valid prefix expression must be an operator, fitting to the number of operands in the stack, which again yields the result. As can be seen from the description, apush-down storewith no capability of arbitrary stack inspection suffices to implement thisparsing. The above sketched stack manipulation works—with mirrored input—also for expressions inreverse Polish notation. The table below shows the core ofJan Łukasiewicz's notation in modern logic. Some letters in the Polish notation table stand for particular words inPolish, as shown: Thequantifiersranged over propositional values in Łukasiewicz's work on many-valued logics. Bocheńskiintroduced a system of Polish notation that names all 16 binaryconnectivesof classicalpropositional logic.[18]: 16For classical propositional logic, it is a compatible extension of the notation of Łukasiewicz. 
But the notations are incompatible in the sense that Bocheński uses L and M (for nonimplication and converse nonimplication) in propositional logic, whereas Łukasiewicz uses L and M in modal logic. Prefix notation has seen wide application in Lisp S-expressions, where the parentheses are required since the operators in the language are themselves data (first-class functions). Lisp functions may also be variadic. The Tcl programming language, much like Lisp, also uses Polish notation, through the mathop library. The Ambi[19] programming language uses Polish notation for arithmetic operations and program construction. LDAP filter syntax uses Polish prefix notation.[20] Postfix notation is used in many stack-oriented programming languages like PostScript and Forth. CoffeeScript syntax also allows functions to be called using prefix notation, while still supporting the unary postfix syntax common in other languages. The number of return values of an expression equals the number of its operands minus the difference between the total arity of its operators and the total number of values those operators return; for example, + 1 2 has two operands and one binary operator returning a single value, so it yields 2 − (2 − 1) = 1 value. Polish notation, usually in postfix form, is the chosen notation of certain calculators, notably from Hewlett-Packard.[21] At a lower level, postfix operators are used by some stack machines such as the Burroughs large systems.
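The left-to-right and right-to-left evaluation schemes described earlier are both easy to realize with an explicit stack. The following Python sketch implements the right-to-left variant, in which operands are stacked until an operator is reached; the operator table and function name are illustrative.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}   # all binary operators here

def eval_prefix(tokens):
    """Evaluate a prefix expression by scanning right to left:
    operands are pushed until an operator is seen, which then
    consumes the topmost operands and pushes its result."""
    stack = []
    for tok in reversed(tokens):
        if tok in OPS:
            a = stack.pop()          # left operand (nearest the operator)
            b = stack.pop()          # right operand
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    assert len(stack) == 1, "malformed expression"
    return stack[0]

print(eval_prefix("* - 5 6 7".split()))   # (5 - 6) * 7 = -7.0
print(eval_prefix("/ 10 5".split()))      # 10 / 5 = 2.0
```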
https://en.wikipedia.org/wiki/Polish_notation
Insyntacticanalysis, aconstituentis a word or a group of words that function as a single unit within a hierarchical structure. The constituent structure of sentences is identified usingtests for constituents.[1]These tests apply to a portion of a sentence, and the results provide evidence about the constituent structure of the sentence. Many constituents arephrases. A phrase is a sequence of one or more words (in some theories two or more) built around aheadlexical itemand working as a unit within a sentence. A word sequence is shown to be a phrase/constituent if it exhibits one or more of the behaviors discussed below. The analysis of constituent structure is associated mainly withphrase structure grammars, althoughdependency grammarsalso allow sentence structure to be broken down into constituent parts. Tests for constituents are diagnostics used to identify sentence structure. There are numerous tests for constituents that are commonly used to identify the constituents of English sentences. 15 of the most commonly used tests are listed next: 1)coordination(conjunction), 2) pro-form substitution (replacement), 3)topicalization(fronting), 4)do-so-substitution, 5)one-substitution, 6)answer ellipsis(question test), 7)clefting, 8)VP-ellipsis, 9) pseudoclefting, 10) passivization, 11) omission (deletion), 12) intrusion, 13) wh-fronting, 14) general substitution, 15)right node raising(RNR). The order in which these 15 tests are listed here corresponds to the frequency of use, coordination being the most frequently used of the 15 tests and RNR being the least frequently used. A general word of caution is warranted when employing these tests, since they often deliver contradictory results. The tests are merely rough-and-ready tools that grammarians employ to reveal clues about syntactic structure. Some syntacticians even arrange the tests on a scale of reliability, with less-reliable tests treated as useful to confirm constituency though not sufficient on their own. Failing to pass a single test does not mean that the test string is not a constituent, and conversely, passing a single test does not necessarily mean the test string is a constituent. It is best to apply as many tests as possible to a given string in order to prove or to rule out its status as a constituent. The 15 tests are introduced, discussed, and illustrated below mainly relying on the same one sentence:[2] By restricting the introduction and discussion of the tests for constituents below mainly to this one sentence, it becomes possible to compare the results of the tests. To aid the discussion and illustrations of the constituent structure of this sentence, the following two sentence diagrams are employed (D = determiner, N = noun, NP = noun phrase, Pa = particle, S = sentence, V = Verb, VP =verb phrase): These diagrams show two potential analyses of the constituent structure of the sentence. A given node in a tree diagram is understood as marking a constituent, that is, a constituent is understood as corresponding to a given node and everything that that node exhaustively dominates. 
Hence the first tree, which shows the constituent structure according todependency grammar, marks the following words and word combinations as constituents:Drunks,off,the,the customers, andput off the customers.[3]The second tree, which shows the constituent structure according tophrase structure grammar, marks the following words and word combinations as constituents:Drunks,could,put,off,the,customers,the customers,put off the customers, andcould put off the customers. The analyses in these two tree diagrams provide orientation for the discussion of tests for constituents that now follows. Thecoordinationtest assumes that only constituents can be coordinated, i.e., joined by means of a coordinator such asand,or, orbut:[4]The next examples demonstrate that coordination identifies individual words as constituents: The square brackets mark the conjuncts of the coordinate structures. Based on these data, one might assume thatdrunks,could,put off, andcustomersare constituents in the test sentence because these strings can be coordinated withbums,would,drive away, andneighbors, respectively. Coordination also identifies multi-word strings as constituents: These data suggest thatthe customers,put off the customers, andcould put off the customersare constituents in the test sentence. Examples such as (a-g) are not controversial insofar as many theories of sentence structure readily view the strings tested in sentences (a-g) as constituents. However, additional data are problematic, since they suggest that certain strings are also constituents even though most theories of syntax do not acknowledge them as such, e.g. These data suggest thatcould put off,put off these, andDrunks couldare constituents in the test sentence. Most theories of syntax reject the notion that these strings are constituents, though. Data such as (h-j) are sometimes addressed in terms of theright node raising(RNR) mechanism. The problem for the coordination test represented by examples (h-j) is compounded when one looks beyond the test sentence, for one quickly finds that coordination suggests that a wide range of strings are constituents that most theories of syntax do not acknowledge as such, e.g. The stringsfrom home on Tuesdayandfrom home on Tuesday on his bicycleare not viewed as constituents in most theories of syntax, and concerning sentence (m), it is very difficult there to even discern how one should delimit the conjuncts of the coordinate structure. The coordinate structures in (k-l) are sometimes characterized in terms of non-constituent conjuncts (NCC), and the instance of coordination in sentence (m) is sometimes discussed in terms of stripping and/orgapping. Due to the difficulties suggested with examples (h-m), many grammarians view coordination skeptically regarding its value as a test for constituents. The discussion of the other tests for constituents below reveals that this skepticism is warranted, since coordination identifies many more strings as constituents than the other tests for constituents.[5] Proformsubstitution, or replacement, involves replacing the test string with the appropriate proform (e.g. pronoun, pro-verb, pro-adjective, etc.). Substitution normally involves using a definite proform likeit,he,there,here, etc. in place of a phrase or a clause. 
If such a change yields a grammatical sentence where the general structure has not been altered, then the test string is likely a constituent:[6] These examples suggest thatDrunks,the customers, andput off the customersin the test sentence are constituents. An important aspect of the proform test is the fact that it fails to identify most subphrasal strings as constituents, e.g. These examples suggest that the individual wordscould,put,off, andcustomersshould not be viewed as constituents. This suggestion is of course controversial, since most theories of syntax assume that individual words are constituents by default. The conclusion one can reach based on such examples, however, is that proform substitution using a definite proform identifies phrasal constituents only; it fails to identify sub-phrasal strings as constituents. Topicalizationinvolves moving the test string to the front of the sentence. It is a simple movement operation.[7]Many instances of topicalization seem only marginally acceptable when taken out of context. Hence to suggest a context, an instance of topicalization can be preceded by...andand a modal adverb can be added as well (e.g.certainly): These examples suggest thatthe customersandput off the customersare constituents in the test sentence. Topicalization is like many of the other tests in that it identifies phrasal constituents only. When the test sequence is a sub-phrasal string, topicalization fails: These examples demonstrate thatcustomers,could,put,off, andthefail the topicalization test. Since these strings are all sub-phrasal, one can conclude that topicalization is unable to identify sub-phrasal strings as constituents. Do-so-substitution is a test that substitutes a form ofdo so(does so,did so,done so,doing so) into the test sentence for the target string. This test is widely used to probe the structure of strings containing verbs (becausedois a verb).[8]The test is limited in its applicability, though, precisely because it is only applicable to strings containing verbs: The 'a' example suggests thatput off the customersis a constituent in the test sentence, whereas the b example fails to suggest thatcould put off the customersis a constituent, fordo socannot include the meaning of themodal verbcould. To illustrate more completely how thedo sotest is employed, another test sentence is now used, one that contains two post-verbal adjunct phrases: These data suggest thatmet them,met them in the pub, andmet them in the pub because we had timeare constituents in the test sentence. Taken together, such examples seem to motivate a structure for the test sentence that has a left-branching verb phrase, because only a left-branching verb phrase can view each of the indicated strings as a constituent. There is a problem with this sort of reasoning, however, as the next example illustrates: In this case,did soappears to stand in for the discontinuous word combination consisting ofmet themandbecause we had time. Such a discontinuous combination of words cannot be construed as a constituent. That such an interpretation ofdid sois indeed possible is seen in a fuller sentence such asYou met them in the cafe because you had time, and we did so in the pub. In this case, the preferred reading ofdid sois that it indeed simultaneously stands in for bothmet themandbecause we had time. Theone-substitution test replaces the test string with the indefinite pronounoneorones.[9]If the result is acceptable, then the test string is deemed a constituent. 
Sinceoneis a type of pronoun,one-substitution is only of value when probing the structure of noun phrases. In this regard, the test sentence from above is expanded in order to better illustrate the manner in which one-substitution is generally employed: These examples suggest thatcustomers,loyal customers,customers around here,loyal customers around here, andcustomers around here who we rely onare constituents in the test sentence. Some have pointed to a problem associated with theone-substitution in this area, however. This problem is that it is impossible to produce a single constituent structure of the noun phrasethe loyal customers around here who we rely onthat could simultaneous view all of the indicated strings as constituents.[10]Another problem that has been pointed out concerning theone-substitution as a test for constituents is the fact that it at times suggests that non-string word combinations are constituents,[11]e.g. The word combination consisting of bothloyal customersandwho we rely onis discontinuous in the test sentence, a fact that should motivate one to generally question the value ofone-substitution as a test for constituents. The answer fragment test involves forming a question that contains a single wh-word (e.g.who,what,where, etc.). If the test string can then appear alone as the answer to such a question, then it is likely a constituent in the test sentence:[12] These examples suggest thatDrunks,the customers, andput off the customersare constituents in the test sentence. The answer fragment test is like most of the other tests for constituents in that it does not identify sub-phrasal strings as constituents: These answer fragments are all grammatically unacceptable, suggesting thatcould,put,off, andcustomersare not constituents. Note as well that the latter two questions themselves are ungrammatical. It is apparently often impossible to form the question in a way that could successfully elicit the indicated strings as answer fragments. The conclusion, then, is that the answer fragment test is like most of the other tests in that it fails to identify sub-phrasal strings as constituents. Cleftinginvolves placing the test string X within the structure beginning withIt is/was:It was X that....[13]The test string appears as the pivot of the cleft sentence: These examples suggest thatDrunksandthe customersare constituents in the test sentence. Example c is of dubious acceptability, suggesting thatput off the customersmay not be constituent in the test string. Clefting is like most of the other tests for constituents in that it fails to identify most individual words as constituents: The examples suggest that each of the individual wordscould,put,off,the, andcustomersare not constituents, contrary to what most theories of syntax assume. In this respect, clefting is like many of the other tests for constituents in that it only succeeds at identifying certain phrasal strings as constituents. The VP-ellipsis test checks to see which strings containing one or more predicative elements (usually verbs) can be elided from a sentence. Strings that can be elided are deemed constituents:[14]The symbol ∅ is used in the following examples to mark the position of ellipsis: These examples suggest thatput offis not a constituent in the test sentence, but thatimmediately put off the customers,put off the customers when they arrive, andimmediately put off the customers when they arriveare constituents. 
Concerning the stringput off the customersin (b), marginal acceptability makes it difficult to draw a conclusion aboutput off the customers. There are various difficulties associated with this test. The first of these is that it can identify too many constituents, such as in this case here where it is impossible to produce a single constituent structure that could simultaneously view each of the three acceptable examples (c-e) as having elided a constituent. Another problem is that the test can at times suggest that a discontinuous word combination is a constituent, e.g.: In this case, it appears as though the elided material corresponds to the discontinuous word combination includinghelpandin the office. Pseudoclefting is similar to clefting in that it puts emphasis on a certain phrase in a sentence. There are two variants of the pseudocleft test. One variant inserts the test string X in a sentence starting with a free relative clause:What.....is/are X; the other variant inserts X at the start of the sentence followed by theit/areand then the free relative clause:X is/are what/who...Only the latter of these two variants is illustrated here.[15] These examples suggest thatDrunks,the customers, andput off the customersare constituents in the test sentence. Pseudoclefting fails to identify most individual words as constituents: The pseudoclefting test is hence like most of the other tests insofar as it identifies phrasal strings as constituents, but does not suggest that sub-phrasal strings are constituents. Passivization involves changing an active sentence to a passive sentence, or vice versa. Theobjectof the active sentence is changed to thesubjectof the corresponding passive sentence:[16] The fact that sentence (b), the passive sentence, is acceptable, suggests thatDrunksandthe customersare constituents in sentence (a). The passivization test used in this manner is only capable of identifying subject and object words, phrases, and clauses as constituents. It does not help identify other phrasal or sub-phrasal strings as constituents. In this respect, the value of passivization as test for constituents is very limited. Omission checks whether the target string can be omitted without influencing the grammaticality of the sentence. In most cases, local and temporal adverbials, attributive modifiers, and optional complements can be safely omitted and thus qualify as constituents.[17] This sentence suggests that the definite articletheis a constituent in the test sentence. Regarding the test sentence, however, the omission test is very limited in its ability to identify constituents, since the strings that one wants to check do not appear optionally. Therefore, the test sentence is adapted to better illustrate the omission test: The ability to omitobnoxious,immediately, andwhen they arrivesuggests that these strings are constituents in the test sentence. Omission used in this manner is of limited applicability, since it is incapable of identifying any constituent that appears obligatorily. Hence there are many target strings that most accounts of sentence structure take to be constituents but that fail the omission test because these constituents appear obligatorily, such as subject phrases. Intrusion probes sentence structure by having an adverb "intrude" into parts of the sentence. The idea is that the strings on either side of the adverb are constituents.[18] Example (a) suggests thatDrunksandcould put off the customersare constituents. 
Example (b) suggests thatDrunks couldandput off the customersare constituents. The combination of (a) and (b) suggest in addition thatcouldis a constituent. Sentence (c) suggests thatDrunks could putandoff the customersare not constituents. Example (d) suggests thatDrunks could put offandthe customersare not constituents. And example (e) suggests thatDrunks could put off theandcustomersare not constituents. Those that employ the intrusion test usually use a modal adverb likedefinitely. This aspect of the test is problematic, though, since the results of the test can vary based upon the choice of adverb. For instance, manner adverbs distribute differently than modal adverbs and will hence suggest a distinct constituent structure from that suggested by modal adverbs. Wh-fronting checks to see if the test string can be fronted as a wh-word.[19]This test is similar to the answer fragment test insofar it employs just the first half of that test, disregarding the potential answer to the question. These examples suggest thatDrunks,the customers, andput off the customersare constituents in the test sentence. Wh-fronting is like a number of the other tests in that it fails to identify many subphrasal strings as constituents: These examples demonstrate a lack of evidence for viewing the individual wordswould,put,off,the, andcustomersas constituents. The general substitution test replaces the test string with some other word or phrase.[20]It is similar to proform substitution, the only difference being that the replacement word or phrase is not a proform, e.g. These examples suggest that the stringsDrunks,the customers, andcouldare constituents in the test sentence. There is a major problem with this test, for it is easily possible to find a replacement word for strings that the other tests suggest are clearly not constituents, e.g. These examples suggest thatcould put,Drunks could, andcould put off theare constituents in the test sentence. This is contrary to what the other tests reveal and to what most theories of sentence structure assume. The value of general substitution as test for constituents is therefore suspect. It is like the coordination test in that it suggests that too many strings are constituents. Right node raising, abbreviated as RNR, is a test that isolates the test string on the right side of a coordinate structure.[21]The assumption is that only constituents can be shared by the conjuncts of a coordinate structure, e.g. These examples suggest thatcould put off the customers,put off the customers, andthe customersare constituents in the test sentence. There are two problems with the RNR diagnostic as a test for constituents. The first is that it is limited in its applicability, since it is only capable of identifying strings as constituents if they appear on the right side of the test sentence. The second is that it can suggest strings to be constituents that most of the other tests suggest are not constituents. To illustrate this point, a different example must be used: These examples suggest thattheir bicycles (his bicycle) to us to use if need be,to us to use if need be, andto use if need beare constituents in the test sentence. Most theories of syntax do not view these strings as constituents, and more importantly, most of the other tests suggest that they are not constituents. In short, these tests are not taken for granted because a constituent may pass one test and fail to pass many others. 
We need to consult our intuitive judgments when assessing the constituency of any set of words. A word of caution is warranted concerning the tests for constituents as just discussed above. These tests are found in textbooks on linguistics and syntax that are written mainly with the syntax of English in mind, and the examples that are discussed are mainly from English. The tests may or may not be valid and useful when probing the constituent structure of other languages. Ideally, a battery of tests for constituents can and should be developed for each language, catered to the idiosyncrasies of the language at hand. Constituent structure analyses of sentences are a central concern for theories of syntax. A given theory can produce an analysis of constituent structure that is quite unlike that of the next. This point is evident with the two tree diagrams above of the sentence Drunks could put off the customers, where the dependency grammar analysis of constituent structure looks very much unlike the phrase structure analysis. The crucial difference across the two analyses is that the phrase structure analysis views every individual word as a constituent by default, whereas the dependency grammar analysis sees only those individual words as constituents that do not dominate other words. Phrase structure grammars therefore acknowledge many more constituents than dependency grammars. A second example further illustrates this point (D = determiner, N = noun, NP = noun phrase, Pa = particle, S = sentence, V = verb, V' = verb-bar, VP = verb phrase): The dependency grammar tree shows five words and word combinations as constituents: what, these, us, these diagrams, and show us. The phrase structure tree, in contrast, shows nine words and word combinations as constituents: what, do, these, diagrams, show, us, these diagrams, show us, and do these diagrams show us. The two diagrams thus disagree concerning the status of do, diagrams, show, and do these diagrams show us, the phrase structure diagram showing them as constituents and the dependency grammar diagram showing them as non-constituents. To determine which analysis is more plausible, one turns to the tests for constituents discussed above.[22] Within phrase structure grammars, views about constituent structure can also vary significantly. Many modern phrase structure grammars assume that syntactic branching is always binary, that is, each greater constituent is necessarily broken down into two lesser constituents. Older phrase structure analyses are, however, more likely to allow n-ary branching, in which each greater constituent can be broken down into one, two, or more lesser constituents. The next two trees illustrate the distinction (Aux = auxiliary verb, AuxP = auxiliary verb phrase, Aux' = Aux-bar, D = determiner, N = noun, NP = noun phrase, P = preposition, PP = prepositional phrase, Pa = particle, S = sentence, t = trace, V = verb, V' = verb-bar, VP = verb phrase): The details in the second diagram here are not crucial to the point at hand. The point is that all branching there is strictly binary, whereas in the first tree diagram ternary branching is present twice, for the AuxP and for the VP. Observe in this regard that strictly binary branching analyses increase the number of (overt) constituents to the maximum possible. The word combinations have sent many things to us and many things to us are shown as constituents in the second tree diagram but not in the first.
Which of these two analyses is better is again at least in part a matter of what the tests for constituents can reveal.
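The idea that a constituent is a node together with everything that node dominates can be made concrete in a few lines of code. The Python sketch below encodes a toy dependency analysis of Drunks could put off the customers (the tree encoding and function names are illustrative assumptions) and reads off each node's yield; the whole sentence appears as the trivial top-level constituent, and the same routine applied to a phrase structure tree, in which every word heads its own node, produces the longer constituent list discussed earlier.

```python
# Toy dependency analysis of "Drunks could put off the customers".
# Each node is (position, word, children); a constituent is a node
# together with everything it dominates, read off in sentence order.
TREE = (2, "could", [
    (1, "Drunks", []),
    (3, "put", [
        (4, "off", []),
        (6, "customers", [(5, "the", [])]),
    ]),
])

def constituents(node):
    """Return (covered_words, constituent_strings) for the subtree at `node`."""
    pos, word, children = node
    covered = {pos: word}
    found = []
    for child in children:
        child_covered, child_found = constituents(child)
        covered.update(child_covered)
        found.extend(child_found)
    found.append(" ".join(covered[p] for p in sorted(covered)))
    return covered, found

_, result = constituents(TREE)
print(result)
# ['Drunks', 'off', 'the', 'the customers', 'put off the customers',
#  'Drunks could put off the customers']
```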
https://en.wikipedia.org/wiki/Constituent_(linguistics)
In formal languages, terminal and nonterminal symbols are parts of the vocabulary under a formal grammar. The vocabulary is a finite, nonempty set of symbols. Terminal symbols are symbols that cannot be replaced by other symbols of the vocabulary. Nonterminal symbols are symbols that can be replaced by other symbols of the vocabulary by the production rules under the same formal grammar.[2] A formal grammar defines a formal language over the vocabulary of the grammar. In the context of formal languages, the term vocabulary is more commonly known as the alphabet. Nonterminal symbols are also called syntactic variables. Terminal symbols are those symbols that can appear in the formal language defined by a formal grammar. The process of applying the production rules successively to a start symbol might not terminate, but if it does terminate, that is, if a point is reached at which no production rule can be applied, the output string consists only of terminal symbols. For example, consider a grammar defined by two rules. In this grammar, the symbol Б is a terminal symbol and Ψ is both a nonterminal symbol and the start symbol. The production rules for creating strings are as follows: Here Б is a terminal symbol because no rule exists to replace it with other symbols. On the other hand, Ψ has two rules that can change it, so it is nonterminal. The rules define a formal language that contains countably infinitely many finite-length words, since the first rule can be applied as many times as we wish. Diagram 1 illustrates a string that can be produced with this grammar. Nonterminal symbols are those symbols that cannot appear in the formal language defined by a formal grammar. A formal grammar includes a start symbol, which is a designated member of the set of nonterminal symbols. We can derive a set of strings consisting only of terminal symbols by successively applying the production rules; the generated set is a formal language over the set of terminal symbols. Context-free grammars are those grammars in which the left-hand side of each production rule consists of only a single nonterminal symbol. This restriction is non-trivial; not all languages can be generated by context-free grammars. Those that can are called context-free languages. These are exactly the languages that can be recognized by a nondeterministic pushdown automaton. Context-free languages are the theoretical basis for the syntax of most programming languages. A grammar is defined by production rules (or just 'productions') that specify which symbols can replace which other symbols; these rules can be used to generate strings or to parse them. Each such rule has a head, or left-hand side, which consists of the string that can be replaced, and a body, or right-hand side, which consists of a string that can replace it. Rules are often written in the form head → body; e.g., the rule a → b specifies that a can be replaced by b. In the classic formalization of generative grammars first proposed by Noam Chomsky in the 1950s,[3][4] a grammar G consists of the following components: A grammar is formally defined as the ordered quadruple ⟨N, Σ, P, S⟩. Such a formal grammar is often called a rewriting system or a phrase structure grammar in the literature.[5][6] Backus–Naur form is a notation for expressing certain grammars.
For instance, the following production rules in Backus–Naur form are used to represent an integer (which can be signed): In this example, the terminal symbols are {−, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, and the nonterminal symbols are {<digit>, <integer>}.[note 2] Another example is: In this example, the terminal symbols are {a, b, c, d}, and the nonterminal symbols are {S, A}.
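The derivation of terminal strings from a start symbol can be sketched in a few lines of Python. The rules below are a plausible reconstruction of the signed-integer grammar, keeping the two nonterminals <integer> (the start symbol) and <digit> named above; the exact productions in the source may differ.

```python
import random

# Assumed reconstruction of the signed-integer grammar:
#   <integer> ::= <digit> | '-' <digit> | <integer> <digit>
#   <digit>   ::= '0' | '1' | ... | '9'
RULES = {
    "<integer>": [["<digit>"], ["-", "<digit>"], ["<integer>", "<digit>"]],
    "<digit>":   [[d] for d in "0123456789"],
}

def generate(symbol="<integer>"):
    """Expand nonterminals until only terminal symbols remain."""
    if symbol not in RULES:                    # terminal symbol: kept verbatim
        return symbol
    production = random.choice(RULES[symbol])  # pick one production for the nonterminal
    return "".join(generate(s) for s in production)

print([generate() for _ in range(5)])          # e.g. ['7', '-30', '412', '-8', '59']
```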
https://en.wikipedia.org/wiki/Terminal_and_nonterminal_symbols
Categorial grammaris a family of formalisms innatural languagesyntaxthat share the central assumption thatsyntactic constituentscombine asfunctionsandarguments. Categorial grammar posits a close relationship between the syntax andsemantic composition, since it typically treats syntactic categories as corresponding to semantic types. Categorial grammars were developed in the 1930s byKazimierz Ajdukiewiczand in the 1950s byYehoshua Bar-HillelandJoachim Lambek. It saw a surge of interest in the 1970s following the work ofRichard Montague, whoseMontague grammarassumed a similar view of syntax. It continues to be a major paradigm, particularly withinformal semantics. A categorial grammar consists of two parts: a lexicon, which assigns a set of types (also called categories) to each basic symbol, and sometype inferencerules, which determine how the type of a string of symbols follows from the types of the constituent symbols. It has the advantage that the type inference rules can be fixed once and for all, so that the specification of a particular language grammar is entirely determined by the lexicon. A categorial grammar shares some features with thesimply typed lambda calculus. Whereas thelambda calculushas only one function typeA→B{\displaystyle A\rightarrow B}, a categorial grammar typically has two function types, one type that is applied on the left, and one on the right. For example, a simple categorial grammar might have two function typesB/A{\displaystyle B/A\,\!}andA∖B{\displaystyle A\backslash B}. The first,B/A{\displaystyle B/A\,\!}, is the type of a phrase that results in a phrase of typeB{\displaystyle B\,\!}when followed (on the right) by a phrase of typeA{\displaystyle A\,\!}. The second,A∖B{\displaystyle A\backslash B\,\!}, is the type of a phrase that results in a phrase of typeB{\displaystyle B\,\!}when preceded (on the left) by a phrase of typeA{\displaystyle A\,\!}. The notation is based upon algebra. A fraction when multiplied by (i.e.concatenatedwith) its denominator yields its numerator. As concatenation is notcommutative, it makes a difference whether the denominator occurs to the left or right. The concatenation must be on the same side as the denominator for it to cancel out. The first and simplest kind of categorial grammar is called a basic categorial grammar, or sometimes an AB-grammar (afterAjdukiewiczandBar-Hillel). Given a set of primitive typesPrim{\displaystyle {\text{Prim}}\,\!}, letTp(Prim){\displaystyle {\text{Tp}}({\text{Prim}})\,\!}be the set of types constructed from primitive types. In the basic case, this is the least set such thatPrim⊆Tp(Prim){\displaystyle {\text{Prim}}\subseteq {\text{Tp}}({\text{Prim}})}and ifX,Y∈Tp(Prim){\displaystyle X,Y\in {\text{Tp}}({\text{Prim}})}then(X/Y),(Y∖X)∈Tp(Prim){\displaystyle (X/Y),(Y\backslash X)\in {\text{Tp}}({\text{Prim}})}. Think of these as purely formal expressions freely generated from the primitive types; any semantics will be added later. Some authors assume a fixed infinite set of primitive types used by all grammars, but by making the primitive types part of the grammar, the whole construction is kept finite. A basic categorial grammar is a tuple(Σ,Prim,S,◃){\displaystyle (\Sigma ,{\text{Prim}},S,\triangleleft )}whereΣ{\displaystyle \Sigma \,\!}is a finite set of symbols,Prim{\displaystyle {\text{Prim}}\,\!}is a finite set of primitive types, andS∈Tp(Prim){\displaystyle S\in {\text{Tp}}({\text{Prim}})}. 
The relation◃{\displaystyle \triangleleft }is the lexicon, which relates types to symbols(◃)⊆Tp(Prim)×Σ{\displaystyle (\triangleleft )\subseteq {\text{Tp}}({\text{Prim}})\times \Sigma }. Since the lexicon is finite, it can be specified by listing a set of pairs likeTYPE◃symbol{\displaystyle TYPE\triangleleft {\text{symbol}}}. Such a grammar for English might have three basic types(N,NP,andS){\displaystyle (N,NP,{\text{ and }}S)\,\!}, assigningcount nounsthe typeN{\displaystyle N\,\!}, complete noun phrases the typeNP{\displaystyle NP\,\!}, and sentences the typeS{\displaystyle S\,\!}. Then anadjectivecould have the typeN/N{\displaystyle N/N\,\!}, because if it is followed by a noun then the whole phrase is a noun. Similarly, adeterminerhas the typeNP/N{\displaystyle NP/N\,\!}, because it forms a complete noun phrase when followed by a noun. Intransitiveverbshave the typeNP∖S{\displaystyle NP\backslash S}, and transitive verbs the type(NP∖S)/NP{\displaystyle (NP\backslash S)/NP}. Then a string of words is a sentence if it has overall typeS{\displaystyle S\,\!}. For example, take the string "the bad boy made that mess". Now "the" and "that" are determiners, "boy" and "mess" are nouns, "bad" is an adjective, and "made" is a transitive verb, so the lexicon is {NP/N◃the{\displaystyle NP/N\triangleleft {\text{the}}},NP/N◃that{\displaystyle NP/N\triangleleft {\text{that}}},N◃boy{\displaystyle N\triangleleft {\text{boy}}},N◃mess{\displaystyle N\triangleleft {\text{mess}}},N/N◃bad{\displaystyle N/N\triangleleft {\text{bad}}},(NP∖S)/NP◃made{\displaystyle (NP\backslash S)/NP\triangleleft {\text{made}}}}. and the sequence of types in the string is theNP/N,badN/N,boyN,made(NP∖S)/NP,thatNP/N,messN{\displaystyle {{\text{the}} \atop {NP/N,}}{{\text{bad}} \atop {N/N,}}{{\text{boy}} \atop {N,}}{{\text{made}} \atop {(NP\backslash S)/NP,}}{{\text{that}} \atop {NP/N,}}{{\text{mess}} \atop {N}}} now find functions and appropriate arguments and reduce them according to the twoinference rulesX←X/Y,Y{\displaystyle X\leftarrow X/Y,\;Y}andX←Y,Y∖X{\displaystyle X\leftarrow Y,\;Y\backslash X}: .NP/N,N/N,N,(NP∖S)/NP,NP/N,N⏟{\displaystyle .\qquad NP/N,\;N/N,\;N,\;(NP\backslash S)/NP,\;\underbrace {NP/N,\;N} }.NP/N,N/N,N,(NP∖S)/NP,NP⏟{\displaystyle .\qquad NP/N,\;N/N,\;N,\;\underbrace {(NP\backslash S)/NP,\quad NP} }.NP/N,N/N,N⏟,(NP∖S){\displaystyle .\qquad NP/N,\;\underbrace {N/N,\;N} ,\qquad (NP\backslash S)}.NP/N,N⏟,(NP∖S){\displaystyle .\qquad \underbrace {NP/N,\;\quad N} ,\;\qquad (NP\backslash S)}.NP,(NP∖S)⏟{\displaystyle .\qquad \qquad \underbrace {NP,\;\qquad (NP\backslash S)} }.S{\displaystyle .\qquad \qquad \qquad \quad \;\;\;S} The fact that the result isS{\displaystyle S\,\!}means that the string is a sentence, while the sequence of reductions shows that it can be parsed as ((the (bad boy)) (made (that mess))). Categorial grammars of this form (having only function application rules) are equivalent in generative capacity tocontext-free grammarsand are thus often considered inadequate for theories of natural language syntax. Unlike CFGs, categorial grammars arelexicalized, meaning that only a small number of (mostly language-independent) rules are employed, and all other syntactic phenomena derive from the lexical entries of specific words. Another appealing aspect of categorial grammars is that it is often easy to assign them a compositional semantics, by first assigninginterpretation typesto all the basic categories, and then associating all thederived categorieswith appropriatefunctiontypes. 
The interpretation of any constituent is then simply the value of a function at an argument. With some modifications to handleintensionalityandquantification, this approach can be used to cover a wide variety of semantic phenomena. A Lambek grammar is an elaboration of this idea that has a concatenation operator for types, and several other inference rules. Mati Pentus has shown that these still have the generative capacity of context-free grammars. For the Lambek calculus, there is a type concatenation operator⋆{\displaystyle \star }, so thatPrim⊆Tp(Prim){\displaystyle {\text{Prim}}\subseteq {\text{Tp}}({\text{Prim}})}and ifX,Y∈Tp(Prim){\displaystyle X,Y\in {\text{Tp}}({\text{Prim}})}then(X/Y),(X∖Y),(X⋆Y)∈Tp(Prim){\displaystyle (X/Y),(X\backslash Y),(X\star Y)\in {\text{Tp}}({\text{Prim}})}. The Lambek calculus consists of several deduction rules, which specify how type inclusion assertions can be derived. In the following rules, upper case roman letters stand for types, upper case Greek letters stand for sequences of types. A sequent of the formX←Γ{\displaystyle X\leftarrow \Gamma }can be read: a string is of typeXif it consists of the concatenation of strings of each of the types inΓ. If a type is interpreted as a set of strings, then the ← may be interpreted as ⊇, that is, "includes as a subset". A horizontal line means that the inclusion above the line implies the one below the line. The process is begun by the Axiom rule, which has no antecedents and just says that any type includes itself. The Cut rule says that inclusions can be composed. The other rules come in pairs, one pair for each type construction operator, each pair consisting of one rule for the operator in the target, one in the source, of the arrow. The name of a rule consists of the operator and an arrow, with the operator on the side of the arrow on which it occurs in the conclusion. For an example, here is a derivation of "type raising", which says that(B/A)∖B←A{\displaystyle (B/A)\backslash B\leftarrow A}. The names of rules and the substitutions used are to the right. Recall that acontext-free grammaris a 4-tupleG=(V,Σ,::=,S){\displaystyle G=(V,\,\Sigma ,\,::=,\,S)}where From the point of view of categorial grammars, a context-free grammar can be seen as a calculus with a set of special purpose axioms for each language, but with no type construction operators and no inference rules except Cut. Specifically, given a context-free grammar as above, define a categorial grammar(Prim,Σ,◃,S){\displaystyle ({\text{Prim}},\,\Sigma ,\,\triangleleft ,\,S)}wherePrim=V∪Σ{\displaystyle {\text{Prim}}=V\cup \Sigma }, andTp(Prim)=Prim{\displaystyle {\text{Tp}}({\text{Prim}})={\text{Prim}}\,\!}. Let there be an axiomx←x{\displaystyle {x\leftarrow x}}for every symbolx∈V∪Σ{\displaystyle x\in V\cup \Sigma }, an axiomX←Γ{\displaystyle {X\leftarrow \Gamma }}for every production ruleX::=Γ{\displaystyle X::=\Gamma \,\!}, a lexicon entrys◃s{\displaystyle {s\triangleleft s}}for every terminal symbols∈Σ{\displaystyle s\in \Sigma }, and Cut for the only rule. This categorial grammar generates the same language as the given CFG. Of course, this is not a basic categorial grammar, since it has special axioms that depend upon the language; i.e. it is not lexicalized. Also, it makes no use at all of non-primitive types. To show that any context-free language can be generated by a basic categorial grammar, recall that any context-free language can be generated by a context-free grammar inGreibach normal form. 
The grammar is in Greibach normal form if every production rule is of the formA::=sA0…AN−1{\displaystyle A::=sA_{0}\ldots A_{N-1}}, where capital letters are variables,s∈Σ{\displaystyle s\in \Sigma }, andN≥0{\displaystyle N\geq 0}, that is, the right side of the production is a single terminal symbol followed by zero or more (non-terminal) variables. Now given a CFG in Greibach normal form, define a basic categorial grammar with a primitive type for each non-terminal variablePrim=V{\displaystyle {\text{Prim}}=V\,\!}, and with an entry in the lexiconA/AN−1/…/A0◃s{\displaystyle A/A_{N-1}/\ldots /A_{0}\triangleleft s}, for each production ruleA::=sA0…AN−1{\displaystyle A::=sA_{0}\ldots A_{N-1}}. It is fairly easy to see that this basic categorial grammar generates the same language as the original CFG. Note that the lexicon of this grammar will generally assign multiple types to each symbol. The same construction works for Lambek grammars, since they are an extension of basic categorial grammars. It is necessary to verify that the extra inference rules do not change the generated language. This can be done and shows that every context-free language is generated by some Lambek grammar. To show the converse, that every language generated by a Lambek grammar is context-free, is much more difficult. It was an open problem for nearly thirty years, from the early 1960s until about 1991 when it was proven by Pentus. The basic idea is, given a Lambek grammar,(Prim,Σ,◃,S){\displaystyle ({\text{Prim}},\,\Sigma ,\,\triangleleft ,\,S)}construct a context-free grammar(V,Σ,::=,S){\displaystyle (V,\,\Sigma ,\,::=,\,S)}with the same set of terminal symbols, the same start symbol, with variables some (not all) typesV⊆Tp(Prim){\displaystyle V\subseteq {\text{Tp}}({\text{Prim}})\,\!}, and with a production ruleT::=s{\displaystyle T::={\text{s}}\,\!}for each entryT◃s{\displaystyle T\triangleleft {\text{s}}}in the lexicon, and production rulesT::=Γ{\displaystyle T::=\Gamma \,\!}for certain sequentsT←Γ{\displaystyle T\leftarrow \Gamma }that are derivable in the Lambek calculus. Of course, there are infinitely many types and infinitely many derivable sequents, so in order to make a finite grammar it is necessary put a bound on the size of the types and sequents that are needed. The heart of Pentus's proof is to show that there is such a finite bound. The notation in this field is not standardized. The notations used in formal language theory, logic,category theory, and linguistics, conflict with each other. In logic, arrows point to the more general from the more particular, that is, to the conclusion from the hypotheses. In this article, this convention is followed, i.e. the target of the arrow is the more general (inclusive) type. In logic, arrows usually point left to right. In this article this convention is reversed for consistency with the notation of context-free grammars, where the single non-terminal symbol is always on the left. We use the symbol::={\displaystyle ::=}in a production rule as inBackus–Naur form. Some authors use an arrow, which unfortunately may point in either direction, depending on whether the grammar is thought of as generating or recognizing the language. Some authors on categorial grammars writeB∖A{\displaystyle B\backslash A}instead ofA∖B{\displaystyle A\backslash B}. The convention used here follows Lambek and algebra. 
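The two application rules of a basic categorial grammar are easy to mechanize. The following Python sketch (the data encoding and function names are illustrative assumptions) uses the lexicon from the example above and greedily cancels adjacent function–argument pairs; reaching the single type S confirms that "the bad boy made that mess" is a sentence. A full parser would need to search over reduction orders rather than reduce greedily, but the greedy strategy suffices for this example.

```python
# Types: primitives are strings; X/Y is ("/", X, Y); Y\X is ("\\", Y, X).
LEXICON = {
    "the":  ("/", "NP", "N"),
    "that": ("/", "NP", "N"),
    "boy":  "N",
    "mess": "N",
    "bad":  ("/", "N", "N"),
    "made": ("/", ("\\", "NP", "S"), "NP"),   # (NP\S)/NP: transitive verb
}

def step(types):
    """Apply one instance of forward application (X/Y, Y => X) or
    backward application (Y, Y\\X => X), leftmost pair first."""
    for i in range(len(types) - 1):
        left, right = types[i], types[i + 1]
        if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
            return types[:i] + [left[1]] + types[i + 2:]
        if isinstance(right, tuple) and right[0] == "\\" and right[1] == left:
            return types[:i] + [right[2]] + types[i + 2:]
    return None

types = [LEXICON[w] for w in "the bad boy made that mess".split()]
while (reduced := step(types)) is not None:
    types = reduced
print(types)   # ['S'] -- the string is a sentence
```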
The basic ideas of categorial grammar date from work byKazimierz Ajdukiewicz(in 1935) and other scholars from the Polish tradition of mathematical logic includingStanisław Leśniewski,Emil PostandAlfred Tarski. Ajdukiewicz's formal approach to syntax was influenced byEdmund Husserl'spure logical grammar, which was formalized byRudolph Carnap. It represents a development in the historical idea of universal logical grammar as an underlying structure of all languages. A core concept of the approach is the substitutability of syntactic categories—hence the name categorial grammar. The membership of an element (e.g., word or phrase) in a syntactic category (word class, phrase type) is established by thecommutation test, and theformal grammaris constructed through series of such tests.[1] The term categorial grammar was coined byYehoshua Bar-Hillel(in 1953). In 1958,Joachim Lambekintroduced asyntactic calculusthat formalized the functiontype constructorsalong with various rules for the combination of functions. This calculus is a forerunner oflinear logicin that it is asubstructural logic. Montague grammaris based on the same principles as categorial grammar.[2]Montague'swork helped to bolster interest in categorial grammar by associating it with his highly successful formal treatment of natural languagesemantics. Later work in categorial grammar has focused on the improvement of syntactic coverage. One formalism that has received considerable attention in recent years isSteedmanandSzabolcsi'scombinatory categorial grammar, which builds oncombinatory logicinvented byMoses SchönfinkelandHaskell Curry. There are a number of related formalisms of this kind in linguistics, such astype logical grammarandabstract categorial grammar.[3] A variety of changes to categorial grammar have been proposed to improve syntactic coverage. Some of the most common are listed below. Most systems of categorial grammar subdivide categories. The most common way to do this is by tagging them withfeatures, such asperson,gender,number, andtense. Sometimes only atomic categories are tagged in this way. In Montague grammar, it is traditional to subdivide function categories using a multiple slash convention, soA/BandA//Bwould be two distinct categories of left-applying functions, that took the same arguments but could be distinguished between by other functions taking them as arguments. Rules of function composition are included in many categorial grammars. An example of such a rule would be one that allowed the concatenation of a constituent of typeA/Bwith one of typeB/Cto produce a new constituent of typeA/C. The semantics of such a rule would simply involve the composition of the functions involved. Function composition is important in categorial accounts ofconjunctionand extraction, especially as they relate to phenomena likeright node raising. The introduction of function composition into a categorial grammar leads to many kinds of derivational ambiguity that are vacuous in the sense that they do not correspond tosemantic ambiguities. Many categorial grammars include a typical conjunction rule, of the general formX CONJ X → X, whereXis a category. Conjunction can generally be applied to nonstandard constituents resulting from type raising or function composition.. The grammar is extended to handle linguistic phenomena such as discontinuous idioms, gapping and extraction.[4]
https://en.wikipedia.org/wiki/Lambek_calculus
TheMantel test, named afterNathan Mantel, is astatisticaltest of thecorrelationbetween twomatrices. The matrices must be of the same dimension; in most applications, they are matrices of interrelations between the samevectorsof objects. The test was first published byNathan Mantel, a biostatistician at theNational Institutes of Health, in 1967.[1]Accounts of it can be found in advanced statistics books (e.g., Sokal & Rohlf 1995[2]). The test is commonly used inecology, where the data are usually estimates of the "distance" between objects such asspeciesof organisms. For example, one matrix might contain estimates of thegeneticdistances (i.e., the amount of difference between two different genomes) between all possible pairs of species in the study, obtained by the methods ofmolecular systematics; while the other might contain estimates of the geographical distance between the ranges of each species to every other species. In this case, the hypothesis being tested is whether the variation in genetics for these organisms is correlated to the variation in geographical distance. If there arenobjects, and the matrix issymmetrical(so the distance from objectato objectbis the same as the distance frombtoa) such a matrix containsn(n−1)/2{\displaystyle n(n-1)/2}distances. Because distances are not independent of each other – since changing the "position" of one object would changen−1{\displaystyle n-1}of these distances (the distance from that object to each of the others) – we cannot assess the relationship between the two matrices by simply evaluating thecorrelation coefficientbetween the two sets of distances and testing itsstatistical significance. The Mantel test deals with this problem. The procedure adopted is a kind of randomization orpermutation test. The correlation between the two sets ofn(n−1)/2{\displaystyle n(n-1)/2}distances is calculated, and this is both the measure of correlation reported and thetest statisticon which the test is based. In principle, any correlation coefficient could be used, but normally thePearson product-moment correlation coefficientis used. In contrast to the ordinary use of the correlation coefficient, to assess significance of any apparent departure from a zero correlation, the rows and columns of one of the matrices are subjected torandom permutationsmany times, with the correlation being recalculated after each permutation. The significance of the observed correlation is the proportion of such permutations that lead to a higher correlation coefficient. The reasoning is that if thenull hypothesisof there being no relation between the two matrices is true, then permuting the rows and columns of the matrix should be equally likely to produce a larger or a smaller coefficient. In addition to overcoming the problems arising from the statistical dependence of elements within each of the two matrices, use of the permutation test means that no reliance is being placed on assumptions about the statistical distributions of elements in the matrices. Manystatistical packagesinclude routines for carrying out the Mantel test. The various papers introducing the Mantel test (and its extension, the partial Mantel test) lack a clear statistical framework specifying fully the null and alternative hypotheses. This may convey the wrong idea that these tests are universal. For example, the Mantel and partial Mantel tests can be flawed in the presence of spatial auto-correlation and return erroneously low p-values. See, e.g., Guillot and Rousset (2013).[3]
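The permutation procedure described above can be sketched in Python as follows (an illustrative addition, not taken from the cited papers); the function name and defaults are chosen for this example only.

```python
# A rough sketch of the Mantel permutation test using only NumPy. One distance
# matrix is left fixed; the rows and columns of the other are permuted jointly,
# and the p-value is the proportion of permutations whose correlation is at
# least as high as the observed one.
import numpy as np

def mantel_test(d1, d2, permutations=9999, rng=None):
    rng = np.random.default_rng(rng)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)                # the n(n-1)/2 upper-triangle distances
    observed = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(permutations):
        p = rng.permutation(n)
        permuted = d2[p][:, p]                  # same permutation applied to rows and columns
        if np.corrcoef(d1[iu], permuted[iu])[0, 1] >= observed:
            count += 1
    p_value = (count + 1) / (permutations + 1)  # common small-sample correction
    return observed, p_value
```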
https://en.wikipedia.org/wiki/Mantel_test
Morisita's overlap index, named afterMasaaki Morisita, is astatistical measure of dispersionof individuals in a population. It is used to compare overlap amongsamples(Morisita 1959). This formula is based on the assumption that increasing the size of the samples will increase the diversity because it will include different habitats (i.e. different faunas). The index takes the valueCD= 0 if the two samples do not overlap in terms of species, andCD= 1 if the species occur in the same proportions in both samples.[citation needed] A modification of the index was proposed by Horn (1966). Note: this index is not to be confused withMorisita’s index of dispersion.
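Because the formulas themselves are not reproduced above, the following Python sketch uses the formulation in which the index and Horn's modification are usually stated; the exact expressions and function names should be treated as an assumption rather than a quotation of the source.

```python
# x and y are lists of counts per species for the two samples.

def morisita_overlap(x, y):
    X, Y = sum(x), sum(y)
    dx = sum(xi * (xi - 1) for xi in x) / (X * (X - 1))   # Simpson-type term, sample 1
    dy = sum(yi * (yi - 1) for yi in y) / (Y * (Y - 1))   # Simpson-type term, sample 2
    return 2 * sum(xi * yi for xi, yi in zip(x, y)) / ((dx + dy) * X * Y)

def morisita_horn(x, y):
    X, Y = sum(x), sum(y)
    dx = sum(xi * xi for xi in x) / (X * X)               # Horn's modification uses squared proportions
    dy = sum(yi * yi for yi in y) / (Y * Y)
    return 2 * sum(xi * yi for xi, yi in zip(x, y)) / ((dx + dy) * X * Y)

# Identical proportional composition gives 1; disjoint samples give 0.
print(morisita_horn([10, 20, 30], [1, 2, 3]))   # 1.0
print(morisita_horn([10, 0, 0], [0, 5, 5]))     # 0.0
```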
https://en.wikipedia.org/wiki/Morisita%27s_overlap_index
Theoverlap coefficient,[note 1]orSzymkiewicz–Simpson coefficient,[citation needed][3][4][5]is asimilarity measurethat measures the overlap between two finitesets. It is related to theJaccard indexand is defined as the size of theintersectiondivided by the size of the smaller of two sets:overlap⁡(A,B)=|A∩B|min(|A|,|B|){\displaystyle \operatorname {overlap} (A,B)={\frac {|A\cap B|}{\min(|A|,|B|)}}}. Note that0≤overlap⁡(A,B)≤1{\displaystyle 0\leq \operatorname {overlap} (A,B)\leq 1}. If setAis asubsetofBor the converse, then the overlap coefficient is equal to 1.
https://en.wikipedia.org/wiki/Overlap_coefficient
TheRenkonen similarity index(P) is a measure of similarity between twocommunities(sites), based on relative (proportional)abundancespi=ni/∑ni{\displaystyle p_{i}=n_{i}/\sum {n_{i}}}of individuals of compositespecies. It was developed by the botanistOlavi Renkonenand published in 1938.[1] P=∑min(p1;p2){\displaystyle P=\sum {min(p_{1};p_{2})}}, p1{\displaystyle p_{1}}- percentage structure of one set, p2{\displaystyle p_{2}}- percentage structure of the second set. The index ranges from 0 (notaxashared) to 1 (identical proportional abundances).
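A minimal Python sketch of the index as defined above (added for illustration; the function name is chosen for this example only):

```python
# P is the sum over species of the minimum relative abundance in the two sites.

def renkonen(counts1, counts2):
    total1, total2 = sum(counts1), sum(counts2)
    p1 = [n / total1 for n in counts1]   # proportional abundances, site 1
    p2 = [n / total2 for n in counts2]   # proportional abundances, site 2
    return sum(min(a, b) for a, b in zip(p1, p2))

print(renkonen([5, 5, 0], [0, 10, 10]))   # 0.5: only the shared middle species contributes
```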
https://en.wikipedia.org/wiki/Renkonen_similarity_index
TheTversky index, named afterAmos Tversky,[1]is an asymmetricsimilarity measureonsetsthat compares a variant to a prototype. The Tversky index can be seen as a generalization of theSørensen–Dice coefficientand theJaccard index. For setsXandYthe Tversky index is a number between 0 and 1 given by S(X,Y)=|X∩Y||X∩Y|+α|X∖Y|+β|Y∖X|{\displaystyle S(X,Y)={\frac {|X\cap Y|}{|X\cap Y|+\alpha |X\setminus Y|+\beta |Y\setminus X|}}} Here,X∖Y{\displaystyle X\setminus Y}denotes therelative complementof Y in X. Further,α,β≥0{\displaystyle \alpha ,\beta \geq 0}are parameters of the Tversky index. Settingα=β=1{\displaystyle \alpha =\beta =1}produces the Jaccard index; settingα=β=0.5{\displaystyle \alpha =\beta =0.5}produces the Sørensen–Dice coefficient. If we considerXto be the prototype andYto be the variant, thenα{\displaystyle \alpha }corresponds to the weight of the prototype andβ{\displaystyle \beta }corresponds to the weight of the variant. Tversky measures withα+β=1{\displaystyle \alpha +\beta =1}are of special interest.[2] Because of the inherent asymmetry, the Tversky index does not meet the criteria for a similarity metric. However, if symmetry is needed a variant of the original formulation has been proposed usingmaxandminfunctions[3]. S(X,Y)=|X∩Y||X∩Y|+β(αa+(1−α)b){\displaystyle S(X,Y)={\frac {|X\cap Y|}{|X\cap Y|+\beta \left(\alpha a+(1-\alpha )b\right)}}} a=min(|X∖Y|,|Y∖X|){\displaystyle a=\min \left(|X\setminus Y|,|Y\setminus X|\right)}, b=max(|X∖Y|,|Y∖X|){\displaystyle b=\max \left(|X\setminus Y|,|Y\setminus X|\right)}, This formulation also re-arranges parametersα{\displaystyle \alpha }andβ{\displaystyle \beta }. Thus,α{\displaystyle \alpha }controls the balance between|X∖Y|{\displaystyle |X\setminus Y|}and|Y∖X|{\displaystyle |Y\setminus X|}in the denominator. Similarly,β{\displaystyle \beta }controls the effect of the symmetric difference|X△Y|{\displaystyle |X\,\triangle \,Y\,|}versus|X∩Y|{\displaystyle |X\cap Y|}in the denominator.
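A minimal Python sketch (added for illustration, not from the source) of the basic Tversky index on sets, showing the Jaccard and Sørensen–Dice special cases noted above:

```python
def tversky(x, y, alpha, beta):
    inter = len(x & y)
    return inter / (inter + alpha * len(x - y) + beta * len(y - x))

X, Y = {"a", "b", "c", "d"}, {"c", "d", "e"}
print(tversky(X, Y, 1, 1))       # Jaccard: 2 / (2 + 2 + 1) = 0.4
print(tversky(X, Y, 0.5, 0.5))   # Sørensen–Dice: 2 / (2 + 1 + 0.5) ≈ 0.571
print(tversky(X, Y, 0.9, 0.1))   # asymmetric: the prototype X is weighted more heavily
```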
https://en.wikipedia.org/wiki/Tversky_index
Universal adaptive strategy theory(UAST) is an evolutionary theory developed byJ. Philip Grimein collaboration with Simon Pierce describing the general limits toecologyandevolutionbased on thetrade-offthat organisms face when the resources they gain from the environment are allocated between either growth, maintenance or regeneration – known as the universal three-way trade-off. A universal three-waytrade-offproducesadaptive strategiesthroughout the tree of life, with extreme strategies facilitating the survival of genes via: C (competitive), the survival of the individual using traits that maximize resource acquisition and resource control in consistently productiveniches; S (stress-tolerant), individual survival via maintenance of metabolic performance in variable and unproductive niches; or R (ruderal), rapid gene propagation via rapid completion of the lifecycle and regeneration in niches where events are frequently lethal to the individual. It is impossible for an organism to evolve a survival strategy in which all resources are devoted exclusively to one of these investment paths, but relatively extreme strategies exist, with a range of intermediates. The system can be represented by a triangle, with the three extreme possibilities at its vertices. The different species may be located at some particular point inside this triangle, accommodating a certain percentage of each of the three strategies. It is possible to use multivariate statistics to determine the main trends inphenotypic variabilityin a range of organisms, which for various major animal groups (most prominentlyvertebrates), has been shown to have three main endpoints consistent with UAST. UAST is a key part of the twin-filter model describing how species with similar overall strategies but divergent sets of minor traitscoexistin ecological communities. C-S-RTriangle theory is the application of UAST toplant biology. The three strategies are competitor, stress tolerator, and ruderal. These strategies each thrive best in a unique combination of either high or low intensities ofstressanddisturbance. Competitors are plant species that thrive in areas of low intensity stress (moisture deficit) and disturbance and excel inbiological competition. These species are able to outcompete other plants by most efficiently tapping into available resources. Competitors do this through a combination of favorable characteristics, including rapid growth rate, high productivity (growth in height, lateral spread, and root mass), and high capacity forphenotypic plasticity. This last feature allows competitors to be highly flexible in morphology and adjust the allocation of resources throughout the various parts of the plant as needed over the course of the growing season. Stress tolerators are plant species that live in areas of high intensity stress and low intensity disturbance. Species that have adapted this strategy generally have slow growth rates, long-lived leaves, high rates of nutrient retention, and low phenotypic plasticity. Stress tolerators respond to environmental stresses through physiological variability. These species are often found in stressful environments such as alpine or arid habitats, deep shade, nutrient deficient soils, and areas of extreme pH levels. Ruderalsare plant species that prosper in situations of high intensity disturbance and low intensity stress. These species are fast-growing and rapidly complete their life cycles, and generally produce large amounts of seeds. 
Plants that have adapted this strategy are often found colonizing recently disturbed land, and are oftenannuals. Understanding the differences between the CSR theory and its major alternative, theR* theory, has been a major goal incommunity ecologyfor many years.[1][2]Unlike the R* theory, which predicts that competitive ability is determined by the ability to grow under low levels of resources, the CSR theory predicts that competitive ability is determined byrelative growth rateand other size-related traits. While some experiments supported the R* predictions, others supported the CSR predictions.[1]The different predictions stem from different assumptions about thesize asymmetry of the competition. The R* theory assumes that competition is size-symmetric (i.e. resource exploitation is proportional to individual biomass), while the CSR theory assumes that competition is size-asymmetric (i.e. large individuals exploit disproportionately higher amounts of resources compared with smaller individuals).[3]
https://en.wikipedia.org/wiki/Universal_adaptive_strategy_theory_(UAST)
Thesimple matching coefficient (SMC)orRand similarity coefficientis astatisticused for comparing thesimilarityanddiversityofsamplesets.[1][better source needed] Given two objects, A and B, each withnbinary attributes, SMC is defined as:SMC=number of matching attributestotal number of attributes=M00+M11M00+M11+M01+M10{\displaystyle {\begin{aligned}{\text{SMC}}&={\frac {\text{number of matching attributes}}{\text{total number of attributes}}}\\[8pt]&={\frac {M_{00}+M_{11}}{M_{00}+M_{11}+M_{01}+M_{10}}}\end{aligned}}} whereM11{\displaystyle M_{11}}is the number of attributes where A and B both have a value of 1,M00{\displaystyle M_{00}}is the number of attributes where both have a value of 0,M01{\displaystyle M_{01}}is the number of attributes where A has value 0 and B has value 1, andM10{\displaystyle M_{10}}is the number of attributes where A has value 1 and B has value 0. Thesimple matching distance (SMD), which measures dissimilarity between sample sets, is given by1−SMC{\displaystyle 1-{\text{SMC}}}.[2][better source needed] SMC is linearly related to Hamann similarity:SMC=(Hamann+1)/2{\displaystyle {\text{SMC}}=({\text{Hamann}}+1)/2}. Also,SMC=1−D2/n{\displaystyle {\text{SMC}}=1-D^{2}/n}, whereD2{\displaystyle D^{2}}is the squared Euclidean distance between the two objects (binary vectors) andnis the number of attributes. The SMC is very similar to the more popularJaccard index. The main difference is that the SMC has the termM00{\displaystyle M_{00}}in its numerator and denominator, whereas the Jaccard index does not. Thus, the SMC counts both mutual presences (when an attribute is present in both sets) and mutual absences (when an attribute is absent in both sets) as matches and compares them to the total number of attributes in the universe, whereas the Jaccard index only counts mutual presence as matches and compares it to the number of attributes that have been chosen by at least one of the two sets. In market basket analysis, for example, the basket of two consumers whom we wish to compare might only contain a small fraction of all the available products in the store, so the SMC will usually return very high values of similarities even when the baskets bear very little resemblance, thus making the Jaccard index a more appropriate measure of similarity in that context. For example, consider a supermarket with 1000 products and two customers. The basket of the first customer contains salt and pepper and the basket of the second contains salt and sugar. In this scenario, the similarity between the two baskets as measured by the Jaccard index would be 1/3, but the similarity becomes 0.998 using the SMC. In other contexts, where 0 and 1 carry equivalent information (symmetry), the SMC is a better measure of similarity. For example, vectors of demographic variables stored indummy variables, such as binary gender, would be better compared with the SMC than with the Jaccard index since the impact of gender on similarity should be equal, independently of whether male is defined as a 0 and female as a 1 or the other way around. However, when we have symmetric dummy variables, one could replicate the behaviour of the SMC by splitting the dummies into two binary attributes (in this case, male and female), thus transforming them into asymmetric attributes, allowing the use of the Jaccard index without introducing any bias. By using this trick, the SMC can be considered fully redundant, since its behaviour can always be replicated with the Jaccard index. The SMC remains, however, more computationally efficient in the case of symmetric dummy variables since it does not require adding extra dimensions. The Jaccard index is also more general than the SMC and can be used to compare other data types than just vectors of binary attributes, such asprobability measures.
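The supermarket example above can be reproduced with a short Python sketch (an illustrative addition, not from the cited sources):

```python
# Two baskets over 1000 products: {salt, pepper} and {salt, sugar}.

def smc(a, b):
    matches = sum(x == y for x, y in zip(a, b))        # M00 + M11
    return matches / len(a)

def jaccard(a, b):
    m11 = sum(x == 1 and y == 1 for x, y in zip(a, b))
    mismatches = sum(x != y for x, y in zip(a, b))      # M01 + M10
    return m11 / (m11 + mismatches)

n = 1000
basket1 = [0] * n
basket2 = [0] * n
basket1[0] = basket1[1] = 1          # salt, pepper
basket2[0] = basket2[2] = 1          # salt, sugar
print(jaccard(basket1, basket2))     # 1/3 ≈ 0.333
print(smc(basket1, basket2))         # 998/1000 = 0.998
```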
https://en.wikipedia.org/wiki/Simple_matching_coefficient
Inprobability theoryandinformation theory, themutual information(MI) of tworandom variablesis a measure of the mutualdependencebetween the two variables. More specifically, it quantifies the "amount of information" (inunitssuch asshannons(bits),natsorhartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that ofentropyof a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable. Not limited to real-valued random variables and linear dependence like thecorrelation coefficient, MI is more general and determines how different thejoint distributionof the pair(X,Y){\displaystyle (X,Y)}is from the product of the marginal distributions ofX{\displaystyle X}andY{\displaystyle Y}. MI is theexpected valueof thepointwise mutual information(PMI). The quantity was defined and analyzed byClaude Shannonin his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later byRobert Fano.[2]Mutual information is also known asinformation gain. Let(X,Y){\displaystyle (X,Y)}be a pair ofrandom variableswith values over the spaceX×Y{\displaystyle {\mathcal {X}}\times {\mathcal {Y}}}. If their joint distribution isP(X,Y){\displaystyle P_{(X,Y)}}and the marginal distributions arePX{\displaystyle P_{X}}andPY{\displaystyle P_{Y}}, the mutual information is defined asI⁡(X;Y)=DKL(P(X,Y)∥PX⊗PY){\displaystyle \operatorname {I} (X;Y)=D_{\mathrm {KL} }\left(P_{(X,Y)}\parallel P_{X}\otimes P_{Y}\right)}, whereDKL{\displaystyle D_{\mathrm {KL} }}is theKullback–Leibler divergence, andPX⊗PY{\displaystyle P_{X}\otimes P_{Y}}is theouter productdistribution which assigns probabilityPX(x)⋅PY(y){\displaystyle P_{X}(x)\cdot P_{Y}(y)}to each(x,y){\displaystyle (x,y)}. Expressed in terms of theentropyH(⋅){\displaystyle H(\cdot )}and theconditional entropyH(⋅|⋅){\displaystyle H(\cdot |\cdot )}of the random variablesX{\displaystyle X}andY{\displaystyle Y}, one also has (Seerelation to conditional and joint entropy):I⁡(X;Y)=H(X)−H(X∣Y)=H(Y)−H(Y∣X){\displaystyle \operatorname {I} (X;Y)=\mathrm {H} (X)-\mathrm {H} (X\mid Y)=\mathrm {H} (Y)-\mathrm {H} (Y\mid X)}. Notice, as per property of theKullback–Leibler divergence, thatI(X;Y){\displaystyle I(X;Y)}is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. whenX{\displaystyle X}andY{\displaystyle Y}are independent (and hence observingY{\displaystyle Y}tells you nothing aboutX{\displaystyle X}).I(X;Y){\displaystyle I(X;Y)}is non-negative; it is a measure of the price for encoding(X,Y){\displaystyle (X,Y)}as a pair of independent random variables when in reality they are not. If thenatural logarithmis used, the unit of mutual information is thenat. If thelog base2 is used, the unit of mutual information is theshannon, also known as the bit. If thelog base10 is used, the unit of mutual information is thehartley, also known as the ban or the dit. The mutual information of two jointly discrete random variablesX{\displaystyle X}andY{\displaystyle Y}is calculated as a double sum:[3]: 20 I⁡(X;Y)=∑y∈Y∑x∈XP(X,Y)(x,y)log⁡P(X,Y)(x,y)PX(x)PY(y){\displaystyle \operatorname {I} (X;Y)=\sum _{y\in {\mathcal {Y}}}\sum _{x\in {\mathcal {X}}}P_{(X,Y)}(x,y)\log {\frac {P_{(X,Y)}(x,y)}{P_{X}(x)\,P_{Y}(y)}}}, whereP(X,Y){\displaystyle P_{(X,Y)}}is thejoint probabilitymassfunctionofX{\displaystyle X}andY{\displaystyle Y}, andPX{\displaystyle P_{X}}andPY{\displaystyle P_{Y}}are themarginal probabilitymass functions ofX{\displaystyle X}andY{\displaystyle Y}respectively.
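The discrete double-sum definition can be illustrated with a short Python sketch (added here, not part of the source text), which computes the mutual information of a joint probability table:

```python
import numpy as np

def mutual_information(joint, base=2.0):
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)      # marginal of X (rows)
    py = joint.sum(axis=0, keepdims=True)      # marginal of Y (columns)
    mask = joint > 0                           # 0 * log 0 is taken as 0
    ratio = joint[mask] / (px @ py)[mask]
    return float(np.sum(joint[mask] * np.log(ratio)) / np.log(base))

# A perfectly dependent pair: knowing X determines Y, so I(X;Y) = H(X) = 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
# An independent pair has zero mutual information.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```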
In the case of jointly continuous random variables, the double sum is replaced by adouble integral:[3]: 251 I⁡(X;Y)=∫Y∫XP(X,Y)(x,y)log⁡P(X,Y)(x,y)PX(x)PY(y)dxdy{\displaystyle \operatorname {I} (X;Y)=\int _{\mathcal {Y}}\int _{\mathcal {X}}P_{(X,Y)}(x,y)\log {\frac {P_{(X,Y)}(x,y)}{P_{X}(x)\,P_{Y}(y)}}\;dx\,dy}, whereP(X,Y){\displaystyle P_{(X,Y)}}is now the joint probabilitydensityfunction ofX{\displaystyle X}andY{\displaystyle Y}, andPX{\displaystyle P_{X}}andPY{\displaystyle P_{Y}}are the marginal probability density functions ofX{\displaystyle X}andY{\displaystyle Y}respectively. Intuitively, mutual information measures the information thatX{\displaystyle X}andY{\displaystyle Y}share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, ifX{\displaystyle X}andY{\displaystyle Y}are independent, then knowingX{\displaystyle X}does not give any information aboutY{\displaystyle Y}and vice versa, so their mutual information is zero. At the other extreme, ifX{\displaystyle X}is a deterministic function ofY{\displaystyle Y}andY{\displaystyle Y}is a deterministic function ofX{\displaystyle X}then all information conveyed byX{\displaystyle X}is shared withY{\displaystyle Y}: knowingX{\displaystyle X}determines the value ofY{\displaystyle Y}and vice versa. As a result, the mutual information is the same as the uncertainty contained inY{\displaystyle Y}(orX{\displaystyle X}) alone, namely theentropyofY{\displaystyle Y}(orX{\displaystyle X}). A very special case of this is whenX{\displaystyle X}andY{\displaystyle Y}are the same random variable. Mutual information is a measure of the inherent dependence expressed in thejoint distributionofX{\displaystyle X}andY{\displaystyle Y}relative to the marginal distribution ofX{\displaystyle X}andY{\displaystyle Y}under the assumption of independence. Mutual information therefore measures dependence in the following sense:I⁡(X;Y)=0{\displaystyle \operatorname {I} (X;Y)=0}if and only ifX{\displaystyle X}andY{\displaystyle Y}are independent random variables. This is easy to see in one direction: ifX{\displaystyle X}andY{\displaystyle Y}are independent, thenp(X,Y)(x,y)=pX(x)⋅pY(y){\displaystyle p_{(X,Y)}(x,y)=p_{X}(x)\cdot p_{Y}(y)}, and therefore:log⁡p(X,Y)(x,y)pX(x)pY(y)=log⁡1=0{\displaystyle \log {\frac {p_{(X,Y)}(x,y)}{p_{X}(x)\,p_{Y}(y)}}=\log 1=0}, so every term of the sum (or integral) vanishes andI⁡(X;Y)=0{\displaystyle \operatorname {I} (X;Y)=0}. Moreover, mutual information is nonnegative (i.e.I⁡(X;Y)≥0{\displaystyle \operatorname {I} (X;Y)\geq 0}see below) andsymmetric(i.e.I⁡(X;Y)=I⁡(Y;X){\displaystyle \operatorname {I} (X;Y)=\operatorname {I} (Y;X)}see below). UsingJensen's inequalityon the definition of mutual information we can show thatI⁡(X;Y){\displaystyle \operatorname {I} (X;Y)}is non-negative, i.e.[3]: 28 I⁡(X;Y)≥0{\displaystyle \operatorname {I} (X;Y)\geq 0}. The proof is given considering the relationship with entropy, as shown below. IfC{\displaystyle C}is independent of(A,B){\displaystyle (A,B)}, then Mutual information can be equivalently expressed as:I⁡(X;Y)=H(X)−H(X∣Y)=H(Y)−H(Y∣X)=H(X)+H(Y)−H(X,Y)=H(X,Y)−H(X∣Y)−H(Y∣X){\displaystyle \operatorname {I} (X;Y)=\mathrm {H} (X)-\mathrm {H} (X\mid Y)=\mathrm {H} (Y)-\mathrm {H} (Y\mid X)=\mathrm {H} (X)+\mathrm {H} (Y)-\mathrm {H} (X,Y)=\mathrm {H} (X,Y)-\mathrm {H} (X\mid Y)-\mathrm {H} (Y\mid X)}, whereH(X){\displaystyle \mathrm {H} (X)}andH(Y){\displaystyle \mathrm {H} (Y)}are the marginalentropies,H(X∣Y){\displaystyle \mathrm {H} (X\mid Y)}andH(Y∣X){\displaystyle \mathrm {H} (Y\mid X)}are theconditional entropies, andH(X,Y){\displaystyle \mathrm {H} (X,Y)}is thejoint entropyofX{\displaystyle X}andY{\displaystyle Y}. Notice the analogy to the union, difference, and intersection of two sets: in this respect, all the formulas given above are apparent from the Venn diagram reported at the beginning of the article. In terms of a communication channel in which the outputY{\displaystyle Y}is a noisy version of the inputX{\displaystyle X}, these relations are often summarised in a figure. BecauseI⁡(X;Y){\displaystyle \operatorname {I} (X;Y)}is non-negative,H(X)≥H(X∣Y){\displaystyle \mathrm {H} (X)\geq \mathrm {H} (X\mid Y)}.
Here we give the detailed deduction ofI⁡(X;Y)=H(Y)−H(Y∣X){\displaystyle \operatorname {I} (X;Y)=\mathrm {H} (Y)-\mathrm {H} (Y\mid X)}for the case of jointly discrete random variables: The proofs of the other identities above are similar. The proof of the general case (not just discrete) is similar, with integrals replacing sums. Intuitively, if entropyH(Y){\displaystyle \mathrm {H} (Y)}is regarded as a measure of uncertainty about a random variable, thenH(Y∣X){\displaystyle \mathrm {H} (Y\mid X)}is a measure of whatX{\displaystyle X}doesnotsay aboutY{\displaystyle Y}. This is "the amount of uncertainty remaining aboutY{\displaystyle Y}afterX{\displaystyle X}is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty inY{\displaystyle Y}, minus the amount of uncertainty inY{\displaystyle Y}which remains afterX{\displaystyle X}is known", which is equivalent to "the amount of uncertainty inY{\displaystyle Y}which is removed by knowingX{\displaystyle X}". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other. Note that in the discrete caseH(Y∣Y)=0{\displaystyle \mathrm {H} (Y\mid Y)=0}and thereforeH(Y)=I⁡(Y;Y){\displaystyle \mathrm {H} (Y)=\operatorname {I} (Y;Y)}. ThusI⁡(Y;Y)≥I⁡(X;Y){\displaystyle \operatorname {I} (Y;Y)\geq \operatorname {I} (X;Y)}, and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide. For jointly discrete or jointly continuous pairs(X,Y){\displaystyle (X,Y)}, mutual information is theKullback–Leibler divergencefrom the product of themarginal distributions,pX⋅pY{\displaystyle p_{X}\cdot p_{Y}}, of thejoint distributionp(X,Y){\displaystyle p_{(X,Y)}}, that is, Furthermore, letp(X,Y)(x,y)=pX∣Y=y(x)∗pY(y){\displaystyle p_{(X,Y)}(x,y)=p_{X\mid Y=y}(x)*p_{Y}(y)}be the conditional mass or density function. Then, we have the identity The proof for jointly discrete random variables is as follows: Similarly this identity can be established for jointly continuous random variables. Note that here the Kullback–Leibler divergence involves integration over the values of the random variableX{\displaystyle X}only, and the expressionDKL(pX∣Y∥pX){\displaystyle D_{\text{KL}}(p_{X\mid Y}\parallel p_{X})}still denotes a random variable becauseY{\displaystyle Y}is random. Thus mutual information can also be understood as theexpectationof the Kullback–Leibler divergence of theunivariate distributionpX{\displaystyle p_{X}}ofX{\displaystyle X}from theconditional distributionpX∣Y{\displaystyle p_{X\mid Y}}ofX{\displaystyle X}givenY{\displaystyle Y}: the more different the distributionspX∣Y{\displaystyle p_{X\mid Y}}andpX{\displaystyle p_{X}}are on average, the greater theinformation gain. If samples from a joint distribution are available, a Bayesian approach can be used to estimate the mutual information of that distribution. The first work to do this, which also showed how to do Bayesian estimation of many other information-theoretic properties besides mutual information, was.[5]Subsequent researchers have rederived[6]and extended[7]this analysis. See[8]for a recent paper based on a prior specifically tailored to estimation of mutual information per se. 
Besides, recently an estimation method accounting for continuous and multivariate outputs,Y{\displaystyle Y}, was proposed in .[9] The Kullback-Leibler divergence formulation of the mutual information is predicated on that one is interested in comparingp(x,y){\displaystyle p(x,y)}to the fully factorizedouter productp(x)⋅p(y){\displaystyle p(x)\cdot p(y)}. In many problems, such asnon-negative matrix factorization, one is interested in less extreme factorizations; specifically, one wishes to comparep(x,y){\displaystyle p(x,y)}to a low-rank matrix approximation in some unknown variablew{\displaystyle w}; that is, to what degree one might have Alternately, one might be interested in knowing how much more informationp(x,y){\displaystyle p(x,y)}carries over its factorization. In such a case, the excess information that the full distributionp(x,y){\displaystyle p(x,y)}carries over the matrix factorization is given by the Kullback-Leibler divergence The conventional definition of the mutual information is recovered in the extreme case that the processW{\displaystyle W}has only one value forw{\displaystyle w}. Several variations on mutual information have been proposed to suit various needs. Among these are normalized variants and generalizations to more than two variables. Many applications require ametric, that is, a distance measure between pairs of points. The quantity satisfies the properties of a metric (triangle inequality,non-negativity,indiscernabilityand symmetry), where equalityX=Y{\displaystyle X=Y}is understood to mean thatX{\displaystyle X}can be completely determined fromY{\displaystyle Y}.[10] This distance metric is also known as thevariation of information. IfX,Y{\displaystyle X,Y}are discrete random variables then all the entropy terms are non-negative, so0≤d(X,Y)≤H(X,Y){\displaystyle 0\leq d(X,Y)\leq \mathrm {H} (X,Y)}and one can define a normalized distance Plugging in the definitions shows that This is known as the Rajski Distance.[11]In a set-theoretic interpretation of information (see the figure forConditional entropy), this is effectively theJaccard distancebetweenX{\displaystyle X}andY{\displaystyle Y}. Finally, is also a metric. Sometimes it is useful to express the mutual information of two random variables conditioned on a third. For jointlydiscrete random variablesthis takes the form which can be simplified as For jointlycontinuous random variablesthis takes the form which can be simplified as Conditioning on a third random variable may either increase or decrease the mutual information, but it is always true that for discrete, jointly distributed random variablesX,Y,Z{\displaystyle X,Y,Z}. This result has been used as a basic building block for proving otherinequalities in information theory. Several generalizations of mutual information to more than two random variables have been proposed, such astotal correlation(or multi-information) anddual total correlation. The expression and study of multivariate higher-degree mutual information was achieved in two seemingly independent works: McGill (1954)[12]who called these functions "interaction information", and Hu Kuo Ting (1962).[13]Interaction information is defined for one variable as follows: and forn>1,{\displaystyle n>1,} Some authors reverse the order of the terms on the right-hand side of the preceding equation, which changes the sign when the number of random variables is odd. (And in this case, the single-variable expression becomes the negative of the entropy.) 
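The metric formulas themselves are not reproduced in the extract above; the following Python sketch therefore uses the forms in which the variation of information and its normalization are usually stated, so the exact expressions should be treated as an assumption rather than a quotation of the source.

```python
#   d(X, Y) = H(X, Y) - I(X; Y)          (variation of information)
#   D(X, Y) = d(X, Y) / H(X, Y)          (normalized, Rajski-style distance)
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def variation_of_information(joint):
    joint = np.asarray(joint, dtype=float)
    hx = entropy(joint.sum(axis=1))
    hy = entropy(joint.sum(axis=0))
    hxy = entropy(joint)
    mi = hx + hy - hxy                  # I(X;Y) = H(X) + H(Y) - H(X,Y)
    d = hxy - mi
    return d, d / hxy                   # absolute and normalized distance

print(variation_of_information([[0.5, 0.0], [0.0, 0.5]]))      # (0.0, 0.0): X determines Y
print(variation_of_information([[0.25, 0.25], [0.25, 0.25]]))  # (2.0, 1.0): independent
```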
Note that the multivariate mutual information functions generalize the pairwise independence case – which states thatX1{\displaystyle X_{1}}andX2{\displaystyle X_{2}}are independent if and only ifI(X1;X2)=0{\displaystyle I(X_{1};X_{2})=0}– to arbitrarily many variables. n variables are mutually independent if and only if the2n−n−1{\displaystyle 2^{n}-n-1}mutual information functions vanishI(X1;…;Xk)=0{\displaystyle I(X_{1};\ldots ;X_{k})=0}withn≥k≥2{\displaystyle n\geq k\geq 2}(theorem 2[14]). In this sense, the conditionI(X1;…;Xk)=0{\displaystyle I(X_{1};\ldots ;X_{k})=0}can be used as a refined statistical independence criterion. For 3 variables, Brenner et al. applied multivariate mutual information toneural codingand called its negativity "synergy"[15]and Watkinson et al. applied it to genetic expression.[16]For arbitrary k variables, Tapia et al. applied multivariate mutual information to gene expression.[17][14]It can be zero, positive, or negative.[13]The positivity corresponds to relations generalizing the pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high dimensional "emergent" relations and clustered datapoints.[17] One high-dimensional generalization scheme which maximizes the mutual information between the joint distribution and other target variables is found to be useful infeature selection.[18] Mutual information is also used in the area of signal processing as ameasure of similaritybetween two signals. For example, FMI metric[19]is an image fusion performance measure that makes use of mutual information in order to measure the amount of information that the fused image contains about the source images. TheMatlabcode for this metric can be found at.[20]A Python package for computing all multivariate mutual informations,conditional mutual information, joint entropies, total correlations, information distance in a dataset of n variables is available.[21] Directed information,I⁡(Xn→Yn){\displaystyle \operatorname {I} \left(X^{n}\to Y^{n}\right)}, measures the amount of information that flows from the processXn{\displaystyle X^{n}}toYn{\displaystyle Y^{n}}, whereXn{\displaystyle X^{n}}denotes the vectorX1,X2,...,Xn{\displaystyle X_{1},X_{2},...,X_{n}}andYn{\displaystyle Y^{n}}denotesY1,Y2,...,Yn{\displaystyle Y_{1},Y_{2},...,Y_{n}}. The termdirected informationwas coined byJames Masseyand is defined asI⁡(Xn→Yn)=∑i=1nI⁡(Xi;Yi∣Yi−1){\displaystyle \operatorname {I} \left(X^{n}\to Y^{n}\right)=\sum _{i=1}^{n}\operatorname {I} \left(X^{i};Y_{i}\mid Y^{i-1}\right)}. Note that ifn=1{\displaystyle n=1}, the directed information becomes the mutual information. Directed information has many applications in problems wherecausalityplays an important role, such ascapacity of channelwith feedback.[22][23] Normalized variants of the mutual information are provided by thecoefficients of constraint,[24]uncertainty coefficient[25]or proficiency:[26]CXY=I⁡(X;Y)H(Y){\displaystyle C_{XY}={\frac {\operatorname {I} (X;Y)}{\mathrm {H} (Y)}}}andCYX=I⁡(X;Y)H(X){\displaystyle C_{YX}={\frac {\operatorname {I} (X;Y)}{\mathrm {H} (X)}}}. The two coefficients have a value ranging in [0, 1], but are not necessarily equal. This measure is not symmetric. If one desires a symmetric measure they can consider the followingredundancymeasure:R=I⁡(X;Y)H(X)+H(Y){\displaystyle R={\frac {\operatorname {I} (X;Y)}{\mathrm {H} (X)+\mathrm {H} (Y)}}}, which attains a minimum of zero when the variables are independent and a maximum value ofRmax=min(H(X),H(Y))H(X)+H(Y){\displaystyle R_{\max }={\frac {\min \left(\mathrm {H} (X),\mathrm {H} (Y)\right)}{\mathrm {H} (X)+\mathrm {H} (Y)}}}when one variable becomes completely redundant with the knowledge of the other. See alsoRedundancy (information theory).
Another symmetrical measure is thesymmetric uncertainty(Witten & Frank 2005), given byU(X,Y)=2I⁡(X;Y)H(X)+H(Y){\displaystyle U(X,Y)=2\,{\frac {\operatorname {I} (X;Y)}{\mathrm {H} (X)+\mathrm {H} (Y)}}}, which represents theharmonic meanof the two uncertainty coefficientsCXY,CYX{\displaystyle C_{XY},C_{YX}}.[25] If we consider mutual information as a special case of thetotal correlationordual total correlation, the normalized versions are, respectively,I⁡(X;Y)min(H(X),H(Y)){\displaystyle {\frac {\operatorname {I} (X;Y)}{\min \left(\mathrm {H} (X),\mathrm {H} (Y)\right)}}}andI⁡(X;Y)H(X,Y){\displaystyle {\frac {\operatorname {I} (X;Y)}{\mathrm {H} (X,Y)}}}. The latter normalized version, also known as theInformation Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty:[27]IQR⁡(X,Y)=I⁡(X;Y)H(X,Y){\displaystyle \operatorname {IQR} (X,Y)={\frac {\operatorname {I} (X;Y)}{\mathrm {H} (X,Y)}}}. There is a normalization[28]which derives from first thinking of mutual information as an analogue tocovariance(thusShannon entropyis analogous tovariance). Then the normalized mutual information is calculated akin to thePearson correlation coefficient:I⁡(X;Y)H(X)H(Y){\displaystyle {\frac {\operatorname {I} (X;Y)}{\sqrt {\mathrm {H} (X)\,\mathrm {H} (Y)}}}}. In the traditional formulation of the mutual information, eacheventorobjectspecified by(x,y){\displaystyle (x,y)}is weighted by the corresponding probabilityp(x,y){\displaystyle p(x,y)}. This assumes that all objects or events are equivalentapart fromtheir probability of occurrence. However, in some applications it may be the case that certain objects or events are moresignificantthan others, or that certain patterns of association are more semantically important than others. For example, the deterministic mapping{(1,1),(2,2),(3,3)}{\displaystyle \{(1,1),(2,2),(3,3)\}}may be viewed as stronger than the deterministic mapping{(1,3),(2,1),(3,2)}{\displaystyle \{(1,3),(2,1),(3,2)\}}, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values (Cronbach 1954,Coombs, Dawes & Tversky 1970,Lockhead 1970), and is therefore not sensitive at all to theformof the relational mapping between the associated variables. If it is desired that the former relation—showing agreement on all variable values—be judged stronger than the latter relation, then it is possible to use the followingweighted mutual information(Guiasu 1977):Iw⁡(X;Y)=∑y∈Y∑x∈Xw(x,y)p(x,y)log⁡p(x,y)p(x)p(y){\displaystyle \operatorname {I} _{w}(X;Y)=\sum _{y\in {\mathcal {Y}}}\sum _{x\in {\mathcal {X}}}w(x,y)\,p(x,y)\log {\frac {p(x,y)}{p(x)\,p(y)}}}, which places a weightw(x,y){\displaystyle w(x,y)}on the probability of each variable value co-occurrence,p(x,y){\displaystyle p(x,y)}. This allows certain probabilities to carry more or less significance than others, thereby allowing the quantification of relevantholisticorPrägnanzfactors. In the above example, using larger relative weights forw(1,1){\displaystyle w(1,1)},w(2,2){\displaystyle w(2,2)}, andw(3,3){\displaystyle w(3,3)}would have the effect of assessing greaterinformativenessfor the relation{(1,1),(2,2),(3,3)}{\displaystyle \{(1,1),(2,2),(3,3)\}}than for the relation{(1,3),(2,1),(3,2)}{\displaystyle \{(1,3),(2,1),(3,2)\}}, which may be desirable in some cases of pattern recognition, and the like. This weighted mutual information is a form of weighted KL-Divergence, which is known to take negative values for some inputs,[29]and there are examples where the weighted mutual information also takes negative values.[30] A probability distribution can be viewed as apartition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be? Theadjusted mutual informationor AMI subtracts the expectation value of the MI, so that the AMI is zero when two different distributions are random, and one when two distributions are identical. The AMI is defined in analogy to theadjusted Rand indexof two different partitions of a set.
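Two of the normalized variants discussed above can be computed with a short Python sketch (an illustrative addition; the normalizations shown are the commonly used forms and the function names are chosen for this example only):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def normalized_mi(joint):
    joint = np.asarray(joint, dtype=float)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    hx, hy, hxy = entropy(px), entropy(py), entropy(joint.ravel())
    mi = hx + hy - hxy                              # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return {"symmetric_uncertainty": 2 * mi / (hx + hy),   # harmonic-mean form
            "pearson_style": mi / np.sqrt(hx * hy)}        # covariance-analogue form

print(normalized_mi(np.array([[0.5, 0.25], [0.0, 0.25]])))
```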
Using the ideas ofKolmogorov complexity, one can consider the mutual information of two sequences independent of any probability distribution: To establish that this quantity is symmetric up to a logarithmic factor (IK⁡(X;Y)≈IK⁡(Y;X){\displaystyle \operatorname {I} _{K}(X;Y)\approx \operatorname {I} _{K}(Y;X)}) one requires thechain rule for Kolmogorov complexity(Li & Vitányi 1997). Approximations of this quantity viacompressioncan be used to define adistance measureto perform ahierarchical clusteringof sequences without having anydomain knowledgeof the sequences (Cilibrasi & Vitányi 2005). Unlike correlation coefficients, such as theproduct moment correlation coefficient, mutual information contains information about all dependence—linear and nonlinear—and not just linear dependence as the correlation coefficient measures. However, in the narrow case that the joint distribution forX{\displaystyle X}andY{\displaystyle Y}is abivariate normal distribution(implying in particular that both marginal distributions are normally distributed), there is an exact relationship betweenI{\displaystyle \operatorname {I} }and the correlation coefficientρ{\displaystyle \rho }(Gel'fand & Yaglom 1957):I⁡(X;Y)=−12log⁡(1−ρ2){\displaystyle \operatorname {I} (X;Y)=-{\frac {1}{2}}\log \left(1-\rho ^{2}\right)}. The equation above can be derived as follows for a bivariate Gaussian:H(Xi)=12log⁡(2πeσi2){\displaystyle \mathrm {H} (X_{i})={\frac {1}{2}}\log \left(2\pi e\sigma _{i}^{2}\right)}andH(X1,X2)=12log⁡[(2πe)2σ12σ22(1−ρ2)]{\displaystyle \mathrm {H} (X_{1},X_{2})={\frac {1}{2}}\log \left[(2\pi e)^{2}\sigma _{1}^{2}\sigma _{2}^{2}(1-\rho ^{2})\right]}. Therefore,I⁡(X1;X2)=H(X1)+H(X2)−H(X1,X2)=−12log⁡(1−ρ2){\displaystyle \operatorname {I} (X_{1};X_{2})=\mathrm {H} (X_{1})+\mathrm {H} (X_{2})-\mathrm {H} (X_{1},X_{2})=-{\frac {1}{2}}\log \left(1-\rho ^{2}\right)}. WhenX{\displaystyle X}andY{\displaystyle Y}are limited to be in a discrete number of states, observation data is summarized in acontingency table, with row variableX{\displaystyle X}(ori{\displaystyle i}) and column variableY{\displaystyle Y}(orj{\displaystyle j}). Mutual information is one of the measures ofassociationorcorrelationbetween the row and column variables. Other measures of association includePearson's chi-squared teststatistics,G-teststatistics, etc. In fact, with the same log base, mutual information will be equal to theG-testlog-likelihood statistic divided by2N{\displaystyle 2N}, whereN{\displaystyle N}is the sample size. In many applications, one wants to maximize mutual information (thus increasing dependencies), which is often equivalent to minimizingconditional entropy. Examples include:
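The stated relation between mutual information and the G-test statistic can be checked numerically with a short Python sketch (an illustrative addition using a made-up contingency table):

```python
import numpy as np

counts = np.array([[30.0, 10.0],
                   [ 5.0, 55.0]])           # hypothetical 2x2 contingency table
N = counts.sum()
p = counts / N                               # empirical joint distribution
px = p.sum(axis=1, keepdims=True)
py = p.sum(axis=0, keepdims=True)

mi_nats = np.sum(p * np.log(p / (px @ py)))                      # I(X;Y) in nats

expected = (counts.sum(axis=1, keepdims=True)
            @ counts.sum(axis=0, keepdims=True)) / N
G = 2.0 * np.sum(counts * np.log(counts / expected))             # G-test statistic

print(mi_nats, G / (2 * N))   # with natural logarithms the two numbers agree
```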
https://en.wikipedia.org/wiki/Mutual_information#Metric
Acousticsis a branch ofphysicsthat deals with the study ofmechanical wavesin gases, liquids, and solids including topics such asvibration,sound,ultrasoundandinfrasound. A scientist who works in the field of acoustics is anacousticianwhile someone working in the field of acoustics technology may be called anacoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio andnoise controlindustries. Hearingis one of the most crucial means of survival in the animal world andspeechis one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge.Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.[1] The word "acoustic" is derived from theGreekword ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear"[2]and that from ἀκουστός (akoustos), "heard, audible",[3]which in turn derives from the verb ἀκούω(akouo), "I hear".[4] The Latin synonym is "sonic", after which the termsonicsused to be a synonym for acoustics[5]and later a branch of acoustics.[5]Frequenciesabove and below theaudible rangeare called "ultrasonic" and "infrasonic", respectively. In the 6th century BC, the ancient Greek philosopherPythagoraswanted to know why somecombinations of musical soundsseemed more beautiful than others, and he found answers in terms of numerical ratios representing theharmonicovertone serieson a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious, and the smaller the integers the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length (other factors being equal). In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system ofmusical tuning, the tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order.[6] Aristotle(384–322 BC) understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...",[7][8]a very good expression of the nature ofwavemotion.On Things Heard, generally ascribed toStrato of Lampsacus, states that the pitch is related to the frequency of vibrations of the air and to the speed of sound.[9] In about 20 BC, the Roman architect and engineerVitruviuswrote a treatise on the acoustic properties of theaters including discussion of interference, echoes, and reverberation—the beginnings ofarchitectural acoustics.[10]In Book V of hisDe architectura(The Ten Books of Architecture) Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions, which, when interrupted by obstructions, would flow back and break up following waves. 
He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound and also recommended bronze vessels (echea) of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes.[11][12][13] During theIslamic golden age, Abū Rayhān al-Bīrūnī (973–1048) is believed to have postulated that the speed of sound was much slower than the speed of light.[14][15] The physical understanding of acoustical processes advanced rapidly during and after theScientific Revolution. MainlyGalileo Galilei(1564–1642) but alsoMarin Mersenne(1588–1648), independently, discovered the completelaws of vibrating strings(completing what Pythagoras and Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by thevibrationsof a sonorous body, which spread through the air, bringing to the tympanum of theeara stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of thespeed of soundin air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Inspired by Mersenne'sHarmonie universelle(Universal Harmony) or 1634, the Rome-based Jesuit scholarAthanasius Kircherundertook research in acoustics.[16]Kircher published two major books on acoustics: theMusurgia universalis(Universal Music-Making) in 1650[17]and thePhonurgia nova(New Sound-Making) in 1673.[18]Meanwhile,Newton(1642–1727) derived the relationship for wave velocity in solids, a cornerstone ofphysical acoustics(Principia, 1687). Substantial progress in acoustics, resting on firmer mathematical and physical concepts, was made during the eighteenth century byEuler(1707–1783),Lagrange(1736–1813), andd'Alembert(1717–1783). During this era, continuum physics, or field theory, began to receive a definite mathematical structure. The wave equation emerged in a number of contexts, including the propagation of sound in air.[19] In the nineteenth century the major figures of mathematical acoustics wereHelmholtzin Germany, who consolidated the field of physiological acoustics, andLord Rayleighin England, who combined the previous knowledge with his own copious contributions to the field in his monumental workThe Theory of Sound(1877). Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics. The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application wasSabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the first World War.Sound recordingand the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use. Acoustics is defined byANSI/ASA S1.1-2013as "(a) Science ofsound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects." 
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations. The steps shown in the above diagram can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into sonic energy, producing a sound wave. There is one fundamental equation that describes sound wave propagation, theacoustic wave equation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about anearthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert. The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. Influids, sound propagates primarily as apressure wave. In solids, mechanical waves can take many forms includinglongitudinal waves,transverse wavesandsurface waves. Acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment. This interaction can be described as either adiffraction,interferenceor areflectionor a mix of the three. If severalmediaare present, arefractioncan also occur. Transduction processes are also of special importance to acoustics. In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as thethreshold of hearing, is nine orders of magnitude smaller than the ambient pressure. Theloudnessof these disturbances is related to thesound pressure level(SPL) which is measured on a logarithmic scale in decibels. Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how ourearsinterpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time, and then presented in more meaningful forms such as octave bands or time frequency plots. Both of these popular methods are used to analyze sound and better understand the acoustic phenomenon. The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20Hzand 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allow better resolution in imaging technologies. Medical applications such asultrasonographyand elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes. 
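The logarithmic decibel scale mentioned above can be illustrated with a small Python sketch (an addition for illustration; it assumes the conventional reference pressure of 20 micropascals, roughly the threshold of hearing at 1 kHz):

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals

def sound_pressure_level(p_rms):
    """Return SPL in dB for an RMS sound pressure p_rms in pascals."""
    return 20 * math.log10(p_rms / P_REF)

print(sound_pressure_level(20e-6))   # 0 dB: the reference (threshold-of-hearing) level
print(sound_pressure_level(1.0))     # ~94 dB: a loud sound, the common calibrator level
```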
Analytic instruments such as thespectrum analyzerfacilitate visualization and measurement of acoustic signals and their properties. Thespectrogramproduced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character. Atransduceris a device for converting one form of energy into another. In an electroacoustic context, this means converting sound energy into electrical energy (or vice versa). Electroacoustic transducers includeloudspeakers,microphones,particle velocitysensors,hydrophonesandsonarprojectors. These devices convert a sound wave to or from an electric signal. The most widely used transduction principles areelectromagnetism,electrostaticsandpiezoelectricity. The transducers in most common loudspeakers (e.g.woofersandtweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagneticvoice coil, sending off pressure waves.Electret microphonesandcondenser microphonesemploy electrostatics—as the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which mechanical vibrations and electrical fields are interlinked through a property of the material itself. An acoustician is an expert in the science of sound.[20] There are many types of acoustician, but they usually have aBachelor's degreeor higher qualification. Some possess a degree in acoustics, while others enter the discipline via studies in fields such asphysicsorengineering. Much work in acoustics requires a good grounding inMathematicsandscience. Many acoustic scientists work in research and development. Some conduct basic research to advance our knowledge of the perception (e.g.hearing,psychoacousticsorneurophysiology) ofspeech,musicandnoise. Other acoustic scientists advance understanding of how sound is affected as it moves through environments, e.g.underwater acoustics, architectural acoustics orstructural acoustics. Other areas of work are listed under subdisciplines below. Acoustic scientists work in government, university and private industry laboratories. Many go on to work inAcoustical Engineering. Some positions, such asFaculty (academic staff)require aDoctor of Philosophy. Archaeoacoustics, also known as the archaeology of sound, is one of the only ways to experience the past with senses other than our eyes.[21]Archaeoacoustics is studied by testing the acoustic properties of prehistoric sites, including caves. Iegor Rezkinoff, a sound archaeologist, studies the acoustic properties of caves through natural sounds like humming and whistling.[22]Archaeological theories of acoustics are focused around ritualistic purposes as well as a way of echolocation in the caves. In archaeology, acoustic sounds and rituals directly correlate as specific sounds were meant to bring ritual participants closer to a spiritual awakening.[21]Parallels can also be drawn between cave wall paintings and the acoustic properties of the cave; they are both dynamic.[22]Because archaeoacoustics is a fairly new archaeological subject, acoustic sound is still being tested in these prehistoric sites today. Aeroacousticsis the study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. 
This knowledge was applied in the 1920s and '30s to detect aircraft beforeradarwas invented and is applied inacoustical engineeringto study how to quietenaircraft. Aeroacoustics is important for understanding how windmusical instrumentswork.[23] Acoustic signal processing is the electronic manipulation of acoustic signals. Applications include:active noise control; design forhearing aidsorcochlear implants;echo cancellation;music information retrieval, and perceptual coding (e.g.MP3orOpus).[24] Architectural acoustics (also known as building acoustics) involves the scientific understanding of how to achieve good sound within a building.[25]It typically involves the study of speech intelligibility, speech privacy, music quality, and vibration reduction in the built environment.[26]Commonly studied environments are hospitals, classrooms, dwellings, performance venues, recording and broadcasting studios. Focus considerations include room acoustics, airborne and impact transmission in building structures, airborne and structure-borne noise control, noise control of building systems and electroacoustic systems.[27] Bioacousticsis the scientific study of the hearing and calls of animal calls, as well as how animals are affected by the acoustic and sounds of their habitat.[28] This subdiscipline is concerned with the recording, manipulation and reproduction of audio using electronics.[29]This might include products such asmobile phones, large scalepublic addresssystems orvirtual realitysystems in research laboratories. Environmental acoustics is the study of noise and vibrations, and their impact on structures, objects, humans, and animals. The main aim of these studies is to reduce levels of environmental noise and vibration. Typical work and research within environmental acoustics concerns the development of models used in simulations, measurement techniques, noise mitigation strategies, and the development of standards and regulations. Research work now also has a focus on the positive use of sound in urban environments:soundscapesandtranquility.[30] Examples of noise and vibration sources include railways,[31]road traffic, aircraft, industrial equipment and recreational activities.[32] Musical acoustics is the study of the physics of acoustic instruments; theaudio signal processingused in electronic music; the computer analysis of music and composition, and the perception andcognitive neuroscience of music.[33] Many studies have been conducted to identify the relationship between acoustics andcognition, or more commonly known aspsychoacoustics, in which what one hears is a combination of perception and biological aspects.[34]The information intercepted by the passage of sound waves through the ear is understood and interpreted through the brain, emphasizing the connection between the mind and acoustics. Psychological changes have been seen as brain waves slow down or speed up as a result of varying auditory stimulus which can in turn affect the way one thinks, feels, or even behaves.[35]This correlation can be viewed in normal, everyday situations in which listening to an upbeat or uptempo song can cause one's foot to start tapping or a slower song can leave one feeling calm and serene. 
In a deeper biological look at the phenomenon of psychoacoustics, it was discovered that the central nervous system is activated by basic acoustical characteristics of music.[36]By observing how the central nervous system, which includes the brain and spine, is influenced by acoustics, the pathway in which acoustic affects the mind, and essentially the body, is evident.[36] Acousticians study the production, processing and perception of speech.Speech recognitionandSpeech synthesisare two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics,physiology,psychology, andlinguistics.[37] Structural acoustics is the study of motions and interactions of mechanical systems with their environments and the methods of their measurement, analysis, and control. There are several sub-disciplines found within this regime: Applications might include:ground vibrationsfrom railways;vibration isolationto reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger);vibration controlto protect a building from earthquakes, or measuring how structure-borne sound moves through buildings.[38] Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography),sonochemistry,ultrasonic testing, material characterisation and underwater acoustics (sonar).[39] Underwater acoustics is the scientific study of natural and man-made sounds underwater. Applications includesonarto locatesubmarines,underwater communication by whales,climate changemonitoring by measuringsea temperaturesacoustically,sonic weapons,[40]and marine bioacoustics.[41]
https://en.wikipedia.org/wiki/Acoustics
Analog models of gravity are attempts to model various phenomena of general relativity (e.g., black holes or cosmological geometries) using other physical systems, such as waves in a moving fluid and electromagnetic waves in a dielectric medium.[1] These analogs (or analogies) provide new ways of looking at problems, permit ideas from other realms of science to be applied, and may create opportunities for practical experiments within the analog that can be applied back to the source phenomena. Analog models of gravity have been used in hundreds of published articles in the last decade.[2] It has been shown that Bose–Einstein condensates (BEC) are a good platform for studying analog gravity.[3] Rotating black holes described by the Kerr metric have been implemented in a BEC of exciton-polaritons (a quantum fluid of light).[4] Gravity waves have been recognized as a promising system for studying analog gravity models. Recent experiments have demonstrated that these waves can effectively simulate phase-space horizons, drawing parallels to black hole physics. Specifically, the use of surface gravity water waves has enabled the observation of logarithmic phase singularities and the onset of Fermi–Dirac statistics, phenomena typically associated with quantum systems and gravitational theories.[5] This approach provides valuable insights into the analogies between classical wave systems and quantum mechanical behaviors, expanding the possibilities for exploring gravitational analogs in a controlled laboratory environment.
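As a toy illustration of the "waves in a moving fluid" analogy mentioned above, the sketch below (Python, with an invented flow profile; all names and values are hypothetical) locates the point where a one-dimensional flow becomes faster than the wave speed of the medium. Waves inside that region cannot propagate back upstream, which is the basic sense in which a moving fluid can mimic a horizon.

```python
import numpy as np

def flow_speed(x, v0=2.0, x0=1.0):
    """Illustrative 1-D background flow that speeds up toward x = 0 (arbitrary units)."""
    return v0 / (1.0 + (x / x0) ** 2)

c_sound = 1.0                      # wave (sound) speed in the medium
x = np.linspace(0.0, 5.0, 1001)
mach = flow_speed(x) / c_sound

# The analogue horizon sits where the flow speed crosses the wave speed:
# upstream of it waves escape, downstream of it they are swept along with the flow.
crossings = x[np.where(np.diff(np.sign(mach - 1.0)) != 0)]
print("analogue horizon near x =", crossings)
```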
https://en.wikipedia.org/wiki/Analog_models_of_gravity
In astrophysics, a gravastar (a blend word of "gravitational vacuum star") is an object hypothesized in a 2001 paper by Pawel O. Mazur and Emil Mottola as an alternative to the black hole theory.[1] It has the usual black hole metric outside of the horizon, but a de Sitter metric inside. On the horizon there is a thin shell of exotic matter. This solution to the Einstein equations is stable and has no singularities.[2] Further theoretical considerations of gravastars include the notion of a nestar (a second gravastar nested within the first one).[3][4] In the original formulation by Mazur and Mottola,[5] a gravastar is composed of three regions, differentiated by the relationship between pressure p and energy density ρ. The central region consists of false vacuum or "dark energy", and in this region p = −ρ. Surrounding it is a thin shell of perfect fluid where p = ρ. On the exterior is true vacuum, where p = ρ = 0. The dark-energy-like behavior of the inner region prevents collapse to a singularity, and the presence of the thin shell prevents the formation of an event horizon, avoiding the infinite blue shift. The inner region has no thermodynamic entropy and may be thought of as a gravitational Bose–Einstein condensate. Severe red-shifting of photons as they climb out of the gravity well would make the fluid shell also seem very cold, almost at absolute zero. In addition to the original thin-shell formulation, gravastars with continuous pressure have been proposed. These objects must contain anisotropic stress.[6] Externally, a gravastar appears similar to a black hole: it can be detected by the high-energy radiation it emits while consuming matter, and by the Hawking radiation it creates. Astronomers search the sky for X-rays emitted by infalling matter to detect black holes. A gravastar would produce an identical signature. It is also possible, if the thin shell is transparent to radiation, that gravastars may be distinguished from ordinary black holes by different gravitational lensing properties, as the paths of photon-like particles may pass through.[7] Mazur and Mottola suggest that the violent creation of a gravastar might be an explanation for the origin of our universe and many other universes, because all the matter from a collapsing star would implode "through" the central hole and explode into a new dimension and expand forever, which would be consistent with the current theories regarding the Big Bang.[8] This "new dimension" exerts an outward pressure on the Bose–Einstein condensate layer and prevents it from collapsing further. Gravastars could also provide a mechanism for describing how dark energy accelerates the expansion of the universe. One possible hypothesis uses Hawking radiation as a means to exchange energy between the "parent" universe and the "child" universe, and so cause the rate of expansion to accelerate, but this area remains highly speculative. Gravastar formation may also provide an alternative explanation for sudden and intense gamma-ray bursts throughout space. LIGO's observations of gravitational waves from colliding objects have been found either to be inconsistent with the gravastar concept,[9][10][11] or to be indistinguishable from ordinary black holes.[12][13] By taking quantum physics into account, the gravastar hypothesis attempts to resolve contradictions caused by conventional black hole theories.[14] In a gravastar, the event horizon is not present.
The layer of positive-pressure fluid would lie just outside the "event horizon", being prevented from complete collapse by the inner false vacuum.[2]Due to the absence of an event horizon, the time coordinate of the exterior vacuum geometry is everywhere valid. In 2007, theoretical work indicated that under certain conditions, gravastars as well as other alternative black hole models are not stable when they rotate.[15]Theoretical work has also shown that certain rotating gravastars are stable assuming certain angular velocities, shell thicknesses, and compactnesses. It is also possible that some gravastars which are mathematically unstable may be physically stable over cosmological timescales.[16]Theoretical support for the feasibility of gravastars does not exclude the existence of black holes as shown in other theoretical studies.[17]
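The three-region structure described above lends itself to a compact summary. The sketch below (Python) is illustrative only: the radii and densities are placeholder values, and only the signs of the pressure–density relation in each region follow the Mazur–Mottola layout quoted earlier.

```python
from dataclasses import dataclass

@dataclass
class GravastarEOS:
    """Idealized thin-shell gravastar structure (placeholder radii and densities)."""
    r1: float = 1.0            # inner radius of the fluid shell (illustrative)
    r2: float = 1.2            # outer radius of the fluid shell (illustrative)
    rho_interior: float = 1.0  # energy density of the de Sitter-like core (illustrative)
    rho_shell: float = 1.0     # energy density of the thin shell (illustrative)

    def pressure_and_density(self, r: float) -> tuple[float, float]:
        """Return (p, rho) at radius r following the three-region layout."""
        if r < self.r1:                 # interior "dark energy": p = -rho
            return -self.rho_interior, self.rho_interior
        if r <= self.r2:                # thin shell of stiff perfect fluid: p = +rho
            return self.rho_shell, self.rho_shell
        return 0.0, 0.0                 # exterior true vacuum: p = rho = 0

if __name__ == "__main__":
    eos = GravastarEOS()
    for r in (0.5, 1.1, 2.0):
        p, rho = eos.pressure_and_density(r)
        print(f"r = {r}: p = {p:+.1f}, rho = {rho:.1f}")
```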
https://en.wikipedia.org/wiki/Gravastar
Hawking radiationisblack-body radiationreleased outside ablack hole'sevent horizondue to quantum effects according to a model developed byStephen Hawkingin 1974.[1]The radiation was not predicted by previous models which assumed that onceelectromagnetic radiationis inside the event horizon, it cannot escape. Hawking radiation is predicted to be extremely faint and is many orders of magnitude below the current besttelescopes' detecting ability. Hawking radiation would reduce themassandrotational energyof black holes and consequently cause black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish. For all except the smallest black holes, this happens extremely slowly. The radiation temperature, calledHawking temperature, is inversely proportional to the black hole's mass, somicro black holesare predicted to be larger emitters of radiation than larger black holes and should dissipate faster per their mass. Consequently, if small black holes exist, as permitted by the hypothesis ofprimordial black holes, they will lose mass more rapidly as they shrink, leading to a final cataclysm of high energy radiation alone.[2]Such radiation bursts have not yet been detected. Modern black holes were first predicted byEinstein's 1915 theory ofgeneral relativity. Evidence of the astrophysical objects termedblack holesbegan to mount half a century later,[3]and these objects are of current interest primarily because of their compact size and immensegravitational attraction. Early research into black holes was done by individuals such asKarl SchwarzschildandJohn Wheeler, who modeled black holes as having zero entropy.[3][4] A black hole can form when enoughmatterorenergyis compressed into a volume small enough that theescape velocityis greater than the speed of light. Because nothing can travel that fast, nothing within a certain distance, proportional to the mass of the black hole, can escape beyond that distance. The region beyond which not even light can escape is theevent horizon: an observer outside it cannot observe, become aware of, or be affected by events within the event horizon.[5]: 25–36 Alternatively, using a set ofinfalling coordinatesin general relativity, one can conceptualize the event horizon as the region beyond which space is infalling faster than the speed of light. (Although nothing can travelthroughspace faster than light, space itself can infall at any speed.)[6]Once matter is inside the event horizon, all of the matter inside falls inevitably into agravitational singularity, a place of infinite curvature and zero size, leaving behind a warped spacetime devoid of any matter;[verification needed]a classical black hole is pure emptyspacetime, and the simplest (nonrotating and uncharged) is characterized just by its mass and event horizon.[5]: 37–43 In 1971 Soviet scientistsYakov ZeldovichandAlexei Starobinskyproposed thatrotating black holesought to create and emit particles, reasoning by analogy with electromagnetic spinning metal spheres. 
In 1972,Jacob Bekensteindeveloped a theory and reported that the black holes should have an entropy proportional to their surface area.[7]InitiallyStephen Hawkingargued against Bekenstein's theory, viewing black holes as a simple object with no entropy.[8]: 425After meeting Zeldovich in Moscow in 1973, Hawking put these two ideas together using his mixture of quantum field theory and general relativity.[8]: 435In his 1974 paper Hawking showed that in theory, black holes radiate particles as if it were a blackbody. Particles escaping effectively drain energy from the black hole. Due to Bekenstein's contribution to black hole entropy,[9]it is also known asBekenstein–Hawking radiation.[10] Hawking radiation derives fromvacuum fluctuations. A quantum fluctuation in the electromagnetic field can result in a photon outside of the black hole horizon paired with one on the inside. The horizon allows one to escape in each direction.[8]: 439 Hawking radiation is dependent on theUnruh effectand theequivalence principleapplied to black-hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in. An accelerating observer sees a thermal bath of particles that pop out of the local acceleration horizon, turn around, and free-fall back in. The condition of local thermal equilibrium implies that the consistent extension of this local thermal bath has a finite temperature at infinity, which implies that some of these particles emitted by the horizon are not reabsorbed and become outgoing Hawking radiation.[11][12] ASchwarzschild black holehas a metric The black hole is the background spacetime for a quantum field theory. The field theory is defined by a local path integral, so if the boundary conditions at the horizon are determined, the state of the field outside will be specified. To find the appropriate boundary conditions, consider a stationary observer just outside the horizon at position The local metric to lowest order is which isRindlerin terms ofτ=⁠t/4M⁠. The metric describes a frame that is accelerating to keep from falling into the black hole. The local acceleration,α=⁠1/ρ⁠, diverges asρ→ 0. The horizon is not a special boundary, and objects can fall in. So the local observer should feel accelerated in ordinary Minkowski space by the principle of equivalence. The near-horizon observer must see the field excited at a local temperature which is theUnruh effect. The gravitational redshift is given by the square root of the time component of the metric. So for the field theory state to consistently extend, there must be a thermal background everywhere with the local temperature redshift-matched to the near horizon temperature: The inverse temperature redshifted tor′at infinity is andris the near-horizon position, near2M, so this is really Thus a field theory defined on a black-hole background is in a thermal state whose temperature at infinity is From the black-hole temperature, it is straightforward to calculate the black-hole entropyS. The change in entropy when a quantity of heatdQis added is The heat energy that enters serves to increase the total mass, so So the entropy of a black hole is proportional to its surface area: where, since the radius of the black hole is twice its mass, we have that the area A is given by Assuming that a small black hole has zero entropy, the integration constant is zero. 
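The displayed equations of this derivation did not survive extraction. In geometric units (G = c = ħ = k_B = 1), the standard forms they presumably contained are the Schwarzschild line element, its near-horizon (Rindler) limit, and the resulting temperature and entropy:

ds^{2} = -\left(1 - \frac{2M}{r}\right)dt^{2} + \left(1 - \frac{2M}{r}\right)^{-1}dr^{2} + r^{2}\,d\Omega^{2},

ds^{2} \approx -\rho^{2}\,d\tau^{2} + d\rho^{2} + (2M)^{2}\,d\Omega^{2}, \qquad \tau = \frac{t}{4M}, \quad r \approx 2M + \frac{\rho^{2}}{8M},

T_{\text{loc}} = \frac{\alpha}{2\pi} = \frac{1}{2\pi\rho}, \qquad T_{\infty} = \sqrt{1 - \frac{2M}{r}}\;T_{\text{loc}} = \frac{1}{8\pi M},

dS = \frac{dQ}{T} = 8\pi M\,dM \;\Longrightarrow\; S = 4\pi M^{2} = \frac{A}{4}, \qquad A = 4\pi(2M)^{2} = 16\pi M^{2}.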
Forming a black hole is the most efficient way to compress mass into a region, and this entropy is also a bound on the information content of any sphere in spacetime. The form of the result strongly suggests that the physical description of a gravitating theory can be somehow encoded onto a bounding surface. When particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation E = mc²). Consequently, an evaporating black hole will have a finite lifespan. By dimensional analysis, the life span of a black hole can be shown to scale as the cube of its initial mass,[13][14]: 176–177 and Hawking estimated that any black hole formed in the early universe with a mass of less than approximately 10¹² kg would have evaporated completely by the present day.[15] In 1976, Don Page refined this estimate by calculating the power produced, and the time to evaporation, for a non-rotating, non-charged Schwarzschild black hole of mass M.[13] The time for the event horizon or entropy of a black hole to halve is known as the Page time.[16] The calculations are complicated by the fact that a black hole, being of finite size, is not a perfect black body; the absorption cross section goes down in a complicated, spin-dependent manner as frequency decreases, especially when the wavelength becomes comparable to the size of the event horizon. Page concluded that primordial black holes could survive to the present day only if their initial mass were roughly 4×10¹¹ kg or larger. Writing in 1976, Page relied on the then-current understanding of neutrinos and erroneously assumed that neutrinos are massless and that only two neutrino flavors exist; his black hole lifetimes therefore do not match modern results, which take into account three flavors of neutrinos with nonzero masses. A 2008 calculation using the particle content of the Standard Model and the WMAP figure for the age of the universe yielded a mass bound of (5.00±0.04)×10¹¹ kg.[17] Some pre-1998 calculations, using outdated assumptions about neutrinos, were as follows: If black holes evaporate under Hawking radiation, a solar mass black hole will evaporate over 10⁶⁴ years, which is vastly longer than the age of the universe.[18] A supermassive black hole with a mass of 10¹¹ (100 billion) M☉ will evaporate in around 2×10¹⁰⁰ years.[13]: 3263 The largest black holes in the universe are predicted to continue to grow, up to perhaps 10¹⁴ M☉, during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 2×10¹⁰⁶ years.[18] Post-1998 science modifies these results slightly; for example, the modern estimate of a solar-mass black hole lifetime is 10⁶⁷ years.[19] The power emitted by a black hole in the form of Hawking radiation can be estimated for the simplest case of a nonrotating, non-charged Schwarzschild black hole of mass M. Combining the formulas for the Schwarzschild radius of the black hole, the Stefan–Boltzmann law of blackbody radiation, the above formula for the temperature of the radiation, and the formula for the surface area of a sphere (the black hole's event horizon), several equations can be derived. The Hawking radiation temperature is:[2][20][21] T_H = ħc³ / (8πGMk_B), where k_B is the Boltzmann constant. The Bekenstein–Hawking luminosity of a black hole, under the assumption of pure photon emission (i.e.
that no other particles are emitted) and under the assumption that the horizon is the radiating surface is:[21][20] wherePis the luminosity, i.e., the radiated power,ħis thereduced Planck constant,cis thespeed of light,Gis thegravitational constantandMis the mass of the black hole. It is worth mentioning that the above formula has not yet been derived in the framework ofsemiclassical gravity. The time that the black hole takes to dissipate is:[21][20] whereMandVare the mass and (Schwarzschild) volume of the black hole,mPandtPare Planck mass and Planck time. A black hole of onesolar mass(M☉=2.0×1030kg) takes more than1067yearsto evaporate—much longer than the currentage of the universeat1.4×1010years.[22]But for a black hole of1011kg, the evaporation time is2.6×109years. This is why some astronomers are searching for signs of explodingprimordial black holes. Since the universe contains thecosmic microwave background radiation, in order for the black hole to dissipate, the black hole must have a temperature greater than that of the present-day blackbody radiation of the universe of 2.7 K. The relationship between mass and temperature for Hawking radiation then implies the mass must be less than 0.8% of the mass of theEarth. This in turn means any black hole that could dissipate cannot be one created by stellar collapse. Only primordial black holes might be created with this little mass.[23] Black hole evaporation has several significant consequences: Thetrans-Planckian problemis the issue that Hawking's original calculation includesquantumparticles where thewavelengthbecomes shorter than thePlanck lengthnear the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with afinitefrequency, if traced back to the horizon, must have had aninfinitefrequency, and therefore a trans-Planckian wavelength. TheUnruh effectand the Hawking effect both talk about field modes in the superficially stationaryspacetimethat change frequency relative to other coordinates that are regular across the horizon. This is necessarily so, since to stay outside a horizon requires acceleration that constantlyDoppler shiftsthe modes.[citation needed] An outgoingphotonof Hawking radiation, if the mode is traced back in time, has a frequency that diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to "scrunch up" infinitely at the horizon of the black hole. In a maximally extended externalSchwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go, so Hawking used a different black hole solution without a past region, one that forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed.[citation needed] The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon that they start off as modes with a wavelength much shorter than the Planck length. 
Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing.[24][25][26][27] The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto awhite holesolution. Matter that falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes that end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon. There exist alternative physical pictures that give the Hawking radiation in which the trans-Planckian problem is addressed.[citation needed]The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time.[11]In the Unruh effect, the magnitude of the temperature can be calculated from ordinaryMinkowskifield theory, and is not controversial. The formulas from the previous section are applicable only if the laws of gravity are approximately valid all the way down to the Planck scale. In particular, for black holes with masses below the Planck mass (~10−8kg), they result in impossible lifetimes below the Planck time (~10−43s). This is normally seen as an indication that the Planck mass is the lower limit on the mass of a black hole. In a model withlarge extra dimensions(10 or 11), the values of Planck constants can be radically different, and the formulas for Hawking radiation have to be modified as well. In particular, the lifetime of a micro black hole with a radius below the scale of the extra dimensions is given by equation 9 in Cheung (2002)[28]and equations 25 and 26 in Carr (2005).[29] whereM∗is the low-energy scale, which could be as low as a few TeV, andnis the number of large extra dimensions. This formula is now consistent with black holes as light as a few TeV, with lifetimes on the order of the "new Planck time" ~10−26s. A detailed study of the quantum geometry of a black holeevent horizonhas been made usingloop quantum gravity.[30][31]Loop-quantization does not reproduce the result forblack hole entropyoriginally discovered byBekensteinandHawking, unless the value ofa free parameteris set to cancel out various constants such that the Bekenstein–Hawking entropy formula is reproduced. However,quantum gravitationalcorrections to the entropy and radiation of black holes have been computed based on the theory. Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking radiation spectrum that would be observable wereX-raysfrom Hawking radiation of evaporatingprimordial black holesto be observed.[32]The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking spectrum.[33] In June 2008,NASAlaunched theFermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporatingprimordial black holes. As of Jan 1st, 2024, none have been detected.[34] If speculativelarge extra dimensiontheories are correct, thenCERN'sLarge Hadron Collidermay be able to create micro black holes and observe their evaporation. 
No such micro black hole has been observed at CERN.[35][36][37][38] Under experimentally achievable conditions for gravitational systems, this effect is too small to be observed directly. It was predicted that Hawking radiation could be studied by analogy usingsonic black holes, in whichsound perturbationsare analogous to light in a gravitational black hole and the flow of an approximatelyperfect fluidis analogous to gravity (seeAnalog models of gravity).[39]Observations of Hawking radiation were reported, insonic black holesemployingBose–Einstein condensates.[40][41][42] In September 2010 an experimental set-up created a laboratory "white hole event horizon" that the experimenters claimed was shown to radiate an optical analog to Hawking radiation.[43]However, the results remain unverified and debatable,[44][45]and its status as a genuine confirmation remains in doubt.[46]
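The relations quoted in this article (a temperature inversely proportional to mass, a luminosity falling as the square of the mass, and an evaporation time growing as the cube of the mass) can be checked numerically. The sketch below assumes the standard photon-only Schwarzschild prefactors; realistic greybody factors and additional particle species change the constants, not the scaling.

```python
import math

hbar = 1.054_571_8e-34   # J s
c    = 2.997_924_58e8    # m/s
G    = 6.674_30e-11      # m^3 kg^-1 s^-2
k_B  = 1.380_649e-23     # J/K
YEAR = 3.156e7           # s

def schwarzschild_radius(M):
    """Schwarzschild radius of a mass M (kg), in metres."""
    return 2 * G * M / c**2

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def luminosity(M):
    """Photon-only Bekenstein-Hawking luminosity, in watts."""
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

def evaporation_time(M):
    """Photon-only evaporation time, in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

if __name__ == "__main__":
    M_sun = 1.989e30
    for label, M in [("solar-mass BH", M_sun), ("1e11 kg primordial BH", 1e11)]:
        print(f"{label}: r_s = {schwarzschild_radius(M):.2e} m, "
              f"T = {hawking_temperature(M):.2e} K, "
              f"P = {luminosity(M):.2e} W, "
              f"t_evap = {evaporation_time(M) / YEAR:.2e} yr")
```

For a solar-mass black hole this reproduces the roughly 6×10⁻⁸ K temperature and the ~10⁶⁷-year lifetime cited above, while a 10¹¹ kg primordial black hole evaporates on a few-billion-year timescale.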
https://en.wikipedia.org/wiki/Hawking_radiation
Quantum gravity(QG) is a field oftheoretical physicsthat seeks to describegravityaccording to the principles ofquantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored,[1]such as in the vicinity ofblack holesor similar compact astrophysical objects, as well as in the early stages of the universe moments after theBig Bang.[2] Three of the fourfundamental forcesof nature are described within the framework of quantum mechanics andquantum field theory: theelectromagnetic interaction, thestrong force, and theweak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based onAlbert Einstein'sgeneral theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance and accuracy, it has limitations: thegravitational singularitiesinside black holes, the ad hoc postulation ofdark matter, as well asdark energyand its relation to thecosmological constantare among the current unsolved mysteries regarding gravity,[3]all of which signal the collapse of the general theory of relativity at different scales and highlight the need for a gravitational theory that goes into the quantum realm. At distances close to thePlanck length, like those near the center of a black hole,quantum fluctuationsof spacetime are expected to play an important role.[4]Finally, the discrepancies between the predicted value for thevacuum energyand the observed values (which, depending on considerations, can be of 60 or 120 orders of magnitude)[5][6]highlight the necessity for a quantum theory of gravity. The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular beingM-theoryandloop quantum gravity.[7]All of these approaches aim to describe the quantum behavior of thegravitational field, which does not necessarily includeunifying all fundamental interactionsinto a single mathematical framework. However, many approaches to quantum gravity, such asstring theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as atheory of everything. Some of the approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories includecausal dynamical triangulation,noncommutative geometry, andtwistor theory.[8] One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to only appear at length scales near thePlanck scale, around 10−35meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energyparticle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed.[n.b. 1][n.b. 
2] Thought experimentapproaches have been suggested as a testing tool for quantum gravity theories.[9][10]In the field of quantum gravity there are several open questions – e.g., it is not known how spin of elementary particles sources gravity, and thought experiments could provide a pathway to explore possible resolutions to these questions,[11]even in the absence of lab experiments or physical observations. In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades.[12][13][14][15]This field of study is calledphenomenological quantum gravity. Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. General relativity models gravity as curvature ofspacetime: in the slogan ofJohn Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve."[16]On the other hand, quantum field theory is typically formulated in theflatspacetime used inspecial relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is notrenormalizable.[17]Even in the simpler case where the curvature of spacetime is fixeda priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable.[18] It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior ofblack holes, and theorigin of the universe.[1] One major obstacle is that forquantum field theory in curved spacetimewith a fixed metric,bosonic/fermionicoperator fieldssupercommuteforspacelike separated points. (This is a way of imposing aprinciple of locality.) However, in quantum gravity, the metric is dynamical, so that whether two points are spacelike separated depends on the state. In fact, they can be in aquantum superpositionof being spacelike and not spacelike separated.[citation needed] The observation that allfundamental forcesexcept gravity have one or more knownmessenger particlesleads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as thegraviton. These particles act as aforce particlesimilar to thephotonof the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires them to follow the quantum mechanical description of interacting theoretical spin-2 massless particles.[19][20][21][22][23]Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. TheWeinberg–Witten theoremplaces some constraints on theories in whichthe graviton is a composite particle.[24][25]While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly.[26] General relativity, likeelectromagnetism, is aclassical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a correspondingquantum field theory. 
However, gravity is perturbativelynonrenormalizable.[27][28]For a quantum field theory to be well defined according to this understanding of the subject, it must beasymptotically freeorasymptotically safe. The theory must be characterized by a choice offinitely manyparameters, which could, in principle, be set by experiment. For example, inquantum electrodynamicsthese parameters are the charge and mass of the electron, as measured at a particular energy scale. On the other hand, in quantizing gravity there are, inperturbation theory,infinitely many independent parameters(counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of therenormalization grouptells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, thenevery oneof the infinitely many unknown parameters would begin to matter, and we could make no predictions at all.[29] It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normalperturbation theoryis not a reliable guide to the renormalizability of the theory, and that there reallyisaUV fixed pointfor gravity. Since this is a question ofnon-perturbativequantum field theory, finding a reliable answer is difficult, pursued in theasymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken bystring theory, where all of the excitations of the string essentially manifest themselves as new symmetries.[30][better source needed] In aneffective field theory, not all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory.[31]Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.[32] By treating general relativity as aneffective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses.[31]Another example is the calculation of the corrections to the Bekenstein-Hawking entropy formula.[33][34] A fundamental lesson of general relativity is that there is no fixed spacetime background, as found inNewtonian mechanicsandspecial relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. 
To a certain extent, general relativity can be seen to be arelational theory,[35]in which the only physically relevant information is the relationship between different events in spacetime. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory,Minkowski spacetimeis the fixed background of the theory. String theorycan be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise tospace-timein a dynamic way. Although string theory had its origins in the study ofquark confinementand not of quantum gravity, it was soon discovered that the string spectrum contains thegraviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in theAdS/CFTcorrespondence) which is a weak form ofbackground dependence. Loop quantum gravityis the fruit of an effort to formulate abackground-independentquantum theory. Topological quantum field theoryprovided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, includingspin networks.[citation needed] Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as theUnruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles). A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with theHamiltonian operatoracting as thegenerator of infinitesimal translationsof quantum states through time.[36]In contrast, general relativitytreats time as a dynamical variablewhich relates directly with matter and moreover requires the Hamiltonian constraint to vanish.[37]Because this variability of time has beenobserved macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level. 
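Schematically, and in a form common to many treatments rather than specific to any one approach: in quantum theory, states evolve in an external time parameter, whereas canonical general relativity replaces evolution by a constraint (the Wheeler–DeWitt equation mentioned later in this article):

i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle \quad \text{(quantum mechanics)}, \qquad \hat{\mathcal{H}}\,|\Psi\rangle = 0 \quad \text{(Hamiltonian constraint)}.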
There are a number of proposed quantum gravity theories.[38]Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available.[39][40] The central idea of string theory is to replace the classical concept of apoint particlein quantum field theory with a quantum theory of one-dimensional extended objects: string theory.[41]At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, differentmodesof oscillation of one and the same type of fundamental string appear as particles with different (electricand other)charges. In this way, string theory promises to be aunified descriptionof all particles and interactions.[42]The theory is successful in that one mode will always correspond to agraviton, themessenger particleof gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.[43] In what is called thesecond superstring revolution, it was conjectured that both string theory and a unification of general relativity andsupersymmetryknown assupergravity[44]form part of a hypothesized eleven-dimensional model known asM-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[45][46]As presently understood, however, string theory admits a very large number (10500by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge. Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space. The main result of loop quantum gravity is that there is a granular structure of space at the Planck length. This is derived from the following considerations: In the case of electromagnetism, thequantum operatorrepresenting the energy of each frequency of the field has a discrete spectrum. Thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory. The quantum state of spacetime is described in the theory by means of a mathematical structure calledspin networks. Spin networks were initially introduced byRoger Penrosein abstract form, and later shown byCarlo RovelliandLee Smolinto derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime. 
The theory is based on the reformulation of general relativity known asAshtekar variables, which represent geometric gravity using mathematical analogues ofelectricandmagnetic fields.[47][48]In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.[49][50][51][52] The dynamics of the theory is today constructed in several versions. One version starts with thecanonical quantizationof general relativity. The analogue of theSchrödinger equationis aWheeler–DeWitt equation, which can be defined within the theory.[53]In the covariant, orspinfoamformulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks. There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified.[54][55]Such theories include: As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, since the 2000s, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field ofphenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention.[60] The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement,[61][62]violations ofLorentz invariance, imprints of quantum gravitational effects in thecosmic microwave background(in particular its polarization), and decoherence induced by fluctuations[63][64][65]in thespace-time foam.[66]The latter scenario has been searched for in light fromgamma-ray burstsand both astrophysical and atmosphericneutrinos, placing limits on phenomenological quantum gravity parameters.[67][68][69] ESA'sINTEGRALsatellite measured polarization of photons of different wavelengths and was able to place a limit in the granularity of space that is less than 10−48m, or 13 orders of magnitude below the Planck scale.[70][71][better source needed] TheBICEP2 experimentdetected what was initially thought to be primordialB-mode polarizationcaused bygravitational wavesin the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due tointerstellar dustinterference.[72]
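A quick numerical sketch of the scales referred to throughout this article: the Planck length, time, and mass follow directly from ħ, G, and c, and the INTEGRAL bound of 10⁻⁴⁸ m quoted above indeed sits roughly 13 orders of magnitude below the Planck length.

```python
import math

hbar = 1.054_571_8e-34   # J s
G    = 6.674_30e-11      # m^3 kg^-1 s^-2
c    = 2.997_924_58e8    # m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time   = planck_length / c            # ~5.4e-44 s
planck_mass   = math.sqrt(hbar * c / G)      # ~2.2e-8 kg

print(f"Planck length: {planck_length:.2e} m")
print(f"Planck time:   {planck_time:.2e} s")
print(f"Planck mass:   {planck_mass:.2e} kg")

# INTEGRAL's bound on the granularity of space, as quoted in the article:
integral_bound = 1e-48  # m
print(f"Orders of magnitude below the Planck length: "
      f"{math.log10(planck_length / integral_bound):.1f}")
```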
https://en.wikipedia.org/wiki/Quantum_gravity
Superfluid vacuum theory(SVT), sometimes known as theBEC vacuum theory, is an approach intheoretical physicsandquantum mechanicswhere the fundamental physicalvacuum(non-removable background) is considered as asuperfluidor as aBose–Einstein condensate(BEC). The microscopic structure of this physical vacuum is currently unknown and is a subject of intensive studies in SVT. An ultimate goal of this research is to developscientific modelsthat unify quantum mechanics (which describes three of the four knownfundamental interactions) withgravity, making SVT a derivative ofquantum gravityand describes all known interactions in the Universe, at both microscopic and astronomic scales, as different manifestations of the same entity, superfluid vacuum. The concept of aluminiferous aetheras a medium sustainingelectromagnetic waveswas discarded after the advent of thespecial theory of relativity, as the presence of the concept alongside special relativity results in several contradictions; in particular, aether having a definite velocity at each spacetime point will exhibit a preferred direction. This conflicts with the relativistic requirement that all directions within a light cone are equivalent. However, as early as in 1951P.A.M. Diracpublished two papers where he pointed out that we should take into account quantum fluctuations in the flow of the aether.[1][2]His arguments involve the application of theuncertainty principleto the velocity of aether at any spacetime point, implying that the velocity will not be a well-defined quantity. In fact, it will be distributed over various possible values. At best, one could represent the aether by a wave function representing the perfectvacuum statefor which all aether velocities are equally probable. Inspired by Dirac's ideas, K. P. Sinha, C. Sivaram andE. C. G. Sudarshanpublished in 1975 a series of papers that suggested a new model for the aether according to which it is a superfluid state of fermion and anti-fermion pairs, describable by a macroscopicwave function.[3][4][5]They noted that particle-like small fluctuations of superfluid background obey theLorentz symmetry, even if the superfluid itself is non-relativistic. Nevertheless, they decided to treat the superfluid as therelativisticmatter – by putting it into the stress–energy tensor of theEinstein field equations. This did not allow them to describe therelativistic gravityas a small fluctuation of the superfluid vacuum, as subsequent authors have noted[citation needed]. Since then, several theories have been proposed within the SVT framework. They differ in how the structure and properties of the backgroundsuperfluidmust look. In absence of observational data which would rule out some of them, these theories are being pursued independently. According to the approach, the background superfluid is assumed to be essentially non-relativistic whereas theLorentz symmetryis not an exact symmetry of Nature but rather the approximate description valid only for small fluctuations. 
An observer who resides inside such vacuum and is capable of creating or measuring the small fluctuations would observe them asrelativisticobjects – unless theirenergyandmomentumare sufficiently high to make theLorentz-breakingcorrections detectable.[6]If the energies and momenta are below the excitation threshold then thesuperfluidbackground behaves like theideal fluid, therefore, theMichelson–Morley-type experiments would observe nodrag forcefrom such aether.[1][2] Further, in the theory of relativity theGalilean symmetry(pertinent to ourmacroscopicnon-relativistic world) arises as the approximate one – when particles' velocities are small compared tospeed of lightin vacuum. In SVT one does not need to go through Lorentz symmetry to obtain the Galilean one – the dispersion relations of most non-relativistic superfluids are known to obey the non-relativistic behavior at large momenta.[7][8][9] To summarize, the fluctuations of vacuum superfluid behave like relativistic objects at "small"[nb 1]momenta (a.k.a. the "phononic limit") and like non-relativistic ones at large momenta. The yet unknown nontrivial physics is believed to be located somewhere between these two regimes. In the relativisticquantum field theorythe physical vacuum is also assumed to be some sort of non-trivial medium to which one can associatecertain energy. This is because the concept of absolutely empty space (or "mathematical vacuum") contradicts the postulates ofquantum mechanics. According to QFT, even in absence of real particles the background is always filled by pairs of creating and annihilatingvirtual particles. However, a direct attempt to describe such medium leads to the so-calledultraviolet divergences. In some QFT models, such as quantum electrodynamics, these problems can be "solved" using therenormalizationtechnique, namely, replacing the diverging physical values by their experimentally measured values. In other theories, such as thequantum general relativity, this trickdoes not work, and reliable perturbation theory cannot be constructed. According to SVT, this is because in the high-energy ("ultraviolet") regime theLorentz symmetrystarts failing so dependent theories cannot be regarded valid for all scales of energies and momenta. Correspondingly, while the Lorentz-symmetric quantum field models are obviously a good approximation below the vacuum-energy threshold, in its close vicinity the relativistic description becomes more and more "effective" and less and less natural since one will need to adjust the expressions for thecovariantfield-theoretical actions by hand. According togeneral relativity, gravitational interaction is described in terms ofspacetimecurvatureusing the mathematical formalism ofdifferential geometry. This was supported by numerous experiments and observations in the regime of low energies. However, the attempts to quantize general relativity led to varioussevere problems, therefore, the microscopic structure of gravity is still ill-defined. There may be a fundamental reason for this—thedegrees of freedomof general relativity are based on what may be only approximate andeffective. 
The question of whether general relativity is an effective theory has been raised for a long time.[10] According to SVT, the curved spacetime arises as the small-amplitudecollective excitationmode of the non-relativistic background condensate.[6][11]The mathematical description of this is similar tofluid-gravity analogywhich is being used also in theanalog gravitymodels.[12]Thus,relativistic gravityis essentially a long-wavelength theory of the collective modes whose amplitude is small compared to the background one. Outside this requirement the curved-space description of gravity in terms of the Riemannian geometry becomes incomplete or ill-defined. The notion of thecosmological constantmakes sense in a relativistic theory only, therefore, within the SVT framework this constant can refer at most to the energy of small fluctuations of the vacuum above a background value, but not to the energy of the vacuum itself.[13]Thus, in SVT this constant does not have any fundamental physical meaning, and related problems such as thevacuum catastrophe, simply do not occur in the first place. According togeneral relativity, the conventionalgravitational waveis: Superfluid vacuum theory brings into question the possibility that a relativistic object possessing both of these properties exists in nature.[11]Indeed, according to the approach, the curved spacetime itself is the smallcollective excitationof the superfluid background, therefore, the property (1) means that thegravitonwould be in fact the "small fluctuation of the small fluctuation", which does not look like a physically robust concept (as if somebody tried to introduce small fluctuations inside aphonon, for instance). As a result, it may be not just a coincidence that in general relativity the gravitational field alone has no well-definedstress–energy tensor, only thepseudotensorone.[14]Therefore, the property (2) cannot be completely justified in a theory with exactLorentz symmetrywhich the general relativity is. Though, SVT does nota prioriforbid an existence of the non-localizedwave-like excitations of the superfluid background which might be responsible for the astrophysical phenomena which are currently beingattributedto gravitational waves, such as theHulse–Taylor binary. However, such excitations cannot be correctly described within the framework of a fullyrelativistictheory. TheHiggs bosonis the spin-0 particle that has been introduced inelectroweak theoryto give mass to theweak bosons. The origin of mass of the Higgs boson itself is not explained by electroweak theory. Instead, this mass is introduced as a free parameter by means of theHiggs potential, which thus makes it yet another free parameter of theStandard Model.[15]Within the framework of theStandard Model(or its extensions) the theoretical estimates of this parameter's value are possible only indirectly and results differ from each other significantly.[16]Thus, the usage of the Higgs boson (or any other elementary particle with predefined mass) alone is not the most fundamental solution of themass generationproblem but only its reformulationad infinitum. 
Another known issue of the Glashow–Weinberg–Salam model is the wrong sign of the mass term in the (unbroken) Higgs sector for energies above the symmetry-breaking scale.[nb 2] While SVT does not explicitly forbid the existence of the electroweak Higgs particle, it has its own idea of the fundamental mass generation mechanism: elementary particles acquire mass through their interaction with the vacuum condensate, similarly to the gap generation mechanism in superconductors or superfluids.[11][17] Although this idea is not entirely new (one could recall the relativistic Coleman–Weinberg approach[18]), SVT interprets the symmetry-breaking relativistic scalar field as describing small fluctuations of the background superfluid, which can be regarded as an elementary particle only under certain conditions.[19] In general, two scenarios are allowed. In either case, the Higgs boson, even if it exists, would be a by-product of the fundamental mass generation phenomenon rather than its cause.[19] Also, some versions of SVT favor a wave equation based on the logarithmic potential rather than on the quartic one. The former potential has not only the Mexican-hat shape necessary for spontaneous symmetry breaking, but also other features which make it more suitable for describing the vacuum. In this model the physical vacuum is conjectured to be a strongly correlated quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode, whereas relativistic elementary particles can be described by the particle-like modes in the limit of low energies and momenta.[17] The essential difference of this theory from others is that in the logarithmic superfluid the maximal velocity of fluctuations is constant in the leading (classical) order. This allows one to fully recover the relativity postulates in the "phononic" (linearized) limit.[11] The proposed theory has many observational consequences. They are based on the fact that at high energies and momenta the behavior of the particle-like modes eventually becomes distinct from the relativistic one: they can reach the speed-of-light limit at finite energy.[20] Among other predicted effects are superluminal propagation and vacuum Cherenkov radiation.[21] The theory advocates a mass generation mechanism which is supposed to replace or alter the electroweak Higgs mechanism. It was shown that masses of elementary particles can arise as a result of interaction with the superfluid vacuum, similarly to the gap generation mechanism in superconductors.[11][17] For instance, a photon propagating in the average interstellar vacuum acquires a tiny mass, estimated to be about 10⁻³⁵ electronvolts. One can also derive an effective potential for the Higgs sector which is different from the one used in the Glashow–Weinberg–Salam model, yet it yields mass generation and is free of the imaginary-mass problem[nb 2] appearing in the conventional Higgs potential.[19]
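The statement above that fluctuations look relativistic at small momenta (the "phononic limit") and non-relativistic at large momenta can be illustrated with the textbook Bogoliubov spectrum of a weakly interacting Bose–Einstein condensate. This is only an illustrative stand-in: the logarithmic superfluid has its own spectrum, and all quantities below are in arbitrary units.

```python
import numpy as np

hbar = 1.0   # illustrative units
m    = 1.0   # effective "atom" mass of the condensate
c_s  = 1.0   # small-amplitude (phonon) speed, playing the role of the light-speed limit

def bogoliubov_energy(k):
    """Standard Bogoliubov dispersion E(k) of a weakly interacting BEC."""
    ek = hbar**2 * k**2 / (2 * m)          # free-particle kinetic energy
    return np.sqrt(ek * (ek + 2 * m * c_s**2))

k = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])
E = bogoliubov_energy(k)
phonon   = hbar * c_s * k                  # relativistic-looking branch, E ~ c_s k
particle = hbar**2 * k**2 / (2 * m)        # non-relativistic branch, E ~ k^2

for ki, Ei, ph, pa in zip(k, E, phonon, particle):
    print(f"k={ki:8.3f}  E={Ei:10.4g}  phonon={ph:10.4g}  free={pa:10.4g}")
```

At small k the energy tracks ħ·c_s·k, the analogue of a massless relativistic dispersion, while at large k it approaches the free-particle ħ²k²/2m form.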
https://en.wikipedia.org/wiki/Superfluid_vacuum_theory
In general topology and analysis, a Cauchy space is a generalization of metric spaces and uniform spaces for which the notion of Cauchy convergence still makes sense. Cauchy spaces were introduced by H. H. Keller in 1968, as an axiomatic tool derived from the idea of a Cauchy filter, in order to study completeness in topological spaces. The category of Cauchy spaces and Cauchy continuous maps is Cartesian closed, and contains the category of proximity spaces. Throughout, X is a set, ℘(X) denotes the power set of X, and all filters are assumed to be proper/non-degenerate (i.e. a filter may not contain the empty set). A Cauchy space is a pair (X, C) consisting of a set X together with a family C ⊆ ℘(℘(X)) of (proper) filters on X having all of the following properties: (1) for each x ∈ X, the discrete filter U(x) = {A ⊆ X : x ∈ A} belongs to C; (2) if F ∈ C and F is a subset of a proper filter G, then G ∈ C; (3) if F, G ∈ C and every member of F intersects every member of G, then F ∩ G ∈ C. An element of C is called a Cauchy filter, and a map f between Cauchy spaces (X, C) and (Y, D) is Cauchy continuous if ↑f(C) ⊆ D; that is, the image of each Cauchy filter in X is a Cauchy filter base in Y. Any Cauchy space is also a convergence space, where a filter F converges to x if F ∩ U(x) is Cauchy. In particular, a Cauchy space carries a natural topology. The natural notion of morphism between Cauchy spaces is that of a Cauchy-continuous function, a concept that had earlier been studied for uniform spaces.
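A standard motivating example (stated here from the usual metric-space construction, not given in the text above): for a metric space (X, d), one may take

C = { F a proper filter on X : for every ε > 0 there exists A ∈ F with diam(A) < ε }.

With this choice a sequence is Cauchy in the familiar ε-sense exactly when the filter it generates lies in C, and uniformly continuous maps between metric spaces are Cauchy continuous.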
https://en.wikipedia.org/wiki/Cauchy_space
In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when R has a metric given by a non-Archimedean absolute value. Suppose that E is an abelian group with a descending filtration of subgroups E=F0E⊇F1E⊇F2E⊇⋯.{\displaystyle E=F^{0}E\supseteq F^{1}E\supseteq F^{2}E\supseteq \cdots .} One then defines the completion (with respect to the filtration) as the inverse limit E^=lim←n⁡E/FnE.{\displaystyle {\widehat {E}}=\varprojlim _{n}E/F^{n}E.} This is again an abelian group. Usually E is an additive abelian group. If E has additional algebraic structure compatible with the filtration, for instance E is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection of the FiE{\displaystyle F^{i}E} equals zero, this produces a complete topological ring. In commutative algebra, the filtration on a commutative ring R by the powers of a proper ideal I determines the Krull (after Wolfgang Krull) or I-adic topology on R. The case of a maximal ideal I=m{\displaystyle I={\mathfrak {m}}} is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in R is given by the powers In, which are nested and form a descending filtration on R: R⊇I⊇I2⊇I3⊇⋯.{\displaystyle R\supseteq I\supseteq I^{2}\supseteq I^{3}\supseteq \cdots .} (Open neighborhoods of any r∈R are given by cosets r+In.) The (I-adic) completion is the inverse limit of the factor rings R/In, R^I=lim←n⁡R/In,{\displaystyle {\widehat {R}}_{I}=\varprojlim _{n}R/I^{n},} pronounced "R I hat". The kernel of the canonical map π from the ring to its completion is the intersection of the powers of I. Thus π is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring. There is a related topology on R-modules, also called Krull or I-adic topology. A basis of open neighborhoods of a module M is given by the sets of the form x+InM for x∈M and n≥0. The I-adic completion of an R-module M is the inverse limit of the quotients M/InM, M^I=lim←n⁡M/InM.{\displaystyle {\widehat {M}}_{I}=\varprojlim _{n}M/I^{n}M.} This procedure converts any module over R into a complete topological module over R^I{\displaystyle {\widehat {R}}_{I}} if I is finitely generated.[1] Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to C[x,y]/(xy){\displaystyle \mathbb {C} [x,y]/(xy)} and the nodal cubic plane curve C[x,y]/(y2−x2(1+x)){\displaystyle \mathbb {C} [x,y]/(y^{2}-x^{2}(1+x))} have similar-looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components.
Taking the localizations of these rings along the ideal (x,y){\displaystyle (x,y)} and completing gives C[[x,y]]/(xy){\displaystyle \mathbb {C} [[x,y]]/(xy)} and C[[x,y]]/((y+u)(y−u)){\displaystyle \mathbb {C} [[x,y]]/((y+u)(y-u))} respectively, where u{\displaystyle u} is the formal square root of x2(1+x){\displaystyle x^{2}(1+x)} in C[[x,y]].{\displaystyle \mathbb {C} [[x,y]].} More explicitly, this formal square root is given by the power series u=x1+x=x+12x2−18x3+116x4−⋯.{\displaystyle u=x{\sqrt {1+x}}=x+{\tfrac {1}{2}}x^{2}-{\tfrac {1}{8}}x^{3}+{\tfrac {1}{16}}x^{4}-\cdots .} Since both rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
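This splitting can be checked with a computer algebra system: truncate the formal square root u to finite order and verify that (y − u)(y + u) agrees with y² − x²(1+x) up to terms beyond the truncation order. A short SymPy sketch (illustrative only; the truncation order is arbitrary):

# Verify, up to a chosen truncation order, that y^2 - x^2*(1+x) factors as
# (y - u)(y + u) in C[[x, y]], where u is the formal square root of x^2*(1+x).
import sympy as sp

x, y = sp.symbols('x y')
order = 8

# Truncated formal square root u = x*sqrt(1+x) = x + x^2/2 - x^3/8 + ...
u = sp.series(x * sp.sqrt(1 + x), x, 0, order).removeO()

difference = sp.expand(y**2 - x**2 * (1 + x) - (y - u) * (y + u))
# (y - u)(y + u) = y^2 - u^2, so the difference equals u^2 - x^2*(1+x),
# which only contains terms of order higher than the truncation:
print(sp.series(difference, x, 0, order + 1))   # prints O(x**9), no lower terms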
https://en.wikipedia.org/wiki/Completion_(algebra)
In the mathematical field of topology, a uniform space is a set with additional structure that is used to define uniform properties, such as completeness, uniform continuity and uniform convergence. Uniform spaces generalize metric spaces and topological groups, but the concept is designed to formulate the weakest axioms needed for most proofs in analysis. In addition to the usual properties of a topological structure, in a uniform space one formalizes the notions of relative closeness and closeness of points. In other words, ideas like "x is closer to a than y is to b" make sense in uniform spaces. By comparison, in a general topological space, given sets A, B it is meaningful to say that a point x is arbitrarily close to A (i.e., in the closure of A), or perhaps that A is a smaller neighborhood of x than B, but notions of closeness of points and relative closeness are not described well by topological structure alone. There are three equivalent definitions for a uniform space. They all consist of a space equipped with a uniform structure. This definition adapts the presentation of a topological space in terms of neighborhood systems. A nonempty collection Φ{\displaystyle \Phi } of subsets of X×X{\displaystyle X\times X} is a uniform structure (or a uniformity) if it satisfies the following axioms: (1) every element of Φ{\displaystyle \Phi } contains the diagonal {(x,x):x∈X}{\displaystyle \{(x,x):x\in X\}}; (2) if U∈Φ{\displaystyle U\in \Phi } and U⊆V⊆X×X,{\displaystyle U\subseteq V\subseteq X\times X,} then V∈Φ{\displaystyle V\in \Phi }; (3) if U,V∈Φ,{\displaystyle U,V\in \Phi ,} then U∩V∈Φ{\displaystyle U\cap V\in \Phi }; (4) for every U∈Φ{\displaystyle U\in \Phi } there exists V∈Φ{\displaystyle V\in \Phi } such that V∘V⊆U,{\displaystyle V\circ V\subseteq U,} where V∘V={(x,z):(x,y)∈V and (y,z)∈V for some y∈X}{\displaystyle V\circ V=\{(x,z):(x,y)\in V{\text{ and }}(y,z)\in V{\text{ for some }}y\in X\}}; (5) for every U∈Φ,{\displaystyle U\in \Phi ,} the set {(y,x):(x,y)∈U}{\displaystyle \{(y,x):(x,y)\in U\}} also belongs to Φ.{\displaystyle \Phi .} The non-emptiness of Φ{\displaystyle \Phi } taken together with (2) and (3) states that Φ{\displaystyle \Phi } is a filter on X×X.{\displaystyle X\times X.} If the last property is omitted we call the space quasiuniform. An element U{\displaystyle U} of Φ{\displaystyle \Phi } is called a vicinity or entourage, from the French word for surroundings. One usually writes U[x]={y:(x,y)∈U}=pr2⁡(U∩({x}×X)),{\displaystyle U[x]=\{y:(x,y)\in U\}=\operatorname {pr} _{2}(U\cap (\{x\}\times X)\,),} where U∩({x}×X){\displaystyle U\cap (\{x\}\times X)} is the vertical cross section of U{\displaystyle U} and pr2{\displaystyle \operatorname {pr} _{2}} is the canonical projection onto the second coordinate. On a graph, a typical entourage is drawn as a blob surrounding the "y=x{\displaystyle y=x}" diagonal; all the different U[x]{\displaystyle U[x]}'s form the vertical cross-sections. If (x,y)∈U{\displaystyle (x,y)\in U} then one says that x{\displaystyle x} and y{\displaystyle y} are U{\displaystyle U}-close. Similarly, if all pairs of points in a subset A{\displaystyle A} of X{\displaystyle X} are U{\displaystyle U}-close (that is, if A×A{\displaystyle A\times A} is contained in U{\displaystyle U}), A{\displaystyle A} is called U{\displaystyle U}-small. An entourage U{\displaystyle U} is symmetric if (x,y)∈U{\displaystyle (x,y)\in U} precisely when (y,x)∈U.{\displaystyle (y,x)\in U.} The first axiom states that each point is U{\displaystyle U}-close to itself for each entourage U.{\displaystyle U.} The third axiom guarantees that being "both U{\displaystyle U}-close and V{\displaystyle V}-close" is also a closeness relation in the uniformity. The fourth axiom states that for each entourage U{\displaystyle U} there is an entourage V{\displaystyle V} that is "not more than half as large".
Finally, the last axiom states that the property "closeness" with respect to a uniform structure is symmetric inx{\displaystyle x}andy.{\displaystyle y.} Abase of entouragesorfundamental system of entourages(orvicinities) of a uniformityΦ{\displaystyle \Phi }is any setB{\displaystyle {\mathcal {B}}}of entourages ofΦ{\displaystyle \Phi }such that every entourage ofΦ{\displaystyle \Phi }contains a set belonging toB.{\displaystyle {\mathcal {B}}.}Thus, by property 2 above, a fundamental systems of entouragesB{\displaystyle {\mathcal {B}}}is enough to specify the uniformityΦ{\displaystyle \Phi }unambiguously:Φ{\displaystyle \Phi }is the set of subsets ofX×X{\displaystyle X\times X}that contain a set ofB.{\displaystyle {\mathcal {B}}.}Every uniform space has a fundamental system of entourages consisting of symmetric entourages. Intuition about uniformities is provided by the example ofmetric spaces: if(X,d){\displaystyle (X,d)}is a metric space, the setsUa={(x,y)∈X×X:d(x,y)≤a}wherea>0{\displaystyle U_{a}=\{(x,y)\in X\times X:d(x,y)\leq a\}\quad {\text{where}}\quad a>0}form a fundamental system of entourages for the standard uniform structure ofX.{\displaystyle X.}Thenx{\displaystyle x}andy{\displaystyle y}areUa{\displaystyle U_{a}}-close precisely when the distance betweenx{\displaystyle x}andy{\displaystyle y}is at mosta.{\displaystyle a.} A uniformityΦ{\displaystyle \Phi }isfinerthan another uniformityΨ{\displaystyle \Psi }on the same set ifΦ⊇Ψ;{\displaystyle \Phi \supseteq \Psi ;}in that caseΨ{\displaystyle \Psi }is said to becoarserthanΦ.{\displaystyle \Phi .} Uniform spaces may be defined alternatively and equivalently using systems ofpseudometrics, an approach that is particularly useful infunctional analysis(with pseudometrics provided byseminorms). More precisely, letf:X×X→R{\displaystyle f:X\times X\to \mathbb {R} }be a pseudometric on a setX.{\displaystyle X.}The inverse imagesUa=f−1([0,a]){\displaystyle U_{a}=f^{-1}([0,a])}fora>0{\displaystyle a>0}can be shown to form a fundamental system of entourages of a uniformity. The uniformity generated by theUa{\displaystyle U_{a}}is the uniformity defined by the single pseudometricf.{\displaystyle f.}Certain authors call spaces the topology of which is defined in terms of pseudometricsgauge spaces. For afamily(fi){\displaystyle \left(f_{i}\right)}of pseudometrics onX,{\displaystyle X,}the uniform structure defined by the family is theleast upper boundof the uniform structures defined by the individual pseudometricsfi.{\displaystyle f_{i}.}A fundamental system of entourages of this uniformity is provided by the set offiniteintersections of entourages of the uniformities defined by the individual pseudometricsfi.{\displaystyle f_{i}.}If the family of pseudometrics isfinite, it can be seen that the same uniform structure is defined by asinglepseudometric, namely theupper envelopesupfi{\displaystyle \sup _{}f_{i}}of the family. Less trivially, it can be shown that a uniform structure that admits acountablefundamental system of entourages (hence in particular a uniformity defined by a countable family of pseudometrics) can be defined by a single pseudometric. A consequence is thatanyuniform structure can be defined as above by a (possibly uncountable) family of pseudometrics (see Bourbaki: General Topology Chapter IX §1 no. 4). 
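A small finite example can make the entourage axioms and the metric base U_a concrete. The following Python sketch is illustrative only; the point set and the numeric width a are arbitrary choices.

# Entourages U_a = {(x, y) : d(x, y) <= a} for a small finite metric space,
# together with checks of the diagonal, symmetry and "half-size" properties.
from itertools import product

X = [0.0, 0.3, 1.0, 2.5]                      # arbitrary finite point set in R
d = lambda x, y: abs(x - y)                   # the usual metric

def U(a):
    # The entourage of "width" a.
    return {(x, y) for x, y in product(X, X) if d(x, y) <= a}

def compose(V, W):
    # V o W = {(x, z) : (x, y) in W and (y, z) in V for some y}.
    return {(x, z) for (x, y1) in W for (y2, z) in V if y1 == y2}

a = 1.0
diagonal = {(x, x) for x in X}
assert diagonal <= U(a)                                   # contains the diagonal
assert U(a) == {(y, x) for (x, y) in U(a)}                # symmetric entourage
assert compose(U(a / 2), U(a / 2)) <= U(a)                # "half-size" axiom, a
                                                          # consequence of the
                                                          # triangle inequality
print("U_1 contains", len(U(a)), "of", len(X) ** 2, "pairs")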
Auniform space(X,Θ){\displaystyle (X,\Theta )}is a setX{\displaystyle X}equipped with a distinguished family of coveringsΘ,{\displaystyle \Theta ,}called "uniform covers", drawn from the set ofcoveringsofX,{\displaystyle X,}that form afilterwhen ordered by star refinement. One says that a coverP{\displaystyle \mathbf {P} }is astar refinementof coverQ,{\displaystyle \mathbf {Q} ,}writtenP<∗Q,{\displaystyle \mathbf {P} <^{*}\mathbf {Q} ,}if for everyA∈P,{\displaystyle A\in \mathbf {P} ,}there is aU∈Q{\displaystyle U\in \mathbf {Q} }such that ifA∩B≠∅,B∈P,{\displaystyle A\cap B\neq \varnothing ,B\in \mathbf {P} ,}thenB⊆U.{\displaystyle B\subseteq U.}Axiomatically, the condition of being a filter reduces to: Given a pointx{\displaystyle x}and a uniform coverP,{\displaystyle \mathbf {P} ,}one can consider the union of the members ofP{\displaystyle \mathbf {P} }that containx{\displaystyle x}as a typical neighbourhood ofx{\displaystyle x}of "size"P,{\displaystyle \mathbf {P} ,}and this intuitive measure applies uniformly over the space. Given a uniform space in the entourage sense, define a coverP{\displaystyle \mathbf {P} }to be uniform if there is some entourageU{\displaystyle U}such that for eachx∈X,{\displaystyle x\in X,}there is anA∈P{\displaystyle A\in \mathbf {P} }such thatU[x]⊆A.{\displaystyle U[x]\subseteq A.}These uniform covers form a uniform space as in the second definition. Conversely, given a uniform space in the uniform cover sense, the supersets of⋃{A×A:A∈P},{\displaystyle \bigcup \{A\times A:A\in \mathbf {P} \},}asP{\displaystyle \mathbf {P} }ranges over the uniform covers, are the entourages for a uniform space as in the first definition. Moreover, these two transformations are inverses of each other.[1] Every uniform spaceX{\displaystyle X}becomes atopological spaceby defining a nonempty subsetO⊆X{\displaystyle O\subseteq X}to be open if and only if for everyx∈O{\displaystyle x\in O}there exists an entourageV{\displaystyle V}such thatV[x]{\displaystyle V[x]}is a subset ofO.{\displaystyle O.}In this topology, the neighbourhood filter of a pointx{\displaystyle x}is{V[x]:V∈Φ}.{\displaystyle \{V[x]:V\in \Phi \}.}This can be proved with a recursive use of the existence of a "half-size" entourage. Compared to a general topological space the existence of the uniform structure makes possible the comparison of sizes of neighbourhoods:V[x]{\displaystyle V[x]}andV[y]{\displaystyle V[y]}are considered to be of the "same size". The topology defined by a uniform structure is said to beinduced by the uniformity. A uniform structure on a topological space iscompatiblewith the topology if the topology defined by the uniform structure coincides with the original topology. In general several different uniform structures can be compatible with a given topology onX.{\displaystyle X.} A topological space is calleduniformizableif there is a uniform structure compatible with the topology. Every uniformizable space is acompletely regulartopological space. Moreover, for a uniformizable spaceX{\displaystyle X}the following are equivalent: Some authors (e.g. Engelking) add this last condition directly in the definition of a uniformizable space. The topology of a uniformizable space is always asymmetric topology; that is, the space is anR0-space. Conversely, each completely regular space is uniformizable. 
A uniformity compatible with the topology of a completely regular space X{\displaystyle X} can be defined as the coarsest uniformity that makes all continuous real-valued functions on X{\displaystyle X} uniformly continuous. A fundamental system of entourages for this uniformity is provided by all finite intersections of sets (f×f)−1(V),{\displaystyle (f\times f)^{-1}(V),} where f{\displaystyle f} is a continuous real-valued function on X{\displaystyle X} and V{\displaystyle V} is an entourage of the uniform space R.{\displaystyle \mathbf {R} .} This uniformity defines a topology, which is clearly coarser than the original topology of X;{\displaystyle X;} that it is also finer than the original topology (hence coincides with it) is a simple consequence of complete regularity: for any x∈X{\displaystyle x\in X} and a neighbourhood V{\displaystyle V} of x,{\displaystyle x,} there is a continuous real-valued function f{\displaystyle f} with f(x)=0{\displaystyle f(x)=0} and equal to 1 in the complement of V.{\displaystyle V.} In particular, a compact Hausdorff space is uniformizable. In fact, for a compact Hausdorff space X{\displaystyle X} the set of all neighbourhoods of the diagonal in X×X{\displaystyle X\times X} forms the unique uniformity compatible with the topology. A Hausdorff uniform space is metrizable if its uniformity can be defined by a countable family of pseudometrics. Indeed, as discussed above, such a uniformity can be defined by a single pseudometric, which is necessarily a metric if the space is Hausdorff. In particular, if the topology of a vector space is Hausdorff and definable by a countable family of seminorms, it is metrizable. Similar to continuous functions between topological spaces, which preserve topological properties, are the uniformly continuous functions between uniform spaces, which preserve uniform properties. A uniformly continuous function is defined as one where inverse images of entourages are again entourages, or equivalently, one where the inverse images of uniform covers are again uniform covers. Explicitly, a function f:X→Y{\displaystyle f:X\to Y} between uniform spaces is called uniformly continuous if for every entourage V{\displaystyle V} in Y{\displaystyle Y} there exists an entourage U{\displaystyle U} in X{\displaystyle X} such that if (x1,x2)∈U{\displaystyle \left(x_{1},x_{2}\right)\in U} then (f(x1),f(x2))∈V;{\displaystyle \left(f\left(x_{1}\right),f\left(x_{2}\right)\right)\in V;} or in other words, whenever V{\displaystyle V} is an entourage in Y{\displaystyle Y} then (f×f)−1(V){\displaystyle (f\times f)^{-1}(V)} is an entourage in X{\displaystyle X}, where f×f:X×X→Y×Y{\displaystyle f\times f:X\times X\to Y\times Y} is defined by (f×f)(x1,x2)=(f(x1),f(x2)).{\displaystyle (f\times f)\left(x_{1},x_{2}\right)=\left(f\left(x_{1}\right),f\left(x_{2}\right)\right).} All uniformly continuous functions are continuous with respect to the induced topologies. Uniform spaces with uniform maps form a category. An isomorphism between uniform spaces is called a uniform isomorphism; explicitly, it is a uniformly continuous bijection whose inverse is also uniformly continuous. A uniform embedding is an injective uniformly continuous map i:X→Y{\displaystyle i:X\to Y} between uniform spaces whose inverse i−1:i(X)→X{\displaystyle i^{-1}:i(X)\to X} is also uniformly continuous, where the image i(X){\displaystyle i(X)} has the subspace uniformity inherited from Y.{\displaystyle Y.} Generalizing the notion of complete metric space, one can also define completeness for uniform spaces.
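Before turning to completeness, the entourage formulation of uniform continuity above can be illustrated numerically. In the metric uniformity of the real line it reduces to the familiar ε–δ condition with δ independent of the point. The following Python sketch uses our own choice of function and domains: f(x) = x² admits a single δ on a bounded interval, but no single δ works on a large domain.

# Uniform continuity in the metric uniformity: f is uniformly continuous on D
# iff for every eps there is a single delta that works at every point of D.
import numpy as np

f = lambda x: x ** 2

def worst_spread(domain, delta):
    # sup |f(x) - f(y)| over sampled pairs with |x - y| <= delta.
    worst = 0.0
    for x in domain:
        ys = domain[np.abs(domain - x) <= delta]
        worst = max(worst, float(np.max(np.abs(f(ys) - f(x)))))
    return worst

eps, delta = 0.1, 0.04

bounded = np.linspace(0.0, 1.0, 2001)      # on [0, 1]: delta = eps/2 suffices
large = np.linspace(0.0, 100.0, 5001)      # on [0, 100]: the same delta fails

print("on [0,1]   :", worst_spread(bounded, delta))   # about 0.078 (< eps)
print("on [0,100] :", worst_spread(large, delta))     # about 8     (>> eps)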
Instead of working withCauchy sequences, one works withCauchy filters(orCauchy nets). ACauchy filter(respectively, aCauchy prefilter)F{\displaystyle F}on a uniform spaceX{\displaystyle X}is afilter(respectively, aprefilter)F{\displaystyle F}such that for every entourageU,{\displaystyle U,}there existsA∈F{\displaystyle A\in F}withA×A⊆U.{\displaystyle A\times A\subseteq U.}In other words, a filter is Cauchy if it contains "arbitrarily small" sets. It follows from the definitions that each filter that converges (with respect to the topology defined by the uniform structure) is a Cauchy filter. Aminimal Cauchy filteris a Cauchy filter that does not contain any smaller (that is, coarser) Cauchy filter (other than itself). It can be shown that every Cauchy filter contains a uniqueminimal Cauchy filter. The neighbourhood filter of each point (the filter consisting of all neighbourhoods of the point) is a minimal Cauchy filter. Conversely, a uniform space is calledcompleteif every Cauchy filter converges. Any compact Hausdorff space is a complete uniform space with respect to the unique uniformity compatible with the topology. Complete uniform spaces enjoy the following important property: iff:A→Y{\displaystyle f:A\to Y}is auniformly continuousfunction from adensesubsetA{\displaystyle A}of a uniform spaceX{\displaystyle X}into acompleteuniform spaceY,{\displaystyle Y,}thenf{\displaystyle f}can be extended (uniquely) into a uniformly continuous function on all ofX.{\displaystyle X.} A topological space that can be made into a complete uniform space, whose uniformity induces the original topology, is called acompletely uniformizable space. Acompletionof a uniform spaceX{\displaystyle X}is a pair(i,C){\displaystyle (i,C)}consisting of a complete uniform spaceC{\displaystyle C}and auniform embeddingi:X→C{\displaystyle i:X\to C}whose imagei(X){\displaystyle i(X)}is adense subsetofC.{\displaystyle C.} As with metric spaces, every uniform spaceX{\displaystyle X}has aHausdorff completion: that is, there exists a complete Hausdorff uniform spaceY{\displaystyle Y}and a uniformly continuous mapi:X→Y{\displaystyle i:X\to Y}(ifX{\displaystyle X}is a Hausdorff uniform space theni{\displaystyle i}is atopological embedding) with the following property: The Hausdorff completionY{\displaystyle Y}is unique up to isomorphism. As a set,Y{\displaystyle Y}can be taken to consist of theminimalCauchy filters onX.{\displaystyle X.}As the neighbourhood filterB(x){\displaystyle \mathbf {B} (x)}of each pointx{\displaystyle x}inX{\displaystyle X}is a minimal Cauchy filter, the mapi{\displaystyle i}can be defined by mappingx{\displaystyle x}toB(x).{\displaystyle \mathbf {B} (x).}The mapi{\displaystyle i}thus defined is in general not injective; in fact, the graph of the equivalence relationi(x)=i(x′){\displaystyle i(x)=i(x')}is the intersection of all entourages ofX,{\displaystyle X,}and thusi{\displaystyle i}is injective precisely whenX{\displaystyle X}is Hausdorff. The uniform structure onY{\displaystyle Y}is defined as follows: for eachsymmetricentourageV{\displaystyle V}(that is, such that(x,y)∈V{\displaystyle (x,y)\in V}implies(y,x)∈V{\displaystyle (y,x)\in V}), letC(V){\displaystyle C(V)}be the set of all pairs(F,G){\displaystyle (F,G)}of minimal Cauchy filterswhich have in common at least oneV{\displaystyle V}-small set. The setsC(V){\displaystyle C(V)}can be shown to form a fundamental system of entourages;Y{\displaystyle Y}is equipped with the uniform structure thus defined. 
The set i(X){\displaystyle i(X)} is then a dense subset of Y.{\displaystyle Y.} If X{\displaystyle X} is Hausdorff, then i{\displaystyle i} is an isomorphism onto i(X),{\displaystyle i(X),} and thus X{\displaystyle X} can be identified with a dense subset of its completion. Moreover, i(X){\displaystyle i(X)} is always Hausdorff; it is called the Hausdorff uniform space associated with X.{\displaystyle X.} If R{\displaystyle R} denotes the equivalence relation i(x)=i(x′),{\displaystyle i(x)=i(x'),} then the quotient space X/R{\displaystyle X/R} is homeomorphic to i(X).{\displaystyle i(X).} Before André Weil gave the first explicit definition of a uniform structure in 1937, uniform concepts, like completeness, were discussed using metric spaces. Nicolas Bourbaki provided the definition of uniform structure in terms of entourages in the book Topologie Générale and John Tukey gave the uniform cover definition. Weil also characterized uniform spaces in terms of a family of pseudometrics.
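To make the completeness discussion above concrete in the most familiar case, the rationals with their usual metric uniformity are not complete: the partial sums of Σ 1/k! form a Cauchy sequence of rationals (the diameters of its tails shrink to 0) whose limit e is irrational. A small Python sketch, illustrative only, using exact rational arithmetic:

# A Cauchy sequence in Q with no limit in Q: partial sums of sum 1/k! -> e.
from fractions import Fraction
from math import factorial

partial_sums = []
s = Fraction(0)
for k in range(0, 25):
    s += Fraction(1, factorial(k))
    partial_sums.append(s)

def tail_diameter(seq, n):
    # Diameter of the n-th tail {seq[n], seq[n+1], ...}; a Cauchy check.
    tail = seq[n:]
    return max(tail) - min(tail)

for n in (5, 10, 15):
    print(n, float(tail_diameter(partial_sums, n)))   # shrinks rapidly to 0

# Every tail is "U_a-small" for smaller and smaller a, so the filter of tails
# is a Cauchy filter; yet no rational number is its limit, since e is irrational.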
https://en.wikipedia.org/wiki/Complete_uniform_space
Infunctional analysisand related areas ofmathematics, acomplete topological vector spaceis atopological vector space(TVS) with the property that whenever points get progressively closer to each other, then there exists some pointx{\displaystyle x}towards which they all get closer. The notion of "points that get progressively closer" is made rigorous byCauchy netsorCauchy filters, which are generalizations ofCauchy sequences, while "pointx{\displaystyle x}towards which they all get closer" means that this Cauchynetor filterconverges tox.{\displaystyle x.}The notion of completeness for TVSs uses the theory ofuniform spacesas a framework to generalize the notion ofcompleteness for metric spaces. But unlike metric-completeness, TVS-completeness does not depend on any metric and is defined forallTVSs, including those that are notmetrizableorHausdorff. Completeness is an extremely important property for a topological vector space to possess. The notions of completeness fornormed spacesandmetrizable TVSs, which are commonly defined in terms ofcompletenessof a particular norm or metric, can both be reduced down to this notion of TVS-completeness – a notion that is independent of any particular norm or metric. Ametrizable topological vector spaceX{\displaystyle X}with atranslation invariant metric[note 1]d{\displaystyle d}is complete as a TVS if and only if(X,d){\displaystyle (X,d)}is acomplete metric space, which by definition means that everyd{\displaystyle d}-Cauchy sequenceconverges to some point inX.{\displaystyle X.}Prominent examples of complete TVSs that are alsometrizableinclude allF-spacesand consequently also allFréchet spaces,Banach spaces, andHilbert spaces. Prominent examples of complete TVS that are (typically)notmetrizable include strictLF-spacessuch as thespace of test functionsCc∞(U){\displaystyle C_{c}^{\infty }(U)}with it canonical LF-topology, thestrong dual spaceof any non-normableFréchet space, as well as many otherpolar topologiesoncontinuous dual spaceor othertopologies on spaces of linear maps. Explicitly, atopological vector spaces(TVS) iscompleteif everynet, or equivalently, everyfilter, that isCauchywith respect to the space'scanonicaluniformitynecessarily converges to some point. Said differently, a TVS is complete if its canonical uniformity is acomplete uniformity. Thecanonical uniformityon a TVS(X,τ){\displaystyle (X,\tau )}is the unique[note 2]translation-invariantuniformitythat induces onX{\displaystyle X}the topologyτ.{\displaystyle \tau .}This notion of "TVS-completeness" dependsonlyon vector subtraction and the topology of the TVS; consequently, it can be applied to all TVSs, including those whose topologies can not be defined in termsmetricsorpseudometrics. Afirst-countableTVS is complete if and only if every Cauchy sequence (or equivalently, everyelementaryCauchy filter) converges to some point. Every topological vector spaceX,{\displaystyle X,}even if it is notmetrizableor notHausdorff, has acompletion, which by definition is a complete TVSC{\displaystyle C}into whichX{\displaystyle X}can beTVS-embeddedas adensevector subspace. Moreover, every Hausdorff TVS has aHausdorffcompletion, which is necessarily uniqueup toTVS-isomorphism. However, as discussed below, all TVSs have infinitely many non-Hausdorff completions that arenotTVS-isomorphic to one another. This section summarizes the definition of a completetopological vector space(TVS) in terms of bothnetsandprefilters. 
Information about convergence of nets and filters, such as definitions and properties, can be found in the article aboutfilters in topology. Every topological vector space (TVS) is a commutativetopological groupwith identity under addition and the canonical uniformity of a TVS is definedentirelyin terms of subtraction (and thus addition); scalar multiplication is not involved and no additional structure is needed. ThediagonalofX{\displaystyle X}is the set[1]ΔX=def{(x,x):x∈X}{\displaystyle \Delta _{X}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{(x,x):x\in X\}}and for anyN⊆X,{\displaystyle N\subseteq X,}thecanonical entourage/vicinityaroundN{\displaystyle N}is the setΔX(N)=def{(x,y)∈X×X:x−y∈N}=⋃y∈X[(y+N)×{y}]=ΔX+(N×{0}){\displaystyle {\begin{alignedat}{4}\Delta _{X}(N)~&~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{(x,y)\in X\times X~:~x-y\in N\}\\&=\bigcup _{y\in X}[(y+N)\times \{y\}]\\&=\Delta _{X}+(N\times \{0\})\end{alignedat}}}where if0∈N{\displaystyle 0\in N}thenΔX(N){\displaystyle \Delta _{X}(N)}contains the diagonalΔX({0})=ΔX.{\displaystyle \Delta _{X}(\{0\})=\Delta _{X}.} IfN{\displaystyle N}is asymmetric set(that is, if−N=N{\displaystyle -N=N}), thenΔX(N){\displaystyle \Delta _{X}(N)}issymmetric, which by definition means thatΔX(N)=(ΔX(N))op{\displaystyle \Delta _{X}(N)=\left(\Delta _{X}(N)\right)^{\operatorname {op} }}holds where(ΔX(N))op=def{(y,x):(x,y)∈ΔX(N)},{\displaystyle \left(\Delta _{X}(N)\right)^{\operatorname {op} }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{(y,x):(x,y)\in \Delta _{X}(N)\right\},}and in addition, this symmetric set'scompositionwith itself is:ΔX(N)∘ΔX(N)=def{(x,z)∈X×X:there existsy∈Xsuch thatx,z∈y+N}=⋃y∈X[(y+N)×(y+N)]=ΔX+(N×N).{\displaystyle {\begin{alignedat}{4}\Delta _{X}(N)\circ \Delta _{X}(N)~&~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{(x,z)\in X\times X~:~{\text{ there exists }}y\in X{\text{ such that }}x,z\in y+N\right\}\\&=\bigcup _{y\in X}[(y+N)\times (y+N)]\\&=\Delta _{X}+(N\times N).\end{alignedat}}} IfL{\displaystyle {\mathcal {L}}}is any neighborhood basis at the origin in(X,τ){\displaystyle (X,\tau )}then thefamily of subsetsofX×X:{\displaystyle X\times X:}BL=def{ΔX(N):N∈L}{\displaystyle {\mathcal {B}}_{\mathcal {L}}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{\Delta _{X}(N):N\in {\mathcal {L}}\right\}}is aprefilteronX×X.{\displaystyle X\times X.}IfNτ(0){\displaystyle {\mathcal {N}}_{\tau }(0)}is theneighborhood filterat the origin in(X,τ){\displaystyle (X,\tau )}thenBNτ(0){\displaystyle {\mathcal {B}}_{{\mathcal {N}}_{\tau }(0)}}forms abase of entouragesfor auniform structureonX{\displaystyle X}that is consideredcanonical.[2]Explicitly, by definition,thecanonical uniformityonX{\displaystyle X}induced by(X,τ){\displaystyle (X,\tau )}[2]is thefilterUτ{\displaystyle {\mathcal {U}}_{\tau }}onX×X{\displaystyle X\times X}generated by the above prefilter:Uτ=defBNτ(0)↑=def{S⊆X×X:there existsN∈Nτ(0)such thatΔX(N)⊆S}{\displaystyle {\mathcal {U}}_{\tau }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\mathcal {B}}_{{\mathcal {N}}_{\tau }(0)}^{\uparrow }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{S\subseteq X\times X~:~{\text{there exists }}N\in {\mathcal {N}}_{\tau }(0){\text{ such that }}\Delta _{X}(N)\subseteq S\right\}}whereBNτ(0)↑{\displaystyle {\mathcal {B}}_{{\mathcal {N}}_{\tau }(0)}^{\uparrow }}denotes theupward closureofBNτ(0){\displaystyle {\mathcal {B}}_{{\mathcal {N}}_{\tau }(0)}}inX×X.{\displaystyle X\times X.}The same canonical uniformity would result by using a 
neighborhood basis of the origin rather the filter of all neighborhoods of the origin. IfL{\displaystyle {\mathcal {L}}}is any neighborhood basis at the origin in(X,τ){\displaystyle (X,\tau )}then the filter onX×X{\displaystyle X\times X}generated by the prefilterBL{\displaystyle {\mathcal {B}}_{\mathcal {L}}}is equal to the canonical uniformityUτ{\displaystyle {\mathcal {U}}_{\tau }}induced by(X,τ).{\displaystyle (X,\tau ).} The general theory ofuniform spaceshas its own definition of a "Cauchy prefilter" and "Cauchy net". For the canonical uniformity onX,{\displaystyle X,}these definitions reduce down to those given below. Supposex∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}is a net inX{\displaystyle X}andy∙=(yj)j∈J{\displaystyle y_{\bullet }=\left(y_{j}\right)_{j\in J}}is a net inY.{\displaystyle Y.}The productI×J{\displaystyle I\times J}becomes adirected setby declaring(i,j)≤(i2,j2){\displaystyle (i,j)\leq \left(i_{2},j_{2}\right)}if and only ifi≤i2{\displaystyle i\leq i_{2}}andj≤j2.{\displaystyle j\leq j_{2}.}Thenx∙×y∙=def(xi,yj)(i,j)∈I×J{\displaystyle x_{\bullet }\times y_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(x_{i},y_{j}\right)_{(i,j)\in I\times J}}denotes the (Cartesian)product net, where in particularx∙×x∙=def(xi,xj)(i,j)∈I×I.{\textstyle x_{\bullet }\times x_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(x_{i},x_{j}\right)_{(i,j)\in I\times I}.}IfX=Y{\displaystyle X=Y}then the image of this net under the vector addition mapX×X→X{\displaystyle X\times X\to X}denotes thesumof these two nets:[3]x∙+y∙=def(xi+yj)(i,j)∈I×J{\displaystyle x_{\bullet }+y_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(x_{i}+y_{j}\right)_{(i,j)\in I\times J}}and similarly theirdifferenceis defined to be the image of the product net under the vector subtraction map(x,y)↦x−y{\displaystyle (x,y)\mapsto x-y}:x∙−y∙=def(xi−yj)(i,j)∈I×J.{\displaystyle x_{\bullet }-y_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(x_{i}-y_{j}\right)_{(i,j)\in I\times J}.}In particular, the notationx∙−x∙=(xi)i∈I−(xi)i∈I{\displaystyle x_{\bullet }-x_{\bullet }=\left(x_{i}\right)_{i\in I}-\left(x_{i}\right)_{i\in I}}denotes theI2{\displaystyle I^{2}}-indexed net(xi−xj)(i,j)∈I×I{\displaystyle \left(x_{i}-x_{j}\right)_{(i,j)\in I\times I}}and not theI{\displaystyle I}-indexed net(xi−xi)i∈I=(0)i∈I{\displaystyle \left(x_{i}-x_{i}\right)_{i\in I}=(0)_{i\in I}}since using the latter as the definition would make the notation useless. Anetx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}in a TVSX{\displaystyle X}is called aCauchy net[4]ifx∙−x∙=def(xi−xj)(i,j)∈I×I→0inX.{\displaystyle x_{\bullet }-x_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(x_{i}-x_{j}\right)_{(i,j)\in I\times I}\to 0\quad {\text{ in }}X.}Explicitly, this means that for every neighborhoodN{\displaystyle N}of0{\displaystyle 0}inX,{\displaystyle X,}there exists some indexi0∈I{\displaystyle i_{0}\in I}such thatxi−xj∈N{\displaystyle x_{i}-x_{j}\in N}for all indicesi,j∈I{\displaystyle i,j\in I}that satisfyi≥i0{\displaystyle i\geq i_{0}}andj≥i0.{\displaystyle j\geq i_{0}.}It suffices to check any of these defining conditions for any givenneighborhood basisof0{\displaystyle 0}inX.{\displaystyle X.}ACauchy sequenceis a sequence that is also a Cauchy net. 
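The Cauchy-net condition just stated can be checked numerically in the simplest TVS setting. The following Python sketch uses our own choice of sequence in R² and a base of euclidean balls around the origin as the neighborhoods N; it only illustrates the definition, it is not part of the general theory.

# The Cauchy-net condition in the TVS R^2: for each neighborhood N of 0 there
# is an index i0 with x_i - x_j in N for all i, j >= i0.  Here the net is the
# sequence of partial sums of a convergent series, and N ranges over balls.
import numpy as np

terms = np.array([[1 / 2 ** k, 1 / 3 ** k] for k in range(60)])
x = np.cumsum(terms, axis=0)                   # the sequence (a net indexed by N)

def first_good_index(eps):
    # Smallest i0 such that ||x_i - x_j|| < eps for all i, j >= i0.
    for i0 in range(len(x)):
        tail = x[i0:]
        spread = np.max(np.linalg.norm(tail[:, None, :] - tail[None, :, :], axis=-1))
        if spread < eps:
            return i0
    return None

for eps in (1e-1, 1e-3, 1e-6):
    print(eps, first_good_index(eps))          # an index i0 exists for every ball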
Ifx∙→x{\displaystyle x_{\bullet }\to x}thenx∙×x∙→(x,x){\displaystyle x_{\bullet }\times x_{\bullet }\to (x,x)}inX×X{\displaystyle X\times X}and so the continuity of the vector subtraction mapS:X×X→X,{\displaystyle S:X\times X\to X,}which is defined byS(x,y)=defx−y,{\displaystyle S(x,y)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~x-y,}guarantees thatS(x∙×x∙)→S(x,x){\displaystyle S\left(x_{\bullet }\times x_{\bullet }\right)\to S(x,x)}inX,{\displaystyle X,}whereS(x∙×x∙)=(xi−xj)(i,j)∈I×I=x∙−x∙{\displaystyle S\left(x_{\bullet }\times x_{\bullet }\right)=\left(x_{i}-x_{j}\right)_{(i,j)\in I\times I}=x_{\bullet }-x_{\bullet }}andS(x,x)=x−x=0.{\displaystyle S(x,x)=x-x=0.}This proves that every convergent net is a Cauchy net. By definition, a space is calledcompleteif the converse is also always true. That is,X{\displaystyle X}is complete if and only if the following holds: A similar characterization of completeness holds if filters and prefilters are used instead of nets. A series∑i=1∞xi{\displaystyle \sum _{i=1}^{\infty }x_{i}}is called aCauchy series(respectively, aconvergent series) if the sequence ofpartial sums(∑i=1nxi)n=1∞{\displaystyle \left(\sum _{i=1}^{n}x_{i}\right)_{n=1}^{\infty }}is aCauchy sequence(respectively, aconvergent sequence).[5]Every convergent series is necessarily a Cauchy series. In a complete TVS, every Cauchy series is necessarily a convergent series. AprefilterB{\displaystyle {\mathcal {B}}}on atopological vector spaceX{\displaystyle X}is called aCauchy prefilter[6]if it satisfies any of the following equivalent conditions: It suffices to check any of the above conditions for any givenneighborhood basisof0{\displaystyle 0}inX.{\displaystyle X.}ACauchy filteris a Cauchy prefilter that is also afilteronX.{\displaystyle X.} IfB{\displaystyle {\mathcal {B}}}is a prefilter on a topological vector spaceX{\displaystyle X}and ifx∈X,{\displaystyle x\in X,}thenB→x{\displaystyle {\mathcal {B}}\to x}inX{\displaystyle X}if and only ifx∈cl⁡B{\displaystyle x\in \operatorname {cl} {\mathcal {B}}}andB{\displaystyle {\mathcal {B}}}is Cauchy.[3] For anyS⊆X,{\displaystyle S\subseteq X,}a prefilterC{\displaystyle {\mathcal {C}}}onS{\displaystyle S}is necessarily a subset of℘(S){\displaystyle \wp (S)}; that is,C⊆℘(S).{\displaystyle {\mathcal {C}}\subseteq \wp (S).} A subsetS{\displaystyle S}of a TVS(X,τ){\displaystyle (X,\tau )}is called acomplete subsetif it satisfies any of the following equivalent conditions: The subsetS{\displaystyle S}is called asequentially complete subsetif every Cauchy sequence inS{\displaystyle S}(or equivalently, every elementary Cauchy filter/prefilter onS{\displaystyle S}) converges to at least one point ofS.{\displaystyle S.} Importantly,convergence to points outside ofS{\displaystyle S}does not prevent a set from being complete: IfX{\displaystyle X}is not Hausdorff and if every Cauchy prefilter onS{\displaystyle S}converges to some point ofS,{\displaystyle S,}thenS{\displaystyle S}will be complete even if some or all Cauchy prefilters onS{\displaystyle S}alsoconverge to points(s) inX∖S.{\displaystyle X\setminus S.}In short, there is no requirement that these Cauchy prefilters onS{\displaystyle S}convergeonlyto points inS.{\displaystyle S.}The same can be said of the convergence of Cauchy nets inS.{\displaystyle S.} As a consequence, if a TVSX{\displaystyle X}isnotHausdorff then every subset of the closure of{0}{\displaystyle \{0\}}inX{\displaystyle X}is complete because it is compact and every compact set is necessarily complete. 
In particular, if∅≠S⊆clX⁡{0}{\displaystyle \varnothing \neq S\subseteq \operatorname {cl} _{X}\{0\}}is a proper subset, such asS={0}{\displaystyle S=\{0\}}for example, thenS{\displaystyle S}would be complete even thougheveryCauchy net inS{\displaystyle S}(and also every Cauchy prefilter onS{\displaystyle S}) converges toeverypoint inclX⁡{0},{\displaystyle \operatorname {cl} _{X}\{0\},}including those points inclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}that do not belong toS.{\displaystyle S.}This example also shows that complete subsets (and indeed, even compact subsets) of a non-Hausdorff TVS may fail to be closed. For example, if∅≠S⊆clX⁡{0}{\displaystyle \varnothing \neq S\subseteq \operatorname {cl} _{X}\{0\}}thenS=clX⁡{0}{\displaystyle S=\operatorname {cl} _{X}\{0\}}if and only ifS{\displaystyle S}is closed inX.{\displaystyle X.} Atopological vector spaceX{\displaystyle X}is called acomplete topological vector spaceif any of the following equivalent conditions are satisfied: where if in additionX{\displaystyle X}ispseudometrizableor metrizable (for example, anormed space) then this list can be extended to include: A topological vector spaceX{\displaystyle X}issequentially completeif any of the following equivalent conditions are satisfied: The existence of the canonical uniformity was demonstrated above by defining it. The theorem below establishes that the canonical uniformity of any TVS(X,τ){\displaystyle (X,\tau )}is the only uniformity onX{\displaystyle X}that is both (1) translation invariant, and (2) generates onX{\displaystyle X}the topologyτ.{\displaystyle \tau .} Theorem[7](Existence and uniqueness of the canonical uniformity)—The topology of any TVS can be derived from a unique translation-invariant uniformity. IfN(0){\displaystyle {\mathcal {N}}(0)}is anyneighborhood baseof the origin, then the family{Δ(N):N∈N(0)}{\displaystyle \left\{\Delta (N):N\in {\mathcal {N}}(0)\right\}}is a base for this uniformity. This section is dedicated to explaining the precise meanings of the terms involved in this uniqueness statement. 
For any subsetsΦ,Ψ⊆X×X,{\displaystyle \Phi ,\Psi \subseteq X\times X,}let[1]Φop=def{(y,x):(x,y)∈Φ}{\displaystyle \Phi ^{\operatorname {op} }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{(y,x)~:~(x,y)\in \Phi \}}and letΦ∘Ψ=def{(x,z):there existsy∈Xsuch that(x,y)∈Ψand(y,z)∈Φ}=⋃y∈X{(x,z):(x,y)∈Ψand(y,z)∈Φ}{\displaystyle {\begin{alignedat}{4}\Phi \circ \Psi ~&~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{(x,z):{\text{ there exists }}y\in X{\text{ such that }}(x,y)\in \Psi {\text{ and }}(y,z)\in \Phi \right\}\\&=~\bigcup _{y\in X}\{(x,z)~:~(x,y)\in \Psi {\text{ and }}(y,z)\in \Phi \}\end{alignedat}}}A non-empty familyB⊆℘(X×X){\displaystyle {\mathcal {B}}\subseteq \wp (X\times X)}is called abase of entouragesor afundamental system of entouragesifB{\displaystyle {\mathcal {B}}}is aprefilteronX×X{\displaystyle X\times X}satisfying all of the following conditions: Auniformityoruniform structureonX{\displaystyle X}is afilterU{\displaystyle {\mathcal {U}}}onX×X{\displaystyle X\times X}that is generated by some base of entouragesB,{\displaystyle {\mathcal {B}},}in which case we say thatB{\displaystyle {\mathcal {B}}}is abase of entouragesforU.{\displaystyle {\mathcal {U}}.} For a commutative additive groupX,{\displaystyle X,}atranslation-invariant fundamental system of entourages[7]is a fundamental system of entouragesB{\displaystyle {\mathcal {B}}}such that for everyΦ∈B,{\displaystyle \Phi \in {\mathcal {B}},}(x,y)∈Φ{\displaystyle (x,y)\in \Phi }if and only if(x+z,y+z)∈Φ{\displaystyle (x+z,y+z)\in \Phi }for allx,y,z∈X.{\displaystyle x,y,z\in X.}A uniformityB{\displaystyle {\mathcal {B}}}is called atranslation-invariant uniformity[7]if it has a base of entourages that is translation-invariant. The canonical uniformity on any TVS is translation-invariant.[7] The binary operator∘{\displaystyle \;\circ \;}satisfies all of the following: Symmetric entourages Call a subsetΦ⊆X×X{\displaystyle \Phi \subseteq X\times X}symmetricifΦ=Φop,{\displaystyle \Phi =\Phi ^{\operatorname {op} },}which is equivalent toΦop⊆Φ.{\displaystyle \Phi ^{\operatorname {op} }\subseteq \Phi .}This equivalence follows from the identity(Φop)op=Φ{\displaystyle \left(\Phi ^{\operatorname {op} }\right)^{\operatorname {op} }=\Phi }and the fact that ifΨ⊆X×X,{\displaystyle \Psi \subseteq X\times X,}thenΦ⊆Ψ{\displaystyle \Phi \subseteq \Psi }if and only ifΦop⊆Ψop.{\displaystyle \Phi ^{\operatorname {op} }\subseteq \Psi ^{\operatorname {op} }.}For example, the setΦop∩Φ{\displaystyle \Phi ^{\operatorname {op} }\cap \Phi }is always symmetric for everyΦ⊆X×X.{\displaystyle \Phi \subseteq X\times X.}And because(Φ∩Ψ)op=Φop∩Ψop,{\displaystyle (\Phi \cap \Psi )^{\operatorname {op} }=\Phi ^{\operatorname {op} }\cap \Psi ^{\operatorname {op} },}ifΦ{\displaystyle \Phi }andΨ{\displaystyle \Psi }are symmetric then so isΦ∩Ψ.{\displaystyle \Phi \cap \Psi .} Relatives LetΦ⊆X×X{\displaystyle \Phi \subseteq X\times X}be arbitrary and letPr1,Pr2:X×X→X{\displaystyle \operatorname {Pr} _{1},\operatorname {Pr} _{2}:X\times X\to X}be the canonical projections onto the first and second coordinates, respectively. 
For anyS⊆X,{\displaystyle S\subseteq X,}defineS⋅Φ=def{y∈X:Φ∩(S×{x})≠∅}=Pr2⁡(Φ∩(S×X)){\displaystyle S\cdot \Phi ~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{y\in X:\Phi \cap (S\times \{x\})\neq \varnothing \}~=~\operatorname {Pr} _{2}(\Phi \cap (S\times X))}Φ⋅S=def{x∈X:Φ∩({x}×S)≠∅}=Pr1⁡(Φ∩(X×S))=S⋅(Φop){\displaystyle \Phi \cdot S~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x\in X:\Phi \cap (\{x\}\times S)\neq \varnothing \}~=~\operatorname {Pr} _{1}(\Phi \cap (X\times S))=S\cdot \left(\Phi ^{\operatorname {op} }\right)}whereΦ⋅S{\displaystyle \Phi \cdot S}(respectively,S⋅Φ{\displaystyle S\cdot \Phi }) is called the set ofleft(respectively,right)Φ{\displaystyle \Phi }-relativesof (points in)S.{\displaystyle S.}Denote the special case whereS={p}{\displaystyle S=\{p\}}is a singleton set for somep∈X{\displaystyle p\in X}by:p⋅Φ=def{p}⋅Φ={y∈X:(p,y)∈Φ}{\displaystyle p\cdot \Phi ~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{p\}\cdot \Phi ~=~\{y\in X:(p,y)\in \Phi \}}Φ⋅p=defΦ⋅{p}={x∈X:(x,p)∈Φ}=p⋅(Φop){\displaystyle \Phi \cdot p~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\Phi \cdot \{p\}~=~\{x\in X:(x,p)\in \Phi \}~=~p\cdot \left(\Phi ^{\operatorname {op} }\right)}IfΦ,Ψ⊆X×X{\displaystyle \Phi ,\Psi \subseteq X\times X}then(Φ∘Ψ)⋅S=Φ⋅(Ψ⋅S).{\textstyle (\Phi \circ \Psi )\cdot S=\Phi \cdot (\Psi \cdot S).}Moreover,⋅{\displaystyle \,\cdot \,}right distributes overboth unions and intersections, meaning that ifR,S⊆X{\displaystyle R,S\subseteq X}then(R∪S)⋅Φ=(R⋅Φ)∪(S⋅Φ){\displaystyle (R\cup S)\cdot \Phi ~=~(R\cdot \Phi )\cup (S\cdot \Phi )}and(R∩S)⋅Φ⊆(R⋅Φ)∩(S⋅Φ).{\displaystyle (R\cap S)\cdot \Phi ~\subseteq ~(R\cdot \Phi )\cap (S\cdot \Phi ).} Neighborhoods and open sets Two pointsx{\displaystyle x}andy{\displaystyle y}areΦ{\displaystyle \Phi }-closeif(x,y)∈Φ{\displaystyle (x,y)\in \Phi }and a subsetS⊆X{\displaystyle S\subseteq X}is calledΦ{\displaystyle \Phi }-smallifS×S⊆Φ.{\displaystyle S\times S\subseteq \Phi .} LetB⊆℘(X×X){\displaystyle {\mathcal {B}}\subseteq \wp (X\times X)}be a base of entourages onX.{\displaystyle X.}Theneighborhood prefilterat a pointp∈X{\displaystyle p\in X}and, respectively, on a subsetS⊆X{\displaystyle S\subseteq X}are thefamilies of sets:B⋅p=defB⋅{p}={Φ⋅p:Φ∈B}andB⋅S=def{Φ⋅S:Φ∈B}{\displaystyle {\mathcal {B}}\cdot p~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\mathcal {B}}\cdot \{p\}=\{\Phi \cdot p:\Phi \in {\mathcal {B}}\}\qquad {\text{ and }}\qquad {\mathcal {B}}\cdot S~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{\Phi \cdot S:\Phi \in {\mathcal {B}}\}}and the filters onX{\displaystyle X}that each generates is known as theneighborhood filterofp{\displaystyle p}(respectively, ofS{\displaystyle S}). Assign to everyx∈X{\displaystyle x\in X}the neighborhood prefilterB⋅x=def{Φ⋅x:Φ∈B}{\displaystyle {\mathcal {B}}\cdot x~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{\Phi \cdot x:\Phi \in {\mathcal {B}}\}}and use theneighborhood definition of "open set"to obtain atopologyonX{\displaystyle X}called thetopology induced byB{\displaystyle {\mathcal {B}}}or theinduced topology. 
Explicitly, a subsetU⊆X{\displaystyle U\subseteq X}is open in this topology if and only if for everyu∈U{\displaystyle u\in U}there exists someN∈B⋅u{\displaystyle N\in {\mathcal {B}}\cdot u}such thatN⊆U;{\displaystyle N\subseteq U;}that is,U{\displaystyle U}is open if and only if for everyu∈U{\displaystyle u\in U}there exists someΦ∈B{\displaystyle \Phi \in {\mathcal {B}}}such thatΦ⋅u=def{x∈X:(x,u)∈Φ}⊆U.{\displaystyle \Phi \cdot u~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x\in X:(x,u)\in \Phi \}\subseteq U.} The closure of a subsetS⊆X{\displaystyle S\subseteq X}in this topology is:clX⁡S=⋂Φ∈B(Φ⋅S)=⋂Φ∈B(S⋅Φ).{\displaystyle \operatorname {cl} _{X}S=\bigcap _{\Phi \in {\mathcal {B}}}(\Phi \cdot S)=\bigcap _{\Phi \in {\mathcal {B}}}(S\cdot \Phi ).} Cauchy prefilters and complete uniformities A prefilterF⊆℘(X){\displaystyle {\mathcal {F}}\subseteq \wp (X)}on a uniform spaceX{\displaystyle X}with uniformityU{\displaystyle {\mathcal {U}}}is called aCauchy prefilterif for every entourageN∈U,{\displaystyle N\in {\mathcal {U}},}there exists someF∈F{\displaystyle F\in {\mathcal {F}}}such thatF×F⊆N.{\displaystyle F\times F\subseteq N.} A uniform space(X,U){\displaystyle (X,{\mathcal {U}})}is called acomplete uniform space(respectively, asequentially complete uniform space) if every Cauchy prefilter (respectively, every elementary Cauchy prefilter) onX{\displaystyle X}converges to at least one point ofX{\displaystyle X}whenX{\displaystyle X}is endowed with the topology induced byU.{\displaystyle {\mathcal {U}}.} Case of a topological vector space If(X,τ){\displaystyle (X,\tau )}is atopological vector spacethen for anyS⊆X{\displaystyle S\subseteq X}andx∈X,{\displaystyle x\in X,}ΔX(N)⋅S=S+NandΔX(N)⋅x=x+N,{\displaystyle \Delta _{X}(N)\cdot S=S+N\qquad {\text{ and }}\qquad \Delta _{X}(N)\cdot x=x+N,}and the topology induced onX{\displaystyle X}by the canonical uniformity is the same as the topology thatX{\displaystyle X}started with (that is, it isτ{\displaystyle \tau }). LetX{\displaystyle X}andY{\displaystyle Y}be TVSs,D⊆X,{\displaystyle D\subseteq X,}andf:D→Y{\displaystyle f:D\to Y}be a map. Thenf:D→Y{\displaystyle f:D\to Y}isuniformly continuousif for every neighborhoodU{\displaystyle U}of the origin inX,{\displaystyle X,}there exists a neighborhoodV{\displaystyle V}of the origin inY{\displaystyle Y}such that for allx,y∈D,{\displaystyle x,y\in D,}ify−x∈U{\displaystyle y-x\in U}thenf(y)−f(x)∈V.{\displaystyle f(y)-f(x)\in V.} Suppose thatf:D→Y{\displaystyle f:D\to Y}is uniformly continuous. Ifx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}is a Cauchy net inD{\displaystyle D}thenf∘x∙=(f(xi))i∈I{\displaystyle f\circ x_{\bullet }=\left(f\left(x_{i}\right)\right)_{i\in I}}is a Cauchy net inY.{\displaystyle Y.}IfB{\displaystyle {\mathcal {B}}}is a Cauchy prefilter inD{\displaystyle D}(meaning thatB{\displaystyle {\mathcal {B}}}is a family of subsets ofD{\displaystyle D}that is Cauchy inX{\displaystyle X}) thenf(B){\displaystyle f\left({\mathcal {B}}\right)}is a Cauchy prefilter inY.{\displaystyle Y.}However, ifB{\displaystyle {\mathcal {B}}}is a Cauchy filter onD{\displaystyle D}then althoughf(B){\displaystyle f\left({\mathcal {B}}\right)}will be a Cauchyprefilter, it will be a Cauchy filter inY{\displaystyle Y}if and only iff:D→Y{\displaystyle f:D\to Y}is surjective. We review the basic notions related to the general theory of complete pseudometric spaces. 
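Before specializing to pseudometrics, the Φ-relatives and the induced closure formula described above can be checked on a small finite example. The following Python sketch is a toy computation; the point set and the entourage base are arbitrary choices.

# Left relatives Phi . S and the induced closure on a finite set, using the
# entourage base {U_a} of a metric.
X = [0.0, 0.1, 1.0, 1.05, 3.0]
d = lambda x, y: abs(x - y)

def U(a):
    return {(x, y) for x in X for y in X if d(x, y) <= a}

def left_relatives(Phi, S):
    # Phi . S = {x : (x, s) in Phi for some s in S}.
    return {x for (x, y) in Phi if y in S}

# On a finite set the base stabilizes once a is below the smallest nonzero
# distance, so finitely many entourages already compute the closure.
base = [U(a) for a in (1.0, 0.5, 0.25, 0.01)]

S = {0.0, 1.0}
closure = set(X)
for Phi in base:
    closure &= left_relatives(Phi, S)
print(closure)   # {0.0, 1.0}: the induced topology here is discrete, cl(S) = S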
Recall that everymetricis apseudometricand that a pseudometricp{\displaystyle p}is a metric if and only ifp(x,y)=0{\displaystyle p(x,y)=0}impliesx=y.{\displaystyle x=y.}Thus everymetric spaceis apseudometric spaceand a pseudometric space(X,p){\displaystyle (X,p)}is a metric space if and only ifp{\displaystyle p}is a metric. IfS{\displaystyle S}is a subset of apseudometric space(X,d){\displaystyle (X,d)}then thediameterofS{\displaystyle S}is defined to bediam⁡(S)=defsup{d(s,t):s,t∈S}.{\displaystyle \operatorname {diam} (S)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\sup _{}\{d(s,t):s,t\in S\}.} A prefilterB{\displaystyle {\mathcal {B}}}on a pseudometric space(X,d){\displaystyle (X,d)}is called ad{\displaystyle d}-Cauchy prefilteror simply aCauchy prefilterif for eachrealr>0,{\displaystyle r>0,}there is someB∈B{\displaystyle B\in {\mathcal {B}}}such that the diameter ofB{\displaystyle B}is less thanr.{\displaystyle r.} Suppose(X,d){\displaystyle (X,d)}is a pseudometric space. Anetx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}inX{\displaystyle X}is called ad{\displaystyle d}-Cauchy netor simply aCauchy netifTails⁡(x∙){\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)}is a Cauchy prefilter, which happens if and only if or equivalently, if and only if(d(xj,xk))(i,j)∈I×I→0{\displaystyle \left(d\left(x_{j},x_{k}\right)\right)_{(i,j)\in I\times I}\to 0}inR.{\displaystyle \mathbb {R} .}This is analogous to the following characterization of the converge ofx∙{\displaystyle x_{\bullet }}to a point: ifx∈X,{\displaystyle x\in X,}thenx∙→x{\displaystyle x_{\bullet }\to x}in(X,d){\displaystyle (X,d)}if and only if(xi,x)i∈I→0{\displaystyle \left(x_{i},x\right)_{i\in I}\to 0}inR.{\displaystyle \mathbb {R} .} ACauchy sequenceis a sequence that is also a Cauchy net.[note 3] Every pseudometricp{\displaystyle p}on a setX{\displaystyle X}induces the usual canonical topology onX,{\displaystyle X,}which we'll denote byτp{\displaystyle \tau _{p}}; it also induces a canonicaluniformityonX,{\displaystyle X,}which we'll denote byUp.{\displaystyle {\mathcal {U}}_{p}.}The topology onX{\displaystyle X}induced by the uniformityUp{\displaystyle {\mathcal {U}}_{p}}is equal toτp.{\displaystyle \tau _{p}.}A netx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}inX{\displaystyle X}is Cauchy with respect top{\displaystyle p}if and only if it is Cauchy with respect to the uniformityUp.{\displaystyle {\mathcal {U}}_{p}.}The pseudometric space(X,p){\displaystyle (X,p)}is acomplete(resp. a sequentially complete) pseudometric space if and only if(X,Up){\displaystyle \left(X,{\mathcal {U}}_{p}\right)}is acomplete(resp. a sequentially complete) uniform space. Moreover, the pseudometric space(X,p){\displaystyle (X,p)}(resp. the uniform space(X,Up){\displaystyle \left(X,{\mathcal {U}}_{p}\right)}) is complete if and only if it is sequentially complete. A pseudometric space(X,d){\displaystyle (X,d)}(for example, ametric space) is calledcompleteandd{\displaystyle d}is called acomplete pseudometricif any of the following equivalent conditions hold: And if additiond{\displaystyle d}is a metric then we may add to this list: EveryF-space, and thus also everyFréchet space,Banach space, andHilbert spaceis a complete TVS. 
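As a reminder of how a pseudometric can fail to be a metric, here is a toy example of our own on R²: the map p((x₁,x₂),(y₁,y₂)) = |x₁ − y₁| satisfies all the pseudometric axioms but vanishes on distinct points, so the topology and uniformity it induces cannot separate points sharing a first coordinate.

# A pseudometric on R^2 that is not a metric: it only sees the first coordinate.
def p(u, v):
    return abs(u[0] - v[0])

a, b = (1.0, 2.0), (1.0, -7.5)      # distinct points
print(p(a, b))                       # 0.0 -> p is a pseudometric but not a metric
print(p(a, (4.0, 0.0)))              # 3.0

# The triangle inequality still holds, e.g.:
c = (2.5, 0.0)
assert p(a, b) <= p(a, c) + p(c, b)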
Note that everyF-space is aBaire spacebut there are normed spaces that are Baire but not Banach.[9] A pseudometricd{\displaystyle d}on a vector spaceX{\displaystyle X}is said to be atranslation invariant pseudometricifd(x,y)=d(x+z,y+z){\displaystyle d(x,y)=d(x+z,y+z)}for all vectorsx,y,z∈X.{\displaystyle x,y,z\in X.} Suppose(X,τ){\displaystyle (X,\tau )}ispseudometrizable TVS(for example, a metrizable TVS) and thatp{\displaystyle p}isanypseudometric onX{\displaystyle X}such that the topology onX{\displaystyle X}induced byp{\displaystyle p}is equal toτ.{\displaystyle \tau .}Ifp{\displaystyle p}is translation-invariant, then(X,τ){\displaystyle (X,\tau )}is a complete TVS if and only if(X,p){\displaystyle (X,p)}is a complete pseudometric space.[10]Ifp{\displaystyle p}isnottranslation-invariant, then may be possible for(X,τ){\displaystyle (X,\tau )}to be a complete TVS but(X,p){\displaystyle (X,p)}tonotbe a complete pseudometric space[10](see this footnote[note 4]for an example).[10] Theorem[11][12](Klee)—Letd{\displaystyle d}beany[note 5]metric on a vector spaceX{\displaystyle X}such that the topologyτ{\displaystyle \tau }induced byd{\displaystyle d}onX{\displaystyle X}makes(X,τ){\displaystyle (X,\tau )}into a topological vector space. If(X,d){\displaystyle (X,d)}is a complete metric space then(X,τ){\displaystyle (X,\tau )}is a complete-TVS. Two norms on a vector space are calledequivalentif and only if they induce the same topology.[13]Ifp{\displaystyle p}andq{\displaystyle q}are two equivalent norms on a vector spaceX{\displaystyle X}then thenormed space(X,p){\displaystyle (X,p)}is aBanach spaceif and only if(X,q){\displaystyle (X,q)}is a Banach space. See this footnote for an example of a continuous norm on a Banach space that isnotequivalent to that Banach space's given norm.[note 6][13]All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space.[14]Every Banach space is a complete TVS. A normed space is a Banach space (that is, its canonical norm-induced metric is complete) if and only if it is complete as a topological vector space. Acompletion[15]of a TVSX{\displaystyle X}is a complete TVS that contains a dense vector subspace that is TVS-isomorphic toX.{\displaystyle X.}In other words, it is a complete TVSC{\displaystyle C}into whichX{\displaystyle X}can beTVS-embeddedas adensevector subspace. Every TVS-embedding is auniform embedding. Every topological vector space has a completion. Moreover, every Hausdorff TVS has aHausdorffcompletion, which is necessarily uniqueup toTVS-isomorphism. However, all TVSs, even those that are Hausdorff, (already) complete, and/or metrizable have infinitely many non-Hausdorff completions that arenotTVS-isomorphic to one another. For example, the vector space consisting of scalar-valuedsimple functionsf{\displaystyle f}for which|f|p<∞{\displaystyle |f|_{p}<\infty }(where this seminorm is defined in the usual way in terms ofLebesgue integration) becomes aseminormed spacewhen endowed with this seminorm, which in turn makes it into both apseudometric spaceand a non-Hausdorff non-complete TVS; any completion of this space is a non-Hausdorff complete seminormed space that whenquotientedby the closure of its origin (so as toobtain a Hausdorff TVS) results in (a spacelinearlyisometrically-isomorphicto) the usual complete HausdorffLp{\displaystyle L^{p}}-space(endowed with the usual complete‖⋅‖p{\displaystyle \|\cdot \|_{p}}norm). 
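Returning to the remark above that all norms on a finite-dimensional vector space are equivalent: on R^n one has ‖x‖∞ ≤ ‖x‖₁ ≤ n‖x‖∞, so the 1-norm and the sup-norm induce the same topology and the same Cauchy sequences. A short numerical check (random vectors; the dimension and sample size are arbitrary):

# Check the equivalence bounds  ||x||_inf <= ||x||_1 <= n ||x||_inf  on R^n,
# which is why the two norms define the same (complete) TVS structure.
import numpy as np

rng = np.random.default_rng(0)
n = 7
for _ in range(1000):
    x = rng.normal(size=n)
    sup = np.max(np.abs(x))
    one = np.sum(np.abs(x))
    assert sup <= one + 1e-12
    assert one <= n * sup + 1e-12
print("equivalence bounds verified on 1000 random vectors in R^7")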
As another example demonstrating the usefulness of completions, the completions oftopological tensor products, such asprojective tensor productsorinjective tensor products, of the Banach spaceℓ1(S){\displaystyle \ell ^{1}(S)}with a complete Hausdorff locally convex TVSY{\displaystyle Y}results in a complete TVS that is TVS-isomorphic to a "generalized"ℓ1(S;Y){\displaystyle \ell ^{1}(S;Y)}-space consistingY{\displaystyle Y}-valued functions onS{\displaystyle S}(where this "generalized" TVS is defined analogously to original spaceℓ1(S){\displaystyle \ell ^{1}(S)}of scalar-valued functions onS{\displaystyle S}). Similarly, the completion of the injective tensor product of thespace of scalar-valuedCk{\displaystyle C^{k}}-test functionswith such a TVSY{\displaystyle Y}is TVS-isomorphic to the analogously defined TVS ofY{\displaystyle Y}-valuedCk{\displaystyle C^{k}}test functions. As the example below shows, regardless of whether or not a space is Hausdorff or already complete, everytopological vector space(TVS) has infinitely many non-isomorphic completions.[16] However, every Hausdorff TVS has aHausdorffcompletion that is unique up to TVS-isomorphism.[16]But nevertheless, every Hausdorff TVS still has infinitely many non-isomorphic non-Hausdorff completions. Example(Non-uniqueness of completions):[15]LetC{\displaystyle C}denote any complete TVS and letI{\displaystyle I}denote any TVS endowed with theindiscrete topology, which recall makesI{\displaystyle I}into a complete TVS. Since bothI{\displaystyle I}andC{\displaystyle C}are complete TVSs, so is their productI×C.{\displaystyle I\times C.}IfU{\displaystyle U}andV{\displaystyle V}are non-empty open subsets ofI{\displaystyle I}andC,{\displaystyle C,}respectively, thenU=I{\displaystyle U=I}and(U×V)∩({0}×C)={0}×V≠∅,{\displaystyle (U\times V)\cap (\{0\}\times C)=\{0\}\times V\neq \varnothing ,}which shows that{0}×C{\displaystyle \{0\}\times C}is a dense subspace ofI×C.{\displaystyle I\times C.}Thus by definition of "completion,"I×C{\displaystyle I\times C}is a completion of{0}×C{\displaystyle \{0\}\times C}(it doesn't matter that{0}×C{\displaystyle \{0\}\times C}is already complete). So by identifying{0}×C{\displaystyle \{0\}\times C}withC,{\displaystyle C,}ifX⊆C{\displaystyle X\subseteq C}is a dense vector subspace ofC,{\displaystyle C,}thenX{\displaystyle X}has bothC{\displaystyle C}andI×C{\displaystyle I\times C}as completions. Every Hausdorff TVS has aHausdorffcompletion that is unique up to TVS-isomorphism.[16]But nevertheless, as shown above, every Hausdorff TVS still has infinitely many non-isomorphic non-Hausdorff completions. Properties of Hausdorff completions[17]—Suppose thatX{\displaystyle X}andC{\displaystyle C}are Hausdorff TVSs withC{\displaystyle C}complete. 
Suppose thatE:X→C{\displaystyle E:X\to C}is a TVS-embedding onto a dense vector subspace ofC.{\displaystyle C.}Then IfE2:X→C2{\displaystyle E_{2}:X\to C_{2}}is a TVS embedding onto a dense vector subspace of a complete Hausdorff TVSC2{\displaystyle C_{2}}having the above universal property, then there exists a unique (bijective) TVS-isomorphismI:C→C2{\displaystyle I:C\to C_{2}}such thatE2=I∘E.{\displaystyle E_{2}=I\circ E.} Corollary[17]—SupposeC{\displaystyle C}is a complete Hausdorff TVS andX{\displaystyle X}is a dense vector subspace ofC.{\displaystyle C.}Then every continuous linear mapf:X→Z{\displaystyle f:X\to Z}into a complete Hausdorff TVSZ{\displaystyle Z}has a unique continuous linear extension to a mapC→Z.{\displaystyle C\to Z.} Existence of Hausdorff completions A Cauchy filterB{\displaystyle {\mathcal {B}}}on a TVSX{\displaystyle X}is called aminimal Cauchy filter[17]if there doesnotexist a Cauchy filter onX{\displaystyle X}that is strictly coarser thanB{\displaystyle {\mathcal {B}}}(that is, "strictly coarser thanB{\displaystyle {\mathcal {B}}}" means contained as a proper subset ofB{\displaystyle {\mathcal {B}}}). IfB{\displaystyle {\mathcal {B}}}is a Cauchy filter onX{\displaystyle X}then the filter generated by the following prefilter:{B+N:B∈BandNis a neighborhood of0inX}{\displaystyle \left\{B+N~:~B\in {\mathcal {B}}{\text{ and }}N{\text{ is a neighborhood of }}0{\text{ in }}X\right\}}is the unique minimal Cauchy filter onX{\displaystyle X}that is contained as a subset ofB.{\displaystyle {\mathcal {B}}.}[17]In particular, for anyx∈X,{\displaystyle x\in X,}the neighborhood filter atx{\displaystyle x}is a minimal Cauchy filter. LetM{\displaystyle \mathbb {M} }be the set of all minimal Cauchy filters onX{\displaystyle X}and letE:X→M{\displaystyle E:X\rightarrow \mathbb {M} }be the map defined by sendingx∈X{\displaystyle x\in X}to the neighborhood filter ofx{\displaystyle x}inX.{\displaystyle X.}EndowM{\displaystyle \mathbb {M} }with the following vector space structure: GivenB,C∈M{\displaystyle {\mathcal {B}},{\mathcal {C}}\in \mathbb {M} }and a scalars,{\displaystyle s,}letB+C{\displaystyle {\mathcal {B}}+{\mathcal {C}}}(resp.sB{\displaystyle s{\mathcal {B}}}) denote the unique minimal Cauchy filter contained in the filter generated by{B+C:B∈B,C∈C}{\displaystyle \left\{B+C:B\in {\mathcal {B}},C\in {\mathcal {C}}\right\}}(resp.{sB:B∈B}{\displaystyle \{sB:B\in {\mathcal {B}}\}}). For everybalancedneighborhoodN{\displaystyle N}of the origin inX,{\displaystyle X,}letU(N)=def{B∈M:there existB∈Band a neighborhoodVof the origin inXsuch thatB+V⊆N}{\displaystyle \mathbb {U} (N)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{{\mathcal {B}}\in \mathbb {M} ~:~{\text{ there exist }}B\in {\mathcal {B}}{\text{ and a neighborhood }}V{\text{ of the origin in }}X{\text{ such that }}B+V\subseteq N\right\}} IfX{\displaystyle X}is Hausdorff then the collection of all setsU(N),{\displaystyle \mathbb {U} (N),}asN{\displaystyle N}ranges over all balanced neighborhoods of the origin inX,{\displaystyle X,}forms a vector topology onM{\displaystyle \mathbb {M} }makingM{\displaystyle \mathbb {M} }into a complete Hausdorff TVS. Moreover, the mapE:X→M{\displaystyle E:X\rightarrow \mathbb {M} }is a TVS-embedding onto a dense vector subspace ofM.{\displaystyle \mathbb {M} .}[17] IfX{\displaystyle X}is ametrizable TVSthen a Hausdorff completion ofX{\displaystyle X}can be constructed using equivalence classes of Cauchy sequences instead of minimal Cauchy filters. 
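For a metrizable TVS the Cauchy-sequence construction just mentioned can be made quite concrete. The following Haskell sketch (a toy illustration with hypothetical names, using the rationals as the dense original space) represents points of the completion by Cauchy sequences, defines the vector-space operations termwise, and embeds the original space via constant sequences, in the role played by the map E above:

-- A toy sketch of the Cauchy-sequence completion of a metrizable space, illustrated
-- with the rationals; every name here is hypothetical and for illustration only.
type CauchySeq = Int -> Rational    -- the n-th term of a Cauchy sequence of rationals

-- Vector-space operations on the completion are defined termwise.
addC :: CauchySeq -> CauchySeq -> CauchySeq
addC x y n = x n + y n

scaleC :: Rational -> CauchySeq -> CauchySeq
scaleC s x n = s * x n

-- The embedding of the original space into its completion sends a point to the
-- constant sequence, playing the role of the map E in the construction above.
embed :: Rational -> CauchySeq
embed q _ = q

-- A point of the completion outside the image of the embedding: Newton's iteration
-- for the square root of 2, which is a Cauchy sequence of rationals.
sqrt2 :: CauchySeq
sqrt2 0 = 1
sqrt2 n = let x = sqrt2 (n - 1) in (x + 2 / x) / 2

main :: IO ()
main = print (fromRational (sqrt2 5) :: Double)   -- a rational approximation of sqrt 2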
This subsection details how every non-Hausdorff TVSX{\displaystyle X}can be TVS-embedded onto a dense vector subspace of a complete TVS. The proof that every Hausdorff TVS has a Hausdorff completion is widely available and so this fact will be used (without proof) to show that every non-Hausdorff TVS also has a completion. These details are sometimes useful for extending results from Hausdorff TVSs to non-Hausdorff TVSs. LetI=cl⁡{0}{\displaystyle I=\operatorname {cl} \{0\}}denote the closure of the origin inX,{\displaystyle X,}whereI{\displaystyle I}is endowed with its subspace topology induced byX{\displaystyle X}(so thatI{\displaystyle I}has theindiscrete topology). SinceI{\displaystyle I}has the trivial topology, it is easily shown that every vector subspace ofX{\displaystyle X}that is an algebraic complement ofI{\displaystyle I}inX{\displaystyle X}is necessarily atopological complementofI{\displaystyle I}inX.{\displaystyle X.}[18][19]LetH{\displaystyle H}denote any topological complement ofI{\displaystyle I}inX,{\displaystyle X,}which is necessarily a Hausdorff TVS (since it is TVS-isomorphic to the quotient TVSX/I{\displaystyle X/I}[note 7]). SinceX{\displaystyle X}is thetopological direct sumofI{\displaystyle I}andH{\displaystyle H}(which means thatX=I⊕H{\displaystyle X=I\oplus H}in the category of TVSs), the canonical mapI×H→I⊕H=Xgiven by(x,y)↦x+y{\displaystyle I\times H\to I\oplus H=X\quad {\text{ given by }}\quad (x,y)\mapsto x+y}is a TVS-isomorphism.[19]LetA:X=I⊕H→I×H{\displaystyle A~:~X=I\oplus H~\to ~I\times H}denote the inverse of this canonical map. (As a side note, it follows that every open and every closed subsetU{\displaystyle U}ofX{\displaystyle X}satisfiesU=I+U.{\displaystyle U=I+U.}[proof 1]) The Hausdorff TVSH{\displaystyle H}can be TVS-embedded, say via the mapInH:H→C,{\displaystyle \operatorname {In} _{H}:H\to C,}onto a dense vector subspace of its completionC.{\displaystyle C.}SinceI{\displaystyle I}andC{\displaystyle C}are complete, so is their productI×C.{\displaystyle I\times C.}LetIdI:I→I{\displaystyle \operatorname {Id} _{I}:I\to I}denote the identity map and observe that the product mapIdI×InH:I×H→I×C{\displaystyle \operatorname {Id} _{I}\times \operatorname {In} _{H}:I\times H\to I\times C}is a TVS-embedding whose image is dense inI×C.{\displaystyle I\times C.}Define the map[note 8]B:X=I⊕H→I×CbyB=def(IdI×InH)∘A{\displaystyle B:X=I\oplus H\to I\times C\quad {\text{ by }}\quad B~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(\operatorname {Id} _{I}\times \operatorname {In} _{H}\right)\circ A}which is a TVS-embedding ofX=I⊕H{\displaystyle X=I\oplus H}onto a dense vector subspace of the complete TVSI×C.{\displaystyle I\times C.}Moreover, observe that the closure of the origin inI×C{\displaystyle I\times C}is equal toI×{0},{\displaystyle I\times \{0\},}and thatI×{0}{\displaystyle I\times \{0\}}and{0}×C{\displaystyle \{0\}\times C}are topological complements inI×C.{\displaystyle I\times C.} To summarize,[19]given any algebraic (and thus topological) complementH{\displaystyle H}ofI=defcl⁡{0}{\displaystyle I~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\operatorname {cl} \{0\}}inX{\displaystyle X}and given any completionC{\displaystyle C}of the Hausdorff TVSH{\displaystyle H}such thatH⊆C,{\displaystyle H\subseteq C,}then the natural inclusion[20]InH:X=I⊕H→I⊕C{\displaystyle \operatorname {In} _{H}:X=I\oplus H\to I\oplus C}is a well-defined TVS-embedding ofX{\displaystyle X}onto a dense vector subspace of the complete TVSI⊕C{\displaystyle I\oplus C}where 
moreover,X=I⊕H⊆I⊕C≅I×C.{\displaystyle X=I\oplus H\subseteq I\oplus C\cong I\times C.} Theorem[7][21](Topology of a completion)—LetC{\displaystyle C}be a complete TVS and letX{\displaystyle X}be a dense vector subspace ofC.{\displaystyle C.}IfNX(0){\displaystyle {\mathcal {N}}_{X}(0)}is anyneighborhood baseof the origin inX{\displaystyle X}then the setNC(0)=def{clC⁡N:N∈NX(0)}{\displaystyle {\mathcal {N}}_{C}(0)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{\operatorname {cl} _{C}N~:~N\in {\mathcal {N}}_{X}(0)\right\}}is a neighborhood basis at the origin in the completionC{\displaystyle C}ofX.{\displaystyle X.} IfX{\displaystyle X}is locally convex andP{\displaystyle {\mathcal {P}}}is a family of continuous seminorms onX{\displaystyle X}that generate the topology ofX,{\displaystyle X,}then the family of all continuous extensions toC{\displaystyle C}of all members ofP{\displaystyle {\mathcal {P}}}is a generating family of seminorms forC.{\displaystyle C.} Said differently, ifC{\displaystyle C}is a completion of a TVSX{\displaystyle X}withX⊆C{\displaystyle X\subseteq C}and ifN{\displaystyle {\mathcal {N}}}is aneighborhood baseof the origin inX,{\displaystyle X,}then the family of sets{clC⁡N:N∈N}{\displaystyle \left\{\operatorname {cl} _{C}N~:~N\in {\mathcal {N}}\right\}}is a neighborhood basis at the origin inC.{\displaystyle C.}[3] Theorem[22](Completions of quotients)—LetM{\displaystyle M}be ametrizable topological vector spaceand letN{\displaystyle N}be a closed vector subspace ofM.{\displaystyle M.}Suppose thatC{\displaystyle C}is a completion ofM.{\displaystyle M.}Then the completion ofM/N{\displaystyle M/N}is TVS-isomorphic toC/clC⁡N.{\displaystyle C/\operatorname {cl} _{C}N.}If in additionM{\displaystyle M}is a normed space, then this TVS-isomorphism is also an isometry. Grothendieck's Completeness Theorem LetE{\displaystyle {\mathcal {E}}}denote theequicontinuous compactologyon the continuous dual spaceX′,{\displaystyle X^{\prime },}which by definition consists of allequicontinuousweak-* closedand weak-*boundedabsolutely convex subsetsofX′{\displaystyle X^{\prime }}[23](which are necessarily weak-* compact subsets ofX′{\displaystyle X^{\prime }}). Assume that everyE′∈E{\displaystyle E^{\prime }\in {\mathcal {E}}}is endowed with theweak-* topology. 
AfilterB{\displaystyle {\mathcal {B}}}onX′{\displaystyle X^{\prime }}is said toconverge continuouslytox′∈X′{\displaystyle x^{\prime }\in X^{\prime }}if there exists someE′∈E∩B{\displaystyle E^{\prime }\in {\mathcal {E}}\cap {\mathcal {B}}}containingx′{\displaystyle x^{\prime }}(that is,x′∈E′{\displaystyle x^{\prime }\in E^{\prime }}) such that the trace ofB{\displaystyle {\mathcal {B}}}onE′,{\displaystyle E^{\prime },}which is the familyB|E′=def{B∩E′:B∈B},{\displaystyle {\mathcal {B}}{\big \vert }_{E^{\prime }}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{B\cap E^{\prime }:B\in {\mathcal {B}}\right\},}converges tox′{\displaystyle x^{\prime }}inE′{\displaystyle E^{\prime }}(that is, ifB|E′→x′{\displaystyle {\mathcal {B}}{\big \vert }_{E^{\prime }}\to x^{\prime }}in the given weak-* topology).[24]The filterB{\displaystyle {\mathcal {B}}}converges continuously tox′{\displaystyle x^{\prime }}if and only ifB−x′{\displaystyle {\mathcal {B}}-x^{\prime }}converges continuously to the origin, which happens if and only if for everyx∈X,{\displaystyle x\in X,}the filter⟨B,x+N⟩→⟨x′,x⟩{\displaystyle \langle {\mathcal {B}},x+{\mathcal {N}}\rangle \to \langle x^{\prime },x\rangle }in the scalar field (which isR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) whereN{\displaystyle {\mathcal {N}}}denotes any neighborhood basis at the origin inX,{\displaystyle X,}⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }denotes theduality pairing, and⟨B,x+N⟩{\displaystyle \langle {\mathcal {B}},x+{\mathcal {N}}\rangle }denotes the filter generated by{⟨B,x+N⟩:B∈B,N∈N}.{\displaystyle \{\langle B,x+N\rangle ~:~B\in {\mathcal {B}},N\in {\mathcal {N}}\}.}[24]A mapf:X′→T{\displaystyle f:X^{\prime }\to T}into a topological space (such asR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) is said to beγ{\displaystyle \gamma }-continuousif whenever a filterB{\displaystyle {\mathcal {B}}}onX′{\displaystyle X^{\prime }}converges continuouslytox′∈X′,{\displaystyle x^{\prime }\in X^{\prime },}thenf(B)→f(x′).{\displaystyle f({\mathcal {B}})\to f\left(x^{\prime }\right).}[24] Grothendieck's Completeness Theorem[24]—IfX{\displaystyle X}is a Hausdorff topological vector space then its completion is linearly isomorphic to the set of allγ{\displaystyle \gamma }-continuouslinear functions onX′.{\displaystyle X^{\prime }.} If a TVSX{\displaystyle X}has any of the following properties then so does its completion: Completions of Hilbert spaces Every inner product space(H,⟨⋅,⋅⟩){\displaystyle \left(H,\langle \cdot ,\cdot \rangle \right)}has a completion(H¯,⟨⋅,⋅⟩H¯){\displaystyle \left({\overline {H}},\langle \cdot ,\cdot \rangle _{\overline {H}}\right)}that is a Hilbert space, where the inner product⟨⋅,⋅⟩H¯{\displaystyle \langle \cdot ,\cdot \rangle _{\overline {H}}}is the unique continuous extension toH¯{\displaystyle {\overline {H}}}of the original inner product⟨⋅,⋅⟩.{\displaystyle \langle \cdot ,\cdot \rangle .}The norm induced by(H¯,⟨⋅,⋅⟩H¯){\displaystyle \left({\overline {H}},\langle \cdot ,\cdot \rangle _{\overline {H}}\right)}is also the unique continuous extension toH¯{\displaystyle {\overline {H}}}of the norm induced by⟨⋅,⋅⟩.{\displaystyle \langle \cdot ,\cdot \rangle .}[25][21] Other preserved properties IfX{\displaystyle X}is aHausdorffTVS, then the continuous dual space ofX{\displaystyle X}is identical to the continuous dual space of the completion ofX.{\displaystyle X.}[30]The completion of a locally convexbornological spaceis abarrelled space.[27]IfX{\displaystyle X}andY{\displaystyle 
Y}areDF-spacesthen theprojective tensor product, as well as its completion, of these spaces is a DF-space.[31] The completion of theprojective tensor productof two nuclear spaces is nuclear.[26]The completion of a nuclear space is TVS-isomorphic with a projective limit ofHilbert spaces.[26] IfX=Y⊕Z{\displaystyle X=Y\oplus Z}(meaning that the addition mapY×Z→X{\displaystyle Y\times Z\to X}is a TVS-isomorphism) has a Hausdorff completionC{\displaystyle C}then(clC⁡Y)+(clC⁡Z)=C.{\displaystyle \left(\operatorname {cl} _{C}Y\right)+\left(\operatorname {cl} _{C}Z\right)=C.}If in additionX{\displaystyle X}is aninner product spaceandY{\displaystyle Y}andZ{\displaystyle Z}areorthogonal complementsof each other inX{\displaystyle X}(that is,⟨Y,Z⟩={0}{\displaystyle \langle Y,Z\rangle =\{0\}}), thenclC⁡Y{\displaystyle \operatorname {cl} _{C}Y}andclC⁡Z{\displaystyle \operatorname {cl} _{C}Z}are orthogonal complements in theHilbert spaceC.{\displaystyle C.} Iff:X→Y{\displaystyle f:X\to Y}is anuclear linear operatorbetween two locally convex spaces and ifC{\displaystyle C}be a completion ofX{\displaystyle X}thenf{\displaystyle f}has a unique continuous linear extension to a nuclear linear operatorF:C→Y.{\displaystyle F:C\to Y.}[26] LetX{\displaystyle X}andY{\displaystyle Y}be two Hausdorff TVSs withY{\displaystyle Y}complete. LetC{\displaystyle C}be a completion ofX.{\displaystyle X.}LetL(X;Y){\displaystyle L(X;Y)}denote the vector space of continuous linear operators and letI:L(X;Y)→L(C;Y){\displaystyle I:L(X;Y)\to L(C;Y)}denote the map that sends everyf∈L(X;Y){\displaystyle f\in L(X;Y)}to its unique continuous linear extension onC.{\displaystyle C.}ThenI:L(X;Y)→L(C;Y){\displaystyle I:L(X;Y)\to L(C;Y)}is a (surjective) vector space isomorphism. Moreover,I:L(X;Y)→L(C;Y){\displaystyle I:L(X;Y)\to L(C;Y)}maps families ofequicontinuoussubsets onto each other. Suppose thatL(X;Y){\displaystyle L(X;Y)}is endowed with aG{\displaystyle {\mathcal {G}}}-topologyand thatH{\displaystyle {\mathcal {H}}}denotes the closures inC{\displaystyle C}of sets inG.{\displaystyle {\mathcal {G}}.}Then the mapI:LG(X;Y)→LH(C;Y){\displaystyle I:L_{\mathcal {G}}(X;Y)\to L_{\mathcal {H}}(C;Y)}is also a TVS-isomorphism.[26] Theorem—[11]Letd{\displaystyle d}beany(not assumed to be translation-invariant) metric on a vector spaceX{\displaystyle X}such that the topologyτ{\displaystyle \tau }induced byd{\displaystyle d}onX{\displaystyle X}makes(X,τ){\displaystyle (X,\tau )}into a topological vector space. If(X,d){\displaystyle (X,d)}is a complete metric space then(X,τ){\displaystyle (X,\tau )}is a complete-TVS. Every TVS has acompletionand every Hausdorff TVS has a Hausdorff completion.[36]Every complete TVS isquasi-complete spaceandsequentially complete.[37]However, the converses of the above implications are generally false.[37]There exists asequentially completelocally convex TVS that is notquasi-complete.[29] If a TVS has a complete neighborhood of the origin then it is complete.[38]Every completepseudometrizable TVSis abarrelled spaceand aBaire space(and thus non-meager).[39]The dimension of a complete metrizable TVS is either finite or uncountable.[19] Anyneighborhood basisof any point in a TVS is a Cauchy prefilter. Every convergent net (respectively, prefilter) in a TVS is necessarily a Cauchy net (respectively, a Cauchy prefilter).[6]Any prefilter that is subordinate to (that is, finer than) a Cauchy prefilter is necessarily also a Cauchy prefilter[6]and any prefilter finer than a Cauchy prefilter is also a Cauchy prefilter. 
The filter associated with a sequence in a TVS is Cauchy if and only if the sequence is a Cauchy sequence. Every convergent prefilter is a Cauchy prefilter. IfX{\displaystyle X}is a TVS and ifx∈X{\displaystyle x\in X}is a cluster point of a Cauchy net (respectively, Cauchy prefilter), then that Cauchy net (respectively, that Cauchy prefilter) converges tox{\displaystyle x}inX.{\displaystyle X.}[3]If a Cauchy filter in a TVS has anaccumulation pointx{\displaystyle x}then it converges tox.{\displaystyle x.} Uniformly continuous maps send Cauchy nets to Cauchy nets.[3]A Cauchy sequence in a Hausdorff TVSX,{\displaystyle X,}when considered as a set, is not necessarilyrelatively compact(that is, its closure inX{\displaystyle X}is not necessarily compact[note 9]) although it is precompact (that is, its closure in the completion ofX{\displaystyle X}is compact). Every Cauchy sequence is abounded subsetbut this is not necessarily true of Cauchy net. For example, letN{\displaystyle \mathbb {N} }have it usual order, let≤{\displaystyle \,\leq \,}denote anypreorderon the non-indiscreteTVSX{\displaystyle X}(that is,X{\displaystyle X}does not have thetrivial topology; it is also assumed thatX∩N=∅{\displaystyle X\cap \mathbb {N} =\varnothing }) and extend these two preorders to the unionI=defX∪N{\displaystyle I~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~X\cup \mathbb {N} }by declaring thatx≤n{\displaystyle x\leq n}holds for everyx∈X{\displaystyle x\in X}andn∈N.{\displaystyle n\in \mathbb {N} .}Letf:I→X{\displaystyle f:I\to X}be defined byf(i)=i{\displaystyle f(i)=i}ifi∈X{\displaystyle i\in X}andf(i)=0{\displaystyle f(i)=0}otherwise (that is, ifi∈N{\displaystyle i\in \mathbb {N} }), which is a net inX{\displaystyle X}since the preordered set(I,≤){\displaystyle (I,\leq )}isdirected(this preorder onI{\displaystyle I}is alsopartial order(respectively, atotal order) if this is true of(X,≤){\displaystyle (X,\leq )}). This netf{\displaystyle f}is a Cauchy net inX{\displaystyle X}because it converges to the origin, but the set{f(i):i∈I}=X{\displaystyle \{f(i):i\in I\}=X}is not a bounded subset ofX{\displaystyle X}(becauseX{\displaystyle X}does not have the trivial topology). Suppose thatX∙=(Xi)i∈I{\displaystyle X_{\bullet }=\left(X_{i}\right)_{i\in I}}is a family of TVSs and thatX{\displaystyle X}denotes the product of these TVSs. 
Suppose that for every indexi,{\displaystyle i,}Bi{\displaystyle {\mathcal {B}}_{i}}is a prefilter onXi.{\displaystyle X_{i}.}Then the product of this family of prefilters is a Cauchy filter onX{\displaystyle X}if and only if eachBi{\displaystyle {\mathcal {B}}_{i}}is a Cauchy filter onXi.{\displaystyle X_{i}.}[17] Iff:X→Y{\displaystyle f:X\to Y}is an injectivetopological homomorphismfrom a complete TVS into a Hausdorff TVS then the image off{\displaystyle f}(that is,f(X){\displaystyle f(X)}) is a closed subspace ofY.{\displaystyle Y.}[34]Iff:X→Y{\displaystyle f:X\to Y}is atopological homomorphismfrom a completemetrizableTVS into a Hausdorff TVS then the range off{\displaystyle f}is a closed subspace ofY.{\displaystyle Y.}[34]Iff:X→Y{\displaystyle f:X\to Y}is auniformly continuousmap between two Hausdorff TVSs then the image underf{\displaystyle f}of a totally bounded subset ofX{\displaystyle X}is a totally bounded subset ofY.{\displaystyle Y.}[40] Uniformly continuous extensions Suppose thatf:D→Y{\displaystyle f:D\to Y}is a uniformly continuous map from a dense subsetD{\displaystyle D}of a TVSX{\displaystyle X}into a complete Hausdorff TVSY.{\displaystyle Y.}Thenf{\displaystyle f}has a unique uniformly continuous extension to all ofX.{\displaystyle X.}[3]If in additionf{\displaystyle f}is a homomorphism then its unique uniformly continuous extension is also a homomorphism.[3]This remains true if "TVS" is replaced by "commutative topological group."[3]The mapf{\displaystyle f}is not required to be a linear map and thatD{\displaystyle D}is not required to be a vector subspace ofX.{\displaystyle X.} Uniformly continuous linear extensions Supposef:X→Y{\displaystyle f:X\to Y}be a continuous linear operator between two Hausdorff TVSs. IfM{\displaystyle M}is a dense vector subspace ofX{\displaystyle X}and if the restrictionf|M:M→Y{\displaystyle f{\big \vert }_{M}:M\to Y}toM{\displaystyle M}is atopological homomorphismthenf:X→Y{\displaystyle f:X\to Y}is also a topological homomorphism.[41]So ifC{\displaystyle C}andD{\displaystyle D}are Hausdorff completions ofX{\displaystyle X}andY,{\displaystyle Y,}respectively, and iff:X→Y{\displaystyle f:X\to Y}is a topological homomorphism, thenf{\displaystyle f}'s unique continuous linear extensionF:C→D{\displaystyle F:C\to D}is a topological homomorphism. (Note that it's possible forf:X→Y{\displaystyle f:X\to Y}to be surjective but forF:C→D{\displaystyle F:C\to D}tonotbe injective.)[41] SupposeX{\displaystyle X}andY{\displaystyle Y}are Hausdorff TVSs,M{\displaystyle M}is a dense vector subspace ofX,{\displaystyle X,}andN{\displaystyle N}is a dense vector subspaces ofY.{\displaystyle Y.}IfM{\displaystyle M}are andN{\displaystyle N}are topologically isomorphic additive subgroups via a topological homomorphismf{\displaystyle f}then the same is true ofX{\displaystyle X}andY{\displaystyle Y}via the unique uniformly continuous extension off{\displaystyle f}(which is also a homeomorphism).[42] Complete subsets Every complete subset of a TVS issequentially complete. A complete subset of a Hausdorff TVSX{\displaystyle X}is a closed subset ofX.{\displaystyle X.}[3][38] Every compact subset of a TVS is complete (even if the TVS is not Hausdorff or not complete).[3][38]Closed subsets of a complete TVS are complete; however, if a TVSX{\displaystyle X}is not complete thenX{\displaystyle X}is a closed subset ofX{\displaystyle X}that is not complete. The empty set is complete subset of every TVS. 
IfC{\displaystyle C}is a complete subset of a TVS (the TVS is not necessarily Hausdorff or complete) then any subset ofC{\displaystyle C}that is closed inC{\displaystyle C}is complete.[38] Topological complements IfX{\displaystyle X}is a non-normableFréchet spaceon which there exists a continuous norm thenX{\displaystyle X}contains a closed vector subspace that has notopological complement.[29]IfX{\displaystyle X}is a complete TVS andM{\displaystyle M}is a closed vector subspace ofX{\displaystyle X}such thatX/M{\displaystyle X/M}is not complete, thenM{\displaystyle M}doesnothave atopological complementinX.{\displaystyle X.}[29] Subsets of completions LetM{\displaystyle M}be aseparablelocally convexmetrizable topological vector spaceand letC{\displaystyle C}be its completion. IfS{\displaystyle S}is a bounded subset ofC{\displaystyle C}then there exists a bounded subsetR{\displaystyle R}ofM{\displaystyle M}such thatS⊆clC⁡R.{\displaystyle S\subseteq \operatorname {cl} _{C}R.}[29] Relation to compact subsets A subset of a TVS (notassumed to be Hausdorff or complete) iscompactif and only if it is complete andtotally bounded.[43][proof 2]Thus a closed andtotally boundedsubset of a complete TVS is compact.[44][3] In a Hausdorff locally convex TVS, the convex hull of aprecompactset is again precompact.[45]Consequently, in a complete locally convex Hausdorff TVS, the closed convex hull of a compact subset is again compact.[46] The convex hull of a compact subset of aHilbert spaceisnotnecessarily closed and so alsonotnecessarily compact. For example, letH{\displaystyle H}be the separable Hilbert spaceℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}of square-summable sequences with the usual norm‖⋅‖2{\displaystyle \|\cdot \|_{2}}and leten=(0,…,0,1,0,…){\displaystyle e_{n}=(0,\ldots ,0,1,0,\ldots )}be the standardorthonormal basis(that is1{\displaystyle 1}at thenth{\displaystyle n^{\text{th}}}-coordinate). The closed setS={0}∪{1nen}{\displaystyle S=\{0\}\cup \left\{{\tfrac {1}{n}}e_{n}\right\}}is compact but its convex hullco⁡S{\displaystyle \operatorname {co} S}isnota closed set becauseh:=∑n=1∞12n1nen{\displaystyle h:=\sum _{n=1}^{\infty }{\tfrac {1}{2^{n}}}{\tfrac {1}{n}}e_{n}}belongs to the closure ofco⁡S{\displaystyle \operatorname {co} S}inH{\displaystyle H}buth∉co⁡S{\displaystyle h\not \in \operatorname {co} S}(since every sequencez∈co⁡S{\displaystyle z\in \operatorname {co} S}is a finiteconvex combinationof elements ofS{\displaystyle S}and so is necessarily0{\displaystyle 0}in all but finitely many coordinates, which is not true ofh{\displaystyle h}).[47]However, like in all complete Hausdorff locally convex spaces, theclosedconvex hullK:=co¯S{\displaystyle K:={\overline {\operatorname {co} }}S}of this compact subset is compact.[46]The vector subspaceX:=span⁡S{\displaystyle X:=\operatorname {span} S}is apre-Hilbert spacewhen endowed with the substructure that the Hilbert spaceH{\displaystyle H}induces on it butX{\displaystyle X}is not complete andh∉K∩X{\displaystyle h\not \in K\cap X}(sinceh∉X{\displaystyle h\not \in X}). The closed convex hull ofS{\displaystyle S}inX{\displaystyle X}(here, "closed" means with respect toX,{\displaystyle X,}and not toH{\displaystyle H}as before) is equal toK∩X,{\displaystyle K\cap X,}which is not compact (because it is not a complete subset). This shows that in a Hausdorff locally convex space that is not complete, the closed convex hull of a compact subset mightfailto be compact (although it will beprecompact/totally bounded). 
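The example above can also be checked numerically. In the following Haskell sketch (an illustration only), h_N denotes the finite convex combination obtained by keeping the first N terms of the series defining h and placing the leftover weight on the point 0; each h_N lies in co S, and the computed l^2 distance from h to h_N shrinks toward 0 even though h itself, having infinitely many nonzero coordinates, is not a finite convex combination:

-- A numeric sketch of the l^2 example above (illustrative only): the tail of the
-- series defining h measures the l^2 distance from h to the partial combination h_N.
coeff :: Int -> Double
coeff n = 2 ** negate (fromIntegral n) / fromIntegral n

-- Squared distance ||h - h_N||^2 = sum over n > N of (2^(-n) / n)^2, truncated at a
-- large cutoff for the numerical approximation.
tailDistSq :: Int -> Double
tailDistSq bigN = sum [coeff n ^ (2 :: Int) | n <- [bigN + 1 .. 1000]]

main :: IO ()
main = mapM_ report [1, 5, 10, 20, 40]
  where
    report n =
      putStrLn $
        "N = " ++ show n ++ ",  ||h - h_N||_2 is about " ++ show (sqrt (tailDistSq n))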
Every complete totally bounded set is relatively compact.[3]IfX{\displaystyle X}is any TVS then the quotient mapq:X→X/clX⁡{0}{\displaystyle q:X\to X/\operatorname {cl} _{X}\{0\}}is aclosed map[48]and thusS+clX⁡{0}⊆clX⁡S{\displaystyle S+\operatorname {cl} _{X}\{0\}\subseteq \operatorname {cl} _{X}S}. A subsetS{\displaystyle S}of a TVSX{\displaystyle X}is totally bounded if and only if its image under the canonical quotient mapq:X→X/clX⁡{0}{\displaystyle q:X\to X/\operatorname {cl} _{X}\{0\}}is totally bounded.[19]ThusS{\displaystyle S}is totally bounded if and only ifS+clX⁡{0}{\displaystyle S+\operatorname {cl} _{X}\{0\}}is totally bounded. In any TVS, the closure of a totally bounded subset is again totally bounded.[3]In a locally convex space, the convex hull and thedisked hullof a totally bounded set are totally bounded.[36]IfS{\displaystyle S}is a subset of a TVSX{\displaystyle X}such that every sequence inS{\displaystyle S}has a cluster point inS{\displaystyle S}thenS{\displaystyle S}is totally bounded.[19]A subsetS{\displaystyle S}of a Hausdorff TVSX{\displaystyle X}is totally bounded if and only if every ultrafilter onS{\displaystyle S}is Cauchy, which happens if and only if it is pre-compact (that is, its closure in the completion ofX{\displaystyle X}is compact).[40] IfS⊆X{\displaystyle S\subseteq X}is compact, thenclX⁡S=S+clX⁡{0}{\displaystyle \operatorname {cl} _{X}S=S+\operatorname {cl} _{X}\{0\}}and this set is compact. Thus the closure of a compact set is compact[note 10](that is, all compact sets arerelatively compact).[49]Every relatively compact subset of a Hausdorff TVS is totally bounded.[40] In a complete locally convex space, the convex hull and the disked hull of a compact set are both compact.[36]More generally, ifK{\displaystyle K}is a compact subset of a locally convex space, then the convex hullco⁡K{\displaystyle \operatorname {co} K}(resp. the disked hullcobal⁡K{\displaystyle \operatorname {cobal} K}) is compact if and only if it is complete.[36]Every subsetS{\displaystyle S}ofclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}is compact and thus complete.[proof 3]In particular, ifX{\displaystyle X}is not Hausdorff then there exist compact complete sets that are not closed.[3]
https://en.wikipedia.org/wiki/Complete_topological_vector_space
Inmathematical analysis,Ekeland's variational principle, discovered byIvar Ekeland,[1][2][3]is a theorem that asserts that there exist nearly optimal solutions to someoptimization problems. Ekeland's principle can be used when the lowerlevel setof a minimization problems is notcompact, so that theBolzano–Weierstrass theoremcannot be applied. The principle relies on thecompletenessof themetric space.[4] The principle has been shown to be equivalent to completeness of metric spaces.[5]Inproof theory, it is equivalent toΠ11CA0over RCA0, i.e. relatively strong. It also leads to a quick proof of theCaristi fixed point theorem.[4][6] Ekeland was associated with theParis Dauphine Universitywhen he proposed this theorem.[1] A functionf:X→R∪{−∞,+∞}{\displaystyle f:X\to \mathbb {R} \cup \{-\infty ,+\infty \}}valued in theextended real numbersR∪{−∞,+∞}=[−∞,+∞]{\displaystyle \mathbb {R} \cup \{-\infty ,+\infty \}=[-\infty ,+\infty ]}is said to bebounded belowifinff(X)=infx∈Xf(x)>−∞{\displaystyle \inf _{}f(X)=\inf _{x\in X}f(x)>-\infty }and it is calledproperif it has a non-emptyeffective domain, which by definition is the setdom⁡f=def{x∈X:f(x)≠+∞},{\displaystyle \operatorname {dom} f~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x\in X:f(x)\neq +\infty \},}and it is never equal to−∞.{\displaystyle -\infty .}In other words, a map isproperif is valued inR∪{+∞}{\displaystyle \mathbb {R} \cup \{+\infty \}}and not identically+∞.{\displaystyle +\infty .}The mapf{\displaystyle f}is proper and bounded below if and only if−∞<inff(X)≠+∞,{\displaystyle -\infty <\inf _{}f(X)\neq +\infty ,}or equivalently, if and only ifinff(X)∈R.{\displaystyle \inf _{}f(X)\in \mathbb {R} .} A functionf:X→[−∞,+∞]{\displaystyle f:X\to [-\infty ,+\infty ]}islower semicontinuousat a givenx0∈X{\displaystyle x_{0}\in X}if for every realy<f(x0){\displaystyle y<f\left(x_{0}\right)}there exists aneighborhoodU{\displaystyle U}ofx0{\displaystyle x_{0}}such thatf(u)>y{\displaystyle f(u)>y}for allu∈U.{\displaystyle u\in U.}A function is calledlower semicontinuousif it is lower semicontinuous at every point ofX,{\displaystyle X,}which happens if and only if{x∈X:f(x)>y}{\displaystyle \{x\in X:~f(x)>y\}}is anopen setfor everyy∈R,{\displaystyle y\in \mathbb {R} ,}or equivalently, if and only if all of its lowerlevel sets{x∈X:f(x)≤y}{\displaystyle \{x\in X:~f(x)\leq y\}}areclosed. Ekeland's variational principle[7]—Let(X,d){\displaystyle (X,d)}be acomplete metric spaceand letf:X→R∪{+∞}{\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}be aproperlower semicontinuousfunction that isbounded below(soinff(X)∈R{\displaystyle \inf _{}f(X)\in \mathbb {R} }). 
Pickx0∈X{\displaystyle x_{0}\in X}such thatf(x0)∈R{\displaystyle f(x_{0})\in \mathbb {R} }(or equivalently,f(x0)≠+∞{\displaystyle f(x_{0})\neq +\infty }) and fix any realε>0.{\displaystyle \varepsilon >0.}There exists somev∈X{\displaystyle v\in X}such thatf(v)≤f(x0)−εd(x0,v){\displaystyle f(v)~\leq ~f\left(x_{0}\right)-\varepsilon \;d\left(x_{0},v\right)}and for everyx∈X{\displaystyle x\in X}other thanv{\displaystyle v}(that is,x≠v{\displaystyle x\neq v}),f(v)<f(x)+εd(v,x).{\displaystyle f(v)~<~f(x)+\varepsilon \;d(v,x).} Define a functionG:X×X→R∪{+∞}{\displaystyle G:X\times X\to \mathbb {R} \cup \{+\infty \}}byG(x,y)=deff(x)+εd(x,y){\displaystyle G(x,y)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(x)+\varepsilon \;d(x,y)}which is lower semicontinuous because it is the sum of the lower semicontinuous functionf{\displaystyle f}and the continuous function(x,y)↦εd(x,y).{\displaystyle (x,y)\mapsto \varepsilon \;d(x,y).}Givenz∈X,{\displaystyle z\in X,}denote the functions with one coordinate fixed atz{\displaystyle z}byGz=defG(z,⋅):X→R∪{+∞}and{\displaystyle G_{z}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~G(z,\cdot ):X\to \mathbb {R} \cup \{+\infty \}\;{\text{ and }}}Gz=defG(⋅,z):X→R∪{+∞}{\displaystyle G^{z}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~G(\cdot ,z):X\to \mathbb {R} \cup \{+\infty \}}and define the setF(z)=def{y∈X:Gz(y)≤f(z)}={y∈X:f(y)+εd(y,z)≤f(z)},{\displaystyle F(z)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{y\in X:G^{z}(y)\leq f(z)\right\}~=~\{y\in X:f(y)+\varepsilon \;d(y,z)\leq f(z)\},}which is not empty sincez∈F(z).{\displaystyle z\in F(z).}An elementv∈X{\displaystyle v\in X}satisfies the conclusion of this theorem if and only ifF(v)={v}.{\displaystyle F(v)=\{v\}.}It remains to find such an element. It may be verified that for everyx∈X,{\displaystyle x\in X,}the setF(x){\displaystyle F(x)}is closed (being a sublevel set of the lower semicontinuous functionGx{\displaystyle G^{x}}), and that ify∈F(x){\displaystyle y\in F(x)}thenf(y)≤f(x){\displaystyle f(y)\leq f(x)}andF(y)⊆F(x){\displaystyle F(y)\subseteq F(x)}(by the triangle inequality); these facts are used below. Lets0=infx∈F(x0)f(x),{\displaystyle s_{0}=\inf _{x\in F\left(x_{0}\right)}f(x),}which is a real number becausef{\displaystyle f}was assumed to be bounded below. 
Pickx1∈F(x0){\displaystyle x_{1}\in F\left(x_{0}\right)}such thatf(x1)<s0+2−1.{\displaystyle f\left(x_{1}\right)<s_{0}+2^{-1}.}Having definedsn−1{\displaystyle s_{n-1}}andxn,{\displaystyle x_{n},}letsn=definfx∈F(xn)f(x){\displaystyle s_{n}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\inf _{x\in F\left(x_{n}\right)}f(x)}and pickxn+1∈F(xn){\displaystyle x_{n+1}\in F\left(x_{n}\right)}such thatf(xn+1)<sn+2−(n+1).{\displaystyle f\left(x_{n+1}\right)<s_{n}+2^{-(n+1)}.}For anyn≥0,{\displaystyle n\geq 0,}xn+1∈F(xn){\displaystyle x_{n+1}\in F\left(x_{n}\right)}guarantees thatsn≤f(xn+1){\displaystyle s_{n}\leq f\left(x_{n+1}\right)}andF(xn+1)⊆F(xn),{\displaystyle F\left(x_{n+1}\right)\subseteq F\left(x_{n}\right),}which in turn impliessn+1≥sn{\displaystyle s_{n+1}\geq s_{n}}and thus alsof(xn+2)≥sn+1≥sn.{\displaystyle f\left(x_{n+2}\right)\geq s_{n+1}\geq s_{n}.}So ifn≥1{\displaystyle n\geq 1}thenxn+1∈F(xn)=def{y∈X:f(y)+εd(y,xn)≤f(xn)}{\displaystyle x_{n+1}\in F\left(x_{n}\right){\stackrel {\scriptscriptstyle {\text{def}}}{=}}\left\{y\in X:f(y)+\varepsilon \;d\left(y,x_{n}\right)\leq f\left(x_{n}\right)\right\}}andf(xn+1)≥sn−1,{\displaystyle f\left(x_{n+1}\right)\geq s_{n-1},}which guaranteeεd(xn+1,xn)≤f(xn)−f(xn+1)≤f(xn)−sn−1<12n.{\displaystyle \varepsilon \;d\left(x_{n+1},x_{n}\right)~\leq ~f\left(x_{n}\right)-f\left(x_{n+1}\right)~\leq ~f\left(x_{n}\right)-s_{n-1}~<~{\frac {1}{2^{n}}}.} It follows that for all positive integersn,p≥1,{\displaystyle n,p\geq 1,}d(xn+p,xn)≤2ε−12n,{\displaystyle d\left(x_{n+p},x_{n}\right)~\leq ~2\;{\frac {\varepsilon ^{-1}}{2^{n}}},}which proves thatx∙:=(xn)n=0∞{\displaystyle x_{\bullet }:=\left(x_{n}\right)_{n=0}^{\infty }}is a Cauchy sequence. BecauseX{\displaystyle X}is a complete metric space, there exists somev∈X{\displaystyle v\in X}such thatx∙{\displaystyle x_{\bullet }}converges tov.{\displaystyle v.}For anyn≥0,{\displaystyle n\geq 0,}sinceF(xn){\displaystyle F\left(x_{n}\right)}is a closed set that contain the sequencexn,xn+1,xn+2,…,{\displaystyle x_{n},x_{n+1},x_{n+2},\ldots ,}it must also contain this sequence's limit, which isv;{\displaystyle v;}thusv∈F(xn){\displaystyle v\in F\left(x_{n}\right)}and in particular,v∈F(x0).{\displaystyle v\in F\left(x_{0}\right).} The theorem will follow once it is shown thatF(v)={v}.{\displaystyle F(v)=\{v\}.}So letx∈F(v){\displaystyle x\in F(v)}and it remains to showx=v.{\displaystyle x=v.}Becausex∈F(xn){\displaystyle x\in F\left(x_{n}\right)}for alln≥0,{\displaystyle n\geq 0,}it follows as above thatεd(x,xn)≤2−n,{\displaystyle \varepsilon \;d\left(x,x_{n}\right)\leq 2^{-n},}which implies thatx∙{\displaystyle x_{\bullet }}converges tox.{\displaystyle x.}Becausex∙{\displaystyle x_{\bullet }}also converges tov{\displaystyle v}and limits in metric spaces are unique,x=v.{\displaystyle x=v.}◼{\displaystyle \blacksquare }Q.E.D. 
For example, iff{\displaystyle f}and(X,d){\displaystyle (X,d)}are as in the theorem's statement and ifx0∈X{\displaystyle x_{0}\in X}happens to be a global minimum point off,{\displaystyle f,}then the pointv{\displaystyle v}from the theorem's conclusion can be taken to bev:=x0.{\displaystyle v:=x_{0}.} Corollary[8]—Let(X,d){\displaystyle (X,d)}be acomplete metric space, and letf:X→R∪{+∞}{\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}be alower semicontinuousfunctional onX{\displaystyle X}that is bounded below and not identically equal to+∞.{\displaystyle +\infty .}Fixε>0{\displaystyle \varepsilon >0}and a pointx0∈X{\displaystyle x_{0}\in X}such thatf(x0)≤ε+infx∈Xf(x).{\displaystyle f\left(x_{0}\right)~\leq ~\varepsilon +\inf _{x\in X}f(x).}Then, for everyλ>0,{\displaystyle \lambda >0,}there exists a pointv∈X{\displaystyle v\in X}such thatf(v)≤f(x0),{\displaystyle f(v)~\leq ~f\left(x_{0}\right),}d(x0,v)≤λ,{\displaystyle d\left(x_{0},v\right)~\leq ~\lambda ,}and, for allx≠v,{\displaystyle x\neq v,}f(x)+ελd(v,x)>f(v).{\displaystyle f(x)+{\frac {\varepsilon }{\lambda }}d(v,x)~>~f(v).} The principle could be thought of as follows: for any pointx0{\displaystyle x_{0}}which nearly realizes the infimum, there exists another pointv{\displaystyle v}that is at least as good asx0{\displaystyle x_{0}}, is close tox0{\displaystyle x_{0}}, and is the unique minimum point of the perturbed functionf(x)+ελd(v,x).{\displaystyle f(x)+{\frac {\varepsilon }{\lambda }}d(v,x).}A good compromise is to takeλ:=√ε{\displaystyle \lambda :={\sqrt {\varepsilon }}}in the preceding result.[8]
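As a numerical illustration of the theorem itself (not of the proof's construction), take X = R with the usual metric and f(x) = exp(-x), which is proper, continuous, and bounded below with infimum 0 that is not attained. With x0 = 0 and eps = 0.1 one can check that v = ln(1/eps) satisfies both conclusions; the following Haskell sketch verifies the first conclusion and samples the second at a few points:

-- A numeric sketch of Ekeland's principle for f(x) = exp(-x) on the real line
-- (an illustrative example, not the iterative construction used in the proof above).
f :: Double -> Double
f x = exp (negate x)

eps, x0, v :: Double
eps = 0.1
x0  = 0
v   = log (1 / eps)   -- a point satisfying both conclusions for this particular f

main :: IO ()
main = do
  -- First conclusion:  f v <= f x0 - eps * d(x0, v)
  putStrLn ("first conclusion holds: "
            ++ show (f v <= f x0 - eps * abs (x0 - v)))
  -- Second conclusion, sampled:  f v < f x + eps * d(v, x) for every x /= v
  let ok x = x == v || f v < f x + eps * abs (v - x)
  putStrLn ("second conclusion holds at sample points: "
            ++ show (all ok [-5, -1, 0, 0.5, 1, 2, v, 3, 5, 10, 100]))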
https://en.wikipedia.org/wiki/Ekeland%27s_variational_principle
In themathematicalareas oforderandlattice theory, theKnaster–Tarski theorem, named afterBronisław KnasterandAlfred Tarski, states the following: if L is acomplete latticeand f : L → L is anorder-preserving(monotone) function, then the set offixed pointsof f in L is also a complete lattice. It was Tarski who stated the result in its most general form,[1]and so the theorem is often known asTarski's fixed-point theorem. Some time earlier, Knaster and Tarski established the result for the special case whereLis thelatticeofsubsetsof a set, thepower setlattice.[2] The theorem has important applications informal semantics of programming languagesandabstract interpretation, as well as ingame theory. A kind of converse of this theorem was proved byAnne C. Davis: If everyorder-preserving functionf:L→Lon a latticeLhas a fixed point, thenLis a complete lattice.[3] Since complete lattices cannot beempty(they must contain asupremumandinfimumof the empty set), the theorem in particular guarantees the existence of at least one fixed point off, and even the existence of aleastfixed point(orgreatestfixed point). In many practical cases, this is the most important implication of the theorem. Theleast fixpointoffis the least elementxsuch thatf(x) =x, or, equivalently, such thatf(x) ≤x; thedualholds for thegreatest fixpoint, the greatest elementxsuch thatf(x) =x. Iff(limxn) = limf(xn) for all ascendingsequencesxn, then the least fixpoint offis limfn(0) where 0 is theleast elementofL, thus giving a more "constructive" version of the theorem. (See:Kleene fixed-point theorem.) More generally, iffis monotonic, then the least fixpoint offis the stationary limit offα(0), taking α over theordinals, wherefαis defined bytransfinite induction:fα+1=f(fα) andfγfor a limit ordinal γ is theleast upper boundof thefβfor all ordinals β less than γ.[4]The dual theorem holds for the greatest fixpoint. For example, in theoreticalcomputer science, least fixed points ofmonotonic functionsare used to defineprogram semantics, seeLeast fixed point § Denotational semanticsfor an example. Often a more specialized version of the theorem is used, whereLis assumed to be the lattice of all subsets of a certain set ordered bysubset inclusion. This reflects the fact that in many applications only such lattices are considered. One then usually is looking for the smallest set that has the property of being a fixed point of the functionf.Abstract interpretationmakes ample use of the Knaster–Tarski theorem and the formulas giving the least and greatest fixpoints. The Knaster–Tarski theorem can be used to give a simple proof of theCantor–Bernstein–Schroeder theorem[5][6]and it is also used in establishing theBanach–Tarski paradox. Weaker versions of the Knaster–Tarski theorem can be formulated for ordered sets, but involve more complicated assumptions. For example:[citation needed] This can be applied to obtain various theorems oninvariant sets, e.g. Ok's theorem: In particular, using the Knaster-Tarski principle one can develop the theory of global attractors for noncontractive discontinuous (multivalued)iterated function systems. For weakly contractive iterated function systems theKantorovich theorem(known also as Tarski-Kantorovich fixpoint principle) suffices. Other applications of fixed-point principles for ordered sets come from the theory ofdifferential,integralandoperatorequations. Let us restate the theorem. 
For a complete lattice⟨L,≤⟩{\displaystyle \langle L,\leq \rangle }and a monotone functionf:L→L{\displaystyle f\colon L\rightarrow L}onL, the set of all fixpoints offis also a complete lattice⟨P,≤⟩{\displaystyle \langle P,\leq \rangle }, with: the greatest fixpoint offgiven by⋁{x∈L|x≤f(x)}{\displaystyle \bigvee \{x\in L\mid x\leq f(x)\}}and the least fixpoint offgiven by⋀{x∈L|f(x)≤x}.{\displaystyle \bigwedge \{x\in L\mid f(x)\leq x\}.} Proof.We begin by showing thatPhas both a least element and a greatest element. LetD= {x|x≤f(x)}andx∈D(we know that at least 0Lbelongs toD). Then becausefis monotone we havef(x) ≤f(f(x)), that isf(x) ∈D. Now letu=⋁D{\displaystyle u=\bigvee D}(uexists becauseD⊆LandLis a complete lattice). Then for allx∈Dit is true thatx≤uandf(x) ≤f(u), sox≤f(x) ≤f(u). Therefore,f(u) is an upper bound ofD, butuis the least upper bound, sou≤f(u), i.e.u∈D. Thenf(u) ∈D(becausef(u) ≤f(f(u)))and sof(u) ≤ufrom which followsf(u) =u. Because every fixpoint is inDwe have thatuis the greatest fixpoint off. The functionfis monotone on the dual (complete) lattice⟨Lop,≥⟩{\displaystyle \langle L^{op},\geq \rangle }. As we have just proved, its greatest fixpoint exists. It is the least fixpoint off, soPhas least and greatest elements; more generally, every monotone function on a complete lattice has a least fixpoint and a greatest fixpoint. Fora,binLwe write [a,b] for theclosed intervalwith boundsaandb: {x∈L|a≤x≤b}. Ifa≤b, then⟨[a,b], ≤⟩is a complete lattice. It remains to be proven thatPis a complete lattice. Let1L=⋁L{\displaystyle 1_{L}=\bigvee L},W⊆Pandw=⋁W{\displaystyle w=\bigvee W}. We show thatf([w, 1L]) ⊆ [w, 1L]. Indeed, for everyx∈Wwe havex=f(x) and sincewis the least upper bound ofW,x≤f(w). In particularw≤f(w). Then fromy∈ [w, 1L]follows thatw≤f(w) ≤f(y), givingf(y) ∈ [w, 1L]or simplyf([w, 1L]) ⊆ [w, 1L]. This allows us to look atfas a function on the complete lattice [w, 1L]. Then it has a least fixpoint there, giving us the least upper bound ofW. We've shown that an arbitrary subset ofPhas a supremum, that is,Pis a complete lattice. Chang, Lyuu and Ti[7]present an algorithm for finding a Tarski fixed-point in atotally-orderedlattice, when the order-preserving function is given by avalue oracle. Their algorithm requiresO(log⁡L){\displaystyle O(\log L)}queries, whereLis the number of elements in the lattice. In contrast, for a general lattice (given as an oracle), they prove a lower bound ofΩ(L){\displaystyle \Omega (L)}queries. Deng, Qi and Ye[8]present several algorithms for finding a Tarski fixed-point. They consider two kinds of lattices: componentwise ordering andlexicographic ordering. They consider two kinds of input for the functionf:value oracle, or a polynomial function. Their algorithms have the following runtime complexity (wheredis the number of dimensions, andNiis the number of elements in dimensioni): The algorithms are based onbinary search. On the other hand, determining whether a given fixed point isuniqueis computationally hard: Ford=2, for a componentwise lattice and a value oracle, the complexity ofO(log2⁡L){\displaystyle O(\log ^{2}L)}is optimal.[9]But ford>2, there are faster algorithms: Tarski's fixed-point theorem has applications tosupermodular games.[8]Asupermodular game(also called agame of strategic complements[12]) is agamein which theutility functionof each player hasincreasing differences, so thebest responseof a player is a weakly-increasing function of other players' strategies. For example, consider a game of competition between two firms. Each firm has to decide how much money to spend on research. In general, if one firm spends more on research, the other firm's best response is to spend more on research too. 
Some common games can be modeled as supermodular games, for exampleCournot competition,Bertrand competitionandInvestment Games. Because the best-response functions are monotone, Tarski's fixed-point theorem can be used to prove the existence of apure-strategyNash equilibrium(PNE) in a supermodular game. Moreover, Topkis[13]showed that the set of PNE of a supermodular game is a complete lattice, so the game has a "smallest" PNE and a "largest" PNE. Echenique[14]presents an algorithm for finding all PNE in a supermodular game. His algorithm first uses best-response sequences to find the smallest and largest PNE; then, he removes some strategies and repeats, until all PNE are found. His algorithm is exponential in the worst case, but runs fast in practice. Deng, Qi and Ye[8]show that a PNE can be computed efficiently by finding a Tarski fixed-point of an order-preserving mapping associated with the game.
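As an illustration of this connection (a sketch only, not the algorithm of Echenique or of Deng, Qi and Ye), the following Haskell code iterates the joint best-response map of a two-player supermodular game upward from the least strategy profile; when the best responses are monotone, this Kleene-style iteration stops at the smallest fixed point, which is the smallest pure-strategy Nash equilibrium. The concrete best-response function used in main is a hypothetical example:

-- A sketch under illustrative assumptions: strategies are 0..n-1 for each player,
-- br1 and br2 are monotone best-response functions, and the smallest pure Nash
-- equilibrium is the least fixed point of the joint best-response map, reached by
-- iterating upward from the bottom profile (0, 0).
smallestPNE :: (Int -> Int) -> (Int -> Int) -> (Int, Int)
smallestPNE br1 br2 = go (0, 0)
  where
    go (x, y)
      | (x', y') == (x, y) = (x, y)        -- a fixed point: mutual best responses
      | otherwise          = go (x', y')
      where
        x' = br1 y                         -- player 1's best response to y
        y' = br2 x                         -- player 2's best response to x

-- Hypothetical example: each firm's best response to rival spending s is
-- min 9 (s `div` 2 + 3) on the strategy grid 0..9; this response is monotone.
main :: IO ()
main = print (smallestPNE respond respond)
  where
    respond s = min 9 (s `div` 2 + 3)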
https://en.wikipedia.org/wiki/Knaster%E2%80%93Tarski_theorem
Inmathematics,generalised means(orpower meanorHölder meanfromOtto Hölder)[1]are a family of functions for aggregating sets of numbers. These include as special cases thePythagorean means(arithmetic,geometric, andharmonicmeans). Ifpis a non-zeroreal number, andx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}are positive real numbers, then thegeneralized meanorpower meanwith exponentpof these positive real numbers is[2][3] Mp(x1,…,xn)=(1n∑i=1nxip)1/p.{\displaystyle M_{p}(x_{1},\dots ,x_{n})=\left({\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{p}\right)^{{1}/{p}}.} (Seep-norm). Forp= 0we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below): M0(x1,…,xn)=(∏i=1nxi)1/n.{\displaystyle M_{0}(x_{1},\dots ,x_{n})=\left(\prod _{i=1}^{n}x_{i}\right)^{1/n}.} Furthermore, for asequenceof positive weightswiwe define theweighted power meanas[2]Mp(x1,…,xn)=(∑i=1nwixip∑i=1nwi)1/p{\displaystyle M_{p}(x_{1},\dots ,x_{n})=\left({\frac {\sum _{i=1}^{n}w_{i}x_{i}^{p}}{\sum _{i=1}^{n}w_{i}}}\right)^{{1}/{p}}}and whenp= 0, it is equal to theweighted geometric mean: M0(x1,…,xn)=(∏i=1nxiwi)1/∑i=1nwi.{\displaystyle M_{0}(x_{1},\dots ,x_{n})=\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)^{1/\sum _{i=1}^{n}w_{i}}.} The unweighted means correspond to setting allwi= 1. A few particular values ofpyield special cases with their own names:[4] For the purpose of the proof, we will assume without loss of generality thatwi∈[0,1]{\displaystyle w_{i}\in [0,1]}and∑i=1nwi=1.{\displaystyle \sum _{i=1}^{n}w_{i}=1.} We can rewrite the definition ofMp{\displaystyle M_{p}}using the exponential function as Mp(x1,…,xn)=exp⁡(ln⁡[(∑i=1nwixip)1/p])=exp⁡(ln⁡(∑i=1nwixip)p){\displaystyle M_{p}(x_{1},\dots ,x_{n})=\exp {\left(\ln {\left[\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\right]}\right)}=\exp {\left({\frac {\ln {\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)}}{p}}\right)}} In the limitp→ 0, we can applyL'Hôpital's ruleto the argument of the exponential function. We assume thatp∈R{\displaystyle p\in \mathbb {R} }butp≠ 0, and that the sum ofwiis equal to 1 (without loss in generality);[7]Differentiating the numerator and denominator with respect top, we havelimp→0ln⁡(∑i=1nwixip)p=limp→0∑i=1nwixipln⁡xi∑j=1nwjxjp1=limp→0∑i=1nwixipln⁡xi∑j=1nwjxjp=∑i=1nwiln⁡xi∑j=1nwj=∑i=1nwiln⁡xi=ln⁡(∏i=1nxiwi){\displaystyle {\begin{aligned}\lim _{p\to 0}{\frac {\ln {\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)}}{p}}&=\lim _{p\to 0}{\frac {\frac {\sum _{i=1}^{n}w_{i}x_{i}^{p}\ln {x_{i}}}{\sum _{j=1}^{n}w_{j}x_{j}^{p}}}{1}}\\&=\lim _{p\to 0}{\frac {\sum _{i=1}^{n}w_{i}x_{i}^{p}\ln {x_{i}}}{\sum _{j=1}^{n}w_{j}x_{j}^{p}}}\\&={\frac {\sum _{i=1}^{n}w_{i}\ln {x_{i}}}{\sum _{j=1}^{n}w_{j}}}\\&=\sum _{i=1}^{n}w_{i}\ln {x_{i}}\\&=\ln {\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)}\end{aligned}}} By the continuity of the exponential function, we can substitute back into the above relation to obtainlimp→0Mp(x1,…,xn)=exp⁡(ln⁡(∏i=1nxiwi))=∏i=1nxiwi=M0(x1,…,xn){\displaystyle \lim _{p\to 0}M_{p}(x_{1},\dots ,x_{n})=\exp {\left(\ln {\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)}\right)}=\prod _{i=1}^{n}x_{i}^{w_{i}}=M_{0}(x_{1},\dots ,x_{n})}as desired.[2] Assume (possibly after relabeling and combining terms together) thatx1≥⋯≥xn{\displaystyle x_{1}\geq \dots \geq x_{n}}. 
Then limp→∞Mp(x1,…,xn)=limp→∞(∑i=1nwixip)1/p=x1limp→∞(∑i=1nwi(xix1)p)1/p=x1=M∞(x1,…,xn).{\displaystyle {\begin{aligned}\lim _{p\to \infty }M_{p}(x_{1},\dots ,x_{n})&=\lim _{p\to \infty }\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\\&=x_{1}\lim _{p\to \infty }\left(\sum _{i=1}^{n}w_{i}\left({\frac {x_{i}}{x_{1}}}\right)^{p}\right)^{1/p}\\&=x_{1}=M_{\infty }(x_{1},\dots ,x_{n}).\end{aligned}}} The formula forM−∞{\displaystyle M_{-\infty }}follows fromM−∞(x1,…,xn)=1M∞(1/x1,…,1/xn)=xn.{\displaystyle M_{-\infty }(x_{1},\dots ,x_{n})={\frac {1}{M_{\infty }(1/x_{1},\dots ,1/x_{n})}}=x_{n}.} Letx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}be a sequence of positive real numbers; then the following properties hold:[1] In general, ifp<q, thenMp(x1,…,xn)≤Mq(x1,…,xn){\displaystyle M_{p}(x_{1},\dots ,x_{n})\leq M_{q}(x_{1},\dots ,x_{n})}and the two means are equal if and only ifx1=x2= ... =xn. The inequality is true for real values ofpandq, as well as positive and negative infinity values. It follows from the fact that, for all realp,∂∂pMp(x1,…,xn)≥0{\displaystyle {\frac {\partial }{\partial p}}M_{p}(x_{1},\dots ,x_{n})\geq 0}which can be proved usingJensen's inequality. In particular, forpin{−1, 0, 1}, the generalized mean inequality implies thePythagorean meansinequality as well as theinequality of arithmetic and geometric means. We will prove the weighted power mean inequality. For the purpose of the proof we will assume the following without loss of generality:wi∈[0,1]∑i=1nwi=1{\displaystyle {\begin{aligned}w_{i}\in [0,1]\\\sum _{i=1}^{n}w_{i}=1\end{aligned}}} The proof for unweighted power means can be easily obtained by substitutingwi= 1/n. Suppose an inequality between power means with exponentspandqholds:(∑i=1nwixip)1/p≥(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\geq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}}applying this to the reciprocals1/x1,…,1/xn{\displaystyle 1/x_{1},\dots ,1/x_{n}}then gives:(∑i=1nwi/xip)1/p≥(∑i=1nwi/xiq)1/q{\displaystyle \left(\sum _{i=1}^{n}{\frac {w_{i}}{x_{i}^{p}}}\right)^{1/p}\geq \left(\sum _{i=1}^{n}{\frac {w_{i}}{x_{i}^{q}}}\right)^{1/q}} We raise both sides to the power of −1 (strictly decreasing function in positive reals):(∑i=1nwixi−p)−1/p=(1∑i=1nwi1xip)1/p≤(1∑i=1nwi1xiq)1/q=(∑i=1nwixi−q)−1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{-p}\right)^{-1/p}=\left({\frac {1}{\sum _{i=1}^{n}w_{i}{\frac {1}{x_{i}^{p}}}}}\right)^{1/p}\leq \left({\frac {1}{\sum _{i=1}^{n}w_{i}{\frac {1}{x_{i}^{q}}}}}\right)^{1/q}=\left(\sum _{i=1}^{n}w_{i}x_{i}^{-q}\right)^{-1/q}} We get the inequality for means with exponents−pand−q, and we can use the same reasoning backwards, thus proving the inequalities to be equivalent, which will be used in some of the later proofs. 
For anyq> 0and non-negative weights summing to 1, the following inequality holds:(∑i=1nwixi−q)−1/q≤∏i=1nxiwi≤(∑i=1nwixiq)1/q.{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{-q}\right)^{-1/q}\leq \prod _{i=1}^{n}x_{i}^{w_{i}}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}.} The proof follows fromJensen's inequality, making use of the fact thelogarithmis concave:log⁡∏i=1nxiwi=∑i=1nwilog⁡xi≤log⁡∑i=1nwixi.{\displaystyle \log \prod _{i=1}^{n}x_{i}^{w_{i}}=\sum _{i=1}^{n}w_{i}\log x_{i}\leq \log \sum _{i=1}^{n}w_{i}x_{i}.} By applying theexponential functionto both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get∏i=1nxiwi≤∑i=1nwixi.{\displaystyle \prod _{i=1}^{n}x_{i}^{w_{i}}\leq \sum _{i=1}^{n}w_{i}x_{i}.} Takingq-th powers of thexiyields∏i=1nxiq⋅wi≤∑i=1nwixiq∏i=1nxiwi≤(∑i=1nwixiq)1/q.{\displaystyle {\begin{aligned}&\prod _{i=1}^{n}x_{i}^{q{\cdot }w_{i}}\leq \sum _{i=1}^{n}w_{i}x_{i}^{q}\\&\prod _{i=1}^{n}x_{i}^{w_{i}}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}.\end{aligned}}} Thus, we are done for the inequality with positiveq; the case for negatives is identical but for the swapped signs in the last step: ∏i=1nxi−q⋅wi≤∑i=1nwixi−q.{\displaystyle \prod _{i=1}^{n}x_{i}^{-q{\cdot }w_{i}}\leq \sum _{i=1}^{n}w_{i}x_{i}^{-q}.} Of course, taking each side to the power of a negative number-1/qswaps the direction of the inequality. ∏i=1nxiwi≥(∑i=1nwixi−q)−1/q.{\displaystyle \prod _{i=1}^{n}x_{i}^{w_{i}}\geq \left(\sum _{i=1}^{n}w_{i}x_{i}^{-q}\right)^{-1/q}.} We are to prove that for anyp<qthe following inequality holds:(∑i=1nwixip)1/p≤(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}}ifpis negative, andqis positive, the inequality is equivalent to the one proved above:(∑i=1nwixip)1/p≤∏i=1nxiwi≤(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\leq \prod _{i=1}^{n}x_{i}^{w_{i}}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}} The proof for positivepandqis as follows: Define the following function:f:R+→R+f(x)=xqp{\displaystyle f(x)=x^{\frac {q}{p}}}.fis a power function, so it does have a second derivative:f″(x)=(qp)(qp−1)xqp−2{\displaystyle f''(x)=\left({\frac {q}{p}}\right)\left({\frac {q}{p}}-1\right)x^{{\frac {q}{p}}-2}}which is strictly positive within the domain off, sinceq>p, so we knowfis convex. Using this, and the Jensen's inequality we get:f(∑i=1nwixip)≤∑i=1nwif(xip)(∑i=1nwixip)q/p≤∑i=1nwixiq{\displaystyle {\begin{aligned}f\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)&\leq \sum _{i=1}^{n}w_{i}f(x_{i}^{p})\\[3pt]\left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{q/p}&\leq \sum _{i=1}^{n}w_{i}x_{i}^{q}\end{aligned}}}after raising both side to the power of1/q(an increasing function, since1/qis positive) we get the inequality which was to be proven: (∑i=1nwixip)1/p≤(∑i=1nwixiq)1/q{\displaystyle \left(\sum _{i=1}^{n}w_{i}x_{i}^{p}\right)^{1/p}\leq \left(\sum _{i=1}^{n}w_{i}x_{i}^{q}\right)^{1/q}} Using the previously shown equivalence we can prove the inequality for negativepandqby replacing them with−qand−p, respectively. The power mean could be generalized further to thegeneralizedf-mean: Mf(x1,…,xn)=f−1(1n⋅∑i=1nf(xi)){\displaystyle M_{f}(x_{1},\dots ,x_{n})=f^{-1}\left({{\frac {1}{n}}\cdot \sum _{i=1}^{n}{f(x_{i})}}\right)} This covers the geometric mean without using a limit withf(x) = log(x). The power mean is obtained forf(x) =xp. 
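The inequality can also be spot-checked numerically. The following Haskell sketch (illustrative names only) implements the weighted power mean, treating p = 0 as the weighted geometric mean, and confirms on one sample that the mean is non-decreasing in p, in line with the inequality proved above:

-- A sketch implementation of the weighted power mean M_p (illustrative only).
-- Assumes positive xs and non-negative ws of the same length with positive sum.
powerMean :: Double -> [Double] -> [Double] -> Double
powerMean p ws xs
  | p == 0    = exp (dot ws (map log xs) / sum ws)          -- weighted geometric mean
  | otherwise = (dot ws (map (** p) xs) / sum ws) ** (1 / p)
  where
    dot a b = sum (zipWith (*) a b)

-- Spot-check the generalized mean inequality M_p <= M_q for p < q on one sample.
main :: IO ()
main = do
  let ws = [0.2, 0.3, 0.5]
      xs = [1, 4, 9]
      ps = [-2, -1, 0, 0.5, 1, 2, 3]
      ms = [powerMean p ws xs | p <- ps]
  mapM_ print (zip ps ms)
  putStrLn ("non-decreasing in p: " ++ show (and (zipWith (<=) ms (tail ms))))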
Properties of these means are studied in de Carvalho (2016).[3] A power mean serves as a non-linearmoving averagewhich is shifted towards small signal values for smallpand emphasizes big signal values for bigp. Given an efficient implementation of amoving arithmetic meancalledsmooth, one can implement a moving power mean according to the followingHaskellcode.
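As a minimal sketch (assuming a moving arithmetic mean smooth :: [a] -> [a], and not necessarily the original snippet), the moving power mean raises the samples to the p-th power, applies smooth, and takes the 1/p-th power; the cases p = 0 and p = ±∞ would need separate handling:

-- A minimal sketch (not necessarily the original snippet): lift a moving arithmetic
-- mean `smooth` to a moving power mean for a nonzero exponent p.
powerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a]
powerSmooth smooth p = map (** recip p) . smooth . map (** p)

-- Example with a hypothetical 3-point moving arithmetic mean.
movingMean3 :: Fractional a => [a] -> [a]
movingMean3 xs = zipWith3 (\a b c -> (a + b + c) / 3) xs (drop 1 xs) (drop 2 xs)

main :: IO ()
main = print (powerSmooth movingMean3 2 [1, 2, 3, 4, 5 :: Double])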
https://en.wikipedia.org/wiki/Generalized_mean
Inmathematics, theLpspacesarefunction spacesdefined using a natural generalization of thep-normfor finite-dimensionalvector spaces. They are sometimes calledLebesgue spaces, named afterHenri Lebesgue(Dunford & Schwartz 1958, III.3), although according to theBourbakigroup (Bourbaki 1987) they were first introduced byFrigyes Riesz(Riesz 1910). Lpspaces form an important class ofBanach spacesinfunctional analysis, and oftopological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines. The Euclidean length of a vectorx=(x1,x2,…,xn){\displaystyle x=(x_{1},x_{2},\dots ,x_{n})}in then{\displaystyle n}-dimensionalrealvector spaceRn{\displaystyle \mathbb {R} ^{n}}is given by theEuclidean norm:‖x‖2=(x12+x22+⋯+xn2)1/2.{\displaystyle \|x\|_{2}=\left({x_{1}}^{2}+{x_{2}}^{2}+\dotsb +{x_{n}}^{2}\right)^{1/2}.} The Euclidean distance between two pointsx{\displaystyle x}andy{\displaystyle y}is the length‖x−y‖2{\displaystyle \|x-y\|_{2}}of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of therectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class ofp{\displaystyle p}-norms generalizes these two examples and has an abundance of applications in many parts ofmathematics,physics, andcomputer science. For areal numberp≥1,{\displaystyle p\geq 1,}thep{\displaystyle p}-normorLp{\displaystyle L^{p}}-normofx{\displaystyle x}is defined by‖x‖p=(|x1|p+|x2|p+⋯+|xn|p)1/p.{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\dotsb +|x_{n}|^{p}\right)^{1/p}.}The absolute value bars can be dropped whenp{\displaystyle p}is a rational number with an even numerator in its reduced form, andx{\displaystyle x}is drawn from the set of real numbers, or one of its subsets. The Euclidean norm from above falls into this class and is the2{\displaystyle 2}-norm, and the1{\displaystyle 1}-norm is the norm that corresponds to therectilinear distance. TheL∞{\displaystyle L^{\infty }}-normormaximum norm(or uniform norm) is the limit of theLp{\displaystyle L^{p}}-norms forp→∞{\displaystyle p\to \infty }, given by:‖x‖∞=max{|x1|,|x2|,…,|xn|}{\displaystyle \|x\|_{\infty }=\max \left\{|x_{1}|,|x_{2}|,\dotsc ,|x_{n}|\right\}} For allp≥1,{\displaystyle p\geq 1,}thep{\displaystyle p}-norms and maximum norm satisfy the properties of a "length function" (ornorm), that is: Abstractly speaking, this means thatRn{\displaystyle \mathbb {R} ^{n}}together with thep{\displaystyle p}-norm is anormed vector space. Moreover, it turns out that this space iscomplete, thus making it aBanach space. The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). 
Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm:‖x‖2≤‖x‖1.{\displaystyle \|x\|_{2}\leq \|x\|_{1}.} This fact generalizes top{\displaystyle p}-norms in that thep{\displaystyle p}-norm‖x‖p{\displaystyle \|x\|_{p}}of any given vectorx{\displaystyle x}does not grow withp{\displaystyle p}: For the opposite direction, the following relation between the1{\displaystyle 1}-norm and the2{\displaystyle 2}-norm is known:‖x‖1≤n‖x‖2.{\displaystyle \|x\|_{1}\leq {\sqrt {n}}\|x\|_{2}~.} This inequality depends on the dimensionn{\displaystyle n}of the underlying vector space and follows directly from theCauchy–Schwarz inequality. In general, for vectors inCn{\displaystyle \mathbb {C} ^{n}}where0<r<p:{\displaystyle 0<r<p:}‖x‖p≤‖x‖r≤n1r−1p‖x‖p.{\displaystyle \|x\|_{p}\leq \|x\|_{r}\leq n^{{\frac {1}{r}}-{\frac {1}{p}}}\|x\|_{p}~.} This is a consequence ofHölder's inequality. InRn{\displaystyle \mathbb {R} ^{n}}forn>1,{\displaystyle n>1,}the formula‖x‖p=(|x1|p+|x2|p+⋯+|xn|p)1/p{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\cdots +|x_{n}|^{p}\right)^{1/p}}defines an absolutelyhomogeneous functionfor0<p<1;{\displaystyle 0<p<1;}however, the resulting function does not define a norm, because it is notsubadditive. On the other hand, the formula|x1|p+|x2|p+⋯+|xn|p{\displaystyle |x_{1}|^{p}+|x_{2}|^{p}+\dotsb +|x_{n}|^{p}}defines a subadditive function at the cost of losing absolute homogeneity. It does define anF-norm, though, which is homogeneous of degreep.{\displaystyle p.} Hence, the functiondp(x,y)=∑i=1n|xi−yi|p{\displaystyle d_{p}(x,y)=\sum _{i=1}^{n}|x_{i}-y_{i}|^{p}}defines ametric. Themetric space(Rn,dp){\displaystyle (\mathbb {R} ^{n},d_{p})}is denoted byℓnp.{\displaystyle \ell _{n}^{p}.} Although thep{\displaystyle p}-unit ballBnp{\displaystyle B_{n}^{p}}around the origin in this metric is "concave", the topology defined onRn{\displaystyle \mathbb {R} ^{n}}by the metricBp{\displaystyle B_{p}}is the usual vector space topology ofRn,{\displaystyle \mathbb {R} ^{n},}henceℓnp{\displaystyle \ell _{n}^{p}}is alocally convextopological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity ofℓnp{\displaystyle \ell _{n}^{p}}is to denote byCp(n){\displaystyle C_{p}(n)}the smallest constantC{\displaystyle C}such that the scalar multipleCBnp{\displaystyle C\,B_{n}^{p}}of thep{\displaystyle p}-unit ball contains the convex hull ofBnp,{\displaystyle B_{n}^{p},}which is equal toBn1.{\displaystyle B_{n}^{1}.}The fact that for fixedp<1{\displaystyle p<1}we haveCp(n)=n1p−1→∞,asn→∞{\displaystyle C_{p}(n)=n^{{\tfrac {1}{p}}-1}\to \infty ,\quad {\text{as }}n\to \infty }shows that the infinite-dimensional sequence spaceℓp{\displaystyle \ell ^{p}}defined below, is no longer locally convex.[citation needed] There is oneℓ0{\displaystyle \ell _{0}}norm and another function called theℓ0{\displaystyle \ell _{0}}"norm" (with quotation marks). The mathematical definition of theℓ0{\displaystyle \ell _{0}}norm was established byBanach'sTheory of Linear Operations. Thespaceof sequences has a complete metric topology provided by theF-normon theproduct metric:[citation needed](xn)↦‖x‖:=d(0,x)=∑n2−n|xn|1+|xn|.{\displaystyle (x_{n})\mapsto \|x\|:=d(0,x)=\sum _{n}2^{-n}{\frac {|x_{n}|}{1+|x_{n}|}}.}Theℓ0{\displaystyle \ell _{0}}-normed space is studied in functional analysis, probability theory, and harmonic analysis. 
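As a concrete aside (not part of the original text), here is a minimal Haskell sketch of the finite-dimensional p-norm and maximum norm, together with a numerical check of the chain of inequalities above; the names pNorm, infNorm and normOrderHolds are ours, and pNorm is reused in later sketches.

-- Illustrative sketch: the p-norm and the maximum norm of a finite real vector.
pNorm :: Double -> [Double] -> Double
pNorm p xs = sum [ abs x ** p | x <- xs ] ** recip p

infNorm :: [Double] -> Double
infNorm = maximum . map abs
-- pNorm 2 [3, -4] == 5.0, pNorm 1 [3, -4] == 7.0, infNorm [3, -4] == 4.0

-- Check of ||x||_p <= ||x||_r <= n^(1/r - 1/p) ||x||_p for 0 < r < p.
normOrderHolds :: Double -> Double -> [Double] -> Bool
normOrderHolds r p xs =
  let n = fromIntegral (length xs)
  in pNorm p xs <= pNorm r xs
     && pNorm r xs <= n ** (recip r - recip p) * pNorm p xs
-- normOrderHolds 1 2 [3, -4, 12]  ==>  True  (here ||x||_2 = 13 and ||x||_1 = 19)

Besides the ℓ0 F-norm just mentioned, there is a second, informal use of the symbol ℓ0, described next.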
Another function was called theℓ0{\displaystyle \ell _{0}}"norm" byDavid Donoho—whose quotation marks warn that this function is not a proper norm—is the number of non-zero entries of the vectorx.{\displaystyle x.}[citation needed]Many authorsabuse terminologyby omitting the quotation marks. Defining00=0,{\displaystyle 0^{0}=0,}the zero "norm" ofx{\displaystyle x}is equal to|x1|0+|x2|0+⋯+|xn|0.{\displaystyle |x_{1}|^{0}+|x_{2}|^{0}+\cdots +|x_{n}|^{0}.} This is not anormbecause it is nothomogeneous. For example, scaling the vectorx{\displaystyle x}by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses inscientific computing,information theory, andstatistics–notably incompressed sensinginsignal processingand computationalharmonic analysis. Despite not being a norm, the associated metric, known asHamming distance, is a valid distance, since homogeneity is not required for distances. Thep{\displaystyle p}-norm can be extended to vectors that have an infinite number of components (sequences), which yields the spaceℓp.{\displaystyle \ell ^{p}.}This contains as special cases: The space of sequences has a natural vector space structure by applying scalar addition and multiplication. Explicitly, the vector sum and the scalar action for infinitesequencesof real (orcomplex) numbers are given by:(x1,x2,…,xn,xn+1,…)+(y1,y2,…,yn,yn+1,…)=(x1+y1,x2+y2,…,xn+yn,xn+1+yn+1,…),λ⋅(x1,x2,…,xn,xn+1,…)=(λx1,λx2,…,λxn,λxn+1,…).{\displaystyle {\begin{aligned}&(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots )+(y_{1},y_{2},\ldots ,y_{n},y_{n+1},\ldots )\\={}&(x_{1}+y_{1},x_{2}+y_{2},\ldots ,x_{n}+y_{n},x_{n+1}+y_{n+1},\ldots ),\\[6pt]&\lambda \cdot \left(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots \right)\\={}&(\lambda x_{1},\lambda x_{2},\ldots ,\lambda x_{n},\lambda x_{n+1},\ldots ).\end{aligned}}} Define thep{\displaystyle p}-norm:‖x‖p=(|x1|p+|x2|p+⋯+|xn|p+|xn+1|p+⋯)1/p{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\cdots +|x_{n}|^{p}+|x_{n+1}|^{p}+\cdots \right)^{1/p}} Here, a complication arises, namely that theserieson the right is not always convergent, so for example, the sequence made up of only ones,(1,1,1,…),{\displaystyle (1,1,1,\ldots ),}will have an infinitep{\displaystyle p}-norm for1≤p<∞.{\displaystyle 1\leq p<\infty .}The spaceℓp{\displaystyle \ell ^{p}}is then defined as the set of all infinite sequences of real (or complex) numbers such that thep{\displaystyle p}-norm is finite. One can check that asp{\displaystyle p}increases, the setℓp{\displaystyle \ell ^{p}}grows larger. For example, the sequence(1,12,…,1n,1n+1,…){\displaystyle \left(1,{\frac {1}{2}},\ldots ,{\frac {1}{n}},{\frac {1}{n+1}},\ldots \right)}is not inℓ1,{\displaystyle \ell ^{1},}but it is inℓp{\displaystyle \ell ^{p}}forp>1,{\displaystyle p>1,}as the series1p+12p+⋯+1np+1(n+1)p+⋯,{\displaystyle 1^{p}+{\frac {1}{2^{p}}}+\cdots +{\frac {1}{n^{p}}}+{\frac {1}{(n+1)^{p}}}+\cdots ,}diverges forp=1{\displaystyle p=1}(theharmonic series), but is convergent forp>1.{\displaystyle p>1.} One also defines the∞{\displaystyle \infty }-norm using thesupremum:‖x‖∞=sup(|x1|,|x2|,…,|xn|,|xn+1|,…){\displaystyle \|x\|_{\infty }=\sup(|x_{1}|,|x_{2}|,\dotsc ,|x_{n}|,|x_{n+1}|,\ldots )}and the corresponding spaceℓ∞{\displaystyle \ell ^{\infty }}of all bounded sequences. It turns out that[1]‖x‖∞=limp→∞‖x‖p{\displaystyle \|x\|_{\infty }=\lim _{p\to \infty }\|x\|_{p}}if the right-hand side is finite, or the left-hand side is infinite. 
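As a brief aside, the counting "norm" just described and its failure of homogeneity can be made concrete (zeroNorm is an illustrative name):

-- Illustrative sketch: the counting "norm", i.e. the number of non-zero entries.
zeroNorm :: (Eq a, Num a) => [a] -> Int
zeroNorm = length . filter (/= 0)
-- zeroNorm [0, 3, 0, -7] == 2 and zeroNorm (map (2 *) [0, 3, 0, -7]) == 2,
-- so scaling the vector leaves the value unchanged: homogeneity fails.

The restriction on p suggested by the ∞-norm limit above is stated next.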
Thus, we will considerℓp{\displaystyle \ell ^{p}}spaces for1≤p≤∞.{\displaystyle 1\leq p\leq \infty .} Thep{\displaystyle p}-norm thus defined onℓp{\displaystyle \ell ^{p}}is indeed a norm, andℓp{\displaystyle \ell ^{p}}together with this norm is aBanach space. In complete analogy to the preceding definition one can define the spaceℓp(I){\displaystyle \ell ^{p}(I)}over a generalindex setI{\displaystyle I}(and1≤p<∞{\displaystyle 1\leq p<\infty }) asℓp(I)={(xi)i∈I∈KI:∑i∈I|xi|p<+∞},{\displaystyle \ell ^{p}(I)=\left\{(x_{i})_{i\in I}\in \mathbb {K} ^{I}:\sum _{i\in I}|x_{i}|^{p}<+\infty \right\},}where convergence on the right means that only countably many summands are nonzero (see alsoUnconditional convergence). With the norm‖x‖p=(∑i∈I|xi|p)1/p{\displaystyle \|x\|_{p}=\left(\sum _{i\in I}|x_{i}|^{p}\right)^{1/p}}the spaceℓp(I){\displaystyle \ell ^{p}(I)}becomes a Banach space. In the case whereI{\displaystyle I}is finite withn{\displaystyle n}elements, this construction yieldsRn{\displaystyle \mathbb {R} ^{n}}with thep{\displaystyle p}-norm defined above. IfI{\displaystyle I}is countably infinite, this is exactly the sequence spaceℓp{\displaystyle \ell ^{p}}defined above. For uncountable setsI{\displaystyle I}this is a non-separableBanach space which can be seen as thelocally convexdirect limitofℓp{\displaystyle \ell ^{p}}-sequence spaces.[2] Forp=2,{\displaystyle p=2,}the‖⋅‖2{\displaystyle \|\,\cdot \,\|_{2}}-norm is even induced by a canonicalinner product⟨⋅,⋅⟩,{\displaystyle \langle \,\cdot ,\,\cdot \rangle ,}called theEuclidean inner product, which means that‖x‖2=⟨x,x⟩{\displaystyle \|\mathbf {x} \|_{2}={\sqrt {\langle \mathbf {x} ,\mathbf {x} \rangle }}}holds for all vectorsx.{\displaystyle \mathbf {x} .}This inner product can expressed in terms of the norm by using thepolarization identity. Onℓ2,{\displaystyle \ell ^{2},}it can be defined by⟨(xi)i,(yn)i⟩ℓ2=∑ixiyi¯.{\displaystyle \langle \left(x_{i}\right)_{i},\left(y_{n}\right)_{i}\rangle _{\ell ^{2}}~=~\sum _{i}x_{i}{\overline {y_{i}}}.}Now consider the casep=∞.{\displaystyle p=\infty .}Define[note 1]ℓ∞(I)={x∈KI:suprange⁡|x|<+∞},{\displaystyle \ell ^{\infty }(I)=\{x\in \mathbb {K} ^{I}:\sup \operatorname {range} |x|<+\infty \},}where for allx{\displaystyle x}[3][note 2]‖x‖∞≡inf{C∈R≥0:|xi|≤Cfor alli∈I}={suprange⁡|x|ifX≠∅,0ifX=∅.{\displaystyle \|x\|_{\infty }\equiv \inf\{C\in \mathbb {R} _{\geq 0}:|x_{i}|\leq C{\text{ for all }}i\in I\}={\begin{cases}\sup \operatorname {range} |x|&{\text{if }}X\neq \varnothing ,\\0&{\text{if }}X=\varnothing .\end{cases}}} The index setI{\displaystyle I}can be turned into ameasure spaceby giving it thediscrete σ-algebraand thecounting measure. Then the spaceℓp(I){\displaystyle \ell ^{p}(I)}is just a special case of the more generalLp{\displaystyle L^{p}}-space (defined below). AnLp{\displaystyle L^{p}}space may be defined as a space of measurable functions for which thep{\displaystyle p}-th power of theabsolute valueisLebesgue integrable, where functions which agree almost everywhere are identified. 
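Before moving to general measure spaces, the ℓ2 inner product just described can be sketched on finite truncations of real sequences (dotL2 and norm2 are illustrative names; complex entries would require conjugating the second factor):

-- Illustrative sketch: the l^2 inner product and the identity ||x||_2 = sqrt <x, x>.
dotL2 :: [Double] -> [Double] -> Double
dotL2 xs ys = sum (zipWith (*) xs ys)

norm2 :: [Double] -> Double
norm2 xs = sqrt (dotL2 xs xs)
-- norm2 [3, -4] == 5.0, matching pNorm 2 [3, -4] from the earlier sketch.

The measure-theoretic definition indicated in the last sentence above is developed next.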
More generally, let(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}be ameasure spaceand1≤p≤∞.{\displaystyle 1\leq p\leq \infty .}[note 3]Whenp≠∞{\displaystyle p\neq \infty }, consider the setLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}of allmeasurable functionsf{\displaystyle f}fromS{\displaystyle S}toC{\displaystyle \mathbb {C} }orR{\displaystyle \mathbb {R} }whoseabsolute valueraised to thep{\displaystyle p}-th power has a finite integral, or in symbols:[4]‖f‖p=def(∫S|f|pdμ)1/p<∞.{\displaystyle \|f\|_{p}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(\int _{S}|f|^{p}\;\mathrm {d} \mu \right)^{1/p}<\infty .} To define the set forp=∞,{\displaystyle p=\infty ,}recall that two functionsf{\displaystyle f}andg{\displaystyle g}defined onS{\displaystyle S}are said to beequalalmost everywhere, writtenf=g{\displaystyle f=g}a.e., if the set{s∈S:f(s)≠g(s)}{\displaystyle \{s\in S:f(s)\neq g(s)\}}is measurable and has measure zero. Similarly, a measurable functionf{\displaystyle f}(and itsabsolute value) isbounded(ordominated)almost everywhereby a real numberC,{\displaystyle C,}written|f|≤C{\displaystyle |f|\leq C}a.e., if the (necessarily) measurable set{s∈S:|f(s)|>C}{\displaystyle \{s\in S:|f(s)|>C\}}has measure zero. The spaceL∞(S,μ){\displaystyle {\mathcal {L}}^{\infty }(S,\mu )}is the set of all measurable functionsf{\displaystyle f}that are bounded almost everywhere (by some realC{\displaystyle C}) and‖f‖∞{\displaystyle \|f\|_{\infty }}is defined as theinfimumof these bounds:‖f‖∞=definf{C∈R≥0:|f(s)|≤Cfor almost everys}.{\displaystyle \|f\|_{\infty }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\inf\{C\in \mathbb {R} _{\geq 0}:|f(s)|\leq C{\text{ for almost every }}s\}.}Whenμ(S)≠0{\displaystyle \mu (S)\neq 0}then this is the same as theessential supremumof the absolute value off{\displaystyle f}:[note 4]‖f‖∞={esssup⁡|f|ifμ(S)>0,0ifμ(S)=0.{\displaystyle \|f\|_{\infty }~=~{\begin{cases}\operatorname {esssup} |f|&{\text{if }}\mu (S)>0,\\0&{\text{if }}\mu (S)=0.\end{cases}}} For example, iff{\displaystyle f}is a measurable function that is equal to0{\displaystyle 0}almost everywhere[note 5]then‖f‖p=0{\displaystyle \|f\|_{p}=0}for everyp{\displaystyle p}and thusf∈Lp(S,μ){\displaystyle f\in {\mathcal {L}}^{p}(S,\,\mu )}for allp.{\displaystyle p.} For every positivep,{\displaystyle p,}the value under‖⋅‖p{\displaystyle \|\,\cdot \,\|_{p}}of a measurable functionf{\displaystyle f}and its absolute value|f|:S→[0,∞]{\displaystyle |f|:S\to [0,\infty ]}are always the same (that is,‖f‖p=‖|f|‖p{\displaystyle \|f\|_{p}=\||f|\|_{p}}for allp{\displaystyle p}) and so a measurable function belongs toLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}if and only if its absolute value does. Because of this, many formulas involvingp{\displaystyle p}-norms are stated only for non-negative real-valued functions. Consider for example the identity‖f‖pr=‖fr‖p/r,{\displaystyle \|f\|_{p}^{r}=\|f^{r}\|_{p/r},}which holds wheneverf≥0{\displaystyle f\geq 0}is measurable,r>0{\displaystyle r>0}is real, and0<p≤∞{\displaystyle 0<p\leq \infty }(here∞/r=def∞{\displaystyle \infty /r\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\infty }whenp=∞{\displaystyle p=\infty }). 
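As a rough numerical illustration of this definition (this sketch assumes Lebesgue measure on [0,1], approximated by a midpoint Riemann sum; the name lpNormApprox is ours), one can estimate ||f||_p for a concrete function:

-- Illustrative sketch: an approximate ||f||_p = (integral of |f|^p d mu)^(1/p)
-- for Lebesgue measure on [0,1], via a midpoint Riemann sum.
lpNormApprox :: Double -> (Double -> Double) -> Double
lpNormApprox p f =
  let n  = 100000 :: Int
      h  = 1 / fromIntegral n
      xs = [ (fromIntegral i + 0.5) * h | i <- [0 .. n - 1] ]
  in (h * sum [ abs (f x) ** p | x <- xs ]) ** recip p
-- lpNormApprox 2 id  ~  0.5774, close to the exact value sqrt (1/3).

The identity ||f||_p^r = ||f^r||_{p/r}, stated above for non-negative f, is refined in the next remark.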
The non-negativity requirementf≥0{\displaystyle f\geq 0}can be removed by substituting|f|{\displaystyle |f|}in forf,{\displaystyle f,}which gives‖|f|‖pr=‖|f|r‖p/r.{\displaystyle \|\,|f|\,\|_{p}^{r}=\|\,|f|^{r}\,\|_{p/r}.}Note in particular that whenp=r{\displaystyle p=r}is finite then the formula‖f‖pp=‖|f|p‖1{\displaystyle \|f\|_{p}^{p}=\||f|^{p}\|_{1}}relates thep{\displaystyle p}-norm to the1{\displaystyle 1}-norm. Seminormed space ofp{\displaystyle p}-th power integrable functions Each set of functionsLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}forms avector spacewhen addition and scalar multiplication are defined pointwise.[note 6]That the sum of twop{\displaystyle p}-th power integrable functionsf{\displaystyle f}andg{\displaystyle g}is againp{\displaystyle p}-th power integrable follows from‖f+g‖pp≤2p−1(‖f‖pp+‖g‖pp),{\textstyle \|f+g\|_{p}^{p}\leq 2^{p-1}\left(\|f\|_{p}^{p}+\|g\|_{p}^{p}\right),}[proof 1]although it is also a consequence ofMinkowski's inequality‖f+g‖p≤‖f‖p+‖g‖p{\displaystyle \|f+g\|_{p}\leq \|f\|_{p}+\|g\|_{p}}which establishes that‖⋅‖p{\displaystyle \|\cdot \|_{p}}satisfies thetriangle inequalityfor1≤p≤∞{\displaystyle 1\leq p\leq \infty }(the triangle inequality does not hold for0<p<1{\displaystyle 0<p<1}). ThatLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}is closed under scalar multiplication is due to‖⋅‖p{\displaystyle \|\cdot \|_{p}}beingabsolutely homogeneous, which means that‖sf‖p=|s|‖f‖p{\displaystyle \|sf\|_{p}=|s|\|f\|_{p}}for every scalars{\displaystyle s}and every functionf.{\displaystyle f.} Absolute homogeneity, thetriangle inequality, and non-negativity are the defining properties of aseminorm. Thus‖⋅‖p{\displaystyle \|\cdot \|_{p}}is a seminorm and the setLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}ofp{\displaystyle p}-th power integrable functions together with the function‖⋅‖p{\displaystyle \|\cdot \|_{p}}defines aseminormed vector space. In general, theseminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}is not anormbecause there might exist measurable functionsf{\displaystyle f}that satisfy‖f‖p=0{\displaystyle \|f\|_{p}=0}but are notidenticallyequal to0{\displaystyle 0}[note 5](‖⋅‖p{\displaystyle \|\cdot \|_{p}}is a norm if and only if no suchf{\displaystyle f}exists). Zero sets ofp{\displaystyle p}-seminorms Iff{\displaystyle f}is measurable and equals0{\displaystyle 0}a.e. then‖f‖p=0{\displaystyle \|f\|_{p}=0}for all positivep≤∞.{\displaystyle p\leq \infty .}On the other hand, iff{\displaystyle f}is a measurable function for which there exists some0<p≤∞{\displaystyle 0<p\leq \infty }such that‖f‖p=0{\displaystyle \|f\|_{p}=0}thenf=0{\displaystyle f=0}almost everywhere. Whenp{\displaystyle p}is finite then this follows from thep=1{\displaystyle p=1}case and the formula‖f‖pp=‖|f|p‖1{\displaystyle \|f\|_{p}^{p}=\||f|^{p}\|_{1}}mentioned above. Thus ifp≤∞{\displaystyle p\leq \infty }is positive andf{\displaystyle f}is any measurable function, then‖f‖p=0{\displaystyle \|f\|_{p}=0}if and only iff=0{\displaystyle f=0}almost everywhere. Since the right hand side (f=0{\displaystyle f=0}a.e.) does not mentionp,{\displaystyle p,}it follows that all‖⋅‖p{\displaystyle \|\cdot \|_{p}}have the samezero set(it does not depend onp{\displaystyle p}). 
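Minkowski's inequality quoted above can likewise be checked on finite vectors (counting measure), reusing pNorm from the earlier sketch:

-- Illustrative check of Minkowski's inequality ||f + g||_p <= ||f||_p + ||g||_p, p >= 1.
minkowskiHolds :: Double -> [Double] -> [Double] -> Bool
minkowskiHolds p fs gs =
  pNorm p (zipWith (+) fs gs) <= pNorm p fs + pNorm p gs
-- minkowskiHolds 3 [1, 2, 3] [-1, 0, 4]  ==>  True

The common zero set of the seminorms described above is given a name next.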
So denote this common set byN=def{f:f=0μ-almost everywhere}={f∈Lp(S,μ):‖f‖p=0}∀p.{\displaystyle {\mathcal {N}}\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\{f:f=0\ \mu {\text{-almost everywhere}}\}=\{f\in {\mathcal {L}}^{p}(S,\,\mu ):\|f\|_{p}=0\}\qquad \forall \ p.}This set is a vector subspace ofLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}for every positivep≤∞.{\displaystyle p\leq \infty .} Quotient vector space Like everyseminorm, the seminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}induces anorm(defined shortly) on the canonicalquotient vector spaceofLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}by its vector subspaceN={f∈Lp(S,μ):‖f‖p=0}.{\textstyle {\mathcal {N}}=\{f\in {\mathcal {L}}^{p}(S,\,\mu ):\|f\|_{p}=0\}.}This normed quotient space is calledLebesgue spaceand it is the subject of this article. We begin by defining the quotient vector space. Given anyf∈Lp(S,μ),{\displaystyle f\in {\mathcal {L}}^{p}(S,\,\mu ),}thecosetf+N=def{f+h:h∈N}{\displaystyle f+{\mathcal {N}}\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\{f+h:h\in {\mathcal {N}}\}}consists of all measurable functionsg{\displaystyle g}that are equal tof{\displaystyle f}almost everywhere. The set of all cosets, typically denoted byLp(S,μ)/N=def{f+N:f∈Lp(S,μ)},{\displaystyle {\mathcal {L}}^{p}(S,\mu )/{\mathcal {N}}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~~\{f+{\mathcal {N}}:f\in {\mathcal {L}}^{p}(S,\mu )\},}forms a vector space with origin0+N=N{\displaystyle 0+{\mathcal {N}}={\mathcal {N}}}when vector addition and scalar multiplication are defined by(f+N)+(g+N)=def(f+g)+N{\displaystyle (f+{\mathcal {N}})+(g+{\mathcal {N}})\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;(f+g)+{\mathcal {N}}}ands(f+N)=def(sf)+N.{\displaystyle s(f+{\mathcal {N}})\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;(sf)+{\mathcal {N}}.}This particular quotient vector space will be denoted byLp(S,μ)=defLp(S,μ)/N.{\displaystyle L^{p}(S,\,\mu )~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\mathcal {L}}^{p}(S,\mu )/{\mathcal {N}}.}Two cosets are equalf+N=g+N{\displaystyle f+{\mathcal {N}}=g+{\mathcal {N}}}if and only ifg∈f+N{\displaystyle g\in f+{\mathcal {N}}}(or equivalently,f−g∈N{\displaystyle f-g\in {\mathcal {N}}}), which happens if and only iff=g{\displaystyle f=g}almost everywhere; if this is the case thenf{\displaystyle f}andg{\displaystyle g}are identified in the quotient space. Hence, strictly speakingLp(S,μ){\displaystyle L^{p}(S,\,\mu )}consists ofequivalence classesof functions.[5] Thep{\displaystyle p}-norm on the quotient vector space Given anyf∈Lp(S,μ),{\displaystyle f\in {\mathcal {L}}^{p}(S,\,\mu ),}the value of the seminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}on thecosetf+N={f+h:h∈N}{\displaystyle f+{\mathcal {N}}=\{f+h:h\in {\mathcal {N}}\}}is constant and equal to‖f‖p;{\displaystyle \|f\|_{p};}denote this unique value by‖f+N‖p,{\displaystyle \|f+{\mathcal {N}}\|_{p},}so that:‖f+N‖p=def‖f‖p.{\displaystyle \|f+{\mathcal {N}}\|_{p}\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\|f\|_{p}.}This assignmentf+N↦‖f+N‖p{\displaystyle f+{\mathcal {N}}\mapsto \|f+{\mathcal {N}}\|_{p}}defines a map, which will also be denoted by‖⋅‖p,{\displaystyle \|\cdot \|_{p},}on thequotient vector spaceLp(S,μ)=defLp(S,μ)/N={f+N:f∈Lp(S,μ)}.{\displaystyle L^{p}(S,\mu )~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~~{\mathcal {L}}^{p}(S,\mu )/{\mathcal {N}}~=~\{f+{\mathcal {N}}:f\in {\mathcal {L}}^{p}(S,\mu )\}.}This map is anormonLp(S,μ){\displaystyle L^{p}(S,\mu )}called thep{\displaystyle p}-norm. 
The value‖f+N‖p{\displaystyle \|f+{\mathcal {N}}\|_{p}}of a cosetf+N{\displaystyle f+{\mathcal {N}}}is independent of the particular functionf{\displaystyle f}that was chosen to represent the coset, meaning that ifC∈Lp(S,μ){\displaystyle {\mathcal {C}}\in L^{p}(S,\mu )}is any coset then‖C‖p=‖f‖p{\displaystyle \|{\mathcal {C}}\|_{p}=\|f\|_{p}}for everyf∈C{\displaystyle f\in {\mathcal {C}}}(sinceC=f+N{\displaystyle {\mathcal {C}}=f+{\mathcal {N}}}for everyf∈C{\displaystyle f\in {\mathcal {C}}}). The LebesgueLp{\displaystyle L^{p}}space Thenormed vector space(Lp(S,μ),‖⋅‖p){\displaystyle \left(L^{p}(S,\mu ),\|\cdot \|_{p}\right)}is calledLp{\displaystyle L^{p}}spaceor theLebesgue spaceofp{\displaystyle p}-th power integrable functions and it is aBanach spacefor every1≤p≤∞{\displaystyle 1\leq p\leq \infty }(meaning that it is acomplete metric space, a result that is sometimes called theRiesz–Fischer theorem). When the underlying measure spaceS{\displaystyle S}is understood thenLp(S,μ){\displaystyle L^{p}(S,\mu )}is often abbreviatedLp(μ),{\displaystyle L^{p}(\mu ),}or even justLp.{\displaystyle L^{p}.}Depending on the author, the subscript notationLp{\displaystyle L_{p}}might denote eitherLp(S,μ){\displaystyle L^{p}(S,\mu )}orL1/p(S,μ).{\displaystyle L^{1/p}(S,\mu ).} If the seminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}onLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}happens to be a norm (which happens if and only ifN={0}{\displaystyle {\mathcal {N}}=\{0\}}) then the normed space(Lp(S,μ),‖⋅‖p){\displaystyle \left({\mathcal {L}}^{p}(S,\,\mu ),\|\cdot \|_{p}\right)}will belinearlyisometrically isomorphicto the normed quotient space(Lp(S,μ),‖⋅‖p){\displaystyle \left(L^{p}(S,\mu ),\|\cdot \|_{p}\right)}via the canonical mapg∈Lp(S,μ)↦{g}{\displaystyle g\in {\mathcal {L}}^{p}(S,\,\mu )\mapsto \{g\}}(sinceg+N={g}{\displaystyle g+{\mathcal {N}}=\{g\}}); in other words, they will be,up toalinear isometry, the same normed space and so they may both be called "Lp{\displaystyle L^{p}}space". The above definitions generalize toBochner spaces. In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset ofN{\displaystyle {\mathcal {N}}}inLp.{\displaystyle L^{p}.}ForL∞,{\displaystyle L^{\infty },}however, there is atheory of liftsenabling such recovery. For1≤p≤∞{\displaystyle 1\leq p\leq \infty }theℓp{\displaystyle \ell ^{p}}spaces are a special case ofLp{\displaystyle L^{p}}spaces; whenS{\displaystyle S}are thenatural numbersN{\displaystyle \mathbb {N} }andμ{\displaystyle \mu }is thecounting measure. More generally, if one considers any setS{\displaystyle S}with the counting measure, the resultingLp{\displaystyle L^{p}}space is denotedℓp(S).{\displaystyle \ell ^{p}(S).}For example,ℓp(Z){\displaystyle \ell ^{p}(\mathbb {Z} )}is the space of all sequences indexed by the integers, and when defining thep{\displaystyle p}-norm on such a space, one sums over all the integers. The spaceℓp(n),{\displaystyle \ell ^{p}(n),}wheren{\displaystyle n}is the set withn{\displaystyle n}elements, isRn{\displaystyle \mathbb {R} ^{n}}with itsp{\displaystyle p}-norm as defined above. Similar toℓ2{\displaystyle \ell ^{2}}spaces,L2{\displaystyle L^{2}}is the onlyHilbert spaceamongLp{\displaystyle L^{p}}spaces. 
In the complex case, the inner product onL2{\displaystyle L^{2}}is defined by⟨f,g⟩=∫Sf(x)g(x)¯dμ(x).{\displaystyle \langle f,g\rangle =\int _{S}f(x){\overline {g(x)}}\,\mathrm {d} \mu (x).}Functions inL2{\displaystyle L^{2}}are sometimes calledsquare-integrable functions,quadratically integrable functionsorsquare-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of aRiemann integral(Titchmarsh 1976). As any Hilbert space, every spaceL2{\displaystyle L^{2}}is linearly isometric to a suitableℓ2(I),{\displaystyle \ell ^{2}(I),}where the cardinality of the setI{\displaystyle I}is the cardinality of an arbitrary basis for this particularL2.{\displaystyle L^{2}.} If we use complex-valued functions, the spaceL∞{\displaystyle L^{\infty }}is acommutativeC*-algebrawith pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutativevon Neumann algebra. An element ofL∞{\displaystyle L^{\infty }}defines abounded operatoron anyLp{\displaystyle L^{p}}space bymultiplication. If0<p<1,{\displaystyle 0<p<1,}thenLp(μ){\displaystyle L^{p}(\mu )}can be defined as above, that is:Np(f)=∫S|f|pdμ<∞.{\displaystyle N_{p}(f)=\int _{S}|f|^{p}\,d\mu <\infty .}In this case, however, thep{\displaystyle p}-norm‖f‖p=Np(f)1/p{\displaystyle \|f\|_{p}=N_{p}(f)^{1/p}}does not satisfy the triangle inequality and defines only aquasi-norm. The inequality(a+b)p≤ap+bp,{\displaystyle (a+b)^{p}\leq a^{p}+b^{p},}valid fora,b≥0,{\displaystyle a,b\geq 0,}implies thatNp(f+g)≤Np(f)+Np(g){\displaystyle N_{p}(f+g)\leq N_{p}(f)+N_{p}(g)}and so the functiondp(f,g)=Np(f−g)=‖f−g‖pp{\displaystyle d_{p}(f,g)=N_{p}(f-g)=\|f-g\|_{p}^{p}}is a metric onLp(μ).{\displaystyle L^{p}(\mu ).}The resulting metric space iscomplete.[6] In this settingLp{\displaystyle L^{p}}satisfies areverse Minkowski inequality, that is foru,v∈Lp{\displaystyle u,v\in L^{p}}‖|u|+|v|‖p≥‖u‖p+‖v‖p{\displaystyle {\Big \|}|u|+|v|{\Big \|}_{p}\geq \|u\|_{p}+\|v\|_{p}} This result may be used to proveClarkson's inequalities, which are in turn used to establish theuniform convexityof the spacesLp{\displaystyle L^{p}}for1<p<∞{\displaystyle 1<p<\infty }(Adams & Fournier 2003). The spaceLp{\displaystyle L^{p}}for0<p<1{\displaystyle 0<p<1}is anF-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of anF-spacethat, for most reasonable measure spaces, is notlocally convex: inℓp{\displaystyle \ell ^{p}}orLp([0,1]),{\displaystyle L^{p}([0,1]),}every open convex set containing the0{\displaystyle 0}function is unbounded for thep{\displaystyle p}-quasi-norm; therefore, the0{\displaystyle 0}vector does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure spaceS{\displaystyle S}contains an infinite family of disjoint measurable sets of finite positive measure. The only nonempty convex open set inLp([0,1]){\displaystyle L^{p}([0,1])}is the entire space. Consequently, there are no nonzero continuous linear functionals onLp([0,1]);{\displaystyle L^{p}([0,1]);}thecontinuous dual spaceis the zero space. 
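The failure of the triangle inequality for 0 < p < 1, and the metric d_p built from the p-th power, can be seen on finite vectors (dP is an illustrative name; pNorm as in the earlier sketch):

-- Illustrative sketch for 0 < p < 1: d_p(f, g) = sum_i |f_i - g_i|^p is a metric,
-- even though the p-"norm" itself is only a quasi-norm.
dP :: Double -> [Double] -> [Double] -> Double
dP p fs gs = sum [ abs (f - g) ** p | (f, g) <- zip fs gs ]
-- Counterexample to the triangle inequality with p = 1/2:
-- pNorm 0.5 [1, 1] == 4.0, but pNorm 0.5 [1, 0] + pNorm 0.5 [0, 1] == 2.0.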
In the case of thecounting measureon the natural numbers (i.e.Lp(μ)=ℓp{\displaystyle L^{p}(\mu )=\ell ^{p}}), the bounded linear functionals onℓp{\displaystyle \ell ^{p}}are exactly those that are bounded onℓ1{\displaystyle \ell ^{1}}, i.e., those given by sequences inℓ∞.{\displaystyle \ell ^{\infty }.}Althoughℓp{\displaystyle \ell ^{p}}does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology. Having no linear functionals is highly undesirable for the purposes of doing analysis. In case of the Lebesgue measure onRn,{\displaystyle \mathbb {R} ^{n},}rather than work withLp{\displaystyle L^{p}}for0<p<1,{\displaystyle 0<p<1,}it is common to work with theHardy spaceHpwhenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, theHahn–Banach theoremstill fails inHpforp<1{\displaystyle p<1}(Duren 1970, §7.5). Supposep,q,r∈[1,∞]{\displaystyle p,q,r\in [1,\infty ]}satisfy1p+1q=1r{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}={\tfrac {1}{r}}}. Iff∈Lp(S,μ){\displaystyle f\in L^{p}(S,\mu )}andg∈Lq(S,μ){\displaystyle g\in L^{q}(S,\mu )}thenfg∈Lr(S,μ){\displaystyle fg\in L^{r}(S,\mu )}and[7]‖fg‖r≤‖f‖p‖g‖q.{\displaystyle \|fg\|_{r}~\leq ~\|f\|_{p}\,\|g\|_{q}.} This inequality, calledHölder's inequality, is in some sense optimal since ifr=1{\displaystyle r=1}andf{\displaystyle f}is a measurable function such thatsup‖g‖q≤1∫S|fg|dμ<∞{\displaystyle \sup _{\|g\|_{q}\leq 1}\,\int _{S}|fg|\,\mathrm {d} \mu ~<~\infty }where thesupremumis taken over the closed unit ball ofLq(S,μ),{\displaystyle L^{q}(S,\mu ),}thenf∈Lp(S,μ){\displaystyle f\in L^{p}(S,\mu )}and‖f‖p=sup‖g‖q≤1∫Sfgdμ.{\displaystyle \|f\|_{p}~=~\sup _{\|g\|_{q}\leq 1}\,\int _{S}fg\,\mathrm {d} \mu .} Minkowski inequality, which states that‖⋅‖p{\displaystyle \|\cdot \|_{p}}satisfies thetriangle inequality, can be generalized: If the measurable functionF:M×N→R{\displaystyle F:M\times N\to \mathbb {R} }is non-negative (where(M,μ){\displaystyle (M,\mu )}and(N,ν){\displaystyle (N,\nu )}are measure spaces) then for all1≤p≤q≤∞,{\displaystyle 1\leq p\leq q\leq \infty ,}[8]‖‖F(⋅,n)‖Lp(M,μ)‖Lq(N,ν)≤‖‖F(m,⋅)‖Lq(N,ν)‖Lp(M,μ).{\displaystyle \left\|\left\|F(\,\cdot ,n)\right\|_{L^{p}(M,\mu )}\right\|_{L^{q}(N,\nu )}~\leq ~\left\|\left\|F(m,\cdot )\right\|_{L^{q}(N,\nu )}\right\|_{L^{p}(M,\mu )}\ .} If1≤p<∞{\displaystyle 1\leq p<\infty }then every non-negativef∈Lp(μ){\displaystyle f\in L^{p}(\mu )}has anatomic decomposition,[9]meaning that there exist a sequence(rn)n∈Z{\displaystyle (r_{n})_{n\in \mathbb {Z} }}of non-negative real numbers and a sequence of non-negative functions(fn)n∈Z,{\displaystyle (f_{n})_{n\in \mathbb {Z} },}calledthe atoms, whose supports(supp⁡fn)n∈Z{\displaystyle \left(\operatorname {supp} f_{n}\right)_{n\in \mathbb {Z} }}arepairwise disjoint setsof measureμ(supp⁡fn)≤2n+1,{\displaystyle \mu \left(\operatorname {supp} f_{n}\right)\leq 2^{n+1},}such thatf=∑n∈Zrnfn,{\displaystyle f~=~\sum _{n\in \mathbb {Z} }r_{n}\,f_{n}\,,}and for every integern∈Z,{\displaystyle n\in \mathbb {Z} ,}‖fn‖∞≤2−np,{\displaystyle \|f_{n}\|_{\infty }~\leq ~2^{-{\tfrac {n}{p}}}\,,}and12‖f‖pp≤∑n∈Zrnp≤2‖f‖pp,{\displaystyle {\tfrac {1}{2}}\|f\|_{p}^{p}~\leq ~\sum _{n\in \mathbb {Z} }r_{n}^{p}~\leq ~2\|f\|_{p}^{p}\,,}and where moreover, the sequence of functions(rnfn)n∈Z{\displaystyle (r_{n}f_{n})_{n\in \mathbb {Z} }}depends only onf{\displaystyle f}(it is independent ofp{\displaystyle p}).[9]These inequalities guarantee that‖fn‖pp≤2{\displaystyle \|f_{n}\|_{p}^{p}\leq 
2}for all integersn{\displaystyle n}while the supports of(fn)n∈Z{\displaystyle (f_{n})_{n\in \mathbb {Z} }}being pairwise disjoint implies[9]‖f‖pp=∑n∈Zrnp‖fn‖pp.{\displaystyle \|f\|_{p}^{p}~=~\sum _{n\in \mathbb {Z} }r_{n}^{p}\,\|f_{n}\|_{p}^{p}\,.} An atomic decomposition can be explicitly given by first defining for every integern∈Z,{\displaystyle n\in \mathbb {Z} ,}[9][note 7]tn=inf{t∈R:μ(f>t)<2n}{\displaystyle t_{n}=\inf\{t\in \mathbb {R} :\mu (f>t)<2^{n}\}}and then lettingrn=2n/ptnandfn=frn1(tn+1<f≤tn){\displaystyle r_{n}~=~2^{n/p}\,t_{n}~{\text{ and }}\quad f_{n}~=~{\frac {f}{r_{n}}}\,\mathbf {1} _{(t_{n+1}<f\leq t_{n})}}whereμ(f>t)=μ({s:f(s)>t}){\displaystyle \mu (f>t)=\mu (\{s:f(s)>t\})}denotes the measure of the set(f>t):={s∈S:f(s)>t}{\displaystyle (f>t):=\{s\in S:f(s)>t\}}and1(tn+1<f≤tn){\displaystyle \mathbf {1} _{(t_{n+1}<f\leq t_{n})}}denotes theindicator functionof the set(tn+1<f≤tn):={s∈S:tn+1<f(s)≤tn}.{\displaystyle (t_{n+1}<f\leq t_{n}):=\{s\in S:t_{n+1}<f(s)\leq t_{n}\}.}The sequence(tn)n∈Z{\displaystyle (t_{n})_{n\in \mathbb {Z} }}is decreasing and converges to0{\displaystyle 0}asn→∞.{\displaystyle n\to \infty .}[9]Consequently, iftn=0{\displaystyle t_{n}=0}thentn+1=0{\displaystyle t_{n+1}=0}and(tn+1<f≤tn)=∅{\displaystyle (t_{n+1}<f\leq t_{n})=\varnothing }so thatfn=1rnf1(tn+1<f≤tn){\displaystyle f_{n}={\frac {1}{r_{n}}}\,f\,\mathbf {1} _{(t_{n+1}<f\leq t_{n})}}is identically equal to0{\displaystyle 0}(in particular, the division1rn{\displaystyle {\tfrac {1}{r_{n}}}}byrn=0{\displaystyle r_{n}=0}causes no issues). Thecomplementary cumulative distribution functiont∈R↦μ(|f|>t){\displaystyle t\in \mathbb {R} \mapsto \mu (|f|>t)}of|f|=f{\displaystyle |f|=f}that was used to define thetn{\displaystyle t_{n}}also appears in the definition of the weakLp{\displaystyle L^{p}}-norm (given below) and can be used to express thep{\displaystyle p}-norm‖⋅‖p{\displaystyle \|\cdot \|_{p}}(for1≤p<∞{\displaystyle 1\leq p<\infty }) off∈Lp(S,μ){\displaystyle f\in L^{p}(S,\mu )}as the integral[9]‖f‖pp=p∫0∞tp−1μ(|f|>t)dt,{\displaystyle \|f\|_{p}^{p}~=~p\,\int _{0}^{\infty }t^{p-1}\mu (|f|>t)\,\mathrm {d} t\,,}where the integration is with respect to the usual Lebesgue measure on(0,∞).{\displaystyle (0,\infty ).} Thedual spaceofLp(μ){\displaystyle L^{p}(\mu )}for1<p<∞{\displaystyle 1<p<\infty }has a natural isomorphism withLq(μ),{\displaystyle L^{q}(\mu ),}whereq{\displaystyle q}is such that1p+1q=1{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1}. This isomorphism associatesg∈Lq(μ){\displaystyle g\in L^{q}(\mu )}with the functionalκp(g)∈Lp(μ)∗{\displaystyle \kappa _{p}(g)\in L^{p}(\mu )^{*}}defined byf↦κp(g)(f)=∫fgdμ{\displaystyle f\mapsto \kappa _{p}(g)(f)=\int fg\,\mathrm {d} \mu }for everyf∈Lp(μ).{\displaystyle f\in L^{p}(\mu ).} κp:Lq(μ)→Lp(μ)∗{\displaystyle \kappa _{p}:L^{q}(\mu )\to L^{p}(\mu )^{*}}is a well defined continuous linear mapping which is anisometryby theextremal caseof Hölder's inequality. If(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}is aσ{\displaystyle \sigma }-finite measure spaceone can use theRadon–Nikodym theoremto show that anyG∈Lp(μ)∗{\displaystyle G\in L^{p}(\mu )^{*}}can be expressed this way, i.e.,κp{\displaystyle \kappa _{p}}is anisometric isomorphismofBanach spaces.[10]Hence, it is usual to say simply thatLq(μ){\displaystyle L^{q}(\mu )}is thecontinuous dual spaceofLp(μ).{\displaystyle L^{p}(\mu ).} For1<p<∞,{\displaystyle 1<p<\infty ,}the spaceLp(μ){\displaystyle L^{p}(\mu )}isreflexive. 
Letκp{\displaystyle \kappa _{p}}be as above and letκq:Lp(μ)→Lq(μ)∗{\displaystyle \kappa _{q}:L^{p}(\mu )\to L^{q}(\mu )^{*}}be the corresponding linear isometry. Consider the map fromLp(μ){\displaystyle L^{p}(\mu )}toLp(μ)∗∗,{\displaystyle L^{p}(\mu )^{**},}obtained by composingκq{\displaystyle \kappa _{q}}with thetranspose(or adjoint) of the inverse ofκp:{\displaystyle \kappa _{p}:} jp:Lp(μ)⟶κqLq(μ)∗⟶(κp−1)∗Lp(μ)∗∗{\displaystyle j_{p}:L^{p}(\mu )\mathrel {\overset {\kappa _{q}}{\longrightarrow }} L^{q}(\mu )^{*}\mathrel {\overset {\left(\kappa _{p}^{-1}\right)^{*}}{\longrightarrow }} L^{p}(\mu )^{**}} This map coincides with thecanonical embeddingJ{\displaystyle J}ofLp(μ){\displaystyle L^{p}(\mu )}into its bidual. Moreover, the mapjp{\displaystyle j_{p}}is onto, as composition of two onto isometries, and this proves reflexivity. If the measureμ{\displaystyle \mu }onS{\displaystyle S}issigma-finite, then the dual ofL1(μ){\displaystyle L^{1}(\mu )}is isometrically isomorphic toL∞(μ){\displaystyle L^{\infty }(\mu )}(more precisely, the mapκ1{\displaystyle \kappa _{1}}corresponding top=1{\displaystyle p=1}is an isometry fromL∞(μ){\displaystyle L^{\infty }(\mu )}ontoL1(μ)∗.{\displaystyle L^{1}(\mu )^{*}.} The dual ofL∞(μ){\displaystyle L^{\infty }(\mu )}is subtler. Elements ofL∞(μ)∗{\displaystyle L^{\infty }(\mu )^{*}}can be identified with bounded signedfinitelyadditive measures onS{\displaystyle S}that areabsolutely continuouswith respect toμ.{\displaystyle \mu .}Seeba spacefor more details. If we assume the axiom of choice, this space is much bigger thanL1(μ){\displaystyle L^{1}(\mu )}except in some trivial cases. However,Saharon Shelahproved that there are relatively consistent extensions ofZermelo–Fraenkel set theory(ZF +DC+ "Every subset of the real numbers has theBaire property") in which the dual ofℓ∞{\displaystyle \ell ^{\infty }}isℓ1.{\displaystyle \ell ^{1}.}[11] Colloquially, if1≤p<q≤∞,{\displaystyle 1\leq p<q\leq \infty ,}thenLp(S,μ){\displaystyle L^{p}(S,\mu )}contains functions that are more locally singular, while elements ofLq(S,μ){\displaystyle L^{q}(S,\mu )}can be more spread out. Consider theLebesgue measureon the half line(0,∞).{\displaystyle (0,\infty ).}A continuous function inL1{\displaystyle L^{1}}might blow up near0{\displaystyle 0}but must decay sufficiently fast toward infinity. On the other hand, continuous functions inL∞{\displaystyle L^{\infty }}need not decay at all but no blow-up is allowed. More formally:[12] Neither condition holds for the Lebesgue measure on the real line while both conditions holds for thecounting measureon any finite set. As a consequence of theclosed graph theorem, the embedding is continuous, i.e., theidentity operatoris a bounded linear map fromLq{\displaystyle L^{q}}toLp{\displaystyle L^{p}}in the first case andLp{\displaystyle L^{p}}toLq{\displaystyle L^{q}}in the second. 
Indeed, if the domainS{\displaystyle S}has finite measure, one can make the following explicit calculation usingHölder's inequality‖1fp‖1≤‖1‖q/(q−p)‖fp‖q/p{\displaystyle \ \|\mathbf {1} f^{p}\|_{1}\leq \|\mathbf {1} \|_{q/(q-p)}\|f^{p}\|_{q/p}}leading to‖f‖p≤μ(S)1/p−1/q‖f‖q.{\displaystyle \ \|f\|_{p}\leq \mu (S)^{1/p-1/q}\|f\|_{q}.} The constant appearing in the above inequality is optimal, in the sense that theoperator normof the identityI:Lq(S,μ)→Lp(S,μ){\displaystyle I:L^{q}(S,\mu )\to L^{p}(S,\mu )}is precisely‖I‖q,p=μ(S)1/p−1/q{\displaystyle \|I\|_{q,p}=\mu (S)^{1/p-1/q}}the case of equality being achieved exactly whenf=1{\displaystyle f=1}μ{\displaystyle \mu }-almost-everywhere. Let1≤p<∞{\displaystyle 1\leq p<\infty }and(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}be a measure space and consider an integrablesimple functionf{\displaystyle f}onS{\displaystyle S}given byf=∑j=1naj1Aj,{\displaystyle f=\sum _{j=1}^{n}a_{j}\mathbf {1} _{A_{j}},}whereaj{\displaystyle a_{j}}are scalars,Aj∈Σ{\displaystyle A_{j}\in \Sigma }has finite measure and1Aj{\displaystyle {\mathbf {1} }_{A_{j}}}is theindicator functionof the setAj,{\displaystyle A_{j},}forj=1,…,n.{\displaystyle j=1,\dots ,n.}By construction of theintegral, the vector space of integrable simple functions isdenseinLp(S,Σ,μ).{\displaystyle L^{p}(S,\Sigma ,\mu ).} More can be said whenS{\displaystyle S}is anormaltopological spaceandΣ{\displaystyle \Sigma }itsBorel 𝜎–algebra. SupposeV⊆S{\displaystyle V\subseteq S}is an open set withμ(V)<∞.{\displaystyle \mu (V)<\infty .}Then for every Borel setA∈Σ{\displaystyle A\in \Sigma }contained inV{\displaystyle V}there exist a closed setF{\displaystyle F}and an open setU{\displaystyle U}such thatF⊆A⊆U⊆Vandμ(U∖F)=μ(U)−μ(F)<ε,{\displaystyle F\subseteq A\subseteq U\subseteq V\quad {\text{and}}\quad \mu (U\setminus F)=\mu (U)-\mu (F)<\varepsilon ,}for everyε>0{\displaystyle \varepsilon >0}. Subsequently, there exists aUrysohn function0≤φ≤1{\displaystyle 0\leq \varphi \leq 1}onS{\displaystyle S}that is1{\displaystyle 1}onF{\displaystyle F}and0{\displaystyle 0}onS∖U,{\displaystyle S\setminus U,}with∫S|1A−φ|dμ<ε.{\displaystyle \int _{S}|\mathbf {1} _{A}-\varphi |\,\mathrm {d} \mu <\varepsilon \,.} IfS{\displaystyle S}can be covered by an increasing sequence(Vn){\displaystyle (V_{n})}of open sets that have finite measure, then the space ofp{\displaystyle p}–integrable continuous functions is dense inLp(S,Σ,μ).{\displaystyle L^{p}(S,\Sigma ,\mu ).}More precisely, one can use bounded continuous functions that vanish outside one of the open setsVn.{\displaystyle V_{n}.} This applies in particular whenS=Rd{\displaystyle S=\mathbb {R} ^{d}}and whenμ{\displaystyle \mu }is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrablestep functionsare dense inLp(Rd){\displaystyle L^{p}(\mathbb {R} ^{d})}. If0<p<∞{\displaystyle 0<p<\infty }is any positive real number,μ{\displaystyle \mu }is aprobability measureon a measurable space(S,Σ){\displaystyle (S,\Sigma )}(so thatL∞(μ)⊆Lp(μ){\displaystyle L^{\infty }(\mu )\subseteq L^{p}(\mu )}), andV⊆L∞(μ){\displaystyle V\subseteq L^{\infty }(\mu )}is a vector subspace, thenV{\displaystyle V}is a closed subspace ofLp(μ){\displaystyle L^{p}(\mu )}if and only ifV{\displaystyle V}is finite-dimensional[13](V{\displaystyle V}was chosen independent ofp{\displaystyle p}). 
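Both Hölder's inequality and the resulting embedding can be mirrored numerically for a finite set with the counting measure (so mu(S) = n); holderHolds and embeddingHolds are illustrative names, and pNorm is from the earlier sketch.

-- Illustrative check of ||fg||_r <= ||f||_p ||g||_q where 1/p + 1/q = 1/r.
holderHolds :: Double -> Double -> [Double] -> [Double] -> Bool
holderHolds p q fs gs =
  let r = recip (recip p + recip q)
  in pNorm r (zipWith (*) fs gs) <= pNorm p fs * pNorm q gs
-- holderHolds 2 2 [1, 2, 3] [4, 5, 6]  ==>  True  (r = 1, the Cauchy-Schwarz case)

-- Illustrative check of ||f||_p <= mu(S)^(1/p - 1/q) ||f||_q for p < q, counting measure.
embeddingHolds :: Double -> Double -> [Double] -> Bool
embeddingHolds p q fs =
  let n = fromIntegral (length fs)
  in pNorm p fs <= n ** (recip p - recip q) * pNorm q fs
-- embeddingHolds 1 2 [1, -2, 3]  ==>  True

The closed-subspace statement above, that a subspace V ⊆ L∞(μ) is closed in Lp(μ) only if it is finite-dimensional, is the theorem taken up next.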
In this theorem, which is due toAlexander Grothendieck,[13]it is crucial that the vector spaceV{\displaystyle V}be a subset ofL∞{\displaystyle L^{\infty }}since it is possible to construct an infinite-dimensional closed vector subspace ofL1(S1,12πλ){\displaystyle L^{1}\left(S^{1},{\tfrac {1}{2\pi }}\lambda \right)}(which is even a subset ofL4{\displaystyle L^{4}}), whereλ{\displaystyle \lambda }isLebesgue measureon theunit circleS1{\displaystyle S^{1}}and12πλ{\displaystyle {\tfrac {1}{2\pi }}\lambda }is the probability measure that results from dividing it by its massλ(S1)=2π.{\displaystyle \lambda (S^{1})=2\pi .}[13] In statistics, measures ofcentral tendencyandstatistical dispersion, such as themean,median, andstandard deviation, can be defined in terms ofLp{\displaystyle L^{p}}metrics, and measures of central tendency can be characterized assolutions to variational problems. Inpenalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either theL1{\displaystyle L^{1}}normof a solution's vector of parameter values (i.e. the sum of its absolute values), or its squaredL2{\displaystyle L^{2}}norm (itsEuclidean length). Techniques which use an L1 penalty, likeLASSO, encourage sparse solutions (where the many parameters are zero).[14]Elastic net regularizationuses a penalty term that is a combination of theL1{\displaystyle L^{1}}norm and the squaredL2{\displaystyle L^{2}}norm of the parameter vector. TheFourier transformfor the real line (or, forperiodic functions, seeFourier series), mapsLp(R){\displaystyle L^{p}(\mathbb {R} )}toLq(R){\displaystyle L^{q}(\mathbb {R} )}(orLp(T){\displaystyle L^{p}(\mathbf {T} )}toℓq{\displaystyle \ell ^{q}}) respectively, where1≤p≤2{\displaystyle 1\leq p\leq 2}and1p+1q=1.{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1.}This is a consequence of theRiesz–Thorin interpolation theorem, and is made precise with theHausdorff–Young inequality. By contrast, ifp>2,{\displaystyle p>2,}the Fourier transform does not map intoLq.{\displaystyle L^{q}.} Hilbert spacesare central to many applications, fromquantum mechanicstostochastic calculus. The spacesL2{\displaystyle L^{2}}andℓ2{\displaystyle \ell ^{2}}are both Hilbert spaces. 
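As an aside on the statistical use just mentioned, the penalty terms are easy to write down explicitly; the following sketch shows only the penalized objective, not a fitting procedure, and all names are ours:

-- Illustrative sketch: L1 and squared-L2 penalty terms added to a residual sum of squares.
l1Penalty, l2PenaltySq :: [Double] -> Double
l1Penalty   beta = sum (map abs beta)   -- sum of absolute coefficient values
l2PenaltySq beta = sum (map (^ 2) beta) -- squared Euclidean length of the coefficients

penalizedLoss :: Double -> Double -> Double -> [Double] -> Double
penalizedLoss lambda1 lambda2 residualSS beta =
  residualSS + lambda1 * l1Penalty beta + lambda2 * l2PenaltySq beta
-- lambda2 = 0 gives a LASSO-type penalty, lambda1 = 0 a ridge-type penalty,
-- and both nonzero an elastic-net-type penalty.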
In fact, by choosing a Hilbert basisE,{\displaystyle E,}i.e., a maximal orthonormal subset ofL2{\displaystyle L^{2}}or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic toℓ2(E){\displaystyle \ell ^{2}(E)}(sameE{\displaystyle E}as above), i.e., a Hilbert space of typeℓ2.{\displaystyle \ell ^{2}.} Let(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}be a measure space, andf{\displaystyle f}ameasurable functionwith real or complex values onS.{\displaystyle S.}Thedistribution functionoff{\displaystyle f}is defined fort≥0{\displaystyle t\geq 0}byλf(t)=μ{x∈S:|f(x)|>t}.{\displaystyle \lambda _{f}(t)=\mu \{x\in S:|f(x)|>t\}.} Iff{\displaystyle f}is inLp(S,μ){\displaystyle L^{p}(S,\mu )}for somep{\displaystyle p}with1≤p<∞,{\displaystyle 1\leq p<\infty ,}then byMarkov's inequality,λf(t)≤‖f‖pptp{\displaystyle \lambda _{f}(t)\leq {\frac {\|f\|_{p}^{p}}{t^{p}}}} A functionf{\displaystyle f}is said to be in the spaceweakLp(S,μ){\displaystyle L^{p}(S,\mu )}, orLp,w(S,μ),{\displaystyle L^{p,w}(S,\mu ),}if there is a constantC>0{\displaystyle C>0}such that, for allt>0,{\displaystyle t>0,}λf(t)≤Cptp{\displaystyle \lambda _{f}(t)\leq {\frac {C^{p}}{t^{p}}}} The best constantC{\displaystyle C}for this inequality is theLp,w{\displaystyle L^{p,w}}-norm off,{\displaystyle f,}and is denoted by‖f‖p,w=supt>0tλf1/p(t).{\displaystyle \|f\|_{p,w}=\sup _{t>0}~t\lambda _{f}^{1/p}(t).} The weakLp{\displaystyle L^{p}}coincide with theLorentz spacesLp,∞,{\displaystyle L^{p,\infty },}so this notation is also used to denote them. TheLp,w{\displaystyle L^{p,w}}-norm is not a true norm, since thetriangle inequalityfails to hold. Nevertheless, forf{\displaystyle f}inLp(S,μ),{\displaystyle L^{p}(S,\mu ),}‖f‖p,w≤‖f‖p{\displaystyle \|f\|_{p,w}\leq \|f\|_{p}}and in particularLp(S,μ)⊂Lp,w(S,μ).{\displaystyle L^{p}(S,\mu )\subset L^{p,w}(S,\mu ).} In fact, one has‖f‖Lpp=∫|f(x)|pdμ(x)≥∫{|f(x)|>t}tp+∫{|f(x)|≤t}|f|p≥tpμ({|f|>t}),{\displaystyle \|f\|_{L^{p}}^{p}=\int |f(x)|^{p}d\mu (x)\geq \int _{\{|f(x)|>t\}}t^{p}+\int _{\{|f(x)|\leq t\}}|f|^{p}\geq t^{p}\mu (\{|f|>t\}),}and raising to power1/p{\displaystyle 1/p}and taking the supremum int{\displaystyle t}one has‖f‖Lp≥supt>0tμ({|f|>t})1/p=‖f‖Lp,w.{\displaystyle \|f\|_{L^{p}}\geq \sup _{t>0}t\;\mu (\{|f|>t\})^{1/p}=\|f\|_{L^{p,w}}.} Under the convention that two functions are equal if they are equalμ{\displaystyle \mu }almost everywhere, then the spacesLp,w{\displaystyle L^{p,w}}are complete (Grafakos 2004). For any0<r<p{\displaystyle 0<r<p}the expression‖|f|‖Lp,∞=sup0<μ(E)<∞μ(E)−1/r+1/p(∫E|f|rdμ)1/r{\displaystyle \||f|\|_{L^{p,\infty }}=\sup _{0<\mu (E)<\infty }\mu (E)^{-1/r+1/p}\left(\int _{E}|f|^{r}\,d\mu \right)^{1/r}}is comparable to theLp,w{\displaystyle L^{p,w}}-norm. Further in the casep>1,{\displaystyle p>1,}this expression defines a norm ifr=1.{\displaystyle r=1.}Hence forp>1{\displaystyle p>1}the weakLp{\displaystyle L^{p}}spaces areBanach spaces(Grafakos 2004). A major result that uses theLp,w{\displaystyle L^{p,w}}-spaces is theMarcinkiewicz interpolation theorem, which has broad applications toharmonic analysisand the study ofsingular integrals. As before, consider ameasure space(S,Σ,μ).{\displaystyle (S,\Sigma ,\mu ).}Letw:S→[a,∞),a>0{\displaystyle w:S\to [a,\infty ),a>0}be a measurable function. 
Thew{\displaystyle w}-weightedLp{\displaystyle L^{p}}spaceis defined asLp(S,wdμ),{\displaystyle L^{p}(S,w\,\mathrm {d} \mu ),}wherewdμ{\displaystyle w\,\mathrm {d} \mu }means the measureν{\displaystyle \nu }defined byν(A)≡∫Aw(x)dμ(x),A∈Σ,{\displaystyle \nu (A)\equiv \int _{A}w(x)\,\mathrm {d} \mu (x),\qquad A\in \Sigma ,} or, in terms of theRadon–Nikodym derivative,w=dνdμ{\displaystyle w={\tfrac {\mathrm {d} \nu }{\mathrm {d} \mu }}}thenormforLp(S,wdμ){\displaystyle L^{p}(S,w\,\mathrm {d} \mu )}is explicitly‖u‖Lp(S,wdμ)≡(∫Sw(x)|u(x)|pdμ(x))1/p{\displaystyle \|u\|_{L^{p}(S,w\,\mathrm {d} \mu )}\equiv \left(\int _{S}w(x)|u(x)|^{p}\,\mathrm {d} \mu (x)\right)^{1/p}} AsLp{\displaystyle L^{p}}-spaces, the weighted spaces have nothing special, sinceLp(S,wdμ){\displaystyle L^{p}(S,w\,\mathrm {d} \mu )}is equal toLp(S,dν).{\displaystyle L^{p}(S,\mathrm {d} \nu ).}But they are the natural framework for several results in harmonic analysis (Grafakos 2004); they appear for example in theMuckenhoupt theorem: for1<p<∞,{\displaystyle 1<p<\infty ,}the classicalHilbert transformis defined onLp(T,λ){\displaystyle L^{p}(\mathbf {T} ,\lambda )}whereT{\displaystyle \mathbf {T} }denotes theunit circleandλ{\displaystyle \lambda }the Lebesgue measure; the (nonlinear)Hardy–Littlewood maximal operatoris bounded onLp(Rn,λ).{\displaystyle L^{p}(\mathbb {R} ^{n},\lambda ).}Muckenhoupt's theorem describes weightsw{\displaystyle w}such that the Hilbert transform remains bounded onLp(T,wdλ){\displaystyle L^{p}(\mathbf {T} ,w\,\mathrm {d} \lambda )}and the maximal operator onLp(Rn,wdλ).{\displaystyle L^{p}(\mathbb {R} ^{n},w\,\mathrm {d} \lambda ).} One may also define spacesLp(M){\displaystyle L^{p}(M)}on a manifold, called theintrinsicLp{\displaystyle L^{p}}spacesof the manifold, usingdensities. Given a measure space(Ω,Σ,μ){\displaystyle (\Omega ,\Sigma ,\mu )}and alocally convex spaceE{\displaystyle E}(here assumed to becomplete), it is possible to define spaces ofp{\displaystyle p}-integrableE{\displaystyle E}-valued functions onΩ{\displaystyle \Omega }in a number of ways. One way is to define the spaces ofBochner integrableandPettis integrablefunctions, and then endow them withlocally convexTVS-topologiesthat are (each in their own way) a natural generalization of the usualLp{\displaystyle L^{p}}topology. 
Another way involvestopological tensor productsofLp(Ω,Σ,μ){\displaystyle L^{p}(\Omega ,\Sigma ,\mu )}withE.{\displaystyle E.}Element of the vector spaceLp(Ω,Σ,μ)⊗E{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes E}are finite sums of simple tensorsf1⊗e1+⋯+fn⊗en,{\displaystyle f_{1}\otimes e_{1}+\cdots +f_{n}\otimes e_{n},}where each simple tensorf×e{\displaystyle f\times e}may be identified with the functionΩ→E{\displaystyle \Omega \to E}that sendsx↦ef(x).{\displaystyle x\mapsto ef(x).}Thistensor productLp(Ω,Σ,μ)⊗E{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes E}is then endowed with a locally convex topology that turns it into atopological tensor product, the most common of which are theprojective tensor product, denoted byLp(Ω,Σ,μ)⊗πE,{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes _{\pi }E,}and theinjective tensor product, denoted byLp(Ω,Σ,μ)⊗εE.{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes _{\varepsilon }E.}In general, neither of these space are complete so theircompletionsare constructed, which are respectively denoted byLp(Ω,Σ,μ)⊗^πE{\displaystyle L^{p}(\Omega ,\Sigma ,\mu ){\widehat {\otimes }}_{\pi }E}andLp(Ω,Σ,μ)⊗^εE{\displaystyle L^{p}(\Omega ,\Sigma ,\mu ){\widehat {\otimes }}_{\varepsilon }E}(this is analogous to how the space of scalar-valuedsimple functionsonΩ,{\displaystyle \Omega ,}when seminormed by any‖⋅‖p,{\displaystyle \|\cdot \|_{p},}is not complete so a completion is constructed which, after being quotiented byker⁡‖⋅‖p,{\displaystyle \ker \|\cdot \|_{p},}is isometrically isomorphic to the Banach spaceLp(Ω,μ){\displaystyle L^{p}(\Omega ,\mu )}).Alexander Grothendieckshowed that whenE{\displaystyle E}is anuclear space(a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable. The vector space of (equivalence classesof) measurable functions on(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}is denotedL0(S,Σ,μ){\displaystyle L^{0}(S,\Sigma ,\mu )}(Kalton, Peck & Roberts 1984). By definition, it contains all theLp,{\displaystyle L^{p},}and is equipped with the topology ofconvergence in measure. Whenμ{\displaystyle \mu }is a probability measure (i.e.,μ(S)=1{\displaystyle \mu (S)=1}), this mode of convergence is namedconvergence in probability. The spaceL0{\displaystyle L^{0}}is always atopological abelian groupbut is only atopological vector spaceifμ(S)<∞.{\displaystyle \mu (S)<\infty .}This is because scalar multiplication is continuous if and only ifμ(S)<∞.{\displaystyle \mu (S)<\infty .}If(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}isσ{\displaystyle \sigma }-finite then theweaker topologyoflocal convergence in measureis anF-space, i.e. acompletelymetrizable topological vector space. Moreover, this topology is isometric to global convergence in measure(S,Σ,ν){\displaystyle (S,\Sigma ,\nu )}for a suitable choice ofprobability measureν.{\displaystyle \nu .} The description is easier whenμ{\displaystyle \mu }is finite. 
Ifμ{\displaystyle \mu }is afinite measureon(S,Σ),{\displaystyle (S,\Sigma ),}the0{\displaystyle 0}function admits for the convergence in measure the followingfundamental system of neighborhoodsVε={f:μ({x:|f(x)|>ε})<ε},ε>0.{\displaystyle V_{\varepsilon }={\Bigl \{}f:\mu {\bigl (}\{x:|f(x)|>\varepsilon \}{\bigr )}<\varepsilon {\Bigr \}},\qquad \varepsilon >0.} The topology can be defined by any metricd{\displaystyle d}of the formd(f,g)=∫Sφ(|f(x)−g(x)|)dμ(x){\displaystyle d(f,g)=\int _{S}\varphi {\bigl (}|f(x)-g(x)|{\bigr )}\,\mathrm {d} \mu (x)}whereφ{\displaystyle \varphi }is bounded continuous concave and non-decreasing on[0,∞),{\displaystyle [0,\infty ),}withφ(0)=0{\displaystyle \varphi (0)=0}andφ(t)>0{\displaystyle \varphi (t)>0}whent>0{\displaystyle t>0}(for example,φ(t)=min(t,1).{\displaystyle \varphi (t)=\min(t,1).}Such a metric is calledLévy-metric forL0.{\displaystyle L^{0}.}Under this metric the spaceL0{\displaystyle L^{0}}is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only ifμ(S)<∞{\displaystyle \mu (S)<\infty }. To see this, consider the Lebesgue measurable functionf:R→R{\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} }defined byf(x)=x{\displaystyle f(x)=x}. Then clearlylimc→0d(cf,0)=∞{\displaystyle \lim _{c\rightarrow 0}d(cf,0)=\infty }. The spaceL0{\displaystyle L^{0}}is in general not locally bounded, and not locally convex. For the infinite Lebesgue measureλ{\displaystyle \lambda }onRn,{\displaystyle \mathbb {R} ^{n},}the definition of the fundamental system of neighborhoods could be modified as followsWε={f:λ({x:|f(x)|>εand|x|<1ε})<ε}{\displaystyle W_{\varepsilon }=\left\{f:\lambda \left(\left\{x:|f(x)|>\varepsilon {\text{ and }}|x|<{\tfrac {1}{\varepsilon }}\right\}\right)<\varepsilon \right\}} The resulting spaceL0(Rn,λ){\displaystyle L^{0}(\mathbb {R} ^{n},\lambda )}, with the topology of local convergence in measure, is isomorphic to the spaceL0(Rn,gλ),{\displaystyle L^{0}(\mathbb {R} ^{n},g\,\lambda ),}for any positiveλ{\displaystyle \lambda }–integrable densityg.{\displaystyle g.}
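On a finite probability space with uniform weights, a Lévy-type metric of the kind described above, taking phi(t) = min(t, 1), can be sketched as follows; this is an illustration under those assumptions, not the general construction (levyDist is an illustrative name):

-- Illustrative sketch: d(f, g) = (1/n) * sum_i min(|f_i - g_i|, 1), a Levy-type metric
-- for convergence in measure on n points with uniform probability weights.
levyDist :: [Double] -> [Double] -> Double
levyDist fs gs =
  let n = fromIntegral (length fs)
  in sum [ min (abs (f - g)) 1 | (f, g) <- zip fs gs ] / n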
https://en.wikipedia.org/wiki/Lp_space
Inmathematics, theLpspacesarefunction spacesdefined using a natural generalization of thep-normfor finite-dimensionalvector spaces. They are sometimes calledLebesgue spaces, named afterHenri Lebesgue(Dunford & Schwartz 1958, III.3), although according to theBourbakigroup (Bourbaki 1987) they were first introduced byFrigyes Riesz(Riesz 1910). Lpspaces form an important class ofBanach spacesinfunctional analysis, and oftopological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines. The Euclidean length of a vectorx=(x1,x2,…,xn){\displaystyle x=(x_{1},x_{2},\dots ,x_{n})}in then{\displaystyle n}-dimensionalrealvector spaceRn{\displaystyle \mathbb {R} ^{n}}is given by theEuclidean norm:‖x‖2=(x12+x22+⋯+xn2)1/2.{\displaystyle \|x\|_{2}=\left({x_{1}}^{2}+{x_{2}}^{2}+\dotsb +{x_{n}}^{2}\right)^{1/2}.} The Euclidean distance between two pointsx{\displaystyle x}andy{\displaystyle y}is the length‖x−y‖2{\displaystyle \|x-y\|_{2}}of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of therectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class ofp{\displaystyle p}-norms generalizes these two examples and has an abundance of applications in many parts ofmathematics,physics, andcomputer science. For areal numberp≥1,{\displaystyle p\geq 1,}thep{\displaystyle p}-normorLp{\displaystyle L^{p}}-normofx{\displaystyle x}is defined by‖x‖p=(|x1|p+|x2|p+⋯+|xn|p)1/p.{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\dotsb +|x_{n}|^{p}\right)^{1/p}.}The absolute value bars can be dropped whenp{\displaystyle p}is a rational number with an even numerator in its reduced form, andx{\displaystyle x}is drawn from the set of real numbers, or one of its subsets. The Euclidean norm from above falls into this class and is the2{\displaystyle 2}-norm, and the1{\displaystyle 1}-norm is the norm that corresponds to therectilinear distance. TheL∞{\displaystyle L^{\infty }}-normormaximum norm(or uniform norm) is the limit of theLp{\displaystyle L^{p}}-norms forp→∞{\displaystyle p\to \infty }, given by:‖x‖∞=max{|x1|,|x2|,…,|xn|}{\displaystyle \|x\|_{\infty }=\max \left\{|x_{1}|,|x_{2}|,\dotsc ,|x_{n}|\right\}} For allp≥1,{\displaystyle p\geq 1,}thep{\displaystyle p}-norms and maximum norm satisfy the properties of a "length function" (ornorm), that is: Abstractly speaking, this means thatRn{\displaystyle \mathbb {R} ^{n}}together with thep{\displaystyle p}-norm is anormed vector space. Moreover, it turns out that this space iscomplete, thus making it aBanach space. The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). 
Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm:‖x‖2≤‖x‖1.{\displaystyle \|x\|_{2}\leq \|x\|_{1}.} This fact generalizes top{\displaystyle p}-norms in that thep{\displaystyle p}-norm‖x‖p{\displaystyle \|x\|_{p}}of any given vectorx{\displaystyle x}does not grow withp{\displaystyle p}: For the opposite direction, the following relation between the1{\displaystyle 1}-norm and the2{\displaystyle 2}-norm is known:‖x‖1≤n‖x‖2.{\displaystyle \|x\|_{1}\leq {\sqrt {n}}\|x\|_{2}~.} This inequality depends on the dimensionn{\displaystyle n}of the underlying vector space and follows directly from theCauchy–Schwarz inequality. In general, for vectors inCn{\displaystyle \mathbb {C} ^{n}}where0<r<p:{\displaystyle 0<r<p:}‖x‖p≤‖x‖r≤n1r−1p‖x‖p.{\displaystyle \|x\|_{p}\leq \|x\|_{r}\leq n^{{\frac {1}{r}}-{\frac {1}{p}}}\|x\|_{p}~.} This is a consequence ofHölder's inequality. InRn{\displaystyle \mathbb {R} ^{n}}forn>1,{\displaystyle n>1,}the formula‖x‖p=(|x1|p+|x2|p+⋯+|xn|p)1/p{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\cdots +|x_{n}|^{p}\right)^{1/p}}defines an absolutelyhomogeneous functionfor0<p<1;{\displaystyle 0<p<1;}however, the resulting function does not define a norm, because it is notsubadditive. On the other hand, the formula|x1|p+|x2|p+⋯+|xn|p{\displaystyle |x_{1}|^{p}+|x_{2}|^{p}+\dotsb +|x_{n}|^{p}}defines a subadditive function at the cost of losing absolute homogeneity. It does define anF-norm, though, which is homogeneous of degreep.{\displaystyle p.} Hence, the functiondp(x,y)=∑i=1n|xi−yi|p{\displaystyle d_{p}(x,y)=\sum _{i=1}^{n}|x_{i}-y_{i}|^{p}}defines ametric. Themetric space(Rn,dp){\displaystyle (\mathbb {R} ^{n},d_{p})}is denoted byℓnp.{\displaystyle \ell _{n}^{p}.} Although thep{\displaystyle p}-unit ballBnp{\displaystyle B_{n}^{p}}around the origin in this metric is "concave", the topology defined onRn{\displaystyle \mathbb {R} ^{n}}by the metricBp{\displaystyle B_{p}}is the usual vector space topology ofRn,{\displaystyle \mathbb {R} ^{n},}henceℓnp{\displaystyle \ell _{n}^{p}}is alocally convextopological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity ofℓnp{\displaystyle \ell _{n}^{p}}is to denote byCp(n){\displaystyle C_{p}(n)}the smallest constantC{\displaystyle C}such that the scalar multipleCBnp{\displaystyle C\,B_{n}^{p}}of thep{\displaystyle p}-unit ball contains the convex hull ofBnp,{\displaystyle B_{n}^{p},}which is equal toBn1.{\displaystyle B_{n}^{1}.}The fact that for fixedp<1{\displaystyle p<1}we haveCp(n)=n1p−1→∞,asn→∞{\displaystyle C_{p}(n)=n^{{\tfrac {1}{p}}-1}\to \infty ,\quad {\text{as }}n\to \infty }shows that the infinite-dimensional sequence spaceℓp{\displaystyle \ell ^{p}}defined below, is no longer locally convex.[citation needed] There is oneℓ0{\displaystyle \ell _{0}}norm and another function called theℓ0{\displaystyle \ell _{0}}"norm" (with quotation marks). The mathematical definition of theℓ0{\displaystyle \ell _{0}}norm was established byBanach'sTheory of Linear Operations. Thespaceof sequences has a complete metric topology provided by theF-normon theproduct metric:[citation needed](xn)↦‖x‖:=d(0,x)=∑n2−n|xn|1+|xn|.{\displaystyle (x_{n})\mapsto \|x\|:=d(0,x)=\sum _{n}2^{-n}{\frac {|x_{n}|}{1+|x_{n}|}}.}Theℓ0{\displaystyle \ell _{0}}-normed space is studied in functional analysis, probability theory, and harmonic analysis. 
Another function was called theℓ0{\displaystyle \ell _{0}}"norm" byDavid Donoho—whose quotation marks warn that this function is not a proper norm—is the number of non-zero entries of the vectorx.{\displaystyle x.}[citation needed]Many authorsabuse terminologyby omitting the quotation marks. Defining00=0,{\displaystyle 0^{0}=0,}the zero "norm" ofx{\displaystyle x}is equal to|x1|0+|x2|0+⋯+|xn|0.{\displaystyle |x_{1}|^{0}+|x_{2}|^{0}+\cdots +|x_{n}|^{0}.} This is not anormbecause it is nothomogeneous. For example, scaling the vectorx{\displaystyle x}by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses inscientific computing,information theory, andstatistics–notably incompressed sensinginsignal processingand computationalharmonic analysis. Despite not being a norm, the associated metric, known asHamming distance, is a valid distance, since homogeneity is not required for distances. Thep{\displaystyle p}-norm can be extended to vectors that have an infinite number of components (sequences), which yields the spaceℓp.{\displaystyle \ell ^{p}.}This contains as special cases: The space of sequences has a natural vector space structure by applying scalar addition and multiplication. Explicitly, the vector sum and the scalar action for infinitesequencesof real (orcomplex) numbers are given by:(x1,x2,…,xn,xn+1,…)+(y1,y2,…,yn,yn+1,…)=(x1+y1,x2+y2,…,xn+yn,xn+1+yn+1,…),λ⋅(x1,x2,…,xn,xn+1,…)=(λx1,λx2,…,λxn,λxn+1,…).{\displaystyle {\begin{aligned}&(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots )+(y_{1},y_{2},\ldots ,y_{n},y_{n+1},\ldots )\\={}&(x_{1}+y_{1},x_{2}+y_{2},\ldots ,x_{n}+y_{n},x_{n+1}+y_{n+1},\ldots ),\\[6pt]&\lambda \cdot \left(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots \right)\\={}&(\lambda x_{1},\lambda x_{2},\ldots ,\lambda x_{n},\lambda x_{n+1},\ldots ).\end{aligned}}} Define thep{\displaystyle p}-norm:‖x‖p=(|x1|p+|x2|p+⋯+|xn|p+|xn+1|p+⋯)1/p{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\cdots +|x_{n}|^{p}+|x_{n+1}|^{p}+\cdots \right)^{1/p}} Here, a complication arises, namely that theserieson the right is not always convergent, so for example, the sequence made up of only ones,(1,1,1,…),{\displaystyle (1,1,1,\ldots ),}will have an infinitep{\displaystyle p}-norm for1≤p<∞.{\displaystyle 1\leq p<\infty .}The spaceℓp{\displaystyle \ell ^{p}}is then defined as the set of all infinite sequences of real (or complex) numbers such that thep{\displaystyle p}-norm is finite. One can check that asp{\displaystyle p}increases, the setℓp{\displaystyle \ell ^{p}}grows larger. For example, the sequence(1,12,…,1n,1n+1,…){\displaystyle \left(1,{\frac {1}{2}},\ldots ,{\frac {1}{n}},{\frac {1}{n+1}},\ldots \right)}is not inℓ1,{\displaystyle \ell ^{1},}but it is inℓp{\displaystyle \ell ^{p}}forp>1,{\displaystyle p>1,}as the series1p+12p+⋯+1np+1(n+1)p+⋯,{\displaystyle 1^{p}+{\frac {1}{2^{p}}}+\cdots +{\frac {1}{n^{p}}}+{\frac {1}{(n+1)^{p}}}+\cdots ,}diverges forp=1{\displaystyle p=1}(theharmonic series), but is convergent forp>1.{\displaystyle p>1.} One also defines the∞{\displaystyle \infty }-norm using thesupremum:‖x‖∞=sup(|x1|,|x2|,…,|xn|,|xn+1|,…){\displaystyle \|x\|_{\infty }=\sup(|x_{1}|,|x_{2}|,\dotsc ,|x_{n}|,|x_{n+1}|,\ldots )}and the corresponding spaceℓ∞{\displaystyle \ell ^{\infty }}of all bounded sequences. It turns out that[1]‖x‖∞=limp→∞‖x‖p{\displaystyle \|x\|_{\infty }=\lim _{p\to \infty }\|x\|_{p}}if the right-hand side is finite, or the left-hand side is infinite. 
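Two small illustrations of the points above, as a sketch: the counting "norm" (number of non-zero entries), which fails homogeneity, and the fact that the sequence (1/n) has divergent 1-norm but finite p-norm for p > 1 (here p = 2, where the series sums to π²/6):

import numpy as np

x = np.array([0.0, 3.0, 0.0, -1.0])
print(np.count_nonzero(x))        # the zero "norm": 2
print(np.count_nonzero(10 * x))   # still 2 after scaling -- not homogeneous, hence not a norm

n = np.arange(1, 10**6, dtype=float)
print(np.sum(1 / n))              # partial sums of the harmonic series keep growing (~13.8 here)
print(np.sum(1 / n**2))           # ~ 1.6449 ~ pi^2/6, so (1/n) lies in l^2 but not in l^1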
Thus, we will considerℓp{\displaystyle \ell ^{p}}spaces for1≤p≤∞.{\displaystyle 1\leq p\leq \infty .} Thep{\displaystyle p}-norm thus defined onℓp{\displaystyle \ell ^{p}}is indeed a norm, andℓp{\displaystyle \ell ^{p}}together with this norm is aBanach space. In complete analogy to the preceding definition one can define the spaceℓp(I){\displaystyle \ell ^{p}(I)}over a generalindex setI{\displaystyle I}(and1≤p<∞{\displaystyle 1\leq p<\infty }) asℓp(I)={(xi)i∈I∈KI:∑i∈I|xi|p<+∞},{\displaystyle \ell ^{p}(I)=\left\{(x_{i})_{i\in I}\in \mathbb {K} ^{I}:\sum _{i\in I}|x_{i}|^{p}<+\infty \right\},}where convergence on the right means that only countably many summands are nonzero (see alsoUnconditional convergence). With the norm‖x‖p=(∑i∈I|xi|p)1/p{\displaystyle \|x\|_{p}=\left(\sum _{i\in I}|x_{i}|^{p}\right)^{1/p}}the spaceℓp(I){\displaystyle \ell ^{p}(I)}becomes a Banach space. In the case whereI{\displaystyle I}is finite withn{\displaystyle n}elements, this construction yieldsRn{\displaystyle \mathbb {R} ^{n}}with thep{\displaystyle p}-norm defined above. IfI{\displaystyle I}is countably infinite, this is exactly the sequence spaceℓp{\displaystyle \ell ^{p}}defined above. For uncountable setsI{\displaystyle I}this is a non-separableBanach space which can be seen as thelocally convexdirect limitofℓp{\displaystyle \ell ^{p}}-sequence spaces.[2] Forp=2,{\displaystyle p=2,}the‖⋅‖2{\displaystyle \|\,\cdot \,\|_{2}}-norm is even induced by a canonicalinner product⟨⋅,⋅⟩,{\displaystyle \langle \,\cdot ,\,\cdot \rangle ,}called theEuclidean inner product, which means that‖x‖2=⟨x,x⟩{\displaystyle \|\mathbf {x} \|_{2}={\sqrt {\langle \mathbf {x} ,\mathbf {x} \rangle }}}holds for all vectorsx.{\displaystyle \mathbf {x} .}This inner product can expressed in terms of the norm by using thepolarization identity. Onℓ2,{\displaystyle \ell ^{2},}it can be defined by⟨(xi)i,(yn)i⟩ℓ2=∑ixiyi¯.{\displaystyle \langle \left(x_{i}\right)_{i},\left(y_{n}\right)_{i}\rangle _{\ell ^{2}}~=~\sum _{i}x_{i}{\overline {y_{i}}}.}Now consider the casep=∞.{\displaystyle p=\infty .}Define[note 1]ℓ∞(I)={x∈KI:suprange⁡|x|<+∞},{\displaystyle \ell ^{\infty }(I)=\{x\in \mathbb {K} ^{I}:\sup \operatorname {range} |x|<+\infty \},}where for allx{\displaystyle x}[3][note 2]‖x‖∞≡inf{C∈R≥0:|xi|≤Cfor alli∈I}={suprange⁡|x|ifX≠∅,0ifX=∅.{\displaystyle \|x\|_{\infty }\equiv \inf\{C\in \mathbb {R} _{\geq 0}:|x_{i}|\leq C{\text{ for all }}i\in I\}={\begin{cases}\sup \operatorname {range} |x|&{\text{if }}X\neq \varnothing ,\\0&{\text{if }}X=\varnothing .\end{cases}}} The index setI{\displaystyle I}can be turned into ameasure spaceby giving it thediscrete σ-algebraand thecounting measure. Then the spaceℓp(I){\displaystyle \ell ^{p}(I)}is just a special case of the more generalLp{\displaystyle L^{p}}-space (defined below). AnLp{\displaystyle L^{p}}space may be defined as a space of measurable functions for which thep{\displaystyle p}-th power of theabsolute valueisLebesgue integrable, where functions which agree almost everywhere are identified. 
More generally, let(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}be ameasure spaceand1≤p≤∞.{\displaystyle 1\leq p\leq \infty .}[note 3]Whenp≠∞{\displaystyle p\neq \infty }, consider the setLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}of allmeasurable functionsf{\displaystyle f}fromS{\displaystyle S}toC{\displaystyle \mathbb {C} }orR{\displaystyle \mathbb {R} }whoseabsolute valueraised to thep{\displaystyle p}-th power has a finite integral, or in symbols:[4]‖f‖p=def(∫S|f|pdμ)1/p<∞.{\displaystyle \|f\|_{p}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left(\int _{S}|f|^{p}\;\mathrm {d} \mu \right)^{1/p}<\infty .} To define the set forp=∞,{\displaystyle p=\infty ,}recall that two functionsf{\displaystyle f}andg{\displaystyle g}defined onS{\displaystyle S}are said to beequalalmost everywhere, writtenf=g{\displaystyle f=g}a.e., if the set{s∈S:f(s)≠g(s)}{\displaystyle \{s\in S:f(s)\neq g(s)\}}is measurable and has measure zero. Similarly, a measurable functionf{\displaystyle f}(and itsabsolute value) isbounded(ordominated)almost everywhereby a real numberC,{\displaystyle C,}written|f|≤C{\displaystyle |f|\leq C}a.e., if the (necessarily) measurable set{s∈S:|f(s)|>C}{\displaystyle \{s\in S:|f(s)|>C\}}has measure zero. The spaceL∞(S,μ){\displaystyle {\mathcal {L}}^{\infty }(S,\mu )}is the set of all measurable functionsf{\displaystyle f}that are bounded almost everywhere (by some realC{\displaystyle C}) and‖f‖∞{\displaystyle \|f\|_{\infty }}is defined as theinfimumof these bounds:‖f‖∞=definf{C∈R≥0:|f(s)|≤Cfor almost everys}.{\displaystyle \|f\|_{\infty }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\inf\{C\in \mathbb {R} _{\geq 0}:|f(s)|\leq C{\text{ for almost every }}s\}.}Whenμ(S)≠0{\displaystyle \mu (S)\neq 0}then this is the same as theessential supremumof the absolute value off{\displaystyle f}:[note 4]‖f‖∞={esssup⁡|f|ifμ(S)>0,0ifμ(S)=0.{\displaystyle \|f\|_{\infty }~=~{\begin{cases}\operatorname {esssup} |f|&{\text{if }}\mu (S)>0,\\0&{\text{if }}\mu (S)=0.\end{cases}}} For example, iff{\displaystyle f}is a measurable function that is equal to0{\displaystyle 0}almost everywhere[note 5]then‖f‖p=0{\displaystyle \|f\|_{p}=0}for everyp{\displaystyle p}and thusf∈Lp(S,μ){\displaystyle f\in {\mathcal {L}}^{p}(S,\,\mu )}for allp.{\displaystyle p.} For every positivep,{\displaystyle p,}the value under‖⋅‖p{\displaystyle \|\,\cdot \,\|_{p}}of a measurable functionf{\displaystyle f}and its absolute value|f|:S→[0,∞]{\displaystyle |f|:S\to [0,\infty ]}are always the same (that is,‖f‖p=‖|f|‖p{\displaystyle \|f\|_{p}=\||f|\|_{p}}for allp{\displaystyle p}) and so a measurable function belongs toLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}if and only if its absolute value does. Because of this, many formulas involvingp{\displaystyle p}-norms are stated only for non-negative real-valued functions. Consider for example the identity‖f‖pr=‖fr‖p/r,{\displaystyle \|f\|_{p}^{r}=\|f^{r}\|_{p/r},}which holds wheneverf≥0{\displaystyle f\geq 0}is measurable,r>0{\displaystyle r>0}is real, and0<p≤∞{\displaystyle 0<p\leq \infty }(here∞/r=def∞{\displaystyle \infty /r\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\infty }whenp=∞{\displaystyle p=\infty }). 
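As a concrete sketch of the definition, take S = [0,1] with Lebesgue measure and f(x) = sin(πx); then ‖f‖_2 = (∫_0^1 sin²(πx) dx)^{1/2} = √(1/2). The midpoint-rule approximation below is for illustration only (the helper name is ad hoc, not any library's Lp routine):

import numpy as np

def lp_norm_on_unit_interval(f, p, cells=10**5):
    # midpoint rule for (integral_0^1 |f|^p dx)^(1/p)
    x = (np.arange(cells) + 0.5) / cells
    return np.mean(np.abs(f(x)) ** p) ** (1.0 / p)

print(lp_norm_on_unit_interval(lambda x: np.sin(np.pi * x), 2))   # ~ 0.7071
print(np.sqrt(0.5))                                               # exact value sqrt(1/2)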
The non-negativity requirementf≥0{\displaystyle f\geq 0}can be removed by substituting|f|{\displaystyle |f|}in forf,{\displaystyle f,}which gives‖|f|‖pr=‖|f|r‖p/r.{\displaystyle \|\,|f|\,\|_{p}^{r}=\|\,|f|^{r}\,\|_{p/r}.}Note in particular that whenp=r{\displaystyle p=r}is finite then the formula‖f‖pp=‖|f|p‖1{\displaystyle \|f\|_{p}^{p}=\||f|^{p}\|_{1}}relates thep{\displaystyle p}-norm to the1{\displaystyle 1}-norm. Seminormed space ofp{\displaystyle p}-th power integrable functions Each set of functionsLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}forms avector spacewhen addition and scalar multiplication are defined pointwise.[note 6]That the sum of twop{\displaystyle p}-th power integrable functionsf{\displaystyle f}andg{\displaystyle g}is againp{\displaystyle p}-th power integrable follows from‖f+g‖pp≤2p−1(‖f‖pp+‖g‖pp),{\textstyle \|f+g\|_{p}^{p}\leq 2^{p-1}\left(\|f\|_{p}^{p}+\|g\|_{p}^{p}\right),}[proof 1]although it is also a consequence ofMinkowski's inequality‖f+g‖p≤‖f‖p+‖g‖p{\displaystyle \|f+g\|_{p}\leq \|f\|_{p}+\|g\|_{p}}which establishes that‖⋅‖p{\displaystyle \|\cdot \|_{p}}satisfies thetriangle inequalityfor1≤p≤∞{\displaystyle 1\leq p\leq \infty }(the triangle inequality does not hold for0<p<1{\displaystyle 0<p<1}). ThatLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}is closed under scalar multiplication is due to‖⋅‖p{\displaystyle \|\cdot \|_{p}}beingabsolutely homogeneous, which means that‖sf‖p=|s|‖f‖p{\displaystyle \|sf\|_{p}=|s|\|f\|_{p}}for every scalars{\displaystyle s}and every functionf.{\displaystyle f.} Absolute homogeneity, thetriangle inequality, and non-negativity are the defining properties of aseminorm. Thus‖⋅‖p{\displaystyle \|\cdot \|_{p}}is a seminorm and the setLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}ofp{\displaystyle p}-th power integrable functions together with the function‖⋅‖p{\displaystyle \|\cdot \|_{p}}defines aseminormed vector space. In general, theseminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}is not anormbecause there might exist measurable functionsf{\displaystyle f}that satisfy‖f‖p=0{\displaystyle \|f\|_{p}=0}but are notidenticallyequal to0{\displaystyle 0}[note 5](‖⋅‖p{\displaystyle \|\cdot \|_{p}}is a norm if and only if no suchf{\displaystyle f}exists). Zero sets ofp{\displaystyle p}-seminorms Iff{\displaystyle f}is measurable and equals0{\displaystyle 0}a.e. then‖f‖p=0{\displaystyle \|f\|_{p}=0}for all positivep≤∞.{\displaystyle p\leq \infty .}On the other hand, iff{\displaystyle f}is a measurable function for which there exists some0<p≤∞{\displaystyle 0<p\leq \infty }such that‖f‖p=0{\displaystyle \|f\|_{p}=0}thenf=0{\displaystyle f=0}almost everywhere. Whenp{\displaystyle p}is finite then this follows from thep=1{\displaystyle p=1}case and the formula‖f‖pp=‖|f|p‖1{\displaystyle \|f\|_{p}^{p}=\||f|^{p}\|_{1}}mentioned above. Thus ifp≤∞{\displaystyle p\leq \infty }is positive andf{\displaystyle f}is any measurable function, then‖f‖p=0{\displaystyle \|f\|_{p}=0}if and only iff=0{\displaystyle f=0}almost everywhere. Since the right hand side (f=0{\displaystyle f=0}a.e.) does not mentionp,{\displaystyle p,}it follows that all‖⋅‖p{\displaystyle \|\cdot \|_{p}}have the samezero set(it does not depend onp{\displaystyle p}). 
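Both inequalities invoked above, the elementary 2^{p−1} bound and Minkowski's inequality, can be checked numerically on a finite measure space; vectors with counting measure are the simplest case. A sketch:

import numpy as np

def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
p = 3.0
for _ in range(1000):
    f, g = rng.normal(size=5), rng.normal(size=5)
    # ||f+g||_p^p <= 2^(p-1) (||f||_p^p + ||g||_p^p)
    assert p_norm(f + g, p)**p <= 2**(p - 1) * (p_norm(f, p)**p + p_norm(g, p)**p) + 1e-9
    # Minkowski (triangle) inequality, valid for p >= 1
    assert p_norm(f + g, p) <= p_norm(f, p) + p_norm(g, p) + 1e-9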
So denote this common set byN=def{f:f=0μ-almost everywhere}={f∈Lp(S,μ):‖f‖p=0}∀p.{\displaystyle {\mathcal {N}}\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\{f:f=0\ \mu {\text{-almost everywhere}}\}=\{f\in {\mathcal {L}}^{p}(S,\,\mu ):\|f\|_{p}=0\}\qquad \forall \ p.}This set is a vector subspace ofLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}for every positivep≤∞.{\displaystyle p\leq \infty .} Quotient vector space Like everyseminorm, the seminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}induces anorm(defined shortly) on the canonicalquotient vector spaceofLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}by its vector subspaceN={f∈Lp(S,μ):‖f‖p=0}.{\textstyle {\mathcal {N}}=\{f\in {\mathcal {L}}^{p}(S,\,\mu ):\|f\|_{p}=0\}.}This normed quotient space is calledLebesgue spaceand it is the subject of this article. We begin by defining the quotient vector space. Given anyf∈Lp(S,μ),{\displaystyle f\in {\mathcal {L}}^{p}(S,\,\mu ),}thecosetf+N=def{f+h:h∈N}{\displaystyle f+{\mathcal {N}}\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\{f+h:h\in {\mathcal {N}}\}}consists of all measurable functionsg{\displaystyle g}that are equal tof{\displaystyle f}almost everywhere. The set of all cosets, typically denoted byLp(S,μ)/N=def{f+N:f∈Lp(S,μ)},{\displaystyle {\mathcal {L}}^{p}(S,\mu )/{\mathcal {N}}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~~\{f+{\mathcal {N}}:f\in {\mathcal {L}}^{p}(S,\mu )\},}forms a vector space with origin0+N=N{\displaystyle 0+{\mathcal {N}}={\mathcal {N}}}when vector addition and scalar multiplication are defined by(f+N)+(g+N)=def(f+g)+N{\displaystyle (f+{\mathcal {N}})+(g+{\mathcal {N}})\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;(f+g)+{\mathcal {N}}}ands(f+N)=def(sf)+N.{\displaystyle s(f+{\mathcal {N}})\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;(sf)+{\mathcal {N}}.}This particular quotient vector space will be denoted byLp(S,μ)=defLp(S,μ)/N.{\displaystyle L^{p}(S,\,\mu )~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\mathcal {L}}^{p}(S,\mu )/{\mathcal {N}}.}Two cosets are equalf+N=g+N{\displaystyle f+{\mathcal {N}}=g+{\mathcal {N}}}if and only ifg∈f+N{\displaystyle g\in f+{\mathcal {N}}}(or equivalently,f−g∈N{\displaystyle f-g\in {\mathcal {N}}}), which happens if and only iff=g{\displaystyle f=g}almost everywhere; if this is the case thenf{\displaystyle f}andg{\displaystyle g}are identified in the quotient space. Hence, strictly speakingLp(S,μ){\displaystyle L^{p}(S,\,\mu )}consists ofequivalence classesof functions.[5] Thep{\displaystyle p}-norm on the quotient vector space Given anyf∈Lp(S,μ),{\displaystyle f\in {\mathcal {L}}^{p}(S,\,\mu ),}the value of the seminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}on thecosetf+N={f+h:h∈N}{\displaystyle f+{\mathcal {N}}=\{f+h:h\in {\mathcal {N}}\}}is constant and equal to‖f‖p;{\displaystyle \|f\|_{p};}denote this unique value by‖f+N‖p,{\displaystyle \|f+{\mathcal {N}}\|_{p},}so that:‖f+N‖p=def‖f‖p.{\displaystyle \|f+{\mathcal {N}}\|_{p}\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\|f\|_{p}.}This assignmentf+N↦‖f+N‖p{\displaystyle f+{\mathcal {N}}\mapsto \|f+{\mathcal {N}}\|_{p}}defines a map, which will also be denoted by‖⋅‖p,{\displaystyle \|\cdot \|_{p},}on thequotient vector spaceLp(S,μ)=defLp(S,μ)/N={f+N:f∈Lp(S,μ)}.{\displaystyle L^{p}(S,\mu )~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~~{\mathcal {L}}^{p}(S,\mu )/{\mathcal {N}}~=~\{f+{\mathcal {N}}:f\in {\mathcal {L}}^{p}(S,\mu )\}.}This map is anormonLp(S,μ){\displaystyle L^{p}(S,\mu )}called thep{\displaystyle p}-norm. 
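A tiny discrete sketch of why the quotient by N is taken: on a three-point space where one point carries measure zero, two functions that differ only at that point have distance zero in every p-seminorm and therefore lie in the same coset f + N (the weights and values below are made up purely for illustration):

import numpy as np

mu = np.array([0.5, 0.5, 0.0])           # the third point is a null set
f  = np.array([1.0, 2.0,  7.0])
g  = np.array([1.0, 2.0, -3.0])          # differs from f only on the null set

def seminorm(h, p):
    return np.sum(mu * np.abs(h) ** p) ** (1.0 / p)

print(seminorm(f - g, 2))                # 0.0 even though f != g pointwise
print(seminorm(f, 2), seminorm(g, 2))    # equal: both represent the same coset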
The value‖f+N‖p{\displaystyle \|f+{\mathcal {N}}\|_{p}}of a cosetf+N{\displaystyle f+{\mathcal {N}}}is independent of the particular functionf{\displaystyle f}that was chosen to represent the coset, meaning that ifC∈Lp(S,μ){\displaystyle {\mathcal {C}}\in L^{p}(S,\mu )}is any coset then‖C‖p=‖f‖p{\displaystyle \|{\mathcal {C}}\|_{p}=\|f\|_{p}}for everyf∈C{\displaystyle f\in {\mathcal {C}}}(sinceC=f+N{\displaystyle {\mathcal {C}}=f+{\mathcal {N}}}for everyf∈C{\displaystyle f\in {\mathcal {C}}}). The LebesgueLp{\displaystyle L^{p}}space Thenormed vector space(Lp(S,μ),‖⋅‖p){\displaystyle \left(L^{p}(S,\mu ),\|\cdot \|_{p}\right)}is calledLp{\displaystyle L^{p}}spaceor theLebesgue spaceofp{\displaystyle p}-th power integrable functions and it is aBanach spacefor every1≤p≤∞{\displaystyle 1\leq p\leq \infty }(meaning that it is acomplete metric space, a result that is sometimes called theRiesz–Fischer theorem). When the underlying measure spaceS{\displaystyle S}is understood thenLp(S,μ){\displaystyle L^{p}(S,\mu )}is often abbreviatedLp(μ),{\displaystyle L^{p}(\mu ),}or even justLp.{\displaystyle L^{p}.}Depending on the author, the subscript notationLp{\displaystyle L_{p}}might denote eitherLp(S,μ){\displaystyle L^{p}(S,\mu )}orL1/p(S,μ).{\displaystyle L^{1/p}(S,\mu ).} If the seminorm‖⋅‖p{\displaystyle \|\cdot \|_{p}}onLp(S,μ){\displaystyle {\mathcal {L}}^{p}(S,\,\mu )}happens to be a norm (which happens if and only ifN={0}{\displaystyle {\mathcal {N}}=\{0\}}) then the normed space(Lp(S,μ),‖⋅‖p){\displaystyle \left({\mathcal {L}}^{p}(S,\,\mu ),\|\cdot \|_{p}\right)}will belinearlyisometrically isomorphicto the normed quotient space(Lp(S,μ),‖⋅‖p){\displaystyle \left(L^{p}(S,\mu ),\|\cdot \|_{p}\right)}via the canonical mapg∈Lp(S,μ)↦{g}{\displaystyle g\in {\mathcal {L}}^{p}(S,\,\mu )\mapsto \{g\}}(sinceg+N={g}{\displaystyle g+{\mathcal {N}}=\{g\}}); in other words, they will be,up toalinear isometry, the same normed space and so they may both be called "Lp{\displaystyle L^{p}}space". The above definitions generalize toBochner spaces. In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset ofN{\displaystyle {\mathcal {N}}}inLp.{\displaystyle L^{p}.}ForL∞,{\displaystyle L^{\infty },}however, there is atheory of liftsenabling such recovery. For1≤p≤∞{\displaystyle 1\leq p\leq \infty }theℓp{\displaystyle \ell ^{p}}spaces are a special case ofLp{\displaystyle L^{p}}spaces; whenS{\displaystyle S}are thenatural numbersN{\displaystyle \mathbb {N} }andμ{\displaystyle \mu }is thecounting measure. More generally, if one considers any setS{\displaystyle S}with the counting measure, the resultingLp{\displaystyle L^{p}}space is denotedℓp(S).{\displaystyle \ell ^{p}(S).}For example,ℓp(Z){\displaystyle \ell ^{p}(\mathbb {Z} )}is the space of all sequences indexed by the integers, and when defining thep{\displaystyle p}-norm on such a space, one sums over all the integers. The spaceℓp(n),{\displaystyle \ell ^{p}(n),}wheren{\displaystyle n}is the set withn{\displaystyle n}elements, isRn{\displaystyle \mathbb {R} ^{n}}with itsp{\displaystyle p}-norm as defined above. Similar toℓ2{\displaystyle \ell ^{2}}spaces,L2{\displaystyle L^{2}}is the onlyHilbert spaceamongLp{\displaystyle L^{p}}spaces. 
In the complex case, the inner product onL2{\displaystyle L^{2}}is defined by⟨f,g⟩=∫Sf(x)g(x)¯dμ(x).{\displaystyle \langle f,g\rangle =\int _{S}f(x){\overline {g(x)}}\,\mathrm {d} \mu (x).}Functions inL2{\displaystyle L^{2}}are sometimes calledsquare-integrable functions,quadratically integrable functionsorsquare-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of aRiemann integral(Titchmarsh 1976). As any Hilbert space, every spaceL2{\displaystyle L^{2}}is linearly isometric to a suitableℓ2(I),{\displaystyle \ell ^{2}(I),}where the cardinality of the setI{\displaystyle I}is the cardinality of an arbitrary basis for this particularL2.{\displaystyle L^{2}.} If we use complex-valued functions, the spaceL∞{\displaystyle L^{\infty }}is acommutativeC*-algebrawith pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutativevon Neumann algebra. An element ofL∞{\displaystyle L^{\infty }}defines abounded operatoron anyLp{\displaystyle L^{p}}space bymultiplication. If0<p<1,{\displaystyle 0<p<1,}thenLp(μ){\displaystyle L^{p}(\mu )}can be defined as above, that is:Np(f)=∫S|f|pdμ<∞.{\displaystyle N_{p}(f)=\int _{S}|f|^{p}\,d\mu <\infty .}In this case, however, thep{\displaystyle p}-norm‖f‖p=Np(f)1/p{\displaystyle \|f\|_{p}=N_{p}(f)^{1/p}}does not satisfy the triangle inequality and defines only aquasi-norm. The inequality(a+b)p≤ap+bp,{\displaystyle (a+b)^{p}\leq a^{p}+b^{p},}valid fora,b≥0,{\displaystyle a,b\geq 0,}implies thatNp(f+g)≤Np(f)+Np(g){\displaystyle N_{p}(f+g)\leq N_{p}(f)+N_{p}(g)}and so the functiondp(f,g)=Np(f−g)=‖f−g‖pp{\displaystyle d_{p}(f,g)=N_{p}(f-g)=\|f-g\|_{p}^{p}}is a metric onLp(μ).{\displaystyle L^{p}(\mu ).}The resulting metric space iscomplete.[6] In this settingLp{\displaystyle L^{p}}satisfies areverse Minkowski inequality, that is foru,v∈Lp{\displaystyle u,v\in L^{p}}‖|u|+|v|‖p≥‖u‖p+‖v‖p{\displaystyle {\Big \|}|u|+|v|{\Big \|}_{p}\geq \|u\|_{p}+\|v\|_{p}} This result may be used to proveClarkson's inequalities, which are in turn used to establish theuniform convexityof the spacesLp{\displaystyle L^{p}}for1<p<∞{\displaystyle 1<p<\infty }(Adams & Fournier 2003). The spaceLp{\displaystyle L^{p}}for0<p<1{\displaystyle 0<p<1}is anF-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of anF-spacethat, for most reasonable measure spaces, is notlocally convex: inℓp{\displaystyle \ell ^{p}}orLp([0,1]),{\displaystyle L^{p}([0,1]),}every open convex set containing the0{\displaystyle 0}function is unbounded for thep{\displaystyle p}-quasi-norm; therefore, the0{\displaystyle 0}vector does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure spaceS{\displaystyle S}contains an infinite family of disjoint measurable sets of finite positive measure. The only nonempty convex open set inLp([0,1]){\displaystyle L^{p}([0,1])}is the entire space. Consequently, there are no nonzero continuous linear functionals onLp([0,1]);{\displaystyle L^{p}([0,1]);}thecontinuous dual spaceis the zero space. 
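A concrete two-dimensional sketch of the failure of the triangle inequality for 0 < p < 1 (here p = 1/2), together with the fact that dropping the outer 1/p power restores subadditivity and yields the metric d_p described above:

p = 0.5
f, g = (1.0, 0.0), (0.0, 1.0)

def quasi_norm(x):          # (|x1|^p + |x2|^p)^(1/p)
    return (abs(x[0])**p + abs(x[1])**p) ** (1 / p)

def N_p(x):                 # |x1|^p + |x2|^p, no outer power
    return abs(x[0])**p + abs(x[1])**p

print(quasi_norm(f), quasi_norm(g))                    # 1.0 and 1.0
print(quasi_norm((f[0] + g[0], f[1] + g[1])))          # 4.0 > 2.0: triangle inequality fails
print(N_p((f[0] + g[0], f[1] + g[1])), N_p(f) + N_p(g))  # 2.0 <= 2.0: N_p is subadditive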
In the case of thecounting measureon the natural numbers (i.e.Lp(μ)=ℓp{\displaystyle L^{p}(\mu )=\ell ^{p}}), the bounded linear functionals onℓp{\displaystyle \ell ^{p}}are exactly those that are bounded onℓ1{\displaystyle \ell ^{1}}, i.e., those given by sequences inℓ∞.{\displaystyle \ell ^{\infty }.}Althoughℓp{\displaystyle \ell ^{p}}does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology. Having no linear functionals is highly undesirable for the purposes of doing analysis. In case of the Lebesgue measure onRn,{\displaystyle \mathbb {R} ^{n},}rather than work withLp{\displaystyle L^{p}}for0<p<1,{\displaystyle 0<p<1,}it is common to work with theHardy spaceHpwhenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, theHahn–Banach theoremstill fails inHpforp<1{\displaystyle p<1}(Duren 1970, §7.5). Supposep,q,r∈[1,∞]{\displaystyle p,q,r\in [1,\infty ]}satisfy1p+1q=1r{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}={\tfrac {1}{r}}}. Iff∈Lp(S,μ){\displaystyle f\in L^{p}(S,\mu )}andg∈Lq(S,μ){\displaystyle g\in L^{q}(S,\mu )}thenfg∈Lr(S,μ){\displaystyle fg\in L^{r}(S,\mu )}and[7]‖fg‖r≤‖f‖p‖g‖q.{\displaystyle \|fg\|_{r}~\leq ~\|f\|_{p}\,\|g\|_{q}.} This inequality, calledHölder's inequality, is in some sense optimal since ifr=1{\displaystyle r=1}andf{\displaystyle f}is a measurable function such thatsup‖g‖q≤1∫S|fg|dμ<∞{\displaystyle \sup _{\|g\|_{q}\leq 1}\,\int _{S}|fg|\,\mathrm {d} \mu ~<~\infty }where thesupremumis taken over the closed unit ball ofLq(S,μ),{\displaystyle L^{q}(S,\mu ),}thenf∈Lp(S,μ){\displaystyle f\in L^{p}(S,\mu )}and‖f‖p=sup‖g‖q≤1∫Sfgdμ.{\displaystyle \|f\|_{p}~=~\sup _{\|g\|_{q}\leq 1}\,\int _{S}fg\,\mathrm {d} \mu .} Minkowski inequality, which states that‖⋅‖p{\displaystyle \|\cdot \|_{p}}satisfies thetriangle inequality, can be generalized: If the measurable functionF:M×N→R{\displaystyle F:M\times N\to \mathbb {R} }is non-negative (where(M,μ){\displaystyle (M,\mu )}and(N,ν){\displaystyle (N,\nu )}are measure spaces) then for all1≤p≤q≤∞,{\displaystyle 1\leq p\leq q\leq \infty ,}[8]‖‖F(⋅,n)‖Lp(M,μ)‖Lq(N,ν)≤‖‖F(m,⋅)‖Lq(N,ν)‖Lp(M,μ).{\displaystyle \left\|\left\|F(\,\cdot ,n)\right\|_{L^{p}(M,\mu )}\right\|_{L^{q}(N,\nu )}~\leq ~\left\|\left\|F(m,\cdot )\right\|_{L^{q}(N,\nu )}\right\|_{L^{p}(M,\mu )}\ .} If1≤p<∞{\displaystyle 1\leq p<\infty }then every non-negativef∈Lp(μ){\displaystyle f\in L^{p}(\mu )}has anatomic decomposition,[9]meaning that there exist a sequence(rn)n∈Z{\displaystyle (r_{n})_{n\in \mathbb {Z} }}of non-negative real numbers and a sequence of non-negative functions(fn)n∈Z,{\displaystyle (f_{n})_{n\in \mathbb {Z} },}calledthe atoms, whose supports(supp⁡fn)n∈Z{\displaystyle \left(\operatorname {supp} f_{n}\right)_{n\in \mathbb {Z} }}arepairwise disjoint setsof measureμ(supp⁡fn)≤2n+1,{\displaystyle \mu \left(\operatorname {supp} f_{n}\right)\leq 2^{n+1},}such thatf=∑n∈Zrnfn,{\displaystyle f~=~\sum _{n\in \mathbb {Z} }r_{n}\,f_{n}\,,}and for every integern∈Z,{\displaystyle n\in \mathbb {Z} ,}‖fn‖∞≤2−np,{\displaystyle \|f_{n}\|_{\infty }~\leq ~2^{-{\tfrac {n}{p}}}\,,}and12‖f‖pp≤∑n∈Zrnp≤2‖f‖pp,{\displaystyle {\tfrac {1}{2}}\|f\|_{p}^{p}~\leq ~\sum _{n\in \mathbb {Z} }r_{n}^{p}~\leq ~2\|f\|_{p}^{p}\,,}and where moreover, the sequence of functions(rnfn)n∈Z{\displaystyle (r_{n}f_{n})_{n\in \mathbb {Z} }}depends only onf{\displaystyle f}(it is independent ofp{\displaystyle p}).[9]These inequalities guarantee that‖fn‖pp≤2{\displaystyle \|f_{n}\|_{p}^{p}\leq 
2}for all integersn{\displaystyle n}while the supports of(fn)n∈Z{\displaystyle (f_{n})_{n\in \mathbb {Z} }}being pairwise disjoint implies[9]‖f‖pp=∑n∈Zrnp‖fn‖pp.{\displaystyle \|f\|_{p}^{p}~=~\sum _{n\in \mathbb {Z} }r_{n}^{p}\,\|f_{n}\|_{p}^{p}\,.} An atomic decomposition can be explicitly given by first defining for every integern∈Z,{\displaystyle n\in \mathbb {Z} ,}[9][note 7]tn=inf{t∈R:μ(f>t)<2n}{\displaystyle t_{n}=\inf\{t\in \mathbb {R} :\mu (f>t)<2^{n}\}}and then lettingrn=2n/ptnandfn=frn1(tn+1<f≤tn){\displaystyle r_{n}~=~2^{n/p}\,t_{n}~{\text{ and }}\quad f_{n}~=~{\frac {f}{r_{n}}}\,\mathbf {1} _{(t_{n+1}<f\leq t_{n})}}whereμ(f>t)=μ({s:f(s)>t}){\displaystyle \mu (f>t)=\mu (\{s:f(s)>t\})}denotes the measure of the set(f>t):={s∈S:f(s)>t}{\displaystyle (f>t):=\{s\in S:f(s)>t\}}and1(tn+1<f≤tn){\displaystyle \mathbf {1} _{(t_{n+1}<f\leq t_{n})}}denotes theindicator functionof the set(tn+1<f≤tn):={s∈S:tn+1<f(s)≤tn}.{\displaystyle (t_{n+1}<f\leq t_{n}):=\{s\in S:t_{n+1}<f(s)\leq t_{n}\}.}The sequence(tn)n∈Z{\displaystyle (t_{n})_{n\in \mathbb {Z} }}is decreasing and converges to0{\displaystyle 0}asn→∞.{\displaystyle n\to \infty .}[9]Consequently, iftn=0{\displaystyle t_{n}=0}thentn+1=0{\displaystyle t_{n+1}=0}and(tn+1<f≤tn)=∅{\displaystyle (t_{n+1}<f\leq t_{n})=\varnothing }so thatfn=1rnf1(tn+1<f≤tn){\displaystyle f_{n}={\frac {1}{r_{n}}}\,f\,\mathbf {1} _{(t_{n+1}<f\leq t_{n})}}is identically equal to0{\displaystyle 0}(in particular, the division1rn{\displaystyle {\tfrac {1}{r_{n}}}}byrn=0{\displaystyle r_{n}=0}causes no issues). Thecomplementary cumulative distribution functiont∈R↦μ(|f|>t){\displaystyle t\in \mathbb {R} \mapsto \mu (|f|>t)}of|f|=f{\displaystyle |f|=f}that was used to define thetn{\displaystyle t_{n}}also appears in the definition of the weakLp{\displaystyle L^{p}}-norm (given below) and can be used to express thep{\displaystyle p}-norm‖⋅‖p{\displaystyle \|\cdot \|_{p}}(for1≤p<∞{\displaystyle 1\leq p<\infty }) off∈Lp(S,μ){\displaystyle f\in L^{p}(S,\mu )}as the integral[9]‖f‖pp=p∫0∞tp−1μ(|f|>t)dt,{\displaystyle \|f\|_{p}^{p}~=~p\,\int _{0}^{\infty }t^{p-1}\mu (|f|>t)\,\mathrm {d} t\,,}where the integration is with respect to the usual Lebesgue measure on(0,∞).{\displaystyle (0,\infty ).} Thedual spaceofLp(μ){\displaystyle L^{p}(\mu )}for1<p<∞{\displaystyle 1<p<\infty }has a natural isomorphism withLq(μ),{\displaystyle L^{q}(\mu ),}whereq{\displaystyle q}is such that1p+1q=1{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1}. This isomorphism associatesg∈Lq(μ){\displaystyle g\in L^{q}(\mu )}with the functionalκp(g)∈Lp(μ)∗{\displaystyle \kappa _{p}(g)\in L^{p}(\mu )^{*}}defined byf↦κp(g)(f)=∫fgdμ{\displaystyle f\mapsto \kappa _{p}(g)(f)=\int fg\,\mathrm {d} \mu }for everyf∈Lp(μ).{\displaystyle f\in L^{p}(\mu ).} κp:Lq(μ)→Lp(μ)∗{\displaystyle \kappa _{p}:L^{q}(\mu )\to L^{p}(\mu )^{*}}is a well defined continuous linear mapping which is anisometryby theextremal caseof Hölder's inequality. If(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}is aσ{\displaystyle \sigma }-finite measure spaceone can use theRadon–Nikodym theoremto show that anyG∈Lp(μ)∗{\displaystyle G\in L^{p}(\mu )^{*}}can be expressed this way, i.e.,κp{\displaystyle \kappa _{p}}is anisometric isomorphismofBanach spaces.[10]Hence, it is usual to say simply thatLq(μ){\displaystyle L^{q}(\mu )}is thecontinuous dual spaceofLp(μ).{\displaystyle L^{p}(\mu ).} For1<p<∞,{\displaystyle 1<p<\infty ,}the spaceLp(μ){\displaystyle L^{p}(\mu )}isreflexive. 
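The distribution-function identity ‖f‖_p^p = p ∫_0^∞ t^{p−1} μ(|f| > t) dt can be sanity-checked on a simple example: for f(x) = x on [0,1] with Lebesgue measure, μ(|f| > t) = 1 − t for 0 ≤ t < 1, and both sides equal 1/(p+1). A rough numerical sketch:

import numpy as np

p = 3.0
x = (np.arange(2000) + 0.5) / 2000              # midpoints of [0,1]
f = x

lhs = np.mean(f ** p)                           # ||f||_p^p = 1/(p+1) = 0.25
t = np.linspace(0, 1, 2000, endpoint=False)
mu_above = np.array([np.mean(f > s) for s in t])   # mu(|f| > t), roughly 1 - t
rhs = p * np.mean(t ** (p - 1) * mu_above)         # p * integral_0^1 t^(p-1) mu(|f|>t) dt
print(lhs, rhs)                                 # both approximately 0.25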
Letκp{\displaystyle \kappa _{p}}be as above and letκq:Lp(μ)→Lq(μ)∗{\displaystyle \kappa _{q}:L^{p}(\mu )\to L^{q}(\mu )^{*}}be the corresponding linear isometry. Consider the map fromLp(μ){\displaystyle L^{p}(\mu )}toLp(μ)∗∗,{\displaystyle L^{p}(\mu )^{**},}obtained by composingκq{\displaystyle \kappa _{q}}with thetranspose(or adjoint) of the inverse ofκp:{\displaystyle \kappa _{p}:} jp:Lp(μ)⟶κqLq(μ)∗⟶(κp−1)∗Lp(μ)∗∗{\displaystyle j_{p}:L^{p}(\mu )\mathrel {\overset {\kappa _{q}}{\longrightarrow }} L^{q}(\mu )^{*}\mathrel {\overset {\left(\kappa _{p}^{-1}\right)^{*}}{\longrightarrow }} L^{p}(\mu )^{**}} This map coincides with thecanonical embeddingJ{\displaystyle J}ofLp(μ){\displaystyle L^{p}(\mu )}into its bidual. Moreover, the mapjp{\displaystyle j_{p}}is onto, as composition of two onto isometries, and this proves reflexivity. If the measureμ{\displaystyle \mu }onS{\displaystyle S}issigma-finite, then the dual ofL1(μ){\displaystyle L^{1}(\mu )}is isometrically isomorphic toL∞(μ){\displaystyle L^{\infty }(\mu )}(more precisely, the mapκ1{\displaystyle \kappa _{1}}corresponding top=1{\displaystyle p=1}is an isometry fromL∞(μ){\displaystyle L^{\infty }(\mu )}ontoL1(μ)∗.{\displaystyle L^{1}(\mu )^{*}.} The dual ofL∞(μ){\displaystyle L^{\infty }(\mu )}is subtler. Elements ofL∞(μ)∗{\displaystyle L^{\infty }(\mu )^{*}}can be identified with bounded signedfinitelyadditive measures onS{\displaystyle S}that areabsolutely continuouswith respect toμ.{\displaystyle \mu .}Seeba spacefor more details. If we assume the axiom of choice, this space is much bigger thanL1(μ){\displaystyle L^{1}(\mu )}except in some trivial cases. However,Saharon Shelahproved that there are relatively consistent extensions ofZermelo–Fraenkel set theory(ZF +DC+ "Every subset of the real numbers has theBaire property") in which the dual ofℓ∞{\displaystyle \ell ^{\infty }}isℓ1.{\displaystyle \ell ^{1}.}[11] Colloquially, if1≤p<q≤∞,{\displaystyle 1\leq p<q\leq \infty ,}thenLp(S,μ){\displaystyle L^{p}(S,\mu )}contains functions that are more locally singular, while elements ofLq(S,μ){\displaystyle L^{q}(S,\mu )}can be more spread out. Consider theLebesgue measureon the half line(0,∞).{\displaystyle (0,\infty ).}A continuous function inL1{\displaystyle L^{1}}might blow up near0{\displaystyle 0}but must decay sufficiently fast toward infinity. On the other hand, continuous functions inL∞{\displaystyle L^{\infty }}need not decay at all but no blow-up is allowed. More formally:[12] Neither condition holds for the Lebesgue measure on the real line while both conditions holds for thecounting measureon any finite set. As a consequence of theclosed graph theorem, the embedding is continuous, i.e., theidentity operatoris a bounded linear map fromLq{\displaystyle L^{q}}toLp{\displaystyle L^{p}}in the first case andLp{\displaystyle L^{p}}toLq{\displaystyle L^{q}}in the second. 
Indeed, if the domainS{\displaystyle S}has finite measure, one can make the following explicit calculation usingHölder's inequality‖1fp‖1≤‖1‖q/(q−p)‖fp‖q/p{\displaystyle \ \|\mathbf {1} f^{p}\|_{1}\leq \|\mathbf {1} \|_{q/(q-p)}\|f^{p}\|_{q/p}}leading to‖f‖p≤μ(S)1/p−1/q‖f‖q.{\displaystyle \ \|f\|_{p}\leq \mu (S)^{1/p-1/q}\|f\|_{q}.} The constant appearing in the above inequality is optimal, in the sense that theoperator normof the identityI:Lq(S,μ)→Lp(S,μ){\displaystyle I:L^{q}(S,\mu )\to L^{p}(S,\mu )}is precisely‖I‖q,p=μ(S)1/p−1/q{\displaystyle \|I\|_{q,p}=\mu (S)^{1/p-1/q}}the case of equality being achieved exactly whenf=1{\displaystyle f=1}μ{\displaystyle \mu }-almost-everywhere. Let1≤p<∞{\displaystyle 1\leq p<\infty }and(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}be a measure space and consider an integrablesimple functionf{\displaystyle f}onS{\displaystyle S}given byf=∑j=1naj1Aj,{\displaystyle f=\sum _{j=1}^{n}a_{j}\mathbf {1} _{A_{j}},}whereaj{\displaystyle a_{j}}are scalars,Aj∈Σ{\displaystyle A_{j}\in \Sigma }has finite measure and1Aj{\displaystyle {\mathbf {1} }_{A_{j}}}is theindicator functionof the setAj,{\displaystyle A_{j},}forj=1,…,n.{\displaystyle j=1,\dots ,n.}By construction of theintegral, the vector space of integrable simple functions isdenseinLp(S,Σ,μ).{\displaystyle L^{p}(S,\Sigma ,\mu ).} More can be said whenS{\displaystyle S}is anormaltopological spaceandΣ{\displaystyle \Sigma }itsBorel 𝜎–algebra. SupposeV⊆S{\displaystyle V\subseteq S}is an open set withμ(V)<∞.{\displaystyle \mu (V)<\infty .}Then for every Borel setA∈Σ{\displaystyle A\in \Sigma }contained inV{\displaystyle V}there exist a closed setF{\displaystyle F}and an open setU{\displaystyle U}such thatF⊆A⊆U⊆Vandμ(U∖F)=μ(U)−μ(F)<ε,{\displaystyle F\subseteq A\subseteq U\subseteq V\quad {\text{and}}\quad \mu (U\setminus F)=\mu (U)-\mu (F)<\varepsilon ,}for everyε>0{\displaystyle \varepsilon >0}. Subsequently, there exists aUrysohn function0≤φ≤1{\displaystyle 0\leq \varphi \leq 1}onS{\displaystyle S}that is1{\displaystyle 1}onF{\displaystyle F}and0{\displaystyle 0}onS∖U,{\displaystyle S\setminus U,}with∫S|1A−φ|dμ<ε.{\displaystyle \int _{S}|\mathbf {1} _{A}-\varphi |\,\mathrm {d} \mu <\varepsilon \,.} IfS{\displaystyle S}can be covered by an increasing sequence(Vn){\displaystyle (V_{n})}of open sets that have finite measure, then the space ofp{\displaystyle p}–integrable continuous functions is dense inLp(S,Σ,μ).{\displaystyle L^{p}(S,\Sigma ,\mu ).}More precisely, one can use bounded continuous functions that vanish outside one of the open setsVn.{\displaystyle V_{n}.} This applies in particular whenS=Rd{\displaystyle S=\mathbb {R} ^{d}}and whenμ{\displaystyle \mu }is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrablestep functionsare dense inLp(Rd){\displaystyle L^{p}(\mathbb {R} ^{d})}. If0<p<∞{\displaystyle 0<p<\infty }is any positive real number,μ{\displaystyle \mu }is aprobability measureon a measurable space(S,Σ){\displaystyle (S,\Sigma )}(so thatL∞(μ)⊆Lp(μ){\displaystyle L^{\infty }(\mu )\subseteq L^{p}(\mu )}), andV⊆L∞(μ){\displaystyle V\subseteq L^{\infty }(\mu )}is a vector subspace, thenV{\displaystyle V}is a closed subspace ofLp(μ){\displaystyle L^{p}(\mu )}if and only ifV{\displaystyle V}is finite-dimensional[13](V{\displaystyle V}was chosen independent ofp{\displaystyle p}). 
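The embedding inequality ‖f‖_p ≤ μ(S)^{1/p−1/q} ‖f‖_q obtained from Hölder's inequality can be checked on a finite measure space; below S is a set of n points with counting measure, so μ(S) = n, and equality is attained at the constant function, matching the statement above. A sketch:

import numpy as np

def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
n, p, q = 8, 2.0, 5.0            # p < q, counting measure on n points
for _ in range(1000):
    f = rng.normal(size=n)
    assert p_norm(f, p) <= n ** (1/p - 1/q) * p_norm(f, q) + 1e-12

f = np.ones(n)                   # f = 1 mu-almost everywhere: the extremal case
print(p_norm(f, p), n ** (1/p - 1/q) * p_norm(f, q))   # equal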
In this theorem, which is due toAlexander Grothendieck,[13]it is crucial that the vector spaceV{\displaystyle V}be a subset ofL∞{\displaystyle L^{\infty }}since it is possible to construct an infinite-dimensional closed vector subspace ofL1(S1,12πλ){\displaystyle L^{1}\left(S^{1},{\tfrac {1}{2\pi }}\lambda \right)}(which is even a subset ofL4{\displaystyle L^{4}}), whereλ{\displaystyle \lambda }isLebesgue measureon theunit circleS1{\displaystyle S^{1}}and12πλ{\displaystyle {\tfrac {1}{2\pi }}\lambda }is the probability measure that results from dividing it by its massλ(S1)=2π.{\displaystyle \lambda (S^{1})=2\pi .}[13] In statistics, measures ofcentral tendencyandstatistical dispersion, such as themean,median, andstandard deviation, can be defined in terms ofLp{\displaystyle L^{p}}metrics, and measures of central tendency can be characterized assolutions to variational problems. Inpenalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either theL1{\displaystyle L^{1}}normof a solution's vector of parameter values (i.e. the sum of its absolute values), or its squaredL2{\displaystyle L^{2}}norm (itsEuclidean length). Techniques which use an L1 penalty, likeLASSO, encourage sparse solutions (where the many parameters are zero).[14]Elastic net regularizationuses a penalty term that is a combination of theL1{\displaystyle L^{1}}norm and the squaredL2{\displaystyle L^{2}}norm of the parameter vector. TheFourier transformfor the real line (or, forperiodic functions, seeFourier series), mapsLp(R){\displaystyle L^{p}(\mathbb {R} )}toLq(R){\displaystyle L^{q}(\mathbb {R} )}(orLp(T){\displaystyle L^{p}(\mathbf {T} )}toℓq{\displaystyle \ell ^{q}}) respectively, where1≤p≤2{\displaystyle 1\leq p\leq 2}and1p+1q=1.{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1.}This is a consequence of theRiesz–Thorin interpolation theorem, and is made precise with theHausdorff–Young inequality. By contrast, ifp>2,{\displaystyle p>2,}the Fourier transform does not map intoLq.{\displaystyle L^{q}.} Hilbert spacesare central to many applications, fromquantum mechanicstostochastic calculus. The spacesL2{\displaystyle L^{2}}andℓ2{\displaystyle \ell ^{2}}are both Hilbert spaces. 
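A schematic illustration (not any particular library's implementation) of why an L1 penalty encourages sparse solutions while a squared L2 penalty only shrinks them: the proximal step of λ‖·‖_1 is soft thresholding, which sets small coordinates exactly to zero, whereas the proximal step of (λ/2)‖·‖_2² rescales every coordinate uniformly:

import numpy as np

def prox_l1(v, lam):
    # soft thresholding: argmin_x  lam*||x||_1 + 0.5*||x - v||_2^2
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l2_squared(v, lam):
    # uniform shrinkage: argmin_x  (lam/2)*||x||_2^2 + 0.5*||x - v||_2^2
    return v / (1.0 + lam)

v = np.array([3.0, 0.4, -0.2, 1.5])
print(prox_l1(v, 0.5))           # [2.5, 0., -0., 1.]: exact zeros appear (sparsity)
print(prox_l2_squared(v, 0.5))   # [2., 0.267, -0.133, 1.]: every entry merely shrinks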
In fact, by choosing a Hilbert basisE,{\displaystyle E,}i.e., a maximal orthonormal subset ofL2{\displaystyle L^{2}}or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic toℓ2(E){\displaystyle \ell ^{2}(E)}(sameE{\displaystyle E}as above), i.e., a Hilbert space of typeℓ2.{\displaystyle \ell ^{2}.} Let(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}be a measure space, andf{\displaystyle f}ameasurable functionwith real or complex values onS.{\displaystyle S.}Thedistribution functionoff{\displaystyle f}is defined fort≥0{\displaystyle t\geq 0}byλf(t)=μ{x∈S:|f(x)|>t}.{\displaystyle \lambda _{f}(t)=\mu \{x\in S:|f(x)|>t\}.} Iff{\displaystyle f}is inLp(S,μ){\displaystyle L^{p}(S,\mu )}for somep{\displaystyle p}with1≤p<∞,{\displaystyle 1\leq p<\infty ,}then byMarkov's inequality,λf(t)≤‖f‖pptp{\displaystyle \lambda _{f}(t)\leq {\frac {\|f\|_{p}^{p}}{t^{p}}}} A functionf{\displaystyle f}is said to be in the spaceweakLp(S,μ){\displaystyle L^{p}(S,\mu )}, orLp,w(S,μ),{\displaystyle L^{p,w}(S,\mu ),}if there is a constantC>0{\displaystyle C>0}such that, for allt>0,{\displaystyle t>0,}λf(t)≤Cptp{\displaystyle \lambda _{f}(t)\leq {\frac {C^{p}}{t^{p}}}} The best constantC{\displaystyle C}for this inequality is theLp,w{\displaystyle L^{p,w}}-norm off,{\displaystyle f,}and is denoted by‖f‖p,w=supt>0tλf1/p(t).{\displaystyle \|f\|_{p,w}=\sup _{t>0}~t\lambda _{f}^{1/p}(t).} The weakLp{\displaystyle L^{p}}coincide with theLorentz spacesLp,∞,{\displaystyle L^{p,\infty },}so this notation is also used to denote them. TheLp,w{\displaystyle L^{p,w}}-norm is not a true norm, since thetriangle inequalityfails to hold. Nevertheless, forf{\displaystyle f}inLp(S,μ),{\displaystyle L^{p}(S,\mu ),}‖f‖p,w≤‖f‖p{\displaystyle \|f\|_{p,w}\leq \|f\|_{p}}and in particularLp(S,μ)⊂Lp,w(S,μ).{\displaystyle L^{p}(S,\mu )\subset L^{p,w}(S,\mu ).} In fact, one has‖f‖Lpp=∫|f(x)|pdμ(x)≥∫{|f(x)|>t}tp+∫{|f(x)|≤t}|f|p≥tpμ({|f|>t}),{\displaystyle \|f\|_{L^{p}}^{p}=\int |f(x)|^{p}d\mu (x)\geq \int _{\{|f(x)|>t\}}t^{p}+\int _{\{|f(x)|\leq t\}}|f|^{p}\geq t^{p}\mu (\{|f|>t\}),}and raising to power1/p{\displaystyle 1/p}and taking the supremum int{\displaystyle t}one has‖f‖Lp≥supt>0tμ({|f|>t})1/p=‖f‖Lp,w.{\displaystyle \|f\|_{L^{p}}\geq \sup _{t>0}t\;\mu (\{|f|>t\})^{1/p}=\|f\|_{L^{p,w}}.} Under the convention that two functions are equal if they are equalμ{\displaystyle \mu }almost everywhere, then the spacesLp,w{\displaystyle L^{p,w}}are complete (Grafakos 2004). For any0<r<p{\displaystyle 0<r<p}the expression‖|f|‖Lp,∞=sup0<μ(E)<∞μ(E)−1/r+1/p(∫E|f|rdμ)1/r{\displaystyle \||f|\|_{L^{p,\infty }}=\sup _{0<\mu (E)<\infty }\mu (E)^{-1/r+1/p}\left(\int _{E}|f|^{r}\,d\mu \right)^{1/r}}is comparable to theLp,w{\displaystyle L^{p,w}}-norm. Further in the casep>1,{\displaystyle p>1,}this expression defines a norm ifr=1.{\displaystyle r=1.}Hence forp>1{\displaystyle p>1}the weakLp{\displaystyle L^{p}}spaces areBanach spaces(Grafakos 2004). A major result that uses theLp,w{\displaystyle L^{p,w}}-spaces is theMarcinkiewicz interpolation theorem, which has broad applications toharmonic analysisand the study ofsingular integrals. As before, consider ameasure space(S,Σ,μ).{\displaystyle (S,\Sigma ,\mu ).}Letw:S→[a,∞),a>0{\displaystyle w:S\to [a,\infty ),a>0}be a measurable function. 
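A sketch of a function in weak L² that is not in L², illustrating that the inclusion above is strict: for f(x) = x^{−1/2} on (0,1) with Lebesgue measure, λ_f(t) = min(1, t^{−2}), so ‖f‖_{2,w} = sup_t t λ_f(t)^{1/2} = 1, while ∫_0^1 f² dx = ∫_0^1 dx/x diverges. Numerically:

import numpy as np

x = (np.arange(10**5) + 0.5) / 10**5          # midpoints of (0,1)
f = x ** -0.5

t = np.logspace(-1, 2, 200)
lam = np.array([np.mean(f > s) for s in t])   # distribution function lambda_f(t)
print(np.max(t * np.sqrt(lam)))               # ~ 1: the weak L^2 "norm" is finite
print(np.mean(f ** 2))                        # grows without bound as the grid is refined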
Thew{\displaystyle w}-weightedLp{\displaystyle L^{p}}spaceis defined asLp(S,wdμ),{\displaystyle L^{p}(S,w\,\mathrm {d} \mu ),}wherewdμ{\displaystyle w\,\mathrm {d} \mu }means the measureν{\displaystyle \nu }defined byν(A)≡∫Aw(x)dμ(x),A∈Σ,{\displaystyle \nu (A)\equiv \int _{A}w(x)\,\mathrm {d} \mu (x),\qquad A\in \Sigma ,} or, in terms of theRadon–Nikodym derivative,w=dνdμ{\displaystyle w={\tfrac {\mathrm {d} \nu }{\mathrm {d} \mu }}}thenormforLp(S,wdμ){\displaystyle L^{p}(S,w\,\mathrm {d} \mu )}is explicitly‖u‖Lp(S,wdμ)≡(∫Sw(x)|u(x)|pdμ(x))1/p{\displaystyle \|u\|_{L^{p}(S,w\,\mathrm {d} \mu )}\equiv \left(\int _{S}w(x)|u(x)|^{p}\,\mathrm {d} \mu (x)\right)^{1/p}} AsLp{\displaystyle L^{p}}-spaces, the weighted spaces have nothing special, sinceLp(S,wdμ){\displaystyle L^{p}(S,w\,\mathrm {d} \mu )}is equal toLp(S,dν).{\displaystyle L^{p}(S,\mathrm {d} \nu ).}But they are the natural framework for several results in harmonic analysis (Grafakos 2004); they appear for example in theMuckenhoupt theorem: for1<p<∞,{\displaystyle 1<p<\infty ,}the classicalHilbert transformis defined onLp(T,λ){\displaystyle L^{p}(\mathbf {T} ,\lambda )}whereT{\displaystyle \mathbf {T} }denotes theunit circleandλ{\displaystyle \lambda }the Lebesgue measure; the (nonlinear)Hardy–Littlewood maximal operatoris bounded onLp(Rn,λ).{\displaystyle L^{p}(\mathbb {R} ^{n},\lambda ).}Muckenhoupt's theorem describes weightsw{\displaystyle w}such that the Hilbert transform remains bounded onLp(T,wdλ){\displaystyle L^{p}(\mathbf {T} ,w\,\mathrm {d} \lambda )}and the maximal operator onLp(Rn,wdλ).{\displaystyle L^{p}(\mathbb {R} ^{n},w\,\mathrm {d} \lambda ).} One may also define spacesLp(M){\displaystyle L^{p}(M)}on a manifold, called theintrinsicLp{\displaystyle L^{p}}spacesof the manifold, usingdensities. Given a measure space(Ω,Σ,μ){\displaystyle (\Omega ,\Sigma ,\mu )}and alocally convex spaceE{\displaystyle E}(here assumed to becomplete), it is possible to define spaces ofp{\displaystyle p}-integrableE{\displaystyle E}-valued functions onΩ{\displaystyle \Omega }in a number of ways. One way is to define the spaces ofBochner integrableandPettis integrablefunctions, and then endow them withlocally convexTVS-topologiesthat are (each in their own way) a natural generalization of the usualLp{\displaystyle L^{p}}topology. 
Another way involvestopological tensor productsofLp(Ω,Σ,μ){\displaystyle L^{p}(\Omega ,\Sigma ,\mu )}withE.{\displaystyle E.}Element of the vector spaceLp(Ω,Σ,μ)⊗E{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes E}are finite sums of simple tensorsf1⊗e1+⋯+fn⊗en,{\displaystyle f_{1}\otimes e_{1}+\cdots +f_{n}\otimes e_{n},}where each simple tensorf×e{\displaystyle f\times e}may be identified with the functionΩ→E{\displaystyle \Omega \to E}that sendsx↦ef(x).{\displaystyle x\mapsto ef(x).}Thistensor productLp(Ω,Σ,μ)⊗E{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes E}is then endowed with a locally convex topology that turns it into atopological tensor product, the most common of which are theprojective tensor product, denoted byLp(Ω,Σ,μ)⊗πE,{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes _{\pi }E,}and theinjective tensor product, denoted byLp(Ω,Σ,μ)⊗εE.{\displaystyle L^{p}(\Omega ,\Sigma ,\mu )\otimes _{\varepsilon }E.}In general, neither of these space are complete so theircompletionsare constructed, which are respectively denoted byLp(Ω,Σ,μ)⊗^πE{\displaystyle L^{p}(\Omega ,\Sigma ,\mu ){\widehat {\otimes }}_{\pi }E}andLp(Ω,Σ,μ)⊗^εE{\displaystyle L^{p}(\Omega ,\Sigma ,\mu ){\widehat {\otimes }}_{\varepsilon }E}(this is analogous to how the space of scalar-valuedsimple functionsonΩ,{\displaystyle \Omega ,}when seminormed by any‖⋅‖p,{\displaystyle \|\cdot \|_{p},}is not complete so a completion is constructed which, after being quotiented byker⁡‖⋅‖p,{\displaystyle \ker \|\cdot \|_{p},}is isometrically isomorphic to the Banach spaceLp(Ω,μ){\displaystyle L^{p}(\Omega ,\mu )}).Alexander Grothendieckshowed that whenE{\displaystyle E}is anuclear space(a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable. The vector space of (equivalence classesof) measurable functions on(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}is denotedL0(S,Σ,μ){\displaystyle L^{0}(S,\Sigma ,\mu )}(Kalton, Peck & Roberts 1984). By definition, it contains all theLp,{\displaystyle L^{p},}and is equipped with the topology ofconvergence in measure. Whenμ{\displaystyle \mu }is a probability measure (i.e.,μ(S)=1{\displaystyle \mu (S)=1}), this mode of convergence is namedconvergence in probability. The spaceL0{\displaystyle L^{0}}is always atopological abelian groupbut is only atopological vector spaceifμ(S)<∞.{\displaystyle \mu (S)<\infty .}This is because scalar multiplication is continuous if and only ifμ(S)<∞.{\displaystyle \mu (S)<\infty .}If(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}isσ{\displaystyle \sigma }-finite then theweaker topologyoflocal convergence in measureis anF-space, i.e. acompletelymetrizable topological vector space. Moreover, this topology is isometric to global convergence in measure(S,Σ,ν){\displaystyle (S,\Sigma ,\nu )}for a suitable choice ofprobability measureν.{\displaystyle \nu .} The description is easier whenμ{\displaystyle \mu }is finite. 
Ifμ{\displaystyle \mu }is afinite measureon(S,Σ),{\displaystyle (S,\Sigma ),}the0{\displaystyle 0}function admits for the convergence in measure the followingfundamental system of neighborhoodsVε={f:μ({x:|f(x)|>ε})<ε},ε>0.{\displaystyle V_{\varepsilon }={\Bigl \{}f:\mu {\bigl (}\{x:|f(x)|>\varepsilon \}{\bigr )}<\varepsilon {\Bigr \}},\qquad \varepsilon >0.} The topology can be defined by any metricd{\displaystyle d}of the formd(f,g)=∫Sφ(|f(x)−g(x)|)dμ(x){\displaystyle d(f,g)=\int _{S}\varphi {\bigl (}|f(x)-g(x)|{\bigr )}\,\mathrm {d} \mu (x)}whereφ{\displaystyle \varphi }is bounded continuous concave and non-decreasing on[0,∞),{\displaystyle [0,\infty ),}withφ(0)=0{\displaystyle \varphi (0)=0}andφ(t)>0{\displaystyle \varphi (t)>0}whent>0{\displaystyle t>0}(for example,φ(t)=min(t,1).{\displaystyle \varphi (t)=\min(t,1).}Such a metric is calledLévy-metric forL0.{\displaystyle L^{0}.}Under this metric the spaceL0{\displaystyle L^{0}}is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only ifμ(S)<∞{\displaystyle \mu (S)<\infty }. To see this, consider the Lebesgue measurable functionf:R→R{\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} }defined byf(x)=x{\displaystyle f(x)=x}. Then clearlylimc→0d(cf,0)=∞{\displaystyle \lim _{c\rightarrow 0}d(cf,0)=\infty }. The spaceL0{\displaystyle L^{0}}is in general not locally bounded, and not locally convex. For the infinite Lebesgue measureλ{\displaystyle \lambda }onRn,{\displaystyle \mathbb {R} ^{n},}the definition of the fundamental system of neighborhoods could be modified as followsWε={f:λ({x:|f(x)|>εand|x|<1ε})<ε}{\displaystyle W_{\varepsilon }=\left\{f:\lambda \left(\left\{x:|f(x)|>\varepsilon {\text{ and }}|x|<{\tfrac {1}{\varepsilon }}\right\}\right)<\varepsilon \right\}} The resulting spaceL0(Rn,λ){\displaystyle L^{0}(\mathbb {R} ^{n},\lambda )}, with the topology of local convergence in measure, is isomorphic to the spaceL0(Rn,gλ),{\displaystyle L^{0}(\mathbb {R} ^{n},g\,\lambda ),}for any positiveλ{\displaystyle \lambda }–integrable densityg.{\displaystyle g.}
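A sketch contrasting convergence in measure with convergence in L¹ on [0,1]: the spike functions f_n = n·1_{[0,1/n]} satisfy d(f_n, 0) = ∫ min(|f_n|, 1) dx = 1/n → 0 for the Lévy-type metric built from φ(t) = min(t, 1), even though ‖f_n‖_1 = 1 for every n:

import numpy as np

x = (np.arange(10**5) + 0.5) / 10**5

def levy_metric_to_zero(f):
    # d(f, 0) = integral_0^1 min(|f(x)|, 1) dx, midpoint rule
    return np.mean(np.minimum(np.abs(f), 1.0))

for n in (1, 10, 100, 1000):
    f_n = np.where(x <= 1.0 / n, float(n), 0.0)
    print(n, levy_metric_to_zero(f_n), np.mean(f_n))   # metric -> 0, L^1 norm stays 1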
https://en.wikipedia.org/wiki/P-norm
Inmathematics, ametric spaceis asettogether with a notion ofdistancebetween itselements, usually calledpoints. The distance is measured by afunctioncalled ametricordistance function.[1]Metric spaces are a general setting for studying many of the concepts ofmathematical analysisandgeometry. The most familiar example of a metric space is3-dimensional Euclidean spacewith its usual notion of distance. Other well-known examples are asphereequipped with theangular distanceand thehyperbolic plane. A metric may correspond to ametaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with theHamming distance, which measures the number of characters that need to be changed to get from one string to another. Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, includingRiemannian manifolds,normed vector spaces, andgraphs. Inabstract algebra, thep-adic numbersarise as elements of thecompletionof a metric structure on therational numbers. Metric spaces are also studied in their own right inmetric geometry[2]andanalysis on metric spaces.[3] Many of the basic notions ofmathematical analysis, includingballs,completeness, as well asuniform,Lipschitz, andHölder continuity, can be defined in the setting of metric spaces. Other notions, such ascontinuity,compactness, andopenandclosed sets, can be defined for metric spaces, but also in the even more general setting oftopological spaces. To see the utility of different notions of distance, consider thesurface of the Earthas a set of points. We can measure the distance between two such points by the length of theshortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural inseismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points. The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts. Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as withWasserstein metricson spaces ofmeasures) or the degree of difference between two objects (for example, theHamming distancebetween two strings of characters, or theGromov–Hausdorff distancebetween metric spaces themselves). Formally, ametric spaceis anordered pair(M,d)whereMis a set anddis ametriconM, i.e., afunctiond:M×M→R{\displaystyle d\,\colon M\times M\to \mathbb {R} }satisfying the following axioms for all pointsx,y,z∈M{\displaystyle x,y,z\in M}:[4][5] If the metricdis unambiguous, one often refers byabuse of notationto "the metric spaceM". 
By taking all axioms except the second, one can show that distance is always non-negative:0=d(x,x)≤d(x,y)+d(y,x)=2d(x,y){\displaystyle 0=d(x,x)\leq d(x,y)+d(y,x)=2d(x,y)}Therefore the second axiom can be weakened toIfx≠y, thend(x,y)≠0{\textstyle {\text{If }}x\neq y{\text{, then }}d(x,y)\neq 0}and combined with the first to maked(x,y)=0⟺x=y{\textstyle d(x,y)=0\iff x=y}.[6] Thereal numberswith the distance functiond(x,y)=|y−x|{\displaystyle d(x,y)=|y-x|}given by theabsolute differenceform a metric space. Many properties of metric spaces and functions between them are generalizations of concepts inreal analysisand coincide with those concepts when applied to the real line. The Euclidean planeR2{\displaystyle \mathbb {R} ^{2}}can be equipped with many different metrics. TheEuclidean distancefamiliar from school mathematics can be defined byd2((x1,y1),(x2,y2))=(x2−x1)2+(y2−y1)2.{\displaystyle d_{2}((x_{1},y_{1}),(x_{2},y_{2}))={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}.} ThetaxicaborManhattandistanceis defined byd1((x1,y1),(x2,y2))=|x2−x1|+|y2−y1|{\displaystyle d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{2}-x_{1}|+|y_{2}-y_{1}|}and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article. Themaximum,L∞{\displaystyle L^{\infty }}, orChebyshev distanceis defined byd∞((x1,y1),(x2,y2))=max{|x2−x1|,|y2−y1|}.{\displaystyle d_{\infty }((x_{1},y_{1}),(x_{2},y_{2}))=\max\{|x_{2}-x_{1}|,|y_{2}-y_{1}|\}.}This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of similarly to the number of moves akingwould have to make on achessboardto travel from one point to another on the given space. In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formulad∞(p,q)≤d2(p,q)≤d1(p,q)≤2d∞(p,q),{\displaystyle d_{\infty }(p,q)\leq d_{2}(p,q)\leq d_{1}(p,q)\leq 2d_{\infty }(p,q),}which holds for every pair of pointsp,q∈R2{\displaystyle p,q\in \mathbb {R} ^{2}}. A radically different distance can be defined by settingd(p,q)={0,ifp=q,1,otherwise.{\displaystyle d(p,q)={\begin{cases}0,&{\text{if }}p=q,\\1,&{\text{otherwise.}}\end{cases}}}UsingIverson brackets,d(p,q)=[p≠q]{\displaystyle d(p,q)=[p\neq q]}In thisdiscrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points. All of these metrics make sense onRn{\displaystyle \mathbb {R} ^{n}}as well asR2{\displaystyle \mathbb {R} ^{2}}. Given a metric space(M,d)and asubsetA⊆M{\displaystyle A\subseteq M}, we can considerAto be a metric space by measuring distances the same way we would inM. Formally, theinduced metriconAis a functiondA:A×A→R{\displaystyle d_{A}:A\times A\to \mathbb {R} }defined bydA(x,y)=d(x,y).{\displaystyle d_{A}(x,y)=d(x,y).}For example, if we take the two-dimensional sphereS2as a subset ofR3{\displaystyle \mathbb {R} ^{3}}, the Euclidean metric onR3{\displaystyle \mathbb {R} ^{3}}induces the straight-line metric onS2described above. Two more useful examples are the open interval(0, 1)and the closed interval[0, 1]thought of as subspaces of the real line. 
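A sketch of the three plane metrics above together with the stated comparison d∞ ≤ d₂ ≤ d₁ ≤ 2 d∞, checked on random pairs of points:

import numpy as np

def d1(p, q):   return abs(p[0]-q[0]) + abs(p[1]-q[1])            # taxicab
def d2(p, q):   return ((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5   # Euclidean
def dinf(p, q): return max(abs(p[0]-q[0]), abs(p[1]-q[1]))        # Chebyshev / maximum

rng = np.random.default_rng(3)
for _ in range(1000):
    p, q = rng.normal(size=2), rng.normal(size=2)
    assert dinf(p, q) <= d2(p, q) <= d1(p, q) <= 2 * dinf(p, q) + 1e-12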
Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. Hisdistancewas given by logarithm of across ratio. Any projectivity leaving the conic stable also leaves the cross ratio constant, so isometries are implicit. This method provides models forelliptic geometryandhyperbolic geometry, andFelix Klein, in several publications, established the field ofnon-euclidean geometrythrough the use of theCayley-Klein metric. The idea of an abstract space with metric properties was addressed in 1906 byRené Maurice Fréchet[7]and the termmetric spacewas coined byFelix Hausdorffin 1914.[8][9][10] Fréchet's work laid the foundation for understandingconvergence,continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way. This was important for the growing field of functional analysis. Mathematicians like Hausdorff andStefan Banachfurther refined and expanded the framework of metric spaces. Hausdorff introducedtopological spacesas a generalization of metric spaces. Banach's work infunctional analysisheavily relied on the metric structure. Over time, metric spaces became a central part ofmodern mathematics. They have influenced various fields includingtopology,geometry, andapplied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts. A distance function is enough to define notions of closeness and convergence that were first developed inreal analysis. Properties that depend on the structure of a metric space are referred to asmetric properties. Every metric space is also atopological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are reallytopological properties. For any pointxin a metric spaceMand any real numberr> 0, theopen ballof radiusraroundxis defined to be the set of points that are strictly less than distancerfromx:Br(x)={y∈M:d(x,y)<r}.{\displaystyle B_{r}(x)=\{y\in M:d(x,y)<r\}.}This is a natural way to define a set of points that are relatively close tox. Therefore, a setN⊆M{\displaystyle N\subseteq M}is aneighborhoodofx(informally, it contains all points "close enough" tox) if it contains an open ball of radiusraroundxfor somer> 0. Anopen setis a set which is a neighborhood of all its points. It follows that the open balls form abasefor a topology onM. In other words, the open sets ofMare exactly the unions of open balls. As in any topology,closed setsare the complements of open sets. Sets may be both open and closed as well as neither open nor closed. This topology does not carry all the information about the metric space. For example, the distancesd1,d2, andd∞defined above all induce the same topology onR2{\displaystyle \mathbb {R} ^{2}}, although they behave differently in many respects. Similarly,R{\displaystyle \mathbb {R} }with the Euclidean metric and its subspace the interval(0, 1)with the induced metric arehomeomorphicbut have very different metric properties. Conversely, not every topological space can be given a metric. 
Topological spaces which are compatible with a metric are calledmetrizableand are particularly well-behaved in many ways: in particular, they areparacompact[11]Hausdorff spaces(hencenormal) andfirst-countable.[a]TheNagata–Smirnov metrization theoremgives a characterization of metrizability in terms of other topological properties, without reference to metrics. Convergence of sequencesin Euclidean space is defined as follows: Convergence of sequences in a topological space is defined as follows: In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern fortopological propertiesof metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis. Informally, a metric space iscompleteif it has no "missing points": every sequence that looks like it should converge to something actually converges. To make this precise: a sequence(xn)in a metric spaceMisCauchyif for everyε > 0there is an integerNsuch that for allm,n>N,d(xm,xn) < ε. By the triangle inequality, any convergent sequence is Cauchy: ifxmandxnare both less thanεaway from the limit, then they are less than2εaway from each other. If the converse is true—every Cauchy sequence inMconverges—thenMis complete. Euclidean spaces are complete, as isR2{\displaystyle \mathbb {R} ^{2}}with the other metrics described above. Two examples of spaces which are not complete are(0, 1)and the rationals, each with the metric induced fromR{\displaystyle \mathbb {R} }. One can think of(0, 1)as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it inR{\displaystyle \mathbb {R} }(for example, its successive decimal approximations). These examples show that completeness isnota topological property, sinceR{\displaystyle \mathbb {R} }is complete but the homeomorphic space(0, 1)is not. This notion of "missing points" can be made precise. In fact, every metric space has a uniquecompletion, which is a complete space that contains the given space as adensesubset. For example,[0, 1]is the completion of(0, 1), and the real numbers are the completion of the rationals. Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, thep-adic numbersare defined as the completion of the rationals under a different metric. Completion is particularly common as a tool infunctional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example,weak solutionstodifferential equationstypically live in a completion (aSobolev space) rather than the original space of nice functions for which the differential equation actually makes sense. A metric spaceMisboundedif there is anrsuch that no pair of points inMis more than distancerapart.[b]The least suchris called thediameterofM. The spaceMis calledprecompactortotally boundedif for everyr> 0there is a finitecoverofMby open balls of radiusr. Every totally bounded space is bounded. To see this, start with a finite cover byr-balls for some arbitraryr. Since the subset ofMconsisting of the centers of these balls is finite, it has finite diameter, sayD. 
By the triangle inequality, the diameter of the whole space is at mostD+ 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded isR2{\displaystyle \mathbb {R} ^{2}}(or any other infinite set) with the discrete metric. Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces: One example of a compact space is the closed interval[0, 1]. Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool isLebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover. Unlike in the case of topological spaces or algebraic structures such asgroupsorrings, there is no single "right" type ofstructure-preserving functionbetween metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}are two metric spaces. The words "function" and "map" are used interchangeably. One interpretation of a "structure-preserving" map is one that fully preserves the distance function: It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called anisometry.[13]One perhaps non-obvious example of an isometry between spaces described in this article is the mapf:(R2,d1)→(R2,d∞){\displaystyle f:(\mathbb {R} ^{2},d_{1})\to (\mathbb {R} ^{2},d_{\infty })}defined byf(x,y)=(x+y,x−y).{\displaystyle f(x,y)=(x+y,x-y).} If there is an isometry between the spacesM1andM2, they are said to beisometric. Metric spaces that are isometric areessentially identical. On the other end of the spectrum, one can forget entirely about the metric structure and studycontinuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are: Ahomeomorphismis a continuous bijection whose inverse is also continuous; if there is a homeomorphism betweenM1andM2, they are said to behomeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example,R{\displaystyle \mathbb {R} }is unbounded and complete, while(0, 1)is bounded but not complete. A functionf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isuniformly continuousif for every real numberε > 0there existsδ > 0such that for all pointsxandyinM1such thatd(x,y)<δ{\displaystyle d(x,y)<\delta }, we haved2(f(x),f(y))<ε.{\displaystyle d_{2}(f(x),f(y))<\varepsilon .} The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the pointx. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences inM1to Cauchy sequences inM2. In other words, uniform continuity preserves some metric properties which are not purely topological. On the other hand, theHeine–Cantor theoremstates that ifM1is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces. ALipschitz mapis one that stretches distances by at most a bounded factor. 
Formally, given a real numberK> 0, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isK-Lipschitzifd2(f(x),f(y))≤Kd1(x,y)for allx,y∈M1.{\displaystyle d_{2}(f(x),f(y))\leq Kd_{1}(x,y)\quad {\text{for all}}\quad x,y\in M_{1}.}Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric.[14]For example, a curve in a metric space isrectifiable(has finite length) if and only if it has a Lipschitz reparametrization. A 1-Lipschitz map is sometimes called anonexpandingormetric map. Metric maps are commonly taken to be the morphisms of thecategory of metric spaces. AK-Lipschitz map forK< 1is called acontraction. TheBanach fixed-point theoremstates that ifMis a complete metric space, then every contractionf:M→M{\displaystyle f:M\to M}admits a uniquefixed point. If the metric spaceMis compact, the result holds for a slightly weaker condition onf: a mapf:M→M{\displaystyle f:M\to M}admits a unique fixed point ifd(f(x),f(y))<d(x,y)for allx≠y∈M1.{\displaystyle d(f(x),f(y))<d(x,y)\quad {\mbox{for all}}\quad x\neq y\in M_{1}.} Aquasi-isometryis a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example,R2{\displaystyle \mathbb {R} ^{2}}and its subspaceZ2{\displaystyle \mathbb {Z} ^{2}}are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important ingeometric group theory: theŠvarc–Milnor lemmastates that all spaces on which a groupacts geometricallyare quasi-isometric.[15] Formally, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}is aquasi-isometric embeddingif there exist constantsA≥ 1andB≥ 0such that1Ad2(f(x),f(y))−B≤d1(x,y)≤Ad2(f(x),f(y))+Bfor allx,y∈M1.{\displaystyle {\frac {1}{A}}d_{2}(f(x),f(y))-B\leq d_{1}(x,y)\leq Ad_{2}(f(x),f(y))+B\quad {\text{ for all }}\quad x,y\in M_{1}.}It is aquasi-isometryif in addition it isquasi-surjective, i.e. there is a constantC≥ 0such that every point inM2{\displaystyle M_{2}}is at distance at mostCfrom some point in the imagef(M1){\displaystyle f(M_{1})}. Given two metric spaces(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}: Anormed vector spaceis a vector space equipped with anorm, which is a function that measures the length of vectors. The norm of a vectorvis typically denoted by‖v‖{\displaystyle \lVert v\rVert }. Any normed vector space can be equipped with a metric in which the distance between two vectorsxandyis given byd(x,y):=‖x−y‖.{\displaystyle d(x,y):=\lVert x-y\rVert .}The metricdis said to beinducedby the norm‖⋅‖{\displaystyle \lVert {\cdot }\rVert }. Conversely,[16]if a metricdon avector spaceXis then it is the metric induced by the norm‖x‖:=d(x,0).{\displaystyle \lVert x\rVert :=d(x,0).}A similar relationship holds betweenseminormsandpseudometrics. Among examples of metrics induced by a norm are the metricsd1,d2, andd∞onR2{\displaystyle \mathbb {R} ^{2}}, which are induced by theManhattan norm, theEuclidean norm, and themaximum norm, respectively. More generally, theKuratowski embeddingallows one to see any metric space as a subspace of a normed vector space. Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied infunctional analysis. Completeness is particularly important in this context: a complete normed vector space is known as aBanach space. 
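Returning to the Banach fixed-point theorem mentioned above, it suggests a simple algorithm: start anywhere and repeatedly apply the contraction; the iterates converge to the unique fixed point. A small sketch (the map cos restricted to the complete space [0, 1] is just an illustrative contraction, with Lipschitz constant sin(1) ≈ 0.84 < 1):

```python
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_steps=10_000):
    """Banach-style fixed-point iteration: x_{n+1} = f(x_n)."""
    x = x0
    for _ in range(max_steps):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge within max_steps")

# cos maps [0, 1] into itself and is a contraction there (|cos'| <= sin(1) < 1),
# so completeness of [0, 1] guarantees a unique fixed point.
fixed = iterate_to_fixed_point(math.cos, 0.5)
print(fixed, math.cos(fixed))  # both approximately 0.7390851332151607
```

The same iteration underlies many existence proofs and numerical schemes; the theorem guarantees both that the loop terminates (up to tolerance) and that the answer does not depend on the starting point.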
An unusual property of normed vector spaces is thatlinear transformationsbetween them are continuous if and only if they are Lipschitz. Such transformations are known asbounded operators. Acurvein a metric space(M,d)is a continuous functionγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}. Thelengthofγis measured byL(γ)=sup0=x0<x1<⋯<xn=T{∑k=1nd(γ(xk−1),γ(xk))}.{\displaystyle L(\gamma )=\sup _{0=x_{0}<x_{1}<\cdots <x_{n}=T}\left\{\sum _{k=1}^{n}d(\gamma (x_{k-1}),\gamma (x_{k}))\right\}.}In general, this supremum may be infinite; a curve of finite length is calledrectifiable.[17]Suppose that the length of the curveγis equal to the distance between its endpoints—that is, it is the shortest possible path between its endpoints. After reparametrization by arc length,γbecomes ageodesic: a curve which is a distance-preserving function.[15]A geodesic is a shortest possible path between any two of its points.[c] Ageodesic metric spaceis a metric space which admits a geodesic between any two of its points. The spaces(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}and(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}are both geodesic metric spaces. In(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}, geodesics are unique, but in(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}, there are often infinitely many geodesics between two points, as shown in the figure at the top of the article. The spaceMis alength space(or the metricdisintrinsic) if the distance between any two pointsxandyis the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points(1, 0)and(-1, 0)can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface. Given any metric space(M,d), one can define a new, intrinsic distance functiondintrinsiconMby setting the distance between pointsxandyto be the infimum of thed-lengths of paths between them. For instance, ifdis the straight-line distance on the sphere, thendintrinsicis the great-circle distance. However, in some casesdintrinsicmay have infinite values. For example, ifMis theKoch snowflakewith the subspace metricdinduced fromR2{\displaystyle \mathbb {R} ^{2}}, then the resulting intrinsic distance is infinite for any pair of distinct points. ARiemannian manifoldis a space equipped with a Riemannianmetric tensor, which determines lengths oftangent vectorsat every point. This can be thought of defining a notion of distance infinitesimally. In particular, a differentiable pathγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}in a Riemannian manifoldMhas length defined as the integral of the length of the tangent vector to the path:L(γ)=∫0T|γ˙(t)|dt.{\displaystyle L(\gamma )=\int _{0}^{T}|{\dot {\gamma }}(t)|dt.}On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such assub-RiemannianandFinsler metrics. The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. 
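The supremum defining the length of a curve can be approached by computing the polygonal sums over finer and finer partitions. A brief sketch for a quarter of the unit circle with the Euclidean metric (the specific curve and partition sizes are illustrative choices):

```python
import math

def gamma(t):
    # Quarter of the unit circle, parameterized on [0, T] with T = pi/2
    return (math.cos(t), math.sin(t))

def d(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def polygonal_length(curve, T, n):
    # Sum of distances over the partition 0 = t_0 < t_1 < ... < t_n = T
    ts = [T * k / n for k in range(n + 1)]
    return sum(d(curve(ts[k - 1]), curve(ts[k])) for k in range(1, n + 1))

T = math.pi / 2
for n in (2, 8, 32, 128, 512):
    print(n, polygonal_length(gamma, T, n))
# The sums increase with refinement and approach pi/2 ~ 1.5708,
# so this curve is rectifiable with length pi/2.
```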
One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is aCAT(k)space(a synthetic condition which depends purely on the metric) if and only if itssectional curvatureis bounded above byk.[20]ThusCAT(k)spaces generalize upper curvature bounds to general metric spaces. Real analysis makes use of both the metric onRn{\displaystyle \mathbb {R} ^{n}}and theLebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside inmetric measure spaces: spaces that have both ameasureand a metric which are compatible with each other. Formally, ametric measure spaceis a metric space equipped with aBorel regular measuresuch that every ball has positive measure.[21]For example Euclidean spaces of dimensionn, and more generallyn-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with theLebesgue measure. Certainfractalmetric spaces such as theSierpiński gasketcan be equipped with the α-dimensionalHausdorff measurewhere α is theHausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure. One application of metric measure spaces is generalizing the notion ofRicci curvaturebeyond Riemannian manifolds. Just asCAT(k)andAlexandrov spacesgeneralize sectional curvature bounds,RCD spacesare a class of metric measure spaces which generalize lower bounds on Ricci curvature.[22] Ametric space isdiscreteif its induced topology is thediscrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular,finite metric spaces(those having afinitenumber of points) are studied incombinatoricsandtheoretical computer science.[23]Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can beisometrically embeddedin a Euclidean space or inHilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.[24][25] For anyundirected connected graphG, the setVof vertices ofGcan be turned into a metric space by defining thedistancebetween verticesxandyto be the length of the shortest edge path connecting them. This is also calledshortest-path distanceorgeodesic distance. Ingeometric group theorythis construction is applied to theCayley graphof a (typically infinite)finitely-generated group, yielding theword metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.[15] An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics. A significant result in this area is that any finite metric space can be probabilistically embedded into atree metricwith an expected distortion ofO(logn){\displaystyle O(logn)}, wheren{\displaystyle n}is the number of points in the metric space.[26] This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound ofΩ(logn){\displaystyle \Omega (logn)}. 
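Before continuing with the tree-embedding discussion below, the shortest-path construction mentioned above is easy to make concrete: on an unweighted, undirected, connected graph, breadth-first search yields all hop-count distances, and the metric axioms can be verified directly. An illustrative sketch on a small made-up graph:

```python
from collections import deque
from itertools import combinations, permutations

def shortest_path_metric(adjacency):
    """All-pairs shortest-path (hop-count) distances of an unweighted, undirected graph."""
    dist = {}
    for source in adjacency:
        seen = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    queue.append(v)
        for target, hops in seen.items():
            dist[source, target] = hops
    return dist

graph = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "d"], "d": ["b", "c", "e"], "e": ["d"]}
d = shortest_path_metric(graph)

# Identity of indiscernibles and symmetry
assert all(d[x, x] == 0 for x in graph)
assert all(d[x, y] == d[y, x] for x, y in combinations(graph, 2))
# Triangle inequality over all ordered triples
assert all(d[x, z] <= d[x, y] + d[y, z] for x, y, z in permutations(graph, 3))
print("shortest-path distance on the graph is a metric")
```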
The tree metrics produced in this embeddingdominatethe original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure. The result has significant implications for various computational problems: The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. TheO(logn){\displaystyle O(logn)}distortion bound has led to improvedapproximation ratiosin several algorithmic problems, demonstrating the practical significance of this theoretical result. In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples: The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves.HausdorffandGromov–Hausdorff distancedefine metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively. Suppose(M,d)is a metric space, and letSbe a subset ofM. Thedistance fromSto a pointxofMis, informally, the distance fromxto the closest point ofS. However, since there may not be a single closest point, it is defined via aninfimum:d(x,S)=inf{d(x,s):s∈S}.{\displaystyle d(x,S)=\inf\{d(x,s):s\in S\}.}In particular,d(x,S)=0{\displaystyle d(x,S)=0}if and only ifxbelongs to theclosureofS. Furthermore, distances between points and sets satisfy a version of the triangle inequality:d(x,S)≤d(x,y)+d(y,S),{\displaystyle d(x,S)\leq d(x,y)+d(y,S),}and therefore the mapdS:M→R{\displaystyle d_{S}:M\to \mathbb {R} }defined bydS(x)=d(x,S){\displaystyle d_{S}(x)=d(x,S)}is continuous. Incidentally, this shows that metric spaces arecompletely regular. Given two subsetsSandTofM, theirHausdorff distanceisdH(S,T)=max{sup{d(s,T):s∈S},sup{d(t,S):t∈T}}.{\displaystyle d_{H}(S,T)=\max\{\sup\{d(s,T):s\in S\},\sup\{d(t,S):t\in T\}\}.}Informally, two setsSandTare close to each other in the Hausdorff distance if no element ofSis too far fromTand vice versa. For example, ifSis an open set in Euclidean spaceTis anε-netinsideS, thendH(S,T)<ε{\displaystyle d_{H}(S,T)<\varepsilon }. In general, the Hausdorff distancedH(S,T){\displaystyle d_{H}(S,T)}can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets ofM. The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. TheGromov–Hausdorff distancebetween compact spacesXandYis the infimum of the Hausdorff distance over all metric spacesZthat containXandYas subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications. 
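For finite subsets of a metric space, the point-to-set distance and the Hausdorff distance reduce to minima and maxima over finitely many values. A short sketch with the Euclidean metric on R² (the sample sets are made up for illustration):

```python
import math

def d(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def dist_point_to_set(x, S):
    # d(x, S) = inf over s in S of d(x, s); a minimum here, since S is finite
    return min(d(x, s) for s in S)

def hausdorff(S, T):
    # d_H(S, T) = max( sup_{s in S} d(s, T), sup_{t in T} d(t, S) )
    return max(max(dist_point_to_set(s, T) for s in S),
               max(dist_point_to_set(t, S) for t in T))

S = [(0, 0), (1, 0), (0, 1)]
T = [(0.1, 0.1), (1, 0.2), (2, 2)]
print(dist_point_to_set((3, 3), S))   # distance from a point to the set S
print(hausdorff(S, T))                # dominated by the outlier (2, 2) in T
```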
If(M1,d1),…,(Mn,dn){\displaystyle (M_{1},d_{1}),\ldots ,(M_{n},d_{n})}are metric spaces, andNis theEuclidean normonRn{\displaystyle \mathbb {R} ^{n}}, then(M1×⋯×Mn,d×){\displaystyle {\bigl (}M_{1}\times \cdots \times M_{n},d_{\times }{\bigr )}}is a metric space, where theproduct metricis defined byd×((x1,…,xn),(y1,…,yn))=N(d1(x1,y1),…,dn(xn,yn)),{\displaystyle d_{\times }{\bigl (}(x_{1},\ldots ,x_{n}),(y_{1},\ldots ,y_{n}){\bigr )}=N{\bigl (}d_{1}(x_{1},y_{1}),\ldots ,d_{n}(x_{n},y_{n}){\bigr )},}and the induced topology agrees with theproduct topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained ifNis thetaxicab norm, ap-norm, themaximum norm, or any other norm which is non-decreasing as the coordinates of a positiven-tuple increase (yielding the triangle inequality). Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metricd(x,y)=∑i=1∞12idi(xi,yi)1+di(xi,yi).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {d_{i}(x_{i},y_{i})}{1+d_{i}(x_{i},y_{i})}}.} The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies ofR{\displaystyle \mathbb {R} }is notfirst-countableand thus is not metrizable. IfMis a metric space with metricd, and∼{\displaystyle \sim }is anequivalence relationonM, then we can endow the quotient setM/∼{\displaystyle M/\!\sim }with a pseudometric. The distance between two equivalence classes[x]{\displaystyle [x]}and[y]{\displaystyle [y]}is defined asd′([x],[y])=inf{d(p1,q1)+d(p2,q2)+⋯+d(pn,qn)},{\displaystyle d'([x],[y])=\inf\{d(p_{1},q_{1})+d(p_{2},q_{2})+\dotsb +d(p_{n},q_{n})\},}where theinfimumis taken over all finite sequences(p1,p2,…,pn){\displaystyle (p_{1},p_{2},\dots ,p_{n})}and(q1,q2,…,qn){\displaystyle (q_{1},q_{2},\dots ,q_{n})}withp1∼x{\displaystyle p_{1}\sim x},qn∼y{\displaystyle q_{n}\sim y},qi∼pi+1,i=1,2,…,n−1{\displaystyle q_{i}\sim p_{i+1},i=1,2,\dots ,n-1}.[30]In general this will only define apseudometric, i.e.d′([x],[y])=0{\displaystyle d'([x],[y])=0}does not necessarily imply that[x]=[y]{\displaystyle [x]=[y]}. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces),d′{\displaystyle d'}is a metric. The quotient metricd′{\displaystyle d'}is characterized by the followinguniversal property. Iff:(M,d)→(X,δ){\displaystyle f\,\colon (M,d)\to (X,\delta )}is a metric (i.e. 1-Lipschitz) map between metric spaces satisfyingf(x) =f(y)wheneverx∼y{\displaystyle x\sim y}, then the induced functionf¯:M/∼→X{\displaystyle {\overline {f}}\,\colon {M/\sim }\to X}, given byf¯([x])=f(x){\displaystyle {\overline {f}}([x])=f(x)}, is a metric mapf¯:(M/∼,d′)→(X,δ).{\displaystyle {\overline {f}}\,\colon (M/\sim ,d')\to (X,\delta ).} The quotient metric does not always induce thequotient topology. For example, the topological quotient of the metric spaceN×[0,1]{\displaystyle \mathbb {N} \times [0,1]}identifying all points of the form(n,0){\displaystyle (n,0)}is not metrizable since it is notfirst-countable, but the quotient metric is a well-defined metric on the same set which induces acoarser topology. 
Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient.[31] A topological space issequentialif and only if it is a (topological) quotient of a metric space.[32] There are several notions of spaces which have less structure than a metric space, but more than a topological space. There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, infunctional analysispseudometrics often come fromseminormson vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term intopology. Some authors define metrics so as to allow the distance functiondto attain the value ∞, i.e. distances are non-negative numbers on theextended real number line.[4]Such a function is also called anextended metricor "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using asubadditivemonotonically increasing bounded function which is zero at zero, e.g.d′(x,y)=d(x,y)/(1+d(x,y)){\displaystyle d'(x,y)=d(x,y)/(1+d(x,y))}ord″(x,y)=min(1,d(x,y)){\displaystyle d''(x,y)=\min(1,d(x,y))}. The requirement that the metric take values in[0,∞){\displaystyle [0,\infty )}can be relaxed to consider metrics with values in other structures, including: These generalizations still induce auniform structureon the space. ApseudometriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) onlyd(x,x)=0{\displaystyle d(x,x)=0}for allx{\displaystyle x}is required.[34]In other words, the axioms for a pseudometric are: In some contexts, pseudometrics are referred to assemimetrics[35]because of their relation toseminorms. Occasionally, aquasimetricis defined as a function that satisfies all axioms for a metric with the possible exception of symmetry.[36]The name of this generalisation is not entirely standardized.[37] Quasimetrics are common in real life. For example, given a setXof mountain villages, the typical walking times between elements ofXform a quasimetric because travel uphill takes longer than travel downhill. Another example is thelength of car ridesin a city with one-way streets: here, a shortest path from pointAto pointBgoes along a different set of streets than a shortest path fromBtoAand may have a different length. A quasimetric on the reals can be defined by settingd(x,y)={x−yifx≥y,1otherwise.{\displaystyle d(x,y)={\begin{cases}x-y&{\text{if }}x\geq y,\\1&{\text{otherwise.}}\end{cases}}}The 1 may be replaced, for example, by infinity or by1+y−x{\displaystyle 1+{\sqrt {y-x}}}or any othersubadditivefunction ofy-x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size byfiling it down, but it is difficult or impossible to grow it. Given a quasimetric onX, one can define anR-ball aroundxto be the set{y∈X|d(x,y)≤R}{\displaystyle \{y\in X|d(x,y)\leq R\}}. As in the case of a metric, such balls form a basis for a topology onX, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed)Sorgenfrey line. 
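The metal-stick example can be written out directly: the function below is asymmetric, yet a spot check confirms that it still satisfies the triangle inequality, which is exactly what makes it a quasimetric rather than a metric. An illustrative sketch:

```python
import random

def d(x, y):
    # Cost of turning a stick of length x into one of length y:
    # filing it down is cheap (proportional to the removed length), growing it is not.
    return x - y if x >= y else 1.0

random.seed(4)
for _ in range(1000):
    x, y, z = (random.uniform(0, 2) for _ in range(3))
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12   # triangle inequality holds
    assert d(x, x) == 0                            # distance from a point to itself is zero

print(d(1.5, 1.0), d(1.0, 1.5))  # 0.5 vs 1.0: symmetry fails, so d is only a quasimetric
```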
In ametametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are: Metametrics appear in the study ofGromov hyperbolic metric spacesand their boundaries. Thevisual metametricon such a space satisfiesd(x,x)=0{\displaystyle d(x,x)=0}for pointsx{\displaystyle x}on the boundary, but otherwised(x,x){\displaystyle d(x,x)}is approximately the distance fromx{\displaystyle x}to the boundary. Metametrics were first defined by Jussi Väisälä.[38]In other work, a function satisfying these axioms is called apartial metric[39][40]or adislocated metric.[34] AsemimetriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }that satisfies the first three axioms, but not necessarily the triangle inequality: Some authors work with a weaker form of the triangle inequality, such as: The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to asquasimetrics,[41]nearmetrics[42]orinframetrics.[43] The ρ-inframetric inequalities were introduced to modelround-trip delay timesin theinternet.[43]The triangle inequality implies the 2-inframetric inequality, and theultrametric inequalityis exactly the 1-inframetric inequality. Relaxing the last three axioms leads to the notion of apremetric, i.e. a function satisfying the following conditions: This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics[44]or pseudometrics;[45]in translations of Russian books it sometimes appears as "prametric".[46]A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance.[47] Any premetric gives rise to a topology as follows. For a positive realr{\displaystyle r}, ther{\displaystyle r}-ballcentered at a pointp{\displaystyle p}is defined as A set is calledopenif for any pointp{\displaystyle p}in the set there is anr{\displaystyle r}-ballcentered atp{\displaystyle p}which is contained in the set. Every premetric space is a topological space, and in fact asequential space. In general, ther{\displaystyle r}-ballsthemselves need not be open sets with respect to this topology. As for metrics, the distance between two setsA{\displaystyle A}andB{\displaystyle B}, is defined as This defines a premetric on thepower setof a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric. Any premetric gives rise to apreclosure operatorcl{\displaystyle cl}as follows: The prefixespseudo-,quasi-andsemi-can also be combined, e.g., apseudoquasimetric(sometimes calledhemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the openr{\displaystyle r}-ballsform a basis of open sets. A very basic example of a pseudoquasimetric space is the set{0,1}{\displaystyle \{0,1\}}with the premetric given byd(0,1)=1{\displaystyle d(0,1)=1}andd(1,0)=0.{\displaystyle d(1,0)=0.}The associated topological space is theSierpiński space. 
Sets equipped with an extended pseudoquasimetric were studied byWilliam Lawvereas "generalized metric spaces".[48]From acategoricalpoint of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of themetric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients. Lawvere also gave an alternate definition of such spaces asenriched categories. The ordered set(R,≥){\displaystyle (\mathbb {R} ,\geq )}can be seen as acategorywith onemorphisma→b{\displaystyle a\to b}ifa≥b{\displaystyle a\geq b}and none otherwise. Using+as thetensor productand 0 as theidentitymakes this category into amonoidal categoryR∗{\displaystyle R^{*}}. Every (extended pseudoquasi-)metric space(M,d){\displaystyle (M,d)}can now be viewed as a categoryM∗{\displaystyle M^{*}}enriched overR∗{\displaystyle R^{*}}: The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. Amultisetis a generalization of the notion of asetin which an element can occur more than once. Define the multiset unionU=XY{\displaystyle U=XY}as follows: if an elementxoccursmtimes inXandntimes inYthen it occursm+ntimes inU. A functiondon the set of nonempty finite multisets of elements of a setMis a metric[49]if By considering the cases of axioms 1 and 2 in which the multisetXhas two elements and the case of axiom 3 in which the multisetsX,Y, andZhave one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements. A simple example is the set of all nonempty finite multisetsX{\displaystyle X}of integers withd(X)=max(X)−min(X){\displaystyle d(X)=\max(X)-\min(X)}. More complex examples areinformation distancein multisets;[49]andnormalized compression distance(NCD) in multisets.[50]
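The simple example d(X) = max(X) − min(X) is easy to implement, and, as noted above, restricting it to two-element multisets recovers the ordinary absolute-difference metric on the integers. A minimal sketch:

```python
import random

def d(multiset):
    # Spread of a nonempty finite multiset of integers
    return max(multiset) - min(multiset)

random.seed(5)
for _ in range(1000):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    # Restricted to two-element multisets, d is the usual metric |a - b| on the integers:
    assert d([a, b]) == abs(a - b)
    assert d([a, b]) == d([b, a])                 # order inside the multiset is irrelevant
    assert d([a, c]) <= d([a, b]) + d([b, c])     # usual triangle inequality on pairs
print("d(X) = max(X) - min(X) restricts to the ordinary metric on pairs")
```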
https://en.wikipedia.org/wiki/Distance_function
TheLevel-set method(LSM) is a conceptual framework for usinglevel setsas a tool fornumerical analysisofsurfacesandshapes. LSM can performnumerical computationsinvolvingcurvesand surfaces on a fixedCartesian gridwithout having toparameterizethese objects.[1]LSM makes it easier to perform computations on shapes with sharp corners andshapesthat changetopology(such as by splitting in two or developing holes). These characteristics make LSM effective formodelingobjects that vary in time, such as anairbaginflating or a drop of oil floating in water. The figure on the right illustrates several ideas about LSM. In the upper left corner is abounded regionwith a well-behaved boundary. Below it, the red surface is the graph of a level set functionφ{\displaystyle \varphi }determining this shape, and the flat blue region represents theX-Yplane. The boundary of the shape is then the zero-level set ofφ{\displaystyle \varphi }, while the shape itself is the set of points in the plane for whichφ{\displaystyle \varphi }is positive (interior of the shape) or zero (at the boundary). In the top row, the shape's topology changes as it is split in two. It is challenging to describe this transformation numerically byparameterizingthe boundary of the shape and following its evolution. An algorithm can be used to detect the moment the shape splits in two and then construct parameterizations for the two newly obtained curves. On the bottom row, however, the plane at which the level set function is sampled is translated upwards, on which the shape's change in topology is described. It is less challenging to work with a shape through its level-set function rather than with itself directly, in which a method would need to consider all the possible deformations the shape might undergo. Thus, in two dimensions, the level-set method amounts to representing aclosed curveΓ{\displaystyle \Gamma }(such as the shape boundary in our example) using anauxiliary functionφ{\displaystyle \varphi }, called the level-set function. The curveΓ{\displaystyle \Gamma }is represented as the zero-level set ofφ{\displaystyle \varphi }by and the level-set method manipulatesΓ{\displaystyle \Gamma }implicitlythrough the functionφ{\displaystyle \varphi }. This functionφ{\displaystyle \varphi }is assumed to take positive values inside the region delimited by the curveΓ{\displaystyle \Gamma }and negative values outside.[2][3] If the curveΓ{\displaystyle \Gamma }moves in the normal direction with a speedv{\displaystyle v}, then by chain rule and implicit differentiation, it can be determined that the level-set functionφ{\displaystyle \varphi }satisfies thelevel-set equation Here,|⋅|{\displaystyle |\cdot |}is theEuclidean norm(denoted customarily by single bars in partial differential equations), andt{\displaystyle t}is time. This is apartial differential equation, in particular aHamilton–Jacobi equation, and can be solved numerically, for example, by usingfinite differenceson a Cartesian grid.[2][3] However, the numerical solution of the level set equation may require advanced techniques. Simple finite difference methods fail quickly.Upwindingmethods such as theGodunov methodare considered better; however, the level set method does not guarantee preservation of the volume and shape of the set level in an advection field that maintains shape and size, for example, a uniform orrotational velocityfield. Instead, the shape of the level set may become distorted, and the level set may disappear over a few time steps. 
Therefore, high-order finite difference schemes, such as high-order essentially non-oscillatory (ENO) schemes, are often required, and even then the feasibility of long-term simulations is questionable. More advanced methods have been developed to overcome this; for example, combinations of the level-set method with marker particles advected by the velocity field.[4] Consider a unit circle in R2{\textstyle \mathbb {R} ^{2}}, shrinking in on itself at a constant rate, i.e. each point on the boundary of the circle moves along its inward-pointing normal at some fixed speed. The circle will shrink and eventually collapse down to a point. If an initial distance field is constructed (i.e. a function whose value is the signed Euclidean distance to the boundary, positive in the interior, negative in the exterior) on the initial circle, the normalized gradient of this field will be the circle normal. If a constant value is subtracted from the field over time, the zero level set (which was the initial boundary) of the new field will also be circular and will similarly collapse to a point. This is because this procedure is effectively the temporal integration of the eikonal equation with a fixed front velocity. The level-set method was developed in 1979 by Alain Dervieux,[5] and subsequently popularized by Stanley Osher and James Sethian. It has since become popular in many disciplines, such as image processing, computer graphics, computational geometry, optimization, computational fluid dynamics, and computational biology.
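The shrinking-circle example can be reproduced on a small grid: sample the signed distance function of the unit circle (positive inside, negative outside, matching the sign convention above), subtract v·t, and read off the radius of the zero level set. A rough sketch using NumPy (the grid resolution and speed are arbitrary choices):

```python
import numpy as np

# Sample the signed distance function of the unit circle on a Cartesian grid:
# positive inside the circle, zero on it, negative outside.
n = 201
xs = np.linspace(-1.5, 1.5, n)
X, Y = np.meshgrid(xs, xs)
phi0 = 1.0 - np.sqrt(X**2 + Y**2)

v = 0.2          # constant inward normal speed
for t in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    # Because |grad phi0| = 1, advancing the front by v*t amounts to
    # subtracting v*t from the level-set function (integrating the level-set equation).
    phi = phi0 - v * t
    inside = phi > 0
    if inside.any():
        # Estimate the radius of the zero level set from the enclosed area.
        cell_area = (xs[1] - xs[0]) ** 2
        radius = np.sqrt(inside.sum() * cell_area / np.pi)
        print(f"t = {t:.1f}: zero level set is roughly a circle of radius {radius:.3f}")
    else:
        print(f"t = {t:.1f}: the interface has collapsed and vanished")
```

The printed radii shrink linearly with t, and the interface disappears once v·t exceeds the initial radius, which is the behavior described above without any explicit parameterization of the moving boundary.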
https://en.wikipedia.org/wiki/Level-set_method
Aneikonal equation(fromGreekεἰκών, image[1][2]) is anon-linearfirst-order partial differential equationthat is encountered in problems ofwave propagation. The classical eikonal equation ingeometric opticsis a differential equation of the form wherex{\displaystyle x}lies in an open subset ofRn{\displaystyle \mathbb {R} ^{n}},n(x){\displaystyle n(x)}is a positive function,∇{\displaystyle \nabla }denotes thegradient, and|⋅|{\displaystyle |\cdot |}is theEuclidean norm. The functionn{\displaystyle n}is given and one seeks solutionsu{\displaystyle u}. In the context ofgeometric optics, the functionn{\displaystyle n}is therefractive indexof the medium. More generally, an eikonal equation is an equation of the form whereH{\displaystyle H}is a function of2n{\displaystyle 2n}variables. Here the functionH{\displaystyle H}is given, andu{\displaystyle u}is the solution. IfH(x,y)=|y|−n(x){\displaystyle H(x,y)=|y|-n(x)}, then equation (2) becomes (1). Eikonal equations naturally arise in theWKB method[3]and the study ofMaxwell's equations.[4]Eikonal equations provide a link betweenphysical (wave) opticsandgeometric (ray) optics. One fast computational algorithm to approximate the solution to the eikonal equation is thefast marching method. The term "eikonal" was first used in the context of geometric optics byHeinrich Bruns.[5]However, the actual equation appears earlier in the seminal work ofWilliam Rowan Hamiltonongeometric optics.[6] Suppose thatΩ{\displaystyle \Omega }is an open set with suitably smooth boundary∂Ω{\displaystyle \partial \Omega }. The solution to the eikonal equation can be interpreted as the minimal amount of time required to travel fromx{\displaystyle x}to∂Ω{\displaystyle \partial \Omega }, wheref:Ω¯→(0,+∞){\displaystyle f:{\bar {\Omega }}\to (0,+\infty )}is the speed of travel, andq:∂Ω→[0,+∞){\displaystyle q:\partial \Omega \to [0,+\infty )}is an exit-time penalty. (Alternatively this can be posed as a minimal cost-to-exit by making the right-sideC(x)/f(x){\displaystyle C(x)/f(x)}andq{\displaystyle q}an exit-cost penalty.) In the special case whenf=1{\displaystyle f=1}, the solution gives thesigned distancefrom∂Ω{\displaystyle \partial \Omega }.[7] By assuming that∇u(x){\displaystyle \nabla u(x)}exists at all points, it is easy to prove thatu(x){\displaystyle u(x)}corresponds to a time-optimal control problem usingBellman's optimality principleand a Taylor expansion.[8]Unfortunately, it is not guaranteed that∇u(x){\displaystyle \nabla u(x)}exists at all points, and more advanced techniques are necessary to prove this. This led to the development ofviscosity solutionsin the 1980s byPierre-Louis LionsandMichael G. Crandall,[9]and Lions won aFields Medalfor his contributions. The physical meaning of the eikonal equation is related to the formula whereE{\displaystyle \mathbf {E} }is the electric field strength, andV{\displaystyle V}is the electric potential. There is a similar equation for velocity potential in fluid flow and temperature in heat transfer. The physical meaning of this equation in the electromagnetic example is that any charge in the region is pushed to move at right angles to the lines[clarification needed]of constant potential, and along lines of force determined by the field of theEvector and the sign of the charge. 
Ray optics and electromagnetism are related by the fact that the eikonal equation gives a second electromagnetic formula of the same form as the potential equation above where the line of constant potential has been replaced by a line of constant phase, and the force lines have been replaced by normal vectors coming out of the constant phase line at right angles. The magnitude of these normal vectors is given by the square root of the relative permittivity. The line of constant phase can be considered the edge of one of the advancing light waves (wavefront). The normal vectors are the rays the light is traveling down in ray optics. Several fast and efficient algorithms to solve the eikonal equation have been developed since the 1990s. Many of these algorithms take advantage of algorithms developed much earlier forshortest path problemson graphs with nonnegative edge lengths.[10]These algorithms take advantage of thecausalityprovided by the physical interpretation and typically discretize the domain using amesh[11][12][13][14]orregular grid[15][16]and calculate the solution at each discretized point. Eikonal solvers on triangulated surfaces were introduced by Kimmel and Sethian in 1998.[11][12] Sethian'sfast marching method(FMM)[15][16]was the first "fast and efficient" algorithm created to solve the Eikonal equation. The original description discretizes the domainΩ⊂Rn{\displaystyle \Omega \subset \mathbb {R} ^{n}}into a regular grid and "marches" the solution from "known" values to the undiscovered regions, precisely mirroring the logic ofDijkstra's algorithm. IfΩ{\displaystyle \Omega }is discretized and hasM{\displaystyle M}meshpoints, then the computational complexity isO(Mlog⁡M){\displaystyle O(M\log M)}where thelog{\displaystyle \log }term comes from the use of a heap (typically binary). A number of modifications can be prescribed to FMM since it is classified as a label-setting method. In addition, FMM has been generalized to operate on general meshes that discretize the domain.[11][12][13][14] Label-correcting methodssuch as theBellman–Ford algorithmcan also be used to solve the discretized Eikonal equation also with numerous modifications allowed (e.g. "Small Labels First"[10][17]or "Large Labels Last"[10][18]). Two-queue methods have also been developed[19]that are essentially a version of the Bellman-Ford algorithm except two queues are used with a threshold used to determine which queue a gridpoint should be assigned to based on local information. Sweeping algorithms such as thefast sweeping method(FSM)[20]are highly efficient for solving Eikonal equations when the correspondingcharacteristic curvesdo not change direction very often.[10]These algorithms are label-correcting but do not make use of a queue or heap, and instead prescribe different orderings for the gridpoints to be updated and iterate through these orderings until convergence. Some improvements were introduced such as "locking" gridpoints[19]during a sweep if does not receive an update, but on highly refined grids and higher-dimensional spaces there is still a large overhead due to having to pass through every gridpoint. Parallel methods have been introduced that attempt to decompose the domain and perform sweeping on each decomposed subset. 
Zhao's parallel implementation decomposes the domain inton{\displaystyle n}-dimensional subsets and then runs an individual FSM on each subset.[21]Detrixhe's parallel implementation also decomposes the domain, but parallelizes each individual sweep so that processors are responsible for updating gridpoints in an(n−1){\displaystyle (n-1)}-dimensionalhyperplaneuntil the entire domain is fully swept.[22] Hybrid methodshave also been introduced that take advantage of FMM's efficiency with FSM's simplicity. For example, the Heap Cell Method (HCM) decomposes the domain into cells and performs FMM on the cell-domain, and each time a "cell" is updated FSM is performed on the local gridpoint-domain that lies within that cell.[10]A parallelized version of HCM has also been developed.[23] For simplicity assume thatΩ{\displaystyle \Omega }is discretized into a uniform grid with spacingshx{\displaystyle h_{x}}andhy{\displaystyle h_{y}}in the x and y directions, respectively. Assume that a gridpointxij{\displaystyle x_{ij}}has valueUij=U(xij)≈u(xij){\displaystyle U_{ij}=U(x_{ij})\approx u(x_{ij})}. A first-order scheme to approximate the partial derivatives is where Due to the consistent, monotone, and causal properties of this discretization[10]it is easy to show that ifUX=min(Ui−1,j,Ui+1,j){\displaystyle U_{X}=\min(U_{i-1,j},U_{i+1,j})}andUY=min(Ui,j−1,Ui,j+1){\displaystyle U_{Y}=\min(U_{i,j-1},U_{i,j+1})}and|UX/hx−UY/hy|≤1/fij{\displaystyle |U_{X}/h_{x}-U_{Y}/h_{y}|\leq 1/f_{ij}}then which can be solved as a quadratic. In the limiting case ofhx=hy=h{\displaystyle h_{x}=h_{y}=h}, this reduces to This solution will always exist as long as|UX−UY|≤2h/fij{\displaystyle |U_{X}-U_{Y}|\leq {\sqrt {2}}h/f_{ij}}is satisfied and is larger than both,UX{\displaystyle U_{X}}andUY{\displaystyle U_{Y}}, as long as|UX−UY|≤h/fij{\displaystyle |U_{X}-U_{Y}|\leq h/f_{ij}}. If|UX/hx−UY/hy|≥1/fij{\displaystyle |U_{X}/h_{x}-U_{Y}/h_{y}|\geq 1/f_{ij}}, a lower-dimensional update must be performed by assuming one of the partial derivatives is0{\displaystyle 0}: Assume that a grid pointx{\displaystyle x}has valueU=U(x)≈u(x){\displaystyle U=U(x)\approx u(x)}. Repeating the same steps as in then=2{\displaystyle n=2}case we can use a first-order scheme to approximate the partial derivatives. LetUi{\displaystyle U_{i}}be the minimum of the values of the neighbors in the±ei{\displaystyle \pm \mathbf {e} _{i}}directions, whereei{\displaystyle \mathbf {e} _{i}}is astandard unit basis vector. The approximation is then Solving this quadratic equation forU{\displaystyle U}yields: If the discriminant in the square root is negative, then a lower-dimensional update must be performed (i.e. one of the partial derivatives is0{\displaystyle 0}). Ifn=2{\displaystyle n=2}then perform the one-dimensional update Ifn≥3{\displaystyle n\geq 3}then perform ann−1{\displaystyle n-1}dimensional update using the values{U1,…,Un}∖{Ui}{\displaystyle \{U_{1},\ldots ,U_{n}\}\setminus \{U_{i}\}}for everyi=1,…,n{\displaystyle i=1,\ldots ,n}and choose the smallest. An eikonal equation is one of the form The planex=(0,x′){\displaystyle x=(0,x')}can be thought of as the initial condition, by thinking ofx1{\displaystyle x_{1}}ast.{\displaystyle t.}We could also solve the equation on a subset of this plane, or on a curved surface, with obvious modifications. 
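The scheme above can be exercised with a short solver. The sketch below assumes the standard first-order upwind discretization on a uniform grid with hx = hy = h: when the two upwind neighbor values are close enough it solves the quadratic ((U − UX)/h)² + ((U − UY)/h)² = 1/f², and otherwise it falls back to the one-dimensional update U = min(UX, UY) + h/f. For brevity it uses Gauss-Seidel sweeps in alternating orderings (a fast-sweeping-style iteration) rather than the heap-based fast marching method; with f ≡ 1 and a single point source the result approximates the Euclidean distance to the source.

```python
import math

def local_update(ux, uy, h, f):
    """First-order upwind update at one grid point, given the smaller neighbor
    values ux, uy in the x- and y-directions."""
    if abs(ux - uy) < h / f:
        # Two-sided update: solve ((U - ux)/h)^2 + ((U - uy)/h)^2 = 1/f^2
        return 0.5 * (ux + uy + math.sqrt(2.0 * (h / f) ** 2 - (ux - uy) ** 2))
    # One-sided (lower-dimensional) update
    return min(ux, uy) + h / f

def solve_eikonal(n, h, speed, source):
    """Sweep the grid in alternating orderings until the values stop changing."""
    INF = float("inf")
    U = [[INF] * n for _ in range(n)]
    U[source[0]][source[1]] = 0.0
    orderings = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    for _ in range(20):                      # a few sweeps suffice for constant speed
        changed = False
        for sx, sy in orderings:
            for i in range(n)[::sx]:
                for j in range(n)[::sy]:
                    ux = min(U[i - 1][j] if i > 0 else INF,
                             U[i + 1][j] if i < n - 1 else INF)
                    uy = min(U[i][j - 1] if j > 0 else INF,
                             U[i][j + 1] if j < n - 1 else INF)
                    if ux == INF and uy == INF:
                        continue
                    if ux == INF or uy == INF:
                        new = min(ux, uy) + h / speed
                    else:
                        new = local_update(ux, uy, h, speed)
                    if new < U[i][j]:
                        U[i][j] = new
                        changed = True
        if not changed:
            break
    return U

n, h = 101, 0.01
U = solve_eikonal(n, h, speed=1.0, source=(50, 50))
exact = math.hypot(0.3, 0.4)     # distance from the source to grid point (80, 90)
print(U[80][90], exact)          # first-order accurate approximation of 0.5
```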
The eikonal equation shows up ingeometrical optics, which is a way of studying solutions of thewave equationc2|∇xu|2=|∂tu|2{\displaystyle c^{2}|\nabla _{x}u|^{2}=|\partial _{t}u|^{2}}, wherec(x){\displaystyle c(x)}andu(x,t){\displaystyle u(x,t)}. In geometric optics, the eikonal equation describes the phase fronts of waves. Under reasonable hypothesis on the "initial" data, the eikonal equation admits a local solution, but a global smooth solution (e.g. a solution for all time in the geometrical optics case) is not possible. The reason is thatcausticsmay develop. In the geometrical optics case, this means that wavefronts cross. We can solve the eikonal equation using the method of characteristics. One must impose the "non-characteristic" hypothesis∂p1H(x,p)≠0{\displaystyle \partial _{p_{1}}H(x,p)\neq 0}along the initial hypersurfacex=(0,x′){\displaystyle x=(0,x')}, whereH=H(x,p) andp= (p1,...,pn) is the variable that gets replaced by ∇u. Herex= (x1,...,xn) = (t,x′). First, solve the problemH(x,ξ(x))=0{\displaystyle H(x,\xi (x))=0},ξ(x)=∇u(x),x∈H{\displaystyle \xi (x)=\nabla u(x),x\in H}. This is done by defining curves (and values ofξ{\displaystyle \xi }on those curves) as That these equations have a solution for some interval0≤s<s1{\displaystyle 0\leq s<s_{1}}follows from standard ODE theorems (using the non-characteristic hypothesis). These curves fill out anopen setaround the planex=(0,x′){\displaystyle x=(0,x')}. Thus the curves define the value ofξ{\displaystyle \xi }in an open set about our initial plane. Once defined as such it is easy to see using the chain rule that∂sH(x(s),ξ(s))=0{\displaystyle \partial _{s}H(x(s),\xi (s))=0}, and thereforeH=0{\displaystyle H=0}along these curves. We want our solutionu{\displaystyle u}to satisfy∇u=ξ{\displaystyle \nabla u=\xi }, or more specifically, for everys{\displaystyle s},(∇u)(x(s))=ξ(x(s)).{\displaystyle (\nabla u)(x(s))=\xi (x(s)).}Assuming for a minute that this is possible, for any solutionu(x){\displaystyle u(x)}we must have and therefore In other words, the solutionu{\displaystyle u}will be given in a neighborhood of the initial plane by an explicit equation. However, since the different pathsx(t){\displaystyle x(t)}, starting from different initial points may cross, the solution may become multi-valued, at which point we have developed caustics. We also have (even before showing thatu{\displaystyle u}is a solution) It remains to show thatξ{\displaystyle \xi }, which we have defined in a neighborhood of our initial plane, is the gradient of some functionu{\displaystyle u}. This will follow if we show that the vector fieldξ{\displaystyle \xi }is curl free. Consider the first term in the definition ofξ{\displaystyle \xi }. This term,ξ(x(0))=∇u(x(0)){\displaystyle \xi (x(0))=\nabla u(x(0))}is curl free as it is the gradient of a function. As for the other term, we note The result follows.
https://en.wikipedia.org/wiki/Eikonal_equation
Aparallelof acurveis theenvelopeof a family ofcongruentcirclescentered on the curve. It generalises the concept ofparallel (straight) lines. It can also be defined as a curve whose points are at a constantnormal distancefrom a given curve.[1]These two definitions are not entirely equivalent as the latter assumessmoothness, whereas the former does not.[2] Incomputer-aided designthe preferred term for a parallel curve isoffset curve.[2][3][4](In other geometric contexts,the term offsetcan also refer totranslation.[5]) Offset curves are important, for example, innumerically controlledmachining, where they describe, for example, the shape of the cut made by a round cutting tool of a two-axis machine. The shape of the cut is offset from the trajectory of the cutter by a constant distance in the direction normal to the cutter trajectory at every point.[6] In the area of 2Dcomputer graphicsknown asvector graphics, the (approximate) computation of parallel curves is involved in one of the fundamental drawing operations, called stroking, which is typically applied topolylinesorpolybeziers(themselves called paths) in that field.[7] Except in the case of a line orcircle, the parallel curves have a more complicated mathematical structure than the progenitor curve.[1]For example, even if the progenitor curve issmooth, its offsets may not be so; this property is illustrated in the top figure, using asine curveas progenitor curve.[2]In general, even if a curve isrational, its offsets may not be so. For example, the offsets of a parabola are rational curves, but the offsets of anellipseor of ahyperbolaare not rational, even though these progenitor curves themselves are rational.[3] The notion also generalizes to 3Dsurfaces, where it is called anoffset surfaceorparallel surface.[8]Increasing asolidvolume by a (constant) distance offset is sometimes calleddilation.[9]The opposite operation is sometimes calledshelling.[8]Offset surfaces are important innumerically controlledmachining, where they describe the shape of the cut made by a ball nose end mill of a three-axis machine.[10]Other shapes of cutting bits can be modelled mathematically by general offset surfaces.[11] If there is a regular parametric representationx→=(x(t),y(t)){\displaystyle {\vec {x}}=(x(t),y(t))}of the given curve available, the second definition of a parallel curve (s. above) leads to the following parametric representation of the parallel curve with distance|d|{\displaystyle |d|}: In cartesian coordinates: The distance parameterd{\displaystyle d}may be negative. In this case, one gets a parallel curve on the opposite side of the curve (see diagram on the parallel curves of a circle). One can easily check that a parallel curve of a line is a parallel line in the common sense, and the parallel curve of a circle is a concentric circle. If the given curve is polynomial (meaning thatx(t){\displaystyle x(t)}andy(t){\displaystyle y(t)}are polynomials), then the parallel curves are usually not polynomial. In CAD area this is a drawback, because CAD systems use polynomials or rational curves. In order to get at least rational curves, the square root of the representation of the parallel curve has to be solvable. Such curves are calledpythagorean hodograph curvesand were investigated by R.T. Farouki.[14] Generally the analytic representation of a parallel curve of animplicit curveis not possible. Only for the simple cases of lines and circles the parallel curves can be described easily. 
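The parametric construction described above amounts to stepping a signed distance d along the unit normal, x_d(t) = x(t) + d·n(t), with n(t) obtained by rotating the normalized tangent. A small sketch with a sine curve as the progenitor, as in the figure mentioned above (derivatives are written analytically here; in a CAD setting they would come from the curve representation):

```python
import math

def curve(t):
    # Progenitor curve: a sine curve
    return (t, math.sin(t))

def derivative(t):
    return (1.0, math.cos(t))

def offset_point(t, d):
    """Point of the parallel (offset) curve at parameter t and signed distance d."""
    dx, dy = derivative(t)
    norm = math.hypot(dx, dy)
    # Unit normal obtained by rotating the unit tangent by 90 degrees
    nx, ny = dy / norm, -dx / norm
    x, y = curve(t)
    return (x + d * nx, y + d * ny)

d = 0.3
for t in (0.0, 0.5, 1.0, 1.5708, 3.0):
    px, py = curve(t)
    qx, qy = offset_point(t, d)
    # The offset point lies at distance |d| from the curve point, along the normal
    assert abs(math.hypot(qx - px, qy - py) - abs(d)) < 1e-12
    print(f"t = {t:.4f}: curve ({px:.3f}, {py:.3f}) -> offset ({qx:.3f}, {qy:.3f})")
```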
For example: In general, presuming certain conditions, one can prove the existence of anoriented distance functionh(x,y){\displaystyle h(x,y)}. In practice one has to treat it numerically.[15]Considering parallel curves the following is true: Example:The diagram shows parallel curves of the implicit curve with equationf(x,y)=x4+y4−1=0.{\displaystyle \;f(x,y)=x^{4}+y^{4}-1=0\;.}Remark:The curvesf(x,y)=x4+y4−1=d{\displaystyle \;f(x,y)=x^{4}+y^{4}-1=d\;}are not parallel curves, because|grad⁡f(x,y)|=1{\displaystyle \;|\operatorname {grad} f(x,y)|=1\;}is not true in the area of interest. And:[17] When determining the cutting path of part with a sharp corner formachining, you must define the parallel (offset) curve to a given curve that has a discontinuous normal at the corner. Even though the given curve is not smooth at the sharp corner, its parallel curve may be smooth with a continuous normal, or it may havecuspswhen the distance from the curve matches the radius ofcurvatureat the sharp corner. As describedabove, the parametric representation of a parallel curve,x→d(t){\displaystyle {\vec {x}}_{d}(t)}, to a given curver,x→(t){\displaystyle {\vec {x}}(t)}, with distance|d|{\displaystyle |d|}is: At a sharp corner (t=tc{\displaystyle t=t_{c}}), the normal tox→(tc){\displaystyle {\vec {x}}(t_{c})}given byn→(tc){\displaystyle {\vec {n}}(t_{c})}is discontinuous, meaning theone-sided limitof the normal from the leftn→(tc−){\displaystyle {\vec {n}}(t_{c}^{-})}is unequal to the limit from the rightn→(tc+){\displaystyle {\vec {n}}(t_{c}^{+})}. Mathematically, However, we can define a normal fan[11]n→f(α){\displaystyle {\vec {n}}_{f}(\alpha )}that provides aninterpolantbetweenn→(tc−){\displaystyle {\vec {n}}(t_{c}^{-})}andn→(tc+){\displaystyle {\vec {n}}(t_{c}^{+})}, and usen→f(α){\displaystyle {\vec {n}}_{f}(\alpha )}in place ofn→(tc){\displaystyle {\vec {n}}(t_{c})}at the sharp corner: The resulting definition of the parallel curvex→d(t){\displaystyle {\vec {x}}_{d}(t)}provides the desired behavior: In general, the parallel curve of aBézier curveis not another Bézier curve, a result proved by Tiller and Hanson in 1984.[18]Thus, in practice, approximation techniques are used. Any desired level of accuracy is possible by repeatedly subdividing the curve, though better techniques require fewer subdivisions to attain the same level of accuracy. A 1997 survey by Elber, Lee and Kim[19]is widely cited, though better techniques have been proposed more recently. A modern technique based oncurve fitting, with references and comparisons to other algorithms, as well as open source JavaScript source code, was published in a blog post[20]in September 2022. Another efficient algorithm for offsetting is the level approach described byKimmeland Bruckstein (1993).[21] Offset surfaces are important innumerically controlledmachining, where they describe the shape of the cut made by a ball nose end mill of a three-axis mill.[10]If there is a regular parametric representationx→(u,v)=(x(u,v),y(u,v),z(u,v)){\displaystyle {\vec {x}}(u,v)=(x(u,v),y(u,v),z(u,v))}of the given surface available, the second definition of a parallel curve (see above) generalizes to the following parametric representation of the parallel surface with distance|d|{\displaystyle |d|}: Distance parameterd{\displaystyle d}may be negative, too. In this case one gets a parallel surface on the opposite side of the surface (see similar diagram on the parallel curves of a circle). 
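The surface version can be checked numerically in the same way: take the unit normal from the cross product of the parameter derivatives and move a distance d along it. The sketch below does this for a parameterized sphere of radius R and confirms the concentric-sphere property noted just below, namely that every offset point lies at radius R + d (partial derivatives are approximated by simple finite differences for brevity):

```python
import math

R = 2.0   # radius of the progenitor sphere

def surface(u, v):
    # Standard parameterization of a sphere of radius R
    return (R * math.sin(u) * math.cos(v),
            R * math.sin(u) * math.sin(v),
            R * math.cos(u))

def unit_normal(u, v, h=1e-6):
    # Outward unit normal from the cross product of numerical partial derivatives
    p = surface(u, v)
    pu = [(a - b) / h for a, b in zip(surface(u + h, v), p)]
    pv = [(a - b) / h for a, b in zip(surface(u, v + h), p)]
    n = (pu[1] * pv[2] - pu[2] * pv[1],
         pu[2] * pv[0] - pu[0] * pv[2],
         pu[0] * pv[1] - pu[1] * pv[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def offset_surface(u, v, d):
    p, n = surface(u, v), unit_normal(u, v)
    return tuple(pi + d * ni for pi, ni in zip(p, n))

d = 0.5
for u, v in [(0.4, 0.3), (1.0, 2.0), (2.0, 4.0)]:
    q = offset_surface(u, v, d)
    print(math.sqrt(sum(c * c for c in q)))   # each point lies at radius R + d = 2.5
```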
One easily checks that a parallel surface of a plane is a parallel plane in the common sense, and the parallel surface of a sphere is a concentric sphere. Note the similarity to the geometric properties of parallel curves.

The problem generalizes fairly obviously to higher dimensions, e.g. to offset surfaces, and slightly less trivially to pipe surfaces.[23] Note that the terminology for the higher-dimensional versions varies even more widely than in the planar case; e.g., other authors speak of parallel fibers, ribbons, and tubes.[24] For curves embedded in 3D surfaces the offset may be taken along a geodesic.[25]

Another way to generalize the notion is (even in 2D) to consider a variable distance, e.g. parametrized by another curve.[22] One can, for example, stroke (envelope) with an ellipse instead of a circle,[22] as is possible, for example, in METAFONT.[26]

More recently Adobe Illustrator has added a somewhat similar facility in version CS5, although the control points for the variable width are specified visually.[27] In contexts where it is important to distinguish between constant and variable distance offsetting, the acronyms CDO and VDO are sometimes used.[9]

Assume there is a regular parametric representation of a curve, $\vec{x}(t)=(x(t),y(t))$, and a second curve that can be parameterized by its unit normal, $\vec{d}(\vec{n})$, where the normal of $\vec{d}(\vec{n})$ is $\vec{n}$ (this parameterization by normal exists for curves whose curvature is strictly positive or negative, and thus convex, smooth, and not straight). The parametric representation of the general offset curve of $\vec{x}(t)$ offset by $\vec{d}(\vec{n})$ is

$$\vec{x}_d(t)=\vec{x}(t)+\vec{d}(\vec{n}(t)),$$

where $\vec{n}(t)$ is the unit normal of $\vec{x}(t)$. Note that the trivial offset, $\vec{d}(\vec{n})=d\,\vec{n}$, gives ordinary parallel (offset) curves.

General offset surfaces describe the shape of cuts made by a variety of cutting bits used by three-axis end mills in numerically controlled machining.[11] Assume there is a regular parametric representation of a surface, $\vec{x}(u,v)=(x(u,v),y(u,v),z(u,v))$, and a second surface that can be parameterized by its unit normal, $\vec{d}(\vec{n})$, where the normal of $\vec{d}(\vec{n})$ is $\vec{n}$ (this parameterization by normal exists for surfaces whose Gaussian curvature is strictly positive, and thus convex, smooth, and not flat). The parametric representation of the general offset surface of $\vec{x}(u,v)$ offset by $\vec{d}(\vec{n})$ is

$$\vec{x}_d(u,v)=\vec{x}(u,v)+\vec{d}(\vec{n}(u,v)),$$

where $\vec{n}(u,v)$ is the unit normal of $\vec{x}(u,v)$. Note that the trivial offset, $\vec{d}(\vec{n})=d\,\vec{n}$, gives ordinary parallel (offset) surfaces. Note the similarity to the geometric properties of general offset curves.
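The general offset can also be sketched numerically. The snippet below "strokes" a convex curve with an axis-aligned ellipse instead of a circle; the helper names are assumptions, and the formula for the point of the ellipse with a prescribed outward unit normal follows from elementary geometry of the ellipse, not from the article.

```python
import numpy as np

def ellipse_point_with_normal(n, a, b):
    """Point on the ellipse x^2/a^2 + y^2/b^2 = 1 whose outward unit normal is n;
    this is the 'parameterization by the unit normal' used for general offsets."""
    p = np.array([a * a * n[0], b * b * n[1]])
    return p / np.sqrt(a * a * n[0] ** 2 + b * b * n[1] ** 2)

def general_offset(x, y, a, b):
    """General offset x_d(t) = x(t) + d(n(t)) of a sampled convex curve, where the
    offsetting curve d is an ellipse parameterized by its unit normal."""
    dx, dy = np.gradient(x), np.gradient(y)
    speed = np.hypot(dx, dy)
    nx, ny = dy / speed, -dx / speed                   # unit normal of the progenitor curve
    offs = np.array([ellipse_point_with_normal((c, s), a, b) for c, s in zip(nx, ny)])
    return x + offs[:, 0], y + offs[:, 1]

# Example: a circle of radius 2 offset by an ellipse with semi-axes 0.5 and 0.25.
t = np.linspace(0.0, 2.0 * np.pi, 200)
xo, yo = general_offset(2.0 * np.cos(t), 2.0 * np.sin(t), a=0.5, b=0.25)
```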
The geometric properties listed above for general offset curves and surfaces can be derived for offsets of arbitrary dimension. Assume there is a regular parametric representation of an n-dimensional surface, $\vec{x}(\vec{u})$, where the dimension of $\vec{u}$ is n−1. Also assume there is a second n-dimensional surface that can be parameterized by its unit normal, $\vec{d}(\vec{n})$, where the normal of $\vec{d}(\vec{n})$ is $\vec{n}$ (this parameterization by normal exists for surfaces whose Gaussian curvature is strictly positive, and thus convex, smooth, and not flat). The parametric representation of the general offset surface of $\vec{x}(\vec{u})$ offset by $\vec{d}(\vec{n})$ is

$$\vec{x}_d(\vec{u})=\vec{x}(\vec{u})+\vec{d}(\vec{n}(\vec{u})).$$

First, notice that the normal of $\vec{x}(\vec{u})$ = the normal of $\vec{d}(\vec{n}(\vec{u}))$ = $\vec{n}(\vec{u})$, by definition. Now, apply the differential with respect to $\vec{u}$ to $\vec{x}_d$, which gives its tangent vectors spanning its tangent plane:

$$\partial\vec{x}_d=\partial\vec{x}+\partial\vec{d}.$$

Notice that the tangent vectors of $\vec{x}_d$ are the sum of the tangent vectors of $\vec{x}(\vec{u})$ and of its offset $\vec{d}(\vec{n})$, which share the same unit normal. Thus, the general offset surface shares the same tangent plane and normal with $\vec{x}(\vec{u})$ and $\vec{d}(\vec{n}(\vec{u}))$. That aligns with the nature of envelopes.

We now consider the Weingarten equations for the shape operator, which can be written as $\partial\vec{n}=-\partial\vec{x}\,S$. If $S$ is invertible, $\partial\vec{x}=-\partial\vec{n}\,S^{-1}$. Recall that the principal curvatures of a surface are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gauss curvature is its determinant, and the mean curvature is half its trace. The inverse of the shape operator holds these same values for the radii of curvature.

Substituting into the equation for the differential of $\vec{x}_d$ (writing $S_n$ for the shape operator of the offsetting surface $\vec{d}$), we get

$$\partial\vec{x}_d=\partial\vec{x}-\partial\vec{n}\,S_n^{-1}.$$

Next, we use the Weingarten equations again to replace $\partial\vec{n}$:

$$\partial\vec{x}_d=\partial\vec{x}+\partial\vec{x}\,S\,S_n^{-1}=\partial\vec{x}\,(I+S\,S_n^{-1}).$$

Then, we solve for $\partial\vec{x}$ and multiply both sides by $-S$ to get back to the Weingarten equations, this time for $\partial\vec{x}_d$:

$$\partial\vec{n}=-\partial\vec{x}_d\,(I+S\,S_n^{-1})^{-1}S.$$

Thus, $S_d=(I+S\,S_n^{-1})^{-1}S$, and inverting both sides gives $S_d^{-1}=S^{-1}+S_n^{-1}$.
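As a quick sanity check of the final relation (not taken from the article), consider a sphere of radius r offset by a sphere of radius ρ, i.e. the trivial offset: both shape operators are multiples of the identity, and the formulas must reproduce the curvature 1/(r + ρ) of the resulting sphere.

```python
import numpy as np

r, rho = 2.0, 0.5
I = np.eye(2)
S = I / r                      # shape operator of the base sphere (principal curvatures 1/r)
S_n = I / rho                  # shape operator of the offsetting sphere

S_d = np.linalg.inv(I + S @ np.linalg.inv(S_n)) @ S
assert np.allclose(S_d, I / (r + rho))                                          # curvatures 1/(r + rho)
assert np.allclose(np.linalg.inv(S_d), np.linalg.inv(S) + np.linalg.inv(S_n))   # S_d^{-1} = S^{-1} + S_n^{-1}
```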
https://en.wikipedia.org/wiki/Parallel_curve
Arc length is the distance between two points along a section of a curve. Development of a formulation of arc length suitable for applications to mathematics and the sciences is a problem in vector calculus and in differential geometry. In the most basic formulation of arc length for a vector-valued curve (thought of as the trajectory of a particle), the arc length is obtained by integrating the magnitude of the velocity vector over the curve with respect to time. Thus the length of a continuously differentiable curve $(x(t),y(t))$, for $a\leq t\leq b$, in the Euclidean plane is given as the integral

$$L=\int_a^b\sqrt{x'(t)^{2}+y'(t)^{2}}\,dt$$

(because $\sqrt{x'(t)^{2}+y'(t)^{2}}$ is the magnitude of the velocity vector $(x'(t),y'(t))$, i.e., the particle's speed). The defining integral of arc length does not always have a closed-form expression, and numerical integration may be used instead to obtain numerical values of arc length. Determining the length of an irregular arc segment by approximating the arc segment as connected (straight) line segments is also called curve rectification. For a rectifiable curve these approximations do not get arbitrarily large (so the curve has a finite length).

A curve in the plane can be approximated by connecting a finite number of points on the curve using (straight) line segments to create a polygonal path. Since it is straightforward to calculate the length of each linear segment (using the Pythagorean theorem in Euclidean space, for example), the total length of the approximation can be found by summation of the lengths of each linear segment; that approximation is known as the (cumulative) chordal distance.[1]

If the curve is not already a polygonal path, then using a progressively larger number of line segments of smaller lengths will result in better curve length approximations. Such a curve length determination by approximating the curve as connected (straight) line segments is called rectification of a curve. The lengths of the successive approximations will not decrease and may keep increasing indefinitely, but for smooth curves they will tend to a finite limit as the lengths of the segments get arbitrarily small.

For some curves, there is a smallest number $L$ that is an upper bound on the length of all polygonal approximations (rectification). These curves are called rectifiable and the arc length is defined as the number $L$.

A signed arc length can be defined to convey a sense of orientation or "direction" with respect to a reference point taken as origin in the curve (see also: curve orientation and signed distance).[2]

Let $f\colon[a,b]\to\mathbb{R}^{n}$ be a continuously differentiable function (i.e., one whose derivative is a continuous function). The length of the curve is given by the formula

$$L(f)=\int_a^b|f'(t)|\,dt$$

where $|f'(t)|$ is the Euclidean norm of the tangent vector $f'(t)$ to the curve.
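A minimal numerical sketch (not from the article) of the two quantities involved, the polygonal (chord) sum and a Riemann sum of |f′|, for an assumed example curve f(t) = (t, t²):

```python
import numpy as np

f = lambda t: np.stack([t, t ** 2], axis=-1)
f_prime = lambda t: np.stack([np.ones_like(t), 2.0 * t], axis=-1)
a, b = 0.0, 1.0

for n in (4, 16, 64, 256):
    t = np.linspace(a, b, n + 1)
    chord_sum = np.sum(np.linalg.norm(np.diff(f(t), axis=0), axis=1))           # polygonal approximation
    riemann_sum = np.sum(np.linalg.norm(f_prime(t[1:]), axis=1)) * (b - a) / n  # sum of |f'(t_i)| * dt
    print(n, chord_sum, riemann_sum)    # both tend to the same limit, the arc length integral
```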
To justify this formula, define the arc length as the limit of the sum of linear segment lengths for a regular partition of $[a,b]$ as the number of segments approaches infinity. This means

$$L(f)=\lim_{N\to\infty}\sum_{i=1}^{N}\left|f(t_i)-f(t_{i-1})\right|$$

where $t_i=a+i(b-a)/N=a+i\,\Delta t$ with $\Delta t=\frac{b-a}{N}=t_i-t_{i-1}$ for $i=0,1,\dotsc,N$. This definition is equivalent to the standard definition of arc length as an integral:

$$L(f)=\lim_{N\to\infty}\sum_{i=1}^{N}\left|f(t_i)-f(t_{i-1})\right|=\lim_{N\to\infty}\sum_{i=1}^{N}\left|\frac{f(t_i)-f(t_{i-1})}{\Delta t}\right|\Delta t=\int_a^b\left|f'(t)\right|\,dt.$$

The last equality is proved by the following steps. Since $f$ is continuously differentiable, the fundamental theorem of calculus gives $f(t_i)-f(t_{i-1})=\Delta t\int_0^1 f'(t_{i-1}+\theta(t_i-t_{i-1}))\,d\theta$. With this result, the difference between the two sums becomes

$$\sum_{i=1}^{N}\left|\int_0^1 f'(t_{i-1}+\theta(t_i-t_{i-1}))\,d\theta\right|\Delta t-\sum_{i=1}^{N}\left|f'(t_i)\right|\Delta t.$$

Terms are rearranged so that it becomes

$$\begin{aligned}&\Delta t\sum_{i=1}^{N}\left(\left|\int_0^1 f'(t_{i-1}+\theta(t_i-t_{i-1}))\,d\theta\right|-\int_0^1\left|f'(t_i)\right|d\theta\right)\\&\qquad\leqq\Delta t\sum_{i=1}^{N}\left(\int_0^1\left|f'(t_{i-1}+\theta(t_i-t_{i-1}))\right|\,d\theta-\int_0^1\left|f'(t_i)\right|d\theta\right)\\&\qquad=\Delta t\sum_{i=1}^{N}\int_0^1\left|f'(t_{i-1}+\theta(t_i-t_{i-1}))\right|-\left|f'(t_i)\right|\,d\theta\end{aligned}$$

where in the leftmost side $\left|f'(t_i)\right|=\int_0^1\left|f'(t_i)\right|d\theta$ is used. Since $f'$ is continuous on the compact interval $[a,b]$, it is uniformly continuous, so $\left|\,\left|f'(t_{i-1}+\theta(t_i-t_{i-1}))\right|-\left|f'(t_i)\right|\,\right|<\varepsilon$ whenever $N>(b-a)/\delta(\varepsilon)$, so that $\Delta t<\delta(\varepsilon)$. It follows that

$$\Delta t\sum_{i=1}^{N}\left(\left|\int_0^1 f'(t_{i-1}+\theta(t_i-t_{i-1}))\,d\theta\right|-\left|f'(t_i)\right|\right)<\varepsilon N\,\Delta t$$

with $\left|f'(t_i)\right|=\int_0^1\left|f'(t_i)\right|d\theta$, $\varepsilon N\,\Delta t=\varepsilon(b-a)$, and $N>(b-a)/\delta(\varepsilon)$. In the limit $N\to\infty$, $\delta(\varepsilon)\to 0$ so $\varepsilon\to 0$, and thus the left side of $<$ approaches $0$. In other words, $\sum_{i=1}^{N}\left|\frac{f(t_i)-f(t_{i-1})}{\Delta t}\right|\Delta t=\sum_{i=1}^{N}\left|f'(t_i)\right|\Delta t$ in this limit, and the right side of this equality is just the Riemann integral of $\left|f'(t)\right|$ on $[a,b]$. This definition of arc length shows that the length of a curve represented by a continuously differentiable function $f\colon[a,b]\to\mathbb{R}^{n}$ on $[a,b]$ is always finite, i.e., the curve is rectifiable.
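The limit just derived, and the boundedness that makes the curve rectifiable, can be illustrated numerically: chord sums over ever finer (here random) partitions stay below the exact length and approach it, which is also the content of the supremum characterisation given next. The half-circle example and the use of random partitions are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: np.stack([np.cos(t), np.sin(t)], axis=-1)
exact = np.pi                                          # arc length of the half circle on [0, pi]

for size in (5, 50, 500):
    t = np.sort(np.concatenate(([0.0, np.pi], rng.uniform(0.0, np.pi, size))))
    chord_sum = np.sum(np.linalg.norm(np.diff(f(t), axis=0), axis=1))
    print(size, chord_sum, chord_sum <= exact)         # every partition sum is below pi and tends to it
```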
The definition of arc length of a smooth curve as the integral of the norm of the derivative is equivalent to the definition

$$L(f)=\sup\sum_{i=1}^{N}\left|f(t_i)-f(t_{i-1})\right|$$

where the supremum is taken over all possible partitions $a=t_0<t_1<\dots<t_{N-1}<t_N=b$ of $[a,b]$.[3] This definition as the supremum of all possible partition sums is also valid if $f$ is merely continuous, not differentiable.

A curve can be parameterized in infinitely many ways. Let $\varphi\colon[a,b]\to[c,d]$ be any continuously differentiable bijection. Then $g=f\circ\varphi^{-1}\colon[c,d]\to\mathbb{R}^{n}$ is another continuously differentiable parameterization of the curve originally defined by $f$. The arc length of the curve is the same regardless of the parameterization used to define the curve:

$$\begin{aligned}L(f)&=\int_a^b\big|f'(t)\big|\,dt=\int_a^b\big|g'(\varphi(t))\varphi'(t)\big|\,dt\\&=\int_a^b\big|g'(\varphi(t))\big|\varphi'(t)\,dt\quad\text{in the case }\varphi\text{ is non-decreasing}\\&=\int_c^d\big|g'(u)\big|\,du\quad\text{using integration by substitution}\\&=L(g).\end{aligned}$$

If a planar curve in $\mathbb{R}^{2}$ is defined by the equation $y=f(x)$, where $f$ is continuously differentiable, then it is simply a special case of a parametric equation where $x=t$ and $y=f(t)$. The Euclidean distance of each infinitesimal segment of the arc can be given by

$$\sqrt{dx^{2}+dy^{2}}=\sqrt{1+\left(\frac{dy}{dx}\right)^{2}}\,dx.$$

The arc length is then given by

$$s=\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^{2}}\,dx.$$

Curves with closed-form solutions for arc length include the catenary, circle, cycloid, logarithmic spiral, parabola, semicubical parabola and straight line. The lack of a closed-form solution for the arc length of elliptic and hyperbolic arcs led to the development of the elliptic integrals.

In most cases, including even simple curves, there are no closed-form solutions for arc length and numerical integration is necessary. Numerical integration of the arc length integral is usually very efficient. For example, consider the problem of finding the length of a quarter of the unit circle by numerically integrating the arc length integral. The upper half of the unit circle can be parameterized as $y=\sqrt{1-x^{2}}$. The interval $x\in\left[-\sqrt{2}/2,\sqrt{2}/2\right]$ corresponds to a quarter of the circle.
Since $dy/dx=-x\big/\sqrt{1-x^{2}}$ and $1+(dy/dx)^{2}=1\big/\left(1-x^{2}\right)$, the length of a quarter of the unit circle is

$$\int_{-\sqrt{2}/2}^{\sqrt{2}/2}\frac{dx}{\sqrt{1-x^{2}}}\,.$$

The 15-point Gauss–Kronrod rule estimate for this integral of 1.570796326808177 differs from the true length of

$$\arcsin x\,\Big|_{-\sqrt{2}/2}^{\sqrt{2}/2}=\frac{\pi}{2}$$

by $1.3\times 10^{-11}$, and the 16-point Gaussian quadrature rule estimate of 1.570796326794727 differs from the true length by only $1.7\times 10^{-13}$. This means it is possible to evaluate this integral to almost machine precision with only 16 integrand evaluations.

Let $\mathbf{x}(u,v)$ be a surface mapping and let $\mathbf{C}(t)=(u(t),v(t))$ be a curve on this surface. The integrand of the arc length integral is $\left|\left(\mathbf{x}\circ\mathbf{C}\right)'(t)\right|$. Evaluating the derivative requires the chain rule for vector fields:

$$D(\mathbf{x}\circ\mathbf{C})=(\mathbf{x}_u\ \mathbf{x}_v)\binom{u'}{v'}=\mathbf{x}_u u'+\mathbf{x}_v v'.$$

The squared norm of this vector is

$$\left(\mathbf{x}_u u'+\mathbf{x}_v v'\right)\cdot\left(\mathbf{x}_u u'+\mathbf{x}_v v'\right)=g_{11}\left(u'\right)^{2}+2g_{12}u'v'+g_{22}\left(v'\right)^{2}$$

(where $g_{ij}$ is the first fundamental form coefficient), so the integrand of the arc length integral can be written as $\sqrt{g_{ab}\left(u^{a}\right)'\left(u^{b}\right)'\,}$ (where $u^{1}=u$ and $u^{2}=v$).

Let $\mathbf{C}(t)=(r(t),\theta(t))$ be a curve expressed in polar coordinates. The mapping that transforms from polar coordinates to rectangular coordinates is

$$\mathbf{x}(r,\theta)=(r\cos\theta,r\sin\theta).$$

The integrand of the arc length integral is $\left|\left(\mathbf{x}\circ\mathbf{C}\right)'(t)\right|$. The chain rule for vector fields shows that $D(\mathbf{x}\circ\mathbf{C})=\mathbf{x}_r r'+\mathbf{x}_\theta\theta'$. So the squared integrand of the arc length integral is

$$\left(\mathbf{x}_r\cdot\mathbf{x}_r\right)\left(r'\right)^{2}+2\left(\mathbf{x}_r\cdot\mathbf{x}_\theta\right)r'\theta'+\left(\mathbf{x}_\theta\cdot\mathbf{x}_\theta\right)\left(\theta'\right)^{2}=\left(r'\right)^{2}+r^{2}\left(\theta'\right)^{2}.$$

So for a curve expressed in polar coordinates, the arc length is

$$\int_{t_1}^{t_2}\sqrt{\left(\frac{dr}{dt}\right)^{2}+r^{2}\left(\frac{d\theta}{dt}\right)^{2}}\,dt=\int_{\theta(t_1)}^{\theta(t_2)}\sqrt{\left(\frac{dr}{d\theta}\right)^{2}+r^{2}}\,d\theta.$$

The second expression is for a polar graph $r=r(\theta)$ parameterized by $t=\theta$.
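As a worked example in the spirit of this section (the Archimedean spiral r = θ is an assumption, not the article's example), the polar formula can be integrated numerically and compared with the standard antiderivative of √(1 + θ²):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 100_001)
mid = 0.5 * (theta[:-1] + theta[1:])                   # midpoints of the subintervals
speed = np.sqrt(1.0 + mid ** 2)                        # dr/dtheta = 1 and r = theta
length = np.sum(speed * np.diff(theta))                # midpoint rule for the polar arc length integral

T = 2.0 * np.pi                                        # closed form: (T*sqrt(1+T^2) + arcsinh(T)) / 2
exact = 0.5 * (T * np.sqrt(1.0 + T ** 2) + np.arcsinh(T))
print(length, exact)                                   # the two values agree closely
```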
Now let $\mathbf{C}(t)=(r(t),\theta(t),\phi(t))$ be a curve expressed in spherical coordinates, where $\theta$ is the polar angle measured from the positive $z$-axis and $\phi$ is the azimuthal angle. The mapping that transforms from spherical coordinates to rectangular coordinates is

$$\mathbf{x}(r,\theta,\phi)=(r\sin\theta\cos\phi,r\sin\theta\sin\phi,r\cos\theta).$$

Using the chain rule again shows that $D(\mathbf{x}\circ\mathbf{C})=\mathbf{x}_r r'+\mathbf{x}_\theta\theta'+\mathbf{x}_\phi\phi'$. All dot products $\mathbf{x}_i\cdot\mathbf{x}_j$ where $i$ and $j$ differ are zero, so the squared norm of this vector is

$$\left(\mathbf{x}_r\cdot\mathbf{x}_r\right)\left(r'\right)^{2}+\left(\mathbf{x}_\theta\cdot\mathbf{x}_\theta\right)\left(\theta'\right)^{2}+\left(\mathbf{x}_\phi\cdot\mathbf{x}_\phi\right)\left(\phi'\right)^{2}=\left(r'\right)^{2}+r^{2}\left(\theta'\right)^{2}+r^{2}\sin^{2}\theta\left(\phi'\right)^{2}.$$

So for a curve expressed in spherical coordinates, the arc length is

$$\int_{t_1}^{t_2}\sqrt{\left(\frac{dr}{dt}\right)^{2}+r^{2}\left(\frac{d\theta}{dt}\right)^{2}+r^{2}\sin^{2}\theta\left(\frac{d\phi}{dt}\right)^{2}}\,dt.$$

A very similar calculation shows that the arc length of a curve expressed in cylindrical coordinates is

$$\int_{t_1}^{t_2}\sqrt{\left(\frac{dr}{dt}\right)^{2}+r^{2}\left(\frac{d\theta}{dt}\right)^{2}+\left(\frac{dz}{dt}\right)^{2}}\,dt.$$

Arc lengths are denoted by $s$, since the Latin word for length (or size) is spatium. In the following, $r$ represents the radius of a circle, $d$ is its diameter, $C$ is its circumference, $s$ is the length of an arc of the circle, and $\theta$ is the angle which the arc subtends at the centre of the circle. The distances $r$, $d$, $C$, and $s$ are expressed in the same units.

Two units of length, the nautical mile and the metre (or kilometre), were originally defined so that the lengths of arcs of great circles on the Earth's surface would be simply numerically related to the angles they subtend at its centre. The simple equation $s=\theta$ applies in the following circumstances: if $s$ is measured in nautical miles and $\theta$ in arcminutes (1/60 of a degree), or if $s$ is measured in kilometres and $\theta$ in centesimal arcminutes (1/100 of a grad). The lengths of the distance units were chosen to make the circumference of the Earth equal 40000 kilometres, or 21600 nautical miles. Those are the numbers of the corresponding angle units in one complete turn.

Those definitions of the metre and the nautical mile have been superseded by more precise ones, but the original definitions are still accurate enough for conceptual purposes and some calculations. For example, they imply that one kilometre is exactly 0.54 nautical miles. Using official modern definitions, one nautical mile is exactly 1.852 kilometres,[4] which implies that 1 kilometre is about 0.53995680 nautical miles.[5] This modern ratio differs from the one calculated from the original definitions by less than one part in 10,000.
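The quoted unit relationships can be verified with a few lines of arithmetic (an illustrative check, not part of the article):

```python
original_km_per_nmi = 40000 / 21600   # ratio implied by the original definitions
modern_km_per_nmi = 1.852             # modern definition of the nautical mile

print(original_km_per_nmi)            # 1.85185..., i.e. 1 km = 0.54 nmi exactly under the old definitions
print(1 / modern_km_per_nmi)          # 0.53995680..., nautical miles per kilometre today
print(abs(original_km_per_nmi - modern_km_per_nmi) / modern_km_per_nmi)  # less than 1/10000
```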
For much of the history of mathematics, even the greatest thinkers considered it impossible to compute the length of an irregular arc. Although Archimedes had pioneered a way of finding the area beneath a curve with his "method of exhaustion", few believed it was even possible for curves to have definite lengths, as straight lines do. The first ground was broken in this field, as it often has been in calculus, by approximation. People began to inscribe polygons within the curves and compute the length of the sides for a somewhat accurate measurement of the length. By using more segments, and by decreasing the length of each segment, they were able to obtain a more and more accurate approximation. In particular, by inscribing a polygon of many sides in a circle, they were able to find approximate values of π.[6][7]

In the 17th century, the method of exhaustion led to the rectification by geometrical methods of several transcendental curves: the logarithmic spiral by Evangelista Torricelli in 1645 (some sources say John Wallis in the 1650s), the cycloid by Christopher Wren in 1658, and the catenary by Gottfried Leibniz in 1691.

In 1659, Wallis credited William Neile's discovery of the first rectification of a nontrivial algebraic curve, the semicubical parabola.[8] The accompanying figures appear on page 145. On page 91, William Neile is mentioned as Gulielmus Nelius.

Before the full formal development of calculus, the basis for the modern integral form for arc length was independently discovered by Hendrik van Heuraet and Pierre de Fermat. In 1659 van Heuraet published a construction showing that the problem of determining arc length could be transformed into the problem of determining the area under a curve (i.e., an integral). As an example of his method, he determined the arc length of a semicubical parabola, which required finding the area under a parabola.[9] In 1660, Fermat published a more general theory containing the same result in his De linearum curvarum cum lineis rectis comparatione dissertatio geometrica (Geometric dissertation on curved lines in comparison with straight lines).[10]

Building on his previous work with tangents, Fermat used the semicubical parabola $y=x^{3/2}$, whose tangent at $x=a$ had a slope of $\tfrac{3}{2}a^{1/2}$, so the tangent line would have the equation

$$y=\tfrac{3}{2}a^{1/2}(x-a)+a^{3/2}.$$

Next, he increased $a$ by a small amount to $a+\varepsilon$, making the segment $AC$ a relatively good approximation for the length of the curve from $A$ to $D$. To find the length of the segment $AC$, he used the Pythagorean theorem on the horizontal increment $\varepsilon$ and the corresponding vertical increment along the tangent, $\tfrac{3}{2}a^{1/2}\varepsilon$:

$$AC^{2}=\varepsilon^{2}+\tfrac{9}{4}a\,\varepsilon^{2},$$

which, when solved, yields

$$AC=\varepsilon\sqrt{1+\tfrac{9}{4}a\,}.$$

In order to approximate the length, Fermat would sum up a sequence of short segments.

As mentioned above, some curves are non-rectifiable. That is, there is no upper bound on the lengths of polygonal approximations; the length can be made arbitrarily large. Informally, such curves are said to have infinite length. There are continuous curves on which every arc (other than a single-point arc) has infinite length. An example of such a curve is the Koch curve. Another example of a curve with infinite length is the graph of the function defined by f(x) = x sin(1/x) on any open interval having 0 as one of its endpoints, with f(0) = 0. Sometimes the Hausdorff dimension and Hausdorff measure are used to quantify the size of such curves.
Let $M$ be a (pseudo-)Riemannian manifold, $g$ the (pseudo-)metric tensor, and $\gamma\colon[0,1]\to M$ a curve in $M$ defined by the $n$ parametric equations $x^{i}=x^{i}(t)$, $i=1,\dotsc,n$, for $0\le t\le 1$. The length of $\gamma$ is defined to be

$$L(\gamma)=\int_0^1\sqrt{\pm\,g_{\gamma(t)}\bigl(\gamma'(t),\gamma'(t)\bigr)}\,dt,$$

or, choosing local coordinates $x$,

$$L(\gamma)=\int_0^1\sqrt{\pm\,g_{ij}\,\frac{dx^{i}}{dt}\frac{dx^{j}}{dt}}\,dt,$$

where

$$\gamma'(t)=\frac{dx^{i}}{dt}\,\frac{\partial}{\partial x^{i}}$$

is the tangent vector of $\gamma$ at $t$. The sign in the square root is chosen once for a given curve, to ensure that the square root is a real number. The positive sign is chosen for spacelike curves; in a pseudo-Riemannian manifold, the negative sign may be chosen for timelike curves. Thus the length of a curve is a non-negative real number. Usually no curves are considered which are partly spacelike and partly timelike.

In the theory of relativity, the arc length of timelike curves (world lines) is the proper time elapsed along the world line, and the arc length of a spacelike curve is the proper distance along the curve.
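A minimal sketch (an illustration under assumptions, not the article's method) of evaluating the coordinate form of the length integral on the unit sphere with metric ds² = dθ² + sin²θ dφ², using a circle of latitude as the test curve:

```python
import numpy as np

def sphere_curve_length(theta, phi):
    """Approximate the integral of sqrt(g_ij dx^i/dt dx^j/dt) dt along a sampled curve
    (theta(t), phi(t)) on the unit sphere, where g = diag(1, sin(theta)^2)."""
    dtheta, dphi = np.diff(theta), np.diff(phi)
    theta_mid = 0.5 * (theta[:-1] + theta[1:])
    return np.sum(np.sqrt(dtheta ** 2 + np.sin(theta_mid) ** 2 * dphi ** 2))

phi = np.linspace(0.0, 2.0 * np.pi, 100_001)
theta0 = np.pi / 3
length = sphere_curve_length(np.full_like(phi, theta0), phi)
print(length, 2.0 * np.pi * np.sin(theta0))   # a circle of latitude has length 2*pi*sin(theta0)
```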
https://en.wikipedia.org/wiki/Signed_arc_length